forked from Ivasoft/traefik
Compare commits
14 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 00315d3d55 | |
| | 146a1016a8 | |
| | ca9822cc5f | |
| | 71980c9785 | |
| | a3a9edc3c6 | |
| | 67b25550c0 | |
| | fa62ccb0c2 | |
| | f5bfe681c2 | |
| | 19417d8af5 | |
| | 59b3a392f2 | |
| | 25bab338c1 | |
| | 3f6993f0a0 | |
| | 70f39c3326 | |
| | 81560827c6 | |
@@ -1,3 +1,5 @@
dist/
vendor/
!dist/traefik
site/
**/*.test
1 .gitattributes vendored Normal file
@@ -0,0 +1 @@
glide.lock binary
24 .github/CODEOWNERS vendored
@@ -1,24 +0,0 @@
provider/kubernetes/** @containous/kubernetes
provider/rancher/** @containous/rancher
provider/marathon/** @containous/marathon
provider/docker/** @containous/docker

docs/user-guide/kubernetes.md @containous/kubernetes
docs/user-guide/marathon.md @containous/marathon
docs/user-guide/swarm.md @containous/docker
docs/user-guide/swarm-mode.md @containous/docker

docs/configuration/backends/docker.md @containous/docker
docs/configuration/backends/kubernetes.md @containous/kubernetes
docs/configuration/backends/marathon.md @containous/marathon
docs/configuration/backends/rancher.md @containous/rancher

examples/k8s/ @containous/kubernetes
examples/compose-k8s.yaml @containous/kubernetes
examples/k8s.namespace.yaml @containous/kubernetes
examples/compose-rancher.yml @containous/rancher
examples/compose-marathon.yml @containous/marathon

vendor/github.com/gambol99/go-marathon @containous/marathon
vendor/github.com/rancher @containous/rancher
vendor/k8s.io/ @containous/kubernetes
145 .github/CONTRIBUTING.md vendored Normal file
@@ -0,0 +1,145 @@
# Contributing

### Building

You need either [Docker](https://github.com/docker/docker) and `make` (Method 1), or `go` and `glide` (Method 2) in order to build traefik.

#### Method 1: Using `Docker` and `Makefile`

You need to run the `binary` target. This will create binaries for the Linux platform in the `dist` folder.

```bash
$ make binary
docker build -t "traefik-dev:no-more-godep-ever" -f build.Dockerfile .
Sending build context to Docker daemon 295.3 MB
Step 0 : FROM golang:1.5
---> 8c6473912976
Step 1 : RUN go get github.com/Masterminds/glide
[...]
docker run --rm -v "/var/run/docker.sock:/var/run/docker.sock" -it -e OS_ARCH_ARG -e OS_PLATFORM_ARG -e TESTFLAGS -v "/home/emile/dev/go/src/github.com/containous/traefik/"dist":/go/src/github.com/containous/traefik/"dist"" "traefik-dev:no-more-godep-ever" ./script/make.sh generate binary
---> Making bundle: generate (in .)
removed 'gen.go'

---> Making bundle: binary (in .)

$ ls dist/
traefik*
```

#### Method 2: Using `go` and `glide`

###### Setting up your `go` environment

- You need `go` v1.5+ (1.7 is acceptable)
- You need to set the `$ export GO15VENDOREXPERIMENT=1` environment variable if you are using go v1.5 (it is already enabled in 1.6+)
- It is recommended you clone Træfɪk into a directory like `~/go/src/github.com/containous/traefik` (This is the official golang workspace hierarchy, and will allow dependencies to resolve properly)
- This will allow your `GOPATH` and `PATH` variables to be set to `~/go` via:
```
$ export GOPATH=~/go
$ export PATH=$PATH:$GOPATH/bin
```

This can be verified via `$ go env`
- You will want to add those 2 export lines to your `.bashrc` or `.bash_profile`
- You need `go-bindata` to be able to use the `go generate` command (needed to build): `$ go get github.com/jteeuwen/go-bindata/...` (Please note, the ellipses are required)
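
For example, a minimal sketch of persisting those exports and fetching `go-bindata` (the `~/.bashrc` path is an assumption; adapt it to your shell profile):

```bash
# Persist the Go workspace variables in your shell profile (path assumed)
echo 'export GOPATH=~/go' >> ~/.bashrc
echo 'export PATH=$PATH:$GOPATH/bin' >> ~/.bashrc
source ~/.bashrc

# Fetch go-bindata, required by `go generate` (the trailing ellipsis is part of the command)
go get github.com/jteeuwen/go-bindata/...
```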

###### Setting up your `glide` environment

- Glide can be installed either via homebrew: `$ brew install glide` or via the official glide script: `$ curl https://glide.sh/get | sh`

The idea behind `glide` is the following:

- when checking out a project, run `$ glide install` from the cloned directory to install
(`go get …`) the dependencies in your `GOPATH`.
- if you need another dependency, import and use it in
the source, and run `$ glide get github.com/Masterminds/cookoo` to save it in
`vendor` and add it to your `glide.yaml`.

```bash
$ glide install
# generate (Only required to integrate other components such as web dashboard)
$ go generate
# Standard go build
$ go build
# Using gox to build multiple platforms
$ gox "linux darwin" "386 amd64 arm" \
-output="dist/traefik_{{.OS}}-{{.Arch}}"
# run other commands like tests
```
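
The `gox` invocation above assumes `gox` is already on your `PATH`; a minimal sketch for fetching it (the import path below is the upstream gox project, not something defined in this repository):

```bash
# Install gox into $GOPATH/bin so the cross-build step above can find it
go get github.com/mitchellh/gox
```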

### Tests

##### Method 1: `Docker` and `make`

You can run unit tests using the `test-unit` target and the
integration tests using the `test-integration` target.

```bash
$ make test-unit
docker build -t "traefik-dev:your-feature-branch" -f build.Dockerfile .
# […]
docker run --rm -it -e OS_ARCH_ARG -e OS_PLATFORM_ARG -e TESTFLAGS -v "/home/vincent/src/github/vdemeester/traefik/dist:/go/src/github.com/containous/traefik/dist" "traefik-dev:your-feature-branch" ./script/make.sh generate test-unit
---> Making bundle: generate (in .)
removed 'gen.go'

---> Making bundle: test-unit (in .)
+ go test -cover -coverprofile=cover.out .
ok github.com/containous/traefik 0.005s coverage: 4.1% of statements

Test success
```

For development purposes, you can specify which tests to run by using:
```
# Run every test in the MyTest suite
TESTFLAGS="-check.f MyTestSuite" make test-integration

# Run the test "MyTest" in the MyTest suite
TESTFLAGS="-check.f MyTestSuite.MyTest" make test-integration

# Run every test starting with "My", in the MyTest suite
TESTFLAGS="-check.f MyTestSuite.My" make test-integration

# Run every test ending with "Test", in the MyTest suite
TESTFLAGS="-check.f MyTestSuite.*Test" make test-integration
```

More: https://labix.org/gocheck

##### Method 2: `go` and `glide`

- Tests can be run from the cloned directory with `$ go test ./...`, which should return `ok`, similar to:
```
ok _/home/vincent/src/github/vdemeester/traefik 0.004s
```
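
To iterate on a single package rather than the whole tree, a minimal sketch (the package path below is only an example):

```bash
# Run the tests of one package with verbose output
go test -v ./provider/docker
```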

### Documentation

The [documentation site](http://docs.traefik.io/) is built with [mkdocs](http://mkdocs.org/)

First make sure you have python and pip installed

```
$ python --version
Python 2.7.2
$ pip --version
pip 1.5.2
```

Then install mkdocs with pip

```
$ pip install mkdocs
```

To test documentation locally, run `mkdocs serve` in the root directory; this should start a server locally to preview your changes.

```
$ mkdocs serve
INFO - Building documentation...
WARNING - Config value: 'theme'. Warning: The theme 'united' will be removed in an upcoming MkDocs release. See http://www.mkdocs.org/about/release-notes/ for more details
INFO - Cleaning site directory
[I 160505 22:31:24 server:281] Serving on http://127.0.0.1:8000
[I 160505 22:31:24 handlers:59] Start watching changes
[I 160505 22:31:24 handlers:61] Start detecting changes
```
69 .github/ISSUE_TEMPLATE.md vendored
@@ -1,69 +0,0 @@
<!--
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.

The issue tracker is for reporting bugs and feature requests only.
For end-user related support questions, refer to one of the following:

- Stack Overflow (using the "traefik" tag): https://stackoverflow.com/questions/tagged/traefik
- the Traefik community Slack channel: https://traefik.herokuapp.com

-->


### Do you want to request a *feature* or report a *bug*?

<!--
If you intend to ask a support question: DO NOT FILE AN ISSUE.
-->

### What did you do?

<!--

HOW TO WRITE A GOOD ISSUE?

- Respect the issue template as much as possible.
- If possible, use the command `traefik bug`. See https://www.youtube.com/watch?v=Lyz62L8m93I.
- The title must be short and descriptive.
- Explain the conditions which led you to write this issue: the context.
- The context should lead to something, an idea or a problem that you’re facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message; use Markdown syntax https://help.github.com/articles/github-flavored-markdown

-->

### What did you expect to see?


### What did you see instead?


### Output of `traefik version`: (_What version of Traefik are you using?_)

<!--
For the Traefik Docker image:
docker run [IMAGE] version
ex: docker run traefik version
-->

```
(paste your output here)
```

### What is your environment & configuration (arguments, toml, provider, platform, ...)?

```toml
# (paste your configuration here)
```
<!--
Add more configuration information here.
-->

### If applicable, please paste the log output in debug mode (`--debug` switch)

```
(paste your output here)
```
68 .github/ISSUE_TEMPLATE/bugs.md vendored
@@ -1,68 +0,0 @@
<!--
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.

The issue tracker is for reporting bugs and feature requests only.
For end-user related support questions, refer to one of the following:

- Stack Overflow (using the "traefik" tag): https://stackoverflow.com/questions/tagged/traefik
- the Traefik community Slack channel: https://traefik.herokuapp.com

-->


### Do you want to request a *feature* or report a *bug*?

Bug

### What did you do?

<!--

HOW TO WRITE A GOOD ISSUE?

- Respect the issue template as much as possible.
- If possible, use the command `traefik bug`. See https://www.youtube.com/watch?v=Lyz62L8m93I.
- The title must be short and descriptive.
- Explain the conditions which led you to write this issue: the context.
- The context should lead to something, an idea or a problem that you’re facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message; use Markdown syntax https://help.github.com/articles/github-flavored-markdown

-->

### What did you expect to see?


### What did you see instead?


### Output of `traefik version`: (_What version of Traefik are you using?_)

<!--
For the Traefik Docker image:
docker run [IMAGE] version
ex: docker run traefik version
-->

```
(paste your output here)
```

### What is your environment & configuration (arguments, toml, provider, platform, ...)?

```toml
# (paste your configuration here)
```

<!--
Add more configuration information here.
-->

### If applicable, please paste the log output in debug mode (`--debug` switch)

```
(paste your output here)
```
32 .github/ISSUE_TEMPLATE/features.md vendored
@@ -1,32 +0,0 @@
<!--
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.

The issue tracker is for reporting bugs and feature requests only.
For end-user related support questions, refer to one of the following:

- Stack Overflow (using the "traefik" tag): https://stackoverflow.com/questions/tagged/traefik
- the Traefik community Slack channel: https://traefik.herokuapp.com

-->


### Do you want to request a *feature* or report a *bug*?

Feature

### What did you expect to see?

<!--

HOW TO WRITE A GOOD ISSUE?

- Respect the issue template as much as possible.
- If possible, use the command `traefik bug`. See https://www.youtube.com/watch?v=Lyz62L8m93I.
- The title must be short and descriptive.
- Explain the conditions which led you to write this issue: the context.
- The context should lead to something, an idea or a problem that you’re facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message; use Markdown syntax https://help.github.com/articles/github-flavored-markdown

-->
36 .github/PULL_REQUEST_TEMPLATE.md vendored
@@ -1,36 +0,0 @@
<!--
PLEASE READ THIS MESSAGE.

HOW TO WRITE A GOOD PULL REQUEST?

- Make it small.
- Do only one thing.
- Avoid re-formatting.
- Make sure the code builds.
- Make sure all tests pass.
- Add tests.
- Write useful descriptions and titles.
- Address review comments in terms of additional commits.
- Do not amend/squash existing ones unless the PR is trivial.
- Read the contributing guide: https://github.com/containous/traefik/blob/master/.github/CONTRIBUTING.md.

-->

### What does this PR do?

<!-- A brief description of the change being made with this pull request. -->


### Motivation

<!-- What inspired you to submit this pull request? -->


### More

- [ ] Added/updated tests
- [ ] Added/updated documentation

### Additional Notes

<!-- Anything else we should know when reviewing? -->
7 .github/PULL_REQUEST_TEMPLATE/mergeback.md vendored
@@ -1,7 +0,0 @@
### What does this PR do?

Merge v{{.Version}} into master

### Motivation

Stay in sync.
7 .github/PULL_REQUEST_TEMPLATE/release.md vendored
@@ -1,7 +0,0 @@
### What does this PR do?

Prepare release v{{.Version}}.

### Motivation

Create a new release.
17 .gitignore vendored
@@ -1,14 +1,15 @@
/dist
/autogen/genstatic/gen.go
.idea/
.intellij/
gen.go
.idea
.intellij
*.iml
/traefik
/traefik.toml
/static/
traefik
traefik.toml
*.test
vendor/
static/
.vscode/
/site/
site/
*.log
*.exe
.DS_Store
/examples/acme/acme.json

@@ -1,11 +0,0 @@
#!/usr/bin/env bash
set -e

sudo -E apt-get -yq update
sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install docker-ce=${DOCKER_VERSION}*
docker version

pip install --user -r requirements.txt

make pull-images
ci_retry make validate
@@ -1,6 +0,0 @@
#!/usr/bin/env bash
set -e

make test-unit
ci_retry make test-integration
make -j${N_MAKE_JOBS} crossbinary-default-parallel
@@ -1,37 +0,0 @@
#!/usr/bin/env bash
set -e

export REPO='containous/traefik'

if VERSION=$(git describe --exact-match --abbrev=0 --tags);
then
export VERSION
else
export VERSION=''
fi

export CODENAME=cancoillotte

export N_MAKE_JOBS=2


function ci_retry {

local NRETRY=3
local NSLEEP=5
local n=0

until [ $n -ge $NRETRY ]
do
"$@" && break
n=$[$n+1]
echo "$@ failed, attempt ${n}/${NRETRY}"
sleep $NSLEEP
done

[ $n -lt $NRETRY ]

}

export -f ci_retry

87 .travis.yml
@@ -1,63 +1,34 @@
sudo: required
dist: trusty

git:
depth: false

services:
- docker

branches:
env:
global:
- secure: btt4r13t09gQlHb6gYrvGC2yGCMMHfnp1Mz1RQedc4Mpf/FfT8aE6xmK2a2i9CCvskjrP0t/BFaS4yxIURjnFRn+ugQIEa0pLspB9UJArW/vgOSpIWM9/OQ/fg8z5XuMxN6Md4DL1/iLypMNSageA1x0TRdt89+D1N1dALpg5XRCXLFbC84TLi0gjlFuib9ibPKzEhLT+anCRJ6iZMzeupDSoaCVbAtJMoDvXw4+4AcRZ1+k4MybBLyCib5boaEOt4pTT88mz4Kk0YaMwPVJyg9Qv36VqyUcPS09Yd95LuyVQ4+tZt8Y1ccbIzULsK+sLM3hLCzxlmlpN3dQBlZJiiRtQde0mgGAKyC0P0A1XjuDTywcsa5edB+fTk1Dsewz9xZ9V0NmMz8t+UNZnaSsAPga9i86jULbXUUwMVSzVRc+Xgx02liB/8qI1xYC9FM6ilStt7rn7mF0k3KbiWhcptgeXjO6Lah9FjEKd5w4MXsdUSTi/86rQaLo+kj+XdaTrXCTulKHyRyQEUj+8V1w0oVz7pcGjePHd7y5oU9ByifVQy6sytuFBfRZvugM5bKHo+i0pcWvixrZS42DrzwxZJsspANOvqSe5ifVbvOkfUppQdCBIwptxV5N1b49XPKU3W/w34QJ8xGmKp3TFA7WwVCztriFHjPgiRpB3EG99Bg=
- REPO: $TRAVIS_REPO_SLUG
- VERSION: $TRAVIS_TAG
- CODENAME: cancoillotte
- N_MAKE_JOBS: 2

- CODENAME: camembert
matrix:
- DOCKER_VERSION=1.9.1
- DOCKER_VERSION=1.10.1
sudo: required
services:
- docker
install:
- sudo service docker stop
- sudo curl https://get.docker.com/builds/Linux/x86_64/docker-${DOCKER_VERSION} -o /usr/bin/docker
- sudo chmod +x /usr/bin/docker
- sudo service docker start
- sleep 5
- docker version
- pip install --user mkdocs
- pip install --user pymdown-extensions
- pip install --user mkdocs-bootswatch
before_script:
- make validate
- make binary
script:
- echo "Skipping tests... (Tests are executed on SemaphoreCI)"

before_deploy:
- >
if ! [ "$BEFORE_DEPLOY_RUN" ]; then
export BEFORE_DEPLOY_RUN=1;
sudo -E apt-get -yq update;
sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install docker-ce=${DOCKER_VERSION}*;
docker version;
make image;
if [ "$TRAVIS_TAG" ]; then
make -j${N_MAKE_JOBS} crossbinary-parallel;
tar cfz dist/traefik-${VERSION}.src.tar.gz --exclude-vcs --exclude dist .;
fi;
curl -sI https://github.com/containous/structor/releases/latest | grep -Fi Location | tr -d '\r' | sed "s/tag/download/g" | awk -F " " '{ print $2 "/structor_linux-amd64"}' | wget --output-document=$GOPATH/bin/structor -i -;
chmod +x $GOPATH/bin/structor;
structor -o containous -r traefik --dockerfile-url="https://raw.githubusercontent.com/containous/traefik/master/docs.Dockerfile" --menu.js-url="https://raw.githubusercontent.com/containous/structor/master/traefik-menu.js.gotmpl" --rqts-url="https://raw.githubusercontent.com/containous/structor/master/requirements-override.txt" --exp-branch=master --debug;
fi
deploy:
- provider: releases
api_key: ${GITHUB_TOKEN}
file: dist/traefik*
skip_cleanup: true
file_glob: true
on:
repo: containous/traefik
tags: true
- provider: script
script: sh script/deploy.sh
skip_cleanup: true
on:
repo: containous/traefik
tags: true
- provider: script
script: sh script/deploy-docker.sh
skip_cleanup: true
on:
repo: containous/traefik
- provider: pages
edge: false
github_token: ${GITHUB_TOKEN}
local_dir: site
skip_cleanup: true
on:
repo: containous/traefik
all_branches: true
- make test-unit
- make test-integration
- make crossbinary
- make image
after_success:
- make deploy
- make deploy-pr

BIN .travis/traefik.id_rsa.enc Normal file
Binary file not shown.
1521 CHANGELOG.md
File diff suppressed because it is too large
260 CONTRIBUTING.md
@@ -1,260 +0,0 @@
# Contributing

## Building

You need either [Docker](https://github.com/docker/docker) and `make` (Method 1), or `go` (Method 2) in order to build Traefik.
For changes to its dependencies, the `dep` dependency management tool is required.

### Method 1: Using `Docker` and `Makefile`

You need to run the `binary` target. This will create binaries for the Linux platform in the `dist` folder.

```bash
$ make binary
docker build -t "traefik-dev:no-more-godep-ever" -f build.Dockerfile .
Sending build context to Docker daemon 295.3 MB
Step 0 : FROM golang:1.9-alpine
---> 8c6473912976
Step 1 : RUN go get github.com/golang/dep/cmd/dep
[...]
docker run --rm -v "/var/run/docker.sock:/var/run/docker.sock" -it -e OS_ARCH_ARG -e OS_PLATFORM_ARG -e TESTFLAGS -v "/home/user/go/src/github.com/containous/traefik/"dist":/go/src/github.com/containous/traefik/"dist"" "traefik-dev:no-more-godep-ever" ./script/make.sh generate binary
---> Making bundle: generate (in .)
removed 'gen.go'

---> Making bundle: binary (in .)

$ ls dist/
traefik*
```

### Method 2: Using `go`

##### Setting up your `go` environment

- You need `go` v1.9+
- It is recommended you clone Træfik into a directory like `~/go/src/github.com/containous/traefik` (This is the official golang workspace hierarchy, and will allow dependencies to resolve properly)
- Set your `GOPATH` and `PATH` variables to `~/go` via:

```bash
export GOPATH=~/go
export PATH=$PATH:$GOPATH/bin
```

> Note: You will want to add those 2 export lines to your `.bashrc` or `.bash_profile`

- Verify your environment is set up properly by running `$ go env`. Depending on your OS and environment you should see output similar to:

```bash
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/<yourusername>/go"
GORACE=""
## more go env's will be listed
```

##### Build Træfik

Once your environment is set up and the Træfik repository cloned, you can build Træfik. You need to get `go-bindata` once to be able to use the `go generate` command as part of the build. The steps to build are:

```bash
cd ~/go/src/github.com/containous/traefik

# Get go-bindata. Please note, the ellipses are required
go get github.com/containous/go-bindata/...

# Start build

# generate
# (required to merge non-code components into the final binary, such as the web dashboard and provider's Go templates)
go generate

# Standard go build
go build ./cmd/traefik
# run other commands like tests
```

You will find the Træfik executable in the `~/go/src/github.com/containous/traefik` folder as `traefik`.

### Updating the templates

If you happen to update the provider templates (in `/templates`), you need to run `go generate` to update the `autogen` package.
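
For instance, a minimal sketch of that template-update cycle (both commands mirror the build steps above):

```bash
# After editing files under templates/, refresh the autogen package and rebuild
go generate
go build ./cmd/traefik
```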

### Setting up dependency management

[dep](https://github.com/golang/dep) is not required for building; however, it is necessary to modify dependencies (i.e., add, update, or remove third-party packages)

You need to use [dep](https://github.com/golang/dep) >= 0.4.1.

If you want to add a dependency, use `dep ensure -add` to have [dep](https://github.com/golang/dep) put it into the vendor folder and update the dep manifest/lock files (`Gopkg.toml` and `Gopkg.lock`, respectively).

Afterwards, run `make dep-prune` to trim down the size of the vendor folder (see the follow-up sketch after the example below).
The final result must be committed into VCS.

Here's a full example using dep to add a new dependency:

```bash
# install the new main dependency github.com/foo/bar and minimize vendor size
$ dep ensure -add github.com/foo/bar
# generate (Only required to integrate other components such as web dashboard)
$ go generate
# Standard go build
$ go build ./cmd/traefik
# run other commands like tests
```
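
And a minimal sketch of the follow-up steps described above, assuming the dependency update touched `Gopkg.toml`, `Gopkg.lock`, and the vendor folder:

```bash
# Trim the vendor folder, then commit the dependency change into VCS
make dep-prune
git add Gopkg.toml Gopkg.lock vendor/
git commit
```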

### Tests

#### Method 1: `Docker` and `make`

You can run unit tests using the `test-unit` target and the
integration tests using the `test-integration` target.

```bash
$ make test-unit
docker build -t "traefik-dev:your-feature-branch" -f build.Dockerfile .
# […]
docker run --rm -it -e OS_ARCH_ARG -e OS_PLATFORM_ARG -e TESTFLAGS -v "/home/user/go/src/github/containous/traefik/dist:/go/src/github.com/containous/traefik/dist" "traefik-dev:your-feature-branch" ./script/make.sh generate test-unit
---> Making bundle: generate (in .)
removed 'gen.go'

---> Making bundle: test-unit (in .)
+ go test -cover -coverprofile=cover.out .
ok github.com/containous/traefik 0.005s coverage: 4.1% of statements

Test success
```

For development purposes, you can specify which tests to run by using:

```bash
# Run every test in the MyTest suite
TESTFLAGS="-check.f MyTestSuite" make test-integration

# Run the test "MyTest" in the MyTest suite
TESTFLAGS="-check.f MyTestSuite.MyTest" make test-integration

# Run every test starting with "My", in the MyTest suite
TESTFLAGS="-check.f MyTestSuite.My" make test-integration

# Run every test ending with "Test", in the MyTest suite
TESTFLAGS="-check.f MyTestSuite.*Test" make test-integration
```

More: https://labix.org/gocheck

#### Method 2: `go`

Unit tests can be run from the cloned directory with `$ go test ./...`, which should return `ok`, similar to:

```
ok _/home/user/go/src/github/containous/traefik 0.004s
```

Integration tests must be run from the `integration/` directory and require the `-integration` switch to be passed like this: `$ cd integration && go test -integration ./...`.
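
A minimal sketch combining that switch with the gocheck filter shown earlier (the suite name is only an example):

```bash
# Run a single integration test suite from the integration/ directory
cd integration
go test -integration -check.f MyTestSuite ./...
```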

## Documentation

The [documentation site](http://docs.traefik.io/) is built with [mkdocs](http://mkdocs.org/)

### Method 1: `Docker` and `make`

You can test documentation using the `docs` target.

```bash
$ make docs
docker build -t traefik-docs -f docs.Dockerfile .
# […]
docker run --rm -v /home/user/go/github/containous/traefik:/mkdocs -p 8000:8000 traefik-docs mkdocs serve
# […]
[I 170828 20:47:48 server:283] Serving on http://0.0.0.0:8000
[I 170828 20:47:48 handlers:60] Start watching changes
[I 170828 20:47:48 handlers:62] Start detecting changes
```

And go to [http://127.0.0.1:8000](http://127.0.0.1:8000).

### Method 2: `mkdocs`

First make sure you have python and pip installed

```shell
$ python --version
Python 2.7.2
$ pip --version
pip 1.5.2
```

Then install mkdocs with pip

```shell
pip install --user -r requirements.txt
```

To test documentation locally, run `mkdocs serve` in the root directory; this should start a server locally to preview your changes.

```shell
$ mkdocs serve
INFO - Building documentation...
WARNING - Config value: 'theme'. Warning: The theme 'united' will be removed in an upcoming MkDocs release. See http://www.mkdocs.org/about/release-notes/ for more details
INFO - Cleaning site directory
[I 160505 22:31:24 server:281] Serving on http://127.0.0.1:8000
[I 160505 22:31:24 handlers:59] Start watching changes
[I 160505 22:31:24 handlers:61] Start detecting changes
```


## How to Write a Good Issue

Please keep in mind that the GitHub issue tracker is not intended as a general support forum, but for reporting bugs and feature requests.

For end-user related support questions, refer to one of the following:
- the Traefik community Slack channel: https://traefik.herokuapp.com
- [Stack Overflow](https://stackoverflow.com/questions/tagged/traefik) (using the `traefik` tag)

### Title

The title must be short and descriptive. (~60 characters)

### Description

- Respect the issue template as much as possible. [template](.github/ISSUE_TEMPLATE.md)
- If possible, use the command `traefik bug`. See https://www.youtube.com/watch?v=Lyz62L8m93I.
- Explain the conditions which led you to write this issue: the context.
- The context should lead to something, an idea or a problem that you’re facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message; use [Markdown syntax](https://help.github.com/articles/github-flavored-markdown)


## How to Write a Good Pull Request

### Title

The title must be short and descriptive. (~60 characters)

### Description

- Respect the pull request template as much as possible. [template](.github/PULL_REQUEST_TEMPLATE.md)
- Explain the conditions which led you to write this PR: the context.
- The context should lead to something, an idea or a problem that you’re facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message; use [Markdown syntax](https://help.github.com/articles/github-flavored-markdown)

### Content

- Make it small.
- Do only one thing.
- Write useful descriptions and titles.
- Avoid re-formatting.
- Make sure the code builds.
- Make sure all tests pass.
- Add tests.
- Address review comments in terms of additional commits.
- Do not amend/squash existing ones unless the PR is trivial.
- If a PR involves changes to third-party dependencies, the commits pertaining to the vendor folder and the manifest/lock file(s) should be committed separately.


Read [10 tips for better pull requests](http://blog.ploeh.dk/2015/01/15/10-tips-for-better-pull-requests/).
1405 Gopkg.lock generated
File diff suppressed because it is too large
206 Gopkg.toml
@@ -1,206 +0,0 @@
# Gopkg.toml example
#
# Refer to https://github.com/golang/dep/blob/master/docs/Gopkg.toml.md
# for detailed Gopkg.toml documentation.
#
# required = ["github.com/user/thing/cmd/thing"]
# ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"]
#
# [[constraint]]
# name = "github.com/user/project"
# version = "1.0.0"
#
# [[constraint]]
# name = "github.com/user/project2"
# branch = "dev"
# source = "github.com/myfork/project2"
#
# [[override]]
# name = "github.com/x/y"
# version = "2.4.0"

ignored = ["github.com/sirupsen/logrus"]

[[constraint]]
branch = "master"
name = "github.com/ArthurHlt/go-eureka-client"

[[constraint]]
branch = "master"
name = "github.com/BurntSushi/toml"

[[constraint]]
branch = "master"
name = "github.com/BurntSushi/ty"

[[constraint]]
branch = "master"
name = "github.com/NYTimes/gziphandler"

[[constraint]]
branch = "containous-fork"
name = "github.com/abbot/go-http-auth"
source = "github.com/containous/go-http-auth"

[[constraint]]
branch = "master"
name = "github.com/armon/go-proxyproto"

[[constraint]]
name = "github.com/aws/aws-sdk-go"
version = "1.6.18"

[[constraint]]
branch = "master"
name = "github.com/cenk/backoff"

[[constraint]]
name = "github.com/containous/flaeg"
version = "1.0.1"

[[constraint]]
branch = "master"
name = "github.com/containous/mux"

[[constraint]]
name = "github.com/containous/staert"
version = "2.1.0"

[[constraint]]
name = "github.com/containous/traefik-extra-service-fabric"
version = "1.0.6"

[[constraint]]
name = "github.com/coreos/go-systemd"
version = "14.0.0"

[[constraint]]
branch = "master"
name = "github.com/docker/leadership"
source = "github.com/containous/leadership"

[[constraint]]
name = "github.com/docker/libkv"
source = "github.com/abronan/libkv"

[[constraint]]
name = "github.com/eapache/channels"
version = "1.1.0"

[[constraint]]
branch = "master"
name = "github.com/elazarl/go-bindata-assetfs"

[[constraint]]
name = "github.com/go-check/check"
source = "github.com/containous/check"

[[constraint]]
name = "github.com/go-kit/kit"
version = "0.3.0"

[[constraint]]
name = "github.com/influxdata/influxdb"
version = "1.3.7"

[[constraint]]
branch = "master"
name = "github.com/jjcollinge/servicefabric"

[[constraint]]
name = "github.com/mattn/go-shellwords"
version = "1.0.3"

[[constraint]]
name = "github.com/mesosphere/mesos-dns"
source = "https://github.com/containous/mesos-dns.git"

[[constraint]]
branch = "master"
name = "github.com/mitchellh/copystructure"

[[constraint]]
branch = "master"
name = "github.com/mitchellh/hashstructure"

[[constraint]]
branch = "master"
name = "github.com/mitchellh/mapstructure"

[[constraint]]
branch = "master"
name = "github.com/rancher/go-rancher-metadata"

[[constraint]]
branch = "master"
name = "github.com/ryanuber/go-glob"

[[constraint]]
name = "github.com/satori/go.uuid"
version = "1.1.0"

[[constraint]]
branch = "master"
name = "github.com/stvp/go-udp-testing"

[[constraint]]
name = "github.com/vdemeester/shakers"
version = "0.1.0"

[[constraint]]
branch = "containous-fork"
name = "github.com/vulcand/oxy"
source = "https://github.com/containous/oxy.git"

[[constraint]]
name = "github.com/xenolf/lego"
version = "0.4.1"

[[constraint]]
name = "google.golang.org/grpc"
version = "1.5.2"

[[constraint]]
name = "gopkg.in/fsnotify.v1"
source = "github.com/fsnotify/fsnotify"
version = "1.4.2"

[[constraint]]
name = "k8s.io/client-go"
version = "2.0.0"

[[override]]
name = "github.com/Nvveen/Gotty"
revision = "6018b68f96b839edfbe3fb48668853f5dbad88a3"
source = "github.com/ijc25/Gotty"

[[override]]
# always keep this override
name = "github.com/mailgun/timetools"
revision = "7e6055773c5137efbeb3bd2410d705fe10ab6bfd"

[[override]]
name = "github.com/vulcand/predicate"
revision = "19b9dde14240d94c804ae5736ad0e1de10bf8fe6"

[[override]]
# remove override on master
name = "github.com/coreos/bbolt"
revision = "32c383e75ce054674c53b5a07e55de85332aee14"

[[override]]
branch = "master"
name = "github.com/miekg/dns"

[[override]]
name = "golang.org/x/crypto"
revision = "b080dc9a8c480b08e698fb1219160d598526310f"

[[override]]
name = "golang.org/x/net"
revision = "894f8ed5849b15b810ae41e9590a0d05395bba27"

[prune]
non-go = true
go-tests = true
unused-packages = true
@@ -1,6 +1,6 @@
The MIT License (MIT)

Copyright (c) 2016-2018 Containous SAS
Copyright (c) 2016 Containous SAS, Emile Vauge, emile@vauge.com

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

154 MAINTAINER.md
@@ -1,154 +0,0 @@
# Maintainers

## The team

* Emile Vauge [@emilevauge](https://github.com/emilevauge)
* Vincent Demeester [@vdemeester](https://github.com/vdemeester)
* Ed Robinson [@errm](https://github.com/errm)
* Daniel Tomcej [@dtomcej](https://github.com/dtomcej)
* Manuel Zapf [@SantoDE](https://github.com/SantoDE)
* Timo Reimann [@timoreimann](https://github.com/timoreimann)
* Ludovic Fernandez [@ldez](https://github.com/ldez)
* Julien Salleyron [@juliens](https://github.com/juliens)
* Nicolas Mengin [@nmengin](https://github.com/nmengin)
* Marco Jantke [@marco-jantke](https://github.com/marco-jantke)
* Michaël Matur [@mmatur](https://github.com/mmatur)


## PR review process:

* The status `needs-design-review` is only used in complex/heavy/tricky PRs.
* From `1` to `2`: 1 design LGTM in comment, by a senior maintainer, if needed.
* From `2` to `3`: 3 LGTM by any maintainer.
* If needed, a specific maintainer familiar with a particular domain can be requested for the review.

We use [PRM](https://github.com/ldez/prm) to manage pull requests locally.


## Bots

### [Myrmica Lobicornis](https://github.com/containous/lobicornis/)

**Update and Merge Pull Request**

The maintainer giving the final LGTM must add the `status/3-needs-merge` label to trigger the merge bot.

By default, a squash-rebase merge will be carried out.
If you want to preserve commits, you must add `bot/merge-method-rebase` before `status/3-needs-merge`.

The status `status/4-merge-in-progress` is only for the bot.

If the bot is not able to perform the merge, the label `bot/need-human-merge` is added.
In this case you must solve conflicts/CI/..., and afterwards you only need to remove `bot/need-human-merge`.

A maintainer can add `bot/no-merge` on a PR if they want to (temporarily) prevent a merge by the bot.

`bot/light-review` can be used to decrease the required LGTMs from 3 to 1 for:

- vendor updates from previously reviewed PRs
- merging branches into master
- preparing a release


### [Myrmica Bibikoffi](https://github.com/containous/bibikoffi/)

* closes stale issues [cron]
* uses criteria such as the number of days between creation, last update, labels, ...


### [Myrmica Aloba](https://github.com/containous/aloba)

**Manage GitHub labels**

* Add labels on new PRs [GitHub WebHook]
* Add a milestone to a new PR based on a branch version (1.4, 1.3, ...) [GitHub WebHook]
* Add and remove the `contributor/waiting-for-corrections` label when a review request changes [GitHub WebHook]
* Weekly report of PR status on Slack (CaptainPR) [cron]


## Labels

If we open/look at an issue/PR, we must add a `kind/*`, an `area/*` and a `status/*`.

### Contributor

* `contributor/need-more-information`: we need more information from the contributor in order to analyze a problem.
* `contributor/waiting-for-feedback`: we need the contributor to give us feedback.
* `contributor/waiting-for-corrections`: we need the contributor to take actions in order to move forward with a PR. **(only for PR)** _[bot, humans]_
* `contributor/needs-resolve-conflicts`: use it only when there are conflicts (and an automatic rebase is not possible). **(only for PR)** _[bot, humans]_

### Kind

* `kind/enhancement`: a new or improved feature.
* `kind/question`: it's a question. **(only for issue)**
* `kind/proposal`: proposal PRs/issues need a public debate.
  * _Proposal issues_ are design proposals that need to be refined with multiple contributors.
  * _Proposal PRs_ are technical prototypes that need to be refined with multiple contributors.

* `kind/bug/possible`: if we need to analyze to understand if it's a bug or not. **(only for issues)**
* `kind/bug/confirmed`: we are sure, it's a bug. **(only for issues)**
* `kind/bug/fix`: it's a bug fix. **(only for PR)**

### Resolution

* `resolution/duplicate`: it's a duplicate issue/PR.
* `resolution/declined`: Rule #1 of open-source: no is temporary, yes is forever.
* `WIP`: Work In Progress. **(only for PR)**

### Platform

* `platform/windows`: Windows related.

### Area

* `area/acme`: ACME related.
* `area/api`: Traefik API related.
* `area/authentication`: Authentication related.
* `area/cluster`: Traefik clustering related.
* `area/documentation`: regards improving/adding documentation.
* `area/infrastructure`: related to CI or Traefik building scripts.
* `area/healthcheck`: Health-check related.
* `area/logs`: Traefik logs related.
* `area/middleware`: Middleware related.
* `area/middleware/metrics`: Metrics related. (Prometheus, StatsD, ...)
* `area/oxy`: Oxy related.
* `area/provider`: related to all providers.
* `area/provider/boltdb`: BoltDB related.
* `area/provider/consul`: Consul related.
* `area/provider/docker`: Docker and Swarm related.
* `area/provider/ecs`: ECS related.
* `area/provider/etcd`: Etcd related.
* `area/provider/eureka`: Eureka related.
* `area/provider/file`: file provider related.
* `area/provider/k8s`: Kubernetes related.
* `area/provider/marathon`: Marathon related.
* `area/provider/mesos`: Mesos related.
* `area/provider/rancher`: Rancher related.
* `area/provider/zk`: ZooKeeper related.
* `area/sticky-session`: Sticky session related.
* `area/tls`: TLS related.
* `area/websocket`: WebSocket related.
* `area/webui`: Web UI related.

### Priority

* `priority/P0`: needs a hotfix. **(only for issue)**
* `priority/P1`: needs to be fixed in the next release. **(only for issue)**
* `priority/P2`: needs to be fixed in the future. **(only for issue)**
* `priority/P3`: maybe. **(only for issue)**

### PR size

* `size/S`: small PR. **(only for PR)** _[bot only]_
* `size/M`: medium PR. **(only for PR)** _[bot only]_
* `size/L`: large PR. **(only for PR)** _[bot only]_

### Status - Workflow

The `status/*` labels represent the desired state in the workflow.

* `status/0-needs-triage`: all new issues or PRs have this status. _[bot only]_
* `status/1-needs-design-review`: needs a design review. **(only for PR)**
* `status/2-needs-review`: needs a code/documentation review. **(only for PR)**
* `status/3-needs-merge`: ready to merge. **(only for PR)**
* `status/4-merge-in-progress`: merge in progress. _[bot only]_
73 Makefile
@@ -6,31 +6,21 @@ TRAEFIK_ENVS := \
-e TESTFLAGS \
-e VERBOSE \
-e VERSION \
-e CODENAME \
-e TESTDIRS \
-e CI \
-e CONTAINER=DOCKER # Indicator for integration tests that we are running inside a container.
-e CODENAME

SRCS = $(shell git ls-files '*.go' | grep -v '^vendor/')
SRCS = $(shell git ls-files '*.go' | grep -v '^external/')

BIND_DIR := "dist"
TRAEFIK_MOUNT := -v "$(CURDIR)/$(BIND_DIR):/go/src/github.com/containous/traefik/$(BIND_DIR)"

GIT_BRANCH := $(subst heads/,,$(shell git rev-parse --abbrev-ref HEAD 2>/dev/null))
TRAEFIK_DEV_IMAGE := traefik-dev$(if $(GIT_BRANCH),:$(subst /,-,$(GIT_BRANCH)))
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
TRAEFIK_DEV_IMAGE := traefik-dev$(if $(GIT_BRANCH),:$(GIT_BRANCH))
REPONAME := $(shell echo $(REPO) | tr '[:upper:]' '[:lower:]')
TRAEFIK_IMAGE := $(if $(REPONAME),$(REPONAME),"containous/traefik")
INTEGRATION_OPTS := $(if $(MAKE_DOCKER_HOST),-e "DOCKER_HOST=$(MAKE_DOCKER_HOST)", -e "TEST_CONTAINER=1" -v "/var/run/docker.sock:/var/run/docker.sock")
TRAEFIK_DOC_IMAGE := traefik-docs
INTEGRATION_OPTS := $(if $(MAKE_DOCKER_HOST),-e "DOCKER_HOST=$(MAKE_DOCKER_HOST)", -v "/var/run/docker.sock:/var/run/docker.sock")

DOCKER_BUILD_ARGS := $(if $(DOCKER_VERSION), "--build-arg=DOCKER_VERSION=$(DOCKER_VERSION)",)
DOCKER_RUN_OPTS := $(TRAEFIK_ENVS) $(TRAEFIK_MOUNT) "$(TRAEFIK_DEV_IMAGE)"
DOCKER_RUN_TRAEFIK := docker run $(INTEGRATION_OPTS) -it $(DOCKER_RUN_OPTS)
DOCKER_RUN_TRAEFIK_NOTTY := docker run $(INTEGRATION_OPTS) -i $(DOCKER_RUN_OPTS)
DOCKER_RUN_DOC_PORT := 8000
DOCKER_RUN_DOC_MOUNT := -v $(CURDIR):/mkdocs
DOCKER_RUN_DOC_OPTS := --rm $(DOCKER_RUN_DOC_MOUNT) -p $(DOCKER_RUN_DOC_PORT):8000

DOCKER_RUN_TRAEFIK := docker run $(INTEGRATION_OPTS) -it $(TRAEFIK_ENVS) $(TRAEFIK_MOUNT) "$(TRAEFIK_DEV_IMAGE)"

print-%: ; @echo $*=$($*)

@@ -45,24 +35,6 @@ binary: generate-webui build ## build the linux binary
crossbinary: generate-webui build ## cross build the non-linux binaries
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate crossbinary

crossbinary-parallel:
$(MAKE) generate-webui
$(MAKE) build crossbinary-default crossbinary-others

crossbinary-default: generate-webui build
$(DOCKER_RUN_TRAEFIK_NOTTY) ./script/make.sh generate crossbinary-default

crossbinary-default-parallel:
$(MAKE) generate-webui
$(MAKE) build crossbinary-default

crossbinary-others: generate-webui build
$(DOCKER_RUN_TRAEFIK_NOTTY) ./script/make.sh generate crossbinary-others

crossbinary-others-parallel:
$(MAKE) generate-webui
$(MAKE) build crossbinary-others

test: build ## run the unit and integration tests
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate test-unit binary test-integration

@@ -70,11 +42,10 @@ test-unit: build ## run the unit tests
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate test-unit

test-integration: build ## run the integration tests
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate binary test-integration
TEST_HOST=1 ./script/make.sh test-integration
$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate test-integration

validate: build ## validate gofmt, golint and go vet
$(DOCKER_RUN_TRAEFIK) ./script/make.sh validate-gofmt validate-govet validate-golint validate-misspell validate-vendor validate-autogen
$(DOCKER_RUN_TRAEFIK) ./script/make.sh validate-gofmt validate-govet validate-golint

build: dist
docker build $(DOCKER_BUILD_ARGS) -t "$(TRAEFIK_DEV_IMAGE)" -f build.Dockerfile .
@@ -88,27 +59,15 @@ build-no-cache: dist
shell: build ## start a shell inside the build env
$(DOCKER_RUN_TRAEFIK) /bin/bash

image-dirty: binary ## build a docker traefik image
image: build ## build a docker traefik image
docker build -t $(TRAEFIK_IMAGE) .

image: clear-static binary ## clean up static directory and build a docker traefik image
docker build -t $(TRAEFIK_IMAGE) .

docs: docs-image
docker run $(DOCKER_RUN_DOC_OPTS) $(TRAEFIK_DOC_IMAGE) mkdocs serve

docs-image:
docker build -t $(TRAEFIK_DOC_IMAGE) -f docs.Dockerfile .

clear-static:
rm -rf static

dist:
mkdir dist

run-dev:
go generate
go build ./cmd/traefik
go build
./traefik

generate-webui: build-webui

@@ -124,15 +83,11 @@ lint:
fmt:
gofmt -s -l -w $(SRCS)

pull-images:
grep --no-filename -E '^\s+image:' ./integration/resources/compose/*.yml | awk '{print $$2}' | sort | uniq | xargs -P 6 -n 1 docker pull
deploy:
./script/deploy.sh

dep-ensure:
dep ensure -v
./script/prune-dep.sh

dep-prune:
./script/prune-dep.sh
deploy-pr:
./script/deploy-pr.sh

help: ## this help
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {sub("\\\\n",sprintf("\n%22c"," "), $$2);printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
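
For orientation, a sketch of how the main targets in the Makefile above are typically invoked (target names taken from the diff; output omitted):

```bash
make binary            # build the linux binary into dist/
make test-unit         # run the unit tests inside the build container
make test-integration  # run the integration tests
make validate          # gofmt, govet and golint checks
make docs              # serve the documentation on port 8000
```
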
216 README.md
@@ -1,181 +1,169 @@

<p align="center">
<img src="docs/img/traefik.logo.png" alt="Træfik" title="Træfik" />
<img src="docs/img/traefik.logo.png" alt="Træfɪk" title="Træfɪk" />
</p>

[](https://semaphoreci.com/containous/traefik)
[](https://travis-ci.org/containous/traefik)
[](https://docs.traefik.io)
[](http://goreportcard.com/report/containous/traefik)
[](http://goreportcard.com/report/containous/traefik)
[](https://microbadger.com/images/traefik)
[](https://github.com/containous/traefik/blob/master/LICENSE.md)
[](https://traefik.herokuapp.com)
[](https://twitter.com/intent/follow?screen_name=traefikproxy)


Træfik is a modern HTTP reverse proxy and load balancer that makes deploying microservices easy.
Træfik integrates with your existing infrastructure components ([Docker](https://www.docker.com/), [Swarm mode](https://docs.docker.com/engine/swarm/), [Kubernetes](https://kubernetes.io), [Marathon](https://mesosphere.github.io/marathon/), [Consul](https://www.consul.io/), [Etcd](https://coreos.com/etcd/), [Rancher](https://rancher.com), [Amazon ECS](https://aws.amazon.com/ecs), ...) and configures itself automatically and dynamically.
Telling Træfik where your orchestrator is could be the _only_ configuration step you need to do.

---

. **[Overview](#overview)** .
**[Features](#features)** .
**[Supported backends](#supported-backends)** .
**[Quickstart](#quickstart)** .
**[Web UI](#web-ui)** .
**[Test it](#test-it)** .
**[Documentation](#documentation)** .

. **[Support](#support)** .
**[Release cycle](#release-cycle)** .
**[Contributing](#contributing)** .
**[Maintainers](#maintainers)** .
**[Plumbing](#plumbing)** .
**[Credits](#credits)** .

---
Træfɪk is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
It supports several backends ([Docker](https://www.docker.com/), [Swarm](https://docs.docker.com/swarm), [Kubernetes](http://kubernetes.io), [Marathon](https://mesosphere.github.io/marathon/), [Mesos](https://github.com/apache/mesos), [Consul](https://www.consul.io/), [Etcd](https://coreos.com/etcd/), [Zookeeper](https://zookeeper.apache.org), [BoltDB](https://github.com/boltdb/bolt), Rest API, file...) to manage its configuration automatically and dynamically.
|
||||
|
||||
## Overview
|
||||
|
||||
Imagine that you have deployed a bunch of microservices with the help of an orchestrator (like Swarm or Kubernetes) or a service registry (like etcd or consul).
|
||||
Now you want users to access these microservices, and you need a reverse proxy.
|
||||
Imagine that you have deployed a bunch of microservices on your infrastructure. You probably used a service registry (like etcd or consul) and/or an orchestrator (swarm, Mesos/Marathon) to manage all these services.
|
||||
If you want your users to access some of your microservices from the Internet, you will have to use a reverse proxy and configure it using virtual hosts or prefix paths:
|
||||
|
||||
Traditional reverse-proxies require that you configure _each_ route that will connect paths and subdomains to _each_ microservice.
|
||||
In an environment where you add, remove, kill, upgrade, or scale your services _many_ times a day, the task of keeping the routes up to date becomes tedious.
|
||||
- domain `api.domain.com` will point the microservice `api` in your private network
|
||||
- path `domain.com/web` will point the microservice `web` in your private network
|
||||
- domain `backoffice.domain.com` will point the microservices `backoffice` in your private network, load-balancing between your multiple instances
|
||||
|
||||
**This is when Træfik can help you!**
|
||||
But a microservices architecture is dynamic... Services are added, removed, killed or upgraded often, eventually several times a day.
|
||||
|
||||
Træfik listens to your service registry/orchestrator API and instantly generates the routes so your microservices are connected to the outside world -- without further intervention from your part.
|
||||
Traditional reverse-proxies are not natively dynamic. You can't change their configuration and hot-reload easily.
|
||||
|
||||
**Run Træfik and let it do the work for you!**
|
||||
_(But if you'd rather configure some of your routes manually, Træfik supports that too!)_
|
||||
Here enters Træfɪk.
|
||||
|
||||

|
||||
|
||||
Træfɪk can listen to your service registry/orchestrator API, and knows each time a microservice is added, removed, killed or upgraded, and can generate its configuration automatically.
|
||||
Routes to your services will be created instantly.
|
||||
|
||||
Run it and forget it!
|
||||
|
||||
|
||||
|
||||
|
||||
## Features
|
||||
|
||||
- Continuously updates its configuration (No restarts!)
|
||||
- Supports multiple load balancing algorithms
|
||||
- Provides HTTPS to your microservices by leveraging [Let's Encrypt](https://letsencrypt.org)
|
||||
- Circuit breakers, retry
|
||||
- High Availability with cluster mode (beta)
|
||||
- See the magic through its clean web UI
|
||||
- Websocket, HTTP/2, GRPC ready
|
||||
- Provides metrics (Rest, Prometheus, Datadog, Statsd, InfluxDB)
|
||||
- Keeps access logs (JSON, CLF)
|
||||
- [Fast](https://docs.traefik.io/benchmarks) ... which is nice
|
||||
- Exposes a Rest API
|
||||
- Packaged as a single binary file (made with :heart: with go) and available as a [tiny](https://microbadger.com/images/traefik) [official](https://hub.docker.com/r/_/traefik/) docker image
|
||||
|
||||
|
||||
## Supported Backends

- [Docker](docs/configuration/backends/docker/) / [Swarm mode](docs/configuration/backends/docker/#docker-swarm-mode)
- [Kubernetes](docs/configuration/backends/kubernetes/)
- [Mesos](docs/configuration/backends/mesos/) / [Marathon](docs/configuration/backends/marathon/)
- [Rancher](docs/configuration/backends/rancher/) (API, Metadata)
- [Azure Service Fabric](docs/configuration/backends/servicefabric/)
- [Consul Catalog](docs/configuration/backends/consulcatalog/)
- [Consul](docs/configuration/backends/consul/) / [Etcd](docs/configuration/backends/etcd/) / [Zookeeper](docs/configuration/backends/zookeeper/) / [BoltDB](docs/configuration/backends/boltdb/)
- [Eureka](docs/configuration/backends/eureka/)
- [Amazon ECS](docs/configuration/backends/ecs/)
- [Amazon DynamoDB](docs/configuration/backends/dynamodb/)
- [File](docs/configuration/backends/file/)
- [Rest](docs/configuration/backends/rest/)

## Quickstart

To get your hands on Træfik, you can use the [5-Minute Quickstart](http://docs.traefik.io/#the-trfik-quickstart-using-docker) in our documentation (you will need Docker).

Alternatively, if you don't want to install anything on your computer, you can try Træfik online in this great [Katacoda tutorial](https://www.katacoda.com/courses/traefik/deploy-load-balancer) that shows how to load balance requests between multiple Docker containers.

If you are looking for a more comprehensive and real use-case example, you can also check [Play-With-Docker](http://training.play-with-docker.com/traefik-load-balancing/) to see how to load balance between multiple nodes.

## Web UI

You can access the simple HTML frontend of Træfik.

![Web UI Providers](docs/img/web.frontend.png)
![Web UI Health](docs/img/traefik-health.png)

## Documentation

You can find the complete documentation at [https://docs.traefik.io](https://docs.traefik.io).

A collection of contributions around Træfik can be found at [https://awesome.traefik.io](https://awesome.traefik.io).

## Support

To get community support, you can:

- join the Træfik community Slack channel: [![Join the chat at https://traefik.herokuapp.com](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://traefik.herokuapp.com)
- use [Stack Overflow](https://stackoverflow.com/questions/tagged/traefik) (using the `traefik` tag)

If you need commercial support, please contact [Containo.us](https://containo.us) by mail: <mailto:support@containo.us>.

## Download

- Grab the latest binary from the [releases](https://github.com/containous/traefik/releases) page and run it with the [sample configuration file](https://raw.githubusercontent.com/containous/traefik/master/traefik.sample.toml):

```shell
./traefik --configFile=traefik.toml
```

- Or use the official tiny Docker image and run it with the [sample configuration file](https://raw.githubusercontent.com/containous/traefik/master/traefik.sample.toml):

```shell
docker run -d -p 8080:8080 -p 80:80 -v $PWD/traefik.toml:/etc/traefik/traefik.toml traefik
```

- Or get the sources:

```shell
git clone https://github.com/containous/traefik
```

## Introductory Videos

Here is a talk given by [Emile Vauge](https://github.com/emilevauge) at [GopherCon 2017](https://gophercon.com/).
You will learn Træfik basics in less than 10 minutes.

[![Traefik GopherCon 2017](https://img.youtube.com/vi/RgudiksfL-k/0.jpg)](https://www.youtube.com/watch?v=RgudiksfL-k)

Here is a talk given by [Ed Robinson](https://github.com/errm) at the [ContainerCamp UK](https://container.camp) conference.
You will learn fundamental Træfik features and see some demos with Kubernetes.

[![Traefik ContainerCamp UK](https://img.youtube.com/vi/aFtpIShV60I/0.jpg)](https://www.youtube.com/watch?v=aFtpIShV60I)

## Maintainers

[Information about the process and maintainers](MAINTAINER.md)

- Emile Vauge [@emilevauge](https://github.com/emilevauge)
- Vincent Demeester [@vdemeester](https://github.com/vdemeester)
- Russell Clare [@Russell-IO](https://github.com/Russell-IO)
- Ed Robinson [@errm](https://github.com/errm)
- Daniel Tomcej [@dtomcej](https://github.com/dtomcej)
- Manuel Laufenberg [@SantoDE](https://github.com/SantoDE)

## Contributing

If you'd like to contribute to the project, refer to the [contributing documentation](CONTRIBUTING.md).

## Code Of Conduct

Please note that this project is released with a [Contributor Code of Conduct](CODE_OF_CONDUCT.md).
By participating in this project, you agree to abide by its terms.

## Release Cycle

- We release a new version (e.g. 1.1.0, 1.2.0, 1.3.0) every other month.
- Release Candidates are available before the release (e.g. 1.1.0-rc1, 1.1.0-rc2, 1.1.0-rc3, 1.1.0-rc4, before 1.1.0)
- Bug-fixes (e.g. 1.1.1, 1.1.2, 1.2.1, 1.2.3) are released as needed (no additional features are delivered in those versions, bug-fixes only)
- Each version is supported until the next one is released (e.g. 1.1.x will be supported until 1.2.0 is out)

We use [Semantic Versioning](http://semver.org/).

## Plumbing

- [Oxy](https://github.com/vulcand/oxy): an awesome proxy library made by Mailgun folks
- [Gorilla mux](https://github.com/gorilla/mux): famous request router
- [Negroni](https://github.com/urfave/negroni): web middlewares made simple
- [Lego](https://github.com/xenolf/lego): the best [Let's Encrypt](https://letsencrypt.org) library in go

## Træfɪk here and there

These projects use Træfɪk internally. If your company uses Træfɪk, we would be glad to get your feedback :) Contact us on [![Join the chat at https://traefik.herokuapp.com](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://traefik.herokuapp.com)

- Project [Mantl](https://mantl.io/) from Cisco

![Mantl](docs/img/mantl-logo.png)
> Mantl is a modern platform for rapidly deploying globally distributed services. A container orchestrator, docker, a network stack, something to pool your logs, something to monitor health, a sprinkle of service discovery and some automation.

- Project [Apollo](http://capgemini.github.io/devops/apollo/) from Cap Gemini

![Apollo](docs/img/apollo-logo.png)
> Apollo is an open source project to aid with building and deploying IAAS and PAAS services. It is particularly geared towards managing containerized applications across multiple hosts, and big data type workloads. Apollo leverages other open source components to provide basic mechanisms for deployment, maintenance, and scaling of infrastructure and applications.

## Partners

[![Zenika](docs/img/zenika.logo.png)](https://zenika.com)

Zenika is one of the leading providers of professional Open Source services and agile methodologies in Europe. We provide consulting, development, training and support for the world’s leading Open Source software products.

[![Asteris](docs/img/asteris.logo.png)](https://aster.is)

Founded in 2014, Asteris creates next-generation infrastructure software for the modern datacenter. Asteris writes software that makes it easy for companies to implement continuous delivery and realtime data pipelines. We support the HashiCorp stack, along with Kubernetes, Apache Mesos, Spark and Kafka. We're core committers on mantl.io, consul-cli and mesos-consul.

## Credits

Kudos to [Peka](http://peka.byethost11.com/photoblog/) for his awesome work on the logo ![logo](docs/img/traefik.icon.png)

Traefik's logo is licensed under the Creative Commons 3.0 Attributions license.

Traefik's logo was inspired by the gopher stickers made by Takuya Ueda (https://twitter.com/tenntenn).
The original Go gopher was designed by Renee French (http://reneefrench.blogspot.com/).

@@ -6,15 +6,14 @@ import (
|
||||
"crypto/rsa"
|
||||
"crypto/tls"
|
||||
"crypto/x509"
|
||||
"fmt"
|
||||
"errors"
|
||||
"github.com/containous/traefik/log"
|
||||
"github.com/xenolf/lego/acme"
|
||||
"reflect"
|
||||
"sort"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/containous/traefik/log"
|
||||
"github.com/xenolf/lego/acme"
|
||||
)
|
||||
|
||||
// Account is used to store lets encrypt registration info
|
||||
@@ -24,7 +23,6 @@ type Account struct {
|
||||
PrivateKey []byte
|
||||
DomainsCertificate DomainsCertificates
|
||||
ChallengeCerts map[string]*ChallengeCert
|
||||
HTTPChallenge map[string]map[string][]byte
|
||||
}
|
||||
|
||||
// ChallengeCert stores a challenge certificate
|
||||
@@ -34,7 +32,7 @@ type ChallengeCert struct {
|
||||
certificate *tls.Certificate
|
||||
}
|
||||
|
||||
// Init inits account struct
|
||||
// Init inits acccount struct
|
||||
func (a *Account) Init() error {
|
||||
err := a.DomainsCertificate.Init()
|
||||
if err != nil {
|
||||
@@ -134,7 +132,7 @@ func (dc *DomainsCertificates) removeDuplicates() {
|
||||
for i2 := i + 1; i2 < len(dc.Certs); i2++ {
|
||||
if reflect.DeepEqual(dc.Certs[i].Domains, dc.Certs[i2].Domains) {
|
||||
// delete
|
||||
log.Warnf("Remove duplicate cert: %+v, expiration :%s", dc.Certs[i2].Domains, dc.Certs[i2].tlsCert.Leaf.NotAfter.String())
|
||||
log.Warnf("Remove duplicate cert: %+v, exipration :%s", dc.Certs[i2].Domains, dc.Certs[i2].tlsCert.Leaf.NotAfter.String())
|
||||
dc.Certs = append(dc.Certs[:i2], dc.Certs[i2+1:]...)
|
||||
i2--
|
||||
}
|
||||
@@ -179,7 +177,7 @@ func (dc *DomainsCertificates) renewCertificates(acmeCert *Certificate, domain D
|
||||
return nil
|
||||
}
|
||||
}
|
||||
return fmt.Errorf("certificate to renew not found for domain %s", domain.Main)
|
||||
return errors.New("Certificate to renew not found for domain " + domain.Main)
|
||||
}
|
||||
|
||||
func (dc *DomainsCertificates) addCertificateForDomains(acmeCert *Certificate, domain Domain) (*DomainsCertificate, error) {
|
||||
@@ -222,24 +220,6 @@ func (dc *DomainsCertificates) exists(domainToFind Domain) (*DomainsCertificate,
|
||||
return nil, false
|
||||
}
|
||||
|
||||
func (dc *DomainsCertificates) toDomainsMap() map[string]*tls.Certificate {
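// The returned map is keyed by the main domain optionally followed by the
// sorted SANs, comma-separated (e.g. "main.com,san1.com,san2.com"); the same
// comma-separated form is split back apart when certificates are matched
// elsewhere in this package.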
|
||||
domainsCertificatesMap := make(map[string]*tls.Certificate)
|
||||
for _, domainCertificate := range dc.Certs {
|
||||
certKey := domainCertificate.Domains.Main
|
||||
if domainCertificate.Domains.SANs != nil {
|
||||
sort.Strings(domainCertificate.Domains.SANs)
|
||||
for _, dnsName := range domainCertificate.Domains.SANs {
|
||||
if dnsName != domainCertificate.Domains.Main {
|
||||
certKey += fmt.Sprintf(",%s", dnsName)
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
domainsCertificatesMap[certKey] = domainCertificate.tlsCert
|
||||
}
|
||||
return domainsCertificatesMap
|
||||
}
|
||||
|
||||
// DomainsCertificate contains a certificate for multiple domains
|
||||
type DomainsCertificate struct {
|
||||
Domains Domain
|
||||
@@ -254,7 +234,7 @@ func (dc *DomainsCertificate) needRenew() bool {
|
||||
// If there's an error, we assume the cert is broken, and needs update
|
||||
return true
|
||||
}
|
||||
// <= 30 days left, renew certificate
|
||||
// <= 7 days left, renew certificate
|
||||
if crt.NotAfter.Before(time.Now().Add(time.Duration(24 * 30 * time.Hour))) {
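// 24 * 30 * time.Hour is 30 days, so this branch renews once the certificate
// has 30 days or less of validity left; the inner time.Duration conversion is
// redundant (the expression is already a time.Duration) but harmless.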
|
||||
return true
|
||||
}
|
||||
|
||||
555
acme/acme.go
@@ -1,75 +1,42 @@
|
||||
package acme
|
||||
|
||||
import (
|
||||
"context"
|
||||
"crypto/tls"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
fmtlog "log"
|
||||
"net"
|
||||
"net/http"
|
||||
"os"
|
||||
"regexp"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/BurntSushi/ty/fun"
|
||||
"github.com/cenk/backoff"
|
||||
"github.com/containous/flaeg"
|
||||
"github.com/containous/mux"
|
||||
"github.com/containous/staert"
|
||||
"github.com/containous/traefik/cluster"
|
||||
"github.com/containous/traefik/log"
|
||||
"github.com/containous/traefik/safe"
|
||||
traefikTls "github.com/containous/traefik/tls"
|
||||
"github.com/containous/traefik/tls/generate"
|
||||
"github.com/containous/traefik/types"
|
||||
"github.com/eapache/channels"
|
||||
"github.com/xenolf/lego/acme"
|
||||
"github.com/xenolf/lego/providers/dns"
|
||||
)
|
||||
|
||||
var (
|
||||
// OSCPMustStaple enables OSCP stapling as from https://github.com/xenolf/lego/issues/270
|
||||
OSCPMustStaple = false
|
||||
"golang.org/x/net/context"
|
||||
"io/ioutil"
|
||||
fmtlog "log"
|
||||
"os"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
// ACME allows to connect to lets encrypt and retrieve certs
|
||||
type ACME struct {
|
||||
Email string `description:"Email address used for registration"`
|
||||
Domains []Domain `description:"SANs (alternative domains) to each main domain using format: --acme.domains='main.com,san1.com,san2.com' --acme.domains='main.net,san1.net,san2.net'"`
|
||||
Storage string `description:"File or key used for certificates storage."`
|
||||
StorageFile string // deprecated
|
||||
OnDemand bool `description:"Enable on demand certificate generation. This will request a certificate from Let's Encrypt during the first TLS handshake for a hostname that does not yet have a certificate."` //deprecated
|
||||
OnHostRule bool `description:"Enable certificate generation on frontends Host rules."`
|
||||
CAServer string `description:"CA server to use."`
|
||||
EntryPoint string `description:"Entrypoint to proxy acme challenge to."`
|
||||
DNSChallenge *DNSChallenge `description:"Activate DNS-01 Challenge"`
|
||||
HTTPChallenge *HTTPChallenge `description:"Activate HTTP-01 Challenge"`
|
||||
DNSProvider string `description:"Use a DNS-01 acme challenge rather than TLS-SNI-01 challenge."` // deprecated
|
||||
DelayDontCheckDNS flaeg.Duration `description:"Assume DNS propagates after a delay in seconds rather than finding and querying nameservers."` // deprecated
|
||||
ACMELogging bool `description:"Enable debug logging of ACME actions."`
|
||||
client *acme.Client
|
||||
defaultCertificate *tls.Certificate
|
||||
store cluster.Store
|
||||
challengeTLSProvider *challengeTLSProvider
|
||||
challengeHTTPProvider *challengeHTTPProvider
|
||||
checkOnDemandDomain func(domain string) bool
|
||||
jobs *channels.InfiniteChannel
|
||||
TLSConfig *tls.Config `description:"TLS config in case wildcard certs are used"`
|
||||
dynamicCerts *safe.Safe
|
||||
}
|
||||
|
||||
// DNSChallenge contains DNS challenge Configuration
|
||||
type DNSChallenge struct {
|
||||
Provider string `description:"Use a DNS-01 based challenge provider rather than HTTPS."`
|
||||
DelayBeforeCheck flaeg.Duration `description:"Assume DNS propagates after a delay in seconds rather than finding and querying nameservers."`
|
||||
}
|
||||
|
||||
// HTTPChallenge contains HTTP challenge Configuration
|
||||
type HTTPChallenge struct {
|
||||
EntryPoint string `description:"HTTP challenge EntryPoint"`
|
||||
Email string `description:"Email address used for registration"`
|
||||
Domains []Domain `description:"SANs (alternative domains) to each main domain using format: --acme.domains='main.com,san1.com,san2.com' --acme.domains='main.net,san1.net,san2.net'"`
|
||||
Storage string `description:"File or key used for certificates storage."`
|
||||
StorageFile string // deprecated
|
||||
OnDemand bool `description:"Enable on demand certificate. This will request a certificate from Let's Encrypt during the first TLS handshake for a hostname that does not yet have a certificate."`
|
||||
OnHostRule bool `description:"Enable certificate generation on frontends Host rules."`
|
||||
CAServer string `description:"CA server to use."`
|
||||
EntryPoint string `description:"Entrypoint to proxy acme challenge to."`
|
||||
client *acme.Client
|
||||
defaultCertificate *tls.Certificate
|
||||
store cluster.Store
|
||||
challengeProvider *challengeProvider
|
||||
checkOnDemandDomain func(domain string) bool
|
||||
jobs *channels.InfiniteChannel
|
||||
}
|
||||
|
||||
//Domains parse []Domain
|
||||
@@ -114,66 +81,24 @@ type Domain struct {
|
||||
}
|
||||
|
||||
func (a *ACME) init() error {
|
||||
// FIXME temporary fix, waiting for https://github.com/xenolf/lego/pull/478
|
||||
acme.HTTPClient = http.Client{
|
||||
Transport: &http.Transport{
|
||||
Proxy: http.ProxyFromEnvironment,
|
||||
Dial: (&net.Dialer{
|
||||
Timeout: 30 * time.Second,
|
||||
KeepAlive: 30 * time.Second,
|
||||
}).Dial,
|
||||
TLSHandshakeTimeout: 15 * time.Second,
|
||||
ResponseHeaderTimeout: 15 * time.Second,
|
||||
ExpectContinueTimeout: 1 * time.Second,
|
||||
},
|
||||
}
|
||||
|
||||
if a.ACMELogging {
|
||||
acme.Logger = fmtlog.New(os.Stderr, "legolog: ", fmtlog.LstdFlags)
|
||||
} else {
|
||||
acme.Logger = fmtlog.New(ioutil.Discard, "", 0)
|
||||
}
|
||||
acme.Logger = fmtlog.New(ioutil.Discard, "", 0)
|
||||
// no certificates in TLS config, so we add a default one
|
||||
cert, err := generate.DefaultCertificate()
|
||||
cert, err := generateDefaultCertificate()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
a.defaultCertificate = cert
|
||||
|
||||
// TODO: to remove in the future
|
||||
if len(a.StorageFile) > 0 && len(a.Storage) == 0 {
|
||||
log.Warnf("ACME.StorageFile is deprecated, use ACME.Storage instead")
|
||||
a.Storage = a.StorageFile
|
||||
}
|
||||
a.jobs = channels.NewInfiniteChannel()
|
||||
return nil
|
||||
}
|
||||
|
||||
// AddRoutes add routes on internal router
|
||||
func (a *ACME) AddRoutes(router *mux.Router) {
|
||||
router.Methods(http.MethodGet).
|
||||
Path(acme.HTTP01ChallengePath("{token}")).
|
||||
Handler(http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
|
||||
if a.challengeHTTPProvider == nil {
|
||||
rw.WriteHeader(http.StatusNotFound)
|
||||
return
|
||||
}
|
||||
|
||||
vars := mux.Vars(req)
|
||||
if token, ok := vars["token"]; ok {
|
||||
domain, _, err := net.SplitHostPort(req.Host)
|
||||
if err != nil {
|
||||
log.Debugf("Unable to split host and port: %v. Fallback to request host.", err)
|
||||
domain = req.Host
|
||||
}
|
||||
tokenValue := a.challengeHTTPProvider.getTokenValue(token, domain)
|
||||
if len(tokenValue) > 0 {
|
||||
rw.WriteHeader(http.StatusOK)
|
||||
rw.Write(tokenValue)
|
||||
return
|
||||
}
|
||||
}
|
||||
rw.WriteHeader(http.StatusNotFound)
|
||||
}))
|
||||
}
|
||||
|
||||
// CreateClusterConfig creates a tls.config using ACME configuration in cluster mode
|
||||
func (a *ACME) CreateClusterConfig(leadership *cluster.Leadership, tlsConfig *tls.Config, certs *safe.Safe, checkOnDemandDomain func(domain string) bool) error {
|
||||
func (a *ACME) CreateClusterConfig(leadership *cluster.Leadership, tlsConfig *tls.Config, checkOnDemandDomain func(domain string) bool) error {
|
||||
err := a.init()
|
||||
if err != nil {
|
||||
return err
|
||||
@@ -182,10 +107,8 @@ func (a *ACME) CreateClusterConfig(leadership *cluster.Leadership, tlsConfig *tl
|
||||
return errors.New("Empty Store, please provide a key for certs storage")
|
||||
}
|
||||
a.checkOnDemandDomain = checkOnDemandDomain
|
||||
a.dynamicCerts = certs
|
||||
tlsConfig.Certificates = append(tlsConfig.Certificates, *a.defaultCertificate)
|
||||
tlsConfig.GetCertificate = a.getCertificate
|
||||
a.TLSConfig = tlsConfig
|
||||
listener := func(object cluster.Object) error {
|
||||
account := object.(*Account)
|
||||
account.Init()
|
||||
@@ -211,12 +134,12 @@ func (a *ACME) CreateClusterConfig(leadership *cluster.Leadership, tlsConfig *tl
|
||||
}
|
||||
|
||||
a.store = datastore
|
||||
a.challengeTLSProvider = &challengeTLSProvider{store: a.store}
|
||||
a.challengeProvider = &challengeProvider{store: a.store}
|
||||
|
||||
ticker := time.NewTicker(24 * time.Hour)
|
||||
leadership.Pool.AddGoCtx(func(ctx context.Context) {
|
||||
log.Info("Starting ACME renew job...")
|
||||
defer log.Info("Stopped ACME renew job...")
|
||||
log.Infof("Starting ACME renew job...")
|
||||
defer log.Infof("Stopped ACME renew job...")
|
||||
for {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
@@ -227,75 +150,74 @@ func (a *ACME) CreateClusterConfig(leadership *cluster.Leadership, tlsConfig *tl
|
||||
}
|
||||
})
|
||||
|
||||
leadership.AddListener(a.leadershipListener)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (a *ACME) leadershipListener(elected bool) error {
|
||||
if elected {
|
||||
_, err := a.store.Load()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
transaction, object, err := a.store.Begin()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
account := object.(*Account)
|
||||
account.Init()
|
||||
var needRegister bool
|
||||
if account == nil || len(account.Email) == 0 {
|
||||
account, err = NewAccount(a.Email)
|
||||
leadership.AddListener(func(elected bool) error {
|
||||
if elected {
|
||||
object, err := a.store.Load()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
needRegister = true
|
||||
}
|
||||
a.client, err = a.buildACMEClient(account)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if needRegister {
|
||||
// New users will need to register; be sure to save it
|
||||
log.Debug("Register...")
|
||||
reg, err := a.client.Register()
|
||||
transaction, object, err := a.store.Begin()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
account.Registration = reg
|
||||
}
|
||||
// The client has a URL to the current Let's Encrypt Subscriber
|
||||
// Agreement. The user will need to agree to it.
|
||||
log.Debug("AgreeToTOS...")
|
||||
err = a.client.AgreeToTOS()
|
||||
if err != nil {
|
||||
log.Debug(err)
|
||||
// Let's Encrypt Subscriber Agreement renew ?
|
||||
reg, err := a.client.QueryRegistration()
|
||||
account := object.(*Account)
|
||||
account.Init()
|
||||
var needRegister bool
|
||||
if account == nil || len(account.Email) == 0 {
|
||||
account, err = NewAccount(a.Email)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
needRegister = true
|
||||
}
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
account.Registration = reg
|
||||
a.client, err = a.buildACMEClient(account)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if needRegister {
|
||||
// New users will need to register; be sure to save it
|
||||
log.Debugf("Register...")
|
||||
reg, err := a.client.Register()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
account.Registration = reg
|
||||
}
|
||||
// The client has a URL to the current Let's Encrypt Subscriber
|
||||
// Agreement. The user will need to agree to it.
|
||||
log.Debugf("AgreeToTOS...")
|
||||
err = a.client.AgreeToTOS()
|
||||
if err != nil {
|
||||
log.Errorf("Error sending ACME agreement to TOS: %+v: %s", account, err.Error())
|
||||
// Let's Encrypt Subscriber Agreement renew ?
|
||||
reg, err := a.client.QueryRegistration()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
account.Registration = reg
|
||||
err = a.client.AgreeToTOS()
|
||||
if err != nil {
|
||||
log.Errorf("Error sending ACME agreement to TOS: %+v: %s", account, err.Error())
|
||||
}
|
||||
}
|
||||
err = transaction.Commit(account)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
err = transaction.Commit(account)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
a.retrieveCertificates()
|
||||
a.renewCertificates()
|
||||
a.runJobs()
|
||||
}
|
||||
a.retrieveCertificates()
|
||||
a.renewCertificates()
|
||||
a.runJobs()
|
||||
}
|
||||
return nil
|
||||
})
|
||||
return nil
|
||||
}
|
||||
|
||||
// CreateLocalConfig creates a tls.config using local ACME configuration
|
||||
func (a *ACME) CreateLocalConfig(tlsConfig *tls.Config, certs *safe.Safe, checkOnDemandDomain func(domain string) bool) error {
|
||||
defer a.runJobs()
|
||||
func (a *ACME) CreateLocalConfig(tlsConfig *tls.Config, checkOnDemandDomain func(domain string) bool) error {
|
||||
err := a.init()
|
||||
if err != nil {
|
||||
return err
|
||||
@@ -304,19 +226,18 @@ func (a *ACME) CreateLocalConfig(tlsConfig *tls.Config, certs *safe.Safe, checkO
|
||||
return errors.New("Empty Store, please provide a filename for certs storage")
|
||||
}
|
||||
a.checkOnDemandDomain = checkOnDemandDomain
|
||||
a.dynamicCerts = certs
|
||||
tlsConfig.Certificates = append(tlsConfig.Certificates, *a.defaultCertificate)
|
||||
tlsConfig.GetCertificate = a.getCertificate
|
||||
a.TLSConfig = tlsConfig
|
||||
|
||||
localStore := NewLocalStore(a.Storage)
|
||||
a.store = localStore
|
||||
a.challengeTLSProvider = &challengeTLSProvider{store: a.store}
|
||||
a.challengeProvider = &challengeProvider{store: a.store}
|
||||
|
||||
var needRegister bool
|
||||
var account *Account
|
||||
|
||||
if fileInfo, fileErr := os.Stat(a.Storage); fileErr == nil && fileInfo.Size() != 0 {
|
||||
log.Info("Loading ACME Account...")
|
||||
log.Infof("Loading ACME Account...")
|
||||
// load account
|
||||
object, err := localStore.Load()
|
||||
if err != nil {
|
||||
@@ -324,7 +245,7 @@ func (a *ACME) CreateLocalConfig(tlsConfig *tls.Config, certs *safe.Safe, checkO
|
||||
}
|
||||
account = object.(*Account)
|
||||
} else {
|
||||
log.Info("Generating ACME Account...")
|
||||
log.Infof("Generating ACME Account...")
|
||||
account, err = NewAccount(a.Email)
|
||||
if err != nil {
|
||||
return err
|
||||
@@ -334,34 +255,28 @@ func (a *ACME) CreateLocalConfig(tlsConfig *tls.Config, certs *safe.Safe, checkO
|
||||
|
||||
a.client, err = a.buildACMEClient(account)
|
||||
if err != nil {
|
||||
log.Errorf(`Failed to build ACME client: %s
|
||||
Let's Encrypt functionality will be limited until Traefik is restarted.`, err)
|
||||
return nil
|
||||
return err
|
||||
}
|
||||
|
||||
if needRegister {
|
||||
// New users will need to register; be sure to save it
|
||||
log.Info("Register...")
|
||||
log.Infof("Register...")
|
||||
reg, err := a.client.Register()
|
||||
if err != nil {
|
||||
log.Errorf(`Failed to register user: %s
|
||||
Let's Encrypt functionality will be limited until Traefik is restarted.`, err)
|
||||
return nil
|
||||
return err
|
||||
}
|
||||
account.Registration = reg
|
||||
}
|
||||
|
||||
// The client has a URL to the current Let's Encrypt Subscriber
|
||||
// Agreement. The user will need to agree to it.
|
||||
log.Debug("AgreeToTOS...")
|
||||
log.Debugf("AgreeToTOS...")
|
||||
err = a.client.AgreeToTOS()
|
||||
if err != nil {
|
||||
// Let's Encrypt Subscriber Agreement renew ?
|
||||
reg, err := a.client.QueryRegistration()
|
||||
if err != nil {
|
||||
log.Errorf(`Failed to renew subscriber agreement: %s
|
||||
Let's Encrypt functionality will be limited until Traefik is restarted.`, err)
|
||||
return nil
|
||||
return err
|
||||
}
|
||||
account.Registration = reg
|
||||
err = a.client.AgreeToTOS()
|
||||
@@ -381,12 +296,14 @@ Let's Encrypt functionality will be limited until Traefik is restarted.`, err)
|
||||
|
||||
a.retrieveCertificates()
|
||||
a.renewCertificates()
|
||||
a.runJobs()
|
||||
|
||||
ticker := time.NewTicker(24 * time.Hour)
|
||||
safe.Go(func() {
|
||||
for range ticker.C {
|
||||
a.renewCertificates()
|
||||
}
|
||||
|
||||
})
|
||||
return nil
|
||||
}
|
||||
@@ -394,12 +311,7 @@ Let's Encrypt functionality will be limited until Traefik is restarted.`, err)
|
||||
func (a *ACME) getCertificate(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
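// Called during the TLS handshake. In the newer variant of this function,
// statically or dynamically provided certificates are checked first, then
// pending ACME challenge certificates, and only afterwards does it fall back
// to ACME-generated certificates (including on-demand generation when enabled).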
|
||||
domain := types.CanonicalDomain(clientHello.ServerName)
|
||||
account := a.store.Get().(*Account)
|
||||
|
||||
if providedCertificate := a.getProvidedCertificate(domain); providedCertificate != nil {
|
||||
return providedCertificate, nil
|
||||
}
|
||||
|
||||
if challengeCert, ok := a.challengeTLSProvider.getCertificate(domain); ok {
|
||||
if challengeCert, ok := a.challengeProvider.getCertificate(domain); ok {
|
||||
log.Debugf("ACME got challenge %s", domain)
|
||||
return challengeCert, nil
|
||||
}
|
||||
@@ -413,13 +325,13 @@ func (a *ACME) getCertificate(clientHello *tls.ClientHelloInfo) (*tls.Certificat
|
||||
}
|
||||
return a.loadCertificateOnDemand(clientHello)
|
||||
}
|
||||
log.Debugf("No certificate found or generated for %s", domain)
|
||||
log.Debugf("ACME got nothing %s", domain)
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
func (a *ACME) retrieveCertificates() {
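// Queued on the jobs channel: for each set of domains configured under
// acme.domains, a certificate is requested from the CA only if the stored
// account does not already hold one for it.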
|
||||
a.jobs.In() <- func() {
|
||||
log.Info("Retrieving ACME certificates...")
|
||||
log.Infof("Retrieving ACME certificates...")
|
||||
for _, domain := range a.Domains {
|
||||
// check if cert isn't already loaded
|
||||
account := a.store.Get().(*Account)
|
||||
@@ -450,33 +362,50 @@ func (a *ACME) retrieveCertificates() {
|
||||
}
|
||||
}
|
||||
}
|
||||
log.Info("Retrieved ACME certificates")
|
||||
log.Infof("Retrieved ACME certificates")
|
||||
}
|
||||
}
|
||||
|
||||
func (a *ACME) renewCertificates() {
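// Also queued on the jobs channel: walk the certificates stored on the
// account, renew those whose expiry falls inside the renewal window (see
// needRenew in account.go), and commit the result back to the store.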
|
||||
a.jobs.In() <- func() {
|
||||
log.Info("Testing certificate renew...")
|
||||
log.Debugf("Testing certificate renew...")
|
||||
account := a.store.Get().(*Account)
|
||||
for _, certificateResource := range account.DomainsCertificate.Certs {
|
||||
if certificateResource.needRenew() {
|
||||
log.Infof("Renewing certificate from LE : %+v", certificateResource.Domains)
|
||||
renewedACMECert, err := a.renewACMECertificate(certificateResource)
|
||||
log.Debugf("Renewing certificate %+v", certificateResource.Domains)
|
||||
renewedCert, err := a.client.RenewCertificate(acme.CertificateResource{
|
||||
Domain: certificateResource.Certificate.Domain,
|
||||
CertURL: certificateResource.Certificate.CertURL,
|
||||
CertStableURL: certificateResource.Certificate.CertStableURL,
|
||||
PrivateKey: certificateResource.Certificate.PrivateKey,
|
||||
Certificate: certificateResource.Certificate.Certificate,
|
||||
}, true)
|
||||
if err != nil {
|
||||
log.Errorf("Error renewing certificate from LE: %v", err)
|
||||
log.Errorf("Error renewing certificate: %v", err)
|
||||
continue
|
||||
}
|
||||
operation := func() error {
|
||||
return a.storeRenewedCertificate(account, certificateResource, renewedACMECert)
|
||||
log.Debugf("Renewed certificate %+v", certificateResource.Domains)
|
||||
renewedACMECert := &Certificate{
|
||||
Domain: renewedCert.Domain,
|
||||
CertURL: renewedCert.CertURL,
|
||||
CertStableURL: renewedCert.CertStableURL,
|
||||
PrivateKey: renewedCert.PrivateKey,
|
||||
Certificate: renewedCert.Certificate,
|
||||
}
|
||||
notify := func(err error, time time.Duration) {
|
||||
log.Warnf("Renewed certificate storage error: %v, retrying in %s", err, time)
|
||||
}
|
||||
ebo := backoff.NewExponentialBackOff()
|
||||
ebo.MaxElapsedTime = 60 * time.Second
|
||||
err = backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
|
||||
transaction, object, err := a.store.Begin()
|
||||
if err != nil {
|
||||
log.Errorf("Datastore cannot sync: %v", err)
|
||||
log.Errorf("Error renewing certificate: %v", err)
|
||||
continue
|
||||
}
|
||||
account = object.(*Account)
|
||||
err = account.DomainsCertificate.renewCertificates(renewedACMECert, certificateResource.Domains)
|
||||
if err != nil {
|
||||
log.Errorf("Error renewing certificate: %v", err)
|
||||
continue
|
||||
}
|
||||
|
||||
if err = transaction.Commit(account); err != nil {
|
||||
log.Errorf("Error Saving ACME account %+v: %s", account, err.Error())
|
||||
continue
|
||||
}
|
||||
}
|
||||
@@ -484,72 +413,8 @@ func (a *ACME) renewCertificates() {
|
||||
}
|
||||
}
|
||||
|
||||
func (a *ACME) renewACMECertificate(certificateResource *DomainsCertificate) (*Certificate, error) {
|
||||
renewedCert, err := a.client.RenewCertificate(acme.CertificateResource{
|
||||
Domain: certificateResource.Certificate.Domain,
|
||||
CertURL: certificateResource.Certificate.CertURL,
|
||||
CertStableURL: certificateResource.Certificate.CertStableURL,
|
||||
PrivateKey: certificateResource.Certificate.PrivateKey,
|
||||
Certificate: certificateResource.Certificate.Certificate,
|
||||
}, true, OSCPMustStaple)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
log.Infof("Renewed certificate from LE: %+v", certificateResource.Domains)
|
||||
return &Certificate{
|
||||
Domain: renewedCert.Domain,
|
||||
CertURL: renewedCert.CertURL,
|
||||
CertStableURL: renewedCert.CertStableURL,
|
||||
PrivateKey: renewedCert.PrivateKey,
|
||||
Certificate: renewedCert.Certificate,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (a *ACME) storeRenewedCertificate(account *Account, certificateResource *DomainsCertificate, renewedACMECert *Certificate) error {
|
||||
transaction, object, err := a.store.Begin()
|
||||
if err != nil {
|
||||
return fmt.Errorf("error during transaction initialization for renewing certificate: %v", err)
|
||||
}
|
||||
|
||||
log.Infof("Renewing certificate in data store : %+v ", certificateResource.Domains)
|
||||
account = object.(*Account)
|
||||
err = account.DomainsCertificate.renewCertificates(renewedACMECert, certificateResource.Domains)
|
||||
if err != nil {
|
||||
return fmt.Errorf("error renewing certificate in datastore: %v ", err)
|
||||
}
|
||||
|
||||
log.Infof("Commit certificate renewed in data store : %+v", certificateResource.Domains)
|
||||
if err = transaction.Commit(account); err != nil {
|
||||
return fmt.Errorf("error saving ACME account %+v: %v", account, err)
|
||||
}
|
||||
|
||||
oldAccount := a.store.Get().(*Account)
|
||||
for _, oldCertificateResource := range oldAccount.DomainsCertificate.Certs {
|
||||
if oldCertificateResource.Domains.Main == certificateResource.Domains.Main && strings.Join(oldCertificateResource.Domains.SANs, ",") == strings.Join(certificateResource.Domains.SANs, ",") && certificateResource.Certificate != renewedACMECert {
|
||||
return fmt.Errorf("renewed certificate not stored: %+v", certificateResource.Domains)
|
||||
}
|
||||
}
|
||||
|
||||
log.Infof("Certificate successfully renewed in data store: %+v", certificateResource.Domains)
|
||||
return nil
|
||||
}
|
||||
|
||||
func dnsOverrideDelay(delay flaeg.Duration) error {
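// When a positive delay is configured, skip the normal DNS propagation check
// and simply sleep for that long before assuming the challenge record is
// visible; a negative delay is rejected as a configuration error.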
|
||||
var err error
|
||||
if delay > 0 {
|
||||
log.Debugf("Delaying %d rather than validating DNS propagation", delay)
|
||||
acme.PreCheckDNS = func(_, _ string) (bool, error) {
|
||||
time.Sleep(time.Duration(delay))
|
||||
return true, nil
|
||||
}
|
||||
} else if delay < 0 {
|
||||
err = fmt.Errorf("invalid negative DelayBeforeCheck: %d", delay)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
func (a *ACME) buildACMEClient(account *Account) (*acme.Client, error) {
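// Builds the lego ACME client against the configured CA server (defaulting to
// the Let's Encrypt ACME v01 production directory). The newer variant then
// selects the DNS-01, HTTP-01 or TLS-SNI-01 challenge provider based on the
// configuration, while the older variant always uses TLS-SNI-01.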
|
||||
log.Debug("Building ACME client...")
|
||||
log.Debugf("Building ACME client...")
|
||||
caServer := "https://acme-v01.api.letsencrypt.org/directory"
|
||||
if len(a.CAServer) > 0 {
|
||||
caServer = a.CAServer
|
||||
@@ -558,32 +423,8 @@ func (a *ACME) buildACMEClient(account *Account) (*acme.Client, error) {
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if a.DNSChallenge != nil && len(a.DNSChallenge.Provider) > 0 {
|
||||
log.Debugf("Using DNS Challenge provider: %s", a.DNSChallenge.Provider)
|
||||
|
||||
err = dnsOverrideDelay(a.DNSChallenge.DelayBeforeCheck)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var provider acme.ChallengeProvider
|
||||
provider, err = dns.NewDNSChallengeProviderByName(a.DNSChallenge.Provider)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
client.ExcludeChallenges([]acme.Challenge{acme.HTTP01, acme.TLSSNI01})
|
||||
err = client.SetChallengeProvider(acme.DNS01, provider)
|
||||
} else if a.HTTPChallenge != nil && len(a.HTTPChallenge.EntryPoint) > 0 {
|
||||
client.ExcludeChallenges([]acme.Challenge{acme.DNS01, acme.TLSSNI01})
|
||||
a.challengeHTTPProvider = &challengeHTTPProvider{store: a.store}
|
||||
err = client.SetChallengeProvider(acme.HTTP01, a.challengeHTTPProvider)
|
||||
} else {
|
||||
client.ExcludeChallenges([]acme.Challenge{acme.HTTP01, acme.DNS01})
|
||||
err = client.SetChallengeProvider(acme.TLSSNI01, a.challengeTLSProvider)
|
||||
}
|
||||
|
||||
client.ExcludeChallenges([]acme.Challenge{acme.HTTP01, acme.DNS01})
|
||||
err = client.SetChallengeProvider(acme.TLSSNI01, a.challengeProvider)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@@ -620,18 +461,11 @@ func (a *ACME) loadCertificateOnDemand(clientHello *tls.ClientHelloInfo) (*tls.C
|
||||
// LoadCertificateForDomains loads certificates from ACME for given domains
|
||||
func (a *ACME) LoadCertificateForDomains(domains []string) {
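// Runs asynchronously on the jobs channel: it gives up if the ACME client has
// not been built in time, skips domains already covered by a known certificate
// (the older variant instead checks whether the stored account already has an
// entry for this exact domain set), then obtains a certificate for the
// remaining domains and commits it to the store.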
|
||||
a.jobs.In() <- func() {
|
||||
log.Debugf("LoadCertificateForDomains %v...", domains)
|
||||
|
||||
if len(domains) == 0 {
|
||||
// no domain
|
||||
return
|
||||
}
|
||||
|
||||
log.Debugf("LoadCertificateForDomains %s...", domains)
|
||||
domains = fun.Map(types.CanonicalDomain, domains).([]string)
|
||||
|
||||
operation := func() error {
|
||||
if a.client == nil {
|
||||
return errors.New("ACME client still not built")
|
||||
return fmt.Errorf("ACME client still not built")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
@@ -646,34 +480,36 @@ func (a *ACME) LoadCertificateForDomains(domains []string) {
|
||||
return
|
||||
}
|
||||
account := a.store.Get().(*Account)
|
||||
var domain Domain
|
||||
if len(domains) == 0 {
|
||||
// no domain
|
||||
return
|
||||
|
||||
// Check provided certificates
|
||||
uncheckedDomains := a.getUncheckedDomains(domains, account)
|
||||
if len(uncheckedDomains) == 0 {
|
||||
} else if len(domains) > 1 {
|
||||
domain = Domain{Main: domains[0], SANs: domains[1:]}
|
||||
} else {
|
||||
domain = Domain{Main: domains[0]}
|
||||
}
|
||||
if _, exists := account.DomainsCertificate.exists(domain); exists {
|
||||
// domain already exists
|
||||
return
|
||||
}
|
||||
certificate, err := a.getDomainsCertificates(uncheckedDomains)
|
||||
certificate, err := a.getDomainsCertificates(domains)
|
||||
if err != nil {
|
||||
log.Errorf("Error getting ACME certificates %+v : %v", uncheckedDomains, err)
|
||||
log.Errorf("Error getting ACME certificates %+v : %v", domains, err)
|
||||
return
|
||||
}
|
||||
log.Debugf("Got certificate for domains %+v", uncheckedDomains)
|
||||
log.Debugf("Got certificate for domains %+v", domains)
|
||||
transaction, object, err := a.store.Begin()
|
||||
|
||||
if err != nil {
|
||||
log.Errorf("Error creating transaction %+v : %v", uncheckedDomains, err)
|
||||
log.Errorf("Error creating transaction %+v : %v", domains, err)
|
||||
return
|
||||
}
|
||||
var domain Domain
|
||||
if len(uncheckedDomains) > 1 {
|
||||
domain = Domain{Main: uncheckedDomains[0], SANs: uncheckedDomains[1:]}
|
||||
} else {
|
||||
domain = Domain{Main: uncheckedDomains[0]}
|
||||
}
|
||||
account = object.(*Account)
|
||||
_, err = account.DomainsCertificate.addCertificateForDomains(certificate, domain)
|
||||
if err != nil {
|
||||
log.Errorf("Error adding ACME certificates %+v : %v", uncheckedDomains, err)
|
||||
log.Errorf("Error adding ACME certificates %+v : %v", domains, err)
|
||||
return
|
||||
}
|
||||
if err = transaction.Commit(account); err != nil {
|
||||
@@ -683,105 +519,14 @@ func (a *ACME) LoadCertificateForDomains(domains []string) {
|
||||
}
|
||||
}
|
||||
|
||||
// getProvidedCertificate looks for a certificate covering the given domains
// among the statically and dynamically provided certificates.
|
||||
func (a *ACME) getProvidedCertificate(domains string) *tls.Certificate {
|
||||
log.Debugf("Looking for provided certificate to validate %s...", domains)
|
||||
cert := searchProvidedCertificateForDomains(domains, a.TLSConfig.NameToCertificate)
|
||||
if cert == nil && a.dynamicCerts != nil && a.dynamicCerts.Get() != nil {
|
||||
cert = searchProvidedCertificateForDomains(domains, a.dynamicCerts.Get().(*traefikTls.DomainsCertificates).Get().(map[string]*tls.Certificate))
|
||||
}
|
||||
if cert == nil {
|
||||
log.Debugf("No provided certificate found for domains %s, get ACME certificate.", domains)
|
||||
}
|
||||
return cert
|
||||
}
|
||||
|
||||
func searchProvidedCertificateForDomains(domain string, certs map[string]*tls.Certificate) *tls.Certificate {
|
||||
// Use regex to test for provided certs that might have been added into TLSConfig
|
||||
for certDomains := range certs {
|
||||
domainCheck := false
|
||||
for _, certDomain := range strings.Split(certDomains, ",") {
|
||||
selector := "^" + strings.Replace(certDomain, "*.", "[^\\.]*\\.?", -1) + "$"
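// With certDomain "*.example.com" this selector becomes
// "^[^\.]*\.?example.com$" (the remaining dots are left unescaped, so they
// match any character): it accepts "example.com" and single-label subdomains
// such as "foo.example.com", but not "a.b.example.com".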
|
||||
domainCheck, _ = regexp.MatchString(selector, domain)
|
||||
if domainCheck {
|
||||
break
|
||||
}
|
||||
}
|
||||
if domainCheck {
|
||||
log.Debugf("Domain %q checked by provided certificate %q", domain, certDomains)
|
||||
return certs[certDomains]
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// getUncheckedDomains returns the domains that are not yet covered by any
// known certificate: statically provided, dynamically provided, or already
// obtained through ACME.
|
||||
func (a *ACME) getUncheckedDomains(domains []string, account *Account) []string {
|
||||
log.Debugf("Looking for provided certificate to validate %s...", domains)
|
||||
allCerts := make(map[string]*tls.Certificate)
|
||||
|
||||
// Get static certificates
|
||||
for domains, certificate := range a.TLSConfig.NameToCertificate {
|
||||
allCerts[domains] = certificate
|
||||
}
|
||||
|
||||
// Get dynamic certificates
|
||||
if a.dynamicCerts != nil && a.dynamicCerts.Get() != nil {
|
||||
for domains, certificate := range a.dynamicCerts.Get().(*traefikTls.DomainsCertificates).Get().(map[string]*tls.Certificate) {
|
||||
allCerts[domains] = certificate
|
||||
}
|
||||
}
|
||||
|
||||
// Get ACME certificates
|
||||
if account != nil {
|
||||
for domains, certificate := range account.DomainsCertificate.toDomainsMap() {
|
||||
allCerts[domains] = certificate
|
||||
}
|
||||
}
|
||||
|
||||
return searchUncheckedDomains(domains, allCerts)
|
||||
}
|
||||
|
||||
func searchUncheckedDomains(domains []string, certs map[string]*tls.Certificate) []string {
|
||||
uncheckedDomains := []string{}
|
||||
for _, domainToCheck := range domains {
|
||||
domainCheck := false
|
||||
for certDomains := range certs {
|
||||
domainCheck = false
|
||||
for _, certDomain := range strings.Split(certDomains, ",") {
|
||||
// Use regex to test for provided certs that might have been added into TLSConfig
|
||||
selector := "^" + strings.Replace(certDomain, "*.", "[^\\.]*\\.?", -1) + "$"
|
||||
domainCheck, _ = regexp.MatchString(selector, domainToCheck)
|
||||
if domainCheck {
|
||||
break
|
||||
}
|
||||
}
|
||||
if domainCheck {
|
||||
break
|
||||
}
|
||||
}
|
||||
if !domainCheck {
|
||||
uncheckedDomains = append(uncheckedDomains, domainToCheck)
|
||||
}
|
||||
}
|
||||
if len(uncheckedDomains) == 0 {
|
||||
log.Debugf("No ACME certificate to generate for domains %q.", domains)
|
||||
} else {
|
||||
log.Debugf("Domains %q need ACME certificates generation for domains %q.", domains, strings.Join(uncheckedDomains, ","))
|
||||
}
|
||||
return uncheckedDomains
|
||||
}
|
||||
|
||||
func (a *ACME) getDomainsCertificates(domains []string) (*Certificate, error) {
|
||||
domains = fun.Map(types.CanonicalDomain, domains).([]string)
|
||||
log.Debugf("Loading ACME certificates %s...", domains)
|
||||
bundle := true
|
||||
certificate, failures := a.client.ObtainCertificate(domains, bundle, nil, OSCPMustStaple)
|
||||
certificate, failures := a.client.ObtainCertificate(domains, bundle, nil)
|
||||
if len(failures) > 0 {
|
||||
log.Error(failures)
|
||||
return nil, fmt.Errorf("cannot obtain certificates %+v", failures)
|
||||
return nil, fmt.Errorf("Cannot obtain certificates %s+v", failures)
|
||||
}
|
||||
log.Debugf("Loaded ACME certificates %s", domains)
|
||||
return &Certificate{
|
||||
|
||||
@@ -1,43 +0,0 @@
|
||||
{
|
||||
"Email": "test@traefik.io",
|
||||
"Registration": {
|
||||
"body": {
|
||||
"resource": "reg",
|
||||
"id": 3,
|
||||
"key": {
|
||||
"kty": "RSA",
|
||||
"n": "y5a71suIqvEtovDmDVQ3SSNagk5IVCFI_TvqWpEXSrdbcDE2C-PTEtEUJuLkYwygcpiWYbPmXgdS628vQCw5Uo4DeDyHiuysJOWBLaWow3p9goOdhnPbGBq0liIR9xXyRoctdipVk8UiO9scWsu4jMBM3sMr7_yBWPfYYiLEQmZGFO3iE7Oqr55h_kncHIj5lUQY1j_jkftqxlxUB5_0quyJ7l915j5QY--eY7h4GEhRvx0TlUpi-CnRtRblGeDDDilXZD6bQN2962WdKecsmRaYx-ttLz6jCPXz2VDJRWNcIS501ne2Zh3hzw_DS6IRd2GIia1Wg4sisi9epC9sumXPHi6xzR6-_i_nsFjdtTkUcV8HmorOYoc820KQVZaLScxa8e7-ixpOd6mr6AIbEf7dBAkb9f_iK3GwpqKD8yNcaj1EQgNSyJSjnKSulXI_GwkGnuXe00Qpb1a8ha5Z8yWg7XmZZnJyAZrmK60RfwRNQ1rO5ioerNUBJ2KYTYNzVjBdob9Ug6Cjh4bEKNNjqcbjQ50_Z97Vw40xzpDQ_fYllc6n92eSuv6olxFJTmK7EhHuanDzITngaqei3zL9RwQ7P-1jfEZ03qmGrQYYqXcsS46PQ8cE-frzY2mKp16pRNCG7-03gKVGV0JHyW1aYbevNUk7OumCAXhC2YOigBk",
|
||||
"e": "AQAB"
|
||||
},
|
||||
"contact": [
|
||||
"mailto:test@traefik.io"
|
||||
],
|
||||
"agreement": "http://boulder:4000/terms/v1"
|
||||
},
|
||||
"uri": "http://127.0.0.1:4000/acme/reg/3",
|
||||
"new_authzr_uri": "http://127.0.0.1:4000/acme/new-authz",
|
||||
"terms_of_service": "http://boulder:4000/terms/v1"
|
||||
},
|
||||
"PrivateKey": "MIIJJwIBAAKCAgEAy5a71suIqvEtovDmDVQ3SSNagk5IVCFI/TvqWpEXSrdbcDE2C+PTEtEUJuLkYwygcpiWYbPmXgdS628vQCw5Uo4DeDyHiuysJOWBLaWow3p9goOdhnPbGBq0liIR9xXyRoctdipVk8UiO9scWsu4jMBM3sMr7/yBWPfYYiLEQmZGFO3iE7Oqr55h/kncHIj5lUQY1j/jkftqxlxUB5/0quyJ7l915j5QY++eY7h4GEhRvx0TlUpi+CnRtRblGeDDDilXZD6bQN2962WdKecsmRaYx+ttLz6jCPXz2VDJRWNcIS501ne2Zh3hzw/DS6IRd2GIia1Wg4sisi9epC9sumXPHi6xzR6+/i/nsFjdtTkUcV8HmorOYoc820KQVZaLScxa8e7+ixpOd6mr6AIbEf7dBAkb9f/iK3GwpqKD8yNcaj1EQgNSyJSjnKSulXI/GwkGnuXe00Qpb1a8ha5Z8yWg7XmZZnJyAZrmK60RfwRNQ1rO5ioerNUBJ2KYTYNzVjBdob9Ug6Cjh4bEKNNjqcbjQ50/Z97Vw40xzpDQ/fYllc6n92eSuv6olxFJTmK7EhHuanDzITngaqei3zL9RwQ7P+1jfEZ03qmGrQYYqXcsS46PQ8cE+frzY2mKp16pRNCG7+03gKVGV0JHyW1aYbevNUk7OumCAXhC2YOigBkCAwEAAQKCAgA8XW1EuwTC6tAFSDhuK1JZNUpY6K05hMUHkQRj5jFpzgQmt/C2hc7H/YZkIVJmrA/G6sdsINNlffZwKH9yH6q/d6w/snLeFl7UcdhjmIL5sxAT6sKCY0fLVd/FxERfZvp3Pw2Tw+mr7v+/j7BQm6cU1M/2HRiiB9SydIqMTpKyvXB6NC6ceOFbQTL9GxlQvKyEPbS/kiH/3vRB7I5d1GfPZmNfcp6ark9X0my8VK4HRSo36H8t/OhrfLrZXvh/O82aHVf0OTv/d8AgU/jNu+XVXoXegUfWglQFDChJf1KuaE+g5w1tqgFDNgkGRD475soXA6xgZi0Iw/B9tN3zALzT4IiAW1q72feeTgKOMA2zGtKXxQZZSOV+DuWFZNz/tT7XqGQThqxM09CHv2WGOe80vobtegXYTUt90hysrqIZmBW5XYdzQlJh1KWTtfCaTrWd47kbGvhkEPc8aA3Ji4/AqfkVXiqwaLu+MSlgzPpRj7U7UAIDqnpZjgttgLp74Ujnk3bTaUzdyyNqYDBG3IFGr/Sv+2GQDAUn/PYRJKWr0BteqOzX9zvW3zY8g9CYVXfK/AW3RMWLV8ly6vH/gWqa9gEuzRNRlzjUU6/HCVbUx3UT8RMWH2TQ0uuQZr5JX1iTwjeeT0dEIly1NnRQC92wcrE4UUTBEF3krGVpDBf0AQKCAQEA4jB8w+2fwzbF8X+gCODcY7sTeJRunzGy+jbdaLkcThuylga+6W3ZgWx0BD30ql9K2mouCVu86fCTnBeXXEC3QoTdgw/EzJ83+4JU3QSDdzs9Ta9vLHyvrpUkQfZ8UZpeLLmFsmsBMbBbnfw0S1TzXDsgrAc+G4tia8nO/Iqu75kEMGzmHQAvmN3iSqc1aTS4qumbB19g+v+csq9NEht4F9jt39KotG+OD3MxCxtMu7vxAkJRjFFcgcbb2Rtqe/kQEKA1vLEAJg27lV4k8XibCSerVUR6IzT8WZHrNiXmpRguTLl2k8uFUdCOOx6aLGyRVJ6+8SgIsMR540vnxwQzEQKCAQEA5mu2wtWT19mvXopC3easPsXIPzc5oaRkqfWZYT1KHcVQ7NIXsE3vCjcf/3igZ8l/FVQ4G4fpk/GoTqlpV5Aq/JHCpVOR2O69uB+W4kWgliejpHvF9gszzAYnC8lIXqDbWiinBhmm3ii8sDGAoBaSDw5NMUq3mI+nd8zZ+jx1bLBczDafmQ0YKr8k0YaROxIgoBgDOQDdSqG387lwzpza2DKI5Al3HfS42zjT0RmBahPiuT2aEoUZmIYuvFY0fEjfkpbdvLyexHfZCILRUGlG1nAwASFg86lp+mFSBJ3E3cvbP0CpbFGxon5u4Ao3/7htoOh6huh7MQ91h41fv1hsiQKCAQAe7WRR4e7jYVzlbX7zV9Oqq0y5QwpxJ/mB7viNNiphn7Xmf5uhDU0dPjgK0HHgzdDNVpFe5DVLg4KbaDpg+dRU+xfSsNhG5kpgUGzMH67eIbJ7Kc64tX/MDkZ74nkTK1lPIjrer3TlV2jfjDmWR1JTPR51hzP9ziwx8tEjhM7woeqJuIoqUvkvHL+xV3WdIgFSFUkGVAtNpp/FauTN4gWktRupbAN3UH2LLUP6ccwnK0aD+Y9u8T0F3av33qDLvL1umIlgeI89pMkOXmYMwmHoeY0axpcwszECCkqwB7SmxEyoXv+Qq9ZZ3ntkKAYKpvmkKWSQUtoFWYgVBS727mMRAoIBABLdwusU/bPwuPEutObiWjwRiaHTbb6UbUGVQGe70vO5EjUxxorC9s2JUe9i+w9EakleyfFHIZLheHxoVp26yio/7QYIX6q5cYM/4uTH+qwQts9i6wSISkdsQYovguNsnEk3huVy+Dy8bSaoBvYUowTkkOF2Uq4FJRskBLz+ckbh8dcuqcaoUdA+Mk+NixqhE1bIYIssTPItZ5hnGJtyMGD/UkIJnF0ximk4r+8w/W2oDypHpvPZPg1E/1KgZE/Az7166NDpSL6haX3O6ECDPi+Uo/mTuBJ7TpgXm9WQ7WuTo3H8Y2LhFYBOhdmGPKuNeDxyjIW7R0rvDxp4MtzB6rECggEAJIl7/qp1lxUQPQJRTsEYBkOtdRw0IGG1Rcj0emhHaBN05c9opCy+Osb7mVeU5ZiULe5kD02phL+36pEumprz7QzN46Y5pZc8AQ2W/QkeL4Wo9U9QzczvQQzc1EqrBkzvQTZtBhn4DRzz0IuTn1beVyHtBZeNpBFgMQFv9VYQuUNwFoTOkkQrBRnYbXH6KEnhF3c/1Hzi4KHVdHdfZ3LH7KFQJ34xio0q2tWQSQYeybmwOXdd9sxpz/Y4KBS9fqm7UrwnPK8yuOc05HLEaws+1iam5YyJprlQo3mGKe0wRztwn44HDeQr70LlFm0lzigVAv0hSiWO1Q5hJL7nDu8m/Q==",
|
||||
"DomainsCertificate": {
|
||||
"Certs": [
|
||||
{
|
||||
"Domains": {
|
||||
"Main": "local1.com",
|
||||
"SANs": [
|
||||
"test1.local1.com",
|
||||
"test2.local1.com"
|
||||
]
|
||||
},
|
||||
"Certificate": {
|
||||
"Domain": "local1.com",
|
||||
"CertURL": "http://127.0.0.1:4000/acme/cert/ffc4f3f14def9ee6ec6a0522b5c0baa3379d",
|
||||
"CertStableURL": "",
|
||||
"PrivateKey": "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlKS1FJQkFBS0NBZ0VBdVNoTTR4enF6cE5YcFNaNnAvZnQrRmt5VmgyK1BSZXJUelV0OERRSng2UkVjQS9FCnN2RnNIVmNOSkZMS2twYTNlOEd3SUZBakJQNnJPK3hoR1JjWlJrdENON1gyOW5LZFhGbHZkYzJxd0hyTFF5WWkKTTB3ODhTck41VERiNi96TWU2dTB0dERiYWtDbDd6ZEJKUXJ6a1h5ZU1MeVkzTUs3aVkrMHpwL2JqMVhvbk5DdQpaQStkZ3hsMVNrV01DVUYvQk9HNWFyT1hwb0x4S0dQWGdzV3hOTVNLVmJKSHczL3ZqNTViZU92Um5lT3BNWlhvCmMwOWpZT3VBakNka1Z5czBSWHJLNWNCRDRMbVRXdnN4MFdTK2VMVHlGTTdQTHVZM3lEWkNNWEhjVmlqRHhnbFMKYjB1ZVRQcGFUWEQwYkxqZ0RNOUVEdE15ZEJzMUNPWlpPWG9ickN5Q2I1eWxTOFdVd1NzVXM1UldxZnlVbnAvcgpSNGx2c2RZOWRVZjRPdkNMVnJvWWk5NWFGc1Zxa0xLOExuL0Eyc3kxYWlDTnR4RmpKOXRXbWU0V0NhdzRoU0YvCkR4NWVNNWNYR2JSYXduVlZJQlZXeHhzNTBPMFJlUWRvbXBQZEFNS1RDWk9SRmxYaDdOWTdxQVdWRGtpdzhyam8Kekd3Ni9XdjlOR3hTNTliKzc0YVAxcjBxOTZ2RS9Rdi8zTCtjbjhiN0lBLytPYmFKdzhIT3RGbXc4RjBxQkN3MAprYWVVSloxb1JueGFYQUo4RHhHREpFOVdNUzh0QmJtVm16YkxoRkMzeDdVc0xGeTBrSzh1SFBFT3dQb2NKNUFUCkE1UHBvclNEMmFleHA0Z3VqYVp5c1JManpmY0dnaTdva0JFNlZVNWVqRE1iYS9lNERQNEJQUVg5VmtVQ0F3RUEKQVFLQ0FnQmZjMWdYcUp1ZmZMT3REcVlpbXh4UmIrSVVKT2NpWldaSndmZDVvY244NGtEcHFDZFZ2RUZvNnF4NgpzamQ5MURhb2xOUHdCSC9aSGxRMTR3aTNQNEluQzdzS0wwTXVEeTN5SXFUa0RPOWVwSzdPWWdVMWZyTFgvS0lCCjZlc2x2Ny9HYldFTzhhSjdKdktqM0U4NEFtcEg4UDgzenJIYTlJUnJTT3NEcmNNcEpEZHpSOXp1OW1IVDZMYmYKWC9UdC9KYTNkSW42YUxUZ0FSYkRKSjAvN0J3TFFOcXpqT0dUOWdzUWRhbGdMK2x5eEo4L1ViRndhRmVwNmgzdApvbzBHcHQ0ZWgwdTdueDhlNVd3Q2RnWmJsTnpnS3grMC9Gd3dLRHhQZVRFc2ZpOEJONmlkR2NjbVdzd3prTWdtCnJmbERaeGNSWTNRSlZIVHBCL0dTTWZXRFBPQ3dRdGltQk1WN3kxM2hPMTdPWXpSNDBMZnpUalJBbmtna2V2eWYKcFowb3dLR3o4QS9haHhRWWJmYVQ5VEhXV0wrYUpYeUhFanBKckp5aTg3UExVbzhsOFVydU56MDRWNXpLOFJPbgo2cG9EWmVtbm1EYWRlU09pK3hZRWlGT1NwSXNWbzlpcm9jUGFKN2YzYWpiNUU4RHpuN1o1MmhzL2R6akpLcFZJCm5mVDFkUU9SZEowSXRUNlRlQ2RTL0dpS25IS1RtNjR2T21IbmlJcm8rUGRhUmFjV0IrTUJ0VytRd0cyUStyRGkKc3g4NlpQbHRpTVpLMDZ5TVlyVHZUdGk2aFVGaUY5cWh4b3RGazdNQkNrZlIwYUVhaUREQUpKNm1jb1lpRUQ2QgpBVGJhVmpVaGNaUiswYkRST25PN0ozRk5rZmx3K2dMaVhvcXFRRW9pU2ZWb2h5SWY3UUtDQVFFQThjYTM5K0g4CjN3L2Qrcm0yUGNhM0RMQnBYaWU4Z3ZYcGpjazVYSkpvSGVmbnJjZWQrcFpXaTZEYncwYld0MEdtYkxmVjJNSlAKV2I1aTZzSXhmdkN3YlFqbHY0UnExMVA5ZEswT3poMnVpKzZ6cXVBMG5YTVcrN0lJS0cvdDhmS2NJZGRRNnRGcwpFclFVTFBDak56ODA2cHBiSlhPRmVvMW1BK293TGhHNlA3dDhCdlZHSk1NaTNxejNlSUNuVVE2eDNFY01ITXNuClhrM21DUzI1WUZaNk96cytFK254cGVraTAzZmQwblp3UE1jdElHZys1c3hleE9zREsrTHlvb2FqQnc5N0oyUzIKcUNNWXFtT0tLcmxEQ3Y1WmQ4dlZLN3hXVmpKRVhGTTNMZ2pieHBRcCtuVXNVVWxwS01LOVlGS0lRREl0RU9aMApWcWExTXJaOElzN1l5d0tDQVFFQXhBemZIa2pIVGlvTHdZbG5EcEk0MWlOTDh5Y0ZBallrTC94dWhPU2tlVkE4CjdRWDZPZUpDekR3Z0FUYXVqOWR6Y0wwby9yTndWV0xWcnQ3OXk3YnJvVDdFREZKWVNTY25GRXNMTlVWSXRncGkKckNSUXJTL1F2TkVGTmE5K0pRc1dmYkdBNHdIUTFaSjI4MFp1cWMvNlEyUi9kZVh3cUZBQVBHN2NIcEhHWlR6ZQoyRmFRUHFLRkV4WlEyZkpvRys0SVBRNHVQVERybXlGMmVUWXk2T3BaaDBHbWJRYlVTa1dFWDlQRmF1cHJIWVdGCk8wK25DaVVPNVRaMFZoaGR2dUNKMWdPclZHYzhBUlJtUVZ1aUNEWTZCaGlvVTU0ZmZsSXlDTXZ5a3MwcmRXZ3MKWVJ2TmN4TXNlRGJpTDRKSURkMHhiN1d4VUdmVjRVNHZPMks5Vms1N0x3S0NBUUVBMkd1eE1jcXd1RnRUc0tPYwpaaUFDcXZFZTRKRmhSVGtySHlnSW1MelZSaS9ZU3M1c3MycnZmWDA0T3N5bVZ0UUZUVHdoeUMzbktjWXFkVW52ClZGblBFMHJyblV2Qzk0elBUQ205SHZPaTBzK1JORndOdlFMUWgrME5NR1ZBOFZyaU44aXRQZ1RJWU5XaFdianQKNFA1TE45V0QwVHBmT1J4cFBRZmNxT0JsZjdjcmhtNzNvdUNwemZtMmE3OStCaWpKUFF5NzR1cFhDeXRmeHNlUApNSlU0Uk56NjdJaDFMclpKM2xGbDFvYitZT2xKazhDOHpZd1RLT0hWck9zeGxobyt4SXN2Q2t3MDFMelZ6Mi9hCnRmT3Y5NTlHSnQzbXE0ZWpJUFZPQy9iUlpmdTMvMEdSY2dpQTZ5SnpaM0VxWTVaOU1EbTU3VzdjcE5RRlRxZmEKNXEyUmtRS0NBUUErNGhZSzQ3TXg2aUNkTWxKaEJSdS82OUJucktOWm96NFdPalRFNFlXejk3MmpGU0Mrd2tsRQpzeUJjNDBvNGp4WFRHb2wwc04rZU03WndnY3dNTko3OXVHRXZ4cFhVMlA4YTdqc3BHaEVKZXVsTlo5U015R0orCnZkaWE4TEJZZDJiK2FCbjhOay9pd1Rq
d0xTNC92NXI1Vk5uaFdpRElDK2tYZVVPWGRwQ1pWbDN3TEV2V0cxRHQKMzJHTmxzZzM5VENsVE5BZUJudjc1VTdYOEQrQ0gvRVpoa0E0aGxFL2hXN0JRZTczclRzd1creHhLc3BjWWFpVwpjdEg3NzVMYUw3Rm1lUVRTYk01OVZpcTZXZ2J0OVY3Rko5R09DSkQzZHF2ZjBITDlEVndjSzQ3WWt3OWlFc3RYCnY5cnEvREhhYUpGNzBGNlFlTTNNbDhSa212WTZJYkEzQW9JQkFRRGt6RmZLeG9HQ3dWUDlua3k4NmFQSjFvd2kKc2FDZEx6RjRWTENRZzkrUXJITzEyY0p5MFFQUnJ2cUQyMGp1cDFlOWJhWVZzbkdYc1FZTFg2NVR6UzJSSCtlSAp6S0NPTTdnMVE3djMxNWpjMDMvN1lQck4rb3RrV0VBOUkyaDZjUE1vY3c0aERTNk02OFlxQVlKTS9RclVhenZhCnhBTFJaZEVkQW1xWDA4VHhuY1hRUEVxYkk0ZnlSZ2pVM1BYR3RRaFFFbERpR2kwbThjQTJNTXdsR1RmbTdOSXgKaENjZ2ZkL296TEp2VUhiMkxLRi82cXEySmJVRHlOMkVoK0xSZUJjdnp6Y1grZE5MdGQxY0Uvcm1SM2hMbWxmNgo3KzRpTVMxK0t1eWV3VlJVUEE1c1F1aUYyVUVoeEs1MUpZK1FpOG9HbERKdGRrOXB3QlZNN1F0WW9KVEwKLS0tLS1FTkQgUlNBIFBSSVZBVEUgS0VZLS0tLS0K",
|
||||
"Certificate": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZvakNDQklxZ0F3SUJBZ0lUQVAvRTgvRk43NTdtN0dvRklyWEF1cU0zblRBTkJna3Foa2lHOXcwQkFRc0YKQURBZk1SMHdHd1lEVlFRRERCUm9NbkJ3ZVNCb01tTnJaWElnWm1GclpTQkRRVEFlRncweE9EQXhNVFV3TnpJNQpNREJhRncweE9EQTBNVFV3TnpJNU1EQmFNRVF4RXpBUkJnTlZCQU1UQ214dlkyRnNNUzVqYjIweExUQXJCZ05WCkJBVVRKR1ptWXpSbU0yWXhOR1JsWmpsbFpUWmxZelpoTURVeU1tSTFZekJpWVdFek16YzVaRENDQWlJd0RRWUoKS29aSWh2Y05BUUVCQlFBRGdnSVBBRENDQWdvQ2dnSUJBTGtvVE9NYzZzNlRWNlVtZXFmMzdmaFpNbFlkdmowWApxMDgxTGZBMENjZWtSSEFQeExMeGJCMVhEU1JTeXBLV3QzdkJzQ0JRSXdUK3F6dnNZUmtYR1VaTFFqZTE5dlp5Cm5WeFpiM1hOcXNCNnkwTW1Jak5NUFBFcXplVXcyK3Y4ekh1cnRMYlEyMnBBcGU4M1FTVUs4NUY4bmpDOG1OekMKdTRtUHRNNmYyNDlWNkp6UXJtUVBuWU1aZFVwRmpBbEJmd1RodVdxemw2YUM4U2hqMTRMRnNUVEVpbFd5UjhOLwo3NCtlVzNqcjBaM2pxVEdWNkhOUFkyRHJnSXduWkZjck5FVjZ5dVhBUStDNWsxcjdNZEZrdm5pMDhoVE96eTdtCk44ZzJRakZ4M0ZZb3c4WUpVbTlMbmt6NldrMXc5R3k0NEF6UFJBN1RNblFiTlFqbVdUbDZHNndzZ20rY3BVdkYKbE1FckZMT1VWcW44bEo2ZjYwZUpiN0hXUFhWSCtEcndpMWE2R0l2ZVdoYkZhcEN5dkM1L3dOck10V29namJjUgpZeWZiVnBudUZnbXNPSVVoZnc4ZVhqT1hGeG0wV3NKMVZTQVZWc2NiT2REdEVYa0hhSnFUM1FEQ2t3bVRrUlpWCjRleldPNmdGbFE1SXNQSzQ2TXhzT3Yxci9UUnNVdWZXL3UrR2o5YTlLdmVyeFAwTC85eS9uSi9HK3lBUC9qbTIKaWNQQnpyUlpzUEJkS2dRc05KR25sQ1dkYUVaOFdsd0NmQThSZ3lSUFZqRXZMUVc1bFpzMnk0UlF0OGUxTEN4Ywp0SkN2TGh6eERzRDZIQ2VRRXdPVDZhSzBnOW1uc2FlSUxvMm1jckVTNDgzM0JvSXU2SkFST2xWT1hvd3pHMnYzCnVBeitBVDBGL1ZaRkFnTUJBQUdqZ2dHd01JSUJyREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdIUVlEVlIwbEJCWXcKRkFZSUt3WUJCUVVIQXdFR0NDc0dBUVVGQndNQ01Bd0dBMVVkRXdFQi93UUNNQUF3SFFZRFZSME9CQllFRk5LZQpBVUZYc2Z2N2lML0lYVVBXdzY2ZU5jQnhNQjhHQTFVZEl3UVlNQmFBRlB0NFR4TDVZQldETEo4WGZ6UVpzeTQyCjZrR0pNR1lHQ0NzR0FRVUZCd0VCQkZvd1dEQWlCZ2dyQmdFRkJRY3dBWVlXYUhSMGNEb3ZMekV5Tnk0d0xqQXUKTVRvME1EQXlMekF5QmdnckJnRUZCUWN3QW9ZbWFIUjBjRG92THpFeU55NHdMakF1TVRvME1EQXdMMkZqYldVdgphWE56ZFdWeUxXTmxjblF3T1FZRFZSMFJCREl3TUlJS2JHOWpZV3d4TG1OdmJZSVFkR1Z6ZERFdWJHOWpZV3d4CkxtTnZiWUlRZEdWemRESXViRzlqWVd3eExtTnZiVEFuQmdOVkhSOEVJREFlTUJ5Z0dxQVloaFpvZEhSd09pOHYKWlhoaGJYQnNaUzVqYjIwdlkzSnNNR0VHQTFVZElBUmFNRmd3Q0FZR1o0RU1BUUlCTUV3R0F5b0RCREJGTUNJRwpDQ3NHQVFVRkJ3SUJGaFpvZEhSd09pOHZaWGhoYlhCc1pTNWpiMjB2WTNCek1COEdDQ3NHQVFVRkJ3SUNNQk1NCkVVUnZJRmRvWVhRZ1ZHaHZkU0JYYVd4ME1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ3A0Q2FxZlR4THNQTzQKS2JueDJZdEc4bTN3MC9keTVVR1VRNjZHbGxPVTk0L2I0MmNhbTRuNUZrTWlpZ01IaUx4c2JZVXh0cDZKQ3R5cQpLKzFNcDFWWEtSTTVKbFBTNWRIaWhxdHk1U3BrTUhjampwQSs3U2YyVWtoNmpKRWYxTUVJY2JnWnpJRk5IT0hYClVUUUppVFhKcno3blJDZnlQWFZtbWErUGtIRlU4R0VEVzJGOVptU1kzVFBiQWhiWkV2UkZubjUrR1lxbkZuancKWWw3Y0I2MXYwRzVpOGQwbnVvbTB4a2hiNTU3Y3BiZHhLblhsaFU4N2RZSTR5SUdPdUFGUWpYcXFXN2NIZCtXUQpWSDB2dFA3cEgrRmt2YnY4WkkxMHMrNU5ZcCtzZjFQZGQxekJsRmdNSGF3dnFFYUg3SU9sejdkajlCdmtVc0dpClhxQWVqQnFPCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVpakNDQTNLZ0F3SUJBZ0lDRWswd0RRWUpLb1pJaHZjTkFRRUxCUUF3S3pFcE1DY0dBMVVFQXd3Z1kyRmoKYTJ4cGJtY2dZM0o1Y0hSdlozSmhjR2hsY2lCbVlXdGxJRkpQVDFRd0hoY05NVFV4TURJeE1qQXhNVFV5V2hjTgpNakF4TURFNU1qQXhNVFV5V2pBZk1SMHdHd1lEVlFRREV4Um9ZWEJ3ZVNCb1lXTnJaWElnWm1GclpTQkRRVENDCkFTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTUlLUjNtYUJjVVNzbmNYWXpRVDEzRDUKTnIrWjNtTHhNTWgzVFVkdDZzQUNtcWJKMGJ0UmxnWGZNdE5MTTJPVTFJNmEzSnUrdElaU2RuMnYyMUpCd3Z4VQp6cFpRNHp5MmNpbUlpTVFEWkNRSEp3ekM5R1puOEhhVzA5MWl6OUgwR28zQTdXRFh3WU5tc2RMTlJpMDBvMTRVCmpvYVZxYVBzWXJaV3ZSS2FJUnFhVTBoSG1TMEFXd1FTdk4vOTNpTUlYdXlpd3l3bWt3S2JXbm54Q1EvZ3NjdEsKRlV0Y05yd0V4OVdnajZLbGh3RFR5STFRV1NCYnhWWU55VWdQRnpLeHJTbXdNTzB5TmZmN2hvK1FUOXg1K1kvNwpYRTU5UzRNYzRaWHhjWEtldy9nU2xOOVU1bXZUK0QyQmhEdGtDdXBkZnNaTkNRV3AyN0ErYi9EbXJGSTlOcXNDCkF3RUFBYU9DQWNJd2dnRytNQklHQTFVZEV3RUI
vd1FJTUFZQkFmOENBUUF3UXdZRFZSMGVCRHd3T3FFNE1BYUMKQkM1dGFXd3dDb2NJQUFBQUFBQUFBQUF3SW9jZ0FBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQQpBQUFBQUFBd0RnWURWUjBQQVFIL0JBUURBZ0dHTUg4R0NDc0dBUVVGQndFQkJITXdjVEF5QmdnckJnRUZCUWN3CkFZWW1hSFIwY0RvdkwybHpjbWN1ZEhKMWMzUnBaQzV2WTNOd0xtbGtaVzUwY25WemRDNWpiMjB3T3dZSUt3WUIKQlFVSE1BS0dMMmgwZEhBNkx5OWhjSEJ6TG1sa1pXNTBjblZ6ZEM1amIyMHZjbTl2ZEhNdlpITjBjbTl2ZEdOaAplRE11Y0Rkak1COEdBMVVkSXdRWU1CYUFGT21rUCs2ZXBlYnkxZGQ1WUR5VHBpNGtqcGVxTUZRR0ExVWRJQVJOCk1Fc3dDQVlHWjRFTUFRSUJNRDhHQ3lzR0FRUUJndDhUQVFFQk1EQXdMZ1lJS3dZQkJRVUhBZ0VXSW1oMGRIQTYKTHk5amNITXVjbTl2ZEMxNE1TNXNaWFJ6Wlc1amNubHdkQzV2Y21jd1BBWURWUjBmQkRVd016QXhvQytnTFlZcgphSFIwY0RvdkwyTnliQzVwWkdWdWRISjFjM1F1WTI5dEwwUlRWRkpQVDFSRFFWZ3pRMUpNTG1OeWJEQWRCZ05WCkhRNEVGZ1FVKzNoUEV2bGdGWU1zbnhkL05CbXpMamJxUVlrd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFBMFkKQWVMWE9rbHg0aGhDaWtVVWwrQmRuRmZuMWcwVzVBaVFMVk5JT0w2UG5xWHUwd2puaE55aHFkd25maFlNbm95NAppZFJoNGxCNnB6OEdmOXBubExkL0RuV1NWM2dTKy9JL21BbDFkQ2tLYnk2SDJWNzkwZTZJSG1JSzJLWW0zam0rClUrK0ZJZEdwQmRzUVRTZG1pWC9yQXl1eE1ETTBhZE1rTkJ3VGZRbVpRQ3o2bkdIdzFRY1NQWk12WnBzQzhTa3YKZWt6eHNqRjFvdE9yTVVQTlBRdnRUV3JWeDhHbFIycWZ4LzR4YlFhMXYyZnJOdkZCQ21PNTlnb3oram5XdmZUdApqMk5qd0RaN3ZsTUJzUG0xNmRiS1lDODQwdXZSb1pqeHFzZGMzQ2hDWmpxaW1GcWxORy94b1BBOCtkVGljWnpDClhFOWlqUEljdlc2eTFhYTNiR3c9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
}
}
]
},
"ChallengeCerts": {}
}
@@ -1,18 +1,10 @@
package acme

import (
"crypto/tls"
"encoding/base64"
"net/http"
"net/http/httptest"
"reflect"
"sync"
"testing"
"time"

"github.com/containous/traefik/tls/generate"
"github.com/stretchr/testify/assert"
"github.com/xenolf/lego/acme"
)

func TestDomainsSet(t *testing.T) {
@@ -71,8 +63,8 @@ func TestDomainsSetAppend(t *testing.T) {
}

func TestCertificatesRenew(t *testing.T) {
foo1Cert, foo1Key, _ := generate.KeyPair("foo1.com", time.Now())
foo2Cert, foo2Key, _ := generate.KeyPair("foo2.com", time.Now())
foo1Cert, foo1Key, _ := generateKeyPair("foo1.com", time.Now())
foo2Cert, foo2Key, _ := generateKeyPair("foo2.com", time.Now())
domainsCertificates := DomainsCertificates{
lock: sync.RWMutex{},
Certs: []*DomainsCertificate{
@@ -102,7 +94,7 @@ func TestCertificatesRenew(t *testing.T) {
},
},
}
foo1Cert, foo1Key, _ = generate.KeyPair("foo1.com", time.Now())
foo1Cert, foo1Key, _ = generateKeyPair("foo1.com", time.Now())
newCertificate := &Certificate{
Domain: "foo1.com",
CertURL: "url",
@@ -129,10 +121,10 @@ func TestCertificatesRenew(t *testing.T) {

func TestRemoveDuplicates(t *testing.T) {
now := time.Now()
fooCert, fooKey, _ := generate.KeyPair("foo.com", now)
foo24Cert, foo24Key, _ := generate.KeyPair("foo.com", now.Add(24*time.Hour))
foo48Cert, foo48Key, _ := generate.KeyPair("foo.com", now.Add(48*time.Hour))
barCert, barKey, _ := generate.KeyPair("bar.com", now)
fooCert, fooKey, _ := generateKeyPair("foo.com", now)
foo24Cert, foo24Key, _ := generateKeyPair("foo.com", now.Add(24*time.Hour))
foo48Cert, foo48Key, _ := generateKeyPair("foo.com", now.Add(48*time.Hour))
barCert, barKey, _ := generateKeyPair("bar.com", now)
domainsCertificates := DomainsCertificates{
lock: sync.RWMutex{},
Certs: []*DomainsCertificate{
@@ -213,112 +205,7 @@ func TestRemoveDuplicates(t *testing.T) {
t.Errorf("Bad expiration %s date for domain %+v, now %s", cert.tlsCert.Leaf.NotAfter.String(), cert, now.Add(48*time.Hour).Truncate(1*time.Second).String())
}
default:
t.Errorf("Unknown domain %+v", cert)
t.Errorf("Unkown domain %+v", cert)
}
}
}

func TestNoPreCheckOverride(t *testing.T) {
acme.PreCheckDNS = nil // Irreversable - but not expecting real calls into this during testing process
err := dnsOverrideDelay(0)
if err != nil {
t.Errorf("Error in dnsOverrideDelay :%v", err)
}
if acme.PreCheckDNS != nil {
t.Error("Unexpected change to acme.PreCheckDNS when leaving DNS verification as is.")
}
}

func TestSillyPreCheckOverride(t *testing.T) {
err := dnsOverrideDelay(-5)
if err == nil {
t.Error("Missing expected error in dnsOverrideDelay!")
}
}

func TestPreCheckOverride(t *testing.T) {
acme.PreCheckDNS = nil // Irreversable - but not expecting real calls into this during testing process
err := dnsOverrideDelay(5)
if err != nil {
t.Errorf("Error in dnsOverrideDelay :%v", err)
}
if acme.PreCheckDNS == nil {
t.Error("No change to acme.PreCheckDNS when meant to be adding enforcing override function.")
}
}

func TestAcmeClientCreation(t *testing.T) {
acme.PreCheckDNS = nil // Irreversable - but not expecting real calls into this during testing process
// Lengthy setup to avoid external web requests - oh for easier golang testing!
account := &Account{Email: "f@f"}
account.PrivateKey, _ = base64.StdEncoding.DecodeString(`
MIIBPAIBAAJBAMp2Ni92FfEur+CAvFkgC12LT4l9D53ApbBpDaXaJkzzks+KsLw9zyAxvlrfAyTCQ
7tDnEnIltAXyQ0uOFUUdcMCAwEAAQJAK1FbipATZcT9cGVa5x7KD7usytftLW14heQUPXYNV80r/3
lmnpvjL06dffRpwkYeN8DATQF/QOcy3NNNGDw/4QIhAPAKmiZFxA/qmRXsuU8Zhlzf16WrNZ68K64
asn/h3qZrAiEA1+wFR3WXCPIolOvd7AHjfgcTKQNkoMPywU4FYUNQ1AkCIQDv8yk0qPjckD6HVCPJ
llJh9MC0svjevGtNlxJoE3lmEQIhAKXy1wfZ32/XtcrnENPvi6lzxI0T94X7s5pP3aCoPPoJAiEAl
cijFkALeQp/qyeXdFld2v9gUN3eCgljgcl0QweRoIc=---`)
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte(`{
"new-authz": "https://foo/acme/new-authz",
"new-cert": "https://foo/acme/new-cert",
"new-reg": "https://foo/acme/new-reg",
"revoke-cert": "https://foo/acme/revoke-cert"
}`))
}))
defer ts.Close()
a := ACME{DNSChallenge: &DNSChallenge{Provider: "manual", DelayBeforeCheck: 10}, CAServer: ts.URL}

client, err := a.buildACMEClient(account)
if err != nil {
t.Errorf("Error in buildACMEClient: %v", err)
}
if client == nil {
t.Error("No client from buildACMEClient!")
}
if acme.PreCheckDNS == nil {
t.Error("No change to acme.PreCheckDNS when meant to be adding enforcing override function.")
}
}

func TestAcme_getUncheckedCertificates(t *testing.T) {
mm := make(map[string]*tls.Certificate)
mm["*.containo.us"] = &tls.Certificate{}
mm["traefik.acme.io"] = &tls.Certificate{}

a := ACME{TLSConfig: &tls.Config{NameToCertificate: mm}}

domains := []string{"traefik.containo.us", "trae.containo.us"}
uncheckedDomains := a.getUncheckedDomains(domains, nil)
assert.Empty(t, uncheckedDomains)
domains = []string{"traefik.acme.io", "trae.acme.io"}
uncheckedDomains = a.getUncheckedDomains(domains, nil)
assert.Len(t, uncheckedDomains, 1)
domainsCertificates := DomainsCertificates{Certs: []*DomainsCertificate{
{
tlsCert: &tls.Certificate{},
Domains: Domain{
Main: "*.acme.wtf",
SANs: []string{"trae.acme.io"},
},
},
}}
account := Account{DomainsCertificate: domainsCertificates}
uncheckedDomains = a.getUncheckedDomains(domains, &account)
assert.Empty(t, uncheckedDomains)
}

func TestAcme_getProvidedCertificate(t *testing.T) {
mm := make(map[string]*tls.Certificate)
mm["*.containo.us"] = &tls.Certificate{}
mm["traefik.acme.io"] = &tls.Certificate{}

a := ACME{TLSConfig: &tls.Config{NameToCertificate: mm}}

domain := "traefik.containo.us"
certificate := a.getProvidedCertificate(domain)
assert.NotNil(t, certificate)
domain = "trae.acme.io"
certificate = a.getProvidedCertificate(domain)
assert.Nil(t, certificate)
}
97
acme/challengeProvider.go
Normal file
@@ -0,0 +1,97 @@
package acme

import (
"crypto/tls"
"strings"
"sync"

"fmt"
"github.com/cenk/backoff"
"github.com/containous/traefik/cluster"
"github.com/containous/traefik/log"
"github.com/containous/traefik/safe"
"github.com/xenolf/lego/acme"
"time"
)

var _ acme.ChallengeProviderTimeout = (*challengeProvider)(nil)

type challengeProvider struct {
store cluster.Store
lock sync.RWMutex
}

func (c *challengeProvider) getCertificate(domain string) (cert *tls.Certificate, exists bool) {
log.Debugf("Challenge GetCertificate %s", domain)
if !strings.HasSuffix(domain, ".acme.invalid") {
return nil, false
}
c.lock.RLock()
defer c.lock.RUnlock()
account := c.store.Get().(*Account)
if account.ChallengeCerts == nil {
return nil, false
}
account.Init()
var result *tls.Certificate
operation := func() error {
for _, cert := range account.ChallengeCerts {
for _, dns := range cert.certificate.Leaf.DNSNames {
if domain == dns {
result = cert.certificate
return nil
}
}
}
return fmt.Errorf("Cannot find challenge cert for domain %s", domain)
}
notify := func(err error, time time.Duration) {
log.Errorf("Error getting cert: %v, retrying in %s", err, time)
}
ebo := backoff.NewExponentialBackOff()
ebo.MaxElapsedTime = 60 * time.Second
err := backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
if err != nil {
log.Errorf("Error getting cert: %v", err)
return nil, false
}
return result, true
}

func (c *challengeProvider) Present(domain, token, keyAuth string) error {
log.Debugf("Challenge Present %s", domain)
cert, _, err := TLSSNI01ChallengeCert(keyAuth)
if err != nil {
return err
}

c.lock.Lock()
defer c.lock.Unlock()
transaction, object, err := c.store.Begin()
if err != nil {
return err
}
account := object.(*Account)
if account.ChallengeCerts == nil {
account.ChallengeCerts = map[string]*ChallengeCert{}
}
account.ChallengeCerts[domain] = &cert
return transaction.Commit(account)
}

func (c *challengeProvider) CleanUp(domain, token, keyAuth string) error {
log.Debugf("Challenge CleanUp %s", domain)
c.lock.Lock()
defer c.lock.Unlock()
transaction, object, err := c.store.Begin()
if err != nil {
return err
}
account := object.(*Account)
delete(account.ChallengeCerts, domain)
return transaction.Commit(account)
}

func (c *challengeProvider) Timeout() (timeout, interval time.Duration) {
return 60 * time.Second, 5 * time.Second
}
@@ -1,92 +0,0 @@
|
||||
package acme
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/cenk/backoff"
|
||||
"github.com/containous/traefik/cluster"
|
||||
"github.com/containous/traefik/log"
|
||||
"github.com/containous/traefik/safe"
|
||||
"github.com/xenolf/lego/acme"
|
||||
)
|
||||
|
||||
var _ acme.ChallengeProviderTimeout = (*challengeHTTPProvider)(nil)
|
||||
|
||||
type challengeHTTPProvider struct {
|
||||
store cluster.Store
|
||||
lock sync.RWMutex
|
||||
}
|
||||
|
||||
func (c *challengeHTTPProvider) getTokenValue(token, domain string) []byte {
|
||||
log.Debugf("Looking for an existing ACME challenge for token %v...", token)
|
||||
c.lock.RLock()
|
||||
defer c.lock.RUnlock()
|
||||
account := c.store.Get().(*Account)
|
||||
if account.HTTPChallenge == nil {
|
||||
return []byte{}
|
||||
}
|
||||
var result []byte
|
||||
operation := func() error {
|
||||
var ok bool
|
||||
if result, ok = account.HTTPChallenge[token][domain]; !ok {
|
||||
return fmt.Errorf("cannot find challenge for token %v", token)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
notify := func(err error, time time.Duration) {
|
||||
log.Errorf("Error getting challenge for token retrying in %s", time)
|
||||
}
|
||||
ebo := backoff.NewExponentialBackOff()
|
||||
ebo.MaxElapsedTime = 60 * time.Second
|
||||
err := backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
|
||||
if err != nil {
|
||||
log.Errorf("Error getting challenge for token: %v", err)
|
||||
return []byte{}
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
func (c *challengeHTTPProvider) Present(domain, token, keyAuth string) error {
|
||||
log.Debugf("Challenge Present %s", domain)
|
||||
c.lock.Lock()
|
||||
defer c.lock.Unlock()
|
||||
transaction, object, err := c.store.Begin()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
account := object.(*Account)
|
||||
if account.HTTPChallenge == nil {
|
||||
account.HTTPChallenge = map[string]map[string][]byte{}
|
||||
}
|
||||
if _, ok := account.HTTPChallenge[token]; !ok {
|
||||
account.HTTPChallenge[token] = map[string][]byte{}
|
||||
}
|
||||
account.HTTPChallenge[token][domain] = []byte(keyAuth)
|
||||
return transaction.Commit(account)
|
||||
}
|
||||
|
||||
func (c *challengeHTTPProvider) CleanUp(domain, token, keyAuth string) error {
|
||||
log.Debugf("Challenge CleanUp %s", domain)
|
||||
c.lock.Lock()
|
||||
defer c.lock.Unlock()
|
||||
transaction, object, err := c.store.Begin()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
account := object.(*Account)
|
||||
if _, ok := account.HTTPChallenge[token]; ok {
|
||||
if _, domainOk := account.HTTPChallenge[token][domain]; domainOk {
|
||||
delete(account.HTTPChallenge[token], domain)
|
||||
}
|
||||
if len(account.HTTPChallenge[token]) == 0 {
|
||||
delete(account.HTTPChallenge, token)
|
||||
}
|
||||
}
|
||||
return transaction.Commit(account)
|
||||
}
|
||||
|
||||
func (c *challengeHTTPProvider) Timeout() (timeout, interval time.Duration) {
|
||||
return 60 * time.Second, 5 * time.Second
|
||||
}
|
||||
@@ -1,150 +0,0 @@
|
||||
package acme
|
||||
|
||||
import (
|
||||
"crypto"
|
||||
"crypto/ecdsa"
|
||||
"crypto/rand"
|
||||
"crypto/rsa"
|
||||
"crypto/sha256"
|
||||
"crypto/tls"
|
||||
"crypto/x509"
|
||||
"encoding/hex"
|
||||
"encoding/pem"
|
||||
"fmt"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/cenk/backoff"
|
||||
"github.com/containous/traefik/cluster"
|
||||
"github.com/containous/traefik/log"
|
||||
"github.com/containous/traefik/safe"
|
||||
"github.com/containous/traefik/tls/generate"
|
||||
"github.com/xenolf/lego/acme"
|
||||
)
|
||||
|
||||
var _ acme.ChallengeProviderTimeout = (*challengeTLSProvider)(nil)
|
||||
|
||||
type challengeTLSProvider struct {
|
||||
store cluster.Store
|
||||
lock sync.RWMutex
|
||||
}
|
||||
|
||||
func (c *challengeTLSProvider) getCertificate(domain string) (cert *tls.Certificate, exists bool) {
|
||||
log.Debugf("Looking for an existing ACME challenge for %s...", domain)
|
||||
if !strings.HasSuffix(domain, ".acme.invalid") {
|
||||
return nil, false
|
||||
}
|
||||
c.lock.RLock()
|
||||
defer c.lock.RUnlock()
|
||||
account := c.store.Get().(*Account)
|
||||
if account.ChallengeCerts == nil {
|
||||
return nil, false
|
||||
}
|
||||
account.Init()
|
||||
var result *tls.Certificate
|
||||
operation := func() error {
|
||||
for _, cert := range account.ChallengeCerts {
|
||||
for _, dns := range cert.certificate.Leaf.DNSNames {
|
||||
if domain == dns {
|
||||
result = cert.certificate
|
||||
return nil
|
||||
}
|
||||
}
|
||||
}
|
||||
return fmt.Errorf("cannot find challenge cert for domain %s", domain)
|
||||
}
|
||||
notify := func(err error, time time.Duration) {
|
||||
log.Errorf("Error getting cert: %v, retrying in %s", err, time)
|
||||
}
|
||||
ebo := backoff.NewExponentialBackOff()
|
||||
ebo.MaxElapsedTime = 60 * time.Second
|
||||
err := backoff.RetryNotify(safe.OperationWithRecover(operation), ebo, notify)
|
||||
if err != nil {
|
||||
log.Errorf("Error getting cert: %v", err)
|
||||
return nil, false
|
||||
}
|
||||
return result, true
|
||||
}
|
||||
|
||||
func (c *challengeTLSProvider) Present(domain, token, keyAuth string) error {
|
||||
log.Debugf("Challenge Present %s", domain)
|
||||
cert, _, err := tlsSNI01ChallengeCert(keyAuth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
c.lock.Lock()
|
||||
defer c.lock.Unlock()
|
||||
transaction, object, err := c.store.Begin()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
account := object.(*Account)
|
||||
if account.ChallengeCerts == nil {
|
||||
account.ChallengeCerts = map[string]*ChallengeCert{}
|
||||
}
|
||||
account.ChallengeCerts[domain] = &cert
|
||||
return transaction.Commit(account)
|
||||
}
|
||||
|
||||
func (c *challengeTLSProvider) CleanUp(domain, token, keyAuth string) error {
|
||||
log.Debugf("Challenge CleanUp %s", domain)
|
||||
c.lock.Lock()
|
||||
defer c.lock.Unlock()
|
||||
transaction, object, err := c.store.Begin()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
account := object.(*Account)
|
||||
delete(account.ChallengeCerts, domain)
|
||||
return transaction.Commit(account)
|
||||
}
|
||||
|
||||
func (c *challengeTLSProvider) Timeout() (timeout, interval time.Duration) {
|
||||
return 60 * time.Second, 5 * time.Second
|
||||
}
|
||||
|
||||
// tlsSNI01ChallengeCert returns a certificate and target domain for the `tls-sni-01` challenge
|
||||
func tlsSNI01ChallengeCert(keyAuth string) (ChallengeCert, string, error) {
|
||||
// generate a new RSA key for the certificates
|
||||
var tempPrivKey crypto.PrivateKey
|
||||
tempPrivKey, err := rsa.GenerateKey(rand.Reader, 2048)
|
||||
if err != nil {
|
||||
return ChallengeCert{}, "", err
|
||||
}
|
||||
rsaPrivKey := tempPrivKey.(*rsa.PrivateKey)
|
||||
rsaPrivPEM := pemEncode(rsaPrivKey)
|
||||
|
||||
zBytes := sha256.Sum256([]byte(keyAuth))
|
||||
z := hex.EncodeToString(zBytes[:sha256.Size])
|
||||
domain := fmt.Sprintf("%s.%s.acme.invalid", z[:32], z[32:])
|
||||
tempCertPEM, err := generate.PemCert(rsaPrivKey, domain, time.Time{})
|
||||
if err != nil {
|
||||
return ChallengeCert{}, "", err
|
||||
}
|
||||
|
||||
certificate, err := tls.X509KeyPair(tempCertPEM, rsaPrivPEM)
|
||||
if err != nil {
|
||||
return ChallengeCert{}, "", err
|
||||
}
|
||||
|
||||
return ChallengeCert{Certificate: tempCertPEM, PrivateKey: rsaPrivPEM, certificate: &certificate}, domain, nil
|
||||
}
|
||||
|
||||
func pemEncode(data interface{}) []byte {
|
||||
var pemBlock *pem.Block
|
||||
switch key := data.(type) {
|
||||
case *ecdsa.PrivateKey:
|
||||
keyBytes, _ := x509.MarshalECPrivateKey(key)
|
||||
pemBlock = &pem.Block{Type: "EC PRIVATE KEY", Bytes: keyBytes}
|
||||
case *rsa.PrivateKey:
|
||||
pemBlock = &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}
|
||||
case *x509.CertificateRequest:
|
||||
pemBlock = &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: key.Raw}
|
||||
case []byte:
|
||||
pemBlock = &pem.Block{Type: "CERTIFICATE", Bytes: []byte(data.([]byte))}
|
||||
}
|
||||
|
||||
return pem.EncodeToMemory(pemBlock)
|
||||
}
|
||||
135
acme/crypto.go
Normal file
@@ -0,0 +1,135 @@
|
||||
package acme
|
||||
|
||||
import (
|
||||
"crypto"
|
||||
"crypto/ecdsa"
|
||||
"crypto/rand"
|
||||
"crypto/rsa"
|
||||
"crypto/sha256"
|
||||
"crypto/tls"
|
||||
"crypto/x509"
|
||||
"crypto/x509/pkix"
|
||||
"encoding/hex"
|
||||
"encoding/pem"
|
||||
"fmt"
|
||||
"math/big"
|
||||
"time"
|
||||
)
|
||||
|
||||
func generateDefaultCertificate() (*tls.Certificate, error) {
|
||||
randomBytes := make([]byte, 100)
|
||||
_, err := rand.Read(randomBytes)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
zBytes := sha256.Sum256(randomBytes)
|
||||
z := hex.EncodeToString(zBytes[:sha256.Size])
|
||||
domain := fmt.Sprintf("%s.%s.traefik.default", z[:32], z[32:])
|
||||
|
||||
certPEM, keyPEM, err := generateKeyPair(domain, time.Time{})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
certificate, err := tls.X509KeyPair(certPEM, keyPEM)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &certificate, nil
|
||||
}
|
||||
|
||||
func generateKeyPair(domain string, expiration time.Time) ([]byte, []byte, error) {
|
||||
rsaPrivKey, err := rsa.GenerateKey(rand.Reader, 2048)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(rsaPrivKey)})
|
||||
|
||||
certPEM, err := generatePemCert(rsaPrivKey, domain, expiration)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
return certPEM, keyPEM, nil
|
||||
}
|
||||
|
||||
func generatePemCert(privKey *rsa.PrivateKey, domain string, expiration time.Time) ([]byte, error) {
|
||||
derBytes, err := generateDerCert(privKey, expiration, domain)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: derBytes}), nil
|
||||
}
|
||||
|
||||
func generateDerCert(privKey *rsa.PrivateKey, expiration time.Time, domain string) ([]byte, error) {
|
||||
serialNumberLimit := new(big.Int).Lsh(big.NewInt(1), 128)
|
||||
serialNumber, err := rand.Int(rand.Reader, serialNumberLimit)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if expiration.IsZero() {
|
||||
expiration = time.Now().Add(365)
|
||||
}
|
||||
|
||||
template := x509.Certificate{
|
||||
SerialNumber: serialNumber,
|
||||
Subject: pkix.Name{
|
||||
CommonName: "TRAEFIK DEFAULT CERT",
|
||||
},
|
||||
NotBefore: time.Now(),
|
||||
NotAfter: expiration,
|
||||
|
||||
KeyUsage: x509.KeyUsageKeyEncipherment,
|
||||
BasicConstraintsValid: true,
|
||||
DNSNames: []string{domain},
|
||||
}
|
||||
|
||||
return x509.CreateCertificate(rand.Reader, &template, &template, &privKey.PublicKey, privKey)
|
||||
}
|
||||
|
||||
// TLSSNI01ChallengeCert returns a certificate and target domain for the `tls-sni-01` challenge
|
||||
func TLSSNI01ChallengeCert(keyAuth string) (ChallengeCert, string, error) {
|
||||
// generate a new RSA key for the certificates
|
||||
var tempPrivKey crypto.PrivateKey
|
||||
tempPrivKey, err := rsa.GenerateKey(rand.Reader, 2048)
|
||||
if err != nil {
|
||||
return ChallengeCert{}, "", err
|
||||
}
|
||||
rsaPrivKey := tempPrivKey.(*rsa.PrivateKey)
|
||||
rsaPrivPEM := pemEncode(rsaPrivKey)
|
||||
|
||||
zBytes := sha256.Sum256([]byte(keyAuth))
|
||||
z := hex.EncodeToString(zBytes[:sha256.Size])
|
||||
domain := fmt.Sprintf("%s.%s.acme.invalid", z[:32], z[32:])
|
||||
tempCertPEM, err := generatePemCert(rsaPrivKey, domain, time.Time{})
|
||||
if err != nil {
|
||||
return ChallengeCert{}, "", err
|
||||
}
|
||||
|
||||
certificate, err := tls.X509KeyPair(tempCertPEM, rsaPrivPEM)
|
||||
if err != nil {
|
||||
return ChallengeCert{}, "", err
|
||||
}
|
||||
|
||||
return ChallengeCert{Certificate: tempCertPEM, PrivateKey: rsaPrivPEM, certificate: &certificate}, domain, nil
|
||||
}
|
||||
func pemEncode(data interface{}) []byte {
|
||||
var pemBlock *pem.Block
|
||||
switch key := data.(type) {
|
||||
case *ecdsa.PrivateKey:
|
||||
keyBytes, _ := x509.MarshalECPrivateKey(key)
|
||||
pemBlock = &pem.Block{Type: "EC PRIVATE KEY", Bytes: keyBytes}
|
||||
case *rsa.PrivateKey:
|
||||
pemBlock = &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}
|
||||
break
|
||||
case *x509.CertificateRequest:
|
||||
pemBlock = &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: key.Raw}
|
||||
break
|
||||
case []byte:
|
||||
pemBlock = &pem.Block{Type: "CERTIFICATE", Bytes: []byte(data.([]byte))}
|
||||
}
|
||||
|
||||
return pem.EncodeToMemory(pemBlock)
|
||||
}
|
||||
@@ -3,12 +3,10 @@ package acme
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"sync"

"github.com/containous/traefik/cluster"
"github.com/containous/traefik/log"
"io/ioutil"
"sync"
)

var _ cluster.Store = (*LocalStore)(nil)
@@ -39,17 +37,7 @@ func (s *LocalStore) Load() (cluster.Object, error) {
s.storageLock.Lock()
defer s.storageLock.Unlock()
account := &Account{}

err := checkPermissions(s.file)
if err != nil {
return nil, err
}
f, err := os.Open(s.file)
if err != nil {
return nil, err
}
defer f.Close()
file, err := ioutil.ReadAll(f)
file, err := ioutil.ReadFile(s.file)
if err != nil {
return nil, err
}
@@ -1,41 +0,0 @@
|
||||
package acme
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestLoad(t *testing.T) {
|
||||
acmeFile := "./acme_example.json"
|
||||
|
||||
folder, prefix := filepath.Split(acmeFile)
|
||||
tmpFile, err := ioutil.TempFile(folder, prefix)
|
||||
defer os.Remove(tmpFile.Name())
|
||||
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
fileContent, err := ioutil.ReadFile(acmeFile)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
tmpFile.Write(fileContent)
|
||||
|
||||
localStore := NewLocalStore(tmpFile.Name())
|
||||
obj, err := localStore.Load()
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
account, ok := obj.(*Account)
|
||||
if !ok {
|
||||
t.Error("Object is not an ACME Account")
|
||||
}
|
||||
|
||||
if len(account.DomainsCertificate.Certs) != 1 {
|
||||
t.Errorf("Must found %d and found %d certificates in Account", 3, len(account.DomainsCertificate.Certs))
|
||||
}
|
||||
}
|
||||
@@ -1,25 +0,0 @@
// +build !windows

package acme

import (
"fmt"
"os"
)

// Check file permissions
func checkPermissions(name string) error {
f, err := os.Open(name)
if err != nil {
return err
}
defer f.Close()
fi, err := f.Stat()
if err != nil {
return err
}
if fi.Mode().Perm()&0077 != 0 {
return fmt.Errorf("permissions %o for %s are too open, please use 600", fi.Mode().Perm(), name)
}
return nil
}
@@ -1,6 +0,0 @@
package acme

// Do not check file permissions on Windows right now
func checkPermissions(name string) error {
return nil
}
34
adapters.go
Normal file
@@ -0,0 +1,34 @@
|
||||
/*
|
||||
Copyright
|
||||
*/
|
||||
package main
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
|
||||
"github.com/containous/traefik/log"
|
||||
)
|
||||
|
||||
// OxyLogger implements oxy Logger interface with logrus.
|
||||
type OxyLogger struct {
|
||||
}
|
||||
|
||||
// Infof logs specified string as Debug level in logrus.
|
||||
func (oxylogger *OxyLogger) Infof(format string, args ...interface{}) {
|
||||
log.Debugf(format, args...)
|
||||
}
|
||||
|
||||
// Warningf logs specified string as Warning level in logrus.
|
||||
func (oxylogger *OxyLogger) Warningf(format string, args ...interface{}) {
|
||||
log.Warningf(format, args...)
|
||||
}
|
||||
|
||||
// Errorf logs specified string as Warningf level in logrus.
|
||||
func (oxylogger *OxyLogger) Errorf(format string, args ...interface{}) {
|
||||
log.Warningf(format, args...)
|
||||
}
|
||||
|
||||
func notFoundHandler(w http.ResponseWriter, r *http.Request) {
|
||||
http.NotFound(w, r)
|
||||
//templatesRenderer.HTML(w, http.StatusNotFound, "notFound", nil)
|
||||
}
|
||||
@@ -1,22 +0,0 @@
|
||||
package api
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
|
||||
"github.com/containous/mux"
|
||||
"github.com/containous/traefik/autogen/genstatic"
|
||||
"github.com/elazarl/go-bindata-assetfs"
|
||||
)
|
||||
|
||||
// DashboardHandler expose dashboard routes
|
||||
type DashboardHandler struct{}
|
||||
|
||||
// AddRoutes add dashboard routes on a router
|
||||
func (g DashboardHandler) AddRoutes(router *mux.Router) {
|
||||
// Expose dashboard
|
||||
router.Methods(http.MethodGet).Path("/").HandlerFunc(func(response http.ResponseWriter, request *http.Request) {
|
||||
http.Redirect(response, request, request.Header.Get("X-Forwarded-Prefix")+"/dashboard/", 302)
|
||||
})
|
||||
router.Methods(http.MethodGet).PathPrefix("/dashboard/").
|
||||
Handler(http.StripPrefix("/dashboard/", http.FileServer(&assetfs.AssetFS{Asset: genstatic.Asset, AssetInfo: genstatic.AssetInfo, AssetDir: genstatic.AssetDir, Prefix: "static"})))
|
||||
}
|
||||
46
api/debug.go
@@ -1,46 +0,0 @@
|
||||
package api
|
||||
|
||||
import (
|
||||
"expvar"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"net/http/pprof"
|
||||
"runtime"
|
||||
|
||||
"github.com/containous/mux"
|
||||
)
|
||||
|
||||
func init() {
|
||||
expvar.Publish("Goroutines", expvar.Func(goroutines))
|
||||
}
|
||||
|
||||
func goroutines() interface{} {
|
||||
return runtime.NumGoroutine()
|
||||
}
|
||||
|
||||
// DebugHandler expose debug routes
|
||||
type DebugHandler struct{}
|
||||
|
||||
// AddRoutes add debug routes on a router
|
||||
func (g DebugHandler) AddRoutes(router *mux.Router) {
|
||||
router.Methods(http.MethodGet).Path("/debug/vars").
|
||||
HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json; charset=utf-8")
|
||||
fmt.Fprint(w, "{\n")
|
||||
first := true
|
||||
expvar.Do(func(kv expvar.KeyValue) {
|
||||
if !first {
|
||||
fmt.Fprint(w, ",\n")
|
||||
}
|
||||
first = false
|
||||
fmt.Fprintf(w, "%q: %s", kv.Key, kv.Value)
|
||||
})
|
||||
fmt.Fprint(w, "\n}\n")
|
||||
})
|
||||
|
||||
router.Methods(http.MethodGet).PathPrefix("/debug/pprof/cmdline").HandlerFunc(pprof.Cmdline)
|
||||
router.Methods(http.MethodGet).PathPrefix("/debug/pprof/profile").HandlerFunc(pprof.Profile)
|
||||
router.Methods(http.MethodGet).PathPrefix("/debug/pprof/symbol").HandlerFunc(pprof.Symbol)
|
||||
router.Methods(http.MethodGet).PathPrefix("/debug/pprof/trace").HandlerFunc(pprof.Trace)
|
||||
router.Methods(http.MethodGet).PathPrefix("/debug/pprof/").HandlerFunc(pprof.Index)
|
||||
}
|
||||
250
api/handler.go
@@ -1,250 +0,0 @@
|
||||
package api
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
|
||||
"github.com/containous/mux"
|
||||
"github.com/containous/traefik/log"
|
||||
"github.com/containous/traefik/middlewares"
|
||||
"github.com/containous/traefik/safe"
|
||||
"github.com/containous/traefik/types"
|
||||
"github.com/containous/traefik/version"
|
||||
thoas_stats "github.com/thoas/stats"
|
||||
"github.com/unrolled/render"
|
||||
)
|
||||
|
||||
// Handler expose api routes
|
||||
type Handler struct {
|
||||
EntryPoint string `description:"EntryPoint" export:"true"`
|
||||
Dashboard bool `description:"Activate dashboard" export:"true"`
|
||||
Debug bool `export:"true"`
|
||||
CurrentConfigurations *safe.Safe
|
||||
Statistics *types.Statistics `description:"Enable more detailed statistics" export:"true"`
|
||||
Stats *thoas_stats.Stats `json:"-"`
|
||||
StatsRecorder *middlewares.StatsRecorder `json:"-"`
|
||||
}
|
||||
|
||||
var (
|
||||
templatesRenderer = render.New(render.Options{
|
||||
Directory: "nowhere",
|
||||
})
|
||||
)
|
||||
|
||||
// AddRoutes add api routes on a router
|
||||
func (p Handler) AddRoutes(router *mux.Router) {
|
||||
if p.Debug {
|
||||
DebugHandler{}.AddRoutes(router)
|
||||
}
|
||||
|
||||
router.Methods(http.MethodGet).Path("/api").HandlerFunc(p.getConfigHandler)
|
||||
router.Methods(http.MethodGet).Path("/api/providers").HandlerFunc(p.getConfigHandler)
|
||||
router.Methods(http.MethodGet).Path("/api/providers/{provider}").HandlerFunc(p.getProviderHandler)
|
||||
router.Methods(http.MethodGet).Path("/api/providers/{provider}/backends").HandlerFunc(p.getBackendsHandler)
|
||||
router.Methods(http.MethodGet).Path("/api/providers/{provider}/backends/{backend}").HandlerFunc(p.getBackendHandler)
|
||||
router.Methods(http.MethodGet).Path("/api/providers/{provider}/backends/{backend}/servers").HandlerFunc(p.getServersHandler)
|
||||
router.Methods(http.MethodGet).Path("/api/providers/{provider}/backends/{backend}/servers/{server}").HandlerFunc(p.getServerHandler)
|
||||
router.Methods(http.MethodGet).Path("/api/providers/{provider}/frontends").HandlerFunc(p.getFrontendsHandler)
|
||||
router.Methods(http.MethodGet).Path("/api/providers/{provider}/frontends/{frontend}").HandlerFunc(p.getFrontendHandler)
|
||||
router.Methods(http.MethodGet).Path("/api/providers/{provider}/frontends/{frontend}/routes").HandlerFunc(p.getRoutesHandler)
|
||||
router.Methods(http.MethodGet).Path("/api/providers/{provider}/frontends/{frontend}/routes/{route}").HandlerFunc(p.getRouteHandler)
|
||||
|
||||
// health route
|
||||
router.Methods(http.MethodGet).Path("/health").HandlerFunc(p.getHealthHandler)
|
||||
|
||||
version.Handler{}.AddRoutes(router)
|
||||
|
||||
if p.Dashboard {
|
||||
DashboardHandler{}.AddRoutes(router)
|
||||
}
|
||||
}
|
||||
|
||||
func getProviderIDFromVars(vars map[string]string) string {
|
||||
providerID := vars["provider"]
|
||||
// TODO: Deprecated
|
||||
if providerID == "rest" {
|
||||
providerID = "web"
|
||||
}
|
||||
return providerID
|
||||
}
|
||||
|
||||
func (p Handler) getConfigHandler(response http.ResponseWriter, request *http.Request) {
|
||||
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
|
||||
err := templatesRenderer.JSON(response, http.StatusOK, currentConfigurations)
|
||||
if err != nil {
|
||||
log.Error(err)
|
||||
}
|
||||
}
|
||||
|
||||
func (p Handler) getProviderHandler(response http.ResponseWriter, request *http.Request) {
|
||||
providerID := getProviderIDFromVars(mux.Vars(request))
|
||||
|
||||
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
|
||||
if provider, ok := currentConfigurations[providerID]; ok {
|
||||
err := templatesRenderer.JSON(response, http.StatusOK, provider)
|
||||
if err != nil {
|
||||
log.Error(err)
|
||||
}
|
||||
} else {
|
||||
http.NotFound(response, request)
|
||||
}
|
||||
}
|
||||
|
||||
func (p Handler) getBackendsHandler(response http.ResponseWriter, request *http.Request) {
|
||||
providerID := getProviderIDFromVars(mux.Vars(request))
|
||||
|
||||
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
|
||||
if provider, ok := currentConfigurations[providerID]; ok {
|
||||
err := templatesRenderer.JSON(response, http.StatusOK, provider.Backends)
|
||||
if err != nil {
|
||||
log.Error(err)
|
||||
}
|
||||
} else {
|
||||
http.NotFound(response, request)
|
||||
}
|
||||
}
|
||||
|
||||
func (p Handler) getBackendHandler(response http.ResponseWriter, request *http.Request) {
|
||||
vars := mux.Vars(request)
|
||||
providerID := getProviderIDFromVars(vars)
|
||||
backendID := vars["backend"]
|
||||
|
||||
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
|
||||
if provider, ok := currentConfigurations[providerID]; ok {
|
||||
if backend, ok := provider.Backends[backendID]; ok {
|
||||
err := templatesRenderer.JSON(response, http.StatusOK, backend)
|
||||
if err != nil {
|
||||
log.Error(err)
|
||||
}
|
||||
return
|
||||
}
|
||||
}
|
||||
http.NotFound(response, request)
|
||||
}
|
||||
|
||||
func (p Handler) getServersHandler(response http.ResponseWriter, request *http.Request) {
|
||||
vars := mux.Vars(request)
|
||||
providerID := getProviderIDFromVars(vars)
|
||||
backendID := vars["backend"]
|
||||
|
||||
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
|
||||
if provider, ok := currentConfigurations[providerID]; ok {
|
||||
if backend, ok := provider.Backends[backendID]; ok {
|
||||
err := templatesRenderer.JSON(response, http.StatusOK, backend.Servers)
|
||||
if err != nil {
|
||||
log.Error(err)
|
||||
}
|
||||
return
|
||||
}
|
||||
}
|
||||
http.NotFound(response, request)
|
||||
}
|
||||
|
||||
func (p Handler) getServerHandler(response http.ResponseWriter, request *http.Request) {
|
||||
vars := mux.Vars(request)
|
||||
providerID := getProviderIDFromVars(vars)
|
||||
backendID := vars["backend"]
|
||||
serverID := vars["server"]
|
||||
|
||||
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
|
||||
if provider, ok := currentConfigurations[providerID]; ok {
|
||||
if backend, ok := provider.Backends[backendID]; ok {
|
||||
if server, ok := backend.Servers[serverID]; ok {
|
||||
err := templatesRenderer.JSON(response, http.StatusOK, server)
|
||||
if err != nil {
|
||||
log.Error(err)
|
||||
}
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
http.NotFound(response, request)
|
||||
}
|
||||
|
||||
func (p Handler) getFrontendsHandler(response http.ResponseWriter, request *http.Request) {
|
||||
providerID := getProviderIDFromVars(mux.Vars(request))
|
||||
|
||||
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
|
||||
if provider, ok := currentConfigurations[providerID]; ok {
|
||||
err := templatesRenderer.JSON(response, http.StatusOK, provider.Frontends)
|
||||
if err != nil {
|
||||
log.Error(err)
|
||||
}
|
||||
} else {
|
||||
http.NotFound(response, request)
|
||||
}
|
||||
}
|
||||
|
||||
func (p Handler) getFrontendHandler(response http.ResponseWriter, request *http.Request) {
|
||||
vars := mux.Vars(request)
|
||||
providerID := getProviderIDFromVars(vars)
|
||||
frontendID := vars["frontend"]
|
||||
|
||||
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
|
||||
if provider, ok := currentConfigurations[providerID]; ok {
|
||||
if frontend, ok := provider.Frontends[frontendID]; ok {
|
||||
err := templatesRenderer.JSON(response, http.StatusOK, frontend)
|
||||
if err != nil {
|
||||
log.Error(err)
|
||||
}
|
||||
return
|
||||
}
|
||||
}
|
||||
http.NotFound(response, request)
|
||||
}
|
||||
|
||||
func (p Handler) getRoutesHandler(response http.ResponseWriter, request *http.Request) {
|
||||
vars := mux.Vars(request)
|
||||
providerID := getProviderIDFromVars(vars)
|
||||
frontendID := vars["frontend"]
|
||||
|
||||
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
|
||||
if provider, ok := currentConfigurations[providerID]; ok {
|
||||
if frontend, ok := provider.Frontends[frontendID]; ok {
|
||||
err := templatesRenderer.JSON(response, http.StatusOK, frontend.Routes)
|
||||
if err != nil {
|
||||
log.Error(err)
|
||||
}
|
||||
return
|
||||
}
|
||||
}
|
||||
http.NotFound(response, request)
|
||||
}
|
||||
|
||||
func (p Handler) getRouteHandler(response http.ResponseWriter, request *http.Request) {
|
||||
vars := mux.Vars(request)
|
||||
providerID := getProviderIDFromVars(vars)
|
||||
frontendID := vars["frontend"]
|
||||
routeID := vars["route"]
|
||||
|
||||
currentConfigurations := p.CurrentConfigurations.Get().(types.Configurations)
|
||||
if provider, ok := currentConfigurations[providerID]; ok {
|
||||
if frontend, ok := provider.Frontends[frontendID]; ok {
|
||||
if route, ok := frontend.Routes[routeID]; ok {
|
||||
err := templatesRenderer.JSON(response, http.StatusOK, route)
|
||||
if err != nil {
|
||||
log.Error(err)
|
||||
}
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
http.NotFound(response, request)
|
||||
}
|
||||
|
||||
// healthResponse combines data returned by thoas/stats with statistics (if
|
||||
// they are enabled).
|
||||
type healthResponse struct {
|
||||
*thoas_stats.Data
|
||||
*middlewares.Stats
|
||||
}
|
||||
|
||||
func (p *Handler) getHealthHandler(response http.ResponseWriter, request *http.Request) {
|
||||
health := &healthResponse{Data: p.Stats.Data()}
|
||||
if p.StatsRecorder != nil {
|
||||
health.Stats = p.StatsRecorder.Data()
|
||||
}
|
||||
err := templatesRenderer.JSON(response, http.StatusOK, health)
|
||||
if err != nil {
|
||||
log.Error(err)
|
||||
}
|
||||
}
|
||||
@@ -1,990 +0,0 @@
|
||||
// Code generated by go-bindata.
|
||||
// sources:
|
||||
// templates/consul_catalog.tmpl
|
||||
// templates/docker.tmpl
|
||||
// templates/ecs.tmpl
|
||||
// templates/eureka.tmpl
|
||||
// templates/kubernetes.tmpl
|
||||
// templates/kv.tmpl
|
||||
// templates/marathon.tmpl
|
||||
// templates/mesos.tmpl
|
||||
// templates/notFound.tmpl
|
||||
// templates/rancher.tmpl
|
||||
// DO NOT EDIT!
|
||||
|
||||
package gentemplates
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
type asset struct {
|
||||
bytes []byte
|
||||
info os.FileInfo
|
||||
}
|
||||
|
||||
type bindataFileInfo struct {
|
||||
name string
|
||||
size int64
|
||||
mode os.FileMode
|
||||
modTime time.Time
|
||||
}
|
||||
|
||||
func (fi bindataFileInfo) Name() string {
|
||||
return fi.name
|
||||
}
|
||||
func (fi bindataFileInfo) Size() int64 {
|
||||
return fi.size
|
||||
}
|
||||
func (fi bindataFileInfo) Mode() os.FileMode {
|
||||
return fi.mode
|
||||
}
|
||||
func (fi bindataFileInfo) ModTime() time.Time {
|
||||
return fi.modTime
|
||||
}
|
||||
func (fi bindataFileInfo) IsDir() bool {
|
||||
return false
|
||||
}
|
||||
func (fi bindataFileInfo) Sys() interface{} {
|
||||
return nil
|
||||
}
|
||||
|
||||
var _templatesConsul_catalogTmpl = []byte(`[backends]
|
||||
{{range $index, $node := .Nodes}}
|
||||
[backends."backend-{{getBackend $node}}".servers."{{getBackendName $node $index}}"]
|
||||
url = "{{getAttribute "protocol" $node.Service.Tags "http"}}://{{getBackendAddress $node}}:{{$node.Service.Port}}"
|
||||
{{$weight := getAttribute "backend.weight" $node.Service.Tags "0"}}
|
||||
{{with $weight}}
|
||||
weight = {{$weight}}
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
{{range .Services}}
|
||||
{{$service := .ServiceName}}
|
||||
{{$circuitBreaker := getAttribute "backend.circuitbreaker" .Attributes ""}}
|
||||
{{with $circuitBreaker}}
|
||||
[backends."backend-{{$service}}".circuitbreaker]
|
||||
expression = "{{$circuitBreaker}}"
|
||||
{{end}}
|
||||
|
||||
[backends."backend-{{$service}}".loadbalancer]
|
||||
method = "{{getAttribute "backend.loadbalancer" .Attributes "wrr"}}"
|
||||
sticky = {{getSticky .Attributes}}
|
||||
{{if hasStickinessLabel .Attributes}}
|
||||
[backends."backend-{{$service}}".loadbalancer.stickiness]
|
||||
cookieName = "{{getStickinessCookieName .Attributes}}"
|
||||
{{end}}
|
||||
|
||||
{{if hasMaxconnAttributes .Attributes}}
|
||||
[backends."backend-{{$service}}".maxconn]
|
||||
amount = {{getAttribute "backend.maxconn.amount" .Attributes "" }}
|
||||
extractorfunc = "{{getAttribute "backend.maxconn.extractorfunc" .Attributes "" }}"
|
||||
{{end}}
|
||||
|
||||
{{end}}
|
||||
|
||||
[frontends]
|
||||
{{range .Services}}
|
||||
[frontends."frontend-{{.ServiceName}}"]
|
||||
backend = "backend-{{.ServiceName}}"
|
||||
passHostHeader = {{getAttribute "frontend.passHostHeader" .Attributes "true"}}
|
||||
priority = {{getAttribute "frontend.priority" .Attributes "0"}}
|
||||
{{$entryPoints := getAttribute "frontend.entrypoints" .Attributes ""}}
|
||||
{{with $entryPoints}}
|
||||
entrypoints = [{{range getEntryPoints $entryPoints}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
{{end}}
|
||||
basicAuth = [{{range getBasicAuth .Attributes}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
[frontends."frontend-{{.ServiceName}}".routes."route-host-{{.ServiceName}}"]
|
||||
rule = "{{getFrontendRule .}}"
|
||||
{{end}}
|
||||
`)
|
||||
|
||||
func templatesConsul_catalogTmplBytes() ([]byte, error) {
|
||||
return _templatesConsul_catalogTmpl, nil
|
||||
}
|
||||
|
||||
func templatesConsul_catalogTmpl() (*asset, error) {
|
||||
bytes, err := templatesConsul_catalogTmplBytes()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
info := bindataFileInfo{name: "templates/consul_catalog.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
|
||||
a := &asset{bytes: bytes, info: info}
|
||||
return a, nil
|
||||
}
|
||||
|
||||
var _templatesDockerTmpl = []byte(`{{$backendServers := .Servers}}
|
||||
[backends]{{range $backendName, $backend := .Backends}}
|
||||
{{if hasCircuitBreakerLabel $backend}}
|
||||
[backends.backend-{{$backendName}}.circuitbreaker]
|
||||
expression = "{{getCircuitBreakerExpression $backend}}"
|
||||
{{end}}
|
||||
|
||||
{{if hasLoadBalancerLabel $backend}}
|
||||
[backends.backend-{{$backendName}}.loadbalancer]
|
||||
method = "{{getLoadBalancerMethod $backend}}"
|
||||
sticky = {{getSticky $backend}}
|
||||
{{if hasStickinessLabel $backend}}
|
||||
[backends.backend-{{$backendName}}.loadbalancer.stickiness]
|
||||
cookieName = "{{getStickinessCookieName $backend}}"
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
{{if hasMaxConnLabels $backend}}
|
||||
[backends.backend-{{$backendName}}.maxconn]
|
||||
amount = {{getMaxConnAmount $backend}}
|
||||
extractorfunc = "{{getMaxConnExtractorFunc $backend}}"
|
||||
{{end}}
|
||||
|
||||
{{$servers := index $backendServers $backendName}}
|
||||
{{range $serverName, $server := $servers}}
|
||||
{{if hasServices $server}}
|
||||
{{$services := getServiceNames $server}}
|
||||
{{range $serviceIndex, $serviceName := $services}}
|
||||
[backends.backend-{{getServiceBackend $server $serviceName}}.servers.service-{{$serverName}}]
|
||||
url = "{{getServiceProtocol $server $serviceName}}://{{getIPAddress $server}}:{{getServicePort $server $serviceName}}"
|
||||
weight = {{getServiceWeight $server $serviceName}}
|
||||
{{end}}
|
||||
{{else}}
|
||||
[backends.backend-{{$backendName}}.servers.server-{{$server.Name | replace "/" "" | replace "." "-"}}]
|
||||
url = "{{getProtocol $server}}://{{getIPAddress $server}}:{{getPort $server}}"
|
||||
weight = {{getWeight $server}}
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
{{end}}
|
||||
|
||||
[frontends]{{range $frontend, $containers := .Frontends}}
|
||||
{{$container := index $containers 0}}
|
||||
{{if hasServices $container}}
|
||||
{{$services := getServiceNames $container}}
|
||||
{{range $serviceIndex, $serviceName := $services}}
|
||||
[frontends."frontend-{{getServiceBackend $container $serviceName}}"]
|
||||
backend = "backend-{{getServiceBackend $container $serviceName}}"
|
||||
passHostHeader = {{getServicePassHostHeader $container $serviceName}}
|
||||
{{if getWhitelistSourceRange $container}}
|
||||
whitelistSourceRange = [{{range getWhitelistSourceRange $container}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
{{end}}
|
||||
priority = {{getServicePriority $container $serviceName}}
|
||||
entryPoints = [{{range getServiceEntryPoints $container $serviceName}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
basicAuth = [{{range getServiceBasicAuth $container $serviceName}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
|
||||
{{if hasServiceRedirect $container $serviceName}}
|
||||
[frontends."frontend-{{getServiceBackend $container $serviceName}}".redirect]
|
||||
entryPoint = "{{getServiceRedirectEntryPoint $container $serviceName}}"
|
||||
regex = "{{getServiceRedirectRegex $container $serviceName}}"
|
||||
replacement = "{{getServiceRedirectReplacement $container $serviceName}}"
|
||||
{{end}}
|
||||
|
||||
[frontends."frontend-{{getServiceBackend $container $serviceName}}".routes."service-{{$serviceName | replace "/" "" | replace "." "-"}}"]
|
||||
rule = "{{getServiceFrontendRule $container $serviceName}}"
|
||||
{{end}}
|
||||
{{else}}
|
||||
[frontends."frontend-{{$frontend}}"]
|
||||
backend = "backend-{{getBackend $container}}"
|
||||
passHostHeader = {{getPassHostHeader $container}}
|
||||
{{if getWhitelistSourceRange $container}}
|
||||
whitelistSourceRange = [{{range getWhitelistSourceRange $container}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
{{end}}
|
||||
priority = {{getPriority $container}}
|
||||
entryPoints = [{{range getEntryPoints $container}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
basicAuth = [{{range getBasicAuth $container}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
|
||||
{{if hasRedirect $container}}
|
||||
[frontends."frontend-{{$frontend}}".redirect]
|
||||
entryPoint = "{{getRedirectEntryPoint $container}}"
|
||||
regex = "{{getRedirectRegex $container}}"
|
||||
replacement = "{{getRedirectReplacement $container}}"
|
||||
{{end}}
|
||||
|
||||
{{ if hasHeaders $container}}
|
||||
[frontends."frontend-{{$frontend}}".headers]
|
||||
{{if hasSSLRedirectHeaders $container}}
|
||||
SSLRedirect = {{getSSLRedirectHeaders $container}}
|
||||
{{end}}
|
||||
{{if hasSSLTemporaryRedirectHeaders $container}}
|
||||
SSLTemporaryRedirect = {{getSSLTemporaryRedirectHeaders $container}}
|
||||
{{end}}
|
||||
{{if hasSSLHostHeaders $container}}
|
||||
SSLHost = "{{getSSLHostHeaders $container}}"
|
||||
{{end}}
|
||||
{{if hasSTSSecondsHeaders $container}}
|
||||
STSSeconds = {{getSTSSecondsHeaders $container}}
|
||||
{{end}}
|
||||
{{if hasSTSIncludeSubdomainsHeaders $container}}
|
||||
STSIncludeSubdomains = {{getSTSIncludeSubdomainsHeaders $container}}
|
||||
{{end}}
|
||||
{{if hasSTSPreloadHeaders $container}}
|
||||
STSPreload = {{getSTSPreloadHeaders $container}}
|
||||
{{end}}
|
||||
{{if hasForceSTSHeaderHeaders $container}}
|
||||
ForceSTSHeader = {{getForceSTSHeaderHeaders $container}}
|
||||
{{end}}
|
||||
{{if hasFrameDenyHeaders $container}}
|
||||
FrameDeny = {{getFrameDenyHeaders $container}}
|
||||
{{end}}
|
||||
{{if hasCustomFrameOptionsValueHeaders $container}}
|
||||
CustomFrameOptionsValue = "{{getCustomFrameOptionsValueHeaders $container}}"
|
||||
{{end}}
|
||||
{{if hasContentTypeNosniffHeaders $container}}
|
||||
ContentTypeNosniff = {{getContentTypeNosniffHeaders $container}}
|
||||
{{end}}
|
||||
{{if hasBrowserXSSFilterHeaders $container}}
|
||||
BrowserXSSFilter = {{getBrowserXSSFilterHeaders $container}}
|
||||
{{end}}
|
||||
{{if hasContentSecurityPolicyHeaders $container}}
|
||||
ContentSecurityPolicy = "{{getContentSecurityPolicyHeaders $container}}"
|
||||
{{end}}
|
||||
{{if hasPublicKeyHeaders $container}}
|
||||
PublicKey = "{{getPublicKeyHeaders $container}}"
|
||||
{{end}}
|
||||
{{if hasReferrerPolicyHeaders $container}}
|
||||
ReferrerPolicy = "{{getReferrerPolicyHeaders $container}}"
|
||||
{{end}}
|
||||
{{if hasIsDevelopmentHeaders $container}}
|
||||
IsDevelopment = {{getIsDevelopmentHeaders $container}}
|
||||
{{end}}
|
||||
{{if hasAllowedHostsHeaders $container}}
|
||||
AllowedHosts = [{{range getAllowedHostsHeaders $container}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
{{end}}
|
||||
{{if hasHostsProxyHeaders $container}}
|
||||
HostsProxyHeaders = [{{range getHostsProxyHeaders $container}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
{{end}}
|
||||
{{if hasRequestHeaders $container}}
|
||||
[frontends."frontend-{{$frontend}}".headers.customrequestheaders]
|
||||
{{range $k, $v := getRequestHeaders $container}}
|
||||
{{$k}} = "{{$v}}"
|
||||
{{end}}
|
||||
{{end}}
|
||||
{{if hasResponseHeaders $container}}
|
||||
[frontends."frontend-{{$frontend}}".headers.customresponseheaders]
|
||||
{{range $k, $v := getResponseHeaders $container}}
|
||||
{{$k}} = "{{$v}}"
|
||||
{{end}}
|
||||
{{end}}
|
||||
{{if hasSSLProxyHeaders $container}}
|
||||
[frontends."frontend-{{$frontend}}".headers.SSLProxyHeaders]
|
||||
{{range $k, $v := getSSLProxyHeaders $container}}
|
||||
{{$k}} = "{{$v}}"
|
||||
{{end}}
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
[frontends."frontend-{{$frontend}}".routes."route-frontend-{{$frontend}}"]
|
||||
rule = "{{getFrontendRule $container}}"
|
||||
{{end}}
|
||||
{{end}}
|
||||
`)
|
||||
|
||||
func templatesDockerTmplBytes() ([]byte, error) {
|
||||
return _templatesDockerTmpl, nil
|
||||
}
|
||||
|
||||
func templatesDockerTmpl() (*asset, error) {
|
||||
bytes, err := templatesDockerTmplBytes()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
info := bindataFileInfo{name: "templates/docker.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
|
||||
a := &asset{bytes: bytes, info: info}
|
||||
return a, nil
|
||||
}
|
||||
|
||||
var _templatesEcsTmpl = []byte(`[backends]{{range $serviceName, $instances := .Services}}
|
||||
[backends.backend-{{ $serviceName }}.loadbalancer]
|
||||
method = "{{ getLoadBalancerMethod $instances}}"
|
||||
sticky = {{ getLoadBalancerSticky $instances}}
|
||||
{{if hasStickinessLabel $instances}}
|
||||
[backends.backend-{{ $serviceName }}.loadbalancer.stickiness]
|
||||
cookieName = "{{getStickinessCookieName $instances}}"
|
||||
{{end}}
|
||||
{{ if hasHealthCheckLabels $instances }}
|
||||
[backends.backend-{{ $serviceName }}.healthcheck]
|
||||
path = "{{getHealthCheckPath $instances }}"
|
||||
interval = "{{getHealthCheckInterval $instances }}"
|
||||
{{end}}
|
||||
|
||||
{{range $index, $i := $instances}}
|
||||
[backends.backend-{{ $i.Name }}.servers.server-{{ $i.Name }}{{ $i.ID }}]
|
||||
url = "{{ getProtocol $i }}://{{ getHost $i }}:{{ getPort $i }}"
|
||||
weight = {{ getWeight $i}}
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
[frontends]{{range $serviceName, $instances := .Services}}
|
||||
{{range filterFrontends $instances}}
|
||||
[frontends.frontend-{{ $serviceName }}]
|
||||
backend = "backend-{{ $serviceName }}"
|
||||
passHostHeader = {{ getPassHostHeader .}}
|
||||
priority = {{ getPriority .}}
|
||||
entryPoints = [{{range getEntryPoints .}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
basicAuth = [{{range getBasicAuth .}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
[frontends.frontend-{{ $serviceName }}.routes.route-frontend-{{ $serviceName }}]
|
||||
rule = "{{getFrontendRule .}}"
|
||||
{{end}}
|
||||
{{end}}`)
|
||||
|
||||
func templatesEcsTmplBytes() ([]byte, error) {
|
||||
return _templatesEcsTmpl, nil
|
||||
}
|
||||
|
||||
func templatesEcsTmpl() (*asset, error) {
|
||||
bytes, err := templatesEcsTmplBytes()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
info := bindataFileInfo{name: "templates/ecs.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
|
||||
a := &asset{bytes: bytes, info: info}
|
||||
return a, nil
|
||||
}
|
||||
|
||||
var _templatesEurekaTmpl = []byte(`[backends]{{range .Applications}}
|
||||
{{ $app := .}}
|
||||
{{range .Instances}}
|
||||
[backends.backend{{$app.Name}}.servers.server-{{ getInstanceID . }}]
|
||||
url = "{{ getProtocol . }}://{{ .IpAddr }}:{{ getPort . }}"
|
||||
weight = {{ getWeight . }}
|
||||
{{end}}{{end}}
|
||||
|
||||
[frontends]{{range .Applications}}
|
||||
[frontends.frontend{{.Name}}]
|
||||
backend = "backend{{.Name}}"
|
||||
entryPoints = ["http"]
|
||||
[frontends.frontend{{.Name }}.routes.route-host{{.Name}}]
|
||||
rule = "Host:{{ .Name | tolower }}"
|
||||
{{end}}
|
||||
`)
|
||||
|
||||
func templatesEurekaTmplBytes() ([]byte, error) {
|
||||
return _templatesEurekaTmpl, nil
|
||||
}
|
||||
|
||||
func templatesEurekaTmpl() (*asset, error) {
|
||||
bytes, err := templatesEurekaTmplBytes()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
info := bindataFileInfo{name: "templates/eureka.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
|
||||
a := &asset{bytes: bytes, info: info}
|
||||
return a, nil
|
||||
}
|
||||
|
||||
var _templatesKubernetesTmpl = []byte(`[backends]{{range $backendName, $backend := .Backends}}
|
||||
[backends."{{$backendName}}"]
|
||||
{{if $backend.CircuitBreaker}}
|
||||
[backends."{{$backendName}}".circuitbreaker]
|
||||
expression = "{{$backend.CircuitBreaker.Expression}}"
|
||||
{{end}}
|
||||
[backends."{{$backendName}}".loadbalancer]
|
||||
method = "{{$backend.LoadBalancer.Method}}"
|
||||
{{if $backend.LoadBalancer.Sticky}}
|
||||
sticky = true
|
||||
{{end}}
|
||||
{{if $backend.LoadBalancer.Stickiness}}
|
||||
[backends."{{$backendName}}".loadbalancer.stickiness]
|
||||
cookieName = "{{$backend.LoadBalancer.Stickiness.CookieName}}"
|
||||
{{end}}
|
||||
{{range $serverName, $server := $backend.Servers}}
|
||||
[backends."{{$backendName}}".servers."{{$serverName}}"]
|
||||
url = "{{$server.URL}}"
|
||||
weight = {{$server.Weight}}
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
[frontends]{{range $frontendName, $frontend := .Frontends}}
|
||||
[frontends."{{$frontendName}}"]
|
||||
backend = "{{$frontend.Backend}}"
|
||||
priority = {{$frontend.Priority}}
|
||||
passHostHeader = {{$frontend.PassHostHeader}}
|
||||
entryPoints = [{{range $frontend.EntryPoints}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
basicAuth = [{{range $frontend.BasicAuth}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
whitelistSourceRange = [{{range $frontend.WhitelistSourceRange}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
|
||||
{{if $frontend.Redirect}}
|
||||
[frontends."{{$frontendName}}".redirect]
|
||||
entryPoint = "{{$frontend.Redirect.EntryPoint}}"
|
||||
regex = "{{$frontend.Redirect.Regex}}"
|
||||
replacement = "{{$frontend.Redirect.Replacement}}"
|
||||
{{end}}
|
||||
|
||||
{{ if $frontend.Headers }}
|
||||
[frontends."{{$frontendName}}".headers]
|
||||
SSLRedirect = {{$frontend.Headers.SSLRedirect}}
|
||||
SSLTemporaryRedirect = {{$frontend.Headers.SSLTemporaryRedirect}}
|
||||
SSLHost = "{{$frontend.Headers.SSLHost}}"
|
||||
STSSeconds = {{$frontend.Headers.STSSeconds}}
|
||||
STSIncludeSubdomains = {{$frontend.Headers.STSIncludeSubdomains}}
|
||||
STSPreload = {{$frontend.Headers.STSPreload}}
|
||||
ForceSTSHeader = {{$frontend.Headers.ForceSTSHeader}}
|
||||
FrameDeny = {{$frontend.Headers.FrameDeny}}
|
||||
CustomFrameOptionsValue = "{{$frontend.Headers.CustomFrameOptionsValue}}"
|
||||
ContentTypeNosniff = {{$frontend.Headers.ContentTypeNosniff}}
|
||||
BrowserXSSFilter = {{$frontend.Headers.BrowserXSSFilter}}
|
||||
ContentSecurityPolicy = "{{$frontend.Headers.ContentSecurityPolicy}}"
|
||||
PublicKey = "{{$frontend.Headers.PublicKey}}"
|
||||
ReferrerPolicy = "{{$frontend.Headers.ReferrerPolicy}}"
|
||||
IsDevelopment = {{$frontend.Headers.IsDevelopment}}
|
||||
|
||||
{{if $frontend.Headers.AllowedHosts}}
|
||||
AllowedHosts = [{{range $frontend.Headers.AllowedHosts}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
{{end}}
|
||||
|
||||
{{if $frontend.Headers.HostsProxyHeaders}}
|
||||
HostsProxyHeaders = [{{range $frontend.Headers.HostsProxyHeaders}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
{{end}}
|
||||
|
||||
{{if $frontend.Headers.CustomRequestHeaders}}
|
||||
[frontends."{{$frontendName}}".headers.customrequestheaders]
|
||||
{{range $k, $v := $frontend.Headers.CustomRequestHeaders}}
|
||||
{{$k}} = "{{$v}}"
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
{{if $frontend.Headers.CustomResponseHeaders}}
|
||||
[frontends."{{$frontendName}}".headers.customresponseheaders]
|
||||
{{range $k, $v := $frontend.Headers.CustomResponseHeaders}}
|
||||
{{$k}} = "{{$v}}"
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
{{if $frontend.Headers.SSLProxyHeaders}}
|
||||
[frontends."{{$frontendName}}".headers.SSLProxyHeaders]
|
||||
{{range $k, $v := $frontend.Headers.SSLProxyHeaders}}
|
||||
{{$k}} = "{{$v}}"
|
||||
{{end}}
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
{{range $routeName, $route := $frontend.Routes}}
|
||||
[frontends."{{$frontendName}}".routes."{{$routeName}}"]
|
||||
rule = "{{$route.Rule}}"
|
||||
{{end}}
|
||||
{{end}}
|
||||
`)
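
The Kubernetes template above uses only standard text/template constructs (range, if, field access) and no provider-specific functions, so it can be parsed and rendered on its own. The sketch below is illustrative only and not part of this diff: it assumes it lives in the same package as the generated `Asset` accessor defined further down, that `"bytes"` and `"text/template"` are imported, and it feeds the template an empty stub configuration so the range blocks simply emit the bare `[backends]`/`[frontends]` skeleton.

```go
// Illustrative sketch only; stubConfig is a hypothetical stand-in for the
// provider's real configuration structures.
type stubConfig struct {
	Backends  map[string]interface{}
	Frontends map[string]interface{}
}

func renderKubernetesTemplate() (string, error) {
	// Asset is the go-bindata accessor generated later in this file.
	raw, err := Asset("templates/kubernetes.tmpl")
	if err != nil {
		return "", err
	}
	tmpl, err := template.New("kubernetes").Parse(string(raw))
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	// With empty maps the range blocks emit nothing, leaving the TOML skeleton.
	if err := tmpl.Execute(&buf, stubConfig{}); err != nil {
		return "", err
	}
	return buf.String(), nil
}
```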
|
||||
|
||||
func templatesKubernetesTmplBytes() ([]byte, error) {
|
||||
return _templatesKubernetesTmpl, nil
|
||||
}
|
||||
|
||||
func templatesKubernetesTmpl() (*asset, error) {
|
||||
bytes, err := templatesKubernetesTmplBytes()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
info := bindataFileInfo{name: "templates/kubernetes.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
|
||||
a := &asset{bytes: bytes, info: info}
|
||||
return a, nil
|
||||
}
|
||||
|
||||
var _templatesKvTmpl = []byte(`{{$frontends := List .Prefix "/frontends/" }}
|
||||
{{$backends := List .Prefix "/backends/"}}
|
||||
{{$tls := List .Prefix "/tls/"}}
|
||||
|
||||
[backends]{{range $backends}}
|
||||
{{$backend := .}}
|
||||
{{$backendName := Last $backend}}
|
||||
{{$servers := ListServers $backend }}
|
||||
|
||||
{{$circuitBreaker := Get "" . "/circuitbreaker/" "expression"}}
|
||||
{{with $circuitBreaker}}
|
||||
[backends."{{$backendName}}".circuitBreaker]
|
||||
expression = "{{$circuitBreaker}}"
|
||||
{{end}}
|
||||
|
||||
{{$loadBalancer := Get "" . "/loadbalancer/" "method"}}
|
||||
{{with $loadBalancer}}
|
||||
[backends."{{$backendName}}".loadBalancer]
|
||||
method = "{{$loadBalancer}}"
|
||||
sticky = {{ getSticky . }}
|
||||
{{if hasStickinessLabel $backend}}
|
||||
[backends."{{$backendName}}".loadBalancer.stickiness]
|
||||
cookieName = "{{getStickinessCookieName $backend}}"
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
{{$healthCheck := Get "" . "/healthcheck/" "path"}}
|
||||
{{with $healthCheck}}
|
||||
[backends."{{$backendName}}".healthCheck]
|
||||
path = "{{$healthCheck}}"
|
||||
interval = "{{ Get "30s" $backend "/healthcheck/" "interval" }}"
|
||||
{{end}}
|
||||
|
||||
{{$maxConnAmt := Get "" . "/maxconn/" "amount"}}
|
||||
{{$maxConnExtractorFunc := Get "" . "/maxconn/" "extractorfunc"}}
|
||||
{{with $maxConnAmt}}
|
||||
{{with $maxConnExtractorFunc}}
|
||||
[backends."{{$backendName}}".maxConn]
|
||||
amount = {{$maxConnAmt}}
|
||||
extractorFunc = "{{$maxConnExtractorFunc}}"
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
{{range $servers}}
|
||||
[backends."{{$backendName}}".servers."{{Last .}}"]
|
||||
url = "{{Get "" . "/url"}}"
|
||||
weight = {{Get "0" . "/weight"}}
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
[frontends]{{range $frontends}}
|
||||
{{$frontend := Last .}}
|
||||
{{$entryPoints := GetList . "/entrypoints"}}
|
||||
[frontends."{{$frontend}}"]
|
||||
backend = "{{Get "" . "/backend"}}"
|
||||
{{ $passHostHeader := Get "" . "/passhostheader"}}
|
||||
{{if $passHostHeader}}
|
||||
passHostHeader = {{ $passHostHeader }}
|
||||
{{else}}
|
||||
# keep for compatibility reasons
|
||||
passHostHeader = {{Get "true" . "/passHostHeader"}}
|
||||
{{end}}
|
||||
priority = {{Get "0" . "/priority"}}
|
||||
entryPoints = [{{range $entryPoints}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
{{$routes := List . "/routes/"}}
|
||||
{{range $routes}}
|
||||
[frontends."{{$frontend}}".routes."{{Last .}}"]
|
||||
rule = "{{Get "" . "/rule"}}"
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
{{range $tls}}
|
||||
{{$entryPoints := SplitGet . "/entrypoints"}}
|
||||
[[tls]]
|
||||
entryPoints = [{{range $entryPoints}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
[tls.certificate]
|
||||
certFile = """{{Get "" . "/certificate" "/certfile"}}"""
|
||||
keyFile = """{{Get "" . "/certificate" "/keyfile"}}"""
|
||||
{{end}}
|
||||
|
||||
`)
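
Unlike the provider templates above, the KV template walks a key/value store: `List`, `Get`, `GetList`, and `Last` are helpers supplied by the KV provider. As a purely illustrative aid (not part of this diff, with a hypothetical `traefik` prefix and made-up values), the key layout it reads looks roughly like this:

```go
// Hypothetical key/value pairs matching the layout read by templates/kv.tmpl;
// the "traefik" prefix and every value here are assumptions for illustration.
var exampleKVPairs = map[string]string{
	"traefik/backends/backend1/servers/server1/url":        "http://10.0.0.1:80",
	"traefik/backends/backend1/servers/server1/weight":     "10",
	"traefik/backends/backend1/loadbalancer/method":        "drr",
	"traefik/backends/backend1/circuitbreaker/expression":  "NetworkErrorRatio() > 0.5",
	"traefik/frontends/frontend1/backend":                  "backend1",
	"traefik/frontends/frontend1/passhostheader":           "true",
	"traefik/frontends/frontend1/entrypoints":              "http,https",
	"traefik/frontends/frontend1/routes/route1/rule":       "Host:example.com",
}
```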
|
||||
|
||||
func templatesKvTmplBytes() ([]byte, error) {
|
||||
return _templatesKvTmpl, nil
|
||||
}
|
||||
|
||||
func templatesKvTmpl() (*asset, error) {
|
||||
bytes, err := templatesKvTmplBytes()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
info := bindataFileInfo{name: "templates/kv.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
|
||||
a := &asset{bytes: bytes, info: info}
|
||||
return a, nil
|
||||
}
|
||||
|
||||
var _templatesMarathonTmpl = []byte(`{{$apps := .Applications}}
|
||||
|
||||
{{range $app := $apps}}
|
||||
{{range $task := $app.Tasks}}
|
||||
{{range $serviceIndex, $serviceName := getServiceNames $app}}
|
||||
[backends."backend{{getBackend $app $serviceName}}".servers."server-{{$task.ID | replace "." "-"}}{{getServiceNameSuffix $serviceName }}"]
|
||||
url = "{{getProtocol $app $serviceName}}://{{getBackendServer $task $app}}:{{getPort $task $app $serviceName}}"
|
||||
weight = {{getWeight $app $serviceName}}
|
||||
{{end}}
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
{{range $app := $apps}}
|
||||
{{range $serviceIndex, $serviceName := getServiceNames $app}}
|
||||
[backends."backend{{getBackend $app $serviceName }}"]
|
||||
{{ if hasMaxConnLabels $app }}
|
||||
[backends."backend{{getBackend $app $serviceName }}".maxconn]
|
||||
amount = {{getMaxConnAmount $app }}
|
||||
extractorfunc = "{{getMaxConnExtractorFunc $app }}"
|
||||
{{end}}
|
||||
{{ if hasLoadBalancerLabels $app }}
|
||||
[backends."backend{{getBackend $app $serviceName }}".loadbalancer]
|
||||
method = "{{getLoadBalancerMethod $app }}"
|
||||
sticky = {{getSticky $app}}
|
||||
{{if hasStickinessLabel $app}}
|
||||
[backends."backend{{getBackend $app $serviceName }}".loadbalancer.stickiness]
|
||||
cookieName = "{{getStickinessCookieName $app}}"
|
||||
{{end}}
|
||||
{{end}}
|
||||
{{ if hasCircuitBreakerLabels $app }}
|
||||
[backends."backend{{getBackend $app $serviceName }}".circuitbreaker]
|
||||
expression = "{{getCircuitBreakerExpression $app }}"
|
||||
{{end}}
|
||||
{{ if hasHealthCheckLabels $app }}
|
||||
[backends."backend{{getBackend $app $serviceName }}".healthcheck]
|
||||
path = "{{getHealthCheckPath $app }}"
|
||||
interval = "{{getHealthCheckInterval $app }}"
|
||||
{{end}}
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
[frontends]{{range $app := $apps}}{{range $serviceIndex, $serviceName := getServiceNames .}}
|
||||
[frontends."{{ getFrontendName $app $serviceName }}"]
|
||||
backend = "backend{{getBackend $app $serviceName}}"
|
||||
passHostHeader = {{getPassHostHeader $app $serviceName}}
|
||||
priority = {{getPriority $app $serviceName}}
|
||||
entryPoints = [{{range getEntryPoints $app $serviceName}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
basicAuth = [{{range getBasicAuth $app $serviceName}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
[frontends."{{ getFrontendName $app $serviceName }}".routes."route-host{{$app.ID | replace "/" "-"}}{{getServiceNameSuffix $serviceName }}"]
|
||||
rule = "{{getFrontendRule $app $serviceName}}"
|
||||
{{end}}{{end}}
|
||||
`)
|
||||
|
||||
func templatesMarathonTmplBytes() ([]byte, error) {
|
||||
return _templatesMarathonTmpl, nil
|
||||
}
|
||||
|
||||
func templatesMarathonTmpl() (*asset, error) {
|
||||
bytes, err := templatesMarathonTmplBytes()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
info := bindataFileInfo{name: "templates/marathon.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
|
||||
a := &asset{bytes: bytes, info: info}
|
||||
return a, nil
|
||||
}
|
||||
|
||||
var _templatesMesosTmpl = []byte(`{{$apps := .Applications}}
|
||||
[backends]{{range .Tasks}}
|
||||
[backends.backend{{getBackend . $apps}}.servers.server-{{getID .}}]
|
||||
url = "{{getProtocol . $apps}}://{{getHost .}}:{{getPort . $apps}}"
|
||||
weight = {{getWeight . $apps}}
|
||||
{{end}}
|
||||
|
||||
[frontends]{{range .Applications}}
|
||||
[frontends.frontend-{{getFrontEndName .}}]
|
||||
backend = "backend{{getFrontendBackend .}}"
|
||||
passHostHeader = {{getPassHostHeader .}}
|
||||
priority = {{getPriority .}}
|
||||
entryPoints = [{{range getEntryPoints .}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
[frontends.frontend-{{getFrontEndName .}}.routes.route-host{{getFrontEndName .}}]
|
||||
rule = "{{getFrontendRule .}}"
|
||||
{{end}}
|
||||
`)
|
||||
|
||||
func templatesMesosTmplBytes() ([]byte, error) {
|
||||
return _templatesMesosTmpl, nil
|
||||
}
|
||||
|
||||
func templatesMesosTmpl() (*asset, error) {
|
||||
bytes, err := templatesMesosTmplBytes()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
info := bindataFileInfo{name: "templates/mesos.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
|
||||
a := &asset{bytes: bytes, info: info}
|
||||
return a, nil
|
||||
}
|
||||
|
||||
var _templatesNotfoundTmpl = []byte(`<!DOCTYPE html>
<html>
<head>
<title>Traefik</title>
</head>
<body>
Ohhhh man, this is bad...
</body>
</html>`)
|
||||
|
||||
func templatesNotfoundTmplBytes() ([]byte, error) {
|
||||
return _templatesNotfoundTmpl, nil
|
||||
}
|
||||
|
||||
func templatesNotfoundTmpl() (*asset, error) {
|
||||
bytes, err := templatesNotfoundTmplBytes()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
info := bindataFileInfo{name: "templates/notFound.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
|
||||
a := &asset{bytes: bytes, info: info}
|
||||
return a, nil
|
||||
}
|
||||
|
||||
var _templatesRancherTmpl = []byte(`{{$backendServers := .Backends}}
|
||||
[backends]{{range $backendName, $backend := .Backends}}
|
||||
{{if hasCircuitBreakerLabel $backend}}
|
||||
[backends.backend-{{$backendName}}.circuitbreaker]
|
||||
expression = "{{getCircuitBreakerExpression $backend}}"
|
||||
{{end}}
|
||||
|
||||
{{if hasLoadBalancerLabel $backend}}
|
||||
[backends.backend-{{$backendName}}.loadbalancer]
|
||||
method = "{{getLoadBalancerMethod $backend}}"
|
||||
sticky = {{getSticky $backend}}
|
||||
{{if hasStickinessLabel $backend}}
|
||||
[backends.backend-{{$backendName}}.loadbalancer.stickiness]
|
||||
cookieName = "{{getStickinessCookieName $backend}}"
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
{{if hasMaxConnLabels $backend}}
|
||||
[backends.backend-{{$backendName}}.maxconn]
|
||||
amount = {{getMaxConnAmount $backend}}
|
||||
extractorfunc = "{{getMaxConnExtractorFunc $backend}}"
|
||||
{{end}}
|
||||
|
||||
{{range $index, $ip := $backend.Containers}}
|
||||
[backends.backend-{{$backendName}}.servers.server-{{$index}}]
|
||||
url = "{{getProtocol $backend}}://{{$ip}}:{{getPort $backend}}"
|
||||
weight = {{getWeight $backend}}
|
||||
{{end}}
|
||||
|
||||
{{end}}
|
||||
|
||||
[frontends]{{range $frontendName, $service := .Frontends}}
|
||||
[frontends."frontend-{{$frontendName}}"]
|
||||
backend = "backend-{{getBackend $service}}"
|
||||
passHostHeader = {{getPassHostHeader $service}}
|
||||
priority = {{getPriority $service}}
|
||||
entryPoints = [{{range getEntryPoints $service}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
basicAuth = [{{range getBasicAuth $service}}
|
||||
"{{.}}",
|
||||
{{end}}]
|
||||
|
||||
{{if hasRedirect $service}}
|
||||
[frontends."frontend-{{$frontendName}}".redirect]
|
||||
entryPoint = "{{getRedirectEntryPoint $service}}"
|
||||
regex = "{{getRedirectRegex $service}}"
|
||||
replacement = "{{getRedirectReplacement $service}}"
|
||||
{{end}}
|
||||
|
||||
[frontends."frontend-{{$frontendName}}".routes."route-frontend-{{$frontendName}}"]
|
||||
rule = "{{getFrontendRule $service}}"
|
||||
{{end}}
|
||||
`)
|
||||
|
||||
func templatesRancherTmplBytes() ([]byte, error) {
|
||||
return _templatesRancherTmpl, nil
|
||||
}
|
||||
|
||||
func templatesRancherTmpl() (*asset, error) {
|
||||
bytes, err := templatesRancherTmplBytes()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
info := bindataFileInfo{name: "templates/rancher.tmpl", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
|
||||
a := &asset{bytes: bytes, info: info}
|
||||
return a, nil
|
||||
}
|
||||
|
||||
// Asset loads and returns the asset for the given name.
// It returns an error if the asset could not be found or
// could not be loaded.
func Asset(name string) ([]byte, error) {
	cannonicalName := strings.Replace(name, "\\", "/", -1)
	if f, ok := _bindata[cannonicalName]; ok {
		a, err := f()
		if err != nil {
			return nil, fmt.Errorf("Asset %s can't be read: %v", name, err)
		}
		return a.bytes, nil
	}
	return nil, fmt.Errorf("Asset %s not found", name)
}
|
||||
|
||||
// MustAsset is like Asset but panics when Asset would return an error.
|
||||
// It simplifies safe initialization of global variables.
|
||||
func MustAsset(name string) []byte {
|
||||
a, err := Asset(name)
|
||||
if err != nil {
|
||||
panic("asset: Asset(" + name + "): " + err.Error())
|
||||
}
|
||||
|
||||
return a
|
||||
}
|
||||
|
||||
// AssetInfo loads and returns the asset info for the given name.
// It returns an error if the asset could not be found or
// could not be loaded.
func AssetInfo(name string) (os.FileInfo, error) {
	cannonicalName := strings.Replace(name, "\\", "/", -1)
	if f, ok := _bindata[cannonicalName]; ok {
		a, err := f()
		if err != nil {
			return nil, fmt.Errorf("AssetInfo %s can't be read: %v", name, err)
		}
		return a.info, nil
	}
	return nil, fmt.Errorf("AssetInfo %s not found", name)
}
|
||||
|
||||
// AssetNames returns the names of the assets.
|
||||
func AssetNames() []string {
|
||||
names := make([]string, 0, len(_bindata))
|
||||
for name := range _bindata {
|
||||
names = append(names, name)
|
||||
}
|
||||
return names
|
||||
}
|
||||
|
||||
// _bindata is a table, holding each asset generator, mapped to its name.
|
||||
var _bindata = map[string]func() (*asset, error){
|
||||
"templates/consul_catalog.tmpl": templatesConsul_catalogTmpl,
|
||||
"templates/docker.tmpl": templatesDockerTmpl,
|
||||
"templates/ecs.tmpl": templatesEcsTmpl,
|
||||
"templates/eureka.tmpl": templatesEurekaTmpl,
|
||||
"templates/kubernetes.tmpl": templatesKubernetesTmpl,
|
||||
"templates/kv.tmpl": templatesKvTmpl,
|
||||
"templates/marathon.tmpl": templatesMarathonTmpl,
|
||||
"templates/mesos.tmpl": templatesMesosTmpl,
|
||||
"templates/notFound.tmpl": templatesNotfoundTmpl,
|
||||
"templates/rancher.tmpl": templatesRancherTmpl,
|
||||
}
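
For reference only (not part of this diff), a minimal sketch of how the accessors above can be used from within the same generated package; `fmt` is already imported here, since the generated functions call `fmt.Errorf`.

```go
// Sketch only: list every embedded template and its size.
func listEmbeddedTemplates() {
	for _, name := range AssetNames() {
		data := MustAsset(name) // panics if the name is missing from _bindata
		fmt.Printf("%s: %d bytes\n", name, len(data))
	}
}
```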
|
||||
|
||||
// AssetDir returns the file names below a certain
|
||||
// directory embedded in the file by go-bindata.
|
||||
// For example if you run go-bindata on data/... and data contains the
|
||||
// following hierarchy:
|
||||
// data/
|
||||
// foo.txt
|
||||
// img/
|
||||
// a.png
|
||||
// b.png
|
||||
// then AssetDir("data") would return []string{"foo.txt", "img"}
|
||||
// AssetDir("data/img") would return []string{"a.png", "b.png"}
|
||||
// AssetDir("foo.txt") and AssetDir("notexist") would return an error
|
||||
// AssetDir("") will return []string{"data"}.
|
||||
func AssetDir(name string) ([]string, error) {
|
||||
node := _bintree
|
||||
if len(name) != 0 {
|
||||
cannonicalName := strings.Replace(name, "\\", "/", -1)
|
||||
pathList := strings.Split(cannonicalName, "/")
|
||||
for _, p := range pathList {
|
||||
node = node.Children[p]
|
||||
if node == nil {
|
||||
return nil, fmt.Errorf("Asset %s not found", name)
|
||||
}
|
||||
}
|
||||
}
|
||||
if node.Func != nil {
|
||||
return nil, fmt.Errorf("Asset %s not found", name)
|
||||
}
|
||||
rv := make([]string, 0, len(node.Children))
|
||||
for childName := range node.Children {
|
||||
rv = append(rv, childName)
|
||||
}
|
||||
return rv, nil
|
||||
}
|
||||
|
||||
type bintree struct {
|
||||
Func func() (*asset, error)
|
||||
Children map[string]*bintree
|
||||
}
|
||||
|
||||
var _bintree = &bintree{nil, map[string]*bintree{
|
||||
"templates": {nil, map[string]*bintree{
|
||||
"consul_catalog.tmpl": {templatesConsul_catalogTmpl, map[string]*bintree{}},
|
||||
"docker.tmpl": {templatesDockerTmpl, map[string]*bintree{}},
|
||||
"ecs.tmpl": {templatesEcsTmpl, map[string]*bintree{}},
|
||||
"eureka.tmpl": {templatesEurekaTmpl, map[string]*bintree{}},
|
||||
"kubernetes.tmpl": {templatesKubernetesTmpl, map[string]*bintree{}},
|
||||
"kv.tmpl": {templatesKvTmpl, map[string]*bintree{}},
|
||||
"marathon.tmpl": {templatesMarathonTmpl, map[string]*bintree{}},
|
||||
"mesos.tmpl": {templatesMesosTmpl, map[string]*bintree{}},
|
||||
"notFound.tmpl": {templatesNotfoundTmpl, map[string]*bintree{}},
|
||||
"rancher.tmpl": {templatesRancherTmpl, map[string]*bintree{}},
|
||||
}},
|
||||
}}
|
||||
|
||||
// RestoreAsset restores an asset under the given directory
|
||||
func RestoreAsset(dir, name string) error {
|
||||
data, err := Asset(name)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
info, err := AssetInfo(name)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = os.MkdirAll(_filePath(dir, filepath.Dir(name)), os.FileMode(0755))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = ioutil.WriteFile(_filePath(dir, name), data, info.Mode())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = os.Chtimes(_filePath(dir, name), info.ModTime(), info.ModTime())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// RestoreAssets restores an asset under the given directory recursively
|
||||
func RestoreAssets(dir, name string) error {
|
||||
children, err := AssetDir(name)
|
||||
// File
|
||||
if err != nil {
|
||||
return RestoreAsset(dir, name)
|
||||
}
|
||||
// Dir
|
||||
for _, child := range children {
|
||||
err = RestoreAssets(dir, filepath.Join(name, child))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func _filePath(dir, name string) string {
|
||||
cannonicalName := strings.Replace(name, "\\", "/", -1)
|
||||
return filepath.Join(append([]string{dir}, strings.Split(cannonicalName, "/")...)...)
|
||||
}
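
A small usage sketch (again not part of the diff) for the restore helpers above; the function name and the target directory are arbitrary and purely illustrative.

```go
// Sketch only: write the embedded "templates" tree below dir for inspection,
// preserving the recorded file modes and modification times.
func dumpTemplates(dir string) error {
	return RestoreAssets(dir, "templates")
}
```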
|
||||
@@ -1,27 +1,25 @@
|
||||
FROM golang:1.9-alpine
|
||||
FROM golang:1.7
|
||||
|
||||
RUN apk --update upgrade \
|
||||
&& apk --no-cache --no-progress add git mercurial bash gcc musl-dev curl tar \
|
||||
&& rm -rf /var/cache/apk/*
|
||||
|
||||
RUN go get github.com/containous/go-bindata/... \
|
||||
RUN go get github.com/Masterminds/glide \
|
||||
&& go get github.com/jteeuwen/go-bindata/... \
|
||||
&& go get github.com/golang/lint/golint \
|
||||
&& go get github.com/kisielk/errcheck \
|
||||
&& go get github.com/client9/misspell/cmd/misspell
|
||||
&& go get github.com/kisielk/errcheck
|
||||
|
||||
# Which docker version to test on
|
||||
ARG DOCKER_VERSION=17.03.2
|
||||
ARG DEP_VERSION=0.4.1
|
||||
|
||||
# Download dep binary to bin folder in $GOPATH
|
||||
RUN mkdir -p /usr/local/bin \
|
||||
&& curl -fsSL -o /usr/local/bin/dep https://github.com/golang/dep/releases/download/v${DEP_VERSION}/dep-linux-amd64 \
|
||||
&& chmod +x /usr/local/bin/dep
|
||||
ARG DOCKER_VERSION=1.10.1
|
||||
|
||||
# Download docker
|
||||
RUN mkdir -p /usr/local/bin \
|
||||
&& curl -fL https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKER_VERSION}-ce.tgz \
|
||||
| tar -xzC /usr/local/bin --transform 's#^.+/##x'
|
||||
RUN set -ex; \
|
||||
curl https://get.docker.com/builds/Linux/x86_64/docker-${DOCKER_VERSION} -o /usr/local/bin/docker-${DOCKER_VERSION}; \
|
||||
chmod +x /usr/local/bin/docker-${DOCKER_VERSION}
|
||||
|
||||
# Set the default Docker to be run
|
||||
RUN ln -s /usr/local/bin/docker-${DOCKER_VERSION} /usr/local/bin/docker
|
||||
|
||||
WORKDIR /go/src/github.com/containous/traefik
|
||||
|
||||
COPY glide.yaml glide.yaml
|
||||
COPY glide.lock glide.lock
|
||||
RUN glide install
|
||||
|
||||
COPY . /go/src/github.com/containous/traefik
|
||||
|
||||
36
circle.yml
Normal file
@@ -0,0 +1,36 @@
|
||||
machine:
|
||||
pre:
|
||||
- sudo docker -d -e lxc -s btrfs -H tcp://0.0.0.0:2375:
|
||||
background: true
|
||||
- curl --retry 15 --retry-delay 3 -v http://172.17.42.1:2375/version
|
||||
environment:
|
||||
REPO: $CIRCLE_PROJECT_USERNAME/$CIRCLE_PROJECT_REPONAME
|
||||
DOCKER_HOST: tcp://172.17.42.1:2375
|
||||
MAKE_DOCKER_HOST: $DOCKER_HOST
|
||||
VERSION: v1.0.alpha.$CIRCLE_BUILD_NUM
|
||||
|
||||
dependencies:
|
||||
pre:
|
||||
- docker version
|
||||
- go get github.com/tcnksm/ghr
|
||||
- make validate
|
||||
override:
|
||||
- make binary
|
||||
|
||||
test:
|
||||
override:
|
||||
- make test-unit
|
||||
- make test-integration
|
||||
post:
|
||||
- make crossbinary
|
||||
- make image
|
||||
|
||||
deployment:
|
||||
hub:
|
||||
branch: master
|
||||
commands:
|
||||
- ghr -t $GITHUB_TOKEN -u $CIRCLE_PROJECT_USERNAME -r $CIRCLE_PROJECT_REPONAME --prerelease ${VERSION} dist/
|
||||
- docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
|
||||
- docker push ${REPO,,}:latest
|
||||
- docker tag ${REPO,,}:latest ${REPO,,}:${VERSION}
|
||||
- docker push ${REPO,,}:${VERSION}
|
||||
@@ -1,12 +1,8 @@
|
||||
package cluster
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/cenk/backoff"
|
||||
"github.com/containous/staert"
|
||||
"github.com/containous/traefik/job"
|
||||
@@ -14,6 +10,9 @@ import (
|
||||
"github.com/containous/traefik/safe"
|
||||
"github.com/docker/libkv/store"
|
||||
"github.com/satori/go.uuid"
|
||||
"golang.org/x/net/context"
|
||||
"sync"
|
||||
"time"
|
||||
)
|
||||
|
||||
// Metadata stores Object plus metadata
|
||||
@@ -76,11 +75,11 @@ func NewDataStore(ctx context.Context, kvSource staert.KvSource, object Object,
|
||||
|
||||
func (d *Datastore) watchChanges() error {
|
||||
stopCh := make(chan struct{})
|
||||
kvCh, err := d.kv.Watch(d.lockKey, stopCh, nil)
|
||||
kvCh, err := d.kv.Watch(d.lockKey, stopCh)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
safe.Go(func() {
|
||||
go func() {
|
||||
ctx, cancel := context.WithCancel(d.ctx)
|
||||
operation := func() error {
|
||||
for {
|
||||
@@ -97,6 +96,7 @@ func (d *Datastore) watchChanges() error {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// log.Debugf("Datastore object change received: %+v", d.meta)
|
||||
if d.listener != nil {
|
||||
err := d.listener(d.meta.object)
|
||||
if err != nil {
|
||||
@@ -113,14 +113,25 @@ func (d *Datastore) watchChanges() error {
|
||||
if err != nil {
|
||||
log.Errorf("Error in watch datastore: %v", err)
|
||||
}
|
||||
})
|
||||
}()
|
||||
return nil
|
||||
}
|
||||
|
||||
func (d *Datastore) reload() error {
|
||||
log.Debug("Datastore reload")
|
||||
_, err := d.Load()
|
||||
return err
|
||||
log.Debugf("Datastore reload")
|
||||
d.localLock.Lock()
|
||||
err := d.kv.LoadConfig(d.meta)
|
||||
if err != nil {
|
||||
d.localLock.Unlock()
|
||||
return err
|
||||
}
|
||||
err = d.meta.unmarshall()
|
||||
if err != nil {
|
||||
d.localLock.Unlock()
|
||||
return err
|
||||
}
|
||||
d.localLock.Unlock()
|
||||
return nil
|
||||
}
|
||||
|
||||
// Begin creates a transaction with the KV store.
|
||||
@@ -188,10 +199,6 @@ func (d *Datastore) get() *Metadata {
|
||||
func (d *Datastore) Load() (Object, error) {
|
||||
d.localLock.Lock()
|
||||
defer d.localLock.Unlock()
|
||||
|
||||
// clear Object first, as mapstructure's decoder doesn't have ZeroFields set to true for merging purposes
|
||||
d.meta.Object = d.meta.Object[:0]
|
||||
|
||||
err := d.kv.LoadConfig(d.meta)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
@@ -242,6 +249,6 @@ func (s *datastoreTransaction) Commit(object Object) error {
|
||||
}
|
||||
|
||||
s.dirty = true
|
||||
log.Debugf("Transaction committed %s", s.id)
|
||||
log.Debugf("Transaction commited %s", s.id)
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -1,14 +1,13 @@
|
||||
package cluster
|
||||
|
||||
import (
|
||||
"context"
|
||||
"time"
|
||||
|
||||
"github.com/cenk/backoff"
|
||||
"github.com/containous/traefik/log"
|
||||
"github.com/containous/traefik/safe"
|
||||
"github.com/containous/traefik/types"
|
||||
"github.com/docker/leadership"
|
||||
"golang.org/x/net/context"
|
||||
"time"
|
||||
)
|
||||
|
||||
// Leadership allows leadership election using a KV store
|
||||
@@ -54,7 +53,7 @@ func (l *Leadership) Participate(pool *safe.Pool) {
|
||||
})
|
||||
}
|
||||
|
||||
// AddListener adds a leadership listener
|
||||
// AddListener adds a leadership listerner
|
||||
func (l *Leadership) AddListener(listener LeaderListener) {
|
||||
l.listeners = append(l.listeners, listener)
|
||||
}
|
||||
@@ -86,7 +85,7 @@ func (l *Leadership) onElection(elected bool) {
|
||||
l.leader.Set(true)
|
||||
l.Start()
|
||||
} else {
|
||||
log.Infof("Node %s elected worker ♝", l.Cluster.Node)
|
||||
log.Infof("Node %s elected slave ♝", l.Cluster.Node)
|
||||
l.leader.Set(false)
|
||||
l.Stop()
|
||||
}
|
||||
|
||||
@@ -1,136 +0,0 @@
|
||||
package anonymize
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"regexp"
|
||||
|
||||
"github.com/mitchellh/copystructure"
|
||||
"github.com/mvdan/xurls"
|
||||
)
|
||||
|
||||
const (
|
||||
maskShort = "xxxx"
|
||||
maskLarge = maskShort + maskShort + maskShort + maskShort + maskShort + maskShort + maskShort + maskShort
|
||||
)
|
||||
|
||||
// Do anonymizes the given configuration and returns it as a JSON string.
|
||||
func Do(baseConfig interface{}, indent bool) (string, error) {
|
||||
anomConfig, err := copystructure.Copy(baseConfig)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
val := reflect.ValueOf(anomConfig)
|
||||
|
||||
err = doOnStruct(val)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
configJSON, err := marshal(anomConfig, indent)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
return doOnJSON(string(configJSON)), nil
|
||||
}
|
||||
|
||||
func doOnJSON(input string) string {
|
||||
mailExp := regexp.MustCompile(`\w[-._\w]*\w@\w[-._\w]*\w\.\w{2,3}"`)
|
||||
return xurls.Relaxed.ReplaceAllString(mailExp.ReplaceAllString(input, maskLarge+"\""), maskLarge)
|
||||
}
|
||||
|
||||
func doOnStruct(field reflect.Value) error {
|
||||
switch field.Kind() {
|
||||
case reflect.Ptr:
|
||||
if !field.IsNil() {
|
||||
if err := doOnStruct(field.Elem()); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
case reflect.Struct:
|
||||
for i := 0; i < field.NumField(); i++ {
|
||||
fld := field.Field(i)
|
||||
stField := field.Type().Field(i)
|
||||
if !isExported(stField) {
|
||||
continue
|
||||
}
|
||||
if stField.Tag.Get("export") == "true" {
|
||||
if err := doOnStruct(fld); err != nil {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
if err := reset(fld, stField.Name); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
case reflect.Map:
|
||||
for _, key := range field.MapKeys() {
|
||||
if err := doOnStruct(field.MapIndex(key)); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
case reflect.Slice:
|
||||
for j := 0; j < field.Len(); j++ {
|
||||
if err := doOnStruct(field.Index(j)); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func reset(field reflect.Value, name string) error {
|
||||
if !field.CanSet() {
|
||||
return fmt.Errorf("cannot reset field %s", name)
|
||||
}
|
||||
|
||||
switch field.Kind() {
|
||||
case reflect.Ptr:
|
||||
if !field.IsNil() {
|
||||
field.Set(reflect.Zero(field.Type()))
|
||||
}
|
||||
case reflect.Struct:
|
||||
if field.IsValid() {
|
||||
field.Set(reflect.Zero(field.Type()))
|
||||
}
|
||||
case reflect.String:
|
||||
if field.String() != "" {
|
||||
field.Set(reflect.ValueOf(maskShort))
|
||||
}
|
||||
case reflect.Map:
|
||||
if field.Len() > 0 {
|
||||
field.Set(reflect.MakeMap(field.Type()))
|
||||
}
|
||||
case reflect.Slice:
|
||||
if field.Len() > 0 {
|
||||
field.Set(reflect.MakeSlice(field.Type(), 0, 0))
|
||||
}
|
||||
case reflect.Interface:
|
||||
if !field.IsNil() {
|
||||
return reset(field.Elem(), "")
|
||||
}
|
||||
default:
|
||||
// Primitive type
|
||||
field.Set(reflect.Zero(field.Type()))
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// isExported returns true if a struct field is exported, else false
|
||||
func isExported(f reflect.StructField) bool {
|
||||
if f.PkgPath != "" && !f.Anonymous {
|
||||
return false
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
func marshal(anomConfig interface{}, indent bool) ([]byte, error) {
|
||||
if indent {
|
||||
return json.MarshalIndent(anomConfig, "", " ")
|
||||
}
|
||||
return json.Marshal(anomConfig)
|
||||
}
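
Since this diff removes the anonymize package, the following sketch is purely illustrative of what the deleted code did: fields tagged `export:"true"` are kept (and recursed into), everything else is reset or masked with `xxxx`, and the resulting JSON additionally has e-mail addresses and URLs replaced. The struct, its values, and the import path are assumptions, not taken from this diff.

```go
package main

import (
	"fmt"
	"log"

	"github.com/containous/traefik/anonymize" // assumed import path
)

// credentials is a hypothetical configuration fragment for illustration.
type credentials struct {
	Username string `export:"true"`
	Password string
}

func main() {
	cfg := &credentials{Username: "admin", Password: "s3cr3t"}
	out, err := anonymize.Do(cfg, true)
	if err != nil {
		log.Fatal(err)
	}
	// Username survives thanks to its export tag; Password is masked to "xxxx".
	fmt.Println(out)
}
```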
|
||||
@@ -1,664 +0,0 @@
|
||||
package anonymize
|
||||
|
||||
import (
|
||||
"crypto/tls"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/containous/flaeg"
|
||||
"github.com/containous/traefik/acme"
|
||||
"github.com/containous/traefik/configuration"
|
||||
"github.com/containous/traefik/provider"
|
||||
"github.com/containous/traefik/provider/boltdb"
|
||||
"github.com/containous/traefik/provider/consul"
|
||||
"github.com/containous/traefik/provider/docker"
|
||||
"github.com/containous/traefik/provider/dynamodb"
|
||||
"github.com/containous/traefik/provider/ecs"
|
||||
"github.com/containous/traefik/provider/etcd"
|
||||
"github.com/containous/traefik/provider/eureka"
|
||||
"github.com/containous/traefik/provider/file"
|
||||
"github.com/containous/traefik/provider/kubernetes"
|
||||
"github.com/containous/traefik/provider/kv"
|
||||
"github.com/containous/traefik/provider/marathon"
|
||||
"github.com/containous/traefik/provider/mesos"
|
||||
"github.com/containous/traefik/provider/rancher"
|
||||
"github.com/containous/traefik/provider/zk"
|
||||
traefikTls "github.com/containous/traefik/tls"
|
||||
"github.com/containous/traefik/types"
|
||||
)
|
||||
|
||||
func TestDo_globalConfiguration(t *testing.T) {
|
||||
|
||||
config := &configuration.GlobalConfiguration{}
|
||||
|
||||
config.GraceTimeOut = flaeg.Duration(666 * time.Second)
|
||||
config.Debug = true
|
||||
config.CheckNewVersion = true
|
||||
config.AccessLogsFile = "AccessLogsFile"
|
||||
config.AccessLog = &types.AccessLog{
|
||||
FilePath: "AccessLog FilePath",
|
||||
Format: "AccessLog Format",
|
||||
}
|
||||
config.TraefikLogsFile = "TraefikLogsFile"
|
||||
config.LogLevel = "LogLevel"
|
||||
config.EntryPoints = configuration.EntryPoints{
|
||||
"foo": {
|
||||
Network: "foo Network",
|
||||
Address: "foo Address",
|
||||
TLS: &traefikTls.TLS{
|
||||
MinVersion: "foo MinVersion",
|
||||
CipherSuites: []string{"foo CipherSuites 1", "foo CipherSuites 2", "foo CipherSuites 3"},
|
||||
Certificates: traefikTls.Certificates{
|
||||
{CertFile: "CertFile 1", KeyFile: "KeyFile 1"},
|
||||
{CertFile: "CertFile 2", KeyFile: "KeyFile 2"},
|
||||
},
|
||||
ClientCA: traefikTls.ClientCA{
|
||||
Files: []string{"foo ClientCAFiles 1", "foo ClientCAFiles 2", "foo ClientCAFiles 3"},
|
||||
Optional: false,
|
||||
},
|
||||
},
|
||||
Redirect: &types.Redirect{
|
||||
Replacement: "foo Replacement",
|
||||
Regex: "foo Regex",
|
||||
EntryPoint: "foo EntryPoint",
|
||||
},
|
||||
Auth: &types.Auth{
|
||||
Basic: &types.Basic{
|
||||
UsersFile: "foo Basic UsersFile",
|
||||
Users: types.Users{"foo Basic Users 1", "foo Basic Users 2", "foo Basic Users 3"},
|
||||
},
|
||||
Digest: &types.Digest{
|
||||
UsersFile: "foo Digest UsersFile",
|
||||
Users: types.Users{"foo Digest Users 1", "foo Digest Users 2", "foo Digest Users 3"},
|
||||
},
|
||||
Forward: &types.Forward{
|
||||
Address: "foo Address",
|
||||
TLS: &types.ClientTLS{
|
||||
CA: "foo CA",
|
||||
Cert: "foo Cert",
|
||||
Key: "foo Key",
|
||||
InsecureSkipVerify: true,
|
||||
},
|
||||
TrustForwardHeader: true,
|
||||
},
|
||||
},
|
||||
WhitelistSourceRange: []string{"foo WhitelistSourceRange 1", "foo WhitelistSourceRange 2", "foo WhitelistSourceRange 3"},
|
||||
Compress: true,
|
||||
ProxyProtocol: &configuration.ProxyProtocol{
|
||||
TrustedIPs: []string{"127.0.0.1/32", "192.168.0.1"},
|
||||
},
|
||||
},
|
||||
"fii": {
|
||||
Network: "fii Network",
|
||||
Address: "fii Address",
|
||||
TLS: &traefikTls.TLS{
|
||||
MinVersion: "fii MinVersion",
|
||||
CipherSuites: []string{"fii CipherSuites 1", "fii CipherSuites 2", "fii CipherSuites 3"},
|
||||
Certificates: traefikTls.Certificates{
|
||||
{CertFile: "CertFile 1", KeyFile: "KeyFile 1"},
|
||||
{CertFile: "CertFile 2", KeyFile: "KeyFile 2"},
|
||||
},
|
||||
ClientCA: traefikTls.ClientCA{
|
||||
Files: []string{"fii ClientCAFiles 1", "fii ClientCAFiles 2", "fii ClientCAFiles 3"},
|
||||
Optional: false,
|
||||
},
|
||||
},
|
||||
Redirect: &types.Redirect{
|
||||
Replacement: "fii Replacement",
|
||||
Regex: "fii Regex",
|
||||
EntryPoint: "fii EntryPoint",
|
||||
},
|
||||
Auth: &types.Auth{
|
||||
Basic: &types.Basic{
|
||||
UsersFile: "fii Basic UsersFile",
|
||||
Users: types.Users{"fii Basic Users 1", "fii Basic Users 2", "fii Basic Users 3"},
|
||||
},
|
||||
Digest: &types.Digest{
|
||||
UsersFile: "fii Digest UsersFile",
|
||||
Users: types.Users{"fii Digest Users 1", "fii Digest Users 2", "fii Digest Users 3"},
|
||||
},
|
||||
Forward: &types.Forward{
|
||||
Address: "fii Address",
|
||||
TLS: &types.ClientTLS{
|
||||
CA: "fii CA",
|
||||
Cert: "fii Cert",
|
||||
Key: "fii Key",
|
||||
InsecureSkipVerify: true,
|
||||
},
|
||||
TrustForwardHeader: true,
|
||||
},
|
||||
},
|
||||
WhitelistSourceRange: []string{"fii WhitelistSourceRange 1", "fii WhitelistSourceRange 2", "fii WhitelistSourceRange 3"},
|
||||
Compress: true,
|
||||
ProxyProtocol: &configuration.ProxyProtocol{
|
||||
TrustedIPs: []string{"127.0.0.1/32", "192.168.0.1"},
|
||||
},
|
||||
},
|
||||
}
|
||||
config.Cluster = &types.Cluster{
|
||||
Node: "Cluster Node",
|
||||
Store: &types.Store{
|
||||
Prefix: "Cluster Store Prefix",
|
||||
// ...
|
||||
},
|
||||
}
|
||||
config.Constraints = types.Constraints{
|
||||
{
|
||||
Key: "Constraints Key 1",
|
||||
Regex: "Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
{
|
||||
Key: "Constraints Key 1",
|
||||
Regex: "Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
}
|
||||
config.ACME = &acme.ACME{
|
||||
Email: "acme Email",
|
||||
Domains: []acme.Domain{
|
||||
{
|
||||
Main: "Domains Main",
|
||||
SANs: []string{"Domains acme SANs 1", "Domains acme SANs 2", "Domains acme SANs 3"},
|
||||
},
|
||||
},
|
||||
Storage: "Storage",
|
||||
StorageFile: "StorageFile",
|
||||
OnDemand: true,
|
||||
OnHostRule: true,
|
||||
CAServer: "CAServer",
|
||||
EntryPoint: "EntryPoint",
|
||||
DNSChallenge: &acme.DNSChallenge{Provider: "DNSProvider"},
|
||||
DelayDontCheckDNS: 666,
|
||||
ACMELogging: true,
|
||||
TLSConfig: &tls.Config{
|
||||
InsecureSkipVerify: true,
|
||||
// ...
|
||||
},
|
||||
}
|
||||
config.DefaultEntryPoints = configuration.DefaultEntryPoints{"DefaultEntryPoints 1", "DefaultEntryPoints 2", "DefaultEntryPoints 3"}
|
||||
config.ProvidersThrottleDuration = flaeg.Duration(666 * time.Second)
|
||||
config.MaxIdleConnsPerHost = 666
|
||||
config.IdleTimeout = flaeg.Duration(666 * time.Second)
|
||||
config.InsecureSkipVerify = true
|
||||
config.RootCAs = traefikTls.RootCAs{"RootCAs 1", "RootCAs 2", "RootCAs 3"}
|
||||
config.Retry = &configuration.Retry{
|
||||
Attempts: 666,
|
||||
}
|
||||
config.HealthCheck = &configuration.HealthCheckConfig{
|
||||
Interval: flaeg.Duration(666 * time.Second),
|
||||
}
|
||||
config.RespondingTimeouts = &configuration.RespondingTimeouts{
|
||||
ReadTimeout: flaeg.Duration(666 * time.Second),
|
||||
WriteTimeout: flaeg.Duration(666 * time.Second),
|
||||
IdleTimeout: flaeg.Duration(666 * time.Second),
|
||||
}
|
||||
config.ForwardingTimeouts = &configuration.ForwardingTimeouts{
|
||||
DialTimeout: flaeg.Duration(666 * time.Second),
|
||||
ResponseHeaderTimeout: flaeg.Duration(666 * time.Second),
|
||||
}
|
||||
config.Docker = &docker.Provider{
|
||||
BaseProvider: provider.BaseProvider{
|
||||
Watch: true,
|
||||
Filename: "docker Filename",
|
||||
Constraints: types.Constraints{
|
||||
{
|
||||
Key: "docker Constraints Key 1",
|
||||
Regex: "docker Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
{
|
||||
Key: "docker Constraints Key 1",
|
||||
Regex: "docker Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
},
|
||||
Trace: true,
|
||||
DebugLogGeneratedTemplate: true,
|
||||
},
|
||||
Endpoint: "docker Endpoint",
|
||||
Domain: "docker Domain",
|
||||
TLS: &types.ClientTLS{
|
||||
CA: "docker CA",
|
||||
Cert: "docker Cert",
|
||||
Key: "docker Key",
|
||||
InsecureSkipVerify: true,
|
||||
},
|
||||
ExposedByDefault: true,
|
||||
UseBindPortIP: true,
|
||||
SwarmMode: true,
|
||||
}
|
||||
config.File = &file.Provider{
|
||||
BaseProvider: provider.BaseProvider{
|
||||
Watch: true,
|
||||
Filename: "file Filename",
|
||||
Constraints: types.Constraints{
|
||||
{
|
||||
Key: "file Constraints Key 1",
|
||||
Regex: "file Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
{
|
||||
Key: "file Constraints Key 1",
|
||||
Regex: "file Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
},
|
||||
Trace: true,
|
||||
DebugLogGeneratedTemplate: true,
|
||||
},
|
||||
Directory: "file Directory",
|
||||
}
|
||||
config.Web = &configuration.WebCompatibility{
|
||||
Address: "web Address",
|
||||
CertFile: "web CertFile",
|
||||
KeyFile: "web KeyFile",
|
||||
ReadOnly: true,
|
||||
Statistics: &types.Statistics{
|
||||
RecentErrors: 666,
|
||||
},
|
||||
Metrics: &types.Metrics{
|
||||
Prometheus: &types.Prometheus{
|
||||
Buckets: types.Buckets{6.5, 6.6, 6.7},
|
||||
},
|
||||
Datadog: &types.Datadog{
|
||||
Address: "Datadog Address",
|
||||
PushInterval: "Datadog PushInterval",
|
||||
},
|
||||
StatsD: &types.Statsd{
|
||||
Address: "StatsD Address",
|
||||
PushInterval: "StatsD PushInterval",
|
||||
},
|
||||
},
|
||||
Path: "web Path",
|
||||
Auth: &types.Auth{
|
||||
Basic: &types.Basic{
|
||||
UsersFile: "web Basic UsersFile",
|
||||
Users: types.Users{"web Basic Users 1", "web Basic Users 2", "web Basic Users 3"},
|
||||
},
|
||||
Digest: &types.Digest{
|
||||
UsersFile: "web Digest UsersFile",
|
||||
Users: types.Users{"web Digest Users 1", "web Digest Users 2", "web Digest Users 3"},
|
||||
},
|
||||
Forward: &types.Forward{
|
||||
Address: "web Address",
|
||||
TLS: &types.ClientTLS{
|
||||
CA: "web CA",
|
||||
Cert: "web Cert",
|
||||
Key: "web Key",
|
||||
InsecureSkipVerify: true,
|
||||
},
|
||||
TrustForwardHeader: true,
|
||||
},
|
||||
},
|
||||
Debug: true,
|
||||
}
|
||||
config.Marathon = &marathon.Provider{
|
||||
BaseProvider: provider.BaseProvider{
|
||||
Watch: true,
|
||||
Filename: "marathon Filename",
|
||||
Constraints: types.Constraints{
|
||||
{
|
||||
Key: "marathon Constraints Key 1",
|
||||
Regex: "marathon Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
{
|
||||
Key: "marathon Constraints Key 1",
|
||||
Regex: "marathon Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
},
|
||||
Trace: true,
|
||||
DebugLogGeneratedTemplate: true,
|
||||
},
|
||||
Endpoint: "",
|
||||
Domain: "",
|
||||
ExposedByDefault: true,
|
||||
GroupsAsSubDomains: true,
|
||||
DCOSToken: "",
|
||||
MarathonLBCompatibility: true,
|
||||
TLS: &types.ClientTLS{
|
||||
CA: "marathon CA",
|
||||
Cert: "marathon Cert",
|
||||
Key: "marathon Key",
|
||||
InsecureSkipVerify: true,
|
||||
},
|
||||
DialerTimeout: flaeg.Duration(666 * time.Second),
|
||||
KeepAlive: flaeg.Duration(666 * time.Second),
|
||||
ForceTaskHostname: true,
|
||||
Basic: &marathon.Basic{
|
||||
HTTPBasicAuthUser: "marathon HTTPBasicAuthUser",
|
||||
HTTPBasicPassword: "marathon HTTPBasicPassword",
|
||||
},
|
||||
RespectReadinessChecks: true,
|
||||
}
|
||||
config.ConsulCatalog = &consul.CatalogProvider{
|
||||
BaseProvider: provider.BaseProvider{
|
||||
Watch: true,
|
||||
Filename: "ConsulCatalog Filename",
|
||||
Constraints: types.Constraints{
|
||||
{
|
||||
Key: "ConsulCatalog Constraints Key 1",
|
||||
Regex: "ConsulCatalog Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
{
|
||||
Key: "ConsulCatalog Constraints Key 1",
|
||||
Regex: "ConsulCatalog Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
},
|
||||
Trace: true,
|
||||
DebugLogGeneratedTemplate: true,
|
||||
},
|
||||
Endpoint: "ConsulCatalog Endpoint",
|
||||
Domain: "ConsulCatalog Domain",
|
||||
ExposedByDefault: true,
|
||||
Prefix: "ConsulCatalog Prefix",
|
||||
FrontEndRule: "ConsulCatalog FrontEndRule",
|
||||
}
|
||||
config.Kubernetes = &kubernetes.Provider{
|
||||
BaseProvider: provider.BaseProvider{
|
||||
Watch: true,
|
||||
Filename: "k8s Filename",
|
||||
Constraints: types.Constraints{
|
||||
{
|
||||
Key: "k8s Constraints Key 1",
|
||||
Regex: "k8s Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
{
|
||||
Key: "k8s Constraints Key 1",
|
||||
Regex: "k8s Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
},
|
||||
Trace: true,
|
||||
DebugLogGeneratedTemplate: true,
|
||||
},
|
||||
Endpoint: "k8s Endpoint",
|
||||
Token: "k8s Token",
|
||||
CertAuthFilePath: "k8s CertAuthFilePath",
|
||||
DisablePassHostHeaders: true,
|
||||
Namespaces: kubernetes.Namespaces{"k8s Namespaces 1", "k8s Namespaces 2", "k8s Namespaces 3"},
|
||||
LabelSelector: "k8s LabelSelector",
|
||||
}
|
||||
config.Mesos = &mesos.Provider{
|
||||
BaseProvider: provider.BaseProvider{
|
||||
Watch: true,
|
||||
Filename: "mesos Filename",
|
||||
Constraints: types.Constraints{
|
||||
{
|
||||
Key: "mesos Constraints Key 1",
|
||||
Regex: "mesos Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
{
|
||||
Key: "mesos Constraints Key 1",
|
||||
Regex: "mesos Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
},
|
||||
Trace: true,
|
||||
DebugLogGeneratedTemplate: true,
|
||||
},
|
||||
Endpoint: "mesos Endpoint",
|
||||
Domain: "mesos Domain",
|
||||
ExposedByDefault: true,
|
||||
GroupsAsSubDomains: true,
|
||||
ZkDetectionTimeout: 666,
|
||||
RefreshSeconds: 666,
|
||||
IPSources: "mesos IPSources",
|
||||
StateTimeoutSecond: 666,
|
||||
Masters: []string{"mesos Masters 1", "mesos Masters 2", "mesos Masters 3"},
|
||||
}
|
||||
config.Eureka = &eureka.Provider{
|
||||
BaseProvider: provider.BaseProvider{
|
||||
Watch: true,
|
||||
Filename: "eureka Filename",
|
||||
Constraints: types.Constraints{
|
||||
{
|
||||
Key: "eureka Constraints Key 1",
|
||||
Regex: "eureka Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
{
|
||||
Key: "eureka Constraints Key 1",
|
||||
Regex: "eureka Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
},
|
||||
Trace: true,
|
||||
DebugLogGeneratedTemplate: true,
|
||||
},
|
||||
Endpoint: "eureka Endpoint",
|
||||
Delay: "eureka Delay",
|
||||
}
|
||||
config.ECS = &ecs.Provider{
|
||||
BaseProvider: provider.BaseProvider{
|
||||
Watch: true,
|
||||
Filename: "ecs Filename",
|
||||
Constraints: types.Constraints{
|
||||
{
|
||||
Key: "ecs Constraints Key 1",
|
||||
Regex: "ecs Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
{
|
||||
Key: "ecs Constraints Key 1",
|
||||
Regex: "ecs Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
},
|
||||
Trace: true,
|
||||
DebugLogGeneratedTemplate: true,
|
||||
},
|
||||
Domain: "ecs Domain",
|
||||
ExposedByDefault: true,
|
||||
RefreshSeconds: 666,
|
||||
Clusters: ecs.Clusters{"ecs Clusters 1", "ecs Clusters 2", "ecs Clusters 3"},
|
||||
Cluster: "ecs Cluster",
|
||||
AutoDiscoverClusters: true,
|
||||
Region: "ecs Region",
|
||||
AccessKeyID: "ecs AccessKeyID",
|
||||
SecretAccessKey: "ecs SecretAccessKey",
|
||||
}
|
||||
config.Rancher = &rancher.Provider{
|
||||
BaseProvider: provider.BaseProvider{
|
||||
Watch: true,
|
||||
Filename: "rancher Filename",
|
||||
Constraints: types.Constraints{
|
||||
{
|
||||
Key: "rancher Constraints Key 1",
|
||||
Regex: "rancher Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
{
|
||||
Key: "rancher Constraints Key 1",
|
||||
Regex: "rancher Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
},
|
||||
Trace: true,
|
||||
DebugLogGeneratedTemplate: true,
|
||||
},
|
||||
APIConfiguration: rancher.APIConfiguration{
|
||||
Endpoint: "rancher Endpoint",
|
||||
AccessKey: "rancher AccessKey",
|
||||
SecretKey: "rancher SecretKey",
|
||||
},
|
||||
API: &rancher.APIConfiguration{
|
||||
Endpoint: "rancher Endpoint",
|
||||
AccessKey: "rancher AccessKey",
|
||||
SecretKey: "rancher SecretKey",
|
||||
},
|
||||
Metadata: &rancher.MetadataConfiguration{
|
||||
IntervalPoll: true,
|
||||
Prefix: "rancher Metadata Prefix",
|
||||
},
|
||||
Domain: "rancher Domain",
|
||||
RefreshSeconds: 666,
|
||||
ExposedByDefault: true,
|
||||
EnableServiceHealthFilter: true,
|
||||
}
|
||||
config.DynamoDB = &dynamodb.Provider{
|
||||
BaseProvider: provider.BaseProvider{
|
||||
Watch: true,
|
||||
Filename: "dynamodb Filename",
|
||||
Constraints: types.Constraints{
|
||||
{
|
||||
Key: "dynamodb Constraints Key 1",
|
||||
Regex: "dynamodb Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
{
|
||||
Key: "dynamodb Constraints Key 1",
|
||||
Regex: "dynamodb Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
},
|
||||
Trace: true,
|
||||
DebugLogGeneratedTemplate: true,
|
||||
},
|
||||
AccessKeyID: "dynamodb AccessKeyID",
|
||||
RefreshSeconds: 666,
|
||||
Region: "dynamodb Region",
|
||||
SecretAccessKey: "dynamodb SecretAccessKey",
|
||||
TableName: "dynamodb TableName",
|
||||
Endpoint: "dynamodb Endpoint",
|
||||
}
|
||||
config.Etcd = &etcd.Provider{
|
||||
Provider: kv.Provider{
|
||||
BaseProvider: provider.BaseProvider{
|
||||
Watch: true,
|
||||
Filename: "etcd Filename",
|
||||
Constraints: types.Constraints{
|
||||
{
|
||||
Key: "etcd Constraints Key 1",
|
||||
Regex: "etcd Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
{
|
||||
Key: "etcd Constraints Key 1",
|
||||
Regex: "etcd Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
},
|
||||
Trace: true,
|
||||
DebugLogGeneratedTemplate: true,
|
||||
},
|
||||
Endpoint: "etcd Endpoint",
|
||||
Prefix: "etcd Prefix",
|
||||
TLS: &types.ClientTLS{
|
||||
CA: "etcd CA",
|
||||
Cert: "etcd Cert",
|
||||
Key: "etcd Key",
|
||||
InsecureSkipVerify: true,
|
||||
},
|
||||
Username: "etcd Username",
|
||||
Password: "etcd Password",
|
||||
},
|
||||
}
|
||||
config.Zookeeper = &zk.Provider{
|
||||
Provider: kv.Provider{
|
||||
BaseProvider: provider.BaseProvider{
|
||||
Watch: true,
|
||||
Filename: "zk Filename",
|
||||
Constraints: types.Constraints{
|
||||
{
|
||||
Key: "zk Constraints Key 1",
|
||||
Regex: "zk Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
{
|
||||
Key: "zk Constraints Key 1",
|
||||
Regex: "zk Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
},
|
||||
Trace: true,
|
||||
DebugLogGeneratedTemplate: true,
|
||||
},
|
||||
Endpoint: "zk Endpoint",
|
||||
Prefix: "zk Prefix",
|
||||
TLS: &types.ClientTLS{
|
||||
CA: "zk CA",
|
||||
Cert: "zk Cert",
|
||||
Key: "zk Key",
|
||||
InsecureSkipVerify: true,
|
||||
},
|
||||
Username: "zk Username",
|
||||
Password: "zk Password",
|
||||
},
|
||||
}
|
||||
config.Boltdb = &boltdb.Provider{
|
||||
Provider: kv.Provider{
|
||||
BaseProvider: provider.BaseProvider{
|
||||
Watch: true,
|
||||
Filename: "boltdb Filename",
|
||||
Constraints: types.Constraints{
|
||||
{
|
||||
Key: "boltdb Constraints Key 1",
|
||||
Regex: "boltdb Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
{
|
||||
Key: "boltdb Constraints Key 1",
|
||||
Regex: "boltdb Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
},
|
||||
Trace: true,
|
||||
DebugLogGeneratedTemplate: true,
|
||||
},
|
||||
Endpoint: "boltdb Endpoint",
|
||||
Prefix: "boltdb Prefix",
|
||||
TLS: &types.ClientTLS{
|
||||
CA: "boltdb CA",
|
||||
Cert: "boltdb Cert",
|
||||
Key: "boltdb Key",
|
||||
InsecureSkipVerify: true,
|
||||
},
|
||||
Username: "boltdb Username",
|
||||
Password: "boltdb Password",
|
||||
},
|
||||
}
|
||||
config.Consul = &consul.Provider{
|
||||
Provider: kv.Provider{
|
||||
BaseProvider: provider.BaseProvider{
|
||||
Watch: true,
|
||||
Filename: "consul Filename",
|
||||
Constraints: types.Constraints{
|
||||
{
|
||||
Key: "consul Constraints Key 1",
|
||||
Regex: "consul Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
{
|
||||
Key: "consul Constraints Key 1",
|
||||
Regex: "consul Constraints Regex 2",
|
||||
MustMatch: true,
|
||||
},
|
||||
},
|
||||
Trace: true,
|
||||
DebugLogGeneratedTemplate: true,
|
||||
},
|
||||
Endpoint: "consul Endpoint",
|
||||
Prefix: "consul Prefix",
|
||||
TLS: &types.ClientTLS{
|
||||
CA: "consul CA",
|
||||
Cert: "consul Cert",
|
||||
Key: "consul Key",
|
||||
InsecureSkipVerify: true,
|
||||
},
|
||||
Username: "consul Username",
|
||||
Password: "consul Password",
|
||||
},
|
||||
}
|
||||
|
||||
cleanJSON, err := Do(config, true)
|
||||
if err != nil {
|
||||
t.Fatal(err, cleanJSON)
|
||||
}
|
||||
}
|
||||
@@ -1,239 +0,0 @@
|
||||
package anonymize
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
func Test_doOnJSON(t *testing.T) {
|
||||
baseConfiguration := `
|
||||
{
|
||||
"GraceTimeOut": 10000000000,
|
||||
"Debug": false,
|
||||
"CheckNewVersion": true,
|
||||
"AccessLogsFile": "",
|
||||
"TraefikLogsFile": "",
|
||||
"LogLevel": "ERROR",
|
||||
"EntryPoints": {
|
||||
"http": {
|
||||
"Network": "",
|
||||
"Address": ":80",
|
||||
"TLS": null,
|
||||
"Redirect": {
|
||||
"EntryPoint": "https",
|
||||
"Regex": "",
|
||||
"Replacement": ""
|
||||
},
|
||||
"Auth": null,
|
||||
"Compress": false
|
||||
},
|
||||
"https": {
|
||||
"Network": "",
|
||||
"Address": ":443",
|
||||
"TLS": {
|
||||
"MinVersion": "",
|
||||
"CipherSuites": null,
|
||||
"Certificates": null,
|
||||
"ClientCAFiles": null
|
||||
},
|
||||
"Redirect": null,
|
||||
"Auth": null,
|
||||
"Compress": false
|
||||
}
|
||||
},
|
||||
"Cluster": null,
|
||||
"Constraints": [],
|
||||
"ACME": {
|
||||
"Email": "foo@bar.com",
|
||||
"Domains": [
|
||||
{
|
||||
"Main": "foo@bar.com",
|
||||
"SANs": null
|
||||
},
|
||||
{
|
||||
"Main": "foo@bar.com",
|
||||
"SANs": null
|
||||
}
|
||||
],
|
||||
"Storage": "",
|
||||
"StorageFile": "/acme/acme.json",
|
||||
"OnDemand": true,
|
||||
"OnHostRule": true,
|
||||
"CAServer": "",
|
||||
"EntryPoint": "https",
|
||||
"DNSProvider": "",
|
||||
"DelayDontCheckDNS": 0,
|
||||
"ACMELogging": false,
|
||||
"TLSConfig": null
|
||||
},
|
||||
"DefaultEntryPoints": [
|
||||
"https",
|
||||
"http"
|
||||
],
|
||||
"ProvidersThrottleDuration": 2000000000,
|
||||
"MaxIdleConnsPerHost": 200,
|
||||
"IdleTimeout": 180000000000,
|
||||
"InsecureSkipVerify": false,
|
||||
"Retry": null,
|
||||
"HealthCheck": {
|
||||
"Interval": 30000000000
|
||||
},
|
||||
"Docker": null,
|
||||
"File": null,
|
||||
"Web": null,
|
||||
"Marathon": null,
|
||||
"Consul": null,
|
||||
"ConsulCatalog": null,
|
||||
"Etcd": null,
|
||||
"Zookeeper": null,
|
||||
"Boltdb": null,
|
||||
"Kubernetes": null,
|
||||
"Mesos": null,
|
||||
"Eureka": null,
|
||||
"ECS": null,
|
||||
"Rancher": null,
|
||||
"DynamoDB": null,
|
||||
"ConfigFile": "/etc/traefik/traefik.toml"
|
||||
}
|
||||
`
|
||||
expectedConfiguration := `
|
||||
{
|
||||
"GraceTimeOut": 10000000000,
|
||||
"Debug": false,
|
||||
"CheckNewVersion": true,
|
||||
"AccessLogsFile": "",
|
||||
"TraefikLogsFile": "",
|
||||
"LogLevel": "ERROR",
|
||||
"EntryPoints": {
|
||||
"http": {
|
||||
"Network": "",
|
||||
"Address": ":80",
|
||||
"TLS": null,
|
||||
"Redirect": {
|
||||
"EntryPoint": "https",
|
||||
"Regex": "",
|
||||
"Replacement": ""
|
||||
},
|
||||
"Auth": null,
|
||||
"Compress": false
|
||||
},
|
||||
"https": {
|
||||
"Network": "",
|
||||
"Address": ":443",
|
||||
"TLS": {
|
||||
"MinVersion": "",
|
||||
"CipherSuites": null,
|
||||
"Certificates": null,
|
||||
"ClientCAFiles": null
|
||||
},
|
||||
"Redirect": null,
|
||||
"Auth": null,
|
||||
"Compress": false
|
||||
}
|
||||
},
|
||||
"Cluster": null,
|
||||
"Constraints": [],
|
||||
"ACME": {
|
||||
"Email": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
|
||||
"Domains": [
|
||||
{
|
||||
"Main": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
|
||||
"SANs": null
|
||||
},
|
||||
{
|
||||
"Main": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
|
||||
"SANs": null
|
||||
}
|
||||
],
|
||||
"Storage": "",
|
||||
"StorageFile": "/acme/acme.json",
|
||||
"OnDemand": true,
|
||||
"OnHostRule": true,
|
||||
"CAServer": "",
|
||||
"EntryPoint": "https",
|
||||
"DNSProvider": "",
|
||||
"DelayDontCheckDNS": 0,
|
||||
"ACMELogging": false,
|
||||
"TLSConfig": null
|
||||
},
|
||||
"DefaultEntryPoints": [
|
||||
"https",
|
||||
"http"
|
||||
],
|
||||
"ProvidersThrottleDuration": 2000000000,
|
||||
"MaxIdleConnsPerHost": 200,
|
||||
"IdleTimeout": 180000000000,
|
||||
"InsecureSkipVerify": false,
|
||||
"Retry": null,
|
||||
"HealthCheck": {
|
||||
"Interval": 30000000000
|
||||
},
|
||||
"Docker": null,
|
||||
"File": null,
|
||||
"Web": null,
|
||||
"Marathon": null,
|
||||
"Consul": null,
|
||||
"ConsulCatalog": null,
|
||||
"Etcd": null,
|
||||
"Zookeeper": null,
|
||||
"Boltdb": null,
|
||||
"Kubernetes": null,
|
||||
"Mesos": null,
|
||||
"Eureka": null,
|
||||
"ECS": null,
|
||||
"Rancher": null,
|
||||
"DynamoDB": null,
|
||||
"ConfigFile": "/etc/traefik/traefik.toml"
|
||||
}
|
||||
`
|
||||
anomConfiguration := doOnJSON(baseConfiguration)
|
||||
|
||||
if anomConfiguration != expectedConfiguration {
|
||||
t.Errorf("Got %s, want %s.", anomConfiguration, expectedConfiguration)
|
||||
}
|
||||
}
|
||||
|
||||
func Test_doOnJSON_simple(t *testing.T) {
|
||||
testCases := []struct {
|
||||
name string
|
||||
input string
|
||||
expectedOutput string
|
||||
}{
|
||||
{
|
||||
name: "email",
|
||||
input: `{
|
||||
"email1": "goo@example.com",
|
||||
"email2": "foo.bargoo@example.com",
|
||||
"email3": "foo.bargoo@example.com.us"
|
||||
}`,
|
||||
expectedOutput: `{
|
||||
"email1": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
|
||||
"email2": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
|
||||
"email3": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
|
||||
}`,
|
||||
},
|
||||
{
|
||||
name: "url",
|
||||
input: `{
|
||||
"URL": "foo domain.com foo",
|
||||
"URL": "foo sub.domain.com foo",
|
||||
"URL": "foo sub.sub.domain.com foo",
|
||||
"URL": "foo sub.sub.sub.domain.com.us foo"
|
||||
}`,
|
||||
expectedOutput: `{
|
||||
"URL": "foo xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx foo",
|
||||
"URL": "foo xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx foo",
|
||||
"URL": "foo xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx foo",
|
||||
"URL": "foo xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx foo"
|
||||
}`,
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range testCases {
|
||||
t.Run(test.name, func(t *testing.T) {
|
||||
output := doOnJSON(test.input)
|
||||
assert.Equal(t, test.expectedOutput, output)
|
||||
})
|
||||
}
|
||||
}
|
||||
@@ -1,176 +0,0 @@
|
||||
package anonymize
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
type Courgette struct {
|
||||
Ji string
|
||||
Ho string
|
||||
}
|
||||
type Tomate struct {
|
||||
Ji string
|
||||
Ho string
|
||||
}
|
||||
|
||||
type Carotte struct {
|
||||
Name string
|
||||
Value int
|
||||
Courgette Courgette
|
||||
ECourgette Courgette `export:"true"`
|
||||
Pourgette *Courgette
|
||||
EPourgette *Courgette `export:"true"`
|
||||
Aubergine map[string]string
|
||||
EAubergine map[string]string `export:"true"`
|
||||
SAubergine map[string]Tomate
|
||||
ESAubergine map[string]Tomate `export:"true"`
|
||||
PSAubergine map[string]*Tomate
|
||||
EPAubergine map[string]*Tomate `export:"true"`
|
||||
}
|
||||
|
||||
func Test_doOnStruct(t *testing.T) {
|
||||
testCase := []struct {
|
||||
name string
|
||||
base *Carotte
|
||||
expected *Carotte
|
||||
hasError bool
|
||||
}{
|
||||
{
|
||||
name: "primitive",
|
||||
base: &Carotte{
|
||||
Name: "koko",
|
||||
Value: 666,
|
||||
},
|
||||
expected: &Carotte{
|
||||
Name: "xxxx",
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "struct",
|
||||
base: &Carotte{
|
||||
Name: "koko",
|
||||
Courgette: Courgette{
|
||||
Ji: "huu",
|
||||
},
|
||||
},
|
||||
expected: &Carotte{
|
||||
Name: "xxxx",
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "pointer",
|
||||
base: &Carotte{
|
||||
Name: "koko",
|
||||
Pourgette: &Courgette{
|
||||
Ji: "hoo",
|
||||
},
|
||||
},
|
||||
expected: &Carotte{
|
||||
Name: "xxxx",
|
||||
Pourgette: nil,
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "export struct",
|
||||
base: &Carotte{
|
||||
Name: "koko",
|
||||
ECourgette: Courgette{
|
||||
Ji: "huu",
|
||||
},
|
||||
},
|
||||
expected: &Carotte{
|
||||
Name: "xxxx",
|
||||
ECourgette: Courgette{
|
||||
Ji: "xxxx",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "export pointer struct",
|
||||
base: &Carotte{
|
||||
Name: "koko",
|
||||
ECourgette: Courgette{
|
||||
Ji: "huu",
|
||||
},
|
||||
},
|
||||
expected: &Carotte{
|
||||
Name: "xxxx",
|
||||
ECourgette: Courgette{
|
||||
Ji: "xxxx",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "export map string/string",
|
||||
base: &Carotte{
|
||||
Name: "koko",
|
||||
EAubergine: map[string]string{
|
||||
"foo": "bar",
|
||||
},
|
||||
},
|
||||
expected: &Carotte{
|
||||
Name: "xxxx",
|
||||
EAubergine: map[string]string{
|
||||
"foo": "bar",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "export map string/pointer",
|
||||
base: &Carotte{
|
||||
Name: "koko",
|
||||
EPAubergine: map[string]*Tomate{
|
||||
"foo": {
|
||||
Ji: "fdskljf",
|
||||
},
|
||||
},
|
||||
},
|
||||
expected: &Carotte{
|
||||
Name: "xxxx",
|
||||
EPAubergine: map[string]*Tomate{
|
||||
"foo": {
|
||||
Ji: "xxxx",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "export map string/struct (UNSAFE)",
|
||||
base: &Carotte{
|
||||
Name: "koko",
|
||||
ESAubergine: map[string]Tomate{
|
||||
"foo": {
|
||||
Ji: "JiJiJi",
|
||||
},
|
||||
},
|
||||
},
|
||||
expected: &Carotte{
|
||||
Name: "xxxx",
|
||||
ESAubergine: map[string]Tomate{
|
||||
"foo": {
|
||||
Ji: "JiJiJi",
|
||||
},
|
||||
},
|
||||
},
|
||||
hasError: true,
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range testCase {
|
||||
t.Run(test.name, func(t *testing.T) {
|
||||
val := reflect.ValueOf(test.base).Elem()
|
||||
err := doOnStruct(val)
|
||||
if !test.hasError && err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if test.hasError && err == nil {
|
||||
t.Fatal("Got no error but want an error.")
|
||||
}
|
||||
|
||||
assert.EqualValues(t, test.expected, test.base)
|
||||
})
|
||||
}
|
||||
}
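// Behaviour suggested by the table-driven cases above (a hedged summary, not a specification):
// doOnStruct appears to reset fields that lack the `export:"true"` tag (strings become the
// "xxxx" placeholder, other kinds their zero value), to recurse into exported structs, pointers
// and map[string]*T values, to keep exported map[string]string values as-is, and to return an
// error for exported map values holding plain (non-pointer) structs, which cannot be rewritten
// in place.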
|
||||
@@ -1,169 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"net/url"
|
||||
"os/exec"
|
||||
"runtime"
|
||||
"text/template"
|
||||
|
||||
"github.com/containous/flaeg"
|
||||
"github.com/containous/traefik/cmd/traefik/anonymize"
|
||||
)
|
||||
|
||||
const (
|
||||
bugTracker = "https://github.com/containous/traefik/issues/new"
|
||||
bugTemplate = `<!--
|
||||
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.
|
||||
|
||||
The issue tracker is for reporting bugs and feature requests only.
|
||||
For end-user related support questions, refer to one of the following:
|
||||
|
||||
- Stack Overflow (using the "traefik" tag): https://stackoverflow.com/questions/tagged/traefik
|
||||
- the Traefik community Slack channel: https://traefik.herokuapp.com
|
||||
|
||||
-->
|
||||
|
||||
### Do you want to request a *feature* or report a *bug*?
|
||||
|
||||
(If you intend to ask a support question: **DO NOT FILE AN ISSUE**.
|
||||
Use [Stack Overflow](https://stackoverflow.com/questions/tagged/traefik)
|
||||
or [Slack](https://traefik.herokuapp.com) instead.)
|
||||
|
||||
|
||||
|
||||
### What did you do?
|
||||
|
||||
<!--
|
||||
|
||||
HOW TO WRITE A GOOD ISSUE?
|
||||
|
||||
- Respect the issue template as much as possible.
|
||||
- If it's possible use the command ` + "`" + "traefik bug" + "`" + `. See https://www.youtube.com/watch?v=Lyz62L8m93I.
|
||||
- The title must be short and descriptive.
|
||||
- Explain the conditions which led you to write this issue: the context.
|
||||
- The context should lead to something, an idea or a problem that you’re facing.
|
||||
- Remain clear and concise.
|
||||
- Format your messages to help the reader focus on what matters and understand the structure of your message, use Markdown syntax https://help.github.com/articles/github-flavored-markdown
|
||||
|
||||
-->
|
||||
|
||||
|
||||
### What did you expect to see?
|
||||
|
||||
|
||||
|
||||
### What did you see instead?
|
||||
|
||||
|
||||
|
||||
### Output of ` + "`" + `traefik version` + "`" + `: (_What version of Traefik are you using?_)
|
||||
|
||||
` + "```" + `
|
||||
{{.Version}}
|
||||
` + "```" + `
|
||||
|
||||
### What is your environment & configuration (arguments, toml, provider, platform, ...)?
|
||||
|
||||
` + "```" + `json
|
||||
{{.Configuration}}
|
||||
` + "```" + `
|
||||
|
||||
<!--
|
||||
Add more configuration information here.
|
||||
-->
|
||||
|
||||
### If applicable, please paste the log output in debug mode (` + "`" + `--debug` + "`" + ` switch)
|
||||
|
||||
` + "```" + `
|
||||
(paste your output here)
|
||||
` + "```" + `
|
||||
|
||||
`
|
||||
)
|
||||
|
||||
// newBugCmd builds a new Bug command
|
||||
func newBugCmd(traefikConfiguration *TraefikConfiguration, traefikPointersConfiguration *TraefikConfiguration) *flaeg.Command {
|
||||
|
||||
//version Command init
|
||||
return &flaeg.Command{
|
||||
Name: "bug",
|
||||
Description: `Report an issue on Traefik bugtracker`,
|
||||
Config: traefikConfiguration,
|
||||
DefaultPointersConfig: traefikPointersConfiguration,
|
||||
Run: runBugCmd(traefikConfiguration),
|
||||
Metadata: map[string]string{
|
||||
"parseAllSources": "true",
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func runBugCmd(traefikConfiguration *TraefikConfiguration) func() error {
|
||||
return func() error {
|
||||
|
||||
body, err := createBugReport(traefikConfiguration)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
sendBugReport(body)
|
||||
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
func createBugReport(traefikConfiguration *TraefikConfiguration) (string, error) {
|
||||
var version bytes.Buffer
|
||||
if err := getVersionPrint(&version); err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
tmpl, err := template.New("bug").Parse(bugTemplate)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
config, err := anonymize.Do(traefikConfiguration, true)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
v := struct {
|
||||
Version string
|
||||
Configuration string
|
||||
}{
|
||||
Version: version.String(),
|
||||
Configuration: config,
|
||||
}
|
||||
|
||||
var bug bytes.Buffer
|
||||
if err := tmpl.Execute(&bug, v); err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
return bug.String(), nil
|
||||
}
|
||||
|
||||
func sendBugReport(body string) {
|
||||
URL := bugTracker + "?body=" + url.QueryEscape(body)
|
||||
if err := openBrowser(URL); err != nil {
|
||||
fmt.Printf("Please file a new issue at %s using this template:\n\n", bugTracker)
|
||||
fmt.Print(body)
|
||||
}
|
||||
}
|
||||
|
||||
func openBrowser(URL string) error {
|
||||
var err error
|
||||
switch runtime.GOOS {
|
||||
case "linux":
|
||||
err = exec.Command("xdg-open", URL).Start()
|
||||
case "windows":
|
||||
err = exec.Command("rundll32", "url.dll,FileProtocolHandler", URL).Start()
|
||||
case "darwin":
|
||||
err = exec.Command("open", URL).Start()
|
||||
default:
|
||||
err = fmt.Errorf("unsupported platform")
|
||||
}
|
||||
return err
|
||||
}
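// Flow recap, based on the functions above: `traefik bug` renders bugTemplate with the current
// version and the anonymized configuration, URL-encodes the result and tries to open a browser
// with a platform-specific command (xdg-open, rundll32 or open); if that fails it prints the
// pre-filled template to stdout instead. The resulting URL looks roughly like this
// (illustrative only, body truncated):
//
//	// https://github.com/containous/traefik/issues/new?body=%3C%21--%0ADO+NOT+FILE+...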
|
||||
@@ -1,66 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/containous/traefik/cmd/traefik/anonymize"
|
||||
"github.com/containous/traefik/configuration"
|
||||
"github.com/containous/traefik/provider/file"
|
||||
"github.com/containous/traefik/tls"
|
||||
"github.com/containous/traefik/types"
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
func Test_createBugReport(t *testing.T) {
|
||||
traefikConfiguration := &TraefikConfiguration{
|
||||
ConfigFile: "FOO",
|
||||
GlobalConfiguration: configuration.GlobalConfiguration{
|
||||
EntryPoints: configuration.EntryPoints{
|
||||
"goo": &configuration.EntryPoint{
|
||||
Address: "hoo.bar",
|
||||
Auth: &types.Auth{
|
||||
Basic: &types.Basic{
|
||||
UsersFile: "foo Basic UsersFile",
|
||||
Users: types.Users{"foo Basic Users 1", "foo Basic Users 2", "foo Basic Users 3"},
|
||||
},
|
||||
Digest: &types.Digest{
|
||||
UsersFile: "foo Digest UsersFile",
|
||||
Users: types.Users{"foo Digest Users 1", "foo Digest Users 2", "foo Digest Users 3"},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
File: &file.Provider{
|
||||
Directory: "BAR",
|
||||
},
|
||||
RootCAs: tls.RootCAs{"fllf"},
|
||||
},
|
||||
}
|
||||
|
||||
report, err := createBugReport(traefikConfiguration)
|
||||
assert.NoError(t, err, report)
|
||||
|
||||
// exported anonymous configuration
|
||||
assert.NotContains(t, "web Basic Users ", report)
|
||||
assert.NotContains(t, "foo Digest Users ", report)
|
||||
assert.NotContains(t, "hoo.bar", report)
|
||||
}
|
||||
|
||||
func Test_anonymize_traefikConfiguration(t *testing.T) {
|
||||
traefikConfiguration := &TraefikConfiguration{
|
||||
ConfigFile: "FOO",
|
||||
GlobalConfiguration: configuration.GlobalConfiguration{
|
||||
EntryPoints: configuration.EntryPoints{
|
||||
"goo": &configuration.EntryPoint{
|
||||
Address: "hoo.bar",
|
||||
},
|
||||
},
|
||||
File: &file.Provider{
|
||||
Directory: "BAR",
|
||||
},
|
||||
},
|
||||
}
|
||||
_, err := anonymize.Do(traefikConfiguration, true)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, "hoo.bar", traefikConfiguration.GlobalConfiguration.EntryPoints["goo"].Address)
|
||||
}
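// Worth noting from the assertion above: anonymize.Do(traefikConfiguration, true) returns the
// anonymized result as a string and, at least for this field, leaves the original configuration
// untouched (the entrypoint address is still "hoo.bar" afterwards).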
|
||||
@@ -1,297 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"time"
|
||||
|
||||
"github.com/containous/flaeg"
|
||||
"github.com/containous/traefik-extra-service-fabric"
|
||||
"github.com/containous/traefik/api"
|
||||
"github.com/containous/traefik/configuration"
|
||||
"github.com/containous/traefik/middlewares/accesslog"
|
||||
"github.com/containous/traefik/ping"
|
||||
"github.com/containous/traefik/provider/boltdb"
|
||||
"github.com/containous/traefik/provider/consul"
|
||||
"github.com/containous/traefik/provider/docker"
|
||||
"github.com/containous/traefik/provider/dynamodb"
|
||||
"github.com/containous/traefik/provider/ecs"
|
||||
"github.com/containous/traefik/provider/etcd"
|
||||
"github.com/containous/traefik/provider/eureka"
|
||||
"github.com/containous/traefik/provider/file"
|
||||
"github.com/containous/traefik/provider/kubernetes"
|
||||
"github.com/containous/traefik/provider/marathon"
|
||||
"github.com/containous/traefik/provider/mesos"
|
||||
"github.com/containous/traefik/provider/rancher"
|
||||
"github.com/containous/traefik/provider/rest"
|
||||
"github.com/containous/traefik/provider/zk"
|
||||
"github.com/containous/traefik/types"
|
||||
sf "github.com/jjcollinge/servicefabric"
|
||||
)
|
||||
|
||||
// TraefikConfiguration holds GlobalConfiguration and other stuff
|
||||
type TraefikConfiguration struct {
|
||||
configuration.GlobalConfiguration `mapstructure:",squash" export:"true"`
|
||||
ConfigFile string `short:"c" description:"Configuration file to use (TOML)." export:"true"`
|
||||
}
|
||||
|
||||
// NewTraefikDefaultPointersConfiguration creates a TraefikConfiguration with pointers default values
|
||||
func NewTraefikDefaultPointersConfiguration() *TraefikConfiguration {
|
||||
//default Docker
|
||||
var defaultDocker docker.Provider
|
||||
defaultDocker.Watch = true
|
||||
defaultDocker.ExposedByDefault = true
|
||||
defaultDocker.Endpoint = "unix:///var/run/docker.sock"
|
||||
defaultDocker.SwarmMode = false
|
||||
|
||||
// default File
|
||||
var defaultFile file.Provider
|
||||
defaultFile.Watch = true
|
||||
defaultFile.Filename = "" //needs equivalent to viper.ConfigFileUsed()
|
||||
|
||||
// default Rest
|
||||
var defaultRest rest.Provider
|
||||
defaultRest.EntryPoint = configuration.DefaultInternalEntryPointName
|
||||
|
||||
// TODO: Deprecated - Web provider, use REST provider instead
|
||||
var defaultWeb configuration.WebCompatibility
|
||||
defaultWeb.Address = ":8080"
|
||||
defaultWeb.Statistics = &types.Statistics{
|
||||
RecentErrors: 10,
|
||||
}
|
||||
|
||||
// TODO: Deprecated - default Metrics
|
||||
defaultWeb.Metrics = &types.Metrics{
|
||||
Prometheus: &types.Prometheus{
|
||||
Buckets: types.Buckets{0.1, 0.3, 1.2, 5},
|
||||
EntryPoint: configuration.DefaultInternalEntryPointName,
|
||||
},
|
||||
Datadog: &types.Datadog{
|
||||
Address: "localhost:8125",
|
||||
PushInterval: "10s",
|
||||
},
|
||||
StatsD: &types.Statsd{
|
||||
Address: "localhost:8125",
|
||||
PushInterval: "10s",
|
||||
},
|
||||
InfluxDB: &types.InfluxDB{
|
||||
Address: "localhost:8089",
|
||||
PushInterval: "10s",
|
||||
},
|
||||
}
|
||||
|
||||
// default Marathon
|
||||
var defaultMarathon marathon.Provider
|
||||
defaultMarathon.Watch = true
|
||||
defaultMarathon.Endpoint = "http://127.0.0.1:8080"
|
||||
defaultMarathon.ExposedByDefault = true
|
||||
defaultMarathon.Constraints = types.Constraints{}
|
||||
defaultMarathon.DialerTimeout = flaeg.Duration(60 * time.Second)
|
||||
defaultMarathon.KeepAlive = flaeg.Duration(10 * time.Second)
|
||||
|
||||
// default Consul
|
||||
var defaultConsul consul.Provider
|
||||
defaultConsul.Watch = true
|
||||
defaultConsul.Endpoint = "127.0.0.1:8500"
|
||||
defaultConsul.Prefix = "traefik"
|
||||
defaultConsul.Constraints = types.Constraints{}
|
||||
|
||||
// default CatalogProvider
|
||||
var defaultConsulCatalog consul.CatalogProvider
|
||||
defaultConsulCatalog.Endpoint = "127.0.0.1:8500"
|
||||
defaultConsulCatalog.ExposedByDefault = true
|
||||
defaultConsulCatalog.Constraints = types.Constraints{}
|
||||
defaultConsulCatalog.Prefix = "traefik"
|
||||
defaultConsulCatalog.FrontEndRule = "Host:{{.ServiceName}}.{{.Domain}}"
|
||||
|
||||
// default Etcd
|
||||
var defaultEtcd etcd.Provider
|
||||
defaultEtcd.Watch = true
|
||||
defaultEtcd.Endpoint = "127.0.0.1:2379"
|
||||
defaultEtcd.Prefix = "/traefik"
|
||||
defaultEtcd.Constraints = types.Constraints{}
|
||||
|
||||
//default Zookeeper
|
||||
var defaultZookeeper zk.Provider
|
||||
defaultZookeeper.Watch = true
|
||||
defaultZookeeper.Endpoint = "127.0.0.1:2181"
|
||||
defaultZookeeper.Prefix = "traefik"
|
||||
defaultZookeeper.Constraints = types.Constraints{}
|
||||
|
||||
//default Boltdb
|
||||
var defaultBoltDb boltdb.Provider
|
||||
defaultBoltDb.Watch = true
|
||||
defaultBoltDb.Endpoint = "127.0.0.1:4001"
|
||||
defaultBoltDb.Prefix = "/traefik"
|
||||
defaultBoltDb.Constraints = types.Constraints{}
|
||||
|
||||
//default Kubernetes
|
||||
var defaultKubernetes kubernetes.Provider
|
||||
defaultKubernetes.Watch = true
|
||||
defaultKubernetes.Endpoint = ""
|
||||
defaultKubernetes.LabelSelector = ""
|
||||
defaultKubernetes.Constraints = types.Constraints{}
|
||||
|
||||
// default Mesos
|
||||
var defaultMesos mesos.Provider
|
||||
defaultMesos.Watch = true
|
||||
defaultMesos.Endpoint = "http://127.0.0.1:5050"
|
||||
defaultMesos.ExposedByDefault = true
|
||||
defaultMesos.Constraints = types.Constraints{}
|
||||
defaultMesos.RefreshSeconds = 30
|
||||
defaultMesos.ZkDetectionTimeout = 30
|
||||
defaultMesos.StateTimeoutSecond = 30
|
||||
|
||||
//default ECS
|
||||
var defaultECS ecs.Provider
|
||||
defaultECS.Watch = true
|
||||
defaultECS.ExposedByDefault = true
|
||||
defaultECS.AutoDiscoverClusters = false
|
||||
defaultECS.Clusters = ecs.Clusters{"default"}
|
||||
defaultECS.RefreshSeconds = 15
|
||||
defaultECS.Constraints = types.Constraints{}
|
||||
|
||||
//default Rancher
|
||||
var defaultRancher rancher.Provider
|
||||
defaultRancher.Watch = true
|
||||
defaultRancher.ExposedByDefault = true
|
||||
defaultRancher.RefreshSeconds = 15
|
||||
|
||||
// default DynamoDB
|
||||
var defaultDynamoDB dynamodb.Provider
|
||||
defaultDynamoDB.Constraints = types.Constraints{}
|
||||
defaultDynamoDB.RefreshSeconds = 15
|
||||
defaultDynamoDB.TableName = "traefik"
|
||||
defaultDynamoDB.Watch = true
|
||||
|
||||
// default Eureka
|
||||
var defaultEureka eureka.Provider
|
||||
defaultEureka.Delay = "30s"
|
||||
|
||||
// default ServiceFabric
|
||||
var defaultServiceFabric servicefabric.Provider
|
||||
defaultServiceFabric.APIVersion = sf.DefaultAPIVersion
|
||||
defaultServiceFabric.RefreshSeconds = 10
|
||||
|
||||
// default Ping
|
||||
var defaultPing = ping.Handler{
|
||||
EntryPoint: "traefik",
|
||||
}
|
||||
|
||||
// default TraefikLog
|
||||
defaultTraefikLog := types.TraefikLog{
|
||||
Format: "common",
|
||||
FilePath: "",
|
||||
}
|
||||
|
||||
// default AccessLog
|
||||
defaultAccessLog := types.AccessLog{
|
||||
Format: accesslog.CommonFormat,
|
||||
FilePath: "",
|
||||
}
|
||||
|
||||
// default HealthCheckConfig
|
||||
healthCheck := configuration.HealthCheckConfig{
|
||||
Interval: flaeg.Duration(configuration.DefaultHealthCheckInterval),
|
||||
}
|
||||
|
||||
// default RespondingTimeouts
|
||||
respondingTimeouts := configuration.RespondingTimeouts{
|
||||
IdleTimeout: flaeg.Duration(configuration.DefaultIdleTimeout),
|
||||
}
|
||||
|
||||
// default ForwardingTimeouts
|
||||
forwardingTimeouts := configuration.ForwardingTimeouts{
|
||||
DialTimeout: flaeg.Duration(configuration.DefaultDialTimeout),
|
||||
}
|
||||
|
||||
// default LifeCycle
|
||||
defaultLifeCycle := configuration.LifeCycle{
|
||||
GraceTimeOut: flaeg.Duration(configuration.DefaultGraceTimeout),
|
||||
}
|
||||
|
||||
// default ApiConfiguration
|
||||
defaultAPI := api.Handler{
|
||||
EntryPoint: "traefik",
|
||||
Dashboard: true,
|
||||
}
|
||||
defaultAPI.Statistics = &types.Statistics{
|
||||
RecentErrors: 10,
|
||||
}
|
||||
|
||||
// default Metrics
|
||||
defaultMetrics := types.Metrics{
|
||||
Prometheus: &types.Prometheus{
|
||||
Buckets: types.Buckets{0.1, 0.3, 1.2, 5},
|
||||
EntryPoint: configuration.DefaultInternalEntryPointName,
|
||||
},
|
||||
Datadog: &types.Datadog{
|
||||
Address: "localhost:8125",
|
||||
PushInterval: "10s",
|
||||
},
|
||||
StatsD: &types.Statsd{
|
||||
Address: "localhost:8125",
|
||||
PushInterval: "10s",
|
||||
},
|
||||
InfluxDB: &types.InfluxDB{
|
||||
Address: "localhost:8089",
|
||||
PushInterval: "10s",
|
||||
},
|
||||
}
|
||||
|
||||
defaultConfiguration := configuration.GlobalConfiguration{
|
||||
Docker: &defaultDocker,
|
||||
File: &defaultFile,
|
||||
Web: &defaultWeb,
|
||||
Rest: &defaultRest,
|
||||
Marathon: &defaultMarathon,
|
||||
Consul: &defaultConsul,
|
||||
ConsulCatalog: &defaultConsulCatalog,
|
||||
Etcd: &defaultEtcd,
|
||||
Zookeeper: &defaultZookeeper,
|
||||
Boltdb: &defaultBoltDb,
|
||||
Kubernetes: &defaultKubernetes,
|
||||
Mesos: &defaultMesos,
|
||||
ECS: &defaultECS,
|
||||
Rancher: &defaultRancher,
|
||||
Eureka: &defaultEureka,
|
||||
DynamoDB: &defaultDynamoDB,
|
||||
Retry: &configuration.Retry{},
|
||||
HealthCheck: &healthCheck,
|
||||
RespondingTimeouts: &respondingTimeouts,
|
||||
ForwardingTimeouts: &forwardingTimeouts,
|
||||
TraefikLog: &defaultTraefikLog,
|
||||
AccessLog: &defaultAccessLog,
|
||||
LifeCycle: &defaultLifeCycle,
|
||||
Ping: &defaultPing,
|
||||
API: &defaultAPI,
|
||||
Metrics: &defaultMetrics,
|
||||
}
|
||||
|
||||
return &TraefikConfiguration{
|
||||
GlobalConfiguration: defaultConfiguration,
|
||||
}
|
||||
}
|
||||
|
||||
// NewTraefikConfiguration creates a TraefikConfiguration with default values
|
||||
func NewTraefikConfiguration() *TraefikConfiguration {
|
||||
return &TraefikConfiguration{
|
||||
GlobalConfiguration: configuration.GlobalConfiguration{
|
||||
AccessLogsFile: "",
|
||||
TraefikLogsFile: "",
|
||||
LogLevel: "ERROR",
|
||||
EntryPoints: map[string]*configuration.EntryPoint{},
|
||||
Constraints: types.Constraints{},
|
||||
DefaultEntryPoints: []string{"http"},
|
||||
ProvidersThrottleDuration: flaeg.Duration(2 * time.Second),
|
||||
MaxIdleConnsPerHost: 200,
|
||||
IdleTimeout: flaeg.Duration(0),
|
||||
HealthCheck: &configuration.HealthCheckConfig{
|
||||
Interval: flaeg.Duration(configuration.DefaultHealthCheckInterval),
|
||||
},
|
||||
LifeCycle: &configuration.LifeCycle{
|
||||
GraceTimeOut: flaeg.Duration(configuration.DefaultGraceTimeout),
|
||||
},
|
||||
CheckNewVersion: true,
|
||||
},
|
||||
ConfigFile: "",
|
||||
}
|
||||
}
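// Cross-check with the anonymized JSON dump near the top of this diff (values inferred from the
// defaults above): ProvidersThrottleDuration of 2 * time.Second serializes as nanoseconds, which
// matches the 2000000000 seen in the dump, e.g.
//
//	flaeg.Duration(2 * time.Second) // serialized as 2000000000 in the JSON output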
|
||||
@@ -1,71 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"crypto/tls"
|
||||
"errors"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"os"
|
||||
"time"
|
||||
|
||||
"github.com/containous/flaeg"
|
||||
"github.com/containous/traefik/configuration"
|
||||
)
|
||||
|
||||
func newHealthCheckCmd(traefikConfiguration *TraefikConfiguration, traefikPointersConfiguration *TraefikConfiguration) *flaeg.Command {
|
||||
return &flaeg.Command{
|
||||
Name: "healthcheck",
|
||||
Description: `Calls traefik /ping to check the health of traefik (the ping handler must be enabled)`,
|
||||
Config: traefikConfiguration,
|
||||
DefaultPointersConfig: traefikPointersConfiguration,
|
||||
Run: runHealthCheck(traefikConfiguration),
|
||||
Metadata: map[string]string{
|
||||
"parseAllSources": "true",
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func runHealthCheck(traefikConfiguration *TraefikConfiguration) func() error {
|
||||
return func() error {
|
||||
traefikConfiguration.GlobalConfiguration.SetEffectiveConfiguration(traefikConfiguration.ConfigFile)
|
||||
|
||||
resp, errPing := healthCheck(traefikConfiguration.GlobalConfiguration)
|
||||
if errPing != nil {
|
||||
fmt.Printf("Error calling healthcheck: %s\n", errPing)
|
||||
os.Exit(1)
|
||||
}
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
fmt.Printf("Bad healthcheck status: %s\n", resp.Status)
|
||||
os.Exit(1)
|
||||
}
|
||||
fmt.Printf("OK: %s\n", resp.Request.URL)
|
||||
os.Exit(0)
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
func healthCheck(globalConfiguration configuration.GlobalConfiguration) (*http.Response, error) {
|
||||
if globalConfiguration.Ping == nil {
|
||||
return nil, errors.New("please enable `ping` to use health check")
|
||||
}
|
||||
|
||||
pingEntryPoint, ok := globalConfiguration.EntryPoints[globalConfiguration.Ping.EntryPoint]
|
||||
if !ok {
|
||||
return nil, errors.New("missing `ping` entrypoint")
|
||||
}
|
||||
|
||||
client := &http.Client{Timeout: 5 * time.Second}
|
||||
protocol := "http"
|
||||
if pingEntryPoint.TLS != nil {
|
||||
protocol = "https"
|
||||
tr := &http.Transport{
|
||||
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
|
||||
}
|
||||
client.Transport = tr
|
||||
}
|
||||
path := "/"
|
||||
if globalConfiguration.Web != nil {
|
||||
path = globalConfiguration.Web.Path
|
||||
}
|
||||
return client.Head(protocol + "://" + pingEntryPoint.Address + path + "ping")
|
||||
}
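// Sketch of the resulting request, inferred from healthCheck above: the scheme follows the ping
// entrypoint's TLS setting and the path defaults to "/" unless the web provider sets one, so with
// an entrypoint address of ":8080" the probe is roughly:
//
//	resp, err := client.Head("http://:8080/ping")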
|
||||
@@ -1,145 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
stdlog "log"
|
||||
|
||||
"github.com/containous/flaeg"
|
||||
"github.com/containous/staert"
|
||||
"github.com/containous/traefik/acme"
|
||||
"github.com/containous/traefik/cluster"
|
||||
"github.com/docker/libkv/store"
|
||||
)
|
||||
|
||||
func newStoreConfigCmd(traefikConfiguration *TraefikConfiguration, traefikPointersConfiguration *TraefikConfiguration) *flaeg.Command {
|
||||
return &flaeg.Command{
|
||||
Name: "storeconfig",
|
||||
Description: `Store the static traefik configuration into a Key-value store. Traefik will not start.`,
|
||||
Config: traefikConfiguration,
|
||||
DefaultPointersConfig: traefikPointersConfiguration,
|
||||
Metadata: map[string]string{
|
||||
"parseAllSources": "true",
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func runStoreConfig(kv *staert.KvSource, traefikConfiguration *TraefikConfiguration) func() error {
|
||||
return func() error {
|
||||
if kv == nil {
|
||||
return fmt.Errorf("error using command storeconfig, no Key-value store defined")
|
||||
}
|
||||
|
||||
fileConfig := traefikConfiguration.GlobalConfiguration.File
|
||||
if fileConfig != nil {
|
||||
traefikConfiguration.GlobalConfiguration.File = nil
|
||||
if len(fileConfig.Filename) == 0 && len(fileConfig.Directory) == 0 {
|
||||
fileConfig.Filename = traefikConfiguration.ConfigFile
|
||||
}
|
||||
}
|
||||
|
||||
jsonConf, err := json.Marshal(traefikConfiguration.GlobalConfiguration)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
stdlog.Printf("Storing configuration: %s\n", jsonConf)
|
||||
|
||||
err = kv.StoreConfig(traefikConfiguration.GlobalConfiguration)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if fileConfig != nil {
|
||||
jsonConf, err = json.Marshal(fileConfig)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
stdlog.Printf("Storing file configuration: %s\n", jsonConf)
|
||||
config, err := fileConfig.LoadConfig()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
stdlog.Print("Writing config to KV")
|
||||
err = kv.StoreConfig(config)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if traefikConfiguration.GlobalConfiguration.ACME != nil {
|
||||
var object cluster.Object
|
||||
if len(traefikConfiguration.GlobalConfiguration.ACME.StorageFile) > 0 {
|
||||
// convert ACME json file to KV store
|
||||
localStore := acme.NewLocalStore(traefikConfiguration.GlobalConfiguration.ACME.StorageFile)
|
||||
object, err = localStore.Load()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
} else {
|
||||
// Create an empty account to create all the keys into the KV store
|
||||
account := &acme.Account{}
|
||||
account.Init()
|
||||
object = account
|
||||
}
|
||||
|
||||
meta := cluster.NewMetadata(object)
|
||||
err = meta.Marshall()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
source := staert.KvSource{
|
||||
Store: kv,
|
||||
Prefix: traefikConfiguration.GlobalConfiguration.ACME.Storage,
|
||||
}
|
||||
err = source.StoreConfig(meta)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// Force deletion of the storage file key
|
||||
err = kv.Delete(kv.Prefix + "/acme/storagefile")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// createKvSource creates a KvSource
// TLS support is enabled for the Consul and Etcd backends
|
||||
func createKvSource(traefikConfiguration *TraefikConfiguration) (*staert.KvSource, error) {
|
||||
var kv *staert.KvSource
|
||||
var kvStore store.Store
|
||||
var err error
|
||||
|
||||
switch {
|
||||
case traefikConfiguration.Consul != nil:
|
||||
kvStore, err = traefikConfiguration.Consul.CreateStore()
|
||||
kv = &staert.KvSource{
|
||||
Store: kvStore,
|
||||
Prefix: traefikConfiguration.Consul.Prefix,
|
||||
}
|
||||
case traefikConfiguration.Etcd != nil:
|
||||
kvStore, err = traefikConfiguration.Etcd.CreateStore()
|
||||
kv = &staert.KvSource{
|
||||
Store: kvStore,
|
||||
Prefix: traefikConfiguration.Etcd.Prefix,
|
||||
}
|
||||
case traefikConfiguration.Zookeeper != nil:
|
||||
kvStore, err = traefikConfiguration.Zookeeper.CreateStore()
|
||||
kv = &staert.KvSource{
|
||||
Store: kvStore,
|
||||
Prefix: traefikConfiguration.Zookeeper.Prefix,
|
||||
}
|
||||
case traefikConfiguration.Boltdb != nil:
|
||||
kvStore, err = traefikConfiguration.Boltdb.CreateStore()
|
||||
kv = &staert.KvSource{
|
||||
Store: kvStore,
|
||||
Prefix: traefikConfiguration.Boltdb.Prefix,
|
||||
}
|
||||
}
|
||||
return kv, err
|
||||
}
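// For reference (inferred from the switch above): the first configured KV backend wins, in the
// order Consul, Etcd, Zookeeper, Boltdb, and the returned source simply pairs the created store
// with that provider's prefix, e.g. with Consul enabled:
//
//	kv = &staert.KvSource{Store: kvStore, Prefix: traefikConfiguration.Consul.Prefix}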
|
||||
@@ -1,290 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
fmtlog "log"
|
||||
"net/http"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"reflect"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
"github.com/cenk/backoff"
|
||||
"github.com/containous/flaeg"
|
||||
"github.com/containous/staert"
|
||||
"github.com/containous/traefik/acme"
|
||||
"github.com/containous/traefik/collector"
|
||||
"github.com/containous/traefik/configuration"
|
||||
"github.com/containous/traefik/job"
|
||||
"github.com/containous/traefik/log"
|
||||
"github.com/containous/traefik/provider/ecs"
|
||||
"github.com/containous/traefik/provider/kubernetes"
|
||||
"github.com/containous/traefik/safe"
|
||||
"github.com/containous/traefik/server"
|
||||
"github.com/containous/traefik/server/uuid"
|
||||
traefikTls "github.com/containous/traefik/tls"
|
||||
"github.com/containous/traefik/types"
|
||||
"github.com/containous/traefik/version"
|
||||
"github.com/coreos/go-systemd/daemon"
|
||||
"github.com/ogier/pflag"
|
||||
)
|
||||
|
||||
func main() {
|
||||
//traefik config inits
|
||||
traefikConfiguration := NewTraefikConfiguration()
|
||||
traefikPointersConfiguration := NewTraefikDefaultPointersConfiguration()
|
||||
//traefik Command init
|
||||
traefikCmd := &flaeg.Command{
|
||||
Name: "traefik",
|
||||
Description: `traefik is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
|
||||
Complete documentation is available at https://traefik.io`,
|
||||
Config: traefikConfiguration,
|
||||
DefaultPointersConfig: traefikPointersConfiguration,
|
||||
Run: func() error {
|
||||
run(&traefikConfiguration.GlobalConfiguration, traefikConfiguration.ConfigFile)
|
||||
return nil
|
||||
},
|
||||
}
|
||||
|
||||
//storeconfig Command init
|
||||
storeConfigCmd := newStoreConfigCmd(traefikConfiguration, traefikPointersConfiguration)
|
||||
|
||||
//init flaeg source
|
||||
f := flaeg.New(traefikCmd, os.Args[1:])
|
||||
//add custom parsers
|
||||
f.AddParser(reflect.TypeOf(configuration.EntryPoints{}), &configuration.EntryPoints{})
|
||||
f.AddParser(reflect.TypeOf(configuration.DefaultEntryPoints{}), &configuration.DefaultEntryPoints{})
|
||||
f.AddParser(reflect.TypeOf(traefikTls.RootCAs{}), &traefikTls.RootCAs{})
|
||||
f.AddParser(reflect.TypeOf(types.Constraints{}), &types.Constraints{})
|
||||
f.AddParser(reflect.TypeOf(kubernetes.Namespaces{}), &kubernetes.Namespaces{})
|
||||
f.AddParser(reflect.TypeOf(ecs.Clusters{}), &ecs.Clusters{})
|
||||
f.AddParser(reflect.TypeOf([]acme.Domain{}), &acme.Domains{})
|
||||
f.AddParser(reflect.TypeOf(types.Buckets{}), &types.Buckets{})
|
||||
|
||||
//add commands
|
||||
f.AddCommand(newVersionCmd())
|
||||
f.AddCommand(newBugCmd(traefikConfiguration, traefikPointersConfiguration))
|
||||
f.AddCommand(storeConfigCmd)
|
||||
f.AddCommand(newHealthCheckCmd(traefikConfiguration, traefikPointersConfiguration))
|
||||
|
||||
usedCmd, err := f.GetCommand()
|
||||
if err != nil {
|
||||
fmtlog.Println(err)
|
||||
os.Exit(-1)
|
||||
}
|
||||
|
||||
if _, err := f.Parse(usedCmd); err != nil {
|
||||
if err == pflag.ErrHelp {
|
||||
os.Exit(0)
|
||||
}
|
||||
fmtlog.Printf("Error parsing command: %s\n", err)
|
||||
os.Exit(-1)
|
||||
}
|
||||
|
||||
//staert init
|
||||
s := staert.NewStaert(traefikCmd)
|
||||
//init toml source
|
||||
toml := staert.NewTomlSource("traefik", []string{traefikConfiguration.ConfigFile, "/etc/traefik/", "$HOME/.traefik/", "."})
|
||||
|
||||
//add sources to staert
|
||||
s.AddSource(toml)
|
||||
s.AddSource(f)
|
||||
if _, err := s.LoadConfig(); err != nil {
|
||||
fmtlog.Printf("Error reading TOML config file %s : %s\n", toml.ConfigFileUsed(), err)
|
||||
os.Exit(-1)
|
||||
}
|
||||
|
||||
traefikConfiguration.ConfigFile = toml.ConfigFileUsed()
|
||||
|
||||
kv, err := createKvSource(traefikConfiguration)
|
||||
if err != nil {
|
||||
fmtlog.Printf("Error creating kv store: %s\n", err)
|
||||
os.Exit(-1)
|
||||
}
|
||||
storeConfigCmd.Run = runStoreConfig(kv, traefikConfiguration)
|
||||
|
||||
// If a KV store is enabled and no sub-command is called in args
|
||||
if kv != nil && usedCmd == traefikCmd {
|
||||
if traefikConfiguration.Cluster == nil {
|
||||
traefikConfiguration.Cluster = &types.Cluster{Node: uuid.Get()}
|
||||
}
|
||||
if traefikConfiguration.Cluster.Store == nil {
|
||||
traefikConfiguration.Cluster.Store = &types.Store{Prefix: kv.Prefix, Store: kv.Store}
|
||||
}
|
||||
s.AddSource(kv)
|
||||
operation := func() error {
|
||||
_, err := s.LoadConfig()
|
||||
return err
|
||||
}
|
||||
notify := func(err error, time time.Duration) {
|
||||
log.Errorf("Load config error: %+v, retrying in %s", err, time)
|
||||
}
|
||||
err := backoff.RetryNotify(safe.OperationWithRecover(operation), job.NewBackOff(backoff.NewExponentialBackOff()), notify)
|
||||
if err != nil {
|
||||
fmtlog.Printf("Error loading configuration: %s\n", err)
|
||||
os.Exit(-1)
|
||||
}
|
||||
}
|
||||
|
||||
if err := s.Run(); err != nil {
|
||||
fmtlog.Printf("Error running traefik: %s\n", err)
|
||||
os.Exit(-1)
|
||||
}
|
||||
|
||||
os.Exit(0)
|
||||
}
|
||||
|
||||
func run(globalConfiguration *configuration.GlobalConfiguration, configFile string) {
|
||||
configureLogging(globalConfiguration)
|
||||
|
||||
if len(configFile) > 0 {
|
||||
log.Infof("Using TOML configuration file %s", configFile)
|
||||
}
|
||||
|
||||
http.DefaultTransport.(*http.Transport).Proxy = http.ProxyFromEnvironment
|
||||
|
||||
globalConfiguration.SetEffectiveConfiguration(configFile)
|
||||
globalConfiguration.ValidateConfiguration()
|
||||
|
||||
jsonConf, _ := json.Marshal(globalConfiguration)
|
||||
log.Infof("Traefik version %s built on %s", version.Version, version.BuildDate)
|
||||
|
||||
if globalConfiguration.CheckNewVersion {
|
||||
checkNewVersion()
|
||||
}
|
||||
|
||||
stats(globalConfiguration)
|
||||
|
||||
log.Debugf("Global configuration loaded %s", string(jsonConf))
|
||||
svr := server.NewServer(*globalConfiguration)
|
||||
svr.Start()
|
||||
defer svr.Close()
|
||||
|
||||
sent, err := daemon.SdNotify(false, "READY=1")
|
||||
if !sent && err != nil {
|
||||
log.Error("Fail to notify", err)
|
||||
}
|
||||
|
||||
t, err := daemon.SdWatchdogEnabled(false)
|
||||
if err != nil {
|
||||
log.Error("Problem with watchdog", err)
|
||||
} else if t != 0 {
|
||||
// Send a ping every half of the watchdog interval
|
||||
t = t / 2
|
||||
log.Info("Watchdog activated with timer each ", t)
|
||||
safe.Go(func() {
|
||||
tick := time.Tick(t)
|
||||
for range tick {
|
||||
_, errHealthCheck := healthCheck(*globalConfiguration)
|
||||
if globalConfiguration.Ping == nil || errHealthCheck == nil {
|
||||
if ok, _ := daemon.SdNotify(false, "WATCHDOG=1"); !ok {
|
||||
log.Error("Fail to tick watchdog")
|
||||
}
|
||||
} else {
|
||||
log.Error(errHealthCheck)
|
||||
}
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
svr.Wait()
|
||||
log.Info("Shutting down")
|
||||
logrus.Exit(0)
|
||||
}
|
||||
|
||||
func configureLogging(globalConfiguration *configuration.GlobalConfiguration) {
|
||||
// configure default log flags
|
||||
fmtlog.SetFlags(fmtlog.Lshortfile | fmtlog.LstdFlags)
|
||||
|
||||
if globalConfiguration.Debug {
|
||||
globalConfiguration.LogLevel = "DEBUG"
|
||||
}
|
||||
|
||||
// configure log level
|
||||
level, err := logrus.ParseLevel(strings.ToLower(globalConfiguration.LogLevel))
|
||||
if err != nil {
|
||||
log.Error("Error getting level", err)
|
||||
}
|
||||
log.SetLevel(level)
|
||||
|
||||
// configure log output file
|
||||
logFile := globalConfiguration.TraefikLogsFile
|
||||
if len(logFile) > 0 {
|
||||
log.Warn("top-level traefikLogsFile has been deprecated -- please use traefiklog.filepath")
|
||||
}
|
||||
if globalConfiguration.TraefikLog != nil && len(globalConfiguration.TraefikLog.FilePath) > 0 {
|
||||
logFile = globalConfiguration.TraefikLog.FilePath
|
||||
}
|
||||
|
||||
// configure log format
|
||||
var formatter logrus.Formatter
|
||||
if globalConfiguration.TraefikLog != nil && globalConfiguration.TraefikLog.Format == "json" {
|
||||
formatter = &logrus.JSONFormatter{}
|
||||
} else {
|
||||
disableColors := false
|
||||
if len(logFile) > 0 {
|
||||
disableColors = true
|
||||
}
|
||||
formatter = &logrus.TextFormatter{DisableColors: disableColors, FullTimestamp: true, DisableSorting: true}
|
||||
}
|
||||
log.SetFormatter(formatter)
|
||||
|
||||
if len(logFile) > 0 {
|
||||
dir := filepath.Dir(logFile)
|
||||
|
||||
err := os.MkdirAll(dir, 0755)
|
||||
if err != nil {
|
||||
log.Errorf("Failed to create log path %s: %s", dir, err)
|
||||
}
|
||||
|
||||
err = log.OpenFile(logFile)
|
||||
logrus.RegisterExitHandler(func() {
|
||||
if err := log.CloseFile(); err != nil {
|
||||
log.Error("Error closing log", err)
|
||||
}
|
||||
})
|
||||
if err != nil {
|
||||
log.Error("Error opening file", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func checkNewVersion() {
|
||||
ticker := time.Tick(24 * time.Hour)
|
||||
safe.Go(func() {
|
||||
for time.Sleep(10 * time.Minute); ; <-ticker {
|
||||
version.CheckNewVersion()
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func stats(globalConfiguration *configuration.GlobalConfiguration) {
|
||||
if globalConfiguration.SendAnonymousUsage {
|
||||
log.Info(`
|
||||
Stats collection is enabled.
|
||||
Many thanks for contributing to Traefik's improvement by allowing us to receive anonymous information from your configuration.
|
||||
Help us improve Traefik by leaving this feature on :)
|
||||
More details on: https://docs.traefik.io/basics/#collected-data
|
||||
`)
|
||||
collect(globalConfiguration)
|
||||
} else {
|
||||
log.Info(`
|
||||
Stats collection is disabled.
|
||||
Help us improve Traefik by turning this feature on :)
|
||||
More details on: https://docs.traefik.io/basics/#collected-data
|
||||
`)
|
||||
}
|
||||
}
|
||||
|
||||
func collect(globalConfiguration *configuration.GlobalConfiguration) {
|
||||
ticker := time.Tick(24 * time.Hour)
|
||||
safe.Go(func() {
|
||||
for time.Sleep(10 * time.Minute); ; <-ticker {
|
||||
if err := collector.Collect(globalConfiguration); err != nil {
|
||||
log.Debug(err)
|
||||
}
|
||||
}
|
||||
})
|
||||
}
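// The loop header above is a compact Go idiom: time.Sleep(10 * time.Minute) runs once as the
// for-loop's init statement, so the first collection happens ten minutes after startup, and every
// later iteration blocks on <-ticker, i.e. once per 24-hour tick. checkNewVersion uses the same
// pattern.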
|
||||
@@ -1,63 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"runtime"
|
||||
"text/template"
|
||||
|
||||
"github.com/containous/flaeg"
|
||||
"github.com/containous/traefik/version"
|
||||
)
|
||||
|
||||
var versionTemplate = `Version: {{.Version}}
|
||||
Codename: {{.Codename}}
|
||||
Go version: {{.GoVersion}}
|
||||
Built: {{.BuildTime}}
|
||||
OS/Arch: {{.Os}}/{{.Arch}}`
|
||||
|
||||
// newVersionCmd builds a new Version command
|
||||
func newVersionCmd() *flaeg.Command {
|
||||
|
||||
//version Command init
|
||||
return &flaeg.Command{
|
||||
Name: "version",
|
||||
Description: `Print version`,
|
||||
Config: struct{}{},
|
||||
DefaultPointersConfig: struct{}{},
|
||||
Run: func() error {
|
||||
if err := getVersionPrint(os.Stdout); err != nil {
|
||||
return err
|
||||
}
|
||||
fmt.Print("\n")
|
||||
return nil
|
||||
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func getVersionPrint(wr io.Writer) error {
|
||||
tmpl, err := template.New("").Parse(versionTemplate)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
v := struct {
|
||||
Version string
|
||||
Codename string
|
||||
GoVersion string
|
||||
BuildTime string
|
||||
Os string
|
||||
Arch string
|
||||
}{
|
||||
Version: version.Version,
|
||||
Codename: version.Codename,
|
||||
GoVersion: runtime.Version(),
|
||||
BuildTime: version.BuildDate,
|
||||
Os: runtime.GOOS,
|
||||
Arch: runtime.GOARCH,
|
||||
}
|
||||
|
||||
return tmpl.Execute(wr, v)
|
||||
}
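// Example of the rendered output of `traefik version` (placeholder values, format taken from
// versionTemplate above):
//
//	Version: x.y.z
//	Codename: <codename>
//	Go version: go1.9.x
//	Built: <build date>
//	OS/Arch: linux/amd64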
|
||||
@@ -1,79 +0,0 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/base64"
|
||||
"encoding/json"
|
||||
"net"
|
||||
"net/http"
|
||||
"strconv"
|
||||
"time"
|
||||
|
||||
"github.com/containous/traefik/cmd/traefik/anonymize"
|
||||
"github.com/containous/traefik/configuration"
|
||||
"github.com/containous/traefik/log"
|
||||
"github.com/containous/traefik/version"
|
||||
"github.com/mitchellh/hashstructure"
|
||||
)
|
||||
|
||||
// collectorURL is the URL where the stats are sent
|
||||
const collectorURL = "https://collect.traefik.io/619df80498b60f985d766ce62f912b7c"
|
||||
|
||||
// Collected data
|
||||
type data struct {
|
||||
Version string
|
||||
Codename string
|
||||
BuildDate string
|
||||
Configuration string
|
||||
Hash string
|
||||
}
|
||||
|
||||
// Collect anonymous data.
|
||||
func Collect(globalConfiguration *configuration.GlobalConfiguration) error {
|
||||
anonConfig, err := anonymize.Do(globalConfiguration, false)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
log.Infof("Anonymous stats sent to %s: %s", collectorURL, anonConfig)
|
||||
|
||||
hashConf, err := hashstructure.Hash(globalConfiguration, nil)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
data := &data{
|
||||
Version: version.Version,
|
||||
Codename: version.Codename,
|
||||
BuildDate: version.BuildDate,
|
||||
Hash: strconv.FormatUint(hashConf, 10),
|
||||
Configuration: base64.StdEncoding.EncodeToString([]byte(anonConfig)),
|
||||
}
|
||||
|
||||
buf := new(bytes.Buffer)
|
||||
err = json.NewEncoder(buf).Encode(data)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
_, err = makeHTTPClient().Post(collectorURL, "application/json; charset=utf-8", buf)
|
||||
return err
|
||||
}
|
||||
|
||||
func makeHTTPClient() *http.Client {
|
||||
dialer := &net.Dialer{
|
||||
Timeout: configuration.DefaultDialTimeout,
|
||||
KeepAlive: 30 * time.Second,
|
||||
DualStack: true,
|
||||
}
|
||||
|
||||
transport := &http.Transport{
|
||||
Proxy: http.ProxyFromEnvironment,
|
||||
DialContext: dialer.DialContext,
|
||||
IdleConnTimeout: 90 * time.Second,
|
||||
TLSHandshakeTimeout: 10 * time.Second,
|
||||
ExpectContinueTimeout: 1 * time.Second,
|
||||
}
|
||||
|
||||
return &http.Client{Transport: transport}
|
||||
}
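// Payload summary (derived from Collect above): a single JSON object carrying Version, Codename,
// BuildDate, a hash of the full configuration and the anonymized configuration itself,
// base64-encoded, POSTed to collectorURL with content type "application/json; charset=utf-8".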
|
||||
419
configuration.go
Normal file
@@ -0,0 +1,419 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"crypto/tls"
|
||||
"errors"
|
||||
"fmt"
|
||||
"os"
|
||||
"regexp"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/containous/traefik/acme"
|
||||
"github.com/containous/traefik/provider"
|
||||
"github.com/containous/traefik/types"
|
||||
)
|
||||
|
||||
// TraefikConfiguration holds GlobalConfiguration and other stuff
|
||||
type TraefikConfiguration struct {
|
||||
GlobalConfiguration `mapstructure:",squash"`
|
||||
ConfigFile string `short:"c" description:"Configuration file to use (TOML)."`
|
||||
}
|
||||
|
||||
// GlobalConfiguration holds global configuration (with providers, etc.).
|
||||
// It's populated from the traefik configuration file passed as an argument to the binary.
|
||||
type GlobalConfiguration struct {
|
||||
GraceTimeOut int64 `short:"g" description:"Duration to give active requests a chance to finish during hot-reload"`
|
||||
Debug bool `short:"d" description:"Enable debug mode"`
|
||||
CheckNewVersion bool `description:"Periodically check if a new version has been released"`
|
||||
AccessLogsFile string `description:"Access logs file"`
|
||||
TraefikLogsFile string `description:"Traefik logs file"`
|
||||
LogLevel string `short:"l" description:"Log level"`
|
||||
EntryPoints EntryPoints `description:"Entrypoints definition using format: --entryPoints='Name:http Address::8000 Redirect.EntryPoint:https' --entryPoints='Name:https Address::4442 TLS:tests/traefik.crt,tests/traefik.key;prod/traefik.crt,prod/traefik.key'"`
|
||||
Cluster *types.Cluster `description:"Enable clustering"`
|
||||
Constraints types.Constraints `description:"Filter services by constraint, matching with service tags"`
|
||||
ACME *acme.ACME `description:"Enable ACME (Let's Encrypt): automatic SSL"`
|
||||
DefaultEntryPoints DefaultEntryPoints `description:"Entrypoints to be used by frontends that do not specify any entrypoint"`
|
||||
ProvidersThrottleDuration time.Duration `description:"Backends throttle duration: minimum duration between 2 events from providers before applying a new configuration. It avoids unnecessary reloads if multiples events are sent in a short amount of time."`
|
||||
MaxIdleConnsPerHost int `description:"If non-zero, controls the maximum idle (keep-alive) to keep per-host. If zero, DefaultMaxIdleConnsPerHost is used"`
|
||||
InsecureSkipVerify bool `description:"Disable SSL certificate verification"`
|
||||
Retry *Retry `description:"Enable retry sending request if network error"`
|
||||
Docker *provider.Docker `description:"Enable Docker backend"`
|
||||
File *provider.File `description:"Enable File backend"`
|
||||
Web *WebProvider `description:"Enable Web backend"`
|
||||
Marathon *provider.Marathon `description:"Enable Marathon backend"`
|
||||
Consul *provider.Consul `description:"Enable Consul backend"`
|
||||
ConsulCatalog *provider.ConsulCatalog `description:"Enable Consul catalog backend"`
|
||||
Etcd *provider.Etcd `description:"Enable Etcd backend"`
|
||||
Zookeeper *provider.Zookepper `description:"Enable Zookeeper backend"`
|
||||
Boltdb *provider.BoltDb `description:"Enable Boltdb backend"`
|
||||
Kubernetes *provider.Kubernetes `description:"Enable Kubernetes backend"`
|
||||
Mesos *provider.Mesos `description:"Enable Mesos backend"`
|
||||
}
|
||||
|
||||
// DefaultEntryPoints holds default entry points
|
||||
type DefaultEntryPoints []string
|
||||
|
||||
// String is the method to format the flag's value, part of the flag.Value interface.
|
||||
// The String method's output will be used in diagnostics.
|
||||
func (dep *DefaultEntryPoints) String() string {
|
||||
return strings.Join(*dep, ",")
|
||||
}
|
||||
|
||||
// Set is the method to set the flag value, part of the flag.Value interface.
|
||||
// Set's argument is a string to be parsed to set the flag.
|
||||
// It's a comma-separated list, so we split it.
|
||||
func (dep *DefaultEntryPoints) Set(value string) error {
|
||||
entrypoints := strings.Split(value, ",")
|
||||
if len(entrypoints) == 0 {
|
||||
return errors.New("Bad DefaultEntryPoints format: " + value)
|
||||
}
|
||||
for _, entrypoint := range entrypoints {
|
||||
*dep = append(*dep, entrypoint)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Get return the EntryPoints map
|
||||
func (dep *DefaultEntryPoints) Get() interface{} {
|
||||
return DefaultEntryPoints(*dep)
|
||||
}
|
||||
|
||||
// SetValue sets the EntryPoints map with val
|
||||
func (dep *DefaultEntryPoints) SetValue(val interface{}) {
|
||||
*dep = DefaultEntryPoints(val.(DefaultEntryPoints))
|
||||
}
|
||||
|
||||
// Type is type of the struct
|
||||
func (dep *DefaultEntryPoints) Type() string {
|
||||
return fmt.Sprint("defaultentrypoints")
|
||||
}
|
||||
|
||||
// EntryPoints holds entry points configuration of the reverse proxy (ip, port, TLS...)
|
||||
type EntryPoints map[string]*EntryPoint
|
||||
|
||||
// String is the method to format the flag's value, part of the flag.Value interface.
|
||||
// The String method's output will be used in diagnostics.
|
||||
func (ep *EntryPoints) String() string {
|
||||
return fmt.Sprintf("%+v", *ep)
|
||||
}
|
||||
|
||||
// Set is the method to set the flag value, part of the flag.Value interface.
|
||||
// Set's argument is a string to be parsed to set the flag.
|
||||
// It's a comma-separated list, so we split it.
|
||||
func (ep *EntryPoints) Set(value string) error {
|
||||
regex := regexp.MustCompile("(?:Name:(?P<Name>\\S*))\\s*(?:Address:(?P<Address>\\S*))?\\s*(?:TLS:(?P<TLS>\\S*))?\\s*((?P<TLSACME>TLS))?\\s*(?:CA:(?P<CA>\\S*))?\\s*(?:Redirect.EntryPoint:(?P<RedirectEntryPoint>\\S*))?\\s*(?:Redirect.Regex:(?P<RedirectRegex>\\S*))?\\s*(?:Redirect.Replacement:(?P<RedirectReplacement>\\S*))?\\s*(?:Compress:(?P<Compress>\\S*))?")
|
||||
match := regex.FindAllStringSubmatch(value, -1)
|
||||
if match == nil {
|
||||
return errors.New("Bad EntryPoints format: " + value)
|
||||
}
|
||||
matchResult := match[0]
|
||||
result := make(map[string]string)
|
||||
for i, name := range regex.SubexpNames() {
|
||||
if i != 0 {
|
||||
result[name] = matchResult[i]
|
||||
}
|
||||
}
|
||||
var tls *TLS
|
||||
if len(result["TLS"]) > 0 {
|
||||
certs := Certificates{}
|
||||
if err := certs.Set(result["TLS"]); err != nil {
|
||||
return err
|
||||
}
|
||||
tls = &TLS{
|
||||
Certificates: certs,
|
||||
}
|
||||
} else if len(result["TLSACME"]) > 0 {
|
||||
tls = &TLS{
|
||||
Certificates: Certificates{},
|
||||
}
|
||||
}
|
||||
if len(result["CA"]) > 0 {
|
||||
files := strings.Split(result["CA"], ",")
|
||||
tls.ClientCAFiles = files
|
||||
}
|
||||
var redirect *Redirect
|
||||
if len(result["RedirectEntryPoint"]) > 0 || len(result["RedirectRegex"]) > 0 || len(result["RedirectReplacement"]) > 0 {
|
||||
redirect = &Redirect{
|
||||
EntryPoint: result["RedirectEntryPoint"],
|
||||
Regex: result["RedirectRegex"],
|
||||
Replacement: result["RedirectReplacement"],
|
||||
}
|
||||
}
|
||||
|
||||
compress := false
|
||||
if len(result["Compress"]) > 0 {
|
||||
compress = strings.EqualFold(result["Compress"], "enable") || strings.EqualFold(result["Compress"], "on")
|
||||
}
|
||||
|
||||
(*ep)[result["Name"]] = &EntryPoint{
|
||||
Address: result["Address"],
|
||||
TLS: tls,
|
||||
Redirect: redirect,
|
||||
Compress: compress,
|
||||
}
|
||||
|
||||
return nil
|
||||
}
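// A hedged example of the flag format this parser accepts, matching the description tag on
// EntryPoints above (the file paths are placeholders):
//
//	eps := make(EntryPoints)
//	_ = eps.Set("Name:https Address::443 TLS:tests/traefik.crt,tests/traefik.key Redirect.EntryPoint:http Compress:on")
//	// eps["https"] => &EntryPoint{Address: ":443", TLS: <one certificate>, Redirect: &Redirect{EntryPoint: "http"}, Compress: true}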
|
||||
|
||||
// Get return the EntryPoints map
|
||||
func (ep *EntryPoints) Get() interface{} {
|
||||
return EntryPoints(*ep)
|
||||
}
|
||||
|
||||
// SetValue sets the EntryPoints map with val
|
||||
func (ep *EntryPoints) SetValue(val interface{}) {
|
||||
*ep = EntryPoints(val.(EntryPoints))
|
||||
}
|
||||
|
||||
// Type is type of the struct
|
||||
func (ep *EntryPoints) Type() string {
|
||||
return fmt.Sprint("entrypoints")
|
||||
}
|
||||
|
||||
// EntryPoint holds an entry point configuration of the reverse proxy (ip, port, TLS...)
|
||||
type EntryPoint struct {
|
||||
Network string
|
||||
Address string
|
||||
TLS *TLS
|
||||
Redirect *Redirect
|
||||
Auth *types.Auth
|
||||
Compress bool
|
||||
}
|
||||
|
||||
// Redirect configures a redirection of an entry point to another entry point, or to a URL
|
||||
type Redirect struct {
|
||||
EntryPoint string
|
||||
Regex string
|
||||
Replacement string
|
||||
}
|
||||
|
||||
// TLS configures TLS for an entry point
|
||||
type TLS struct {
|
||||
MinVersion string
|
||||
CipherSuites []string
|
||||
Certificates Certificates
|
||||
ClientCAFiles []string
|
||||
}
|
||||
|
||||
// Map of allowed TLS minimum versions
|
||||
var minVersion = map[string]uint16{
|
||||
`VersionTLS10`: tls.VersionTLS10,
|
||||
`VersionTLS11`: tls.VersionTLS11,
|
||||
`VersionTLS12`: tls.VersionTLS12,
|
||||
}
|
||||
|
||||
// Map of TLS CipherSuites from crypto/tls
|
||||
var cipherSuites = map[string]uint16{
|
||||
`TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`: tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
|
||||
`TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`: tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
|
||||
`TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA`: tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
|
||||
`TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA`: tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
|
||||
`TLS_RSA_WITH_AES_128_GCM_SHA256`: tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
|
||||
`TLS_RSA_WITH_AES_256_GCM_SHA384`: tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
|
||||
`TLS_RSA_WITH_AES_128_CBC_SHA`: tls.TLS_RSA_WITH_AES_128_CBC_SHA,
|
||||
`TLS_RSA_WITH_AES_256_CBC_SHA`: tls.TLS_RSA_WITH_AES_256_CBC_SHA,
|
||||
`TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA`: tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
|
||||
`TLS_RSA_WITH_3DES_EDE_CBC_SHA`: tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA,
|
||||
}
|
||||
|
||||
// Certificates defines traefik certificates type
|
||||
// Certs and Keys can each be either a file path or the file content itself
|
||||
type Certificates []Certificate
|
||||
|
||||
//CreateTLSConfig creates a TLS config from Certificate structures
|
||||
func (certs *Certificates) CreateTLSConfig() (*tls.Config, error) {
|
||||
config := &tls.Config{}
|
||||
config.Certificates = []tls.Certificate{}
|
||||
certsSlice := []Certificate(*certs)
|
||||
for _, v := range certsSlice {
|
||||
isAPath := false
|
||||
_, errCert := os.Stat(v.CertFile)
|
||||
_, errKey := os.Stat(v.KeyFile)
|
||||
if errCert == nil {
|
||||
if errKey == nil {
|
||||
isAPath = true
|
||||
} else {
|
||||
return nil, fmt.Errorf("bad TLS Certificate KeyFile format, expected a path")
|
||||
}
|
||||
} else if errKey == nil {
|
||||
return nil, fmt.Errorf("bad TLS Certificate KeyFile format, expected a path")
|
||||
}
|
||||
|
||||
cert := tls.Certificate{}
|
||||
var err error
|
||||
if isAPath {
|
||||
cert, err = tls.LoadX509KeyPair(v.CertFile, v.KeyFile)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
} else {
|
||||
cert, err = tls.X509KeyPair([]byte(v.CertFile), []byte(v.KeyFile))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
config.Certificates = append(config.Certificates, cert)
|
||||
}
|
||||
return config, nil
|
||||
}
|
||||
|
||||
// String is the method to format the flag's value, part of the flag.Value interface.
|
||||
// The String method's output will be used in diagnostics.
|
||||
func (certs *Certificates) String() string {
|
||||
if len(*certs) == 0 {
|
||||
return ""
|
||||
}
|
||||
var result []string
|
||||
for _, certificate := range *certs {
|
||||
result = append(result, certificate.CertFile+","+certificate.KeyFile)
|
||||
}
|
||||
return strings.Join(result, ";")
|
||||
}
|
||||
|
||||
// Set is the method to set the flag value, part of the flag.Value interface.
|
||||
// Set's argument is a string to be parsed to set the flag.
|
||||
// It's a comma-separated list, so we split it.
|
||||
func (certs *Certificates) Set(value string) error {
|
||||
certificates := strings.Split(value, ";")
|
||||
for _, certificate := range certificates {
|
||||
files := strings.Split(certificate, ",")
|
||||
if len(files) != 2 {
|
||||
return errors.New("Bad certificates format: " + value)
|
||||
}
|
||||
*certs = append(*certs, Certificate{
|
||||
CertFile: files[0],
|
||||
KeyFile: files[1],
|
||||
})
|
||||
}
|
||||
return nil
|
||||
}
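// Example of the accepted value, inferred from the Set method above: certificates are separated
// by ";" and each cert/key pair by "," (the paths below are placeholders):
//
//	var certs Certificates
//	_ = certs.Set("tests/a.crt,tests/a.key;tests/b.crt,tests/b.key")
//	// certs == Certificates{{CertFile: "tests/a.crt", KeyFile: "tests/a.key"}, {CertFile: "tests/b.crt", KeyFile: "tests/b.key"}}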
|
||||
|
||||
// Type is type of the struct
|
||||
func (certs *Certificates) Type() string {
|
||||
return fmt.Sprint("certificates")
|
||||
}
|
||||
|
||||
// Certificate holds a SSL cert/key pair
|
||||
// CertFile and KeyFile can each be either a file path or the file content itself
|
||||
type Certificate struct {
|
||||
CertFile string
|
||||
KeyFile string
|
||||
}
|
||||
|
||||
// Retry contains request retry config
|
||||
type Retry struct {
|
||||
Attempts int `description:"Number of attempts"`
|
||||
}
|
||||
|
||||
// NewTraefikDefaultPointersConfiguration creates a TraefikConfiguration with pointers default values
|
||||
func NewTraefikDefaultPointersConfiguration() *TraefikConfiguration {
|
||||
//default Docker
|
||||
var defaultDocker provider.Docker
|
||||
defaultDocker.Watch = true
|
||||
defaultDocker.ExposedByDefault = true
|
||||
defaultDocker.Endpoint = "unix:///var/run/docker.sock"
|
||||
defaultDocker.SwarmMode = false
|
||||
|
||||
// default File
|
||||
var defaultFile provider.File
|
||||
defaultFile.Watch = true
|
||||
defaultFile.Filename = "" //needs equivalent to viper.ConfigFileUsed()
|
||||
|
||||
// default Web
|
||||
var defaultWeb WebProvider
|
||||
defaultWeb.Address = ":8080"
|
||||
|
||||
// default Marathon
|
||||
var defaultMarathon provider.Marathon
|
||||
defaultMarathon.Watch = true
|
||||
defaultMarathon.Endpoint = "http://127.0.0.1:8080"
|
||||
defaultMarathon.ExposedByDefault = true
|
||||
defaultMarathon.Constraints = types.Constraints{}
|
||||
|
||||
// default Consul
|
||||
var defaultConsul provider.Consul
|
||||
defaultConsul.Watch = true
|
||||
defaultConsul.Endpoint = "127.0.0.1:8500"
|
||||
defaultConsul.Prefix = "traefik"
|
||||
defaultConsul.Constraints = types.Constraints{}
|
||||
|
||||
// default ConsulCatalog
|
||||
var defaultConsulCatalog provider.ConsulCatalog
|
||||
defaultConsulCatalog.Endpoint = "127.0.0.1:8500"
|
||||
defaultConsulCatalog.Constraints = types.Constraints{}
|
||||
|
||||
// default Etcd
|
||||
var defaultEtcd provider.Etcd
|
||||
defaultEtcd.Watch = true
|
||||
defaultEtcd.Endpoint = "127.0.0.1:2379"
|
||||
defaultEtcd.Prefix = "/traefik"
|
||||
defaultEtcd.Constraints = types.Constraints{}
|
||||
|
||||
//default Zookeeper
|
||||
var defaultZookeeper provider.Zookepper
|
||||
defaultZookeeper.Watch = true
|
||||
defaultZookeeper.Endpoint = "127.0.0.1:2181"
|
||||
defaultZookeeper.Prefix = "/traefik"
|
||||
defaultZookeeper.Constraints = types.Constraints{}
|
||||
|
||||
//default Boltdb
|
||||
var defaultBoltDb provider.BoltDb
|
||||
defaultBoltDb.Watch = true
|
||||
defaultBoltDb.Endpoint = "127.0.0.1:4001"
|
||||
defaultBoltDb.Prefix = "/traefik"
|
||||
defaultBoltDb.Constraints = types.Constraints{}
|
||||
|
||||
//default Kubernetes
|
||||
var defaultKubernetes provider.Kubernetes
|
||||
defaultKubernetes.Watch = true
|
||||
defaultKubernetes.Endpoint = ""
|
||||
defaultKubernetes.LabelSelector = ""
|
||||
defaultKubernetes.Constraints = types.Constraints{}
|
||||
|
||||
// default Mesos
|
||||
var defaultMesos provider.Mesos
|
||||
defaultMesos.Watch = true
|
||||
defaultMesos.Endpoint = "http://127.0.0.1:5050"
|
||||
defaultMesos.ExposedByDefault = true
|
||||
defaultMesos.Constraints = types.Constraints{}
|
||||
|
||||
defaultConfiguration := GlobalConfiguration{
|
||||
Docker: &defaultDocker,
|
||||
File: &defaultFile,
|
||||
Web: &defaultWeb,
|
||||
Marathon: &defaultMarathon,
|
||||
Consul: &defaultConsul,
|
||||
ConsulCatalog: &defaultConsulCatalog,
|
||||
Etcd: &defaultEtcd,
|
||||
Zookeeper: &defaultZookeeper,
|
||||
Boltdb: &defaultBoltDb,
|
||||
Kubernetes: &defaultKubernetes,
|
||||
Mesos: &defaultMesos,
|
||||
Retry: &Retry{},
|
||||
}
|
||||
return &TraefikConfiguration{
|
||||
GlobalConfiguration: defaultConfiguration,
|
||||
}
|
||||
}
|
||||
|
||||
// NewTraefikConfiguration creates a TraefikConfiguration with default values
|
||||
func NewTraefikConfiguration() *TraefikConfiguration {
|
||||
return &TraefikConfiguration{
|
||||
GlobalConfiguration: GlobalConfiguration{
|
||||
GraceTimeOut: 10,
|
||||
AccessLogsFile: "",
|
||||
TraefikLogsFile: "",
|
||||
LogLevel: "ERROR",
|
||||
EntryPoints: map[string]*EntryPoint{},
|
||||
Constraints: types.Constraints{},
|
||||
DefaultEntryPoints: []string{},
|
||||
ProvidersThrottleDuration: time.Duration(2 * time.Second),
|
||||
MaxIdleConnsPerHost: 200,
|
||||
CheckNewVersion: true,
|
||||
},
|
||||
ConfigFile: "",
|
||||
}
|
||||
}
|
||||
|
||||
type configs map[string]*types.Configuration
|
||||
@@ -1,504 +0,0 @@
|
||||
package configuration
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/containous/flaeg"
|
||||
"github.com/containous/traefik-extra-service-fabric"
|
||||
"github.com/containous/traefik/acme"
|
||||
"github.com/containous/traefik/api"
|
||||
"github.com/containous/traefik/log"
|
||||
"github.com/containous/traefik/ping"
|
||||
"github.com/containous/traefik/provider/boltdb"
|
||||
"github.com/containous/traefik/provider/consul"
|
||||
"github.com/containous/traefik/provider/docker"
|
||||
"github.com/containous/traefik/provider/dynamodb"
|
||||
"github.com/containous/traefik/provider/ecs"
|
||||
"github.com/containous/traefik/provider/etcd"
|
||||
"github.com/containous/traefik/provider/eureka"
|
||||
"github.com/containous/traefik/provider/file"
|
||||
"github.com/containous/traefik/provider/kubernetes"
|
||||
"github.com/containous/traefik/provider/marathon"
|
||||
"github.com/containous/traefik/provider/mesos"
|
||||
"github.com/containous/traefik/provider/rancher"
|
||||
"github.com/containous/traefik/provider/rest"
|
||||
"github.com/containous/traefik/provider/zk"
|
||||
"github.com/containous/traefik/tls"
|
||||
"github.com/containous/traefik/types"
|
||||
)
|
||||
|
||||
const (
|
||||
// DefaultInternalEntryPointName the name of the default internal entry point
|
||||
DefaultInternalEntryPointName = "traefik"
|
||||
|
||||
// DefaultHealthCheckInterval is the default health check interval.
|
||||
DefaultHealthCheckInterval = 30 * time.Second
|
||||
|
||||
// DefaultDialTimeout when connecting to a backend server.
|
||||
DefaultDialTimeout = 30 * time.Second
|
||||
|
||||
// DefaultIdleTimeout before closing an idle connection.
|
||||
DefaultIdleTimeout = 180 * time.Second
|
||||
|
||||
// DefaultGraceTimeout controls how long Traefik serves pending requests
|
||||
// prior to shutting down.
|
||||
DefaultGraceTimeout = 10 * time.Second
|
||||
)
|
||||
|
||||
// GlobalConfiguration holds global configuration (with providers, etc.).
|
||||
// It's populated from the traefik configuration file passed as an argument to the binary.
|
||||
type GlobalConfiguration struct {
|
||||
LifeCycle *LifeCycle `description:"Timeouts influencing the server life cycle" export:"true"`
|
||||
GraceTimeOut flaeg.Duration `short:"g" description:"(Deprecated) Duration to give active requests a chance to finish before Traefik stops" export:"true"` // Deprecated
|
||||
Debug bool `short:"d" description:"Enable debug mode" export:"true"`
|
||||
CheckNewVersion bool `description:"Periodically check if a new version has been released" export:"true"`
|
||||
SendAnonymousUsage bool `description:"send periodically anonymous usage statistics" export:"true"`
|
||||
AccessLogsFile string `description:"(Deprecated) Access logs file" export:"true"` // Deprecated
|
||||
AccessLog *types.AccessLog `description:"Access log settings" export:"true"`
|
||||
TraefikLogsFile string `description:"(Deprecated) Traefik logs file. Stdout is used when omitted or empty" export:"true"` // Deprecated
|
||||
TraefikLog *types.TraefikLog `description:"Traefik log settings" export:"true"`
|
||||
LogLevel string `short:"l" description:"Log level" export:"true"`
|
||||
EntryPoints EntryPoints `description:"Entrypoints definition using format: --entryPoints='Name:http Address::8000 Redirect.EntryPoint:https' --entryPoints='Name:https Address::4442 TLS:tests/traefik.crt,tests/traefik.key;prod/traefik.crt,prod/traefik.key'" export:"true"`
|
||||
Cluster *types.Cluster `description:"Enable clustering" export:"true"`
|
||||
Constraints types.Constraints `description:"Filter services by constraint, matching with service tags" export:"true"`
|
||||
ACME *acme.ACME `description:"Enable ACME (Let's Encrypt): automatic SSL" export:"true"`
|
||||
DefaultEntryPoints DefaultEntryPoints `description:"Entrypoints to be used by frontends that do not specify any entrypoint" export:"true"`
|
||||
ProvidersThrottleDuration flaeg.Duration `description:"Backends throttle duration: minimum duration between 2 events from providers before applying a new configuration. It avoids unnecessary reloads if multiples events are sent in a short amount of time." export:"true"`
|
||||
MaxIdleConnsPerHost int `description:"If non-zero, controls the maximum idle (keep-alive) connections to keep per-host. If zero, DefaultMaxIdleConnsPerHost is used" export:"true"`
|
||||
IdleTimeout flaeg.Duration `description:"(Deprecated) maximum amount of time an idle (keep-alive) connection will remain idle before closing itself." export:"true"` // Deprecated
|
||||
InsecureSkipVerify bool `description:"Disable SSL certificate verification" export:"true"`
|
||||
RootCAs tls.RootCAs `description:"Add cert file for self-signed certificate"`
|
||||
Retry *Retry `description:"Enable retry sending request if network error" export:"true"`
|
||||
HealthCheck *HealthCheckConfig `description:"Health check parameters" export:"true"`
|
||||
RespondingTimeouts *RespondingTimeouts `description:"Timeouts for incoming requests to the Traefik instance" export:"true"`
|
||||
ForwardingTimeouts *ForwardingTimeouts `description:"Timeouts for requests forwarded to the backend servers" export:"true"`
|
||||
Web *WebCompatibility `description:"(Deprecated) Enable Web backend with default settings" export:"true"` // Deprecated
|
||||
Docker *docker.Provider `description:"Enable Docker backend with default settings" export:"true"`
|
||||
File *file.Provider `description:"Enable File backend with default settings" export:"true"`
|
||||
Marathon *marathon.Provider `description:"Enable Marathon backend with default settings" export:"true"`
|
||||
Consul *consul.Provider `description:"Enable Consul backend with default settings" export:"true"`
|
||||
ConsulCatalog *consul.CatalogProvider `description:"Enable Consul catalog backend with default settings" export:"true"`
|
||||
Etcd *etcd.Provider `description:"Enable Etcd backend with default settings" export:"true"`
|
||||
Zookeeper *zk.Provider `description:"Enable Zookeeper backend with default settings" export:"true"`
|
||||
Boltdb *boltdb.Provider `description:"Enable Boltdb backend with default settings" export:"true"`
|
||||
Kubernetes *kubernetes.Provider `description:"Enable Kubernetes backend with default settings" export:"true"`
|
||||
Mesos *mesos.Provider `description:"Enable Mesos backend with default settings" export:"true"`
|
||||
Eureka *eureka.Provider `description:"Enable Eureka backend with default settings" export:"true"`
|
||||
ECS *ecs.Provider `description:"Enable ECS backend with default settings" export:"true"`
|
||||
Rancher *rancher.Provider `description:"Enable Rancher backend with default settings" export:"true"`
|
||||
DynamoDB *dynamodb.Provider `description:"Enable DynamoDB backend with default settings" export:"true"`
|
||||
ServiceFabric *servicefabric.Provider `description:"Enable Service Fabric backend with default settings" export:"true"`
|
||||
Rest *rest.Provider `description:"Enable Rest backend with default settings" export:"true"`
|
||||
API *api.Handler `description:"Enable api/dashboard" export:"true"`
|
||||
Metrics *types.Metrics `description:"Enable a metrics exporter" export:"true"`
|
||||
Ping *ping.Handler `description:"Enable ping" export:"true"`
|
||||
}
|
||||
|
||||
// WebCompatibility is a configuration to handle compatibility with deprecated web provider options
|
||||
type WebCompatibility struct {
|
||||
Address string `description:"Web administration port" export:"true"`
|
||||
CertFile string `description:"SSL certificate" export:"true"`
|
||||
KeyFile string `description:"SSL certificate" export:"true"`
|
||||
ReadOnly bool `description:"Enable read only API" export:"true"`
|
||||
Statistics *types.Statistics `description:"Enable more detailed statistics" export:"true"`
|
||||
Metrics *types.Metrics `description:"Enable a metrics exporter" export:"true"`
|
||||
Path string `description:"Root path for dashboard and API" export:"true"`
|
||||
Auth *types.Auth `export:"true"`
|
||||
Debug bool `export:"true"`
|
||||
}
|
||||
|
||||
func (gc *GlobalConfiguration) handleWebDeprecation() {
|
||||
if gc.Web != nil {
|
||||
log.Warn("web provider configuration is deprecated, you should use these options : api, rest provider, ping and metrics")
|
||||
|
||||
if gc.API != nil || gc.Metrics != nil || gc.Ping != nil || gc.Rest != nil {
|
||||
log.Warn("web option is ignored if you use it with one of these options : api, rest provider, ping or metrics")
|
||||
return
|
||||
}
|
||||
gc.EntryPoints[DefaultInternalEntryPointName] = &EntryPoint{
|
||||
Address: gc.Web.Address,
|
||||
Auth: gc.Web.Auth,
|
||||
}
|
||||
if gc.Web.CertFile != "" {
|
||||
gc.EntryPoints[DefaultInternalEntryPointName].TLS = &tls.TLS{
|
||||
Certificates: []tls.Certificate{
|
||||
{
|
||||
CertFile: tls.FileOrContent(gc.Web.CertFile),
|
||||
KeyFile: tls.FileOrContent(gc.Web.KeyFile),
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
if gc.API == nil {
|
||||
gc.API = &api.Handler{
|
||||
EntryPoint: DefaultInternalEntryPointName,
|
||||
Statistics: gc.Web.Statistics,
|
||||
Dashboard: true,
|
||||
}
|
||||
}
|
||||
|
||||
if gc.Ping == nil {
|
||||
gc.Ping = &ping.Handler{
|
||||
EntryPoint: DefaultInternalEntryPointName,
|
||||
}
|
||||
}
|
||||
|
||||
if gc.Metrics == nil {
|
||||
gc.Metrics = gc.Web.Metrics
|
||||
}
|
||||
|
||||
if !gc.Debug {
|
||||
gc.Debug = gc.Web.Debug
|
||||
}
|
||||
}
|
||||
}
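
For readers of this deprecation shim, a rough, hedged sketch of the mapping it performs in TOML terms (values are illustrative; the replacement sections are `api`, `ping`, and an explicit `traefik` entrypoint):

```toml
# Deprecated style, rewritten by handleWebDeprecation:
[web]
address = ":8080"

# Roughly equivalent explicit configuration:
[entryPoints.traefik]
address = ":8080"

[api]
entryPoint = "traefik"
dashboard = true

[ping]
entryPoint = "traefik"
```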
|
||||
|
||||
// SetEffectiveConfiguration adds missing configuration parameters derived from existing ones.
|
||||
// It also takes care of maintaining backwards compatibility.
|
||||
func (gc *GlobalConfiguration) SetEffectiveConfiguration(configFile string) {
|
||||
if len(gc.EntryPoints) == 0 {
|
||||
gc.EntryPoints = map[string]*EntryPoint{"http": {
|
||||
Address: ":80",
|
||||
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
|
||||
}}
|
||||
gc.DefaultEntryPoints = []string{"http"}
|
||||
}
|
||||
|
||||
gc.handleWebDeprecation()
|
||||
|
||||
if (gc.API != nil && gc.API.EntryPoint == DefaultInternalEntryPointName) ||
|
||||
(gc.Ping != nil && gc.Ping.EntryPoint == DefaultInternalEntryPointName) ||
|
||||
(gc.Metrics != nil && gc.Metrics.Prometheus != nil && gc.Metrics.Prometheus.EntryPoint == DefaultInternalEntryPointName) ||
|
||||
(gc.Rest != nil && gc.Rest.EntryPoint == DefaultInternalEntryPointName) {
|
||||
if _, ok := gc.EntryPoints[DefaultInternalEntryPointName]; !ok {
|
||||
gc.EntryPoints[DefaultInternalEntryPointName] = &EntryPoint{Address: ":8080"}
|
||||
}
|
||||
}
|
||||
|
||||
// ForwardedHeaders must be removed in the next breaking version
|
||||
for entryPointName := range gc.EntryPoints {
|
||||
entryPoint := gc.EntryPoints[entryPointName]
|
||||
if entryPoint.ForwardedHeaders == nil {
|
||||
entryPoint.ForwardedHeaders = &ForwardedHeaders{Insecure: true}
|
||||
}
|
||||
}
|
||||
|
||||
// Make sure LifeCycle isn't nil to spare nil checks elsewhere.
|
||||
if gc.LifeCycle == nil {
|
||||
gc.LifeCycle = &LifeCycle{}
|
||||
}
|
||||
|
||||
// Prefer legacy grace timeout parameter for backwards compatibility reasons.
|
||||
if gc.GraceTimeOut > 0 {
|
||||
log.Warn("top-level grace period configuration has been deprecated -- please use lifecycle grace period")
|
||||
gc.LifeCycle.GraceTimeOut = gc.GraceTimeOut
|
||||
}
|
||||
|
||||
if gc.Rancher != nil {
|
||||
// Ensure backwards compatibility for now
|
||||
if len(gc.Rancher.AccessKey) > 0 ||
|
||||
len(gc.Rancher.Endpoint) > 0 ||
|
||||
len(gc.Rancher.SecretKey) > 0 {
|
||||
|
||||
if gc.Rancher.API == nil {
|
||||
gc.Rancher.API = &rancher.APIConfiguration{
|
||||
AccessKey: gc.Rancher.AccessKey,
|
||||
SecretKey: gc.Rancher.SecretKey,
|
||||
Endpoint: gc.Rancher.Endpoint,
|
||||
}
|
||||
}
|
||||
log.Warn("Deprecated configuration found: rancher.[accesskey|secretkey|endpoint]. " +
|
||||
"Please use rancher.api.[accesskey|secretkey|endpoint] instead.")
|
||||
}
|
||||
|
||||
if gc.Rancher.Metadata != nil && len(gc.Rancher.Metadata.Prefix) == 0 {
|
||||
gc.Rancher.Metadata.Prefix = "latest"
|
||||
}
|
||||
}
|
||||
|
||||
if gc.API != nil {
|
||||
gc.API.Debug = gc.Debug
|
||||
}
|
||||
|
||||
if gc.Debug {
|
||||
gc.LogLevel = "DEBUG"
|
||||
}
|
||||
|
||||
if gc.Web != nil && (gc.Web.Path == "" || !strings.HasSuffix(gc.Web.Path, "/")) {
|
||||
gc.Web.Path += "/"
|
||||
}
|
||||
|
||||
// Try to fallback to traefik config file in case the file provider is enabled
|
||||
// but has no file name configured.
|
||||
if gc.File != nil && len(gc.File.Filename) == 0 {
|
||||
if len(configFile) > 0 {
|
||||
gc.File.Filename = configFile
|
||||
} else {
|
||||
log.Errorln("Error using file configuration backend, no filename defined")
|
||||
}
|
||||
}
|
||||
|
||||
if gc.ACME != nil {
|
||||
// TODO: to be removed in the future
|
||||
if len(gc.ACME.StorageFile) > 0 && len(gc.ACME.Storage) == 0 {
|
||||
log.Warn("ACME.StorageFile is deprecated, use ACME.Storage instead")
|
||||
gc.ACME.Storage = gc.ACME.StorageFile
|
||||
}
|
||||
|
||||
if len(gc.ACME.DNSProvider) > 0 {
|
||||
log.Warn("ACME.DNSProvider is deprecated, use ACME.DNSChallenge instead")
|
||||
gc.ACME.DNSChallenge = &acme.DNSChallenge{Provider: gc.ACME.DNSProvider, DelayBeforeCheck: gc.ACME.DelayDontCheckDNS}
|
||||
}
|
||||
|
||||
if gc.ACME.OnDemand {
|
||||
log.Warn("ACME.OnDemand is deprecated")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// ValidateConfiguration validates that the configuration is coherent
|
||||
func (gc *GlobalConfiguration) ValidateConfiguration() {
|
||||
if gc.ACME != nil {
|
||||
if _, ok := gc.EntryPoints[gc.ACME.EntryPoint]; !ok {
|
||||
log.Fatalf("Unknown entrypoint %q for ACME configuration", gc.ACME.EntryPoint)
|
||||
} else {
|
||||
if gc.EntryPoints[gc.ACME.EntryPoint].TLS == nil {
|
||||
log.Fatalf("Entrypoint without TLS %q for ACME configuration", gc.ACME.EntryPoint)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
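
As a concrete illustration of what this check expects, here is a minimal, hypothetical ACME setup that passes validation: the entrypoint referenced by `acme.entryPoint` must exist and must carry a TLS section (values are illustrative):

```toml
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]    # required: ACME needs a TLS-enabled entrypoint

[acme]
email = "admin@example.com"    # illustrative value
storage = "acme.json"
entryPoint = "https"           # must match a TLS entrypoint defined above
```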
|
||||
|
||||
// DefaultEntryPoints holds default entry points
|
||||
type DefaultEntryPoints []string
|
||||
|
||||
// String is the method to format the flag's value, part of the flag.Value interface.
|
||||
// The String method's output will be used in diagnostics.
|
||||
func (dep *DefaultEntryPoints) String() string {
|
||||
return strings.Join(*dep, ",")
|
||||
}
|
||||
|
||||
// Set is the method to set the flag value, part of the flag.Value interface.
|
||||
// Set's argument is a string to be parsed to set the flag.
|
||||
// It's a comma-separated list, so we split it.
|
||||
func (dep *DefaultEntryPoints) Set(value string) error {
|
||||
entrypoints := strings.Split(value, ",")
|
||||
if len(entrypoints) == 0 {
|
||||
return fmt.Errorf("bad DefaultEntryPoints format: %s", value)
|
||||
}
|
||||
for _, entrypoint := range entrypoints {
|
||||
*dep = append(*dep, entrypoint)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Get returns the DefaultEntryPoints
|
||||
func (dep *DefaultEntryPoints) Get() interface{} {
|
||||
return DefaultEntryPoints(*dep)
|
||||
}
|
||||
|
||||
// SetValue sets the DefaultEntryPoints with val
|
||||
func (dep *DefaultEntryPoints) SetValue(val interface{}) {
|
||||
*dep = DefaultEntryPoints(val.(DefaultEntryPoints))
|
||||
}
|
||||
|
||||
// Type is type of the struct
|
||||
func (dep *DefaultEntryPoints) Type() string {
|
||||
return "defaultentrypoints"
|
||||
}
|
||||
|
||||
// EntryPoints holds entry points configuration of the reverse proxy (ip, port, TLS...)
|
||||
type EntryPoints map[string]*EntryPoint
|
||||
|
||||
// String is the method to format the flag's value, part of the flag.Value interface.
|
||||
// The String method's output will be used in diagnostics.
|
||||
func (ep *EntryPoints) String() string {
|
||||
return fmt.Sprintf("%+v", *ep)
|
||||
}
|
||||
|
||||
// Set is the method to set the flag value, part of the flag.Value interface.
|
||||
// Set's argument is a string to be parsed to set the flag.
|
||||
// It's a comma-separated list, so we split it.
|
||||
func (ep *EntryPoints) Set(value string) error {
|
||||
result := parseEntryPointsConfiguration(value)
|
||||
|
||||
var configTLS *tls.TLS
|
||||
if len(result["tls"]) > 0 {
|
||||
certs := tls.Certificates{}
|
||||
if err := certs.Set(result["tls"]); err != nil {
|
||||
return err
|
||||
}
|
||||
configTLS = &tls.TLS{
|
||||
Certificates: certs,
|
||||
}
|
||||
} else if len(result["tls_acme"]) > 0 {
|
||||
configTLS = &tls.TLS{
|
||||
Certificates: tls.Certificates{},
|
||||
}
|
||||
}
|
||||
if len(result["ca"]) > 0 {
|
||||
files := strings.Split(result["ca"], ",")
|
||||
optional := toBool(result, "ca_optional")
|
||||
configTLS.ClientCA = tls.ClientCA{
|
||||
Files: files,
|
||||
Optional: optional,
|
||||
}
|
||||
}
|
||||
var redirect *types.Redirect
|
||||
if len(result["redirect_entrypoint"]) > 0 || len(result["redirect_regex"]) > 0 || len(result["redirect_replacement"]) > 0 {
|
||||
redirect = &types.Redirect{
|
||||
EntryPoint: result["redirect_entrypoint"],
|
||||
Regex: result["redirect_regex"],
|
||||
Replacement: result["redirect_replacement"],
|
||||
}
|
||||
}
|
||||
|
||||
whiteListSourceRange := []string{}
|
||||
if len(result["whitelistsourcerange"]) > 0 {
|
||||
whiteListSourceRange = strings.Split(result["whitelistsourcerange"], ",")
|
||||
}
|
||||
|
||||
compress := toBool(result, "compress")
|
||||
|
||||
var proxyProtocol *ProxyProtocol
|
||||
ppTrustedIPs := result["proxyprotocol_trustedips"]
|
||||
if len(result["proxyprotocol_insecure"]) > 0 || len(ppTrustedIPs) > 0 {
|
||||
proxyProtocol = &ProxyProtocol{
|
||||
Insecure: toBool(result, "proxyprotocol_insecure"),
|
||||
}
|
||||
if len(ppTrustedIPs) > 0 {
|
||||
proxyProtocol.TrustedIPs = strings.Split(ppTrustedIPs, ",")
|
||||
}
|
||||
}
|
||||
|
||||
// TODO must be changed to false by default in the next breaking version.
|
||||
forwardedHeaders := &ForwardedHeaders{Insecure: true}
|
||||
if _, ok := result["forwardedheaders_insecure"]; ok {
|
||||
forwardedHeaders.Insecure = toBool(result, "forwardedheaders_insecure")
|
||||
}
|
||||
|
||||
fhTrustedIPs := result["forwardedheaders_trustedips"]
|
||||
if len(fhTrustedIPs) > 0 {
|
||||
// TODO must be removed in the next breaking version.
|
||||
forwardedHeaders.Insecure = toBool(result, "forwardedheaders_insecure")
|
||||
forwardedHeaders.TrustedIPs = strings.Split(fhTrustedIPs, ",")
|
||||
}
|
||||
|
||||
if proxyProtocol != nil && proxyProtocol.Insecure {
|
||||
log.Warn("ProxyProtocol.Insecure:true is dangerous. Please use 'ProxyProtocol.TrustedIPs:IPs' and remove 'ProxyProtocol.Insecure:true'")
|
||||
}
|
||||
|
||||
(*ep)[result["name"]] = &EntryPoint{
|
||||
Address: result["address"],
|
||||
TLS: configTLS,
|
||||
Redirect: redirect,
|
||||
Compress: compress,
|
||||
WhitelistSourceRange: whiteListSourceRange,
|
||||
ProxyProtocol: proxyProtocol,
|
||||
ForwardedHeaders: forwardedHeaders,
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func parseEntryPointsConfiguration(raw string) map[string]string {
|
||||
sections := strings.Fields(raw)
|
||||
|
||||
config := make(map[string]string)
|
||||
for _, part := range sections {
|
||||
field := strings.SplitN(part, ":", 2)
|
||||
name := strings.ToLower(strings.Replace(field[0], ".", "_", -1))
|
||||
if len(field) > 1 {
|
||||
config[name] = field[1]
|
||||
} else {
|
||||
if strings.EqualFold(name, "TLS") {
|
||||
config["tls_acme"] = "TLS"
|
||||
} else {
|
||||
config[name] = ""
|
||||
}
|
||||
}
|
||||
}
|
||||
return config
|
||||
}
|
||||
|
||||
func toBool(conf map[string]string, key string) bool {
|
||||
if val, ok := conf[key]; ok {
|
||||
return strings.EqualFold(val, "true") ||
|
||||
strings.EqualFold(val, "enable") ||
|
||||
strings.EqualFold(val, "on")
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// Get returns the EntryPoints map
|
||||
func (ep *EntryPoints) Get() interface{} {
|
||||
return EntryPoints(*ep)
|
||||
}
|
||||
|
||||
// SetValue sets the EntryPoints map with val
|
||||
func (ep *EntryPoints) SetValue(val interface{}) {
|
||||
*ep = EntryPoints(val.(EntryPoints))
|
||||
}
|
||||
|
||||
// Type is type of the struct
|
||||
func (ep *EntryPoints) Type() string {
|
||||
return "entrypoints"
|
||||
}
|
||||
|
||||
// EntryPoint holds an entry point configuration of the reverse proxy (ip, port, TLS...)
|
||||
type EntryPoint struct {
|
||||
Network string
|
||||
Address string
|
||||
TLS *tls.TLS `export:"true"`
|
||||
Redirect *types.Redirect `export:"true"`
|
||||
Auth *types.Auth `export:"true"`
|
||||
WhitelistSourceRange []string
|
||||
Compress bool `export:"true"`
|
||||
ProxyProtocol *ProxyProtocol `export:"true"`
|
||||
ForwardedHeaders *ForwardedHeaders `export:"true"`
|
||||
}
|
||||
|
||||
// Retry contains request retry config
|
||||
type Retry struct {
|
||||
Attempts int `description:"Number of attempts" export:"true"`
|
||||
}
|
||||
|
||||
// HealthCheckConfig contains health check configuration parameters.
|
||||
type HealthCheckConfig struct {
|
||||
Interval flaeg.Duration `description:"Default periodicity of enabled health checks" export:"true"`
|
||||
}
|
||||
|
||||
// RespondingTimeouts contains timeout configurations for incoming requests to the Traefik instance.
|
||||
type RespondingTimeouts struct {
|
||||
ReadTimeout flaeg.Duration `description:"ReadTimeout is the maximum duration for reading the entire request, including the body. If zero, no timeout is set" export:"true"`
|
||||
WriteTimeout flaeg.Duration `description:"WriteTimeout is the maximum duration before timing out writes of the response. If zero, no timeout is set" export:"true"`
|
||||
IdleTimeout flaeg.Duration `description:"IdleTimeout is the maximum amount duration an idle (keep-alive) connection will remain idle before closing itself. Defaults to 180 seconds. If zero, no timeout is set" export:"true"`
|
||||
}
|
||||
|
||||
// ForwardingTimeouts contains timeout configurations for forwarding requests to the backend servers.
|
||||
type ForwardingTimeouts struct {
|
||||
DialTimeout flaeg.Duration `description:"The amount of time to wait until a connection to a backend server can be established. Defaults to 30 seconds. If zero, no timeout exists" export:"true"`
|
||||
ResponseHeaderTimeout flaeg.Duration `description:"The amount of time to wait for a server's response headers after fully writing the request (including its body, if any). If zero, no timeout exists" export:"true"`
|
||||
}
|
||||
|
||||
// ProxyProtocol contains Proxy-Protocol configuration
|
||||
type ProxyProtocol struct {
|
||||
Insecure bool
|
||||
TrustedIPs []string
|
||||
}
|
||||
|
||||
// ForwardedHeaders defines whether to trust client forwarding headers
|
||||
type ForwardedHeaders struct {
|
||||
Insecure bool
|
||||
TrustedIPs []string
|
||||
}
|
||||
|
||||
// LifeCycle contains configurations relevant to the lifecycle (such as the
|
||||
// shutdown phase) of Traefik.
|
||||
type LifeCycle struct {
|
||||
RequestAcceptGraceTimeout flaeg.Duration `description:"Duration to keep accepting requests before Traefik initiates the graceful shutdown procedure"`
|
||||
GraceTimeOut flaeg.Duration `description:"Duration to give active requests a chance to finish before Traefik stops"`
|
||||
}
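
Taken together, these option structs map onto TOML sections roughly as follows. This is a sketch with illustrative values only; durations use the string form accepted by `flaeg.Duration`:

```toml
[retry]
attempts = 3

[healthcheck]
interval = "30s"

[respondingTimeouts]
readTimeout = "5s"
writeTimeout = "5s"
idleTimeout = "180s"

[forwardingTimeouts]
dialTimeout = "30s"
responseHeaderTimeout = "10s"

[lifeCycle]
requestAcceptGraceTimeout = "10s"
graceTimeOut = "10s"
```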
|
||||
@@ -1,393 +0,0 @@
|
||||
package configuration
|
||||
|
||||
import (
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/containous/flaeg"
|
||||
"github.com/containous/traefik/provider"
|
||||
"github.com/containous/traefik/provider/file"
|
||||
"github.com/containous/traefik/tls"
|
||||
"github.com/containous/traefik/types"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
const defaultConfigFile = "traefik.toml"
|
||||
|
||||
func Test_parseEntryPointsConfiguration(t *testing.T) {
|
||||
testCases := []struct {
|
||||
name string
|
||||
value string
|
||||
expectedResult map[string]string
|
||||
}{
|
||||
{
|
||||
name: "all parameters",
|
||||
value: "Name:foo TLS:goo TLS CA:car Redirect.EntryPoint:RedirectEntryPoint Redirect.Regex:RedirectRegex Redirect.Replacement:RedirectReplacement Compress:true WhiteListSourceRange:WhiteListSourceRange ProxyProtocol.TrustedIPs:192.168.0.1 ProxyProtocol.Insecure:false Address::8000",
|
||||
expectedResult: map[string]string{
|
||||
"name": "foo",
|
||||
"address": ":8000",
|
||||
"ca": "car",
|
||||
"tls": "goo",
|
||||
"tls_acme": "TLS",
|
||||
"redirect_entrypoint": "RedirectEntryPoint",
|
||||
"redirect_regex": "RedirectRegex",
|
||||
"redirect_replacement": "RedirectReplacement",
|
||||
"whitelistsourcerange": "WhiteListSourceRange",
|
||||
"proxyprotocol_trustedips": "192.168.0.1",
|
||||
"proxyprotocol_insecure": "false",
|
||||
"compress": "true",
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "compress on",
|
||||
value: "name:foo Compress:on",
|
||||
expectedResult: map[string]string{
|
||||
"name": "foo",
|
||||
"compress": "on",
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "TLS",
|
||||
value: "Name:foo TLS:goo TLS",
|
||||
expectedResult: map[string]string{
|
||||
"name": "foo",
|
||||
"tls": "goo",
|
||||
"tls_acme": "TLS",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
conf := parseEntryPointsConfiguration(test.value)
|
||||
|
||||
assert.Len(t, conf, len(test.expectedResult))
|
||||
assert.Equal(t, test.expectedResult, conf)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func Test_toBool(t *testing.T) {
|
||||
testCases := []struct {
|
||||
name string
|
||||
value string
|
||||
key string
|
||||
expectedBool bool
|
||||
}{
|
||||
{
|
||||
name: "on",
|
||||
value: "on",
|
||||
key: "foo",
|
||||
expectedBool: true,
|
||||
},
|
||||
{
|
||||
name: "true",
|
||||
value: "true",
|
||||
key: "foo",
|
||||
expectedBool: true,
|
||||
},
|
||||
{
|
||||
name: "enable",
|
||||
value: "enable",
|
||||
key: "foo",
|
||||
expectedBool: true,
|
||||
},
|
||||
{
|
||||
name: "arbitrary string",
|
||||
value: "bar",
|
||||
key: "foo",
|
||||
expectedBool: false,
|
||||
},
|
||||
{
|
||||
name: "no existing entry",
|
||||
value: "bar",
|
||||
key: "fii",
|
||||
expectedBool: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
conf := map[string]string{
|
||||
"foo": test.value,
|
||||
}
|
||||
|
||||
result := toBool(conf, test.key)
|
||||
|
||||
assert.Equal(t, test.expectedBool, result)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestEntryPoints_Set(t *testing.T) {
|
||||
testCases := []struct {
|
||||
name string
|
||||
expression string
|
||||
expectedEntryPointName string
|
||||
expectedEntryPoint *EntryPoint
|
||||
}{
|
||||
{
|
||||
name: "all parameters camelcase",
|
||||
expression: "Name:foo Address::8000 TLS:goo,gii TLS CA:car CA.Optional:false Redirect.EntryPoint:RedirectEntryPoint Redirect.Regex:RedirectRegex Redirect.Replacement:RedirectReplacement Compress:true WhiteListSourceRange:Range ProxyProtocol.TrustedIPs:192.168.0.1 ForwardedHeaders.TrustedIPs:10.0.0.3/24,20.0.0.3/24",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
Address: ":8000",
|
||||
Redirect: &types.Redirect{
|
||||
EntryPoint: "RedirectEntryPoint",
|
||||
Regex: "RedirectRegex",
|
||||
Replacement: "RedirectReplacement",
|
||||
},
|
||||
Compress: true,
|
||||
ProxyProtocol: &ProxyProtocol{
|
||||
TrustedIPs: []string{"192.168.0.1"},
|
||||
},
|
||||
ForwardedHeaders: &ForwardedHeaders{
|
||||
TrustedIPs: []string{"10.0.0.3/24", "20.0.0.3/24"},
|
||||
},
|
||||
WhitelistSourceRange: []string{"Range"},
|
||||
TLS: &tls.TLS{
|
||||
ClientCA: tls.ClientCA{
|
||||
Files: []string{"car"},
|
||||
Optional: false,
|
||||
},
|
||||
Certificates: tls.Certificates{
|
||||
{
|
||||
CertFile: tls.FileOrContent("goo"),
|
||||
KeyFile: tls.FileOrContent("gii"),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "all parameters lowercase",
|
||||
expression: "name:foo address::8000 tls:goo,gii tls ca:car ca.optional:true redirect.entryPoint:RedirectEntryPoint redirect.regex:RedirectRegex redirect.replacement:RedirectReplacement compress:true whiteListSourceRange:Range proxyProtocol.trustedIPs:192.168.0.1 forwardedHeaders.trustedIPs:10.0.0.3/24,20.0.0.3/24",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
Address: ":8000",
|
||||
Redirect: &types.Redirect{
|
||||
EntryPoint: "RedirectEntryPoint",
|
||||
Regex: "RedirectRegex",
|
||||
Replacement: "RedirectReplacement",
|
||||
},
|
||||
Compress: true,
|
||||
ProxyProtocol: &ProxyProtocol{
|
||||
TrustedIPs: []string{"192.168.0.1"},
|
||||
},
|
||||
ForwardedHeaders: &ForwardedHeaders{
|
||||
TrustedIPs: []string{"10.0.0.3/24", "20.0.0.3/24"},
|
||||
},
|
||||
WhitelistSourceRange: []string{"Range"},
|
||||
TLS: &tls.TLS{
|
||||
ClientCA: tls.ClientCA{
|
||||
Files: []string{"car"},
|
||||
Optional: true,
|
||||
},
|
||||
Certificates: tls.Certificates{
|
||||
{
|
||||
CertFile: tls.FileOrContent("goo"),
|
||||
KeyFile: tls.FileOrContent("gii"),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "default",
|
||||
expression: "Name:foo",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
WhitelistSourceRange: []string{},
|
||||
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ForwardedHeaders insecure true",
|
||||
expression: "Name:foo ForwardedHeaders.Insecure:true",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
WhitelistSourceRange: []string{},
|
||||
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ForwardedHeaders insecure false",
|
||||
expression: "Name:foo ForwardedHeaders.Insecure:false",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
WhitelistSourceRange: []string{},
|
||||
ForwardedHeaders: &ForwardedHeaders{Insecure: false},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ForwardedHeaders TrustedIPs",
|
||||
expression: "Name:foo ForwardedHeaders.TrustedIPs:10.0.0.3/24,20.0.0.3/24",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
WhitelistSourceRange: []string{},
|
||||
ForwardedHeaders: &ForwardedHeaders{
|
||||
TrustedIPs: []string{"10.0.0.3/24", "20.0.0.3/24"},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ProxyProtocol insecure true",
|
||||
expression: "Name:foo ProxyProtocol.Insecure:true",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
WhitelistSourceRange: []string{},
|
||||
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
|
||||
ProxyProtocol: &ProxyProtocol{Insecure: true},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ProxyProtocol insecure false",
|
||||
expression: "Name:foo ProxyProtocol.Insecure:false",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
WhitelistSourceRange: []string{},
|
||||
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
|
||||
ProxyProtocol: &ProxyProtocol{},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ProxyProtocol TrustedIPs",
|
||||
expression: "Name:foo ProxyProtocol.TrustedIPs:10.0.0.3/24,20.0.0.3/24",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
WhitelistSourceRange: []string{},
|
||||
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
|
||||
ProxyProtocol: &ProxyProtocol{
|
||||
TrustedIPs: []string{"10.0.0.3/24", "20.0.0.3/24"},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "compress on",
|
||||
expression: "Name:foo Compress:on",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
Compress: true,
|
||||
WhitelistSourceRange: []string{},
|
||||
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "compress true",
|
||||
expression: "Name:foo Compress:true",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
Compress: true,
|
||||
WhitelistSourceRange: []string{},
|
||||
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
eps := EntryPoints{}
|
||||
err := eps.Set(test.expression)
|
||||
require.NoError(t, err)
|
||||
|
||||
ep := eps[test.expectedEntryPointName]
|
||||
assert.EqualValues(t, test.expectedEntryPoint, ep)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestSetEffectiveConfigurationGraceTimeout(t *testing.T) {
|
||||
tests := []struct {
|
||||
desc string
|
||||
legacyGraceTimeout time.Duration
|
||||
lifeCycleGraceTimeout time.Duration
|
||||
wantGraceTimeout time.Duration
|
||||
}{
|
||||
{
|
||||
desc: "legacy grace timeout given only",
|
||||
legacyGraceTimeout: 5 * time.Second,
|
||||
wantGraceTimeout: 5 * time.Second,
|
||||
},
|
||||
{
|
||||
desc: "legacy and life cycle grace timeouts given",
|
||||
legacyGraceTimeout: 5 * time.Second,
|
||||
lifeCycleGraceTimeout: 12 * time.Second,
|
||||
wantGraceTimeout: 5 * time.Second,
|
||||
},
|
||||
{
|
||||
desc: "legacy grace timeout omitted",
|
||||
legacyGraceTimeout: 0,
|
||||
lifeCycleGraceTimeout: 12 * time.Second,
|
||||
wantGraceTimeout: 12 * time.Second,
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range tests {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
gc := &GlobalConfiguration{
|
||||
GraceTimeOut: flaeg.Duration(test.legacyGraceTimeout),
|
||||
}
|
||||
if test.lifeCycleGraceTimeout > 0 {
|
||||
gc.LifeCycle = &LifeCycle{
|
||||
GraceTimeOut: flaeg.Duration(test.lifeCycleGraceTimeout),
|
||||
}
|
||||
}
|
||||
|
||||
gc.SetEffectiveConfiguration(defaultConfigFile)
|
||||
|
||||
gotGraceTimeout := time.Duration(gc.LifeCycle.GraceTimeOut)
|
||||
if gotGraceTimeout != test.wantGraceTimeout {
|
||||
t.Fatalf("got effective grace timeout %d, want %d", gotGraceTimeout, test.wantGraceTimeout)
|
||||
}
|
||||
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestSetEffectiveConfigurationFileProviderFilename(t *testing.T) {
|
||||
tests := []struct {
|
||||
desc string
|
||||
fileProvider *file.Provider
|
||||
wantFileProviderFilename string
|
||||
}{
|
||||
{
|
||||
desc: "no filename for file provider given",
|
||||
fileProvider: &file.Provider{},
|
||||
wantFileProviderFilename: defaultConfigFile,
|
||||
},
|
||||
{
|
||||
desc: "filename for file provider given",
|
||||
fileProvider: &file.Provider{BaseProvider: provider.BaseProvider{Filename: "other.toml"}},
|
||||
wantFileProviderFilename: "other.toml",
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range tests {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
gc := &GlobalConfiguration{
|
||||
File: test.fileProvider,
|
||||
}
|
||||
|
||||
gc.SetEffectiveConfiguration(defaultConfigFile)
|
||||
|
||||
gotFileProviderFilename := gc.File.Filename
|
||||
if gotFileProviderFilename != test.wantFileProviderFilename {
|
||||
t.Fatalf("got file provider file name %q, want %q", gotFileProviderFilename, test.wantFileProviderFilename)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
@@ -1,170 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
# Copyright (c) 2017 Brian 'redbeard' Harrington <redbeard@dead-city.org>
|
||||
#
|
||||
# dumpcerts.sh - A simple utility to explode a Traefik acme.json file into a
|
||||
# directory of certificates and a private key
|
||||
#
|
||||
# Usage - dumpcerts.sh /etc/traefik/acme.json /etc/ssl/
|
||||
#
|
||||
# Dependencies -
|
||||
# util-linux
|
||||
# openssl
|
||||
# jq
|
||||
# The MIT License (MIT)
|
||||
#
|
||||
# Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
# of this software and associated documentation files (the "Software"), to deal
|
||||
# in the Software without restriction, including without limitation the rights
|
||||
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
# copies of the Software, and to permit persons to whom the Software is
|
||||
# furnished to do so, subject to the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be included in
|
||||
# all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
# THE SOFTWARE.
|
||||
|
||||
# Exit codes:
|
||||
# 1 - A component is missing or could not be read
|
||||
# 2 - There was a problem reading acme.json
|
||||
# 4 - The destination certificate directory does not exist
|
||||
# 8 - Missing private key
|
||||
|
||||
set -o errexit
|
||||
set -o pipefail
|
||||
set -o nounset
|
||||
|
||||
USAGE="$(basename "$0") <path to acme> <destination cert directory>"
|
||||
|
||||
# Platform variations
|
||||
case "$(uname)" in
|
||||
'Linux')
|
||||
# On Linux, -d should always work. --decode does not work with Alpine's busybox binary
|
||||
CMD_DECODE_BASE64="base64 -d"
|
||||
;;
|
||||
*)
|
||||
# Mac OS X supports --decode and -D, but --decode may be supported by other platforms as well.
|
||||
CMD_DECODE_BASE64="base64 --decode"
|
||||
;;
|
||||
esac
|
||||
|
||||
# Allow us to exit on a missing jq binary
|
||||
exit_jq() {
|
||||
echo "
|
||||
You must have the binary 'jq' to use this.
|
||||
jq is available at: https://stedolan.github.io/jq/download/
|
||||
|
||||
${USAGE}" >&2
|
||||
exit 1
|
||||
}
|
||||
|
||||
bad_acme() {
|
||||
echo "
|
||||
There was a problem parsing your acme.json file.
|
||||
|
||||
${USAGE}" >&2
|
||||
exit 2
|
||||
}
|
||||
|
||||
if [ $# -ne 2 ]; then
|
||||
echo "
|
||||
Insufficient number of parameters.
|
||||
|
||||
${USAGE}" >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
readonly acmefile="${1}"
|
||||
readonly certdir="${2%/}"
|
||||
|
||||
if [ ! -r "${acmefile}" ]; then
|
||||
echo "
|
||||
There was a problem reading from '${acmefile}'
|
||||
We need to read this file to explode the JSON bundle... exiting.
|
||||
|
||||
${USAGE}" >&2
|
||||
exit 2
|
||||
fi
|
||||
|
||||
|
||||
if [ ! -d "${certdir}" ]; then
|
||||
echo "
|
||||
Path ${certdir} does not seem to be a directory
|
||||
We need a directory in which to explode the JSON bundle... exiting.
|
||||
|
||||
${USAGE}" >&2
|
||||
exit 4
|
||||
fi
|
||||
|
||||
jq=$(command -v jq) || exit_jq
|
||||
|
||||
priv=$(${jq} -e -r '.PrivateKey' "${acmefile}") || bad_acme
|
||||
|
||||
if [ -z "${priv}" ]; then
|
||||
echo "
|
||||
There didn't seem to be a private key in ${acmefile}.
|
||||
Please ensure that there is a key in this file and try again." >&2
|
||||
exit 8
|
||||
fi
|
||||
|
||||
# If they do not exist, create the needed subdirectories for our assets
|
||||
# and place each in a variable for later use, normalizing the path
|
||||
mkdir -p "${certdir}"/{certs,private}
|
||||
|
||||
pdir="${certdir}/private/"
|
||||
cdir="${certdir}/certs/"
|
||||
|
||||
# Save the existing umask, change the default mode to 600, then
|
||||
# after writing the private key switch it back to the default
|
||||
oldumask=$(umask)
|
||||
umask 177
|
||||
trap 'umask ${oldumask}' EXIT
|
||||
|
||||
# Traefik stores the private key in stripped base64 format, but the certificates
# are bundled as base64 objects with their PEM headers left in place. This normalizes the
# headers and formatting.
|
||||
#
|
||||
# In testing this out it was a balance between the following mechanisms:
|
||||
# gawk:
|
||||
# echo ${priv} | awk 'BEGIN {print "-----BEGIN RSA PRIVATE KEY-----"}
|
||||
# {gsub(/.{64}/,"&\n")}1
|
||||
# END {print "-----END RSA PRIVATE KEY-----"}' > "${pdir}/letsencrypt.key"
|
||||
#
|
||||
# openssl:
|
||||
# echo -e "-----BEGIN RSA PRIVATE KEY-----\n${priv}\n-----END RSA PRIVATE KEY-----" \
|
||||
# | openssl rsa -inform pem -out "${pdir}/letsencrypt.key"
|
||||
#
|
||||
# and sed:
|
||||
# echo "-----BEGIN RSA PRIVATE KEY-----" > "${pdir}/letsencrypt.key"
|
||||
# echo ${priv} | sed -E 's/(.{64})/\1\n/g' >> "${pdir}/letsencrypt.key"
|
||||
# sed -i '$ d' "${pdir}/letsencrypt.key"
|
||||
# echo "-----END RSA PRIVATE KEY-----" >> "${pdir}/letsencrypt.key"
|
||||
# openssl rsa -noout -in "${pdir}/letsencrypt.key" -check # To check if the key is valid
|
||||
|
||||
# In the end, openssl was chosen because most users will need this script
|
||||
# *because* of openssl combined with the fact that it will refuse to write the
|
||||
# key if it does not parse out correctly. The other mechanisms were left as
|
||||
# comments so that the user can choose the mechanism most appropriate to them.
|
||||
echo -e "-----BEGIN RSA PRIVATE KEY-----\n${priv}\n-----END RSA PRIVATE KEY-----" \
|
||||
| openssl rsa -inform pem -out "${pdir}/letsencrypt.key"
|
||||
|
||||
# Process the certificates for each of the domains in acme.json
|
||||
for domain in $(jq -r '.DomainsCertificate.Certs[].Certificate.Domain' ${acmefile}); do
|
||||
# Traefik stores a cert bundle for each domain. Within this cert
|
||||
# bundle there is both the proper certificate and the Let's Encrypt CA
|
||||
echo "Extracting cert bundle for ${domain}"
|
||||
cert=$(jq -e -r --arg domain "$domain" '.DomainsCertificate.Certs[].Certificate |
|
||||
select (.Domain == $domain )| .Certificate' ${acmefile}) || bad_acme
|
||||
echo "${cert}" | ${CMD_DECODE_BASE64} > "${cdir}/${domain}.crt"
|
||||
|
||||
echo "Extracting private key for ${domain}"
|
||||
key=$(jq -e -r --arg domain "$domain" '.DomainsCertificate.Certs[].Certificate |
|
||||
select (.Domain == $domain )| .PrivateKey' ${acmefile}) || bad_acme
|
||||
echo "${key}" | ${CMD_DECODE_BASE64} > "${pdir}/${domain}.key"
|
||||
done
|
||||
@@ -4,8 +4,7 @@ Description=Traefik
|
||||
[Service]
|
||||
Type=notify
|
||||
ExecStart=/usr/bin/traefik --configFile=/etc/traefik.toml
|
||||
Restart=always
|
||||
WatchdogSec=1s
|
||||
Restart=on-failure
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
|
||||
@@ -1,11 +1,10 @@
|
||||
FROM alpine:3.14
|
||||
FROM alpine:3.7
|
||||
|
||||
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/.local/bin
|
||||
|
||||
COPY requirements.txt /mkdocs/
|
||||
WORKDIR /mkdocs
|
||||
VOLUME /mkdocs
|
||||
|
||||
RUN apk --update upgrade \
|
||||
&& apk --no-cache --no-progress add py-pip \
|
||||
&& rm -rf /var/cache/apk/* \
|
||||
&& pip install --user -r requirements.txt
|
||||
RUN apk --no-cache --no-progress add py-pip \
|
||||
&& pip install --user -r requirements.txt
|
||||
|
||||
532 docs/basics.md
@@ -1,8 +1,7 @@
|
||||
# Basics
|
||||
|
||||
## Concepts
|
||||
# Concepts
|
||||
|
||||
Let's take our example from the [overview](/#overview) again:
|
||||
Let's take our example from the [overview](https://docs.traefik.io/#overview) again:
|
||||
|
||||
|
||||
> Imagine that you have deployed a bunch of microservices on your infrastructure. You probably used a service registry (like etcd or consul) and/or an orchestrator (swarm, Mesos/Marathon) to manage all these services.
|
||||
@@ -14,20 +13,20 @@ Let's take our example from the [overview](/#overview) again:
|
||||
|
||||
> 
|
||||
|
||||
Let's zoom on Træfik and have an overview of its internal architecture:
|
||||
Let's zoom on Træfɪk and have an overview of its internal architecture:
|
||||
|
||||
|
||||

|
||||
|
||||
- Incoming requests end on [entrypoints](#entrypoints), as the name suggests, they are the network entry points into Træfik (listening port, SSL, traffic redirection...).
|
||||
- Incoming requests end on [entrypoints](#entrypoints), as the name suggests, they are the network entry points into Træfɪk (listening port, SSL, traffic redirection...).
|
||||
- Traffic is then forwarded to a matching [frontend](#frontends). A frontend defines routes from [entrypoints](#entrypoints) to [backends](#backends).
|
||||
Routes are created using requests fields (`Host`, `Path`, `Headers`...) and can match or not a request.
|
||||
- The [frontend](#frontends) will then send the request to a [backend](#backends). A backend can be composed by one or more [servers](#servers), and by a load-balancing strategy.
|
||||
- Finally, the [server](#servers) will forward the request to the corresponding microservice in the private network.
|
||||
|
||||
### Entrypoints
|
||||
## Entrypoints
|
||||
|
||||
Entrypoints are the network entry points into Træfik.
|
||||
Entrypoints are the network entry points into Træfɪk.
|
||||
They can be defined using:
|
||||
|
||||
- a port (80, 443...)
|
||||
@@ -62,90 +61,34 @@ And here is another example with client certificate authentication:
|
||||
[entryPoints.https]
|
||||
address = ":443"
|
||||
[entryPoints.https.tls]
|
||||
[entryPoints.https.tls]
|
||||
[entryPoints.https.tls.ClientCA]
|
||||
files = ["tests/clientca1.crt", "tests/clientca2.crt"]
|
||||
optional = false
|
||||
[[entryPoints.https.tls.certificates]]
|
||||
certFile = "tests/traefik.crt"
|
||||
keyFile = "tests/traefik.key"
|
||||
clientCAFiles = ["tests/clientca1.crt", "tests/clientca2.crt"]
|
||||
[[entryPoints.https.tls.certificates]]
|
||||
certFile = "tests/traefik.crt"
|
||||
keyFile = "tests/traefik.key"
|
||||
```
|
||||
|
||||
- We enable SSL on `https` by giving a certificate and a key.
|
||||
- One or several files containing Certificate Authorities in PEM format are added.
|
||||
- It is possible to have multiple CAs in the same file or to keep them in separate files.
|
||||
|
||||
### Frontends
|
||||
## Frontends
|
||||
|
||||
A frontend consists of a set of rules that determine how incoming requests are forwarded from an entrypoint to a backend.
|
||||
A frontend is a set of rules that forwards the incoming traffic from an entrypoint to a backend.
|
||||
Frontends can be defined using the following rules:
|
||||
|
||||
Rules may be classified into one of two groups: modifiers and matchers.
|
||||
- `Headers: Content-Type, application/json`: Headers adds a matcher for request header values. It accepts a sequence of key/value pairs to be matched.
|
||||
- `HeadersRegexp: Content-Type, application/(text|json)`: Regular expressions can be used with headers as well. It accepts a sequence of key/value pairs, where the value has regex support.
|
||||
- `Host: traefik.io, www.traefik.io`: Match request host with given host list.
|
||||
- `HostRegexp: traefik.io, {subdomain:[a-z]+}.traefik.io`: Adds a matcher for the URL hosts. It accepts templates with zero or more URL variables enclosed by `{}`. Variables can define an optional regexp pattern to be matched.
|
||||
- `Method: GET, POST, PUT`: Method adds a matcher for HTTP methods. It accepts a sequence of one or more methods to be matched.
|
||||
- `Path: /products/, /articles/{category}/{id:[0-9]+}`: Path adds a matcher for the URL paths. It accepts templates with zero or more URL variables enclosed by `{}`.
|
||||
- `PathStrip`: Same as `Path` but strip the given prefix from the request URL's Path.
|
||||
- `PathPrefix`: PathPrefix adds a matcher for the URL path prefixes. This matches if the given template is a prefix of the full URL path.
|
||||
- `PathPrefixStrip`: Same as `PathPrefix` but strip the given prefix from the request URL's Path.
|
||||
|
||||
#### Modifiers
|
||||
|
||||
Modifier rules only modify the request. They do not have any impact on routing decisions being made.
|
||||
|
||||
Following is the list of existing modifier rules:
|
||||
|
||||
- `AddPrefix: /products`: Add path prefix to the existing request path prior to forwarding the request to the backend.
|
||||
- `ReplacePath: /serverless-path`: Replaces the path and adds the old path to the `X-Replaced-Path` header. Useful for mapping to AWS Lambda or Google Cloud Functions.
|
||||
- `ReplacePathRegex: ^/api/v2/(.*) /api/$1`: Replaces the path with a regular expression and adds the old path to the `X-Replaced-Path` header. Separate the regular expression and the replacement by a space.
|
||||
|
||||
#### Matchers
|
||||
|
||||
Matcher rules determine if a particular request should be forwarded to a backend.
|
||||
|
||||
Separate multiple rule values by `,` (comma) in order to enable ANY semantics (i.e., forward a request if any rule matches).
|
||||
Does not work for `Headers` and `HeadersRegexp`.
|
||||
|
||||
Separate multiple rule values by `;` (semicolon) in order to enable ALL semantics (i.e., forward a request if all rules match).
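
For example, two hedged illustrations of the separators inside a frontend `rule` (hosts and paths are illustrative):

```toml
# ANY semantics: matches requests for either host
rule = "Host:traefik.io,www.traefik.io"

# ALL semantics: the host AND the path prefix must both match
rule = "Host:traefik.io;PathPrefix:/api"
```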
|
||||
|
||||
Following is the list of existing matcher rules along with examples:
|
||||
|
||||
| Matcher | Description |
|
||||
|------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `Headers: Content-Type, application/json` | Match HTTP header. It accepts a comma-separated key/value pair where both key and value must be literals. |
|
||||
| `HeadersRegexp: Content-Type, application/(text/json)` | Match HTTP header. It accepts a comma-separated key/value pair where the key must be a literal and the value may be a literal or a regular expression. |
|
||||
| `Host: traefik.io, www.traefik.io` | Match request host. It accepts a sequence of literal hosts. |
|
||||
| `HostRegexp: traefik.io, {subdomain:[a-z]+}.traefik.io` | Match request host. It accepts a sequence of literal and regular expression hosts. |
|
||||
| `Method: GET, POST, PUT` | Match request HTTP method. It accepts a sequence of HTTP methods. |
|
||||
| `Path: /products/, /articles/{category}/{id:[0-9]+}` | Match exact request path. It accepts a sequence of literal and regular expression paths. |
|
||||
| `PathStrip: /products/` | Match exact path and strip off the path prior to forwarding the request to the backend. It accepts a sequence of literal paths. |
|
||||
| `PathStripRegex: /articles/{category}/{id:[0-9]+}` | Match exact path and strip off the path prior to forwarding the request to the backend. It accepts a sequence of literal and regular expression paths. |
|
||||
| `PathPrefix: /products/, /articles/{category}/{id:[0-9]+}` | Match request prefix path. It accepts a sequence of literal and regular expression prefix paths. |
|
||||
| `PathPrefixStrip: /products/` | Match request prefix path and strip off the path prefix prior to forwarding the request to the backend. It accepts a sequence of literal prefix paths. Starting with Traefik 1.3, the stripped prefix path will be available in the `X-Forwarded-Prefix` header. |
|
||||
| `PathPrefixStripRegex: /articles/{category}/{id:[0-9]+}` | Match request prefix path and strip off the path prefix prior to forwarding the request to the backend. It accepts a sequence of literal and regular expression prefix paths. Starting with Traefik 1.3, the stripped prefix path will be available in the `X-Forwarded-Prefix` header. |
|
||||
| `Query: foo=bar, bar=baz` | Match Query String parameters. It accepts a sequence of key=value pairs. |
|
||||
|
||||
In order to use regular expressions with Host and Path matchers, you must declare an arbitrarily named variable followed by the colon-separated regular expression, all enclosed in curly braces. Any pattern supported by [Go's regexp package](https://golang.org/pkg/regexp/) may be used (example: `/posts/{id:[0-9]+}`).
|
||||
|
||||
!!! note
|
||||
The variable has no special meaning; however, it is required by the [gorilla/mux](https://github.com/gorilla/mux) dependency which embeds the regular expression and defines the syntax.
|
||||
You can use multiple rules by separating them by `;`
|
||||
|
||||
You can optionally enable `passHostHeader` to forward client `Host` header to the backend.
|
||||
You can also optionally enable `passTLSCert` to forward TLS Client certificates to the backend.
|
||||
|
||||
##### Path Matcher Usage Guidelines
|
||||
|
||||
This section explains when to use the various path matchers.
|
||||
|
||||
Use `Path` if your backend listens on the exact path only. For instance, `Path: /products` would match `/products` but not `/products/shoes`.
|
||||
|
||||
Use a `*Prefix*` matcher if your backend listens on a particular base path but also serves requests on sub-paths.
|
||||
For instance, `PathPrefix: /products` would match `/products` but also `/products/shoes` and `/products/shirts`.
|
||||
Since the path is forwarded as-is, your backend is expected to listen on `/products`.
|
||||
|
||||
Use a `*Strip` matcher if your backend listens on the root path (`/`) but should be routeable on a specific prefix.
|
||||
For instance, `PathPrefixStrip: /products` would match `/products` but also `/products/shoes` and `/products/shirts`.
|
||||
Since the path is stripped prior to forwarding, your backend is expected to listen on `/`.
|
||||
If your backend is serving assets (e.g., images or JavaScript files), chances are it must return properly constructed relative URLs.
Continuing with the example, the backend should return `/products/shoes/image.png` (and not `/images.png`, which Traefik would likely not be able to associate with the same backend).
|
||||
The `X-Forwarded-Prefix` header (available since Traefik 1.3) can be queried to build such URLs dynamically.
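
To make the distinction concrete, here is a sketch of three frontends using the file provider. The frontend and route names are illustrative, and `backend1` is assumed to be defined elsewhere:

```toml
[frontends]
  [frontends.exact]
  backend = "backend1"
    [frontends.exact.routes.r1]
    rule = "Path:/products"            # matches /products only

  [frontends.prefix]
  backend = "backend1"
    [frontends.prefix.routes.r1]
    rule = "PathPrefix:/products"      # also matches /products/shoes; path forwarded as-is

  [frontends.strip]
  backend = "backend1"
    [frontends.strip.routes.r1]
    rule = "PathPrefixStrip:/products" # also matches /products/shoes; backend receives /shoes
```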
|
||||
|
||||
Instead of distinguishing your backends by path only, you can add a Host matcher to the mix.
|
||||
That way, namespacing of your backends happens on the basis of hosts in addition to paths.
|
||||
|
||||
#### Examples
|
||||
|
||||
Here is an example of frontends definition:
|
||||
|
||||
@@ -158,11 +101,10 @@ Here is an example of frontends definition:
|
||||
[frontends.frontend2]
|
||||
backend = "backend1"
|
||||
passHostHeader = true
|
||||
passTLSCert = true
|
||||
priority = 10
|
||||
entrypoints = ["https"] # overrides defaultEntryPoints
|
||||
[frontends.frontend2.routes.test_1]
|
||||
rule = "HostRegexp:localhost,{subdomain:[a-z]+}.localhost"
|
||||
rule = "Host:localhost,{subdomain:[a-z]+}.localhost"
|
||||
[frontends.frontend3]
|
||||
backend = "backend2"
|
||||
[frontends.frontend3.routes.test_1]
|
||||
@@ -174,7 +116,7 @@ Here is an example of frontends definition:
|
||||
- `frontend2` will forward the traffic to the `backend1` if the rule `Host:localhost,{subdomain:[a-z]+}.localhost` is matched (forwarding client `Host` header to the backend)
|
||||
- `frontend3` will forward the traffic to the `backend2` if the rules `Host:test3.localhost` **AND** `Path:/test` are matched
|
||||
|
||||
#### Combining multiple rules
|
||||
### Combining multiple rules
|
||||
|
||||
As seen in the previous example, you can combine multiple rules.
|
||||
In TOML file, you can use multiple routes:
|
||||
@@ -189,7 +131,6 @@ In TOML file, you can use multiple routes:
|
||||
```
|
||||
|
||||
Here `frontend3` will forward the traffic to the `backend2` if the rules `Host:test3.localhost` **AND** `Path:/test` are matched.
|
||||
|
||||
You can also use the notation using a `;` separator, same result:
|
||||
|
||||
```toml
|
||||
@@ -211,130 +152,44 @@ Finally, you can create a rule to bind multiple domains or Path to a frontend, u
|
||||
rule = "Path:/test1,/test2"
|
||||
```
|
||||
|
||||
#### Rules Order
|
||||
|
||||
When combining `Modifier` rules with `Matcher` rules, it is important to remember that `Modifier` rules **ALWAYS** apply after the `Matcher` rules.
|
||||
|
||||
The following rules are both `Matchers` and `Modifiers`, so the `Matcher` portion of the rule will apply first, and the `Modifier` will apply later.
|
||||
|
||||
- `PathStrip`
|
||||
- `PathStripRegex`
|
||||
- `PathPrefixStrip`
|
||||
- `PathPrefixStripRegex`
|
||||
|
||||
`Modifiers` will be applied in a pre-determined order regardless of their order in the `rule` configuration section.
|
||||
|
||||
1. `PathStrip`
|
||||
2. `PathPrefixStrip`
|
||||
3. `PathStripRegex`
|
||||
4. `PathPrefixStripRegex`
|
||||
5. `AddPrefix`
|
||||
6. `ReplacePath`
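As a sketch of that ordering (the paths and prefixes are made up), a rule that lists `AddPrefix` before `PathPrefixStrip` is still evaluated with the stripping applied first:

```toml
# For a request to /api/v1/users:
#   1. PathPrefixStrip:/api matches and strips the prefix -> /v1/users
#   2. AddPrefix:/internal is applied afterwards           -> /internal/v1/users
# regardless of the order in which the modifiers are written in the rule.
rule = "AddPrefix:/internal;PathPrefixStrip:/api"
```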
|
||||
|
||||
#### Priorities
|
||||
### Priorities
|
||||
|
||||
By default, routes will be sorted (in descending order) using rule length (to avoid path overlap):
|
||||
`PathPrefix:/foo;Host:foo.com` (length == 28) will be matched before `PathPrefixStrip:/foobar` (length == 23), which will be matched before `PathPrefix:/foo,/bar` (length == 20).
|
||||
`PathPrefix:/12345` will be matched before `PathPrefix:/1234`, which will be matched before `PathPrefix:/1`.
|
||||
|
||||
You can customize the priority per frontend. The priority value overrides the rule length during sorting:
|
||||
You can customize priority by frontend:
|
||||
|
||||
```toml
|
||||
```
|
||||
[frontends]
|
||||
[frontends.frontend1]
|
||||
backend = "backend1"
|
||||
priority = 20
|
||||
priority = 10
|
||||
passHostHeader = true
|
||||
[frontends.frontend1.routes.test_1]
|
||||
rule = "PathPrefix:/to"
|
||||
[frontends.frontend2]
|
||||
priority = 5
|
||||
backend = "backend2"
|
||||
passHostHeader = true
|
||||
[frontends.frontend2.routes.test_1]
|
||||
rule = "PathPrefix:/toto"
|
||||
```
|
||||
|
||||
Here, `frontend1` will be matched before `frontend2` (`20 > 16`).
|
||||
Here, `frontend1` will be matched before `frontend2` (`10 > 5`).
|
||||
|
||||
#### Custom headers
|
||||
|
||||
Custom headers can be configured through the frontends, to add headers to either requests or responses that match the frontend's rules.
|
||||
This allows headers such as `X-Script-Name` to be added to the request, or custom headers to be added to the response.
|
||||
|
||||
!!! warning
|
||||
If the custom header name is the same as a header name already present in the request or response, it will be replaced.
|
||||
|
||||
In this example, all matches to the path `/cheese` will have the `X-Script-Name` header added to the proxied request, and the `X-Custom-Response-Header` added to the response.
|
||||
|
||||
```toml
|
||||
[frontends]
|
||||
[frontends.frontend1]
|
||||
backend = "backend1"
|
||||
[frontends.frontend1.headers.customresponseheaders]
|
||||
X-Custom-Response-Header = "True"
|
||||
[frontends.frontend1.headers.customrequestheaders]
|
||||
X-Script-Name = "test"
|
||||
[frontends.frontend1.routes.test_1]
|
||||
rule = "PathPrefixStrip:/cheese"
|
||||
```
|
||||
|
||||
In this second example, all matches to the path `/cheese` will have the `X-Script-Name` header added to the proxied request, the `X-Custom-Request-Header` header removed from the request, and the `X-Custom-Response-Header` header removed from the response.
|
||||
|
||||
```toml
|
||||
[frontends]
|
||||
[frontends.frontend1]
|
||||
backend = "backend1"
|
||||
[frontends.frontend1.headers.customresponseheaders]
|
||||
X-Custom-Response-Header = ""
|
||||
[frontends.frontend1.headers.customrequestheaders]
|
||||
X-Script-Name = "test"
|
||||
X-Custom-Request-Header = ""
|
||||
[frontends.frontend1.routes.test_1]
|
||||
rule = "PathPrefixStrip:/cheese"
|
||||
```
|
||||
|
||||
#### Security headers
|
||||
|
||||
Security-related headers (HSTS headers, SSL redirection, browser XSS filter, etc.) can be added and configured per frontend in a similar manner to the custom headers above.
|
||||
This functionality allows some common security features to be set quickly.
|
||||
|
||||
An example of some of the security headers:
|
||||
|
||||
```toml
|
||||
[frontends]
|
||||
[frontends.frontend1]
|
||||
backend = "backend1"
|
||||
[frontends.frontend1.headers]
|
||||
FrameDeny = true
|
||||
[frontends.frontend1.routes.test_1]
|
||||
rule = "PathPrefixStrip:/cheddar"
|
||||
[frontends.frontend2]
|
||||
backend = "backend2"
|
||||
[frontends.frontend2.headers]
|
||||
SSLRedirect = true
|
||||
[frontends.frontend2.routes.test_1]
|
||||
rule = "PathPrefixStrip:/stilton"
|
||||
```
|
||||
|
||||
In this example, traffic routed through the first frontend will have the `X-Frame-Options` header set to `DENY`, and the second will only allow HTTPS requests through; otherwise, it will return a 301 redirect to HTTPS.
|
||||
|
||||
!!! note
|
||||
The detailed documentation for those security headers can be found in [unrolled/secure](https://github.com/unrolled/secure#available-options).
|
||||
|
||||
### Backends
|
||||
## Backends
|
||||
|
||||
A backend is responsible for load-balancing the traffic coming from one or more frontends to a set of HTTP servers.
|
||||
Various methods of load-balancing are supported:
|
||||
|
||||
Various methods of load-balancing are supported:
|
||||
|
||||
- `wrr`: Weighted Round Robin.
|
||||
- `drr`: Dynamic Round Robin: increases weights on servers that perform better than others.
|
||||
It also rolls back to original weights if the servers have changed.
|
||||
- `wrr`: Weighted Round Robin
|
||||
- `drr`: Dynamic Round Robin: increases weights on servers that perform better than others. It also rolls back to original weights if the servers have changed.
|
||||
|
||||
A circuit breaker can also be applied to a backend, preventing high loads on failing servers.
|
||||
Initial state is Standby. CB observes the statistics and does not modify the request.
|
||||
In case the condition matches, CB enters Tripped state, where it responds with predefined code or redirects to another frontend.
|
||||
In case if condition matches, CB enters Tripped state, where it responds with predefines code or redirects to another frontend.
|
||||
Once Tripped timer expires, CB enters Recovering state and resets all stats.
|
||||
In case the condition does not match and recovery timer expires, CB enters Standby state.
|
||||
In case if the condition does not match and recovery timer expires, CB enters Standby state.
|
||||
|
||||
It can be configured using:
|
||||
|
||||
@@ -343,13 +198,16 @@ It can be configured using:
|
||||
|
||||
For example:
|
||||
|
||||
- `NetworkErrorRatio() > 0.5`: watch error ratio over 10 second sliding window for a frontend.
|
||||
- `NetworkErrorRatio() > 0.5`: watch error ratio over 10 second sliding window for a frontend
|
||||
- `LatencyAtQuantileMS(50.0) > 50`: watch latency at quantile in milliseconds.
|
||||
- `ResponseCodeRatio(500, 600, 0, 600) > 0.5`: ratio of response codes in ranges [500-600) and [0-600).
|
||||
- `ResponseCodeRatio(500, 600, 0, 600) > 0.5`: ratio of response codes in range [500-600) to [0-600)
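A minimal backend-level circuit breaker configuration using one of these expressions could look like this (the backend name is illustrative):

```toml
[backends]
  [backends.backend1]
    # trip the circuit breaker when more than half of the requests hit network errors
    [backends.backend1.circuitbreaker]
      expression = "NetworkErrorRatio() > 0.5"
```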
|
||||
|
||||
To proactively prevent backends from being overwhelmed with high load, a maximum connection limit can also be applied to each backend.
|
||||
To proactively prevent backends from being overwhelmed with high load, a maximum connection limit can
|
||||
also be applied to each backend.
|
||||
|
||||
Maximum connections can be configured by specifying an integer value for `maxconn.amount` and `maxconn.extractorfunc` which is a strategy used to determine how to categorize requests in order to evaluate the maximum connections.
|
||||
Maximum connections can be configured by specifying an integer value for `maxconn.amount` and
|
||||
`maxconn.extractorfunc` which is a strategy used to determine how to categorize requests in order to
|
||||
evaluate the maximum connections.
|
||||
|
||||
For example:
|
||||
```toml
|
||||
@@ -364,73 +222,20 @@ For example:
|
||||
- Another possible value for `extractorfunc` is `client.ip`, which will categorize requests based on the client source IP.
- Lastly, `extractorfunc` can take the value `request.header.ANY_HEADER`, which will categorize requests based on the `ANY_HEADER` value that you provide.
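A sketch of such a limit, combining `maxconn.amount` with the `client.ip` extractor mentioned above (the values are illustrative):

```toml
[backends]
  [backends.backend1]
    [backends.backend1.maxconn]
      # allow at most 10 simultaneous connections per client source IP
      amount = 10
      extractorfunc = "client.ip"
```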
|
||||
|
||||
### Sticky sessions
|
||||
|
||||
Sticky sessions are supported with both load balancers.
|
||||
When sticky sessions are enabled, a cookie is set on the initial request.
|
||||
The default cookie name is an abbreviation of a sha1 (ex: `_1d52e`).
|
||||
On subsequent requests, the client will be directed to the backend stored in the cookie if it is still healthy.
|
||||
If not, a new backend will be assigned.
|
||||
|
||||
|
||||
```toml
|
||||
[backends]
|
||||
[backends.backend1]
|
||||
# Enable sticky session
|
||||
[backends.backend1.loadbalancer.stickiness]
|
||||
|
||||
# Customize the cookie name
|
||||
#
|
||||
# Optional
|
||||
# Default: a sha1 (6 chars)
|
||||
#
|
||||
# cookieName = "my_cookie"
|
||||
```
|
||||
|
||||
The deprecated way:
|
||||
Sticky sessions are supported with both load balancers. When sticky sessions are enabled, a cookie called `_TRAEFIK_BACKEND` is set on the initial
|
||||
request. On subsequent requests, the client will be directed to the backend stored in the cookie if it is still healthy. If not, a new backend
|
||||
will be assigned.
|
||||
|
||||
For example:
|
||||
```toml
|
||||
[backends]
|
||||
[backends.backend1]
|
||||
[backends.backend1.loadbalancer]
|
||||
sticky = true
|
||||
```
|
||||
## Servers
|
||||
|
||||
### Health Check
|
||||
|
||||
A health check can be configured in order to remove a backend from LB rotation as long as it keeps returning HTTP status codes other than `200 OK` to HTTP GET requests periodically carried out by Traefik.
|
||||
The check is defined by a path appended to the backend URL and an interval (given in a format understood by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration)) specifying how often the health check should be executed (the default being 30 seconds).
|
||||
Each backend must respond to the health check within 5 seconds.
|
||||
By default, the port of the backend server is used; however, this may be overridden.
|
||||
|
||||
A recovering backend that returns `200 OK` responses again is put back into the LB rotation pool.
|
||||
|
||||
For example:
|
||||
```toml
|
||||
[backends]
|
||||
[backends.backend1]
|
||||
[backends.backend1.healthcheck]
|
||||
path = "/health"
|
||||
interval = "10s"
|
||||
```
|
||||
|
||||
To use a different port for the healthcheck:
|
||||
```toml
|
||||
[backends]
|
||||
[backends.backend1]
|
||||
[backends.backend1.healthcheck]
|
||||
path = "/health"
|
||||
interval = "10s"
|
||||
port = 8080
|
||||
```
|
||||
|
||||
### Servers
|
||||
|
||||
Servers are simply defined using a `url`. You can also apply a custom `weight` to each server (this will be used for load-balancing).
|
||||
|
||||
!!! note
|
||||
Paths in `url` are ignored. Use `Modifier` to specify paths instead.
|
||||
Servers are simply defined using a `URL`. You can also apply a custom `weight` to each server (this will be used by load-balancing).
|
||||
|
||||
Here is an example of backends and servers definition:
|
||||
|
||||
@@ -438,7 +243,7 @@ Here is an example of backends and servers definition:
|
||||
[backends]
|
||||
[backends.backend1]
|
||||
[backends.backend1.circuitbreaker]
|
||||
expression = "NetworkErrorRatio() > 0.5"
|
||||
expression = "NetworkErrorRatio() > 0.5"
|
||||
[backends.backend1.servers.server1]
|
||||
url = "http://172.17.0.2:80"
|
||||
weight = 10
|
||||
@@ -447,7 +252,7 @@ Here is an example of backends and servers definition:
|
||||
weight = 1
|
||||
[backends.backend2]
|
||||
[backends.backend2.LoadBalancer]
|
||||
method = "drr"
|
||||
method = "drr"
|
||||
[backends.backend2.servers.server1]
|
||||
url = "http://172.17.0.4:80"
|
||||
weight = 1
|
||||
@@ -461,260 +266,99 @@ Here is an example of backends and servers definition:
|
||||
- `backend2` will forward the traffic to two servers: `http://172.17.0.4:80` with weight `1` and `http://172.17.0.5:80` with weight `2` using the `drr` load-balancing strategy.
|
||||
- a circuit breaker is added on `backend1` using the expression `NetworkErrorRatio() > 0.5`: watch error ratio over 10 second sliding window
|
||||
|
||||
# Configuration
|
||||
|
||||
## Configuration
|
||||
Træfɪk's configuration has two parts:
|
||||
|
||||
Træfik's configuration has two parts:
|
||||
- The [static Træfɪk configuration](/basics#static-trfk-configuration) which is loaded only at the beginning.
|
||||
- The [dynamic Træfɪk configuration](/basics#dynamic-trfk-configuration) which can be hot-reloaded (no need to restart the process).
|
||||
|
||||
- The [static Træfik configuration](/basics#static-trfik-configuration) which is loaded only at the beginning.
|
||||
- The [dynamic Træfik configuration](/basics#dynamic-trfik-configuration) which can be hot-reloaded (no need to restart the process).
|
||||
|
||||
### Static Træfik configuration
|
||||
## Static Træfɪk configuration
|
||||
|
||||
The static configuration is the global configuration which sets up connections to configuration backends and entrypoints.
|
||||
The static configuration is the global configuration which setting up connections to configuration backends and entrypoints.
|
||||
|
||||
Træfik can be configured using many configuration sources with the following precedence order.
|
||||
Træfɪk can be configured using many configuration sources with the following precedence order.
|
||||
Each item takes precedence over the item below it:
|
||||
|
||||
- [Key-value store](/basics/#key-value-stores)
|
||||
- [Key-value Store](/basics/#key-value-stores)
|
||||
- [Arguments](/basics/#arguments)
|
||||
- [Configuration file](/basics/#configuration-file)
|
||||
- Default
|
||||
|
||||
This means that arguments override the configuration file, and the key-value store overrides arguments.
|
||||
It means that arguments overrides configuration file, and Key-value Store overrides arguments.
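For example (a sketch; `logLevel` is just one option that can be set in both places), a value passed as an argument wins over the one in the configuration file:

```bash
# the logLevel value from traefik.toml is overridden by the command-line flag
traefik --configFile=traefik.toml --logLevel=DEBUG
```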
|
||||
|
||||
!!! note
|
||||
the provider-enabling argument parameters (e.g., `--docker`) set all default values for the specific provider.
They must not be used if a configuration source with lower precedence wants to set a non-default provider value.
|
||||
### Configuration file
|
||||
|
||||
#### Configuration file
|
||||
|
||||
By default, Træfik will try to find a `traefik.toml` in the following places:
|
||||
By default, Træfɪk will try to find a `traefik.toml` in the following places:
|
||||
|
||||
- `/etc/traefik/`
|
||||
- `$HOME/.traefik/`
|
||||
- `.` _the working directory_
|
||||
- `.` *the working directory*
|
||||
|
||||
You can override this by setting a `configFile` argument:
|
||||
|
||||
```bash
|
||||
traefik --configFile=foo/bar/myconfigfile.toml
|
||||
$ traefik --configFile=foo/bar/myconfigfile.toml
|
||||
```
|
||||
|
||||
Please refer to the [global configuration](/configuration/commons) section to get documentation on it.
|
||||
Please refer to the [global configuration](/toml/#global-configuration) section to get documentation on it.
|
||||
|
||||
#### Arguments
|
||||
### Arguments
|
||||
|
||||
Each argument (and command) is described in the help section:
|
||||
|
||||
```bash
|
||||
traefik --help
|
||||
$ traefik --help
|
||||
```
|
||||
|
||||
Note that all default values will be displayed as well.
|
||||
|
||||
#### Key-value stores
|
||||
### Key-value stores
|
||||
|
||||
Træfik supports several Key-value stores:
|
||||
Træfɪk supports several Key-value stores:
|
||||
|
||||
- [Consul](https://consul.io)
|
||||
- [etcd](https://coreos.com/etcd/)
|
||||
- [ZooKeeper](https://zookeeper.apache.org/)
|
||||
- [ZooKeeper](https://zookeeper.apache.org/)
|
||||
- [boltdb](https://github.com/boltdb/bolt)
|
||||
|
||||
Please refer to the [User Guide Key-value store configuration](/user-guide/kv-config/) section to get documentation on it.
|
||||
|
||||
### Dynamic Træfik configuration
|
||||
## Dynamic Træfɪk configuration
|
||||
|
||||
The dynamic configuration concerns:
|
||||
The dynamic configuration concerns :
|
||||
|
||||
- [Frontends](/basics/#frontends)
|
||||
- [Backends](/basics/#backends)
|
||||
- [Servers](/basics/#servers)
|
||||
- HTTPS Certificates
|
||||
- [Backends](/basics/#backends)
|
||||
- [Servers](/basics/#servers)
|
||||
|
||||
Træfik can hot-reload those rules, which can be provided by [multiple configuration backends](/configuration/commons).
|
||||
Træfɪk can hot-reload those rules which could be provided by [multiple configuration backends](/toml/#configuration-backends).
|
||||
|
||||
We only need to enable the `watch` option to make Træfik watch configuration backend changes and generate its configuration automatically.
|
||||
We only need to enable `watch` option to make Træfɪk watch configuration backend changes and generate its configuration automatically.
|
||||
Routes to services will be created and updated instantly upon any change.
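For instance, with the file backend the `watch` option can be enabled like this (a sketch; the file name is illustrative):

```toml
# watch rules.toml and hot-reload frontends/backends on every change
[file]
  filename = "rules.toml"
  watch = true
```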
|
||||
|
||||
Please refer to the [configuration backends](/configuration/commons) section to get documentation on it.
|
||||
Please refer to the [configuration backends](/toml/#configuration-backends) section to get documentation on it.
|
||||
|
||||
## Commands
|
||||
# Commands
|
||||
|
||||
### traefik
|
||||
Usage: `traefik [command] [--flag=flag_argument]`
|
||||
|
||||
Usage:
|
||||
```bash
|
||||
traefik [command] [--flag=flag_argument]
|
||||
```
|
||||
List of Træfɪk available commands with description :
|
||||
|
||||
List of available Træfik commands with descriptions:
|
||||
|
||||
- `version` : Print version
|
||||
- `storeconfig` : Store the static Traefik configuration into a key-value store. Please refer to the [Store Træfik configuration](/user-guide/kv-config/#store-configuration-in-key-value-store) section to get documentation on it.
|
||||
- `bug`: The easiest way to submit a pre-filled issue.
|
||||
- `healthcheck`: Calls Traefik `/ping` to check health.
|
||||
|
||||
Each command may have related flags.
|
||||
- `version` : Print version
|
||||
- `storeconfig` : Store the static traefik configuration into a Key-value stores. Please refer to the [Store Træfɪk configuration](/user-guide/kv-config/#store-trfk-configuration) section to get documentation on it.
|
||||
|
||||
Each command may have related flags.
|
||||
All those related flags will be displayed with:
|
||||
|
||||
```bash
|
||||
traefik [command] --help
|
||||
$ traefik [command] --help
|
||||
```
|
||||
|
||||
Each command is described at the beginning of the help section:
|
||||
Note that each command is described at the begining of the help section:
|
||||
|
||||
```bash
|
||||
traefik --help
|
||||
|
||||
# or
|
||||
|
||||
docker run traefik[:version] --help
|
||||
# ex: docker run traefik:1.5 --help
|
||||
$ traefik --help
|
||||
```
|
||||
|
||||
### Command: bug
|
||||
|
||||
Here is the easiest way to submit a pre-filled issue on [Træfik GitHub](https://github.com/containous/traefik).
|
||||
|
||||
```bash
|
||||
traefik bug
|
||||
```
|
||||
|
||||
Watch [this demo](https://www.youtube.com/watch?v=Lyz62L8m93I).
|
||||
|
||||
### Command: healthcheck
|
||||
|
||||
This command allows you to check the health of Traefik. Its exit status is `0` if Traefik is healthy and `1` if it is unhealthy.
|
||||
|
||||
This can be used with Docker [HEALTHCHECK](https://docs.docker.com/engine/reference/builder/#healthcheck) instruction or any other health check orchestration mechanism.
|
||||
|
||||
!!! note
|
||||
The [`ping`](/configuration/ping) must be enabled to allow the `healthcheck` command to call `/ping`.
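A minimal `ping` section could look like this (the entry point name is illustrative):

```toml
[ping]
  # expose /ping on an existing entry point
  entryPoint = "http"
```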
|
||||
|
||||
```bash
|
||||
traefik healthcheck
|
||||
```
|
||||
```bash
|
||||
OK: http://:8082/ping
|
||||
```
|
||||
|
||||
|
||||
## Collected Data
|
||||
|
||||
**This feature is disabled by default.**
|
||||
|
||||
You can read the public proposal on this topic [here](https://github.com/containous/traefik/issues/2369).
|
||||
|
||||
### Why ?
|
||||
|
||||
In order to help us learn more about how Træfik is being used and improve it, we collect anonymous usage statistics from running instances.
|
||||
This data helps us prioritize our development efforts and focus on what's most important (for example, which configuration backends are used and which are not).
|
||||
|
||||
### What ?
|
||||
|
||||
Once a day (the first call begins 10 minutes after the start of Træfik), we collect:
|
||||
|
||||
- the Træfik version
|
||||
- a hash of the configuration
|
||||
- an **anonymous version** of the static configuration:
|
||||
- token, user name, password, URL, IP, domain, email, etc., are removed
|
||||
|
||||
!!! note
|
||||
We do not collect the dynamic configuration (frontends & backends).
|
||||
|
||||
!!! note
|
||||
We do not collect data behind the scenes to run advertising programs or to sell such data to third parties.
|
||||
|
||||
#### Here is an example
|
||||
|
||||
- Source configuration:
|
||||
|
||||
```toml
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
|
||||
[api]
|
||||
|
||||
[Docker]
|
||||
endpoint = "tcp://10.10.10.10:2375"
|
||||
domain = "foo.bir"
|
||||
exposedByDefault = true
|
||||
swarmMode = true
|
||||
|
||||
[Docker.TLS]
|
||||
CA = "dockerCA"
|
||||
Cert = "dockerCert"
|
||||
Key = "dockerKey"
|
||||
InsecureSkipVerify = true
|
||||
|
||||
[ECS]
|
||||
Domain = "foo.bar"
|
||||
ExposedByDefault = true
|
||||
Clusters = ["foo-bar"]
|
||||
Region = "us-west-2"
|
||||
AccessKeyID = "AccessKeyID"
|
||||
SecretAccessKey = "SecretAccessKey"
|
||||
```
|
||||
|
||||
- Obfuscated and anonymous configuration:
|
||||
|
||||
```toml
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
|
||||
[api]
|
||||
|
||||
[Docker]
|
||||
Endpoint = "xxxx"
|
||||
Domain = "xxxx"
|
||||
ExposedByDefault = true
|
||||
SwarmMode = true
|
||||
|
||||
[Docker.TLS]
|
||||
CA = "xxxx"
|
||||
Cert = "xxxx"
|
||||
Key = "xxxx"
|
||||
InsecureSkipVerify = false
|
||||
|
||||
[ECS]
|
||||
Domain = "xxxx"
|
||||
ExposedByDefault = true
|
||||
Clusters = []
|
||||
Region = "us-west-2"
|
||||
AccessKeyID = "xxxx"
|
||||
SecretAccessKey = "xxxx"
|
||||
```
|
||||
|
||||
### Show me the code !
|
||||
|
||||
If you want to dig into more details, here is the source code of the collecting system: [collector.go](https://github.com/containous/traefik/blob/master/collector/collector.go)
|
||||
|
||||
By default we anonymize all configuration fields, except fields tagged with `export=true`.
|
||||
|
||||
You can check all fields in the [godoc](https://godoc.org/github.com/containous/traefik/configuration#GlobalConfiguration).
|
||||
|
||||
### How to enable this ?
|
||||
|
||||
You can enable the collecting system by:
|
||||
|
||||
- adding this line in the configuration TOML file:
|
||||
|
||||
```toml
|
||||
# Send anonymous usage data
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
sendAnonymousUsage = true
|
||||
```
|
||||
|
||||
- adding this flag in the CLI:
|
||||
|
||||
```bash
|
||||
./traefik --sendAnonymousUsage=true
|
||||
```
|
||||
|
||||
@@ -14,7 +14,7 @@ I used 4 VMs for the tests with the following configuration:
|
||||
## Setup
|
||||
|
||||
1. One VM used to launch the benchmarking tool [wrk](https://github.com/wg/wrk)
|
||||
2. One VM for Traefik (v1.0.0-beta.416) / nginx (v1.4.6)
|
||||
2. One VM for traefik (v1.0.0-beta.416) / nginx (v1.4.6)
|
||||
3. Two VMs for 2 backend servers in go [whoami](https://github.com/emilevauge/whoamI/)
|
||||
|
||||
Each VM has been tuned using the following limits:
|
||||
@@ -65,8 +65,8 @@ http {
|
||||
keepalive_requests 10000;
|
||||
types_hash_max_size 2048;
|
||||
|
||||
open_file_cache max=200000 inactive=300s;
|
||||
open_file_cache_valid 300s;
|
||||
open_file_cache max=200000 inactive=300s;
|
||||
open_file_cache_valid 300s;
|
||||
open_file_cache_min_uses 2;
|
||||
open_file_cache_errors on;
|
||||
|
||||
@@ -117,7 +117,7 @@ server {
|
||||
|
||||
Here is the `traefik.toml` file used:
|
||||
|
||||
```toml
|
||||
```
|
||||
MaxIdleConnsPerHost = 100000
|
||||
defaultEntryPoints = ["http"]
|
||||
|
||||
@@ -145,7 +145,7 @@ defaultEntryPoints = ["http"]
|
||||
## Results
|
||||
|
||||
### whoami:
|
||||
```shell
|
||||
```
|
||||
wrk -t20 -c1000 -d60s -H "Host: test.traefik" --latency http://IP-whoami:80/bench
|
||||
Running 1m test @ http://IP-whoami:80/bench
|
||||
20 threads and 1000 connections
|
||||
@@ -164,7 +164,7 @@ Transfer/sec: 6.40MB
|
||||
```
|
||||
|
||||
### nginx:
|
||||
```shell
|
||||
```
|
||||
wrk -t20 -c1000 -d60s -H "Host: test.traefik" --latency http://IP-nginx:8001/bench
|
||||
Running 1m test @ http://IP-nginx:8001/bench
|
||||
20 threads and 1000 connections
|
||||
@@ -182,9 +182,8 @@ Requests/sec: 33591.67
|
||||
Transfer/sec: 4.97MB
|
||||
```
|
||||
|
||||
### Traefik:
|
||||
|
||||
```shell
|
||||
### traefik:
|
||||
```
|
||||
wrk -t20 -c1000 -d60s -H "Host: test.traefik" --latency http://IP-traefik:8000/bench
|
||||
Running 1m test @ http://IP-traefik:8000/bench
|
||||
20 threads and 1000 connections
|
||||
@@ -210,5 +209,5 @@ Not bad for young project :) !
|
||||
Some areas of possible improvements:
|
||||
|
||||
- Use [GO_REUSEPORT](https://github.com/kavu/go_reuseport) listener
|
||||
- Run a separate server instance per CPU core with `GOMAXPROCS=1` (it appears during benchmarks that there is a lot more context switches with Traefik than with nginx)
|
||||
- Run a separate server instance per CPU core with `GOMAXPROCS=1` (it appears during benchmarks that there is a lot more context switches with traefik than with nginx)
|
||||
|
||||
|
||||
@@ -1,405 +0,0 @@
|
||||
# ACME (Let's Encrypt) configuration
|
||||
|
||||
See also [Let's Encrypt examples](/user-guide/examples/#lets-encrypt-support) and [Docker & Let's Encrypt user guide](/user-guide/docker-and-lets-encrypt).
|
||||
|
||||
## Configuration
|
||||
|
||||
```toml
|
||||
# Sample entrypoint configuration when using ACME.
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
[entryPoints.https]
|
||||
address = ":443"
|
||||
[entryPoints.https.tls]
|
||||
```
|
||||
|
||||
```toml
|
||||
# Enable ACME (Let's Encrypt): automatic SSL.
|
||||
[acme]
|
||||
|
||||
# Email address used for registration.
|
||||
#
|
||||
# Required
|
||||
#
|
||||
email = "test@traefik.io"
|
||||
|
||||
# File used for certificates storage.
|
||||
#
|
||||
# Optional (Deprecated)
|
||||
#
|
||||
#storageFile = "acme.json"
|
||||
|
||||
# File or key used for certificates storage.
|
||||
#
|
||||
# Required
|
||||
#
|
||||
storage = "acme.json"
|
||||
# or `storage = "traefik/acme/account"` if using KV store.
|
||||
|
||||
# Entrypoint to proxy acme apply certificates to.
|
||||
# WARNING, if the TLS-SNI-01 challenge is used, it must point to an entrypoint on port 443
|
||||
#
|
||||
# Required
|
||||
#
|
||||
entryPoint = "https"
|
||||
|
||||
# Use a DNS-01 acme challenge rather than TLS-SNI-01 challenge
|
||||
#
|
||||
# Optional (Deprecated, replaced by [acme.dnsChallenge])
|
||||
#
|
||||
# dnsProvider = "digitalocean"
|
||||
|
||||
# By default, the dnsProvider will verify the TXT DNS challenge record before letting ACME verify.
|
||||
# If delayDontCheckDNS is greater than zero, avoid this & instead just wait so many seconds.
|
||||
# Useful if internal networks block external DNS queries.
|
||||
#
|
||||
# Optional (Deprecated, replaced by [acme.dnsChallenge])
|
||||
# Default: 0
|
||||
#
|
||||
# delayDontCheckDNS = 0
|
||||
|
||||
# If true, display debug log messages from the acme client library.
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
# acmeLogging = true
|
||||
|
||||
# Enable on demand certificate generation.
|
||||
#
|
||||
# Optional (Deprecated)
|
||||
# Default: false
|
||||
#
|
||||
# onDemand = true
|
||||
|
||||
# Enable certificate generation on frontends Host rules.
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
# onHostRule = true
|
||||
|
||||
# CA server to use.
|
||||
# - Uncomment the line to run on the staging let's encrypt server.
|
||||
# - Leave comment to go to prod.
|
||||
#
|
||||
# Optional
|
||||
# Default: "https://acme-v01.api.letsencrypt.org/directory"
|
||||
#
|
||||
# caServer = "https://acme-staging.api.letsencrypt.org/directory"
|
||||
|
||||
# Domains list.
|
||||
#
|
||||
# [[acme.domains]]
|
||||
# main = "local1.com"
|
||||
# sans = ["test1.local1.com", "test2.local1.com"]
|
||||
# [[acme.domains]]
|
||||
# main = "local2.com"
|
||||
# sans = ["test1.local2.com", "test2.local2.com"]
|
||||
# [[acme.domains]]
|
||||
# main = "local3.com"
|
||||
# [[acme.domains]]
|
||||
# main = "local4.com"
|
||||
|
||||
# Use a HTTP-01 acme challenge rather than TLS-SNI-01 challenge
|
||||
#
|
||||
# Optional but recommended
|
||||
#
|
||||
[acme.httpChallenge]
|
||||
|
||||
# EntryPoint to use for the challenges.
|
||||
#
|
||||
# Required
|
||||
#
|
||||
entryPoint = "http"
|
||||
|
||||
# Use a DNS-01 acme challenge rather than TLS-SNI-01 challenge
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# [acme.dnsChallenge]
|
||||
|
||||
# Provider used.
|
||||
#
|
||||
# Required
|
||||
#
|
||||
# provider = "digitalocean"
|
||||
|
||||
# By default, the provider will verify the TXT DNS challenge record before letting ACME verify.
|
||||
# If delayBeforeCheck is greater than zero, avoid this & instead just wait so many seconds.
|
||||
# Useful if internal networks block external DNS queries.
|
||||
#
|
||||
# Optional
|
||||
# Default: 0
|
||||
#
|
||||
# delayBeforeCheck = 0
|
||||
```
|
||||
|
||||
!!! note
|
||||
Even if the `TLS-SNI-01` challenge is [disabled](https://community.letsencrypt.org/t/2018-01-11-update-regarding-acme-tls-sni-and-shared-hosting-infrastructure/50188) for the moment, it remains the _default_ ACME challenge in Træfik.
If the `TLS-SNI-01` challenge is not re-enabled in the future, it will be removed from Træfik.
|
||||
|
||||
!!! note
|
||||
If the `TLS-SNI-01` challenge is used, `acme.entryPoint` has to be reachable by Let's Encrypt through port 443.
If the `HTTP-01` challenge is used, `acme.httpChallenge.entryPoint` has to be defined and reachable by Let's Encrypt through port 80.
|
||||
These are Let's Encrypt limitations as described on the [community forum](https://community.letsencrypt.org/t/support-for-ports-other-than-80-and-443/3419/72).
|
||||
|
||||
### Let's Encrypt downtime
|
||||
|
||||
Let's Encrypt functionality will be limited until Træfik is restarted.
|
||||
|
||||
If Let's Encrypt is not reachable, these certificates will be used:
|
||||
|
||||
- ACME certificates already generated before downtime
|
||||
- Expired ACME certificates
|
||||
- Provided certificates
|
||||
|
||||
!!! note
|
||||
The default Træfik certificate will be used instead of ACME certificates for new (sub)domains (which require a Let's Encrypt challenge).
|
||||
|
||||
### `storage`
|
||||
|
||||
```toml
|
||||
[acme]
|
||||
# ...
|
||||
storage = "acme.json"
|
||||
# ...
|
||||
```
|
||||
|
||||
The `storage` option sets where your ACME certificates are stored.
|
||||
|
||||
There are two kinds of `storage`:
|
||||
|
||||
- a JSON file,
|
||||
- a KV store entry.
|
||||
|
||||
!!! danger "DEPRECATED"
|
||||
`storage` replaces `storageFile` which is deprecated.
|
||||
|
||||
!!! note
|
||||
During a Træfik configuration migration from a configuration file to a KV store (thanks to the `storeconfig` subcommand as described [here](/user-guide/kv-config/#store-configuration-in-key-value-store)), if ACME certificates have to be migrated too, use both `storageFile` and `storage`.
|
||||
|
||||
- `storageFile` will contain the path to the `acme.json` file to migrate.
|
||||
- `storage` will contain the key where the certificates will be stored.
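A sketch of such a migration configuration, reusing the file path and KV key shown elsewhere on this page:

```toml
[acme]
  # ...
  # existing certificates to migrate from the local file
  storageFile = "acme.json"
  # destination key in the KV store
  storage = "traefik/acme/account"
  # ...
```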
|
||||
|
||||
#### Store data in a file
|
||||
|
||||
ACME certificates can be stored in a JSON file with `600` file permissions.
|
||||
|
||||
There are two ways to store ACME certificates in a file from Docker:
|
||||
|
||||
- create a file on your host and mount it as a volume:
|
||||
```toml
|
||||
storage = "acme.json"
|
||||
```
|
||||
```bash
|
||||
docker run -v "/my/host/acme.json:acme.json" traefik
|
||||
```
|
||||
- mount the folder containing the file as a volume
|
||||
```toml
|
||||
storage = "/etc/traefik/acme/acme.json"
|
||||
```
|
||||
```bash
|
||||
docker run -v "/my/host/acme:/etc/traefik/acme" traefik
|
||||
```
|
||||
|
||||
!!! warning
|
||||
This file cannot be shared across multiple instances of Træfik at the same time.
|
||||
If you have to use Træfik cluster mode, please use [a KV Store entry](/configuration/acme/#storage-kv-entry).
|
||||
|
||||
#### Store data in a KV store entry
|
||||
|
||||
ACME certificates can be stored in a KV Store entry.
|
||||
|
||||
```toml
|
||||
storage = "traefik/acme/account"
|
||||
```
|
||||
|
||||
**This kind of storage is mandatory in cluster mode.**
|
||||
|
||||
Because KV stores (like Consul) have a limited entry size, the certificate list is compressed before being stored in a KV store entry.
|
||||
|
||||
!!! note
|
||||
It's possible to store up to approximately 100 ACME certificates in Consul.
|
||||
|
||||
### `acme.httpChallenge`
|
||||
|
||||
Use `HTTP-01` challenge to generate/renew ACME certificates.
|
||||
|
||||
Redirection is fully compatible with the HTTP-01 challenge, so you can use redirections together with the HTTP-01 challenge without any problem.
|
||||
|
||||
```toml
|
||||
[acme]
|
||||
# ...
|
||||
entryPoint = "https"
|
||||
[acme.httpChallenge]
|
||||
entryPoint = "http"
|
||||
```
|
||||
|
||||
#### `entryPoint`
|
||||
|
||||
Specify the entryPoint to use during the challenges.
|
||||
|
||||
```toml
|
||||
defaultEntryPoints = ["http", "https"]
|
||||
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
[entryPoints.https]
|
||||
address = ":443"
|
||||
[entryPoints.https.tls]
|
||||
# ...
|
||||
|
||||
[acme]
|
||||
# ...
|
||||
entryPoint = "https"
|
||||
[acme.httpChallenge]
|
||||
entryPoint = "http"
|
||||
```
|
||||
|
||||
!!! note
|
||||
`acme.httpChallenge.entryPoint` has to be reachable by Let's Encrypt through the port 80.
|
||||
It's a Let's Encrypt limitation as described on the [community forum](https://community.letsencrypt.org/t/support-for-ports-other-than-80-and-443/3419/72).
|
||||
|
||||
### `acme.dnsChallenge`
|
||||
|
||||
Use `DNS-01` challenge to generate/renew ACME certificates.
|
||||
|
||||
```toml
|
||||
[acme]
|
||||
# ...
|
||||
[acme.dnsChallenge]
|
||||
provider = "digitalocean"
|
||||
delayBeforeCheck = 0
|
||||
# ...
|
||||
```
|
||||
|
||||
#### `provider`
|
||||
|
||||
Select the provider that matches the DNS domain that will host the challenge TXT record, and provide environment variables to enable setting it:
|
||||
|
||||
| Provider Name | Provider code | Configuration |
|
||||
|--------------------------------------------------------|----------------|---------------------------------------------------------------------------------------------------------------------------|
|
||||
| [Auroradns](https://www.pcextreme.com/aurora/dns) | `auroradns` | `AURORA_USER_ID`, `AURORA_KEY`, `AURORA_ENDPOINT` |
|
||||
| [Azure](https://azure.microsoft.com/services/dns/) | `azure` | `AZURE_CLIENT_ID`, `AZURE_CLIENT_SECRET`, `AZURE_SUBSCRIPTION_ID`, `AZURE_TENANT_ID`, `AZURE_RESOURCE_GROUP` |
|
||||
| [Cloudflare](https://www.cloudflare.com) | `cloudflare` | `CLOUDFLARE_EMAIL`, `CLOUDFLARE_API_KEY` - The Cloudflare `Global API Key` needs to be used and not the `Origin CA Key` |
|
||||
| [DigitalOcean](https://www.digitalocean.com) | `digitalocean` | `DO_AUTH_TOKEN` |
|
||||
| [DNSimple](https://dnsimple.com) | `dnsimple` | `DNSIMPLE_OAUTH_TOKEN`, `DNSIMPLE_BASE_URL` |
|
||||
| [DNS Made Easy](https://dnsmadeeasy.com) | `dnsmadeeasy` | `DNSMADEEASY_API_KEY`, `DNSMADEEASY_API_SECRET`, `DNSMADEEASY_SANDBOX` |
|
||||
| [DNSPod](http://www.dnspod.net/) | `dnspod` | `DNSPOD_API_KEY` |
|
||||
| [Dyn](https://dyn.com) | `dyn` | `DYN_CUSTOMER_NAME`, `DYN_USER_NAME`, `DYN_PASSWORD` |
|
||||
| [Exoscale](https://www.exoscale.ch) | `exoscale` | `EXOSCALE_API_KEY`, `EXOSCALE_API_SECRET`, `EXOSCALE_ENDPOINT` |
|
||||
| [Gandi](https://www.gandi.net) | `gandi` | `GANDI_API_KEY` |
|
||||
| [GoDaddy](https://godaddy.com/domains) | `godaddy` | `GODADDY_API_KEY`, `GODADDY_API_SECRET` |
|
||||
| [Google Cloud DNS](https://cloud.google.com/dns/docs/) | `gcloud` | `GCE_PROJECT`, `GCE_SERVICE_ACCOUNT_FILE` |
|
||||
| [Linode](https://www.linode.com) | `linode` | `LINODE_API_KEY` |
|
||||
| manual | - | none, but run Træfik interactively & turn on `acmeLogging` to see instructions & press <kbd>Enter</kbd>. |
|
||||
| [Namecheap](https://www.namecheap.com) | `namecheap` | `NAMECHEAP_API_USER`, `NAMECHEAP_API_KEY` |
|
||||
| [Ns1](https://ns1.com/) | `ns1` | `NS1_API_KEY` |
|
||||
| [Open Telekom Cloud](https://cloud.telekom.de/en/) | `otc` | `OTC_DOMAIN_NAME`, `OTC_USER_NAME`, `OTC_PASSWORD`, `OTC_PROJECT_NAME`, `OTC_IDENTITY_ENDPOINT` |
|
||||
| [OVH](https://www.ovh.com) | `ovh` | `OVH_ENDPOINT`, `OVH_APPLICATION_KEY`, `OVH_APPLICATION_SECRET`, `OVH_CONSUMER_KEY` |
|
||||
| [PowerDNS](https://www.powerdns.com) | `pdns` | `PDNS_API_KEY`, `PDNS_API_URL` |
|
||||
| [Rackspace](https://www.rackspace.com/cloud/dns) | `rackspace` | `RACKSPACE_USER`, `RACKSPACE_API_KEY` |
|
||||
| [RFC2136](https://tools.ietf.org/html/rfc2136) | `rfc2136` | `RFC2136_TSIG_KEY`, `RFC2136_TSIG_SECRET`, `RFC2136_TSIG_ALGORITHM`, `RFC2136_NAMESERVER` |
|
||||
| [Route 53](https://aws.amazon.com/route53/) | `route53` | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`, `AWS_HOSTED_ZONE_ID` or configured user/instance IAM profile. |
|
||||
| [VULTR](https://www.vultr.com) | `vultr` | `VULTR_API_KEY` |
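For instance, with the `digitalocean` provider the credential listed above would be supplied through the environment (the token value is a placeholder):

```bash
# export the provider credential before starting Træfik
export DO_AUTH_TOKEN=xxxx
traefik --configFile=traefik.toml
```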
|
||||
|
||||
#### `delayBeforeCheck`
|
||||
|
||||
By default, the `provider` will verify the TXT DNS challenge record before letting ACME verify.
|
||||
If `delayBeforeCheck` is greater than zero, avoid this & instead just wait so many seconds.
|
||||
|
||||
Useful if internal networks block external DNS queries.
|
||||
|
||||
!!! note
|
||||
This field has no effect if a `provider` is not defined.
|
||||
|
||||
### `onDemand` (Deprecated)
|
||||
|
||||
!!! danger "DEPRECATED"
|
||||
This option is deprecated.
|
||||
|
||||
```toml
|
||||
[acme]
|
||||
# ...
|
||||
onDemand = true
|
||||
# ...
|
||||
```
|
||||
|
||||
Enable on-demand certificate generation.
|
||||
|
||||
This will request a certificate from Let's Encrypt during the first TLS handshake for a host name that does not yet have a certificate.
|
||||
|
||||
!!! warning
|
||||
TLS handshakes will be slow when requesting a host name certificate for the first time; this can lead to DoS attacks.
|
||||
|
||||
!!! warning
|
||||
Take note that Let's Encrypt has [rate limits](https://letsencrypt.org/docs/rate-limits).
|
||||
|
||||
### `onHostRule`
|
||||
|
||||
```toml
|
||||
[acme]
|
||||
# ...
|
||||
onHostRule = true
|
||||
# ...
|
||||
```
|
||||
|
||||
Enable certificate generation on frontends `Host` rules (for frontends wired on the `acme.entryPoint`).
|
||||
|
||||
This will request a certificate from Let's Encrypt for each frontend with a Host rule.
|
||||
|
||||
For example, a rule `Host:test1.traefik.io,test2.traefik.io` will request a certificate with main domain `test1.traefik.io` and SAN `test2.traefik.io`.
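A sketch of such a frontend definition, reusing the hosts from the example above:

```toml
[frontends]
  [frontends.frontend1]
  backend = "backend1"
    [frontends.frontend1.routes.test_1]
    # with onHostRule = true, this rule triggers a certificate request for
    # test1.traefik.io (main) with test2.traefik.io as a SAN
    rule = "Host:test1.traefik.io,test2.traefik.io"
```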
|
||||
|
||||
### `caServer`
|
||||
|
||||
```toml
|
||||
[acme]
|
||||
# ...
|
||||
caServer = "https://acme-staging.api.letsencrypt.org/directory"
|
||||
# ...
|
||||
```
|
||||
|
||||
CA server to use.
|
||||
|
||||
- Uncomment the line to run on the staging Let's Encrypt server.
|
||||
- Leave it commented to use the production server.
|
||||
|
||||
### `acme.domains`
|
||||
|
||||
```toml
|
||||
[acme]
|
||||
# ...
|
||||
[[acme.domains]]
|
||||
main = "local1.com"
|
||||
sans = ["test1.local1.com", "test2.local1.com"]
|
||||
[[acme.domains]]
|
||||
main = "local2.com"
|
||||
sans = ["test1.local2.com", "test2.local2.com"]
|
||||
[[acme.domains]]
|
||||
main = "local3.com"
|
||||
[[acme.domains]]
|
||||
main = "local4.com"
|
||||
# ...
|
||||
```
|
||||
|
||||
You can provide SANs (alternative domains) to each main domain.
|
||||
All domains must have A/AAAA records pointing to Træfik.
|
||||
|
||||
!!! warning
|
||||
Take note that Let's Encrypt has [rate limits](https://letsencrypt.org/docs/rate-limits).
|
||||
|
||||
Each domain & its SANs will lead to one certificate request.
|
||||
|
||||
### `dnsProvider` (Deprecated)
|
||||
|
||||
!!! danger "DEPRECATED"
|
||||
This option is deprecated, use [dnsChallenge.provider](/configuration/acme/#acmednschallenge) instead.
|
||||
|
||||
### `delayDontCheckDNS` (Deprecated)
|
||||
|
||||
!!! danger "DEPRECATED"
|
||||
This option is deprecated, use [dnsChallenge.delayBeforeCheck](/configuration/acme/#acmednschallenge) instead.
|
||||
@@ -1,308 +0,0 @@
|
||||
# API Definition
|
||||
|
||||
## Configuration
|
||||
|
||||
```toml
|
||||
# API definition
|
||||
[api]
|
||||
# Name of the related entry point
|
||||
#
|
||||
# Optional
|
||||
# Default: "traefik"
|
||||
#
|
||||
entryPoint = "traefik"
|
||||
|
||||
# Enabled Dashboard
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
dashboard = true
|
||||
|
||||
# Enable debug mode.
|
||||
# This will install HTTP handlers to expose Go expvars under /debug/vars and
|
||||
# pprof profiling data under /debug/pprof.
|
||||
# Additionally, the log level will be set to DEBUG.
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
debug = true
|
||||
```
|
||||
|
||||
For more customization, see [entry points](/configuration/entrypoints/) documentation and [examples](/user-guide/examples/#ping-health-check).
|
||||
|
||||
## Web UI
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
## API
|
||||
|
||||
| Path | Method | Description |
|
||||
|-----------------------------------------------------------------|------------------|-------------------------------------------|
|
||||
| `/` | `GET` | Provides a simple HTML frontend of Træfik |
|
||||
| `/health` | `GET` | JSON health metrics |
|
||||
| `/api` | `GET` | Configuration for all providers |
|
||||
| `/api/providers` | `GET` | Providers |
|
||||
| `/api/providers/{provider}` | `GET`, `PUT` | Get or update provider (1) |
|
||||
| `/api/providers/{provider}/backends` | `GET` | List backends |
|
||||
| `/api/providers/{provider}/backends/{backend}` | `GET` | Get backend |
|
||||
| `/api/providers/{provider}/backends/{backend}/servers` | `GET` | List servers in backend |
|
||||
| `/api/providers/{provider}/backends/{backend}/servers/{server}` | `GET` | Get a server in a backend |
|
||||
| `/api/providers/{provider}/frontends` | `GET` | List frontends |
|
||||
| `/api/providers/{provider}/frontends/{frontend}` | `GET` | Get a frontend |
|
||||
| `/api/providers/{provider}/frontends/{frontend}/routes` | `GET` | List routes in a frontend |
|
||||
| `/api/providers/{provider}/frontends/{frontend}/routes/{route}` | `GET` | Get a route in a frontend |
|
||||
|
||||
<1> See [Rest](/configuration/backends/rest/#api) for more information.
|
||||
|
||||
!!! warning
|
||||
For compatibility reasons, when you activate the rest provider, you can use `web` or `rest` as the `provider` value.
But be careful: in the configuration for all providers, the key is still `web`.
|
||||
|
||||
### Address / Port
|
||||
|
||||
You can define a custom address/port like this:
|
||||
|
||||
```toml
|
||||
defaultEntryPoints = ["http"]
|
||||
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
|
||||
[entryPoints.foo]
|
||||
address = ":8082"
|
||||
|
||||
[entryPoints.bar]
|
||||
address = ":8083"
|
||||
|
||||
[ping]
|
||||
entryPoint = "foo"
|
||||
|
||||
[api]
|
||||
entryPoint = "bar"
|
||||
```
|
||||
|
||||
In the above example, you would access a regular path, administration panel, and health-check as follows:
|
||||
|
||||
* Regular path: `http://hostname:80/path`
|
||||
* Admin Panel: `http://hostname:8083/`
|
||||
* Ping URL: `http://hostname:8082/ping`
|
||||
|
||||
In the above example, it is _very_ important to create a named dedicated entry point, and do **not** include it in `defaultEntryPoints`.
|
||||
Otherwise, you are likely to expose _all_ services via that entry point.
|
||||
|
||||
### Custom Path
|
||||
|
||||
You can define a custom path like this:
|
||||
|
||||
```toml
|
||||
defaultEntryPoints = ["http"]
|
||||
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
|
||||
[entryPoints.foo]
|
||||
address = ":8080"
|
||||
|
||||
[entryPoints.bar]
|
||||
address = ":8081"
|
||||
|
||||
# Activate API and Dashboard
|
||||
[api]
|
||||
entryPoint = "bar"
|
||||
dashboard = true
|
||||
|
||||
[file]
|
||||
[backends]
|
||||
[backends.backend1]
|
||||
[backends.backend1.servers.server1]
|
||||
url = "http://127.0.0.1:8081"
|
||||
|
||||
[frontends]
|
||||
[frontends.frontend1]
|
||||
entryPoints = ["foo"]
|
||||
backend = "backend1"
|
||||
[frontends.frontend1.routes.test_1]
|
||||
rule = "PathPrefixStrip:/yourprefix;PathPrefix:/yourprefix"
|
||||
```
|
||||
|
||||
### Authentication
|
||||
|
||||
You can define the authentication like this:
|
||||
|
||||
```toml
|
||||
defaultEntryPoints = ["http"]
|
||||
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
|
||||
[entryPoints.foo]
|
||||
address=":8080"
|
||||
[entryPoints.foo.auth]
|
||||
[entryPoints.foo.auth.basic]
|
||||
users = [
|
||||
"test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/",
|
||||
"test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0",
|
||||
]
|
||||
|
||||
[api]
|
||||
entrypoint="foo"
|
||||
```
|
||||
|
||||
For more information, see [entry points](/configuration/entrypoints/).
|
||||
|
||||
### Provider call example
|
||||
|
||||
```shell
|
||||
curl -s "http://localhost:8080/api" | jq .
|
||||
```
|
||||
```json
|
||||
{
|
||||
"file": {
|
||||
"frontends": {
|
||||
"frontend2": {
|
||||
"routes": {
|
||||
"test_2": {
|
||||
"rule": "Path:/test"
|
||||
}
|
||||
},
|
||||
"backend": "backend1"
|
||||
},
|
||||
"frontend1": {
|
||||
"routes": {
|
||||
"test_1": {
|
||||
"rule": "Host:test.localhost"
|
||||
}
|
||||
},
|
||||
"backend": "backend2"
|
||||
}
|
||||
},
|
||||
"backends": {
|
||||
"backend2": {
|
||||
"loadBalancer": {
|
||||
"method": "drr"
|
||||
},
|
||||
"servers": {
|
||||
"server2": {
|
||||
"weight": 2,
|
||||
"URL": "http://172.17.0.5:80"
|
||||
},
|
||||
"server1": {
|
||||
"weight": 1,
|
||||
"url": "http://172.17.0.4:80"
|
||||
}
|
||||
}
|
||||
},
|
||||
"backend1": {
|
||||
"loadBalancer": {
|
||||
"method": "wrr"
|
||||
},
|
||||
"circuitBreaker": {
|
||||
"expression": "NetworkErrorRatio() > 0.5"
|
||||
},
|
||||
"servers": {
|
||||
"server2": {
|
||||
"weight": 1,
|
||||
"url": "http://172.17.0.3:80"
|
||||
},
|
||||
"server1": {
|
||||
"weight": 10,
|
||||
"url": "http://172.17.0.2:80"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Health
|
||||
|
||||
```shell
|
||||
curl -s "http://localhost:8080/health" | jq .
|
||||
```
|
||||
```json
|
||||
{
|
||||
// Træfik PID
|
||||
"pid": 2458,
|
||||
// Træfik server uptime (formatted time)
|
||||
"uptime": "39m6.885931127s",
|
||||
// Træfik server uptime in seconds
|
||||
"uptime_sec": 2346.885931127,
|
||||
// current server date
|
||||
"time": "2015-10-07 18:32:24.362238909 +0200 CEST",
|
||||
// current server date in seconds
|
||||
"unixtime": 1444235544,
|
||||
// count HTTP response status code in realtime
|
||||
"status_code_count": {
|
||||
"502": 1
|
||||
},
|
||||
// count HTTP response status code since Træfik started
|
||||
"total_status_code_count": {
|
||||
"200": 7,
|
||||
"404": 21,
|
||||
"502": 13
|
||||
},
|
||||
// count HTTP response
|
||||
"count": 1,
|
||||
// count HTTP response
|
||||
"total_count": 41,
|
||||
// sum of all response times (formatted time)
|
||||
"total_response_time": "35.456865605s",
|
||||
// sum of all response time in seconds
|
||||
"total_response_time_sec": 35.456865605,
|
||||
// average response time (formatted time)
|
||||
"average_response_time": "864.8016ms",
|
||||
// average response time in seconds
|
||||
"average_response_time_sec": 0.8648016000000001,
|
||||
|
||||
// request statistics [requires --statistics to be set]
|
||||
// ten most recent requests with 4xx and 5xx status codes
|
||||
"recent_errors": [
|
||||
{
|
||||
// status code
|
||||
"status_code": 500,
|
||||
// description of status code
|
||||
"status": "Internal Server Error",
|
||||
// request HTTP method
|
||||
"method": "GET",
|
||||
// request hostname
|
||||
"host": "localhost",
|
||||
// request path
|
||||
"path": "/path",
|
||||
// RFC 3339 formatted date/time
|
||||
"time": "2016-10-21T16:59:15.418495872-07:00"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Metrics
|
||||
|
||||
You can enable Traefik to export internal metrics to different monitoring systems.
|
||||
|
||||
```toml
|
||||
[api]
|
||||
# ...
|
||||
|
||||
# Enable more detailed statistics.
|
||||
[api.statistics]
|
||||
|
||||
# Number of recent errors logged.
|
||||
#
|
||||
# Default: 10
|
||||
#
|
||||
recentErrors = 10
|
||||
|
||||
# ...
|
||||
```
|
||||
|
||||
| Path | Method | Description |
|
||||
|------------|---------------|-------------------------|
|
||||
| `/metrics` | `GET` | Export internal metrics |
|
||||
@@ -1,59 +0,0 @@
|
||||
# BoltDB Backend
|
||||
|
||||
Træfik can be configured to use BoltDB as a backend configuration.
|
||||
|
||||
```toml
|
||||
################################################################
|
||||
# BoltDB configuration backend
|
||||
################################################################
|
||||
|
||||
# Enable BoltDB configuration backend.
|
||||
[boltdb]
|
||||
|
||||
# BoltDB file.
|
||||
#
|
||||
# Required
|
||||
# Default: "127.0.0.1:4001"
|
||||
#
|
||||
endpoint = "/my.db"
|
||||
|
||||
# Enable watch BoltDB changes.
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
watch = true
|
||||
|
||||
# Prefix used for KV store.
|
||||
#
|
||||
# Optional
|
||||
# Default: "/traefik"
|
||||
#
|
||||
prefix = "/traefik"
|
||||
|
||||
# Override default configuration template.
|
||||
# For advanced users :)
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
filename = "boltdb.tmpl"
|
||||
|
||||
# Use BoltDB user/pass authentication.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# username = foo
|
||||
# password = bar
|
||||
|
||||
# Enable BoltDB TLS connection.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# [boltdb.tls]
|
||||
# ca = "/etc/ssl/ca.crt"
|
||||
# cert = "/etc/ssl/boltdb.crt"
|
||||
# key = "/etc/ssl/boltdb.key"
|
||||
# insecureskipverify = true
|
||||
```
|
||||
|
||||
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
|
||||
@@ -1,61 +0,0 @@
|
||||
# Consul Key-Value backend
|
||||
|
||||
Træfik can be configured to use Consul as a backend configuration.
|
||||
|
||||
```toml
|
||||
################################################################
|
||||
# Consul KV configuration backend
|
||||
################################################################
|
||||
|
||||
# Enable Consul KV configuration backend.
|
||||
[consul]
|
||||
|
||||
# Consul server endpoint.
|
||||
#
|
||||
# Required
|
||||
# Default: "127.0.0.1:8500"
|
||||
#
|
||||
endpoint = "127.0.0.1:8500"
|
||||
|
||||
# Enable watch Consul changes.
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
watch = true
|
||||
|
||||
# Prefix used for KV store.
|
||||
#
|
||||
# Optional
|
||||
# Default: traefik
|
||||
#
|
||||
prefix = "traefik"
|
||||
|
||||
# Override default configuration template.
|
||||
# For advanced users :)
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# filename = "consul.tmpl"
|
||||
|
||||
# Use Consul user/pass authentication.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# username = foo
|
||||
# password = bar
|
||||
|
||||
# Enable Consul TLS connection.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# [consul.tls]
|
||||
# ca = "/etc/ssl/ca.crt"
|
||||
# cert = "/etc/ssl/consul.crt"
|
||||
# key = "/etc/ssl/consul.key"
|
||||
# insecureskipverify = true
|
||||
```
|
||||
|
||||
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
|
||||
|
||||
Please refer to the [Key Value storage structure](/user-guide/kv-config/#key-value-storage-structure) section to get documentation on Traefik KV structure.
|
||||
@@ -1,93 +0,0 @@
|
||||
# Consul Catalog backend
|
||||
|
||||
Træfik can be configured to use service discovery catalog of Consul as a backend configuration.
|
||||
|
||||
```toml
|
||||
################################################################
|
||||
# Consul Catalog configuration backend
|
||||
################################################################
|
||||
|
||||
# Enable Consul Catalog configuration backend.
|
||||
[consulCatalog]
|
||||
|
||||
# Consul server endpoint.
|
||||
#
|
||||
# Required
|
||||
# Default: "127.0.0.1:8500"
|
||||
#
|
||||
endpoint = "127.0.0.1:8500"
|
||||
|
||||
# Expose Consul catalog services by default in Traefik.
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
exposedByDefault = false
|
||||
|
||||
# Default domain used.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
domain = "consul.localhost"
|
||||
|
||||
# Prefix for Consul catalog tags.
|
||||
#
|
||||
# Optional
|
||||
# Default: "traefik"
|
||||
#
|
||||
prefix = "traefik"
|
||||
|
||||
# Default frontEnd Rule for Consul services.
|
||||
#
|
||||
# The format is a Go Template with:
|
||||
# - ".ServiceName", ".Domain" and ".Attributes" available
|
||||
# - "getTag(name, tags, defaultValue)", "hasTag(name, tags)" and "getAttribute(name, tags, defaultValue)" functions are available
|
||||
# - "getAttribute(...)" function uses prefixed tag names based on "prefix" value
|
||||
#
|
||||
# Optional
|
||||
# Default: "Host:{{.ServiceName}}.{{.Domain}}"
|
||||
#
|
||||
#frontEndRule = "Host:{{.ServiceName}}.{{.Domain}}"
|
||||
```
|
||||
|
||||
This backend will create routes matching on hostname based on the service name used in Consul.
|
||||
|
||||
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
|
||||
|
||||
### Tags
|
||||
|
||||
Additional settings can be defined using Consul Catalog tags.
|
||||
|
||||
| Tag | Description |
|
||||
|-----------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `traefik.enable=false` | Disable this container in Træfik |
|
||||
| `traefik.protocol=https` | Override the default `http` protocol |
|
||||
| `traefik.backend.weight=10` | Assign this weight to the container |
|
||||
| `traefik.backend.circuitbreaker=EXPR` | Create a [circuit breaker](/basics/#backends) to be used against the backend, ex: `NetworkErrorRatio() > 0.` |
|
||||
| `traefik.backend.maxconn.amount=10` | Set a maximum number of connections to the backend. Must be used in conjunction with the below label to take effect. |
|
||||
| `traefik.backend.maxconn.extractorfunc=client.ip` | Set the function to be used against the request to determine what to limit maximum connections to the backend by. Must be used in conjunction with the above label to take effect. |
|
||||
| `traefik.frontend.rule=Host:test.traefik.io` | Override the default frontend rule (Default: `Host:{{.ServiceName}}.{{.Domain}}`). |
|
||||
| `traefik.frontend.passHostHeader=true` | Forward client `Host` header to the backend. |
|
||||
| `traefik.frontend.priority=10` | Override default frontend priority |
|
||||
| `traefik.frontend.entryPoints=http,https` | Assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
|
||||
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |
|
||||
| `traefik.backend.loadbalancer=drr` | override the default `wrr` load balancer algorithm |
|
||||
| `traefik.backend.loadbalancer.stickiness=true` | enable backend sticky sessions |
|
||||
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
|
||||
| `traefik.backend.loadbalancer.sticky=true` | enable backend sticky sessions (DEPRECATED) |
|
||||
|
||||
### Examples
|
||||
|
||||
If you want Træfik to use Consul tags correctly, you need to define them like this:
|
||||
```
|
||||
traefik.enable=true
|
||||
traefik.tags=api
|
||||
traefik.tags=external
|
||||
```
|
||||
|
||||
If the prefix defined in the Træfik configuration is `bla`, tags need to be defined like this:
|
||||
```
|
||||
bla.enable=true
|
||||
bla.tags=api
|
||||
bla.tags=external
|
||||
```
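The tags themselves are attached when the service is registered in Consul. A minimal, hypothetical agent service definition carrying such tags might look like this (service name and port are examples):

```json
{
  "service": {
    "name": "whoami",
    "port": 80,
    "tags": [
      "traefik.enable=true",
      "traefik.frontend.rule=Host:whoami.consul.localhost"
    ]
  }
}
```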
|
||||
@@ -1,265 +0,0 @@
|
||||
|
||||
# Docker Backend
|
||||
|
||||
Træfik can be configured to use Docker as a backend configuration.
|
||||
|
||||
## Docker
|
||||
|
||||
```toml
|
||||
################################################################
|
||||
# Docker configuration backend
|
||||
################################################################
|
||||
|
||||
# Enable Docker configuration backend.
|
||||
[docker]
|
||||
|
||||
# Docker server endpoint. Can be a tcp or a unix socket endpoint.
|
||||
#
|
||||
# Required
|
||||
#
|
||||
endpoint = "unix:///var/run/docker.sock"
|
||||
|
||||
# Default domain used.
|
||||
# Can be overridden by setting the "traefik.domain" label on a container.
|
||||
#
|
||||
# Required
|
||||
#
|
||||
domain = "docker.localhost"
|
||||
|
||||
# Enable watch docker changes.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
watch = true
|
||||
|
||||
# Override default configuration template.
|
||||
# For advanced users :)
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# filename = "docker.tmpl"
|
||||
|
||||
# Expose containers by default in Traefik.
|
||||
# If set to false, containers that don't have `traefik.enable=true` will be ignored.
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
exposedbydefault = true
|
||||
|
||||
# Use the IP address from the bound port instead of the inner network one.
|
||||
# For specific use-case :)
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
usebindportip = true
|
||||
|
||||
# Use Swarm Mode services as data provider.
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
swarmmode = false
|
||||
|
||||
# Enable docker TLS connection.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# [docker.tls]
|
||||
# ca = "/etc/ssl/ca.crt"
|
||||
# cert = "/etc/ssl/docker.crt"
|
||||
# key = "/etc/ssl/docker.key"
|
||||
# insecureskipverify = true
|
||||
```
|
||||
|
||||
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
|
||||
|
||||
|
||||
## Docker Swarm Mode
|
||||
|
||||
```toml
|
||||
################################################################
|
||||
# Docker Swarmmode configuration backend
|
||||
################################################################
|
||||
|
||||
# Enable Docker configuration backend.
|
||||
[docker]
|
||||
|
||||
# Docker server endpoint.
|
||||
# Can be a tcp or a unix socket endpoint.
|
||||
#
|
||||
# Required
|
||||
# Default: "unix:///var/run/docker.sock"
|
||||
#
|
||||
endpoint = "tcp://127.0.0.1:2375"
|
||||
|
||||
# Default domain used.
|
||||
# Can be overridden by setting the "traefik.domain" label on a service.
|
||||
#
|
||||
# Optional
|
||||
# Default: ""
|
||||
#
|
||||
domain = "docker.localhost"
|
||||
|
||||
# Enable watch docker changes.
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
watch = true
|
||||
|
||||
# Use Docker Swarm Mode as data provider.
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
swarmmode = true
|
||||
|
||||
# Override default configuration template.
|
||||
# For advanced users :)
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# filename = "docker.tmpl"
|
||||
|
||||
# Expose services by default in Traefik.
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
exposedbydefault = false
|
||||
|
||||
# Enable docker TLS connection.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# [docker.tls]
|
||||
# ca = "/etc/ssl/ca.crt"
|
||||
# cert = "/etc/ssl/docker.crt"
|
||||
# key = "/etc/ssl/docker.key"
|
||||
# insecureskipverify = true
|
||||
```
|
||||
|
||||
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
|
||||
|
||||
## Labels: overriding default behaviour
|
||||
|
||||
#### Using Docker with Swarm Mode
|
||||
|
||||
If you use a compose file with the Swarm mode, labels should be defined in the `deploy` part of your service.
|
||||
This behavior is only enabled for docker-compose version 3+ ([Compose file reference](https://docs.docker.com/compose/compose-file/#labels-1)).
|
||||
|
||||
```yaml
|
||||
version: "3"
|
||||
services:
|
||||
whoami:
|
||||
deploy:
|
||||
labels:
|
||||
traefik.docker.network: traefik
|
||||
```
|
||||
|
||||
#### Using Docker Compose
|
||||
|
||||
If you are intending to use only Docker Compose commands (e.g. `docker-compose up --scale whoami=2 -d`), labels should be under your service, otherwise they will be ignored.
|
||||
|
||||
```yaml
|
||||
version: "3"
|
||||
services:
|
||||
whoami:
|
||||
labels:
|
||||
traefik.docker.network: traefik
|
||||
```
|
||||
|
||||
### On Containers
|
||||
|
||||
Labels can be used on containers to override default behaviour.
|
||||
|
||||
| Label | Description |
|
||||
|------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `traefik.backend=foo` | Give the name `foo` to the generated backend for this container. |
|
||||
| `traefik.backend.maxconn.amount=10` | Set a maximum number of connections to the backend. Must be used in conjunction with the below label to take effect. |
|
||||
| `traefik.backend.maxconn.extractorfunc=client.ip` | Set the function to be used against the request to determine what to limit maximum connections to the backend by. Must be used in conjunction with the above label to take effect. |
|
||||
| `traefik.backend.loadbalancer.method=drr` | Override the default `wrr` load balancer algorithm |
|
||||
| `traefik.backend.loadbalancer.stickiness=true` | Enable backend sticky sessions |
|
||||
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
|
||||
| `traefik.backend.loadbalancer.sticky=true` | Enable backend sticky sessions (DEPRECATED) |
|
||||
| `traefik.backend.loadbalancer.swarm=true` | Use Swarm's inbuilt load balancer (only relevant under Swarm Mode). |
|
||||
| `traefik.backend.circuitbreaker.expression=EXPR` | Create a [circuit breaker](/basics/#backends) to be used against the backend |
|
||||
| `traefik.port=80` | Register this port. Useful when the container exposes multiples ports. |
|
||||
| `traefik.protocol=https` | Override the default `http` protocol |
|
||||
| `traefik.weight=10` | Assign this weight to the container |
|
||||
| `traefik.enable=false` | Disable this container in Træfik |
|
||||
| `traefik.frontend.rule=EXPR` | Override the default frontend rule. Default: `Host:{containerName}.{domain}` or `Host:{service}.{project_name}.{domain}` if you are using `docker-compose`. |
|
||||
| `traefik.frontend.passHostHeader=true` | Forward client `Host` header to the backend. |
|
||||
| `traefik.frontend.priority=10` | Override default frontend priority |
|
||||
| `traefik.frontend.entryPoints=http,https` | Assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints` |
|
||||
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |
|
||||
| `traefik.frontend.whitelistSourceRange:RANGE` | List of IP-Ranges which are allowed to access. An unset or empty list allows all Source-IPs to access. If one of the Net-Specifications are invalid, the whole list is invalid and allows all Source-IPs to access. |
|
||||
| `traefik.docker.network` | Set the docker network to use for connections to this container. [1] |
|
||||
| `traefik.frontend.redirect.entryPoint=https` | Enables Redirect to another entryPoint for that frontend (e.g. HTTPS) |
|
||||
| `traefik.frontend.redirect.regex=^http://localhost/(.*)` | Redirect to another URL for that frontend. Must be set with `traefik.frontend.redirect.replacement`. |
|
||||
| `traefik.frontend.redirect.replacement=http://mydomain/$1` | Redirect to another URL for that frontend. Must be set with `traefik.frontend.redirect.regex`. |
|
||||
|
||||
[1] `traefik.docker.network`:
|
||||
If a container is linked to several networks, be sure to set the proper network name (you can check with `docker inspect <container_id>`) otherwise it will randomly pick one (depending on how docker is returning them).
|
||||
For instance when deploying docker `stack` from compose files, the compose defined networks will be prefixed with the `stack` name.
|
||||
Or, if your service references an external network, use its name instead.
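For example, assuming a hypothetical stack deployed with `docker stack deploy -c docker-compose.yml mystack`, a network declared as `traefik-net` in the compose file is created as `mystack_traefik-net`, and the label must reference that prefixed name:

```yaml
version: "3"
services:
  whoami:
    image: emilevauge/whoami   # example image
    networks:
      - traefik-net
    deploy:
      labels:
        # the "mystack_" prefix comes from the stack name used at deploy time
        traefik.docker.network: mystack_traefik-net
networks:
  traefik-net:
```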
|
||||
|
||||
#### Security Headers
|
||||
|
||||
| Label | Description |
|
||||
|----------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `traefik.frontend.headers.allowedHosts=EXPR`              | Provides a list of allowed hosts for which requests will be processed. Format: `Host1,Host2`                                                                                                          |
|
||||
| `traefik.frontend.headers.customRequestHeaders=EXPR ` | Provides the container with custom request headers that will be appended to each request forwarded to the container. Format: <code>HEADER:value||HEADER2:value2</code> |
|
||||
| `traefik.frontend.headers.customResponseHeaders=EXPR` | Appends the headers to each response returned by the container, before forwarding the response to the client. Format: <code>HEADER:value||HEADER2:value2</code> |
|
||||
| `traefik.frontend.headers.hostsProxyHeaders=EXPR`         | Provides a list of headers in which the proxied hostname may be stored. Format: `HEADER1,HEADER2`                                                                                                     |
|
||||
| `traefik.frontend.headers.SSLRedirect=true` | Forces the frontend to redirect to SSL if a non-SSL request is sent. |
|
||||
| `traefik.frontend.headers.SSLTemporaryRedirect=true` | Forces the frontend to redirect to SSL if a non-SSL request is sent, but by sending a 302 instead of a 301. |
|
||||
| `traefik.frontend.headers.SSLHost=HOST` | This setting configures the hostname that redirects will be based on. Default is "", which is the same host as the request. |
|
||||
| `traefik.frontend.headers.SSLProxyHeaders=EXPR` | Header combinations that would signify a proper SSL Request (Such as `X-Forwarded-For:https`). Format: <code>HEADER:value||HEADER2:value2</code> |
|
||||
| `traefik.frontend.headers.STSSeconds=315360000` | Sets the max-age of the STS header. |
|
||||
| `traefik.frontend.headers.STSIncludeSubdomains=true` | Adds the `IncludeSubdomains` section of the STS header. |
|
||||
| `traefik.frontend.headers.STSPreload=true` | Adds the preload flag to the STS header. |
|
||||
| `traefik.frontend.headers.forceSTSHeader=false` | Adds the STS header to non-SSL requests. |
|
||||
| `traefik.frontend.headers.frameDeny=false` | Adds the `X-Frame-Options` header with the value of `DENY`. |
|
||||
| `traefik.frontend.headers.customFrameOptionsValue=VALUE` | Overrides the `X-Frame-Options` header with the custom value. |
|
||||
| `traefik.frontend.headers.contentTypeNosniff=true` | Adds the `X-Content-Type-Options` header with the value `nosniff`. |
|
||||
| `traefik.frontend.headers.browserXSSFilter=true` | Adds the X-XSS-Protection header with the value `1; mode=block`. |
|
||||
| `traefik.frontend.headers.contentSecurityPolicy=VALUE` | Adds CSP Header with the custom value. |
|
||||
| `traefik.frontend.headers.publicKey=VALUE`                | Adds the pinned public key (HPKP) header.                                                                                                                                                             |
|
||||
| `traefik.frontend.headers.referrerPolicy=VALUE` | Adds referrer policy header. |
|
||||
| `traefik.frontend.headers.isDevelopment=false` | This will cause the `AllowedHosts`, `SSLRedirect`, and `STSSeconds`/`STSIncludeSubdomains` options to be ignored during development.<br>When deploying to production, be sure to set this to false. |
|
||||
|
||||
### On Service
|
||||
|
||||
Service labels can be used to override the default behaviour.
|
||||
|
||||
| Label | Description |
|
||||
|---------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------|
|
||||
| `traefik.<service-name>.port=PORT` | Overrides `traefik.port`. If several ports need to be exposed, the service labels could be used. |
|
||||
| `traefik.<service-name>.protocol` | Overrides `traefik.protocol`. |
|
||||
| `traefik.<service-name>.weight` | Assign this service weight. Overrides `traefik.weight`. |
|
||||
| `traefik.<service-name>.frontend.backend=BACKEND` | Assign this service frontend to `BACKEND`. Default is to assign to the service backend. |
|
||||
| `traefik.<service-name>.frontend.entryPoints` | Overrides `traefik.frontend.entrypoints` |
|
||||
| `traefik.<service-name>.frontend.auth.basic` | Sets a Basic Auth for that frontend |
|
||||
| `traefik.<service-name>.frontend.passHostHeader` | Overrides `traefik.frontend.passHostHeader`. |
|
||||
| `traefik.<service-name>.frontend.priority` | Overrides `traefik.frontend.priority`. |
|
||||
| `traefik.<service-name>.frontend.rule` | Overrides `traefik.frontend.rule`. |
|
||||
| `traefik.<service-name>.frontend.redirect` | Overrides `traefik.frontend.redirect`. |
|
||||
| `traefik.<service-name>.frontend.redirect.entryPoint=https` | Overrides `traefik.frontend.redirect.entryPoint`. |
|
||||
| `traefik.<service-name>.frontend.redirect.regex=^http://localhost/(.*)` | Overrides `traefik.frontend.redirect.regex`. |
|
||||
| `traefik.<service-name>.frontend.redirect.replacement=http://mydomain/$1` | Overrides `traefik.frontend.redirect.replacement`. |
|
||||
|
||||
|
||||
!!! note
|
||||
If a label is defined both as a `container label` and a `service label` (for example `traefik.<service-name>.port=PORT` and `traefik.port=PORT` ), the `service label` is used to define the `<service-name>` property (`port` in the example).
|
||||
|
||||
It's possible to mix `container labels` and `service labels`, in this case `container labels` are used as default value for missing `service labels` but no frontends are going to be created with the `container labels`.
|
||||
|
||||
More details in this [example](/user-guide/docker-and-lets-encrypt/#labels).
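As a sketch (hypothetical image, with `web` and `admin` as freely chosen service names), a container exposing two ports could mix both kinds of labels like this:

```yaml
version: "3"
services:
  myapp:
    image: myorg/myapp   # hypothetical image exposing ports 80 and 8080
    labels:
      # container label: default protocol for both services below
      traefik.protocol: "http"
      # service "web"
      traefik.web.port: "80"
      traefik.web.frontend.rule: "Host:app.docker.localhost"
      # service "admin"
      traefik.admin.port: "8080"
      traefik.admin.frontend.rule: "Host:admin.docker.localhost"
```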
|
||||
|
||||
!!! warning
|
||||
When running inside a container, Træfik will need network access through:
|
||||
|
||||
`docker network connect <network> <traefik-container>`
|
||||
@@ -1,71 +0,0 @@
|
||||
# DynamoDB Backend
|
||||
|
||||
Træfik can be configured to use Amazon DynamoDB as a backend configuration.
|
||||
|
||||
## Configuration
|
||||
|
||||
```toml
|
||||
################################################################
|
||||
# DynamoDB configuration backend
|
||||
################################################################
|
||||
|
||||
# Enable DynamoDB configuration backend.
|
||||
[dynamodb]
|
||||
|
||||
# Region to use when connecting to AWS.
|
||||
#
|
||||
# Required
|
||||
#
|
||||
region = "us-west-1"
|
||||
|
||||
# DynamoDB Table Name.
|
||||
#
|
||||
# Optional
|
||||
# Default: "traefik"
|
||||
#
|
||||
tableName = "traefik"
|
||||
|
||||
# Enable watch DynamoDB changes.
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
watch = true
|
||||
|
||||
# Polling interval (in seconds).
|
||||
#
|
||||
# Optional
|
||||
# Default: 15
|
||||
#
|
||||
refreshSeconds = 15
|
||||
|
||||
# AccessKeyID to use when connecting to AWS.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
accessKeyID = "abc"
|
||||
|
||||
# SecretAccessKey to use when connecting to AWS.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
secretAccessKey = "123"
|
||||
|
||||
# Endpoint of a local DynamoDB instance, for testing.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
endpoint = "http://localhost:8080"
|
||||
```
|
||||
|
||||
## Table Items
|
||||
|
||||
Items in the `dynamodb` table must have three attributes:
|
||||
|
||||
- `id` (string): The id is the primary key.
|
||||
- `name`(string): The name is used as the name of the frontend or backend.
|
||||
- `frontend` or `backend` (map): This attribute's structure matches exactly the structure of a Frontend or Backend type in Traefik.
|
||||
See `types/types.go` for details.
|
||||
The presence or absence of this attribute determines its type.
|
||||
So an item should never have both a `frontend` and a `backend` attribute.
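A minimal sketch of a backend item, shown as plain JSON (attribute names follow the structures in `types/types.go`; all values are hypothetical):

```json
{
  "id": "backend-whoami",
  "name": "backend-whoami",
  "backend": {
    "servers": {
      "server-1": { "url": "http://10.0.0.1:80", "weight": 1 }
    },
    "loadBalancer": { "method": "drr" }
  }
}
```

A frontend item would carry a `frontend` map instead of the `backend` one.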
|
||||
|
||||
@@ -1,143 +0,0 @@
|
||||
# ECS Backend
|
||||
|
||||
Træfik can be configured to use Amazon ECS as a backend configuration.
|
||||
|
||||
## Configuration
|
||||
|
||||
```toml
|
||||
################################################################
|
||||
# ECS configuration backend
|
||||
################################################################
|
||||
|
||||
# Enable ECS configuration backend.
|
||||
[ecs]
|
||||
|
||||
# ECS Cluster Name.
|
||||
#
|
||||
# DEPRECATED - Please use `clusters`.
|
||||
#
|
||||
cluster = "default"
|
||||
|
||||
# ECS Clusters Name.
|
||||
#
|
||||
# Optional
|
||||
# Default: ["default"]
|
||||
#
|
||||
clusters = ["default"]
|
||||
|
||||
# Enable watch ECS changes.
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
watch = true
|
||||
|
||||
# Default domain used.
|
||||
#
|
||||
# Optional
|
||||
# Default: ""
|
||||
#
|
||||
domain = "ecs.localhost"
|
||||
|
||||
# Enable auto discover ECS clusters.
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
autoDiscoverClusters = false
|
||||
|
||||
# Polling interval (in seconds).
|
||||
#
|
||||
# Optional
|
||||
# Default: 15
|
||||
#
|
||||
refreshSeconds = 15
|
||||
|
||||
# Expose ECS services by default in Traefik.
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
exposedByDefault = false
|
||||
|
||||
# Region to use when connecting to AWS.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
region = "us-east-1"
|
||||
|
||||
# AccessKeyID to use when connecting to AWS.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
accessKeyID = "abc"
|
||||
|
||||
# SecretAccessKey to use when connecting to AWS.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
secretAccessKey = "123"
|
||||
|
||||
# Override default configuration template.
|
||||
# For advanced users :)
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# filename = "ecs.tmpl"
|
||||
```
|
||||
|
||||
If `AccessKeyID`/`SecretAccessKey` is not given, credentials will be resolved in the following order:
|
||||
|
||||
- From environment variables; `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN`.
|
||||
- Shared credentials, determined by `AWS_PROFILE` and `AWS_SHARED_CREDENTIALS_FILE`, which default to `default` and `~/.aws/credentials`.
|
||||
- EC2 instance role or ECS task role
|
||||
|
||||
## Policy
|
||||
|
||||
Træfik needs the following policy to read ECS information:
|
||||
|
||||
```json
|
||||
{
|
||||
"Version": "2012-10-17",
|
||||
"Statement": [
|
||||
{
|
||||
"Sid": "TraefikECSReadAccess",
|
||||
"Effect": "Allow",
|
||||
"Action": [
|
||||
"ecs:ListClusters",
|
||||
"ecs:DescribeClusters",
|
||||
"ecs:ListTasks",
|
||||
"ecs:DescribeTasks",
|
||||
"ecs:DescribeContainerInstances",
|
||||
"ecs:DescribeTaskDefinition",
|
||||
"ec2:DescribeInstances"
|
||||
],
|
||||
"Resource": [
|
||||
"*"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Labels: overriding default behaviour
|
||||
|
||||
Labels can be used on task containers to override default behaviour:
|
||||
|
||||
| Label | Description |
|
||||
|-----------------------------------------------------------|------------------------------------------------------------------------------------------|
|
||||
| `traefik.protocol=https` | override the default `http` protocol |
|
||||
| `traefik.weight=10` | assign this weight to the container |
|
||||
| `traefik.enable=false` | disable this container in Træfik |
|
||||
| `traefik.port=80` | override the default `port` value. Overrides `NetworkBindings` from Docker Container |
|
||||
| `traefik.backend.loadbalancer.method=drr` | override the default `wrr` load balancer algorithm |
|
||||
| `traefik.backend.loadbalancer.stickiness=true` | enable backend sticky sessions |
|
||||
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
|
||||
| `traefik.backend.loadbalancer.sticky=true` | enable backend sticky sessions (DEPRECATED) |
|
||||
| `traefik.backend.healthcheck.path=/health` | enable health checks for the backend, hitting the container at `path` |
|
||||
| `traefik.backend.healthcheck.interval=1s` | configure the health check interval |
|
||||
| `traefik.frontend.rule=Host:test.traefik.io` | override the default frontend rule (Default: `Host:{containerName}.{domain}`). |
|
||||
| `traefik.frontend.passHostHeader=true` | forward client `Host` header to the backend. |
|
||||
| `traefik.frontend.priority=10` | override default frontend priority |
|
||||
| `traefik.frontend.entryPoints=http,https` | assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
|
||||
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |
|
||||
@@ -1,75 +0,0 @@
|
||||
# Etcd Backend
|
||||
|
||||
Træfik can be configured to use Etcd as a backend configuration.
|
||||
|
||||
```toml
|
||||
################################################################
|
||||
# Etcd configuration backend
|
||||
################################################################
|
||||
|
||||
# Enable Etcd configuration backend.
|
||||
[etcd]
|
||||
|
||||
# Etcd server endpoint.
|
||||
#
|
||||
# Required
|
||||
# Default: "127.0.0.1:2379"
|
||||
#
|
||||
endpoint = "127.0.0.1:2379"
|
||||
|
||||
# Enable watch Etcd changes.
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
watch = true
|
||||
|
||||
# Prefix used for KV store.
|
||||
#
|
||||
# Optional
|
||||
# Default: "/traefik"
|
||||
#
|
||||
prefix = "/traefik"
|
||||
|
||||
# Force the use of API V3 (otherwise API V2 is used).
|
||||
#
|
||||
# Deprecated
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
useAPIV3 = true
|
||||
|
||||
|
||||
# Override default configuration template.
|
||||
# For advanced users :)
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# filename = "etcd.tmpl"
|
||||
|
||||
# Use etcd user/pass authentication.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# username = "foo"
|
||||
# password = "bar"
|
||||
|
||||
# Enable etcd TLS connection.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# [etcd.tls]
|
||||
# ca = "/etc/ssl/ca.crt"
|
||||
# cert = "/etc/ssl/etcd.crt"
|
||||
# key = "/etc/ssl/etcd.key"
|
||||
# insecureskipverify = true
|
||||
```
|
||||
|
||||
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
|
||||
|
||||
Please refer to the [Key Value storage structure](/user-guide/kv-config/#key-value-storage-structure) section to get documentation on Traefik KV structure.
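For orientation only (the linked guide is authoritative), keys under the configured `prefix` follow a layout along these lines; the values shown are hypothetical:

```
/traefik/backends/backend1/servers/server1/url   http://10.0.0.1:80
/traefik/frontends/frontend1/backend             backend1
/traefik/frontends/frontend1/routes/test/rule    Host:test.localhost
```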
|
||||
|
||||
!!! note
|
||||
The option `useAPIV3` enables the Etcd API V3 only when it is set to true.
|
||||
This option is **deprecated** and API V2 won't be supported in the future.
|
||||
@@ -1,32 +0,0 @@
|
||||
# Eureka Backend
|
||||
|
||||
Træfik can be configured to use Eureka as a backend configuration.
|
||||
|
||||
```toml
|
||||
################################################################
|
||||
# Eureka configuration backend
|
||||
################################################################
|
||||
|
||||
# Enable Eureka configuration backend.
|
||||
[eureka]
|
||||
|
||||
# Eureka server endpoint.
|
||||
#
|
||||
# Required
|
||||
#
|
||||
endpoint = "http://my.eureka.server/eureka"
|
||||
|
||||
# Override default configuration time between refresh.
|
||||
#
|
||||
# Optional
|
||||
# Default: 30s
|
||||
#
|
||||
delay = "1m"
|
||||
|
||||
# Override default configuration template.
|
||||
# For advanced users :)
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# filename = "eureka.tmpl"
|
||||
```
|
||||
@@ -1,249 +0,0 @@
|
||||
# File Backends
|
||||
|
||||
Træfik can be configured with a file.
|
||||
|
||||
## Reference
|
||||
|
||||
```toml
|
||||
[file]
|
||||
|
||||
# Backends
|
||||
[backends]
|
||||
|
||||
[backends.backend1]
|
||||
|
||||
[backends.backend1.servers]
|
||||
[backends.backend1.servers.server0]
|
||||
url = "http://10.10.10.1:80"
|
||||
weight = 1
|
||||
[backends.backend1.servers.server1]
|
||||
url = "http://10.10.10.2:80"
|
||||
weight = 2
|
||||
# ...
|
||||
|
||||
[backends.backend1.circuitBreaker]
|
||||
expression = "NetworkErrorRatio() > 0.5"
|
||||
|
||||
[backends.backend1.loadBalancer]
|
||||
method = "drr"
|
||||
[backends.backend1.loadBalancer.stickiness]
|
||||
cookieName = "foobar"
|
||||
|
||||
[backends.backend1.maxConn]
|
||||
amount = 10
|
||||
extractorfunc = "request.host"
|
||||
|
||||
[backends.backend1.healthCheck]
|
||||
path = "/health"
|
||||
port = 88
|
||||
interval = "30s"
|
||||
|
||||
[backends.backend2]
|
||||
# ...
|
||||
|
||||
# Frontends
|
||||
[frontends]
|
||||
|
||||
[frontends.frontend1]
|
||||
entryPoints = ["http", "https"]
|
||||
backend = "backend1"
|
||||
passHostHeader = true
|
||||
passTLSCert = true
|
||||
priority = 42
|
||||
basicAuth = [
|
||||
"test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/",
|
||||
"test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0",
|
||||
]
|
||||
whitelistSourceRange = ["10.42.0.0/16", "152.89.1.33/32", "afed:be44::/16"]
|
||||
|
||||
[frontends.frontend1.routes]
|
||||
[frontends.frontend1.routes.route0]
|
||||
rule = "Host:test.localhost"
|
||||
[frontends.frontend1.routes.Route1]
|
||||
rule = "Method:GET"
|
||||
# ...
|
||||
|
||||
[frontends.frontend1.headers]
|
||||
allowedHosts = ["foobar", "foobar"]
|
||||
hostsProxyHeaders = ["foobar", "foobar"]
|
||||
SSLRedirect = true
|
||||
SSLTemporaryRedirect = true
|
||||
SSLHost = "foobar"
|
||||
STSSeconds = 42
|
||||
STSIncludeSubdomains = true
|
||||
STSPreload = true
|
||||
forceSTSHeader = true
|
||||
frameDeny = true
|
||||
customFrameOptionsValue = "foobar"
|
||||
contentTypeNosniff = true
|
||||
browserXSSFilter = true
|
||||
contentSecurityPolicy = "foobar"
|
||||
publicKey = "foobar"
|
||||
referrerPolicy = "foobar"
|
||||
isDevelopment = true
|
||||
[frontends.frontend1.headers.customRequestHeaders]
|
||||
X-Foo-Bar-01 = "foobar"
|
||||
X-Foo-Bar-02 = "foobar"
|
||||
# ...
|
||||
[frontends.frontend1.headers.customResponseHeaders]
|
||||
X-Foo-Bar-03 = "foobar"
|
||||
X-Foo-Bar-04 = "foobar"
|
||||
# ...
|
||||
[frontends.frontend1.headers.SSLProxyHeaders]
|
||||
X-Foo-Bar-05 = "foobar"
|
||||
X-Foo-Bar-06 = "foobar"
|
||||
# ...
|
||||
|
||||
[frontends.frontend1.errors]
|
||||
[frontends.frontend1.errors.errorPage0]
|
||||
status = ["500-599"]
|
||||
backend = "error"
|
||||
query = "/{status}.html"
|
||||
[frontends.frontend1.errors.errorPage1]
|
||||
status = ["404", "403"]
|
||||
backend = "error"
|
||||
query = "/{status}.html"
|
||||
# ...
|
||||
|
||||
[frontends.frontend1.ratelimit]
|
||||
extractorfunc = "client.ip"
|
||||
[frontends.frontend1.ratelimit.rateset.rateset1]
|
||||
period = "10s"
|
||||
average = 100
|
||||
burst = 200
|
||||
[frontends.frontend1.ratelimit.rateset.rateset2]
|
||||
period = "3s"
|
||||
average = 5
|
||||
burst = 10
|
||||
# ...
|
||||
|
||||
[frontends.frontend1.redirect]
|
||||
entryPoint = "https"
|
||||
regex = "^http://localhost/(.*)"
|
||||
replacement = "http://mydomain/$1"
|
||||
|
||||
[frontends.frontend2]
|
||||
# ...
|
||||
|
||||
# HTTPS certificates
|
||||
[[tls]]
|
||||
entryPoints = ["https"]
|
||||
[tls.certificate]
|
||||
certFile = "path/to/my.cert"
|
||||
keyFile = "path/to/my.key"
|
||||
|
||||
[[tls]]
|
||||
# ...
|
||||
```
|
||||
|
||||
## Configuration mode
|
||||
|
||||
You have three choices:
|
||||
|
||||
- [Simple](/configuration/backends/file/#simple)
|
||||
- [Rules in a Separate File](/configuration/backends/file/#rules-in-a-separate-file)
|
||||
- [Multiple `.toml` Files](/configuration/backends/file/#multiple-toml-files)
|
||||
|
||||
To enable the file backend, you must either pass the `--file` option to the Træfik binary or put the `[file]` section (with or without inner settings) in the configuration file.
|
||||
|
||||
The configuration file allows managing both backends/frontends and HTTPS certificates (which are not [Let's Encrypt](https://letsencrypt.org) certificates generated through Træfik).
|
||||
|
||||
### Simple
|
||||
|
||||
Add your configuration at the end of the global configuration file `traefik.toml`:
|
||||
|
||||
```toml
|
||||
defaultEntryPoints = ["http", "https"]
|
||||
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
# ...
|
||||
[entryPoints.https]
|
||||
# ...
|
||||
|
||||
[file]
|
||||
|
||||
# rules
|
||||
[backends]
|
||||
[backends.backend1]
|
||||
# ...
|
||||
[backends.backend2]
|
||||
# ...
|
||||
|
||||
[frontends]
|
||||
[frontends.frontend1]
|
||||
# ...
|
||||
[frontends.frontend2]
|
||||
# ...
|
||||
[frontends.frontend3]
|
||||
# ...
|
||||
|
||||
# HTTPS certificate
|
||||
[[tls]]
|
||||
# ...
|
||||
|
||||
[[tls]]
|
||||
# ...
|
||||
```
|
||||
|
||||
!!! note
|
||||
Adding certificates directly to the entrypoint is still supported, but certificates declared in this way cannot be managed dynamically.
|
||||
It's recommended to use the file provider to declare certificates.
|
||||
|
||||
### Rules in a Separate File
|
||||
|
||||
Put your rules in a separate file, for example `rules.toml`:
|
||||
|
||||
```toml
|
||||
# traefik.toml
|
||||
defaultEntryPoints = ["http", "https"]
|
||||
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
# ...
|
||||
[entryPoints.https]
|
||||
# ...
|
||||
|
||||
[file]
|
||||
filename = "rules.toml"
|
||||
```
|
||||
|
||||
```toml
|
||||
# rules.toml
|
||||
[backends]
|
||||
[backends.backend1]
|
||||
# ...
|
||||
[backends.backend2]
|
||||
# ...
|
||||
|
||||
[frontends]
|
||||
[frontends.frontend1]
|
||||
# ...
|
||||
[frontends.frontend2]
|
||||
# ...
|
||||
[frontends.frontend3]
|
||||
# ...
|
||||
|
||||
# HTTPS certificate
|
||||
[[tls]]
|
||||
# ...
|
||||
|
||||
[[tls]]
|
||||
# ...
|
||||
```
|
||||
|
||||
### Multiple `.toml` Files
|
||||
|
||||
You could have multiple `.toml` files in a directory (and recursively in its sub-directories):
|
||||
|
||||
```toml
|
||||
[file]
|
||||
directory = "/path/to/config/"
|
||||
```
|
||||
|
||||
If you want Træfik to watch file changes automatically, just add:
|
||||
|
||||
```toml
|
||||
[file]
|
||||
watch = true
|
||||
```
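Both settings can be combined to watch a whole directory of rule files:

```toml
[file]
directory = "/path/to/config/"
watch = true
```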
|
||||
@@ -1,191 +0,0 @@
|
||||
# Kubernetes Ingress Backend
|
||||
|
||||
Træfik can be configured to use Kubernetes Ingress as a backend configuration.
|
||||
|
||||
See also [Kubernetes user guide](/user-guide/kubernetes).
|
||||
|
||||
## Configuration
|
||||
|
||||
```toml
|
||||
################################################################
|
||||
# Kubernetes Ingress configuration backend
|
||||
################################################################
|
||||
|
||||
# Enable Kubernetes Ingress configuration backend.
|
||||
[kubernetes]
|
||||
|
||||
# Kubernetes server endpoint.
|
||||
#
|
||||
# Optional for in-cluster configuration, required otherwise.
|
||||
# Default: empty
|
||||
#
|
||||
# endpoint = "http://localhost:8080"
|
||||
|
||||
# Bearer token used for the Kubernetes client configuration.
|
||||
#
|
||||
# Optional
|
||||
# Default: empty
|
||||
#
|
||||
# token = "my token"
|
||||
|
||||
# Path to the certificate authority file.
|
||||
# Used for the Kubernetes client configuration.
|
||||
#
|
||||
# Optional
|
||||
# Default: empty
|
||||
#
|
||||
# certAuthFilePath = "/my/ca.crt"
|
||||
|
||||
# Array of namespaces to watch.
|
||||
#
|
||||
# Optional
|
||||
# Default: all namespaces (empty array).
|
||||
#
|
||||
# namespaces = ["default", "production"]
|
||||
|
||||
# Ingress label selector to filter Ingress objects that should be processed.
|
||||
#
|
||||
# Optional
|
||||
# Default: empty (process all Ingresses)
|
||||
#
|
||||
# labelselector = "A and not B"
|
||||
|
||||
# Disable PassHost Headers.
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
# disablePassHostHeaders = true
|
||||
|
||||
# Enable PassTLSCert Headers.
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
# enablePassTLSCert = true
|
||||
|
||||
# Override default configuration template.
|
||||
#
|
||||
# Optional
|
||||
# Default: <built-in template>
|
||||
#
|
||||
# filename = "kubernetes.tmpl"
|
||||
```
|
||||
|
||||
### `endpoint`
|
||||
|
||||
The Kubernetes server endpoint as URL.
|
||||
|
||||
When deployed into Kubernetes, Traefik will read the environment variables `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT` to construct the endpoint.
|
||||
|
||||
The access token will be looked up in `/var/run/secrets/kubernetes.io/serviceaccount/token` and the SSL CA certificate in `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.
|
||||
Both are mounted automatically when Træfik is deployed inside Kubernetes.
|
||||
|
||||
The endpoint may be specified to override the environment variable values inside a cluster.
|
||||
|
||||
When the environment variables are not found, Traefik will try to connect to the Kubernetes API server with an external-cluster client.
|
||||
In this case, the endpoint is required.
|
||||
Specifically, it may be set to the URL used by `kubectl proxy` to connect to a Kubernetes cluster using the granted authentication and authorization of the associated kubeconfig.
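For example, a minimal out-of-cluster configuration pointing at `kubectl proxy` (which listens on `127.0.0.1:8001` by default) could look like:

```toml
[kubernetes]
endpoint = "http://127.0.0.1:8001"
```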
|
||||
|
||||
### `labelselector`
|
||||
|
||||
By default, Traefik processes all Ingress objects in the configured namespaces.
|
||||
A label selector can be defined to filter on specific Ingress objects only.
|
||||
|
||||
See [label-selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors) for details.
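As an illustration (the label names below are hypothetical), only Ingresses matching the selector would be processed:

```toml
[kubernetes]
# equality-based selectors combined with a comma act as a logical AND
labelselector = "traefik-instance=public,environment!=staging"
```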
|
||||
|
||||
### TLS communication between Traefik and backend pods
|
||||
|
||||
Traefik automatically requests endpoint information based on the service provided in the ingress spec.
|
||||
Although traefik will connect directly to the endpoints (pods), it still checks the service port to see if TLS communication is required.
|
||||
If the service port defined in the ingress spec is 443, then the backend communication protocol is assumed to be TLS, and will connect via TLS automatically.
|
||||
|
||||
!!! note
|
||||
Please note that by enabling TLS communication between Træfik and your pods, you will need trusted certificates that have the proper trust chain and IP subject name.
|
||||
If this is not an option, you may need to skip TLS certificate verification.
|
||||
See the [InsecureSkipVerify](/configuration/commons/#main-section) setting for more details.
|
||||
|
||||
## Annotations
|
||||
|
||||
### General annotations
|
||||
|
||||
The following general annotations are applicable on the Ingress object:
|
||||
|
||||
- `traefik.frontend.rule.type: PathPrefixStrip`
|
||||
Override the default frontend rule type. Default: `PathPrefix`.
|
||||
- `traefik.frontend.priority: "3"`
|
||||
Override the default frontend rule priority.
|
||||
- `traefik.frontend.redirect.entryPoint: https`:
|
||||
Enables Redirect to another entryPoint for that frontend (e.g. HTTPS).
|
||||
- `traefik.frontend.redirect.regex: ^http://localhost/(.*)`:
|
||||
Redirect to another URL for that frontend. Must be set with `traefik.frontend.redirect.replacement`.
|
||||
- `traefik.frontend.redirect.replacement: http://mydomain/$1`:
|
||||
Redirect to another URL for that frontend. Must be set with `traefik.frontend.redirect.regex`.
|
||||
- `traefik.frontend.entryPoints: http,https`
|
||||
Override the default frontend endpoints.
|
||||
- `traefik.frontend.passTLSCert: true`
|
||||
Override the default frontend PassTLSCert value. Default: `false`.
|
||||
- `ingress.kubernetes.io/rewrite-target: /users`
|
||||
Replaces each matched Ingress path with the specified one, and adds the old path to the `X-Replaced-Path` header.
|
||||
- `ingress.kubernetes.io/whitelist-source-range: "1.2.3.0/24, fe80::/16"`
|
||||
A comma-separated list of IP ranges permitted for access. All source IPs are permitted if the list is empty or a single range is ill-formatted.
|
||||
|
||||
!!! note
|
||||
Please note that `traefik.frontend.redirect.regex` and `traefik.frontend.redirect.replacement` do not have to be set if `traefik.frontend.redirect.entryPoint` is defined for the redirection (they will not be used in this case).
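Putting a few of these together, a hypothetical Ingress using path-prefix stripping and an entry point override might look like:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whoami   # hypothetical Ingress
  annotations:
    traefik.frontend.rule.type: PathPrefixStrip
    traefik.frontend.entryPoints: http,https
spec:
  rules:
  - host: whoami.example.com
    http:
      paths:
      - path: /whoami
        backend:
          serviceName: whoami
          servicePort: 80
```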
|
||||
|
||||
The following annotations are applicable on the Service object associated with a particular Ingress object:
|
||||
|
||||
- `traefik.backend.loadbalancer.method=drr`
|
||||
Override the default `wrr` load balancer algorithm.
|
||||
- `traefik.backend.loadbalancer.stickiness=true`
|
||||
Enable backend sticky sessions.
|
||||
- `traefik.backend.loadbalancer.stickiness.cookieName=NAME`
|
||||
Manually set the cookie name for sticky sessions.
|
||||
- `traefik.backend.loadbalancer.sticky=true`
|
||||
Enable backend sticky sessions (DEPRECATED).
|
||||
- `traefik.backend.circuitbreaker: <expression>`
|
||||
Set the circuit breaker expression for the backend.
|
||||
|
||||
### Security annotations
|
||||
|
||||
The following security annotations are applicable on the Ingress object:
|
||||
|
||||
| Annotation | Description |
|
||||
| -------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| `ingress.kubernetes.io/allowed-hosts:EXPR`                | Provides a list of allowed hosts for which requests will be processed. Format: `Host1,Host2`                                                                                                           |
|
||||
| `ingress.kubernetes.io/custom-request-headers:EXPR` | Provides the container with custom request headers that will be appended to each request forwarded to the container. Format: <code>HEADER:value||HEADER2:value2</code> |
|
||||
| `ingress.kubernetes.io/custom-response-headers:EXPR` | Appends the headers to each response returned by the container, before forwarding the response to the client. Format: <code>HEADER:value||HEADER2:value2</code> |
|
||||
| `ingress.kubernetes.io/proxy-headers:EXPR`                | Provides a list of headers in which the proxied hostname may be stored. Format: `HEADER1,HEADER2`                                                                                                      |
|
||||
| `ingress.kubernetes.io/ssl-redirect:true` | Forces the frontend to redirect to SSL if a non-SSL request is sent. |
|
||||
| `ingress.kubernetes.io/ssl-temporary-redirect:true` | Forces the frontend to redirect to SSL if a non-SSL request is sent, but by sending a 302 instead of a 301. |
|
||||
| `ingress.kubernetes.io/ssl-host:HOST` | This setting configures the hostname that redirects will be based on. Default is "", which is the same host as the request. |
|
||||
| `ingress.kubernetes.io/ssl-proxy-headers:EXPR` | Header combinations that would signify a proper SSL Request (Such as `X-Forwarded-For:https`). Format: <code>HEADER:value||HEADER2:value2</code> |
|
||||
| `ingress.kubernetes.io/hsts-max-age:315360000` | Sets the max-age of the HSTS header. |
|
||||
| `ingress.kubernetes.io/hsts-include-subdomains:true` | Adds the IncludeSubdomains section of the STS header. |
|
||||
| `ingress.kubernetes.io/hsts-preload:true` | Adds the preload flag to the HSTS header. |
|
||||
| `ingress.kubernetes.io/force-hsts:false` | Adds the STS header to non-SSL requests. |
|
||||
| `ingress.kubernetes.io/frame-deny:false` | Adds the `X-Frame-Options` header with the value of `DENY`. |
|
||||
| `ingress.kubernetes.io/custom-frame-options-value:VALUE` | Overrides the `X-Frame-Options` header with the custom value. |
|
||||
| `ingress.kubernetes.io/content-type-nosniff:true` | Adds the `X-Content-Type-Options` header with the value `nosniff`. |
|
||||
| `ingress.kubernetes.io/browser-xss-filter:true` | Adds the X-XSS-Protection header with the value `1; mode=block`. |
|
||||
| `ingress.kubernetes.io/content-security-policy:VALUE` | Adds CSP Header with the custom value. |
|
||||
| `ingress.kubernetes.io/public-key:VALUE`                  | Adds the pinned public key (HPKP) header.                                                                                                                                                              |
|
||||
| `ingress.kubernetes.io/referrer-policy:VALUE` | Adds referrer policy header. |
|
||||
| `ingress.kubernetes.io/is-development:false` | This will cause the `AllowedHosts`, `SSLRedirect`, and `STSSeconds`/`STSIncludeSubdomains` options to be ignored during development.<br>When deploying to production, be sure to set this to false. |
|
||||
|
||||
### Authentication
|
||||
|
||||
It is possible to add additional authentication annotations to the Ingress object.
|
||||
The source of the authentication is a Secret object that contains the credentials.
|
||||
|
||||
- `ingress.kubernetes.io/auth-type`: `basic`
|
||||
Contains the authentication type. The only permitted type is `basic`.
|
||||
- `ingress.kubernetes.io/auth-secret`: `mysecret`
|
||||
Contains the username and password with access to the paths defined in the Ingress object.
|
||||
|
||||
The secret must be created in the same namespace as the Ingress object.
|
||||
|
||||
The following limitations hold:
|
||||
|
||||
- The realm is not configurable; the only supported (and default) value is `traefik`.
|
||||
- The Secret must contain a single file only.
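A hedged sketch of such a Secret, reusing the example hash from the file backend reference above (the key name `auth` is an arbitrary choice, since the Secret may only contain a single file):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret     # referenced by ingress.kubernetes.io/auth-secret
  namespace: default # must match the Ingress namespace
type: Opaque
stringData:
  auth: "test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/"
```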
|
||||
@@ -1,201 +0,0 @@
|
||||
# Marathon Backend
|
||||
|
||||
Træfik can be configured to use Marathon as a backend configuration.
|
||||
|
||||
See also [Marathon user guide](/user-guide/marathon).
|
||||
|
||||
|
||||
## Configuration
|
||||
|
||||
```toml
|
||||
################################################################
|
||||
# Mesos/Marathon configuration backend
|
||||
################################################################
|
||||
|
||||
# Enable Marathon configuration backend.
|
||||
[marathon]
|
||||
|
||||
# Marathon server endpoint.
|
||||
# You can also specify multiple endpoints for Marathon:
|
||||
# endpoint = "http://10.241.1.71:8080,10.241.1.72:8080,10.241.1.73:8080"
|
||||
#
|
||||
# Required
|
||||
# Default: "http://127.0.0.1:8080"
|
||||
#
|
||||
endpoint = "http://127.0.0.1:8080"
|
||||
|
||||
# Enable watch Marathon changes.
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
watch = true
|
||||
|
||||
# Default domain used.
|
||||
# Can be overridden by setting the "traefik.domain" label on an application.
|
||||
#
|
||||
# Required
|
||||
#
|
||||
domain = "marathon.localhost"
|
||||
|
||||
# Override default configuration template.
|
||||
# For advanced users :)
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# filename = "marathon.tmpl"
|
||||
|
||||
# Expose Marathon apps by default in Traefik.
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
# exposedByDefault = false
|
||||
|
||||
# Convert Marathon groups to subdomains.
|
||||
# Default behavior: /foo/bar/myapp => foo-bar-myapp.{defaultDomain}
|
||||
# with groupsAsSubDomains enabled: /foo/bar/myapp => myapp.bar.foo.{defaultDomain}
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
# groupsAsSubDomains = true
|
||||
|
||||
# Enable compatibility with marathon-lb labels.
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
# marathonLBCompatibility = true
|
||||
|
||||
# Enable filtering using Marathon constraints.
|
||||
# If enabled, Traefik will read Marathon constraints, as defined in https://mesosphere.github.io/marathon/docs/constraints.html
|
||||
# Each individual constraint will be treated as a verbatim compounded tag.
|
||||
# i.e. "rack_id:CLUSTER:rack-1", with all constraint groups concatenated together using ":"
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
# filterMarathonConstraints = true
|
||||
|
||||
# Enable Marathon basic authentication.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# [marathon.basic]
|
||||
# httpBasicAuthUser = "foo"
|
||||
# httpBasicPassword = "bar"
|
||||
|
||||
# TLS client configuration. https://golang.org/pkg/crypto/tls/#Config
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# [marathon.TLS]
|
||||
# CA = "/etc/ssl/ca.crt"
|
||||
# Cert = "/etc/ssl/marathon.cert"
|
||||
# Key = "/etc/ssl/marathon.key"
|
||||
# InsecureSkipVerify = true
|
||||
|
||||
# DCOSToken for DCOS environment.
|
||||
# This will override the Authorization header.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# dcosToken = "xxxxxx"
|
||||
|
||||
# Override DialerTimeout.
|
||||
# Amount of time to allow the Marathon provider to wait to open a TCP connection
|
||||
# to a Marathon master.
|
||||
# Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw
|
||||
# values (digits).
|
||||
# If no units are provided, the value is parsed assuming seconds.
|
||||
#
|
||||
# Optional
|
||||
# Default: "60s"
|
||||
#
|
||||
# dialerTimeout = "60s"
|
||||
|
||||
# Set the TCP Keep Alive interval for the Marathon HTTP Client.
|
||||
# Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw
|
||||
# values (digits).
|
||||
# If no units are provided, the value is parsed assuming seconds.
|
||||
#
|
||||
# Optional
|
||||
# Default: "10s"
|
||||
#
|
||||
# keepAlive = "10s"
|
||||
|
||||
# By default, a task's IP address (as returned by the Marathon API) is used as
|
||||
# backend server if an IP-per-task configuration can be found; otherwise, the
|
||||
# name of the host running the task is used.
|
||||
# The latter behavior can be enforced by enabling this switch.
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
# forceTaskHostname = true
|
||||
|
||||
# Applications may define readiness checks which are probed by Marathon during
|
||||
# deployments periodically and the results exposed via the API.
|
||||
# Enabling the following parameter causes Traefik to filter out tasks
|
||||
# whose readiness checks have not succeeded.
|
||||
# Note that the checks are only valid during deployments.
|
||||
# See the Marathon guide for details.
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
# respectReadinessChecks = true
|
||||
```
|
||||
|
||||
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
|
||||
|
||||
## Labels: overriding default behaviour
|
||||
|
||||
Marathon labels may be used to dynamically change the routing and forwarding behaviour.
|
||||
|
||||
They may be specified on one of two levels: application or service.
|
||||
|
||||
### Application Level
|
||||
|
||||
The following labels can be defined on Marathon applications. They adjust the behaviour for the entire application.
|
||||
|
||||
| Label | Description |
|
||||
|-----------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `traefik.backend=foo` | assign the application to `foo` backend |
|
||||
| `traefik.backend.maxconn.amount=10` | set a maximum number of connections to the backend. Must be used in conjunction with the below label to take effect. |
|
||||
| `traefik.backend.maxconn.extractorfunc=client.ip` | set the function to be used against the request to determine what to limit maximum connections to the backend by. Must be used in conjunction with the above label to take effect. |
|
||||
| `traefik.backend.loadbalancer.method=drr` | override the default `wrr` load balancer algorithm |
|
||||
| `traefik.backend.loadbalancer.sticky=true` | enable backend sticky sessions (DEPRECATED) |
|
||||
| `traefik.backend.loadbalancer.stickiness=true` | enable backend sticky sessions |
|
||||
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
|
||||
| `traefik.backend.circuitbreaker.expression=NetworkErrorRatio() > 0.5` | create a [circuit breaker](/basics/#backends) to be used against the backend |
|
||||
| `traefik.backend.healthcheck.path=/health` | set the Traefik health check path [default: no health checks] |
|
||||
| `traefik.backend.healthcheck.interval=5s` | sets a custom health check interval in Go-parseable (`time.ParseDuration`) format [default: 30s] |
|
||||
| `traefik.portIndex=1` | register port by index in the application's ports array. Useful when the application exposes multiple ports. |
|
||||
| `traefik.port=80` | register the explicit application port value. Cannot be used alongside `traefik.portIndex`. |
|
||||
| `traefik.protocol=https` | override the default `http` protocol |
|
||||
| `traefik.weight=10` | assign this weight to the application |
|
||||
| `traefik.enable=false` | disable this application in Træfik |
|
||||
| `traefik.frontend.rule=Host:test.traefik.io` | override the default frontend rule (Default: `Host:{containerName}.{domain}`). |
|
||||
| `traefik.frontend.passHostHeader=true` | forward client `Host` header to the backend. |
|
||||
| `traefik.frontend.priority=10` | override default frontend priority |
|
||||
| `traefik.frontend.entryPoints=http,https` | assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
|
||||
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash`. |
|
||||
|
||||
### Service Level
|
||||
|
||||
For applications that expose multiple ports, specific labels can be used to extract one frontend/backend configuration pair per port. Each such pair is called a _service_. The (freely choosable) name of the service is an integral part of the service label name.
|
||||
|
||||
| Label | Description |
|
||||
|--------------------------------------------------------|------------------------------------------------------------------------------------------------------|
|
||||
| `traefik.<service-name>.port=443` | create a service binding with frontend/backend using this port. Overrides `traefik.port`. |
|
||||
| `traefik.<service-name>.portIndex=1` | create a service binding with frontend/backend using this port index. Overrides `traefik.portIndex`. |
|
||||
| `traefik.<service-name>.protocol=https` | assign `https` protocol. Overrides `traefik.protocol`. |
|
||||
| `traefik.<service-name>.weight=10` | assign this service weight. Overrides `traefik.weight`. |
|
||||
| `traefik.<service-name>.frontend.backend=fooBackend`   | assign this service frontend to `fooBackend`. Default is to assign to the service backend.            |
|
||||
| `traefik.<service-name>.frontend.entryPoints=http`     | assign entry points to this service. Overrides `traefik.frontend.entrypoints`.                        |
|
||||
| `traefik.<service-name>.frontend.auth.basic=test:EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |
|
||||
| `traefik.<service-name>.frontend.passHostHeader=true` | Forward client `Host` header to the backend. Overrides `traefik.frontend.passHostHeader`. |
|
||||
| `traefik.<service-name>.frontend.priority=10` | assign the service frontend priority. Overrides `traefik.frontend.priority`. |
|
||||
| `traefik.<service-name>.frontend.rule=Path:/foo` | assign the service frontend rule. Overrides `traefik.frontend.rule`. |
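For instance, a partial and purely hypothetical Marathon application definition exposing two ports could declare one service per port:

```json
{
  "id": "/whoami",
  "labels": {
    "traefik.web.port": "80",
    "traefik.web.frontend.rule": "Host:whoami.marathon.localhost",
    "traefik.admin.portIndex": "1",
    "traefik.admin.frontend.rule": "Host:admin.whoami.marathon.localhost"
  }
}
```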
|
||||
@@ -1,93 +0,0 @@
|
||||
# Mesos Generic Backend
|
||||
|
||||
Træfik can be configured to use Mesos as a backend configuration.
|
||||
|
||||
```toml
|
||||
################################################################
|
||||
# Mesos configuration backend
|
||||
################################################################
|
||||
|
||||
# Enable Mesos configuration backend.
|
||||
[mesos]
|
||||
|
||||
# Mesos server endpoint.
|
||||
# You can also specify multiple endpoints for Mesos:
|
||||
# endpoint = "192.168.35.40:5050,192.168.35.41:5050,192.168.35.42:5050"
|
||||
# endpoint = "zk://192.168.35.20:2181,192.168.35.21:2181,192.168.35.22:2181/mesos"
|
||||
#
|
||||
# Required
|
||||
# Default: "http://127.0.0.1:5050"
|
||||
#
|
||||
endpoint = "http://127.0.0.1:8080"
|
||||
|
||||
# Enable watch Mesos changes.
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
watch = true
|
||||
|
||||
# Default domain used.
|
||||
# Can be overridden by setting the "traefik.domain" label on an application.
|
||||
#
|
||||
# Required
|
||||
#
|
||||
domain = "mesos.localhost"
|
||||
|
||||
# Override default configuration template.
|
||||
# For advanced users :)
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# filename = "mesos.tmpl"
|
||||
|
||||
# Expose Mesos apps by default in Traefik.
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
# ExposedByDefault = false
|
||||
|
||||
# TLS client configuration. https://golang.org/pkg/crypto/tls/#Config
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# [mesos.TLS]
|
||||
# InsecureSkipVerify = true
|
||||
|
||||
# Zookeeper timeout (in seconds).
|
||||
#
|
||||
# Optional
|
||||
# Default: 30
|
||||
#
|
||||
# ZkDetectionTimeout = 30
|
||||
|
||||
# Polling interval (in seconds).
|
||||
#
|
||||
# Optional
|
||||
# Default: 30
|
||||
#
|
||||
# RefreshSeconds = 30
|
||||
|
||||
# IP sources (e.g. host, docker, mesos, rkt).
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# IPSources = "host"
|
||||
|
||||
# HTTP Timeout (in seconds).
|
||||
#
|
||||
# Optional
|
||||
# Default: 30
|
||||
#
|
||||
# StateTimeoutSecond = "30"
|
||||
|
||||
# Convert groups to subdomains.
|
||||
# Default behavior: /foo/bar/myapp => foo-bar-myapp.{defaultDomain}
|
||||
# with groupsAsSubDomains enabled: /foo/bar/myapp => myapp.bar.foo.{defaultDomain}
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
# groupsAsSubDomains = true
|
||||
```
|
||||
@@ -1,140 +0,0 @@
|
||||
# Rancher Backend
|
||||
|
||||
Træfik can be configured to use Rancher as a backend configuration.
|
||||
|
||||
## Global Configuration
|
||||
|
||||
```toml
|
||||
################################################################
|
||||
# Rancher configuration backend
|
||||
################################################################
|
||||
|
||||
# Enable Rancher configuration backend.
|
||||
[rancher]
|
||||
|
||||
# Default domain used.
|
||||
# Can be overridden by setting the "traefik.domain" label on a service.
|
||||
#
|
||||
# Required
|
||||
#
|
||||
domain = "rancher.localhost"
|
||||
|
||||
# Enable watch Rancher changes.
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
watch = true
|
||||
|
||||
# Polling interval (in seconds).
|
||||
#
|
||||
# Optional
|
||||
# Default: 15
|
||||
#
|
||||
refreshSeconds = 15
|
||||
|
||||
# Expose Rancher services by default in Traefik.
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
exposedByDefault = false
|
||||
|
||||
# Filter services with unhealthy states and inactive states.
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
enableServiceHealthFilter = true
|
||||
```
|
||||
|
||||
To enable constraints see [backend-specific constraints section](/configuration/commons/#backend-specific).
|
||||
|
||||
## Rancher Metadata Service
|
||||
|
||||
```toml
|
||||
# Enable Rancher metadata service configuration backend instead of the API
|
||||
# configuration backend.
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
[rancher.metadata]
|
||||
|
||||
# Poll the Rancher metadata service for changes every `rancher.RefreshSeconds`.
|
||||
# NOTE: this is less accurate than the default long polling technique which
|
||||
# will provide near instantaneous updates to Traefik
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
intervalPoll = true
|
||||
|
||||
# Prefix used for accessing the Rancher metadata service.
|
||||
#
|
||||
# Optional
|
||||
# Default: "/latest"
|
||||
#
|
||||
prefix = "/2016-07-29"
|
||||
```
|
||||
|
||||
## Rancher API
|
||||
|
||||
```toml
|
||||
# Enable Rancher API configuration backend.
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
[rancher.api]
|
||||
|
||||
# Endpoint to use when connecting to the Rancher API.
|
||||
#
|
||||
# Required
|
||||
endpoint = "http://rancherserver.example.com/v1"
|
||||
|
||||
# AccessKey to use when connecting to the Rancher API.
|
||||
#
|
||||
# Required
|
||||
accessKey = "XXXXXXXXXXXXXXXXXXXX"
|
||||
|
||||
# SecretKey to use when connecting to the Rancher API.
|
||||
#
|
||||
# Required
|
||||
secretKey = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
|
||||
```
|
||||
|
||||
!!! note
|
||||
If Traefik needs access to the Rancher API, you need to set the `endpoint`, `accessKey` and `secretKey` parameters.
|
||||
|
||||
To allow Traefik to fetch information only about the Environment it's deployed in, you need to create an `Environment API Key`.
|
||||
This can be found within the API Key advanced options.
|
||||
|
||||
Add these labels to the Traefik Docker deployment to auto-generate these values:
|
||||
```
|
||||
io.rancher.container.agent.role: environment
|
||||
io.rancher.container.create_agent: true
|
||||
```
|
||||
|
||||
## Labels: overriding default behaviour
|
||||
|
||||
Labels can be used on task containers to override default behaviour:
|
||||
|
||||
| Label | Description |
|
||||
|-----------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|
|
||||
| `traefik.protocol=https` | Override the default `http` protocol |
|
||||
| `traefik.weight=10` | Assign this weight to the container |
|
||||
| `traefik.enable=false` | Disable this container in Træfik |
|
||||
| `traefik.frontend.rule=Host:test.traefik.io` | Override the default frontend rule (Default: `Host:{containerName}.{domain}`). |
|
||||
| `traefik.frontend.passHostHeader=true` | Forward client `Host` header to the backend. |
|
||||
| `traefik.frontend.priority=10` | Override default frontend priority |
|
||||
| `traefik.frontend.entryPoints=http,https` | Assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
|
||||
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash`. |
|
||||
| `traefik.frontend.redirect.entryPoint=https` | Enables Redirect to another entryPoint for that frontend (e.g. HTTPS) |
|
||||
| `traefik.frontend.redirect.regex: ^http://localhost/(.*)` | Redirect to another URL for that frontend.<br>Must be set with `traefik.frontend.redirect.replacement`. |
|
||||
| `traefik.frontend.redirect.replacement: http://mydomain/$1` | Redirect to another URL for that frontend.<br>Must be set with `traefik.frontend.redirect.regex`. |
|
||||
| `traefik.backend.circuitbreaker.expression=NetworkErrorRatio() > 0.5` | Create a [circuit breaker](/basics/#backends) to be used against the backend |
|
||||
| `traefik.backend.loadbalancer.method=drr` | Override the default `wrr` load balancer algorithm |
|
||||
| `traefik.backend.loadbalancer.stickiness=true` | Enable backend sticky sessions |
|
||||
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
|
||||
| `traefik.backend.loadbalancer.sticky=true` | Enable backend sticky sessions (DEPRECATED) |
|
||||
@@ -1,92 +0,0 @@
|
||||
# Rest Backend
|
||||
|
||||
Træfik can be configured:
|
||||
|
||||
- using a RESTful API.
|
||||
|
||||
## Configuration
|
||||
|
||||
```toml
|
||||
# Enable rest backend.
|
||||
[rest]
|
||||
# Name of the related entry point
|
||||
#
|
||||
# Optional
|
||||
# Default: "traefik"
|
||||
#
|
||||
entryPoint = "traefik"
|
||||
```
|
||||
|
||||
## API
|
||||
|
||||
| Path | Method | Description |
|
||||
|------------------------------|--------|-----------------|
|
||||
| `/api/providers/web` | `PUT` | update provider |
|
||||
| `/api/providers/rest` | `PUT` | update provider |
|
||||
|
||||
!!! warning
|
||||
For compatibility reasons, when you activate the rest provider, you can use either `web` or `rest` as the `provider` value.
|
||||
|
||||
|
||||
```shell
|
||||
curl -XPUT -d @file "http://localhost:8080/api/providers/rest"
|
||||
```
|
||||
|
||||
with `@file`:
|
||||
```json
|
||||
{
|
||||
"frontends": {
|
||||
"frontend2": {
|
||||
"routes": {
|
||||
"test_2": {
|
||||
"rule": "Path:/test"
|
||||
}
|
||||
},
|
||||
"backend": "backend1"
|
||||
},
|
||||
"frontend1": {
|
||||
"routes": {
|
||||
"test_1": {
|
||||
"rule": "Host:test.localhost"
|
||||
}
|
||||
},
|
||||
"backend": "backend2"
|
||||
}
|
||||
},
|
||||
"backends": {
|
||||
"backend2": {
|
||||
"loadBalancer": {
|
||||
"method": "drr"
|
||||
},
|
||||
"servers": {
|
||||
"server2": {
|
||||
"weight": 2,
|
||||
"URL": "http://172.17.0.5:80"
|
||||
},
|
||||
"server1": {
|
||||
"weight": 1,
|
||||
"url": "http://172.17.0.4:80"
|
||||
}
|
||||
}
|
||||
},
|
||||
"backend1": {
|
||||
"loadBalancer": {
|
||||
"method": "wrr"
|
||||
},
|
||||
"circuitBreaker": {
|
||||
"expression": "NetworkErrorRatio() > 0.5"
|
||||
},
|
||||
"servers": {
|
||||
"server2": {
|
||||
"weight": 1,
|
||||
"url": "http://172.17.0.3:80"
|
||||
},
|
||||
"server1": {
|
||||
"weight": 10,
|
||||
"url": "http://172.17.0.2:80"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
@@ -1,114 +0,0 @@
|
||||
# Azure Service Fabric Backend
|
||||
|
||||
Træfik can be configured to use Azure Service Fabric as a backend configuration.
|
||||
|
||||
See [this repository for an example deployment package and further documentation.](https://aka.ms/traefikonsf)
|
||||
|
||||
## Azure Service Fabric
|
||||
|
||||
```toml
|
||||
################################################################
|
||||
# Azure Service Fabric provider
|
||||
################################################################
|
||||
|
||||
# Enable Azure Service Fabric configuration backend
|
||||
[serviceFabric]
|
||||
|
||||
# Azure Service Fabric Management Endpoint
|
||||
#
|
||||
# Required
|
||||
#
|
||||
clusterManagementUrl = "https://localhost:19080"
|
||||
|
||||
# Azure Service Fabric Management Endpoint API Version
|
||||
#
|
||||
# Required
|
||||
# Default: "3.0"
|
||||
#
|
||||
apiVersion = "3.0"
|
||||
|
||||
# Azure Service Fabric Polling Interval (in seconds)
|
||||
#
|
||||
# Required
|
||||
# Default: 10
|
||||
#
|
||||
refreshSeconds = 10
|
||||
|
||||
# Enable TLS connection.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# [serviceFabric.tls]
|
||||
# ca = "/etc/ssl/ca.crt"
|
||||
# cert = "/etc/ssl/servicefabric.crt"
|
||||
# key = "/etc/ssl/servicefabric.key"
|
||||
# insecureskipverify = true
|
||||
```
|
||||
|
||||
## Labels
|
||||
|
||||
The provider uses labels to configure how services are exposed through Træfik.
|
||||
These can be set using Extensions and the Property Manager API.
|
||||
|
||||
#### Extensions
|
||||
|
||||
Set labels with extensions through the service's `ServiceManifest.xml` file.
|
||||
Here is an example of an extension setting Træfik labels:
|
||||
|
||||
```xml
|
||||
<StatelessServiceType ServiceTypeName="WebServiceType">
|
||||
<Extensions>
|
||||
<Extension Name="Traefik">
|
||||
<Labels xmlns="http://schemas.microsoft.com/2015/03/fabact-no-schema">
|
||||
<Label Key="traefik.frontend.rule.example2">PathPrefixStrip: /a/path/to/strip</Label>
|
||||
<Label Key="traefik.expose">true</Label>
|
||||
<Label Key="traefik.frontend.passHostHeader">true</Label>
|
||||
</Labels>
|
||||
</Extension>
|
||||
</Extensions>
|
||||
</StatelessServiceType>
|
||||
```
|
||||
|
||||
#### Property Manager
|
||||
|
||||
Set labels with the Property Manager API to overwrite and add labels while your service is running.
|
||||
Here is an example of adding a frontend rule using the property manager API.
|
||||
|
||||
```shell
|
||||
curl -X PUT \
|
||||
'http://localhost:19080/Names/GettingStartedApplication2/WebService/$/GetProperty?api-version=6.0&IncludeValues=true' \
|
||||
-d '{
|
||||
"PropertyName": "traefik.frontend.rule.default",
|
||||
"Value": {
|
||||
"Kind": "String",
|
||||
"Data": "PathPrefixStrip: /a/path/to/strip"
|
||||
},
|
||||
"CustomTypeId": "LabelType"
|
||||
}'
|
||||
```
|
||||
|
||||
!!! note
|
||||
This functionality will be released in a future version of the [sfctl](https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-application-lifecycle-sfctl) tool.
|
||||
|
||||
## Available Labels
|
||||
|
||||
Labels, set through extensions or the property manager, can be used on services to override default behaviour.
|
||||
|
||||
| Label | Description |
|
||||
|-----------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `traefik.backend.maxconn.amount=10` | Set a maximum number of connections to the backend.<br>Must be used in conjunction with the below label to take effect. |
|
||||
| `traefik.backend.maxconn.extractorfunc=client.ip` | Set the function to be used against the request to determine what to limit maximum connections to the backend by.<br>Must be used in conjunction with the above label to take effect. |
|
||||
| `traefik.backend.loadbalancer.method=drr` | Override the default `wrr` load balancer algorithm |
|
||||
| `traefik.backend.loadbalancer.stickiness=true` | Enable backend sticky sessions |
|
||||
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
|
||||
| `traefik.backend.circuitbreaker.expression=EXPR` | Create a [circuit breaker](/basics/#backends) to be used against the backend |
|
||||
| `traefik.backend.weight=10` | Assign this weight to the container |
|
||||
| `traefik.expose=true` | Expose this service using træfik |
|
||||
| `traefik.frontend.rule=EXPR` | Override the default frontend rule. Defaults to SF address. |
|
||||
| `traefik.frontend.passHostHeader=true` | Forward client `Host` header to the backend. |
|
||||
| `traefik.frontend.priority=10` | Override default frontend priority |
|
||||
| `traefik.frontend.entryPoints=http,https` | Assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints` |
|
||||
| `traefik.frontend.auth.basic=EXPR` | Set basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |
|
||||
| `traefik.frontend.whitelistSourceRange:RANGE` | List of IP-Ranges which are allowed to access. An unset or empty list allows all Source-IPs to access.<br>If one of the Net-Specifications are invalid, the whole list is invalid and allows all Source-IPs to access. |
|
||||
| `traefik.backend.group.name` | Group all services with the same name into a single backend in Træfik |
|
||||
| `traefik.backend.group.weight` | Set the weighting of the current services nodes in the backend group |
|
||||
@@ -1,482 +0,0 @@
|
||||
# Web Backend
|
||||
|
||||
!!! danger "DEPRECATED"
|
||||
The web provider is deprecated; please use the [api](/configuration/api.md), the [ping](/configuration/ping.md), the [metrics](/configuration/metrics) and the [rest](/configuration/backends/rest.md) providers instead.
|
||||
|
||||
Træfik can be configured:
|
||||
|
||||
- using a RESTful API.
|
||||
- to use a monitoring system (like Prometheus, DataDog or StatsD).
|
||||
- to expose a Web Dashboard.
|
||||
|
||||
## Configuration
|
||||
|
||||
```toml
|
||||
# Enable web backend.
|
||||
[web]
|
||||
|
||||
# Web administration port.
|
||||
#
|
||||
# Required
|
||||
# Default: ":8080"
|
||||
#
|
||||
address = ":8080"
|
||||
|
||||
# SSL certificate and key used.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# certFile = "traefik.crt"
|
||||
# keyFile = "traefik.key"
|
||||
|
||||
# Set REST API to read-only mode.
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
readOnly = true
|
||||
|
||||
# Set the root path for webui and API
|
||||
#
|
||||
# Deprecated
|
||||
# Optional
|
||||
#
|
||||
# path = "/mypath"
|
||||
#
|
||||
```
|
||||
|
||||
## Web UI
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
### Authentication
|
||||
|
||||
!!! note
|
||||
The `/ping` path of the API is excluded from authentication (since 1.4).
|
||||
|
||||
#### Basic Authentication
|
||||
|
||||
Passwords can be encoded in MD5, SHA1 and BCrypt: you can use `htpasswd` to generate them.
|
||||
|
||||
Users can be specified directly in the TOML file, or indirectly by referencing an external file;
|
||||
if both are provided, the two are merged, with external file contents having precedence.
|
||||
|
||||
```toml
|
||||
[web]
|
||||
# ...
|
||||
|
||||
# To enable basic auth on the webui with 2 user/pass: test:test and test2:test2
|
||||
[web.auth.basic]
|
||||
users = ["test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/", "test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0"]
|
||||
usersFile = "/path/to/.htpasswd"
|
||||
|
||||
# ...
|
||||
```
|
||||
|
||||
#### Digest Authentication
|
||||
|
||||
You can use `htdigest` to generate them.
|
||||
|
||||
Users can be specified directly in the TOML file, or indirectly by referencing an external file;
|
||||
if both are provided, the two are merged, with external file contents having precedence.
|
||||
|
||||
```toml
|
||||
[web]
|
||||
# ...
|
||||
|
||||
# To enable digest auth on the webui with 2 user/realm/pass: test:traefik:test and test2:traefik:test2
|
||||
[web.auth.digest]
|
||||
users = ["test:traefik:a2688e031edb4be6a3797f3882655c05", "test2:traefik:518845800f9e2bfb1f1f740ec24f074e"]
|
||||
usersFile = "/path/to/.htdigest"
|
||||
|
||||
# ...
|
||||
```
|
||||
|
||||
|
||||
## Metrics
|
||||
|
||||
You can enable Træfik to export internal metrics to different monitoring systems.
|
||||
|
||||
### Prometheus
|
||||
|
||||
```toml
|
||||
[web]
|
||||
# ...
|
||||
|
||||
# To enable Traefik to export internal metrics to Prometheus
|
||||
[web.metrics.prometheus]
|
||||
|
||||
# Buckets for latency metrics
|
||||
#
|
||||
# Optional
|
||||
# Default: [0.1, 0.3, 1.2, 5]
|
||||
buckets=[0.1,0.3,1.2,5.0]
|
||||
|
||||
# ...
|
||||
```
|
||||
|
||||
### DataDog
|
||||
|
||||
```toml
|
||||
[web]
|
||||
# ...
|
||||
|
||||
# DataDog metrics exporter type
|
||||
[web.metrics.datadog]
|
||||
|
||||
# DataDog's address.
|
||||
#
|
||||
# Required
|
||||
# Default: "localhost:8125"
|
||||
#
|
||||
address = "localhost:8125"
|
||||
|
||||
# DataDog push interval
|
||||
#
|
||||
# Optional
|
||||
# Default: "10s"
|
||||
#
|
||||
pushinterval = "10s"
|
||||
|
||||
# ...
|
||||
```
|
||||
|
||||
### StatsD
|
||||
|
||||
```toml
|
||||
[web]
|
||||
# ...
|
||||
|
||||
# StatsD metrics exporter type
|
||||
[web.metrics.statsd]
|
||||
|
||||
# StatsD's address.
|
||||
#
|
||||
# Required
|
||||
# Default: "localhost:8125"
|
||||
#
|
||||
address = "localhost:8125"
|
||||
|
||||
# StatsD push interval
|
||||
#
|
||||
# Optional
|
||||
# Default: "10s"
|
||||
#
|
||||
pushinterval = "10s"
|
||||
|
||||
# ...
|
||||
```
|
||||
|
||||
### InfluxDB
|
||||
|
||||
```toml
|
||||
[web]
|
||||
# ...
|
||||
|
||||
# InfluxDB metrics exporter type
|
||||
[web.metrics.influxdb]
|
||||
|
||||
# InfluxDB's address.
|
||||
#
|
||||
# Required
|
||||
# Default: "localhost:8089"
|
||||
#
|
||||
address = "localhost:8089"
|
||||
|
||||
# InfluxDB push interval
|
||||
#
|
||||
# Optional
|
||||
# Default: "10s"
|
||||
#
|
||||
pushinterval = "10s"
|
||||
|
||||
# ...
|
||||
```
|
||||
|
||||
## Statistics
|
||||
|
||||
```toml
|
||||
[web]
|
||||
# ...
|
||||
|
||||
# Enable more detailed statistics.
|
||||
[web.statistics]
|
||||
|
||||
# Number of recent errors logged.
|
||||
#
|
||||
# Default: 10
|
||||
#
|
||||
recentErrors = 10
|
||||
|
||||
# ...
|
||||
```
|
||||
|
||||
|
||||
## API
|
||||
|
||||
| Path | Method | Description |
|
||||
|-----------------------------------------------------------------|:-------------:|----------------------------------------------------------------------------------------------------|
|
||||
| `/` | `GET` | Provides a simple HTML frontend of Træfik |
|
||||
| `/ping` | `GET`, `HEAD` | A simple endpoint to check for Træfik process liveness. Return a code `200` with the content: `OK` |
|
||||
| `/health` | `GET` | JSON health metrics |
|
||||
| `/api` | `GET` | Configuration for all providers |
|
||||
| `/api/providers` | `GET` | Providers |
|
||||
| `/api/providers/{provider}` | `GET`, `PUT` | Get or update provider |
|
||||
| `/api/providers/{provider}/backends` | `GET` | List backends |
|
||||
| `/api/providers/{provider}/backends/{backend}` | `GET` | Get backend |
|
||||
| `/api/providers/{provider}/backends/{backend}/servers` | `GET` | List servers in backend |
|
||||
| `/api/providers/{provider}/backends/{backend}/servers/{server}` | `GET` | Get a server in a backend |
|
||||
| `/api/providers/{provider}/frontends` | `GET` | List frontends |
|
||||
| `/api/providers/{provider}/frontends/{frontend}` | `GET` | Get a frontend |
|
||||
| `/api/providers/{provider}/frontends/{frontend}/routes` | `GET` | List routes in a frontend |
|
||||
| `/api/providers/{provider}/frontends/{frontend}/routes/{route}` | `GET` | Get a route in a frontend |
|
||||
| `/metrics` | `GET` | Export internal metrics |
|
||||
|
||||
### Example
|
||||
|
||||
#### Ping
|
||||
|
||||
```shell
|
||||
curl -sv "http://localhost:8080/ping"
|
||||
```
|
||||
```shell
|
||||
* Trying ::1...
|
||||
* Connected to localhost (::1) port 8080 (#0)
|
||||
> GET /ping HTTP/1.1
|
||||
> Host: localhost:8080
|
||||
> User-Agent: curl/7.43.0
|
||||
> Accept: */*
|
||||
>
|
||||
< HTTP/1.1 200 OK
|
||||
< Date: Thu, 25 Aug 2016 01:35:36 GMT
|
||||
< Content-Length: 2
|
||||
< Content-Type: text/plain; charset=utf-8
|
||||
<
|
||||
* Connection #0 to host localhost left intact
|
||||
OK
|
||||
```
|
||||
|
||||
#### Health
|
||||
|
||||
```shell
|
||||
curl -s "http://localhost:8080/health" | jq .
|
||||
```
|
||||
```json
|
||||
{
|
||||
// Træfik PID
|
||||
"pid": 2458,
|
||||
// Træfik server uptime (formatted time)
|
||||
"uptime": "39m6.885931127s",
|
||||
// Træfik server uptime in seconds
|
||||
"uptime_sec": 2346.885931127,
|
||||
// current server date
|
||||
"time": "2015-10-07 18:32:24.362238909 +0200 CEST",
|
||||
// current server date in seconds
|
||||
"unixtime": 1444235544,
|
||||
// count HTTP response status code in realtime
|
||||
"status_code_count": {
|
||||
"502": 1
|
||||
},
|
||||
// count HTTP response status code since Træfik started
|
||||
"total_status_code_count": {
|
||||
"200": 7,
|
||||
"404": 21,
|
||||
"502": 13
|
||||
},
|
||||
// count of HTTP responses in realtime
|
||||
"count": 1,
|
||||
// count of HTTP responses since Træfik started
|
||||
"total_count": 41,
|
||||
// sum of all response times (formatted time)
|
||||
"total_response_time": "35.456865605s",
|
||||
// sum of all response times in seconds
|
||||
"total_response_time_sec": 35.456865605,
|
||||
// average response time (formatted time)
|
||||
"average_response_time": "864.8016ms",
|
||||
// average response time in seconds
|
||||
"average_response_time_sec": 0.8648016000000001,
|
||||
|
||||
// request statistics [requires --web.statistics to be set]
|
||||
// ten most recent requests with 4xx and 5xx status codes
|
||||
"recent_errors": [
|
||||
{
|
||||
// status code
|
||||
"status_code": 500,
|
||||
// description of status code
|
||||
"status": "Internal Server Error",
|
||||
// request HTTP method
|
||||
"method": "GET",
|
||||
// request host name
|
||||
"host": "localhost",
|
||||
// request path
|
||||
"path": "/path",
|
||||
// RFC 3339 formatted date/time
|
||||
"time": "2016-10-21T16:59:15.418495872-07:00"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### Provider configurations
|
||||
|
||||
```shell
|
||||
curl -s "http://localhost:8080/api" | jq .
|
||||
```
|
||||
```json
|
||||
{
|
||||
"file": {
|
||||
"frontends": {
|
||||
"frontend2": {
|
||||
"routes": {
|
||||
"test_2": {
|
||||
"rule": "Path:/test"
|
||||
}
|
||||
},
|
||||
"backend": "backend1"
|
||||
},
|
||||
"frontend1": {
|
||||
"routes": {
|
||||
"test_1": {
|
||||
"rule": "Host:test.localhost"
|
||||
}
|
||||
},
|
||||
"backend": "backend2"
|
||||
}
|
||||
},
|
||||
"backends": {
|
||||
"backend2": {
|
||||
"loadBalancer": {
|
||||
"method": "drr"
|
||||
},
|
||||
"servers": {
|
||||
"server2": {
|
||||
"weight": 2,
|
||||
"URL": "http://172.17.0.5:80"
|
||||
},
|
||||
"server1": {
|
||||
"weight": 1,
|
||||
"url": "http://172.17.0.4:80"
|
||||
}
|
||||
}
|
||||
},
|
||||
"backend1": {
|
||||
"loadBalancer": {
|
||||
"method": "wrr"
|
||||
},
|
||||
"circuitBreaker": {
|
||||
"expression": "NetworkErrorRatio() > 0.5"
|
||||
},
|
||||
"servers": {
|
||||
"server2": {
|
||||
"weight": 1,
|
||||
"url": "http://172.17.0.3:80"
|
||||
},
|
||||
"server1": {
|
||||
"weight": 10,
|
||||
"url": "http://172.17.0.2:80"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Deprecation compatibility
|
||||
|
||||
#### Address
|
||||
|
||||
As the web provider is deprecated, you can handle the `Address` option like this:
|
||||
|
||||
```toml
|
||||
defaultEntryPoints = ["http"]
|
||||
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
|
||||
[entryPoints.foo]
|
||||
address = ":8082"
|
||||
|
||||
[entryPoints.bar]
|
||||
address = ":8083"
|
||||
|
||||
[ping]
|
||||
entryPoint = "foo"
|
||||
|
||||
[api]
|
||||
entryPoint = "bar"
|
||||
```
|
||||
|
||||
In the above example, you would access a regular path, administration panel, and health-check as follows:
|
||||
|
||||
* Regular path: `http://hostname:80/path`
|
||||
* Admin Panel: `http://hostname:8083/`
|
||||
* Ping URL: `http://hostname:8082/ping`
|
||||
|
||||
In the above example, it is _very_ important to create a named dedicated entry point, and do **not** include it in `defaultEntryPoints`.
|
||||
Otherwise, you are likely to expose _all_ services via that entry point.
|
||||
|
||||
#### Path
|
||||
|
||||
As the web provider is deprecated, you can handle the `Path` option like this:
|
||||
|
||||
```toml
|
||||
defaultEntryPoints = ["http"]
|
||||
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
|
||||
[entryPoints.foo]
|
||||
address = ":8080"
|
||||
|
||||
[entryPoints.bar]
|
||||
address = ":8081"
|
||||
|
||||
# Activate API and Dashboard
|
||||
[api]
|
||||
entryPoint = "bar"
|
||||
dashboard = true
|
||||
|
||||
[file]
|
||||
[backends]
|
||||
[backends.backend1]
|
||||
[backends.backend1.servers.server1]
|
||||
url = "http://127.0.0.1:8081"
|
||||
|
||||
[frontends]
|
||||
[frontends.frontend1]
|
||||
entryPoints = ["foo"]
|
||||
backend = "backend1"
|
||||
[frontends.frontend1.routes.test_1]
|
||||
rule = "PathPrefixStrip:/yourprefix;PathPrefix:/yourprefix"
|
||||
```
|
||||
|
||||
#### Authentication
|
||||
|
||||
As the web provider is deprecated, you can handle the `auth` option like this:
|
||||
|
||||
```toml
|
||||
defaultEntryPoints = ["http"]
|
||||
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
|
||||
[entryPoints.foo]
|
||||
address=":8080"
|
||||
[entryPoints.foo.auth]
|
||||
[entryPoints.foo.auth.basic]
|
||||
users = [
|
||||
"test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/",
|
||||
"test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0",
|
||||
]
|
||||
|
||||
[api]
|
||||
entrypoint="foo"
|
||||
```
|
||||
|
||||
For more information, see [entry points](/configuration/entrypoints/).
|
||||
@@ -1,61 +0,0 @@
|
||||
# Zookeeper Backend
|
||||
|
||||
Træfik can be configured to use Zookeeper as a backend configuration.
|
||||
|
||||
```toml
|
||||
################################################################
|
||||
# Zookeeper configuration backend
|
||||
################################################################
|
||||
|
||||
# Enable Zookeeper configuration backend.
|
||||
[zookeeper]
|
||||
|
||||
# Zookeeper server endpoint.
|
||||
#
|
||||
# Required
|
||||
# Default: "127.0.0.1:2181"
|
||||
#
|
||||
endpoint = "127.0.0.1:2181"
|
||||
|
||||
# Enable watching Zookeeper changes.
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
watch = true
|
||||
|
||||
# Prefix used for KV store.
|
||||
#
|
||||
# Optional
|
||||
# Default: "traefik"
|
||||
#
|
||||
prefix = "traefik"
|
||||
|
||||
# Override default configuration template.
|
||||
# For advanced users :)
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# filename = "zookeeper.tmpl"
|
||||
|
||||
# Use Zookeeper user/pass authentication.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# username = "foo"
|
||||
# password = "bar"
|
||||
|
||||
# Enable Zookeeper TLS connection.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
# [zookeeper.tls]
|
||||
# ca = "/etc/ssl/ca.crt"
|
||||
# cert = "/etc/ssl/zookeeper.crt"
|
||||
# key = "/etc/ssl/zookeeper.key"
|
||||
# insecureskipverify = true
|
||||
```
|
||||
|
||||
To enable constraints, see the [backend-specific constraints section](/configuration/commons/#backend-specific).
|
||||
|
||||
Please refer to the [Key Value storage structure](/user-guide/kv-config/#key-value-storage-structure) section for documentation on the Traefik KV structure.
|
||||
@@ -1,528 +0,0 @@
|
||||
# Global Configuration
|
||||
|
||||
## Main Section
|
||||
|
||||
```toml
|
||||
# DEPRECATED - for general usage instructions, see [lifeCycle.graceTimeOut].
|
||||
#
|
||||
# If both the deprecated option and the new one are given, the deprecated one
|
||||
# takes precedence.
|
||||
# A value of zero is equivalent to omitting the parameter, causing
|
||||
# [lifeCycle.graceTimeOut] to be effective. Pass zero to the new option in
|
||||
# order to disable the grace period.
|
||||
#
|
||||
# Optional
|
||||
# Default: "0s"
|
||||
#
|
||||
# graceTimeOut = "10s"
|
||||
|
||||
# Enable debug mode.
|
||||
# This will install HTTP handlers to expose Go expvars under /debug/vars and
|
||||
# pprof profiling data under /debug/pprof.
|
||||
# Additionally, the log level will be set to DEBUG.
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
# debug = true
|
||||
|
||||
# Periodically check if a new version has been released.
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
#
|
||||
# checkNewVersion = false
|
||||
|
||||
# Backends throttle duration.
|
||||
#
|
||||
# Optional
|
||||
# Default: "2s"
|
||||
#
|
||||
# ProvidersThrottleDuration = "2s"
|
||||
|
||||
# Controls the maximum idle (keep-alive) connections to keep per-host.
|
||||
#
|
||||
# Optional
|
||||
# Default: 200
|
||||
#
|
||||
# MaxIdleConnsPerHost = 200
|
||||
|
||||
# If set to true invalid SSL certificates are accepted for backends.
|
||||
# This disables detection of man-in-the-middle attacks so should only be used on secure backend networks.
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
# InsecureSkipVerify = true
|
||||
|
||||
# Register Certificates in the RootCA.
|
||||
#
|
||||
# Optional
|
||||
# Default: []
|
||||
#
|
||||
# RootCAs = [ "/mycert.cert" ]
|
||||
|
||||
# Entrypoints to be used by frontends that do not specify any entrypoint.
|
||||
# Each frontend can specify its own entrypoints.
|
||||
#
|
||||
# Optional
|
||||
# Default: ["http"]
|
||||
#
|
||||
# defaultEntryPoints = ["http", "https"]
|
||||
```
|
||||
|
||||
- `graceTimeOut`: Duration to give active requests a chance to finish before Traefik stops.
|
||||
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
|
||||
If no units are provided, the value is parsed assuming seconds.
|
||||
**Note:** in this time frame no new requests are accepted.
|
||||
|
||||
- `ProvidersThrottleDuration`: Backends throttle duration: minimum duration in seconds between 2 events from providers before applying a new configuration.
|
||||
It avoids unnecessary reloads if multiple events are sent in a short amount of time.
|
||||
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
|
||||
If no units are provided, the value is parsed assuming seconds.
|
||||
|
||||
- `MaxIdleConnsPerHost`: Controls the maximum idle (keep-alive) connections to keep per-host.
|
||||
If zero, `DefaultMaxIdleConnsPerHost` from the Go standard library `net/http` package is used.
|
||||
If you encounter 'too many open files' errors, you can either increase this value or change the `ulimit`.
|
||||
|
||||
- `InsecureSkipVerify`: If set to true, invalid SSL certificates are accepted for backends.
|
||||
**Note:** This disables detection of man-in-the-middle attacks, so it should only be used on secure backend networks.
|
||||
|
||||
- `RootCAs`: Register Certificates in the RootCA. These certificates will be used for backend calls.
|
||||
**Note:** You can use a file path or the cert content directly.
|
||||
|
||||
- `defaultEntryPoints`: Entrypoints to be used by frontends that do not specify any entrypoint.
|
||||
Each frontend can specify its own entrypoints.
|
||||
|
||||
|
||||
## Constraints
|
||||
|
||||
In a micro-service architecture, with a central service discovery, setting constraints limits Træfik's scope to a smaller number of routes.
|
||||
|
||||
Træfik filters services according to service attributes/tags set in your configuration backends.
|
||||
|
||||
Supported filters:
|
||||
|
||||
- `tag`
|
||||
|
||||
### Simple
|
||||
|
||||
```toml
|
||||
# Simple matching constraint
|
||||
constraints = ["tag==api"]
|
||||
|
||||
# Simple mismatching constraint
|
||||
constraints = ["tag!=api"]
|
||||
|
||||
# Globbing
|
||||
constraints = ["tag==us-*"]
|
||||
```
|
||||
|
||||
### Multiple
|
||||
|
||||
```toml
|
||||
# Multiple constraints
|
||||
# - "tag==" must match with at least one tag
|
||||
# - "tag!=" must match with none of tags
|
||||
constraints = ["tag!=us-*", "tag!=asia-*"]
|
||||
```
|
||||
|
||||
### Backend-specific
|
||||
|
||||
Supported backends:
|
||||
|
||||
- Docker
|
||||
- Consul K/V
|
||||
- BoltDB
|
||||
- Zookeeper
|
||||
- Etcd
|
||||
- Consul Catalog
|
||||
- Rancher
|
||||
- Marathon
|
||||
- Kubernetes (using a provider-specific mechanism based on label selectors)
|
||||
|
||||
```toml
|
||||
# Backend-specific constraint
|
||||
[consulCatalog]
|
||||
# ...
|
||||
constraints = ["tag==api"]
|
||||
|
||||
# Backend-specific constraint
|
||||
[marathon]
|
||||
# ...
|
||||
constraints = ["tag==api", "tag!=v*-beta"]
|
||||
```
|
||||
|
||||
|
||||
## Logs Definition
|
||||
|
||||
### Traefik logs
|
||||
|
||||
```toml
|
||||
# Traefik logs file
|
||||
# If not defined, logs to stdout
|
||||
#
|
||||
# DEPRECATED - see the [traefikLog] section below.
|
||||
# In case both traefikLogsFile and traefikLog.filePath are specified, the latter will take precedence.
|
||||
# Optional
|
||||
#
|
||||
traefikLogsFile = "log/traefik.log"
|
||||
|
||||
# Log level
|
||||
#
|
||||
# Optional
|
||||
# Default: "ERROR"
|
||||
#
|
||||
# Accepted values, in order of severity: "DEBUG", "INFO", "WARN", "ERROR", "FATAL", "PANIC"
|
||||
# Messages at and above the selected level will be logged.
|
||||
#
|
||||
logLevel = "ERROR"
|
||||
```
|
||||
|
||||
## Traefik Logs
|
||||
|
||||
By default the Traefik log is written to stdout in text format.
|
||||
|
||||
To write the logs into a logfile specify the `filePath`.
|
||||
```toml
|
||||
[traefikLog]
|
||||
filePath = "/path/to/traefik.log"
|
||||
```
|
||||
|
||||
To write JSON format logs, specify `json` as the format:
|
||||
```toml
|
||||
[traefikLog]
|
||||
filePath = "/path/to/traefik.log"
|
||||
format = "json"
|
||||
```
|
||||
|
||||
### Access Logs
|
||||
|
||||
Access logs are written when `[accessLog]` is defined.
|
||||
By default it will write to stdout and produce logs in the textual Common Log Format (CLF), extended with additional fields.
|
||||
|
||||
To enable access logs using the default settings just add the `[accessLog]` entry.
|
||||
```toml
|
||||
[accessLog]
|
||||
```
|
||||
|
||||
To write the logs into a logfile specify the `filePath`.
|
||||
```toml
|
||||
[accessLog]
|
||||
filePath = "/path/to/access.log"
|
||||
```
|
||||
|
||||
To write JSON format logs, specify `json` as the format:
|
||||
```toml
|
||||
[accessLog]
|
||||
filePath = "/path/to/access.log"
|
||||
format = "json"
|
||||
```
|
||||
|
||||
Deprecated way (before 1.4):
|
||||
```toml
|
||||
# Access logs file
|
||||
#
|
||||
# DEPRECATED - see the [accessLog] section.
|
||||
#
|
||||
accessLogsFile = "log/access.log"
|
||||
```
|
||||
|
||||
### Log Rotation
|
||||
|
||||
Traefik will close and reopen its log files, assuming they're configured, on receipt of a USR1 signal.
|
||||
This allows the logs to be rotated and processed by an external program, such as `logrotate`.
|
||||
|
||||
!!! note
|
||||
This does not work on Windows due to the lack of USR signals.
|
||||
|
||||
|
||||
## Custom Error pages
|
||||
|
||||
Custom error pages can be returned, in lieu of the default, according to frontend-configured ranges of HTTP Status codes.
|
||||
|
||||
In the example below, if a 503 status is returned from the frontend "website", the custom error page at http://2.3.4.5/503.html is returned with the actual status code set in the HTTP header.
|
||||
|
||||
!!! note
|
||||
The `503.html` page itself is not hosted on Traefik, but some other infrastructure.
|
||||
|
||||
```toml
|
||||
[frontends]
|
||||
[frontends.website]
|
||||
backend = "website"
|
||||
[frontends.website.errors]
|
||||
[frontends.website.errors.network]
|
||||
status = ["500-599"]
|
||||
backend = "error"
|
||||
query = "/{status}.html"
|
||||
[frontends.website.routes.website]
|
||||
rule = "Host: website.mydomain.com"
|
||||
|
||||
[backends]
|
||||
[backends.website]
|
||||
[backends.website.servers.website]
|
||||
url = "https://1.2.3.4"
|
||||
[backends.error]
|
||||
[backends.error.servers.error]
|
||||
url = "http://2.3.4.5"
|
||||
```
|
||||
|
||||
In the above example, the error page rendered was based on the status code.
|
||||
Instead, the query parameter can also be set to some generic error page like so: `query = "/500s.html"`
|
||||
|
||||
Now the `500s.html` error page is returned for the configured code range.
|
||||
The configured status code ranges are inclusive; that is, in the above example, the `500s.html` page will be returned for status codes `500` through, and including, `599`.
|
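A minimal sketch of that generic-page variant, reusing the frontend and the `error` backend from the example above:

```toml
[frontends.website.errors]
  [frontends.website.errors.network]
    status = ["500-599"]
    backend = "error"
    # One generic page for the whole 500-599 range instead of /{status}.html
    query = "/500s.html"
```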
||||
|
||||
Custom error pages are easiest to implement using the file provider.
|
||||
For dynamic providers, the corresponding template file needs to be customized accordingly and referenced in the Traefik configuration.
|
||||
|
||||
|
||||
## Rate limiting
|
||||
|
||||
Rate limiting can be configured per frontend.
|
||||
Multiple sets of rates can be added to each frontend, but the time periods must be unique.
|
||||
|
||||
```toml
|
||||
[frontends]
|
||||
[frontends.frontend1]
|
||||
# ...
|
||||
[frontends.frontend1.ratelimit]
|
||||
extractorfunc = "client.ip"
|
||||
[frontends.frontend1.ratelimit.rateset.rateset1]
|
||||
period = "10s"
|
||||
average = 100
|
||||
burst = 200
|
||||
[frontends.frontend1.ratelimit.rateset.rateset2]
|
||||
period = "3s"
|
||||
average = 5
|
||||
burst = 10
|
||||
```
|
||||
|
||||
In the above example, frontend1 is configured to limit requests by the client's IP address.
|
||||
An average of 5 requests every 3 seconds is allowed and an average of 100 requests every 10 seconds.
|
||||
These can "burst" up to 10 and 200 in each period respectively.
|
||||
|
||||
|
||||
## Retry Configuration
|
||||
|
||||
```toml
|
||||
# Enable retrying sending the request if a network error occurs
|
||||
[retry]
|
||||
|
||||
# Number of attempts
|
||||
#
|
||||
# Optional
|
||||
# Default: (number of servers in backend) - 1
|
||||
#
|
||||
# attempts = 3
|
||||
```
|
||||
|
||||
|
||||
## Health Check Configuration
|
||||
|
||||
```toml
|
||||
# Enable custom health check options.
|
||||
[healthcheck]
|
||||
|
||||
# Set the default health check interval.
|
||||
#
|
||||
# Optional
|
||||
# Default: "30s"
|
||||
#
|
||||
# interval = "30s"
|
||||
```
|
||||
|
||||
- `interval` sets the default health check interval.
|
||||
Will only be effective if health check paths are defined.
|
||||
Given provider-specific support, the value may be overridden on a per-backend basis.
|
||||
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
|
||||
If no units are provided, the value is parsed assuming seconds.
|
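As an illustration of a per-backend override, here is a hedged sketch using the file provider (the `/health` path, the 10s interval and the server URL are assumptions):

```toml
[backends]
  [backends.backend1]
    # Per-backend health check; overrides the global [healthcheck] interval.
    [backends.backend1.healthcheck]
      path = "/health"
      interval = "10s"
    [backends.backend1.servers.server1]
      url = "http://172.17.0.2:80"
```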
||||
|
||||
## Life Cycle
|
||||
|
||||
Controls the behavior of Traefik during the shutdown phase.
|
||||
|
||||
```toml
|
||||
[lifeCycle]
|
||||
|
||||
# Duration to keep accepting requests prior to initiating the graceful
|
||||
# termination period (as defined by the `graceTimeOut` option). This
|
||||
# option is meant to give downstream load-balancers sufficient time to
|
||||
# take Traefik out of rotation.
|
||||
# Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
|
||||
# If no units are provided, the value is parsed assuming seconds.
|
||||
# The zero duration disables the request accepting grace period, i.e.,
|
||||
# Traefik will immediately proceed to the grace period.
|
||||
#
|
||||
# Optional
|
||||
# Default: 0
|
||||
#
|
||||
# requestAcceptGraceTimeout = "10s"
|
||||
|
||||
# Duration to give active requests a chance to finish before Traefik stops.
|
||||
# Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
|
||||
# If no units are provided, the value is parsed assuming seconds.
|
||||
# Note: in this time frame no new requests are accepted.
|
||||
#
|
||||
# Optional
|
||||
# Default: "10s"
|
||||
#
|
||||
# graceTimeOut = "10s"
|
||||
```
|
||||
|
||||
## Timeouts
|
||||
|
||||
### Responding Timeouts
|
||||
|
||||
`respondingTimeouts` are timeouts for incoming requests to the Traefik instance.
|
||||
|
||||
```toml
|
||||
[respondingTimeouts]
|
||||
|
||||
# readTimeout is the maximum duration for reading the entire request, including the body.
|
||||
#
|
||||
# Optional
|
||||
# Default: "0s"
|
||||
#
|
||||
# readTimeout = "5s"
|
||||
|
||||
# writeTimeout is the maximum duration before timing out writes of the response.
|
||||
#
|
||||
# Optional
|
||||
# Default: "0s"
|
||||
#
|
||||
# writeTimeout = "5s"
|
||||
|
||||
# idleTimeout is the maximum duration an idle (keep-alive) connection will remain idle before closing itself.
|
||||
#
|
||||
# Optional
|
||||
# Default: "180s"
|
||||
#
|
||||
# idleTimeout = "360s"
|
||||
```
|
||||
|
||||
- `readTimeout` is the maximum duration for reading the entire request, including the body.
|
||||
If zero, no timeout exists.
|
||||
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
|
||||
If no units are provided, the value is parsed assuming seconds.
|
||||
|
||||
- `writeTimeout` is the maximum duration before timing out writes of the response.
|
||||
It covers the time from the end of the request header read to the end of the response write.
|
||||
If zero, no timeout exists.
|
||||
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
|
||||
If no units are provided, the value is parsed assuming seconds.
|
||||
|
||||
- `idleTimeout` is the maximum duration an idle (keep-alive) connection will remain idle before closing itself.
|
||||
If zero, no timeout exists.
|
||||
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
|
||||
If no units are provided, the value is parsed assuming seconds.
|
||||
|
||||
### Forwarding Timeouts
|
||||
|
||||
`forwardingTimeouts` are timeouts for requests forwarded to the backend servers.
|
||||
|
||||
```toml
|
||||
[forwardingTimeouts]
|
||||
|
||||
# dialTimeout is the amount of time to wait until a connection to a backend server can be established.
|
||||
#
|
||||
# Optional
|
||||
# Default: "30s"
|
||||
#
|
||||
# dialTimeout = "30s"
|
||||
|
||||
# responseHeaderTimeout is the amount of time to wait for a server's response headers after fully writing the request (including its body, if any).
|
||||
#
|
||||
# Optional
|
||||
# Default: "0s"
|
||||
#
|
||||
# responseHeaderTimeout = "0s"
|
||||
```
|
||||
|
||||
- `dialTimeout` is the amount of time to wait until a connection to a backend server can be established.
|
||||
If zero, no timeout exists.
|
||||
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
|
||||
If no units are provided, the value is parsed assuming seconds.
|
||||
|
||||
- `responseHeaderTimeout` is the amount of time to wait for a server's response headers after fully writing the request (including its body, if any).
|
||||
If zero, no timeout exists.
|
||||
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
|
||||
If no units are provided, the value is parsed assuming seconds.
|
||||
|
||||
|
||||
### Idle Timeout (deprecated)
|
||||
|
||||
Use [respondingTimeouts](/configuration/commons/#responding-timeouts) instead of `IdleTimeout`.
|
||||
In the case both settings are configured, the deprecated option will be overwritten.
|
||||
|
||||
`IdleTimeout` is the maximum amount of time an idle (keep-alive) connection will remain idle before closing itself.
|
||||
This is set to enforce closing of stale client connections.
|
||||
|
||||
Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw values (digits).
|
||||
If no units are provided, the value is parsed assuming seconds.
|
||||
|
||||
```toml
|
||||
# IdleTimeout
|
||||
#
|
||||
# DEPRECATED - see [respondingTimeouts] section.
|
||||
#
|
||||
# Optional
|
||||
# Default: "180s"
|
||||
#
|
||||
IdleTimeout = "360s"
|
||||
```
|
||||
|
||||
|
||||
## Override Default Configuration Template
|
||||
|
||||
!!! warning
|
||||
For advanced users only.
|
||||
|
||||
Supported by all backends except: File backend, Web backend and DynamoDB backend.
|
||||
|
||||
```toml
|
||||
[backend_name]
|
||||
|
||||
# Override default configuration template. For advanced users :)
|
||||
#
|
||||
# Optional
|
||||
# Default: ""
|
||||
#
|
||||
filename = "custom_config_template.tpml"
|
||||
|
||||
# Enable debug logging of generated configuration template.
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
debugLogGeneratedTemplate = true
|
||||
```
|
||||
|
||||
Example:
|
||||
|
||||
```toml
|
||||
[marathon]
|
||||
filename = "my_custom_config_template.tpml"
|
||||
```
|
||||
|
||||
The template files can be written using functions provided by:
|
||||
|
||||
- [go template](https://golang.org/pkg/text/template/)
|
||||
- [sprig library](https://masterminds.github.io/sprig/)
|
||||
|
||||
Example:
|
||||
|
||||
```tmpl
|
||||
[backends]
|
||||
[backends.backend1]
|
||||
url = "http://firstserver"
|
||||
[backends.backend2]
|
||||
url = "http://secondserver"
|
||||
|
||||
{{$frontends := dict "frontend1" "backend1" "frontend2" "backend2"}}
|
||||
[frontends]
|
||||
{{range $frontend, $backend := $frontends}}
|
||||
[frontends.{{$frontend}}]
|
||||
backend = "{{$backend}}"
|
||||
{{end}}
|
||||
```
|
||||
@@ -1,402 +0,0 @@
|
||||
# Entry Points Definition
|
||||
|
||||
## Reference
|
||||
|
||||
### TOML
|
||||
|
||||
```toml
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
whitelistSourceRange = ["10.42.0.0/16", "152.89.1.33/32", "afed:be44::/16"]
|
||||
compress = true
|
||||
|
||||
[entryPoints.http.tls]
|
||||
minVersion = "VersionTLS12"
|
||||
cipherSuites = [
|
||||
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
|
||||
"TLS_RSA_WITH_AES_256_GCM_SHA384"
|
||||
]
|
||||
[[entryPoints.http.tls.certificates]]
|
||||
certFile = "path/to/my.cert"
|
||||
keyFile = "path/to/my.key"
|
||||
[[entryPoints.http.tls.certificates]]
|
||||
certFile = "path/to/other.cert"
|
||||
keyFile = "path/to/other.key"
|
||||
# ...
|
||||
[entryPoints.http.tls.clientCA]
|
||||
files = ["path/to/ca1.crt", "path/to/ca2.crt"]
|
||||
optional = false
|
||||
|
||||
[entryPoints.http.redirect]
|
||||
entryPoint = "https"
|
||||
regex = "^http://localhost/(.*)"
|
||||
replacement = "http://mydomain/$1"
|
||||
|
||||
[entryPoints.http.auth]
|
||||
headerField = "X-WebAuth-User"
|
||||
[entryPoints.http.auth.basic]
|
||||
users = [
|
||||
"test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/",
|
||||
"test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0",
|
||||
]
|
||||
usersFile = "/path/to/.htpasswd"
|
||||
[entryPoints.http.auth.digest]
|
||||
users = [
|
||||
"test:traefik:a2688e031edb4be6a3797f3882655c05",
|
||||
"test2:traefik:518845800f9e2bfb1f1f740ec24f074e",
|
||||
]
|
||||
usersFile = "/path/to/.htdigest"
|
||||
[entryPoints.http.auth.forward]
|
||||
address = "https://authserver.com/auth"
|
||||
trustForwardHeader = true
|
||||
[entryPoints.http.auth.forward.tls]
|
||||
ca = [ "path/to/local.crt"]
|
||||
caOptional = true
|
||||
cert = "path/to/foo.cert"
|
||||
key = "path/to/foo.key"
|
||||
insecureSkipVerify = true
|
||||
|
||||
[entryPoints.http.proxyProtocol]
|
||||
insecure = true
|
||||
trustedIPs = ["10.10.10.1", "10.10.10.2"]
|
||||
|
||||
[entryPoints.http.forwardedHeaders]
|
||||
trustedIPs = ["10.10.10.1", "10.10.10.2"]
|
||||
|
||||
[entryPoints.https]
|
||||
# ...
|
||||
```
|
||||
|
||||
### CLI
|
||||
|
||||
For more information about the CLI, see the documentation about [Traefik command](/basics/#traefik).
|
||||
|
||||
```shell
|
||||
--entryPoints='Name:http Address::80'
|
||||
--entryPoints='Name:https Address::443 TLS'
|
||||
```
|
||||
|
||||
!!! note
|
||||
Whitespace is used as option separator and `,` is used as value separator for the list.
|
||||
The names of the options are case-insensitive.
|
||||
|
||||
In a compose file, the entrypoint syntax is different:
|
||||
|
||||
```yaml
|
||||
traefik:
|
||||
image: traefik
|
||||
command:
|
||||
- --defaultentrypoints=powpow
|
||||
- "--entryPoints=Name:powpow Address::42 Compress:true"
|
||||
```
|
||||
or
|
||||
```yaml
|
||||
traefik:
|
||||
image: traefik
|
||||
command: --defaultentrypoints=powpow --entryPoints='Name:powpow Address::42 Compress:true'
|
||||
```
|
||||
|
||||
#### All available options:
|
||||
|
||||
```ini
|
||||
Name:foo
|
||||
Address::80
|
||||
TLS:goo,gii
|
||||
TLS
|
||||
CA:car
|
||||
CA.Optional:true
|
||||
Redirect.EntryPoint:https
|
||||
Redirect.Regex:http://localhost/(.*)
|
||||
Redirect.Replacement:http://mydomain/$1
|
||||
Compress:true
|
||||
WhiteListSourceRange:10.42.0.0/16,152.89.1.33/32,afed:be44::/16
|
||||
ProxyProtocol.TrustedIPs:192.168.0.1
|
||||
ProxyProtocol.Insecure:true
|
||||
ForwardedHeaders.TrustedIPs:10.0.0.3/24,20.0.0.3/24
|
||||
```
|
||||
|
||||
## Basic
|
||||
|
||||
```toml
|
||||
# Entrypoints definition
|
||||
#
|
||||
# Default:
|
||||
# [entryPoints]
|
||||
# [entryPoints.http]
|
||||
# address = ":80"
|
||||
#
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
```
|
||||
|
||||
## Redirect HTTP to HTTPS
|
||||
|
||||
To redirect an http entrypoint to an https entrypoint (with SNI support).
|
||||
|
||||
```toml
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
[entryPoints.http.redirect]
|
||||
entryPoint = "https"
|
||||
[entryPoints.https]
|
||||
address = ":443"
|
||||
[entryPoints.https.tls]
|
||||
[[entryPoints.https.tls.certificates]]
|
||||
certFile = "integration/fixtures/https/snitest.com.cert"
|
||||
keyFile = "integration/fixtures/https/snitest.com.key"
|
||||
[[entryPoints.https.tls.certificates]]
|
||||
certFile = "integration/fixtures/https/snitest.org.cert"
|
||||
keyFile = "integration/fixtures/https/snitest.org.key"
|
||||
```
|
||||
|
||||
!!! note
|
||||
Please note that `regex` and `replacement` do not have to be set in the `redirect` structure if an entrypoint is defined for the redirection (they will not be used in this case).
|
||||
|
||||
## Rewriting URL
|
||||
|
||||
To redirect an entrypoint rewriting the URL.
|
||||
|
||||
```toml
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
[entryPoints.http.redirect]
|
||||
regex = "^http://localhost/(.*)"
|
||||
replacement = "http://mydomain/$1"
|
||||
```
|
||||
|
||||
!!! note
|
||||
Please note that `regex` and `replacement` do not have to be set in the `redirect` structure if an `entrypoint` is defined for the redirection (they will not be used in this case).
|
||||
|
||||
Care should be taken when using expansion variables in the replacement: `$1x` is equivalent to `${1x}`, not `${1}x` (see [Regexp.Expand](https://golang.org/pkg/regexp/#Regexp.Expand)), so use the `${1}` syntax.
|
||||
|
||||
Regular expressions and replacements can be tested using online tools such as [Go Playground](https://play.golang.org/p/mWU9p-wk2ru) or [Regex101](https://regex101.com/r/58sIgx/2).
|
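As a quick illustration of the recommended `${1}` form, a sketch based on the redirect example above:

```toml
[entryPoints.http.redirect]
  regex = "^http://localhost/(.*)"
  # ${1} is unambiguous, whereas $1x would be parsed as ${1x}
  replacement = "http://mydomain/${1}"
```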
||||
|
||||
## TLS
|
||||
|
||||
### Static Certificates
|
||||
|
||||
Define an entrypoint with SNI support.
|
||||
|
||||
```toml
|
||||
[entryPoints]
|
||||
[entryPoints.https]
|
||||
address = ":443"
|
||||
[entryPoints.https.tls]
|
||||
[[entryPoints.https.tls.certificates]]
|
||||
certFile = "integration/fixtures/https/snitest.com.cert"
|
||||
keyFile = "integration/fixtures/https/snitest.com.key"
|
||||
```
|
||||
|
||||
!!! note
|
||||
If an empty TLS configuration is provided, default self-signed certificates are generated.
|
||||
|
||||
|
||||
### Dynamic Certificates
|
||||
|
||||
If you need to add or remove TLS certificates while Traefik is running, dynamic TLS certificates are supported using the [file provider](/configuration/backends/file).
|
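A hedged sketch of what such an entry can look like in the file referenced by the file provider (certificate paths are placeholders; the `[[tls]]` syntax assumes Traefik 1.5+):

```toml
[[tls]]
  entryPoints = ["https"]
  [tls.certificate]
    certFile = "path/to/dynamic.cert"
    keyFile = "path/to/dynamic.key"
```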
||||
|
||||
|
||||
## TLS Mutual Authentication
|
||||
|
||||
TLS Mutual Authentication can be `optional` or not.
|
||||
If it's `optional`, Træfik will authorize connections with certificates not signed by a specified Certificate Authority (CA).
|
||||
Otherwise, Træfik will only accept clients that present a certificate signed by a specified Certificate Authority (CA).
|
||||
`ClientCAFiles` can be configured with multiple CAs in the same file, or with multiple files each containing one or several CAs.
|
||||
The CAs have to be in PEM format.
|
||||
|
||||
By default, `ClientCAFiles` is not optional; all clients will be required to present a valid cert.
|
||||
The requirement will apply to all server certs in the entrypoint.
|
||||
|
||||
In the example below, both `snitest.com` and `snitest.org` will require client certs.
|
||||
|
||||
```toml
|
||||
[entryPoints]
|
||||
[entryPoints.https]
|
||||
address = ":443"
|
||||
[entryPoints.https.tls]
|
||||
[entryPoints.https.tls.ClientCA]
|
||||
files = ["tests/clientca1.crt", "tests/clientca2.crt"]
|
||||
optional = false
|
||||
[[entryPoints.https.tls.certificates]]
|
||||
certFile = "integration/fixtures/https/snitest.com.cert"
|
||||
keyFile = "integration/fixtures/https/snitest.com.key"
|
||||
[[entryPoints.https.tls.certificates]]
|
||||
certFile = "integration/fixtures/https/snitest.org.cert"
|
||||
keyFile = "integration/fixtures/https/snitest.org.key"
|
||||
```
|
||||
|
||||
!!! note
|
||||
The deprecated argument `ClientCAFiles` allows adding Client CA files which are mandatory.
|
||||
If this parameter exists, the new ones are not checked.
|
||||
|
||||
## Authentication
|
||||
|
||||
### Basic Authentication
|
||||
|
||||
Passwords can be encoded in MD5, SHA1 and BCrypt: you can use `htpasswd` to generate them.
|
||||
|
||||
Users can be specified directly in the TOML file, or indirectly by referencing an external file;
|
||||
if both are provided, the two are merged, with external file contents having precedence.
|
||||
|
||||
```toml
|
||||
# To enable basic auth on an entrypoint with 2 user/pass: test:test and test2:test2
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
[entryPoints.http.auth.basic]
|
||||
users = ["test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/", "test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0"]
|
||||
usersFile = "/path/to/.htpasswd"
|
||||
```
|
||||
|
||||
### Digest Authentication
|
||||
|
||||
You can use `htdigest` to generate them.
|
||||
|
||||
Users can be specified directly in the TOML file, or indirectly by referencing an external file;
|
||||
if both are provided, the two are merged, with external file contents having precedence.
|
||||
|
||||
```toml
|
||||
# To enable digest auth on an entrypoint with 2 user/realm/pass: test:traefik:test and test2:traefik:test2
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
[entryPoints.http.auth.digest]
|
||||
users = ["test:traefik:a2688e031edb4be6a3797f3882655c05", "test2:traefik:518845800f9e2bfb1f1f740ec24f074e"]
|
||||
usersFile = "/path/to/.htdigest"
|
||||
```
|
||||
|
||||
### Forward Authentication
|
||||
|
||||
This configuration will first forward the request to `http://authserver.com/auth`.
|
||||
|
||||
If the response code is 2XX, access is granted and the original request is performed.
|
||||
Otherwise, the response from the authentication server is returned.
|
||||
|
||||
```toml
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
# ...
|
||||
# To enable forward auth on an entrypoint
|
||||
[entryPoints.http.auth.forward]
|
||||
address = "https://authserver.com/auth"
|
||||
|
||||
# Trust existing X-Forwarded-* headers.
|
||||
# Useful with another reverse proxy in front of Traefik.
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
trustForwardHeader = true
|
||||
|
||||
# Enable forward auth TLS connection.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
[entryPoints.http.auth.forward.tls]
|
||||
cert = "authserver.crt"
|
||||
key = "authserver.key"
|
||||
```
|
||||
|
||||
## Specify Minimum TLS Version
|
||||
|
||||
To specify an https entry point with a minimum TLS version and an array of cipher suites (from [crypto/tls](https://godoc.org/crypto/tls#pkg-constants)).
|
||||
|
||||
```toml
|
||||
[entryPoints]
|
||||
[entryPoints.https]
|
||||
address = ":443"
|
||||
[entryPoints.https.tls]
|
||||
minVersion = "VersionTLS12"
|
||||
cipherSuites = [
|
||||
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
|
||||
"TLS_RSA_WITH_AES_256_GCM_SHA384"
|
||||
]
|
||||
[[entryPoints.https.tls.certificates]]
|
||||
certFile = "integration/fixtures/https/snitest.com.cert"
|
||||
keyFile = "integration/fixtures/https/snitest.com.key"
|
||||
[[entryPoints.https.tls.certificates]]
|
||||
certFile = "integration/fixtures/https/snitest.org.cert"
|
||||
keyFile = "integration/fixtures/https/snitest.org.key"
|
||||
```
|
||||
|
||||
## Compression
|
||||
|
||||
To enable compression support using gzip format.
|
||||
|
||||
```toml
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
compress = true
|
||||
```
|
||||
|
||||
Responses are compressed when:
|
||||
|
||||
* The response body is larger than `512` bytes
|
||||
* And the `Accept-Encoding` request header contains `gzip`
|
||||
* And the response is not already compressed, i.e. the `Content-Encoding` response header is not already set.
|
||||
|
||||
## Whitelisting
|
||||
|
||||
To enable IP whitelisting at the entrypoint level.
|
||||
|
||||
```toml
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
whiteListSourceRange = ["127.0.0.1/32", "192.168.1.7"]
|
||||
```
|
||||
|
||||
## ProxyProtocol
|
||||
|
||||
To enable [ProxyProtocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) support.
|
||||
Only IPs in `trustedIPs` will lead to remote client address replacement: you should declare your load-balancer IP or CIDR range here (in a testing environment, you can trust everyone by using `insecure = true`).
|
||||
|
||||
!!! danger
|
||||
When placing Træfik behind another load-balancer, be sure to carefully configure Proxy Protocol on both sides.
|
||||
Otherwise, it could introduce a security risk in your system, since forged requests could be accepted.
|
||||
|
||||
```toml
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
|
||||
# Enable ProxyProtocol
|
||||
[entryPoints.http.proxyProtocol]
|
||||
# List of trusted IPs
|
||||
#
|
||||
# Required
|
||||
# Default: []
|
||||
#
|
||||
trustedIPs = ["127.0.0.1/32", "192.168.1.7"]
|
||||
|
||||
# Insecure mode FOR TESTING ENVIRONMENT ONLY
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
# insecure = true
|
||||
```
|
||||
|
||||
## Forwarded Header

Only IPs in `trustedIPs` will be authorized to trust the client forwarded headers (`X-Forwarded-*`).

```toml
[entryPoints]
[entryPoints.http]
address = ":80"

# Enable Forwarded Headers
[entryPoints.http.forwardedHeaders]
# List of trusted IPs
#
# Required
# Default: []
#
trustedIPs = ["127.0.0.1/32", "192.168.1.7"]
```
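
To see the effect, send a request with a spoofed `X-Forwarded-For` value and compare what the backend receives. A minimal sketch, assuming a `whoami`-style backend that echoes the headers it receives (hostname illustrative):

```shell
# From an IP listed in trustedIPs the forwarded header should be kept as sent;
# from any other source Traefik should discard it and set the real client address.
curl -H "Host: whoami.docker.localhost" -H "X-Forwarded-For: 1.2.3.4" http://127.0.0.1/
```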
@@ -1,126 +0,0 @@

# Metrics Definition

## Prometheus

```toml
# Metrics definition
[metrics]
#...

# To enable Traefik to export internal metrics to Prometheus
[metrics.prometheus]

# Name of the related entry point
#
# Optional
# Default: "traefik"
#
entryPoint = "traefik"

# Buckets for latency metrics
#
# Optional
# Default: [0.1, 0.3, 1.2, 5]
#
buckets = [0.1,0.3,1.2,5.0]

# ...
```
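
Once enabled, the metrics are scraped over plain HTTP from the entry point named above. A minimal sketch, assuming the default `traefik` entry point is the API/dashboard port `8080` and that the exporter is served under `/metrics`:

```shell
# Dumps Traefik's internal metrics in the Prometheus text format,
# including the latency histograms configured via `buckets`.
curl -s http://localhost:8080/metrics | head -n 20
```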

## DataDog

```toml
# Metrics definition
[metrics]
#...

# DataDog metrics exporter type
[metrics.datadog]

# DataDog's address.
#
# Required
# Default: "localhost:8125"
#
address = "localhost:8125"

# DataDog push interval
#
# Optional
# Default: "10s"
#
pushInterval = "10s"

# ...
```

## StatsD

```toml
# Metrics definition
[metrics]
#...

# StatsD metrics exporter type
[metrics.statsd]

# StatsD's address.
#
# Required
# Default: "localhost:8125"
#
address = "localhost:8125"

# StatsD push interval
#
# Optional
# Default: "10s"
#
pushInterval = "10s"

# ...
```
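
DataDog and StatsD metrics are pushed as UDP datagrams to the configured `address`, so a quick way to see what Træfik sends, without running a real agent, is to listen on that port. A minimal sketch; note that netcat option syntax varies between implementations:

```shell
# Listen for StatsD/DogStatsD datagrams on UDP port 8125
# (OpenBSD netcat syntax; traditional netcat uses `nc -u -l -p 8125`).
nc -u -l 8125
```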
## InfluxDB

```toml
[metrics]
# ...

# InfluxDB metrics exporter type
[metrics.influxdb]

# InfluxDB's address.
#
# Required
# Default: "localhost:8089"
#
address = "localhost:8089"

# InfluxDB push interval
#
# Optional
# Default: "10s"
#
pushinterval = "10s"

# ...
```

## Statistics

```toml
# Metrics definition
[metrics]
# ...

# Enable more detailed statistics.
[metrics.statistics]

# Number of recent errors logged.
#
# Default: 10
#
recentErrors = 10

# ...
```
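
As an assumption for illustration only: when the API/dashboard is enabled, the collected statistics, including the buffer of recent errors, are typically visible through Traefik's health endpoint. Host and port are illustrative:

```shell
# Inspect uptime, status code counters and the recent errors kept by
# [metrics.statistics] (assumes the API/dashboard listens on :8080).
curl -s http://localhost:8080/health
```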
@@ -1,87 +0,0 @@

# Ping Definition

## Configuration

```toml
# Ping definition
[ping]
# Name of the related entry point
#
# Optional
# Default: "traefik"
#
entryPoint = "traefik"
```

| Path    | Method        | Description                                                                                          |
|---------|---------------|------------------------------------------------------------------------------------------------------|
| `/ping` | `GET`, `HEAD` | A simple endpoint to check for Træfik process liveness. Returns a code `200` with the content: `OK` |

!!! warning
    Even if you have authentication configured on an entry point, the `/ping` path of the API is excluded from authentication.
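
A liveness probe is then a single HTTP call. A minimal sketch, assuming the default `traefik` entry point listens on port `8080` (adjust host and port to your setup):

```shell
# Prints "OK" followed by the HTTP status code 200 while the Traefik process is live.
curl -s -w "\n%{http_code}\n" http://localhost:8080/ping
```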
## Examples

The `/ping` health-check URL is enabled with the command-line `--ping` or config file option `[ping]`.
Thus, if you have a regular path for `/foo` and an entrypoint on `:80`, you would access them as follows:

* Regular path: `http://hostname:80/foo`
* Admin panel: `http://hostname:8080/`
* Ping URL: `http://hostname:8080/ping`

However, for security reasons, you may want to be able to expose the `/ping` health-check URL to outside health-checkers, e.g. an Internet service or cloud load-balancer, _without_ exposing your administration panel's port.
In many environments, the security staff may not _allow_ you to expose it.

You have two options:

* Enable `/ping` on a regular entry point
* Enable `/ping` on a dedicated port

### Ping health check on a regular entry point

To proxy `/ping` from a regular entry point to the administration one without exposing the panel, do the following:

```toml
defaultEntryPoints = ["http"]

[entryPoints]
[entryPoints.http]
address = ":80"

[ping]
entryPoint = "http"
```

The above links `/ping` to the `http` entry point and exposes it on port `80`.
### Enable ping health check on a dedicated port

If you do not want to or cannot expose the health-check on a regular entry point - e.g. your security rules do not allow it, or you have a conflicting path - then you can enable the health-check on its own entry point.
Use the following configuration:

```toml
defaultEntryPoints = ["http"]

[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.ping]
address = ":8082"

[ping]
entryPoint = "ping"
```

The above is similar to the previous example, but instead of enabling `/ping` on the _default_ entry point, we enable it on a _dedicated_ entry point.

In the above example, you would access a regular path and the health-check as follows:

* Regular path: `http://hostname:80/foo`
* Ping URL: `http://hostname:8082/ping`

Note the dedicated port `:8082` for `/ping`.

In the above example, it is _very_ important to create a named dedicated entry point, and do **not** include it in `defaultEntryPoints`.
Otherwise, you are likely to expose _all_ services via this entry point.
61
docs/css/traefik.css
Normal file
@@ -0,0 +1,61 @@
a {
    color: #37ABC8;
    text-decoration: none;
}

a:hover, a:focus {
    color: #25606F;
    text-decoration: underline;
}

h1, h2, h3, h4 {
    color: #37ABC8;
}

.navbar-default {
    background-color: #37ABC8;
    border-color: #25606F;
}

.navbar-default .navbar-nav>.active>a, .navbar-default .navbar-nav>.active>a:hover, .navbar-default .navbar-nav>.active>a:focus {
    color: #fff;
    background-color: #25606F;
}

.navbar-default .navbar-nav>li>a:hover, .navbar-default .navbar-nav>li>a:focus {
    color: #fff;
    background-color: #25606F;
}

.navbar-default .navbar-toggle {
    border-color: #25606F;
}

.navbar-default .navbar-toggle:hover, .navbar-default .navbar-toggle:focus .navbar-toggle {
    background-color: #25606F;
}
.navbar-default .navbar-collapse, .navbar-default .navbar-form {
    border-color: #25606F;
}

blockquote p {
    font-size: 14px;
}

.navbar-default .navbar-nav>.open>a, .navbar-default .navbar-nav>.open>a:hover, .navbar-default .navbar-nav>.open>a:focus {
    color: #fff;
    background-color: #25606F;
}

.dropdown-menu>li>a:hover, .dropdown-menu>li>a:focus {
    color: #fff;
    text-decoration: none;
    background-color: #25606F;
}

.dropdown-menu>.active>a, .dropdown-menu>.active>a:hover, .dropdown-menu>.active>a:focus {
    color: #fff;
    text-decoration: none;
    background-color: #25606F;
    outline: 0;
}
File diff suppressed because one or more lines are too long
Binary file not shown. (Before: 186 KiB)
Binary file not shown. (Before: 52 KiB | After: 51 KiB)
Binary file not shown. (Before: 255 KiB | After: 53 KiB)
258
docs/index.md
@@ -1,188 +1,58 @@
<p align="center">
<img src="img/traefik.logo.png" alt="Træfik" title="Træfik" />
<img src="img/traefik.logo.png" alt="Træfɪk" title="Træfɪk" />
</p>

[](https://semaphoreci.com/containous/traefik)
[](https://travis-ci.org/containous/traefik)
[](https://docs.traefik.io)
[](https://goreportcard.com/report/github.com/containous/traefik)
[](http://goreportcard.com/report/containous/traefik)
[](https://github.com/containous/traefik/blob/master/LICENSE.md)
[](https://traefik.herokuapp.com)
[](https://twitter.com/intent/follow?screen_name=traefikproxy)

Træfik is a modern HTTP reverse proxy and load balancer that makes deploying microservices easy.
Træfik integrates with your existing infrastructure components ([Docker](https://www.docker.com/), [Swarm mode](https://docs.docker.com/engine/swarm/), [Kubernetes](https://kubernetes.io), [Marathon](https://mesosphere.github.io/marathon/), [Consul](https://www.consul.io/), [Etcd](https://coreos.com/etcd/), [Rancher](https://rancher.com), [Amazon ECS](https://aws.amazon.com/ecs), ...) and configures itself automatically and dynamically.
Telling Træfik where your orchestrator is could be the _only_ configuration step you need to do.
Træfɪk is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
It supports several backends ([Docker](https://www.docker.com/), [Swarm](https://docs.docker.com/swarm), [Mesos/Marathon](https://mesosphere.github.io/marathon/), [Consul](https://www.consul.io/), [Etcd](https://coreos.com/etcd/), [Zookeeper](https://zookeeper.apache.org), [BoltDB](https://github.com/boltdb/bolt), Rest API, file...) to manage its configuration automatically and dynamically.

## Overview

Imagine that you have deployed a bunch of microservices with the help of an orchestrator (like Swarm or Kubernetes) or a service registry (like etcd or consul).
Now you want users to access these microservices, and you need a reverse proxy.
Imagine that you have deployed a bunch of microservices on your infrastructure. You probably used a service registry (like etcd or consul) and/or an orchestrator (swarm, Mesos/Marathon) to manage all these services.
If you want your users to access some of your microservices from the Internet, you will have to use a reverse proxy and configure it using virtual hosts or prefix paths:

Traditional reverse-proxies require that you configure _each_ route that will connect paths and subdomains to _each_ microservice. In an environment where you add, remove, kill, upgrade, or scale your services _many_ times a day, the task of keeping the routes up to date becomes tedious.
- domain `api.domain.com` will point to the microservice `api` in your private network
- path `domain.com/web` will point to the microservice `web` in your private network
- domain `backoffice.domain.com` will point to the microservice `backoffice` in your private network, load-balancing between your multiple instances

**This is when Træfik can help you!**
But a microservices architecture is dynamic... Services are added, removed, killed or upgraded often, sometimes several times a day.

Træfik listens to your service registry/orchestrator API and instantly generates the routes so your microservices are connected to the outside world -- without further intervention on your part.
Traditional reverse-proxies are not natively dynamic. You can't change their configuration and hot-reload easily.

**Run Træfik and let it do the work for you!**
_(But if you'd rather configure some of your routes manually, Træfik supports that too!)_
Here enters Træfɪk.

## Features

Træfɪk can listen to your service registry/orchestrator API, and knows each time a microservice is added, removed, killed or upgraded, and can generate its configuration automatically.
Routes to your services will be created instantly.

- Continuously updates its configuration (No restarts!)
- Supports multiple load balancing algorithms
- Provides HTTPS to your microservices by leveraging [Let's Encrypt](https://letsencrypt.org)
- Circuit breakers, retry
- High Availability with cluster mode (beta)
- See the magic through its clean web UI
- Websocket, HTTP/2, GRPC ready
- Provides metrics (Rest, Prometheus, Datadog, Statsd, InfluxDB)
- Keeps access logs (JSON, CLF)
- [Fast](/benchmarks) ... which is nice
- Exposes a Rest API
- Packaged as a single binary file (made with :heart: with go) and available as a [tiny](https://microbadger.com/images/traefik) [official](https://hub.docker.com/r/_/traefik/) docker image
Run it and forget it!

## Quickstart

## Supported backends
You can have a quick look at Træfɪk in this [Katacoda tutorial](https://www.katacoda.com/courses/traefik/deploy-load-balancer) that shows how to load balance requests between multiple Docker containers.

- [Docker](/configuration/backends/docker/) / [Swarm mode](/configuration/backends/docker/#docker-swarm-mode)
- [Kubernetes](/configuration/backends/kubernetes/)
- [Mesos](/configuration/backends/mesos/) / [Marathon](/configuration/backends/marathon/)
- [Rancher](/configuration/backends/rancher/) (API, Metadata)
- [Azure Service Fabric](/configuration/backends/servicefabric/)
- [Consul Catalog](/configuration/backends/consulcatalog/)
- [Consul](/configuration/backends/consul/) / [Etcd](/configuration/backends/etcd/) / [Zookeeper](/configuration/backends/zookeeper/) / [BoltDB](/configuration/backends/boltdb/)
- [Eureka](/configuration/backends/eureka/)
- [Amazon ECS](/configuration/backends/ecs/)
- [Amazon DynamoDB](/configuration/backends/dynamodb/)
- [File](/configuration/backends/file/)
- [Rest](/configuration/backends/rest/)
Here is a talk given by [Ed Robinson](https://github.com/errm) at the [ContainerCamp UK](https://container.camp) conference.
You will learn fundamental Træfɪk features and see some demos with Kubernetes.

## The Træfik Quickstart (Using Docker)
[](https://www.youtube.com/watch?v=aFtpIShV60I)

In this quickstart, we'll use [Docker compose](https://docs.docker.com/compose) to create our demo infrastructure.
Here is a talk (in French) given by [Emile Vauge](https://github.com/emilevauge) at the [Devoxx France 2016](http://www.devoxx.fr) conference.
You will learn fundamental Træfɪk features and see some demos with Docker, Mesos/Marathon and Let's Encrypt.

To save some time, you can clone [Træfik's repository](https://github.com/containous/traefik) and use the quickstart files located in the [examples/quickstart](https://github.com/containous/traefik/tree/master/examples/quickstart/) directory.
[](http://www.youtube.com/watch?v=QvAz9mVx5TI)

### 1 — Launch Træfik — Tell It to Listen to Docker
## Get it

Create a `docker-compose.yml` file where you will define a `reverse-proxy` service that uses the official Træfik image:

```yaml
version: '3'

services:
  reverse-proxy:
    image: traefik #The official Traefik docker image
    command: --api --docker #Enables the web UI and tells Træfik to listen to docker
    ports:
      - "80:80"     #The HTTP port
      - "8080:8080" #The Web UI (enabled by --api)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock #So that Traefik can listen to the Docker events
```

**That's it. Now you can launch Træfik!**

Start your `reverse-proxy` with the following command:

```shell
docker-compose up -d reverse-proxy
```

You can open a browser and go to [http://localhost:8080](http://localhost:8080) to see Træfik's dashboard (we'll go back there once we have launched a service in step 2).

### 2 — Launch a Service — Træfik Detects It and Creates a Route for You

Now that we have a Træfik instance up and running, we will deploy new services.

Edit your `docker-compose.yml` file and add the following at the end of your file.

```yaml
# ...
  whoami:
    image: emilevauge/whoami #A container that exposes an API to show its IP address
    labels:
      - "traefik.frontend.rule=Host:whoami.docker.localhost"
```

The above defines `whoami`: a simple web service that outputs information about the machine it is deployed on (its IP address, host, and so on).

Start the `whoami` service with the following command:

```shell
docker-compose up -d whoami
```

Go back to your browser ([http://localhost:8080](http://localhost:8080)) and see that Træfik has automatically detected the new container and updated its own configuration.

When Traefik detects new services, it creates the corresponding routes so you can call them ... _let's see!_ (Here, we're using curl)

```shell
curl -H Host:whoami.docker.localhost http://127.0.0.1
```

_Shows the following output:_
```yaml
Hostname: 8656c8ddca6c
IP: 172.27.0.3
#...
```

### 3 — Launch More Instances — Traefik Load Balances Them

Run more instances of your `whoami` service with the following command:

```shell
docker-compose up -d --scale whoami=2
```

Go back to your browser ([http://localhost:8080](http://localhost:8080)) and see that Træfik has automatically detected the new instance of the container.

Finally, see that Træfik load-balances between the two instances of your service by running the following command twice:

```shell
curl -H Host:whoami.docker.localhost http://127.0.0.1
```

The output will alternately show one of the following:

```yaml
Hostname: 8656c8ddca6c
IP: 172.27.0.3
#...
```

```yaml
Hostname: 8458f154e1f1
IP: 172.27.0.4
# ...
```

### 4 — Enjoy Træfik's Magic

Now that you have a basic understanding of how Træfik can automatically create the routes to your services and load balance them, it might be time to dive into [the documentation](https://docs.traefik.io/) and let Træfik work for you! Whatever your infrastructure is, there is probably [an available Træfik backend](https://docs.traefik.io/configuration/backends/available) that will do the job.

Our recommendation would be to see for yourself how simple it is to enable HTTPS with [Træfik's Let's Encrypt integration](https://docs.traefik.io/user-guide/examples/#lets-encrypt-support) using the dedicated [user guide](https://docs.traefik.io/user-guide/docker-and-lets-encrypt/).

## Resources

Here is a talk given by [Emile Vauge](https://github.com/emilevauge) at [GopherCon 2017](https://gophercon.com).
You will learn Træfik basics in less than 10 minutes.

[](https://www.youtube.com/watch?v=RgudiksfL-k)

Here is a talk given by [Ed Robinson](https://github.com/errm) at the [ContainerCamp UK](https://container.camp) conference.
You will learn fundamental Træfik features and see some demos with Kubernetes.

[](https://www.youtube.com/watch?v=aFtpIShV60I)

## Downloads

### The Official Binary File
### Binary

You can grab the latest binary from the [releases](https://github.com/containous/traefik/releases) page and just run it with the [sample configuration file](https://raw.githubusercontent.com/containous/traefik/master/traefik.sample.toml):

@@ -190,10 +60,80 @@ You can grab the latest binary from the [releases](https://github.com/containous
./traefik -c traefik.toml
```

### The Official Docker Image
### Docker

Using the tiny Docker image:

```shell
docker run -d -p 8080:8080 -p 80:80 -v $PWD/traefik.toml:/etc/traefik/traefik.toml traefik
```

## Test it

You can test Træfɪk easily using [Docker compose](https://docs.docker.com/compose), with this `docker-compose.yml` file:

```yaml
traefik:
  image: traefik
  command: --web --docker --docker.domain=docker.localhost --logLevel=DEBUG
  ports:
    - "80:80"
    - "8080:8080"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /dev/null:/traefik.toml

whoami1:
  image: emilevauge/whoami
  labels:
    - "traefik.backend=whoami"
    - "traefik.frontend.rule=Host:whoami.docker.localhost"

whoami2:
  image: emilevauge/whoami
  labels:
    - "traefik.backend=whoami"
    - "traefik.frontend.rule=Host:whoami.docker.localhost"
```

Then, start it:

```
docker-compose up -d
```

Finally, test load-balancing between the two servers `whoami1` and `whoami2`:

```bash
$ curl -H Host:whoami.docker.localhost http://127.0.0.1
Hostname: ef194d07634a
IP: 127.0.0.1
IP: ::1
IP: 172.17.0.4
IP: fe80::42:acff:fe11:4
GET / HTTP/1.1
Host: 172.17.0.4:80
User-Agent: curl/7.35.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 172.17.0.1
X-Forwarded-Host: 172.17.0.4:80
X-Forwarded-Proto: http
X-Forwarded-Server: dbb60406010d

$ curl -H Host:whoami.docker.localhost http://127.0.0.1
Hostname: 6c3c5df0c79a
IP: 127.0.0.1
IP: ::1
IP: 172.17.0.3
IP: fe80::42:acff:fe11:3
GET / HTTP/1.1
Host: 172.17.0.3:80
User-Agent: curl/7.35.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 172.17.0.1
X-Forwarded-Host: 172.17.0.3:80
X-Forwarded-Proto: http
X-Forwarded-Server: dbb60406010d
```

4
docs/theme/js/extra.js
vendored
@@ -1,4 +0,0 @@
/* Highlight */
(function(hljs) {
  hljs.initHighlightingOnLoad();
})(hljs);
24
docs/theme/js/hljs/LICENSE
vendored
@@ -1,24 +0,0 @@
Copyright (c) 2006, Ivan Sagalaev
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright
  notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
  notice, this list of conditions and the following disclaimer in the
  documentation and/or other materials provided with the distribution.
* Neither the name of highlight.js nor the names of its contributors
  may be used to endorse or promote products derived from this software
  without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE REGENTS AND CONTRIBUTORS BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
2
docs/theme/js/hljs/highlight.pack.js
vendored
File diff suppressed because one or more lines are too long
104
docs/theme/partials/footer.html
vendored
@@ -1,104 +0,0 @@
<!--
Copyright (c) 2016-2017 Martin Donath <martin.donath@squidfunk.com>

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to
deal in the Software without restriction, including without limitation the
rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.
-->

{% import "partials/language.html" as lang with context %}

<!-- Application footer -->
<footer class="md-footer">

  <!-- Link to previous and/or next page -->
  {% if page.previous_page or page.next_page %}
    <!--<div class="md-footer-nav">-->
      <!--<nav class="md-footer-nav__inner md-grid">-->
        <!-- -->
        <!-- Link to previous page -->
        <!--{% if page.previous_page %}-->
          <!--<a href="{{ page.previous_page.url }}"-->
              <!--title="{{ page.previous_page.title }}"-->
              <!--class="md-flex md-footer-nav__link md-footer-nav__link--prev"-->
              <!--rel="prev">-->
            <!--<div class="md-flex__cell md-flex__cell--shrink">-->
              <!--<i class="md-icon md-icon--arrow-back-->
                  <!--md-footer-nav__button"></i>-->
            <!--</div>-->
            <!--<div class="md-flex__cell md-flex__cell--stretch-->
                <!--md-footer-nav__title">-->
              <!--<span class="md-flex__ellipsis">-->
                <!--<span class="md-footer-nav__direction">-->
                  <!--{{ lang.t("footer.previous") }} -->
                <!--</span>-->
                <!--{{ page.previous_page.title }}-->
              <!--</span>-->
            <!--</div>-->
          <!--</a>-->
        <!--{% endif %}-->
        <!-- -->
        <!-- Link to next page -->
        <!--{% if page.next_page %}-->
          <!--<a href="{{ page.next_page.url }}" title="{{ page.next_page.title }}"-->
              <!--class="md-flex md-footer-nav__link md-footer-nav__link--next"-->
              <!--rel="next">-->
            <!--<div class="md-flex__cell md-flex__cell--stretch-->
                <!--md-footer-nav__title">-->
              <!--<span class="md-flex__ellipsis">-->
                <!--<span class="md-footer-nav__direction">-->
                  <!--{{ lang.t("footer.next") }}-->
                <!--</span>-->
                <!--{{ page.next_page.title }}-->
              <!--</span>-->
            <!--</div>-->
            <!--<div class="md-flex__cell md-flex__cell--shrink">-->
              <!--<i class="md-icon md-icon--arrow-forward-->
                  <!--md-footer-nav__button"></i>-->
            <!--</div>-->
          <!--</a>-->
        <!--{% endif %}-->
      <!--</nav>-->
    <!--</div>-->
  {% endif %}

  <!-- Further information -->
  <div class="md-footer-meta md-typeset">
    <div class="md-footer-meta__inner md-grid">

      <!-- Copyright and theme information -->
      <div class="md-footer-copyright">
        {% if config.copyright %}
          <div class="md-footer-copyright__highlight">
            {{ config.copyright }}
          </div>
        {% endif %}
        powered by
        <a href="http://www.mkdocs.org" title="MkDocs">MkDocs</a>
        and
        <a href="http://squidfunk.github.io/mkdocs-material/"
            title="Material for MkDocs">
          Material for MkDocs</a>
      </div>

      <!-- Social links -->
      {% block social %}
        {% include "partials/social.html" %}
      {% endblock %}
    </div>
  </div>
</footer>
96
docs/theme/styles/atom-one-light.css
vendored
@@ -1,96 +0,0 @@
/*

Atom One Light by Daniel Gamage
Original One Light Syntax theme from https://github.com/atom/one-light-syntax

base: #fafafa
mono-1: #383a42
mono-2: #686b77
mono-3: #a0a1a7
hue-1: #0184bb
hue-2: #4078f2
hue-3: #a626a4
hue-4: #50a14f
hue-5: #e45649
hue-5-2: #c91243
hue-6: #986801
hue-6-2: #c18401

*/

.hljs {
  display: block;
  overflow-x: auto;
  padding: 0.5em;
  color: #383a42;
  background: #fafafa;
}

.hljs-comment,
.hljs-quote {
  color: #a0a1a7;
  font-style: italic;
}

.hljs-doctag,
.hljs-keyword,
.hljs-formula {
  color: #a626a4;
}

.hljs-section,
.hljs-name,
.hljs-selector-tag,
.hljs-deletion,
.hljs-subst {
  color: #e45649;
}

.hljs-literal {
  color: #0184bb;
}

.hljs-string,
.hljs-regexp,
.hljs-addition,
.hljs-attribute,
.hljs-meta-string {
  color: #50a14f;
}

.hljs-built_in,
.hljs-class .hljs-title {
  color: #c18401;
}

.hljs-attr,
.hljs-variable,
.hljs-template-variable,
.hljs-type,
.hljs-selector-class,
.hljs-selector-attr,
.hljs-selector-pseudo,
.hljs-number {
  color: #986801;
}

.hljs-symbol,
.hljs-bullet,
.hljs-link,
.hljs-meta,
.hljs-selector-id,
.hljs-title {
  color: #4078f2;
}

.hljs-emphasis {
  font-style: italic;
}

.hljs-strong {
  font-weight: bold;
}

.hljs-link {
  text-decoration: underline;
}
20
docs/theme/styles/extra.css
vendored
@@ -1,20 +0,0 @@
.md-logo img {
  background-color: white;
  border-radius: 50%;
  width: 30px;
  height: 30px;
}

/* Fix for Chrome */
.md-typeset__table td code {
  word-break: unset;
}

.md-typeset__table tr :nth-child(1) {
  word-wrap: break-word;
  max-width: 30em;
}

p {
  text-align: justify;
}
Some files were not shown because too many files have changed in this diff.