forked from SW/traefik
Compare commits
43 Commits
v1.2.0-rc2...v1.2.2
| SHA1 |
|---|
| c766439fed |
| 0ecfbb8279 |
| 82d631572c |
| fa4d2d325d |
| c469bbb70c |
| b3e6c7f598 |
| 40afd641a9 |
| fba3db5291 |
| 023cda1398 |
| 3cf6d7e9e5 |
| 0d657a09b0 |
| 07176396e2 |
| 5183c98fb7 |
| 5a57515c6b |
| 3490b9d35d |
| dd0a20a668 |
| d13cef6ff6 |
| 9f149977d6 |
| e65544f414 |
| 8010758b29 |
| 75d92c6967 |
| 0f67cc7818 |
| f428a752a5 |
| 3fe3784b6c |
| e4d63331cf |
| 8ae521db64 |
| 24fde36b20 |
| 7c55a4fd0c |
| 7b1c0a97f7 |
| 8392846bd4 |
| 07db6a2df1 |
| cc9bb4b1f8 |
| de91b99639 |
| c582ea5ff0 |
| b5de37e722 |
| ee9032f0bf |
| 11a68ce7f9 |
| 0dbac0af0d |
| 9541ee4cf6 |
| 2958a67ce5 |
| eebbf6ebbb |
| 0133178b84 |
| 9af5ba34ae |
92 CHANGELOG.md
@@ -1,5 +1,97 @@
# Change Log

## [v1.2.2](https://github.com/containous/traefik/tree/v1.2.2) (2017-04-11)
[Full Changelog](https://github.com/containous/traefik/compare/v1.2.1...v1.2.2)

**Merged pull requests:**

- Carry PR 1271 [\#1417](https://github.com/containous/traefik/pull/1417) ([emilevauge](https://github.com/emilevauge))
- Fix postloadconfig acme & Docker filter empty rule [\#1401](https://github.com/containous/traefik/pull/1401) ([emilevauge](https://github.com/emilevauge))

## [v1.2.1](https://github.com/containous/traefik/tree/v1.2.1) (2017-03-27)
[Full Changelog](https://github.com/containous/traefik/compare/v1.2.0...v1.2.1)

**Merged pull requests:**

- bump lego 0e2937900 [\#1347](https://github.com/containous/traefik/pull/1347) ([emilevauge](https://github.com/emilevauge))
- k8s: Do not log service fields when GetService is failing. [\#1331](https://github.com/containous/traefik/pull/1331) ([timoreimann](https://github.com/timoreimann))

## [v1.2.0](https://github.com/containous/traefik/tree/v1.2.0) (2017-03-20)
[Full Changelog](https://github.com/containous/traefik/compare/v1.1.2...v1.2.0)

**Merged pull requests:**

- Docker: Added warning if network could not be found [\#1310](https://github.com/containous/traefik/pull/1310) ([zweizeichen](https://github.com/zweizeichen))
- Add filter on task status in addition to desired status \(Docker Provider - swarm\) [\#1304](https://github.com/containous/traefik/pull/1304) ([Yshayy](https://github.com/Yshayy))
- Abort Kubernetes Ingress update if Kubernetes API call fails [\#1295](https://github.com/containous/traefik/pull/1295) ([Regner](https://github.com/Regner))
- Small fixes [\#1291](https://github.com/containous/traefik/pull/1291) ([emilevauge](https://github.com/emilevauge))
- Rename health check URL parameter to path. [\#1285](https://github.com/containous/traefik/pull/1285) ([timoreimann](https://github.com/timoreimann))
- Update Oxy, fix for \#1199 [\#1278](https://github.com/containous/traefik/pull/1278) ([akanto](https://github.com/akanto))
- Fix metrics registering [\#1258](https://github.com/containous/traefik/pull/1258) ([matevzmihalic](https://github.com/matevzmihalic))
- Update DefaultMaxIdleConnsPerHost default in docs. [\#1239](https://github.com/containous/traefik/pull/1239) ([timoreimann](https://github.com/timoreimann))
- Update WSS/WS Proto \[Fixes \#670\] [\#1225](https://github.com/containous/traefik/pull/1225) ([dtomcej](https://github.com/dtomcej))
- Bump go-rancher version [\#1219](https://github.com/containous/traefik/pull/1219) ([SantoDE](https://github.com/SantoDE))
- Chunk taskArns into groups of 100 [\#1209](https://github.com/containous/traefik/pull/1209) ([owen](https://github.com/owen))
- Prepare release v1.2.0 rc2 [\#1204](https://github.com/containous/traefik/pull/1204) ([emilevauge](https://github.com/emilevauge))
- Revert "Ensure that we don't add balancees with no health check runs … [\#1198](https://github.com/containous/traefik/pull/1198) ([jangie](https://github.com/jangie))
- Small fixes and improvments [\#1173](https://github.com/containous/traefik/pull/1173) ([SantoDE](https://github.com/SantoDE))
- Fix docker issues with global and dead tasks [\#1167](https://github.com/containous/traefik/pull/1167) ([christopherobin](https://github.com/christopherobin))
- Better ECS error checking [\#1143](https://github.com/containous/traefik/pull/1143) ([lpetre](https://github.com/lpetre))
- Fix stats race condition [\#1141](https://github.com/containous/traefik/pull/1141) ([emilevauge](https://github.com/emilevauge))
- ECS: Docs - info about cred. resolution and required access policies [\#1137](https://github.com/containous/traefik/pull/1137) ([rickard-von-essen](https://github.com/rickard-von-essen))
- Healthcheck tests and doc [\#1132](https://github.com/containous/traefik/pull/1132) ([Juliens](https://github.com/Juliens))
- Fix travis deploy [\#1128](https://github.com/containous/traefik/pull/1128) ([emilevauge](https://github.com/emilevauge))
- Prepare release v1.2.0 rc1 [\#1126](https://github.com/containous/traefik/pull/1126) ([emilevauge](https://github.com/emilevauge))
- Fix checkout initial before calling rmpr [\#1124](https://github.com/containous/traefik/pull/1124) ([emilevauge](https://github.com/emilevauge))
- Feature rancher integration [\#1120](https://github.com/containous/traefik/pull/1120) ([SantoDE](https://github.com/SantoDE))
- Fix glide go units [\#1119](https://github.com/containous/traefik/pull/1119) ([emilevauge](https://github.com/emilevauge))
- Carry \#818 — Add systemd watchdog feature [\#1116](https://github.com/containous/traefik/pull/1116) ([vdemeester](https://github.com/vdemeester))
- Skip file permission check on Windows [\#1115](https://github.com/containous/traefik/pull/1115) ([StefanScherer](https://github.com/StefanScherer))
- Fix Docker API version for Windows [\#1113](https://github.com/containous/traefik/pull/1113) ([StefanScherer](https://github.com/StefanScherer))
- Fix git rpr [\#1109](https://github.com/containous/traefik/pull/1109) ([emilevauge](https://github.com/emilevauge))
- Fix docker version specifier [\#1108](https://github.com/containous/traefik/pull/1108) ([timoreimann](https://github.com/timoreimann))
- Merge v1.1.2 master [\#1105](https://github.com/containous/traefik/pull/1105) ([emilevauge](https://github.com/emilevauge))
- add sh before script in deploy... [\#1103](https://github.com/containous/traefik/pull/1103) ([emilevauge](https://github.com/emilevauge))
- \[doc\] typo fixes for kubernetes user guide [\#1102](https://github.com/containous/traefik/pull/1102) ([bamarni](https://github.com/bamarni))
- add skip\_cleanup in deploy [\#1101](https://github.com/containous/traefik/pull/1101) ([emilevauge](https://github.com/emilevauge))
- Fix k8s example UI port. [\#1098](https://github.com/containous/traefik/pull/1098) ([ddunkin](https://github.com/ddunkin))
- Fix marathon provider [\#1090](https://github.com/containous/traefik/pull/1090) ([diegooliveira](https://github.com/diegooliveira))
- Add an ECS provider [\#1088](https://github.com/containous/traefik/pull/1088) ([lpetre](https://github.com/lpetre))
- Update comment to reflect the code [\#1087](https://github.com/containous/traefik/pull/1087) ([np](https://github.com/np))
- update NYTimes/gziphandler fixes \#1059 [\#1084](https://github.com/containous/traefik/pull/1084) ([JamesKyburz](https://github.com/JamesKyburz))
- Ensure that we don't add balancees with no health check runs if there is a health check defined on it [\#1080](https://github.com/containous/traefik/pull/1080) ([jangie](https://github.com/jangie))
- Add FreeBSD & OpenBSD to crossbinary [\#1078](https://github.com/containous/traefik/pull/1078) ([geoffgarside](https://github.com/geoffgarside))
- Fix metrics for multiple entry points [\#1071](https://github.com/containous/traefik/pull/1071) ([matevzmihalic](https://github.com/matevzmihalic))
- Allow setting load balancer method and sticky using service annotations [\#1068](https://github.com/containous/traefik/pull/1068) ([bakins](https://github.com/bakins))
- Fix travis script [\#1067](https://github.com/containous/traefik/pull/1067) ([emilevauge](https://github.com/emilevauge))
- Add missing fmt verb specifier in k8s provider. [\#1066](https://github.com/containous/traefik/pull/1066) ([timoreimann](https://github.com/timoreimann))
- Add git rpr command [\#1063](https://github.com/containous/traefik/pull/1063) ([emilevauge](https://github.com/emilevauge))
- Fix k8s example [\#1062](https://github.com/containous/traefik/pull/1062) ([emilevauge](https://github.com/emilevauge))
- Replace underscores to dash in autogenerated urls \(docker provider\) [\#1061](https://github.com/containous/traefik/pull/1061) ([WTFKr0](https://github.com/WTFKr0))
- Don't run go test on .glide cache folder [\#1057](https://github.com/containous/traefik/pull/1057) ([vdemeester](https://github.com/vdemeester))
- Allow setting circuitbreaker expression via Kubernetes annotation [\#1056](https://github.com/containous/traefik/pull/1056) ([bakins](https://github.com/bakins))
- Improving instrumentation. [\#1042](https://github.com/containous/traefik/pull/1042) ([enxebre](https://github.com/enxebre))
- Update user guide for upcoming `docker stack deploy` [\#1041](https://github.com/containous/traefik/pull/1041) ([twelvelabs](https://github.com/twelvelabs))
- Support sticky sessions under SWARM Mode. \#1024 [\#1033](https://github.com/containous/traefik/pull/1033) ([foleymic](https://github.com/foleymic))
- Allow for wildcards in k8s ingress host, fixes \#792 [\#1029](https://github.com/containous/traefik/pull/1029) ([sheerun](https://github.com/sheerun))
- Don't fetch ACME certificates for frontends using non-TLS entrypoints \(\#989\) [\#1023](https://github.com/containous/traefik/pull/1023) ([syfonseq](https://github.com/syfonseq))
- Return Proper Non-ACME certificate - Fixes Issue 672 [\#1018](https://github.com/containous/traefik/pull/1018) ([dtomcej](https://github.com/dtomcej))
- Fix docs build and add missing benchmarks page [\#1017](https://github.com/containous/traefik/pull/1017) ([csabapalfi](https://github.com/csabapalfi))
- Set a NopCloser request body with retry middleware [\#1016](https://github.com/containous/traefik/pull/1016) ([bamarni](https://github.com/bamarni))
- instruct to flatten dependencies with glide [\#1010](https://github.com/containous/traefik/pull/1010) ([bamarni](https://github.com/bamarni))
- check permissions on acme.json during startup [\#1009](https://github.com/containous/traefik/pull/1009) ([bamarni](https://github.com/bamarni))
- \[doc\] few tweaks on the basics page [\#1005](https://github.com/containous/traefik/pull/1005) ([bamarni](https://github.com/bamarni))
- Import order as goimports does [\#1004](https://github.com/containous/traefik/pull/1004) ([vdemeester](https://github.com/vdemeester))
- See the right go report badge [\#991](https://github.com/containous/traefik/pull/991) ([guilhem](https://github.com/guilhem))
- Add multiple values for one rule to docs [\#978](https://github.com/containous/traefik/pull/978) ([j0hnsmith](https://github.com/j0hnsmith))
- Add ACME/Let’s Encrypt integration tests [\#975](https://github.com/containous/traefik/pull/975) ([trecloux](https://github.com/trecloux))
- deploy.sh: upload release source tarball [\#969](https://github.com/containous/traefik/pull/969) ([Mic92](https://github.com/Mic92))
- toml zookeeper doc fix [\#948](https://github.com/containous/traefik/pull/948) ([brdude](https://github.com/brdude))
- Add Rule AddPrefix [\#931](https://github.com/containous/traefik/pull/931) ([Juliens](https://github.com/Juliens))
- Add bug command [\#921](https://github.com/containous/traefik/pull/921) ([emilevauge](https://github.com/emilevauge))
- \(WIP\) feat: HealthCheck [\#918](https://github.com/containous/traefik/pull/918) ([Juliens](https://github.com/Juliens))
- Add ability to set authenticated user in request header [\#889](https://github.com/containous/traefik/pull/889) ([ViViDboarder](https://github.com/ViViDboarder))
- IP-per-task: [\#841](https://github.com/containous/traefik/pull/841) ([diegooliveira](https://github.com/diegooliveira))

## [v1.2.0-rc2](https://github.com/containous/traefik/tree/v1.2.0-rc2) (2017-03-01)
[Full Changelog](https://github.com/containous/traefik/compare/v1.2.0-rc1...v1.2.0-rc2)
@@ -392,6 +392,9 @@ func NewTraefikDefaultPointersConfiguration() *TraefikConfiguration {
    defaultMesos.Endpoint = "http://127.0.0.1:5050"
    defaultMesos.ExposedByDefault = true
    defaultMesos.Constraints = types.Constraints{}
    defaultMesos.RefreshSeconds = 30
    defaultMesos.ZkDetectionTimeout = 30
    defaultMesos.StateTimeoutSecond = 30

    //default ECS
    var defaultECS provider.ECS
@@ -236,16 +236,22 @@ For example:
  sticky = true
```

Healthcheck URL can be configured with a relative URL for `healthcheck.URL`.
Interval between healthcheck can be configured by using `healthcheck.interval`
(default: 30s)
A health check can be configured in order to remove a backend from LB rotation
as long as it keeps returning HTTP status codes other than 200 OK to HTTP GET
requests periodically carried out by Traefik. The check is defined by a path
appended to the backend URL and an interval (given in a format understood by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration)) specifying how
often the health check should be executed (the default being 30 seconds). Each
backend must respond to the health check within 5 seconds.

A recovering backend returning 200 OK responses again is being returned to the
LB rotation pool.

For example:
```toml
[backends]
  [backends.backend1]
    [backends.backend1.healthcheck]
      URL = "/health"
      path = "/health"
      interval = "10s"
```
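To make the path-plus-interval contract concrete, here is a minimal Go sketch of the loop the docs describe. The backend address is hypothetical; the 5-second timeout and the 30-second fallback are the values stated in the paragraph above, and the GET-the-path check mirrors the `checkHealth` change further down this diff.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"time"
)

// checkOnce issues the GET the docs describe: backend URL + configured path,
// with a 5-second timeout; only a 200 OK counts as healthy.
func checkOnce(serverURL *url.URL, path string) bool {
	client := http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(serverURL.String() + path)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	// "10s" matches the TOML example above; an unparsable value would fall
	// back to the documented 30-second default.
	interval, err := time.ParseDuration("10s")
	if err != nil {
		interval = 30 * time.Second
	}
	backend, _ := url.Parse("http://127.0.0.1:8080") // hypothetical backend
	for range time.Tick(interval) {
		fmt.Println("healthy:", checkOnce(backend, "/health"))
	}
}
```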
15 docs/toml.md
@@ -62,11 +62,13 @@
#
# ProvidersThrottleDuration = "5"

# If non-zero, controls the maximum idle (keep-alive) to keep per-host. If zero, DefaultMaxIdleConnsPerHost is used.
# If you encounter 'too many open files' errors, you can either change this value, or change `ulimit` value.
# Controls the maximum idle (keep-alive) connections to keep per-host. If zero, DefaultMaxIdleConnsPerHost
# from the Go standard library net/http module is used.
# If you encounter 'too many open files' errors, you can either increase this
# value or change the `ulimit`.
#
# Optional
# Default: http.DefaultMaxIdleConnsPerHost
# Default: 200
#
# MaxIdleConnsPerHost = 200
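For context, this option maps onto the standard library's per-host idle-connection pool. A minimal sketch of the equivalent raw net/http configuration, using the documented default:

```go
package main

import "net/http"

func main() {
	// Traefik's MaxIdleConnsPerHost corresponds to this net/http Transport
	// field. Left at zero, Go falls back to http.DefaultMaxIdleConnsPerHost
	// (2), which is far too low for a busy reverse proxy; hence the
	// documented default of 200.
	transport := &http.Transport{MaxIdleConnsPerHost: 200}
	client := &http.Client{Transport: transport}
	_ = client // use client for outbound (backend) requests
}
```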
@@ -814,7 +816,7 @@ Labels can be used on containers to override default behaviour:
- `traefik.frontend.passHostHeader=true`: forward client `Host` header to the backend.
- `traefik.frontend.priority=10`: override default frontend priority
- `traefik.frontend.entryPoints=http,https`: assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`.
- `traefik.docker.network`: Set the docker network to use for connections to this container
- `traefik.docker.network`: Set the docker network to use for connections to this container. If a container is linked to several networks, be sure to set the proper network name (you can check with docker inspect <container_id>) otherwise it will randomly pick one (depending on how docker is returning them). For instance when deploying docker `stack` from compose files, the compose defined networks will be prefixed with the `stack` name.

NB: when running inside a container, Træfɪk will need network access through `docker network connect <network> <traefik-container>`
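The selection rule above can be pictured with a small, purely illustrative Go sketch (not Traefik's actual implementation; the network names are made up):

```go
package main

import "fmt"

// pickNetworkAddr honours the traefik.docker.network label when the container
// is attached to that network; otherwise it falls back to whatever network
// Docker happens to return first, which is effectively random - the caveat
// the documentation warns about.
func pickNetworkAddr(networks map[string]string, label string) string {
	if addr, ok := networks[label]; ok {
		return addr
	}
	for _, addr := range networks { // map iteration order is undefined
		return addr
	}
	return ""
}

func main() {
	// Hypothetical container attached to two networks; note the compose
	// stack prefix on the network name, as described above.
	networks := map[string]string{
		"mystack_default": "10.0.1.5",
		"bridge":          "172.17.0.2",
	}
	fmt.Println(pickNetworkAddr(networks, "mystack_default"))
}
```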
@@ -1003,12 +1005,14 @@ domain = "mesos.localhost"
# Zookeeper timeout (in seconds)
#
# Optional
# Default: 30
#
# ZkDetectionTimeout = 30

# Polling interval (in seconds)
#
# Optional
# Default: 30
#
# RefreshSeconds = 30

@@ -1021,8 +1025,9 @@ domain = "mesos.localhost"
# HTTP Timeout (in seconds)
#
# Optional
# Default: 30
#
# StateTimeoutSecond = "host"
# StateTimeoutSecond = "30"
```

## Kubernetes Ingress backend
26 glide.lock generated
@@ -1,5 +1,5 @@
hash: f2a0b9af55c8312762e4148bd876e5bd2451c240b407fbb6d4a5dbc56bf46c05
updated: 2017-02-07T14:15:03.854561081+01:00
hash: b689cb0faed68086641d9e3504ee29498e5bf06b088ad4fcd1e76543446d4d9a
updated: 2017-03-27T14:29:54.009570184+02:00
imports:
- name: bitbucket.org/ww/goautoneg
  version: 75cd24fc2f2c2a2088577d12123ddee5f54e0675
@@ -125,6 +125,10 @@ imports:
  version: f6b1d56f1c048bd94d7e42ac36efb4d57b069b6f
- name: github.com/dgrijalva/jwt-go
  version: 9ed569b5d1ac936e6494082958d63a6aa4fff99a
- name: github.com/dnsimple/dnsimple-go
  version: eeb343928d9a3de357a650c8c25d8f1318330d57
  subpackages:
  - dnsimple
- name: github.com/docker/distribution
  version: 325b0804fef3a66309d962357aac3c2ce3f4d329
  subpackages:
@@ -258,7 +262,7 @@ imports:
  - log
  - swagger
- name: github.com/gambol99/go-marathon
  version: 9ab64d9f0259e8800911d92ebcd4d5b981917919
  version: 6b00a5b651b1beb2c6821863f7c60df490bd46c8
- name: github.com/ghodss/yaml
  version: 04f313413ffd65ce25f2541bfd2b2ceec5c0908c
- name: github.com/go-ini/ini
@@ -301,7 +305,7 @@ imports:
- name: github.com/gorilla/context
  version: 1ea25387ff6f684839d82767c1733ff4d4d15d0a
- name: github.com/gorilla/websocket
  version: c36f2fe5c330f0ac404b616b96c438b8616b1aaf
  version: 4873052237e4eeda85cf50c071ef33836fe8e139
- name: github.com/hashicorp/consul
  version: fce7d75609a04eeb9d4bf41c8dc592aac18fc97d
  subpackages:
@@ -389,7 +393,7 @@ imports:
- name: github.com/pborman/uuid
  version: 5007efa264d92316c43112bc573e754bc889b7b1
- name: github.com/pkg/errors
  version: 248dadf4e9068a0b3e79f02ed0a610d935de5302
  version: bfd5150e4e41705ded2129ec33379de1cb90b513
- name: github.com/pmezard/go-difflib
  version: d8ed2627bdf02c080bf22230dbb337003b7aba2d
  subpackages:
@@ -419,7 +423,7 @@ imports:
  subpackages:
  - src/egoscale
- name: github.com/rancher/go-rancher
  version: 2c43ff300f3eafcbd7d0b89b10427fc630efdc1e
  version: 5b8f6cc26b355ba03d7611fce3844155b7baf05b
  subpackages:
  - client
- name: github.com/ryanuber/go-glob
@@ -460,7 +464,7 @@ imports:
- name: github.com/vdemeester/docker-events
  version: be74d4929ec1ad118df54349fda4b0cba60f849b
- name: github.com/vulcand/oxy
  version: fcc76b52eb8568540a020b7a99e854d9d752b364
  version: f88530866c561d24a6b5aac49f76d6351b788b9f
  repo: https://github.com/containous/oxy.git
  vcs: git
  subpackages:
@@ -482,12 +486,8 @@ imports:
  - plugin
  - plugin/rewrite
  - router
- name: github.com/weppos/dnsimple-go
  version: 65c1ca73cb19baf0f8b2b33219b7f57595a3ccb0
  subpackages:
  - dnsimple
- name: github.com/xenolf/lego
  version: ce8fb060cb8361a9ff8b5fb7c2347fa907b6fcac
  version: 0e2937900b224325f4476745a9b53aef246b7410
  subpackages:
  - acme
  - providers/dns
@@ -600,7 +600,7 @@ imports:
- name: gopkg.in/yaml.v2
  version: bef53efd0c76e49e6de55ead051f886bea7e9420
- name: k8s.io/client-go
  version: 843f7c4f28b1f647f664f883697107d5c02c5acc
  version: 1195e3a8ee1a529d53eed7c624527a68555ddf1f
  subpackages:
  - 1.5/discovery
  - 1.5/kubernetes
@@ -9,7 +9,7 @@ import:
- package: github.com/containous/flaeg
  version: a731c034dda967333efce5f8d276aeff11f8ff87
- package: github.com/vulcand/oxy
  version: fcc76b52eb8568540a020b7a99e854d9d752b364
  version: f88530866c561d24a6b5aac49f76d6351b788b9f
  repo: https://github.com/containous/oxy.git
  vcs: git
  subpackages:
@@ -62,7 +62,7 @@ import:
  subpackages:
  - plugin/rewrite
- package: github.com/xenolf/lego
  version: ce8fb060cb8361a9ff8b5fb7c2347fa907b6fcac
  version: 0e2937900b224325f4476745a9b53aef246b7410
  subpackages:
  - acme
- package: gopkg.in/fsnotify.v1
@@ -138,4 +138,4 @@ import:
  subpackages:
  - proto
- package: github.com/rancher/go-rancher
  version: 2c43ff300f3eafcbd7d0b89b10427fc630efdc1e
  version: 5b8f6cc26b355ba03d7611fce3844155b7baf05b
@@ -25,7 +25,7 @@ func GetHealthCheck() *HealthCheck {

// BackendHealthCheck HealthCheck configuration for a backend
type BackendHealthCheck struct {
    URL string
    Path string
    Interval time.Duration
    DisabledURLs []*url.URL
    lb loadBalancer
@@ -81,7 +81,7 @@ func (hc *HealthCheck) execute(ctx context.Context) {
    enabledURLs := currentBackend.lb.Servers()
    var newDisabledURLs []*url.URL
    for _, url := range currentBackend.DisabledURLs {
        if checkHealth(url, currentBackend.URL) {
        if checkHealth(url, currentBackend.Path) {
            log.Debugf("HealthCheck is up [%s]: Upsert in server list", url.String())
            currentBackend.lb.UpsertServer(url, roundrobin.Weight(1))
        } else {
@@ -91,7 +91,7 @@ func (hc *HealthCheck) execute(ctx context.Context) {
    currentBackend.DisabledURLs = newDisabledURLs

    for _, url := range enabledURLs {
        if !checkHealth(url, currentBackend.URL) {
        if !checkHealth(url, currentBackend.Path) {
            log.Debugf("HealthCheck has failed [%s]: Remove from server list", url.String())
            currentBackend.lb.RemoveServer(url)
            currentBackend.DisabledURLs = append(currentBackend.DisabledURLs, url)
@@ -104,12 +104,12 @@ func (hc *HealthCheck) execute(ctx context.Context) {
    }
}

func checkHealth(serverURL *url.URL, checkURL string) bool {
func checkHealth(serverURL *url.URL, path string) bool {
    timeout := time.Duration(5 * time.Second)
    client := http.Client{
        Timeout: timeout,
    }
    resp, err := client.Get(serverURL.String() + checkURL)
    resp, err := client.Get(serverURL.String() + path)
    if err != nil || resp.StatusCode != 200 {
        return false
    }
@@ -32,7 +32,8 @@ func (p *Prometheus) getLatencyHistogram() metrics.Histogram {
// NewPrometheus returns a new prometheus Metrics implementation.
func NewPrometheus(name string, config *types.Prometheus) *Prometheus {
    var m Prometheus
    m.reqsCounter = prometheus.NewCounterFrom(

    cv := stdprometheus.NewCounterVec(
        stdprometheus.CounterOpts{
            Name: reqsName,
            Help: "How many HTTP requests processed, partitioned by status code and method.",
@@ -41,6 +42,17 @@ func NewPrometheus(name string, config *types.Prometheus) *Prometheus {
        []string{"code", "method"},
    )

    err := stdprometheus.Register(cv)
    if err != nil {
        e, ok := err.(stdprometheus.AlreadyRegisteredError)
        if !ok {
            panic(err)
        }
        m.reqsCounter = prometheus.NewCounter(e.ExistingCollector.(*stdprometheus.CounterVec))
    } else {
        m.reqsCounter = prometheus.NewCounter(cv)
    }

    var buckets []float64
    if config.Buckets != nil {
        buckets = config.Buckets
@@ -48,7 +60,7 @@ func NewPrometheus(name string, config *types.Prometheus) *Prometheus {
        buckets = []float64{0.1, 0.3, 1.2, 5}
    }

    m.latencyHistogram = prometheus.NewHistogramFrom(
    hv := stdprometheus.NewHistogramVec(
        stdprometheus.HistogramOpts{
            Name: latencyName,
            Help: "How long it took to process the request.",
@@ -57,6 +69,18 @@ func NewPrometheus(name string, config *types.Prometheus) *Prometheus {
        },
        []string{},
    )

    err = stdprometheus.Register(hv)
    if err != nil {
        e, ok := err.(stdprometheus.AlreadyRegisteredError)
        if !ok {
            panic(err)
        }
        m.latencyHistogram = prometheus.NewHistogram(e.ExistingCollector.(*stdprometheus.HistogramVec))
    } else {
        m.latencyHistogram = prometheus.NewHistogram(hv)
    }

    return &m
}
@@ -9,10 +9,19 @@ import (

    "github.com/codegangsta/negroni"
    "github.com/containous/traefik/types"
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
    dto "github.com/prometheus/client_model/go"
    "github.com/stretchr/testify/assert"
)

func TestPrometheus(t *testing.T) {
    metricsFamily, err := prometheus.DefaultGatherer.Gather()
    if err != nil {
        t.Fatalf("could not gather metrics family: %s", err)
    }
    initialMetricsFamilyCount := len(metricsFamily)

    recorder := httptest.NewRecorder()

    n := negroni.New()
@@ -42,6 +51,80 @@ func TestPrometheus(t *testing.T) {
        t.Errorf("body does not contain request total entry '%s'", reqsName)
    }
    if !strings.Contains(body, latencyName) {
        t.Errorf("body does not contain request duration entry '%s'", reqsName)
        t.Errorf("body does not contain request duration entry '%s'", latencyName)
    }

    // Register the same metrics again
    metricsMiddlewareBackend = NewMetricsWrapper(NewPrometheus("test", &types.Prometheus{}))
    n = negroni.New()
    n.Use(metricsMiddlewareBackend)
    n.UseHandler(r)

    n.ServeHTTP(recorder, req2)

    metricsFamily, err = prometheus.DefaultGatherer.Gather()
    if err != nil {
        t.Fatalf("could not gather metrics family: %s", err)
    }

    tests := []struct {
        name   string
        labels map[string]string
        assert func(*dto.MetricFamily)
    }{
        {
            name: reqsName,
            labels: map[string]string{
                "code":    "200",
                "method":  "GET",
                "service": "test",
            },
            assert: func(family *dto.MetricFamily) {
                cv := uint(family.Metric[0].Counter.GetValue())
                if cv != 3 {
                    t.Errorf("gathered metrics do not contain correct value for total requests, got %d", cv)
                }
            },
        },
        {
            name: latencyName,
            labels: map[string]string{
                "service": "test",
            },
            assert: func(family *dto.MetricFamily) {
                sc := family.Metric[0].Histogram.GetSampleCount()
                if sc != 3 {
                    t.Errorf("gathered metrics do not contain correct sample count for request duration, got %d", sc)
                }
            },
        },
    }

    assert.Equal(t, len(tests), len(metricsFamily)-initialMetricsFamilyCount, "gathered traefic metrics count does not match tests count")

    for _, test := range tests {
        family := findMetricFamily(test.name, metricsFamily)
        if family == nil {
            t.Errorf("gathered metrics do not contain '%s'", test.name)
            continue
        }
        for _, label := range family.Metric[0].Label {
            val, ok := test.labels[*label.Name]
            if !ok {
                t.Errorf("'%s' metric contains unexpected label '%s'", test.name, label)
            } else if val != *label.Value {
                t.Errorf("label '%s' in metric '%s' has wrong value '%s'", label, test.name, *label.Value)
            }
        }
        test.assert(family)
    }
}

func findMetricFamily(name string, families []*dto.MetricFamily) *dto.MetricFamily {
    for _, family := range families {
        if family.GetName() == name {
            return family
        }
    }
    return nil
}
@@ -380,6 +380,11 @@ func (provider *Docker) containerFilter(container dockerData) bool {
        return false
    }

    if len(provider.getFrontendRule(container)) == 0 {
        log.Debugf("Filtering container with empty frontend rule %s", container.Name)
        return false
    }

    return true
}

@@ -394,7 +399,10 @@ func (provider *Docker) getFrontendRule(container dockerData) string {
    if label, err := getLabel(container, "traefik.frontend.rule"); err == nil {
        return label
    }
    return "Host:" + provider.getSubDomain(container.ServiceName) + "." + provider.Domain
    if len(provider.Domain) > 0 {
        return "Host:" + provider.getSubDomain(container.ServiceName) + "." + provider.Domain
    }
    return ""
}

func (provider *Docker) getBackend(container dockerData) string {
@@ -412,6 +420,8 @@ func (provider *Docker) getIPAddress(container dockerData) string {
            if network != nil {
                return network.Addr
            }

            log.Warnf("Could not find network named '%s' for container '%s'! Maybe you're missing the project's prefix in the label? Defaulting to first available network.", label, container.Name)
        }
    }

@@ -688,6 +698,9 @@ func listTasks(ctx context.Context, dockerClient client.APIClient, serviceID str
    var dockerDataList []dockerData

    for _, task := range taskList {
        if task.Status.State != swarm.TaskStateRunning {
            continue
        }
        dockerData := parseTasks(task, serviceDockerData, networkMap, isGlobalSvc)
        dockerDataList = append(dockerDataList, dockerData)
    }
File diff suppressed because it is too large
@@ -206,13 +206,28 @@ func (provider *ECS) listInstances(ctx context.Context, client *awsClient) ([]ec
        taskArns = append(taskArns, req.Data.(*ecs.ListTasksOutput).TaskArns...)
    }

    req, taskResp := client.ecs.DescribeTasksRequest(&ecs.DescribeTasksInput{
        Tasks: taskArns,
        Cluster: &provider.Cluster,
    })
    // Early return: if we can't list tasks we have nothing to
    // describe below - likely empty cluster/permissions are bad. This
    // stops the AWS API from returning a 401 when you DescribeTasks
    // with no input.
    if len(taskArns) == 0 {
        return []ecsInstance{}, nil
    }

    chunkedTaskArns := provider.chunkedTaskArns(taskArns)
    var tasks []*ecs.Task

    for _, arns := range chunkedTaskArns {
        req, taskResp := client.ecs.DescribeTasksRequest(&ecs.DescribeTasksInput{
            Tasks: arns,
            Cluster: &provider.Cluster,
        })

        if err := wrapAws(ctx, req); err != nil {
            return nil, err
        }
        tasks = append(tasks, taskResp.Tasks...)

    if err := wrapAws(ctx, req); err != nil {
        return nil, err
    }

    containerInstanceArns := make([]*string, 0)
@@ -221,7 +236,7 @@ func (provider *ECS) listInstances(ctx context.Context, client *awsClient) ([]ec
    taskDefinitionArns := make([]*string, 0)
    byTaskDefinition := make(map[string]int)

    for _, task := range taskResp.Tasks {
    for _, task := range tasks {
        if _, found := byContainerInstance[*task.ContainerInstanceArn]; !found {
            byContainerInstance[*task.ContainerInstanceArn] = len(containerInstanceArns)
            containerInstanceArns = append(containerInstanceArns, task.ContainerInstanceArn)
@@ -243,7 +258,7 @@ func (provider *ECS) listInstances(ctx context.Context, client *awsClient) ([]ec
    }

    var instances []ecsInstance
    for _, task := range taskResp.Tasks {
    for _, task := range tasks {

        machineIdx := byContainerInstance[*task.ContainerInstanceArn]
        taskDefIdx := byTaskDefinition[*task.TaskDefinitionArn]
@@ -398,6 +413,22 @@ func (provider *ECS) getFrontendRule(i ecsInstance) string {
    return "Host:" + strings.ToLower(strings.Replace(i.Name, "_", "-", -1)) + "." + provider.Domain
}

// ECS expects no more than 100 parameters be passed to a DescribeTask call; thus, pack
// each string into an array capped at 100 elements
func (provider *ECS) chunkedTaskArns(tasks []*string) [][]*string {
    var chunkedTasks [][]*string
    for i := 0; i < len(tasks); i += 100 {
        sliceEnd := -1
        if i+100 < len(tasks) {
            sliceEnd = i + 100
        } else {
            sliceEnd = len(tasks)
        }
        chunkedTasks = append(chunkedTasks, tasks[i:sliceEnd])
    }
    return chunkedTasks
}

func (i ecsInstance) Protocol() string {
    if label := i.label("traefik.protocol"); label != "" {
        return label
@@ -308,3 +308,42 @@ func TestFilterInstance(t *testing.T) {
        }
    }
}

func TestTaskChunking(t *testing.T) {
    provider := &ECS{}

    testval := "a"
    cases := []struct {
        count           int
        expectedLengths []int
    }{
        {0, []int(nil)},
        {1, []int{1}},
        {99, []int{99}},
        {100, []int{100}},
        {101, []int{100, 1}},
        {199, []int{100, 99}},
        {200, []int{100, 100}},
        {201, []int{100, 100, 1}},
        {555, []int{100, 100, 100, 100, 100, 55}},
        {1001, []int{100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 1}},
    }

    for _, c := range cases {
        var tasks []*string
        for v := 0; v < c.count; v++ {
            tasks = append(tasks, &testval)
        }

        out := provider.chunkedTaskArns(tasks)
        var outCount []int

        for _, el := range out {
            outCount = append(outCount, len(el))
        }

        if !reflect.DeepEqual(outCount, c.expectedLengths) {
            t.Errorf("Chunking %d elements, expected %#v, got %#v", c.count, c.expectedLengths, outCount)
        }
    }
}
@@ -3,6 +3,7 @@ package k8s
import (
    "time"

    "github.com/containous/traefik/log"
    "k8s.io/client-go/1.5/kubernetes"
    "k8s.io/client-go/1.5/pkg/api"
    "k8s.io/client-go/1.5/pkg/api/v1"
@@ -39,31 +40,18 @@ type clientImpl struct {
    clientset *kubernetes.Clientset
}

// NewInClusterClient returns a new Kubernetes client that expect to run inside the cluster
func NewInClusterClient() (Client, error) {
// NewClient returns a new Kubernetes client
func NewClient(endpoint string) (Client, error) {
    config, err := rest.InClusterConfig()
    if err != nil {
        return nil, err
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return nil, err
        log.Warnf("Kubernetes in cluster config error, trying from out of cluster: %s", err)
        config = &rest.Config{}
    }

    return &clientImpl{
        clientset: clientset,
    }, nil
}

// NewInClusterClientWithEndpoint is the same as NewInClusterClient but uses the provided endpoint URL
func NewInClusterClientWithEndpoint(endpoint string) (Client, error) {
    config, err := rest.InClusterConfig()
    if err != nil {
        return nil, err
    if len(endpoint) > 0 {
        config.Host = endpoint
    }

    config.Host = endpoint

    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return nil, err
@@ -29,19 +29,10 @@ type Kubernetes struct {
    lastConfiguration safe.Safe
}

func (provider *Kubernetes) newK8sClient() (k8s.Client, error) {
    if provider.Endpoint != "" {
        log.Infof("Creating in cluster Kubernetes client with endpoint %v", provider.Endpoint)
        return k8s.NewInClusterClientWithEndpoint(provider.Endpoint)
    }
    log.Info("Creating in cluster Kubernetes client")
    return k8s.NewInClusterClient()
}

// Provide allows the provider to provide configurations to traefik
// using the given configuration channel.
func (provider *Kubernetes) Provide(configurationChan chan<- types.ConfigMessage, pool *safe.Pool, constraints types.Constraints) error {
    k8sClient, err := provider.newK8sClient()
    k8sClient, err := k8s.NewClient(provider.Endpoint)
    if err != nil {
        return err
    }
@@ -169,8 +160,13 @@ func (provider *Kubernetes) loadIngresses(k8sClient k8s.Client) (*types.Configur
                }
            }
            service, exists, err := k8sClient.GetService(i.ObjectMeta.Namespace, pa.Backend.ServiceName)
            if err != nil || !exists {
                log.Warnf("Error retrieving service %s/%s: %v", i.ObjectMeta.Namespace, pa.Backend.ServiceName, err)
            if err != nil {
                log.Errorf("Error while retrieving service information from k8s API %s/%s: %v", i.ObjectMeta.Namespace, pa.Backend.ServiceName, err)
                return nil, err
            }

            if !exists {
                log.Errorf("Service not found for %s/%s", i.ObjectMeta.Namespace, pa.Backend.ServiceName)
                delete(templateObjects.Frontends, r.Host+pa.Path)
                continue
            }
@@ -193,13 +189,20 @@ func (provider *Kubernetes) loadIngresses(k8sClient k8s.Client) (*types.Configur
            if port.Port == 443 {
                protocol = "https"
            }

            endpoints, exists, err := k8sClient.GetEndpoints(service.ObjectMeta.Namespace, service.ObjectMeta.Name)
            if err != nil || !exists {
            if err != nil {
                log.Errorf("Error retrieving endpoints %s/%s: %v", service.ObjectMeta.Namespace, service.ObjectMeta.Name, err)
                return nil, err
            }

            if !exists {
                log.Errorf("Endpoints not found for %s/%s", service.ObjectMeta.Namespace, service.ObjectMeta.Name)
                continue
            }

            if len(endpoints.Subsets) == 0 {
                log.Warnf("Endpoints not found for %s/%s, falling back to Service ClusterIP", service.ObjectMeta.Namespace, service.ObjectMeta.Name)
                log.Warnf("Service endpoints not found for %s/%s, falling back to Service ClusterIP", service.ObjectMeta.Namespace, service.ObjectMeta.Name)
                templateObjects.Backends[r.Host+pa.Path].Servers[string(service.UID)] = types.Server{
                    URL: protocol + "://" + service.Spec.ClusterIP + ":" + strconv.Itoa(int(port.Port)),
                    Weight: 1,
@@ -2,11 +2,13 @@ package provider

import (
    "encoding/json"
    "errors"
    "reflect"
    "testing"

    "github.com/containous/traefik/provider/k8s"
    "github.com/containous/traefik/types"
    "github.com/davecgh/go-spew/spew"
    "k8s.io/client-go/1.5/pkg/api/v1"
    "k8s.io/client-go/1.5/pkg/apis/extensions/v1beta1"
    "k8s.io/client-go/1.5/pkg/util/intstr"
@@ -1524,11 +1526,286 @@ func TestServiceAnnotations(t *testing.T) {
    }
}

func TestKubeAPIErrors(t *testing.T) {
    ingresses := []*v1beta1.Ingress{{
        ObjectMeta: v1.ObjectMeta{
            Namespace: "testing",
        },
        Spec: v1beta1.IngressSpec{
            Rules: []v1beta1.IngressRule{
                {
                    Host: "foo",
                    IngressRuleValue: v1beta1.IngressRuleValue{
                        HTTP: &v1beta1.HTTPIngressRuleValue{
                            Paths: []v1beta1.HTTPIngressPath{
                                {
                                    Path: "/bar",
                                    Backend: v1beta1.IngressBackend{
                                        ServiceName: "service1",
                                        ServicePort: intstr.FromInt(80),
                                    },
                                },
                            },
                        },
                    },
                },
            },
        },
    }}

    services := []*v1.Service{{
        ObjectMeta: v1.ObjectMeta{
            Name:      "service1",
            UID:       "1",
            Namespace: "testing",
        },
        Spec: v1.ServiceSpec{
            ClusterIP: "10.0.0.1",
            Ports: []v1.ServicePort{
                {
                    Port: 80,
                },
            },
        },
    }}

    endpoints := []*v1.Endpoints{}
    watchChan := make(chan interface{})
    apiErr := errors.New("failed kube api call")

    testCases := []struct {
        desc            string
        apiServiceErr   error
        apiEndpointsErr error
    }{
        {
            desc:          "failed service call",
            apiServiceErr: apiErr,
        },
        {
            desc:            "failed endpoints call",
            apiEndpointsErr: apiErr,
        },
    }

    for _, tc := range testCases {
        tc := tc
        t.Run(tc.desc, func(t *testing.T) {
            t.Parallel()

            client := clientMock{
                ingresses:         ingresses,
                services:          services,
                endpoints:         endpoints,
                watchChan:         watchChan,
                apiServiceError:   tc.apiServiceErr,
                apiEndpointsError: tc.apiEndpointsErr,
            }

            provider := Kubernetes{}
            if _, err := provider.loadIngresses(client); err != apiErr {
                t.Errorf("Got error %v, wanted error %v", err, apiErr)
            }
        })
    }
}

func TestMissingResources(t *testing.T) {
    ingresses := []*v1beta1.Ingress{{
        ObjectMeta: v1.ObjectMeta{
            Namespace: "testing",
        },
        Spec: v1beta1.IngressSpec{
            Rules: []v1beta1.IngressRule{
                {
                    Host: "fully_working",
                    IngressRuleValue: v1beta1.IngressRuleValue{
                        HTTP: &v1beta1.HTTPIngressRuleValue{
                            Paths: []v1beta1.HTTPIngressPath{
                                {
                                    Backend: v1beta1.IngressBackend{
                                        ServiceName: "fully_working_service",
                                        ServicePort: intstr.FromInt(80),
                                    },
                                },
                            },
                        },
                    },
                },
                {
                    Host: "missing_service",
                    IngressRuleValue: v1beta1.IngressRuleValue{
                        HTTP: &v1beta1.HTTPIngressRuleValue{
                            Paths: []v1beta1.HTTPIngressPath{
                                {
                                    Backend: v1beta1.IngressBackend{
                                        ServiceName: "missing_service_service",
                                        ServicePort: intstr.FromInt(80),
                                    },
                                },
                            },
                        },
                    },
                },
                {
                    Host: "missing_endpoints",
                    IngressRuleValue: v1beta1.IngressRuleValue{
                        HTTP: &v1beta1.HTTPIngressRuleValue{
                            Paths: []v1beta1.HTTPIngressPath{
                                {
                                    Backend: v1beta1.IngressBackend{
                                        ServiceName: "missing_endpoints_service",
                                        ServicePort: intstr.FromInt(80),
                                    },
                                },
                            },
                        },
                    },
                },
            },
        },
    }}
    services := []*v1.Service{
        {
            ObjectMeta: v1.ObjectMeta{
                Name:      "fully_working_service",
                UID:       "1",
                Namespace: "testing",
            },
            Spec: v1.ServiceSpec{
                ClusterIP: "10.0.0.1",
                Ports: []v1.ServicePort{
                    {
                        Port: 80,
                    },
                },
            },
        },
        {
            ObjectMeta: v1.ObjectMeta{
                Name:      "missing_endpoints_service",
                UID:       "3",
                Namespace: "testing",
            },
            Spec: v1.ServiceSpec{
                ClusterIP: "10.0.0.3",
                Ports: []v1.ServicePort{
                    {
                        Port: 80,
                    },
                },
            },
        },
    }
    endpoints := []*v1.Endpoints{
        {
            ObjectMeta: v1.ObjectMeta{
                Name:      "fully_working_service",
                UID:       "1",
                Namespace: "testing",
            },
            Subsets: []v1.EndpointSubset{
                {
                    Addresses: []v1.EndpointAddress{
                        {
                            IP: "10.10.0.1",
                        },
                    },
                    Ports: []v1.EndpointPort{
                        {
                            Port: 8080,
                        },
                    },
                },
            },
        },
    }

    watchChan := make(chan interface{})
    client := clientMock{
        ingresses: ingresses,
        services:  services,
        endpoints: endpoints,
        watchChan: watchChan,

        // TODO: Update all tests to cope with "properExists == true" correctly and remove flag.
        // See https://github.com/containous/traefik/issues/1307
        properExists: true,
    }
    provider := Kubernetes{}
    actual, err := provider.loadIngresses(client)
    if err != nil {
        t.Fatalf("error %+v", err)
    }

    expected := &types.Configuration{
        Backends: map[string]*types.Backend{
            "fully_working": {
                Servers: map[string]types.Server{
                    "http://10.10.0.1:8080": {
                        URL:    "http://10.10.0.1:8080",
                        Weight: 1,
                    },
                },
                CircuitBreaker: nil,
                LoadBalancer: &types.LoadBalancer{
                    Method: "wrr",
                    Sticky: false,
                },
            },
            "missing_service": {
                Servers: map[string]types.Server{},
                LoadBalancer: &types.LoadBalancer{
                    Method: "wrr",
                    Sticky: false,
                },
            },
            "missing_endpoints": {
                Servers:        map[string]types.Server{},
                CircuitBreaker: nil,
                LoadBalancer: &types.LoadBalancer{
                    Method: "wrr",
                    Sticky: false,
                },
            },
        },
        Frontends: map[string]*types.Frontend{
            "fully_working": {
                Backend:        "fully_working",
                PassHostHeader: true,
                Routes: map[string]types.Route{
                    "fully_working": {
                        Rule: "Host:fully_working",
                    },
                },
            },
            "missing_endpoints": {
                Backend:        "missing_endpoints",
                PassHostHeader: true,
                Routes: map[string]types.Route{
                    "missing_endpoints": {
                        Rule: "Host:missing_endpoints",
                    },
                },
            },
        },
    }

    if !reflect.DeepEqual(actual, expected) {
        t.Fatalf("expected\n%v\ngot\n\n%v", spew.Sdump(expected), spew.Sdump(actual))
    }
}

type clientMock struct {
    ingresses []*v1beta1.Ingress
    services  []*v1.Service
    endpoints []*v1.Endpoints
    watchChan chan interface{}

    apiServiceError   error
    apiEndpointsError error

    properExists bool
}

func (c clientMock) GetIngresses(namespaces k8s.Namespaces) []*v1beta1.Ingress {
@@ -1543,20 +1820,33 @@ func (c clientMock) GetIngresses(namespaces k8s.Namespaces) []*v1beta1.Ingress {
}

func (c clientMock) GetService(namespace, name string) (*v1.Service, bool, error) {
    if c.apiServiceError != nil {
        return nil, false, c.apiServiceError
    }

    for _, service := range c.services {
        if service.Namespace == namespace && service.Name == name {
            return service, true, nil
        }
    }
    return &v1.Service{}, true, nil
    return nil, false, nil
}

func (c clientMock) GetEndpoints(namespace, name string) (*v1.Endpoints, bool, error) {
    if c.apiEndpointsError != nil {
        return nil, false, c.apiEndpointsError
    }

    for _, endpoints := range c.endpoints {
        if endpoints.Namespace == namespace && endpoints.Name == name {
            return endpoints, true, nil
        }
    }

    if c.properExists {
        return nil, false, nil
    }

    return &v1.Endpoints{}, true, nil
}
252 server.go
@@ -311,15 +311,16 @@ func (server *Server) postLoadConfig() {
|
||||
for _, frontend := range configuration.Frontends {
|
||||
|
||||
// check if one of the frontend entrypoints is configured with TLS
|
||||
TLSEnabled := false
|
||||
// and is configured with ACME
|
||||
ACMEEnabled := false
|
||||
for _, entrypoint := range frontend.EntryPoints {
|
||||
if server.globalConfiguration.EntryPoints[entrypoint].TLS != nil {
|
||||
TLSEnabled = true
|
||||
if server.globalConfiguration.ACME.EntryPoint == entrypoint && server.globalConfiguration.EntryPoints[entrypoint].TLS != nil {
|
||||
ACMEEnabled = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if TLSEnabled {
|
||||
if ACMEEnabled {
|
||||
for _, route := range frontend.Routes {
|
||||
rules := Rules{}
|
||||
domains, err := rules.ParseDomains(route.Rule)
|
||||
@@ -552,7 +553,7 @@ func (server *Server) buildEntryPoints(globalConfiguration GlobalConfiguration)
|
||||
// provider configurations.
|
||||
func (server *Server) loadConfig(configurations configs, globalConfiguration GlobalConfiguration) (map[string]*serverEntryPoint, error) {
|
||||
serverEntryPoints := server.buildEntryPoints(globalConfiguration)
|
||||
redirectHandlers := make(map[string]http.Handler)
|
||||
redirectHandlers := make(map[string]negroni.Handler)
|
||||
|
||||
backends := map[string]http.Handler{}
|
||||
|
||||
@@ -597,99 +598,66 @@ func (server *Server) loadConfig(configurations configs, globalConfiguration Glo
|
||||
log.Debugf("Creating route %s %s", routeName, route.Rule)
|
||||
}
|
||||
entryPoint := globalConfiguration.EntryPoints[entryPointName]
|
||||
negroni := negroni.New()
|
||||
if entryPoint.Redirect != nil {
|
||||
if redirectHandlers[entryPointName] != nil {
|
||||
newServerRoute.route.Handler(redirectHandlers[entryPointName])
|
||||
negroni.Use(redirectHandlers[entryPointName])
|
||||
} else if handler, err := server.loadEntryPointConfig(entryPointName, entryPoint); err != nil {
|
||||
log.Errorf("Error loading entrypoint configuration for frontend %s: %v", frontendName, err)
|
||||
log.Errorf("Skipping frontend %s...", frontendName)
|
||||
continue frontend
|
||||
} else {
|
||||
newServerRoute.route.Handler(handler)
|
||||
negroni.Use(handler)
|
||||
redirectHandlers[entryPointName] = handler
|
||||
}
|
||||
} else {
|
||||
if backends[frontend.Backend] == nil {
|
||||
log.Debugf("Creating backend %s", frontend.Backend)
|
||||
var lb http.Handler
|
||||
rr, _ := roundrobin.New(saveBackend)
|
||||
if configuration.Backends[frontend.Backend] == nil {
|
||||
log.Errorf("Undefined backend '%s' for frontend %s", frontend.Backend, frontendName)
|
||||
log.Errorf("Skipping frontend %s...", frontendName)
|
||||
continue frontend
|
||||
}
|
||||
}
|
||||
if backends[frontend.Backend] == nil {
|
||||
log.Debugf("Creating backend %s", frontend.Backend)
|
||||
var lb http.Handler
|
||||
rr, _ := roundrobin.New(saveBackend)
|
||||
if configuration.Backends[frontend.Backend] == nil {
|
||||
log.Errorf("Undefined backend '%s' for frontend %s", frontend.Backend, frontendName)
|
||||
log.Errorf("Skipping frontend %s...", frontendName)
|
||||
continue frontend
|
||||
}
|
||||
|
||||
lbMethod, err := types.NewLoadBalancerMethod(configuration.Backends[frontend.Backend].LoadBalancer)
|
||||
if err != nil {
|
||||
log.Errorf("Error loading load balancer method '%+v' for frontend %s: %v", configuration.Backends[frontend.Backend].LoadBalancer, frontendName, err)
|
||||
log.Errorf("Skipping frontend %s...", frontendName)
|
||||
continue frontend
|
||||
}
|
||||
lbMethod, err := types.NewLoadBalancerMethod(configuration.Backends[frontend.Backend].LoadBalancer)
|
||||
if err != nil {
|
||||
log.Errorf("Error loading load balancer method '%+v' for frontend %s: %v", configuration.Backends[frontend.Backend].LoadBalancer, frontendName, err)
|
||||
log.Errorf("Skipping frontend %s...", frontendName)
|
||||
continue frontend
|
||||
}
|
||||
|
||||
stickysession := configuration.Backends[frontend.Backend].LoadBalancer.Sticky
|
||||
cookiename := "_TRAEFIK_BACKEND"
|
||||
var sticky *roundrobin.StickySession
|
||||
stickysession := configuration.Backends[frontend.Backend].LoadBalancer.Sticky
|
||||
cookiename := "_TRAEFIK_BACKEND"
|
||||
var sticky *roundrobin.StickySession
|
||||
|
||||
if stickysession {
|
||||
sticky = roundrobin.NewStickySession(cookiename)
|
||||
}
|
||||
|
||||
switch lbMethod {
|
||||
case types.Drr:
|
||||
log.Debugf("Creating load-balancer drr")
|
||||
rebalancer, _ := roundrobin.NewRebalancer(rr, roundrobin.RebalancerLogger(oxyLogger))
|
||||
if stickysession {
|
||||
sticky = roundrobin.NewStickySession(cookiename)
|
||||
log.Debugf("Sticky session with cookie %v", cookiename)
|
||||
rebalancer, _ = roundrobin.NewRebalancer(rr, roundrobin.RebalancerLogger(oxyLogger), roundrobin.RebalancerStickySession(sticky))
|
||||
}
|
||||
|
||||
switch lbMethod {
|
||||
case types.Drr:
|
||||
log.Debugf("Creating load-balancer drr")
|
||||
rebalancer, _ := roundrobin.NewRebalancer(rr, roundrobin.RebalancerLogger(oxyLogger))
|
||||
if stickysession {
|
||||
log.Debugf("Sticky session with cookie %v", cookiename)
|
||||
rebalancer, _ = roundrobin.NewRebalancer(rr, roundrobin.RebalancerLogger(oxyLogger), roundrobin.RebalancerStickySession(sticky))
|
||||
lb = rebalancer
|
||||
for serverName, server := range configuration.Backends[frontend.Backend].Servers {
|
||||
url, err := url.Parse(server.URL)
|
||||
if err != nil {
|
||||
log.Errorf("Error parsing server URL %s: %v", server.URL, err)
|
||||
log.Errorf("Skipping frontend %s...", frontendName)
|
||||
continue frontend
|
||||
}
|
||||
lb = rebalancer
|
||||
for serverName, server := range configuration.Backends[frontend.Backend].Servers {
|
||||
url, err := url.Parse(server.URL)
|
||||
if err != nil {
|
||||
log.Errorf("Error parsing server URL %s: %v", server.URL, err)
|
||||
log.Errorf("Skipping frontend %s...", frontendName)
|
||||
continue frontend
|
||||
}
|
||||
backend2FrontendMap[url.String()] = frontendName
|
||||
log.Debugf("Creating server %s at %s with weight %d", serverName, url.String(), server.Weight)
|
||||
if err := rebalancer.UpsertServer(url, roundrobin.Weight(server.Weight)); err != nil {
|
||||
log.Errorf("Error adding server %s to load balancer: %v", server.URL, err)
|
||||
log.Errorf("Skipping frontend %s...", frontendName)
|
||||
continue frontend
|
||||
}
|
||||
if configuration.Backends[frontend.Backend].HealthCheck != nil {
|
||||
var interval time.Duration
|
||||
if configuration.Backends[frontend.Backend].HealthCheck.Interval != "" {
|
||||
interval, err = time.ParseDuration(configuration.Backends[frontend.Backend].HealthCheck.Interval)
|
||||
if err != nil {
|
||||
log.Errorf("Wrong healthcheck interval: %s", err)
|
||||
interval = time.Second * 30
|
||||
}
|
||||
}
|
||||
backendsHealthcheck[frontend.Backend] = healthcheck.NewBackendHealthCheck(configuration.Backends[frontend.Backend].HealthCheck.URL, interval, rebalancer)
|
||||
}
|
||||
}
|
||||
@@ -700,63 +668,96 @@ func (server *Server) loadConfig(configurations configs, globalConfiguration Glo
	case types.Wrr:
		log.Debugf("Creating load-balancer wrr")
		if stickysession {
			log.Debugf("Sticky session with cookie %v", cookiename)
			rr, _ = roundrobin.New(saveBackend, roundrobin.EnableStickySession(sticky))
		}
		lb = rr
		for serverName, server := range configuration.Backends[frontend.Backend].Servers {
			url, err := url.Parse(server.URL)
			if err != nil {
				log.Errorf("Error parsing server URL %s: %v", server.URL, err)
				log.Errorf("Skipping frontend %s...", frontendName)
				continue frontend
			}
			backend2FrontendMap[url.String()] = frontendName
			log.Debugf("Creating server %s at %s with weight %d", serverName, url.String(), server.Weight)
			if err := rr.UpsertServer(url, roundrobin.Weight(server.Weight)); err != nil {
				log.Errorf("Error adding server %s to load balancer: %v", server.URL, err)
				log.Errorf("Skipping frontend %s...", frontendName)
				continue frontend
			}
			if configuration.Backends[frontend.Backend].HealthCheck != nil {
				var interval time.Duration
				if configuration.Backends[frontend.Backend].HealthCheck.Interval != "" {
					interval, err = time.ParseDuration(configuration.Backends[frontend.Backend].HealthCheck.Interval)
					if err != nil {
						log.Errorf("Wrong healthcheck interval: %s", err)
						interval = time.Second * 30
					}
				}
				backendsHealthcheck[frontend.Backend] = healthcheck.NewBackendHealthCheck(configuration.Backends[frontend.Backend].HealthCheck.Path, interval, rr)
			}
		}
	}
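The wrr branch instead serves from the plain weighted round-robin, optionally pinning each client to one backend with a sticky-session cookie. A minimal sketch of the sticky option under the same import-path assumption; the cookie name mirrors Traefik v1's convention and is illustrative, not taken from this diff:

```go
package main

import (
	"net/http"
	"net/url"

	"github.com/vulcand/oxy/forward"
	"github.com/vulcand/oxy/roundrobin"
)

func main() {
	fwd, _ := forward.New()
	// Pin each client to one backend via a cookie; the name mirrors
	// Traefik v1's "_TRAEFIK_BACKEND" convention (illustrative here).
	sticky := roundrobin.NewStickySession("_TRAEFIK_BACKEND")
	rr, _ := roundrobin.New(fwd, roundrobin.EnableStickySession(sticky))

	u, _ := url.Parse("http://127.0.0.1:8081") // illustrative backend
	rr.UpsertServer(u, roundrobin.Weight(1))
	http.ListenAndServe(":8080", rr)
}
```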
	maxConns := configuration.Backends[frontend.Backend].MaxConn
	if maxConns != nil && maxConns.Amount != 0 {
		extractFunc, err := utils.NewExtractor(maxConns.ExtractorFunc)
		if err != nil {
			log.Errorf("Error creating connlimit: %v", err)
			log.Errorf("Skipping frontend %s...", frontendName)
			continue frontend
		}
		log.Debugf("Creating load-balancer connlimit")
		lb, err = connlimit.New(lb, extractFunc, maxConns.Amount, connlimit.Logger(oxyLogger))
		if err != nil {
			log.Errorf("Error creating connlimit: %v", err)
			log.Errorf("Skipping frontend %s...", frontendName)
			continue frontend
		}
	}
	// retry ?
	if globalConfiguration.Retry != nil {
		retries := len(configuration.Backends[frontend.Backend].Servers)
		if globalConfiguration.Retry.Attempts > 0 {
			retries = globalConfiguration.Retry.Attempts
		}
		lb = middlewares.NewRetry(retries, lb)
		log.Debugf("Creating retries max attempts %d", retries)
	}
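connlimit is another oxy middleware: it caps in-flight requests per source, where the source is computed by an extractor expression such as client.ip or request.host (what maxConns.ExtractorFunc holds). A short sketch under the same import-path assumption, with an illustrative limit:

```go
package main

import (
	"net/http"

	"github.com/vulcand/oxy/connlimit"
	"github.com/vulcand/oxy/utils"
)

func main() {
	next := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	// Group concurrent requests by client IP; oxy also understands
	// variables like "request.host" and "request.header.X-Forwarded-For".
	extract, _ := utils.NewExtractor("client.ip")
	// Allow at most 10 simultaneous requests per extracted source (illustrative).
	handler, _ := connlimit.New(next, extract, 10)
	http.ListenAndServe(":8080", handler)
}
```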

	var negroni = negroni.New()
	if server.globalConfiguration.Web != nil && server.globalConfiguration.Web.Metrics != nil {
		if server.globalConfiguration.Web.Metrics.Prometheus != nil {
			metricsMiddlewareBackend := middlewares.NewMetricsWrapper(middlewares.NewPrometheus(frontend.Backend, server.globalConfiguration.Web.Metrics.Prometheus))
			negroni.Use(metricsMiddlewareBackend)
		}
	}
	if configuration.Backends[frontend.Backend].CircuitBreaker != nil {
		log.Debugf("Creating circuit breaker %s", configuration.Backends[frontend.Backend].CircuitBreaker.Expression)
		cbreaker, err := middlewares.NewCircuitBreaker(lb, configuration.Backends[frontend.Backend].CircuitBreaker.Expression, cbreaker.Logger(oxyLogger))
		if err != nil {
			log.Errorf("Error creating circuit breaker: %v", err)
			log.Errorf("Skipping frontend %s...", frontendName)
			continue frontend
		}
		negroni.Use(cbreaker)
	} else {
		negroni.UseHandler(lb)
	}
	backends[frontend.Backend] = negroni
} else {
	log.Debugf("Reusing backend %s", frontend.Backend)
}
if frontend.Priority > 0 {
	newServerRoute.route.Priority(frontend.Priority)
}
server.wireFrontendBackend(newServerRoute, backends[frontend.Backend])

err := newServerRoute.route.GetError()
if err != nil {
	log.Errorf("Error building route: %s", err)
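middlewares.NewCircuitBreaker above wraps oxy's cbreaker, which trips when an expression evaluated over rolling request metrics becomes true. A standalone sketch using oxy directly; the expression and addresses are illustrative:

```go
package main

import (
	"net/http"

	"github.com/vulcand/oxy/cbreaker"
)

func main() {
	backend := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	// Trip when more than half of recent requests fail at the network level.
	// The expression language also offers LatencyAtQuantileMS and ResponseCodeRatio.
	cb, _ := cbreaker.New(backend, "NetworkErrorRatio() > 0.5")
	http.ListenAndServe(":8080", cb)
}
```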
@@ -793,7 +794,7 @@ func (server *Server) wireFrontendBackend(serverRoute *serverRoute, handler http
	serverRoute.route.Handler(handler)
}

func (server *Server) loadEntryPointConfig(entryPointName string, entryPoint *EntryPoint) (negroni.Handler, error) {
	regex := entryPoint.Redirect.Regex
	replacement := entryPoint.Redirect.Replacement
	if len(entryPoint.Redirect.EntryPoint) > 0 {
@@ -817,9 +818,8 @@ func (server *Server) loadEntryPointConfig(entryPointName string, entryPoint *En
		return nil, err
	}
	log.Debugf("Creating entryPoint redirect %s -> %s : %s -> %s", entryPointName, entryPoint.Redirect.EntryPoint, regex, replacement)

	return rewrite, nil
}
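The signature change above returns the redirect rewrite as a negroni middleware rather than a finished http.Handler, so callers can compose it with other middleware instead of unwrapping a one-off chain. A minimal sketch of the negroni.Handler contract; the import path may be codegangsta/negroni in vendored trees of this vintage, and the header and handler bodies are illustrative:

```go
package main

import (
	"net/http"

	"github.com/urfave/negroni"
)

func main() {
	n := negroni.New()
	// A negroni.Handler receives the next handler explicitly, so pieces chain.
	n.Use(negroni.HandlerFunc(func(w http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
		w.Header().Set("X-Example", "1") // illustrative header
		next(w, r)
	}))
	n.UseHandler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	}))
	http.ListenAndServe(":8080", n)
}
```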

func (server *Server) buildDefaultHTTPRouter() *mux.Router {

@@ -261,7 +261,6 @@ func run(traefikConfiguration *TraefikConfiguration) {
			}
		}(t)
	}
	server.Wait()
	log.Info("Shutting down")
}

@@ -53,11 +53,13 @@
#
# ProvidersThrottleDuration = "5"

# Controls the maximum idle (keep-alive) connections to keep per-host. If zero,
# DefaultMaxIdleConnsPerHost from the Go standard library net/http package is used.
# If you encounter 'too many open files' errors, you can either increase this
# value or change the `ulimit`.
#
# Optional
# Default: 200
#
# MaxIdleConnsPerHost = 200

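This knob maps directly onto Go's http.Transport. A short sketch of what it controls; the target URL is illustrative:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// net/http's own default is only 2 idle connections per host.
	fmt.Println(http.DefaultMaxIdleConnsPerHost) // prints 2
	// Raising it keeps more keep-alive connections ready per backend host.
	transport := &http.Transport{MaxIdleConnsPerHost: 200}
	client := &http.Client{Transport: transport}
	if resp, err := client.Get("http://127.0.0.1:8080"); err == nil { // illustrative target
		resp.Body.Close()
	}
}
```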
@@ -628,12 +630,14 @@
# Zookeeper timeout (in seconds)
#
# Optional
# Default: 30
#
# ZkDetectionTimeout = 30

# Polling interval (in seconds)
#
# Optional
# Default: 30
#
# RefreshSeconds = 30

@@ -646,8 +650,9 @@
# HTTP Timeout (in seconds)
#
# Optional
# Default: 30
#
# StateTimeoutSecond = "30"

################################################################
# Kubernetes Ingress configuration backend

@@ -39,7 +39,7 @@ type CircuitBreaker struct {

// HealthCheck holds HealthCheck configuration
type HealthCheck struct {
	Path     string `json:"path,omitempty"`
	Interval string `json:"interval,omitempty"`
}
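With the rename, a health check is configured as a path probed on each server URL, plus an interval. A minimal sketch constructing the struct after this change; the struct is copied from the definition above and the values are illustrative:

```go
package main

import "fmt"

// HealthCheck mirrors the struct above (Traefik's types package).
type HealthCheck struct {
	Path     string `json:"path,omitempty"`
	Interval string `json:"interval,omitempty"`
}

func main() {
	// Illustrative values: probe GET <server URL>/health every 30 seconds.
	hc := HealthCheck{Path: "/health", Interval: "30s"}
	fmt.Printf("%+v\n", hc)
}
```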