Compare commits


30 Commits

Author SHA1 Message Date
Emile Vauge
5a57515c6b Merge pull request #1318 from containous/prepare-release-v1.2.0
Prepare release v1.2.0
2017-03-21 10:37:24 +01:00
Emile Vauge
3490b9d35d Prepare release v1.2.0
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-03-20 23:26:52 +01:00
Emile Vauge
dd0a20a668 Merge pull request #1304 from Yshayy/filter-non-running-tasks
Add filter on task status in addition to desired status (Docker Provider - swarm)
2017-03-20 19:28:10 +01:00
Emile Vauge
d13cef6ff6 sub-tests + Fatalf/Errorf
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-03-20 17:54:09 +01:00
Emile Vauge
9f149977d6 Add Docker task list test
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-03-20 12:36:49 +01:00
yshay
e65544f414 Add check on task status in addition to desired status 2017-03-20 12:12:53 +01:00
Sebastian
8010758b29 Docker: Added warning if network could not be found (#1310)
* Added warning if network could not be found

* Removed regex import from master

* Corrected wrong function call
2017-03-19 18:40:09 +01:00
Regner Blok-Andersen
75d92c6967 Abort Kubernetes Ingress update if Kubernetes API call fails (#1295)
* Abort Kubernetes Ingress update if Kubernetes API call fails

Currently if a Kubernetes API call fails we potentially remove a working service from Traefik. This changes it so if a Kubernetes API call fails we abort out of the ingress update and use the current working config. Github issue: #1240

Also added a test to cover when requested resources (services and endpoints) that the user has specified don’t exist.

* Specifically capturing the tc range as documented here: https://blog.golang.org/subtests

* Updating service names in the mock data to be more clear

* Updated expected data to match what currently happens in the loadIngress

* Adding a blank Servers to the expected output so we compare against that instead of nil.

* Replacing the JSON test output with spew for the TestMissingResources test to help ensure we have useful output in case of failures

* Adding a temporary fix to the GetEndpoints mocked function so we can override the return value indicating whether the endpoints exist.

After the 1.2 release the use of properExists should be removed and the GetEndpoints function should return false for the second value indicating the endpoint doesn’t exist. However at this time that would break a lot of the tests.

* Adding quick TODO line about removing the properExists property

* Link to issue 1307 re: properExists flag.
2017-03-17 16:34:34 +01:00
Emile Vauge
0f67cc7818 Merge pull request #1291 from containous/small-fixes
Small fixes
2017-03-15 18:04:41 +01:00
Emile Vauge
f428a752a5 Refactor k8s client config
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-03-15 15:24:01 +01:00
Emile Vauge
3fe3784b6c Removed unused log
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-03-15 15:24:01 +01:00
Emile Vauge
e4d63331cf Fix default config in generic Mesos provider
Signed-off-by: Emile Vauge <emile@vauge.com>
2017-03-15 15:24:01 +01:00
Vincent Demeester
8ae521db64 Merge pull request #1278 from akanto/update-oxy
Update Oxy, fix for #1199
2017-03-15 15:22:35 +01:00
Timo Reimann
24fde36b20 Revert "Pass context to ListReleases when checking for new versions."
This reverts commit 07db6a2df1.
2017-03-15 11:01:47 +01:00
Timo Reimann
7c55a4fd0c Update github.com/containous/oxy only. 2017-03-15 11:01:43 +01:00
Timo Reimann
7b1c0a97f7 Reset glide files to versions from upstream/v1.2. 2017-03-15 10:41:10 +01:00
Attila Kanto
8392846bd4 Update vulcand and pin deps in glide.yaml 2017-03-15 06:59:34 +01:00
Timo Reimann
07db6a2df1 Pass context to ListReleases when checking for new versions.
Required by go-github update.
2017-03-15 06:59:34 +01:00
Emile Vauge
cc9bb4b1f8 Merge pull request #1285 from timoreimann/rename-healthcheck-url-to-path
Rename health check URL parameter to path.
2017-03-14 23:39:24 +01:00
Timo Reimann
de91b99639 Rename health check URL parameter to path.
Also improve documentation.
2017-03-14 01:53:24 +01:00
Timo Reimann
c582ea5ff0 Merge pull request #1258 from matevzmihalic/fix/metrics
Fix metrics registering
2017-03-11 07:37:24 +01:00
Matevz Mihalic
b5de37e722 Fix metrics registering 2017-03-10 21:26:34 +01:00
Emile Vauge
ee9032f0bf Merge pull request #1209 from owen/ecs-chunk-taskarns
Chunk taskArns into groups of 100
2017-03-09 15:55:42 +01:00
Owen Marshall
11a68ce7f9 Chunk taskArns into groups of 100
If the ECS cluster has > 100 tasks, passing them to
ecs.DescribeTasksRequest() will result in the AWS API returning
errors.

This patch breaks them into chunks of at most 100, and calls
DescribeTasks for each chunk.

We also return early in case ListTasks returns no values; this
prevents DescribeTasks from throwing HTTP errors.
2017-03-07 20:52:33 -05:00
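The chunking this commit describes can be sketched as follows; `chunkBy` and the ARN strings are illustrative, not the provider's actual helpers:

```go
package main

import "fmt"

// chunkBy splits items into slices of at most size elements, so each
// DescribeTasks call receives no more than the API's limit of 100 ARNs.
func chunkBy(items []string, size int) [][]string {
	var chunks [][]string
	for size < len(items) {
		items, chunks = items[size:], append(chunks, items[:size])
	}
	return append(chunks, items)
}

func main() {
	// Hypothetical task ARNs; the real provider gets these from ECS ListTasks.
	var arns []string
	for i := 0; i < 250; i++ {
		arns = append(arns, fmt.Sprintf("arn:aws:ecs:task/%d", i))
	}
	if len(arns) == 0 {
		return // mirror the early return when ListTasks yields no tasks
	}
	for _, chunk := range chunkBy(arns, 100) {
		fmt.Println(len(chunk)) // 100, 100, 50
	}
}
```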
Emile Vauge
0dbac0af0d Merge pull request #1239 from timoreimann/update-maxidleconnsperhost-default-in-docs
Update DefaultMaxIdleConnsPerHost default in docs.
2017-03-07 15:20:07 +01:00
Timo Reimann
9541ee4cf6 Docs: Update default value for DefaultMaxIdleConnsPerHost. 2017-03-06 21:26:33 +01:00
Emile Vauge
2958a67ce5 Merge pull request #1225 from dtomcej/fix-670
Update WSS/WS Proto [Fixes #670]
2017-03-02 23:51:48 +01:00
dtomcej
eebbf6ebbb update oxy hash 2017-03-02 14:59:19 -07:00
Vincent Demeester
0133178b84 Merge pull request #1219 from SantoDE/feature-bump-gorancher
Bump go-rancher version
2017-03-02 13:13:44 +01:00
Manuel Laufenberg
9af5ba34ae Bump go-rancher version 2017-03-02 09:39:43 +01:00
20 changed files with 1043 additions and 280 deletions


@@ -1,5 +1,81 @@
 # Change Log
 
+## [v1.2.0](https://github.com/containous/traefik/tree/v1.2.0) (2017-03-20)
+[Full Changelog](https://github.com/containous/traefik/compare/v1.1.2...v1.2.0)
+
+**Merged pull requests:**
+
+- Docker: Added warning if network could not be found [\#1310](https://github.com/containous/traefik/pull/1310) ([zweizeichen](https://github.com/zweizeichen))
+- Add filter on task status in addition to desired status \(Docker Provider - swarm\) [\#1304](https://github.com/containous/traefik/pull/1304) ([Yshayy](https://github.com/Yshayy))
+- Abort Kubernetes Ingress update if Kubernetes API call fails [\#1295](https://github.com/containous/traefik/pull/1295) ([Regner](https://github.com/Regner))
+- Small fixes [\#1291](https://github.com/containous/traefik/pull/1291) ([emilevauge](https://github.com/emilevauge))
+- Rename health check URL parameter to path. [\#1285](https://github.com/containous/traefik/pull/1285) ([timoreimann](https://github.com/timoreimann))
+- Update Oxy, fix for \#1199 [\#1278](https://github.com/containous/traefik/pull/1278) ([akanto](https://github.com/akanto))
+- Fix metrics registering [\#1258](https://github.com/containous/traefik/pull/1258) ([matevzmihalic](https://github.com/matevzmihalic))
+- Update DefaultMaxIdleConnsPerHost default in docs. [\#1239](https://github.com/containous/traefik/pull/1239) ([timoreimann](https://github.com/timoreimann))
+- Update WSS/WS Proto \[Fixes \#670\] [\#1225](https://github.com/containous/traefik/pull/1225) ([dtomcej](https://github.com/dtomcej))
+- Bump go-rancher version [\#1219](https://github.com/containous/traefik/pull/1219) ([SantoDE](https://github.com/SantoDE))
+- Chunk taskArns into groups of 100 [\#1209](https://github.com/containous/traefik/pull/1209) ([owen](https://github.com/owen))
+- Prepare release v1.2.0 rc2 [\#1204](https://github.com/containous/traefik/pull/1204) ([emilevauge](https://github.com/emilevauge))
+- Revert "Ensure that we don't add balancees with no health check runs … [\#1198](https://github.com/containous/traefik/pull/1198) ([jangie](https://github.com/jangie))
+- Small fixes and improvments [\#1173](https://github.com/containous/traefik/pull/1173) ([SantoDE](https://github.com/SantoDE))
+- Fix docker issues with global and dead tasks [\#1167](https://github.com/containous/traefik/pull/1167) ([christopherobin](https://github.com/christopherobin))
+- Better ECS error checking [\#1143](https://github.com/containous/traefik/pull/1143) ([lpetre](https://github.com/lpetre))
+- Fix stats race condition [\#1141](https://github.com/containous/traefik/pull/1141) ([emilevauge](https://github.com/emilevauge))
+- ECS: Docs - info about cred. resolution and required access policies [\#1137](https://github.com/containous/traefik/pull/1137) ([rickard-von-essen](https://github.com/rickard-von-essen))
+- Healthcheck tests and doc [\#1132](https://github.com/containous/traefik/pull/1132) ([Juliens](https://github.com/Juliens))
+- Fix travis deploy [\#1128](https://github.com/containous/traefik/pull/1128) ([emilevauge](https://github.com/emilevauge))
+- Prepare release v1.2.0 rc1 [\#1126](https://github.com/containous/traefik/pull/1126) ([emilevauge](https://github.com/emilevauge))
+- Fix checkout initial before calling rmpr [\#1124](https://github.com/containous/traefik/pull/1124) ([emilevauge](https://github.com/emilevauge))
+- Feature rancher integration [\#1120](https://github.com/containous/traefik/pull/1120) ([SantoDE](https://github.com/SantoDE))
+- Fix glide go units [\#1119](https://github.com/containous/traefik/pull/1119) ([emilevauge](https://github.com/emilevauge))
+- Carry \#818 — Add systemd watchdog feature [\#1116](https://github.com/containous/traefik/pull/1116) ([vdemeester](https://github.com/vdemeester))
+- Skip file permission check on Windows [\#1115](https://github.com/containous/traefik/pull/1115) ([StefanScherer](https://github.com/StefanScherer))
+- Fix Docker API version for Windows [\#1113](https://github.com/containous/traefik/pull/1113) ([StefanScherer](https://github.com/StefanScherer))
+- Fix git rpr [\#1109](https://github.com/containous/traefik/pull/1109) ([emilevauge](https://github.com/emilevauge))
+- Fix docker version specifier [\#1108](https://github.com/containous/traefik/pull/1108) ([timoreimann](https://github.com/timoreimann))
+- Merge v1.1.2 master [\#1105](https://github.com/containous/traefik/pull/1105) ([emilevauge](https://github.com/emilevauge))
+- add sh before script in deploy... [\#1103](https://github.com/containous/traefik/pull/1103) ([emilevauge](https://github.com/emilevauge))
+- \[doc\] typo fixes for kubernetes user guide [\#1102](https://github.com/containous/traefik/pull/1102) ([bamarni](https://github.com/bamarni))
+- add skip\_cleanup in deploy [\#1101](https://github.com/containous/traefik/pull/1101) ([emilevauge](https://github.com/emilevauge))
+- Fix k8s example UI port. [\#1098](https://github.com/containous/traefik/pull/1098) ([ddunkin](https://github.com/ddunkin))
+- Fix marathon provider [\#1090](https://github.com/containous/traefik/pull/1090) ([diegooliveira](https://github.com/diegooliveira))
+- Add an ECS provider [\#1088](https://github.com/containous/traefik/pull/1088) ([lpetre](https://github.com/lpetre))
+- Update comment to reflect the code [\#1087](https://github.com/containous/traefik/pull/1087) ([np](https://github.com/np))
+- update NYTimes/gziphandler fixes \#1059 [\#1084](https://github.com/containous/traefik/pull/1084) ([JamesKyburz](https://github.com/JamesKyburz))
+- Ensure that we don't add balancees with no health check runs if there is a health check defined on it [\#1080](https://github.com/containous/traefik/pull/1080) ([jangie](https://github.com/jangie))
+- Add FreeBSD & OpenBSD to crossbinary [\#1078](https://github.com/containous/traefik/pull/1078) ([geoffgarside](https://github.com/geoffgarside))
+- Fix metrics for multiple entry points [\#1071](https://github.com/containous/traefik/pull/1071) ([matevzmihalic](https://github.com/matevzmihalic))
+- Allow setting load balancer method and sticky using service annotations [\#1068](https://github.com/containous/traefik/pull/1068) ([bakins](https://github.com/bakins))
+- Fix travis script [\#1067](https://github.com/containous/traefik/pull/1067) ([emilevauge](https://github.com/emilevauge))
+- Add missing fmt verb specifier in k8s provider. [\#1066](https://github.com/containous/traefik/pull/1066) ([timoreimann](https://github.com/timoreimann))
+- Add git rpr command [\#1063](https://github.com/containous/traefik/pull/1063) ([emilevauge](https://github.com/emilevauge))
+- Fix k8s example [\#1062](https://github.com/containous/traefik/pull/1062) ([emilevauge](https://github.com/emilevauge))
+- Replace underscores to dash in autogenerated urls \(docker provider\) [\#1061](https://github.com/containous/traefik/pull/1061) ([WTFKr0](https://github.com/WTFKr0))
+- Don't run go test on .glide cache folder [\#1057](https://github.com/containous/traefik/pull/1057) ([vdemeester](https://github.com/vdemeester))
+- Allow setting circuitbreaker expression via Kubernetes annotation [\#1056](https://github.com/containous/traefik/pull/1056) ([bakins](https://github.com/bakins))
+- Improving instrumentation. [\#1042](https://github.com/containous/traefik/pull/1042) ([enxebre](https://github.com/enxebre))
+- Update user guide for upcoming `docker stack deploy` [\#1041](https://github.com/containous/traefik/pull/1041) ([twelvelabs](https://github.com/twelvelabs))
+- Support sticky sessions under SWARM Mode. \#1024 [\#1033](https://github.com/containous/traefik/pull/1033) ([foleymic](https://github.com/foleymic))
+- Allow for wildcards in k8s ingress host, fixes \#792 [\#1029](https://github.com/containous/traefik/pull/1029) ([sheerun](https://github.com/sheerun))
+- Don't fetch ACME certificates for frontends using non-TLS entrypoints \(\#989\) [\#1023](https://github.com/containous/traefik/pull/1023) ([syfonseq](https://github.com/syfonseq))
+- Return Proper Non-ACME certificate - Fixes Issue 672 [\#1018](https://github.com/containous/traefik/pull/1018) ([dtomcej](https://github.com/dtomcej))
+- Fix docs build and add missing benchmarks page [\#1017](https://github.com/containous/traefik/pull/1017) ([csabapalfi](https://github.com/csabapalfi))
+- Set a NopCloser request body with retry middleware [\#1016](https://github.com/containous/traefik/pull/1016) ([bamarni](https://github.com/bamarni))
+- instruct to flatten dependencies with glide [\#1010](https://github.com/containous/traefik/pull/1010) ([bamarni](https://github.com/bamarni))
+- check permissions on acme.json during startup [\#1009](https://github.com/containous/traefik/pull/1009) ([bamarni](https://github.com/bamarni))
+- \[doc\] few tweaks on the basics page [\#1005](https://github.com/containous/traefik/pull/1005) ([bamarni](https://github.com/bamarni))
+- Import order as goimports does [\#1004](https://github.com/containous/traefik/pull/1004) ([vdemeester](https://github.com/vdemeester))
+- See the right go report badge [\#991](https://github.com/containous/traefik/pull/991) ([guilhem](https://github.com/guilhem))
+- Add multiple values for one rule to docs [\#978](https://github.com/containous/traefik/pull/978) ([j0hnsmith](https://github.com/j0hnsmith))
+- Add ACME/Lets Encrypt integration tests [\#975](https://github.com/containous/traefik/pull/975) ([trecloux](https://github.com/trecloux))
+- deploy.sh: upload release source tarball [\#969](https://github.com/containous/traefik/pull/969) ([Mic92](https://github.com/Mic92))
+- toml zookeeper doc fix [\#948](https://github.com/containous/traefik/pull/948) ([brdude](https://github.com/brdude))
+- Add Rule AddPrefix [\#931](https://github.com/containous/traefik/pull/931) ([Juliens](https://github.com/Juliens))
+- Add bug command [\#921](https://github.com/containous/traefik/pull/921) ([emilevauge](https://github.com/emilevauge))
+- \(WIP\) feat: HealthCheck [\#918](https://github.com/containous/traefik/pull/918) ([Juliens](https://github.com/Juliens))
+- Add ability to set authenticated user in request header [\#889](https://github.com/containous/traefik/pull/889) ([ViViDboarder](https://github.com/ViViDboarder))
+- IP-per-task: [\#841](https://github.com/containous/traefik/pull/841) ([diegooliveira](https://github.com/diegooliveira))
+
 ## [v1.2.0-rc2](https://github.com/containous/traefik/tree/v1.2.0-rc2) (2017-03-01)
 [Full Changelog](https://github.com/containous/traefik/compare/v1.2.0-rc1...v1.2.0-rc2)


@@ -392,6 +392,9 @@ func NewTraefikDefaultPointersConfiguration() *TraefikConfiguration {
 	defaultMesos.Endpoint = "http://127.0.0.1:5050"
 	defaultMesos.ExposedByDefault = true
 	defaultMesos.Constraints = types.Constraints{}
+	defaultMesos.RefreshSeconds = 30
+	defaultMesos.ZkDetectionTimeout = 30
+	defaultMesos.StateTimeoutSecond = 30
 
 	//default ECS
 	var defaultECS provider.ECS


@@ -236,16 +236,22 @@ For example:
 sticky = true
 ```
 
-Healthcheck URL can be configured with a relative URL for `healthcheck.URL`.
-Interval between healthcheck can be configured by using `healthcheck.interval`
-(default: 30s)
+A health check can be configured in order to remove a backend from LB rotation
+as long as it keeps returning HTTP status codes other than 200 OK to HTTP GET
+requests periodically carried out by Traefik. The check is defined by a path
+appended to the backend URL and an interval (given in a format understood by
+[time.ParseDuration](https://golang.org/pkg/time/#ParseDuration)) specifying how
+often the health check should be executed (the default being 30 seconds). Each
+backend must respond to the health check within 5 seconds.
+
+A recovering backend returning 200 OK responses again is being returned to the
+LB rotation pool.
 
 For example:
 ```toml
 [backends]
   [backends.backend1]
     [backends.backend1.healthcheck]
-URL = "/health"
+path = "/health"
 interval = "10s"
 ```


@@ -62,11 +62,13 @@
 #
 # ProvidersThrottleDuration = "5"
 
-# If non-zero, controls the maximum idle (keep-alive) to keep per-host. If zero, DefaultMaxIdleConnsPerHost is used.
-# If you encounter 'too many open files' errors, you can either change this value, or change `ulimit` value.
+# Controls the maximum idle (keep-alive) connections to keep per-host. If zero, DefaultMaxIdleConnsPerHost
+# from the Go standard library net/http module is used.
+# If you encounter 'too many open files' errors, you can either increase this
+# value or change the `ulimit`.
 #
 # Optional
-# Default: http.DefaultMaxIdleConnsPerHost
+# Default: 200
 #
 # MaxIdleConnsPerHost = 200
@@ -814,7 +816,7 @@ Labels can be used on containers to override default behaviour:
 - `traefik.frontend.passHostHeader=true`: forward client `Host` header to the backend.
 - `traefik.frontend.priority=10`: override default frontend priority
 - `traefik.frontend.entryPoints=http,https`: assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`.
-- `traefik.docker.network`: Set the docker network to use for connections to this container
+- `traefik.docker.network`: Set the docker network to use for connections to this container. If a container is linked to several networks, be sure to set the proper network name (you can check with docker inspect <container_id>) otherwise it will randomly pick one (depending on how docker is returning them). For instance when deploying docker `stack` from compose files, the compose defined networks will be prefixed with the `stack` name.
 
 NB: when running inside a container, Træfɪk will need network access through `docker network connect <network> <traefik-container>`
@@ -1003,12 +1005,14 @@ domain = "mesos.localhost"
 # Zookeeper timeout (in seconds)
 #
 # Optional
+# Default: 30
 #
 # ZkDetectionTimeout = 30
 
 # Polling interval (in seconds)
 #
 # Optional
+# Default: 30
 #
 # RefreshSeconds = 30
@@ -1021,8 +1025,9 @@ domain = "mesos.localhost"
 # HTTP Timeout (in seconds)
 #
 # Optional
+# Default: 30
 #
-# StateTimeoutSecond = "host"
+# StateTimeoutSecond = "30"
 ```
 
 ## Kubernetes Ingress backend

glide.lock generated

@@ -1,5 +1,5 @@
-hash: f2a0b9af55c8312762e4148bd876e5bd2451c240b407fbb6d4a5dbc56bf46c05
-updated: 2017-02-07T14:15:03.854561081+01:00
+hash: 741ec5fae23f12e6c0fa0e4c7c00c0af06fac1ddc199dd4b45c904856890b347
+updated: 2017-03-15T10:48:05.202095822+01:00
 imports:
 - name: bitbucket.org/ww/goautoneg
   version: 75cd24fc2f2c2a2088577d12123ddee5f54e0675
@@ -258,7 +258,7 @@ imports:
   - log
   - swagger
 - name: github.com/gambol99/go-marathon
-  version: 9ab64d9f0259e8800911d92ebcd4d5b981917919
+  version: 6b00a5b651b1beb2c6821863f7c60df490bd46c8
 - name: github.com/ghodss/yaml
   version: 04f313413ffd65ce25f2541bfd2b2ceec5c0908c
 - name: github.com/go-ini/ini
@@ -301,7 +301,7 @@ imports:
 - name: github.com/gorilla/context
   version: 1ea25387ff6f684839d82767c1733ff4d4d15d0a
 - name: github.com/gorilla/websocket
-  version: c36f2fe5c330f0ac404b616b96c438b8616b1aaf
+  version: 4873052237e4eeda85cf50c071ef33836fe8e139
 - name: github.com/hashicorp/consul
   version: fce7d75609a04eeb9d4bf41c8dc592aac18fc97d
   subpackages:
@@ -389,7 +389,7 @@ imports:
 - name: github.com/pborman/uuid
   version: 5007efa264d92316c43112bc573e754bc889b7b1
 - name: github.com/pkg/errors
-  version: 248dadf4e9068a0b3e79f02ed0a610d935de5302
+  version: bfd5150e4e41705ded2129ec33379de1cb90b513
 - name: github.com/pmezard/go-difflib
   version: d8ed2627bdf02c080bf22230dbb337003b7aba2d
   subpackages:
@@ -419,7 +419,7 @@ imports:
   subpackages:
   - src/egoscale
 - name: github.com/rancher/go-rancher
-  version: 2c43ff300f3eafcbd7d0b89b10427fc630efdc1e
+  version: 5b8f6cc26b355ba03d7611fce3844155b7baf05b
   subpackages:
   - client
 - name: github.com/ryanuber/go-glob
@@ -460,7 +460,7 @@ imports:
 - name: github.com/vdemeester/docker-events
   version: be74d4929ec1ad118df54349fda4b0cba60f849b
 - name: github.com/vulcand/oxy
-  version: fcc76b52eb8568540a020b7a99e854d9d752b364
+  version: f88530866c561d24a6b5aac49f76d6351b788b9f
   repo: https://github.com/containous/oxy.git
   vcs: git
   subpackages:
@@ -600,7 +600,7 @@ imports:
 - name: gopkg.in/yaml.v2
   version: bef53efd0c76e49e6de55ead051f886bea7e9420
 - name: k8s.io/client-go
-  version: 843f7c4f28b1f647f664f883697107d5c02c5acc
+  version: 1195e3a8ee1a529d53eed7c624527a68555ddf1f
   subpackages:
   - 1.5/discovery
   - 1.5/kubernetes


@@ -9,7 +9,7 @@ import:
 - package: github.com/containous/flaeg
   version: a731c034dda967333efce5f8d276aeff11f8ff87
 - package: github.com/vulcand/oxy
-  version: fcc76b52eb8568540a020b7a99e854d9d752b364
+  version: f88530866c561d24a6b5aac49f76d6351b788b9f
   repo: https://github.com/containous/oxy.git
   vcs: git
   subpackages:
@@ -138,4 +138,4 @@ import:
   subpackages:
   - proto
 - package: github.com/rancher/go-rancher
-  version: 2c43ff300f3eafcbd7d0b89b10427fc630efdc1e
+  version: 5b8f6cc26b355ba03d7611fce3844155b7baf05b


@@ -25,7 +25,7 @@ func GetHealthCheck() *HealthCheck {
 // BackendHealthCheck HealthCheck configuration for a backend
 type BackendHealthCheck struct {
-	URL          string
+	Path         string
 	Interval     time.Duration
 	DisabledURLs []*url.URL
 	lb           loadBalancer
@@ -81,7 +81,7 @@ func (hc *HealthCheck) execute(ctx context.Context) {
 	enabledURLs := currentBackend.lb.Servers()
 	var newDisabledURLs []*url.URL
 	for _, url := range currentBackend.DisabledURLs {
-		if checkHealth(url, currentBackend.URL) {
+		if checkHealth(url, currentBackend.Path) {
 			log.Debugf("HealthCheck is up [%s]: Upsert in server list", url.String())
 			currentBackend.lb.UpsertServer(url, roundrobin.Weight(1))
 		} else {
@@ -91,7 +91,7 @@ func (hc *HealthCheck) execute(ctx context.Context) {
 	currentBackend.DisabledURLs = newDisabledURLs
 	for _, url := range enabledURLs {
-		if !checkHealth(url, currentBackend.URL) {
+		if !checkHealth(url, currentBackend.Path) {
 			log.Debugf("HealthCheck has failed [%s]: Remove from server list", url.String())
 			currentBackend.lb.RemoveServer(url)
 			currentBackend.DisabledURLs = append(currentBackend.DisabledURLs, url)
@@ -104,12 +104,12 @@ func (hc *HealthCheck) execute(ctx context.Context) {
 	}
 }
 
-func checkHealth(serverURL *url.URL, checkURL string) bool {
+func checkHealth(serverURL *url.URL, path string) bool {
 	timeout := time.Duration(5 * time.Second)
 	client := http.Client{
 		Timeout: timeout,
 	}
-	resp, err := client.Get(serverURL.String() + checkURL)
+	resp, err := client.Get(serverURL.String() + path)
 	if err != nil || resp.StatusCode != 200 {
 		return false
 	}


@@ -32,7 +32,8 @@ func (p *Prometheus) getLatencyHistogram() metrics.Histogram {
 // NewPrometheus returns a new prometheus Metrics implementation.
 func NewPrometheus(name string, config *types.Prometheus) *Prometheus {
 	var m Prometheus
-	m.reqsCounter = prometheus.NewCounterFrom(
+
+	cv := stdprometheus.NewCounterVec(
 		stdprometheus.CounterOpts{
 			Name: reqsName,
 			Help: "How many HTTP requests processed, partitioned by status code and method.",
@@ -41,6 +42,17 @@ func NewPrometheus(name string, config *types.Prometheus) *Prometheus {
 		[]string{"code", "method"},
 	)
+	err := stdprometheus.Register(cv)
+	if err != nil {
+		e, ok := err.(stdprometheus.AlreadyRegisteredError)
+		if !ok {
+			panic(err)
+		}
+		m.reqsCounter = prometheus.NewCounter(e.ExistingCollector.(*stdprometheus.CounterVec))
+	} else {
+		m.reqsCounter = prometheus.NewCounter(cv)
+	}
+
 	var buckets []float64
 	if config.Buckets != nil {
 		buckets = config.Buckets
@@ -48,7 +60,7 @@ func NewPrometheus(name string, config *types.Prometheus) *Prometheus {
 		buckets = []float64{0.1, 0.3, 1.2, 5}
 	}
-	m.latencyHistogram = prometheus.NewHistogramFrom(
+	hv := stdprometheus.NewHistogramVec(
 		stdprometheus.HistogramOpts{
 			Name: latencyName,
 			Help: "How long it took to process the request.",
@@ -57,6 +69,18 @@ func NewPrometheus(name string, config *types.Prometheus) *Prometheus {
 		},
 		[]string{},
 	)
+	err = stdprometheus.Register(hv)
+	if err != nil {
+		e, ok := err.(stdprometheus.AlreadyRegisteredError)
+		if !ok {
+			panic(err)
+		}
+		m.latencyHistogram = prometheus.NewHistogram(e.ExistingCollector.(*stdprometheus.HistogramVec))
+	} else {
+		m.latencyHistogram = prometheus.NewHistogram(hv)
+	}
+
 	return &m
 }


@@ -9,10 +9,19 @@ import (
"github.com/codegangsta/negroni" "github.com/codegangsta/negroni"
"github.com/containous/traefik/types" "github.com/containous/traefik/types"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp" "github.com/prometheus/client_golang/prometheus/promhttp"
dto "github.com/prometheus/client_model/go"
"github.com/stretchr/testify/assert"
) )
func TestPrometheus(t *testing.T) { func TestPrometheus(t *testing.T) {
metricsFamily, err := prometheus.DefaultGatherer.Gather()
if err != nil {
t.Fatalf("could not gather metrics family: %s", err)
}
initialMetricsFamilyCount := len(metricsFamily)
recorder := httptest.NewRecorder() recorder := httptest.NewRecorder()
n := negroni.New() n := negroni.New()
@@ -42,6 +51,80 @@ func TestPrometheus(t *testing.T) {
t.Errorf("body does not contain request total entry '%s'", reqsName) t.Errorf("body does not contain request total entry '%s'", reqsName)
} }
if !strings.Contains(body, latencyName) { if !strings.Contains(body, latencyName) {
t.Errorf("body does not contain request duration entry '%s'", reqsName) t.Errorf("body does not contain request duration entry '%s'", latencyName)
}
// Register the same metrics again
metricsMiddlewareBackend = NewMetricsWrapper(NewPrometheus("test", &types.Prometheus{}))
n = negroni.New()
n.Use(metricsMiddlewareBackend)
n.UseHandler(r)
n.ServeHTTP(recorder, req2)
metricsFamily, err = prometheus.DefaultGatherer.Gather()
if err != nil {
t.Fatalf("could not gather metrics family: %s", err)
}
tests := []struct {
name string
labels map[string]string
assert func(*dto.MetricFamily)
}{
{
name: reqsName,
labels: map[string]string{
"code": "200",
"method": "GET",
"service": "test",
},
assert: func(family *dto.MetricFamily) {
cv := uint(family.Metric[0].Counter.GetValue())
if cv != 3 {
t.Errorf("gathered metrics do not contain correct value for total requests, got %d", cv)
}
},
},
{
name: latencyName,
labels: map[string]string{
"service": "test",
},
assert: func(family *dto.MetricFamily) {
sc := family.Metric[0].Histogram.GetSampleCount()
if sc != 3 {
t.Errorf("gathered metrics do not contain correct sample count for request duration, got %d", sc)
}
},
},
}
assert.Equal(t, len(tests), len(metricsFamily)-initialMetricsFamilyCount, "gathered traefic metrics count does not match tests count")
for _, test := range tests {
family := findMetricFamily(test.name, metricsFamily)
if family == nil {
t.Errorf("gathered metrics do not contain '%s'", test.name)
continue
}
for _, label := range family.Metric[0].Label {
val, ok := test.labels[*label.Name]
if !ok {
t.Errorf("'%s' metric contains unexpected label '%s'", test.name, label)
} else if val != *label.Value {
t.Errorf("label '%s' in metric '%s' has wrong value '%s'", label, test.name, *label.Value)
}
}
test.assert(family)
}
}
func findMetricFamily(name string, families []*dto.MetricFamily) *dto.MetricFamily {
for _, family := range families {
if family.GetName() == name {
return family
}
}
return nil
}


@@ -412,6 +412,8 @@ func (provider *Docker) getIPAddress(container dockerData) string {
if network != nil {
return network.Addr
}
log.Warnf("Could not find network named '%s' for container '%s'! Maybe you're missing the project's prefix in the label? Defaulting to first available network.", label, container.Name)
}
}
@@ -688,6 +690,9 @@ func listTasks(ctx context.Context, dockerClient client.APIClient, serviceID str
var dockerDataList []dockerData
for _, task := range taskList {
if task.Status.State != swarm.TaskStateRunning {
continue
}
dockerData := parseTasks(task, serviceDockerData, networkMap, isGlobalSvc)
dockerDataList = append(dockerDataList, dockerData)
}
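The hunk above skips any task whose observed state is not running, regardless of its desired state. A minimal standalone sketch of that filter, using a hypothetical `task` struct in place of `swarm.Task`:

```go
package main

import "fmt"

// task is a hypothetical stand-in for swarm.Task: only the observed
// status state matters for this filter.
type task struct {
	ID    string
	State string // e.g. "running", "pending", "failed"
}

// filterRunning keeps only tasks whose current (not desired) state is
// "running", mirroring the check added to listTasks.
func filterRunning(tasks []task) []task {
	var out []task
	for _, t := range tasks {
		if t.State != "running" {
			continue
		}
		out = append(out, t)
	}
	return out
}

func main() {
	tasks := []task{
		{ID: "id1", State: "running"},
		{ID: "id2", State: "pending"},
		{ID: "id3", State: "failed"},
		{ID: "id4", State: "running"},
	}
	for _, t := range filterRunning(tasks) {
		fmt.Println(t.ID) // prints id1, then id4
	}
}
```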


@@ -6,11 +6,16 @@ import (
"testing"
"github.com/containous/traefik/types"
"github.com/davecgh/go-spew/spew"
dockerclient "github.com/docker/engine-api/client"
docker "github.com/docker/engine-api/types"
dockertypes "github.com/docker/engine-api/types"
"github.com/docker/engine-api/types/container"
"github.com/docker/engine-api/types/network"
"github.com/docker/engine-api/types/swarm"
"github.com/docker/go-connections/nat"
"golang.org/x/net/context"
"strconv"
)
func TestDockerGetFrontendName(t *testing.T) {
@@ -85,12 +90,16 @@ func TestDockerGetFrontendName(t *testing.T) {
},
}
for containerID, e := range containers {
e := e
t.Run(strconv.Itoa(containerID), func(t *testing.T) {
t.Parallel()
dockerData := parseContainer(e.container)
actual := provider.getFrontendName(dockerData)
if actual != e.expected {
t.Errorf("expected %q, got %q", e.expected, actual)
}
})
}
}
@@ -149,12 +158,16 @@ func TestDockerGetFrontendRule(t *testing.T) {
},
}
for containerID, e := range containers {
e := e
t.Run(strconv.Itoa(containerID), func(t *testing.T) {
t.Parallel()
dockerData := parseContainer(e.container)
actual := provider.getFrontendRule(dockerData)
if actual != e.expected {
t.Errorf("expected %q, got %q", e.expected, actual)
}
})
}
}
@@ -198,12 +211,16 @@ func TestDockerGetBackend(t *testing.T) {
},
}
for containerID, e := range containers {
e := e
t.Run(strconv.Itoa(containerID), func(t *testing.T) {
t.Parallel()
dockerData := parseContainer(e.container)
actual := provider.getBackend(dockerData)
if actual != e.expected {
t.Errorf("expected %q, got %q", e.expected, actual)
}
})
}
}
@@ -299,12 +316,16 @@ func TestDockerGetIPAddress(t *testing.T) { // TODO
},
}
for containerID, e := range containers {
e := e
t.Run(strconv.Itoa(containerID), func(t *testing.T) {
t.Parallel()
dockerData := parseContainer(e.container)
actual := provider.getIPAddress(dockerData)
if actual != e.expected {
t.Errorf("expected %q, got %q", e.expected, actual)
}
})
}
}
@@ -412,12 +433,16 @@ func TestDockerGetPort(t *testing.T) {
},
}
for containerID, e := range containers {
e := e
t.Run(strconv.Itoa(containerID), func(t *testing.T) {
t.Parallel()
dockerData := parseContainer(e.container)
actual := provider.getPort(dockerData)
if actual != e.expected {
t.Errorf("expected %q, got %q", e.expected, actual)
}
})
}
}
@@ -452,12 +477,16 @@ func TestDockerGetWeight(t *testing.T) {
},
}
for containerID, e := range containers {
e := e
t.Run(strconv.Itoa(containerID), func(t *testing.T) {
t.Parallel()
dockerData := parseContainer(e.container)
actual := provider.getWeight(dockerData)
if actual != e.expected {
t.Errorf("expected %q, got %q", e.expected, actual)
}
})
}
}
@@ -494,12 +523,16 @@ func TestDockerGetDomain(t *testing.T) {
},
}
for containerID, e := range containers {
e := e
t.Run(strconv.Itoa(containerID), func(t *testing.T) {
t.Parallel()
dockerData := parseContainer(e.container)
actual := provider.getDomain(dockerData)
if actual != e.expected {
t.Errorf("expected %q, got %q", e.expected, actual)
}
})
}
}
@@ -534,12 +567,16 @@ func TestDockerGetProtocol(t *testing.T) {
},
}
for containerID, e := range containers {
e := e
t.Run(strconv.Itoa(containerID), func(t *testing.T) {
t.Parallel()
dockerData := parseContainer(e.container)
actual := provider.getProtocol(dockerData)
if actual != e.expected {
t.Errorf("expected %q, got %q", e.expected, actual)
}
})
}
}
@@ -573,12 +610,16 @@ func TestDockerGetPassHostHeader(t *testing.T) {
},
}
for containerID, e := range containers {
e := e
t.Run(strconv.Itoa(containerID), func(t *testing.T) {
t.Parallel()
dockerData := parseContainer(e.container)
actual := provider.getPassHostHeader(dockerData)
if actual != e.expected {
t.Errorf("expected %q, got %q", e.expected, actual)
}
})
}
}
@@ -611,18 +652,22 @@ func TestDockerGetLabel(t *testing.T) {
},
}
for containerID, e := range containers {
e := e
t.Run(strconv.Itoa(containerID), func(t *testing.T) {
t.Parallel()
dockerData := parseContainer(e.container)
label, err := getLabel(dockerData, "foo")
if e.expected != "" {
if err == nil || !strings.Contains(err.Error(), e.expected) {
t.Errorf("expected an error with %q, got %v", e.expected, err)
}
} else {
if label != "bar" {
t.Errorf("expected label 'bar', got %s", label)
}
}
})
}
}
@@ -678,17 +723,21 @@ func TestDockerGetLabels(t *testing.T) {
},
}
for containerID, e := range containers {
e := e
t.Run(strconv.Itoa(containerID), func(t *testing.T) {
t.Parallel()
dockerData := parseContainer(e.container)
labels, err := getLabels(dockerData, []string{"foo", "bar"})
if !reflect.DeepEqual(labels, e.expectedLabels) {
t.Errorf("expect %v, got %v", e.expectedLabels, labels)
}
if e.expectedError != "" {
if err == nil || !strings.Contains(err.Error(), e.expectedError) {
t.Errorf("expected an error with %q, got %v", e.expectedError, err)
}
}
})
}
}
@@ -912,13 +961,17 @@ func TestDockerTraefikFilter(t *testing.T) {
},
}
for containerID, e := range containers {
e := e
t.Run(strconv.Itoa(containerID), func(t *testing.T) {
t.Parallel()
provider.ExposedByDefault = e.exposedByDefault
dockerData := parseContainer(e.container)
actual := provider.containerFilter(dockerData)
if actual != e.expected {
t.Errorf("expected %v for %+v, got %+v", e.expected, e, actual)
}
})
}
}
@@ -1134,21 +1187,25 @@ func TestDockerLoadDockerConfig(t *testing.T) {
ExposedByDefault: true,
}
for caseID, c := range cases {
c := c
t.Run(strconv.Itoa(caseID), func(t *testing.T) {
t.Parallel()
var dockerDataList []dockerData
for _, container := range c.containers {
dockerData := parseContainer(container)
dockerDataList = append(dockerDataList, dockerData)
}
actualConfig := provider.loadDockerConfig(dockerDataList)
// Compare backends
if !reflect.DeepEqual(actualConfig.Backends, c.expectedBackends) {
t.Errorf("expected %#v, got %#v", c.expectedBackends, actualConfig.Backends)
}
if !reflect.DeepEqual(actualConfig.Frontends, c.expectedFrontends) {
t.Errorf("expected %#v, got %#v", c.expectedFrontends, actualConfig.Frontends)
}
})
}
}
@@ -1232,12 +1289,16 @@ func TestSwarmGetFrontendName(t *testing.T) {
},
}
for serviceID, e := range services {
e := e
t.Run(strconv.Itoa(serviceID), func(t *testing.T) {
t.Parallel()
dockerData := parseService(e.service, e.networks)
actual := provider.getFrontendName(dockerData)
if actual != e.expected {
t.Errorf("expected %q, got %q", e.expected, actual)
}
})
}
}
@@ -1304,12 +1365,16 @@ func TestSwarmGetFrontendRule(t *testing.T) {
},
}
for serviceID, e := range services {
e := e
t.Run(strconv.Itoa(serviceID), func(t *testing.T) {
t.Parallel()
dockerData := parseService(e.service, e.networks)
actual := provider.getFrontendRule(dockerData)
if actual != e.expected {
t.Errorf("expected %q, got %q", e.expected, actual)
}
})
}
}
@@ -1361,12 +1426,16 @@ func TestSwarmGetBackend(t *testing.T) {
},
}
for serviceID, e := range services {
e := e
t.Run(strconv.Itoa(serviceID), func(t *testing.T) {
t.Parallel()
dockerData := parseService(e.service, e.networks)
actual := provider.getBackend(dockerData)
if actual != e.expected {
t.Errorf("expected %q, got %q", e.expected, actual)
}
})
}
}
@@ -1458,12 +1527,16 @@ func TestSwarmGetIPAddress(t *testing.T) {
},
}
for serviceID, e := range services {
e := e
t.Run(strconv.Itoa(serviceID), func(t *testing.T) {
t.Parallel()
dockerData := parseService(e.service, e.networks)
actual := provider.getIPAddress(dockerData)
if actual != e.expected {
t.Errorf("expected %q, got %q", e.expected, actual)
}
})
}
}
@@ -1496,12 +1569,16 @@ func TestSwarmGetPort(t *testing.T) {
},
}
for serviceID, e := range services {
e := e
t.Run(strconv.Itoa(serviceID), func(t *testing.T) {
t.Parallel()
dockerData := parseService(e.service, e.networks)
actual := provider.getPort(dockerData)
if actual != e.expected {
t.Errorf("expected %q, got %q", e.expected, actual)
}
})
}
}
@@ -1548,12 +1625,16 @@ func TestSwarmGetWeight(t *testing.T) {
},
}
for serviceID, e := range services {
e := e
t.Run(strconv.Itoa(serviceID), func(t *testing.T) {
t.Parallel()
dockerData := parseService(e.service, e.networks)
actual := provider.getWeight(dockerData)
if actual != e.expected {
t.Errorf("expected %q, got %q", e.expected, actual)
}
})
}
}
@@ -1601,12 +1682,16 @@ func TestSwarmGetDomain(t *testing.T) {
},
}
for serviceID, e := range services {
e := e
t.Run(strconv.Itoa(serviceID), func(t *testing.T) {
t.Parallel()
dockerData := parseService(e.service, e.networks)
actual := provider.getDomain(dockerData)
if actual != e.expected {
t.Errorf("expected %q, got %q", e.expected, actual)
}
})
}
}
@@ -1653,12 +1738,16 @@ func TestSwarmGetProtocol(t *testing.T) {
},
}
for serviceID, e := range services {
e := e
t.Run(strconv.Itoa(serviceID), func(t *testing.T) {
t.Parallel()
dockerData := parseService(e.service, e.networks)
actual := provider.getProtocol(dockerData)
if actual != e.expected {
t.Errorf("expected %q, got %q", e.expected, actual)
}
})
}
}
@@ -1705,17 +1794,20 @@ func TestSwarmGetPassHostHeader(t *testing.T) {
},
}
for serviceID, e := range services {
e := e
t.Run(strconv.Itoa(serviceID), func(t *testing.T) {
t.Parallel()
dockerData := parseService(e.service, e.networks)
actual := provider.getPassHostHeader(dockerData)
if actual != e.expected {
t.Errorf("expected %q, got %q", e.expected, actual)
}
})
}
}
func TestSwarmGetLabel(t *testing.T) {
services := []struct {
service swarm.Service
expected string
@@ -1754,18 +1846,22 @@ func TestSwarmGetLabel(t *testing.T) {
},
}
for serviceID, e := range services {
e := e
t.Run(strconv.Itoa(serviceID), func(t *testing.T) {
t.Parallel()
dockerData := parseService(e.service, e.networks)
label, err := getLabel(dockerData, "foo")
if e.expected != "" {
if err == nil || !strings.Contains(err.Error(), e.expected) {
t.Errorf("expected an error with %q, got %v", e.expected, err)
}
} else {
if label != "bar" {
t.Errorf("expected label 'bar', got %s", label)
}
}
})
}
}
@@ -1826,17 +1922,21 @@ func TestSwarmGetLabels(t *testing.T) {
},
}
for serviceID, e := range services {
e := e
t.Run(strconv.Itoa(serviceID), func(t *testing.T) {
t.Parallel()
dockerData := parseService(e.service, e.networks)
labels, err := getLabels(dockerData, []string{"foo", "bar"})
if !reflect.DeepEqual(labels, e.expectedLabels) {
t.Errorf("expect %v, got %v", e.expectedLabels, labels)
}
if e.expectedError != "" {
if err == nil || !strings.Contains(err.Error(), e.expectedError) {
t.Errorf("expected an error with %q, got %v", e.expectedError, err)
}
}
})
}
}
@@ -1990,13 +2090,17 @@ func TestSwarmTraefikFilter(t *testing.T) {
},
}
for serviceID, e := range services {
e := e
t.Run(strconv.Itoa(serviceID), func(t *testing.T) {
t.Parallel()
dockerData := parseService(e.service, e.networks)
provider.ExposedByDefault = e.exposedByDefault
actual := provider.containerFilter(dockerData)
if actual != e.expected {
t.Errorf("expected %v for %+v, got %+v", e.expected, e, actual)
}
})
}
}
@@ -2167,21 +2271,25 @@ func TestSwarmLoadDockerConfig(t *testing.T) {
SwarmMode: true,
}
for caseID, c := range cases {
c := c
t.Run(strconv.Itoa(caseID), func(t *testing.T) {
t.Parallel()
var dockerDataList []dockerData
for _, service := range c.services {
dockerData := parseService(service, c.networks)
dockerDataList = append(dockerDataList, dockerData)
}
actualConfig := provider.loadDockerConfig(dockerDataList)
// Compare backends
if !reflect.DeepEqual(actualConfig.Backends, c.expectedBackends) {
t.Errorf("expected %#v, got %#v", c.expectedBackends, actualConfig.Backends)
}
if !reflect.DeepEqual(actualConfig.Frontends, c.expectedFrontends) {
t.Errorf("expected %#v, got %#v", c.expectedFrontends, actualConfig.Frontends)
}
})
}
}
@@ -2189,7 +2297,7 @@ func TestSwarmTaskParsing(t *testing.T) {
cases := []struct {
service swarm.Service
tasks []swarm.Task
isGlobalSVC bool
expectedNames map[string]string
networks map[string]*docker.NetworkResource
}{
@@ -2215,7 +2323,7 @@ func TestSwarmTaskParsing(t *testing.T) {
Slot: 3,
},
},
isGlobalSVC: false,
expectedNames: map[string]string{
"id1": "container.1",
"id2": "container.2",
@@ -2246,7 +2354,7 @@ func TestSwarmTaskParsing(t *testing.T) {
ID: "id3",
},
},
isGlobalSVC: true,
expectedNames: map[string]string{
"id1": "container.id1",
"id2": "container.id2",
@@ -2260,14 +2368,112 @@ func TestSwarmTaskParsing(t *testing.T) {
},
}
for caseID, e := range cases {
e := e
t.Run(strconv.Itoa(caseID), func(t *testing.T) {
t.Parallel()
dockerData := parseService(e.service, e.networks)
for _, task := range e.tasks {
taskDockerData := parseTasks(task, dockerData, map[string]*docker.NetworkResource{}, e.isGlobalSVC)
if !reflect.DeepEqual(taskDockerData.Name, e.expectedNames[task.ID]) {
t.Errorf("expect %v, got %v", e.expectedNames[task.ID], taskDockerData.Name)
}
}
})
}
}
type fakeTasksClient struct {
dockerclient.APIClient
tasks []swarm.Task
err error
}
func (c *fakeTasksClient) TaskList(ctx context.Context, options dockertypes.TaskListOptions) ([]swarm.Task, error) {
return c.tasks, c.err
}
func TestListTasks(t *testing.T) {
cases := []struct {
service swarm.Service
tasks []swarm.Task
isGlobalSVC bool
expectedTasks []string
networks map[string]*docker.NetworkResource
}{
{
service: swarm.Service{
Spec: swarm.ServiceSpec{
Annotations: swarm.Annotations{
Name: "container",
},
},
},
tasks: []swarm.Task{
{
ID: "id1",
Slot: 1,
Status: swarm.TaskStatus{
State: swarm.TaskStateRunning,
},
},
{
ID: "id2",
Slot: 2,
Status: swarm.TaskStatus{
State: swarm.TaskStatePending,
},
},
{
ID: "id3",
Slot: 3,
},
{
ID: "id4",
Slot: 4,
Status: swarm.TaskStatus{
State: swarm.TaskStateRunning,
},
},
{
ID: "id5",
Slot: 5,
Status: swarm.TaskStatus{
State: swarm.TaskStateFailed,
},
},
},
isGlobalSVC: false,
expectedTasks: []string{
"container.1",
"container.4",
},
networks: map[string]*docker.NetworkResource{
"1": {
Name: "foo",
},
},
},
}
for caseID, e := range cases {
e := e
t.Run(strconv.Itoa(caseID), func(t *testing.T) {
t.Parallel()
dockerData := parseService(e.service, e.networks)
dockerClient := &fakeTasksClient{tasks: e.tasks}
taskDockerData, _ := listTasks(context.Background(), dockerClient, e.service.ID, dockerData, map[string]*docker.NetworkResource{}, e.isGlobalSVC)
if len(e.expectedTasks) != len(taskDockerData) {
t.Errorf("expected tasks %v, got %v", spew.Sdump(e.expectedTasks), spew.Sdump(taskDockerData))
}
for i, taskID := range e.expectedTasks {
if taskDockerData[i].Name != taskID {
t.Errorf("expect task id %v, got %v", taskID, taskDockerData[i].Name)
}
}
})
}
}


@@ -206,13 +206,28 @@ func (provider *ECS) listInstances(ctx context.Context, client *awsClient) ([]ec
taskArns = append(taskArns, req.Data.(*ecs.ListTasksOutput).TaskArns...)
}
// Early return: if we can't list tasks we have nothing to describe
// below - likely the cluster is empty or permissions are bad. This
// also stops the AWS API from returning a 401 when DescribeTasks is
// called with no input.
if len(taskArns) == 0 {
return []ecsInstance{}, nil
}
chunkedTaskArns := provider.chunkedTaskArns(taskArns)
var tasks []*ecs.Task
for _, arns := range chunkedTaskArns {
req, taskResp := client.ecs.DescribeTasksRequest(&ecs.DescribeTasksInput{
Tasks: arns,
Cluster: &provider.Cluster,
})
if err := wrapAws(ctx, req); err != nil {
return nil, err
}
tasks = append(tasks, taskResp.Tasks...)
}
containerInstanceArns := make([]*string, 0)
@@ -221,7 +236,7 @@ func (provider *ECS) listInstances(ctx context.Context, client *awsClient) ([]ec
taskDefinitionArns := make([]*string, 0)
byTaskDefinition := make(map[string]int)
for _, task := range tasks {
if _, found := byContainerInstance[*task.ContainerInstanceArn]; !found {
byContainerInstance[*task.ContainerInstanceArn] = len(containerInstanceArns)
containerInstanceArns = append(containerInstanceArns, task.ContainerInstanceArn)
@@ -243,7 +258,7 @@ func (provider *ECS) listInstances(ctx context.Context, client *awsClient) ([]ec
}
var instances []ecsInstance
for _, task := range tasks {
machineIdx := byContainerInstance[*task.ContainerInstanceArn]
taskDefIdx := byTaskDefinition[*task.TaskDefinitionArn]
@@ -398,6 +413,22 @@ func (provider *ECS) getFrontendRule(i ecsInstance) string {
return "Host:" + strings.ToLower(strings.Replace(i.Name, "_", "-", -1)) + "." + provider.Domain
}
// ECS expects no more than 100 task ARNs to be passed to a single
// DescribeTasks call, so pack the ARNs into chunks of at most 100 elements.
func (provider *ECS) chunkedTaskArns(tasks []*string) [][]*string {
var chunkedTasks [][]*string
for i := 0; i < len(tasks); i += 100 {
sliceEnd := -1
if i+100 < len(tasks) {
sliceEnd = i + 100
} else {
sliceEnd = len(tasks)
}
chunkedTasks = append(chunkedTasks, tasks[i:sliceEnd])
}
return chunkedTasks
}
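The chunking helper above can be exercised on its own. Here is a minimal sketch of the same capping logic (plain string slices and a hypothetical `chunkBy` name instead of the AWS pointer types):

```go
package main

import "fmt"

// chunkBy splits arns into slices of at most size elements, mirroring
// chunkedTaskArns, which caps DescribeTasks input at 100 ARNs per call.
func chunkBy(arns []string, size int) [][]string {
	var chunks [][]string
	for i := 0; i < len(arns); i += size {
		end := i + size
		if end > len(arns) {
			end = len(arns) // last chunk may be shorter than size
		}
		chunks = append(chunks, arns[i:end])
	}
	return chunks
}

func main() {
	arns := make([]string, 201)
	for i, c := range chunkBy(arns, 100) {
		fmt.Printf("chunk %d: %d elements\n", i, len(c)) // 100, 100, 1
	}
}
```

An empty input produces no chunks at all, which matches the `{0, []int(nil)}` case in TestTaskChunking.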
func (i ecsInstance) Protocol() string {
if label := i.label("traefik.protocol"); label != "" {
return label


@@ -308,3 +308,42 @@ func TestFilterInstance(t *testing.T) {
}
}
}
func TestTaskChunking(t *testing.T) {
provider := &ECS{}
testval := "a"
cases := []struct {
count int
expectedLengths []int
}{
{0, []int(nil)},
{1, []int{1}},
{99, []int{99}},
{100, []int{100}},
{101, []int{100, 1}},
{199, []int{100, 99}},
{200, []int{100, 100}},
{201, []int{100, 100, 1}},
{555, []int{100, 100, 100, 100, 100, 55}},
{1001, []int{100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 1}},
}
for _, c := range cases {
var tasks []*string
for v := 0; v < c.count; v++ {
tasks = append(tasks, &testval)
}
out := provider.chunkedTaskArns(tasks)
var outCount []int
for _, el := range out {
outCount = append(outCount, len(el))
}
if !reflect.DeepEqual(outCount, c.expectedLengths) {
t.Errorf("Chunking %d elements, expected %#v, got %#v", c.count, c.expectedLengths, outCount)
}
}
}


@@ -3,6 +3,7 @@ package k8s
import (
"time"
"github.com/containous/traefik/log"
"k8s.io/client-go/1.5/kubernetes"
"k8s.io/client-go/1.5/pkg/api"
"k8s.io/client-go/1.5/pkg/api/v1"
@@ -39,31 +40,18 @@ type clientImpl struct {
clientset *kubernetes.Clientset
}
// NewClient returns a new Kubernetes client
func NewClient(endpoint string) (Client, error) {
config, err := rest.InClusterConfig()
if err != nil {
log.Warnf("Kubernetes in cluster config error, trying from out of cluster: %s", err)
config = &rest.Config{}
}
if len(endpoint) > 0 {
config.Host = endpoint
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
return nil, err
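The merged NewClient above first tries the in-cluster configuration and, on failure, falls back to an empty config, with an explicit endpoint overriding the host in either case. A standalone sketch of that selection logic, using a hypothetical `restConfig` struct and `buildConfig` helper in place of the client-go types:

```go
package main

import (
	"errors"
	"fmt"
)

// restConfig is a hypothetical stand-in for rest.Config; only Host matters here.
type restConfig struct {
	Host string
}

// buildConfig mirrors NewClient's flow: use the in-cluster config when
// available, otherwise fall back to an empty out-of-cluster config, and
// let an explicit endpoint override the host either way.
func buildConfig(inCluster func() (*restConfig, error), endpoint string) *restConfig {
	config, err := inCluster()
	if err != nil {
		// In-cluster detection failed; start from empty defaults.
		config = &restConfig{}
	}
	if len(endpoint) > 0 {
		config.Host = endpoint
	}
	return config
}

func main() {
	outOfCluster := func() (*restConfig, error) { return nil, errors.New("not in cluster") }
	fmt.Println(buildConfig(outOfCluster, "http://localhost:8080").Host)
}
```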


@@ -29,19 +29,10 @@ type Kubernetes struct {
lastConfiguration safe.Safe
}
// Provide allows the provider to provide configurations to traefik
// using the given configuration channel.
func (provider *Kubernetes) Provide(configurationChan chan<- types.ConfigMessage, pool *safe.Pool, constraints types.Constraints) error {
k8sClient, err := k8s.NewClient(provider.Endpoint)
if err != nil {
return err
}
@@ -169,8 +160,13 @@ func (provider *Kubernetes) loadIngresses(k8sClient k8s.Client) (*types.Configur
}
}
service, exists, err := k8sClient.GetService(i.ObjectMeta.Namespace, pa.Backend.ServiceName)
if err != nil {
log.Errorf("Error while retrieving service information from k8s API %s/%s: %v", service.ObjectMeta.Namespace, pa.Backend.ServiceName, err)
return nil, err
}
if !exists {
log.Errorf("Service not found for %s/%s", service.ObjectMeta.Namespace, pa.Backend.ServiceName)
delete(templateObjects.Frontends, r.Host+pa.Path)
continue
}
@@ -193,13 +189,20 @@ func (provider *Kubernetes) loadIngresses(k8sClient k8s.Client) (*types.Configur
if port.Port == 443 {
protocol = "https"
}
endpoints, exists, err := k8sClient.GetEndpoints(service.ObjectMeta.Namespace, service.ObjectMeta.Name)
if err != nil {
log.Errorf("Error retrieving endpoints %s/%s: %v", service.ObjectMeta.Namespace, service.ObjectMeta.Name, err)
return nil, err
}
if !exists {
log.Errorf("Endpoints not found for %s/%s", service.ObjectMeta.Namespace, service.ObjectMeta.Name)
continue
}
if len(endpoints.Subsets) == 0 {
log.Warnf("Service endpoints not found for %s/%s, falling back to Service ClusterIP", service.ObjectMeta.Namespace, service.ObjectMeta.Name)
templateObjects.Backends[r.Host+pa.Path].Servers[string(service.UID)] = types.Server{
URL: protocol + "://" + service.Spec.ClusterIP + ":" + strconv.Itoa(int(port.Port)),
Weight: 1,


@@ -2,11 +2,13 @@ package provider

 import (
 	"encoding/json"
+	"errors"
 	"reflect"
 	"testing"

 	"github.com/containous/traefik/provider/k8s"
 	"github.com/containous/traefik/types"
+	"github.com/davecgh/go-spew/spew"
 	"k8s.io/client-go/1.5/pkg/api/v1"
 	"k8s.io/client-go/1.5/pkg/apis/extensions/v1beta1"
 	"k8s.io/client-go/1.5/pkg/util/intstr"
@@ -1524,11 +1526,286 @@ func TestServiceAnnotations(t *testing.T) {
 	}
 }
func TestKubeAPIErrors(t *testing.T) {
ingresses := []*v1beta1.Ingress{{
ObjectMeta: v1.ObjectMeta{
Namespace: "testing",
},
Spec: v1beta1.IngressSpec{
Rules: []v1beta1.IngressRule{
{
Host: "foo",
IngressRuleValue: v1beta1.IngressRuleValue{
HTTP: &v1beta1.HTTPIngressRuleValue{
Paths: []v1beta1.HTTPIngressPath{
{
Path: "/bar",
Backend: v1beta1.IngressBackend{
ServiceName: "service1",
ServicePort: intstr.FromInt(80),
},
},
},
},
},
},
},
},
}}
services := []*v1.Service{{
ObjectMeta: v1.ObjectMeta{
Name: "service1",
UID: "1",
Namespace: "testing",
},
Spec: v1.ServiceSpec{
ClusterIP: "10.0.0.1",
Ports: []v1.ServicePort{
{
Port: 80,
},
},
},
}}
endpoints := []*v1.Endpoints{}
watchChan := make(chan interface{})
apiErr := errors.New("failed kube api call")
testCases := []struct {
desc string
apiServiceErr error
apiEndpointsErr error
}{
{
desc: "failed service call",
apiServiceErr: apiErr,
},
{
desc: "failed endpoints call",
apiEndpointsErr: apiErr,
},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.desc, func(t *testing.T) {
t.Parallel()
client := clientMock{
ingresses: ingresses,
services: services,
endpoints: endpoints,
watchChan: watchChan,
apiServiceError: tc.apiServiceErr,
apiEndpointsError: tc.apiEndpointsErr,
}
provider := Kubernetes{}
if _, err := provider.loadIngresses(client); err != apiErr {
t.Errorf("Got error %v, wanted error %v", err, apiErr)
}
})
}
}
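`TestKubeAPIErrors` above re-binds the range variable (`tc := tc`) before calling `t.Run` with `t.Parallel()`, the idiom documented at https://blog.golang.org/subtests: without the capture, every parallel subtest would observe the loop's final value. A stand-alone sketch of the same capture with plain goroutines — `collect` and its inputs are illustrative, not part of the Traefik test suite:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// collect fans each item out to its own goroutine, re-binding the range
// variable first -- the same `tc := tc` idiom the parallel subtests above
// rely on.
func collect(items []string) []string {
	var (
		mu  sync.Mutex
		out []string
		wg  sync.WaitGroup
	)
	for _, tc := range items {
		tc := tc // without this, pre-Go 1.22 goroutines would all see the last item
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			out = append(out, tc)
			mu.Unlock()
		}()
	}
	wg.Wait()
	sort.Strings(out) // goroutine completion order is nondeterministic
	return out
}

func main() {
	fmt.Println(collect([]string{"failed service call", "failed endpoints call"}))
}
```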
func TestMissingResources(t *testing.T) {
ingresses := []*v1beta1.Ingress{{
ObjectMeta: v1.ObjectMeta{
Namespace: "testing",
},
Spec: v1beta1.IngressSpec{
Rules: []v1beta1.IngressRule{
{
Host: "fully_working",
IngressRuleValue: v1beta1.IngressRuleValue{
HTTP: &v1beta1.HTTPIngressRuleValue{
Paths: []v1beta1.HTTPIngressPath{
{
Backend: v1beta1.IngressBackend{
ServiceName: "fully_working_service",
ServicePort: intstr.FromInt(80),
},
},
},
},
},
},
{
Host: "missing_service",
IngressRuleValue: v1beta1.IngressRuleValue{
HTTP: &v1beta1.HTTPIngressRuleValue{
Paths: []v1beta1.HTTPIngressPath{
{
Backend: v1beta1.IngressBackend{
ServiceName: "missing_service_service",
ServicePort: intstr.FromInt(80),
},
},
},
},
},
},
{
Host: "missing_endpoints",
IngressRuleValue: v1beta1.IngressRuleValue{
HTTP: &v1beta1.HTTPIngressRuleValue{
Paths: []v1beta1.HTTPIngressPath{
{
Backend: v1beta1.IngressBackend{
ServiceName: "missing_endpoints_service",
ServicePort: intstr.FromInt(80),
},
},
},
},
},
},
},
},
}}
services := []*v1.Service{
{
ObjectMeta: v1.ObjectMeta{
Name: "fully_working_service",
UID: "1",
Namespace: "testing",
},
Spec: v1.ServiceSpec{
ClusterIP: "10.0.0.1",
Ports: []v1.ServicePort{
{
Port: 80,
},
},
},
},
{
ObjectMeta: v1.ObjectMeta{
Name: "missing_endpoints_service",
UID: "3",
Namespace: "testing",
},
Spec: v1.ServiceSpec{
ClusterIP: "10.0.0.3",
Ports: []v1.ServicePort{
{
Port: 80,
},
},
},
},
}
endpoints := []*v1.Endpoints{
{
ObjectMeta: v1.ObjectMeta{
Name: "fully_working_service",
UID: "1",
Namespace: "testing",
},
Subsets: []v1.EndpointSubset{
{
Addresses: []v1.EndpointAddress{
{
IP: "10.10.0.1",
},
},
Ports: []v1.EndpointPort{
{
Port: 8080,
},
},
},
},
},
}
watchChan := make(chan interface{})
client := clientMock{
ingresses: ingresses,
services: services,
endpoints: endpoints,
watchChan: watchChan,
// TODO: Update all tests to cope with "properExists == true" correctly and remove flag.
// See https://github.com/containous/traefik/issues/1307
properExists: true,
}
provider := Kubernetes{}
actual, err := provider.loadIngresses(client)
if err != nil {
t.Fatalf("error %+v", err)
}
expected := &types.Configuration{
Backends: map[string]*types.Backend{
"fully_working": {
Servers: map[string]types.Server{
"http://10.10.0.1:8080": {
URL: "http://10.10.0.1:8080",
Weight: 1,
},
},
CircuitBreaker: nil,
LoadBalancer: &types.LoadBalancer{
Method: "wrr",
Sticky: false,
},
},
"missing_service": {
Servers: map[string]types.Server{},
LoadBalancer: &types.LoadBalancer{
Method: "wrr",
Sticky: false,
},
},
"missing_endpoints": {
Servers: map[string]types.Server{},
CircuitBreaker: nil,
LoadBalancer: &types.LoadBalancer{
Method: "wrr",
Sticky: false,
},
},
},
Frontends: map[string]*types.Frontend{
"fully_working": {
Backend: "fully_working",
PassHostHeader: true,
Routes: map[string]types.Route{
"fully_working": {
Rule: "Host:fully_working",
},
},
},
"missing_endpoints": {
Backend: "missing_endpoints",
PassHostHeader: true,
Routes: map[string]types.Route{
"missing_endpoints": {
Rule: "Host:missing_endpoints",
},
},
},
},
}
if !reflect.DeepEqual(actual, expected) {
t.Fatalf("expected\n%v\ngot\n\n%v", spew.Sdump(expected), spew.Sdump(actual))
}
}
 type clientMock struct {
 	ingresses []*v1beta1.Ingress
 	services  []*v1.Service
 	endpoints []*v1.Endpoints
 	watchChan chan interface{}
+
+	apiServiceError   error
+	apiEndpointsError error
+	properExists      bool
 }
 func (c clientMock) GetIngresses(namespaces k8s.Namespaces) []*v1beta1.Ingress {
@@ -1543,20 +1820,33 @@ func (c clientMock) GetIngresses(namespaces k8s.Namespaces) []*v1beta1.Ingress {
 	}
 func (c clientMock) GetService(namespace, name string) (*v1.Service, bool, error) {
+	if c.apiServiceError != nil {
+		return &v1.Service{}, false, c.apiServiceError
+	}
+
 	for _, service := range c.services {
 		if service.Namespace == namespace && service.Name == name {
 			return service, true, nil
 		}
 	}
-	return &v1.Service{}, true, nil
+	return &v1.Service{}, false, nil
 }
 func (c clientMock) GetEndpoints(namespace, name string) (*v1.Endpoints, bool, error) {
+	if c.apiEndpointsError != nil {
+		return &v1.Endpoints{}, false, c.apiEndpointsError
+	}
+
 	for _, endpoints := range c.endpoints {
 		if endpoints.Namespace == namespace && endpoints.Name == name {
 			return endpoints, true, nil
 		}
 	}
+	if c.properExists {
+		return &v1.Endpoints{}, false, nil
+	}
+
 	return &v1.Endpoints{}, true, nil
 }


@@ -666,7 +666,7 @@ func (server *Server) loadConfig(configurations configs, globalConfiguration Glo
 					interval = time.Second * 30
 				}
 			}
-			backendsHealthcheck[frontend.Backend] = healthcheck.NewBackendHealthCheck(configuration.Backends[frontend.Backend].HealthCheck.URL, interval, rebalancer)
+			backendsHealthcheck[frontend.Backend] = healthcheck.NewBackendHealthCheck(configuration.Backends[frontend.Backend].HealthCheck.Path, interval, rebalancer)
 		}
 	}
 case types.Wrr:
@@ -700,7 +700,7 @@ func (server *Server) loadConfig(configurations configs, globalConfiguration Glo
 					interval = time.Second * 30
 				}
 			}
-			backendsHealthcheck[frontend.Backend] = healthcheck.NewBackendHealthCheck(configuration.Backends[frontend.Backend].HealthCheck.URL, interval, rr)
+			backendsHealthcheck[frontend.Backend] = healthcheck.NewBackendHealthCheck(configuration.Backends[frontend.Backend].HealthCheck.Path, interval, rr)
 		}
 	}
 	maxConns := configuration.Backends[frontend.Backend].MaxConn


@@ -261,7 +261,6 @@ func run(traefikConfiguration *TraefikConfiguration) {
 			}
 		}(t)
 	}
-	log.Info(t.String())
 	server.Wait()
 	log.Info("Shutting down")
 }


@@ -53,11 +53,13 @@
 #
 # ProvidersThrottleDuration = "5"

-# If non-zero, controls the maximum idle (keep-alive) to keep per-host. If zero, DefaultMaxIdleConnsPerHost is used.
-# If you encounter 'too many open files' errors, you can either change this value, or change `ulimit` value.
+# Controls the maximum idle (keep-alive) connections to keep per-host. If zero, DefaultMaxIdleConnsPerHost
+# from the Go standard library net/http module is used.
+# If you encounter 'too many open files' errors, you can either increase this
+# value or change the `ulimit`.
 #
 # Optional
-# Default: http.DefaultMaxIdleConnsPerHost
+# Default: 200
 #
 # MaxIdleConnsPerHost = 200
@@ -628,12 +630,14 @@
 # Zookeeper timeout (in seconds)
 #
 # Optional
+# Default: 30
 #
 # ZkDetectionTimeout = 30

 # Polling interval (in seconds)
 #
 # Optional
+# Default: 30
 #
 # RefreshSeconds = 30
@@ -646,8 +650,9 @@
 # HTTP Timeout (in seconds)
 #
 # Optional
+# Default: 30
 #
-# StateTimeoutSecond = "host"
+# StateTimeoutSecond = "30"

 ################################################################
 # Kubernetes Ingress configuration backend


@@ -39,7 +39,7 @@ type CircuitBreaker struct {

 // HealthCheck holds HealthCheck configuration
 type HealthCheck struct {
-	URL      string `json:"url,omitempty"`
+	Path     string `json:"path,omitempty"`
 	Interval string `json:"interval,omitempty"`
 }