forked from SW/traefik
Compare commits
73 Commits
v1.4.0-rc4...v1.4.5
| SHA1 |
|---|
| cda09c843a |
| ab1a930705 |
| 47a5cfbd3e |
| b572879691 |
| 0f09551a76 |
| 419d46c958 |
| 676b79db42 |
| c9129b8ecf |
| 6619a787a3 |
| aae17c817b |
| 8fe5c22075 |
| 1a4564d998 |
| 66be04f39e |
| 77b111702b |
| 96a7cc483f |
| 1e3506848a |
| 5ee2cae85c |
| 5c119fe2d6 |
| 2f62ec3632 |
| 56affb90ae |
| 9bd0fff319 |
| 58a438167b |
| bc8d68bd31 |
| 70812c70fc |
| 0347537f43 |
| db9b18f121 |
| fc4d670c88 |
| 02035d4942 |
| d3c7681bc5 |
| dc66db4abe |
| a0e1cf8376 |
| 5292b84f4f |
| b27455a36f |
| da7b6f0baf |
| 9b5845f1cb |
| 1b2cb53d4f |
| 3158e51c62 |
| f0371da838 |
| 44b82e6231 |
| 04f0bf3070 |
| 7400c39511 |
| 35ca40c3de |
| de821fc305 |
| e3cac7d0e5 |
| 81f7aa9df2 |
| afbad56012 |
| 9c8df8b9ce |
| ff4c7b82bc |
| 47ff51e640 |
| 08503655d9 |
| 3afd6024b5 |
| aa308b7a3a |
| 9598f646f5 |
| 8af39bdaf7 |
| 8cb3f0835a |
| cba0898e4f |
| 8d158402f3 |
| 7f2582e3b6 |
| dbc796359f |
| 7a2ce59563 |
| 14cec7e610 |
| 6287a3dd53 |
| 93a1db77c5 |
| a9d4b09bdb |
| ed2eb7b5a6 |
| 18d8537d29 |
| 72f3b1ed39 |
| fd70e6edb1 |
| 5a578c5375 |
| 9db8773055 |
| 8a67434380 |
| c94e5f3589 |
| adef7200f6 |
.github/PULL_REQUEST_TEMPLATE.md (vendored): 21 lines changed
@@ -16,8 +16,21 @@ HOW TO WRITE A GOOD PULL REQUEST?

-->

### Description
### What does this PR do?

<!--
Briefly describe the pull request in a few paragraphs.
-->
<!-- A brief description of the change being made with this pull request. -->


### Motivation

<!-- What inspired you to submit this pull request? -->


### More

- [ ] Added/updated tests
- [ ] Added/updated documentation

### Additional Notes

<!-- Anything else we should know when reviewing? -->

@@ -36,6 +36,7 @@ deploy:
    on:
      repo: containous/traefik
      tags: true
      condition: ${TRAVIS_TAG} =~ ^v[0-9]+\.[0-9]+\.[0-9]+$
  - provider: releases
    api_key: ${GITHUB_TOKEN}
    file: dist/traefik*

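As context for the deploy condition above: the regular expression only matches final release tags, so pre-release tags such as the rc tags in this compare range do not satisfy it. A minimal Go sketch (not part of the diff) illustrates the behaviour of that same pattern:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Same pattern as the Travis deploy condition above.
	releaseTag := regexp.MustCompile(`^v[0-9]+\.[0-9]+\.[0-9]+$`)

	for _, tag := range []string{"v1.4.5", "v1.4.0-rc4"} {
		fmt.Printf("%s matches: %v\n", tag, releaseTag.MatchString(tag))
	}
	// Output:
	// v1.4.5 matches: true
	// v1.4.0-rc4 matches: false
}
```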
CHANGELOG.md: 314 lines changed
@@ -1,5 +1,319 @@
# Change Log

## [v1.4.5](https://github.com/containous/traefik/tree/v1.4.5) (2017-12-05)
[All Commits](https://github.com/containous/traefik/compare/v1.4.4...v1.4.5)

**Bug fixes:**
- **[docker]** Fix empty ip when container is stopped ([#2478](https://github.com/containous/traefik/pull/2478) by [mmatur](https://github.com/mmatur))
- **[k8s]** Fix kubernetes path prefix rule with rewrite-target ([#2461](https://github.com/containous/traefik/pull/2461) by [cheungpat](https://github.com/cheungpat))

**Documentation:**
- **[file]** Emphasize the necessity of enabling file backend ([#2483](https://github.com/containous/traefik/pull/2483) by [mvasin](https://github.com/mvasin))
- Add link to future 1.5 documentation. ([#2477](https://github.com/containous/traefik/pull/2477) by [ldez](https://github.com/ldez))

## [v1.4.4](https://github.com/containous/traefik/tree/v1.4.4) (2017-11-21)
[All Commits](https://github.com/containous/traefik/compare/v1.4.3...v1.4.4)

**Enhancements:**
- **[middleware]** Remove GzipHandler Fork ([#2436](https://github.com/containous/traefik/pull/2436) by [ldez](https://github.com/ldez))

**Bug fixes:**
- **[docker]** Fix problems about duplicated and missing Docker backends/frontends. ([#2434](https://github.com/containous/traefik/pull/2434) by [nmengin](https://github.com/nmengin))
- **[middleware]** Fix raw path handling in strip prefix ([#2382](https://github.com/containous/traefik/pull/2382) by [marco-jantke](https://github.com/marco-jantke))
- **[rancher]** Fix issue with label traefik.backend.loadbalancer.stickiness.cookieName ([#2423](https://github.com/containous/traefik/pull/2423) by [rawmind0](https://github.com/rawmind0))
- http.Server log goes to Debug level. ([#2420](https://github.com/containous/traefik/pull/2420) by [ldez](https://github.com/ldez))

**Documentation:**
- Documentation archive ([#2405](https://github.com/containous/traefik/pull/2405) by [ldez](https://github.com/ldez))

## [v1.4.3](https://github.com/containous/traefik/tree/v1.4.3) (2017-11-14)
[All Commits](https://github.com/containous/traefik/compare/v1.4.2...v1.4.3)

**Bug fixes:**
- **[consulcatalog]** Fix Traefik reload if Consul Catalog tags change ([#2389](https://github.com/containous/traefik/pull/2389) by [mmatur](https://github.com/mmatur))
- **[kv]** Add Traefik prefix to the KV key ([#2400](https://github.com/containous/traefik/pull/2400) by [nmengin](https://github.com/nmengin))
- **[middleware]** Flush and Status code ([#2403](https://github.com/containous/traefik/pull/2403) by [ldez](https://github.com/ldez))
- **[middleware]** Exclude GRPC from compress ([#2391](https://github.com/containous/traefik/pull/2391) by [ldez](https://github.com/ldez))
- **[middleware]** Keep status when stream mode and compress ([#2380](https://github.com/containous/traefik/pull/2380) by [Juliens](https://github.com/Juliens))

**Documentation:**
- **[acme]** Fix some typos ([#2363](https://github.com/containous/traefik/pull/2363) by [tomsaleeba](https://github.com/tomsaleeba))
- **[docker]** Minor fix for docker volume vs created directory ([#2372](https://github.com/containous/traefik/pull/2372) by [visibilityspots](https://github.com/visibilityspots))
- **[k8s]** Link corrected ([#2385](https://github.com/containous/traefik/pull/2385) by [xlazex](https://github.com/xlazex))

**Misc:**
- **[k8s]** Add secret creation to docs for kubernetes backend ([#2374](https://github.com/containous/traefik/pull/2374) by [shadycuz](https://github.com/shadycuz))

## [v1.4.2](https://github.com/containous/traefik/tree/v1.4.2) (2017-11-02)
[All Commits](https://github.com/containous/traefik/compare/v1.4.1...v1.4.2)

**Bug fixes:**
- **[cluster]** Fix datastore corruption on reload due to shrinking config size ([#2340](https://github.com/containous/traefik/pull/2340) by [else](https://github.com/else))
- **[docker,docker/swarm]** Make frontend names differents for similar routes ([#2338](https://github.com/containous/traefik/pull/2338) by [nmengin](https://github.com/nmengin))
- **[docker]** Fix IP address when Docker container network mode is container ([#2331](https://github.com/containous/traefik/pull/2331) by [nmengin](https://github.com/nmengin))
- **[docker]** Make the traefik.port label optional when using service labels in Docker containers. ([#2330](https://github.com/containous/traefik/pull/2330) by [nmengin](https://github.com/nmengin))
- **[docker]** Add unique ID to Docker services replicas ([#2314](https://github.com/containous/traefik/pull/2314) by [nmengin](https://github.com/nmengin))
- **[marathon]** Missing Backend key in configuration when application has no tasks ([#2333](https://github.com/containous/traefik/pull/2333) by [aantono](https://github.com/aantono))
- Remove hardcoded runtime.GOMAXPROCS. ([#2317](https://github.com/containous/traefik/pull/2317) by [ldez](https://github.com/ldez))

**Documentation:**
- **[k8s]** fixed dead link in kubernetes backend config docs ([#2337](https://github.com/containous/traefik/pull/2337) by [perplexa](https://github.com/perplexa))
- **[k8s]** Fix the k8s docs example deployment yaml ([#2308](https://github.com/containous/traefik/pull/2308) by [gnur](https://github.com/gnur))
- Minor grammar change ([#2350](https://github.com/containous/traefik/pull/2350) by [haxorjim](https://github.com/haxorjim))
- Minor typo ([#2343](https://github.com/containous/traefik/pull/2343) by [burningTyger](https://github.com/burningTyger))

## [v1.4.1](https://github.com/containous/traefik/tree/v1.4.1) (2017-10-24)
[All Commits](https://github.com/containous/traefik/compare/v1.4.0...v1.4.1)

**Bug fixes:**
- **[docker]** Network filter ([#2301](https://github.com/containous/traefik/pull/2301) by [ldez](https://github.com/ldez))
- **[healthcheck]** Fix healthcheck path ([#2295](https://github.com/containous/traefik/pull/2295) by [emilevauge](https://github.com/emilevauge))
- **[rules]** Regex capturing group. ([#2296](https://github.com/containous/traefik/pull/2296) by [ldez](https://github.com/ldez))
- **[websocket]** Force http/1.1 for websocket ([#2292](https://github.com/containous/traefik/pull/2292) by [Juliens](https://github.com/Juliens))
- Stream mode when http2 ([#2309](https://github.com/containous/traefik/pull/2309) by [Juliens](https://github.com/Juliens))
- Enhance Trust Forwarded Headers ([#2302](https://github.com/containous/traefik/pull/2302) by [ldez](https://github.com/ldez))

## [v1.4.0](https://github.com/containous/traefik/tree/v1.4.0) (2017-10-16)
[All Commits](https://github.com/containous/traefik/compare/v1.3.0-rc1...v1.4.0)

**Enhancements:**
- **[acme]** Display Traefik logs in integration tests ([#2114](https://github.com/containous/traefik/pull/2114) by [ldez](https://github.com/ldez))
- **[acme]** Make the ACME developments testing easier ([#1769](https://github.com/containous/traefik/pull/1769) by [nmengin](https://github.com/nmengin))
- **[acme]** contrib: Dump keys/certs from acme.json to files ([#1484](https://github.com/containous/traefik/pull/1484) by [brianredbeard](https://github.com/brianredbeard))
- **[api]** Add HTTP HEAD handling to /ping endpoint ([#1768](https://github.com/containous/traefik/pull/1768) by [martinbaillie](https://github.com/martinbaillie))
- **[authentication,consulcatalog]** Add Basic auth for consul catalog ([#2027](https://github.com/containous/traefik/pull/2027) by [mmatur](https://github.com/mmatur))
- **[authentication,marathon]** Add marathon label to configure basic auth ([#1799](https://github.com/containous/traefik/pull/1799) by [nikore](https://github.com/nikore))
- **[authentication,ecs]** Add basic auth for ecs ([#2026](https://github.com/containous/traefik/pull/2026) by [mmatur](https://github.com/mmatur))
- **[authentication,middleware]** Add forward authentication option ([#1972](https://github.com/containous/traefik/pull/1972) by [drampelt](https://github.com/drampelt))
- **[authentication]** Manage Headers for the Authentication forwarding. ([#2132](https://github.com/containous/traefik/pull/2132) by [ldez](https://github.com/ldez))
- **[consulcatalog,sticky-session]** Enable loadbalancer.sticky for Consul Catalog ([#1917](https://github.com/containous/traefik/pull/1917) by [nbonneval](https://github.com/nbonneval))
- **[consulcatalog]** Exposed by default feature in Consul Catalog ([#2006](https://github.com/containous/traefik/pull/2006) by [mmatur](https://github.com/mmatur))
- **[consulcatalog]** Speeding up consul catalog health change detection ([#1694](https://github.com/containous/traefik/pull/1694) by [vholovko](https://github.com/vholovko))
- **[consulcatalog]** Enhanced flexibility in Consul Catalog configuration ([#1565](https://github.com/containous/traefik/pull/1565) by [aantono](https://github.com/aantono))
- **[docker,k8s]** IP Whitelists for Frontend (with Docker- & Kubernetes-Provider Support) ([#1332](https://github.com/containous/traefik/pull/1332) by [MaZderMind](https://github.com/MaZderMind))
- **[ecs,sticky-session]** Enable loadbalancer.sticky for ECS ([#1925](https://github.com/containous/traefik/pull/1925) by [mmatur](https://github.com/mmatur))
- **[ecs]** Add support for several ECS backends ([#1913](https://github.com/containous/traefik/pull/1913) by [mmatur](https://github.com/mmatur))
- **[file]** Allow file provider to load service config from files in a directory. ([#1672](https://github.com/containous/traefik/pull/1672) by [rjshep](https://github.com/rjshep))
- **[healthcheck]** Add healthcheck command ([#1982](https://github.com/containous/traefik/pull/1982) by [emilevauge](https://github.com/emilevauge))
- **[healthcheck]** Allow overriding the port used for healthchecks ([#1567](https://github.com/containous/traefik/pull/1567) by [bakins](https://github.com/bakins))
- **[k8s,rules]** kubernetes ingress rewrite-target implementation ([#1723](https://github.com/containous/traefik/pull/1723) by [mlaccetti](https://github.com/mlaccetti))
- **[k8s]** Added ability to override frontend priority for k8s ingress router ([#1874](https://github.com/containous/traefik/pull/1874) by [DiverOfDark](https://github.com/DiverOfDark))
- **[kv]** Adds definitions to backend kv template for health checking ([#1644](https://github.com/containous/traefik/pull/1644) by [zachomedia](https://github.com/zachomedia))
- **[logs,dynamodb,ecs,marathon]** Link some providers logs to Traefik ([#1746](https://github.com/containous/traefik/pull/1746) by [ldez](https://github.com/ldez))
- **[logs,marathon]** remove confusing go-marathon log message ([#1810](https://github.com/containous/traefik/pull/1810) by [marco-jantke](https://github.com/marco-jantke))
- **[logs]** Send traefik logs to stdout instead stderr ([#2054](https://github.com/containous/traefik/pull/2054) by [marco-jantke](https://github.com/marco-jantke))
- **[logs]** enable logging to stdout for access logs ([#1683](https://github.com/containous/traefik/pull/1683) by [marco-jantke](https://github.com/marco-jantke))
- **[logs]** Logs & errors review ([#1673](https://github.com/containous/traefik/pull/1673) by [ldez](https://github.com/ldez))
- **[logs]** Switch access logging to logrus ([#1647](https://github.com/containous/traefik/pull/1647) by [rjshep](https://github.com/rjshep))
- **[logs]** log X-Forwarded-For as ClientHost if present ([#1946](https://github.com/containous/traefik/pull/1946) by [mildis](https://github.com/mildis))
- **[logs]** Restore: First stage of access logging middleware. ([#1571](https://github.com/containous/traefik/pull/1571) by [ldez](https://github.com/ldez))
- **[logs]** Add log file close and reopen on receipt of SIGUSR1 ([#1761](https://github.com/containous/traefik/pull/1761) by [rjshep](https://github.com/rjshep))
- **[logs]** add RetryAttempts to AccessLog in JSON format ([#1793](https://github.com/containous/traefik/pull/1793) by [marco-jantke](https://github.com/marco-jantke))
- **[logs]** Add JSON as access logging format ([#1669](https://github.com/containous/traefik/pull/1669) by [rjshep](https://github.com/rjshep))
- **[marathon]** Support multi-port service routing for containers running on Marathon ([#1742](https://github.com/containous/traefik/pull/1742) by [aantono](https://github.com/aantono))
- **[marathon]** Improve Marathon integration tests. ([#1406](https://github.com/containous/traefik/pull/1406) by [timoreimann](https://github.com/timoreimann))
- **[marathon]** Exported getSubDomain function from Marathon provider ([#1693](https://github.com/containous/traefik/pull/1693) by [aantono](https://github.com/aantono))
- **[marathon]** Use test builder. ([#1871](https://github.com/containous/traefik/pull/1871) by [timoreimann](https://github.com/timoreimann))
- **[marathon]** Add support for readiness checks. ([#1883](https://github.com/containous/traefik/pull/1883) by [timoreimann](https://github.com/timoreimann))
- **[marathon]** Move marathon mock ([#1732](https://github.com/containous/traefik/pull/1732) by [ldez](https://github.com/ldez))
- **[marathon]** Use single API call to fetch Marathon resources. ([#1815](https://github.com/containous/traefik/pull/1815) by [timoreimann](https://github.com/timoreimann))
- **[metrics]** Added RetryMetrics to DataDog and StatsD providers ([#1884](https://github.com/containous/traefik/pull/1884) by [aantono](https://github.com/aantono))
- **[metrics]** Extract metrics to own package and refactor implementations ([#1968](https://github.com/containous/traefik/pull/1968) by [marco-jantke](https://github.com/marco-jantke))
- **[metrics]** Add metrics for backend_retries_total ([#1504](https://github.com/containous/traefik/pull/1504) by [marco-jantke](https://github.com/marco-jantke))
- **[metrics]** Add status code to request duration metric ([#1755](https://github.com/containous/traefik/pull/1755) by [marco-jantke](https://github.com/marco-jantke))
- **[middleware]** Add trusted whitelist proxy protocol ([#2234](https://github.com/containous/traefik/pull/2234) by [emilevauge](https://github.com/emilevauge))
- **[metrics]** DataDog and StatsD Metrics Support ([#1701](https://github.com/containous/traefik/pull/1701) by [aantono](https://github.com/aantono))
- **[middleware]** Create Header Middleware ([#1236](https://github.com/containous/traefik/pull/1236) by [dtomcej](https://github.com/dtomcej))
- **[middleware]** Add configurable timeouts and curate default timeout settings ([#1873](https://github.com/containous/traefik/pull/1873) by [marco-jantke](https://github.com/marco-jantke))
- **[middleware]** Fix command bug content. ([#2002](https://github.com/containous/traefik/pull/2002) by [ldez](https://github.com/ldez))
- **[middleware]** Retry only on real network errors ([#1549](https://github.com/containous/traefik/pull/1549) by [marco-jantke](https://github.com/marco-jantke))
- **[middleware]** Return 503 on empty backend ([#1748](https://github.com/containous/traefik/pull/1748) by [marco-jantke](https://github.com/marco-jantke))
- **[middleware]** Custom Error Pages ([#1675](https://github.com/containous/traefik/pull/1675) by [bparli](https://github.com/bparli))
- **[oxy]** Support X-Forwarded-Port. ([#1960](https://github.com/containous/traefik/pull/1960) by [ldez](https://github.com/ldez))
- **[provider,tls]** Added a check to ensure clientTLS configuration contains either a cert or a key ([#1932](https://github.com/containous/traefik/pull/1932) by [aantono](https://github.com/aantono))
- **[provider]** Deflake integration tests ([#1599](https://github.com/containous/traefik/pull/1599) by [ldez](https://github.com/ldez))
- **[provider]** Factorize labels ([#1843](https://github.com/containous/traefik/pull/1843) by [ldez](https://github.com/ldez))
- **[provider]** Replace go routine by Safe.Go ([#1879](https://github.com/containous/traefik/pull/1879) by [ldez](https://github.com/ldez))
- **[rancher]** Refactor into dual Rancher API/Metadata providers ([#1563](https://github.com/containous/traefik/pull/1563) by [martinbaillie](https://github.com/martinbaillie))
- **[rules]** Add support for Query String filtering ([#1934](https://github.com/containous/traefik/pull/1934) by [driverpt](https://github.com/driverpt))
- **[rules]** Simplify stripPrefix and stripPrefixRegex tests ([#1699](https://github.com/containous/traefik/pull/1699) by [ldez](https://github.com/ldez))
- **[rules]** Enhance rules tests. ([#1679](https://github.com/containous/traefik/pull/1679) by [ldez](https://github.com/ldez))
- **[sticky-session]** make the cookie name unique to the backend being served ([#1716](https://github.com/containous/traefik/pull/1716) by [richardjq](https://github.com/richardjq))
- **[tls]** Handle RootCAs certificate ([#1789](https://github.com/containous/traefik/pull/1789) by [Juliens](https://github.com/Juliens))
- **[tls]** enable TLS client forwarding ([#1446](https://github.com/containous/traefik/pull/1446) by [drewwells](https://github.com/drewwells))
- **[websocket]** Add tests for urlencoded part in url ([#2199](https://github.com/containous/traefik/pull/2199) by [Juliens](https://github.com/Juliens))
- **[websocket]** Add test for SSL TERMINATION in Websocket IT ([#2063](https://github.com/containous/traefik/pull/2063) by [Juliens](https://github.com/Juliens))
- **[webui]** Proxy in dev mode ([#1544](https://github.com/containous/traefik/pull/1544) by [maxwo](https://github.com/maxwo))
- **[webui]** Minor Health UI fixes ([#1651](https://github.com/containous/traefik/pull/1651) by [mihaitodor](https://github.com/mihaitodor))
- Fail fast in IT and fix some flaky tests ([#2126](https://github.com/containous/traefik/pull/2126) by [ldez](https://github.com/ldez))
- extract lb configuration steps into method ([#1841](https://github.com/containous/traefik/pull/1841) by [marco-jantke](https://github.com/marco-jantke))
- Add whitelist configuration option for entrypoints ([#1702](https://github.com/containous/traefik/pull/1702) by [christopherobin](https://github.com/christopherobin))
- Enhance integration tests ([#1842](https://github.com/containous/traefik/pull/1842) by [ldez](https://github.com/ldez))
- Add helloworld tests with gRPC ([#1845](https://github.com/containous/traefik/pull/1845) by [Juliens](https://github.com/Juliens))
- Add the sprig functions in the template engine ([#1891](https://github.com/containous/traefik/pull/1891) by [thomasbach76](https://github.com/thomasbach76))
- Refactor globalConfiguration / WebProvider ([#1938](https://github.com/containous/traefik/pull/1938) by [Juliens](https://github.com/Juliens))
- Code cleaning. ([#1956](https://github.com/containous/traefik/pull/1956) by [ldez](https://github.com/ldez))
- Add proxy protocol ([#2004](https://github.com/containous/traefik/pull/2004) by [emilevauge](https://github.com/emilevauge))
- Bump gorilla/mux version. ([#1954](https://github.com/containous/traefik/pull/1954) by [ldez](https://github.com/ldez))

**Bug fixes:**
- **[cluster,kv]** Be certain to clear our marshalled representation before reloading it ([#2165](https://github.com/containous/traefik/pull/2165) by [gozer](https://github.com/gozer))
- **[consulcatalog,docker,ecs,k8s,kv,marathon,rancher,sticky-session]** Backward compatibility for sticky ([#2266](https://github.com/containous/traefik/pull/2266) by [ldez](https://github.com/ldez))
- **[consulcatalog,docker,ecs,k8s,marathon,rancher,sticky-session]** Stickiness cookie name ([#2232](https://github.com/containous/traefik/pull/2232) by [ldez](https://github.com/ldez))
- **[consulcatalog,docker,ecs,k8s,marathon,rancher,sticky-session]** Stickiness cookie name. ([#2251](https://github.com/containous/traefik/pull/2251) by [ldez](https://github.com/ldez))
- **[consulcatalog]** Fix consul catalog retry ([#2263](https://github.com/containous/traefik/pull/2263) by [mmatur](https://github.com/mmatur))
- **[consulcatalog]** Flaky tests and refresh problem in consul catalog ([#2148](https://github.com/containous/traefik/pull/2148) by [Juliens](https://github.com/Juliens))
- **[consulcatalog]** Consul catalog failed to remove service ([#2157](https://github.com/containous/traefik/pull/2157) by [Juliens](https://github.com/Juliens))
- **[consulcatalog]** Fix Consul Catalog refresh ([#2089](https://github.com/containous/traefik/pull/2089) by [Juliens](https://github.com/Juliens))
- **[docker]** Changed Docker network filter to allow any swarm network ([#2244](https://github.com/containous/traefik/pull/2244) by [pistolero](https://github.com/pistolero))
- **[docker]** Error handling for docker swarm mode ([#1533](https://github.com/containous/traefik/pull/1533) by [tanyadegurechaff](https://github.com/tanyadegurechaff))
- **[ecs]** Handle empty ECS Clusters properly ([#2170](https://github.com/containous/traefik/pull/2170) by [jeffreykoetsier](https://github.com/jeffreykoetsier))
- **[healthcheck]** Fix healthcheck port ([#2131](https://github.com/containous/traefik/pull/2131) by [fredix](https://github.com/fredix))
- **[healthcheck]** Bind healthcheck to backend by entryPointName ([#1868](https://github.com/containous/traefik/pull/1868) by [chrigl](https://github.com/chrigl))
- **[k8s]** Continue processing on invalid auth-realm annotation. ([#2252](https://github.com/containous/traefik/pull/2252) by [timoreimann](https://github.com/timoreimann))
- **[k8s]** Use default frontend priority of zero. ([#1906](https://github.com/containous/traefik/pull/1906) by [timoreimann](https://github.com/timoreimann))
- **[kv]** add retry backoff to staert config loading ([#2268](https://github.com/containous/traefik/pull/2268) by [emilevauge](https://github.com/emilevauge))
- **[logs,middleware]** Enable loss less rotation of log files ([#2062](https://github.com/containous/traefik/pull/2062) by [marco-jantke](https://github.com/marco-jantke))
- **[logs,middleware]** Access log default values ([#2061](https://github.com/containous/traefik/pull/2061) by [ldez](https://github.com/ldez))
- **[logs]** Fix flakiness in log rotation test ([#2213](https://github.com/containous/traefik/pull/2213) by [marco-jantke](https://github.com/marco-jantke))
- **[marathon]** Assign filtered tasks to apps contained in slice. ([#1881](https://github.com/containous/traefik/pull/1881) by [timoreimann](https://github.com/timoreimann))
- **[marathon]** Fix fallback to other nodes for Marathon ([#1740](https://github.com/containous/traefik/pull/1740) by [marco-jantke](https://github.com/marco-jantke))
- **[metrics]** prometheus, HTTP method and utf8 ([#2081](https://github.com/containous/traefik/pull/2081) by [ldez](https://github.com/ldez))
- **[middleware]** Enable prefix matching within slash boundaries ([#2214](https://github.com/containous/traefik/pull/2214) by [marco-jantke](https://github.com/marco-jantke))
- **[middleware]** Fix SSE subscriptions when retries are enabled ([#2145](https://github.com/containous/traefik/pull/2145) by [marco-jantke](https://github.com/marco-jantke))
- **[middleware]** compress: preserve status code ([#1948](https://github.com/containous/traefik/pull/1948) by [ldez](https://github.com/ldez))
- **[rancher]** Add stack name to backend name generation to fix rancher metadata backend ([#2107](https://github.com/containous/traefik/pull/2107) by [SantoDE](https://github.com/SantoDE))
- **[rancher]** Rancher host IP address ([#2101](https://github.com/containous/traefik/pull/2101) by [matq007](https://github.com/matq007))
- **[rancher]** fix seconds to really be seconds ([#2259](https://github.com/containous/traefik/pull/2259) by [SantoDE](https://github.com/SantoDE))
- **[rancher]** fix rancher api environment get ([#2053](https://github.com/containous/traefik/pull/2053) by [SantoDE](https://github.com/SantoDE))
- **[sticky-session]** Sanitize cookie names. ([#2216](https://github.com/containous/traefik/pull/2216) by [timoreimann](https://github.com/timoreimann))
- **[sticky-session]** Setting the Cookie Path explicitly to root ([#1950](https://github.com/containous/traefik/pull/1950) by [marcopaga](https://github.com/marcopaga))
- **[websocket]** Forward upgrade error from backend ([#2187](https://github.com/containous/traefik/pull/2187) by [Juliens](https://github.com/Juliens))
- **[websocket]** RawPath and Transfer TLSConfig in websocket ([#2088](https://github.com/containous/traefik/pull/2088) by [Juliens](https://github.com/Juliens))
- Nil body retries ([#2258](https://github.com/containous/traefik/pull/2258) by [Juliens](https://github.com/Juliens))
- Fix deprecated IdleTimeout config ([#2143](https://github.com/containous/traefik/pull/2143) by [marco-jantke](https://github.com/marco-jantke))
- Fixes entry points configuration. ([#2120](https://github.com/containous/traefik/pull/2120) by [ldez](https://github.com/ldez))
- Delay first version check ([#2215](https://github.com/containous/traefik/pull/2215) by [emilevauge](https://github.com/emilevauge))
- Move http2 configure transport ([#2231](https://github.com/containous/traefik/pull/2231) by [Juliens](https://github.com/Juliens))
- Fix error in prepareServer ([#2076](https://github.com/containous/traefik/pull/2076) by [emilevauge](https://github.com/emilevauge))
- New entry point parser. ([#2248](https://github.com/containous/traefik/pull/2248) by [ldez](https://github.com/ldez))
- Add TrustForwardHeader options. ([#2262](https://github.com/containous/traefik/pull/2262) by [ldez](https://github.com/ldez))
- `bug` command. ([#2178](https://github.com/containous/traefik/pull/2178) by [ldez](https://github.com/ldez))

**Documentation:**
- **[acme,provider]** Enhance documentation readability. ([#2095](https://github.com/containous/traefik/pull/2095) by [ldez](https://github.com/ldez))
- **[acme,provider]** Fix whitespaces ([#2075](https://github.com/containous/traefik/pull/2075) by [chulkilee](https://github.com/chulkilee))
- **[acme,provider]** Re-organize documentation ([#2012](https://github.com/containous/traefik/pull/2012) by [jmaitrehenry](https://github.com/jmaitrehenry))
- **[acme]** Fix grammar ([#2208](https://github.com/containous/traefik/pull/2208) by [mvasin](https://github.com/mvasin))
- **[acme]** Add guide for Docker, Traefik & Letsencrypt ([#1923](https://github.com/containous/traefik/pull/1923) by [mvdstam](https://github.com/mvdstam))
- **[acme]** Improve Let's Encrypt documentation ([#1885](https://github.com/containous/traefik/pull/1885) by [nmengin](https://github.com/nmengin))
- **[acme]** Update docs for dnsimple env vars. ([#1872](https://github.com/containous/traefik/pull/1872) by [untalpierre](https://github.com/untalpierre))
- **[api]** Add examples of proxying ping ([#2102](https://github.com/containous/traefik/pull/2102) by [deitch](https://github.com/deitch))
- **[authentication,k8s]** traefik controller access to secrets ([#1707](https://github.com/containous/traefik/pull/1707) by [spinto](https://github.com/spinto))
- **[consul,tls]** doc change regarding consul SSL ([#1774](https://github.com/containous/traefik/pull/1774) by [bitsofinfo](https://github.com/bitsofinfo))
- **[consulcatalog,docker,ecs,k8s,marathon,rancher,sticky-session]** Stickiness documentation ([#2238](https://github.com/containous/traefik/pull/2238) by [ldez](https://github.com/ldez))
- **[consul]** added consul acl token note ([#1720](https://github.com/containous/traefik/pull/1720) by [bitsofinfo](https://github.com/bitsofinfo))
- **[docker]** Updating Docker output and curl for sticky sessions ([#2150](https://github.com/containous/traefik/pull/2150) by [jtyr](https://github.com/jtyr))
- **[docker]** Add more visibility to docker stack deploy label issue ([#1984](https://github.com/containous/traefik/pull/1984) by [jmaitrehenry](https://github.com/jmaitrehenry))
- **[ecs]** Fix IAM policy sid. ([#2066](https://github.com/containous/traefik/pull/2066) by [charlieoleary](https://github.com/charlieoleary))
- **[k8s,marathon]** Mark Marathon and Kubernetes as constraint-supporting. ([#1964](https://github.com/containous/traefik/pull/1964) by [timoreimann](https://github.com/timoreimann))
- **[k8s]** Add guide section on production advice, esp. CPU. ([#2113](https://github.com/containous/traefik/pull/2113) by [timoreimann](https://github.com/timoreimann))
- **[k8s]** Document ways to partition Ingresses in the k8s guide. ([#2223](https://github.com/containous/traefik/pull/2223) by [timoreimann](https://github.com/timoreimann))
- **[k8s]** Remove pod from RBAC rules. ([#2229](https://github.com/containous/traefik/pull/2229) by [timoreimann](https://github.com/timoreimann))
- **[k8s]** Quote priority values in annotation examples. ([#2230](https://github.com/containous/traefik/pull/2230) by [timoreimann](https://github.com/timoreimann))
- **[k8s]** Fix invalid service yaml example ([#2059](https://github.com/containous/traefik/pull/2059) by [kairen](https://github.com/kairen))
- **[k8s]** Update usage of `.local` with `.minikube` in k8s docs ([#1551](https://github.com/containous/traefik/pull/1551) by [errm](https://github.com/errm))
- **[k8s]** Update the documentation to use DaemonSet or Deployment ([#1735](https://github.com/containous/traefik/pull/1735) by [saschagrunert](https://github.com/saschagrunert))
- **[k8s]** Fix docs about default namespaces. ([#1961](https://github.com/containous/traefik/pull/1961) by [timoreimann](https://github.com/timoreimann))
- **[k8s]** Moved namespace to correct place ([#1911](https://github.com/containous/traefik/pull/1911) by [markround](https://github.com/markround))
- **[k8s]** examples/k8s: fix ui ingress port out of sync with deployment ([#1943](https://github.com/containous/traefik/pull/1943) by [borancar](https://github.com/borancar))
- **[k8s]** Add secrets resource to in-line RBAC spec. ([#1890](https://github.com/containous/traefik/pull/1890) by [timoreimann](https://github.com/timoreimann))
- **[k8s]** Improve documentation. ([#1831](https://github.com/containous/traefik/pull/1831) by [timoreimann](https://github.com/timoreimann))
- **[marathon]** Fix documentation glitches. ([#1996](https://github.com/containous/traefik/pull/1996) by [timoreimann](https://github.com/timoreimann))
- **[metrics]** Enhance web backend documentation ([#2122](https://github.com/containous/traefik/pull/2122) by [ldez](https://github.com/ldez))
- **[mesos]** fix: documentation Mesos. ([#2029](https://github.com/containous/traefik/pull/2029) by [ldez](https://github.com/ldez))
- **[middleware]** Improve compression documentation ([#2184](https://github.com/containous/traefik/pull/2184) by [errm](https://github.com/errm))
- **[provider]** Clarify that provider-enabling argument parameters set all defaults. ([#1830](https://github.com/containous/traefik/pull/1830) by [timoreimann](https://github.com/timoreimann))
- **[rancher]** Update Rancher documentation. ([#1776](https://github.com/containous/traefik/pull/1776) by [ldez](https://github.com/ldez))
- **[webui]** Document yarnpkg. ([#1558](https://github.com/containous/traefik/pull/1558) by [Stibbons](https://github.com/Stibbons))
- Add forward auth documentation. ([#2110](https://github.com/containous/traefik/pull/2110) by [ldez](https://github.com/ldez))
- User guide gRPC ([#2108](https://github.com/containous/traefik/pull/2108) by [Juliens](https://github.com/Juliens))
- Document custom error page restrictions. ([#2104](https://github.com/containous/traefik/pull/2104) by [timoreimann](https://github.com/timoreimann))
- Prepare release v1.4.0-rc3 ([#2135](https://github.com/containous/traefik/pull/2135) by [Juliens](https://github.com/Juliens))
- Update gRPC example ([#2191](https://github.com/containous/traefik/pull/2191) by [jsenon](https://github.com/jsenon))
- Prepare release v1.4.0-rc2 ([#2091](https://github.com/containous/traefik/pull/2091) by [ldez](https://github.com/ldez))
- Fix grammar mistake in the kv-config docs ([#2197](https://github.com/containous/traefik/pull/2197) by [chr4](https://github.com/chr4))
- Update cluster.md ([#2073](https://github.com/containous/traefik/pull/2073) by [kmbremner](https://github.com/kmbremner))
- Prepare release v1.4.0-rc4 ([#2201](https://github.com/containous/traefik/pull/2201) by [nmengin](https://github.com/nmengin))
- Prepare release v1.4.0-rc5 ([#2241](https://github.com/containous/traefik/pull/2241) by [ldez](https://github.com/ldez))
- Enhance documentation. ([#2048](https://github.com/containous/traefik/pull/2048) by [ldez](https://github.com/ldez))
- doc: add notes on server urls with path ([#2045](https://github.com/containous/traefik/pull/2045) by [chulkilee](https://github.com/chulkilee))
- Enhance security headers doc. ([#2042](https://github.com/containous/traefik/pull/2042) by [ldez](https://github.com/ldez))
- HTTPS for images, video and links in docs. ([#2041](https://github.com/containous/traefik/pull/2041) by [ldez](https://github.com/ldez))
- Fix error pages configuration. ([#2038](https://github.com/containous/traefik/pull/2038) by [ldez](https://github.com/ldez))
- Fix Proxy Protocol documentation ([#2253](https://github.com/containous/traefik/pull/2253) by [emilevauge](https://github.com/emilevauge))
- Update GraceTimeOut documentation ([#1875](https://github.com/containous/traefik/pull/1875) by [marco-jantke](https://github.com/marco-jantke))
- Release cycle. ([#1812](https://github.com/containous/traefik/pull/1812) by [ldez](https://github.com/ldez))
- Update contributing guide build steps ([#1801](https://github.com/containous/traefik/pull/1801) by [jsturtevant](https://github.com/jsturtevant))
- Add Nicolas Mengin to maintainers ([#1792](https://github.com/containous/traefik/pull/1792) by [emilevauge](https://github.com/emilevauge))
- Add Julien Salleyron to maintainers ([#1790](https://github.com/containous/traefik/pull/1790) by [emilevauge](https://github.com/emilevauge))
- Change to a more flexible PR review process ([#1781](https://github.com/containous/traefik/pull/1781) by [emilevauge](https://github.com/emilevauge))
- Traefik "bug" command documentation ([#1811](https://github.com/containous/traefik/pull/1811) by [ldez](https://github.com/ldez))
- Change Traefik intro video ([#1893](https://github.com/containous/traefik/pull/1893) by [emilevauge](https://github.com/emilevauge))
- Prepare release v1.4.0-rc1 ([#2021](https://github.com/containous/traefik/pull/2021) by [ldez](https://github.com/ldez))
- Add play-with-docker example ([#1726](https://github.com/containous/traefik/pull/1726) by [marcosnils](https://github.com/marcosnils))
- Add Marco Jantke to maintainers ([#1980](https://github.com/containous/traefik/pull/1980) by [emilevauge](https://github.com/emilevauge))
- Remove Russel from maintainers ([#1614](https://github.com/containous/traefik/pull/1614) by [emilevauge](https://github.com/emilevauge))
- Update CONTRIBUTING.md. ([#1667](https://github.com/containous/traefik/pull/1667) by [timoreimann](https://github.com/timoreimann))
- drop "slave" wording for "worker" ([#1645](https://github.com/containous/traefik/pull/1645) by [djalal](https://github.com/djalal))
- Use more inclusive language in README.md {guys => folks} ([#1640](https://github.com/containous/traefik/pull/1640) by [igorwwwwwwwwwwwwwwwwwwww](https://github.com/igorwwwwwwwwwwwwwwwwwwww))
- Remove Thomas Recloux from maintainers ([#1616](https://github.com/containous/traefik/pull/1616) by [emilevauge](https://github.com/emilevauge))
- Update documentation for 1.4 release ([#2011](https://github.com/containous/traefik/pull/2011) by [emilevauge](https://github.com/emilevauge))
- Small toml documentation update ([#1603](https://github.com/containous/traefik/pull/1603) by [antoine-aumjaud](https://github.com/antoine-aumjaud))
- Add @ldez to maintainers ([#1589](https://github.com/containous/traefik/pull/1589) by [emilevauge](https://github.com/emilevauge))
- doc: add labels documentation. ([#1582](https://github.com/containous/traefik/pull/1582) by [ldez](https://github.com/ldez))
- Update golang version in contributing guide ([#2018](https://github.com/containous/traefik/pull/2018) by [ArikaChen](https://github.com/ArikaChen))
- toml page - replace li by table ([#1995](https://github.com/containous/traefik/pull/1995) by [jmaitrehenry](https://github.com/jmaitrehenry))

**Misc:**
- Merge v1.3.7 ([#2013](https://github.com/containous/traefik/pull/2013) by [ldez](https://github.com/ldez))
- Merge 1.3.6 ([#1992](https://github.com/containous/traefik/pull/1992) by [ldez](https://github.com/ldez))
- Merge 1.3.5 ([#1909](https://github.com/containous/traefik/pull/1909) by [ldez](https://github.com/ldez))
- Merge 1.3.3 ([#1836](https://github.com/containous/traefik/pull/1836) by [ldez](https://github.com/ldez))
- Merge v1.3.2 to master ([#1809](https://github.com/containous/traefik/pull/1809) by [ldez](https://github.com/ldez))
- Merge current v1.3 ([#1797](https://github.com/containous/traefik/pull/1797) by [ldez](https://github.com/ldez))
- Merge current v1.3 ([#1786](https://github.com/containous/traefik/pull/1786) by [ldez](https://github.com/ldez))
- Merge v1.3.1 to master ([#1763](https://github.com/containous/traefik/pull/1763) by [ldez](https://github.com/ldez))
- Merge current v1.3 ([#1753](https://github.com/containous/traefik/pull/1753) by [ldez](https://github.com/ldez))
- Merge current v1.3 ([#1705](https://github.com/containous/traefik/pull/1705) by [ldez](https://github.com/ldez))
- Merge current v1.3 to master ([#1697](https://github.com/containous/traefik/pull/1697) by [ldez](https://github.com/ldez))
- Merge v1 3 0 ([#1692](https://github.com/containous/traefik/pull/1692) by [ldez](https://github.com/ldez))
- Merge current v1.3 to master (rc3) ([#1666](https://github.com/containous/traefik/pull/1666) by [ldez](https://github.com/ldez))
- Merge current v1.3 to master ([#1643](https://github.com/containous/traefik/pull/1643) by [ldez](https://github.com/ldez))
- Merge v1.3.0-rc2 master ([#1613](https://github.com/containous/traefik/pull/1613) by [emilevauge](https://github.com/emilevauge))
- Merge v1.3 branch into master [2017-05-11] ([#1548](https://github.com/containous/traefik/pull/1548) by [timoreimann](https://github.com/timoreimann))

## [v1.4.0-rc5](https://github.com/containous/traefik/tree/v1.4.0-rc5) (2017-10-10)
[All Commits](https://github.com/containous/traefik/compare/v1.4.0-rc4...v1.4.0-rc5)

**Enhancements:**
- **[middleware]** Add trusted whitelist proxy protocol ([#2234](https://github.com/containous/traefik/pull/2234) by [emilevauge](https://github.com/emilevauge))

**Bug fixes:**
- **[consul,docker,ecs,k8s,marathon,rancher,sticky-session]** Stickiness cookie name ([#2232](https://github.com/containous/traefik/pull/2232) by [ldez](https://github.com/ldez))
- **[logs]** Fix flakiness in log rotation test ([#2213](https://github.com/containous/traefik/pull/2213) by [marco-jantke](https://github.com/marco-jantke))
- **[middleware]** Enable prefix matching within slash boundaries ([#2214](https://github.com/containous/traefik/pull/2214) by [marco-jantke](https://github.com/marco-jantke))
- **[sticky-session]** Sanitize cookie names. ([#2216](https://github.com/containous/traefik/pull/2216) by [timoreimann](https://github.com/timoreimann))
- Move http2 configure transport ([#2231](https://github.com/containous/traefik/pull/2231) by [Juliens](https://github.com/Juliens))
- Delay first version check ([#2215](https://github.com/containous/traefik/pull/2215) by [emilevauge](https://github.com/emilevauge))

**Documentation:**
- **[acme]** Fix grammar ([#2208](https://github.com/containous/traefik/pull/2208) by [mvasin](https://github.com/mvasin))
- **[docker,ecs,k8s,marathon,rancher]** Stickiness documentation ([#2238](https://github.com/containous/traefik/pull/2238) by [ldez](https://github.com/ldez))
- **[k8s]** Quote priority values in annotation examples. ([#2230](https://github.com/containous/traefik/pull/2230) by [timoreimann](https://github.com/timoreimann))
- **[k8s]** Remove pod from RBAC rules. ([#2229](https://github.com/containous/traefik/pull/2229) by [timoreimann](https://github.com/timoreimann))
- **[k8s]** Document ways to partition Ingresses in the k8s guide. ([#2223](https://github.com/containous/traefik/pull/2223) by [timoreimann](https://github.com/timoreimann))

## [v1.4.0-rc4](https://github.com/containous/traefik/tree/v1.4.0-rc4) (2017-10-02)
[All Commits](https://github.com/containous/traefik/compare/v1.4.0-rc3...v1.4.0-rc4)

Makefile: 2 lines changed
@@ -97,7 +97,7 @@ dist:

run-dev:
	go generate
	go build
	go build ./cmd/traefik
	./traefik

generate-webui: build-webui

@@ -119,19 +119,8 @@ func (d *Datastore) watchChanges() error {

func (d *Datastore) reload() error {
	log.Debug("Datastore reload")
	d.localLock.Lock()
	err := d.kv.LoadConfig(d.meta)
	if err != nil {
		d.localLock.Unlock()
		return err
	}
	err = d.meta.unmarshall()
	if err != nil {
		d.localLock.Unlock()
		return err
	}
	d.localLock.Unlock()
	return nil
	_, err := d.Load()
	return err
}

// Begin creates a transaction with the KV store.

@@ -84,7 +84,9 @@ func TestDo_globalConfiguration(t *testing.T) {
			},
			WhitelistSourceRange: []string{"foo WhitelistSourceRange 1", "foo WhitelistSourceRange 2", "foo WhitelistSourceRange 3"},
			Compress: true,
			ProxyProtocol: true,
			ProxyProtocol: &configuration.ProxyProtocol{
				TrustedIPs: []string{"127.0.0.1/32", "192.168.0.1"},
			},
		},
		"fii": {
			Network: "fii Network",
@@ -125,7 +127,9 @@ func TestDo_globalConfiguration(t *testing.T) {
			},
			WhitelistSourceRange: []string{"fii WhitelistSourceRange 1", "fii WhitelistSourceRange 2", "fii WhitelistSourceRange 3"},
			Compress: true,
			ProxyProtocol: true,
			ProxyProtocol: &configuration.ProxyProtocol{
				TrustedIPs: []string{"127.0.0.1/32", "192.168.0.1"},
			},
		},
	}
	config.Cluster = &types.Cluster{

@@ -1,8 +1,9 @@
package anonymize

import (
	"github.com/stretchr/testify/assert"
	"testing"

	"github.com/stretchr/testify/assert"
)

func Test_doOnJSON(t *testing.T) {

@@ -9,20 +9,20 @@ import (
	"os"
	"path/filepath"
	"reflect"
	"runtime"
	"strings"
	"time"

	"github.com/Sirupsen/logrus"
	"github.com/cenk/backoff"
	"github.com/containous/flaeg"
	"github.com/containous/staert"
	"github.com/containous/traefik/acme"
	"github.com/containous/traefik/cluster"
	"github.com/containous/traefik/configuration"
	"github.com/containous/traefik/job"
	"github.com/containous/traefik/log"
	"github.com/containous/traefik/provider/ecs"
	"github.com/containous/traefik/provider/kubernetes"
	"github.com/containous/traefik/provider/rancher"
	"github.com/containous/traefik/safe"
	"github.com/containous/traefik/server"
	"github.com/containous/traefik/server/uuid"
@@ -33,8 +33,6 @@ import (
)

func main() {
	runtime.GOMAXPROCS(runtime.NumCPU())

	//traefik config inits
	traefikConfiguration := NewTraefikConfiguration()
	traefikPointersConfiguration := NewTraefikDefaultPointersConfiguration()
@@ -119,6 +117,8 @@ Complete documentation is available at https://traefik.io`,
		Config: traefikConfiguration,
		DefaultPointersConfig: traefikPointersConfiguration,
		Run: func() error {
			traefikConfiguration.GlobalConfiguration.SetEffectiveConfiguration()

			if traefikConfiguration.Web == nil {
				fmt.Println("Please enable the web provider to use healtcheck.")
				os.Exit(1)
@@ -132,7 +132,8 @@ Complete documentation is available at https://traefik.io`,
		}
		client.Transport = tr
	}
	resp, err := client.Head(protocol + "://" + traefikConfiguration.Web.Address + "/ping")

	resp, err := client.Head(protocol + "://" + traefikConfiguration.Web.Address + traefikConfiguration.Web.Path + "ping")
	if err != nil {
		fmt.Printf("Error calling healthcheck: %s\n", err)
		os.Exit(1)
@@ -209,7 +210,15 @@ Complete documentation is available at https://traefik.io`,
		traefikConfiguration.Cluster.Store = &types.Store{Prefix: kv.Prefix, Store: kv.Store}
	}
	s.AddSource(kv)
	if _, err := s.LoadConfig(); err != nil {
	operation := func() error {
		_, err := s.LoadConfig()
		return err
	}
	notify := func(err error, time time.Duration) {
		log.Errorf("Load config error: %+v, retrying in %s", err, time)
	}
	err := backoff.RetryNotify(safe.OperationWithRecover(operation), job.NewBackOff(backoff.NewExponentialBackOff()), notify)
	if err != nil {
		fmtlog.Printf("Error loading configuration: %s\n", err)
		os.Exit(-1)
	}
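The hunk above wraps the initial KV configuration load in a retry with exponential backoff (the change described as "add retry backoff to staert config loading" in the changelog). A stdlib-only Go sketch of the same retry/notify idea, using hypothetical names and shortened delays rather than the backoff library actually used in the diff:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryNotify keeps retrying op, doubling the wait between attempts,
// and reports each failure through notify. It is a simplified stand-in
// for the backoff.RetryNotify call in the diff above.
func retryNotify(op func() error, notify func(error, time.Duration), maxAttempts int) error {
	wait := 100 * time.Millisecond
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = op(); err == nil {
			return nil
		}
		notify(err, wait)
		time.Sleep(wait)
		wait *= 2
	}
	return err
}

func main() {
	attempts := 0
	loadConfig := func() error { // hypothetical operation: succeeds on the third try
		attempts++
		if attempts < 3 {
			return errors.New("kv store not ready")
		}
		return nil
	}
	notify := func(err error, next time.Duration) {
		fmt.Printf("load config error: %v, retrying in %s\n", err, next)
	}
	if err := retryNotify(loadConfig, notify, 5); err != nil {
		fmt.Println("giving up:", err)
	}
}
```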
@@ -228,36 +237,7 @@ func run(globalConfiguration *configuration.GlobalConfiguration) {

	http.DefaultTransport.(*http.Transport).Proxy = http.ProxyFromEnvironment

	if len(globalConfiguration.EntryPoints) == 0 {
		globalConfiguration.EntryPoints = map[string]*configuration.EntryPoint{"http": {Address: ":80"}}
		globalConfiguration.DefaultEntryPoints = []string{"http"}
	}

	if globalConfiguration.Rancher != nil {
		// Ensure backwards compatibility for now
		if len(globalConfiguration.Rancher.AccessKey) > 0 ||
			len(globalConfiguration.Rancher.Endpoint) > 0 ||
			len(globalConfiguration.Rancher.SecretKey) > 0 {

			if globalConfiguration.Rancher.API == nil {
				globalConfiguration.Rancher.API = &rancher.APIConfiguration{
					AccessKey: globalConfiguration.Rancher.AccessKey,
					SecretKey: globalConfiguration.Rancher.SecretKey,
					Endpoint: globalConfiguration.Rancher.Endpoint,
				}
			}
			log.Warn("Deprecated configuration found: rancher.[accesskey|secretkey|endpoint]. " +
				"Please use rancher.api.[accesskey|secretkey|endpoint] instead.")
		}

		if globalConfiguration.Rancher.Metadata != nil && len(globalConfiguration.Rancher.Metadata.Prefix) == 0 {
			globalConfiguration.Rancher.Metadata.Prefix = "latest"
		}
	}

	if globalConfiguration.Debug {
		globalConfiguration.LogLevel = "DEBUG"
	}
	globalConfiguration.SetEffectiveConfiguration()

	// logging
	level, err := logrus.ParseLevel(strings.ToLower(globalConfiguration.LogLevel))
@@ -265,6 +245,7 @@ func run(globalConfiguration *configuration.GlobalConfiguration) {
		log.Error("Error getting level", err)
	}
	log.SetLevel(level)

	if len(globalConfiguration.TraefikLogsFile) > 0 {
		dir := filepath.Dir(globalConfiguration.TraefikLogsFile)

@@ -361,6 +342,7 @@ func CreateKvSource(traefikConfiguration *TraefikConfiguration) (*staert.KvSourc
func checkNewVersion() {
	ticker := time.NewTicker(24 * time.Hour)
	safe.Go(func() {
		time.Sleep(10 * time.Minute)
		version.CheckNewVersion()
		for {
			select {

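The checkNewVersion change above delays the first version check instead of running it immediately ("Delay first version check" in the changelog). A minimal stdlib-only sketch of that pattern, with durations shortened purely for illustration:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Delay the first check, then repeat on every ticker tick.
	ticker := time.NewTicker(200 * time.Millisecond)
	defer ticker.Stop()

	check := func() { fmt.Println("checking for new version at", time.Now().Format("15:04:05.000")) }

	time.Sleep(100 * time.Millisecond) // delayed first check
	check()

	for i := 0; i < 3; i++ {
		<-ticker.C
		check()
	}
}
```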
@@ -5,12 +5,12 @@ import (
	"fmt"
	"io/ioutil"
	"os"
	"regexp"
	"strings"
	"time"

	"github.com/containous/flaeg"
	"github.com/containous/traefik/acme"
	"github.com/containous/traefik/log"
	"github.com/containous/traefik/provider/boltdb"
	"github.com/containous/traefik/provider/consul"
	"github.com/containous/traefik/provider/docker"
@@ -80,6 +80,56 @@ type GlobalConfiguration struct {
	DynamoDB *dynamodb.Provider `description:"Enable DynamoDB backend with default settings" export:"true"`
}

// SetEffectiveConfiguration adds missing configuration parameters derived from existing ones.
// It also takes care of maintaining backwards compatibility.
func (gc *GlobalConfiguration) SetEffectiveConfiguration() {
	if len(gc.EntryPoints) == 0 {
		gc.EntryPoints = map[string]*EntryPoint{"http": {
			Address: ":80",
			ForwardedHeaders: &ForwardedHeaders{Insecure: true},
		}}
		gc.DefaultEntryPoints = []string{"http"}
	}

	// ForwardedHeaders must be remove in the next breaking version
	for entryPointName := range gc.EntryPoints {
		entryPoint := gc.EntryPoints[entryPointName]
		if entryPoint.ForwardedHeaders == nil {
			entryPoint.ForwardedHeaders = &ForwardedHeaders{Insecure: true}
		}
	}

	if gc.Rancher != nil {
		// Ensure backwards compatibility for now
		if len(gc.Rancher.AccessKey) > 0 ||
			len(gc.Rancher.Endpoint) > 0 ||
			len(gc.Rancher.SecretKey) > 0 {

			if gc.Rancher.API == nil {
				gc.Rancher.API = &rancher.APIConfiguration{
					AccessKey: gc.Rancher.AccessKey,
					SecretKey: gc.Rancher.SecretKey,
					Endpoint: gc.Rancher.Endpoint,
				}
			}
			log.Warn("Deprecated configuration found: rancher.[accesskey|secretkey|endpoint]. " +
				"Please use rancher.api.[accesskey|secretkey|endpoint] instead.")
		}

		if gc.Rancher.Metadata != nil && len(gc.Rancher.Metadata.Prefix) == 0 {
			gc.Rancher.Metadata.Prefix = "latest"
		}
	}

	if gc.Debug {
		gc.LogLevel = "DEBUG"
	}

	if gc.Web != nil && (gc.Web.Path == "" || !strings.HasSuffix(gc.Web.Path, "/")) {
		gc.Web.Path += "/"
	}
}

// DefaultEntryPoints holds default entry points
type DefaultEntryPoints []string

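SetEffectiveConfiguration above guarantees that the web path always ends with a trailing slash, which is what allows the healthcheck command earlier in this diff to build its URL as Web.Address + Web.Path + "ping". A small sketch of that normalization rule, using a hypothetical helper name rather than the real method:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeWebPath mirrors the trailing-slash rule applied in
// SetEffectiveConfiguration above, so that "<path>ping" concatenation
// always yields a valid URL path.
func normalizeWebPath(path string) string {
	if path == "" || !strings.HasSuffix(path, "/") {
		path += "/"
	}
	return path
}

func main() {
	for _, p := range []string{"", "/traefik", "/traefik/"} {
		fmt.Printf("%q -> %q\n", p, normalizeWebPath(p))
	}
	// "" -> "/", "/traefik" -> "/traefik/", "/traefik/" -> "/traefik/"
}
```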
@@ -193,72 +243,101 @@ func (ep *EntryPoints) String() string {
// Set's argument is a string to be parsed to set the flag.
// It's a comma-separated list, so we split it.
func (ep *EntryPoints) Set(value string) error {
	result, err := parseEntryPointsConfiguration(value)
	if err != nil {
		return err
	}
	result := parseEntryPointsConfiguration(value)

	var configTLS *TLS
	if len(result["TLS"]) > 0 {
	if len(result["tls"]) > 0 {
		certs := Certificates{}
		if err := certs.Set(result["TLS"]); err != nil {
		if err := certs.Set(result["tls"]); err != nil {
			return err
		}
		configTLS = &TLS{
			Certificates: certs,
		}
	} else if len(result["TLSACME"]) > 0 {
	} else if len(result["tls_acme"]) > 0 {
		configTLS = &TLS{
			Certificates: Certificates{},
		}
	}
	if len(result["CA"]) > 0 {
		files := strings.Split(result["CA"], ",")
	if len(result["ca"]) > 0 {
		files := strings.Split(result["ca"], ",")
		configTLS.ClientCAFiles = files
	}
	var redirect *Redirect
	if len(result["RedirectEntryPoint"]) > 0 || len(result["RedirectRegex"]) > 0 || len(result["RedirectReplacement"]) > 0 {
	if len(result["redirect_entrypoint"]) > 0 || len(result["redirect_regex"]) > 0 || len(result["redirect_replacement"]) > 0 {
		redirect = &Redirect{
			EntryPoint: result["RedirectEntryPoint"],
			Regex: result["RedirectRegex"],
			Replacement: result["RedirectReplacement"],
			EntryPoint: result["redirect_entrypoint"],
			Regex: result["redirect_regex"],
			Replacement: result["redirect_replacement"],
		}
	}

	whiteListSourceRange := []string{}
	if len(result["WhiteListSourceRange"]) > 0 {
		whiteListSourceRange = strings.Split(result["WhiteListSourceRange"], ",")
	if len(result["whitelistsourcerange"]) > 0 {
		whiteListSourceRange = strings.Split(result["whitelistsourcerange"], ",")
	}

	compress := toBool(result, "Compress")
	proxyProtocol := toBool(result, "ProxyProtocol")
	compress := toBool(result, "compress")

	(*ep)[result["Name"]] = &EntryPoint{
		Address: result["Address"],
	var proxyProtocol *ProxyProtocol
	ppTrustedIPs := result["proxyprotocol_trustedips"]
	if len(result["proxyprotocol_insecure"]) > 0 || len(ppTrustedIPs) > 0 {
		proxyProtocol = &ProxyProtocol{
			Insecure: toBool(result, "proxyprotocol_insecure"),
		}
		if len(ppTrustedIPs) > 0 {
			proxyProtocol.TrustedIPs = strings.Split(ppTrustedIPs, ",")
		}
	}

	// TODO must be changed to false by default in the next breaking version.
	forwardedHeaders := &ForwardedHeaders{Insecure: true}
	if _, ok := result["forwardedheaders_insecure"]; ok {
		forwardedHeaders.Insecure = toBool(result, "forwardedheaders_insecure")
	}

	fhTrustedIPs := result["forwardedheaders_trustedips"]
	if len(fhTrustedIPs) > 0 {
		// TODO must be removed in the next breaking version.
		forwardedHeaders.Insecure = toBool(result, "forwardedheaders_insecure")
		forwardedHeaders.TrustedIPs = strings.Split(fhTrustedIPs, ",")
	}

	if proxyProtocol != nil && proxyProtocol.Insecure {
		log.Warn("ProxyProtocol.Insecure:true is dangerous. Please use 'ProxyProtocol.TrustedIPs:IPs' and remove 'ProxyProtocol.Insecure:true'")
	}

	(*ep)[result["name"]] = &EntryPoint{
		Address: result["address"],
		TLS: configTLS,
		Redirect: redirect,
		Compress: compress,
		WhitelistSourceRange: whiteListSourceRange,
		ProxyProtocol: proxyProtocol,
		ForwardedHeaders: forwardedHeaders,
	}

	return nil
}

func parseEntryPointsConfiguration(value string) (map[string]string, error) {
|
||||
regex := regexp.MustCompile(`(?:Name:(?P<Name>\S*))\s*(?:Address:(?P<Address>\S*))?\s*(?:TLS:(?P<TLS>\S*))?\s*(?P<TLSACME>TLS)?\s*(?:CA:(?P<CA>\S*))?\s*(?:Redirect\.EntryPoint:(?P<RedirectEntryPoint>\S*))?\s*(?:Redirect\.Regex:(?P<RedirectRegex>\S*))?\s*(?:Redirect\.Replacement:(?P<RedirectReplacement>\S*))?\s*(?:Compress:(?P<Compress>\S*))?\s*(?:WhiteListSourceRange:(?P<WhiteListSourceRange>\S*))?\s*(?:ProxyProtocol:(?P<ProxyProtocol>\S*))?`)
|
||||
match := regex.FindAllStringSubmatch(value, -1)
|
||||
if match == nil {
|
||||
return nil, fmt.Errorf("bad EntryPoints format: %s", value)
|
||||
}
|
||||
matchResult := match[0]
|
||||
result := make(map[string]string)
|
||||
for i, name := range regex.SubexpNames() {
|
||||
if i != 0 && len(matchResult[i]) != 0 {
|
||||
result[name] = matchResult[i]
|
||||
func parseEntryPointsConfiguration(raw string) map[string]string {
|
||||
sections := strings.Fields(raw)
|
||||
|
||||
config := make(map[string]string)
|
||||
for _, part := range sections {
|
||||
field := strings.SplitN(part, ":", 2)
|
||||
name := strings.ToLower(strings.Replace(field[0], ".", "_", -1))
|
||||
if len(field) > 1 {
|
||||
config[name] = field[1]
|
||||
} else {
|
||||
if strings.EqualFold(name, "TLS") {
|
||||
config["tls_acme"] = "TLS"
|
||||
} else {
|
||||
config[name] = ""
|
||||
}
|
||||
}
|
||||
}
|
||||
return result, nil
|
||||
return config
|
||||
}
|
||||
|
||||
func toBool(conf map[string]string, key string) bool {
|
||||
@@ -293,8 +372,9 @@ type EntryPoint struct {
|
||||
Redirect *Redirect `export:"true"`
|
||||
Auth *types.Auth `export:"true"`
|
||||
WhitelistSourceRange []string
|
||||
Compress bool `export:"true"`
|
||||
ProxyProtocol bool `export:"true"`
|
||||
Compress bool `export:"true"`
|
||||
ProxyProtocol *ProxyProtocol `export:"true"`
|
||||
ForwardedHeaders *ForwardedHeaders `export:"true"`
|
||||
}
|
||||
|
||||
// Redirect configures a redirection of an entry point to another, or to an URL
|
||||
@@ -443,3 +523,15 @@ type ForwardingTimeouts struct {
|
||||
DialTimeout flaeg.Duration `description:"The amount of time to wait until a connection to a backend server can be established. Defaults to 30 seconds. If zero, no timeout exists" export:"true"`
|
||||
ResponseHeaderTimeout flaeg.Duration `description:"The amount of time to wait for a server's response headers after fully writing the request (including its body, if any). If zero, no timeout exists" export:"true"`
|
||||
}
|
||||
|
||||
// ProxyProtocol contains Proxy-Protocol configuration
|
||||
type ProxyProtocol struct {
|
||||
Insecure bool
|
||||
TrustedIPs []string
|
||||
}
|
||||
|
||||
// ForwardedHeaders Trust client forwarding headers
|
||||
type ForwardedHeaders struct {
|
||||
Insecure bool
|
||||
TrustedIPs []string
|
||||
}
|
||||
|
||||
@@ -1,7 +1,6 @@
|
||||
package configuration
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
@@ -16,44 +15,37 @@ func Test_parseEntryPointsConfiguration(t *testing.T) {
|
||||
}{
|
||||
{
|
||||
name: "all parameters",
|
||||
value: "Name:foo Address:bar TLS:goo TLS CA:car Redirect.EntryPoint:RedirectEntryPoint Redirect.Regex:RedirectRegex Redirect.Replacement:RedirectReplacement Compress:true WhiteListSourceRange:WhiteListSourceRange ProxyProtocol:true",
|
||||
value: "Name:foo TLS:goo TLS CA:car Redirect.EntryPoint:RedirectEntryPoint Redirect.Regex:RedirectRegex Redirect.Replacement:RedirectReplacement Compress:true WhiteListSourceRange:WhiteListSourceRange ProxyProtocol.TrustedIPs:192.168.0.1 ProxyProtocol.Insecure:false Address::8000",
|
||||
expectedResult: map[string]string{
|
||||
"Name": "foo",
|
||||
"Address": "bar",
|
||||
"CA": "car",
|
||||
"TLS": "goo",
|
||||
"TLSACME": "TLS",
|
||||
"RedirectEntryPoint": "RedirectEntryPoint",
|
||||
"RedirectRegex": "RedirectRegex",
|
||||
"RedirectReplacement": "RedirectReplacement",
|
||||
"WhiteListSourceRange": "WhiteListSourceRange",
|
||||
"ProxyProtocol": "true",
|
||||
"Compress": "true",
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "proxy protocol on",
|
||||
value: "Name:foo ProxyProtocol:on",
|
||||
expectedResult: map[string]string{
|
||||
"Name": "foo",
|
||||
"ProxyProtocol": "on",
|
||||
"name": "foo",
|
||||
"address": ":8000",
|
||||
"ca": "car",
|
||||
"tls": "goo",
|
||||
"tls_acme": "TLS",
|
||||
"redirect_entrypoint": "RedirectEntryPoint",
|
||||
"redirect_regex": "RedirectRegex",
|
||||
"redirect_replacement": "RedirectReplacement",
|
||||
"whitelistsourcerange": "WhiteListSourceRange",
|
||||
"proxyprotocol_trustedips": "192.168.0.1",
|
||||
"proxyprotocol_insecure": "false",
|
||||
"compress": "true",
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "compress on",
|
||||
value: "Name:foo Compress:on",
|
||||
value: "name:foo Compress:on",
|
||||
expectedResult: map[string]string{
|
||||
"Name": "foo",
|
||||
"Compress": "on",
|
||||
"name": "foo",
|
||||
"compress": "on",
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "TLS",
|
||||
value: "Name:foo TLS:goo TLS",
|
||||
expectedResult: map[string]string{
|
||||
"Name": "foo",
|
||||
"TLS": "goo",
|
||||
"TLSACME": "TLS",
|
||||
"name": "foo",
|
||||
"tls": "goo",
|
||||
"tls_acme": "TLS",
|
||||
},
|
||||
},
|
||||
}
|
||||
@@ -63,14 +55,7 @@ func Test_parseEntryPointsConfiguration(t *testing.T) {
|
||||
t.Run(test.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
conf, err := parseEntryPointsConfiguration(test.value)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
for key, value := range conf {
|
||||
fmt.Println(key, value)
|
||||
}
|
||||
conf := parseEntryPointsConfiguration(test.value)
|
||||
|
||||
assert.Len(t, conf, len(test.expectedResult))
|
||||
assert.Equal(t, test.expectedResult, conf)
|
||||
@@ -141,18 +126,23 @@ func TestEntryPoints_Set(t *testing.T) {
|
||||
expectedEntryPoint *EntryPoint
|
||||
}{
|
||||
{
|
||||
name: "all parameters",
|
||||
expression: "Name:foo Address:bar TLS:goo,gii TLS CA:car Redirect.EntryPoint:RedirectEntryPoint Redirect.Regex:RedirectRegex Redirect.Replacement:RedirectReplacement Compress:true WhiteListSourceRange:Range ProxyProtocol:true",
|
||||
name: "all parameters camelcase",
|
||||
expression: "Name:foo Address::8000 TLS:goo,gii TLS CA:car Redirect.EntryPoint:RedirectEntryPoint Redirect.Regex:RedirectRegex Redirect.Replacement:RedirectReplacement Compress:true WhiteListSourceRange:Range ProxyProtocol.TrustedIPs:192.168.0.1 ForwardedHeaders.TrustedIPs:10.0.0.3/24,20.0.0.3/24",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
Address: "bar",
|
||||
Address: ":8000",
|
||||
Redirect: &Redirect{
|
||||
EntryPoint: "RedirectEntryPoint",
|
||||
Regex: "RedirectRegex",
|
||||
Replacement: "RedirectReplacement",
|
||||
},
|
||||
Compress: true,
|
||||
ProxyProtocol: true,
|
||||
Compress: true,
|
||||
ProxyProtocol: &ProxyProtocol{
|
||||
TrustedIPs: []string{"192.168.0.1"},
|
||||
},
|
||||
ForwardedHeaders: &ForwardedHeaders{
|
||||
TrustedIPs: []string{"10.0.0.3/24", "20.0.0.3/24"},
|
||||
},
|
||||
WhitelistSourceRange: []string{"Range"},
|
||||
TLS: &TLS{
|
||||
ClientCAFiles: []string{"car"},
|
||||
@@ -165,6 +155,106 @@ func TestEntryPoints_Set(t *testing.T) {
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "all parameters lowercase",
|
||||
expression: "name:foo address::8000 tls:goo,gii tls ca:car redirect.entryPoint:RedirectEntryPoint redirect.regex:RedirectRegex redirect.replacement:RedirectReplacement compress:true whiteListSourceRange:Range proxyProtocol.trustedIPs:192.168.0.1 forwardedHeaders.trustedIPs:10.0.0.3/24,20.0.0.3/24",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
Address: ":8000",
|
||||
Redirect: &Redirect{
|
||||
EntryPoint: "RedirectEntryPoint",
|
||||
Regex: "RedirectRegex",
|
||||
Replacement: "RedirectReplacement",
|
||||
},
|
||||
Compress: true,
|
||||
ProxyProtocol: &ProxyProtocol{
|
||||
TrustedIPs: []string{"192.168.0.1"},
|
||||
},
|
||||
ForwardedHeaders: &ForwardedHeaders{
|
||||
TrustedIPs: []string{"10.0.0.3/24", "20.0.0.3/24"},
|
||||
},
|
||||
WhitelistSourceRange: []string{"Range"},
|
||||
TLS: &TLS{
|
||||
ClientCAFiles: []string{"car"},
|
||||
Certificates: Certificates{
|
||||
{
|
||||
CertFile: FileOrContent("goo"),
|
||||
KeyFile: FileOrContent("gii"),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "default",
|
||||
expression: "Name:foo",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
WhitelistSourceRange: []string{},
|
||||
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ForwardedHeaders insecure true",
|
||||
expression: "Name:foo ForwardedHeaders.Insecure:true",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
WhitelistSourceRange: []string{},
|
||||
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ForwardedHeaders insecure false",
|
||||
expression: "Name:foo ForwardedHeaders.Insecure:false",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
WhitelistSourceRange: []string{},
|
||||
ForwardedHeaders: &ForwardedHeaders{Insecure: false},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ForwardedHeaders TrustedIPs",
|
||||
expression: "Name:foo ForwardedHeaders.TrustedIPs:10.0.0.3/24,20.0.0.3/24",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
WhitelistSourceRange: []string{},
|
||||
ForwardedHeaders: &ForwardedHeaders{
|
||||
TrustedIPs: []string{"10.0.0.3/24", "20.0.0.3/24"},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ProxyProtocol insecure true",
|
||||
expression: "Name:foo ProxyProtocol.Insecure:true",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
WhitelistSourceRange: []string{},
|
||||
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
|
||||
ProxyProtocol: &ProxyProtocol{Insecure: true},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ProxyProtocol insecure false",
|
||||
expression: "Name:foo ProxyProtocol.Insecure:false",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
WhitelistSourceRange: []string{},
|
||||
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
|
||||
ProxyProtocol: &ProxyProtocol{},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ProxyProtocol TrustedIPs",
|
||||
expression: "Name:foo ProxyProtocol.TrustedIPs:10.0.0.3/24,20.0.0.3/24",
|
||||
expectedEntryPointName: "foo",
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
WhitelistSourceRange: []string{},
|
||||
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
|
||||
ProxyProtocol: &ProxyProtocol{
|
||||
TrustedIPs: []string{"10.0.0.3/24", "20.0.0.3/24"},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "compress on",
|
||||
expression: "Name:foo Compress:on",
|
||||
@@ -172,6 +262,7 @@ func TestEntryPoints_Set(t *testing.T) {
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
Compress: true,
|
||||
WhitelistSourceRange: []string{},
|
||||
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
|
||||
},
|
||||
},
|
||||
{
|
||||
@@ -181,6 +272,7 @@ func TestEntryPoints_Set(t *testing.T) {
|
||||
expectedEntryPoint: &EntryPoint{
|
||||
Compress: true,
|
||||
WhitelistSourceRange: []string{},
|
||||
ForwardedHeaders: &ForwardedHeaders{Insecure: true},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
23
docs/archive.md
Normal file
23
docs/archive.md
Normal file
@@ -0,0 +1,23 @@
|
||||
## Current versions documentation
|
||||
|
||||
- [Latest stable](https://docs.traefik.io)
|
||||
|
||||
- [Experimental](https://master--traefik-docs.netlify.com/)
|
||||
|
||||
## Future version documentation
|
||||
|
||||
- [v1.5 RC](http://v1-5.archive.docs.traefik.io/)
|
||||
|
||||
## Previous versions documentation
|
||||
|
||||
- [v1.4 aka Roquefort](http://v1-4.archive.docs.traefik.io/)
|
||||
|
||||
- [v1.3 aka Raclette](http://v1-3.archive.docs.traefik.io/)
|
||||
|
||||
- [v1.2 aka Morbier](http://v1-2.archive.docs.traefik.io/)
|
||||
|
||||
- [v1.1 aka Camembert](http://v1-1.archive.docs.traefik.io/)
|
||||
|
||||
## More
|
||||
|
||||
[Change log](https://github.com/containous/traefik/blob/master/CHANGELOG.md)
|
||||
@@ -346,12 +346,31 @@ For example:
|
||||
- Another possible value for `extractorfunc` is `client.ip` which will categorize requests based on client source ip.
|
||||
- Lastly `extractorfunc` can take the value of `request.header.ANY_HEADER` which will categorize requests based on `ANY_HEADER` that you provide.
|
||||
|
||||
### Sticky sessions
|
||||
|
||||
Sticky sessions are supported with both load balancers.
|
||||
When sticky sessions are enabled, a cookie called `_TRAEFIK_BACKEND` is set on the initial request.
|
||||
When sticky sessions are enabled, a cookie is set on the initial request.
|
||||
The default cookie name is an abbreviated SHA-1 hash (e.g. `_1d52e`).
|
||||
On subsequent requests, the client will be directed to the backend stored in the cookie if it is still healthy.
|
||||
If not, a new backend will be assigned.
|
||||
|
||||
For example:
|
||||
|
||||
```toml
|
||||
[backends]
|
||||
[backends.backend1]
|
||||
# Enable sticky session
|
||||
[backends.backend1.loadbalancer.stickiness]
|
||||
|
||||
# Customize the cookie name
|
||||
#
|
||||
# Optional
|
||||
# Default: a sha1 (6 chars)
|
||||
#
|
||||
# cookieName = "my_cookie"
|
||||
```
|
||||
|
||||
The deprecated way:
|
||||
|
||||
```toml
|
||||
[backends]
|
||||
[backends.backend1]
|
||||
@@ -359,6 +378,8 @@ For example:
|
||||
sticky = true
|
||||
```
|
||||
|
||||
### Health Check
|
||||
|
||||
A health check can be configured in order to remove a backend from LB rotation as long as it keeps returning HTTP status codes other than `200 OK` to HTTP GET requests periodically carried out by Traefik.
|
||||
The check is defined by a path appended to the backend URL and an interval (given in a format understood by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration)) specifying how often the health check should be executed (the default being 30 seconds).
|
||||
Each backend must respond to the health check within 5 seconds.
|
||||
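A minimal sketch of such a health check definition for the file backend (the backend name, path, and interval values are illustrative):

```toml
[backends]
  [backends.backend1]
    [backends.backend1.healthcheck]
      # Træfik periodically issues GET requests to <server URL> + path
      path = "/health"
      # How often the check runs (time.ParseDuration format)
      interval = "10s"
```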
|
||||
@@ -177,7 +177,7 @@ Enable on demand certificate.
|
||||
This will request a certificate from Let's Encrypt during the first TLS handshake for a hostname that does not yet have a certificate.
|
||||
|
||||
!!! warning
|
||||
TLS handshakes will be slow when requesting a hostname certificate for the first time, this can leads to DoS attacks.
|
||||
TLS handshakes will be slow when requesting a hostname certificate for the first time, this can lead to DoS attacks.
|
||||
|
||||
!!! warning
|
||||
Take note that Let's Encrypt has [rate limiting](https://letsencrypt.org/docs/rate-limits)
|
||||
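For reference, a sketch of how on-demand certificates are switched on in the `[acme]` section (email and storage values are placeholders; the other ACME settings are assumed to be configured as described elsewhere in the documentation):

```toml
[acme]
  email = "you@example.com"   # placeholder
  storage = "acme.json"
  entryPoint = "https"
  # Request a certificate during the first TLS handshake for unknown hostnames
  onDemand = true
```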
|
||||
@@ -117,17 +117,20 @@ To enable constraints see [backend-specific constraints section](/configuration/
|
||||
|
||||
Additional settings can be defined using Consul Catalog tags.
|
||||
|
||||
| Tag | Description |
|
||||
|---------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `traefik.enable=false` | Disable this container in Træfik |
|
||||
| `traefik.protocol=https` | Override the default `http` protocol |
|
||||
| `traefik.backend.weight=10` | Assign this weight to the container |
|
||||
| `traefik.backend.circuitbreaker=EXPR` | Create a [circuit breaker](/basics/#backends) to be used against the backend, ex: `NetworkErrorRatio() > 0.` |
|
||||
| `traefik.backend.loadbalancer=drr` | Override the default load balancing mode |
|
||||
| `traefik.backend.maxconn.amount=10` | Set a maximum number of connections to the backend. Must be used in conjunction with the below label to take effect. |
|
||||
| `traefik.backend.maxconn.extractorfunc=client.ip` | Set the function to be used against the request to determine what to limit maximum connections to the backend by. Must be used in conjunction with the above label to take effect. |
|
||||
| `traefik.frontend.rule=Host:test.traefik.io` | Override the default frontend rule (Default: `Host:{{.ServiceName}}.{{.Domain}}`). |
|
||||
| `traefik.frontend.passHostHeader=true` | Forward client `Host` header to the backend. |
|
||||
| `traefik.frontend.priority=10` | Override default frontend priority |
|
||||
| `traefik.frontend.entryPoints=http,https` | Assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
|
||||
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |
|
||||
| Tag | Description |
|
||||
|-----------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `traefik.enable=false` | Disable this container in Træfik |
|
||||
| `traefik.protocol=https` | Override the default `http` protocol |
|
||||
| `traefik.backend.weight=10` | Assign this weight to the container |
|
||||
| `traefik.backend.circuitbreaker=EXPR` | Create a [circuit breaker](/basics/#backends) to be used against the backend, ex: `NetworkErrorRatio() > 0.` |
|
||||
| `traefik.backend.maxconn.amount=10` | Set a maximum number of connections to the backend. Must be used in conjunction with the below label to take effect. |
|
||||
| `traefik.backend.maxconn.extractorfunc=client.ip` | Set the function to be used against the request to determine what to limit maximum connections to the backend by. Must be used in conjunction with the above label to take effect. |
|
||||
| `traefik.frontend.rule=Host:test.traefik.io` | Override the default frontend rule (Default: `Host:{{.ServiceName}}.{{.Domain}}`). |
|
||||
| `traefik.frontend.passHostHeader=true` | Forward client `Host` header to the backend. |
|
||||
| `traefik.frontend.priority=10` | Override default frontend priority |
|
||||
| `traefik.frontend.entryPoints=http,https` | Assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
|
||||
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |
|
||||
| `traefik.backend.loadbalancer=drr` | override the default `wrr` load balancer algorithm |
|
||||
| `traefik.backend.loadbalancer.stickiness=true` | enable backend sticky sessions |
|
||||
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
|
||||
| `traefik.backend.loadbalancer.sticky=true` | enable backend sticky sessions (DEPRECATED) |
|
||||
|
||||
@@ -148,26 +148,28 @@ To enable constraints see [backend-specific constraints section](/configuration/
|
||||
|
||||
Labels can be used on containers to override default behaviour.
|
||||
|
||||
| Label | Description |
|
||||
|---------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `traefik.backend=foo` | Give the name `foo` to the generated backend for this container. |
|
||||
| `traefik.backend.maxconn.amount=10` | Set a maximum number of connections to the backend. Must be used in conjunction with the below label to take effect. |
|
||||
| `traefik.backend.maxconn.extractorfunc=client.ip` | Set the function to be used against the request to determine what to limit maximum connections to the backend by. Must be used in conjunction with the above label to take effect. |
|
||||
| `traefik.backend.loadbalancer.method=drr` | Override the default `wrr` load balancer algorithm |
|
||||
| `traefik.backend.loadbalancer.sticky=true` | Enable backend sticky sessions |
|
||||
| `traefik.backend.loadbalancer.swarm=true` | Use Swarm's inbuilt load balancer (only relevant under Swarm Mode). |
|
||||
| `traefik.backend.circuitbreaker.expression=EXPR` | Create a [circuit breaker](/basics/#backends) to be used against the backend |
|
||||
| `traefik.port=80` | Register this port. Useful when the container exposes multiples ports. |
|
||||
| `traefik.protocol=https` | Override the default `http` protocol |
|
||||
| `traefik.weight=10` | Assign this weight to the container |
|
||||
| `traefik.enable=false` | Disable this container in Træfik |
|
||||
| `traefik.frontend.rule=EXPR` | Override the default frontend rule. Default: `Host:{containerName}.{domain}` or `Host:{service}.{project_name}.{domain}` if you are using `docker-compose`. |
|
||||
| `traefik.frontend.passHostHeader=true` | Forward client `Host` header to the backend. |
|
||||
| `traefik.frontend.priority=10` | Override default frontend priority |
|
||||
| `traefik.frontend.entryPoints=http,https` | Assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints` |
|
||||
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |
|
||||
| `traefik.frontend.whitelistSourceRange:RANGE` | List of IP-Ranges which are allowed to access. An unset or empty list allows all Source-IPs to access. If one of the Net-Specifications are invalid, the whole list is invalid and allows all Source-IPs to access. |
|
||||
| `traefik.docker.network` | Set the docker network to use for connections to this container. If a container is linked to several networks, be sure to set the proper network name (you can check with `docker inspect <container_id>`) otherwise it will randomly pick one (depending on how docker is returning them). For instance when deploying docker `stack` from compose files, the compose defined networks will be prefixed with the `stack` name. |
|
||||
| Label | Description |
|
||||
|-----------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `traefik.backend=foo` | Give the name `foo` to the generated backend for this container. |
|
||||
| `traefik.backend.maxconn.amount=10` | Set a maximum number of connections to the backend. Must be used in conjunction with the below label to take effect. |
|
||||
| `traefik.backend.maxconn.extractorfunc=client.ip` | Set the function to be used against the request to determine what to limit maximum connections to the backend by. Must be used in conjunction with the above label to take effect. |
|
||||
| `traefik.backend.loadbalancer.method=drr` | Override the default `wrr` load balancer algorithm |
|
||||
| `traefik.backend.loadbalancer.stickiness=true` | Enable backend sticky sessions |
|
||||
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
|
||||
| `traefik.backend.loadbalancer.sticky=true` | Enable backend sticky sessions (DEPRECATED) |
|
||||
| `traefik.backend.loadbalancer.swarm=true` | Use Swarm's inbuilt load balancer (only relevant under Swarm Mode). |
|
||||
| `traefik.backend.circuitbreaker.expression=EXPR` | Create a [circuit breaker](/basics/#backends) to be used against the backend |
|
||||
| `traefik.port=80` | Register this port. Useful when the container exposes multiples ports. |
|
||||
| `traefik.protocol=https` | Override the default `http` protocol |
|
||||
| `traefik.weight=10` | Assign this weight to the container |
|
||||
| `traefik.enable=false` | Disable this container in Træfik |
|
||||
| `traefik.frontend.rule=EXPR` | Override the default frontend rule. Default: `Host:{containerName}.{domain}` or `Host:{service}.{project_name}.{domain}` if you are using `docker-compose`. |
|
||||
| `traefik.frontend.passHostHeader=true` | Forward client `Host` header to the backend. |
|
||||
| `traefik.frontend.priority=10` | Override default frontend priority |
|
||||
| `traefik.frontend.entryPoints=http,https` | Assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints` |
|
||||
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |
|
||||
| `traefik.frontend.whitelistSourceRange:RANGE` | List of IP-Ranges which are allowed to access. An unset or empty list allows all Source-IPs to access. If one of the Net-Specifications are invalid, the whole list is invalid and allows all Source-IPs to access. |
|
||||
| `traefik.docker.network` | Set the docker network to use for connections to this container. If a container is linked to several networks, be sure to set the proper network name (you can check with `docker inspect <container_id>`) otherwise it will randomly pick one (depending on how docker is returning them). For instance when deploying docker `stack` from compose files, the compose defined networks will be prefixed with the `stack` name. |
|
||||
|
||||
### On Service
|
||||
|
||||
@@ -185,6 +187,12 @@ Services labels can be used for overriding default behaviour
|
||||
| `traefik.<service-name>.frontend.priority` | Overrides `traefik.frontend.priority`. |
|
||||
| `traefik.<service-name>.frontend.rule` | Overrides `traefik.frontend.rule`. |
|
||||
|
||||
|
||||
!!! note
|
||||
if a label is defined both as a `container label` and a `service label` (for example `traefik.<service-name>.port=PORT` and `traefik.port=PORT`), the `service label` is used to define the `<service-name>` property (`port` in the example).
|
||||
It's possible to mix `container labels` and `service labels`; in this case, `container labels` are used as default values for missing `service labels`, but no frontends are created from the `container labels` alone.
|
||||
More details in this [example](/user-guide/docker-and-lets-encrypt/#labels).
|
||||
|
||||
!!! warning
|
||||
when running inside a container, Træfik will need network access through:
|
||||
|
||||
|
||||
@@ -124,15 +124,17 @@ Træfik needs the following policy to read ECS information:
|
||||
|
||||
Labels can be used on task containers to override default behaviour:
|
||||
|
||||
| Label | Description |
|
||||
|---------------------------------------------------|------------------------------------------------------------------------------------------|
|
||||
| `traefik.protocol=https` | override the default `http` protocol |
|
||||
| `traefik.weight=10` | assign this weight to the container |
|
||||
| `traefik.enable=false` | disable this container in Træfik |
|
||||
| `traefik.backend.loadbalancer.method=drr` | override the default `wrr` load balancer algorithm |
|
||||
| `traefik.backend.loadbalancer.sticky=true` | enable backend sticky sessions |
|
||||
| `traefik.frontend.rule=Host:test.traefik.io` | override the default frontend rule (Default: `Host:{containerName}.{domain}`). |
|
||||
| `traefik.frontend.passHostHeader=true` | forward client `Host` header to the backend. |
|
||||
| `traefik.frontend.priority=10` | override default frontend priority |
|
||||
| `traefik.frontend.entryPoints=http,https` | assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
|
||||
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |
|
||||
| Label | Description |
|
||||
|-----------------------------------------------------------|------------------------------------------------------------------------------------------|
|
||||
| `traefik.protocol=https` | override the default `http` protocol |
|
||||
| `traefik.weight=10` | assign this weight to the container |
|
||||
| `traefik.enable=false` | disable this container in Træfik |
|
||||
| `traefik.backend.loadbalancer.method=drr` | override the default `wrr` load balancer algorithm |
|
||||
| `traefik.backend.loadbalancer.stickiness=true` | enable backend sticky sessions |
|
||||
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
|
||||
| `traefik.backend.loadbalancer.sticky=true` | enable backend sticky sessions (DEPRECATED) |
|
||||
| `traefik.frontend.rule=Host:test.traefik.io` | override the default frontend rule (Default: `Host:{containerName}.{domain}`). |
|
||||
| `traefik.frontend.passHostHeader=true` | forward client `Host` header to the backend. |
|
||||
| `traefik.frontend.priority=10` | override default frontend priority |
|
||||
| `traefik.frontend.entryPoints=http,https` | assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
|
||||
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |
|
||||
@@ -8,6 +8,8 @@ You have three choices:
|
||||
- [Rules in a Separate File](/configuration/backends/file/#rules-in-a-separate-file)
|
||||
- [Multiple `.toml` Files](/configuration/backends/file/#multiple-toml-files)
|
||||
|
||||
To enable the file backend, you must either pass the `--file` option to the Træfik binary or put the `[file]` section (with or without inner settings) in the configuration file.
|
||||
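For example, a minimal sketch of the configuration-file approach (the `filename`/`watch` settings are optional and shown only for illustration):

```toml
# traefik.toml
[file]
  # Optionally keep the rules in a separate file and reload on change
  # filename = "rules.toml"
  # watch = true
```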
|
||||
## Simple
|
||||
|
||||
Add your configuration at the end of the global configuration file `traefik.toml`:
|
||||
|
||||
@@ -86,15 +86,19 @@ Annotations can be used on containers to override default behaviour for the whol
|
||||
|
||||
- `traefik.frontend.rule.type: PathPrefixStrip`
|
||||
Override the default frontend rule type. Default: `PathPrefix`.
|
||||
- `traefik.frontend.priority: 3`
|
||||
- `traefik.frontend.priority: "3"`
|
||||
Override the default frontend rule priority.
|
||||
|
||||
Annotations can be used on the Kubernetes service to override default behaviour:
|
||||
|
||||
- `traefik.backend.loadbalancer.method=drr`
|
||||
Override the default `wrr` load balancer algorithm
|
||||
- `traefik.backend.loadbalancer.sticky=true`
|
||||
- `traefik.backend.loadbalancer.stickiness=true`
|
||||
Enable backend sticky sessions
|
||||
- `traefik.backend.loadbalancer.stickiness.cookieName=NAME`
|
||||
Manually set the cookie name for sticky sessions
|
||||
- `traefik.backend.loadbalancer.sticky=true`
|
||||
Enable backend sticky sessions (DEPRECATED)
|
||||
|
||||
You can find here an example [ingress](https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/cheese-ingress.yaml) and [replication controller](https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik.yaml).
|
||||
|
||||
@@ -114,10 +118,10 @@ If one of the Net-Specifications are invalid, the whole list is invalid and allo
|
||||
### Authentication
|
||||
|
||||
It is possible to add additional authentication annotations to the Ingress rule.
|
||||
The source of the authentication is a secret that contains usernames and passwords inside the the key auth.
|
||||
The source of the authentication is a secret that contains usernames and passwords inside the key auth.
|
||||
|
||||
- `ingress.kubernetes.io/auth-type`: `basic`
|
||||
- `ingress.kubernetes.io/auth-secret`
|
||||
- `ingress.kubernetes.io/auth-secret`: `mysecret`
|
||||
Contains the usernames and passwords with access to the paths defined in the Ingress Rule.
|
||||
|
||||
The secret must be created in the same namespace as the Ingress rule.
|
||||
|
||||
@@ -153,7 +153,9 @@ Labels can be used on containers to override default behaviour:
|
||||
| `traefik.backend.maxconn.amount=10` | set a maximum number of connections to the backend. Must be used in conjunction with the below label to take effect. |
|
||||
| `traefik.backend.maxconn.extractorfunc=client.ip` | set the function to be used against the request to determine what to limit maximum connections to the backend by. Must be used in conjunction with the above label to take effect. |
|
||||
| `traefik.backend.loadbalancer.method=drr` | override the default `wrr` load balancer algorithm |
|
||||
| `traefik.backend.loadbalancer.sticky=true` | enable backend sticky sessions |
|
||||
| `traefik.backend.loadbalancer.sticky=true` | enable backend sticky sessions (DEPRECATED) |
|
||||
| `traefik.backend.loadbalancer.stickiness=true` | enable backend sticky sessions |
|
||||
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
|
||||
| `traefik.backend.circuitbreaker.expression=NetworkErrorRatio() > 0.5` | create a [circuit breaker](/basics/#backends) to be used against the backend |
|
||||
| `traefik.backend.healthcheck.path=/health` | set the Traefik health check path [default: no health checks] |
|
||||
| `traefik.backend.healthcheck.interval=5s` | sets a custom health check interval in Go-parseable (`time.ParseDuration`) format [default: 30s] |
|
||||
|
||||
@@ -114,13 +114,18 @@ secretKey = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
|
||||
|
||||
Labels can be used on task containers to override default behaviour:
|
||||
|
||||
| Label | Description |
|
||||
|----------------------------------------------|------------------------------------------------------------------------------------------|
|
||||
| `traefik.protocol=https` | override the default `http` protocol |
|
||||
| `traefik.weight=10` | assign this weight to the container |
|
||||
| `traefik.enable=false` | disable this container in Træfik |
|
||||
| `traefik.frontend.rule=Host:test.traefik.io` | override the default frontend rule (Default: `Host:{containerName}.{domain}`). |
|
||||
| `traefik.frontend.passHostHeader=true` | forward client `Host` header to the backend. |
|
||||
| `traefik.frontend.priority=10` | override default frontend priority |
|
||||
| `traefik.frontend.entryPoints=http,https` | assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
|
||||
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash`. |
|
||||
| Label | Description |
|
||||
|-----------------------------------------------------------------------|------------------------------------------------------------------------------------------|
|
||||
| `traefik.protocol=https` | Override the default `http` protocol |
|
||||
| `traefik.weight=10` | Assign this weight to the container |
|
||||
| `traefik.enable=false` | Disable this container in Træfik |
|
||||
| `traefik.frontend.rule=Host:test.traefik.io` | Override the default frontend rule (Default: `Host:{containerName}.{domain}`). |
|
||||
| `traefik.frontend.passHostHeader=true` | Forward client `Host` header to the backend. |
|
||||
| `traefik.frontend.priority=10` | Override default frontend priority |
|
||||
| `traefik.frontend.entryPoints=http,https` | Assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
|
||||
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash`. |
|
||||
| `traefik.backend.circuitbreaker.expression=NetworkErrorRatio() > 0.5` | Create a [circuit breaker](/basics/#backends) to be used against the backend |
|
||||
| `traefik.backend.loadbalancer.method=drr` | Override the default `wrr` load balancer algorithm |
|
||||
| `traefik.backend.loadbalancer.stickiness=true` | Enable backend sticky sessions |
|
||||
| `traefik.backend.loadbalancer.stickiness.cookieName=NAME` | Manually set the cookie name for sticky sessions |
|
||||
| `traefik.backend.loadbalancer.sticky=true` | Enable backend sticky sessions (DEPRECATED) |
|
||||
|
||||
@@ -118,10 +118,10 @@ Otherwise, the response from the auth server is returned.
|
||||
|
||||
```toml
|
||||
[entryPoints]
|
||||
[entrypoints.http]
|
||||
[entryPoints.http]
|
||||
# ...
|
||||
# To enable forward auth on an entrypoint
|
||||
[entrypoints.http.auth.forward]
|
||||
[entryPoints.http.auth.forward]
|
||||
address = "https://authserver.com/auth"
|
||||
|
||||
# Trust existing X-Forwarded-* headers.
|
||||
@@ -136,7 +136,7 @@ Otherwise, the response from the auth server is returned.
|
||||
#
|
||||
# Optional
|
||||
#
|
||||
[entrypoints.http.auth.forward.tls]
|
||||
[entryPoints.http.auth.forward.tls]
|
||||
cert = "authserver.crt"
|
||||
key = "authserver.key"
|
||||
```
|
||||
@@ -185,16 +185,55 @@ To enable IP whitelisting at the entrypoint level.
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
whiteListSourceRange = ["127.0.0.1/32"]
|
||||
whiteListSourceRange = ["127.0.0.1/32", "192.168.1.7"]
|
||||
```
|
||||
|
||||
## ProxyProtocol Support
|
||||
## ProxyProtocol
|
||||
|
||||
To enable [ProxyProtocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) support.
|
||||
Only IPs in `trustedIPs` will lead to remote client address replacement: you should declare your load balancer's IP or CIDR range here (in a testing environment, you can trust everyone by using `insecure = true`).
|
||||
|
||||
!!! danger
|
||||
When queuing Træfik behind another load-balancer, be sure to carefully configure Proxy Protocol on both sides.
|
||||
Otherwise, forged requests could introduce a security risk into your system.
|
||||
|
||||
```toml
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
proxyprotocol = true
|
||||
address = ":80"
|
||||
|
||||
# Enable ProxyProtocol
|
||||
[entryPoints.http.proxyProtocol]
|
||||
# List of trusted IPs
|
||||
#
|
||||
# Required
|
||||
# Default: []
|
||||
#
|
||||
trustedIPs = ["127.0.0.1/32", "192.168.1.7"]
|
||||
|
||||
# Insecure mode FOR TESTING ENVIRONMENT ONLY
|
||||
#
|
||||
# Optional
|
||||
# Default: false
|
||||
#
|
||||
# insecure = true
|
||||
```
|
||||
|
||||
## Forwarded Header
|
||||
|
||||
Only IPs in `trustedIPs` will be authorized to trust the client forwarded headers (`X-Forwarded-*`).
|
||||
|
||||
```toml
|
||||
[entryPoints]
|
||||
[entryPoints.http]
|
||||
address = ":80"
|
||||
|
||||
# Enable Forwarded Headers
|
||||
[entryPoints.http.forwardedHeaders]
|
||||
# List of trusted IPs
|
||||
#
|
||||
# Required
|
||||
# Default: []
|
||||
#
|
||||
trustedIPs = ["127.0.0.1/32", "192.168.1.7"]
|
||||
```
|
||||
|
||||
@@ -1,8 +1,8 @@
|
||||
# Docker & Traefik
|
||||
|
||||
In this use case, we want to use Traefik as a _layer-7_ load balancer with SSL termination for a set of micro-services used to run a web application.
|
||||
In this use case, we want to use Træfik as a _layer-7_ load balancer with SSL termination for a set of micro-services used to run a web application.
|
||||
|
||||
We also want to automatically _discover any services_ on the Docker host and let Traefik reconfigure itself automatically when containers get created (or shut down) so HTTP traffic can be routed accordingly.
|
||||
We also want to automatically _discover any services_ on the Docker host and let Træfik reconfigure itself automatically when containers get created (or shut down) so HTTP traffic can be routed accordingly.
|
||||
|
||||
In addition, we want to use Let's Encrypt to automatically generate and renew SSL certificates per hostname.
|
||||
|
||||
@@ -19,7 +19,7 @@ In real-life, you'll want to use your own domain and have the DNS configured acc
|
||||
Docker containers can only communicate with each other over TCP when they share at least one network.
|
||||
This makes sense from a topological point of view in the context of networking, since Docker under the hood creates IPTable rules so containers can't reach other containers _unless you'd want to_.
|
||||
|
||||
In this example, we're going to use a single network called `web` where all containers that are handling HTTP traffic (including Traefik) will reside in.
|
||||
In this example, we're going to use a single network called `web` where all containers that are handling HTTP traffic (including Træfik) will reside in.
|
||||
|
||||
On the Docker host, run the following command:
|
||||
|
||||
@@ -27,7 +27,7 @@ On the Docker host, run the following command:
|
||||
docker network create web
|
||||
```
|
||||
|
||||
Now, let's create a directory on the server where we will configure the rest of Traefik:
|
||||
Now, let's create a directory on the server where we will configure the rest of Træfik:
|
||||
|
||||
```shell
|
||||
mkdir -p /opt/traefik
|
||||
@@ -41,7 +41,7 @@ touch /opt/traefik/acme.json && chmod 600 /opt/traefik/acme.json
|
||||
touch /opt/traefik/traefik.toml
|
||||
```
|
||||
|
||||
The `docker-compose.yml` file will provide us with a simple, consistent and more importantly, a deterministic way to create Traefik.
|
||||
The `docker-compose.yml` file will provide us with a simple, consistent and more importantly, a deterministic way to create Træfik.
|
||||
|
||||
The contents of the file is as follows:
|
||||
|
||||
@@ -59,8 +59,8 @@ services:
|
||||
- web
|
||||
volumes:
|
||||
- /var/run/docker.sock:/var/run/docker.sock
|
||||
- /srv/traefik/traefik.toml:/traefik.toml
|
||||
- /srv/traefik/acme.json:/acme.json
|
||||
- /opt/traefik/traefik.toml:/traefik.toml
|
||||
- /opt/traefik/acme.json:/acme.json
|
||||
container_name: traefik
|
||||
|
||||
networks:
|
||||
@@ -69,12 +69,12 @@ networks:
|
||||
```
|
||||
|
||||
As you can see, we're mounting the `traefik.toml` file as well as the (empty) `acme.json` file in the container.
|
||||
Also, we're mounting the `/var/run/docker.sock` Docker socket in the container as well, so Traefik can listen to Docker events and reconfigure its own internal configuration when containers are created (or shut down).
|
||||
Also, we're mounting the `/var/run/docker.sock` Docker socket in the container as well, so Træfik can listen to Docker events and reconfigure its own internal configuration when containers are created (or shut down).
|
||||
Also, we're making sure the container is automatically restarted by the Docker engine in case of problems (or: if the server is rebooted).
|
||||
We're publishing the default HTTP ports `80` and `443` on the host, and making sure the container is placed within the `web` network we've created earlier on.
|
||||
Finally, we're giving this container a static name called `traefik`.
|
||||
|
||||
Let's take a look at a simply `traefik.toml` configuration as well before we'll create the Traefik container:
|
||||
Let's take a look at a simple `traefik.toml` configuration as well before we'll create the Træfik container:
|
||||
|
||||
```toml
|
||||
debug = false
|
||||
@@ -109,17 +109,17 @@ OnHostRule = true
|
||||
This is the minimum configuration required to do the following:
|
||||
|
||||
- Log `ERROR`-level messages (or more severe) to the console, but silence `DEBUG`-level messages
|
||||
- Check for new versions of Traefik periodically
|
||||
- Check for new versions of Træfik periodically
|
||||
- Create two entry points, namely an `HTTP` endpoint on port `80`, and an `HTTPS` endpoint on port `443` where all incoming traffic on port `80` will immediately get redirected to `HTTPS`.
|
||||
- Enable the Docker configuration backend and listen for container events on the Docker unix socket we've mounted earlier. However, **new containers will not be exposed by Traefik by default, we'll get into this in a bit!**
|
||||
- Enable the Docker configuration backend and listen for container events on the Docker unix socket we've mounted earlier. However, **new containers will not be exposed by Træfik by default, we'll get into this in a bit!**
|
||||
- Enable automatic request and configuration of SSL certificates using Let's Encrypt.
|
||||
These certificates will be stored in the `acme.json` file, which you can back-up yourself and store off-premises.
|
||||
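The `traefik.toml` shown above is abbreviated; as a rough sketch (with placeholder values, not the exact file), entry point and ACME blocks that achieve the list above typically look like this:

```toml
logLevel = "ERROR"
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
    address = ":80"
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]

[acme]
  email = "you@example.com"   # placeholder
  storage = "acme.json"
  entryPoint = "https"
  OnHostRule = true
```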
|
||||
Alright, let's boot the container. From the `/opt/traefik` directory, run `docker-compose up -d` which will create and start the Traefik container.
|
||||
Alright, let's boot the container. From the `/opt/traefik` directory, run `docker-compose up -d` which will create and start the Træfik container.
|
||||
|
||||
## Exposing Web Services to the Outside World
|
||||
|
||||
Now that we've fully configured and started Traefik, it's time to get our applications running!
|
||||
Now that we've fully configured and started Træfik, it's time to get our applications running!
|
||||
|
||||
Let's take a simple example of a micro-service project consisting of various services, where some will be exposed to the outside world and some will not.
|
||||
|
||||
@@ -148,6 +148,10 @@ services:
|
||||
- "traefik.frontend.rule=Host:app.my-awesome-app.org"
|
||||
- "traefik.enable=true"
|
||||
- "traefik.port=9000"
|
||||
- "traefik.default.protocol=http"
|
||||
- "traefik.admin.frontend.rule=Host:admin-app.my-awesome-app.org"
|
||||
- "traefik.admin.protocol=https"
|
||||
- "traefik.admin.port=9443"
|
||||
|
||||
db:
|
||||
image: my-docker-registry.com/back-end/5.7
|
||||
@@ -190,10 +194,10 @@ Since the `traefik` container we've created and started earlier is also attached
|
||||
|
||||
### Labels
|
||||
|
||||
As mentioned earlier, we don't want containers exposed automatically by Traefik.
|
||||
As mentioned earlier, we don't want containers exposed automatically by Træfik.
|
||||
|
||||
The reason behind this is simple: we want to have control over this process ourselves.
|
||||
Thanks to Docker labels, we can tell Traefik how to create its internal routing configuration.
|
||||
Thanks to Docker labels, we can tell Træfik how to create its internal routing configuration.
|
||||
|
||||
Let's take a look at the labels themselves for the `app` service, which is an HTTP web service listening on port 9000:
|
||||
|
||||
@@ -203,31 +207,56 @@ Let's take a look at the labels themselves for the `app` service, which is a HTT
|
||||
- "traefik.frontend.rule=Host:app.my-awesome-app.org"
|
||||
- "traefik.enable=true"
|
||||
- "traefik.port=9000"
|
||||
- "traefik.default.protocol=http"
|
||||
- "traefik.admin.frontend.rule=Host:admin-app.my-awesome-app.org"
|
||||
- "traefik.admin.protocol=https"
|
||||
- "traefik.admin.port=9443"
|
||||
```
|
||||
|
||||
We use both `container labels` and `service labels`.
|
||||
|
||||
#### Container labels
|
||||
|
||||
First, we specify the `backend` name which corresponds to the actual service we're routing **to**.
|
||||
|
||||
We also tell Traefik to use the `web` network to route HTTP traffic to this container.
|
||||
With the `frontend.rule` label, we tell Traefik that we want to route to this container if the incoming HTTP request contains the `Host` `app.my-awesome-app.org`.
|
||||
Essentially, this is the actual rule used for Layer-7 load balancing.
|
||||
With the `traefik.enable` label, we tell Traefik to include this container in its internal configuration.
|
||||
We also tell Træfik to use the `web` network to route HTTP traffic to this container.
|
||||
With the `traefik.enable` label, we tell Træfik to include this container in its internal configuration.
|
||||
|
||||
Finally but not unimportantly, we tell Traefik to route **to** port `9000`, since that is the actual TCP/IP port the container actually listens on.
|
||||
With the `frontend.rule` label, we tell Træfik that we want to route to this container if the incoming HTTP request contains the `Host` `app.my-awesome-app.org`.
|
||||
Essentially, this is the actual rule used for Layer-7 load balancing.
|
||||
|
||||
Finally but not unimportantly, we tell Træfik to route **to** port `9000`, since that is the actual TCP/IP port the container actually listens on.
|
||||
|
||||
### Service labels
|
||||
|
||||
`Service labels` allow managing many routes for the same container.
|
||||
|
||||
When both `container labels` and `service labels` are defined, `container labels` are just used as default values for missing `service labels`, but no frontend or backend will be defined from these labels alone.
|
||||
The `traefik.frontend.rule` and `traefik.port` labels described above will only be used to complete information set in `service labels` when the container's frontends and backends are created.
|
||||
|
||||
In the example, two service names are defined: `default` and `admin`.
|
||||
They allow creating two frontends and two backends.
|
||||
|
||||
- `default` has only one `service label`: `traefik.default.protocol`.
|
||||
Træfik will use values set in `traefik.frontend.rule` and `traefik.port` to create the `default` frontend and backend.
|
||||
The frontend listens for incoming HTTP requests containing the `Host` `app.my-awesome-app.org` and forwards them over `HTTP` to port `9000` of the backend.
|
||||
- `admin` has all the `services labels` needed to create the `admin` frontend and backend (`traefik.admin.frontend.rule`, `traefik.admin.protocol`, `traefik.admin.port`).
|
||||
Træfik will create a frontend to listen for incoming HTTP requests containing the `Host` `admin-app.my-awesome-app.org` and forward them over `HTTPS` to port `9443` of the backend.
|
||||
|
||||
#### Gotchas and tips
|
||||
|
||||
- Always specify the correct port where the container expects HTTP traffic, using the `traefik.port` label.
|
||||
If a container exposes multiple ports, Traefik may forward traffic to the wrong port.
|
||||
If a container exposes multiple ports, Træfik may forward traffic to the wrong port.
|
||||
Even if a container only exposes one port, you should always write configuration defensively and explicitly.
|
||||
- Should you choose to enable the `exposedbydefault` flag in the `traefik.toml` configuration, be aware that all containers that are placed in the same network as Traefik will automatically be reachable from the outside world, for everyone and everyone to see.
|
||||
- Should you choose to enable the `exposedbydefault` flag in the `traefik.toml` configuration, be aware that all containers that are placed in the same network as Træfik will automatically be reachable from the outside world, for everyone and everyone to see.
|
||||
Usually, this is a bad idea.
|
||||
- With the `traefik.frontend.auth.basic` label, it's possible for Traefik to provide a HTTP basic-auth challenge for the endpoints you provide the label for.
|
||||
- Traefik has built-in support to automatically export [Prometheus](https://prometheus.io) metrics
|
||||
- Traefik supports websockets out of the box. In the example above, the `events`-service could be a NodeJS-based application which allows clients to connect using websocket protocol.
|
||||
- With the `traefik.frontend.auth.basic` label, it's possible for Træfik to provide a HTTP basic-auth challenge for the endpoints you provide the label for.
|
||||
- Træfik has built-in support to automatically export [Prometheus](https://prometheus.io) metrics
|
||||
- Træfik supports websockets out of the box. In the example above, the `events`-service could be a NodeJS-based application which allows clients to connect using websocket protocol.
|
||||
Thanks to the fact that HTTPS in our example is enforced, these websockets are automatically secure as well (WSS)
|
||||
|
||||
### Final thoughts
|
||||
|
||||
Using Traefik as a Layer-7 load balancer in combination with both Docker and Let's Encrypt provides you with an extremely flexible, powerful and self-configuring solution for your projects.
|
||||
Using Træfik as a Layer-7 load balancer in combination with both Docker and Let's Encrypt provides you with an extremely flexible, powerful and self-configuring solution for your projects.
|
||||
|
||||
With Let's Encrypt, your endpoints are automatically secured with production-ready SSL certificates that are renewed automatically as well.
|
||||
|
||||
@@ -137,7 +137,7 @@ This configuration allows generating a Let's Encrypt certificate during the firs
|
||||
* TLS handshakes will be slow when requesting a hostname certificate for the first time; this can lead to DDoS attacks.
|
||||
* Let's Encrypt has rate limiting: https://letsencrypt.org/docs/rate-limits
|
||||
|
||||
That's why, it's better to use the `onHostRule` optin if possible.
|
||||
That's why, it's better to use the `onHostRule` option if possible.
|
||||
|
||||
### DNS challenge
|
||||
|
||||
@@ -170,7 +170,7 @@ entryPoint = "https"
|
||||
DNS challenge needs environment variables to be executed.
|
||||
These variables have to be set on the machine/container which hosts Traefik.
|
||||
|
||||
These variables has described [in this section](/configuration/acme/#dnsprovider).
|
||||
These variables are described [in this section](/configuration/acme/#dnsprovider).
|
||||
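As an illustration (a sketch only; the provider name is an example, and the exact environment variables depend on the chosen provider), the DNS challenge is selected in the `[acme]` section while the credentials live in Træfik's environment:

```toml
[acme]
  # ...
  # Example provider; with route53 this requires e.g. AWS_ACCESS_KEY_ID,
  # AWS_SECRET_ACCESS_KEY and AWS_REGION in Træfik's environment
  dnsProvider = "route53"
  delayDontCheckDNS = 0
```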
|
||||
### OnHostRule option and provided certificates
|
||||
|
||||
@@ -198,7 +198,7 @@ Traefik will only try to generate a Let's encrypt certificate if the domain cann
|
||||
|
||||
#### Prerequisites
|
||||
|
||||
Before to use Let's Encrypt in a Traefik cluster, take a look to [the key-value store explanations](/user-guide/kv-config) and more precisely to [this section](/user-guide/kv-config/#store-configuration-in-key-value-store) in the way to know how to migrate from a acme local storage *(acme.json file)* to a key-value store configuration.
|
||||
Before you use Let's Encrypt in a Traefik cluster, take a look at [the key-value store explanations](/user-guide/kv-config) and more precisely at [this section](/user-guide/kv-config/#store-configuration-in-key-value-store), which describes how to migrate from an acme local storage *(acme.json file)* to a key-value store configuration.
|
||||
|
||||
#### Configuration
|
||||
|
||||
|
||||
@@ -3,7 +3,7 @@
|
||||
This section explains how to use Traefik as reverse proxy for gRPC application with self-signed certificates.
|
||||
|
||||
!!! warning
|
||||
As gRPC needs HTTP2, we need valid HTTPS certificates on both gRPC Server and Træfik.
|
||||
As gRPC needs HTTP2, we need HTTPS certificates on both gRPC Server and Træfik.
|
||||
|
||||
<p align="center">
|
||||
<img src="/img/grpc.svg" alt="gRPC architecture" title="gRPC architecture" />
|
||||
@@ -76,9 +76,12 @@ RootCAs = [ "./backend.cert" ]
|
||||
rule = "Host:frontend.local"
|
||||
```
|
||||
|
||||
!!! warning
|
||||
With some backends, the server URLs use the IP, so you may need to configure `InsecureSkipVerify` instead of `RootCAs` to activate HTTPS without hostname verification.
|
||||
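A sketch of that alternative, using the global option in `traefik.toml` (use with care, since it disables backend certificate verification):

```toml
# Global section of traefik.toml
# Accept the backend certificate without verification, instead of pinning RootCAs
InsecureSkipVerify = true
```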
|
||||
## Conclusion
|
||||
|
||||
We don't need specific configuration to use gRPC in Træfik, we just need to be careful that all the exchanges (between client and Træfik, and between Træfik and backend) are valid HTTPS communications (without `InsecureSkipVerify` enabled) because gRPC use HTTP2.
|
||||
We don't need specific configuration to use gRPC in Træfik, we just need to be careful that all the exchanges (between client and Træfik, and between Træfik and backend) are HTTPS communications because gRPC uses HTTP2.
|
||||
|
||||
## A gRPC example in go
|
||||
|
||||
|
||||
@@ -32,7 +32,6 @@ rules:
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- pods
|
||||
- services
|
||||
- endpoints
|
||||
- secrets
|
||||
@@ -80,7 +79,7 @@ It is possible to use Træfik with a [Deployment](https://kubernetes.io/docs/con
|
||||
|
||||
The Deployment object looks like this:
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
@@ -119,6 +118,7 @@ kind: Service
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: traefik-ingress-service
|
||||
namespace: kube-system
|
||||
spec:
|
||||
selector:
|
||||
k8s-app: traefik-ingress-lb
|
||||
@@ -183,6 +183,7 @@ kind: Service
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: traefik-ingress-service
|
||||
namespace: kube-system
|
||||
spec:
|
||||
selector:
|
||||
k8s-app: traefik-ingress-lb
|
||||
@@ -326,6 +327,72 @@ echo "$(minikube ip) traefik-ui.minikube" | sudo tee -a /etc/hosts
|
||||
|
||||
We should now be able to visit [traefik-ui.minikube](http://traefik-ui.minikube) in the browser and view the Træfik Web UI.
|
||||
|
||||
## Basic Authentication
|
||||
|
||||
It's possible to add authentication annotations to the Ingress rule.
|
||||
The source of the authentication is a secret that contains usernames and passwords inside the key `auth`.
|
||||
To read about basic auth limitations see the [Kubernetes Ingress](/configuration/backends/kubernetes) configuration page.
|
||||
|
||||
#### Creating the Secret
|
||||
|
||||
A. Use `htpasswd` to create a file containing the username and the hashed password:
|
||||
|
||||
```shell
|
||||
htpasswd -c ./auth myusername
|
||||
```
|
||||
|
||||
You will be prompted for a password which you will have to enter twice.
|
||||
`htpasswd` will create a file with the following:
|
||||
|
||||
```shell
|
||||
cat auth
|
||||
```
|
||||
```
|
||||
myusername:$apr1$78Jyn/1K$ERHKVRPPlzAX8eBtLuvRZ0
|
||||
```
|
||||
|
||||
B. Now use `kubectl` to create a secret in the monitoring namespace using the file created by `htpasswd`.
|
||||
|
||||
```shell
|
||||
kubectl create secret generic mysecret --from-file auth --namespace=monitoring
|
||||
```
|
||||
|
||||
!!! note
|
||||
The secret must be in the same namespace as the Ingress rule.
|
||||
|
||||
C. Create the Ingress using the following annotations to specify basic auth and to indicate that the usernames and passwords are stored in `mysecret`.
|
||||
|
||||
- `ingress.kubernetes.io/auth-type: "basic"`
|
||||
- `ingress.kubernetes.io/auth-secret: "mysecret"`
|
||||
|
||||
The following is a full Ingress example based on Prometheus:
|
||||
|
||||
```yaml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: prometheus-dashboard
|
||||
namespace: monitoring
|
||||
annotations:
|
||||
kubernetes.io/ingress.class: traefik
|
||||
ingress.kubernetes.io/auth-type: "basic"
|
||||
ingress.kubernetes.io/auth-secret: "mysecret"
|
||||
spec:
|
||||
rules:
|
||||
- host: dashboard.prometheus.example.com
|
||||
http:
|
||||
paths:
|
||||
- backend:
|
||||
serviceName: prometheus
|
||||
servicePort: 9090
|
||||
```
|
||||
|
||||
You can apply the example Ingress as follows:
|
||||
|
||||
```shell
|
||||
kubectl create -f prometheus-ingress.yaml -n monitoring
|
||||
```
|
||||
|
||||
## Name based routing
|
||||
|
||||
In this example we are going to set up websites for three of the United Kingdom's best-loved cheeses: Cheddar, Stilton, and Wensleydale.
|
||||
@@ -591,7 +658,7 @@ kind: Ingress
|
||||
metadata:
|
||||
name: wildcard-cheeses
|
||||
annotations:
|
||||
traefik.frontend.priority: 1
|
||||
traefik.frontend.priority: "1"
|
||||
spec:
|
||||
rules:
|
||||
- host: *.minikube
|
||||
@@ -606,7 +673,7 @@ kind: Ingress
|
||||
metadata:
|
||||
name: specific-cheeses
|
||||
annotations:
|
||||
traefik.frontend.priority: 2
|
||||
traefik.frontend.priority: "2"
|
||||
spec:
|
||||
rules:
|
||||
- host: specific.minikube
|
||||
@@ -618,6 +685,7 @@ spec:
|
||||
servicePort: http
|
||||
```
|
||||
|
||||
Note that priority values must be quoted to avoid them being interpreted as numbers (which are illegal for annotations).
|
||||
|
||||
## Forwarding to ExternalNames
|
||||
|
||||
@@ -687,13 +755,23 @@ If you were to visit `example.com/static` the request would then be passed onto
|
||||
So you could set `disablePassHostHeaders` to `true` in your toml file and then enable passing
|
||||
the host header per ingress if you wanted.
|
||||
|
||||
## Excluding an ingress from Træfik
|
||||
## Partitioning the Ingress object space
|
||||
|
||||
You can control which ingress Træfik cares about by using the `kubernetes.io/ingress.class` annotation.
|
||||
By default, Træfik processes every Ingress object it observes. At times, however, it may be desirable to ignore certain objects. The following sub-sections describe common use cases and how they can be handled with Træfik.
|
||||
|
||||
By default if the annotation is not set at all Træfik will include the ingress.
|
||||
If the annotation is set to anything other than traefik or a blank string Træfik will ignore it.
|
||||
### Between Træfik and other Ingress controller implementations
|
||||
|
||||
Sometimes Træfik runs alongside other Ingress controller implementations. One such example is when both Træfik and a cloud provider Ingress controller are active.
|
||||
|
||||
The `kubernetes.io/ingress.class` annotation can be attached to any Ingress object in order to control whether Træfik should handle it.
|
||||
|
||||
If the annotation is missing, contains an empty value, or the value `traefik`, then the Træfik controller will take responsibility and process the associated Ingress object. If the annotation contains any other value (usually the name of a different Ingress controller), Træfik will ignore the object.
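For instance, a sketch of an Ingress object explicitly claimed by Træfik (the object and service names are illustrative, not taken from this changeset):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app                  # illustrative name
  annotations:
    # Omitting the annotation, or setting it to "traefik" or an empty value,
    # lets Traefik handle the object; any other value makes Traefik ignore it.
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: my-app.minikube       # illustrative host
    http:
      paths:
      - backend:
          serviceName: my-app
          servicePort: http
```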
|
||||
|
||||
### Between multiple Træfik Deployments
|
||||
|
||||
Sometimes multiple Træfik Deployments are supposed to run concurrently. For instance, it is conceivable to have one Deployment deal with internal and another one with external traffic.
|
||||
|
||||
For such cases, it is advisable to classify Ingress objects through a label and configure the `labelSelector` option for each Træfik Deployment accordingly. To stick with the internal/external example above, all Ingress objects meant for internal traffic could receive a `traffic-type: internal` label while objects designated for external traffic receive a `traffic-type: external` label. The label selectors on the Træfik Deployments would then be `traffic-type=internal` and `traffic-type=external`, respectively.
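A sketch of the label side of this setup (object names and hosts are illustrative); the matching Træfik Deployment would then be configured with the label selector `traffic-type=internal` as described above:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: internal-service        # illustrative name
  labels:
    # Only the Traefik Deployment whose label selector is
    # "traffic-type=internal" will process this object.
    traffic-type: internal
spec:
  rules:
  - host: internal.example.com  # illustrative host
    http:
      paths:
      - backend:
          serviceName: internal-service
          servicePort: http
```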
|
||||
|
||||
## Production advice
|
||||
|
||||
|
||||
@@ -148,6 +148,37 @@ This variable must be initialized with the ACL token value.
|
||||
|
||||
If Traefik is launched in a Docker container, the variable `CONSUL_HTTP_TOKEN` can be initialized with the `-e` Docker option: `-e "CONSUL_HTTP_TOKEN=[consul-acl-token-value]"`
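For example (the token value, mounted path, and image tag are placeholders):

```shell
# Pass the Consul ACL token to the Traefik container via the environment.
docker run -d \
  -e "CONSUL_HTTP_TOKEN=consul-acl-token-value" \
  -v $PWD/traefik.toml:/etc/traefik/traefik.toml \
  traefik
```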
|
||||
|
||||
If a Consul ACL is used to restrict Træfik read/write access, one of the following configurations is needed.
|
||||
|
||||
- HCL format:
|
||||
|
||||
```
|
||||
key "traefik" {
|
||||
policy = "write"
|
||||
},
|
||||
|
||||
session "" {
|
||||
policy = "write"
|
||||
}
|
||||
```
|
||||
|
||||
- JSON format:
|
||||
|
||||
```json
|
||||
{
|
||||
"key": {
|
||||
"traefik": {
|
||||
"policy": "write"
|
||||
}
|
||||
},
|
||||
"session": {
|
||||
"": {
|
||||
"policy": "write"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### TLS support
|
||||
|
||||
To connect to a Consul endpoint using SSL, simply specify `https://` in the `consul.endpoint` property.
|
||||
|
||||
@@ -126,7 +126,7 @@ docker-machine ssh manager "docker service create \
|
||||
```
|
||||
|
||||
!!! note
|
||||
We set `whoami1` to use sticky sessions (`--label traefik.backend.loadbalancer.sticky=true`).
|
||||
We set `whoami1` to use sticky sessions (`--label traefik.backend.loadbalancer.stickiness=true`).
|
||||
We'll demonstrate that later.
|
||||
|
||||
!!! note
|
||||
@@ -307,7 +307,7 @@ cat ./cookies.txt
|
||||
whoami1.traefik FALSE / FALSE 0 _TRAEFIK_BACKEND http://10.0.0.15:80
|
||||
```
|
||||
|
||||
If you load the cookies file (`-b cookies.txt`) for the next request, you will see that stickyness is maintained:
|
||||
If you load the cookies file (`-b cookies.txt`) for the next request, you will see that stickiness is maintained:
|
||||
|
||||
```shell
|
||||
curl -b cookies.txt -H Host:whoami1.traefik http://$(docker-machine ip manager)
|
||||
|
||||
@@ -7,7 +7,6 @@ rules:
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- pods
|
||||
- services
|
||||
- endpoints
|
||||
- secrets
|
||||
|
||||
10
glide.lock
generated
@@ -1,5 +1,5 @@
|
||||
hash: 9d176906cf25eaa3224c750b1dd3a5c8ce3340a58df3c562e57572c6294e16f0
|
||||
updated: 2017-09-30T18:32:16.848940186+02:00
|
||||
hash: de7e6a0069090a5811c003db434da19fe31efcf0c9429d3ccb676295708f0d2b
|
||||
updated: 2017-10-24T14:08:11.364720581+02:00
|
||||
imports:
|
||||
- name: cloud.google.com/go
|
||||
version: 2e6a95edb1071d750f6d7db777bf66cd2997af6c
|
||||
@@ -89,7 +89,7 @@ imports:
|
||||
- name: github.com/containous/flaeg
|
||||
version: b5d2dc5878df07c2d74413348186982e7b865871
|
||||
- name: github.com/containous/mux
|
||||
version: af6ea922f7683d9706834157e6b0610e22ccb2db
|
||||
version: 06ccd3e75091eb659b1d720cda0e16bc7057954c
|
||||
- name: github.com/containous/staert
|
||||
version: 1e26a71803e428fd933f5f9c8e50a26878f53147
|
||||
- name: github.com/coreos/etcd
|
||||
@@ -383,7 +383,7 @@ imports:
|
||||
repo: https://github.com/ijc25/Gotty.git
|
||||
vcs: git
|
||||
- name: github.com/NYTimes/gziphandler
|
||||
version: 97ae7fbaf81620fe97840685304a78a306a39c64
|
||||
version: d6f46609c7629af3a02d791a4666866eed3cbd3e
|
||||
- name: github.com/ogier/pflag
|
||||
version: 45c278ab3607870051a2ea9040bb85fcb8557481
|
||||
- name: github.com/opencontainers/go-digest
|
||||
@@ -481,7 +481,7 @@ imports:
|
||||
- name: github.com/urfave/negroni
|
||||
version: 490e6a555d47ca891a89a150d0c1ef3922dfffe9
|
||||
- name: github.com/vulcand/oxy
|
||||
version: 648088ee0902cf8d8337826ae2a82444008720e2
|
||||
version: 7e9763c4dc71b9758379da3581e6495c145caaab
|
||||
repo: https://github.com/containous/oxy.git
|
||||
vcs: git
|
||||
subpackages:
|
||||
|
||||
@@ -12,7 +12,7 @@ import:
|
||||
- package: github.com/cenk/backoff
|
||||
- package: github.com/containous/flaeg
|
||||
- package: github.com/vulcand/oxy
|
||||
version: 648088ee0902cf8d8337826ae2a82444008720e2
|
||||
version: 7e9763c4dc71b9758379da3581e6495c145caaab
|
||||
repo: https://github.com/containous/oxy.git
|
||||
vcs: git
|
||||
subpackages:
|
||||
|
||||
@@ -24,19 +24,20 @@ func (s *ConsulCatalogSuite) SetUpSuite(c *check.C) {
|
||||
s.composeProject.Start(c)
|
||||
|
||||
consul := s.composeProject.Container(c, "consul")
|
||||
|
||||
s.consulIP = consul.NetworkSettings.IPAddress
|
||||
config := api.DefaultConfig()
|
||||
config.Address = s.consulIP + ":8500"
|
||||
consulClient, err := api.NewClient(config)
|
||||
if err != nil {
|
||||
c.Fatalf("Error creating consul client. %v", err)
|
||||
}
|
||||
s.consulClient = consulClient
|
||||
s.createConsulClient(config, c)
|
||||
|
||||
// Wait for consul to elect itself leader
|
||||
err = try.Do(3*time.Second, func() error {
|
||||
leader, err := consulClient.Status().Leader()
|
||||
err := s.waitToElectConsulLeader()
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
}
|
||||
|
||||
func (s *ConsulCatalogSuite) waitToElectConsulLeader() error {
|
||||
return try.Do(15*time.Second, func() error {
|
||||
leader, err := s.consulClient.Status().Leader()
|
||||
|
||||
if err != nil || len(leader) == 0 {
|
||||
return fmt.Errorf("Leader not found. %v", err)
|
||||
@@ -44,7 +45,17 @@ func (s *ConsulCatalogSuite) SetUpSuite(c *check.C) {
|
||||
|
||||
return nil
|
||||
})
|
||||
c.Assert(err, checker.IsNil)
|
||||
}
|
||||
func (s *ConsulCatalogSuite) createConsulClient(config *api.Config, c *check.C) *api.Client {
|
||||
consulClient, err := api.NewClient(config)
|
||||
if err != nil {
|
||||
c.Fatalf("Error creating consul client. %v", err)
|
||||
}
|
||||
s.consulClient = consulClient
|
||||
return consulClient
|
||||
}
|
||||
func (s *ConsulCatalogSuite) startConsulService(c *check.C) {
|
||||
|
||||
}
|
||||
|
||||
func (s *ConsulCatalogSuite) registerService(name string, address string, port int, tags []string) error {
|
||||
@@ -332,3 +343,124 @@ func (s *ConsulCatalogSuite) TestBasicAuthSimpleService(c *check.C) {
|
||||
err = try.Request(req, 5*time.Second, try.StatusCodeIs(http.StatusOK), try.HasBody())
|
||||
c.Assert(err, checker.IsNil)
|
||||
}
|
||||
|
||||
func (s *ConsulCatalogSuite) TestRefreshConfigTagChange(c *check.C) {
|
||||
cmd, display := s.traefikCmd(
|
||||
withConfigFile("fixtures/consul_catalog/simple.toml"),
|
||||
"--consulCatalog",
|
||||
"--consulCatalog.exposedByDefault=false",
|
||||
"--consulCatalog.watch=true",
|
||||
"--consulCatalog.endpoint="+s.consulIP+":8500",
|
||||
"--consulCatalog.domain=consul.localhost")
|
||||
defer display(c)
|
||||
err := cmd.Start()
|
||||
c.Assert(err, checker.IsNil)
|
||||
defer cmd.Process.Kill()
|
||||
|
||||
nginx := s.composeProject.Container(c, "nginx1")
|
||||
|
||||
err = s.registerService("test", nginx.NetworkSettings.IPAddress, 80, []string{"name=nginx1", "traefik.enable=false", "traefik.backend.circuitbreaker=NetworkErrorRatio() > 0.5"})
|
||||
c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
|
||||
defer s.deregisterService("test", nginx.NetworkSettings.IPAddress)
|
||||
|
||||
err = try.GetRequest("http://127.0.0.1:8080/api/providers/consul_catalog/backends", 5*time.Second, try.BodyContains("nginx1"))
|
||||
c.Assert(err, checker.NotNil)
|
||||
|
||||
err = s.registerService("test", nginx.NetworkSettings.IPAddress, 80, []string{"name=nginx1", "traefik.enable=true", "traefik.backend.circuitbreaker=ResponseCodeRatio(500, 600, 0, 600) > 0.5"})
|
||||
c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
|
||||
|
||||
req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1:8000/", nil)
|
||||
c.Assert(err, checker.IsNil)
|
||||
req.Host = "test.consul.localhost"
|
||||
|
||||
err = try.Request(req, 20*time.Second, try.StatusCodeIs(http.StatusOK), try.HasBody())
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
err = try.GetRequest("http://127.0.0.1:8080/api/providers/consul_catalog/backends", 60*time.Second, try.BodyContains("nginx1"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
}
|
||||
|
||||
func (s *ConsulCatalogSuite) TestCircuitBreaker(c *check.C) {
|
||||
cmd, display := s.traefikCmd(
|
||||
withConfigFile("fixtures/consul_catalog/simple.toml"),
|
||||
"--retry",
|
||||
"--retry.attempts=1",
|
||||
"--forwardingTimeouts.dialTimeout=5s",
|
||||
"--forwardingTimeouts.responseHeaderTimeout=10s",
|
||||
"--consulCatalog",
|
||||
"--consulCatalog.exposedByDefault=false",
|
||||
"--consulCatalog.watch=true",
|
||||
"--consulCatalog.endpoint="+s.consulIP+":8500",
|
||||
"--consulCatalog.domain=consul.localhost")
|
||||
defer display(c)
|
||||
err := cmd.Start()
|
||||
c.Assert(err, checker.IsNil)
|
||||
defer cmd.Process.Kill()
|
||||
|
||||
nginx := s.composeProject.Container(c, "nginx1")
|
||||
nginx2 := s.composeProject.Container(c, "nginx2")
|
||||
nginx3 := s.composeProject.Container(c, "nginx3")
|
||||
|
||||
err = s.registerService("test", nginx.NetworkSettings.IPAddress, 80, []string{"name=nginx1", "traefik.enable=true", "traefik.backend.circuitbreaker=NetworkErrorRatio() > 0.5"})
|
||||
c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
|
||||
defer s.deregisterService("test", nginx.NetworkSettings.IPAddress)
|
||||
err = s.registerService("test", nginx2.NetworkSettings.IPAddress, 42, []string{"name=nginx2", "traefik.enable=true", "traefik.backend.circuitbreaker=NetworkErrorRatio() > 0.5"})
|
||||
c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
|
||||
defer s.deregisterService("test", nginx2.NetworkSettings.IPAddress)
|
||||
err = s.registerService("test", nginx3.NetworkSettings.IPAddress, 42, []string{"name=nginx3", "traefik.enable=true", "traefik.backend.circuitbreaker=NetworkErrorRatio() > 0.5"})
|
||||
c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
|
||||
defer s.deregisterService("test", nginx3.NetworkSettings.IPAddress)
|
||||
|
||||
req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1:8000/", nil)
|
||||
c.Assert(err, checker.IsNil)
|
||||
req.Host = "test.consul.localhost"
|
||||
|
||||
err = try.Request(req, 20*time.Second, try.StatusCodeIs(http.StatusServiceUnavailable), try.HasBody())
|
||||
c.Assert(err, checker.IsNil)
|
||||
}
|
||||
|
||||
func (s *ConsulCatalogSuite) TestRetryWithConsulServer(c *check.C) {
|
||||
// Scale consul to 0 so that traefik can start first, in order to test the retry behaviour
|
||||
s.composeProject.Scale(c, "consul", 0)
|
||||
|
||||
cmd, display := s.traefikCmd(
|
||||
withConfigFile("fixtures/consul_catalog/simple.toml"),
|
||||
"--consulCatalog",
|
||||
"--consulCatalog.watch=false",
|
||||
"--consulCatalog.exposedByDefault=true",
|
||||
"--consulCatalog.endpoint="+s.consulIP+":8500",
|
||||
"--consulCatalog.domain=consul.localhost")
|
||||
defer display(c)
|
||||
err := cmd.Start()
|
||||
c.Assert(err, checker.IsNil)
|
||||
defer cmd.Process.Kill()
|
||||
|
||||
// Wait for Traefik to turn ready.
|
||||
err = try.GetRequest("http://127.0.0.1:8000/", 2*time.Second, try.StatusCodeIs(http.StatusNotFound))
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1:8000/", nil)
|
||||
c.Assert(err, checker.IsNil)
|
||||
req.Host = "test.consul.localhost"
|
||||
|
||||
// Request should fail
|
||||
err = try.Request(req, 2*time.Second, try.StatusCodeIs(http.StatusNotFound), try.HasBody())
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
// Scale consul to 1
|
||||
s.composeProject.Scale(c, "consul", 1)
|
||||
s.waitToElectConsulLeader()
|
||||
|
||||
nginx := s.composeProject.Container(c, "nginx1")
|
||||
// Register service
|
||||
err = s.registerService("test", nginx.NetworkSettings.IPAddress, 80, []string{})
|
||||
c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
|
||||
|
||||
// Provider consul catalog should be present
|
||||
err = try.GetRequest("http://127.0.0.1:8080/api/providers", 10*time.Second, try.BodyContains("consul_catalog"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
// Should be ok
|
||||
err = try.Request(req, 10*time.Second, try.StatusCodeIs(http.StatusOK), try.HasBody())
|
||||
c.Assert(err, checker.IsNil)
|
||||
}
|
||||
|
||||
@@ -49,6 +49,14 @@ func (s *DockerSuite) startContainerWithLabels(c *check.C, image string, labels
|
||||
})
|
||||
}
|
||||
|
||||
func (s *DockerSuite) startContainerWithNameAndLabels(c *check.C, name string, image string, labels map[string]string, args ...string) string {
|
||||
return s.startContainerWithConfig(c, image, d.ContainerConfig{
|
||||
Name: name,
|
||||
Cmd: args,
|
||||
Labels: labels,
|
||||
})
|
||||
}
|
||||
|
||||
func (s *DockerSuite) startContainerWithConfig(c *check.C, image string, config d.ContainerConfig) string {
|
||||
if config.Name == "" {
|
||||
config.Name = namesgenerator.GetRandomName(10)
|
||||
@@ -60,6 +68,11 @@ func (s *DockerSuite) startContainerWithConfig(c *check.C, image string, config
|
||||
return strings.SplitAfter(container.Name, "/")[1]
|
||||
}
|
||||
|
||||
func (s *DockerSuite) stopAndRemoveContainerByName(c *check.C, name string) {
|
||||
s.project.Stop(c, name)
|
||||
s.project.Remove(c, name)
|
||||
}
|
||||
|
||||
func (s *DockerSuite) SetUpSuite(c *check.C) {
|
||||
project := docker.NewProjectFromEnv(c)
|
||||
s.project = project
|
||||
@@ -129,6 +142,11 @@ func (s *DockerSuite) TestDockerContainersWithLabels(c *check.C) {
|
||||
}
|
||||
s.startContainerWithLabels(c, "swarm:1.0.0", labels, "manage", "token://blabla")
|
||||
|
||||
// Start another container by replacing a '.' by a '-'
|
||||
labels = map[string]string{
|
||||
types.LabelFrontendRule: "Host:my-super.host",
|
||||
}
|
||||
s.startContainerWithLabels(c, "swarm:1.0.0", labels, "manage", "token://blablabla")
|
||||
// Start traefik
|
||||
cmd, display := s.traefikCmd(withConfigFile(file))
|
||||
defer display(c)
|
||||
@@ -138,12 +156,20 @@ func (s *DockerSuite) TestDockerContainersWithLabels(c *check.C) {
|
||||
|
||||
req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1:8000/version", nil)
|
||||
c.Assert(err, checker.IsNil)
|
||||
req.Host = "my.super.host"
|
||||
req.Host = "my-super.host"
|
||||
|
||||
// FIXME Need to wait more than 500 milliseconds (for swarm or traefik to boot up?)
|
||||
resp, err := try.ResponseUntilStatusCode(req, 1500*time.Millisecond, http.StatusOK)
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
req, err = http.NewRequest(http.MethodGet, "http://127.0.0.1:8000/version", nil)
|
||||
c.Assert(err, checker.IsNil)
|
||||
req.Host = "my.super.host"
|
||||
|
||||
// FIXME Need to wait more than 500 milliseconds (for swarm or traefik to boot up?)
|
||||
resp, err = try.ResponseUntilStatusCode(req, 1500*time.Millisecond, http.StatusOK)
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
body, err := ioutil.ReadAll(resp.Body)
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
@@ -179,3 +205,90 @@ func (s *DockerSuite) TestDockerContainersWithOneMissingLabels(c *check.C) {
|
||||
err = try.Request(req, 1500*time.Millisecond, try.StatusCodeIs(http.StatusNotFound))
|
||||
c.Assert(err, checker.IsNil)
|
||||
}
|
||||
|
||||
// TestDockerContainersWithServiceLabels allows checking the labels behavior
|
||||
// Use service label if defined and complete information with container labels.
|
||||
func (s *DockerSuite) TestDockerContainersWithServiceLabels(c *check.C) {
|
||||
file := s.adaptFileForHost(c, "fixtures/docker/simple.toml")
|
||||
defer os.Remove(file)
|
||||
// Start a container with some labels
|
||||
labels := map[string]string{
|
||||
types.LabelPrefix + "servicename.frontend.rule": "Host:my.super.host",
|
||||
types.LabelFrontendRule: "Host:my.wrong.host",
|
||||
types.LabelPort: "2375",
|
||||
}
|
||||
s.startContainerWithLabels(c, "swarm:1.0.0", labels, "manage", "token://blabla")
|
||||
|
||||
// Start traefik
|
||||
cmd, display := s.traefikCmd(withConfigFile(file))
|
||||
defer display(c)
|
||||
err := cmd.Start()
|
||||
c.Assert(err, checker.IsNil)
|
||||
defer cmd.Process.Kill()
|
||||
|
||||
req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1:8000/version", nil)
|
||||
c.Assert(err, checker.IsNil)
|
||||
req.Host = "my.super.host"
|
||||
|
||||
// FIXME Need to wait more than 500 milliseconds (for swarm or traefik to boot up?)
|
||||
resp, err := try.ResponseUntilStatusCode(req, 1500*time.Millisecond, http.StatusOK)
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
body, err := ioutil.ReadAll(resp.Body)
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
var version map[string]interface{}
|
||||
|
||||
c.Assert(json.Unmarshal(body, &version), checker.IsNil)
|
||||
c.Assert(version["Version"], checker.Equals, "swarm/1.0.0")
|
||||
}
|
||||
|
||||
func (s *DockerSuite) TestRestartDockerContainers(c *check.C) {
|
||||
file := s.adaptFileForHost(c, "fixtures/docker/simple.toml")
|
||||
defer os.Remove(file)
|
||||
// Start a container with some labels
|
||||
labels := map[string]string{
|
||||
types.LabelPrefix + "frontend.rule": "Host:my.super.host",
|
||||
types.LabelPort: "2375",
|
||||
}
|
||||
s.startContainerWithNameAndLabels(c, "powpow", "swarm:1.0.0", labels, "manage", "token://blabla")
|
||||
|
||||
// Start traefik
|
||||
cmd, display := s.traefikCmd(withConfigFile(file))
|
||||
defer display(c)
|
||||
err := cmd.Start()
|
||||
c.Assert(err, checker.IsNil)
|
||||
defer cmd.Process.Kill()
|
||||
|
||||
req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1:8000/version", nil)
|
||||
c.Assert(err, checker.IsNil)
|
||||
req.Host = "my.super.host"
|
||||
|
||||
// FIXME Need to wait more than 500 milliseconds (for swarm or traefik to boot up?)
|
||||
resp, err := try.ResponseUntilStatusCode(req, 1500*time.Millisecond, http.StatusOK)
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
body, err := ioutil.ReadAll(resp.Body)
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
var version map[string]interface{}
|
||||
|
||||
c.Assert(json.Unmarshal(body, &version), checker.IsNil)
|
||||
c.Assert(version["Version"], checker.Equals, "swarm/1.0.0")
|
||||
|
||||
err = try.GetRequest("http://127.0.0.1:8080/api/providers/docker/backends", 60*time.Second, try.BodyContains("powpow"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
s.stopAndRemoveContainerByName(c, "powpow")
|
||||
defer s.project.Remove(c, "powpow")
|
||||
|
||||
time.Sleep(5 * time.Second)
|
||||
|
||||
err = try.GetRequest("http://127.0.0.1:8080/api/providers/docker/backends", 10*time.Second, try.BodyContains("powpow"))
|
||||
c.Assert(err, checker.NotNil)
|
||||
|
||||
s.startContainerWithNameAndLabels(c, "powpow", "swarm:1.0.0", labels, "manage", "token://blabla")
|
||||
|
||||
err = try.GetRequest("http://127.0.0.1:8080/api/providers/docker/backends", 60*time.Second, try.BodyContains("powpow"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
}
|
||||
|
||||
@@ -6,6 +6,8 @@ logLevel = "DEBUG"
|
||||
[entryPoints.http]
|
||||
address = ":8000"
|
||||
|
||||
[web]
|
||||
|
||||
[docker]
|
||||
|
||||
# It's dynamagic !
|
||||
|
||||
29
integration/fixtures/grpc/config_insecure.toml
Normal file
@@ -0,0 +1,29 @@
|
||||
defaultEntryPoints = ["https"]
|
||||
|
||||
InsecureSkipVerify = true
|
||||
|
||||
[entryPoints]
|
||||
[entryPoints.https]
|
||||
address = ":4443"
|
||||
[entryPoints.https.tls]
|
||||
[[entryPoints.https.tls.certificates]]
|
||||
CertFile = """{{ .CertContent }}"""
|
||||
KeyFile = """{{ .KeyContent }}"""
|
||||
|
||||
|
||||
[web]
|
||||
address = ":8080"
|
||||
|
||||
[file]
|
||||
|
||||
[backends]
|
||||
[backends.backend1]
|
||||
[backends.backend1.servers.server1]
|
||||
url = "https://127.0.0.1:{{ .GRPCServerPort }}"
|
||||
|
||||
|
||||
[frontends]
|
||||
[frontends.frontend1]
|
||||
backend = "backend1"
|
||||
[frontends.frontend1.routes.test_1]
|
||||
rule = "Host:127.0.0.1"
|
||||
@@ -1,6 +1,7 @@
|
||||
defaultEntryPoints = ["wss"]
|
||||
|
||||
logLevel = "DEBUG"
|
||||
InsecureSkipVerify=true
|
||||
|
||||
[entryPoints]
|
||||
[entryPoints.wss]
|
||||
@@ -24,4 +25,4 @@ logLevel = "DEBUG"
|
||||
[frontends.frontend1]
|
||||
backend = "backend1"
|
||||
[frontends.frontend1.routes.test_1]
|
||||
rule = "Path:/ws"
|
||||
rule = "Path:/echo,/ws"
|
||||
|
||||
@@ -1,8 +1,10 @@
|
||||
package integration
|
||||
|
||||
import (
|
||||
"crypto/rand"
|
||||
"crypto/tls"
|
||||
"crypto/x509"
|
||||
"errors"
|
||||
"io/ioutil"
|
||||
"net"
|
||||
"os"
|
||||
@@ -22,7 +24,9 @@ var LocalhostKey []byte
|
||||
// GRPCSuite
|
||||
type GRPCSuite struct{ BaseSuite }
|
||||
|
||||
type myserver struct{}
|
||||
type myserver struct {
|
||||
stopStreamExample chan bool
|
||||
}
|
||||
|
||||
func (s *GRPCSuite) SetUpSuite(c *check.C) {
|
||||
var err error
|
||||
@@ -36,7 +40,15 @@ func (s *myserver) SayHello(ctx context.Context, in *helloworld.HelloRequest) (*
|
||||
return &helloworld.HelloReply{Message: "Hello " + in.Name}, nil
|
||||
}
|
||||
|
||||
func startGRPCServer(lis net.Listener) error {
|
||||
func (s *myserver) StreamExample(in *helloworld.StreamExampleRequest, server helloworld.Greeter_StreamExampleServer) error {
|
||||
data := make([]byte, 512)
|
||||
rand.Read(data)
|
||||
server.Send(&helloworld.StreamExampleReply{Data: string(data)})
|
||||
<-s.stopStreamExample
|
||||
return nil
|
||||
}
|
||||
|
||||
func startGRPCServer(lis net.Listener, server *myserver) error {
|
||||
cert, err := tls.X509KeyPair(LocalhostCert, LocalhostKey)
|
||||
if err != nil {
|
||||
return err
|
||||
@@ -45,26 +57,30 @@ func startGRPCServer(lis net.Listener) error {
|
||||
creds := credentials.NewServerTLSFromCert(&cert)
|
||||
serverOption := grpc.Creds(creds)
|
||||
|
||||
var s *grpc.Server = grpc.NewServer(serverOption)
|
||||
s := grpc.NewServer(serverOption)
|
||||
defer s.Stop()
|
||||
|
||||
helloworld.RegisterGreeterServer(s, &myserver{})
|
||||
helloworld.RegisterGreeterServer(s, server)
|
||||
return s.Serve(lis)
|
||||
}
|
||||
|
||||
func callHelloClientGRPC() (string, error) {
|
||||
func getHelloClientGRPC() (helloworld.GreeterClient, func() error, error) {
|
||||
roots := x509.NewCertPool()
|
||||
roots.AppendCertsFromPEM(LocalhostCert)
|
||||
credsClient := credentials.NewClientTLSFromCert(roots, "")
|
||||
conn, err := grpc.Dial("127.0.0.1:4443", grpc.WithTransportCredentials(credsClient))
|
||||
if err != nil {
|
||||
return nil, func() error { return nil }, err
|
||||
}
|
||||
return helloworld.NewGreeterClient(conn), conn.Close, nil
|
||||
|
||||
}
|
||||
|
||||
func callHelloClientGRPC(name string) (string, error) {
|
||||
client, closer, err := getHelloClientGRPC()
|
||||
defer closer()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
defer conn.Close()
|
||||
client := helloworld.NewGreeterClient(conn)
|
||||
|
||||
name := "World"
|
||||
r, err := client.SayHello(context.Background(), &helloworld.HelloRequest{Name: name})
|
||||
if err != nil {
|
||||
return "", err
|
||||
@@ -72,13 +88,26 @@ func callHelloClientGRPC() (string, error) {
|
||||
return r.Message, nil
|
||||
}
|
||||
|
||||
func callStreamExampleClientGRPC() (helloworld.Greeter_StreamExampleClient, func() error, error) {
|
||||
client, closer, err := getHelloClientGRPC()
|
||||
if err != nil {
|
||||
return nil, closer, err
|
||||
}
|
||||
t, err := client.StreamExample(context.Background(), &helloworld.StreamExampleRequest{})
|
||||
if err != nil {
|
||||
return nil, closer, err
|
||||
}
|
||||
|
||||
return t, closer, nil
|
||||
}
|
||||
|
||||
func (s *GRPCSuite) TestGRPC(c *check.C) {
|
||||
lis, err := net.Listen("tcp", ":0")
|
||||
_, port, err := net.SplitHostPort(lis.Addr().String())
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
go func() {
|
||||
err := startGRPCServer(lis)
|
||||
err := startGRPCServer(lis, &myserver{})
|
||||
c.Log(err)
|
||||
c.Assert(err, check.IsNil)
|
||||
}()
|
||||
@@ -106,10 +135,110 @@ func (s *GRPCSuite) TestGRPC(c *check.C) {
|
||||
c.Assert(err, check.IsNil)
|
||||
var response string
|
||||
err = try.Do(1*time.Second, func() error {
|
||||
response, err = callHelloClientGRPC()
|
||||
response, err = callHelloClientGRPC("World")
|
||||
return err
|
||||
})
|
||||
|
||||
c.Assert(err, check.IsNil)
|
||||
c.Assert(response, check.Equals, "Hello World")
|
||||
}
|
||||
|
||||
func (s *GRPCSuite) TestGRPCInsecure(c *check.C) {
|
||||
lis, err := net.Listen("tcp", ":0")
|
||||
_, port, err := net.SplitHostPort(lis.Addr().String())
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
go func() {
|
||||
err := startGRPCServer(lis, &myserver{})
|
||||
c.Log(err)
|
||||
c.Assert(err, check.IsNil)
|
||||
}()
|
||||
|
||||
file := s.adaptFile(c, "fixtures/grpc/config_insecure.toml", struct {
|
||||
CertContent string
|
||||
KeyContent string
|
||||
GRPCServerPort string
|
||||
}{
|
||||
CertContent: string(LocalhostCert),
|
||||
KeyContent: string(LocalhostKey),
|
||||
GRPCServerPort: port,
|
||||
})
|
||||
|
||||
defer os.Remove(file)
|
||||
cmd, display := s.traefikCmd(withConfigFile(file))
|
||||
defer display(c)
|
||||
|
||||
err = cmd.Start()
|
||||
c.Assert(err, check.IsNil)
|
||||
defer cmd.Process.Kill()
|
||||
|
||||
// wait for Traefik
|
||||
err = try.GetRequest("http://127.0.0.1:8080/api/providers", 1*time.Second, try.BodyContains("Host:127.0.0.1"))
|
||||
c.Assert(err, check.IsNil)
|
||||
var response string
|
||||
err = try.Do(1*time.Second, func() error {
|
||||
response, err = callHelloClientGRPC("World")
|
||||
return err
|
||||
})
|
||||
|
||||
c.Assert(err, check.IsNil)
|
||||
c.Assert(response, check.Equals, "Hello World")
|
||||
}
|
||||
|
||||
func (s *GRPCSuite) TestGRPCBuffer(c *check.C) {
|
||||
stopStreamExample := make(chan bool)
|
||||
defer func() { stopStreamExample <- true }()
|
||||
lis, err := net.Listen("tcp", ":0")
|
||||
_, port, err := net.SplitHostPort(lis.Addr().String())
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
go func() {
|
||||
err := startGRPCServer(lis, &myserver{
|
||||
stopStreamExample: stopStreamExample,
|
||||
})
|
||||
c.Log(err)
|
||||
c.Assert(err, check.IsNil)
|
||||
}()
|
||||
|
||||
file := s.adaptFile(c, "fixtures/grpc/config.toml", struct {
|
||||
CertContent string
|
||||
KeyContent string
|
||||
GRPCServerPort string
|
||||
}{
|
||||
CertContent: string(LocalhostCert),
|
||||
KeyContent: string(LocalhostKey),
|
||||
GRPCServerPort: port,
|
||||
})
|
||||
|
||||
defer os.Remove(file)
|
||||
cmd, display := s.traefikCmd(withConfigFile(file))
|
||||
defer display(c)
|
||||
|
||||
err = cmd.Start()
|
||||
c.Assert(err, check.IsNil)
|
||||
defer cmd.Process.Kill()
|
||||
|
||||
// wait for Traefik
|
||||
err = try.GetRequest("http://127.0.0.1:8080/api/providers", 1*time.Second, try.BodyContains("Host:127.0.0.1"))
|
||||
c.Assert(err, check.IsNil)
|
||||
var client helloworld.Greeter_StreamExampleClient
|
||||
client, closer, err := callStreamExampleClientGRPC()
|
||||
defer closer()
|
||||
|
||||
received := make(chan bool)
|
||||
go func() {
|
||||
tr, _ := client.Recv()
|
||||
c.Assert(len(tr.Data), check.Equals, 512)
|
||||
received <- true
|
||||
}()
|
||||
|
||||
err = try.Do(time.Second*10, func() error {
|
||||
select {
|
||||
case <-received:
|
||||
return nil
|
||||
default:
|
||||
return errors.New("failed to receive stream data")
|
||||
}
|
||||
})
|
||||
c.Assert(err, check.IsNil)
|
||||
}
|
||||
|
||||
@@ -10,6 +10,8 @@ It is generated from these files:
|
||||
It has these top-level messages:
|
||||
HelloRequest
|
||||
HelloReply
|
||||
StreamExampleRequest
|
||||
StreamExampleReply
|
||||
*/
|
||||
package helloworld
|
||||
|
||||
@@ -67,9 +69,35 @@ func (m *HelloReply) GetMessage() string {
|
||||
return ""
|
||||
}
|
||||
|
||||
type StreamExampleRequest struct {
|
||||
}
|
||||
|
||||
func (m *StreamExampleRequest) Reset() { *m = StreamExampleRequest{} }
|
||||
func (m *StreamExampleRequest) String() string { return proto.CompactTextString(m) }
|
||||
func (*StreamExampleRequest) ProtoMessage() {}
|
||||
func (*StreamExampleRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
|
||||
|
||||
type StreamExampleReply struct {
|
||||
Data string `protobuf:"bytes,1,opt,name=data" json:"data,omitempty"`
|
||||
}
|
||||
|
||||
func (m *StreamExampleReply) Reset() { *m = StreamExampleReply{} }
|
||||
func (m *StreamExampleReply) String() string { return proto.CompactTextString(m) }
|
||||
func (*StreamExampleReply) ProtoMessage() {}
|
||||
func (*StreamExampleReply) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
|
||||
|
||||
func (m *StreamExampleReply) GetData() string {
|
||||
if m != nil {
|
||||
return m.Data
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func init() {
|
||||
proto.RegisterType((*HelloRequest)(nil), "helloworld.HelloRequest")
|
||||
proto.RegisterType((*HelloReply)(nil), "helloworld.HelloReply")
|
||||
proto.RegisterType((*StreamExampleRequest)(nil), "helloworld.StreamExampleRequest")
|
||||
proto.RegisterType((*StreamExampleReply)(nil), "helloworld.StreamExampleReply")
|
||||
}
|
||||
|
||||
// Reference imports to suppress errors if they are not otherwise used.
|
||||
@@ -85,6 +113,8 @@ const _ = grpc.SupportPackageIsVersion4
|
||||
type GreeterClient interface {
|
||||
// Sends a greeting
|
||||
SayHello(ctx context.Context, in *HelloRequest, opts ...grpc.CallOption) (*HelloReply, error)
|
||||
// Tick me
|
||||
StreamExample(ctx context.Context, in *StreamExampleRequest, opts ...grpc.CallOption) (Greeter_StreamExampleClient, error)
|
||||
}
|
||||
|
||||
type greeterClient struct {
|
||||
@@ -104,11 +134,45 @@ func (c *greeterClient) SayHello(ctx context.Context, in *HelloRequest, opts ...
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (c *greeterClient) StreamExample(ctx context.Context, in *StreamExampleRequest, opts ...grpc.CallOption) (Greeter_StreamExampleClient, error) {
|
||||
stream, err := grpc.NewClientStream(ctx, &_Greeter_serviceDesc.Streams[0], c.cc, "/helloworld.Greeter/StreamExample", opts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
x := &greeterStreamExampleClient{stream}
|
||||
if err := x.ClientStream.SendMsg(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := x.ClientStream.CloseSend(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return x, nil
|
||||
}
|
||||
|
||||
type Greeter_StreamExampleClient interface {
|
||||
Recv() (*StreamExampleReply, error)
|
||||
grpc.ClientStream
|
||||
}
|
||||
|
||||
type greeterStreamExampleClient struct {
|
||||
grpc.ClientStream
|
||||
}
|
||||
|
||||
func (x *greeterStreamExampleClient) Recv() (*StreamExampleReply, error) {
|
||||
m := new(StreamExampleReply)
|
||||
if err := x.ClientStream.RecvMsg(m); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return m, nil
|
||||
}
|
||||
|
||||
// Server API for Greeter service
|
||||
|
||||
type GreeterServer interface {
|
||||
// Sends a greeting
|
||||
SayHello(context.Context, *HelloRequest) (*HelloReply, error)
|
||||
// Tick me
|
||||
StreamExample(*StreamExampleRequest, Greeter_StreamExampleServer) error
|
||||
}
|
||||
|
||||
func RegisterGreeterServer(s *grpc.Server, srv GreeterServer) {
|
||||
@@ -133,6 +197,27 @@ func _Greeter_SayHello_Handler(srv interface{}, ctx context.Context, dec func(in
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _Greeter_StreamExample_Handler(srv interface{}, stream grpc.ServerStream) error {
|
||||
m := new(StreamExampleRequest)
|
||||
if err := stream.RecvMsg(m); err != nil {
|
||||
return err
|
||||
}
|
||||
return srv.(GreeterServer).StreamExample(m, &greeterStreamExampleServer{stream})
|
||||
}
|
||||
|
||||
type Greeter_StreamExampleServer interface {
|
||||
Send(*StreamExampleReply) error
|
||||
grpc.ServerStream
|
||||
}
|
||||
|
||||
type greeterStreamExampleServer struct {
|
||||
grpc.ServerStream
|
||||
}
|
||||
|
||||
func (x *greeterStreamExampleServer) Send(m *StreamExampleReply) error {
|
||||
return x.ServerStream.SendMsg(m)
|
||||
}
|
||||
|
||||
var _Greeter_serviceDesc = grpc.ServiceDesc{
|
||||
ServiceName: "helloworld.Greeter",
|
||||
HandlerType: (*GreeterServer)(nil),
|
||||
@@ -142,23 +227,33 @@ var _Greeter_serviceDesc = grpc.ServiceDesc{
|
||||
Handler: _Greeter_SayHello_Handler,
|
||||
},
|
||||
},
|
||||
Streams: []grpc.StreamDesc{},
|
||||
Streams: []grpc.StreamDesc{
|
||||
{
|
||||
StreamName: "StreamExample",
|
||||
Handler: _Greeter_StreamExample_Handler,
|
||||
ServerStreams: true,
|
||||
},
|
||||
},
|
||||
Metadata: "helloworld.proto",
|
||||
}
|
||||
|
||||
func init() { proto.RegisterFile("helloworld.proto", fileDescriptor0) }
|
||||
|
||||
var fileDescriptor0 = []byte{
|
||||
// 175 bytes of a gzipped FileDescriptorProto
|
||||
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0xc8, 0x48, 0xcd, 0xc9,
|
||||
0xc9, 0x2f, 0xcf, 0x2f, 0xca, 0x49, 0xd1, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x42, 0x88,
|
||||
0x28, 0x29, 0x71, 0xf1, 0x78, 0x80, 0x78, 0x41, 0xa9, 0x85, 0xa5, 0xa9, 0xc5, 0x25, 0x42, 0x42,
|
||||
0x5c, 0x2c, 0x79, 0x89, 0xb9, 0xa9, 0x12, 0x8c, 0x0a, 0x8c, 0x1a, 0x9c, 0x41, 0x60, 0xb6, 0x92,
|
||||
0x1a, 0x17, 0x17, 0x54, 0x4d, 0x41, 0x4e, 0xa5, 0x90, 0x04, 0x17, 0x7b, 0x6e, 0x6a, 0x71, 0x71,
|
||||
0x62, 0x3a, 0x4c, 0x11, 0x8c, 0x6b, 0xe4, 0xc9, 0xc5, 0xee, 0x5e, 0x94, 0x9a, 0x5a, 0x92, 0x5a,
|
||||
0x24, 0x64, 0xc7, 0xc5, 0x11, 0x9c, 0x58, 0x09, 0xd6, 0x25, 0x24, 0xa1, 0x87, 0xe4, 0x02, 0x64,
|
||||
0xcb, 0xa4, 0xc4, 0xb0, 0xc8, 0x14, 0xe4, 0x54, 0x2a, 0x31, 0x38, 0x19, 0x70, 0x49, 0x67, 0xe6,
|
||||
0xeb, 0xa5, 0x17, 0x15, 0x24, 0xeb, 0xa5, 0x56, 0x24, 0xe6, 0x16, 0xe4, 0xa4, 0x16, 0x23, 0xa9,
|
||||
0x75, 0xe2, 0x07, 0x2b, 0x0e, 0x07, 0xb1, 0x03, 0x40, 0x5e, 0x0a, 0x60, 0x4c, 0x62, 0x03, 0xfb,
|
||||
0xcd, 0x18, 0x10, 0x00, 0x00, 0xff, 0xff, 0x0f, 0xb7, 0xcd, 0xf2, 0xef, 0x00, 0x00, 0x00,
|
||||
// 231 bytes of a gzipped FileDescriptorProto
|
||||
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x7c, 0x50, 0xc1, 0x4a, 0xc4, 0x30,
|
||||
0x10, 0x35, 0xb0, 0xb8, 0x3a, 0x28, 0xca, 0x20, 0x4b, 0x59, 0x41, 0x96, 0x1c, 0x64, 0x4f, 0xa1,
|
||||
0xe8, 0xdd, 0x43, 0x41, 0xf4, 0x58, 0x5a, 0xc4, 0x73, 0xb4, 0x43, 0x15, 0x12, 0x13, 0x93, 0x88,
|
||||
0xf6, 0x6f, 0xfc, 0x54, 0x49, 0x6c, 0x31, 0x4a, 0xf1, 0xf6, 0x66, 0xe6, 0xe5, 0xbd, 0x97, 0x07,
|
||||
0xc7, 0x4f, 0xa4, 0x94, 0x79, 0x37, 0x4e, 0x75, 0xc2, 0x3a, 0x13, 0x0c, 0xc2, 0xcf, 0x86, 0x73,
|
||||
0x38, 0xb8, 0x8d, 0x53, 0x43, 0xaf, 0x6f, 0xe4, 0x03, 0x22, 0x2c, 0x5e, 0xa4, 0xa6, 0x82, 0x6d,
|
||||
0xd8, 0x76, 0xbf, 0x49, 0x98, 0x9f, 0x03, 0x8c, 0x1c, 0xab, 0x06, 0x2c, 0x60, 0xa9, 0xc9, 0x7b,
|
||||
0xd9, 0x4f, 0xa4, 0x69, 0xe4, 0x2b, 0x38, 0x69, 0x83, 0x23, 0xa9, 0xaf, 0x3f, 0xa4, 0xb6, 0x8a,
|
||||
0x46, 0x4d, 0xbe, 0x05, 0xfc, 0xb3, 0x8f, 0x3a, 0x08, 0x8b, 0x4e, 0x06, 0x39, 0x39, 0x45, 0x7c,
|
||||
0xf1, 0xc9, 0x60, 0x79, 0xe3, 0x88, 0x02, 0x39, 0xbc, 0x82, 0xbd, 0x56, 0x0e, 0xc9, 0x18, 0x0b,
|
||||
0x91, 0x7d, 0x22, 0xcf, 0xbb, 0x5e, 0xcd, 0x5c, 0xac, 0x1a, 0xf8, 0x0e, 0xde, 0xc1, 0xe1, 0x2f,
|
||||
0x57, 0xdc, 0xe4, 0xd4, 0xb9, 0xa0, 0xeb, 0xb3, 0x7f, 0x18, 0x49, 0xb4, 0x64, 0x55, 0x09, 0xa7,
|
||||
0xcf, 0x46, 0xf4, 0xce, 0x3e, 0x0a, 0xfa, 0xbe, 0xf9, 0xec, 0x55, 0x75, 0x94, 0x32, 0xdc, 0x47,
|
||||
0x5c, 0xc7, 0xb2, 0x6b, 0xf6, 0xb0, 0x9b, 0x5a, 0xbf, 0xfc, 0x0a, 0x00, 0x00, 0xff, 0xff, 0x6f,
|
||||
0x5f, 0xae, 0xb8, 0x89, 0x01, 0x00, 0x00,
|
||||
}
|
||||
|
||||
@@ -9,7 +9,10 @@ package helloworld;
|
||||
// The greeting service definition.
|
||||
service Greeter {
|
||||
// Sends a greeting
|
||||
rpc SayHello (HelloRequest) returns (HelloReply) {}
|
||||
rpc SayHello (HelloRequest) returns (HelloReply) {};
|
||||
|
||||
rpc StreamExample (StreamExampleRequest) returns (stream StreamExampleReply) {};
|
||||
|
||||
}
|
||||
|
||||
// The request message containing the user's name.
|
||||
@@ -21,3 +24,9 @@ message HelloRequest {
|
||||
message HelloReply {
|
||||
string message = 1;
|
||||
}
|
||||
|
||||
message StreamExampleRequest {}
|
||||
|
||||
message StreamExampleReply {
|
||||
string data = 1;
|
||||
}
|
||||
@@ -57,10 +57,19 @@ func (s *LogRotationSuite) TestAccessLogRotation(c *check.C) {
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
// Verify access.log.rotated output as expected
|
||||
logAccessLogFile(c, traefikTestAccessLogFile+".rotated")
|
||||
lineCount := verifyLogLines(c, traefikTestAccessLogFile+".rotated", 0, true)
|
||||
c.Assert(lineCount, checker.GreaterOrEqualThan, 1)
|
||||
|
||||
// make sure that the access log file is at least created before we do assertions on it
|
||||
err = try.Do(1*time.Second, func() error {
|
||||
_, err := os.Stat(traefikTestAccessLogFile)
|
||||
return err
|
||||
})
|
||||
c.Assert(err, checker.IsNil, check.Commentf("access log file was not created in time"))
|
||||
|
||||
// Verify access.log output as expected
|
||||
logAccessLogFile(c, traefikTestAccessLogFile)
|
||||
lineCount = verifyLogLines(c, traefikTestAccessLogFile, lineCount, true)
|
||||
c.Assert(lineCount, checker.Equals, 3)
|
||||
|
||||
@@ -111,6 +120,12 @@ func (s *LogRotationSuite) TestTraefikLogRotation(c *check.C) {
|
||||
c.Assert(lineCount, checker.GreaterOrEqualThan, 7)
|
||||
}
|
||||
|
||||
func logAccessLogFile(c *check.C, fileName string) {
|
||||
output, err := ioutil.ReadFile(fileName)
|
||||
c.Assert(err, checker.IsNil)
|
||||
c.Logf("Contents of file %s\n%s", fileName, string(output))
|
||||
}
|
||||
|
||||
func verifyEmptyErrorLog(c *check.C, name string) {
|
||||
err := try.Do(5*time.Second, func() error {
|
||||
traefikLog, e2 := ioutil.ReadFile(name)
|
||||
@@ -130,7 +145,6 @@ func verifyLogLines(c *check.C, fileName string, countInit int, accessLog bool)
|
||||
count := countInit
|
||||
for rotatedLog.Scan() {
|
||||
line := rotatedLog.Text()
|
||||
c.Log(line)
|
||||
if accessLog {
|
||||
if len(line) > 0 {
|
||||
CheckAccessLogFormat(c, line, count)
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
consul:
|
||||
image: progrium/consul
|
||||
command: -server -bootstrap -log-level debug -ui-dir /ui
|
||||
image: consul
|
||||
command: agent -server -bootstrap-expect 1 -client 0.0.0.0 -log-level debug -ui
|
||||
ports:
|
||||
- "8400:8400"
|
||||
- "8500:8500"
|
||||
@@ -15,3 +15,5 @@ nginx1:
|
||||
image: nginx:alpine
|
||||
nginx2:
|
||||
image: nginx:alpine
|
||||
nginx3:
|
||||
image: nginx:alpine
|
||||
|
||||
@@ -441,3 +441,65 @@ func (s *WebsocketSuite) TestURLWithURLEncodedChar(c *check.C) {
|
||||
c.Assert(err, checker.IsNil)
|
||||
c.Assert(string(msg), checker.Equals, "OK")
|
||||
}
|
||||
|
||||
func (s *WebsocketSuite) TestSSLhttp2(c *check.C) {
|
||||
var upgrader = gorillawebsocket.Upgrader{} // use default options
|
||||
|
||||
ts := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
c, err := upgrader.Upgrade(w, r, nil)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
defer c.Close()
|
||||
for {
|
||||
mt, message, err := c.ReadMessage()
|
||||
if err != nil {
|
||||
break
|
||||
}
|
||||
err = c.WriteMessage(mt, message)
|
||||
if err != nil {
|
||||
break
|
||||
}
|
||||
}
|
||||
}))
|
||||
|
||||
ts.TLS = &tls.Config{}
|
||||
ts.TLS.NextProtos = append(ts.TLS.NextProtos, `h2`)
|
||||
ts.TLS.NextProtos = append(ts.TLS.NextProtos, `http/1.1`)
|
||||
ts.StartTLS()
|
||||
|
||||
file := s.adaptFile(c, "fixtures/websocket/config_https.toml", struct {
|
||||
WebsocketServer string
|
||||
}{
|
||||
WebsocketServer: ts.URL,
|
||||
})
|
||||
|
||||
defer os.Remove(file)
|
||||
cmd, display := s.traefikCmd(withConfigFile(file), "--debug", "--accesslog")
|
||||
defer display(c)
|
||||
|
||||
err := cmd.Start()
|
||||
c.Assert(err, check.IsNil)
|
||||
defer cmd.Process.Kill()
|
||||
|
||||
// wait for traefik
|
||||
err = try.GetRequest("http://127.0.0.1:8080/api/providers", 10*time.Second, try.BodyContains("127.0.0.1"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
//Add client self-signed cert
|
||||
roots := x509.NewCertPool()
|
||||
certContent, err := ioutil.ReadFile("./resources/tls/local.cert")
|
||||
roots.AppendCertsFromPEM(certContent)
|
||||
gorillawebsocket.DefaultDialer.TLSClientConfig = &tls.Config{
|
||||
RootCAs: roots,
|
||||
}
|
||||
conn, _, err := gorillawebsocket.DefaultDialer.Dial("wss://127.0.0.1:8000/echo", nil)
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
err = conn.WriteMessage(gorillawebsocket.TextMessage, []byte("OK"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
_, msg, err := conn.ReadMessage()
|
||||
c.Assert(err, checker.IsNil)
|
||||
c.Assert(string(msg), checker.Equals, "OK")
|
||||
}
|
||||
|
||||
@@ -3,6 +3,7 @@ package middlewares
|
||||
import (
|
||||
"compress/gzip"
|
||||
"net/http"
|
||||
"strings"
|
||||
|
||||
"github.com/NYTimes/gziphandler"
|
||||
"github.com/containous/traefik/log"
|
||||
@@ -13,7 +14,12 @@ type Compress struct{}
|
||||
|
||||
// ServerHTTP is a function used by Negroni
|
||||
func (c *Compress) ServeHTTP(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
|
||||
gzipHandler(next).ServeHTTP(rw, r)
|
||||
contentType := r.Header.Get("Content-Type")
|
||||
if strings.HasPrefix(contentType, "application/grpc") {
|
||||
next.ServeHTTP(rw, r)
|
||||
} else {
|
||||
gzipHandler(next).ServeHTTP(rw, r)
|
||||
}
|
||||
}
|
||||
|
||||
func gzipHandler(h http.Handler) http.Handler {
|
||||
|
||||
@@ -16,6 +16,7 @@ import (
|
||||
const (
|
||||
acceptEncodingHeader = "Accept-Encoding"
|
||||
contentEncodingHeader = "Content-Encoding"
|
||||
contentTypeHeader = "Content-Type"
|
||||
varyHeader = "Vary"
|
||||
gzipValue = "gzip"
|
||||
)
|
||||
@@ -81,6 +82,26 @@ func TestShouldNotCompressWhenNoAcceptEncodingHeader(t *testing.T) {
|
||||
assert.EqualValues(t, rw.Body.Bytes(), fakeBody)
|
||||
}
|
||||
|
||||
func TestShouldNotCompressWhenGRPC(t *testing.T) {
|
||||
handler := &Compress{}
|
||||
|
||||
req := testhelpers.MustNewRequest(http.MethodGet, "http://localhost", nil)
|
||||
req.Header.Add(acceptEncodingHeader, gzipValue)
|
||||
req.Header.Add(contentTypeHeader, "application/grpc")
|
||||
|
||||
baseBody := generateBytes(gziphandler.DefaultMinSize)
|
||||
next := func(rw http.ResponseWriter, r *http.Request) {
|
||||
rw.Write(baseBody)
|
||||
}
|
||||
|
||||
rw := httptest.NewRecorder()
|
||||
handler.ServeHTTP(rw, req, next)
|
||||
|
||||
assert.Empty(t, rw.Header().Get(acceptEncodingHeader))
|
||||
assert.Empty(t, rw.Header().Get(contentEncodingHeader))
|
||||
assert.EqualValues(t, rw.Body.Bytes(), baseBody)
|
||||
}
|
||||
|
||||
func TestIntegrationShouldNotCompress(t *testing.T) {
|
||||
fakeCompressedBody := generateBytes(100000)
|
||||
comp := &Compress{}
|
||||
@@ -137,6 +158,31 @@ func TestIntegrationShouldNotCompress(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestShouldWriteHeaderWhenFlush(t *testing.T) {
|
||||
comp := &Compress{}
|
||||
negro := negroni.New(comp)
|
||||
negro.UseHandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
|
||||
rw.Header().Add(contentEncodingHeader, gzipValue)
|
||||
rw.Header().Add(varyHeader, acceptEncodingHeader)
|
||||
rw.WriteHeader(http.StatusUnauthorized)
|
||||
rw.(http.Flusher).Flush()
|
||||
rw.Write([]byte("short"))
|
||||
})
|
||||
ts := httptest.NewServer(negro)
|
||||
defer ts.Close()
|
||||
|
||||
req := testhelpers.MustNewRequest(http.MethodGet, ts.URL, nil)
|
||||
req.Header.Add(acceptEncodingHeader, gzipValue)
|
||||
|
||||
resp, err := http.DefaultClient.Do(req)
|
||||
require.NoError(t, err)
|
||||
|
||||
assert.Equal(t, http.StatusUnauthorized, resp.StatusCode)
|
||||
|
||||
assert.Equal(t, gzipValue, resp.Header.Get(contentEncodingHeader))
|
||||
assert.Equal(t, acceptEncodingHeader, resp.Header.Get(varyHeader))
|
||||
}
|
||||
|
||||
func TestIntegrationShouldCompress(t *testing.T) {
|
||||
fakeBody := generateBytes(100000)
|
||||
|
||||
|
||||
@@ -3,8 +3,9 @@ package middlewares
|
||||
//Middleware based on https://github.com/unrolled/secure
|
||||
|
||||
import (
|
||||
"github.com/containous/traefik/types"
|
||||
"net/http"
|
||||
|
||||
"github.com/containous/traefik/types"
|
||||
)
|
||||
|
||||
// HeaderOptions is a struct for specifying configuration options for the headers middleware.
|
||||
|
||||
@@ -3,11 +3,12 @@ package middlewares
|
||||
//Middleware tests based on https://github.com/unrolled/secure
|
||||
|
||||
import (
|
||||
"github.com/containous/traefik/testhelpers"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"testing"
|
||||
|
||||
"github.com/containous/traefik/testhelpers"
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
var myHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
|
||||
@@ -6,80 +6,70 @@ import (
|
||||
"net/http"
|
||||
|
||||
"github.com/containous/traefik/log"
|
||||
"github.com/containous/traefik/whitelist"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/urfave/negroni"
|
||||
)
|
||||
|
||||
// IPWhitelister is a middleware that provides Checks of the Requesting IP against a set of Whitelists
|
||||
type IPWhitelister struct {
|
||||
handler negroni.Handler
|
||||
whitelists []*net.IPNet
|
||||
// IPWhiteLister is a middleware that provides Checks of the Requesting IP against a set of Whitelists
|
||||
type IPWhiteLister struct {
|
||||
handler negroni.Handler
|
||||
whiteLister *whitelist.IP
|
||||
}
|
||||
|
||||
// NewIPWhitelister builds a new IPWhitelister given a list of CIDR-Strings to whitelist
|
||||
func NewIPWhitelister(whitelistStrings []string) (*IPWhitelister, error) {
|
||||
// NewIPWhitelister builds a new IPWhiteLister given a list of CIDR-Strings to whitelist
|
||||
func NewIPWhitelister(whitelistStrings []string) (*IPWhiteLister, error) {
|
||||
|
||||
if len(whitelistStrings) == 0 {
|
||||
return nil, errors.New("no whitelists provided")
|
||||
}
|
||||
|
||||
whitelister := IPWhitelister{}
|
||||
whiteLister := IPWhiteLister{}
|
||||
|
||||
for _, whitelistString := range whitelistStrings {
|
||||
_, whitelist, err := net.ParseCIDR(whitelistString)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("parsing CIDR whitelist %s: %v", whitelist, err)
|
||||
}
|
||||
whitelister.whitelists = append(whitelister.whitelists, whitelist)
|
||||
ip, err := whitelist.NewIP(whitelistStrings, false)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("parsing CIDR whitelist %s: %v", whitelistStrings, err)
|
||||
}
|
||||
whiteLister.whiteLister = ip
|
||||
|
||||
whitelister.handler = negroni.HandlerFunc(whitelister.handle)
|
||||
log.Debugf("configured %u IP whitelists: %s", len(whitelister.whitelists), whitelister.whitelists)
|
||||
whiteLister.handler = negroni.HandlerFunc(whiteLister.handle)
|
||||
log.Debugf("configured %u IP whitelists: %s", len(whitelistStrings), whitelistStrings)
|
||||
|
||||
return &whitelister, nil
|
||||
return &whiteLister, nil
|
||||
}
|
||||
|
||||
func (whitelister *IPWhitelister) handle(w http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
|
||||
remoteIP, err := ipFromRemoteAddr(r.RemoteAddr)
|
||||
func (wl *IPWhiteLister) handle(w http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
|
||||
ipAddress, _, err := net.SplitHostPort(r.RemoteAddr)
|
||||
if err != nil {
|
||||
log.Warnf("unable to parse remote-address from header: %s - rejecting", r.RemoteAddr)
|
||||
reject(w)
|
||||
return
|
||||
}
|
||||
|
||||
for _, whitelist := range whitelister.whitelists {
|
||||
if whitelist.Contains(*remoteIP) {
|
||||
log.Debugf("source-IP %s matched whitelist %s - passing", remoteIP, whitelist)
|
||||
next.ServeHTTP(w, r)
|
||||
return
|
||||
}
|
||||
allowed, ip, err := wl.whiteLister.Contains(ipAddress)
|
||||
if err != nil {
|
||||
log.Debugf("source-IP %s matched none of the whitelists - rejecting", ipAddress)
|
||||
reject(w)
|
||||
return
|
||||
}
|
||||
|
||||
log.Debugf("source-IP %s matched none of the whitelists - rejecting", remoteIP)
|
||||
if allowed {
|
||||
log.Debugf("source-IP %s matched whitelist %s - passing", ipAddress, wl.whiteLister)
|
||||
next.ServeHTTP(w, r)
|
||||
return
|
||||
}
|
||||
|
||||
log.Debugf("source-IP %s matched none of the whitelists - rejecting", ip)
|
||||
reject(w)
|
||||
}
|
||||
|
||||
func (wl *IPWhiteLister) ServeHTTP(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
|
||||
wl.handler.ServeHTTP(rw, r, next)
|
||||
}
|
||||
|
||||
func reject(w http.ResponseWriter) {
|
||||
statusCode := http.StatusForbidden
|
||||
|
||||
w.WriteHeader(statusCode)
|
||||
w.Write([]byte(http.StatusText(statusCode)))
|
||||
}
|
||||
|
||||
func ipFromRemoteAddr(addr string) (*net.IP, error) {
|
||||
ip, _, err := net.SplitHostPort(addr)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("can't extract IP/Port from address %s: %s", addr, err)
|
||||
}
|
||||
|
||||
userIP := net.ParseIP(ip)
|
||||
if userIP == nil {
|
||||
return nil, fmt.Errorf("can't parse IP from address %s", ip)
|
||||
}
|
||||
|
||||
return &userIP, nil
|
||||
}
|
||||
|
||||
func (whitelister *IPWhitelister) ServeHTTP(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
|
||||
whitelister.handler.ServeHTTP(rw, r, next)
|
||||
}
|
||||
|
||||
@@ -16,17 +16,12 @@ type StripPrefix struct {

func (s *StripPrefix) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    for _, prefix := range s.Prefixes {
        origPrefix := strings.TrimSpace(prefix)
        if origPrefix == r.URL.Path {
            r.URL.Path = "/"
            s.serveRequest(w, r, origPrefix)
            return
        }

        prefix = strings.TrimSuffix(origPrefix, "/") + "/"
        if p := strings.TrimPrefix(r.URL.Path, prefix); len(p) < len(r.URL.Path) {
            r.URL.Path = "/" + strings.TrimPrefix(p, "/")
            s.serveRequest(w, r, origPrefix)
        if strings.HasPrefix(r.URL.Path, prefix) {
            r.URL.Path = stripPrefix(r.URL.Path, prefix)
            if r.URL.RawPath != "" {
                r.URL.RawPath = stripPrefix(r.URL.RawPath, prefix)
            }
            s.serveRequest(w, r, strings.TrimSpace(prefix))
            return
        }
    }
@@ -43,3 +38,11 @@ func (s *StripPrefix) serveRequest(w http.ResponseWriter, r *http.Request, prefi
func (s *StripPrefix) SetHandler(Handler http.Handler) {
    s.Handler = Handler
}

func stripPrefix(s, prefix string) string {
    return ensureLeadingSlash(strings.TrimPrefix(s, prefix))
}

func ensureLeadingSlash(str string) string {
    return "/" + strings.TrimPrefix(str, "/")
}
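The key behavioral point in this hunk is that the prefix must be stripped from both `URL.Path` (decoded) and `URL.RawPath` (original encoding), otherwise percent-encoded characters such as `%2F` are lost when the request is forwarded. A small stand-alone illustration of that idea, assuming a hypothetical `stripOne` helper rather than the middleware itself:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// stripOne mirrors the idea above: remove the prefix from both the decoded
// Path and the encoded RawPath, keeping a leading slash. Illustrative only.
func stripOne(rawURL, prefix string) (path, rawPath string, err error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", "", err
	}
	ensure := func(s string) string { return "/" + strings.TrimPrefix(s, "/") }
	path = ensure(strings.TrimPrefix(u.Path, prefix))
	if u.RawPath != "" {
		rawPath = ensure(strings.TrimPrefix(u.RawPath, prefix))
	}
	return path, rawPath, nil
}

func main() {
	p, rp, _ := stripOne("/stat/a%2Fb", "/stat")
	fmt.Println(p, rp) // "/a/b" "/a%2Fb"
}
```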
@@ -40,6 +40,9 @@ func (s *StripPrefixRegex) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        }

        r.URL.Path = r.URL.Path[len(prefix.Path):]
        if r.URL.RawPath != "" {
            r.URL.RawPath = r.URL.RawPath[len(prefix.Path):]
        }
        r.Header.Add(ForwardedPrefixHeader, prefix.Path)
        r.RequestURI = r.URL.RequestURI()
        s.Handler.ServeHTTP(w, r)
@@ -10,13 +10,13 @@ import (
)

func TestStripPrefixRegex(t *testing.T) {

    testPrefixRegex := []string{"/a/api/", "/b/{regex}/", "/c/{category}/{id:[0-9]+}/"}

    tests := []struct {
        path string
        expectedStatusCode int
        expectedPath string
        expectedRawPath string
        expectedHeader string
    }{
        {
@@ -61,6 +61,13 @@ func TestStripPrefixRegex(t *testing.T) {
            path: "/c/api/abc/test4",
            expectedStatusCode: http.StatusNotFound,
        },
        {
            path: "/a/api/a%2Fb",
            expectedStatusCode: http.StatusOK,
            expectedPath: "a/b",
            expectedRawPath: "a%2Fb",
            expectedHeader: "/a/api/",
        },
    }

    for _, test := range tests {
@@ -68,9 +75,10 @@ func TestStripPrefixRegex(t *testing.T) {
        t.Run(test.path, func(t *testing.T) {
            t.Parallel()

            var actualPath, actualHeader string
            var actualPath, actualRawPath, actualHeader string
            handlerPath := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                actualPath = r.URL.Path
                actualRawPath = r.URL.RawPath
                actualHeader = r.Header.Get(ForwardedPrefixHeader)
            })
            handler := NewStripPrefixRegex(handlerPath, testPrefixRegex)
@@ -82,6 +90,7 @@ func TestStripPrefixRegex(t *testing.T) {

            assert.Equal(t, test.expectedStatusCode, resp.Code, "Unexpected status code.")
            assert.Equal(t, test.expectedPath, actualPath, "Unexpected path.")
            assert.Equal(t, test.expectedRawPath, actualRawPath, "Unexpected raw path.")
            assert.Equal(t, test.expectedHeader, actualHeader, "Unexpected '%s' header.", ForwardedPrefixHeader)
        })
    }
@@ -16,6 +16,7 @@ func TestStripPrefix(t *testing.T) {
        path string
        expectedStatusCode int
        expectedPath string
        expectedRawPath string
        expectedHeader string
    }{
        {
@@ -86,6 +87,23 @@ func TestStripPrefix(t *testing.T) {
            expectedPath: "/",
            expectedHeader: "/stat",
        },
        {
            desc: "prefix matching within slash boundaries",
            prefixes: []string{"/stat"},
            path: "/status",
            expectedStatusCode: http.StatusOK,
            expectedPath: "/us",
            expectedHeader: "/stat",
        },
        {
            desc: "raw path is also stripped",
            prefixes: []string{"/stat"},
            path: "/stat/a%2Fb",
            expectedStatusCode: http.StatusOK,
            expectedPath: "/a/b",
            expectedRawPath: "/a%2Fb",
            expectedHeader: "/stat",
        },
    }

    for _, test := range tests {
@@ -93,11 +111,12 @@ func TestStripPrefix(t *testing.T) {
        t.Run(test.desc, func(t *testing.T) {
            t.Parallel()

            var actualPath, actualHeader, requestURI string
            var actualPath, actualRawPath, actualHeader, requestURI string
            handler := &StripPrefix{
                Prefixes: test.prefixes,
                Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                    actualPath = r.URL.Path
                    actualRawPath = r.URL.RawPath
                    actualHeader = r.Header.Get(ForwardedPrefixHeader)
                    requestURI = r.RequestURI
                }),
@@ -110,8 +129,15 @@ func TestStripPrefix(t *testing.T) {

            assert.Equal(t, test.expectedStatusCode, resp.Code, "Unexpected status code.")
            assert.Equal(t, test.expectedPath, actualPath, "Unexpected path.")
            assert.Equal(t, test.expectedRawPath, actualRawPath, "Unexpected raw path.")
            assert.Equal(t, test.expectedHeader, actualHeader, "Unexpected '%s' header.", ForwardedPrefixHeader)
            assert.Equal(t, test.expectedPath, requestURI, "Unexpected request URI.")

            expectedURI := test.expectedPath
            if test.expectedRawPath != "" {
                // go HTTP uses the raw path when existent in the RequestURI
                expectedURI = test.expectedRawPath
            }
            assert.Equal(t, expectedURI, requestURI, "Unexpected request URI.")
        })
    }
}
@@ -95,3 +95,4 @@ pages:
  - 'Clustering/HA': 'user-guide/cluster.md'
  - 'gRPC Example': 'user-guide/grpc.md'
  - Benchmarks: benchmarks.md
  - 'Archive': 'archive.md'
@@ -77,9 +77,14 @@ func (a nodeSorter) Less(i int, j int) bool {
    return lentr.Service.Port < rentr.Service.Port
}

func getChangedServiceKeys(currState map[string]Service, prevState map[string]Service) ([]string, []string) {
    currKeySet := fun.Set(fun.Keys(currState).([]string)).(map[string]bool)
    prevKeySet := fun.Set(fun.Keys(prevState).([]string)).(map[string]bool)
func hasChanged(current map[string]Service, previous map[string]Service) bool {
    addedServiceKeys, removedServiceKeys := getChangedServiceKeys(current, previous)
    return len(removedServiceKeys) > 0 || len(addedServiceKeys) > 0 || hasNodeOrTagsChanged(current, previous)
}

func getChangedServiceKeys(current map[string]Service, previous map[string]Service) ([]string, []string) {
    currKeySet := fun.Set(fun.Keys(current).([]string)).(map[string]bool)
    prevKeySet := fun.Set(fun.Keys(previous).([]string)).(map[string]bool)

    addedKeys := fun.Difference(currKeySet, prevKeySet).(map[string]bool)
    removedKeys := fun.Difference(prevKeySet, currKeySet).(map[string]bool)
@@ -87,20 +92,23 @@ func getChangedServiceKeys(currState map[string]Service, prevState map[string]Se
    return fun.Keys(addedKeys).([]string), fun.Keys(removedKeys).([]string)
}

func getChangedServiceNodeKeys(currState map[string]Service, prevState map[string]Service) ([]string, []string) {
    var addedNodeKeys []string
    var removedNodeKeys []string
    for key, value := range currState {
        if prevValue, ok := prevState[key]; ok {
            addedKeys, removedKeys := getChangedHealthyKeys(value.Nodes, prevValue.Nodes)
            addedNodeKeys = append(addedKeys)
            removedNodeKeys = append(removedKeys)
func hasNodeOrTagsChanged(current map[string]Service, previous map[string]Service) bool {
    var added []string
    var removed []string
    for key, value := range current {
        if prevValue, ok := previous[key]; ok {
            addedNodesKeys, removedNodesKeys := getChangedStringKeys(value.Nodes, prevValue.Nodes)
            added = append(added, addedNodesKeys...)
            removed = append(removed, removedNodesKeys...)
            addedTagsKeys, removedTagsKeys := getChangedStringKeys(value.Tags, prevValue.Tags)
            added = append(added, addedTagsKeys...)
            removed = append(removed, removedTagsKeys...)
        }
    }
    return addedNodeKeys, removedNodeKeys
    return len(added) > 0 || len(removed) > 0
}

func getChangedHealthyKeys(currState []string, prevState []string) ([]string, []string) {
func getChangedStringKeys(currState []string, prevState []string) ([]string, []string) {
    currKeySet := fun.Set(currState).(map[string]bool)
    prevKeySet := fun.Set(prevState).(map[string]bool)

@@ -110,7 +118,7 @@ func getChangedHealthyKeys(currState []string, prevState []string) ([]string, []
    return fun.Keys(addedKeys).([]string), fun.Keys(removedKeys).([]string)
}

func (p *CatalogProvider) watchHealthState(stopCh <-chan struct{}, watchCh chan<- map[string][]string) {
func (p *CatalogProvider) watchHealthState(stopCh <-chan struct{}, watchCh chan<- map[string][]string, errorCh chan<- error) {
    health := p.client.Health()
    catalog := p.client.Catalog()

@@ -131,6 +139,7 @@ func (p *CatalogProvider) watchHealthState(stopCh <-chan struct{}, watchCh chan<
            healthyState, meta, err := health.State("passing", options)
            if err != nil {
                log.WithError(err).Error("Failed to retrieve health checks")
                errorCh <- err
                return
            }

@@ -154,6 +163,7 @@ func (p *CatalogProvider) watchHealthState(stopCh <-chan struct{}, watchCh chan<
            data, _, err := catalog.Services(&api.QueryOptions{})
            if err != nil {
                log.Errorf("Failed to list services: %s", err)
                errorCh <- err
                return
            }

@@ -161,7 +171,7 @@ func (p *CatalogProvider) watchHealthState(stopCh <-chan struct{}, watchCh chan<
            // A critical note is that the return of a blocking request is no guarantee of a change.
            // It is possible that there was an idempotent write that does not affect the result of the query.
            // Thus it is required to do extra check for changes...
            addedKeys, removedKeys := getChangedHealthyKeys(current, flashback)
            addedKeys, removedKeys := getChangedStringKeys(current, flashback)

            if len(addedKeys) > 0 {
                log.WithField("DiscoveredServices", addedKeys).Debug("Health State change detected.")
@@ -186,7 +196,7 @@ type Service struct {
    Nodes []string
}

func (p *CatalogProvider) watchCatalogServices(stopCh <-chan struct{}, watchCh chan<- map[string][]string) {
func (p *CatalogProvider) watchCatalogServices(stopCh <-chan struct{}, watchCh chan<- map[string][]string, errorCh chan<- error) {
    catalog := p.client.Catalog()

    safe.Go(func() {
@@ -205,6 +215,7 @@ func (p *CatalogProvider) watchCatalogServices(stopCh <-chan struct{}, watchCh c
            data, meta, err := catalog.Services(options)
            if err != nil {
                log.Errorf("Failed to list services: %s", err)
                errorCh <- err
                return
            }

@@ -220,6 +231,7 @@ func (p *CatalogProvider) watchCatalogServices(stopCh <-chan struct{}, watchCh c
                    nodes, _, err := catalog.Service(key, "", &api.QueryOptions{})
                    if err != nil {
                        log.Errorf("Failed to get detail of service %s: %s", key, err)
                        errorCh <- err
                        return
                    }
                    nodesID := getServiceIds(nodes)
@@ -238,12 +250,7 @@ func (p *CatalogProvider) watchCatalogServices(stopCh <-chan struct{}, watchCh c
                // A critical note is that the return of a blocking request is no guarantee of a change.
                // It is possible that there was an idempotent write that does not affect the result of the query.
                // Thus it is required to do extra check for changes...
                addedServiceKeys, removedServiceKeys := getChangedServiceKeys(current, flashback)

                addedServiceNodeKeys, removedServiceNodeKeys := getChangedServiceNodeKeys(current, flashback)

                if len(removedServiceKeys) > 0 || len(removedServiceNodeKeys) > 0 || len(addedServiceKeys) > 0 || len(addedServiceNodeKeys) > 0 {
                    log.WithField("MissingServices", removedServiceKeys).WithField("DiscoveredServices", addedServiceKeys).Debug("Catalog Services change detected.")
                if hasChanged(current, flashback) {
                    watchCh <- data
                    flashback = current
                }
@@ -251,6 +258,7 @@ func (p *CatalogProvider) watchCatalogServices(stopCh <-chan struct{}, watchCh c
        }
    })
}

func getServiceIds(services []*api.CatalogService) []string {
    var serviceIds []string
    for _, service := range services {
@@ -267,7 +275,6 @@ func (p *CatalogProvider) healthyNodes(service string) (catalogUpdate, error) {
        log.WithError(err).Errorf("Failed to fetch details of %s", service)
        return catalogUpdate{}, err
    }

    nodes := fun.Filter(func(node *api.ServiceEntry) bool {
        return p.nodeFilter(service, node)
    }, data).([]*api.ServiceEntry)
@@ -385,10 +392,6 @@ func (p *CatalogProvider) getBackendName(node *api.ServiceEntry, index int) stri
    return serviceName
}

func (p *CatalogProvider) getAttribute(name string, tags []string, defaultValue string) string {
    return p.getTag(p.getPrefixedName(name), tags, defaultValue)
}

func (p *CatalogProvider) getBasicAuth(tags []string) []string {
    list := p.getAttribute("frontend.auth.basic", tags, "")
    if list != "" {
@@ -397,6 +400,29 @@ func (p *CatalogProvider) getBasicAuth(tags []string) []string {
    return []string{}
}

func (p *CatalogProvider) getSticky(tags []string) string {
    stickyTag := p.getTag(types.LabelBackendLoadbalancerSticky, tags, "")
    if len(stickyTag) > 0 {
        log.Warnf("Deprecated configuration found: %s. Please use %s.", types.LabelBackendLoadbalancerSticky, types.LabelBackendLoadbalancerStickiness)
    } else {
        stickyTag = "false"
    }
    return stickyTag
}

func (p *CatalogProvider) hasStickinessLabel(tags []string) bool {
    stickinessTag := p.getTag(types.LabelBackendLoadbalancerStickiness, tags, "")
    return len(stickinessTag) > 0 && strings.EqualFold(strings.TrimSpace(stickinessTag), "true")
}

func (p *CatalogProvider) getStickinessCookieName(tags []string) string {
    return p.getTag(types.LabelBackendLoadbalancerStickinessCookieName, tags, "")
}

func (p *CatalogProvider) getAttribute(name string, tags []string, defaultValue string) string {
    return p.getTag(p.getPrefixedName(name), tags, defaultValue)
}

func (p *CatalogProvider) hasTag(name string, tags []string) bool {
    // Very-very unlikely that a Consul tag would ever start with '=!='
    tag := p.getTag(name, tags, "=!=")
@@ -439,16 +465,19 @@ func (p *CatalogProvider) getConstraintTags(tags []string) []string {

func (p *CatalogProvider) buildConfig(catalog []catalogUpdate) *types.Configuration {
    var FuncMap = template.FuncMap{
        "getBackend": p.getBackend,
        "getFrontendRule": p.getFrontendRule,
        "getBackendName": p.getBackendName,
        "getBackendAddress": p.getBackendAddress,
        "getAttribute": p.getAttribute,
        "getBasicAuth": p.getBasicAuth,
        "getTag": p.getTag,
        "hasTag": p.hasTag,
        "getEntryPoints": p.getEntryPoints,
        "hasMaxconnAttributes": p.hasMaxconnAttributes,
        "getBackend": p.getBackend,
        "getFrontendRule": p.getFrontendRule,
        "getBackendName": p.getBackendName,
        "getBackendAddress": p.getBackendAddress,
        "getBasicAuth": p.getBasicAuth,
        "getSticky": p.getSticky,
        "hasStickinessLabel": p.hasStickinessLabel,
        "getStickinessCookieName": p.getStickinessCookieName,
        "getAttribute": p.getAttribute,
        "getTag": p.getTag,
        "hasTag": p.hasTag,
        "getEntryPoints": p.getEntryPoints,
        "hasMaxconnAttributes": p.hasMaxconnAttributes,
    }

    allNodes := []*api.ServiceEntry{}
@@ -512,9 +541,10 @@ func (p *CatalogProvider) getNodes(index map[string][]string) ([]catalogUpdate,
func (p *CatalogProvider) watch(configurationChan chan<- types.ConfigMessage, stop chan bool) error {
    stopCh := make(chan struct{})
    watchCh := make(chan map[string][]string)
    errorCh := make(chan error)

    p.watchHealthState(stopCh, watchCh)
    p.watchCatalogServices(stopCh, watchCh)
    p.watchHealthState(stopCh, watchCh, errorCh)
    p.watchCatalogServices(stopCh, watchCh, errorCh)

    defer close(stopCh)
    defer close(watchCh)
@@ -537,6 +567,8 @@ func (p *CatalogProvider) watch(configurationChan chan<- types.ConfigMessage, st
                ProviderName: "consul_catalog",
                Configuration: configuration,
            }
        case err := <-errorCh:
            return err
        }
    }
}
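The refactor above folds the separate key comparisons into a single hasChanged check built on set differences of service keys, node names, and tags. A plain-map sketch of that set-difference idea is shown below; it is illustrative only, since the actual code uses the BurntSushi/ty/fun helpers seen above.

```go
package main

import "fmt"

// changedKeys returns the elements added to and removed from a string set,
// the same set-difference idea getChangedStringKeys expresses with fun.Set
// and fun.Difference. Plain-map sketch, not the Traefik code itself.
func changedKeys(current, previous []string) (added, removed []string) {
	curr := make(map[string]bool, len(current))
	prev := make(map[string]bool, len(previous))
	for _, k := range current {
		curr[k] = true
	}
	for _, k := range previous {
		prev[k] = true
	}
	for k := range curr {
		if !prev[k] {
			added = append(added, k)
		}
	}
	for k := range prev {
		if !curr[k] {
			removed = append(removed, k)
		}
	}
	return added, removed
}

func main() {
	added, removed := changedKeys([]string{"carotte"}, []string{"chou"})
	fmt.Println(added, removed) // [carotte] [chou]
}
```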
@@ -1,7 +1,6 @@
|
||||
package consul
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"sort"
|
||||
"testing"
|
||||
"text/template"
|
||||
@@ -9,6 +8,7 @@ import (
|
||||
"github.com/BurntSushi/ty/fun"
|
||||
"github.com/containous/traefik/types"
|
||||
"github.com/hashicorp/consul/api"
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
func TestConsulCatalogGetFrontendRule(t *testing.T) {
|
||||
@@ -20,11 +20,13 @@ func TestConsulCatalogGetFrontendRule(t *testing.T) {
|
||||
}
|
||||
provider.setupFrontEndTemplate()
|
||||
|
||||
services := []struct {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
service serviceUpdate
|
||||
expected string
|
||||
}{
|
||||
{
|
||||
desc: "Should return default host foo.localhost",
|
||||
service: serviceUpdate{
|
||||
ServiceName: "foo",
|
||||
Attributes: []string{},
|
||||
@@ -32,6 +34,7 @@ func TestConsulCatalogGetFrontendRule(t *testing.T) {
|
||||
expected: "Host:foo.localhost",
|
||||
},
|
||||
{
|
||||
desc: "Should return host *.example.com",
|
||||
service: serviceUpdate{
|
||||
ServiceName: "foo",
|
||||
Attributes: []string{
|
||||
@@ -41,6 +44,7 @@ func TestConsulCatalogGetFrontendRule(t *testing.T) {
|
||||
expected: "Host:*.example.com",
|
||||
},
|
||||
{
|
||||
desc: "Should return host foo.example.com",
|
||||
service: serviceUpdate{
|
||||
ServiceName: "foo",
|
||||
Attributes: []string{
|
||||
@@ -50,6 +54,7 @@ func TestConsulCatalogGetFrontendRule(t *testing.T) {
|
||||
expected: "Host:foo.example.com",
|
||||
},
|
||||
{
|
||||
desc: "Should return path prefix /bar",
|
||||
service: serviceUpdate{
|
||||
ServiceName: "foo",
|
||||
Attributes: []string{
|
||||
@@ -61,11 +66,14 @@ func TestConsulCatalogGetFrontendRule(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
for _, e := range services {
|
||||
actual := provider.getFrontendRule(e.service)
|
||||
if actual != e.expected {
|
||||
t.Fatalf("expected %s, got %s", e.expected, actual)
|
||||
}
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
actual := provider.getFrontendRule(test.service)
|
||||
assert.Equal(t, test.expected, actual)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -75,13 +83,15 @@ func TestConsulCatalogGetTag(t *testing.T) {
|
||||
Prefix: "traefik",
|
||||
}
|
||||
|
||||
services := []struct {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
tags []string
|
||||
key string
|
||||
defaultValue string
|
||||
expected string
|
||||
}{
|
||||
{
|
||||
desc: "Should return value of foo.bar key",
|
||||
tags: []string{
|
||||
"foo.bar=random",
|
||||
"traefik.backend.weight=42",
|
||||
@@ -93,21 +103,17 @@ func TestConsulCatalogGetTag(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
actual := provider.hasTag("management", []string{"management"})
|
||||
if !actual {
|
||||
t.Fatalf("expected %v, got %v", true, actual)
|
||||
}
|
||||
assert.Equal(t, true, provider.hasTag("management", []string{"management"}))
|
||||
assert.Equal(t, true, provider.hasTag("management", []string{"management=yes"}))
|
||||
|
||||
actual = provider.hasTag("management", []string{"management=yes"})
|
||||
if !actual {
|
||||
t.Fatalf("expected %v, got %v", true, actual)
|
||||
}
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
for _, e := range services {
|
||||
actual := provider.getTag(e.key, e.tags, e.defaultValue)
|
||||
if actual != e.expected {
|
||||
t.Fatalf("expected %s, got %s", e.expected, actual)
|
||||
}
|
||||
actual := provider.getTag(test.key, test.tags, test.defaultValue)
|
||||
assert.Equal(t, test.expected, actual)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -117,13 +123,15 @@ func TestConsulCatalogGetAttribute(t *testing.T) {
|
||||
Prefix: "traefik",
|
||||
}
|
||||
|
||||
services := []struct {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
tags []string
|
||||
key string
|
||||
defaultValue string
|
||||
expected string
|
||||
}{
|
||||
{
|
||||
desc: "Should return tag value 42",
|
||||
tags: []string{
|
||||
"foo.bar=ramdom",
|
||||
"traefik.backend.weight=42",
|
||||
@@ -133,6 +141,7 @@ func TestConsulCatalogGetAttribute(t *testing.T) {
|
||||
expected: "42",
|
||||
},
|
||||
{
|
||||
desc: "Should return tag default value 0",
|
||||
tags: []string{
|
||||
"foo.bar=ramdom",
|
||||
"traefik.backend.wei=42",
|
||||
@@ -143,17 +152,16 @@ func TestConsulCatalogGetAttribute(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
expected := provider.Prefix + ".foo"
|
||||
actual := provider.getPrefixedName("foo")
|
||||
if actual != expected {
|
||||
t.Fatalf("expected %s, got %s", expected, actual)
|
||||
}
|
||||
assert.Equal(t, provider.Prefix+".foo", provider.getPrefixedName("foo"))
|
||||
|
||||
for _, e := range services {
|
||||
actual := provider.getAttribute(e.key, e.tags, e.defaultValue)
|
||||
if actual != e.expected {
|
||||
t.Fatalf("expected %s, got %s", e.expected, actual)
|
||||
}
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
actual := provider.getAttribute(test.key, test.tags, test.defaultValue)
|
||||
assert.Equal(t, test.expected, actual)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -163,13 +171,15 @@ func TestConsulCatalogGetAttributeWithEmptyPrefix(t *testing.T) {
|
||||
Prefix: "",
|
||||
}
|
||||
|
||||
services := []struct {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
tags []string
|
||||
key string
|
||||
defaultValue string
|
||||
expected string
|
||||
}{
|
||||
{
|
||||
desc: "Should return tag value 42",
|
||||
tags: []string{
|
||||
"foo.bar=ramdom",
|
||||
"backend.weight=42",
|
||||
@@ -179,6 +189,7 @@ func TestConsulCatalogGetAttributeWithEmptyPrefix(t *testing.T) {
|
||||
expected: "42",
|
||||
},
|
||||
{
|
||||
desc: "Should return default value 0",
|
||||
tags: []string{
|
||||
"foo.bar=ramdom",
|
||||
"backend.wei=42",
|
||||
@@ -188,6 +199,7 @@ func TestConsulCatalogGetAttributeWithEmptyPrefix(t *testing.T) {
|
||||
expected: "0",
|
||||
},
|
||||
{
|
||||
desc: "Should return for.bar key value random",
|
||||
tags: []string{
|
||||
"foo.bar=ramdom",
|
||||
"backend.wei=42",
|
||||
@@ -198,17 +210,16 @@ func TestConsulCatalogGetAttributeWithEmptyPrefix(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
expected := "foo"
|
||||
actual := provider.getPrefixedName("foo")
|
||||
if actual != expected {
|
||||
t.Fatalf("expected %s, got %s", expected, actual)
|
||||
}
|
||||
assert.Equal(t, "foo", provider.getPrefixedName("foo"))
|
||||
|
||||
for _, e := range services {
|
||||
actual := provider.getAttribute(e.key, e.tags, e.defaultValue)
|
||||
if actual != e.expected {
|
||||
t.Fatalf("expected %s, got %s", e.expected, actual)
|
||||
}
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
actual := provider.getAttribute(test.key, test.tags, test.defaultValue)
|
||||
assert.Equal(t, test.expected, actual)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -218,11 +229,13 @@ func TestConsulCatalogGetBackendAddress(t *testing.T) {
|
||||
Prefix: "traefik",
|
||||
}
|
||||
|
||||
services := []struct {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
node *api.ServiceEntry
|
||||
expected string
|
||||
}{
|
||||
{
|
||||
desc: "Should return the address of the service",
|
||||
node: &api.ServiceEntry{
|
||||
Node: &api.Node{
|
||||
Address: "10.1.0.1",
|
||||
@@ -234,6 +247,7 @@ func TestConsulCatalogGetBackendAddress(t *testing.T) {
|
||||
expected: "10.2.0.1",
|
||||
},
|
||||
{
|
||||
desc: "Should return the address of the node",
|
||||
node: &api.ServiceEntry{
|
||||
Node: &api.Node{
|
||||
Address: "10.1.0.1",
|
||||
@@ -246,11 +260,14 @@ func TestConsulCatalogGetBackendAddress(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
for _, e := range services {
|
||||
actual := provider.getBackendAddress(e.node)
|
||||
if actual != e.expected {
|
||||
t.Fatalf("expected %s, got %s", e.expected, actual)
|
||||
}
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
actual := provider.getBackendAddress(test.node)
|
||||
assert.Equal(t, test.expected, actual)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -260,11 +277,13 @@ func TestConsulCatalogGetBackendName(t *testing.T) {
|
||||
Prefix: "traefik",
|
||||
}
|
||||
|
||||
services := []struct {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
node *api.ServiceEntry
|
||||
expected string
|
||||
}{
|
||||
{
|
||||
desc: "Should create backend name without tags",
|
||||
node: &api.ServiceEntry{
|
||||
Service: &api.AgentService{
|
||||
Service: "api",
|
||||
@@ -276,6 +295,7 @@ func TestConsulCatalogGetBackendName(t *testing.T) {
|
||||
expected: "api--10-0-0-1--80--0",
|
||||
},
|
||||
{
|
||||
desc: "Should create backend name with multiple tags",
|
||||
node: &api.ServiceEntry{
|
||||
Service: &api.AgentService{
|
||||
Service: "api",
|
||||
@@ -287,6 +307,7 @@ func TestConsulCatalogGetBackendName(t *testing.T) {
|
||||
expected: "api--10-0-0-1--80--traefik-weight-42--traefik-enable-true--1",
|
||||
},
|
||||
{
|
||||
desc: "Should create backend name with one tag",
|
||||
node: &api.ServiceEntry{
|
||||
Service: &api.AgentService{
|
||||
Service: "api",
|
||||
@@ -299,11 +320,15 @@ func TestConsulCatalogGetBackendName(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
for i, e := range services {
|
||||
actual := provider.getBackendName(e.node, i)
|
||||
if actual != e.expected {
|
||||
t.Fatalf("expected %s, got %s", e.expected, actual)
|
||||
}
|
||||
for i, test := range testCases {
|
||||
test := test
|
||||
i := i
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
actual := provider.getBackendName(test.node, i)
|
||||
assert.Equal(t, test.expected, actual)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -316,17 +341,20 @@ func TestConsulCatalogBuildConfig(t *testing.T) {
|
||||
frontEndRuleTemplate: template.New("consul catalog frontend rule"),
|
||||
}
|
||||
|
||||
cases := []struct {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
nodes []catalogUpdate
|
||||
expectedFrontends map[string]*types.Frontend
|
||||
expectedBackends map[string]*types.Backend
|
||||
}{
|
||||
{
|
||||
desc: "Should build config of nothing",
|
||||
nodes: []catalogUpdate{},
|
||||
expectedFrontends: map[string]*types.Frontend{},
|
||||
expectedBackends: map[string]*types.Backend{},
|
||||
},
|
||||
{
|
||||
desc: "Should build config with no frontend and backend",
|
||||
nodes: []catalogUpdate{
|
||||
{
|
||||
Service: &serviceUpdate{
|
||||
@@ -338,6 +366,7 @@ func TestConsulCatalogBuildConfig(t *testing.T) {
|
||||
expectedBackends: map[string]*types.Backend{},
|
||||
},
|
||||
{
|
||||
desc: "Should build config who contains one frontend and one backend",
|
||||
nodes: []catalogUpdate{
|
||||
{
|
||||
Service: &serviceUpdate{
|
||||
@@ -407,28 +436,31 @@ func TestConsulCatalogBuildConfig(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
for _, c := range cases {
|
||||
actualConfig := provider.buildConfig(c.nodes)
|
||||
if !reflect.DeepEqual(actualConfig.Backends, c.expectedBackends) {
|
||||
t.Fatalf("expected %#v, got %#v", c.expectedBackends, actualConfig.Backends)
|
||||
}
|
||||
if !reflect.DeepEqual(actualConfig.Frontends, c.expectedFrontends) {
|
||||
t.Fatalf("expected %#v, got %#v", c.expectedFrontends["frontend-test"].BasicAuth, actualConfig.Frontends["frontend-test"].BasicAuth)
|
||||
t.Fatalf("expected %#v, got %#v", c.expectedFrontends, actualConfig.Frontends)
|
||||
}
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
actualConfig := provider.buildConfig(test.nodes)
|
||||
assert.Equal(t, test.expectedBackends, actualConfig.Backends)
|
||||
assert.Equal(t, test.expectedFrontends, actualConfig.Frontends)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestConsulCatalogNodeSorter(t *testing.T) {
|
||||
cases := []struct {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
nodes []*api.ServiceEntry
|
||||
expected []*api.ServiceEntry
|
||||
}{
|
||||
{
|
||||
desc: "Should sort nothing",
|
||||
nodes: []*api.ServiceEntry{},
|
||||
expected: []*api.ServiceEntry{},
|
||||
},
|
||||
{
|
||||
desc: "Should sort by node address",
|
||||
nodes: []*api.ServiceEntry{
|
||||
{
|
||||
Service: &api.AgentService{
|
||||
@@ -457,6 +489,7 @@ func TestConsulCatalogNodeSorter(t *testing.T) {
|
||||
},
|
||||
},
|
||||
{
|
||||
desc: "Should sort by service name",
|
||||
nodes: []*api.ServiceEntry{
|
||||
{
|
||||
Service: &api.AgentService{
|
||||
@@ -551,6 +584,7 @@ func TestConsulCatalogNodeSorter(t *testing.T) {
|
||||
},
|
||||
},
|
||||
{
|
||||
desc: "Should sort by node address",
|
||||
nodes: []*api.ServiceEntry{
|
||||
{
|
||||
Service: &api.AgentService{
|
||||
@@ -602,12 +636,15 @@ func TestConsulCatalogNodeSorter(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
for _, c := range cases {
|
||||
sort.Sort(nodeSorter(c.nodes))
|
||||
actual := c.nodes
|
||||
if !reflect.DeepEqual(actual, c.expected) {
|
||||
t.Fatalf("expected %q, got %q", c.expected, actual)
|
||||
}
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
sort.Sort(nodeSorter(test.nodes))
|
||||
actual := test.nodes
|
||||
assert.Equal(t, test.expected, actual)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -622,11 +659,13 @@ func TestConsulCatalogGetChangedKeys(t *testing.T) {
|
||||
removedKeys []string
|
||||
}
|
||||
|
||||
cases := []struct {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
input Input
|
||||
output Output
|
||||
}{
|
||||
{
|
||||
desc: "Should add 0 services and removed 0",
|
||||
input: Input{
|
||||
currState: map[string]Service{
|
||||
"foo-service": {Name: "v1"},
|
||||
@@ -667,6 +706,7 @@ func TestConsulCatalogGetChangedKeys(t *testing.T) {
|
||||
},
|
||||
},
|
||||
{
|
||||
desc: "Should add 3 services and removed 0",
|
||||
input: Input{
|
||||
currState: map[string]Service{
|
||||
"foo-service": {Name: "v1"},
|
||||
@@ -704,6 +744,7 @@ func TestConsulCatalogGetChangedKeys(t *testing.T) {
|
||||
},
|
||||
},
|
||||
{
|
||||
desc: "Should add 2 services and removed 2",
|
||||
input: Input{
|
||||
currState: map[string]Service{
|
||||
"foo-service": {Name: "v1"},
|
||||
@@ -741,21 +782,20 @@ func TestConsulCatalogGetChangedKeys(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
for _, c := range cases {
|
||||
addedKeys, removedKeys := getChangedServiceKeys(c.input.currState, c.input.prevState)
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
if !reflect.DeepEqual(fun.Set(addedKeys), fun.Set(c.output.addedKeys)) {
|
||||
t.Fatalf("Added keys comparison results: got %q, want %q", addedKeys, c.output.addedKeys)
|
||||
}
|
||||
|
||||
if !reflect.DeepEqual(fun.Set(removedKeys), fun.Set(c.output.removedKeys)) {
|
||||
t.Fatalf("Removed keys comparison results: got %q, want %q", removedKeys, c.output.removedKeys)
|
||||
}
|
||||
addedKeys, removedKeys := getChangedServiceKeys(test.input.currState, test.input.prevState)
|
||||
assert.Equal(t, fun.Set(test.output.addedKeys), fun.Set(addedKeys), "Added keys comparison results: got %q, want %q", addedKeys, test.output.addedKeys)
|
||||
assert.Equal(t, fun.Set(test.output.removedKeys), fun.Set(removedKeys), "Removed keys comparison results: got %q, want %q", removedKeys, test.output.removedKeys)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestConsulCatalogFilterEnabled(t *testing.T) {
|
||||
cases := []struct {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
exposedByDefault bool
|
||||
node *api.ServiceEntry
|
||||
@@ -841,24 +881,23 @@ func TestConsulCatalogFilterEnabled(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
for _, c := range cases {
|
||||
c := c
|
||||
t.Run(c.desc, func(t *testing.T) {
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
provider := &CatalogProvider{
|
||||
Domain: "localhost",
|
||||
Prefix: "traefik",
|
||||
ExposedByDefault: c.exposedByDefault,
|
||||
}
|
||||
if provider.nodeFilter("test", c.node) != c.expected {
|
||||
t.Errorf("got unexpected filtering = %t", !c.expected)
|
||||
ExposedByDefault: test.exposedByDefault,
|
||||
}
|
||||
actual := provider.nodeFilter("test", test.node)
|
||||
assert.Equal(t, test.expected, actual)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestConsulCatalogGetBasicAuth(t *testing.T) {
|
||||
cases := []struct {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
tags []string
|
||||
expected []string
|
||||
@@ -877,17 +916,326 @@ func TestConsulCatalogGetBasicAuth(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
for _, c := range cases {
|
||||
c := c
|
||||
t.Run(c.desc, func(t *testing.T) {
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
provider := &CatalogProvider{
|
||||
Prefix: "traefik",
|
||||
}
|
||||
actual := provider.getBasicAuth(c.tags)
|
||||
if !reflect.DeepEqual(actual, c.expected) {
|
||||
t.Errorf("actual %q, expected %q", actual, c.expected)
|
||||
}
|
||||
actual := provider.getBasicAuth(test.tags)
|
||||
assert.Equal(t, test.expected, actual)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestConsulCatalogHasStickinessLabel(t *testing.T) {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
tags []string
|
||||
expected bool
|
||||
}{
|
||||
{
|
||||
desc: "label missing",
|
||||
tags: []string{},
|
||||
expected: false,
|
||||
},
|
||||
{
|
||||
desc: "stickiness=true",
|
||||
tags: []string{
|
||||
types.LabelBackendLoadbalancerStickiness + "=true",
|
||||
},
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
desc: "stickiness=false",
|
||||
tags: []string{
|
||||
types.LabelBackendLoadbalancerStickiness + "=false",
|
||||
},
|
||||
expected: false,
|
||||
},
|
||||
}
|
||||
|
||||
provider := &CatalogProvider{
|
||||
Prefix: "traefik",
|
||||
}
|
||||
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
actual := provider.hasStickinessLabel(test.tags)
|
||||
assert.Equal(t, test.expected, actual)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestConsulCatalogGetChangedStringKeys(t *testing.T) {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
current []string
|
||||
previous []string
|
||||
expectedAdded []string
|
||||
expectedRemoved []string
|
||||
}{
|
||||
{
|
||||
desc: "1 element added, 0 removed",
|
||||
current: []string{"chou"},
|
||||
previous: []string{},
|
||||
expectedAdded: []string{"chou"},
|
||||
expectedRemoved: []string{},
|
||||
}, {
|
||||
desc: "0 element added, 0 removed",
|
||||
current: []string{"chou"},
|
||||
previous: []string{"chou"},
|
||||
expectedAdded: []string{},
|
||||
expectedRemoved: []string{},
|
||||
},
|
||||
{
|
||||
desc: "0 element added, 1 removed",
|
||||
current: []string{},
|
||||
previous: []string{"chou"},
|
||||
expectedAdded: []string{},
|
||||
expectedRemoved: []string{"chou"},
|
||||
},
|
||||
{
|
||||
desc: "1 element added, 1 removed",
|
||||
current: []string{"carotte"},
|
||||
previous: []string{"chou"},
|
||||
expectedAdded: []string{"carotte"},
|
||||
expectedRemoved: []string{"chou"},
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
actualAdded, actualRemoved := getChangedStringKeys(test.current, test.previous)
|
||||
assert.Equal(t, test.expectedAdded, actualAdded)
|
||||
assert.Equal(t, test.expectedRemoved, actualRemoved)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestConsulCatalogHasNodeOrTagschanged(t *testing.T) {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
current map[string]Service
|
||||
previous map[string]Service
|
||||
expected bool
|
||||
}{
|
||||
{
|
||||
desc: "Change detected due to change of nodes",
|
||||
current: map[string]Service{
|
||||
"foo-service": {
|
||||
Name: "foo",
|
||||
Nodes: []string{"node1"},
|
||||
Tags: []string{},
|
||||
},
|
||||
},
|
||||
previous: map[string]Service{
|
||||
"foo-service": {
|
||||
Name: "foo",
|
||||
Nodes: []string{"node2"},
|
||||
Tags: []string{},
|
||||
},
|
||||
},
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
desc: "No change missing current service",
|
||||
current: make(map[string]Service),
|
||||
previous: map[string]Service{
|
||||
"foo-service": {
|
||||
Name: "foo",
|
||||
Nodes: []string{"node1"},
|
||||
Tags: []string{},
|
||||
},
|
||||
},
|
||||
expected: false,
|
||||
},
|
||||
{
|
||||
desc: "No change on nodes",
|
||||
current: map[string]Service{
|
||||
"foo-service": {
|
||||
Name: "foo",
|
||||
Nodes: []string{"node1"},
|
||||
Tags: []string{},
|
||||
},
|
||||
},
|
||||
previous: map[string]Service{
|
||||
"foo-service": {
|
||||
Name: "foo",
|
||||
Nodes: []string{"node1"},
|
||||
Tags: []string{},
|
||||
},
|
||||
},
|
||||
expected: false,
|
||||
},
|
||||
{
|
||||
desc: "No change on nodes and tags",
|
||||
current: map[string]Service{
|
||||
"foo-service": {
|
||||
Name: "foo",
|
||||
Nodes: []string{"node1"},
|
||||
Tags: []string{"foo=bar"},
|
||||
},
|
||||
},
|
||||
previous: map[string]Service{
|
||||
"foo-service": {
|
||||
Name: "foo",
|
||||
Nodes: []string{"node1"},
|
||||
Tags: []string{"foo=bar"},
|
||||
},
|
||||
},
|
||||
expected: false,
|
||||
},
|
||||
{
|
||||
desc: "Change detected con tags",
|
||||
current: map[string]Service{
|
||||
"foo-service": {
|
||||
Name: "foo",
|
||||
Nodes: []string{"node1"},
|
||||
Tags: []string{"foo=bar"},
|
||||
},
|
||||
},
|
||||
previous: map[string]Service{
|
||||
"foo-service": {
|
||||
Name: "foo",
|
||||
Nodes: []string{"node1"},
|
||||
Tags: []string{"foo"},
|
||||
},
|
||||
},
|
||||
expected: true,
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
actual := hasNodeOrTagsChanged(test.current, test.previous)
|
||||
assert.Equal(t, test.expected, actual)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestConsulCatalogHasChanged(t *testing.T) {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
current map[string]Service
|
||||
previous map[string]Service
|
||||
expected bool
|
||||
}{
|
||||
{
|
||||
desc: "Change detected due to change new service",
|
||||
current: map[string]Service{
|
||||
"foo-service": {
|
||||
Name: "foo",
|
||||
Nodes: []string{"node1"},
|
||||
Tags: []string{},
|
||||
},
|
||||
},
|
||||
previous: make(map[string]Service),
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
desc: "Change detected due to change service removed",
|
||||
current: make(map[string]Service),
|
||||
previous: map[string]Service{
|
||||
"foo-service": {
|
||||
Name: "foo",
|
||||
Nodes: []string{"node1"},
|
||||
Tags: []string{},
|
||||
},
|
||||
},
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
desc: "Change detected due to change of nodes",
|
||||
current: map[string]Service{
|
||||
"foo-service": {
|
||||
Name: "foo",
|
||||
Nodes: []string{"node1"},
|
||||
Tags: []string{},
|
||||
},
|
||||
},
|
||||
previous: map[string]Service{
|
||||
"foo-service": {
|
||||
Name: "foo",
|
||||
Nodes: []string{"node2"},
|
||||
Tags: []string{},
|
||||
},
|
||||
},
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
desc: "No change on nodes",
|
||||
current: map[string]Service{
|
||||
"foo-service": {
|
||||
Name: "foo",
|
||||
Nodes: []string{"node1"},
|
||||
Tags: []string{},
|
||||
},
|
||||
},
|
||||
previous: map[string]Service{
|
||||
"foo-service": {
|
||||
Name: "foo",
|
||||
Nodes: []string{"node1"},
|
||||
Tags: []string{},
|
||||
},
|
||||
},
|
||||
expected: false,
|
||||
},
|
||||
{
|
||||
desc: "No change on nodes and tags",
|
||||
current: map[string]Service{
|
||||
"foo-service": {
|
||||
Name: "foo",
|
||||
Nodes: []string{"node1"},
|
||||
Tags: []string{"foo=bar"},
|
||||
},
|
||||
},
|
||||
previous: map[string]Service{
|
||||
"foo-service": {
|
||||
Name: "foo",
|
||||
Nodes: []string{"node1"},
|
||||
Tags: []string{"foo=bar"},
|
||||
},
|
||||
},
|
||||
expected: false,
|
||||
},
|
||||
{
|
||||
desc: "Change detected on tags",
|
||||
current: map[string]Service{
|
||||
"foo-service": {
|
||||
Name: "foo",
|
||||
Nodes: []string{"node1"},
|
||||
Tags: []string{"foo=bar"},
|
||||
},
|
||||
},
|
||||
previous: map[string]Service{
|
||||
"foo-service": {
|
||||
Name: "foo",
|
||||
Nodes: []string{"node1"},
|
||||
Tags: []string{"foo"},
|
||||
},
|
||||
},
|
||||
expected: true,
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
actual := hasChanged(test.current, test.previous)
|
||||
assert.Equal(t, test.expected, actual)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -25,9 +25,11 @@ import (
    eventtypes "github.com/docker/docker/api/types/events"
    "github.com/docker/docker/api/types/filters"
    swarmtypes "github.com/docker/docker/api/types/swarm"
    "github.com/docker/docker/api/types/versions"
    "github.com/docker/docker/client"
    "github.com/docker/go-connections/nat"
    "github.com/docker/go-connections/sockets"
    "github.com/pkg/errors"
)

const (
@@ -276,6 +278,8 @@ func (p *Provider) loadDockerConfig(containersInspected []dockerData) *types.Con
        "getMaxConnAmount": p.getMaxConnAmount,
        "getMaxConnExtractorFunc": p.getMaxConnExtractorFunc,
        "getSticky": p.getSticky,
        "getStickinessCookieName": p.getStickinessCookieName,
        "hasStickinessLabel": p.hasStickinessLabel,
        "getIsBackendLBSwarm": p.getIsBackendLBSwarm,
        "hasServices": p.hasServices,
        "getServiceNames": p.getServiceNames,
@@ -298,9 +302,15 @@ func (p *Provider) loadDockerConfig(containersInspected []dockerData) *types.Con
    frontends := map[string][]dockerData{}
    backends := map[string]dockerData{}
    servers := map[string][]dockerData{}
    for _, container := range filteredContainers {
        frontendName := p.getFrontendName(container)
        frontends[frontendName] = append(frontends[frontendName], container)
    serviceNames := make(map[string]struct{})
    for idx, container := range filteredContainers {
        if _, exists := serviceNames[container.ServiceName]; !exists {
            frontendName := p.getFrontendName(container, idx)
            frontends[frontendName] = append(frontends[frontendName], container)
            if len(container.ServiceName) > 0 {
                serviceNames[container.ServiceName] = struct{}{}
            }
        }
        backendName := p.getBackend(container)
        backends[backendName] = container
        servers[backendName] = append(servers[backendName], container)
@@ -328,10 +338,8 @@ func (p *Provider) loadDockerConfig(containersInspected []dockerData) *types.Con
}

func (p *Provider) hasCircuitBreakerLabel(container dockerData) bool {
    if _, err := getLabel(container, types.LabelBackendCircuitbreakerExpression); err != nil {
        return false
    }
    return true
    _, err := getLabel(container, types.LabelBackendCircuitbreakerExpression)
    return err == nil
}

// Regexp used to extract the name of the service and the name of the property for this service
@@ -425,9 +433,9 @@ func (p *Provider) getServicePriority(container dockerData, serviceName string)
// Extract backend from labels for a given service and a given docker container
func (p *Provider) getServiceBackend(container dockerData, serviceName string) string {
    if value, ok := getContainerServiceLabel(container, serviceName, "frontend.backend"); ok {
        return value
        return container.ServiceName + "-" + value
    }
    return p.getBackend(container) + "-" + provider.Normalize(serviceName)
    return strings.TrimPrefix(container.ServiceName, "/") + "-" + p.getBackend(container) + "-" + provider.Normalize(serviceName)
}

// Extract rule from labels for a given service and a given docker container
@@ -466,10 +474,10 @@ func (p *Provider) getServiceProtocol(container dockerData, serviceName string)
func (p *Provider) hasLoadBalancerLabel(container dockerData) bool {
    _, errMethod := getLabel(container, types.LabelBackendLoadbalancerMethod)
    _, errSticky := getLabel(container, types.LabelBackendLoadbalancerSticky)
    if errMethod != nil && errSticky != nil {
        return false
    }
    return true
    _, errStickiness := getLabel(container, types.LabelBackendLoadbalancerStickiness)
    _, errCookieName := getLabel(container, types.LabelBackendLoadbalancerStickinessCookieName)

    return errMethod == nil || errSticky == nil || errStickiness == nil || errCookieName == nil
}

func (p *Provider) hasMaxConnLabels(container dockerData) bool {
@@ -516,9 +524,16 @@ func (p *Provider) getMaxConnExtractorFunc(container dockerData) string {
}

func (p *Provider) containerFilter(container dockerData) bool {
    _, err := strconv.Atoi(container.Labels[types.LabelPort])
    var err error
    portLabel := "traefik.port label"
    if p.hasServices(container) {
        portLabel = "traefik.<serviceName>.port or " + portLabel + "s"
        err = checkServiceLabelPort(container)
    } else {
        _, err = strconv.Atoi(container.Labels[types.LabelPort])
    }
    if len(container.NetworkSettings.Ports) == 0 && err != nil {
        log.Debugf("Filtering container without port and no traefik.port label %s", container.Name)
        log.Debugf("Filtering container without port and no %s %s : %s", portLabel, container.Name, err.Error())
        return false
    }

@@ -548,9 +563,46 @@ func (p *Provider) containerFilter(container dockerData) bool {
    return true
}

func (p *Provider) getFrontendName(container dockerData) string {
// checkServiceLabelPort checks if all service names have a port service label
// or if port container label exists for default value
func checkServiceLabelPort(container dockerData) error {
    // If port container label is present, there is a default values for all ports, use it for the check
    _, err := strconv.Atoi(container.Labels[types.LabelPort])
    if err != nil {
        serviceLabelPorts := make(map[string]struct{})
        serviceLabels := make(map[string]struct{})
        portRegexp := regexp.MustCompile(`^traefik\.(?P<service_name>.+?)\.port$`)
        for label := range container.Labels {
            // Get all port service labels
            portLabel := portRegexp.FindStringSubmatch(label)
            if portLabel != nil && len(portLabel) > 0 {
                serviceLabelPorts[portLabel[0]] = struct{}{}
            }
            // Get only one instance of all service names from service labels
            servicesLabelNames := servicesPropertiesRegexp.FindStringSubmatch(label)
            if servicesLabelNames != nil && len(servicesLabelNames) > 0 {
                serviceLabels[strings.Split(servicesLabelNames[0], ".")[1]] = struct{}{}
            }
        }
        // If the number of service labels is different than the number of port services label
        // there is an error
        if len(serviceLabels) == len(serviceLabelPorts) {
            for labelPort := range serviceLabelPorts {
                _, err = strconv.Atoi(container.Labels[labelPort])
                if err != nil {
                    break
                }
            }
        } else {
            err = errors.New("Port service labels missing, please use traefik.port as default value or define all port service labels")
        }
    }
    return err
}

func (p *Provider) getFrontendName(container dockerData, idx int) string {
    // Replace '.' with '-' in quoted keys because of this issue https://github.com/BurntSushi/toml/issues/78
    return provider.Normalize(p.getFrontendRule(container))
    return provider.Normalize(p.getFrontendRule(container) + "-" + strconv.Itoa(idx))
}

// GetFrontendRule returns the frontend rule for the specified container, using
@@ -560,10 +612,10 @@ func (p *Provider) getFrontendRule(container dockerData) string {
        return label
    }
    if labels, err := getLabels(container, []string{labelDockerComposeProject, labelDockerComposeService}); err == nil {
        return "Host:" + p.getSubDomain(labels[labelDockerComposeService]+"."+labels[labelDockerComposeProject]) + "." + p.Domain
        return "Host:" + getSubDomain(labels[labelDockerComposeService]+"."+labels[labelDockerComposeProject]) + "." + p.Domain
    }
    if len(p.Domain) > 0 {
        return "Host:" + p.getSubDomain(container.ServiceName) + "." + p.Domain
        return "Host:" + getSubDomain(container.ServiceName) + "." + p.Domain
    }
    return ""
}
@@ -593,10 +645,25 @@ func (p *Provider) getIPAddress(container dockerData) string {

    // If net==host, quick n' dirty, we return 127.0.0.1
    // This will work locally, but will fail with swarm.
    if "host" == container.NetworkSettings.NetworkMode {
    if container.NetworkSettings.NetworkMode.IsHost() {
        return "127.0.0.1"
    }

    if container.NetworkSettings.NetworkMode.IsContainer() {
        dockerClient, err := p.createClient()
        if err != nil {
            log.Warnf("Unable to get IP address for container %s, error: %s", container.Name, err)
            return ""
        }
        ctx := context.Background()
        containerInspected, err := dockerClient.ContainerInspect(ctx, container.NetworkSettings.NetworkMode.ConnectedContainer())
        if err != nil {
            log.Warnf("Unable to get IP address for container %s : Failed to inspect container ID %s, error: %s", container.Name, container.NetworkSettings.NetworkMode.ConnectedContainer(), err)
            return ""
        }
        return p.getIPAddress(parseContainer(containerInspected))
    }

    if p.UseBindPortIP {
        port := p.getPort(container)
        for netport, portBindings := range container.NetworkSettings.Ports {
@@ -645,13 +712,28 @@ func (p *Provider) getWeight(container dockerData) string {
    return "0"
}

func (p *Provider) hasStickinessLabel(container dockerData) bool {
    labelStickiness, errStickiness := getLabel(container, types.LabelBackendLoadbalancerStickiness)
    return errStickiness == nil && len(labelStickiness) > 0 && strings.EqualFold(strings.TrimSpace(labelStickiness), "true")
}

func (p *Provider) getSticky(container dockerData) string {
    if label, err := getLabel(container, types.LabelBackendLoadbalancerSticky); err == nil {
        if len(label) > 0 {
            log.Warnf("Deprecated configuration found: %s. Please use %s.", types.LabelBackendLoadbalancerSticky, types.LabelBackendLoadbalancerStickiness)
        }
        return label
    }
    return "false"
}

func (p *Provider) getStickinessCookieName(container dockerData) string {
    if label, err := getLabel(container, types.LabelBackendLoadbalancerStickinessCookieName); err == nil {
        return label
    }
    return ""
}

func (p *Provider) getIsBackendLBSwarm(container dockerData) string {
    if label, err := getLabel(container, labelBackendLoadbalancerSwarm); err == nil {
        return label
@@ -753,8 +835,12 @@ func listContainers(ctx context.Context, dockerClient client.ContainerAPIClient)
        if err != nil {
            log.Warnf("Failed to inspect container %s, error: %s", container.ID, err)
        } else {
            dockerData := parseContainer(containerInspected)
            containersInspected = append(containersInspected, dockerData)
            // This condition is here to avoid to have empty IP https://github.com/containous/traefik/issues/2459
            // We register only container which are running
            if containerInspected.ContainerJSONBase != nil && containerInspected.ContainerJSONBase.State != nil && containerInspected.ContainerJSONBase.State.Running {
                dockerData := parseContainer(containerInspected)
                containersInspected = append(containersInspected, dockerData)
            }
        }
    }
    return containersInspected, nil
@@ -803,7 +889,7 @@ func parseContainer(container dockertypes.ContainerJSON) dockerData {
}

// Escape beginning slash "/", convert all others to dash "-", and convert underscores "_" to dash "-"
func (p *Provider) getSubDomain(name string) string {
func getSubDomain(name string) string {
    return strings.Replace(strings.Replace(strings.TrimPrefix(name, "/"), "/", "-", -1), "_", "-", -1)
}

@@ -812,8 +898,16 @@ func (p *Provider) listServices(ctx context.Context, dockerClient client.APIClie
    if err != nil {
        return []dockerData{}, err
    }

    serverVersion, err := dockerClient.ServerVersion(ctx)

    networkListArgs := filters.NewArgs()
    networkListArgs.Add("driver", "overlay")
    // https://docs.docker.com/engine/api/v1.29/#tag/Network (Docker 17.06)
    if versions.GreaterThanOrEqualTo(serverVersion.APIVersion, "1.29") {
        networkListArgs.Add("scope", "swarm")
    } else {
        networkListArgs.Add("driver", "overlay")
    }

    networkList, err := dockerClient.NetworkList(ctx, dockertypes.NetworkListOptions{Filters: networkListArgs})
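The new checkServiceLabelPort helper decides whether a container with service labels has a usable port: either a global traefik.port label or one traefik.&lt;serviceName&gt;.port label per service. The snippet below exercises the same regular expression in isolation (the pattern is copied from the hunk above; the surrounding main function is just a throwaway harness, not provider code):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Same pattern as checkServiceLabelPort: captures the service name from
	// labels of the form "traefik.<serviceName>.port". A nil result means the
	// label is not a per-service port label.
	portRegexp := regexp.MustCompile(`^traefik\.(?P<service_name>.+?)\.port$`)
	for _, label := range []string{"traefik.web.port", "traefik.port", "traefik.web.frontend.rule"} {
		fmt.Println(label, "->", portRegexp.FindStringSubmatch(label))
	}
	// traefik.web.port -> [traefik.web.port web]
	// traefik.port -> []
	// traefik.web.frontend.rule -> []
}
```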
@@ -10,6 +10,7 @@ import (
|
||||
docker "github.com/docker/docker/api/types"
|
||||
"github.com/docker/docker/api/types/container"
|
||||
"github.com/docker/go-connections/nat"
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
func TestDockerGetFrontendName(t *testing.T) {
|
||||
@@ -19,38 +20,38 @@ func TestDockerGetFrontendName(t *testing.T) {
|
||||
}{
|
||||
{
|
||||
container: containerJSON(name("foo")),
|
||||
expected: "Host-foo-docker-localhost",
|
||||
expected: "Host-foo-docker-localhost-0",
|
||||
},
|
||||
{
|
||||
container: containerJSON(labels(map[string]string{
|
||||
types.LabelFrontendRule: "Headers:User-Agent,bat/0.1.0",
|
||||
})),
|
||||
expected: "Headers-User-Agent-bat-0-1-0",
|
||||
expected: "Headers-User-Agent-bat-0-1-0-0",
|
||||
},
|
||||
{
|
||||
container: containerJSON(labels(map[string]string{
|
||||
"com.docker.compose.project": "foo",
|
||||
"com.docker.compose.service": "bar",
|
||||
})),
|
||||
expected: "Host-bar-foo-docker-localhost",
|
||||
expected: "Host-bar-foo-docker-localhost-0",
|
||||
},
|
||||
{
|
||||
container: containerJSON(labels(map[string]string{
|
||||
types.LabelFrontendRule: "Host:foo.bar",
|
||||
})),
|
||||
expected: "Host-foo-bar",
|
||||
expected: "Host-foo-bar-0",
|
||||
},
|
||||
{
|
||||
container: containerJSON(labels(map[string]string{
|
||||
types.LabelFrontendRule: "Path:/test",
|
||||
})),
|
||||
expected: "Path-test",
|
||||
expected: "Path-test-0",
|
||||
},
|
||||
{
|
||||
container: containerJSON(labels(map[string]string{
|
||||
types.LabelFrontendRule: "PathPrefix:/test2",
|
||||
})),
|
||||
expected: "PathPrefix-test2",
|
||||
expected: "PathPrefix-test2-0",
|
||||
},
|
||||
}
|
||||
|
||||
@@ -62,7 +63,7 @@ func TestDockerGetFrontendName(t *testing.T) {
|
||||
provider := &Provider{
|
||||
Domain: "docker.localhost",
|
||||
}
|
||||
actual := provider.getFrontendName(dockerData)
|
||||
actual := provider.getFrontendName(dockerData, 0)
|
||||
if actual != e.expected {
|
||||
t.Errorf("expected %q, got %q", e.expected, actual)
|
||||
}
|
||||
@@ -883,13 +884,13 @@ func TestDockerLoadDockerConfig(t *testing.T) {
|
||||
),
|
||||
},
|
||||
expectedFrontends: map[string]*types.Frontend{
|
||||
"frontend-Host-test-docker-localhost": {
|
||||
"frontend-Host-test-docker-localhost-0": {
|
||||
Backend: "backend-test",
|
||||
PassHostHeader: true,
|
||||
EntryPoints: []string{},
|
||||
BasicAuth: []string{},
|
||||
Routes: map[string]types.Route{
|
||||
"route-frontend-Host-test-docker-localhost": {
|
||||
"route-frontend-Host-test-docker-localhost-0": {
|
||||
Rule: "Host:test.docker.localhost",
|
||||
},
|
||||
},
|
||||
@@ -933,24 +934,24 @@ func TestDockerLoadDockerConfig(t *testing.T) {
|
||||
),
|
||||
},
|
||||
expectedFrontends: map[string]*types.Frontend{
|
||||
"frontend-Host-test1-docker-localhost": {
|
||||
"frontend-Host-test1-docker-localhost-0": {
|
||||
Backend: "backend-foobar",
|
||||
PassHostHeader: true,
|
||||
EntryPoints: []string{"http", "https"},
|
||||
BasicAuth: []string{"test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/", "test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0"},
|
||||
Routes: map[string]types.Route{
|
||||
"route-frontend-Host-test1-docker-localhost": {
|
||||
"route-frontend-Host-test1-docker-localhost-0": {
|
||||
Rule: "Host:test1.docker.localhost",
|
||||
},
|
||||
},
|
||||
},
|
||||
"frontend-Host-test2-docker-localhost": {
|
||||
"frontend-Host-test2-docker-localhost-1": {
|
||||
Backend: "backend-foobar",
|
||||
PassHostHeader: true,
|
||||
EntryPoints: []string{},
|
||||
BasicAuth: []string{},
|
||||
Routes: map[string]types.Route{
|
||||
"route-frontend-Host-test2-docker-localhost": {
|
||||
"route-frontend-Host-test2-docker-localhost-1": {
|
||||
Rule: "Host:test2.docker.localhost",
|
||||
},
|
||||
},
|
||||
@@ -991,13 +992,13 @@ func TestDockerLoadDockerConfig(t *testing.T) {
|
||||
),
|
||||
},
|
||||
expectedFrontends: map[string]*types.Frontend{
|
||||
"frontend-Host-test1-docker-localhost": {
|
||||
"frontend-Host-test1-docker-localhost-0": {
|
||||
Backend: "backend-foobar",
|
||||
PassHostHeader: true,
|
||||
EntryPoints: []string{"http", "https"},
|
||||
BasicAuth: []string{},
|
||||
Routes: map[string]types.Route{
|
||||
"route-frontend-Host-test1-docker-localhost": {
|
||||
"route-frontend-Host-test1-docker-localhost-0": {
|
||||
Rule: "Host:test1.docker.localhost",
|
||||
},
|
||||
},
|
||||
@@ -1051,3 +1052,92 @@ func TestDockerLoadDockerConfig(t *testing.T) {
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestDockerHasStickinessLabel(t *testing.T) {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
container docker.ContainerJSON
|
||||
expected bool
|
||||
}{
|
||||
{
|
||||
desc: "no stickiness-label",
|
||||
container: containerJSON(),
|
||||
expected: false,
|
||||
},
|
||||
{
|
||||
desc: "stickiness true",
|
||||
container: containerJSON(labels(map[string]string{
|
||||
types.LabelBackendLoadbalancerStickiness: "true",
|
||||
})),
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
desc: "stickiness false",
|
||||
container: containerJSON(labels(map[string]string{
|
||||
types.LabelBackendLoadbalancerStickiness: "false",
|
||||
})),
|
||||
expected: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
dockerData := parseContainer(test.container)
|
||||
provider := &Provider{}
|
||||
actual := provider.hasStickinessLabel(dockerData)
|
||||
assert.Equal(t, actual, test.expected)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestDockerCheckPortLabels(t *testing.T) {
|
||||
testCases := []struct {
|
||||
container docker.ContainerJSON
|
||||
expectedError bool
|
||||
}{
|
||||
{
|
||||
container: containerJSON(labels(map[string]string{
|
||||
types.LabelPort: "80",
|
||||
})),
|
||||
expectedError: false,
|
||||
},
|
||||
{
|
||||
container: containerJSON(labels(map[string]string{
|
||||
types.LabelPrefix + "servicename.protocol": "http",
|
||||
types.LabelPrefix + "servicename.port": "80",
|
||||
})),
|
||||
expectedError: false,
|
||||
},
|
||||
{
|
||||
container: containerJSON(labels(map[string]string{
|
||||
types.LabelPrefix + "servicename.protocol": "http",
|
||||
types.LabelPort: "80",
|
||||
})),
|
||||
expectedError: false,
|
||||
},
|
||||
{
|
||||
container: containerJSON(labels(map[string]string{
|
||||
types.LabelPrefix + "servicename.protocol": "http",
|
||||
})),
|
||||
expectedError: true,
|
||||
},
|
||||
}
|
||||
|
||||
for containerID, test := range testCases {
|
||||
test := test
|
||||
t.Run(strconv.Itoa(containerID), func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
dockerData := parseContainer(test.container)
|
||||
err := checkServiceLabelPort(dockerData)
|
||||
|
||||
if test.expectedError && err == nil {
|
||||
t.Error("expected an error but got nil")
|
||||
} else if !test.expectedError && err != nil {
|
||||
t.Errorf("expected no error, got %q", err)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -171,19 +171,19 @@ func TestDockerGetServiceBackend(t *testing.T) {
|
||||
}{
|
||||
{
|
||||
container: containerJSON(name("foo")),
|
||||
expected: "foo-myservice",
|
||||
expected: "foo-foo-myservice",
|
||||
},
|
||||
{
|
||||
container: containerJSON(labels(map[string]string{
|
||||
types.LabelBackend: "another-backend",
|
||||
})),
|
||||
expected: "another-backend-myservice",
|
||||
expected: "fake-another-backend-myservice",
|
||||
},
|
||||
{
|
||||
container: containerJSON(labels(map[string]string{
|
||||
"traefik.myservice.frontend.backend": "custom-backend",
|
||||
})),
|
||||
expected: "custom-backend",
|
||||
expected: "fake-custom-backend",
|
||||
},
|
||||
}
|
||||
|
||||
@@ -341,8 +341,8 @@ func TestDockerLoadDockerServiceConfig(t *testing.T) {
|
||||
),
|
||||
},
|
||||
expectedFrontends: map[string]*types.Frontend{
|
||||
"frontend-foo-service": {
|
||||
Backend: "backend-foo-service",
|
||||
"frontend-foo-foo-service": {
|
||||
Backend: "backend-foo-foo-service",
|
||||
PassHostHeader: true,
|
||||
EntryPoints: []string{"http", "https"},
|
||||
BasicAuth: []string{"test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/", "test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0"},
|
||||
@@ -354,9 +354,9 @@ func TestDockerLoadDockerServiceConfig(t *testing.T) {
|
||||
},
|
||||
},
|
||||
expectedBackends: map[string]*types.Backend{
|
||||
"backend-foo-service": {
|
||||
"backend-foo-foo-service": {
|
||||
Servers: map[string]types.Server{
|
||||
"service": {
|
||||
"service-0": {
|
||||
URL: "http://127.0.0.1:2503",
|
||||
Weight: 0,
|
||||
},
|
||||
@@ -399,8 +399,8 @@ func TestDockerLoadDockerServiceConfig(t *testing.T) {
|
||||
),
|
||||
},
|
||||
expectedFrontends: map[string]*types.Frontend{
|
||||
"frontend-foobar": {
|
||||
Backend: "backend-foobar",
|
||||
"frontend-test1-foobar": {
|
||||
Backend: "backend-test1-foobar",
|
||||
PassHostHeader: false,
|
||||
Priority: 5000,
|
||||
EntryPoints: []string{"http", "https", "ws"},
|
||||
@@ -411,8 +411,8 @@ func TestDockerLoadDockerServiceConfig(t *testing.T) {
|
||||
},
|
||||
},
|
||||
},
|
||||
"frontend-test2-anotherservice": {
|
||||
Backend: "backend-test2-anotherservice",
|
||||
"frontend-test2-test2-anotherservice": {
|
||||
Backend: "backend-test2-test2-anotherservice",
|
||||
PassHostHeader: true,
|
||||
EntryPoints: []string{},
|
||||
BasicAuth: []string{},
|
||||
@@ -424,18 +424,18 @@ func TestDockerLoadDockerServiceConfig(t *testing.T) {
|
||||
},
|
||||
},
|
||||
expectedBackends: map[string]*types.Backend{
|
||||
"backend-foobar": {
|
||||
"backend-test1-foobar": {
|
||||
Servers: map[string]types.Server{
|
||||
"service": {
|
||||
"service-0": {
|
||||
URL: "https://127.0.0.1:2503",
|
||||
Weight: 80,
|
||||
},
|
||||
},
|
||||
CircuitBreaker: nil,
|
||||
},
|
||||
"backend-test2-anotherservice": {
|
||||
"backend-test2-test2-anotherservice": {
|
||||
Servers: map[string]types.Server{
|
||||
"service": {
|
||||
"service-0": {
|
||||
URL: "http://127.0.0.1:8079",
|
||||
Weight: 33,
|
||||
},
|
||||
|
||||
@@ -23,28 +23,28 @@ func TestSwarmGetFrontendName(t *testing.T) {
|
||||
}{
|
||||
{
|
||||
service: swarmService(serviceName("foo")),
|
||||
expected: "Host-foo-docker-localhost",
|
||||
expected: "Host-foo-docker-localhost-0",
|
||||
networks: map[string]*docker.NetworkResource{},
|
||||
},
|
||||
{
|
||||
service: swarmService(serviceLabels(map[string]string{
|
||||
types.LabelFrontendRule: "Headers:User-Agent,bat/0.1.0",
|
||||
})),
|
||||
expected: "Headers-User-Agent-bat-0-1-0",
|
||||
expected: "Headers-User-Agent-bat-0-1-0-0",
|
||||
networks: map[string]*docker.NetworkResource{},
|
||||
},
|
||||
{
|
||||
service: swarmService(serviceLabels(map[string]string{
|
||||
types.LabelFrontendRule: "Host:foo.bar",
|
||||
})),
|
||||
expected: "Host-foo-bar",
|
||||
expected: "Host-foo-bar-0",
|
||||
networks: map[string]*docker.NetworkResource{},
|
||||
},
|
||||
{
|
||||
service: swarmService(serviceLabels(map[string]string{
|
||||
types.LabelFrontendRule: "Path:/test",
|
||||
})),
|
||||
expected: "Path-test",
|
||||
expected: "Path-test-0",
|
||||
networks: map[string]*docker.NetworkResource{},
|
||||
},
|
||||
{
|
||||
@@ -54,7 +54,7 @@ func TestSwarmGetFrontendName(t *testing.T) {
|
||||
types.LabelFrontendRule: "PathPrefix:/test2",
|
||||
}),
|
||||
),
|
||||
expected: "PathPrefix-test2",
|
||||
expected: "PathPrefix-test2-0",
|
||||
networks: map[string]*docker.NetworkResource{},
|
||||
},
|
||||
}
|
||||
@@ -68,7 +68,7 @@ func TestSwarmGetFrontendName(t *testing.T) {
|
||||
Domain: "docker.localhost",
|
||||
SwarmMode: true,
|
||||
}
|
||||
actual := provider.getFrontendName(dockerData)
|
||||
actual := provider.getFrontendName(dockerData, 0)
|
||||
if actual != e.expected {
|
||||
t.Errorf("expected %q, got %q", e.expected, actual)
|
||||
}
|
||||
@@ -660,13 +660,13 @@ func TestSwarmLoadDockerConfig(t *testing.T) {
|
||||
),
|
||||
},
|
||||
expectedFrontends: map[string]*types.Frontend{
|
||||
"frontend-Host-test-docker-localhost": {
|
||||
"frontend-Host-test-docker-localhost-0": {
|
||||
Backend: "backend-test",
|
||||
PassHostHeader: true,
|
||||
EntryPoints: []string{},
|
||||
BasicAuth: []string{},
|
||||
Routes: map[string]types.Route{
|
||||
"route-frontend-Host-test-docker-localhost": {
|
||||
"route-frontend-Host-test-docker-localhost-0": {
|
||||
Rule: "Host:test.docker.localhost",
|
||||
},
|
||||
},
|
||||
@@ -714,24 +714,24 @@ func TestSwarmLoadDockerConfig(t *testing.T) {
|
||||
),
|
||||
},
|
||||
expectedFrontends: map[string]*types.Frontend{
|
||||
"frontend-Host-test1-docker-localhost": {
|
||||
"frontend-Host-test1-docker-localhost-0": {
|
||||
Backend: "backend-foobar",
|
||||
PassHostHeader: true,
|
||||
EntryPoints: []string{"http", "https"},
|
||||
BasicAuth: []string{"test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/", "test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0"},
|
||||
Routes: map[string]types.Route{
|
||||
"route-frontend-Host-test1-docker-localhost": {
|
||||
"route-frontend-Host-test1-docker-localhost-0": {
|
||||
Rule: "Host:test1.docker.localhost",
|
||||
},
|
||||
},
|
||||
},
|
||||
"frontend-Host-test2-docker-localhost": {
|
||||
"frontend-Host-test2-docker-localhost-1": {
|
||||
Backend: "backend-foobar",
|
||||
PassHostHeader: true,
|
||||
EntryPoints: []string{},
|
||||
BasicAuth: []string{},
|
||||
Routes: map[string]types.Route{
|
||||
"route-frontend-Host-test2-docker-localhost": {
|
||||
"route-frontend-Host-test2-docker-localhost-1": {
|
||||
Rule: "Host:test2.docker.localhost",
|
||||
},
|
||||
},
|
||||
|
||||
@@ -162,12 +162,12 @@ func (p *Provider) Provide(configurationChan chan<- types.ConfigMessage, pool *s
|
||||
})
|
||||
|
||||
operation := func() error {
|
||||
aws, err := p.createClient()
|
||||
awsClient, err := p.createClient()
|
||||
if err != nil {
|
||||
return handleCanceled(ctx, err)
|
||||
}
|
||||
|
||||
configuration, err := p.loadDynamoConfig(aws)
|
||||
configuration, err := p.loadDynamoConfig(awsClient)
|
||||
if err != nil {
|
||||
return handleCanceled(ctx, err)
|
||||
}
|
||||
@@ -184,7 +184,7 @@ func (p *Provider) Provide(configurationChan chan<- types.ConfigMessage, pool *s
|
||||
log.Debug("Watching Provider...")
|
||||
select {
|
||||
case <-reload.C:
|
||||
configuration, err := p.loadDynamoConfig(aws)
|
||||
configuration, err := p.loadDynamoConfig(awsClient)
|
||||
if err != nil {
|
||||
return handleCanceled(ctx, err)
|
||||
}
|
||||
|
||||
@@ -122,12 +122,12 @@ func (p *Provider) Provide(configurationChan chan<- types.ConfigMessage, pool *s
|
||||
})
|
||||
|
||||
operation := func() error {
|
||||
aws, err := p.createClient()
|
||||
awsClient, err := p.createClient()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
configuration, err := p.loadECSConfig(ctx, aws)
|
||||
configuration, err := p.loadECSConfig(ctx, awsClient)
|
||||
if err != nil {
|
||||
return handleCanceled(ctx, err)
|
||||
}
|
||||
@@ -143,7 +143,7 @@ func (p *Provider) Provide(configurationChan chan<- types.ConfigMessage, pool *s
|
||||
for {
|
||||
select {
|
||||
case <-reload.C:
|
||||
configuration, err := p.loadECSConfig(ctx, aws)
|
||||
configuration, err := p.loadECSConfig(ctx, awsClient)
|
||||
if err != nil {
|
||||
return handleCanceled(ctx, err)
|
||||
}
|
||||
@@ -180,11 +180,13 @@ func wrapAws(ctx context.Context, req *request.Request) error {
|
||||
|
||||
func (p *Provider) loadECSConfig(ctx context.Context, client *awsClient) (*types.Configuration, error) {
|
||||
var ecsFuncMap = template.FuncMap{
|
||||
"filterFrontends": p.filterFrontends,
|
||||
"getFrontendRule": p.getFrontendRule,
|
||||
"getBasicAuth": p.getBasicAuth,
|
||||
"getLoadBalancerSticky": p.getLoadBalancerSticky,
|
||||
"getLoadBalancerMethod": p.getLoadBalancerMethod,
|
||||
"filterFrontends": p.filterFrontends,
|
||||
"getFrontendRule": p.getFrontendRule,
|
||||
"getBasicAuth": p.getBasicAuth,
|
||||
"getLoadBalancerMethod": p.getLoadBalancerMethod,
|
||||
"getLoadBalancerSticky": p.getLoadBalancerSticky,
|
||||
"hasStickinessLabel": p.hasStickinessLabel,
|
||||
"getStickinessCookieName": p.getStickinessCookieName,
|
||||
}
|
||||
|
||||
instances, err := p.listInstances(ctx, client)
|
||||
@@ -477,16 +479,33 @@ func (p *Provider) getBasicAuth(i ecsInstance) []string {
|
||||
return []string{}
|
||||
}
|
||||
|
||||
func getFirstInstanceLabel(instances []ecsInstance, labelName string) string {
|
||||
if len(instances) > 0 {
|
||||
return instances[0].label(labelName)
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (p *Provider) getLoadBalancerSticky(instances []ecsInstance) string {
|
||||
if len(instances) > 0 {
|
||||
label := instances[0].label(types.LabelBackendLoadbalancerSticky)
|
||||
label := getFirstInstanceLabel(instances, types.LabelBackendLoadbalancerSticky)
|
||||
if label != "" {
|
||||
log.Warnf("Deprecated configuration found: %s. Please use %s.", types.LabelBackendLoadbalancerSticky, types.LabelBackendLoadbalancerStickiness)
|
||||
return label
|
||||
}
|
||||
}
|
||||
return "false"
|
||||
}
|
||||
|
||||
func (p *Provider) hasStickinessLabel(instances []ecsInstance) bool {
|
||||
stickinessLabel := getFirstInstanceLabel(instances, types.LabelBackendLoadbalancerStickiness)
|
||||
return len(stickinessLabel) > 0 && strings.EqualFold(strings.TrimSpace(stickinessLabel), "true")
|
||||
}
|
||||
|
||||
func (p *Provider) getStickinessCookieName(instances []ecsInstance) string {
|
||||
return getFirstInstanceLabel(instances, types.LabelBackendLoadbalancerStickinessCookieName)
|
||||
}
|
||||
|
||||
func (p *Provider) getLoadBalancerMethod(instances []ecsInstance) string {
|
||||
if len(instances) > 0 {
|
||||
label := instances[0].label(types.LabelBackendLoadbalancerMethod)
|
||||
|
||||
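The new `getFirstInstanceLabel` helper above centralizes the "read the label from the first instance" pattern used by the ECS stickiness functions. A rough stand-in (simplified names and a written-out label key for illustration, not the provider's actual types) showing how the first instance's labels drive the backend-level stickiness decision:

```go
package main

import (
	"fmt"
	"strings"
)

// instance is a stand-in for the provider's ecsInstance type.
type instance struct{ labels map[string]string }

// firstLabel returns the label value from the first instance, or "".
func firstLabel(instances []instance, name string) string {
	if len(instances) > 0 {
		return instances[0].labels[name]
	}
	return ""
}

// hasStickiness reports whether the backend should use sticky sessions.
func hasStickiness(instances []instance) bool {
	v := firstLabel(instances, "traefik.backend.loadbalancer.stickiness")
	return strings.EqualFold(strings.TrimSpace(v), "true")
}

func main() {
	backend := []instance{{labels: map[string]string{
		"traefik.backend.loadbalancer.stickiness": "true",
	}}}
	fmt.Println(hasStickiness(backend)) // true
	fmt.Println(hasStickiness(nil))     // false
}
```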
@@ -160,7 +160,6 @@ func (p *Provider) loadIngresses(k8sClient Client) (*types.Configuration, error)
|
||||
templateObjects.Backends[r.Host+pa.Path] = &types.Backend{
|
||||
Servers: make(map[string]types.Server),
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Sticky: false,
|
||||
Method: "wrr",
|
||||
},
|
||||
}
|
||||
@@ -180,11 +179,13 @@ func (p *Provider) loadIngresses(k8sClient Client) (*types.Configuration, error)
|
||||
log.Warnf("Unknown value '%s' for %s, falling back to %s", passHostHeaderAnnotation, types.LabelFrontendPassHostHeader, PassHostHeader)
|
||||
}
|
||||
if realm := i.Annotations[annotationKubernetesAuthRealm]; realm != "" && realm != traefikDefaultRealm {
|
||||
return nil, errors.New("no realm customization supported")
|
||||
log.Errorf("Value for annotation %q on ingress %s/%s invalid: no realm customization supported", annotationKubernetesAuthRealm, i.ObjectMeta.Namespace, i.ObjectMeta.Name)
|
||||
delete(templateObjects.Backends, r.Host+pa.Path)
|
||||
continue
|
||||
}
|
||||
|
||||
witelistSourceRangeAnnotation := i.Annotations[annotationKubernetesWhitelistSourceRange]
|
||||
whitelistSourceRange := provider.SplitAndTrimString(witelistSourceRangeAnnotation)
|
||||
whitelistSourceRangeAnnotation := i.Annotations[annotationKubernetesWhitelistSourceRange]
|
||||
whitelistSourceRange := provider.SplitAndTrimString(whitelistSourceRangeAnnotation)
|
||||
|
||||
if _, exists := templateObjects.Frontends[r.Host+pa.Path]; !exists {
|
||||
basicAuthCreds, err := handleBasicAuthConfig(i, k8sClient)
|
||||
@@ -247,8 +248,16 @@ func (p *Provider) loadIngresses(k8sClient Client) (*types.Configuration, error)
|
||||
templateObjects.Backends[r.Host+pa.Path].LoadBalancer.Method = "drr"
|
||||
}
|
||||
|
||||
if service.Annotations[types.LabelBackendLoadbalancerSticky] == "true" {
|
||||
templateObjects.Backends[r.Host+pa.Path].LoadBalancer.Sticky = true
|
||||
if sticky := service.Annotations[types.LabelBackendLoadbalancerSticky]; len(sticky) > 0 {
|
||||
log.Warnf("Deprecated configuration found: %s. Please use %s.", types.LabelBackendLoadbalancerSticky, types.LabelBackendLoadbalancerStickiness)
|
||||
templateObjects.Backends[r.Host+pa.Path].LoadBalancer.Sticky = strings.EqualFold(strings.TrimSpace(sticky), "true")
|
||||
}
|
||||
|
||||
if service.Annotations[types.LabelBackendLoadbalancerStickiness] == "true" {
|
||||
templateObjects.Backends[r.Host+pa.Path].LoadBalancer.Stickiness = &types.Stickiness{}
|
||||
if cookieName := service.Annotations[types.LabelBackendLoadbalancerStickinessCookieName]; len(cookieName) > 0 {
|
||||
templateObjects.Backends[r.Host+pa.Path].LoadBalancer.Stickiness.CookieName = cookieName
|
||||
}
|
||||
}
|
||||
|
||||
protocol := "http"
|
||||
@@ -316,13 +325,13 @@ func getRuleForPath(pa v1beta1.HTTPIngressPath, i *v1beta1.Ingress) string {
|
||||
ruleType = ruleTypePathPrefix
|
||||
}
|
||||
|
||||
rule := ruleType + ":" + pa.Path
|
||||
rules := []string{ruleType + ":" + pa.Path}
|
||||
|
||||
if rewriteTarget := i.Annotations[annotationKubernetesRewriteTarget]; rewriteTarget != "" {
|
||||
rule = ruleTypeReplacePath + ":" + rewriteTarget
|
||||
rules = append(rules, ruleTypeReplacePath+":"+rewriteTarget)
|
||||
}
|
||||
|
||||
return rule
|
||||
return strings.Join(rules, ";")
|
||||
}
|
||||
|
||||
func (p *Provider) getPriority(path v1beta1.HTTPIngressPath, i *v1beta1.Ingress) int {
|
||||
|
||||
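With this change `getRuleForPath` keeps the path matcher and appends the `ReplacePath` modifier instead of replacing the matcher outright, which is what broke `rewrite-target` together with path prefixes. A small standalone sketch of the resulting rule string (not the provider's code):

```go
package main

import (
	"fmt"
	"strings"
)

// ruleForPath joins the path matcher with an optional ReplacePath modifier,
// as the rewritten getRuleForPath does for the rewrite-target annotation.
func ruleForPath(path, rewriteTarget string) string {
	rules := []string{"PathPrefix:" + path}
	if rewriteTarget != "" {
		rules = append(rules, "ReplacePath:"+rewriteTarget)
	}
	return strings.Join(rules, ";")
}

func main() {
	fmt.Println(ruleForPath("/api", "/")) // PathPrefix:/api;ReplacePath:/
	fmt.Println(ruleForPath("/web", ""))  // PathPrefix:/web
}
```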
@@ -243,7 +243,6 @@ func TestLoadIngresses(t *testing.T) {
|
||||
},
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Sticky: false,
|
||||
Method: "wrr",
|
||||
},
|
||||
},
|
||||
@@ -256,7 +255,6 @@ func TestLoadIngresses(t *testing.T) {
|
||||
},
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Sticky: false,
|
||||
Method: "wrr",
|
||||
},
|
||||
},
|
||||
@@ -273,7 +271,6 @@ func TestLoadIngresses(t *testing.T) {
|
||||
},
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Sticky: false,
|
||||
Method: "wrr",
|
||||
},
|
||||
},
|
||||
@@ -489,7 +486,6 @@ func TestGetPassHostHeader(t *testing.T) {
|
||||
Servers: map[string]types.Server{},
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Sticky: false,
|
||||
Method: "wrr",
|
||||
},
|
||||
},
|
||||
@@ -591,7 +587,6 @@ func TestOnlyReferencesServicesFromOwnNamespace(t *testing.T) {
|
||||
Servers: map[string]types.Server{},
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Sticky: false,
|
||||
Method: "wrr",
|
||||
},
|
||||
},
|
||||
@@ -771,7 +766,6 @@ func TestLoadNamespacedIngresses(t *testing.T) {
|
||||
Servers: map[string]types.Server{},
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Sticky: false,
|
||||
Method: "wrr",
|
||||
},
|
||||
},
|
||||
@@ -779,7 +773,6 @@ func TestLoadNamespacedIngresses(t *testing.T) {
|
||||
Servers: map[string]types.Server{},
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Sticky: false,
|
||||
Method: "wrr",
|
||||
},
|
||||
},
|
||||
@@ -996,7 +989,6 @@ func TestLoadMultipleNamespacedIngresses(t *testing.T) {
|
||||
Servers: map[string]types.Server{},
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Sticky: false,
|
||||
Method: "wrr",
|
||||
},
|
||||
},
|
||||
@@ -1004,7 +996,6 @@ func TestLoadMultipleNamespacedIngresses(t *testing.T) {
|
||||
Servers: map[string]types.Server{},
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Sticky: false,
|
||||
Method: "wrr",
|
||||
},
|
||||
},
|
||||
@@ -1012,7 +1003,6 @@ func TestLoadMultipleNamespacedIngresses(t *testing.T) {
|
||||
Servers: map[string]types.Server{},
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Sticky: false,
|
||||
Method: "wrr",
|
||||
},
|
||||
},
|
||||
@@ -1118,7 +1108,6 @@ func TestHostlessIngress(t *testing.T) {
|
||||
Servers: map[string]types.Server{},
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Sticky: false,
|
||||
Method: "wrr",
|
||||
},
|
||||
},
|
||||
@@ -1320,7 +1309,6 @@ func TestServiceAnnotations(t *testing.T) {
|
||||
},
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Method: "drr",
|
||||
Sticky: false,
|
||||
},
|
||||
},
|
||||
"bar": {
|
||||
@@ -1366,7 +1354,7 @@ func TestServiceAnnotations(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
assert.Equal(t, expected, actual)
|
||||
assert.EqualValues(t, expected, actual)
|
||||
}
|
||||
|
||||
func TestIngressAnnotations(t *testing.T) {
|
||||
@@ -1541,6 +1529,34 @@ func TestIngressAnnotations(t *testing.T) {
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
Namespace: "testing",
|
||||
Annotations: map[string]string{
|
||||
"ingress.kubernetes.io/auth-realm": "customized",
|
||||
},
|
||||
},
|
||||
Spec: v1beta1.IngressSpec{
|
||||
Rules: []v1beta1.IngressRule{
|
||||
{
|
||||
Host: "auth-realm-customized",
|
||||
IngressRuleValue: v1beta1.IngressRuleValue{
|
||||
HTTP: &v1beta1.HTTPIngressRuleValue{
|
||||
Paths: []v1beta1.HTTPIngressPath{
|
||||
{
|
||||
Path: "/auth-realm-customized",
|
||||
Backend: v1beta1.IngressBackend{
|
||||
ServiceName: "service1",
|
||||
ServicePort: intstr.FromInt(80),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
services := []*v1.Service{
|
||||
{
|
||||
@@ -1601,7 +1617,6 @@ func TestIngressAnnotations(t *testing.T) {
|
||||
},
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Sticky: false,
|
||||
Method: "wrr",
|
||||
},
|
||||
},
|
||||
@@ -1614,7 +1629,6 @@ func TestIngressAnnotations(t *testing.T) {
|
||||
},
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Sticky: false,
|
||||
Method: "wrr",
|
||||
},
|
||||
},
|
||||
@@ -1627,7 +1641,6 @@ func TestIngressAnnotations(t *testing.T) {
|
||||
},
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Sticky: false,
|
||||
Method: "wrr",
|
||||
},
|
||||
},
|
||||
@@ -1640,7 +1653,6 @@ func TestIngressAnnotations(t *testing.T) {
|
||||
},
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Sticky: false,
|
||||
Method: "wrr",
|
||||
},
|
||||
},
|
||||
@@ -1653,7 +1665,6 @@ func TestIngressAnnotations(t *testing.T) {
|
||||
},
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Sticky: false,
|
||||
Method: "wrr",
|
||||
},
|
||||
},
|
||||
@@ -1717,7 +1728,7 @@ func TestIngressAnnotations(t *testing.T) {
|
||||
PassHostHeader: true,
|
||||
Routes: map[string]types.Route{
|
||||
"/api": {
|
||||
Rule: "ReplacePath:/",
|
||||
Rule: "PathPrefix:/api;ReplacePath:/",
|
||||
},
|
||||
"rewrite": {
|
||||
Rule: "Host:rewrite",
|
||||
@@ -1807,7 +1818,6 @@ func TestPriorityHeaderValue(t *testing.T) {
|
||||
},
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Sticky: false,
|
||||
Method: "wrr",
|
||||
},
|
||||
},
|
||||
@@ -1909,7 +1919,6 @@ func TestInvalidPassHostHeaderValue(t *testing.T) {
|
||||
},
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Sticky: false,
|
||||
Method: "wrr",
|
||||
},
|
||||
},
|
||||
@@ -2192,14 +2201,12 @@ func TestMissingResources(t *testing.T) {
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Method: "wrr",
|
||||
Sticky: false,
|
||||
},
|
||||
},
|
||||
"missing_service": {
|
||||
Servers: map[string]types.Server{},
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Method: "wrr",
|
||||
Sticky: false,
|
||||
},
|
||||
},
|
||||
"missing_endpoints": {
|
||||
@@ -2207,7 +2214,6 @@ func TestMissingResources(t *testing.T) {
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Method: "wrr",
|
||||
Sticky: false,
|
||||
},
|
||||
},
|
||||
"missing_endpoint_subsets": {
|
||||
@@ -2215,7 +2221,6 @@ func TestMissingResources(t *testing.T) {
|
||||
CircuitBreaker: nil,
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Method: "wrr",
|
||||
Sticky: false,
|
||||
},
|
||||
},
|
||||
},
|
||||
|
||||
@@ -102,7 +102,7 @@ func (p *Provider) watchKv(configurationChan chan<- types.ConfigMessage, prefix
|
||||
func (p *Provider) Provide(configurationChan chan<- types.ConfigMessage, pool *safe.Pool, constraints types.Constraints) error {
|
||||
p.Constraints = append(p.Constraints, constraints...)
|
||||
operation := func() error {
|
||||
if _, err := p.kvclient.Exists("qmslkjdfmqlskdjfmqlksjazçueznbvbwzlkajzebvkwjdcqmlsfj"); err != nil {
|
||||
if _, err := p.kvclient.Exists(p.Prefix + "/qmslkjdfmqlskdjfmqlksjazçueznbvbwzlkajzebvkwjdcqmlsfj"); err != nil {
|
||||
return fmt.Errorf("Failed to test KV store connection: %v", err)
|
||||
}
|
||||
if p.Watch {
|
||||
@@ -139,11 +139,14 @@ func (p *Provider) loadConfig() *types.Configuration {
|
||||
}
|
||||
|
||||
var KvFuncMap = template.FuncMap{
|
||||
"List": p.list,
|
||||
"ListServers": p.listServers,
|
||||
"Get": p.get,
|
||||
"SplitGet": p.splitGet,
|
||||
"Last": p.last,
|
||||
"List": p.list,
|
||||
"ListServers": p.listServers,
|
||||
"Get": p.get,
|
||||
"SplitGet": p.splitGet,
|
||||
"Last": p.last,
|
||||
"getSticky": p.getSticky,
|
||||
"hasStickinessLabel": p.hasStickinessLabel,
|
||||
"getStickinessCookieName": p.getStickinessCookieName,
|
||||
}
|
||||
|
||||
configuration, err := p.GetConfiguration("templates/kv.tmpl", KvFuncMap, templateObjects)
|
||||
@@ -239,3 +242,22 @@ func (p *Provider) checkConstraints(keys ...string) bool {
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
func (p *Provider) getSticky(rootPath string) string {
|
||||
stickyValue := p.get("", rootPath, "/loadbalancer", "/sticky")
|
||||
if len(stickyValue) > 0 {
|
||||
log.Warnf("Deprecated configuration found: %s. Please use %s.", "loadbalancer/sticky", "loadbalancer/stickiness")
|
||||
} else {
|
||||
stickyValue = "false"
|
||||
}
|
||||
return stickyValue
|
||||
}
|
||||
|
||||
func (p *Provider) hasStickinessLabel(rootPath string) bool {
|
||||
stickinessValue := p.get("false", rootPath, "/loadbalancer", "/stickiness")
|
||||
return len(stickinessValue) > 0 && strings.EqualFold(strings.TrimSpace(stickinessValue), "true")
|
||||
}
|
||||
|
||||
func (p *Provider) getStickinessCookieName(rootPath string) string {
|
||||
return p.get("", rootPath, "/loadbalancer", "/stickiness", "/cookiename")
|
||||
}
|
||||
|
||||
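The three helpers above read stickiness settings from keys under a backend's load-balancer subtree. A minimal sketch of that lookup against a plain map standing in for the KV store (the `traefik/backends/<name>` key paths are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// hasStickiness mimics the KV lookup: stickiness is enabled when the
// <root>/loadbalancer/stickiness key holds "true".
func hasStickiness(store map[string]string, root string) bool {
	v := store[root+"/loadbalancer/stickiness"]
	return strings.EqualFold(strings.TrimSpace(v), "true")
}

func main() {
	store := map[string]string{
		"traefik/backends/foo/loadbalancer/stickiness":            "true",
		"traefik/backends/foo/loadbalancer/stickiness/cookiename": "my_cookie",
	}
	fmt.Println(hasStickiness(store, "traefik/backends/foo")) // true
	fmt.Println(hasStickiness(store, "traefik/backends/bar")) // false
}
```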
provider/kv/kv_mock_test.go (new file, 110 lines)
@@ -0,0 +1,110 @@
|
||||
package kv
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"strings"
|
||||
|
||||
"github.com/containous/traefik/types"
|
||||
"github.com/docker/libkv/store"
|
||||
)
|
||||
|
||||
type KvMock struct {
|
||||
Provider
|
||||
}
|
||||
|
||||
func (provider *KvMock) loadConfig() *types.Configuration {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Override Get/List to return an error
|
||||
type KvError struct {
|
||||
Get error
|
||||
List error
|
||||
}
|
||||
|
||||
// Extremely limited mock store so we can test initialization
|
||||
type Mock struct {
|
||||
Error KvError
|
||||
KVPairs []*store.KVPair
|
||||
WatchTreeMethod func() <-chan []*store.KVPair
|
||||
}
|
||||
|
||||
func (s *Mock) Put(key string, value []byte, opts *store.WriteOptions) error {
|
||||
return errors.New("Put not supported")
|
||||
}
|
||||
|
||||
func (s *Mock) Get(key string) (*store.KVPair, error) {
|
||||
if err := s.Error.Get; err != nil {
|
||||
return nil, err
|
||||
}
|
||||
for _, kvPair := range s.KVPairs {
|
||||
if kvPair.Key == key {
|
||||
return kvPair, nil
|
||||
}
|
||||
}
|
||||
return nil, store.ErrKeyNotFound
|
||||
}
|
||||
|
||||
func (s *Mock) Delete(key string) error {
|
||||
return errors.New("Delete not supported")
|
||||
}
|
||||
|
||||
// Exists mock
|
||||
func (s *Mock) Exists(key string) (bool, error) {
|
||||
if err := s.Error.Get; err != nil {
|
||||
return false, err
|
||||
}
|
||||
for _, kvPair := range s.KVPairs {
|
||||
if strings.HasPrefix(kvPair.Key, key) {
|
||||
return true, nil
|
||||
}
|
||||
}
|
||||
return false, store.ErrKeyNotFound
|
||||
}
|
||||
|
||||
// Watch mock
|
||||
func (s *Mock) Watch(key string, stopCh <-chan struct{}) (<-chan *store.KVPair, error) {
|
||||
return nil, errors.New("Watch not supported")
|
||||
}
|
||||
|
||||
// WatchTree mock
|
||||
func (s *Mock) WatchTree(prefix string, stopCh <-chan struct{}) (<-chan []*store.KVPair, error) {
|
||||
return s.WatchTreeMethod(), nil
|
||||
}
|
||||
|
||||
// NewLock mock
|
||||
func (s *Mock) NewLock(key string, options *store.LockOptions) (store.Locker, error) {
|
||||
return nil, errors.New("NewLock not supported")
|
||||
}
|
||||
|
||||
// List mock
|
||||
func (s *Mock) List(prefix string) ([]*store.KVPair, error) {
|
||||
if err := s.Error.List; err != nil {
|
||||
return nil, err
|
||||
}
|
||||
kv := []*store.KVPair{}
|
||||
for _, kvPair := range s.KVPairs {
|
||||
if strings.HasPrefix(kvPair.Key, prefix) && !strings.ContainsAny(strings.TrimPrefix(kvPair.Key, prefix), "/") {
|
||||
kv = append(kv, kvPair)
|
||||
}
|
||||
}
|
||||
return kv, nil
|
||||
}
|
||||
|
||||
// DeleteTree mock
|
||||
func (s *Mock) DeleteTree(prefix string) error {
|
||||
return errors.New("DeleteTree not supported")
|
||||
}
|
||||
|
||||
// AtomicPut mock
|
||||
func (s *Mock) AtomicPut(key string, value []byte, previous *store.KVPair, opts *store.WriteOptions) (bool, *store.KVPair, error) {
|
||||
return false, nil, errors.New("AtomicPut not supported")
|
||||
}
|
||||
|
||||
// AtomicDelete mock
|
||||
func (s *Mock) AtomicDelete(key string, previous *store.KVPair) (bool, error) {
|
||||
return false, errors.New("AtomicDelete not supported")
|
||||
}
|
||||
|
||||
// Close mock
|
||||
func (s *Mock) Close() {}
|
||||
@@ -1,10 +1,8 @@
|
||||
package kv
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"reflect"
|
||||
"sort"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
@@ -237,14 +235,6 @@ func TestKvLast(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
type KvMock struct {
|
||||
Provider
|
||||
}
|
||||
|
||||
func (provider *KvMock) loadConfig() *types.Configuration {
|
||||
return nil
|
||||
}
|
||||
|
||||
func TestKvWatchTree(t *testing.T) {
|
||||
returnedChans := make(chan chan []*store.KVPair)
|
||||
provider := &KvMock{
|
||||
@@ -288,91 +278,6 @@ func TestKvWatchTree(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
// Override Get/List to return a error
|
||||
type KvError struct {
|
||||
Get error
|
||||
List error
|
||||
}
|
||||
|
||||
// Extremely limited mock store so we can test initialization
|
||||
type Mock struct {
|
||||
Error KvError
|
||||
KVPairs []*store.KVPair
|
||||
WatchTreeMethod func() <-chan []*store.KVPair
|
||||
}
|
||||
|
||||
func (s *Mock) Put(key string, value []byte, opts *store.WriteOptions) error {
|
||||
return errors.New("Put not supported")
|
||||
}
|
||||
|
||||
func (s *Mock) Get(key string) (*store.KVPair, error) {
|
||||
if err := s.Error.Get; err != nil {
|
||||
return nil, err
|
||||
}
|
||||
for _, kvPair := range s.KVPairs {
|
||||
if kvPair.Key == key {
|
||||
return kvPair, nil
|
||||
}
|
||||
}
|
||||
return nil, store.ErrKeyNotFound
|
||||
}
|
||||
|
||||
func (s *Mock) Delete(key string) error {
|
||||
return errors.New("Delete not supported")
|
||||
}
|
||||
|
||||
// Exists mock
|
||||
func (s *Mock) Exists(key string) (bool, error) {
|
||||
return false, errors.New("Exists not supported")
|
||||
}
|
||||
|
||||
// Watch mock
|
||||
func (s *Mock) Watch(key string, stopCh <-chan struct{}) (<-chan *store.KVPair, error) {
|
||||
return nil, errors.New("Watch not supported")
|
||||
}
|
||||
|
||||
// WatchTree mock
|
||||
func (s *Mock) WatchTree(prefix string, stopCh <-chan struct{}) (<-chan []*store.KVPair, error) {
|
||||
return s.WatchTreeMethod(), nil
|
||||
}
|
||||
|
||||
// NewLock mock
|
||||
func (s *Mock) NewLock(key string, options *store.LockOptions) (store.Locker, error) {
|
||||
return nil, errors.New("NewLock not supported")
|
||||
}
|
||||
|
||||
// List mock
|
||||
func (s *Mock) List(prefix string) ([]*store.KVPair, error) {
|
||||
if err := s.Error.List; err != nil {
|
||||
return nil, err
|
||||
}
|
||||
kv := []*store.KVPair{}
|
||||
for _, kvPair := range s.KVPairs {
|
||||
if strings.HasPrefix(kvPair.Key, prefix) && !strings.ContainsAny(strings.TrimPrefix(kvPair.Key, prefix), "/") {
|
||||
kv = append(kv, kvPair)
|
||||
}
|
||||
}
|
||||
return kv, nil
|
||||
}
|
||||
|
||||
// DeleteTree mock
|
||||
func (s *Mock) DeleteTree(prefix string) error {
|
||||
return errors.New("DeleteTree not supported")
|
||||
}
|
||||
|
||||
// AtomicPut mock
|
||||
func (s *Mock) AtomicPut(key string, value []byte, previous *store.KVPair, opts *store.WriteOptions) (bool, *store.KVPair, error) {
|
||||
return false, nil, errors.New("AtomicPut not supported")
|
||||
}
|
||||
|
||||
// AtomicDelete mock
|
||||
func (s *Mock) AtomicDelete(key string, previous *store.KVPair) (bool, error) {
|
||||
return false, errors.New("AtomicDelete not supported")
|
||||
}
|
||||
|
||||
// Close mock
|
||||
func (s *Mock) Close() {}
|
||||
|
||||
func TestKVLoadConfig(t *testing.T) {
|
||||
provider := &Provider{
|
||||
Prefix: "traefik",
|
||||
@@ -463,3 +368,55 @@ func TestKVLoadConfig(t *testing.T) {
|
||||
t.Fatalf("expected %+v, got %+v", expected.Frontends, actual.Frontends)
|
||||
}
|
||||
}
|
||||
|
||||
func TestKVHasStickinessLabel(t *testing.T) {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
KVPairs []*store.KVPair
|
||||
expected bool
|
||||
}{
|
||||
{
|
||||
desc: "without option",
|
||||
expected: false,
|
||||
},
|
||||
{
|
||||
desc: "with cookie name without stickiness=true",
|
||||
KVPairs: []*store.KVPair{
|
||||
{
|
||||
Key: "loadbalancer/stickiness/cookiename",
|
||||
Value: []byte("foo"),
|
||||
},
|
||||
},
|
||||
expected: false,
|
||||
},
|
||||
{
|
||||
desc: "stickiness=true",
|
||||
KVPairs: []*store.KVPair{
|
||||
{
|
||||
Key: "loadbalancer/stickiness",
|
||||
Value: []byte("true"),
|
||||
},
|
||||
},
|
||||
expected: true,
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
p := &Provider{
|
||||
kvclient: &Mock{
|
||||
KVPairs: test.KVPairs,
|
||||
},
|
||||
}
|
||||
|
||||
actual := p.hasStickinessLabel("")
|
||||
|
||||
if actual != test.expected {
|
||||
t.Fatalf("expected %v, got %v", test.expected, actual)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -189,6 +189,8 @@ func (p *Provider) loadMarathonConfig() *types.Configuration {
|
||||
"getLoadBalancerMethod": p.getLoadBalancerMethod,
|
||||
"getCircuitBreakerExpression": p.getCircuitBreakerExpression,
|
||||
"getSticky": p.getSticky,
|
||||
"hasStickinessLabel": p.hasStickinessLabel,
|
||||
"getStickinessCookieName": p.getStickinessCookieName,
|
||||
"hasHealthCheckLabels": p.hasHealthCheckLabels,
|
||||
"getHealthCheckPath": p.getHealthCheckPath,
|
||||
"getHealthCheckInterval": p.getHealthCheckInterval,
|
||||
@@ -430,11 +432,24 @@ func (p *Provider) getProtocol(application marathon.Application, serviceName str
|
||||
|
||||
func (p *Provider) getSticky(application marathon.Application) string {
|
||||
if sticky, ok := p.getAppLabel(application, types.LabelBackendLoadbalancerSticky); ok {
|
||||
log.Warnf("Deprecated configuration found: %s. Please use %s.", types.LabelBackendLoadbalancerSticky, types.LabelBackendLoadbalancerStickiness)
|
||||
return sticky
|
||||
}
|
||||
return "false"
|
||||
}
|
||||
|
||||
func (p *Provider) hasStickinessLabel(application marathon.Application) bool {
|
||||
labelStickiness, okStickiness := p.getAppLabel(application, types.LabelBackendLoadbalancerStickiness)
|
||||
return okStickiness && len(labelStickiness) > 0 && strings.EqualFold(strings.TrimSpace(labelStickiness), "true")
|
||||
}
|
||||
|
||||
func (p *Provider) getStickinessCookieName(application marathon.Application) string {
|
||||
if label, ok := p.getAppLabel(application, types.LabelBackendLoadbalancerStickinessCookieName); ok {
|
||||
return label
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (p *Provider) getPassHostHeader(application marathon.Application, serviceName string) string {
|
||||
if passHostHeader, ok := p.getLabel(application, types.LabelFrontendPassHostHeader, serviceName); ok {
|
||||
return passHostHeader
|
||||
|
||||
@@ -93,7 +93,9 @@ func TestMarathonLoadConfigNonAPIErrors(t *testing.T) {
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedBackends: nil,
|
||||
expectedBackends: map[string]*types.Backend{
|
||||
"backend-app": {},
|
||||
},
|
||||
},
|
||||
{
|
||||
desc: "load balancer / circuit breaker labels",
|
||||
@@ -854,9 +856,8 @@ func TestMarathonGetProtocol(t *testing.T) {
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestMarathonGetSticky(t *testing.T) {
|
||||
cases := []struct {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
application marathon.Application
|
||||
expected string
|
||||
@@ -873,14 +874,51 @@ func TestMarathonGetSticky(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
for _, c := range cases {
|
||||
c := c
|
||||
t.Run(c.desc, func(t *testing.T) {
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
provider := &Provider{}
|
||||
actual := provider.getSticky(c.application)
|
||||
if actual != c.expected {
|
||||
t.Errorf("actual %q, expected %q", actual, c.expected)
|
||||
actual := provider.getSticky(test.application)
|
||||
if actual != test.expected {
|
||||
t.Errorf("actual %q, expected %q", actual, test.expected)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestMarathonHasStickinessLabel(t *testing.T) {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
application marathon.Application
|
||||
expected bool
|
||||
}{
|
||||
{
|
||||
desc: "label missing",
|
||||
application: application(),
|
||||
expected: false,
|
||||
},
|
||||
{
|
||||
desc: "stickiness=true",
|
||||
application: application(label(types.LabelBackendLoadbalancerStickiness, "true")),
|
||||
expected: true,
|
||||
},
|
||||
{
desc: "stickiness=false",
application: application(label(types.LabelBackendLoadbalancerStickiness, "false")),
expected: false,
},
|
||||
}
|
||||
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
provider := &Provider{}
|
||||
actual := provider.hasStickinessLabel(test.application)
|
||||
if actual != test.expected {
|
||||
t.Errorf("actual %q, expected %q", actual, test.expected)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
@@ -34,7 +34,7 @@ type BaseProvider struct {
|
||||
// MatchConstraints must match with EVERY single constraint
// returns the first constraint that does not match, or nil
|
||||
func (p *BaseProvider) MatchConstraints(tags []string) (bool, *types.Constraint) {
|
||||
// if there is no tags and no contraints, filtering is disabled
|
||||
// if there is no tags and no constraints, filtering is disabled
|
||||
if len(tags) == 0 && len(p.Constraints) == 0 {
|
||||
return true, nil
|
||||
}
|
||||
|
||||
@@ -85,7 +85,7 @@ func (p *Provider) intervalPoll(client rancher.Client, updateConfiguration func(
|
||||
_, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
|
||||
ticker := time.NewTicker(time.Duration(p.RefreshSeconds))
|
||||
ticker := time.NewTicker(time.Second * time.Duration(p.RefreshSeconds))
|
||||
defer ticker.Stop()
|
||||
|
||||
var version string
|
||||
|
||||
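The one-line ticker fix above matters because `time.Duration(n)` on its own is n nanoseconds, so a refresh interval of, say, 15 would have ticked roughly every 15ns instead of every 15s. A quick illustration (the interval value is hypothetical):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	refreshSeconds := 15

	wrong := time.Duration(refreshSeconds)               // interpreted as 15 nanoseconds
	right := time.Second * time.Duration(refreshSeconds) // the intended 15 seconds

	fmt.Println(wrong, right) // 15ns 15s
}
```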
@@ -92,10 +92,10 @@ func (p *Provider) getLoadBalancerMethod(service rancherData) string {
|
||||
func (p *Provider) hasLoadBalancerLabel(service rancherData) bool {
|
||||
_, errMethod := getServiceLabel(service, types.LabelBackendLoadbalancerMethod)
|
||||
_, errSticky := getServiceLabel(service, types.LabelBackendLoadbalancerSticky)
|
||||
if errMethod != nil && errSticky != nil {
|
||||
return false
|
||||
}
|
||||
return true
|
||||
_, errStickiness := getServiceLabel(service, types.LabelBackendLoadbalancerStickiness)
|
||||
_, errCookieName := getServiceLabel(service, types.LabelBackendLoadbalancerStickinessCookieName)
|
||||
|
||||
return errMethod == nil || errSticky == nil || errStickiness == nil || errCookieName == nil
|
||||
}
|
||||
|
||||
func (p *Provider) hasCircuitBreakerLabel(service rancherData) bool {
|
||||
@@ -114,11 +114,25 @@ func (p *Provider) getCircuitBreakerExpression(service rancherData) string {
|
||||
|
||||
func (p *Provider) getSticky(service rancherData) string {
|
||||
if _, err := getServiceLabel(service, types.LabelBackendLoadbalancerSticky); err == nil {
|
||||
log.Warnf("Deprecated configuration found: %s. Please use %s.", types.LabelBackendLoadbalancerSticky, types.LabelBackendLoadbalancerStickiness)
|
||||
return "true"
|
||||
}
|
||||
return "false"
|
||||
}
|
||||
|
||||
func (p *Provider) hasStickinessLabel(service rancherData) bool {
|
||||
labelStickiness, errStickiness := getServiceLabel(service, types.LabelBackendLoadbalancerStickiness)
|
||||
|
||||
return errStickiness == nil && len(labelStickiness) > 0 && strings.EqualFold(strings.TrimSpace(labelStickiness), "true")
|
||||
}
|
||||
|
||||
func (p *Provider) getStickinessCookieName(service rancherData) string {
|
||||
if label, err := getServiceLabel(service, types.LabelBackendLoadbalancerStickinessCookieName); err == nil {
|
||||
return label
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (p *Provider) getBackend(service rancherData) string {
|
||||
if label, err := getServiceLabel(service, types.LabelBackend); err == nil {
|
||||
return provider.Normalize(label)
|
||||
@@ -223,6 +237,8 @@ func (p *Provider) loadRancherConfig(services []rancherData) *types.Configuratio
|
||||
"getMaxConnAmount": p.getMaxConnAmount,
|
||||
"getMaxConnExtractorFunc": p.getMaxConnExtractorFunc,
|
||||
"getSticky": p.getSticky,
|
||||
"hasStickinessLabel": p.hasStickinessLabel,
|
||||
"getStickinessCookieName": p.getStickinessCookieName,
|
||||
}
|
||||
|
||||
// filter services
|
||||
|
||||
@@ -6,6 +6,7 @@ import (
|
||||
"testing"
|
||||
|
||||
"github.com/containous/traefik/types"
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
func TestRancherServiceFilter(t *testing.T) {
|
||||
@@ -597,3 +598,53 @@ func TestRancherLoadRancherConfig(t *testing.T) {
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestRancherHasStickinessLabel(t *testing.T) {
|
||||
provider := &Provider{
|
||||
Domain: "rancher.localhost",
|
||||
}
|
||||
|
||||
testCases := []struct {
|
||||
desc string
|
||||
service rancherData
|
||||
expected bool
|
||||
}{
|
||||
{
|
||||
desc: "no labels",
|
||||
service: rancherData{
|
||||
Name: "test-service",
|
||||
},
|
||||
expected: false,
|
||||
},
|
||||
{
|
||||
desc: "stickiness=true",
|
||||
service: rancherData{
|
||||
Name: "test-service",
|
||||
Labels: map[string]string{
|
||||
types.LabelBackendLoadbalancerStickiness: "true",
|
||||
},
|
||||
},
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
desc: "stickiness=false",
|
||||
service: rancherData{
|
||||
Name: "test-service",
|
||||
Labels: map[string]string{
|
||||
types.LabelBackendLoadbalancerStickiness: "false",
|
||||
},
|
||||
},
|
||||
expected: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
actual := provider.hasStickinessLabel(test.service)
|
||||
assert.Equal(t, actual, test.expected)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -56,17 +56,9 @@ func goroutines() interface{} {
|
||||
// Provide allows the provider to provide configurations to traefik
|
||||
// using the given configuration channel.
|
||||
func (provider *Provider) Provide(configurationChan chan<- types.ConfigMessage, pool *safe.Pool, _ types.Constraints) error {
|
||||
|
||||
systemRouter := mux.NewRouter()
|
||||
|
||||
if provider.Path == "" {
|
||||
provider.Path = "/"
|
||||
}
|
||||
|
||||
if provider.Path != "/" {
|
||||
if provider.Path[len(provider.Path)-1:] != "/" {
|
||||
provider.Path += "/"
|
||||
}
|
||||
systemRouter.Methods("GET").Path("/").HandlerFunc(func(response http.ResponseWriter, request *http.Request) {
|
||||
http.Redirect(response, request, provider.Path, 302)
|
||||
})
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
mkdocs>=0.16.1
|
||||
mkdocs==0.16.3
|
||||
pymdown-extensions>=1.4
|
||||
mkdocs-bootswatch>=0.4.0
|
||||
mkdocs-material>=1.8.1
|
||||
mkdocs-material==1.12.2
|
||||
|
||||
@@ -24,13 +24,17 @@ GIT_REPO_URL='github.com/containous/traefik/version'
|
||||
GO_BUILD_CMD="go build -ldflags"
|
||||
GO_BUILD_OPT="-s -w -X ${GIT_REPO_URL}.Version=${VERSION} -X ${GIT_REPO_URL}.Codename=${CODENAME} -X ${GIT_REPO_URL}.BuildDate=${DATE}"
|
||||
|
||||
# Build 386 amd64 binaries
|
||||
# Build amd64 binaries
|
||||
OS_PLATFORM_ARG=(linux windows darwin)
|
||||
OS_ARCH_ARG=(amd64)
|
||||
for OS in ${OS_PLATFORM_ARG[@]}; do
|
||||
BIN_EXT=''
|
||||
if [ "$OS" == "windows" ]; then
|
||||
BIN_EXT='.exe'
|
||||
fi
|
||||
for ARCH in ${OS_ARCH_ARG[@]}; do
|
||||
echo "Building binary for ${OS}/${ARCH}..."
|
||||
GOARCH=${ARCH} GOOS=${OS} CGO_ENABLED=0 ${GO_BUILD_CMD} "${GO_BUILD_OPT}" -o "dist/traefik_${OS}-${ARCH}" ./cmd/traefik/
|
||||
GOARCH=${ARCH} GOOS=${OS} CGO_ENABLED=0 ${GO_BUILD_CMD} "${GO_BUILD_OPT}" -o "dist/traefik_${OS}-${ARCH}${BIN_EXT}" ./cmd/traefik/
|
||||
done
|
||||
done
|
||||
|
||||
|
||||
@@ -28,9 +28,13 @@ GO_BUILD_OPT="-s -w -X ${GIT_REPO_URL}.Version=${VERSION} -X ${GIT_REPO_URL}.Cod
|
||||
OS_PLATFORM_ARG=(linux windows darwin)
|
||||
OS_ARCH_ARG=(386)
|
||||
for OS in ${OS_PLATFORM_ARG[@]}; do
|
||||
BIN_EXT=''
|
||||
if [ "$OS" == "windows" ]; then
|
||||
BIN_EXT='.exe'
|
||||
fi
|
||||
for ARCH in ${OS_ARCH_ARG[@]}; do
|
||||
echo "Building binary for $OS/$ARCH..."
|
||||
GOARCH=${ARCH} GOOS=${OS} CGO_ENABLED=0 ${GO_BUILD_CMD} "$GO_BUILD_OPT" -o "dist/traefik_$OS-$ARCH" ./cmd/traefik/
|
||||
echo "Building binary for ${OS}/${ARCH}..."
|
||||
GOARCH=${ARCH} GOOS=${OS} CGO_ENABLED=0 ${GO_BUILD_CMD} "${GO_BUILD_OPT}" -o "dist/traefik_${OS}-${ARCH}${BIN_EXT}" ./cmd/traefik/
|
||||
done
|
||||
done
|
||||
|
||||
|
||||
server/cookie/cookie.go (new file, 57 lines)
@@ -0,0 +1,57 @@
|
||||
package cookie
|
||||
|
||||
import (
|
||||
"crypto/sha1"
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"github.com/containous/traefik/log"
|
||||
)
|
||||
|
||||
const cookieNameLength = 6
|
||||
|
||||
// GetName of a cookie
|
||||
func GetName(cookieName string, backendName string) string {
|
||||
if len(cookieName) != 0 {
|
||||
return sanitizeName(cookieName)
|
||||
}
|
||||
|
||||
return GenerateName(backendName)
|
||||
}
|
||||
|
||||
// GenerateName generates a hashed name from the backend name
|
||||
func GenerateName(backendName string) string {
|
||||
data := []byte("_TRAEFIK_BACKEND_" + backendName)
|
||||
|
||||
hash := sha1.New()
|
||||
_, err := hash.Write(data)
|
||||
if err != nil {
|
||||
// Impossible case
|
||||
log.Errorf("Fail to create cookie name: %v", err)
|
||||
}
|
||||
|
||||
return fmt.Sprintf("_%x", hash.Sum(nil))[:cookieNameLength]
|
||||
}
|
||||
|
||||
// sanitizeName replaces characters that are not valid in a cookie name (token characters per [RFC 2616](https://www.ietf.org/rfc/rfc2616.txt) section 2.2) with underscores
|
||||
func sanitizeName(backend string) string {
|
||||
sanitizer := func(r rune) rune {
|
||||
switch r {
|
||||
case '!', '#', '$', '%', '&', '\'', '*', '+', '-', '.', '^', '`', '|', '~':
|
||||
return r
|
||||
}
|
||||
|
||||
switch {
|
||||
case 'a' <= r && r <= 'z':
|
||||
fallthrough
|
||||
case 'A' <= r && r <= 'Z':
|
||||
fallthrough
|
||||
case '0' <= r && r <= '9':
|
||||
return r
|
||||
default:
|
||||
return '_'
|
||||
}
|
||||
}
|
||||
|
||||
return strings.Map(sanitizer, backend)
|
||||
}
|
||||
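A quick usage sketch for the new cookie helpers (import path taken from the file above; the generated value is a 6-character SHA1-derived name, so the exact output depends on the backend name):

```go
package main

import (
	"fmt"

	"github.com/containous/traefik/server/cookie"
)

func main() {
	// Explicit cookie name: sanitized but otherwise kept as-is.
	fmt.Println(cookie.GetName("/my/cookie", "backend1")) // "_my_cookie"

	// No cookie name configured: fall back to a short hash of the backend name
	// ("_" followed by the first characters of the SHA1 sum).
	fmt.Println(cookie.GetName("", "backend1"))
}
```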
server/cookie/cookie_test.go (new file, 83 lines)
@@ -0,0 +1,83 @@
|
||||
package cookie
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
func TestGetName(t *testing.T) {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
cookieName string
|
||||
backendName string
|
||||
expectedCookieName string
|
||||
}{
|
||||
{
|
||||
desc: "with backend name, without cookie name",
|
||||
cookieName: "",
|
||||
backendName: "/my/BACKEND-v1.0~rc1",
|
||||
expectedCookieName: "_5f7bc",
|
||||
},
|
||||
{
|
||||
desc: "without backend name, with cookie name",
|
||||
cookieName: "/my/BACKEND-v1.0~rc1",
|
||||
backendName: "",
|
||||
expectedCookieName: "_my_BACKEND-v1.0~rc1",
|
||||
},
|
||||
{
|
||||
desc: "with backend name, with cookie name",
|
||||
cookieName: "containous",
|
||||
backendName: "treafik",
|
||||
expectedCookieName: "containous",
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
cookieName := GetName(test.cookieName, test.backendName)
|
||||
|
||||
assert.Equal(t, test.expectedCookieName, cookieName)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func Test_sanitizeName(t *testing.T) {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
srcName string
|
||||
expectedName string
|
||||
}{
|
||||
{
|
||||
desc: "with /",
|
||||
srcName: "/my/BACKEND-v1.0~rc1",
|
||||
expectedName: "_my_BACKEND-v1.0~rc1",
|
||||
},
|
||||
{
|
||||
desc: "some chars",
|
||||
srcName: "!#$%&'()*+-./:<=>?@[]^_`{|}~",
|
||||
expectedName: "!#$%&'__*+-._________^_`_|_~",
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
cookieName := sanitizeName(test.srcName)
|
||||
|
||||
assert.Equal(t, test.expectedName, cookieName, "Cookie name")
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestGenerateName(t *testing.T) {
|
||||
cookieName := GenerateName("containous")
|
||||
|
||||
assert.Len(t, cookieName, 6)
|
||||
assert.Equal(t, "_8a7bc", cookieName)
|
||||
}
|
||||
server/header_rewriter.go (new file, 51 lines)
@@ -0,0 +1,51 @@
|
||||
package server
|
||||
|
||||
import (
|
||||
"net"
|
||||
"net/http"
|
||||
"os"
|
||||
|
||||
"github.com/containous/traefik/whitelist"
|
||||
"github.com/vulcand/oxy/forward"
|
||||
)
|
||||
|
||||
// NewHeaderRewriter creates a header rewriter that trusts forwarded headers only from the given IPs
|
||||
func NewHeaderRewriter(trustedIPs []string, insecure bool) (forward.ReqRewriter, error) {
|
||||
IPs, err := whitelist.NewIP(trustedIPs, insecure)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
h, err := os.Hostname()
|
||||
if err != nil {
|
||||
h = "localhost"
|
||||
}
|
||||
|
||||
return &headerRewriter{
|
||||
secureRewriter: &forward.HeaderRewriter{TrustForwardHeader: true, Hostname: h},
|
||||
insecureRewriter: &forward.HeaderRewriter{TrustForwardHeader: false, Hostname: h},
|
||||
ips: IPs,
|
||||
insecure: insecure,
|
||||
}, nil
|
||||
}
|
||||
|
||||
type headerRewriter struct {
|
||||
secureRewriter forward.ReqRewriter
|
||||
insecureRewriter forward.ReqRewriter
|
||||
insecure bool
|
||||
ips *whitelist.IP
|
||||
}
|
||||
|
||||
func (h *headerRewriter) Rewrite(req *http.Request) {
|
||||
clientIP, _, err := net.SplitHostPort(req.RemoteAddr)
|
||||
if err != nil {
h.secureRewriter.Rewrite(req)
return
}
|
||||
|
||||
authorized, _, err := h.ips.Contains(clientIP)
|
||||
if h.insecure || authorized {
|
||||
h.secureRewriter.Rewrite(req)
|
||||
} else {
|
||||
h.insecureRewriter.Rewrite(req)
|
||||
}
|
||||
}
|
||||
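The rewriter above keeps the client's `X-Forwarded-*` headers only when the request comes from a trusted IP (or when `insecure` is set), and rewrites them otherwise. A standard-library-only illustration of that trust decision, not the traefik whitelist types themselves:

```go
package main

import (
	"fmt"
	"net"
)

// trusted reports whether the remote address falls inside one of the
// trusted CIDR ranges; parse failures are treated as untrusted here.
func trusted(remoteAddr string, trustedCIDRs []string) bool {
	host, _, err := net.SplitHostPort(remoteAddr)
	if err != nil {
		return false
	}
	ip := net.ParseIP(host)
	for _, cidr := range trustedCIDRs {
		if _, ipNet, err := net.ParseCIDR(cidr); err == nil && ipNet.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	cidrs := []string{"10.0.0.0/8"}
	fmt.Println(trusted("10.1.2.3:54321", cidrs))  // true  -> keep forwarded headers
	fmt.Println(trusted("203.0.113.7:443", cidrs)) // false -> rewrite them
}
```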
@@ -136,6 +136,57 @@ func TestPriorites(t *testing.T) {
|
||||
assert.NotEqual(t, foobarMatcher.Handler, fooHandler, "Error matching priority")
|
||||
}
|
||||
|
||||
func TestHostRegexp(t *testing.T) {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
hostExp string
|
||||
urls map[string]bool
|
||||
}{
|
||||
{
|
||||
desc: "capturing group",
|
||||
hostExp: "{subdomain:(foo\\.)?bar\\.com}",
|
||||
urls: map[string]bool{
|
||||
"http://foo.bar.com": true,
|
||||
"http://bar.com": true,
|
||||
"http://fooubar.com": false,
|
||||
"http://barucom": false,
|
||||
"http://barcom": false,
|
||||
},
|
||||
},
|
||||
{
|
||||
desc: "non capturing group",
|
||||
hostExp: "{subdomain:(?:foo\\.)?bar\\.com}",
|
||||
urls: map[string]bool{
|
||||
"http://foo.bar.com": true,
|
||||
"http://bar.com": true,
|
||||
"http://fooubar.com": false,
|
||||
"http://barucom": false,
|
||||
"http://barcom": false,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
rls := &Rules{
|
||||
route: &serverRoute{
|
||||
route: &mux.Route{},
|
||||
},
|
||||
}
|
||||
|
||||
rt := rls.hostRegexp(test.hostExp)
|
||||
|
||||
for testURL, match := range test.urls {
|
||||
req := testhelpers.MustNewRequest(http.MethodGet, testURL, nil)
|
||||
assert.Equal(t, match, rt.Match(req, &mux.RouteMatch{}))
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
type fakeHandler struct {
|
||||
name string
|
||||
}
|
||||
|
||||
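The expression inside `{subdomain:...}` is a regular expression matched against the request host, so the capturing and non-capturing variants exercised by the test above behave the same. A quick standalone check of the two patterns (anchored explicitly here for the sketch):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	capturing := regexp.MustCompile(`^(foo\.)?bar\.com$`)
	nonCapturing := regexp.MustCompile(`^(?:foo\.)?bar\.com$`)

	for _, host := range []string{"foo.bar.com", "bar.com", "fooubar.com"} {
		fmt.Println(host, capturing.MatchString(host), nonCapturing.MatchString(host))
	}
	// foo.bar.com true true
	// bar.com true true
	// fooubar.com false false
}
```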
@@ -6,7 +6,9 @@ import (
|
||||
"crypto/x509"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
stdlog "log"
|
||||
"net"
|
||||
"net/http"
|
||||
"net/url"
|
||||
@@ -18,6 +20,7 @@ import (
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
"github.com/armon/go-proxyproto"
|
||||
"github.com/containous/mux"
|
||||
"github.com/containous/traefik/cluster"
|
||||
@@ -30,7 +33,9 @@ import (
|
||||
mauth "github.com/containous/traefik/middlewares/auth"
|
||||
"github.com/containous/traefik/provider"
|
||||
"github.com/containous/traefik/safe"
|
||||
"github.com/containous/traefik/server/cookie"
|
||||
"github.com/containous/traefik/types"
|
||||
"github.com/containous/traefik/whitelist"
|
||||
"github.com/streamrail/concurrent-map"
|
||||
thoas_stats "github.com/thoas/stats"
|
||||
"github.com/urfave/negroni"
|
||||
@@ -43,7 +48,8 @@ import (
|
||||
)
|
||||
|
||||
var (
|
||||
oxyLogger = &OxyLogger{}
|
||||
oxyLogger = &OxyLogger{}
|
||||
httpServerLogger = stdlog.New(log.WriterLevel(logrus.DebugLevel), "", 0)
|
||||
)
|
||||
|
||||
// Server is the reverse-proxy/load-balancer engine
|
||||
@@ -153,8 +159,8 @@ func createHTTPTransport(globalConfiguration configuration.GlobalConfiguration)
|
||||
transport.TLSClientConfig = &tls.Config{
|
||||
RootCAs: createRootCACertPool(globalConfiguration.RootCAs),
|
||||
}
|
||||
http2.ConfigureTransport(transport)
|
||||
}
|
||||
http2.ConfigureTransport(transport)
|
||||
|
||||
return transport
|
||||
}
|
||||
@@ -338,7 +344,7 @@ func (server *Server) listenProviders(stop chan bool) {
|
||||
// last config received more than n s ago
|
||||
server.configurationValidatedChan <- configMsg
|
||||
} else {
|
||||
log.Debugf("Last %s config received less than %s, waiting...", configMsg.ProviderName, server.globalConfiguration.ProvidersThrottleDuration.String())
|
||||
log.Debugf("Last %s config received less than %s, waiting...", configMsg.ProviderName, server.globalConfiguration.ProvidersThrottleDuration)
|
||||
safe.Go(func() {
|
||||
<-time.After(providersThrottleDuration)
|
||||
lastReceivedConfigurationValue := lastReceivedConfiguration.Get().(time.Time)
|
||||
@@ -651,8 +657,22 @@ func (server *Server) prepareServer(entryPointName string, entryPoint *configura
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
if entryPoint.ProxyProtocol {
|
||||
listener = &proxyproto.Listener{Listener: listener}
|
||||
if entryPoint.ProxyProtocol != nil {
|
||||
IPs, err := whitelist.NewIP(entryPoint.ProxyProtocol.TrustedIPs, entryPoint.ProxyProtocol.Insecure)
|
||||
if err != nil {
|
||||
return nil, nil, fmt.Errorf("Error creating whitelist: %s", err)
|
||||
}
|
||||
log.Infof("Enabling ProxyProtocol for trusted IPs %v", entryPoint.ProxyProtocol.TrustedIPs)
|
||||
listener = &proxyproto.Listener{
|
||||
Listener: listener,
|
||||
SourceCheck: func(addr net.Addr) (bool, error) {
|
||||
ip, ok := addr.(*net.TCPAddr)
|
||||
if !ok {
|
||||
return false, fmt.Errorf("type error %v", addr)
|
||||
}
|
||||
return IPs.ContainsIP(ip.IP)
|
||||
},
|
||||
}
|
||||
}
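The hunk above wraps the entry point listener in a PROXY protocol decoder only when the source address is trusted. A minimal standalone sketch of the same pattern, reusing the whitelist package and go-proxyproto SourceCheck exactly as the diff does (the listen address and whitelisted ranges below are placeholders, and the real server wiring is omitted):

```go
package main

import (
	"fmt"
	"net"

	"github.com/armon/go-proxyproto"
	"github.com/containous/traefik/whitelist"
)

func main() {
	inner, err := net.Listen("tcp", "127.0.0.1:8080")
	if err != nil {
		panic(err)
	}

	// Accept PROXY protocol headers only from these trusted sources (illustrative values).
	trusted, err := whitelist.NewIP([]string{"10.0.0.0/8", "127.0.0.1"}, false)
	if err != nil {
		panic(err)
	}

	listener := &proxyproto.Listener{
		Listener: inner,
		SourceCheck: func(addr net.Addr) (bool, error) {
			tcpAddr, ok := addr.(*net.TCPAddr)
			if !ok {
				return false, fmt.Errorf("type error %v", addr)
			}
			return trusted.ContainsIP(tcpAddr.IP)
		},
	}
	defer listener.Close()
	// In prepareServer above, this listener is then handed to the http.Server.
}
```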
|
||||
|
||||
return &http.Server{
|
||||
@@ -662,6 +682,7 @@ func (server *Server) prepareServer(entryPointName string, entryPoint *configura
|
||||
ReadTimeout: readTimeout,
|
||||
WriteTimeout: writeTimeout,
|
||||
IdleTimeout: idleTimeout,
|
||||
ErrorLog: httpServerLogger,
|
||||
},
|
||||
listener,
|
||||
nil
|
||||
@@ -789,11 +810,19 @@ func (server *Server) loadConfig(configurations types.Configurations, globalConf
|
||||
continue frontend
|
||||
}
|
||||
|
||||
rewriter, err := NewHeaderRewriter(entryPoint.ForwardedHeaders.TrustedIPs, entryPoint.ForwardedHeaders.Insecure)
|
||||
if err != nil {
|
||||
log.Errorf("Error creating rewriter for frontend %s: %v", frontendName, err)
|
||||
log.Errorf("Skipping frontend %s...", frontendName)
|
||||
continue frontend
|
||||
}
|
||||
|
||||
fwd, err := forward.New(
|
||||
forward.Logger(oxyLogger),
|
||||
forward.PassHostHeader(frontend.PassHostHeader),
|
||||
forward.RoundTripper(roundTripper),
|
||||
forward.ErrorHandler(errorHandler),
|
||||
forward.Rewriter(rewriter),
|
||||
)
|
||||
|
||||
if err != nil {
|
||||
@@ -825,11 +854,10 @@ func (server *Server) loadConfig(configurations types.Configurations, globalConf
|
||||
continue frontend
|
||||
}
|
||||
|
||||
stickySession := config.Backends[frontend.Backend].LoadBalancer.Sticky
|
||||
cookieName := "_TRAEFIK_BACKEND_" + frontend.Backend
|
||||
var sticky *roundrobin.StickySession
|
||||
|
||||
if stickySession {
|
||||
var cookieName string
|
||||
if stickiness := config.Backends[frontend.Backend].LoadBalancer.Stickiness; stickiness != nil {
|
||||
cookieName = cookie.GetName(stickiness.CookieName, frontend.Backend)
|
||||
sticky = roundrobin.NewStickySession(cookieName)
|
||||
}
|
||||
|
||||
@@ -838,7 +866,7 @@ func (server *Server) loadConfig(configurations types.Configurations, globalConf
|
||||
case types.Drr:
|
||||
log.Debugf("Creating load-balancer drr")
|
||||
rebalancer, _ := roundrobin.NewRebalancer(rr, roundrobin.RebalancerLogger(oxyLogger))
|
||||
if stickySession {
|
||||
if sticky != nil {
|
||||
log.Debugf("Sticky session with cookie %v", cookieName)
|
||||
rebalancer, _ = roundrobin.NewRebalancer(rr, roundrobin.RebalancerLogger(oxyLogger), roundrobin.RebalancerStickySession(sticky))
|
||||
}
|
||||
@@ -855,7 +883,7 @@ func (server *Server) loadConfig(configurations types.Configurations, globalConf
|
||||
lb = middlewares.NewEmptyBackendHandler(rebalancer, lb)
|
||||
case types.Wrr:
|
||||
log.Debugf("Creating load-balancer wrr")
|
||||
if stickySession {
|
||||
if sticky != nil {
|
||||
log.Debugf("Sticky session with cookie %v", cookieName)
|
||||
if server.accessLoggerMiddleware != nil {
|
||||
rr, _ = roundrobin.New(saveFrontend, roundrobin.EnableStickySession(sticky))
|
||||
@@ -1148,17 +1176,37 @@ func (server *Server) configureFrontends(frontends map[string]*types.Frontend) {
|
||||
}
|
||||
|
||||
func (*Server) configureBackends(backends map[string]*types.Backend) {
|
||||
for backendName, backend := range backends {
|
||||
for backendName := range backends {
|
||||
backend := backends[backendName]
|
||||
if backend.LoadBalancer != nil && backend.LoadBalancer.Sticky {
|
||||
log.Warnf("Deprecated configuration found: %s. Please use %s.", "backend.LoadBalancer.Sticky", "backend.LoadBalancer.Stickiness")
|
||||
}
|
||||
|
||||
_, err := types.NewLoadBalancerMethod(backend.LoadBalancer)
|
||||
if err != nil {
|
||||
if err == nil {
|
||||
if backend.LoadBalancer != nil && backend.LoadBalancer.Stickiness == nil && backend.LoadBalancer.Sticky {
|
||||
backend.LoadBalancer.Stickiness = &types.Stickiness{
|
||||
CookieName: "_TRAEFIK_BACKEND",
|
||||
}
|
||||
}
|
||||
} else {
|
||||
log.Debugf("Validation of load balancer method for backend %s failed: %s. Using default method wrr.", backendName, err)
|
||||
var sticky bool
|
||||
|
||||
var stickiness *types.Stickiness
|
||||
if backend.LoadBalancer != nil {
|
||||
sticky = backend.LoadBalancer.Sticky
|
||||
if backend.LoadBalancer.Stickiness == nil {
|
||||
if backend.LoadBalancer.Sticky {
|
||||
stickiness = &types.Stickiness{
|
||||
CookieName: "_TRAEFIK_BACKEND",
|
||||
}
|
||||
}
|
||||
} else {
|
||||
stickiness = backend.LoadBalancer.Stickiness
|
||||
}
|
||||
}
|
||||
backend.LoadBalancer = &types.LoadBalancer{
|
||||
Method: "wrr",
|
||||
Sticky: sticky,
|
||||
Method: "wrr",
|
||||
Stickiness: stickiness,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -96,7 +96,10 @@ func TestPrepareServerTimeouts(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
entryPointName := "http"
|
||||
entryPoint := &configuration.EntryPoint{Address: "localhost:0"}
|
||||
entryPoint := &configuration.EntryPoint{
|
||||
Address: "localhost:0",
|
||||
ForwardedHeaders: &configuration.ForwardedHeaders{Insecure: true},
|
||||
}
|
||||
router := middlewares.NewHandlerSwitcher(mux.NewRouter())
|
||||
|
||||
srv := NewServer(test.globalConfig)
|
||||
@@ -210,7 +213,9 @@ func TestServerLoadConfigHealthCheckOptions(t *testing.T) {
|
||||
t.Run(fmt.Sprintf("%s/hc=%t", lbMethod, healthCheck != nil), func(t *testing.T) {
|
||||
globalConfig := configuration.GlobalConfiguration{
|
||||
EntryPoints: configuration.EntryPoints{
|
||||
"http": &configuration.EntryPoint{},
|
||||
"http": &configuration.EntryPoint{
|
||||
ForwardedHeaders: &configuration.ForwardedHeaders{Insecure: true},
|
||||
},
|
||||
},
|
||||
HealthCheck: &configuration.HealthCheckConfig{Interval: flaeg.Duration(5 * time.Second)},
|
||||
}
|
||||
@@ -355,7 +360,7 @@ func TestNewServerWithWhitelistSourceRange(t *testing.T) {
|
||||
"foo",
|
||||
},
|
||||
middlewareConfigured: false,
|
||||
errMessage: "parsing CIDR whitelist <nil>: invalid CIDR address: foo",
|
||||
errMessage: "parsing CIDR whitelist [foo]: parsing CIDR whitelist <nil>: invalid CIDR address: foo",
|
||||
},
|
||||
}
|
||||
|
||||
@@ -383,7 +388,7 @@ func TestNewServerWithWhitelistSourceRange(t *testing.T) {
|
||||
func TestServerLoadConfigEmptyBasicAuth(t *testing.T) {
|
||||
globalConfig := configuration.GlobalConfiguration{
|
||||
EntryPoints: configuration.EntryPoints{
|
||||
"http": &configuration.EntryPoint{},
|
||||
"http": &configuration.EntryPoint{ForwardedHeaders: &configuration.ForwardedHeaders{Insecure: true}},
|
||||
},
|
||||
}
|
||||
|
||||
@@ -422,52 +427,49 @@ func TestConfigureBackends(t *testing.T) {
|
||||
defaultMethod := "wrr"
|
||||
|
||||
tests := []struct {
|
||||
desc string
|
||||
lb *types.LoadBalancer
|
||||
wantMethod string
|
||||
wantSticky bool
|
||||
desc string
|
||||
lb *types.LoadBalancer
|
||||
wantMethod string
|
||||
wantStickiness *types.Stickiness
|
||||
}{
|
||||
{
|
||||
desc: "valid load balancer method with sticky enabled",
|
||||
lb: &types.LoadBalancer{
|
||||
Method: validMethod,
|
||||
Sticky: true,
|
||||
Method: validMethod,
|
||||
Stickiness: &types.Stickiness{},
|
||||
},
|
||||
wantMethod: validMethod,
|
||||
wantSticky: true,
|
||||
wantMethod: validMethod,
|
||||
wantStickiness: &types.Stickiness{},
|
||||
},
|
||||
{
|
||||
desc: "valid load balancer method with sticky disabled",
|
||||
lb: &types.LoadBalancer{
|
||||
Method: validMethod,
|
||||
Sticky: false,
|
||||
Method: validMethod,
|
||||
Stickiness: nil,
|
||||
},
|
||||
wantMethod: validMethod,
|
||||
wantSticky: false,
|
||||
},
|
||||
{
|
||||
desc: "invalid load balancer method with sticky enabled",
|
||||
lb: &types.LoadBalancer{
|
||||
Method: "Invalid",
|
||||
Sticky: true,
|
||||
Method: "Invalid",
|
||||
Stickiness: &types.Stickiness{},
|
||||
},
|
||||
wantMethod: defaultMethod,
|
||||
wantSticky: true,
|
||||
wantMethod: defaultMethod,
|
||||
wantStickiness: &types.Stickiness{},
|
||||
},
|
||||
{
|
||||
desc: "invalid load balancer method with sticky disabled",
|
||||
lb: &types.LoadBalancer{
|
||||
Method: "Invalid",
|
||||
Sticky: false,
|
||||
Method: "Invalid",
|
||||
Stickiness: nil,
|
||||
},
|
||||
wantMethod: defaultMethod,
|
||||
wantSticky: false,
|
||||
},
|
||||
{
|
||||
desc: "missing load balancer",
|
||||
lb: nil,
|
||||
wantMethod: defaultMethod,
|
||||
wantSticky: false,
|
||||
},
|
||||
}
|
||||
|
||||
@@ -485,8 +487,8 @@ func TestConfigureBackends(t *testing.T) {
|
||||
})
|
||||
|
||||
wantLB := types.LoadBalancer{
|
||||
Method: test.wantMethod,
|
||||
Sticky: test.wantSticky,
|
||||
Method: test.wantMethod,
|
||||
Stickiness: test.wantStickiness,
|
||||
}
|
||||
if !reflect.DeepEqual(*backend.LoadBalancer, wantLB) {
|
||||
t.Errorf("got backend load-balancer\n%v\nwant\n%v\n", spew.Sdump(backend.LoadBalancer), spew.Sdump(wantLB))
|
||||
@@ -495,7 +497,7 @@ func TestConfigureBackends(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestServerEntrypointWhitelistConfig(t *testing.T) {
|
||||
func TestServerEntryPointWhitelistConfig(t *testing.T) {
|
||||
tests := []struct {
|
||||
desc string
|
||||
entrypoint *configuration.EntryPoint
|
||||
@@ -504,7 +506,8 @@ func TestServerEntrypointWhitelistConfig(t *testing.T) {
|
||||
{
|
||||
desc: "no whitelist middleware if no config on entrypoint",
|
||||
entrypoint: &configuration.EntryPoint{
|
||||
Address: ":0",
|
||||
Address: ":0",
|
||||
ForwardedHeaders: &configuration.ForwardedHeaders{Insecure: true},
|
||||
},
|
||||
wantMiddleware: false,
|
||||
},
|
||||
@@ -515,6 +518,7 @@ func TestServerEntrypointWhitelistConfig(t *testing.T) {
|
||||
WhitelistSourceRange: []string{
|
||||
"127.0.0.1/32",
|
||||
},
|
||||
ForwardedHeaders: &configuration.ForwardedHeaders{Insecure: true},
|
||||
},
|
||||
wantMiddleware: true,
|
||||
},
|
||||
@@ -539,7 +543,7 @@ func TestServerEntrypointWhitelistConfig(t *testing.T) {
|
||||
handler := srvEntryPoint.httpServer.Handler.(*negroni.Negroni)
|
||||
found := false
|
||||
for _, handler := range handler.Handlers() {
|
||||
if reflect.TypeOf(handler) == reflect.TypeOf((*middlewares.IPWhitelister)(nil)) {
|
||||
if reflect.TypeOf(handler) == reflect.TypeOf((*middlewares.IPWhiteLister)(nil)) {
|
||||
found = true
|
||||
}
|
||||
}
|
||||
@@ -636,7 +640,7 @@ func TestServerResponseEmptyBackend(t *testing.T) {
|
||||
|
||||
globalConfig := configuration.GlobalConfiguration{
|
||||
EntryPoints: configuration.EntryPoints{
|
||||
"http": &configuration.EntryPoint{},
|
||||
"http": &configuration.EntryPoint{ForwardedHeaders: &configuration.ForwardedHeaders{Insecure: true}},
|
||||
},
|
||||
}
|
||||
dynamicConfigs := types.Configurations{"config": test.dynamicConfig(testServer.URL)}
|
||||
@@ -719,6 +723,10 @@ func withServer(name, url string) func(backend *types.Backend) {
|
||||
|
||||
func withLoadBalancer(method string, sticky bool) func(*types.Backend) {
|
||||
return func(be *types.Backend) {
|
||||
be.LoadBalancer = &types.LoadBalancer{Method: method, Sticky: sticky}
|
||||
if sticky {
|
||||
be.LoadBalancer = &types.LoadBalancer{Method: method, Stickiness: &types.Stickiness{CookieName: "test"}}
|
||||
} else {
|
||||
be.LoadBalancer = &types.LoadBalancer{Method: method}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -17,8 +17,12 @@
|
||||
{{end}}
|
||||
|
||||
[backends."backend-{{$service}}".loadbalancer]
|
||||
sticky = {{getAttribute "backend.loadbalancer.sticky" .Attributes "false"}}
|
||||
method = "{{getAttribute "backend.loadbalancer" .Attributes "wrr"}}"
|
||||
sticky = {{getSticky .Attributes}}
|
||||
{{if hasStickinessLabel .Attributes}}
|
||||
[backends."backend-{{$service}}".loadbalancer.stickiness]
|
||||
cookieName = "{{getStickinessCookieName .Attributes}}"
|
||||
{{end}}
|
||||
|
||||
{{if hasMaxconnAttributes .Attributes}}
|
||||
[backends."backend-{{$service}}".maxconn]
|
||||
|
||||
@@ -9,6 +9,10 @@
|
||||
[backends.backend-{{$backendName}}.loadbalancer]
|
||||
method = "{{getLoadBalancerMethod $backend}}"
|
||||
sticky = {{getSticky $backend}}
|
||||
{{if hasStickinessLabel $backend}}
|
||||
[backends.backend-{{$backendName}}.loadBalancer.stickiness]
|
||||
cookieName = "{{getStickinessCookieName $backend}}"
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
{{if hasMaxConnLabels $backend}}
|
||||
@@ -22,7 +26,7 @@
|
||||
{{if hasServices $server}}
|
||||
{{$services := getServiceNames $server}}
|
||||
{{range $serviceIndex, $serviceName := $services}}
|
||||
[backends.backend-{{getServiceBackend $server $serviceName}}.servers.service]
|
||||
[backends.backend-{{getServiceBackend $server $serviceName}}.servers.service-{{$serverName}}]
|
||||
url = "{{getServiceProtocol $server $serviceName}}://{{getIPAddress $server}}:{{getServicePort $server $serviceName}}"
|
||||
weight = {{getServiceWeight $server $serviceName}}
|
||||
{{end}}
|
||||
|
||||
@@ -1,7 +1,11 @@
|
||||
[backends]{{range $serviceName, $instances := .Services}}
|
||||
[backends.backend-{{ $serviceName }}.loadbalancer]
|
||||
sticky = {{ getLoadBalancerSticky $instances}}
|
||||
method = "{{ getLoadBalancerMethod $instances}}"
|
||||
sticky = {{ getLoadBalancerSticky $instances}}
|
||||
{{if hasStickinessLabel $instances}}
|
||||
[backends.backend-{{ $serviceName }}.loadbalancer.stickiness]
|
||||
cookieName = "{{getStickinessCookieName $instances}}"
|
||||
{{end}}
|
||||
|
||||
{{range $index, $i := $instances}}
|
||||
[backends.backend-{{ $i.Name }}.servers.server-{{ $i.Name }}{{ $i.ID }}]
|
||||
|
||||
@@ -7,7 +7,11 @@
|
||||
[backends."{{$backendName}}".loadbalancer]
|
||||
method = "{{$backend.LoadBalancer.Method}}"
|
||||
{{if $backend.LoadBalancer.Sticky}}
|
||||
sticky = true
|
||||
sticky = true
|
||||
{{end}}
|
||||
{{if $backend.LoadBalancer.Stickiness}}
|
||||
[backends."{{$backendName}}".loadbalancer.stickiness]
|
||||
cookieName = "{{$backend.LoadBalancer.Stickiness.CookieName}}"
|
||||
{{end}}
|
||||
{{range $serverName, $server := $backend.Servers}}
|
||||
[backends."{{$backendName}}".servers."{{$serverName}}"]
|
||||
|
||||
@@ -3,25 +3,29 @@
|
||||
|
||||
[backends]{{range $backends}}
|
||||
{{$backend := .}}
|
||||
{{$backendName := Last $backend}}
|
||||
{{$servers := ListServers $backend }}
|
||||
|
||||
{{$circuitBreaker := Get "" . "/circuitbreaker/" "expression"}}
|
||||
{{with $circuitBreaker}}
|
||||
[backends."{{Last $backend}}".circuitBreaker]
|
||||
[backends."{{$backendName}}".circuitBreaker]
|
||||
expression = "{{$circuitBreaker}}"
|
||||
{{end}}
|
||||
|
||||
{{$loadBalancer := Get "" . "/loadbalancer/" "method"}}
|
||||
{{$sticky := Get "false" . "/loadbalancer/" "sticky"}}
|
||||
{{with $loadBalancer}}
|
||||
[backends."{{Last $backend}}".loadBalancer]
|
||||
[backends."{{$backendName}}".loadBalancer]
|
||||
method = "{{$loadBalancer}}"
|
||||
sticky = {{$sticky}}
|
||||
sticky = {{ getSticky . }}
|
||||
{{if hasStickinessLabel $backend}}
|
||||
[backends."{{$backendName}}".loadBalancer.stickiness]
|
||||
cookieName = {{getStickinessCookieName $backend}}
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
{{$healthCheck := Get "" . "/healthcheck/" "path"}}
|
||||
{{with $healthCheck}}
|
||||
[backends."{{Last $backend}}".healthCheck]
|
||||
[backends."{{$backendName}}".healthCheck]
|
||||
path = "{{$healthCheck}}"
|
||||
interval = "{{ Get "30s" $backend "/healthcheck/" "interval" }}"
|
||||
{{end}}
|
||||
@@ -30,14 +34,14 @@
|
||||
{{$maxConnExtractorFunc := Get "" . "/maxconn/" "extractorfunc"}}
|
||||
{{with $maxConnAmt}}
|
||||
{{with $maxConnExtractorFunc}}
|
||||
[backends."{{Last $backend}}".maxConn]
|
||||
[backends."{{$backendName}}".maxConn]
|
||||
amount = {{$maxConnAmt}}
|
||||
extractorFunc = "{{$maxConnExtractorFunc}}"
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
{{range $servers}}
|
||||
[backends."{{Last $backend}}".servers."{{Last .}}"]
|
||||
[backends."{{$backendName}}".servers."{{Last .}}"]
|
||||
url = "{{Get "" . "/url"}}"
|
||||
weight = {{Get "0" . "/weight"}}
|
||||
{{end}}
|
||||
|
||||
@@ -12,6 +12,7 @@
|
||||
|
||||
{{range $app := $apps}}
|
||||
{{range $serviceIndex, $serviceName := getServiceNames $app}}
|
||||
[backends."backend{{getBackend $app $serviceName }}"]
|
||||
{{ if hasMaxConnLabels $app }}
|
||||
[backends."backend{{getBackend $app $serviceName }}".maxconn]
|
||||
amount = {{getMaxConnAmount $app }}
|
||||
@@ -21,6 +22,10 @@
|
||||
[backends."backend{{getBackend $app $serviceName }}".loadbalancer]
|
||||
method = "{{getLoadBalancerMethod $app }}"
|
||||
sticky = {{getSticky $app}}
|
||||
{{if hasStickinessLabel $app}}
|
||||
[backends."backend{{getBackend $app $serviceName }}".loadbalancer.stickiness]
|
||||
cookieName = "{{getStickinessCookieName $app}}"
|
||||
{{end}}
|
||||
{{end}}
|
||||
{{ if hasCircuitBreakerLabels $app }}
|
||||
[backends."backend{{getBackend $app $serviceName }}".circuitbreaker]
|
||||
|
||||
@@ -9,6 +9,10 @@
|
||||
[backends.backend-{{$backendName}}.loadbalancer]
|
||||
method = "{{getLoadBalancerMethod $backend}}"
|
||||
sticky = {{getSticky $backend}}
|
||||
{{if hasStickinessLabel $backend}}
|
||||
[backends.backend-{{$backendName}}.loadbalancer.stickiness]
|
||||
cookieName = "{{getStickinessCookieName $backend}}"
|
||||
{{end}}
|
||||
{{end}}
|
||||
|
||||
{{if hasMaxConnLabels $backend}}
|
||||
|
||||
@@ -2,59 +2,36 @@ package types
|
||||
|
||||
import "strings"
|
||||
|
||||
// Traefik labels
|
||||
const (
|
||||
// LabelPrefix Traefik label
|
||||
LabelPrefix = "traefik."
|
||||
// LabelDomain Traefik label
|
||||
LabelDomain = LabelPrefix + "domain"
|
||||
// LabelEnable Traefik label
|
||||
LabelEnable = LabelPrefix + "enable"
|
||||
// LabelPort Traefik label
|
||||
LabelPort = LabelPrefix + "port"
|
||||
// LabelPortIndex Traefik label
|
||||
LabelPortIndex = LabelPrefix + "portIndex"
|
||||
// LabelProtocol Traefik label
|
||||
LabelProtocol = LabelPrefix + "protocol"
|
||||
// LabelTags Traefik label
|
||||
LabelTags = LabelPrefix + "tags"
|
||||
// LabelWeight Traefik label
|
||||
LabelWeight = LabelPrefix + "weight"
|
||||
// LabelFrontendAuthBasic Traefik label
|
||||
LabelFrontendAuthBasic = LabelPrefix + "frontend.auth.basic"
|
||||
// LabelFrontendEntryPoints Traefik label
|
||||
LabelFrontendEntryPoints = LabelPrefix + "frontend.entryPoints"
|
||||
// LabelFrontendPassHostHeader Traefik label
|
||||
LabelFrontendPassHostHeader = LabelPrefix + "frontend.passHostHeader"
|
||||
// LabelFrontendPriority Traefik label
|
||||
LabelFrontendPriority = LabelPrefix + "frontend.priority"
|
||||
// LabelFrontendRule Traefik label
|
||||
LabelFrontendRule = LabelPrefix + "frontend.rule"
|
||||
// LabelFrontendRuleType Traefik label
|
||||
LabelFrontendRuleType = LabelPrefix + "frontend.rule.type"
|
||||
// LabelTraefikFrontendValue Traefik label
|
||||
LabelTraefikFrontendValue = LabelPrefix + "frontend.value"
|
||||
// LabelTraefikFrontendWhitelistSourceRange Traefik label
|
||||
LabelTraefikFrontendWhitelistSourceRange = LabelPrefix + "frontend.whitelistSourceRange"
|
||||
// LabelBackend Traefik label
|
||||
LabelBackend = LabelPrefix + "backend"
|
||||
// LabelBackendID Traefik label
|
||||
LabelBackendID = LabelPrefix + "backend.id"
|
||||
// LabelTraefikBackendCircuitbreaker Traefik label
|
||||
LabelTraefikBackendCircuitbreaker = LabelPrefix + "backend.circuitbreaker"
|
||||
// LabelBackendCircuitbreakerExpression Traefik label
|
||||
LabelBackendCircuitbreakerExpression = LabelPrefix + "backend.circuitbreaker.expression"
|
||||
// LabelBackendHealthcheckPath Traefik label
|
||||
LabelBackendHealthcheckPath = LabelPrefix + "backend.healthcheck.path"
|
||||
// LabelBackendHealthcheckInterval Traefik label
|
||||
LabelBackendHealthcheckInterval = LabelPrefix + "backend.healthcheck.interval"
|
||||
// LabelBackendLoadbalancerMethod Traefik label
|
||||
LabelBackendLoadbalancerMethod = LabelPrefix + "backend.loadbalancer.method"
|
||||
// LabelBackendLoadbalancerSticky Traefik label
|
||||
LabelBackendLoadbalancerSticky = LabelPrefix + "backend.loadbalancer.sticky"
|
||||
// LabelBackendMaxconnAmount Traefik label
|
||||
LabelBackendMaxconnAmount = LabelPrefix + "backend.maxconn.amount"
|
||||
// LabelBackendMaxconnExtractorfunc Traefik label
|
||||
LabelBackendMaxconnExtractorfunc = LabelPrefix + "backend.maxconn.extractorfunc"
|
||||
LabelPrefix = "traefik."
|
||||
LabelDomain = LabelPrefix + "domain"
|
||||
LabelEnable = LabelPrefix + "enable"
|
||||
LabelPort = LabelPrefix + "port"
|
||||
LabelPortIndex = LabelPrefix + "portIndex"
|
||||
LabelProtocol = LabelPrefix + "protocol"
|
||||
LabelTags = LabelPrefix + "tags"
|
||||
LabelWeight = LabelPrefix + "weight"
|
||||
LabelFrontendAuthBasic = LabelPrefix + "frontend.auth.basic"
|
||||
LabelFrontendEntryPoints = LabelPrefix + "frontend.entryPoints"
|
||||
LabelFrontendPassHostHeader = LabelPrefix + "frontend.passHostHeader"
|
||||
LabelFrontendPriority = LabelPrefix + "frontend.priority"
|
||||
LabelFrontendRule = LabelPrefix + "frontend.rule"
|
||||
LabelFrontendRuleType = LabelPrefix + "frontend.rule.type"
|
||||
LabelTraefikFrontendValue = LabelPrefix + "frontend.value"
|
||||
LabelTraefikFrontendWhitelistSourceRange = LabelPrefix + "frontend.whitelistSourceRange"
|
||||
LabelBackend = LabelPrefix + "backend"
|
||||
LabelBackendID = LabelPrefix + "backend.id"
|
||||
LabelTraefikBackendCircuitbreaker = LabelPrefix + "backend.circuitbreaker"
|
||||
LabelBackendCircuitbreakerExpression = LabelPrefix + "backend.circuitbreaker.expression"
|
||||
LabelBackendHealthcheckPath = LabelPrefix + "backend.healthcheck.path"
|
||||
LabelBackendHealthcheckInterval = LabelPrefix + "backend.healthcheck.interval"
|
||||
LabelBackendLoadbalancerMethod = LabelPrefix + "backend.loadbalancer.method"
|
||||
LabelBackendLoadbalancerSticky = LabelPrefix + "backend.loadbalancer.sticky"
|
||||
LabelBackendLoadbalancerStickiness = LabelPrefix + "backend.loadbalancer.stickiness"
|
||||
LabelBackendLoadbalancerStickinessCookieName = LabelPrefix + "backend.loadbalancer.stickiness.cookieName"
|
||||
LabelBackendMaxconnAmount = LabelPrefix + "backend.maxconn.amount"
|
||||
LabelBackendMaxconnExtractorfunc = LabelPrefix + "backend.maxconn.extractorfunc"
|
||||
)
|
||||
|
||||
//ServiceLabel converts a key value of Label*, given a serviceName, into a pattern <LabelPrefix>.<serviceName>.<property>
|
||||
|
||||
@@ -33,8 +33,14 @@ type MaxConn struct {
|
||||
|
||||
// LoadBalancer holds load balancing configuration.
|
||||
type LoadBalancer struct {
|
||||
Method string `json:"method,omitempty"`
|
||||
Sticky bool `json:"sticky,omitempty"`
|
||||
Method string `json:"method,omitempty"`
|
||||
Sticky bool `json:"sticky,omitempty"` // Deprecated: use Stickiness instead
|
||||
Stickiness *Stickiness `json:"stickiness,omitempty"`
|
||||
}
|
||||
|
||||
// Stickiness holds sticky session configuration.
|
||||
type Stickiness struct {
|
||||
CookieName string `json:"cookieName,omitempty"`
|
||||
}
|
||||
|
||||
// CircuitBreaker holds circuit breaker configuration.
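With this change, sticky sessions are expressed through the new Stickiness sub-structure; the boolean Sticky field is kept only for backward compatibility. A brief sketch of the new form (the cookie name is an arbitrary example):

```go
package main

import (
	"fmt"

	"github.com/containous/traefik/types"
)

func main() {
	lb := types.LoadBalancer{
		Method: "wrr",
		// Replaces the deprecated Sticky: true.
		Stickiness: &types.Stickiness{CookieName: "_custom_backend_cookie"},
	}
	fmt.Printf("%+v\n", lb.Stickiness)
}
```

When Stickiness is left nil but Sticky is set, configureBackends above migrates the configuration to a default cookie name and logs a deprecation warning.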
|
||||
|
||||
vendor/github.com/NYTimes/gziphandler/gzip.go (generated, vendored): 14 lines changed
@@ -150,7 +150,9 @@ func (w *GzipResponseWriter) startGzip() error {
|
||||
|
||||
// WriteHeader just saves the response code until close or GZIP effective writes.
|
||||
func (w *GzipResponseWriter) WriteHeader(code int) {
|
||||
w.code = code
|
||||
if w.code == 0 {
|
||||
w.code = code
|
||||
}
|
||||
}
|
||||
|
||||
// init graps a new gzip writer from the gzipWriterPool and writes the correct
|
||||
@@ -190,10 +192,16 @@ func (w *GzipResponseWriter) Close() error {
|
||||
// http.ResponseWriter if it is an http.Flusher. This makes GzipResponseWriter
|
||||
// an http.Flusher.
|
||||
func (w *GzipResponseWriter) Flush() {
|
||||
if w.gw != nil {
|
||||
w.gw.Flush()
|
||||
if w.gw == nil {
|
||||
// Only flush once startGzip has been called.
|
||||
//
|
||||
// Flush is thus a no-op until the written body
|
||||
// exceeds minSize.
|
||||
return
|
||||
}
|
||||
|
||||
w.gw.Flush()
|
||||
|
||||
if fw, ok := w.ResponseWriter.(http.Flusher); ok {
|
||||
fw.Flush()
|
||||
}
|
||||
|
||||
vendor/github.com/containous/mux/doc.go (generated, vendored): 5 lines changed
@@ -57,11 +57,6 @@ calling mux.Vars():
|
||||
vars := mux.Vars(request)
|
||||
category := vars["category"]
|
||||
|
||||
Note that if any capturing groups are present, mux will panic() during parsing. To prevent
|
||||
this, convert any capturing groups to non-capturing, e.g. change "/{sort:(asc|desc)}" to
|
||||
"/{sort:(?:asc|desc)}". This is a change from prior versions which behaved unpredictably
|
||||
when capturing groups were present.
|
||||
|
||||
And this is all you need to know about the basic usage. More advanced options
|
||||
are explained below.
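Since the note above is easy to miss, here is a short sketch of a route pattern that parses under this version versus one that now panics, assuming the gorilla-style HandleFunc API this fork keeps (handler bodies are placeholders):

```go
package main

import (
	"net/http"

	"github.com/containous/mux"
)

func main() {
	r := mux.NewRouter()

	// Accepted: non-capturing group.
	r.HandleFunc("/articles/{sort:(?:asc|desc)}", func(w http.ResponseWriter, req *http.Request) {
		w.Write([]byte(mux.Vars(req)["sort"]))
	})

	// Panics during parsing in this version: capturing group.
	// r.HandleFunc("/articles/{sort:(asc|desc)}", someHandler)

	http.ListenAndServe(":8080", r)
}
```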
|
||||
|
||||
|
||||
vendor/github.com/containous/mux/mux.go (generated, vendored): 58 lines changed
@@ -11,7 +11,10 @@ import (
|
||||
"path"
|
||||
"regexp"
|
||||
"sort"
|
||||
"strings"
|
||||
)
|
||||
|
||||
var (
|
||||
ErrMethodMismatch = errors.New("method is not allowed")
|
||||
)
|
||||
|
||||
// NewRouter returns a new router instance.
|
||||
@@ -40,6 +43,10 @@ func NewRouter() *Router {
|
||||
type Router struct {
|
||||
// Configurable Handler to be used when no route matches.
|
||||
NotFoundHandler http.Handler
|
||||
|
||||
// Configurable Handler to be used when the request method does not match the route.
|
||||
MethodNotAllowedHandler http.Handler
|
||||
|
||||
// Parent route, if this is a subrouter.
|
||||
parent parentRoute
|
||||
// Routes to be matched, in order.
|
||||
@@ -66,6 +73,11 @@ func (r *Router) Match(req *http.Request, match *RouteMatch) bool {
|
||||
}
|
||||
}
|
||||
|
||||
if match.MatchErr == ErrMethodMismatch && r.MethodNotAllowedHandler != nil {
|
||||
match.Handler = r.MethodNotAllowedHandler
|
||||
return true
|
||||
}
|
||||
|
||||
// Closest match for a router (includes sub-routers)
|
||||
if r.NotFoundHandler != nil {
|
||||
match.Handler = r.NotFoundHandler
|
||||
@@ -82,7 +94,7 @@ func (r *Router) ServeHTTP(w http.ResponseWriter, req *http.Request) {
|
||||
if !r.skipClean {
|
||||
path := req.URL.Path
|
||||
if r.useEncodedPath {
|
||||
path = getPath(req)
|
||||
path = req.URL.EscapedPath()
|
||||
}
|
||||
// Clean path to canonical form and redirect.
|
||||
if p := cleanPath(path); p != path {
|
||||
@@ -106,9 +118,15 @@ func (r *Router) ServeHTTP(w http.ResponseWriter, req *http.Request) {
|
||||
req = setVars(req, match.Vars)
|
||||
req = setCurrentRoute(req, match.Route)
|
||||
}
|
||||
|
||||
if handler == nil && match.MatchErr == ErrMethodMismatch {
|
||||
handler = methodNotAllowedHandler()
|
||||
}
|
||||
|
||||
if handler == nil {
|
||||
handler = http.NotFoundHandler()
|
||||
}
|
||||
|
||||
if !r.KeepContext {
|
||||
defer contextClear(req)
|
||||
}
|
||||
@@ -356,6 +374,11 @@ type RouteMatch struct {
|
||||
Route *Route
|
||||
Handler http.Handler
|
||||
Vars map[string]string
|
||||
|
||||
// MatchErr is set to appropriate matching error
|
||||
// It is set to ErrMethodMismatch if there is a mismatch in
|
||||
// the request method and route method
|
||||
MatchErr error
|
||||
}
|
||||
|
||||
type contextKey int
|
||||
@@ -397,28 +420,6 @@ func setCurrentRoute(r *http.Request, val interface{}) *http.Request {
|
||||
// Helpers
|
||||
// ----------------------------------------------------------------------------
|
||||
|
||||
// getPath returns the escaped path if possible; doing what URL.EscapedPath()
|
||||
// which was added in go1.5 does
|
||||
func getPath(req *http.Request) string {
|
||||
if req.RequestURI != "" {
|
||||
// Extract the path from RequestURI (which is escaped unlike URL.Path)
|
||||
// as detailed here as detailed in https://golang.org/pkg/net/url/#URL
|
||||
// for < 1.5 server side workaround
|
||||
// http://localhost/path/here?v=1 -> /path/here
|
||||
path := req.RequestURI
|
||||
path = strings.TrimPrefix(path, req.URL.Scheme+`://`)
|
||||
path = strings.TrimPrefix(path, req.URL.Host)
|
||||
if i := strings.LastIndex(path, "?"); i > -1 {
|
||||
path = path[:i]
|
||||
}
|
||||
if i := strings.LastIndex(path, "#"); i > -1 {
|
||||
path = path[:i]
|
||||
}
|
||||
return path
|
||||
}
|
||||
return req.URL.Path
|
||||
}
|
||||
|
||||
// cleanPath returns the canonical path for p, eliminating . and .. elements.
|
||||
// Borrowed from the net/http package.
|
||||
func cleanPath(p string) string {
|
||||
@@ -557,3 +558,12 @@ func matchMapWithRegex(toCheck map[string]*regexp.Regexp, toMatch map[string][]s
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// methodNotAllowed replies to the request with an HTTP status code 405.
|
||||
func methodNotAllowed(w http.ResponseWriter, r *http.Request) {
|
||||
w.WriteHeader(http.StatusMethodNotAllowed)
|
||||
}
|
||||
|
||||
// methodNotAllowedHandler returns a simple request handler
|
||||
// that replies to each request with a status code 405.
|
||||
func methodNotAllowedHandler() http.Handler { return http.HandlerFunc(methodNotAllowed) }
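mux.go now exposes ErrMethodMismatch and a configurable MethodNotAllowedHandler on the Router; when no handler is set, the methodNotAllowedHandler above answers 405. A usage sketch, assuming the fork keeps gorilla/mux's HandleFunc/Methods chaining (not shown in this diff):

```go
package main

import (
	"net/http"

	"github.com/containous/mux"
)

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/articles", func(w http.ResponseWriter, req *http.Request) {
		w.Write([]byte("list"))
	}).Methods(http.MethodGet)

	// Called when the path matches but the method does not,
	// overriding the default 405 handler added in this diff.
	r.MethodNotAllowedHandler = http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		http.Error(w, "only GET is allowed on /articles", http.StatusMethodNotAllowed)
	})

	http.ListenAndServe(":8080", r)
}
```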
|
||||
|
||||
vendor/github.com/containous/mux/regexp.go (generated, vendored): 24 lines changed
@@ -109,13 +109,6 @@ func newRouteRegexp(tpl string, matchHost, matchPrefix, matchQuery, strictSlash,
|
||||
if errCompile != nil {
|
||||
return nil, errCompile
|
||||
}
|
||||
|
||||
// Check for capturing groups which used to work in older versions
|
||||
if reg.NumSubexp() != len(idxs)/2 {
|
||||
panic(fmt.Sprintf("route %s contains capture groups in its regexp. ", template) +
|
||||
"Only non-capturing groups are accepted: e.g. (?:pattern) instead of (pattern)")
|
||||
}
|
||||
|
||||
// Done!
|
||||
return &routeRegexp{
|
||||
template: template,
|
||||
@@ -141,7 +134,7 @@ type routeRegexp struct {
|
||||
matchQuery bool
|
||||
// The strictSlash value defined on the route, but disabled if PathPrefix was used.
|
||||
strictSlash bool
|
||||
// Determines whether to use encoded path from getPath function or unencoded
|
||||
// Determines whether to use encoded req.URL.EnscapedPath() or unencoded
|
||||
// req.URL.Path for path matching
|
||||
useEncodedPath bool
|
||||
// Expanded regexp.
|
||||
@@ -162,7 +155,7 @@ func (r *routeRegexp) Match(req *http.Request, match *RouteMatch) bool {
|
||||
}
|
||||
path := req.URL.Path
|
||||
if r.useEncodedPath {
|
||||
path = getPath(req)
|
||||
path = req.URL.EscapedPath()
|
||||
}
|
||||
return r.regexp.MatchString(path)
|
||||
}
|
||||
@@ -272,7 +265,7 @@ func (v *routeRegexpGroup) setMatch(req *http.Request, m *RouteMatch, r *Route)
|
||||
}
|
||||
path := req.URL.Path
|
||||
if r.useEncodedPath {
|
||||
path = getPath(req)
|
||||
path = req.URL.EscapedPath()
|
||||
}
|
||||
// Store path variables.
|
||||
if v.path != nil {
|
||||
@@ -320,7 +313,14 @@ func getHost(r *http.Request) string {
|
||||
}
|
||||
|
||||
func extractVars(input string, matches []int, names []string, output map[string]string) {
|
||||
for i, name := range names {
|
||||
output[name] = input[matches[2*i+2]:matches[2*i+3]]
|
||||
matchesCount := 0
|
||||
prevEnd := -1
|
||||
for i := 2; i < len(matches) && matchesCount < len(names); i += 2 {
|
||||
if prevEnd < matches[i+1] {
|
||||
value := input[matches[i]:matches[i+1]]
|
||||
output[names[matchesCount]] = value
|
||||
prevEnd = matches[i+1]
|
||||
matchesCount++
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
vendor/github.com/containous/mux/route.go (generated, vendored): 54 lines changed
@@ -54,12 +54,27 @@ func (r *Route) Match(req *http.Request, match *RouteMatch) bool {
|
||||
if r.buildOnly || r.err != nil {
|
||||
return false
|
||||
}
|
||||
|
||||
var matchErr error
|
||||
|
||||
// Match everything.
|
||||
for _, m := range r.matchers {
|
||||
if matched := m.Match(req, match); !matched {
|
||||
if _, ok := m.(methodMatcher); ok {
|
||||
matchErr = ErrMethodMismatch
|
||||
continue
|
||||
}
|
||||
matchErr = nil
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
if matchErr != nil {
|
||||
match.MatchErr = matchErr
|
||||
return false
|
||||
}
|
||||
|
||||
match.MatchErr = nil
|
||||
// Yay, we have a match. Let's collect some info about it.
|
||||
if match.Route == nil {
|
||||
match.Route = r
|
||||
@@ -70,6 +85,7 @@ func (r *Route) Match(req *http.Request, match *RouteMatch) bool {
|
||||
if match.Vars == nil {
|
||||
match.Vars = make(map[string]string)
|
||||
}
|
||||
|
||||
// Set variables.
|
||||
if r.regexp != nil {
|
||||
r.regexp.setMatch(req, match, r)
|
||||
@@ -607,6 +623,44 @@ func (r *Route) GetPathRegexp() (string, error) {
|
||||
return r.regexp.path.regexp.String(), nil
|
||||
}
|
||||
|
||||
// GetQueriesRegexp returns the expanded regular expressions used to match the
|
||||
// route queries.
|
||||
// This is useful for building simple REST API documentation and for instrumentation
|
||||
// against third-party services.
|
||||
// An empty list will be returned if the route does not have queries.
|
||||
func (r *Route) GetQueriesRegexp() ([]string, error) {
|
||||
if r.err != nil {
|
||||
return nil, r.err
|
||||
}
|
||||
if r.regexp == nil || r.regexp.queries == nil {
|
||||
return nil, errors.New("mux: route doesn't have queries")
|
||||
}
|
||||
var queries []string
|
||||
for _, query := range r.regexp.queries {
|
||||
queries = append(queries, query.regexp.String())
|
||||
}
|
||||
return queries, nil
|
||||
}
|
||||
|
||||
// GetQueriesTemplates returns the templates used to build the
|
||||
// query matching.
|
||||
// This is useful for building simple REST API documentation and for instrumentation
|
||||
// against third-party services.
|
||||
// An empty list will be returned if the route does not define queries.
|
||||
func (r *Route) GetQueriesTemplates() ([]string, error) {
|
||||
if r.err != nil {
|
||||
return nil, r.err
|
||||
}
|
||||
if r.regexp == nil || r.regexp.queries == nil {
|
||||
return nil, errors.New("mux: route doesn't have queries")
|
||||
}
|
||||
var queries []string
|
||||
for _, query := range r.regexp.queries {
|
||||
queries = append(queries, query.template)
|
||||
}
|
||||
return queries, nil
|
||||
}
|
||||
|
||||
// GetMethods returns the methods the route matches against
|
||||
// This is useful for building simple REST API documentation and for instrumentation
|
||||
// against third-party services.
|
||||
|
||||
vendor/github.com/vulcand/oxy/forward/fwd.go (generated, vendored): 13 lines changed
@@ -190,7 +190,9 @@ func (f *httpForwarder) serveHTTP(w http.ResponseWriter, req *http.Request, ctx
|
||||
stream = contentType == "text/event-stream"
|
||||
}
|
||||
}
|
||||
written, err := io.Copy(newResponseFlusher(w, stream), response.Body)
|
||||
|
||||
flush := stream || req.ProtoMajor == 2
|
||||
written, err := io.Copy(newResponseFlusher(w, flush), response.Body)
|
||||
if err != nil {
|
||||
ctx.log.Errorf("Error copying upstream response body: %v", err)
|
||||
ctx.errHandler.ServeHTTP(w, req, err)
|
||||
@@ -249,6 +251,12 @@ func (f *httpForwarder) copyRequest(req *http.Request, u *url.URL) *http.Request
|
||||
if f.rewriter != nil {
|
||||
f.rewriter.Rewrite(outReq)
|
||||
}
|
||||
|
||||
if req.ContentLength == 0 {
|
||||
// https://github.com/golang/go/issues/16036: nil Body for http.Transport retries
|
||||
outReq.Body = nil
|
||||
}
|
||||
|
||||
return outReq
|
||||
}
|
||||
|
||||
@@ -258,7 +266,8 @@ func (f *websocketForwarder) serveHTTP(w http.ResponseWriter, req *http.Request,
|
||||
|
||||
dialer := websocket.DefaultDialer
|
||||
if outReq.URL.Scheme == "wss" && f.TLSClientConfig != nil {
|
||||
dialer.TLSClientConfig = f.TLSClientConfig
|
||||
dialer.TLSClientConfig = f.TLSClientConfig.Clone()
|
||||
dialer.TLSClientConfig.NextProtos = []string{"http/1.1"}
|
||||
}
|
||||
targetConn, resp, err := dialer.Dial(outReq.URL.String(), outReq.Header)
|
||||
if err != nil {
|
||||
|
||||
vendor/github.com/vulcand/oxy/forward/headers.go (generated, vendored): 10 lines changed
@@ -6,6 +6,7 @@ const (
|
||||
XForwardedHost = "X-Forwarded-Host"
|
||||
XForwardedPort = "X-Forwarded-Port"
|
||||
XForwardedServer = "X-Forwarded-Server"
|
||||
XRealIp = "X-Real-Ip"
|
||||
Connection = "Connection"
|
||||
KeepAlive = "Keep-Alive"
|
||||
ProxyAuthenticate = "Proxy-Authenticate"
|
||||
@@ -50,3 +51,12 @@ var WebsocketUpgradeHeaders = []string{
|
||||
Connection,
|
||||
SecWebsocketAccept,
|
||||
}
|
||||
|
||||
var XHeaders = []string{
|
||||
XForwardedProto,
|
||||
XForwardedFor,
|
||||
XForwardedHost,
|
||||
XForwardedPort,
|
||||
XForwardedServer,
|
||||
XRealIp,
|
||||
}
|
||||
|
||||
vendor/github.com/vulcand/oxy/forward/rewrite.go (generated, vendored): 54 lines changed
@@ -15,30 +15,36 @@ type HeaderRewriter struct {
|
||||
}
|
||||
|
||||
func (rw *HeaderRewriter) Rewrite(req *http.Request) {
|
||||
if !rw.TrustForwardHeader {
|
||||
utils.RemoveHeaders(req.Header, XHeaders...)
|
||||
}
|
||||
|
||||
if clientIP, _, err := net.SplitHostPort(req.RemoteAddr); err == nil {
|
||||
if rw.TrustForwardHeader {
|
||||
if prior, ok := req.Header[XForwardedFor]; ok {
|
||||
clientIP = strings.Join(prior, ", ") + ", " + clientIP
|
||||
}
|
||||
if prior, ok := req.Header[XForwardedFor]; ok {
|
||||
req.Header.Set(XForwardedFor, strings.Join(prior, ", ")+", "+clientIP)
|
||||
} else {
|
||||
req.Header.Set(XForwardedFor, clientIP)
|
||||
}
|
||||
|
||||
if req.Header.Get(XRealIp) == "" {
|
||||
req.Header.Set(XRealIp, clientIP)
|
||||
}
|
||||
req.Header.Set(XForwardedFor, clientIP)
|
||||
}
|
||||
|
||||
if xfp := req.Header.Get(XForwardedProto); xfp != "" && rw.TrustForwardHeader {
|
||||
req.Header.Set(XForwardedProto, xfp)
|
||||
} else if req.TLS != nil {
|
||||
req.Header.Set(XForwardedProto, "https")
|
||||
} else {
|
||||
req.Header.Set(XForwardedProto, "http")
|
||||
xfProto := req.Header.Get(XForwardedProto)
|
||||
if xfProto == "" {
|
||||
if req.TLS != nil {
|
||||
req.Header.Set(XForwardedProto, "https")
|
||||
} else {
|
||||
req.Header.Set(XForwardedProto, "http")
|
||||
}
|
||||
}
|
||||
|
||||
if xfp := req.Header.Get(XForwardedPort); xfp != "" && rw.TrustForwardHeader {
|
||||
req.Header.Set(XForwardedPort, xfp)
|
||||
if xfp := req.Header.Get(XForwardedPort); xfp == "" {
|
||||
req.Header.Set(XForwardedPort, forwardedPort(req))
|
||||
}
|
||||
|
||||
if xfh := req.Header.Get(XForwardedHost); xfh != "" && rw.TrustForwardHeader {
|
||||
req.Header.Set(XForwardedHost, xfh)
|
||||
} else if req.Host != "" {
|
||||
if xfHost := req.Header.Get(XForwardedHost); xfHost == "" && req.Host != "" {
|
||||
req.Header.Set(XForwardedHost, req.Host)
|
||||
}
|
||||
|
||||
@@ -50,3 +56,19 @@ func (rw *HeaderRewriter) Rewrite(req *http.Request) {
|
||||
// connection, regardless of what the client sent to us.
|
||||
utils.RemoveHeaders(req.Header, HopHeaders...)
|
||||
}
|
||||
|
||||
func forwardedPort(req *http.Request) string {
|
||||
if req == nil {
|
||||
return ""
|
||||
}
|
||||
|
||||
if _, port, err := net.SplitHostPort(req.Host); err == nil && port != "" {
|
||||
return port
|
||||
}
|
||||
|
||||
if req.TLS != nil {
|
||||
return "443"
|
||||
}
|
||||
|
||||
return "80"
|
||||
}
|
||||
|
||||
whitelist/ip.go (new file): 86 lines added
@@ -0,0 +1,86 @@
|
||||
package whitelist
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
// IP allows to check that addresses are in a white list
|
||||
type IP struct {
|
||||
whiteListsIPs []*net.IP
|
||||
whiteListsNet []*net.IPNet
|
||||
insecure bool
|
||||
}
|
||||
|
||||
// NewIP builds a new IP given a list of CIDR-Strings to whitelist
|
||||
func NewIP(whitelistStrings []string, insecure bool) (*IP, error) {
|
||||
if len(whitelistStrings) == 0 && !insecure {
|
||||
return nil, errors.New("no whiteListsNet provided")
|
||||
}
|
||||
|
||||
ip := IP{}
|
||||
|
||||
if !insecure {
|
||||
for _, whitelistString := range whitelistStrings {
|
||||
ipAddr := net.ParseIP(whitelistString)
|
||||
if ipAddr != nil {
|
||||
ip.whiteListsIPs = append(ip.whiteListsIPs, &ipAddr)
|
||||
} else {
|
||||
_, whitelist, err := net.ParseCIDR(whitelistString)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("parsing CIDR whitelist %s: %v", whitelist, err)
|
||||
}
|
||||
ip.whiteListsNet = append(ip.whiteListsNet, whitelist)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return &ip, nil
|
||||
}
|
||||
|
||||
// Contains checks if provided address is in the white list
|
||||
func (ip *IP) Contains(addr string) (bool, net.IP, error) {
|
||||
if ip.insecure {
|
||||
return true, nil, nil
|
||||
}
|
||||
|
||||
ipAddr, err := ipFromRemoteAddr(addr)
|
||||
if err != nil {
|
||||
return false, nil, fmt.Errorf("unable to parse address: %s: %s", addr, err)
|
||||
}
|
||||
|
||||
contains, err := ip.ContainsIP(ipAddr)
|
||||
return contains, ipAddr, err
|
||||
}
|
||||
|
||||
// ContainsIP checks if provided address is in the white list
|
||||
func (ip *IP) ContainsIP(addr net.IP) (bool, error) {
|
||||
if ip.insecure {
|
||||
return true, nil
|
||||
}
|
||||
|
||||
for _, whiteListIP := range ip.whiteListsIPs {
|
||||
if whiteListIP.Equal(addr) {
|
||||
return true, nil
|
||||
}
|
||||
}
|
||||
|
||||
for _, whiteListNet := range ip.whiteListsNet {
|
||||
if whiteListNet.Contains(addr) {
|
||||
return true, nil
|
||||
}
|
||||
}
|
||||
|
||||
return false, nil
|
||||
}
|
||||
|
||||
func ipFromRemoteAddr(addr string) (net.IP, error) {
|
||||
userIP := net.ParseIP(addr)
|
||||
if userIP == nil {
|
||||
return nil, fmt.Errorf("can't parse IP from address %s", addr)
|
||||
}
|
||||
|
||||
return userIP, nil
|
||||
}
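The new package accepts both plain IPs and CIDR ranges; Contains parses the address and reports membership, and every address is allowed when the insecure flag is set. A short usage sketch (addresses are illustrative):

```go
package main

import (
	"fmt"

	"github.com/containous/traefik/whitelist"
)

func main() {
	ips, err := whitelist.NewIP([]string{"10.0.0.0/8", "8.8.8.8"}, false)
	if err != nil {
		panic(err)
	}

	for _, addr := range []string{"10.1.2.3", "8.8.8.8", "192.168.0.1"} {
		allowed, parsed, err := ips.Contains(addr)
		if err != nil {
			panic(err)
		}
		// Prints: 10.1.2.3 true, 8.8.8.8 true, 192.168.0.1 false
		fmt.Println(parsed, allowed)
	}
}
```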
|
||||
@@ -1,19 +1,14 @@
|
||||
package middlewares
|
||||
package whitelist
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"testing"
|
||||
|
||||
"github.com/containous/traefik/testhelpers"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
"github.com/urfave/negroni"
|
||||
)
|
||||
|
||||
func TestNewIPWhitelister(t *testing.T) {
|
||||
func TestNew(t *testing.T) {
|
||||
cases := []struct {
|
||||
desc string
|
||||
whitelistStrings []string
|
||||
@@ -24,12 +19,12 @@ func TestNewIPWhitelister(t *testing.T) {
|
||||
desc: "nil whitelist",
|
||||
whitelistStrings: nil,
|
||||
expectedWhitelists: nil,
|
||||
errMessage: "no whitelists provided",
|
||||
errMessage: "no whiteListsNet provided",
|
||||
}, {
|
||||
desc: "empty whitelist",
|
||||
whitelistStrings: []string{},
|
||||
expectedWhitelists: nil,
|
||||
errMessage: "no whitelists provided",
|
||||
errMessage: "no whiteListsNet provided",
|
||||
}, {
|
||||
desc: "whitelist containing empty string",
|
||||
whitelistStrings: []string{
|
||||
@@ -80,12 +75,12 @@ func TestNewIPWhitelister(t *testing.T) {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
whitelister, err := NewIPWhitelister(test.whitelistStrings)
|
||||
whitelister, err := NewIP(test.whitelistStrings, false)
|
||||
if test.errMessage != "" {
|
||||
require.EqualError(t, err, test.errMessage)
|
||||
} else {
|
||||
require.NoError(t, err)
|
||||
for index, actual := range whitelister.whitelists {
|
||||
for index, actual := range whitelister.whiteListsNet {
|
||||
expected := test.expectedWhitelists[index]
|
||||
assert.Equal(t, expected.IP, actual.IP)
|
||||
assert.Equal(t, expected.Mask.String(), actual.Mask.String())
|
||||
@@ -95,7 +90,7 @@ func TestNewIPWhitelister(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestIPWhitelisterHandle(t *testing.T) {
|
||||
func TestIsAllowed(t *testing.T) {
|
||||
cases := []struct {
|
||||
desc string
|
||||
whitelistStrings []string
|
||||
@@ -122,6 +117,23 @@ func TestIPWhitelisterHandle(t *testing.T) {
|
||||
},
|
||||
{
|
||||
desc: "IPv4 single IP",
|
||||
whitelistStrings: []string{
|
||||
"8.8.8.8",
|
||||
},
|
||||
passIPs: []string{
|
||||
"8.8.8.8",
|
||||
},
|
||||
rejectIPs: []string{
|
||||
"8.8.8.7",
|
||||
"8.8.8.9",
|
||||
"8.8.8.0",
|
||||
"8.8.8.255",
|
||||
"4.4.4.4",
|
||||
"127.0.0.1",
|
||||
},
|
||||
},
|
||||
{
|
||||
desc: "IPv4 Net single IP",
|
||||
whitelistStrings: []string{
|
||||
"8.8.8.8/32",
|
||||
},
|
||||
@@ -167,16 +179,16 @@ func TestIPWhitelisterHandle(t *testing.T) {
|
||||
"2a03:4000:6:d080::/64",
|
||||
},
|
||||
passIPs: []string{
|
||||
"[2a03:4000:6:d080::]",
|
||||
"[2a03:4000:6:d080::1]",
|
||||
"[2a03:4000:6:d080:dead:beef:ffff:ffff]",
|
||||
"[2a03:4000:6:d080::42]",
|
||||
"2a03:4000:6:d080::",
|
||||
"2a03:4000:6:d080::1",
|
||||
"2a03:4000:6:d080:dead:beef:ffff:ffff",
|
||||
"2a03:4000:6:d080::42",
|
||||
},
|
||||
rejectIPs: []string{
|
||||
"[2a03:4000:7:d080::]",
|
||||
"[2a03:4000:7:d080::1]",
|
||||
"[fe80::]",
|
||||
"[4242::1]",
|
||||
"2a03:4000:7:d080::",
|
||||
"2a03:4000:7:d080::1",
|
||||
"fe80::",
|
||||
"4242::1",
|
||||
},
|
||||
},
|
||||
{
|
||||
@@ -185,12 +197,12 @@ func TestIPWhitelisterHandle(t *testing.T) {
|
||||
"2a03:4000:6:d080::42/128",
|
||||
},
|
||||
passIPs: []string{
|
||||
"[2a03:4000:6:d080::42]",
|
||||
"2a03:4000:6:d080::42",
|
||||
},
|
||||
rejectIPs: []string{
|
||||
"[2a03:4000:6:d080::1]",
|
||||
"[2a03:4000:6:d080:dead:beef:ffff:ffff]",
|
||||
"[2a03:4000:6:d080::43]",
|
||||
"2a03:4000:6:d080::1",
|
||||
"2a03:4000:6:d080:dead:beef:ffff:ffff",
|
||||
"2a03:4000:6:d080::43",
|
||||
},
|
||||
},
|
||||
{
|
||||
@@ -200,18 +212,18 @@ func TestIPWhitelisterHandle(t *testing.T) {
|
||||
"fe80::/16",
|
||||
},
|
||||
passIPs: []string{
|
||||
"[2a03:4000:6:d080::]",
|
||||
"[2a03:4000:6:d080::1]",
|
||||
"[2a03:4000:6:d080:dead:beef:ffff:ffff]",
|
||||
"[2a03:4000:6:d080::42]",
|
||||
"[fe80::1]",
|
||||
"[fe80:aa00:00bb:4232:ff00:eeee:00ff:1111]",
|
||||
"[fe80::fe80]",
|
||||
"2a03:4000:6:d080::",
|
||||
"2a03:4000:6:d080::1",
|
||||
"2a03:4000:6:d080:dead:beef:ffff:ffff",
|
||||
"2a03:4000:6:d080::42",
|
||||
"fe80::1",
|
||||
"fe80:aa00:00bb:4232:ff00:eeee:00ff:1111",
|
||||
"fe80::fe80",
|
||||
},
|
||||
rejectIPs: []string{
|
||||
"[2a03:4000:7:d080::]",
|
||||
"[2a03:4000:7:d080::1]",
|
||||
"[4242::1]",
|
||||
"2a03:4000:7:d080::",
|
||||
"2a03:4000:7:d080::1",
|
||||
"4242::1",
|
||||
},
|
||||
},
|
||||
{
|
||||
@@ -223,13 +235,13 @@ func TestIPWhitelisterHandle(t *testing.T) {
|
||||
"8.8.8.8/8",
|
||||
},
|
||||
passIPs: []string{
|
||||
"[2a03:4000:6:d080::]",
|
||||
"[2a03:4000:6:d080::1]",
|
||||
"[2a03:4000:6:d080:dead:beef:ffff:ffff]",
|
||||
"[2a03:4000:6:d080::42]",
|
||||
"[fe80::1]",
|
||||
"[fe80:aa00:00bb:4232:ff00:eeee:00ff:1111]",
|
||||
"[fe80::fe80]",
|
||||
"2a03:4000:6:d080::",
|
||||
"2a03:4000:6:d080::1",
|
||||
"2a03:4000:6:d080:dead:beef:ffff:ffff",
|
||||
"2a03:4000:6:d080::42",
|
||||
"fe80::1",
|
||||
"fe80:aa00:00bb:4232:ff00:eeee:00ff:1111",
|
||||
"fe80::fe80",
|
||||
"1.2.3.1",
|
||||
"1.2.3.32",
|
||||
"1.2.3.156",
|
||||
@@ -240,9 +252,9 @@ func TestIPWhitelisterHandle(t *testing.T) {
|
||||
"8.255.255.255",
|
||||
},
|
||||
rejectIPs: []string{
|
||||
"[2a03:4000:7:d080::]",
|
||||
"[2a03:4000:7:d080::1]",
|
||||
"[4242::1]",
|
||||
"2a03:4000:7:d080::",
|
||||
"2a03:4000:7:d080::1",
|
||||
"4242::1",
|
||||
"1.2.16.1",
|
||||
"1.2.32.1",
|
||||
"127.0.0.1",
|
||||
@@ -256,13 +268,6 @@ func TestIPWhitelisterHandle(t *testing.T) {
|
||||
"127.0.0.1/32",
|
||||
},
|
||||
passIPs: nil,
|
||||
rejectIPs: []string{
|
||||
"foo",
|
||||
"10.0.0.350",
|
||||
"fe:::80",
|
||||
"",
|
||||
"\\&$§&/(",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
@@ -270,38 +275,44 @@ func TestIPWhitelisterHandle(t *testing.T) {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
whitelister, err := NewIPWhitelister(test.whitelistStrings)
|
||||
whiteLister, err := NewIP(test.whitelistStrings, false)
|
||||
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, whitelister)
|
||||
|
||||
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
fmt.Fprintln(w, "traefik")
|
||||
})
|
||||
n := negroni.New(whitelister)
|
||||
n.UseHandler(handler)
|
||||
require.NotNil(t, whiteLister)
|
||||
|
||||
for _, testIP := range test.passIPs {
|
||||
req := testhelpers.MustNewRequest(http.MethodGet, "/", nil)
|
||||
|
||||
req.RemoteAddr = testIP + ":2342"
|
||||
recorder := httptest.NewRecorder()
|
||||
n.ServeHTTP(recorder, req)
|
||||
|
||||
assert.Equal(t, http.StatusOK, recorder.Code, testIP+" should have passed "+test.desc)
|
||||
assert.Contains(t, recorder.Body.String(), "traefik")
|
||||
allowed, ip, err := whiteLister.Contains(testIP)
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, ip, err)
|
||||
assert.True(t, allowed, testIP+" should have passed "+test.desc)
|
||||
}
|
||||
|
||||
for _, testIP := range test.rejectIPs {
|
||||
req := testhelpers.MustNewRequest(http.MethodGet, "/", nil)
|
||||
|
||||
req.RemoteAddr = testIP + ":2342"
|
||||
recorder := httptest.NewRecorder()
|
||||
n.ServeHTTP(recorder, req)
|
||||
|
||||
assert.Equal(t, http.StatusForbidden, recorder.Code, testIP+" should not have passed "+test.desc)
|
||||
assert.NotContains(t, recorder.Body.String(), "traefik")
|
||||
allowed, ip, err := whiteLister.Contains(testIP)
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, ip, err)
|
||||
assert.False(t, allowed, testIP+" should not have passed "+test.desc)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestBrokenIPs(t *testing.T) {
|
||||
brokenIPs := []string{
|
||||
"foo",
|
||||
"10.0.0.350",
|
||||
"fe:::80",
|
||||
"",
|
||||
"\\&$§&/(",
|
||||
}
|
||||
|
||||
whiteLister, err := NewIP([]string{"1.2.3.4/24"}, false)
|
||||
require.NoError(t, err)
|
||||
|
||||
for _, testIP := range brokenIPs {
|
||||
_, ip, err := whiteLister.Contains(testIP)
|
||||
assert.Error(t, err)
|
||||
require.Nil(t, ip, err)
|
||||
}
|
||||
|
||||
}
|
||||