forked from Ivasoft/traefik
Compare commits
60 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 9510e603f1 | |
| | 9a76369908 | |
| | 913d8737cc | |
| | b98f5ed8b1 | |
| | 157c796294 | |
| | 845f1a7377 | |
| | 27e4a8a227 | |
| | cf2d7497e4 | |
| | df41cd925e | |
| | feeb7f81a6 | |
| | 2beb5236d0 | |
| | f062ee80c8 | |
| | a7bb768e98 | |
| | 07be89d6e9 | |
| | d81c4e6d1a | |
| | 60b4095c75 | |
| | 7ff6e6b66f | |
| | dbe720f0f1 | |
| | f0ab2721a5 | |
| | a7c158f0e1 | |
| | bdc0e3bfcf | |
| | f173ff02e3 | |
| | bacd58ed7b | |
| | f323df466d | |
| | b1836587f2 | |
| | dbc3b85cd0 | |
| | 5eda08e9b8 | |
| | ec6e46e2cb | |
| | aa705dd691 | |
| | 1c3e4124f8 | |
| | c1757372d3 | |
| | 5b2b29043c | |
| | bb3f28ffa7 | |
| | 6ceb2af4a7 | |
| | b59276ff1c | |
| | 2e95832812 | |
| | 2240bf9430 | |
| | db036edccd | |
| | 08e1f626c1 | |
| | c0d08f5e3e | |
| | dec3f0798a | |
| | 62ded580ce | |
| | 446d73fcf5 | |
| | e299775d67 | |
| | 2c18750537 | |
| | f317e50136 | |
| | 1d84bda7ca | |
| | ae7c947ba5 | |
| | 6d07729c55 | |
| | 1d7bf200a8 | |
| | 6bc59f8b33 | |
| | b2cf03fa5c | |
| | 36e273714d | |
| | 6be77b7fb9 | |
| | 6bcf45f136 | |
| | 8bca8236db | |
| | fb5aa4c9c1 | |
| | 3f5772c62a | |
| | 2d946d7ee7 | |
| | 10ca35dccd | |
77 CHANGELOG.md
@@ -1,5 +1,82 @@
# Change Log

## [v1.6.6](https://github.com/containous/traefik/tree/v1.6.6) (2018-08-20)
[All Commits](https://github.com/containous/traefik/compare/v1.6.5...v1.6.6)

**Bug fixes:**
- **[acme]** Avoid duplicated ACME resolution ([#3751](https://github.com/containous/traefik/pull/3751) by [nmengin](https://github.com/nmengin))
- **[api]** Remove TLS in API ([#3788](https://github.com/containous/traefik/pull/3788) by [Juliens](https://github.com/Juliens))
- **[cluster]** Remove unusable `--cluster` flag ([#3616](https://github.com/containous/traefik/pull/3616) by [dtomcej](https://github.com/dtomcej))
- **[ecs]** Fix bad condition in ECS provider ([#3609](https://github.com/containous/traefik/pull/3609) by [mmatur](https://github.com/mmatur))
- Set keepalive on TCP socket so idleTimeout works ([#3740](https://github.com/containous/traefik/pull/3740) by [ajardan](https://github.com/ajardan))

**Documentation:**
- A tiny rewording on the documentation API's page ([#3794](https://github.com/containous/traefik/pull/3794) by [dduportal](https://github.com/dduportal))
- Adding warnings and solution about the configuration exposure ([#3790](https://github.com/containous/traefik/pull/3790) by [dduportal](https://github.com/dduportal))
- Fix path to the debug pprof API ([#3608](https://github.com/containous/traefik/pull/3608) by [multani](https://github.com/multani))

**Misc:**
- **[oxy,websocket]** Update oxy dependency ([#3777](https://github.com/containous/traefik/pull/3777) by [Juliens](https://github.com/Juliens))

## [v1.6.5](https://github.com/containous/traefik/tree/v1.6.5) (2018-07-09)
[All Commits](https://github.com/containous/traefik/compare/v1.6.4...v1.6.5)

**Bug fixes:**
- **[acme]** Add a mutex on local store for HTTPChallenges ([#3579](https://github.com/containous/traefik/pull/3579) by [Juliens](https://github.com/Juliens))
- **[consulcatalog]** Split the error handling from Consul Catalog (deadlock) ([#3560](https://github.com/containous/traefik/pull/3560) by [ortz](https://github.com/ortz))
- **[docker]** segment labels: multiple frontends for one backend. ([#3511](https://github.com/containous/traefik/pull/3511) by [ldez](https://github.com/ldez))
- **[kv]** Better support on same prefix at the same level in the KV ([#3532](https://github.com/containous/traefik/pull/3532) by [jbdoumenjou](https://github.com/jbdoumenjou))
- **[logs]** Add logs when error is generated in error handler ([#3567](https://github.com/containous/traefik/pull/3567) by [Juliens](https://github.com/Juliens))
- **[middleware]** Create middleware to be able to handle HTTP pipelining correctly ([#3513](https://github.com/containous/traefik/pull/3513) by [mmatur](https://github.com/mmatur))

**Documentation:**
- **[acme]** The gandiv5 provider works with wildcard ([#3506](https://github.com/containous/traefik/pull/3506) by [manu5801](https://github.com/manu5801))
- **[kv]** Update keyFile first/last line comment in kv-config.md ([#3558](https://github.com/containous/traefik/pull/3558) by [madnight](https://github.com/madnight))
- Minor formatting issue in user-guide ([#3546](https://github.com/containous/traefik/pull/3546) by [Vanuan](https://github.com/Vanuan))

## [v1.6.4](https://github.com/containous/traefik/tree/v1.6.4) (2018-06-15)
[All Commits](https://github.com/containous/traefik/compare/v1.6.3...v1.6.4)

**Bug fixes:**
- **[acme]** Use logrus writer instead of os.Stderr ([#3498](https://github.com/containous/traefik/pull/3498) by [ldez](https://github.com/ldez))
- **[consulcatalog]** Enclose IPv6 addresses in "[]" ([#3477](https://github.com/containous/traefik/pull/3477) by [herver](https://github.com/herver))
- **[docker,ecs,marathon,mesos,rancher]** Use net.JoinHostPort for servers URL ([#3484](https://github.com/containous/traefik/pull/3484) by [ldez](https://github.com/ldez))
- **[docker]** Backend name with docker-compose and segments. ([#3485](https://github.com/containous/traefik/pull/3485) by [ldez](https://github.com/ldez))
- **[oxy]** Handle buffer pool for oxy ([#3450](https://github.com/containous/traefik/pull/3450) by [Juliens](https://github.com/Juliens))

**Documentation:**
- **[acme]** The exoscale provider works with wildcard ([#3479](https://github.com/containous/traefik/pull/3479) by [greut](https://github.com/greut))
- **[consul,docker]** Edit wording ([#3438](https://github.com/containous/traefik/pull/3438) by [mayank23](https://github.com/mayank23))
- **[k8s]** Add missing annotation documentation. ([#3454](https://github.com/containous/traefik/pull/3454) by [ldez](https://github.com/ldez))
- **[kv]** Fix typo in kv user guide ([#3474](https://github.com/containous/traefik/pull/3474) by [shambarick](https://github.com/shambarick))
- Clean metrics documentation. ([#3488](https://github.com/containous/traefik/pull/3488) by [ldez](https://github.com/ldez))

## [v1.6.3](https://github.com/containous/traefik/tree/v1.6.3) (2018-06-05)
[All Commits](https://github.com/containous/traefik/compare/v1.6.2...v1.6.3)

**Enhancements:**
- **[acme]** Add user agent for ACME ([#3431](https://github.com/containous/traefik/pull/3431) by [ldez](https://github.com/ldez))
- **[acme]** Use to the stable version of Lego ([#3418](https://github.com/containous/traefik/pull/3418) by [ldez](https://github.com/ldez))

**Bug fixes:**
- **[acme,cluster]** Improve ACME account registration URI management ([#3398](https://github.com/containous/traefik/pull/3398) by [nmengin](https://github.com/nmengin))
- **[acme,cluster]** Remove ACME empty certificates from KV store ([#3389](https://github.com/containous/traefik/pull/3389) by [nmengin](https://github.com/nmengin))
- **[consulcatalog]** Reflect changes in catalog healthy nodes in healthCheck watch ([#3390](https://github.com/containous/traefik/pull/3390) by [thebinary](https://github.com/thebinary))
- **[consulcatalog]** Detect change when service or node are in maintenance mode ([#3434](https://github.com/containous/traefik/pull/3434) by [mmatur](https://github.com/mmatur))
- **[k8s]** Update Kubernetes provider to support IPv6 Backends ([#3432](https://github.com/containous/traefik/pull/3432) by [dtomcej](https://github.com/dtomcej))
- **[logs,middleware]** Add URL and Host for some access logs. ([#3430](https://github.com/containous/traefik/pull/3430) by [ldez](https://github.com/ldez))
- **[metrics]** Improve Prometheus metrics removal ([#3287](https://github.com/containous/traefik/pull/3287) by [marco-jantke](https://github.com/marco-jantke))
- **[middleware]** Whitelist and XFF. ([#3411](https://github.com/containous/traefik/pull/3411) by [ldez](https://github.com/ldez))
- **[middleware]** Error pages and header merge ([#3394](https://github.com/containous/traefik/pull/3394) by [ldez](https://github.com/ldez))
- **[websocket]** Includes the headers in the HTTP response of a websocket request ([#3425](https://github.com/containous/traefik/pull/3425) by [geraldcroes](https://github.com/geraldcroes))
- **[webui]** Webui Whitelist overflow. ([#3412](https://github.com/containous/traefik/pull/3412) by [ldez](https://github.com/ldez))

**Documentation:**
- **[acme]** Docs: ACME Overhaul ([#3421](https://github.com/containous/traefik/pull/3421) by [Dargmuesli](https://github.com/Dargmuesli))
- **[acme]** Minor documentation changes ([#3405](https://github.com/containous/traefik/pull/3405) by [amincheloh](https://github.com/amincheloh))
- **[k8s]** Helm installation using values ([#3392](https://github.com/containous/traefik/pull/3392) by [erikaulin](https://github.com/erikaulin))
- **[k8s]** Update Kubernetes Port Documentation ([#3368](https://github.com/containous/traefik/pull/3368) by [dtomcej](https://github.com/dtomcej))

## [v1.6.2](https://github.com/containous/traefik/tree/v1.6.2) (2018-05-22)
[All Commits](https://github.com/containous/traefik/compare/v1.6.1...v1.6.2)
33 Gopkg.lock (generated)
@@ -257,8 +257,8 @@
[[projects]]
name = "github.com/containous/staert"
packages = ["."]
revision = "cc00c303ccbd2491ddc1dccc9eb7ccadd807557e"
version = "v3.1.0"
revision = "66717a0e0ca950c4b6dc8c87b46da0b8495c6e41"
version = "v3.1.1"

[[projects]]
name = "github.com/containous/traefik-extra-service-fabric"

@@ -320,9 +320,10 @@
version = "v3.2.0"

[[projects]]
branch = "master"
name = "github.com/dnsimple/dnsimple-go"
packages = ["dnsimple"]
revision = "f2d9b723cc9547d182e24ac2e527ae25d25fc93f"
revision = "bbe1a2c87affea187478e24d3aea3cac25f870b3"

[[projects]]
name = "github.com/docker/cli"

@@ -991,9 +992,10 @@
revision = "1f5c07e90700ae93ddcba0c7af7d9c7201646ccc"

[[projects]]
branch = "master"
name = "github.com/ovh/go-ovh"
packages = ["ovh"]
revision = "4b1fea467323b74c5f462f0947f402b428ca0626"
revision = "91b7eb631d2eced3e706932a0b36ee8b5ee22e92"

[[projects]]
branch = "master"

@@ -1215,7 +1217,7 @@
"roundrobin",
"utils"
]
revision = "6956548a7fa4272adeadf828455109c53933ea86"
revision = "885e42fe04d8e0efa6c18facad4e0fc5757cde9b"

[[projects]]
name = "github.com/vulcand/predicate"

@@ -1241,10 +1243,11 @@
revision = "0c8571ac0ce161a5feb57375a9cdf148c98c0f70"

[[projects]]
branch = "containous-fork"
branch = "master"
name = "github.com/xenolf/lego"
packages = [
"acmev2",
"acme",
"log",
"providers/dns",
"providers/dns/auroradns",
"providers/dns/azure",

@@ -1262,9 +1265,9 @@
"providers/dns/fastdns",
"providers/dns/gandi",
"providers/dns/gandiv5",
"providers/dns/gcloud",
"providers/dns/glesys",
"providers/dns/godaddy",
"providers/dns/googlecloud",
"providers/dns/lightsail",
"providers/dns/linode",
"providers/dns/namecheap",

@@ -1278,8 +1281,7 @@
"providers/dns/route53",
"providers/dns/vultr"
]
revision = "3d653ee2ee38f1d71beb5f09b37b23344eff0ab3"
source = "github.com/containous/lego"
revision = "7fedfd1388f016c7ca7ed92a7f2024d06a7e20d8"

[[projects]]
branch = "master"

@@ -1319,6 +1321,7 @@
revision = "22ae77b79946ea320088417e4d50825671d82d57"

[[projects]]
branch = "master"
name = "golang.org/x/oauth2"
packages = [
".",

@@ -1327,7 +1330,7 @@
"jws",
"jwt"
]
revision = "7fdf09982454086d5570c7db3e11f360194830ca"
revision = "ec22f46f877b4505e0117eeaab541714644fdd28"

[[projects]]
branch = "master"

@@ -1366,6 +1369,7 @@
revision = "8be79e1e0910c292df4e79c241bb7e8f7e725959"

[[projects]]
branch = "master"
name = "google.golang.org/api"
packages = [
"dns/v1",

@@ -1373,7 +1377,7 @@
"googleapi",
"googleapi/internal/uritemplates"
]
revision = "1575df15c1bb8b18ad4d9bc5ca495cc85b0764fe"
revision = "de943baf05a022a8f921b544b7827bacaba1aed5"

[[projects]]
name = "google.golang.org/appengine"

@@ -1444,6 +1448,7 @@
revision = "cb884138e64c9a8bf5c7d6106d74b0fca082df0c"

[[projects]]
branch = "v2"
name = "gopkg.in/ns1/ns1-go.v2"
packages = [
"rest",

@@ -1453,7 +1458,7 @@
"rest/model/filter",
"rest/model/monitor"
]
revision = "c563826f4cbef9c11bebeb9f20a3f7afe9c1e2f4"
revision = "a5bcac82d3f637d3928d30476610891935b2d691"

[[projects]]
name = "gopkg.in/square/go-jose.v2"

@@ -1674,6 +1679,6 @@
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
inputs-digest = "c7d91203842be1915ca08a31917a079489bff7ffc6f2e494330e9556b4730a06"
inputs-digest = "ad34e6336e6f19b82c52e991d22c5b43b9144ed7dc83d7b17197583ace43f346"
solver-name = "gps-cdcl"
solver-version = 1
@@ -62,7 +62,7 @@

[[constraint]]
name = "github.com/containous/staert"
version = "3.1.0"
version = "3.1.1"

[[constraint]]
name = "github.com/containous/traefik-extra-service-fabric"

@@ -181,9 +181,9 @@
name = "github.com/vulcand/oxy"

[[constraint]]
branch = "containous-fork"
branch = "master"
name = "github.com/xenolf/lego"
source = "github.com/containous/lego"
# version = "1.0.0"

[[constraint]]
name = "google.golang.org/grpc"
12 README.md
@@ -9,7 +9,7 @@
[](https://microbadger.com/images/traefik)
[](https://github.com/containous/traefik/blob/master/LICENSE.md)
[](https://traefik.herokuapp.com)
[](https://twitter.com/intent/follow?screen_name=traefikproxy)
[](https://twitter.com/intent/follow?screen_name=traefik)

Træfik is a modern HTTP reverse proxy and load balancer that makes deploying microservices easy.

@@ -63,7 +63,7 @@ _(But if you'd rather configure some of your routes manually, Træfik supports t
- Websocket, HTTP/2, GRPC ready
- Provides metrics (Rest, Prometheus, Datadog, Statsd, InfluxDB)
- Keeps access logs (JSON, CLF)
- [Fast](https://docs.traefik.io/benchmarks) ... which is nice
- Fast
- Exposes a Rest API
- Packaged as a single binary file (made with :heart: with go) and available as a [tiny](https://microbadger.com/images/traefik) [official](https://hub.docker.com/r/_/traefik/) docker image

@@ -164,12 +164,10 @@ Each version is supported until the next one is released (e.g. 1.1.x will be sup

We use [Semantic Versioning](http://semver.org/)

## Plumbing
## Mailing lists

- [Oxy](https://github.com/vulcand/oxy): an awesome proxy library made by Mailgun folks
- [Gorilla mux](https://github.com/gorilla/mux): famous request router
- [Negroni](https://github.com/urfave/negroni): web middlewares made simple
- [Lego](https://github.com/xenolf/lego): the best [Let's Encrypt](https://letsencrypt.org) library in go
- General announcements, new releases: mail at news+subscribe@traefik.io or on [the online viewer](https://groups.google.com/a/traefik.io/forum/#!forum/news)
- Security announcements: mail at security+subscribe@traefik.io or on [the online viewer](https://groups.google.com/a/traefik.io/forum/#!forum/security).

## Credits
@@ -8,14 +8,16 @@ import (
"crypto/x509"
"fmt"
"reflect"
"regexp"
"sort"
"strings"
"sync"
"time"

"github.com/containous/traefik/log"
acmeprovider "github.com/containous/traefik/provider/acme"
"github.com/containous/traefik/types"
acme "github.com/xenolf/lego/acmev2"
"github.com/xenolf/lego/acme"
)

// Account is used to store lets encrypt registration info

@@ -42,6 +44,11 @@ func (a *Account) Init() error {
return err
}

err = a.RemoveAccountV1Values()
if err != nil {
log.Errorf("Unable to remove ACME Account V1 values during account initialization: %v", err)
}

for _, cert := range a.ChallengeCerts {
if cert.certificate == nil {
certificate, err := tls.X509KeyPair(cert.Certificate, cert.PrivateKey)

@@ -103,6 +110,29 @@ func (a *Account) GetPrivateKey() crypto.PrivateKey {
return nil
}

// RemoveAccountV1Values removes ACME account V1 values
func (a *Account) RemoveAccountV1Values() error {
// Check if ACME Account is in ACME V1 format
if a.Registration != nil {
isOldRegistration, err := regexp.MatchString(acmeprovider.RegistrationURLPathV1Regexp, a.Registration.URI)
if err != nil {
return err
}

if isOldRegistration {
a.reset()
}
}
return nil
}

func (a *Account) reset() {
log.Debug("Reset ACME account object.")
a.Email = ""
a.Registration = nil
a.PrivateKey = nil
}

// Certificate is used to store certificate info
type Certificate struct {
Domain string

@@ -152,11 +182,23 @@ func (dc *DomainsCertificates) removeDuplicates() {
}
}

func (dc *DomainsCertificates) removeEmpty() {
certs := []*DomainsCertificate{}
for _, cert := range dc.Certs {
if cert.Certificate != nil && len(cert.Certificate.Certificate) > 0 && len(cert.Certificate.PrivateKey) > 0 {
certs = append(certs, cert)
}
}
dc.Certs = certs
}

// Init DomainsCertificates
func (dc *DomainsCertificates) Init() error {
dc.lock.Lock()
defer dc.lock.Unlock()

dc.removeEmpty()

for _, domainsCertificate := range dc.Certs {
tlsCert, err := tls.X509KeyPair(domainsCertificate.Certificate.Certificate, domainsCertificate.Certificate.PrivateKey)
if err != nil {
59 acme/acme.go
@@ -9,9 +9,9 @@ import (
fmtlog "log"
"net"
"net/http"
"os"
"reflect"
"strings"
"sync"
"time"

"github.com/BurntSushi/ty/fun"

@@ -25,8 +25,11 @@ import (
"github.com/containous/traefik/safe"
"github.com/containous/traefik/tls/generate"
"github.com/containous/traefik/types"
"github.com/containous/traefik/version"
"github.com/eapache/channels"
acme "github.com/xenolf/lego/acmev2"
"github.com/sirupsen/logrus"
"github.com/xenolf/lego/acme"
legolog "github.com/xenolf/lego/log"
"github.com/xenolf/lego/providers/dns"
)

@@ -42,7 +45,7 @@ type ACME struct {
Domains []types.Domain `description:"SANs (alternative domains) to each main domain using format: --acme.domains='main.com,san1.com,san2.com' --acme.domains='main.net,san1.net,san2.net'"`
Storage string `description:"File or key used for certificates storage."`
StorageFile string // Deprecated
OnDemand bool `description:"(Deprecated) Enable on demand certificate generation. This will request a certificate from Let's Encrypt during the first TLS handshake for a hostname that does not yet have a certificate."` //deprecated
OnDemand bool `description:"(Deprecated) Enable on demand certificate generation. This will request a certificate from Let's Encrypt during the first TLS handshake for a hostname that does not yet have a certificate."` // Deprecated
OnHostRule bool `description:"Enable certificate generation on frontends Host rules."`
CAServer string `description:"CA server to use."`
EntryPoint string `description:"Entrypoint to proxy acme challenge to."`

@@ -59,22 +62,32 @@ type ACME struct {
jobs *channels.InfiniteChannel
TLSConfig *tls.Config `description:"TLS config in case wildcard certs are used"`
dynamicCerts *safe.Safe
resolvingDomains map[string]struct{}
resolvingDomainsMutex sync.RWMutex
}

func (a *ACME) init() error {
acme.UserAgent = fmt.Sprintf("containous-traefik/%s", version.Version)

if a.ACMELogging {
acme.Logger = fmtlog.New(os.Stderr, "legolog: ", fmtlog.LstdFlags)
legolog.Logger = fmtlog.New(log.WriterLevel(logrus.DebugLevel), "legolog: ", 0)
} else {
acme.Logger = fmtlog.New(ioutil.Discard, "", 0)
legolog.Logger = fmtlog.New(ioutil.Discard, "", 0)
}

// no certificates in TLS config, so we add a default one
cert, err := generate.DefaultCertificate()
if err != nil {
return err
}

a.defaultCertificate = cert

a.jobs = channels.NewInfiniteChannel()

// Init the currently resolved domain map
a.resolvingDomains = make(map[string]struct{})

return nil
}

@@ -178,6 +191,10 @@ func (a *ACME) leadershipListener(elected bool) error {

account := object.(*Account)
account.Init()
// Reset Account values if caServer changed, thus registration URI can be updated
if account != nil && account.Registration != nil && !strings.HasPrefix(account.Registration.URI, a.CAServer) {
account.reset()
}

var needRegister bool
if account == nil || len(account.Email) == 0 {

@@ -492,6 +509,10 @@ func (a *ACME) LoadCertificateForDomains(domains []string) {
if len(uncheckedDomains) == 0 {
return
}

a.addResolvingDomains(uncheckedDomains)
defer a.removeResolvingDomains(uncheckedDomains)

certificate, err := a.getDomainsCertificates(uncheckedDomains)
if err != nil {
log.Errorf("Error getting ACME certificates %+v : %v", uncheckedDomains, err)

@@ -523,6 +544,24 @@ func (a *ACME) LoadCertificateForDomains(domains []string) {
}
}

func (a *ACME) addResolvingDomains(resolvingDomains []string) {
a.resolvingDomainsMutex.Lock()
defer a.resolvingDomainsMutex.Unlock()

for _, domain := range resolvingDomains {
a.resolvingDomains[domain] = struct{}{}
}
}

func (a *ACME) removeResolvingDomains(resolvingDomains []string) {
a.resolvingDomainsMutex.Lock()
defer a.resolvingDomainsMutex.Unlock()

for _, domain := range resolvingDomains {
delete(a.resolvingDomains, domain)
}
}

// Get provided certificate which check a domains list (Main and SANs)
// from static and dynamic provided certificates
func (a *ACME) getProvidedCertificate(domains string) *tls.Certificate {

@@ -558,6 +597,9 @@ func searchProvidedCertificateForDomains(domain string, certs map[string]*tls.Ce
// Get provided certificate which check a domains list (Main and SANs)
// from static and dynamic provided certificates
func (a *ACME) getUncheckedDomains(domains []string, account *Account) []string {
a.resolvingDomainsMutex.RLock()
defer a.resolvingDomainsMutex.RUnlock()

log.Debugf("Looking for provided certificate to validate %s...", domains)
allCerts := make(map[string]*tls.Certificate)

@@ -580,6 +622,13 @@ func (a *ACME) getUncheckedDomains(domains []string, account *Account) []string
}
}

// Get currently resolved domains
for domain := range a.resolvingDomains {
if _, ok := allCerts[domain]; !ok {
allCerts[domain] = &tls.Certificate{}
}
}

// Get Configuration Domains
for i := 0; i < len(a.Domains); i++ {
allCerts[a.Domains[i].Main] = &tls.Certificate{}
@@ -6,6 +6,7 @@ import (
"net/http"
"net/http/httptest"
"reflect"
"sort"
"sync"
"testing"
"time"

@@ -14,7 +15,7 @@
"github.com/containous/traefik/tls/generate"
"github.com/containous/traefik/types"
"github.com/stretchr/testify/assert"
acme "github.com/xenolf/lego/acmev2"
"github.com/xenolf/lego/acme"
)

func TestDomainsSet(t *testing.T) {

@@ -330,9 +331,12 @@ func TestAcme_getUncheckedCertificates(t *testing.T) {
mm["*.containo.us"] = &tls.Certificate{}
mm["traefik.acme.io"] = &tls.Certificate{}

a := ACME{TLSConfig: &tls.Config{NameToCertificate: mm}}
dm := make(map[string]struct{})
dm["*.traefik.wtf"] = struct{}{}

domains := []string{"traefik.containo.us", "trae.containo.us"}
a := ACME{TLSConfig: &tls.Config{NameToCertificate: mm}, resolvingDomains: dm}

domains := []string{"traefik.containo.us", "trae.containo.us", "foo.traefik.wtf"}
uncheckedDomains := a.getUncheckedDomains(domains, nil)
assert.Empty(t, uncheckedDomains)
domains = []string{"traefik.acme.io", "trae.acme.io"}

@@ -350,6 +354,9 @@ func TestAcme_getUncheckedCertificates(t *testing.T) {
account := Account{DomainsCertificate: domainsCertificates}
uncheckedDomains = a.getUncheckedDomains(domains, &account)
assert.Empty(t, uncheckedDomains)
domains = []string{"traefik.containo.us", "trae.containo.us", "traefik.wtf"}
uncheckedDomains = a.getUncheckedDomains(domains, nil)
assert.Len(t, uncheckedDomains, 1)
}

func TestAcme_getProvidedCertificate(t *testing.T) {

@@ -550,3 +557,268 @@ func TestAcme_getCertificateForDomain(t *testing.T) {
})
}
}
func TestRemoveEmptyCertificates(t *testing.T) {
now := time.Now()
fooCert, fooKey, _ := generate.KeyPair("foo.com", now)
acmeCert, acmeKey, _ := generate.KeyPair("acme.wtf", now.Add(24*time.Hour))
barCert, barKey, _ := generate.KeyPair("bar.com", now)
testCases := []struct {
desc string
dc *DomainsCertificates
expectedDc *DomainsCertificates
}{
{
desc: "No empty certificate",
dc: &DomainsCertificates{
Certs: []*DomainsCertificate{
{
Certificate: &Certificate{
Certificate: fooCert,
PrivateKey: fooKey,
},
Domains: types.Domain{
Main: "foo.com",
},
},
{
Certificate: &Certificate{
Certificate: acmeCert,
PrivateKey: acmeKey,
},
Domains: types.Domain{
Main: "acme.wtf",
},
},
{
Certificate: &Certificate{
Certificate: barCert,
PrivateKey: barKey,
},
Domains: types.Domain{
Main: "bar.com",
},
},
},
},
expectedDc: &DomainsCertificates{
Certs: []*DomainsCertificate{
{
Certificate: &Certificate{
Certificate: fooCert,
PrivateKey: fooKey,
},
Domains: types.Domain{
Main: "foo.com",
},
},
{
Certificate: &Certificate{
Certificate: acmeCert,
PrivateKey: acmeKey,
},
Domains: types.Domain{
Main: "acme.wtf",
},
},
{
Certificate: &Certificate{
Certificate: barCert,
PrivateKey: barKey,
},
Domains: types.Domain{
Main: "bar.com",
},
},
},
},
},
{
desc: "First certificate is nil",
dc: &DomainsCertificates{
Certs: []*DomainsCertificate{
{
Domains: types.Domain{
Main: "foo.com",
},
},
{
Certificate: &Certificate{
Certificate: acmeCert,
PrivateKey: acmeKey,
},
Domains: types.Domain{
Main: "acme.wtf",
},
},
{
Certificate: &Certificate{
Certificate: barCert,
PrivateKey: barKey,
},
Domains: types.Domain{
Main: "bar.com",
},
},
},
},
expectedDc: &DomainsCertificates{
Certs: []*DomainsCertificate{
{
Certificate: &Certificate{
Certificate: acmeCert,
PrivateKey: acmeKey,
},
Domains: types.Domain{
Main: "acme.wtf",
},
},
{
Certificate: &Certificate{
Certificate: nil,
PrivateKey: barKey,
},
Domains: types.Domain{
Main: "bar.com",
},
},
},
},
},
{
desc: "Last certificate is empty",
dc: &DomainsCertificates{
Certs: []*DomainsCertificate{
{
Certificate: &Certificate{
Certificate: fooCert,
PrivateKey: fooKey,
},
Domains: types.Domain{
Main: "foo.com",
},
},
{
Certificate: &Certificate{
Certificate: acmeCert,
PrivateKey: acmeKey,
},
Domains: types.Domain{
Main: "acme.wtf",
},
},
{
Certificate: &Certificate{},
Domains: types.Domain{
Main: "bar.com",
},
},
},
},
expectedDc: &DomainsCertificates{
Certs: []*DomainsCertificate{
{
Certificate: &Certificate{
Certificate: fooCert,
PrivateKey: fooKey,
},
Domains: types.Domain{
Main: "foo.com",
},
},
{
Certificate: &Certificate{
Certificate: acmeCert,
PrivateKey: acmeKey,
},
Domains: types.Domain{
Main: "acme.wtf",
},
},
},
},
},
{
desc: "First and last certificates are nil or empty",
dc: &DomainsCertificates{
Certs: []*DomainsCertificate{
{
Domains: types.Domain{
Main: "foo.com",
},
},
{
Certificate: &Certificate{
Certificate: acmeCert,
PrivateKey: acmeKey,
},
Domains: types.Domain{
Main: "acme.wtf",
},
},
{
Certificate: &Certificate{},
Domains: types.Domain{
Main: "bar.com",
},
},
},
},
expectedDc: &DomainsCertificates{
Certs: []*DomainsCertificate{
{
Certificate: &Certificate{
Certificate: acmeCert,
PrivateKey: acmeKey,
},
Domains: types.Domain{
Main: "acme.wtf",
},
},
},
},
},
{
desc: "All certificates are nil or empty",
dc: &DomainsCertificates{
Certs: []*DomainsCertificate{
{
Domains: types.Domain{
Main: "foo.com",
},
},
{
Domains: types.Domain{
Main: "foo24.com",
},
},
{
Certificate: &Certificate{},
Domains: types.Domain{
Main: "bar.com",
},
},
},
},
expectedDc: &DomainsCertificates{
Certs: []*DomainsCertificate{},
},
},
}
for _, test := range testCases {
test := test
t.Run(test.desc, func(t *testing.T) {
t.Parallel()

a := &Account{DomainsCertificate: *test.dc}
a.Init()

assert.Equal(t, len(test.expectedDc.Certs), len(a.DomainsCertificate.Certs))
sort.Sort(&a.DomainsCertificate)
sort.Sort(test.expectedDc)
for key, value := range test.expectedDc.Certs {
assert.Equal(t, value.Domains.Main, a.DomainsCertificate.Certs[key].Domains.Main)
}
})
}
}
@@ -9,7 +9,7 @@ import (
"github.com/containous/traefik/cluster"
"github.com/containous/traefik/log"
"github.com/containous/traefik/safe"
acme "github.com/xenolf/lego/acmev2"
"github.com/xenolf/lego/acme"
)

var _ acme.ChallengeProviderTimeout = (*challengeHTTPProvider)(nil)
@@ -4,7 +4,6 @@ import (
"encoding/json"
"io/ioutil"
"os"
"regexp"

"github.com/containous/traefik/log"
"github.com/containous/traefik/provider/acme"

@@ -51,24 +50,6 @@ func (s *LocalStore) Get() (*Account, error) {
return account, nil
}

// RemoveAccountV1Values removes ACME account V1 values
func RemoveAccountV1Values(account *Account) error {
// Check if ACME Account is in ACME V1 format
if account != nil && account.Registration != nil {
isOldRegistration, err := regexp.MatchString(acme.RegistrationURLPathV1Regexp, account.Registration.URI)
if err != nil {
return err
}

if isOldRegistration {
account.Email = ""
account.Registration = nil
account.PrivateKey = nil
}
}
return nil
}

// ConvertToNewFormat converts old acme.json format to the new one and store the result into the file (used for the backward compatibility)
func ConvertToNewFormat(fileName string) {
localStore := acme.NewLocalStore(fileName)

@@ -99,13 +80,13 @@ func ConvertToNewFormat(fileName string) {
if account != nil && len(account.Email) > 0 {
err = backupACMEFile(fileName, account)
if err != nil {
log.Errorf("Unable to create a backup for the V1 formatted ACME file: %s", err.Error())
log.Errorf("Unable to create a backup for the V1 formatted ACME file: %v", err)
return
}

err = RemoveAccountV1Values(account)
err = account.RemoveAccountV1Values()
if err != nil {
log.Errorf("Unable to remove ACME Account V1 values: %s", err.Error())
log.Errorf("Unable to remove ACME Account V1 values during format conversion: %v", err)
return
}
@@ -140,7 +140,7 @@ func migrateACMEData(fileName string) (*acme.Account, error) {
return nil, err
}

err = acme.RemoveAccountV1Values(account)
err = account.RemoveAccountV1Values()
if err != nil {
return nil, err
}
@@ -175,7 +175,7 @@ func runCmd(globalConfiguration *configuration.GlobalConfiguration, configFile s
log.Debugf("Global configuration loaded %s", string(jsonConf))
if acme.IsEnabled() {
store := acme.NewLocalStore(acme.Get().Storage)
acme.Get().Store = &store
acme.Get().Store = store
}
svr := server.NewServer(*globalConfiguration, configuration.NewProviderAggregator(globalConfiguration))
if acme.IsEnabled() && acme.Get().OnHostRule {
@@ -58,19 +58,19 @@
// GlobalConfiguration holds global configuration (with providers, etc.).
// It's populated from the traefik configuration file passed as an argument to the binary.
type GlobalConfiguration struct {
LifeCycle *LifeCycle `description:"Timeouts influencing the server life cycle" export:"true"`
GraceTimeOut flaeg.Duration `short:"g" description:"(Deprecated) Duration to give active requests a chance to finish before Traefik stops" export:"true"` // Deprecated
Debug bool `short:"d" description:"Enable debug mode" export:"true"`
CheckNewVersion bool `description:"Periodically check if a new version has been released" export:"true"`
SendAnonymousUsage bool `description:"send periodically anonymous usage statistics" export:"true"`
AccessLogsFile string `description:"(Deprecated) Access logs file" export:"true"` // Deprecated
AccessLog *types.AccessLog `description:"Access log settings" export:"true"`
TraefikLogsFile string `description:"(Deprecated) Traefik logs file. Stdout is used when omitted or empty" export:"true"` // Deprecated
TraefikLog *types.TraefikLog `description:"Traefik log settings" export:"true"`
Tracing *tracing.Tracing `description:"OpenTracing configuration" export:"true"`
LogLevel string `short:"l" description:"Log level" export:"true"`
EntryPoints EntryPoints `description:"Entrypoints definition using format: --entryPoints='Name:http Address::8000 Redirect.EntryPoint:https' --entryPoints='Name:https Address::4442 TLS:tests/traefik.crt,tests/traefik.key;prod/traefik.crt,prod/traefik.key'" export:"true"`
Cluster *types.Cluster `description:"Enable clustering" export:"true"`
LifeCycle *LifeCycle `description:"Timeouts influencing the server life cycle" export:"true"`
GraceTimeOut flaeg.Duration `short:"g" description:"(Deprecated) Duration to give active requests a chance to finish before Traefik stops" export:"true"` // Deprecated
Debug bool `short:"d" description:"Enable debug mode" export:"true"`
CheckNewVersion bool `description:"Periodically check if a new version has been released" export:"true"`
SendAnonymousUsage bool `description:"send periodically anonymous usage statistics" export:"true"`
AccessLogsFile string `description:"(Deprecated) Access logs file" export:"true"` // Deprecated
AccessLog *types.AccessLog `description:"Access log settings" export:"true"`
TraefikLogsFile string `description:"(Deprecated) Traefik logs file. Stdout is used when omitted or empty" export:"true"` // Deprecated
TraefikLog *types.TraefikLog `description:"Traefik log settings" export:"true"`
Tracing *tracing.Tracing `description:"OpenTracing configuration" export:"true"`
LogLevel string `short:"l" description:"Log level" export:"true"`
EntryPoints EntryPoints `description:"Entrypoints definition using format: --entryPoints='Name:http Address::8000 Redirect.EntryPoint:https' --entryPoints='Name:https Address::4442 TLS:tests/traefik.crt,tests/traefik.key;prod/traefik.crt,prod/traefik.key'" export:"true"`
Cluster *types.Cluster
Constraints types.Constraints `description:"Filter services by constraint, matching with service tags" export:"true"`
ACME *acme.ACME `description:"Enable ACME (Let's Encrypt): automatic SSL" export:"true"`
DefaultEntryPoints DefaultEntryPoints `description:"Entrypoints to be used by frontends that do not specify any entrypoint" export:"true"`
@@ -1,4 +1,4 @@
FROM alpine
FROM alpine:3.14

ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/.local/bin
@@ -1,6 +1,6 @@
# ACME (Let's Encrypt) configuration
# ACME (Let's Encrypt) Configuration

See also [Let's Encrypt examples](/user-guide/examples/#lets-encrypt-support) and [Docker & Let's Encrypt user guide](/user-guide/docker-and-lets-encrypt).
See [Let's Encrypt examples](/user-guide/examples/#lets-encrypt-support) and [Docker & Let's Encrypt user guide](/user-guide/docker-and-lets-encrypt) as well.

## Configuration

@@ -63,14 +63,14 @@ entryPoint = "https"
#
# acmeLogging = true

# Enable on demand certificate generation.
# Deprecated. Enable on demand certificate generation.
#
# Optional (Deprecated)
# Optional
# Default: false
#
# onDemand = true

# Enable certificate generation on frontends Host rules.
# Enable certificate generation on frontends host rules.
#
# Optional
# Default: false

@@ -78,8 +78,8 @@ entryPoint = "https"
# onHostRule = true

# CA server to use.
# - Uncomment the line to run on the staging let's encrypt server.
# - Leave comment to go to prod.
# Uncomment the line to use Let's Encrypt's staging server,
# leave commented to go to prod.
#
# Optional
# Default: "https://acme-v02.api.letsencrypt.org/directory"

@@ -94,15 +94,13 @@ entryPoint = "https"
# sans = ["test1.local1.com", "test2.local1.com"]
# [[acme.domains]]
# main = "local2.com"
# sans = ["test1.local2.com", "test2.local2.com"]
# [[acme.domains]]
# main = "local3.com"
# [[acme.domains]]
# main = "local4.com"
# main = "*.local3.com"
# sans = ["local3.com", "test1.test1.local3.com"]

# Use a HTTP-01 acme challenge.
# Use a HTTP-01 ACME challenge.
#
# Optional but recommend
# Optional (but recommended)
#
[acme.httpChallenge]

@@ -112,21 +110,21 @@ entryPoint = "https"
#
entryPoint = "http"

# Use a DNS-01/DNS-01 acme challenge rather than HTTP-01 challenge.
# Note : Mandatory for wildcard certificates generation.
# Use a DNS-01 ACME challenge rather than HTTP-01 challenge.
# Note: mandatory for wildcard certificate generation.
#
# Optional
#
# [acme.dnsChallenge]

# Provider used.
# DNS provider used.
#
# Required
#
# provider = "digitalocean"

# By default, the provider will verify the TXT DNS challenge record before letting ACME verify.
# If delayBeforeCheck is greater than zero, avoid this & instead just wait so many seconds.
# If delayBeforeCheck is greater than zero, this check is delayed for the configured duration in seconds.
# Useful if internal networks block external DNS queries.
#
# Optional

@@ -135,98 +133,134 @@ entryPoint = "https"
# delayBeforeCheck = 0
```

!!! note
    If `HTTP-01` challenge is used, `acme.httpChallenge.entryPoint` has to be defined and reachable by Let's Encrypt through the port 80.
    These are Let's Encrypt limitations as described on the [community forum](https://community.letsencrypt.org/t/support-for-ports-other-than-80-and-443/3419/72).
### `caServer`

!!! note
    Wildcard certificates can be generated only if `acme.dnsChallenge`
    option is enable.
The CA server to use.

### Let's Encrypt downtime

Let's Encrypt functionality will be limited until Træfik is restarted.

If Let's Encrypt is not reachable, these certificates will be used :

- ACME certificates already generated before downtime
- Expired ACME certificates
- Provided certificates

!!! note
    Default Træfik certificate will be used instead of ACME certificates for new (sub)domains (which need Let's Encrypt challenge).

### `storage`
This example shows the usage of Let's Encrypt's staging server:

```toml
[acme]
# ...
storage = "acme.json"
caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
# ...
```

The `storage` option sets where are stored your ACME certificates.
### `dnsChallenge`

There are two kind of `storage` :
Use the `DNS-01` challenge to generate and renew ACME certificates by provisioning a DNS record.

- a JSON file,
- a KV store entry.
```toml
[acme]
# ...
[acme.dnsChallenge]
provider = "digitalocean"
delayBeforeCheck = 0
# ...
```

!!! danger "DEPRECATED"
    `storage` replaces `storageFile` which is deprecated.
#### `delayBeforeCheck`

By default, the `provider` will verify the TXT DNS challenge record before letting ACME verify.
If `delayBeforeCheck` is greater than zero, this check is delayed for the configured duration in seconds.

Useful if internal networks block external DNS queries.

!!! note
    During Træfik configuration migration from a configuration file to a KV store (thanks to `storeconfig` subcommand as described [here](/user-guide/kv-config/#store-configuration-in-key-value-store)), if ACME certificates have to be migrated too, use both `storageFile` and `storage`.
A `provider` is mandatory.

- `storageFile` will contain the path to the `acme.json` file to migrate.
- `storage` will contain the key where the certificates will be stored.
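
As a hypothetical illustration of that migration note, both keys could be set at once during the `storeconfig` run; the KV key shown here simply reuses the `traefik/acme/account` example from the KV storage section below and is only an assumption about your layout:

```toml
# Hypothetical migration sketch: keep the old V1 file as the source
# and point `storage` at the KV entry that will receive the certificates.
[acme]
  storageFile = "acme.json"
  storage = "traefik/acme/account"
```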
#### `provider`

#### Store data in a file
Here is a list of supported `provider`s, that can automate the DNS verification, along with the required environment variables and their [wildcard & root domain support](/configuration/acme/#wildcard-domains) for each. Do not hesitate to complete it.

ACME certificates can be stored in a JSON file which with the `600` right mode.
| Provider Name | Provider Code | Environment Variables | Wildcard & Root Domain Support |
|---|---|---|---|
| [Auroradns](https://www.pcextreme.com/aurora/dns) | `auroradns` | `AURORA_USER_ID`, `AURORA_KEY`, `AURORA_ENDPOINT` | Not tested yet |
| [Azure](https://azure.microsoft.com/services/dns/) | `azure` | `AZURE_CLIENT_ID`, `AZURE_CLIENT_SECRET`, `AZURE_SUBSCRIPTION_ID`, `AZURE_TENANT_ID`, `AZURE_RESOURCE_GROUP` | Not tested yet |
| [Blue Cat](https://www.bluecatnetworks.com/) | `bluecat` | `BLUECAT_SERVER_URL`, `BLUECAT_USER_NAME`, `BLUECAT_PASSWORD`, `BLUECAT_CONFIG_NAME`, `BLUECAT_DNS_VIEW` | Not tested yet |
| [Cloudflare](https://www.cloudflare.com) | `cloudflare` | `CLOUDFLARE_EMAIL`, `CLOUDFLARE_API_KEY` - The `Global API Key` needs to be used, not the `Origin CA Key` | YES |
| [CloudXNS](https://www.cloudxns.net) | `cloudxns` | `CLOUDXNS_API_KEY`, `CLOUDXNS_SECRET_KEY` | Not tested yet |
| [DigitalOcean](https://www.digitalocean.com) | `digitalocean` | `DO_AUTH_TOKEN` | YES |
| [DNSimple](https://dnsimple.com) | `dnsimple` | `DNSIMPLE_OAUTH_TOKEN`, `DNSIMPLE_BASE_URL` | Not tested yet |
| [DNS Made Easy](https://dnsmadeeasy.com) | `dnsmadeeasy` | `DNSMADEEASY_API_KEY`, `DNSMADEEASY_API_SECRET`, `DNSMADEEASY_SANDBOX` | Not tested yet |
| [DNSPod](http://www.dnspod.net/) | `dnspod` | `DNSPOD_API_KEY` | Not tested yet |
| [Duck DNS](https://www.duckdns.org/) | `duckdns` | `DUCKDNS_TOKEN` | Not tested yet |
| [Dyn](https://dyn.com) | `dyn` | `DYN_CUSTOMER_NAME`, `DYN_USER_NAME`, `DYN_PASSWORD` | Not tested yet |
| External Program | `exec` | `EXEC_PATH` | Not tested yet |
| [Exoscale](https://www.exoscale.ch) | `exoscale` | `EXOSCALE_API_KEY`, `EXOSCALE_API_SECRET`, `EXOSCALE_ENDPOINT` | YES |
| [Fast DNS](https://www.akamai.com/) | `fastdns` | `AKAMAI_CLIENT_TOKEN`, `AKAMAI_CLIENT_SECRET`, `AKAMAI_ACCESS_TOKEN` | Not tested yet |
| [Gandi](https://www.gandi.net) | `gandi` | `GANDI_API_KEY` | Not tested yet |
| [Gandi V5](http://doc.livedns.gandi.net) | `gandiv5` | `GANDIV5_API_KEY` | YES |
| [Glesys](https://glesys.com/) | `glesys` | `GLESYS_API_USER`, `GLESYS_API_KEY`, `GLESYS_DOMAIN` | Not tested yet |
| [GoDaddy](https://godaddy.com/domains) | `godaddy` | `GODADDY_API_KEY`, `GODADDY_API_SECRET` | Not tested yet |
| [Google Cloud DNS](https://cloud.google.com/dns/docs/) | `gcloud` | `GCE_PROJECT`, `GCE_SERVICE_ACCOUNT_FILE` | YES |
| [Lightsail](https://aws.amazon.com/lightsail/) | `lightsail` | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `DNS_ZONE` | Not tested yet |
| [Linode](https://www.linode.com) | `linode` | `LINODE_API_KEY` | Not tested yet |
| manual | - | none, but you need to run Træfik interactively, turn on `acmeLogging` to see instructions and press <kbd>Enter</kbd>. | YES |
| [Namecheap](https://www.namecheap.com) | `namecheap` | `NAMECHEAP_API_USER`, `NAMECHEAP_API_KEY` | Not tested yet |
| [name.com](https://www.name.com/) | `namedotcom` | `NAMECOM_USERNAME`, `NAMECOM_API_TOKEN`, `NAMECOM_SERVER` | Not tested yet |
| [Ns1](https://ns1.com/) | `ns1` | `NS1_API_KEY` | Not tested yet |
| [Open Telekom Cloud](https://cloud.telekom.de/en/) | `otc` | `OTC_DOMAIN_NAME`, `OTC_USER_NAME`, `OTC_PASSWORD`, `OTC_PROJECT_NAME`, `OTC_IDENTITY_ENDPOINT` | Not tested yet |
| [OVH](https://www.ovh.com) | `ovh` | `OVH_ENDPOINT`, `OVH_APPLICATION_KEY`, `OVH_APPLICATION_SECRET`, `OVH_CONSUMER_KEY` | YES |
| [PowerDNS](https://www.powerdns.com) | `pdns` | `PDNS_API_KEY`, `PDNS_API_URL` | Not tested yet |
| [Rackspace](https://www.rackspace.com/cloud/dns) | `rackspace` | `RACKSPACE_USER`, `RACKSPACE_API_KEY` | Not tested yet |
| [RFC2136](https://tools.ietf.org/html/rfc2136) | `rfc2136` | `RFC2136_TSIG_KEY`, `RFC2136_TSIG_SECRET`, `RFC2136_TSIG_ALGORITHM`, `RFC2136_NAMESERVER` | Not tested yet |
| [Route 53](https://aws.amazon.com/route53/) | `route53` | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`, `AWS_HOSTED_ZONE_ID` or a configured user/instance IAM profile. | YES |
| [VULTR](https://www.vultr.com) | `vultr` | `VULTR_API_KEY` | Not tested yet |
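
As an illustration of the table above, a minimal hypothetical sketch for one of the listed providers (Cloudflare); the `CLOUDFLARE_EMAIL` and `CLOUDFLARE_API_KEY` variables come from the table and must be present in Træfik's environment:

```toml
# Hypothetical sketch: DNS-01 challenge through the Cloudflare provider.
# The Global API Key (not the Origin CA Key) is expected in CLOUDFLARE_API_KEY.
[acme]
  # ...
  [acme.dnsChallenge]
  provider = "cloudflare"
  delayBeforeCheck = 0
```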

There are two ways to store ACME certificates in a file from Docker:
### `domains`

You can provide SANs (alternative domains) to each main domain.
All domains must have A/AAAA records pointing to Træfik.
Each domain & SAN will lead to a certificate request.

- create a file on your host and mount it as a volume:
```toml
storage = "acme.json"
```
```bash
docker run -v "/my/host/acme.json:acme.json" traefik
```
- mount the folder containing the file as a volume
```toml
storage = "/etc/traefik/acme/acme.json"
```
```bash
docker run -v "/my/host/acme:/etc/traefik/acme" traefik
```

```toml
[acme]
# ...
[[acme.domains]]
main = "local1.com"
sans = ["test1.local1.com", "test2.local1.com"]
[[acme.domains]]
main = "local2.com"
[[acme.domains]]
main = "*.local3.com"
sans = ["local3.com", "test1.test1.local3.com"]
# ...
```

!!! warning
    This file cannot be shared per many instances of Træfik at the same time.
    If you have to use Træfik cluster mode, please use [a KV Store entry](/configuration/acme/#storage-kv-entry).

#### Store data in a KV store entry

ACME certificates can be stored in a KV Store entry.

```toml
storage = "traefik/acme/account"
```

**This kind of storage is mandatory in cluster mode.**

Because KV stores (like Consul) have limited entries size, the certificates list is compressed before to be set in a KV store entry.
Take note that Let's Encrypt applies [rate limiting](https://letsencrypt.org/docs/rate-limits).

!!! note
    It's possible to store up to approximately 100 ACME certificates in Consul.
Wildcard certificates can only be verified through a `DNS-01` challenge.

#### Wildcard Domains

[ACME V2](https://community.letsencrypt.org/t/acme-v2-and-wildcard-certificate-support-is-live/55579) allows wildcard certificate support.
As described in [Let's Encrypt's post](https://community.letsencrypt.org/t/staging-endpoint-for-acme-v2/49605) wildcard certificates can only be generated through a [`DNS-01` challenge](/configuration/acme/#dnschallenge).

```toml
[acme]
# ...
[[acme.domains]]
main = "*.local1.com"
sans = ["local1.com"]
# ...
```

It is not possible to request a double wildcard certificate for a domain (for example `*.*.local.com`).
Due to ACME limitation it is not possible to define wildcards in SANs (alternative domains). Thus, the wildcard domain has to be defined as a main domain.
Most likely the root domain should receive a certificate too, so it needs to be specified as SAN and 2 `DNS-01` challenges are executed.
In this case the generated DNS TXT record for both domains is the same.
Eventhough this behaviour is [DNS RFC](https://community.letsencrypt.org/t/wildcard-issuance-two-txt-records-for-the-same-name/54528/2) compliant, it can lead to problems as all DNS providers keep DNS records cached for a certain time (TTL) and this TTL can be superior to the challenge timeout making the `DNS-01` challenge fail.
The Træfik ACME client library [LEGO](https://github.com/xenolf/lego) supports some but not all DNS providers to work around this issue.
The [`provider` table](/configuration/acme/#provider) indicates if they allow generating certificates for a wildcard domain and its root domain.

### `httpChallenge`

Use `HTTP-01` challenge to generate/renew ACME certificates.
Use the `HTTP-01` challenge to generate and renew ACME certificates by provisioning a HTTP resource under a well-known URI.

The redirection is fully compatible with the HTTP-01 challenge.
You can use redirection with HTTP-01 challenge without problem.
Redirection is fully compatible with the `HTTP-01` challenge.

```toml
[acme]
@@ -236,6 +270,10 @@ entryPoint = "https"
entryPoint = "http"
```

!!! note
    If the `HTTP-01` challenge is used, `acme.httpChallenge.entryPoint` has to be defined and reachable by Let's Encrypt through port 80.
    This is a Let's Encrypt limitation as described on the [community forum](https://community.letsencrypt.org/t/support-for-ports-other-than-80-and-443/3419/72).

#### `entryPoint`

Specify the entryPoint to use during the challenges.

@@ -259,73 +297,7 @@ defaultEntryPoints = ["http", "https"]
```
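
The hunk above only shows the tail of the corresponding example; a fuller hypothetical sketch of the usual layout (the `:80`/`:443` addresses are assumed defaults, not part of this diff) could look like:

```toml
# Hypothetical sketch: an "http" entrypoint on port 80 answering the HTTP-01
# challenge, next to the usual "https" entrypoint.
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

[acme]
  # ...
  [acme.httpChallenge]
  entryPoint = "http"
```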
|
||||
|
||||
!!! note
|
||||
`acme.httpChallenge.entryPoint` has to be reachable by Let's Encrypt through the port 80.
|
||||
It's a Let's Encrypt limitation as described on the [community forum](https://community.letsencrypt.org/t/support-for-ports-other-than-80-and-443/3419/72).
|
||||
|
||||
### `dnsChallenge`
|
||||
|
||||
Use `DNS-01/DNS-01` challenge to generate/renew ACME certificates.
|
||||
|
||||
```toml
|
||||
[acme]
|
||||
# ...
|
||||
[acme.dnsChallenge]
|
||||
provider = "digitalocean"
|
||||
delayBeforeCheck = 0
|
||||
# ...
|
||||
```
|
||||
|
||||
!!! note
|
||||
ACME wildcard certificates can only be generated thanks to a `DNS-01` challenge.
|
||||
|
||||
#### `provider`
|
||||
|
||||
Select the provider that matches the DNS domain that will host the challenge TXT record, and provide environment variables to enable setting it:
|
||||
|
||||
| Provider Name | Provider code | Configuration |
|
||||
|--------------------------------------------------------|----------------|---------------------------------------------------------------------------------------------------------------------------|
|
||||
| [Auroradns](https://www.pcextreme.com/aurora/dns) | `auroradns` | `AURORA_USER_ID`, `AURORA_KEY`, `AURORA_ENDPOINT` |
|
||||
| [Azure](https://azure.microsoft.com/services/dns/) | `azure` | `AZURE_CLIENT_ID`, `AZURE_CLIENT_SECRET`, `AZURE_SUBSCRIPTION_ID`, `AZURE_TENANT_ID`, `AZURE_RESOURCE_GROUP` |
|
||||
| [Blue Cat](https://www.bluecatnetworks.com/) | `bluecat` | `BLUECAT_SERVER_URL`, `BLUECAT_USER_NAME`, `BLUECAT_PASSWORD`, `BLUECAT_CONFIG_NAME`, `BLUECAT_DNS_VIEW` |
|
||||
| [Cloudflare](https://www.cloudflare.com) | `cloudflare` | `CLOUDFLARE_EMAIL`, `CLOUDFLARE_API_KEY` - The Cloudflare `Global API Key` needs to be used and not the `Origin CA Key` |
|
||||
| [CloudXNS](https://www.cloudxns.net) | `cloudxns` | `CLOUDXNS_API_KEY`, `CLOUDXNS_SECRET_KEY` |
|
||||
| [DigitalOcean](https://www.digitalocean.com) | `digitalocean` | `DO_AUTH_TOKEN` |
|
||||
| [DNSimple](https://dnsimple.com) | `dnsimple` | `DNSIMPLE_OAUTH_TOKEN`, `DNSIMPLE_BASE_URL` |
|
||||
| [DNS Made Easy](https://dnsmadeeasy.com) | `dnsmadeeasy` | `DNSMADEEASY_API_KEY`, `DNSMADEEASY_API_SECRET`, `DNSMADEEASY_SANDBOX` |
|
||||
| [DNSPod](http://www.dnspod.net/) | `dnspod` | `DNSPOD_API_KEY` |
|
||||
| [Duck DNS](https://www.duckdns.org/) | `duckdns` | `DUCKDNS_TOKEN` |
|
||||
| [Dyn](https://dyn.com) | `dyn` | `DYN_CUSTOMER_NAME`, `DYN_USER_NAME`, `DYN_PASSWORD` |
|
||||
| External Program | `exec` | `EXEC_PATH` |
|
||||
| [Exoscale](https://www.exoscale.ch) | `exoscale` | `EXOSCALE_API_KEY`, `EXOSCALE_API_SECRET`, `EXOSCALE_ENDPOINT` |
|
||||
| [Fast DNS](https://www.akamai.com/) | `fastdns` | `AKAMAI_CLIENT_TOKEN`, `AKAMAI_CLIENT_SECRET`, `AKAMAI_ACCESS_TOKEN` |
|
||||
| [Gandi](https://www.gandi.net) | `gandi` | `GANDI_API_KEY` |
|
||||
| [Gandi V5](http://doc.livedns.gandi.net) | `gandiv5` | `GANDIV5_API_KEY` |
|
||||
| [Glesys](https://glesys.com/) | `glesys` | `GLESYS_API_USER`, `GLESYS_API_KEY`, `GLESYS_DOMAIN` |
|
||||
| [GoDaddy](https://godaddy.com/domains) | `godaddy` | `GODADDY_API_KEY`, `GODADDY_API_SECRET` |
|
||||
| [Google Cloud DNS](https://cloud.google.com/dns/docs/) | `gcloud` | `GCE_PROJECT`, `GCE_SERVICE_ACCOUNT_FILE` |
|
||||
| [Lightsail](https://aws.amazon.com/lightsail/) | `lightsail` | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `DNS_ZONE` |
|
||||
| [Linode](https://www.linode.com) | `linode` | `LINODE_API_KEY` |
|
||||
| manual | - | none, but run Træfik interactively & turn on `acmeLogging` to see instructions & press <kbd>Enter</kbd>. |
|
||||
| [Namecheap](https://www.namecheap.com) | `namecheap` | `NAMECHEAP_API_USER`, `NAMECHEAP_API_KEY` |
|
||||
| [name.com](https://www.name.com/) | `namedotcom` | `NAMECOM_USERNAME`, `NAMECOM_API_TOKEN`, `NAMECOM_SERVER` |
|
||||
| [Ns1](https://ns1.com/) | `ns1` | `NS1_API_KEY` |
|
||||
| [Open Telekom Cloud](https://cloud.telekom.de/en/) | `otc` | `OTC_DOMAIN_NAME`, `OTC_USER_NAME`, `OTC_PASSWORD`, `OTC_PROJECT_NAME`, `OTC_IDENTITY_ENDPOINT` |
|
||||
| [OVH](https://www.ovh.com) | `ovh` | `OVH_ENDPOINT`, `OVH_APPLICATION_KEY`, `OVH_APPLICATION_SECRET`, `OVH_CONSUMER_KEY` |
|
||||
| [PowerDNS](https://www.powerdns.com) | `pdns` | `PDNS_API_KEY`, `PDNS_API_URL` |
|
||||
| [Rackspace](https://www.rackspace.com/cloud/dns) | `rackspace` | `RACKSPACE_USER`, `RACKSPACE_API_KEY` |
|
||||
| [RFC2136](https://tools.ietf.org/html/rfc2136) | `rfc2136` | `RFC2136_TSIG_KEY`, `RFC2136_TSIG_SECRET`, `RFC2136_TSIG_ALGORITHM`, `RFC2136_NAMESERVER` |
|
||||
| [Route 53](https://aws.amazon.com/route53/) | `route53` | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`, `AWS_HOSTED_ZONE_ID` or configured user/instance IAM profile. |
|
||||
| [VULTR](https://www.vultr.com) | `vultr` | `VULTR_API_KEY` |
|
||||
|
||||
#### `delayBeforeCheck`
|
||||
|
||||
By default, the `provider` verifies the TXT DNS challenge record before letting ACME verify.
If `delayBeforeCheck` is greater than zero, this pre-check is skipped and Træfik simply waits that many seconds instead.

This is useful if internal networks block external DNS queries.

!!! note
    This field is meaningless if a `provider` is not defined.
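A minimal sketch of a non-zero delay (the 90-second value and the provider choice are illustrative, not recommendations):

```toml
[acme]
# ...
  [acme.dnsChallenge]
  provider = "digitalocean"
  # Skip the provider-side TXT pre-check and simply wait 90 seconds,
  # e.g. when internal networks block external DNS queries.
  delayBeforeCheck = 90
# ...
```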
|
||||
### `onDemand` (Deprecated)
|
||||
|
||||
@@ -339,15 +311,15 @@ onDemand = true
|
||||
# ...
|
||||
```
|
||||
|
||||
Enable on demand certificate.
|
||||
Enable on demand certificate generation.
|
||||
|
||||
This will request a certificate from Let's Encrypt during the first TLS handshake for a host name that does not yet have a certificate.
|
||||
This will request certificates from Let's Encrypt during the first TLS handshake for host names that do not yet have certificates.
|
||||
|
||||
!!! warning
|
||||
TLS handshakes will be slow when requesting a host name certificate for the first time, this can lead to DoS attacks.
|
||||
TLS handshakes are slow when requesting a host name certificate for the first time. This can lead to DoS attacks!
|
||||
|
||||
!!! warning
|
||||
Take note that Let's Encrypt have [rate limiting](https://letsencrypt.org/docs/rate-limits).
|
||||
Take note that Let's Encrypt applies [rate limiting](https://letsencrypt.org/docs/rate-limits).
|
||||
|
||||
### `onHostRule`
|
||||
|
||||
@@ -358,199 +330,94 @@ onHostRule = true
|
||||
# ...
|
||||
```
|
||||
|
||||
Enable certificate generation on frontends `Host` rules (for frontends wired on the `acme.entryPoint`).
|
||||
Enable certificate generation on frontend `Host` rules (for frontends wired to the `acme.entryPoint`).
|
||||
|
||||
This will request a certificate from Let's Encrypt for each frontend with a Host rule.
|
||||
|
||||
For example, a rule `Host:test1.traefik.io,test2.traefik.io` will request a certificate with main domain `test1.traefik.io` and SAN `test2.traefik.io`.
|
||||
For example, the rule `Host:test1.traefik.io,test2.traefik.io` will request a certificate with main domain `test1.traefik.io` and SAN `test2.traefik.io`.
|
||||
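For illustration, a minimal sketch of such a frontend in the file provider (the frontend/backend names and domains are assumptions):

```toml
# Sketch only: names and domains are illustrative.
[frontends]
  [frontends.frontend1]
  backend = "backend1"
    [frontends.frontend1.routes.route1]
    # With onHostRule = true, this rule triggers a certificate request for
    # test1.traefik.io (main domain) with test2.traefik.io as a SAN.
    rule = "Host:test1.traefik.io,test2.traefik.io"
```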
|
||||
!!! warning
|
||||
The `onHostRule` option cannot be used to generate wildcard certificates.
|
||||
Refer to [the wildcard generation section](/configuration/acme/#wildcard-domain) for more information.
|
||||
Refer to [wildcard generation](/configuration/acme/#wildcard-domain) for further information.
|
||||
|
||||
### `caServer`
|
||||
### `storage`
|
||||
|
||||
The `storage` option sets the location where your ACME certificates are saved.
|
||||
|
||||
```toml
|
||||
[acme]
|
||||
# ...
|
||||
caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
|
||||
storage = "acme.json"
|
||||
# ...
|
||||
```
|
||||
|
||||
CA server to use.
|
||||
The value can refer to two kinds of storage:
|
||||
|
||||
- Uncomment the line to run on the staging Let's Encrypt server.
|
||||
- Leave it commented to use the production server.
|
||||
- a JSON file
|
||||
- a KV store entry
|
||||
|
||||
### `domains`
|
||||
!!! danger "DEPRECATED"
|
||||
`storage` replaces `storageFile` which is deprecated.
|
||||
|
||||
!!! note
|
||||
During migration to a KV store use both `storageFile` and `storage` to migrate ACME certificates too. See [`storeconfig` subcommand](/user-guide/kv-config/#store-configuration-in-key-value-store) for further information.
|
||||
|
||||
#### As a File
|
||||
|
||||
ACME certificates can be stored in a JSON file that needs to have file mode `600`.
|
||||
|
||||
In Docker you can either mount the JSON file or the folder containing it:
|
||||
|
||||
```bash
|
||||
docker run -v "/my/host/acme.json:acme.json" traefik
|
||||
```
|
||||
```bash
|
||||
docker run -v "/my/host/acme:/etc/traefik/acme" traefik
|
||||
```
|
||||
|
||||
!!! warning
|
||||
This file cannot be shared across multiple instances of Træfik at the same time. Please use a [KV Store entry](/configuration/acme/#as-a-key-value-store-entry) instead.
|
||||
|
||||
#### As a Key Value Store Entry
|
||||
|
||||
ACME certificates can be stored in a KV Store entry. This kind of storage is **mandatory in cluster mode**.
|
||||
|
||||
```toml
|
||||
[acme]
|
||||
# ...
|
||||
[[acme.domains]]
|
||||
main = "local1.com"
|
||||
sans = ["test1.local1.com", "test2.local1.com"]
|
||||
[[acme.domains]]
|
||||
main = "local2.com"
|
||||
sans = ["test1.local2.com", "test2.local2.com"]
|
||||
[[acme.domains]]
|
||||
main = "local3.com"
|
||||
[[acme.domains]]
|
||||
main = "*.local4.com"
|
||||
sans = ["local4.com", "test1.test1.local4.com"]
|
||||
# ...
|
||||
storage = "traefik/acme/account"
|
||||
```
|
||||
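A minimal sketch of the KV-entry variant (assuming a Consul backend; the endpoint and email are illustrative placeholders):

```toml
# Sketch only: the Consul endpoint and email are placeholder values.
[consul]
endpoint = "consul:8500"
watch = true
prefix = "traefik"

[acme]
email = "you@example.com"
entryPoint = "https"
storage = "traefik/acme/account"  # KV entry instead of a JSON file
```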
|
||||
#### Wildcard domains
|
||||
Because KV stores (like Consul) have a limited entry size, the certificate list is compressed before it is saved as a KV store entry.
|
||||
|
||||
A wildcard domain has to be defined as a main domain.
|
||||
All domains must have A/AAAA records pointing to Træfik.
|
||||
!!! note
|
||||
It is possible to store up to approximately 100 ACME certificates in Consul.
|
||||
|
||||
Due to an ACME limitation, it is not possible to define a wildcard as a SAN (alternative domain).
It is also not possible to define a wildcard of a wildcard domain (for example `*.*.local.com`).
|
||||
#### ACME v2 Migration
|
||||
|
||||
!!! warning
|
||||
Note that Let's Encrypt has [rate limiting](https://letsencrypt.org/docs/rate-limits).
|
||||
During migration from ACME v1 to ACME v2 with a storage file, a backup of the original file is created alongside it (with a `.bak` extension).
|
||||
|
||||
Each domain & SANs will lead to a certificate request.
|
||||
For example: if `acme.storage`'s value is `/etc/traefik/acme/acme.json`, the backup file will be `/etc/traefik/acme/acme.json.bak`.
|
||||
|
||||
#### Other domains
|
||||
|
||||
You can provide SANs (alternative domains) to each main domain.
|
||||
All domains must have A/AAAA records pointing to Træfik.
|
||||
|
||||
!!! warning
|
||||
Take note that Let's Encrypt applies [rate limiting](https://letsencrypt.org/docs/rate-limits).
|
||||
|
||||
Each domain & SANs will lead to a certificate request.
|
||||
!!! note
|
||||
When Træfik is launched in a container, the storage file's parent directory needs to be mounted to be able to access the backup file on the host.
|
||||
Otherwise the backup file will be deleted when the container is stopped. Træfik will only generate it once!
|
||||
|
||||
### `dnsProvider` (Deprecated)
|
||||
|
||||
!!! danger "DEPRECATED"
|
||||
This option is deprecated, use [dnsChallenge.provider](/configuration/acme/#dnschallenge) instead.
|
||||
This option is deprecated. Please use [dnsChallenge.provider](/configuration/acme/#provider) instead.
|
||||
|
||||
### `delayDontCheckDNS` (Deprecated)
|
||||
|
||||
!!! danger "DEPRECATED"
|
||||
This option is deprecated, use [dnsChallenge.delayBeforeCheck](/configuration/acme/#dnschallenge) instead.
|
||||
This option is deprecated. Please use [dnsChallenge.delayBeforeCheck](/configuration/acme/#dnschallenge) instead.
|
||||
|
||||
## Wildcard certificates
|
||||
## Fallbacks
|
||||
|
||||
[ACME V2](https://community.letsencrypt.org/t/acme-v2-and-wildcard-certificate-support-is-live/55579) allows wildcard certificate support.
|
||||
However, this feature needs a specific configuration.
|
||||
If Let's Encrypt is not reachable, these certificates will be used:
|
||||
|
||||
### DNS-01 Challenge
|
||||
|
||||
As described in [Let's Encrypt post](https://community.letsencrypt.org/t/staging-endpoint-for-acme-v2/49605), wildcard certificates can only be generated through a `DNS-01` Challenge.
|
||||
This challenge is linked to the Træfik option `acme.dnsChallenge`.
|
||||
|
||||
```toml
|
||||
[acme]
|
||||
# ...
|
||||
[acme.dnsChallenge]
|
||||
provider = "digitalocean"
|
||||
delayBeforeCheck = 0
|
||||
# ...
|
||||
```
|
||||
|
||||
For more information about this option, please refer to the [dnsChallenge section](/configuration/acme/#dnschallenge).
|
||||
|
||||
### Wildcard domain
|
||||
|
||||
Wildcard domains can currently be provided only through the `acme.domains` option.
|
||||
|
||||
```toml
|
||||
[acme]
|
||||
# ...
|
||||
[[acme.domains]]
|
||||
main = "*.local1.com"
|
||||
sans = ["local1.com"]
|
||||
[[acme.domains]]
|
||||
main = "*.local2.com"
|
||||
# ...
|
||||
```
|
||||
|
||||
For more information about this option, please refer to the [domains section](/configuration/acme/#domains).
|
||||
|
||||
### Limitations
|
||||
|
||||
Let's Encrypt wildcard support has some limitations to take into account:
|
||||
|
||||
- A wildcard domain cannot be a SAN (alternative domain),
- A wildcard of a wildcard domain is forbidden (for example `*.*.local.com`),
- A `DNS-01` Challenge is executed for each domain (CN and SANs); some DNS providers cannot manage this behavior correctly, as explained in the [DNS provider support section](/configuration/acme/#dns-provider-support).
|
||||
|
||||
|
||||
### DNS provider support
|
||||
|
||||
All DNS providers allow creating ACME wildcard certificates.
However, many problems can appear for wildcard domains with SANs.

If a wildcard domain is defined with its root domain as a SAN, as described below, two `DNS-01` Challenges will be executed.
```toml
|
||||
[acme]
|
||||
# ...
|
||||
[[acme.domains]]
|
||||
main = "*.local1.com"
|
||||
sans = ["local1.com"]
|
||||
# ...
|
||||
```
|
||||
|
||||
When a `DNS-01` Challenge is done, Let's Encrypt checks whether a TXT record has been created with a given name and a given value.
When a certificate is generated for a wildcard domain that is defined with its root domain as a SAN, the requested TXT record name is the same for both the wildcard domain and the root domain.
|
||||
|
||||
The [DNS RFC](https://community.letsencrypt.org/t/wildcard-issuance-two-txt-records-for-the-same-name/54528/2) allows this behavior.
However, all DNS providers keep TXT record values in a cache with a TTL.
Depending on the parameters given by the Træfik ACME client library ([LEGO](https://github.com/xenolf/lego)), the TXT record TTL can be longer than the challenge timeout.
In that event, the `DNS-01` Challenge will not work correctly.
|
||||
|
||||
[LEGO](https://github.com/xenolf/lego) will evolve to adapt to all DNS providers.
Meanwhile, the table below lists all the DNS providers supported by Træfik and indicates whether they allow generating certificates for a wildcard domain and its root domain.
Do not hesitate to complete it.
|
||||
|
||||
| Provider Name | Provider code | Wildcard and Root Domain Support |
|
||||
|--------------------------------------------------------|----------------|----------------------------------|
|
||||
| [Auroradns](https://www.pcextreme.com/aurora/dns) | `auroradns` | Not tested yet |
|
||||
| [Azure](https://azure.microsoft.com/services/dns/) | `azure` | Not tested yet |
|
||||
| [Blue Cat](https://www.bluecatnetworks.com/) | `bluecat` | Not tested yet |
|
||||
| [Cloudflare](https://www.cloudflare.com) | `cloudflare` | YES |
|
||||
| [CloudXNS](https://www.cloudxns.net) | `cloudxns` | Not tested yet |
|
||||
| [DigitalOcean](https://www.digitalocean.com) | `digitalocean` | YES |
|
||||
| [DNSimple](https://dnsimple.com) | `dnsimple` | Not tested yet |
|
||||
| [DNS Made Easy](https://dnsmadeeasy.com) | `dnsmadeeasy` | Not tested yet |
|
||||
| [DNSPod](http://www.dnspod.net/) | `dnspod` | Not tested yet |
|
||||
| [Duck DNS](https://www.duckdns.org/) | `duckdns` | Not tested yet |
|
||||
| [Dyn](https://dyn.com) | `dyn` | Not tested yet |
|
||||
| External Program | `exec` | Not tested yet |
|
||||
| [Exoscale](https://www.exoscale.ch) | `exoscale` | Not tested yet |
|
||||
| [Fast DNS](https://www.akamai.com/) | `fastdns` | Not tested yet |
|
||||
| [Gandi](https://www.gandi.net) | `gandi` | Not tested yet |
|
||||
| [Gandi V5](http://doc.livedns.gandi.net) | `gandiv5` | Not tested yet |
|
||||
| [Glesys](https://glesys.com/) | `glesys` | Not tested yet |
|
||||
| [GoDaddy](https://godaddy.com/domains) | `godaddy` | Not tested yet |
|
||||
| [Google Cloud DNS](https://cloud.google.com/dns/docs/) | `gcloud` | YES |
|
||||
| [Lightsail](https://aws.amazon.com/lightsail/) | `lightsail` | Not tested yet |
|
||||
| [Linode](https://www.linode.com) | `linode` | Not tested yet |
|
||||
| manual | - | YES |
|
||||
| [Namecheap](https://www.namecheap.com) | `namecheap` | Not tested yet |
|
||||
| [name.com](https://www.name.com/) | `namedotcom` | Not tested yet |
|
||||
| [Ns1](https://ns1.com/) | `ns1` | Not tested yet |
|
||||
| [Open Telekom Cloud](https://cloud.telekom.de/en/) | `otc` | Not tested yet |
|
||||
| [OVH](https://www.ovh.com) | `ovh` | YES |
|
||||
| [PowerDNS](https://www.powerdns.com) | `pdns` | Not tested yet |
|
||||
| [Rackspace](https://www.rackspace.com/cloud/dns) | `rackspace` | Not tested yet |
|
||||
| [RFC2136](https://tools.ietf.org/html/rfc2136) | `rfc2136` | Not tested yet |
|
||||
| [Route 53](https://aws.amazon.com/route53/) | `route53` | YES |
|
||||
| [VULTR](https://www.vultr.com) | `vultr` | Not tested yet |
|
||||
|
||||
## ACME V2 migration
|
||||
|
||||
During migration from ACME V1 to ACME V2 with a storage file, a backup is created with the content of the ACME V1 file.
|
||||
To obtain the name of the backup file, Træfik concatenates the option `acme.storage` and the suffix `.bak`.
|
||||
|
||||
For example: if the `acme.storage` value is `/etc/traefik/acme/acme.json`, the backup file will be named `/etc/traefik/acme/acme.json.bak`.
|
||||
1. ACME certificates already generated before downtime
|
||||
1. Expired ACME certificates
|
||||
1. Provided certificates
|
||||
|
||||
!!! note
|
||||
When Træfik is launched in a container, do not forget to create a volume of the parent folder to get the backup file on the host.
|
||||
Otherwise, the backup file will be deleted when the container is stopped, and Træfik will not generate it again.
|
||||
For new (sub)domains which need Let's Encrypt authentication, the default Træfik certificate will be used until Træfik is restarted.
|
||||
|
||||
@@ -4,6 +4,9 @@
|
||||
|
||||
```toml
|
||||
# API definition
|
||||
# Warning: Enabling API will expose Træfik's configuration.
|
||||
# It is not recommended in production,
|
||||
# unless secured by authentication and authorizations
|
||||
[api]
|
||||
# Name of the related entry point
|
||||
#
|
||||
@@ -12,7 +15,7 @@
|
||||
#
|
||||
entryPoint = "traefik"
|
||||
|
||||
# Enabled Dashboard
|
||||
# Enable Dashboard
|
||||
#
|
||||
# Optional
|
||||
# Default: true
|
||||
@@ -21,7 +24,7 @@
|
||||
|
||||
# Enable debug mode.
|
||||
# This will install HTTP handlers to expose Go expvars under /debug/vars and
|
||||
# pprof profiling data under /debug/pprof.
|
||||
# pprof profiling data under /debug/pprof/.
|
||||
# Additionally, the log level will be set to DEBUG.
|
||||
#
|
||||
# Optional
|
||||
@@ -30,7 +33,7 @@
|
||||
debug = true
|
||||
```
|
||||
|
||||
For more customization, see [entry points](/configuration/entrypoints/) documentation and [examples](/user-guide/examples/#ping-health-check).
|
||||
For more customization, see [entry points](/configuration/entrypoints/) documentation.
|
||||
|
||||
## Web UI
|
||||
|
||||
@@ -38,6 +41,22 @@ For more customization, see [entry points](/configuration/entrypoints/) document
|
||||
|
||||

|
||||
|
||||
## Security
|
||||
|
||||
Enabling the API will expose all configuration elements,
|
||||
including sensitive data.
|
||||
|
||||
It is not recommended in production,
|
||||
unless secured by authentication and authorizations.
|
||||
|
||||
A sane (but not exhaustive) set of default recommendations
would be to apply the following protection mechanisms:
|
||||
|
||||
* _At application level:_ enabling HTTP [Basic Authentication](#authentication) (a minimal sketch follows this list)
* _At transport level:_ NOT exposing the API's port publicly,
  keeping it restricted to internal networks
  (following the [principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege)).
|
||||
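As a hedged illustration of the application-level point, the sketch below binds the API to a dedicated entry point protected by basic auth; the address and the htpasswd-style hash are placeholders, not values from this documentation:

```toml
# Sketch only: address and user hash are illustrative.
[entryPoints]
  [entryPoints.traefik]
  address = ":8080"
    [entryPoints.traefik.auth.basic]
    users = ["admin:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/"]

[api]
entryPoint = "traefik"
dashboard = true
debug = false
```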
|
||||
## API
|
||||
|
||||
| Path | Method | Description |
|
||||
|
||||
@@ -138,6 +138,7 @@ The following general annotations are applicable on the Ingress object:
|
||||
| `traefik.ingress.kubernetes.io/rewrite-target: /users` | Replaces each matched Ingress path with the specified one, and adds the old path to the `X-Replaced-Path` header. |
|
||||
| `traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip` | Override the default frontend rule type. Default: `PathPrefix`. |
|
||||
| `traefik.ingress.kubernetes.io/whitelist-source-range: "1.2.3.0/24, fe80::/16"` | A comma-separated list of IP ranges permitted for access. All source IPs are permitted if the list is empty or a single range is ill-formatted. Please note, you may have to set `service.spec.externalTrafficPolicy` to the value `Local` to preserve the source IP of the request for filtering. Please see [this link](https://kubernetes.io/docs/tutorials/services/source-ip/) for more information.|
|
||||
| `ingress.kubernetes.io/whitelist-x-forwarded-for: "true"` | Use `X-Forwarded-For` header as valid source of IP for the white list. |
|
||||
| `traefik.ingress.kubernetes.io/app-root: "/index.html"` | Redirects all requests for `/` to the defined path. (4) |
|
||||
|
||||
<1> `traefik.ingress.kubernetes.io/error-pages` example:
|
||||
|
||||
@@ -18,7 +18,7 @@
|
||||
|
||||
# Enable debug mode.
|
||||
# This will install HTTP handlers to expose Go expvars under /debug/vars and
|
||||
# pprof profiling data under /debug/pprof.
|
||||
# pprof profiling data under /debug/pprof/.
|
||||
# The log level will be set to DEBUG unless `logLevel` is specified.
|
||||
#
|
||||
# Optional
|
||||
|
||||
@@ -80,7 +80,8 @@
|
||||
|
||||
# ...
|
||||
```
|
||||
### InfluxDB
|
||||
|
||||
## InfluxDB
|
||||
|
||||
```toml
|
||||
[metrics]
|
||||
@@ -105,22 +106,3 @@
|
||||
|
||||
# ...
|
||||
```
|
||||
|
||||
## Statistics
|
||||
|
||||
```toml
|
||||
# Metrics definition
|
||||
[metrics]
|
||||
# ...
|
||||
|
||||
# Enable more detailed statistics.
|
||||
[metrics.statistics]
|
||||
|
||||
# Number of recent errors logged.
|
||||
#
|
||||
# Default: 10
|
||||
#
|
||||
recentErrors = 10
|
||||
|
||||
# ...
|
||||
```
|
||||
|
||||
@@ -7,7 +7,7 @@
|
||||
[](https://goreportcard.com/report/github.com/containous/traefik)
|
||||
[](https://github.com/containous/traefik/blob/master/LICENSE.md)
|
||||
[](https://traefik.herokuapp.com)
|
||||
[](https://twitter.com/intent/follow?screen_name=traefikproxy)
|
||||
[](https://twitter.com/intent/follow?screen_name=traefik)
|
||||
|
||||
|
||||
Træfik is a modern HTTP reverse proxy and load balancer that makes deploying microservices easy.
|
||||
@@ -42,7 +42,7 @@ _(But if you'd rather configure some of your routes manually, Træfik supports t
|
||||
- Websocket, HTTP/2, GRPC ready
|
||||
- Provides metrics (Rest, Prometheus, Datadog, Statsd, InfluxDB)
|
||||
- Keeps access logs (JSON, CLF)
|
||||
- [Fast](/benchmarks) ... which is nice
|
||||
- Fast
|
||||
- Exposes a Rest API
|
||||
- Packaged as a single binary file (made with :heart: with go) and available as a [tiny](https://microbadger.com/images/traefik) [official](https://hub.docker.com/r/_/traefik/) docker image
|
||||
|
||||
@@ -86,6 +86,10 @@ services:
|
||||
- /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
|
||||
```
|
||||
|
||||
!!! warning
|
||||
Enabling the Web UI with the `--api` flag might expose configuration elements. You can read more about this in the [API/Dashboard's Security section](/configuration/api#security).
|
||||
|
||||
|
||||
**That's it. Now you can launch Træfik!**
|
||||
|
||||
Start your `reverse-proxy` with the following command:
|
||||
@@ -199,3 +203,19 @@ Using the tiny Docker image:
|
||||
```shell
|
||||
docker run -d -p 8080:8080 -p 80:80 -v $PWD/traefik.toml:/etc/traefik/traefik.toml traefik
|
||||
```
|
||||
|
||||
## Security
|
||||
|
||||
### Security Advisories
|
||||
|
||||
We strongly advise you to join our mailing list to be aware of the latest announcements from our security team. You can subscribe by sending an email to security+subscribe@traefik.io or via [the online viewer](https://groups.google.com/a/traefik.io/forum/#!forum/security).
|
||||
|
||||
### CVE
|
||||
|
||||
Reported vulnerabilities can be found on
|
||||
[cve.mitre.org](https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=traefik).
|
||||
|
||||
### Report a Vulnerability
|
||||
|
||||
We want to keep Træfik safe for everyone.
|
||||
If you've discovered a security vulnerability in Træfik, we appreciate your help in disclosing it to us in a responsible manner, using [this form](https://security.traefik.io).
|
||||
@@ -9,9 +9,9 @@ If you want to use Let's Encrypt with Træfik, sharing configuration or TLS cert
|
||||
OK, could we mount a shared volume used by all my instances? Yes, you can, but it will not work.
When you use Let's Encrypt, you need to store more than just certificates.
When Træfik generates a new certificate, it configures a challenge, and once Let's Encrypt has verified the ownership of the domain, it pings back the challenge.
If the challenge is not known by the other Træfik instances, the validation will fail.
|
||||
For more information about challenge: [Automatic Certificate Management Environment (ACME)](https://github.com/ietf-wg-acme/acme/blob/master/draft-ietf-acme-acme.md#http-challenge)
|
||||
For more information about the challenge: [Automatic Certificate Management Environment (ACME)](https://github.com/ietf-wg-acme/acme/blob/master/draft-ietf-acme-acme.md#http-challenge)
|
||||
|
||||
## Prerequisites
|
||||
|
||||
|
||||
@@ -8,7 +8,7 @@ In addition, we want to use Let's Encrypt to automatically generate and renew SS
|
||||
|
||||
## Setting Up
|
||||
|
||||
In order for this to work, you'll need a server with a public IP address, with Docker installed on it.
|
||||
In order for this to work, you'll need a server with a public IP address, with Docker and docker-compose installed on it.
|
||||
|
||||
In this example, we're using the fictitious domain _my-awesome-app.org_.
|
||||
|
||||
|
||||
@@ -81,9 +81,11 @@ For namespaced restrictions, one RoleBinding is required per watched namespace a
|
||||
It is possible to use Træfik with a [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) or a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) object,
|
||||
whereas both options have their own pros and cons:
|
||||
|
||||
- The scalability is much better when using a Deployment, because you will have a Single-Pod-per-Node model when using the DaemonSet.
|
||||
- It is possible to exclusively run a Service on a dedicated set of machines using taints and tolerations with a DaemonSet.
|
||||
- On the other hand the DaemonSet allows you to access any Node directly on Port 80 and 443, where you have to setup a [Service](https://kubernetes.io/docs/concepts/services-networking/service/) object with a Deployment.
|
||||
- The scalability can be much better when using a Deployment: with a DaemonSet you have a Single-Pod-per-Node model, whereas with a Deployment you may need fewer replicas depending on your environment.
|
||||
- DaemonSets automatically scale to new nodes, when the nodes join the cluster, whereas Deployment pods are only scheduled on new nodes if required.
|
||||
- DaemonSets ensure that only one replica of pods run on any single node. Deployments require affinity settings if you want to ensure that two pods don't end up on the same node.
|
||||
- DaemonSets can be run with the `NET_BIND_SERVICE` capability, which will allow it to bind to port 80/443/etc on each host. This will allow bypassing the kube-proxy, and reduce traffic hops. Note that this is against the Kubernetes Best Practices [Guidelines](https://kubernetes.io/docs/concepts/configuration/overview/#services), and raises the potential for scheduling/scaling issues. Despite potential issues, this remains the choice for most ingress controllers.
|
||||
- If you are unsure which to choose, start with the DaemonSet.
|
||||
|
||||
The Deployment object looks like this:
|
||||
|
||||
@@ -118,6 +120,11 @@ spec:
|
||||
containers:
|
||||
- image: traefik
|
||||
name: traefik-ingress-lb
|
||||
ports:
|
||||
- name: http
|
||||
containerPort: 80
|
||||
- name: admin
|
||||
containerPort: 8080
|
||||
args:
|
||||
- --api
|
||||
- --kubernetes
|
||||
@@ -172,7 +179,6 @@ spec:
|
||||
spec:
|
||||
serviceAccountName: traefik-ingress-controller
|
||||
terminationGracePeriodSeconds: 60
|
||||
hostNetwork: true
|
||||
containers:
|
||||
- image: traefik
|
||||
name: traefik-ingress-lb
|
||||
@@ -208,11 +214,13 @@ spec:
|
||||
- protocol: TCP
|
||||
port: 8080
|
||||
name: admin
|
||||
type: NodePort
|
||||
```
|
||||
|
||||
[examples/k8s/traefik-ds.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/traefik-ds.yaml)
|
||||
|
||||
!!! note
|
||||
This will create a Daemonset that uses privileged ports 80/8080 on the host. This may not work on all providers, but illustrates the static (non-NodePort) hostPort binding. The `traefik-ingress-service` can still be used inside the cluster to access the DaemonSet pods.
|
||||
|
||||
To deploy Træfik to your cluster start by submitting one of the YAML files to the cluster with `kubectl`:
|
||||
|
||||
```shell
|
||||
@@ -293,7 +301,21 @@ Install the Træfik chart by:
|
||||
```shell
|
||||
helm install stable/traefik
|
||||
```
|
||||
Install the Træfik chart using a values.yaml file.
|
||||
|
||||
```shell
|
||||
helm install --values values.yaml stable/traefik
|
||||
```
|
||||
|
||||
```yaml
|
||||
dashboard:
|
||||
enabled: true
|
||||
domain: traefik-ui.minikube
|
||||
kubernetes:
|
||||
namespaces:
|
||||
- default
|
||||
- kube-system
|
||||
```
|
||||
For more information, check out [the documentation](https://github.com/kubernetes/charts/tree/master/stable/traefik).
|
||||
|
||||
## Submitting an Ingress to the Cluster
|
||||
|
||||
@@ -85,9 +85,9 @@ defaultEntryPoints = ["http", "https"]
|
||||
certFile = """-----BEGIN CERTIFICATE-----
|
||||
<cert file content>
|
||||
-----END CERTIFICATE-----"""
|
||||
keyFile = """-----BEGIN CERTIFICATE-----
|
||||
keyFile = """-----BEGIN PRIVATE KEY-----
|
||||
<key file content>
|
||||
-----END CERTIFICATE-----"""
|
||||
-----END PRIVATE KEY-----"""
|
||||
[entryPoints.other-https]
|
||||
address = ":4443"
|
||||
[entryPoints.other-https.tls]
|
||||
@@ -355,7 +355,7 @@ And there, the same dynamic configuration in a KV Store (using `prefix = "traefi
|
||||
|---------------------------------------|-----------------------|
|
||||
| `/traefik/tls/2/entrypoints` | `https,other-https` |
|
||||
| `/traefik/tls/2/certificate/certfile` | `<cert file content>` |
|
||||
| `/traefik/tls/2/certificate/certfile` | `<key file content>` |
|
||||
| `/traefik/tls/2/certificate/keyfile` | `<key file content>` |
|
||||
|
||||
### Atomic configuration changes
|
||||
|
||||
|
||||
@@ -102,7 +102,7 @@ Let's explain this command:
|
||||
| `--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock` | we bind mount the docker socket where Træfik is scheduled to be able to speak to the daemon. |
|
||||
| `--network traefik-net` | we attach the Træfik service (and thus the underlying container) to the `traefik-net` network. |
|
||||
| `--docker` | enable docker provider, and `--docker.swarmMode` to enable the swarm mode on Træfik. |
|
||||
| `--api | activate the webUI on port 8080 |
|
||||
| `--api` | activate the webUI on port 8080 |
|
||||
|
||||
|
||||
## Deploy your apps
|
||||
|
||||
@@ -50,7 +50,7 @@ start_boulder() {
|
||||
# Script usage
|
||||
show_usage() {
|
||||
echo
|
||||
echo "USAGE : manage_acme_docker_environment.sh [--start|--stop|--restart]"
|
||||
echo "USAGE : manage_acme_docker_environment.sh [--dev|--start|--stop|--restart]"
|
||||
echo
|
||||
}
|
||||
|
||||
|
||||
@@ -28,6 +28,11 @@ spec:
|
||||
containers:
|
||||
- image: traefik
|
||||
name: traefik-ingress-lb
|
||||
ports:
|
||||
- name: http
|
||||
containerPort: 80
|
||||
- name: admin
|
||||
containerPort: 8080
|
||||
args:
|
||||
- --api
|
||||
- --kubernetes
|
||||
|
||||
@@ -21,7 +21,6 @@ spec:
|
||||
spec:
|
||||
serviceAccountName: traefik-ingress-controller
|
||||
terminationGracePeriodSeconds: 60
|
||||
hostNetwork: true
|
||||
containers:
|
||||
- image: traefik
|
||||
name: traefik-ingress-lb
|
||||
@@ -31,6 +30,7 @@ spec:
|
||||
hostPort: 80
|
||||
- name: admin
|
||||
containerPort: 8080
|
||||
hostport: 8080
|
||||
securityContext:
|
||||
capabilities:
|
||||
drop:
|
||||
@@ -57,4 +57,3 @@ spec:
|
||||
- protocol: TCP
|
||||
port: 8080
|
||||
name: admin
|
||||
type: NodePort
|
||||
|
||||
@@ -30,7 +30,7 @@ type accessLogValue struct {
|
||||
code string
|
||||
user string
|
||||
frontendName string
|
||||
backendName string
|
||||
backendURL string
|
||||
}
|
||||
|
||||
func (s *AccessLogSuite) SetUpSuite(c *check.C) {
|
||||
@@ -103,7 +103,7 @@ func (s *AccessLogSuite) TestAccessLogAuthFrontend(c *check.C) {
|
||||
code: "401",
|
||||
user: "-",
|
||||
frontendName: "Auth for frontend-Host-frontend-auth-docker-local",
|
||||
backendName: "-",
|
||||
backendURL: "/",
|
||||
},
|
||||
}
|
||||
|
||||
@@ -151,7 +151,7 @@ func (s *AccessLogSuite) TestAccessLogAuthEntrypoint(c *check.C) {
|
||||
code: "401",
|
||||
user: "-",
|
||||
frontendName: "Auth for entrypoint",
|
||||
backendName: "-",
|
||||
backendURL: "/",
|
||||
},
|
||||
}
|
||||
|
||||
@@ -199,7 +199,7 @@ func (s *AccessLogSuite) TestAccessLogAuthEntrypointSuccess(c *check.C) {
|
||||
code: "200",
|
||||
user: "test",
|
||||
frontendName: "Host-entrypoint-auth-docker",
|
||||
backendName: "http://172.17.0",
|
||||
backendURL: "http://172.17.0",
|
||||
},
|
||||
}
|
||||
|
||||
@@ -247,14 +247,14 @@ func (s *AccessLogSuite) TestAccessLogDigestAuthEntrypoint(c *check.C) {
|
||||
code: "401",
|
||||
user: "-",
|
||||
frontendName: "Auth for entrypoint",
|
||||
backendName: "-",
|
||||
backendURL: "/",
|
||||
},
|
||||
{
|
||||
formatOnly: false,
|
||||
code: "200",
|
||||
user: "test",
|
||||
frontendName: "Host-entrypoint-digest-auth-docker",
|
||||
backendName: "http://172.17.0",
|
||||
backendURL: "http://172.17.0",
|
||||
},
|
||||
}
|
||||
|
||||
@@ -355,7 +355,7 @@ func (s *AccessLogSuite) TestAccessLogEntrypointRedirect(c *check.C) {
|
||||
code: "302",
|
||||
user: "-",
|
||||
frontendName: "entrypoint redirect for frontend-",
|
||||
backendName: "-",
|
||||
backendURL: "/",
|
||||
},
|
||||
{
|
||||
formatOnly: true,
|
||||
@@ -405,7 +405,7 @@ func (s *AccessLogSuite) TestAccessLogFrontendRedirect(c *check.C) {
|
||||
code: "302",
|
||||
user: "-",
|
||||
frontendName: "frontend redirect for frontend-Path-",
|
||||
backendName: "-",
|
||||
backendURL: "/",
|
||||
},
|
||||
{
|
||||
formatOnly: true,
|
||||
@@ -461,7 +461,7 @@ func (s *AccessLogSuite) TestAccessLogRateLimit(c *check.C) {
|
||||
code: "429",
|
||||
user: "-",
|
||||
frontendName: "rate limit for frontend-Host-ratelimit",
|
||||
backendName: "/",
|
||||
backendURL: "/",
|
||||
},
|
||||
}
|
||||
|
||||
@@ -512,7 +512,7 @@ func (s *AccessLogSuite) TestAccessLogBackendNotFound(c *check.C) {
|
||||
code: "404",
|
||||
user: "-",
|
||||
frontendName: "backend not found",
|
||||
backendName: "/",
|
||||
backendURL: "/",
|
||||
},
|
||||
}
|
||||
|
||||
@@ -557,7 +557,7 @@ func (s *AccessLogSuite) TestAccessLogEntrypointWhitelist(c *check.C) {
|
||||
code: "403",
|
||||
user: "-",
|
||||
frontendName: "ipwhitelister for entrypoint httpWhitelistReject",
|
||||
backendName: "-",
|
||||
backendURL: "/",
|
||||
},
|
||||
}
|
||||
|
||||
@@ -604,7 +604,7 @@ func (s *AccessLogSuite) TestAccessLogFrontendWhitelist(c *check.C) {
|
||||
code: "403",
|
||||
user: "-",
|
||||
frontendName: "ipwhitelister for frontend-Host-frontend-whitelist",
|
||||
backendName: "-",
|
||||
backendURL: "/",
|
||||
},
|
||||
}
|
||||
|
||||
@@ -734,7 +734,7 @@ func checkAccessLogExactValues(c *check.C, line string, i int, v accessLogValue)
|
||||
c.Assert(results[accesslog.OriginStatus], checker.Equals, v.code)
|
||||
c.Assert(results[accesslog.RequestCount], checker.Equals, fmt.Sprintf("%d", i+1))
|
||||
c.Assert(results[accesslog.FrontendName], checker.Matches, `^"?`+v.frontendName+`.*$`)
|
||||
c.Assert(results[accesslog.BackendURL], checker.Matches, `^"?`+v.backendName+`.*$`)
|
||||
c.Assert(results[accesslog.BackendURL], checker.Matches, `^"?`+v.backendURL+`.*$`)
|
||||
c.Assert(results[accesslog.Duration], checker.Matches, `^\d+ms$`)
|
||||
}
|
||||
|
||||
|
||||
@@ -1,6 +1,7 @@
|
||||
package integration
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"time"
|
||||
@@ -41,7 +42,7 @@ func (s *ConsulCatalogSuite) waitToElectConsulLeader() error {
|
||||
leader, err := s.consulClient.Status().Leader()
|
||||
|
||||
if err != nil || len(leader) == 0 {
|
||||
return fmt.Errorf("Leader not found. %v", err)
|
||||
return fmt.Errorf("leader not found. %v", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
@@ -55,9 +56,6 @@ func (s *ConsulCatalogSuite) createConsulClient(config *api.Config, c *check.C)
|
||||
s.consulClient = consulClient
|
||||
return consulClient
|
||||
}
|
||||
func (s *ConsulCatalogSuite) startConsulService(c *check.C) {
|
||||
|
||||
}
|
||||
|
||||
func (s *ConsulCatalogSuite) registerService(name string, address string, port int, tags []string) error {
|
||||
catalog := s.consulClient.Catalog()
|
||||
@@ -78,28 +76,45 @@ func (s *ConsulCatalogSuite) registerService(name string, address string, port i
|
||||
return err
|
||||
}
|
||||
|
||||
func (s *ConsulCatalogSuite) registerAgentService(name string, address string, port int, tags []string) error {
|
||||
func (s *ConsulCatalogSuite) registerAgentService(name string, address string, port int, tags []string, withHealthCheck bool) error {
|
||||
agent := s.consulClient.Agent()
|
||||
err := agent.ServiceRegister(
|
||||
var healthCheck *api.AgentServiceCheck
|
||||
if withHealthCheck {
|
||||
healthCheck = &api.AgentServiceCheck{
|
||||
HTTP: "http://" + address,
|
||||
Interval: "10s",
|
||||
}
|
||||
} else {
|
||||
healthCheck = nil
|
||||
}
|
||||
return agent.ServiceRegister(
|
||||
&api.AgentServiceRegistration{
|
||||
ID: address,
|
||||
Tags: tags,
|
||||
Name: name,
|
||||
Address: address,
|
||||
Port: port,
|
||||
Check: &api.AgentServiceCheck{
|
||||
HTTP: "http://" + address,
|
||||
Interval: "10s",
|
||||
},
|
||||
Check: healthCheck,
|
||||
},
|
||||
)
|
||||
return err
|
||||
}
|
||||
|
||||
func (s *ConsulCatalogSuite) registerCheck(name string, address string, port int) error {
|
||||
agent := s.consulClient.Agent()
|
||||
checkRegistration := &api.AgentCheckRegistration{
|
||||
ID: fmt.Sprintf("%s-%s", name, address),
|
||||
Name: name,
|
||||
ServiceID: address,
|
||||
}
|
||||
checkRegistration.HTTP = fmt.Sprintf("http://%s:%d/health", address, port)
|
||||
checkRegistration.Interval = "2s"
|
||||
checkRegistration.CheckID = address
|
||||
return agent.CheckRegister(checkRegistration)
|
||||
}
|
||||
|
||||
func (s *ConsulCatalogSuite) deregisterAgentService(address string) error {
|
||||
agent := s.consulClient.Agent()
|
||||
err := agent.ServiceDeregister(address)
|
||||
return err
|
||||
return agent.ServiceDeregister(address)
|
||||
}
|
||||
|
||||
func (s *ConsulCatalogSuite) deregisterService(name string, address string) error {
|
||||
@@ -115,6 +130,22 @@ func (s *ConsulCatalogSuite) deregisterService(name string, address string) erro
|
||||
return err
|
||||
}
|
||||
|
||||
func (s *ConsulCatalogSuite) consulEnableServiceMaintenance(name string) error {
|
||||
return s.consulClient.Agent().EnableServiceMaintenance(name, fmt.Sprintf("Maintenance mode for service %s", name))
|
||||
}
|
||||
|
||||
func (s *ConsulCatalogSuite) consulDisableServiceMaintenance(name string) error {
|
||||
return s.consulClient.Agent().DisableServiceMaintenance(name)
|
||||
}
|
||||
|
||||
func (s *ConsulCatalogSuite) consulEnableNodeMaintenance() error {
|
||||
return s.consulClient.Agent().EnableNodeMaintenance("Maintenance mode for node")
|
||||
}
|
||||
|
||||
func (s *ConsulCatalogSuite) consulDisableNodeMaintenance() error {
|
||||
return s.consulClient.Agent().DisableNodeMaintenance()
|
||||
}
|
||||
|
||||
func (s *ConsulCatalogSuite) TestSimpleConfiguration(c *check.C) {
|
||||
cmd, display := s.traefikCmd(
|
||||
withConfigFile("fixtures/consul_catalog/simple.toml"),
|
||||
@@ -273,7 +304,7 @@ func (s *ConsulCatalogSuite) TestRefreshConfigWithMultipleNodeWithoutHealthCheck
|
||||
c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
|
||||
defer s.deregisterService("test", whoami.NetworkSettings.IPAddress)
|
||||
|
||||
err = s.registerAgentService("test", whoami.NetworkSettings.IPAddress, 80, []string{"name=whoami1"})
|
||||
err = s.registerAgentService("test", whoami.NetworkSettings.IPAddress, 80, []string{"name=whoami1"}, true)
|
||||
c.Assert(err, checker.IsNil, check.Commentf("Error registering agent service"))
|
||||
defer s.deregisterAgentService(whoami.NetworkSettings.IPAddress)
|
||||
|
||||
@@ -514,3 +545,132 @@ func (s *ConsulCatalogSuite) TestRetryWithConsulServer(c *check.C) {
|
||||
err = try.Request(req, 10*time.Second, try.StatusCodeIs(http.StatusOK), try.HasBody())
|
||||
c.Assert(err, checker.IsNil)
|
||||
}
|
||||
|
||||
func (s *ConsulCatalogSuite) TestServiceWithMultipleHealthCheck(c *check.C) {
|
||||
//Scale consul to 0 to be able to start traefik before and test retry
|
||||
s.composeProject.Scale(c, "consul", 0)
|
||||
|
||||
cmd, display := s.traefikCmd(
|
||||
withConfigFile("fixtures/consul_catalog/simple.toml"),
|
||||
"--consulCatalog",
|
||||
"--consulCatalog.watch=false",
|
||||
"--consulCatalog.exposedByDefault=true",
|
||||
"--consulCatalog.endpoint="+s.consulIP+":8500",
|
||||
"--consulCatalog.domain=consul.localhost")
|
||||
defer display(c)
|
||||
err := cmd.Start()
|
||||
c.Assert(err, checker.IsNil)
|
||||
defer cmd.Process.Kill()
|
||||
|
||||
// Wait for Traefik to turn ready.
|
||||
err = try.GetRequest("http://127.0.0.1:8000/", 2*time.Second, try.StatusCodeIs(http.StatusNotFound))
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1:8000/", nil)
|
||||
c.Assert(err, checker.IsNil)
|
||||
req.Host = "test.consul.localhost"
|
||||
|
||||
// Request should fail
|
||||
err = try.Request(req, 2*time.Second, try.StatusCodeIs(http.StatusNotFound), try.HasBody())
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
// Scale consul to 1
|
||||
s.composeProject.Scale(c, "consul", 1)
|
||||
s.waitToElectConsulLeader()
|
||||
|
||||
whoami := s.composeProject.Container(c, "whoami1")
|
||||
// Register service
|
||||
err = s.registerAgentService("test", whoami.NetworkSettings.IPAddress, 80, []string{"name=whoami1"}, true)
|
||||
c.Assert(err, checker.IsNil, check.Commentf("Error registering agent service"))
|
||||
defer s.deregisterAgentService(whoami.NetworkSettings.IPAddress)
|
||||
|
||||
// Register one healthcheck
|
||||
err = s.registerCheck("test", whoami.NetworkSettings.IPAddress, 80)
|
||||
c.Assert(err, checker.IsNil, check.Commentf("Error registering check"))
|
||||
|
||||
// Provider consul catalog should be present
|
||||
err = try.GetRequest("http://127.0.0.1:8080/api/providers", 10*time.Second, try.BodyContains("consul_catalog"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
// Should be ok
|
||||
err = try.Request(req, 10*time.Second, try.StatusCodeIs(http.StatusOK), try.HasBody())
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
// Change health value of service to critical
|
||||
reqHealth, err := http.NewRequest(http.MethodPost, fmt.Sprintf("http://%s:80/health", whoami.NetworkSettings.IPAddress), bytes.NewBuffer([]byte("500")))
|
||||
c.Assert(err, checker.IsNil)
|
||||
reqHealth.Host = "test.consul.localhost"
|
||||
|
||||
err = try.Request(reqHealth, 10*time.Second, try.StatusCodeIs(http.StatusOK))
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
// Should be a 404
|
||||
err = try.Request(req, 10*time.Second, try.StatusCodeIs(http.StatusNotFound), try.HasBody())
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
// Change health value of service to passing
|
||||
reqHealth, err = http.NewRequest(http.MethodPost, fmt.Sprintf("http://%s:80/health", whoami.NetworkSettings.IPAddress), bytes.NewBuffer([]byte("200")))
|
||||
c.Assert(err, checker.IsNil)
|
||||
err = try.Request(reqHealth, 10*time.Second, try.StatusCodeIs(http.StatusOK))
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
// Should be a 200
|
||||
err = try.Request(req, 10*time.Second, try.StatusCodeIs(http.StatusOK), try.HasBody())
|
||||
c.Assert(err, checker.IsNil)
|
||||
}
|
||||
|
||||
func (s *ConsulCatalogSuite) TestMaintenanceMode(c *check.C) {
|
||||
cmd, display := s.traefikCmd(
|
||||
withConfigFile("fixtures/consul_catalog/simple.toml"),
|
||||
"--consulCatalog",
|
||||
"--consulCatalog.endpoint="+s.consulIP+":8500",
|
||||
"--consulCatalog.domain=consul.localhost")
|
||||
defer display(c)
|
||||
err := cmd.Start()
|
||||
c.Assert(err, checker.IsNil)
|
||||
defer cmd.Process.Kill()
|
||||
|
||||
// Wait for Traefik to turn ready.
|
||||
err = try.GetRequest("http://127.0.0.1:8000/", 2*time.Second, try.StatusCodeIs(http.StatusNotFound))
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
whoami := s.composeProject.Container(c, "whoami1")
|
||||
|
||||
err = s.registerAgentService("test", whoami.NetworkSettings.IPAddress, 80, []string{}, false)
|
||||
c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
|
||||
|
||||
req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1:8000/", nil)
|
||||
c.Assert(err, checker.IsNil)
|
||||
req.Host = "test.consul.localhost"
|
||||
|
||||
err = try.Request(req, 10*time.Second, try.StatusCodeIs(http.StatusOK), try.HasBody())
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
// Enable service maintenance mode
|
||||
err = s.consulEnableServiceMaintenance(whoami.NetworkSettings.IPAddress)
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
err = try.Request(req, 10*time.Second, try.StatusCodeIs(http.StatusNotFound), try.HasBody())
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
// Disable service maintenance mode
|
||||
err = s.consulDisableServiceMaintenance(whoami.NetworkSettings.IPAddress)
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
err = try.Request(req, 10*time.Second, try.StatusCodeIs(http.StatusOK), try.HasBody())
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
// Enable node maintenance mode
|
||||
err = s.consulEnableNodeMaintenance()
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
err = try.Request(req, 10*time.Second, try.StatusCodeIs(http.StatusNotFound), try.HasBody())
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
// Disable node maintenance mode
|
||||
err = s.consulDisableNodeMaintenance()
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
err = try.Request(req, 10*time.Second, try.StatusCodeIs(http.StatusOK), try.HasBody())
|
||||
c.Assert(err, checker.IsNil)
|
||||
}
|
||||
|
||||
@@ -585,21 +585,14 @@ func (s *ConsulSuite) TestSNIDynamicTlsConfig(c *check.C) {
|
||||
})
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
// wait for traefik
|
||||
err = try.GetRequest("http://127.0.0.1:8081/api/providers", 60*time.Second, try.BodyContains("MIIEpQIBAAKCAQEA1RducBK6EiFDv3TYB8ZcrfKWRVaSfHzWicO3J5WdST9oS7hG"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
req, err := http.NewRequest(http.MethodGet, "https://127.0.0.1:4443/", nil)
|
||||
c.Assert(err, checker.IsNil)
|
||||
client := &http.Client{Transport: tr1}
|
||||
req.Host = tr1.TLSClientConfig.ServerName
|
||||
req.Header.Set("Host", tr1.TLSClientConfig.ServerName)
|
||||
req.Header.Set("Accept", "*/*")
|
||||
var resp *http.Response
|
||||
resp, err = client.Do(req)
|
||||
|
||||
err = try.RequestWithTransport(req, 30*time.Second, tr1, try.HasCn("snitest.com"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
cn := resp.TLS.PeerCertificates[0].Subject.CommonName
|
||||
c.Assert(cn, checker.Equals, "snitest.com")
|
||||
|
||||
// now we configure the second keypair in consul and the request for host "snitest.org" will use the second keypair
|
||||
for key, value := range tlsconfigure2 {
|
||||
@@ -614,18 +607,12 @@ func (s *ConsulSuite) TestSNIDynamicTlsConfig(c *check.C) {
|
||||
})
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
// waiting for traefik to pull configuration
|
||||
err = try.GetRequest("http://127.0.0.1:8081/api/providers", 30*time.Second, try.BodyContains("MIIEogIBAAKCAQEAvG9kL+vF57+MICehzbqcQAUlAOSl5r"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
req, err = http.NewRequest(http.MethodGet, "https://127.0.0.1:4443/", nil)
|
||||
c.Assert(err, checker.IsNil)
|
||||
client = &http.Client{Transport: tr2}
|
||||
req.Host = tr2.TLSClientConfig.ServerName
|
||||
req.Header.Set("Host", tr2.TLSClientConfig.ServerName)
|
||||
req.Header.Set("Accept", "*/*")
|
||||
resp, err = client.Do(req)
|
||||
|
||||
err = try.RequestWithTransport(req, 30*time.Second, tr2, try.HasCn("snitest.org"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
cn = resp.TLS.PeerCertificates[0].Subject.CommonName
|
||||
c.Assert(cn, checker.Equals, "snitest.org")
|
||||
}
|
||||
|
||||
@@ -532,21 +532,14 @@ func (s *Etcd3Suite) TestSNIDynamicTlsConfig(c *check.C) {
|
||||
c.Assert(err, checker.IsNil)
|
||||
defer cmd.Process.Kill()
|
||||
|
||||
// wait for Træfik
|
||||
err = try.GetRequest("http://127.0.0.1:8081/api/providers", 60*time.Second, try.BodyContains(string("MIIEpQIBAAKCAQEA1RducBK6EiFDv3TYB8ZcrfKWRVaSfHzWicO3J5WdST9oS7h")))
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
req, err := http.NewRequest(http.MethodGet, "https://127.0.0.1:4443/", nil)
|
||||
c.Assert(err, checker.IsNil)
|
||||
client := &http.Client{Transport: tr1}
|
||||
req.Host = tr1.TLSClientConfig.ServerName
|
||||
req.Header.Set("Host", tr1.TLSClientConfig.ServerName)
|
||||
req.Header.Set("Accept", "*/*")
|
||||
var resp *http.Response
|
||||
resp, err = client.Do(req)
|
||||
|
||||
err = try.RequestWithTransport(req, 30*time.Second, tr1, try.HasCn("snitest.com"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
cn := resp.TLS.PeerCertificates[0].Subject.CommonName
|
||||
c.Assert(cn, checker.Equals, "snitest.com")
|
||||
|
||||
// now we configure the second keypair in etcd and the request for host "snitest.org" will use the second keypair
|
||||
|
||||
@@ -562,20 +555,14 @@ func (s *Etcd3Suite) TestSNIDynamicTlsConfig(c *check.C) {
|
||||
})
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
// waiting for Træfik to pull configuration
|
||||
err = try.GetRequest("http://127.0.0.1:8081/api/providers", 30*time.Second, try.BodyContains("MIIEogIBAAKCAQEAvG9kL+vF57+MICehzbqcQAUlAOSl5r"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
req, err = http.NewRequest(http.MethodGet, "https://127.0.0.1:4443/", nil)
|
||||
c.Assert(err, checker.IsNil)
|
||||
client = &http.Client{Transport: tr2}
|
||||
req.Host = tr2.TLSClientConfig.ServerName
|
||||
req.Header.Set("Host", tr2.TLSClientConfig.ServerName)
|
||||
req.Header.Set("Accept", "*/*")
|
||||
resp, err = client.Do(req)
|
||||
|
||||
err = try.RequestWithTransport(req, 30*time.Second, tr2, try.HasCn("snitest.org"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
cn = resp.TLS.PeerCertificates[0].Subject.CommonName
|
||||
c.Assert(cn, checker.Equals, "snitest.org")
|
||||
}
|
||||
|
||||
func (s *Etcd3Suite) TestDeleteSNIDynamicTlsConfig(c *check.C) {
|
||||
@@ -646,21 +633,14 @@ func (s *Etcd3Suite) TestDeleteSNIDynamicTlsConfig(c *check.C) {
|
||||
c.Assert(err, checker.IsNil)
|
||||
defer cmd.Process.Kill()
|
||||
|
||||
// wait for Træfik
|
||||
err = try.GetRequest(traefikWebEtcdURL+"api/providers", 60*time.Second, try.BodyContains(string("MIIEpQIBAAKCAQEA1RducBK6EiFDv3TYB8ZcrfKWRVaSfHzWicO3J5WdST9oS7h")))
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
req, err := http.NewRequest(http.MethodGet, "https://127.0.0.1:4443/", nil)
|
||||
c.Assert(err, checker.IsNil)
|
||||
client := &http.Client{Transport: tr1}
|
||||
req.Host = tr1.TLSClientConfig.ServerName
|
||||
req.Header.Set("Host", tr1.TLSClientConfig.ServerName)
|
||||
req.Header.Set("Accept", "*/*")
|
||||
var resp *http.Response
|
||||
resp, err = client.Do(req)
|
||||
|
||||
err = try.RequestWithTransport(req, 30*time.Second, tr1, try.HasCn("snitest.com"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
cn := resp.TLS.PeerCertificates[0].Subject.CommonName
|
||||
c.Assert(cn, checker.Equals, "snitest.com")
|
||||
|
||||
// now we delete the tls cert/key pairs,so the endpoint show use default cert/key pair
|
||||
for key := range tlsconfigure1 {
|
||||
@@ -668,18 +648,12 @@ func (s *Etcd3Suite) TestDeleteSNIDynamicTlsConfig(c *check.C) {
|
||||
c.Assert(err, checker.IsNil)
|
||||
}
|
||||
|
||||
// waiting for Træfik to pull configuration
|
||||
err = try.GetRequest(traefikWebEtcdURL+"api/providers", 30*time.Second, try.BodyNotContains("MIIEpQIBAAKCAQEA1RducBK6EiFDv3TYB8ZcrfKWRVaSfHzWicO3J5WdST9oS7h"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
req, err = http.NewRequest(http.MethodGet, "https://127.0.0.1:4443/", nil)
|
||||
c.Assert(err, checker.IsNil)
|
||||
client = &http.Client{Transport: tr1}
|
||||
req.Host = tr1.TLSClientConfig.ServerName
|
||||
req.Header.Set("Host", tr1.TLSClientConfig.ServerName)
|
||||
req.Header.Set("Accept", "*/*")
|
||||
resp, err = client.Do(req)
|
||||
|
||||
err = try.RequestWithTransport(req, 30*time.Second, tr1, try.HasCn("TRAEFIK DEFAULT CERT"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
cn = resp.TLS.PeerCertificates[0].Subject.CommonName
|
||||
c.Assert(cn, checker.Equals, "TRAEFIK DEFAULT CERT")
|
||||
}
|
||||
|
||||
@@ -548,21 +548,14 @@ func (s *EtcdSuite) TestSNIDynamicTlsConfig(c *check.C) {
|
||||
c.Assert(err, checker.IsNil)
|
||||
defer cmd.Process.Kill()
|
||||
|
||||
// wait for Træfik
|
||||
err = try.GetRequest("http://127.0.0.1:8081/api/providers", 60*time.Second, try.BodyContains(string("MIIEpQIBAAKCAQEA1RducBK6EiFDv3TYB8ZcrfKWRVaSfHzWicO3J5WdST9oS7h")))
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
req, err := http.NewRequest(http.MethodGet, "https://127.0.0.1:4443/", nil)
|
||||
c.Assert(err, checker.IsNil)
|
||||
client := &http.Client{Transport: tr1}
|
||||
req.Host = tr1.TLSClientConfig.ServerName
|
||||
req.Header.Set("Host", tr1.TLSClientConfig.ServerName)
|
||||
req.Header.Set("Accept", "*/*")
|
||||
var resp *http.Response
|
||||
resp, err = client.Do(req)
|
||||
|
||||
err = try.RequestWithTransport(req, 30*time.Second, tr1, try.HasCn("snitest.com"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
cn := resp.TLS.PeerCertificates[0].Subject.CommonName
|
||||
c.Assert(cn, checker.Equals, "snitest.com")
|
||||
|
||||
// now we configure the second keypair in etcd and the request for host "snitest.org" will use the second keypair
|
||||
|
||||
@@ -578,18 +571,12 @@ func (s *EtcdSuite) TestSNIDynamicTlsConfig(c *check.C) {
|
||||
})
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
// waiting for Træfik to pull configuration
|
||||
err = try.GetRequest("http://127.0.0.1:8081/api/providers", 30*time.Second, try.BodyContains("MIIEogIBAAKCAQEAvG9kL+vF57+MICehzbqcQAUlAOSl5r"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
|
||||
req, err = http.NewRequest(http.MethodGet, "https://127.0.0.1:4443/", nil)
|
||||
c.Assert(err, checker.IsNil)
|
||||
client = &http.Client{Transport: tr2}
|
||||
req.Host = tr2.TLSClientConfig.ServerName
|
||||
req.Header.Set("Host", tr2.TLSClientConfig.ServerName)
|
||||
req.Header.Set("Accept", "*/*")
|
||||
resp, err = client.Do(req)
|
||||
|
||||
err = try.RequestWithTransport(req, 30*time.Second, tr2, try.HasCn("snitest.org"))
|
||||
c.Assert(err, checker.IsNil)
|
||||
cn = resp.TLS.PeerCertificates[0].Subject.CommonName
|
||||
c.Assert(cn, checker.Equals, "snitest.org")
|
||||
}
|
||||
|
||||
@@ -14,6 +14,7 @@ defaultEntryPoints = ["http", "https"]
|
||||
email = "test@traefik.io"
|
||||
storage = "/tmp/acme.json"
|
||||
entryPoint = "https"
|
||||
acmeLogging = true
|
||||
onDemand = {{.OnDemand}}
|
||||
onHostRule = {{.OnHostRule}}
|
||||
caServer = "http://{{.BoulderHost}}:4001/directory"
|
||||
|
||||
@@ -13,6 +13,7 @@ defaultEntryPoints = ["http", "https"]
|
||||
email = "test@traefik.io"
|
||||
storage = "/tmp/acme.json"
|
||||
entryPoint = "https"
|
||||
acmeLogging = true
|
||||
onDemand = {{.OnDemand}}
|
||||
onHostRule = {{.OnHostRule}}
|
||||
caServer = "http://{{.BoulderHost}}:4001/directory"
|
||||
|
||||
@@ -16,6 +16,7 @@ defaultEntryPoints = ["http", "https"]
|
||||
email = "test@traefik.io"
|
||||
storage = "/tmp/acme.json"
|
||||
entryPoint = "https"
|
||||
acmeLogging = true
|
||||
onDemand = {{.OnDemand}}
|
||||
onHostRule = {{.OnHostRule}}
|
||||
caServer = "http://{{.BoulderHost}}:4001/directory"
|
||||
|
||||
@@ -14,6 +14,7 @@ defaultEntryPoints = ["http", "https"]
|
||||
email = "test@traefik.io"
|
||||
storage = "/tmp/acme.json"
|
||||
entryPoint = "https"
|
||||
acmeLogging = true
|
||||
onDemand = {{.OnDemand}}
|
||||
onHostRule = {{.OnHostRule}}
|
||||
caServer = "http://{{.BoulderHost}}:4001/directory"
|
||||
|
||||
@@ -17,6 +17,7 @@ email = "test@traefik.io"
|
||||
storage = "/tmp/acme.json"
|
||||
entryPoint = "https"
|
||||
onHostRule = true
|
||||
acmeLogging = true
|
||||
caServer = "http://{{.BoulderHost}}:4001/directory"
|
||||
# No challenge defined
|
||||
|
||||
|
||||
@@ -16,6 +16,7 @@ defaultEntryPoints = ["http", "https"]
|
||||
email = "test@traefik.io"
|
||||
storage = "/tmp/acme.json"
|
||||
entryPoint = "https"
|
||||
acmeLogging = true
|
||||
onHostRule = true
|
||||
caServer = "http://wrongurl:4001/directory"
|
||||
|
||||
|
||||
@@ -2,7 +2,7 @@
|
||||
[backends]
|
||||
[backends.backend2]
|
||||
[backends.backend2.servers.server1]
|
||||
url = "http://172.17.0.2:80"
|
||||
url = "http://172.17.0.123:80"
|
||||
weight = 1
|
||||
|
||||
[frontends]
|
||||
|
||||
@@ -14,6 +14,7 @@ defaultEntryPoints = ["http", "https"]
|
||||
email = "test@traefik.io"
|
||||
storage = "/tmp/acme.json"
|
||||
entryPoint = "https"
|
||||
acmeLogging = true
|
||||
onDemand = {{.OnDemand}}
|
||||
onHostRule = {{.OnHostRule}}
|
||||
caServer = "http://{{.BoulderHost}}:4001/directory"
|
||||
|
||||
@@ -14,6 +14,7 @@ defaultEntryPoints = ["http", "https"]
|
||||
email = "test@traefik.io"
|
||||
storage = "/tmp/acme.json"
|
||||
entryPoint = "https"
|
||||
acmeLogging = true
|
||||
onDemand = false
|
||||
onHostRule = false
|
||||
caServer = "http://{{.BoulderHost}}:4001/directory"
|
||||
|
||||
@@ -14,6 +14,7 @@ defaultEntryPoints = ["http", "https"]
|
||||
email = "test@traefik.io"
|
||||
storage = "/tmp/acme.jsonl"
|
||||
entryPoint = "https"
|
||||
acmeLogging = true
|
||||
onDemand = {{.OnDemand}}
|
||||
onHostRule = {{.OnHostRule}}
|
||||
caServer = "http://{{.BoulderHost}}:4001/directory"
|
||||
|
||||
@@ -88,6 +88,31 @@ func HasBody() ResponseCondition {
	}
}

// HasCn returns a retry condition function.
// The condition returns an error if the cn is not correct.
func HasCn(cn string) ResponseCondition {
	return func(res *http.Response) error {
		if res.TLS == nil {
			return errors.New("response doesn't have TLS")
		}

		if len(res.TLS.PeerCertificates) == 0 {
			return errors.New("response TLS doesn't have peer certificates")
		}

		if res.TLS.PeerCertificates[0] == nil {
			return errors.New("first peer certificate is nil")
		}

		commonName := res.TLS.PeerCertificates[0].Subject.CommonName
		if cn != commonName {
			return fmt.Errorf("common name don't match: %s != %s", cn, commonName)
		}

		return nil
	}
}

// StatusCodeIs returns a retry condition function.
// The condition returns an error if the given response's status code is not the
// given HTTP status code.
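The new `HasCn` condition composes with `RequestWithTransport`, as the etcd SNI test above shows. Below is a minimal, hedged sketch of that pattern as standalone code; the address, port, and server name mirror the integration test and the `integration/try` import path is an assumption about the repository layout, not something this diff states.

```go
package main

import (
	"crypto/tls"
	"net/http"
	"time"

	"github.com/containous/traefik/integration/try"
)

// checkServedCN retries for up to 30s until the endpoint serves a certificate
// whose common name matches the expected SNI host.
func checkServedCN() error {
	tr := &http.Transport{TLSClientConfig: &tls.Config{
		InsecureSkipVerify: true, // self-signed test certificates
		ServerName:         "snitest.com",
	}}

	req, err := http.NewRequest(http.MethodGet, "https://127.0.0.1:4443/", nil)
	if err != nil {
		return err
	}
	req.Host = tr.TLSClientConfig.ServerName

	return try.RequestWithTransport(req, 30*time.Second, tr, try.HasCn("snitest.com"))
}

func main() {
	if err := checkServedCN(); err != nil {
		panic(err)
	}
}
```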
@@ -31,7 +31,7 @@ func Sleep(d time.Duration) {
// response body needs to be closed or not. Callers are expected to close on
// their own if the function returns a nil error.
func Response(req *http.Request, timeout time.Duration) (*http.Response, error) {
	return doTryRequest(req, timeout)
	return doTryRequest(req, timeout, nil)
}

// ResponseUntilStatusCode is like Request, but returns the response for further
@@ -40,7 +40,7 @@ func Response(req *http.Request, timeout time.Duration) (*http.Response, error)
// response body needs to be closed or not. Callers are expected to close on
// their own if the function returns a nil error.
func ResponseUntilStatusCode(req *http.Request, timeout time.Duration, statusCode int) (*http.Response, error) {
	return doTryRequest(req, timeout, StatusCodeIs(statusCode))
	return doTryRequest(req, timeout, nil, StatusCodeIs(statusCode))
}

// GetRequest is like Do, but runs a request against the given URL and applies
@@ -48,7 +48,7 @@ func ResponseUntilStatusCode(req *http.Request, timeout time.Duration, statusCod
// ResponseCondition may be nil, in which case only the request against the URL must
// succeed.
func GetRequest(url string, timeout time.Duration, conditions ...ResponseCondition) error {
	resp, err := doTryGet(url, timeout, conditions...)
	resp, err := doTryGet(url, timeout, nil, conditions...)

	if resp != nil && resp.Body != nil {
		defer resp.Body.Close()
@@ -62,7 +62,21 @@ func GetRequest(url string, timeout time.Duration, conditions ...ResponseConditi
// ResponseCondition may be nil, in which case only the request against the URL must
// succeed.
func Request(req *http.Request, timeout time.Duration, conditions ...ResponseCondition) error {
	resp, err := doTryRequest(req, timeout, conditions...)
	resp, err := doTryRequest(req, timeout, nil, conditions...)

	if resp != nil && resp.Body != nil {
		defer resp.Body.Close()
	}

	return err
}

// RequestWithTransport is like Do, but runs a request against the given URL and applies
// the condition on the response.
// ResponseCondition may be nil, in which case only the request against the URL must
// succeed.
func RequestWithTransport(req *http.Request, timeout time.Duration, transport *http.Transport, conditions ...ResponseCondition) error {
	resp, err := doTryRequest(req, timeout, transport, conditions...)

	if resp != nil && resp.Body != nil {
		defer resp.Body.Close()
@@ -112,24 +126,27 @@ func Do(timeout time.Duration, operation DoCondition) error {
	}
}

func doTryGet(url string, timeout time.Duration, conditions ...ResponseCondition) (*http.Response, error) {
func doTryGet(url string, timeout time.Duration, transport *http.Transport, conditions ...ResponseCondition) (*http.Response, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}

	return doTryRequest(req, timeout, conditions...)
	return doTryRequest(req, timeout, transport, conditions...)
}

func doTryRequest(request *http.Request, timeout time.Duration, conditions ...ResponseCondition) (*http.Response, error) {
	return doRequest(Do, timeout, request, conditions...)
func doTryRequest(request *http.Request, timeout time.Duration, transport *http.Transport, conditions ...ResponseCondition) (*http.Response, error) {
	return doRequest(Do, timeout, request, transport, conditions...)
}

func doRequest(action timedAction, timeout time.Duration, request *http.Request, conditions ...ResponseCondition) (*http.Response, error) {
func doRequest(action timedAction, timeout time.Duration, request *http.Request, transport *http.Transport, conditions ...ResponseCondition) (*http.Response, error) {
	var resp *http.Response
	return resp, action(timeout, func() error {
		var err error
		client := http.DefaultClient
		if transport != nil {
			client.Transport = transport
		}

		resp, err = client.Do(request)
		if err != nil {
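Since `ResponseCondition` is just `func(*http.Response) error`, further checks can be written in the same style as `HasCn` and passed to any of the helpers above. A hedged sketch follows; the header check and the dashboard URL are illustrative and not part of this diff.

```go
package main

import (
	"fmt"
	"net/http"
	"time"

	"github.com/containous/traefik/integration/try"
)

// hasHeaderValue is an illustrative custom condition in the style of HasCn.
func hasHeaderValue(name, want string) try.ResponseCondition {
	return func(res *http.Response) error {
		if got := res.Header.Get(name); got != want {
			return fmt.Errorf("header %s: got %q, want %q", name, got, want)
		}
		return nil
	}
}

func main() {
	// Retry the API endpoint until the response carries the expected header.
	err := try.GetRequest("http://127.0.0.1:8081/api/providers", 10*time.Second,
		hasHeaderValue("Content-Type", "application/json"))
	if err != nil {
		panic(err)
	}
}
```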
@@ -36,25 +36,18 @@ const (
	backendServerUpName = metricNamePrefix + "backend_server_up"
)

const (
	// generationAgeForever indicates that a metric never gets outdated.
	generationAgeForever = 0
	// generationAgeDefault is the default age of three generations.
	generationAgeDefault = 3
)

// promState holds all metric state internally and acts as the only Collector we register for Prometheus.
//
// This enables control to remove metrics that belong to outdated configuration.
// As an example why this is required, consider Traefik learns about a new service.
// It populates the 'traefik_server_backend_up' metric for it with a value of 1 (alive).
// When the backend is undeployed now the metric is still there in the client library
// and will be until Traefik would be restarted.
// and will be returned on the metrics endpoint until Traefik would be restarted.
//
// To solve this problem promState keeps track of configuration generations.
// Every time a new configuration is loaded, the generation is increased by one.
// Metrics that "belong" to a dynamic configuration part of Traefik (e.g. backend, entrypoint)
// are removed, given they were tracked more than 3 generations ago.
// To solve this problem promState keeps track of Traefik's dynamic configuration.
// Metrics that "belong" to a dynamic configuration part like backends or entrypoints
// are removed after they were scraped at least once when the corresponding object
// doesn't exist anymore.
var promState = newPrometheusState()

// PrometheusHandler exposes Prometheus routes.
@@ -163,40 +156,66 @@ func RegisterPrometheus(config *types.Prometheus) Registry {
	}
}

// OnConfigurationUpdate increases the current generation of the prometheus state.
func OnConfigurationUpdate() {
	promState.IncGeneration()
// OnConfigurationUpdate receives the current configuration from Traefik.
// It then converts the configuration to the optimized package internal format
// and sets it to the promState.
func OnConfigurationUpdate(configurations types.Configurations) {
	dynamicConfig := newDynamicConfig()

	for _, config := range configurations {
		for _, frontend := range config.Frontends {
			for _, entrypointName := range frontend.EntryPoints {
				dynamicConfig.entrypoints[entrypointName] = true
			}
		}

		for backendName, backend := range config.Backends {
			dynamicConfig.backends[backendName] = make(map[string]bool)
			for _, server := range backend.Servers {
				dynamicConfig.backends[backendName][server.URL] = true
			}
		}
	}

	promState.SetDynamicConfig(dynamicConfig)
}

func newPrometheusState() *prometheusState {
	collectors := make(chan *collector)
	state := make(map[string]*collector)

	return &prometheusState{
		collectors: collectors,
		state:      state,
		collectors:    make(chan *collector),
		dynamicConfig: newDynamicConfig(),
		state:         make(map[string]*collector),
	}
}
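For context, here is a hedged sketch of how a caller could hand a configuration to the new `OnConfigurationUpdate`. The provider, frontend, backend, and server names are invented for illustration, and the `metrics`/`types` import paths are assumptions about the repository layout.

```go
package main

import (
	"github.com/containous/traefik/metrics"
	"github.com/containous/traefik/types"
)

func main() {
	configs := make(types.Configurations)
	configs["docker"] = &types.Configuration{
		Frontends: map[string]*types.Frontend{
			"frontend-app": {EntryPoints: []string{"https"}},
		},
		Backends: map[string]*types.Backend{
			"backend-app": {
				Servers: map[string]types.Server{
					"server-0": {URL: "http://10.0.0.1:80", Weight: 1},
				},
			},
		},
	}

	// Series labelled with entrypoints, backends, or server URLs that are not in
	// configs are dropped after they have been scraped one more time.
	metrics.OnConfigurationUpdate(configs)
}
```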
type prometheusState struct {
|
||||
currentGeneration int
|
||||
collectors chan *collector
|
||||
describers []func(ch chan<- *stdprometheus.Desc)
|
||||
collectors chan *collector
|
||||
describers []func(ch chan<- *stdprometheus.Desc)
|
||||
|
||||
mtx sync.Mutex
|
||||
state map[string]*collector
|
||||
mtx sync.Mutex
|
||||
dynamicConfig *dynamicConfig
|
||||
state map[string]*collector
|
||||
}
|
||||
|
||||
func (ps *prometheusState) IncGeneration() {
|
||||
// reset is a utility method for unit testing. It should be called after each
|
||||
// test run that changes promState internally in order to avoid dependencies
|
||||
// between unit tests.
|
||||
func (ps *prometheusState) reset() {
|
||||
ps.collectors = make(chan *collector)
|
||||
ps.describers = []func(ch chan<- *stdprometheus.Desc){}
|
||||
ps.dynamicConfig = newDynamicConfig()
|
||||
ps.state = make(map[string]*collector)
|
||||
}
|
||||
|
||||
func (ps *prometheusState) SetDynamicConfig(dynamicConfig *dynamicConfig) {
|
||||
ps.mtx.Lock()
|
||||
defer ps.mtx.Unlock()
|
||||
ps.currentGeneration++
|
||||
ps.dynamicConfig = dynamicConfig
|
||||
}
|
||||
|
||||
func (ps *prometheusState) ListenValueUpdates() {
|
||||
for collector := range ps.collectors {
|
||||
ps.mtx.Lock()
|
||||
collector.lastTrackedGeneration = ps.currentGeneration
|
||||
ps.state[collector.id] = collector
|
||||
ps.mtx.Unlock()
|
||||
}
|
||||
@@ -212,42 +231,89 @@ func (ps *prometheusState) Describe(ch chan<- *stdprometheus.Desc) {
|
||||
|
||||
// Collect implements prometheus.Collector. It calls the Collect
|
||||
// method of all metrics it received on the collectors channel.
|
||||
// It's also responsible to remove metrics that were tracked
|
||||
// at least three generations ago. Those metrics are cleaned up
|
||||
// after the Collect of them were called.
|
||||
// It's also responsible to remove metrics that belong to an outdated configuration.
|
||||
// The removal happens only after their Collect method was called to ensure that
|
||||
// also those metrics will be exported on the current scrape.
|
||||
func (ps *prometheusState) Collect(ch chan<- stdprometheus.Metric) {
|
||||
ps.mtx.Lock()
|
||||
defer ps.mtx.Unlock()
|
||||
|
||||
outdatedKeys := []string{}
|
||||
var outdatedKeys []string
|
||||
for key, cs := range ps.state {
|
||||
cs.collector.Collect(ch)
|
||||
|
||||
if cs.maxAge == generationAgeForever {
|
||||
continue
|
||||
}
|
||||
if ps.currentGeneration-cs.lastTrackedGeneration >= cs.maxAge {
|
||||
if ps.isOutdated(cs) {
|
||||
outdatedKeys = append(outdatedKeys, key)
|
||||
}
|
||||
}
|
||||
|
||||
for _, key := range outdatedKeys {
|
||||
ps.state[key].delete()
|
||||
delete(ps.state, key)
|
||||
}
|
||||
}
|
||||
|
||||
func newCollector(metricName string, lnvs labelNamesValues, c stdprometheus.Collector) *collector {
	maxAge := generationAgeDefault
// isOutdated checks whether the passed collector has labels that mark
// it as belonging to an outdated configuration of Traefik.
func (ps *prometheusState) isOutdated(collector *collector) bool {
	labels := collector.labels

	// metrics without labels should never become outdated
	if len(lnvs) == 0 {
		maxAge = generationAgeForever
	if entrypointName, ok := labels["entrypoint"]; ok && !ps.dynamicConfig.hasEntrypoint(entrypointName) {
		return true
	}

	if backendName, ok := labels["backend"]; ok {
		if !ps.dynamicConfig.hasBackend(backendName) {
			return true
		}
		if url, ok := labels["url"]; ok && !ps.dynamicConfig.hasServerURL(backendName, url) {
			return true
		}
	}

	return false
}

func newDynamicConfig() *dynamicConfig {
	return &dynamicConfig{
		entrypoints: make(map[string]bool),
		backends:    make(map[string]map[string]bool),
	}
}

// dynamicConfig holds the current configuration for entrypoints, backends,
// and server URLs in an optimized way to check for existence. This provides
// a performant way to check whether the collected metrics belong to the
// current configuration or to an outdated one.
type dynamicConfig struct {
	entrypoints map[string]bool
	backends    map[string]map[string]bool
}

func (d *dynamicConfig) hasEntrypoint(entrypointName string) bool {
	_, ok := d.entrypoints[entrypointName]
	return ok
}

func (d *dynamicConfig) hasBackend(backendName string) bool {
	_, ok := d.backends[backendName]
	return ok
}

func (d *dynamicConfig) hasServerURL(backendName, serverURL string) bool {
	if backend, hasBackend := d.backends[backendName]; hasBackend {
		_, ok := backend[serverURL]
		return ok
	}
	return false
}

func newCollector(metricName string, labels stdprometheus.Labels, c stdprometheus.Collector, delete func()) *collector {
	return &collector{
		id:     buildMetricID(metricName, lnvs),
		maxAge: maxAge,
		id:        buildMetricID(metricName, labels),
		labels:    labels,
		collector: c,
		delete:    delete,
	}
}
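These helpers are unexported, so the quickest way to see the lookup behaviour is a package-internal test. A small sketch under that assumption follows; the entrypoint, backend, and URL values are illustrative.

```go
package metrics

import "testing"

func TestDynamicConfigLookupsSketch(t *testing.T) {
	cfg := newDynamicConfig()
	cfg.entrypoints["https"] = true
	cfg.backends["backend-app"] = map[string]bool{"http://10.0.0.1:80": true}

	if !cfg.hasEntrypoint("https") || !cfg.hasBackend("backend-app") {
		t.Fatal("expected configured entrypoint and backend to be found")
	}

	// A server URL that is no longer configured marks its metric as outdated.
	if cfg.hasServerURL("backend-app", "http://10.0.0.2:80") {
		t.Fatal("expected unknown server URL to be reported as absent")
	}
}
```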
@@ -255,16 +321,19 @@ func newCollector(metricName string, lnvs labelNamesValues, c stdprometheus.Coll
|
||||
// It adds information on how many generations this metric should be present
|
||||
// in the /metrics output, relatived to the time it was last tracked.
|
||||
type collector struct {
|
||||
id string
|
||||
collector stdprometheus.Collector
|
||||
lastTrackedGeneration int
|
||||
maxAge int
|
||||
id string
|
||||
labels stdprometheus.Labels
|
||||
collector stdprometheus.Collector
|
||||
delete func()
|
||||
}
|
||||
|
||||
func buildMetricID(metricName string, lnvs labelNamesValues) string {
|
||||
newLnvs := append([]string{}, lnvs...)
|
||||
sort.Strings(newLnvs)
|
||||
return metricName + ":" + strings.Join(newLnvs, "|")
|
||||
func buildMetricID(metricName string, labels stdprometheus.Labels) string {
|
||||
var labelNamesValues []string
|
||||
for name, value := range labels {
|
||||
labelNamesValues = append(labelNamesValues, name, value)
|
||||
}
|
||||
sort.Strings(labelNamesValues)
|
||||
return metricName + ":" + strings.Join(labelNamesValues, "|")
|
||||
}
|
||||
|
||||
func newCounterFrom(collectors chan<- *collector, opts stdprometheus.CounterOpts, labelNames []string) *counter {
|
||||
@@ -297,9 +366,12 @@ func (c *counter) With(labelValues ...string) metrics.Counter {
|
||||
}
|
||||
|
||||
func (c *counter) Add(delta float64) {
|
||||
collector := c.cv.With(c.labelNamesValues.ToLabels())
|
||||
labels := c.labelNamesValues.ToLabels()
|
||||
collector := c.cv.With(labels)
|
||||
collector.Add(delta)
|
||||
c.collectors <- newCollector(c.name, c.labelNamesValues, collector)
|
||||
c.collectors <- newCollector(c.name, labels, collector, func() {
|
||||
c.cv.Delete(labels)
|
||||
})
|
||||
}
|
||||
|
||||
func (c *counter) Describe(ch chan<- *stdprometheus.Desc) {
|
||||
@@ -336,15 +408,21 @@ func (g *gauge) With(labelValues ...string) metrics.Gauge {
|
||||
}
|
||||
|
||||
func (g *gauge) Add(delta float64) {
|
||||
collector := g.gv.With(g.labelNamesValues.ToLabels())
|
||||
labels := g.labelNamesValues.ToLabels()
|
||||
collector := g.gv.With(labels)
|
||||
collector.Add(delta)
|
||||
g.collectors <- newCollector(g.name, g.labelNamesValues, collector)
|
||||
g.collectors <- newCollector(g.name, labels, collector, func() {
|
||||
g.gv.Delete(labels)
|
||||
})
|
||||
}
|
||||
|
||||
func (g *gauge) Set(value float64) {
|
||||
collector := g.gv.With(g.labelNamesValues.ToLabels())
|
||||
labels := g.labelNamesValues.ToLabels()
|
||||
collector := g.gv.With(labels)
|
||||
collector.Set(value)
|
||||
g.collectors <- newCollector(g.name, g.labelNamesValues, collector)
|
||||
g.collectors <- newCollector(g.name, labels, collector, func() {
|
||||
g.gv.Delete(labels)
|
||||
})
|
||||
}
|
||||
|
||||
func (g *gauge) Describe(ch chan<- *stdprometheus.Desc) {
|
||||
@@ -377,9 +455,12 @@ func (h *histogram) With(labelValues ...string) metrics.Histogram {
|
||||
}
|
||||
|
||||
func (h *histogram) Observe(value float64) {
|
||||
collector := h.hv.With(h.labelNamesValues.ToLabels())
|
||||
labels := h.labelNamesValues.ToLabels()
|
||||
collector := h.hv.With(labels)
|
||||
collector.Observe(value)
|
||||
h.collectors <- newCollector(h.name, h.labelNamesValues, collector)
|
||||
h.collectors <- newCollector(h.name, labels, collector, func() {
|
||||
h.hv.Delete(labels)
|
||||
})
|
||||
}
|
||||
|
||||
func (h *histogram) Describe(ch chan<- *stdprometheus.Desc) {
|
||||
|
||||
@@ -7,12 +7,16 @@ import (
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
th "github.com/containous/traefik/testhelpers"
|
||||
"github.com/containous/traefik/types"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
dto "github.com/prometheus/client_model/go"
|
||||
)
|
||||
|
||||
func TestPrometheus(t *testing.T) {
|
||||
// Reset state of global promState.
|
||||
defer promState.reset()
|
||||
|
||||
prometheusRegistry := RegisterPrometheus(&types.Prometheus{})
|
||||
defer prometheus.Unregister(promState)
|
||||
|
||||
@@ -177,56 +181,94 @@ func TestPrometheus(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestPrometheusGenerationLogicForMetricWithLabel(t *testing.T) {
|
||||
func TestPrometheusMetricRemoval(t *testing.T) {
|
||||
// Reset state of global promState.
|
||||
defer promState.reset()
|
||||
|
||||
prometheusRegistry := RegisterPrometheus(&types.Prometheus{})
|
||||
defer prometheus.Unregister(promState)
|
||||
|
||||
// Metrics with labels belonging to a specific configuration in Traefik
|
||||
// should be removed when the generationMaxAge is exceeded. As example
|
||||
// we use the traefik_backend_requests_total metric.
|
||||
configurations := make(types.Configurations)
|
||||
configurations["providerName"] = th.BuildConfiguration(
|
||||
th.WithFrontends(
|
||||
th.WithFrontend("backend1", th.WithEntryPoints("entrypoint1")),
|
||||
),
|
||||
th.WithBackends(
|
||||
th.WithBackendNew("backend1", th.WithServersNew(th.WithServerNew("http://localhost:9000"))),
|
||||
),
|
||||
)
|
||||
OnConfigurationUpdate(configurations)
|
||||
|
||||
// Register some metrics manually that are not part of the active configuration.
|
||||
// Those metrics should be part of the /metrics output on the first scrape but
|
||||
// should be removed after that scrape.
|
||||
prometheusRegistry.
|
||||
EntrypointReqsCounter().
|
||||
With("entrypoint", "entrypoint2", "code", strconv.Itoa(http.StatusOK), "method", http.MethodGet, "protocol", "http").
|
||||
Add(1)
|
||||
prometheusRegistry.
|
||||
BackendReqsCounter().
|
||||
With("backend", "backend1", "code", strconv.Itoa(http.StatusOK), "method", http.MethodGet, "protocol", "http").
|
||||
With("backend", "backend2", "code", strconv.Itoa(http.StatusOK), "method", http.MethodGet, "protocol", "http").
|
||||
Add(1)
|
||||
prometheusRegistry.
|
||||
BackendServerUpGauge().
|
||||
With("backend", "backend1", "url", "http://localhost:9999").
|
||||
Set(1)
|
||||
|
||||
delayForTrackingCompletion()
|
||||
|
||||
assertMetricsExist(t, mustScrape(), entrypointReqsTotalName, backendReqsTotalName, backendServerUpName)
|
||||
assertMetricsAbsent(t, mustScrape(), entrypointReqsTotalName, backendReqsTotalName, backendServerUpName)
|
||||
|
||||
// To verify that metrics belonging to active configurations are not removed
|
||||
// here the counter examples.
|
||||
prometheusRegistry.
|
||||
EntrypointReqsCounter().
|
||||
With("entrypoint", "entrypoint1", "code", strconv.Itoa(http.StatusOK), "method", http.MethodGet, "protocol", "http").
|
||||
Add(1)
|
||||
|
||||
delayForTrackingCompletion()
|
||||
|
||||
assertMetricExists(t, backendReqsTotalName, mustScrape())
|
||||
|
||||
// Increase the config generation one more than the max age of a metric.
|
||||
for i := 0; i < generationAgeDefault+1; i++ {
|
||||
OnConfigurationUpdate()
|
||||
}
|
||||
|
||||
// On the next scrape the metric still exists and will be removed
|
||||
// after the scrape completed.
|
||||
assertMetricExists(t, backendReqsTotalName, mustScrape())
|
||||
|
||||
// Now the metric should be absent.
|
||||
assertMetricAbsent(t, backendReqsTotalName, mustScrape())
|
||||
assertMetricsExist(t, mustScrape(), entrypointReqsTotalName)
|
||||
assertMetricsExist(t, mustScrape(), entrypointReqsTotalName)
|
||||
}
|
||||
|
||||
func TestPrometheusGenerationLogicForMetricWithoutLabel(t *testing.T) {
|
||||
func TestPrometheusRemovedMetricsReset(t *testing.T) {
|
||||
// Reset state of global promState.
|
||||
defer promState.reset()
|
||||
|
||||
prometheusRegistry := RegisterPrometheus(&types.Prometheus{})
|
||||
defer prometheus.Unregister(promState)
|
||||
|
||||
// Metrics without labels like traefik_config_reloads_total should live forever
|
||||
// and never get removed.
|
||||
prometheusRegistry.ConfigReloadsCounter().Add(1)
|
||||
labelNamesValues := []string{
|
||||
"backend", "backend",
|
||||
"code", strconv.Itoa(http.StatusOK),
|
||||
"method", http.MethodGet,
|
||||
"protocol", "http",
|
||||
}
|
||||
prometheusRegistry.
|
||||
BackendReqsCounter().
|
||||
With(labelNamesValues...).
|
||||
Add(3)
|
||||
|
||||
delayForTrackingCompletion()
|
||||
|
||||
assertMetricExists(t, configReloadsTotalName, mustScrape())
|
||||
metricsFamilies := mustScrape()
|
||||
assertCounterValue(t, 3, findMetricFamily(backendReqsTotalName, metricsFamilies), labelNamesValues...)
|
||||
|
||||
// Increase the config generation one more than the max age of a metric.
|
||||
for i := 0; i < generationAgeDefault+100; i++ {
|
||||
OnConfigurationUpdate()
|
||||
}
|
||||
// There is no dynamic configuration and so this metric will be deleted
|
||||
// after the first scrape.
|
||||
assertMetricsAbsent(t, mustScrape(), backendReqsTotalName)
|
||||
|
||||
// Scrape two times in order to verify, that it is not removed after the
|
||||
// first scrape completed.
|
||||
assertMetricExists(t, configReloadsTotalName, mustScrape())
|
||||
assertMetricExists(t, configReloadsTotalName, mustScrape())
|
||||
prometheusRegistry.
|
||||
BackendReqsCounter().
|
||||
With(labelNamesValues...).
|
||||
Add(1)
|
||||
|
||||
delayForTrackingCompletion()
|
||||
|
||||
metricsFamilies = mustScrape()
|
||||
assertCounterValue(t, 1, findMetricFamily(backendReqsTotalName, metricsFamilies), labelNamesValues...)
|
||||
}
|
||||
|
||||
// Tracking and gathering the metrics happens concurrently.
|
||||
@@ -247,17 +289,23 @@ func mustScrape() []*dto.MetricFamily {
|
||||
return families
|
||||
}
|
||||
|
||||
func assertMetricExists(t *testing.T, name string, families []*dto.MetricFamily) {
|
||||
func assertMetricsExist(t *testing.T, families []*dto.MetricFamily, metricNames ...string) {
|
||||
t.Helper()
|
||||
if findMetricFamily(name, families) == nil {
|
||||
t.Errorf("gathered metrics do not contain %q", name)
|
||||
|
||||
for _, metricName := range metricNames {
|
||||
if findMetricFamily(metricName, families) == nil {
|
||||
t.Errorf("gathered metrics should contain %q", metricName)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func assertMetricAbsent(t *testing.T, name string, families []*dto.MetricFamily) {
|
||||
func assertMetricsAbsent(t *testing.T, families []*dto.MetricFamily, metricNames ...string) {
|
||||
t.Helper()
|
||||
if findMetricFamily(name, families) != nil {
|
||||
t.Errorf("gathered metrics contain %q, but should not", name)
|
||||
|
||||
for _, metricName := range metricNames {
|
||||
if findMetricFamily(metricName, families) != nil {
|
||||
t.Errorf("gathered metrics should not contain %q", metricName)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -270,6 +318,58 @@ func findMetricFamily(name string, families []*dto.MetricFamily) *dto.MetricFami
|
||||
return nil
|
||||
}
|
||||
|
||||
func findMetricByLabelNamesValues(family *dto.MetricFamily, labelNamesValues ...string) *dto.Metric {
|
||||
if family == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
for _, metric := range family.Metric {
|
||||
if hasMetricAllLabelPairs(metric, labelNamesValues...) {
|
||||
return metric
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func hasMetricAllLabelPairs(metric *dto.Metric, labelNamesValues ...string) bool {
|
||||
for i := 0; i < len(labelNamesValues); i += 2 {
|
||||
name, val := labelNamesValues[i], labelNamesValues[i+1]
|
||||
if !hasMetricLabelPair(metric, name, val) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
func hasMetricLabelPair(metric *dto.Metric, labelName, labelValue string) bool {
|
||||
for _, labelPair := range metric.Label {
|
||||
if labelPair.GetName() == labelName && labelPair.GetValue() == labelValue {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func assertCounterValue(t *testing.T, want float64, family *dto.MetricFamily, labelNamesValues ...string) {
|
||||
t.Helper()
|
||||
|
||||
metric := findMetricByLabelNamesValues(family, labelNamesValues...)
|
||||
|
||||
if metric == nil {
|
||||
t.Error("metric must not be nil")
|
||||
return
|
||||
}
|
||||
if metric.Counter == nil {
|
||||
t.Errorf("metric %s must be a counter", family.GetName())
|
||||
return
|
||||
}
|
||||
|
||||
if cv := metric.Counter.GetValue(); cv != want {
|
||||
t.Errorf("metric %s has value %v, want %v", family.GetName(), cv, want)
|
||||
}
|
||||
}
|
||||
|
||||
func buildCounterAssert(t *testing.T, metricName string, expectedValue int) func(family *dto.MetricFamily) {
|
||||
return func(family *dto.MetricFamily) {
|
||||
if cv := int(family.Metric[0].Counter.GetValue()); cv != expectedValue {
|
||||
|
||||
@@ -3,15 +3,14 @@ package accesslog
|
||||
import (
|
||||
"fmt"
|
||||
"net/http"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/urfave/negroni"
|
||||
"github.com/vulcand/oxy/utils"
|
||||
)
|
||||
|
||||
// SaveBackend sends the backend name to the logger. These are always used with a corresponding
|
||||
// SaveFrontend handler.
|
||||
// SaveBackend sends the backend name to the logger.
|
||||
// These are always used with a corresponding SaveFrontend handler.
|
||||
type SaveBackend struct {
|
||||
next http.Handler
|
||||
backendName string
|
||||
@@ -23,61 +22,9 @@ func NewSaveBackend(next http.Handler, backendName string) http.Handler {
|
||||
}
|
||||
|
||||
func (sb *SaveBackend) ServeHTTP(rw http.ResponseWriter, r *http.Request) {
|
||||
table := GetLogDataTable(r)
|
||||
table.Core[BackendName] = sb.backendName
|
||||
table.Core[BackendURL] = r.URL // note that this is *not* the original incoming URL
|
||||
table.Core[BackendAddr] = r.URL.Host
|
||||
|
||||
crw := &captureResponseWriter{rw: rw}
|
||||
start := time.Now().UTC()
|
||||
|
||||
sb.next.ServeHTTP(crw, r)
|
||||
|
||||
// use UTC to handle switchover of daylight saving correctly
|
||||
table.Core[OriginDuration] = time.Now().UTC().Sub(start)
|
||||
table.Core[OriginStatus] = crw.Status()
|
||||
table.Core[OriginStatusLine] = fmt.Sprintf("%03d %s", crw.Status(), http.StatusText(crw.Status()))
|
||||
// make copy of headers so we can ensure there is no subsequent mutation during response processing
|
||||
table.OriginResponse = make(http.Header)
|
||||
utils.CopyHeaders(table.OriginResponse, crw.Header())
|
||||
table.Core[OriginContentSize] = crw.Size()
|
||||
}
|
||||
|
||||
// SaveFrontend sends the frontend name to the logger. These are sometimes used with a corresponding
|
||||
// SaveBackend handler, but not always. For example, redirected requests don't reach a backend.
|
||||
type SaveFrontend struct {
|
||||
next http.Handler
|
||||
frontendName string
|
||||
}
|
||||
|
||||
// NewSaveFrontend creates a SaveFrontend handler.
|
||||
func NewSaveFrontend(next http.Handler, frontendName string) http.Handler {
|
||||
return &SaveFrontend{next, frontendName}
|
||||
}
|
||||
|
||||
func (sb *SaveFrontend) ServeHTTP(rw http.ResponseWriter, r *http.Request) {
|
||||
table := GetLogDataTable(r)
|
||||
table.Core[FrontendName] = strings.TrimPrefix(sb.frontendName, "frontend-")
|
||||
|
||||
sb.next.ServeHTTP(rw, r)
|
||||
}
|
||||
|
||||
// SaveNegroniFrontend sends the frontend name to the logger.
|
||||
type SaveNegroniFrontend struct {
|
||||
next negroni.Handler
|
||||
frontendName string
|
||||
}
|
||||
|
||||
// NewSaveNegroniFrontend creates a SaveNegroniFrontend handler.
|
||||
func NewSaveNegroniFrontend(next negroni.Handler, frontendName string) negroni.Handler {
|
||||
return &SaveNegroniFrontend{next, frontendName}
|
||||
}
|
||||
|
||||
func (sb *SaveNegroniFrontend) ServeHTTP(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
|
||||
table := GetLogDataTable(r)
|
||||
table.Core[FrontendName] = strings.TrimPrefix(sb.frontendName, "frontend-")
|
||||
|
||||
sb.next.ServeHTTP(rw, r, next)
|
||||
serveSaveBackend(rw, r, sb.backendName, func(crw *captureResponseWriter) {
|
||||
sb.next.ServeHTTP(crw, r)
|
||||
})
|
||||
}
|
||||
|
||||
// SaveNegroniBackend sends the backend name to the logger.
|
||||
@@ -92,13 +39,21 @@ func NewSaveNegroniBackend(next negroni.Handler, backendName string) negroni.Han
|
||||
}
|
||||
|
||||
func (sb *SaveNegroniBackend) ServeHTTP(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
|
||||
serveSaveBackend(rw, r, sb.backendName, func(crw *captureResponseWriter) {
|
||||
sb.next.ServeHTTP(crw, r, next)
|
||||
})
|
||||
}
|
||||
|
||||
func serveSaveBackend(rw http.ResponseWriter, r *http.Request, backendName string, apply func(*captureResponseWriter)) {
|
||||
table := GetLogDataTable(r)
|
||||
table.Core[BackendName] = sb.backendName
|
||||
table.Core[BackendName] = backendName
|
||||
table.Core[BackendURL] = r.URL // note that this is *not* the original incoming URL
|
||||
table.Core[BackendAddr] = r.URL.Host
|
||||
|
||||
crw := &captureResponseWriter{rw: rw}
|
||||
start := time.Now().UTC()
|
||||
|
||||
sb.next.ServeHTTP(crw, r, next)
|
||||
apply(crw)
|
||||
|
||||
// use UTC to handle switchover of daylight saving correctly
|
||||
table.Core[OriginDuration] = time.Now().UTC().Sub(start)
|
||||
|
||||
51	middlewares/accesslog/save_frontend.go	Normal file
@@ -0,0 +1,51 @@
package accesslog

import (
	"net/http"
	"strings"

	"github.com/urfave/negroni"
)

// SaveFrontend sends the frontend name to the logger.
// These are sometimes used with a corresponding SaveBackend handler, but not always.
// For example, redirected requests don't reach a backend.
type SaveFrontend struct {
	next         http.Handler
	frontendName string
}

// NewSaveFrontend creates a SaveFrontend handler.
func NewSaveFrontend(next http.Handler, frontendName string) http.Handler {
	return &SaveFrontend{next, frontendName}
}

func (sf *SaveFrontend) ServeHTTP(rw http.ResponseWriter, r *http.Request) {
	serveSaveFrontend(r, sf.frontendName, func() {
		sf.next.ServeHTTP(rw, r)
	})
}

// SaveNegroniFrontend sends the frontend name to the logger.
type SaveNegroniFrontend struct {
	next         negroni.Handler
	frontendName string
}

// NewSaveNegroniFrontend creates a SaveNegroniFrontend handler.
func NewSaveNegroniFrontend(next negroni.Handler, frontendName string) negroni.Handler {
	return &SaveNegroniFrontend{next, frontendName}
}

func (sf *SaveNegroniFrontend) ServeHTTP(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
	serveSaveFrontend(r, sf.frontendName, func() {
		sf.next.ServeHTTP(rw, r, next)
	})
}

func serveSaveFrontend(r *http.Request, frontendName string, apply func()) {
	table := GetLogDataTable(r)
	table.Core[FrontendName] = strings.TrimPrefix(frontendName, "frontend-")

	apply()
}
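A hedged sketch of how such savers are chained follows. The frontend and backend names are invented, and only the wiring is shown: both handlers read the per-request log data table, so in practice the chain has to sit behind the access log handler that attaches that table.

```go
package example

import (
	"net/http"

	"github.com/containous/traefik/middlewares/accesslog"
)

// buildChain wires the access-log savers around an upstream handler.
// It must be served behind the accesslog LogHandler, which creates the log data table.
func buildChain(upstream http.Handler) http.Handler {
	h := accesslog.NewSaveBackend(upstream, "backend-app") // records backend name, origin status and timing
	return accesslog.NewSaveFrontend(h, "frontend-app")    // records the frontend name
}
```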
@@ -101,12 +101,7 @@ func (h *Handler) ServeHTTP(w http.ResponseWriter, req *http.Request, next http.
|
||||
|
||||
h.backendHandler.ServeHTTP(recorderErrorPage, pageReq.WithContext(req.Context()))
|
||||
|
||||
utils.CopyHeaders(w.Header(), recorder.Header())
|
||||
for key := range recorderErrorPage.Header() {
|
||||
w.Header().Del(key)
|
||||
}
|
||||
utils.CopyHeaders(w.Header(), recorderErrorPage.Header())
|
||||
|
||||
w.WriteHeader(recorder.GetCode())
|
||||
w.Write(recorderErrorPage.GetBody().Bytes())
|
||||
return
|
||||
@@ -174,64 +169,65 @@ type responseRecorderWithCloseNotify struct {
|
||||
|
||||
// CloseNotify returns a channel that receives at most a
|
||||
// single value (true) when the client connection has gone away.
|
||||
func (rw *responseRecorderWithCloseNotify) CloseNotify() <-chan bool {
|
||||
return rw.responseWriter.(http.CloseNotifier).CloseNotify()
|
||||
func (r *responseRecorderWithCloseNotify) CloseNotify() <-chan bool {
|
||||
return r.responseWriter.(http.CloseNotifier).CloseNotify()
|
||||
}
|
||||
|
||||
// Header returns the response headers.
|
||||
func (rw *responseRecorderWithoutCloseNotify) Header() http.Header {
|
||||
if rw.HeaderMap == nil {
|
||||
rw.HeaderMap = make(http.Header)
|
||||
func (r *responseRecorderWithoutCloseNotify) Header() http.Header {
|
||||
if r.HeaderMap == nil {
|
||||
r.HeaderMap = make(http.Header)
|
||||
}
|
||||
return rw.HeaderMap
|
||||
|
||||
return r.HeaderMap
|
||||
}
|
||||
|
||||
func (rw *responseRecorderWithoutCloseNotify) GetCode() int {
|
||||
return rw.Code
|
||||
func (r *responseRecorderWithoutCloseNotify) GetCode() int {
|
||||
return r.Code
|
||||
}
|
||||
|
||||
func (rw *responseRecorderWithoutCloseNotify) GetBody() *bytes.Buffer {
|
||||
return rw.Body
|
||||
func (r *responseRecorderWithoutCloseNotify) GetBody() *bytes.Buffer {
|
||||
return r.Body
|
||||
}
|
||||
|
||||
func (rw *responseRecorderWithoutCloseNotify) IsStreamingResponseStarted() bool {
|
||||
return rw.streamingResponseStarted
|
||||
func (r *responseRecorderWithoutCloseNotify) IsStreamingResponseStarted() bool {
|
||||
return r.streamingResponseStarted
|
||||
}
|
||||
|
||||
// Write always succeeds and writes to rw.Body, if not nil.
|
||||
func (rw *responseRecorderWithoutCloseNotify) Write(buf []byte) (int, error) {
|
||||
if rw.err != nil {
|
||||
return 0, rw.err
|
||||
func (r *responseRecorderWithoutCloseNotify) Write(buf []byte) (int, error) {
|
||||
if r.err != nil {
|
||||
return 0, r.err
|
||||
}
|
||||
return rw.Body.Write(buf)
|
||||
return r.Body.Write(buf)
|
||||
}
|
||||
|
||||
// WriteHeader sets rw.Code.
|
||||
func (rw *responseRecorderWithoutCloseNotify) WriteHeader(code int) {
|
||||
rw.Code = code
|
||||
func (r *responseRecorderWithoutCloseNotify) WriteHeader(code int) {
|
||||
r.Code = code
|
||||
}
|
||||
|
||||
// Hijack hijacks the connection
|
||||
func (rw *responseRecorderWithoutCloseNotify) Hijack() (net.Conn, *bufio.ReadWriter, error) {
|
||||
return rw.responseWriter.(http.Hijacker).Hijack()
|
||||
func (r *responseRecorderWithoutCloseNotify) Hijack() (net.Conn, *bufio.ReadWriter, error) {
|
||||
return r.responseWriter.(http.Hijacker).Hijack()
|
||||
}
|
||||
|
||||
// Flush sends any buffered data to the client.
|
||||
func (rw *responseRecorderWithoutCloseNotify) Flush() {
|
||||
if !rw.streamingResponseStarted {
|
||||
utils.CopyHeaders(rw.responseWriter.Header(), rw.Header())
|
||||
rw.responseWriter.WriteHeader(rw.Code)
|
||||
rw.streamingResponseStarted = true
|
||||
func (r *responseRecorderWithoutCloseNotify) Flush() {
|
||||
if !r.streamingResponseStarted {
|
||||
utils.CopyHeaders(r.responseWriter.Header(), r.Header())
|
||||
r.responseWriter.WriteHeader(r.Code)
|
||||
r.streamingResponseStarted = true
|
||||
}
|
||||
|
||||
_, err := rw.responseWriter.Write(rw.Body.Bytes())
|
||||
_, err := r.responseWriter.Write(r.Body.Bytes())
|
||||
if err != nil {
|
||||
log.Errorf("Error writing response in responseRecorder: %s", err)
|
||||
rw.err = err
|
||||
log.Errorf("Error writing response in responseRecorder: %v", err)
|
||||
r.err = err
|
||||
}
|
||||
rw.Body.Reset()
|
||||
r.Body.Reset()
|
||||
|
||||
if flusher, ok := rw.responseWriter.(http.Flusher); ok {
|
||||
if flusher, ok := r.responseWriter.(http.Flusher); ok {
|
||||
flusher.Flush()
|
||||
}
|
||||
}
|
||||
|
||||
@@ -318,7 +318,6 @@ func TestHandlerOldWayIntegration(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
|
||||
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("X-Foo", "bar")
|
||||
w.WriteHeader(test.backendCode)
|
||||
fmt.Fprintln(w, http.StatusText(test.backendCode))
|
||||
})
|
||||
@@ -331,7 +330,6 @@ func TestHandlerOldWayIntegration(t *testing.T) {
|
||||
n.ServeHTTP(recorder, req)
|
||||
|
||||
test.validate(t, recorder)
|
||||
assert.Equal(t, "bar", recorder.Header().Get("X-Foo"), "missing header")
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
62	middlewares/pipelining/pipelining.go	Normal file
@@ -0,0 +1,62 @@
package pipelining

import (
	"bufio"
	"net"
	"net/http"
)

// Pipelining returns a middleware
type Pipelining struct {
	next http.Handler
}

// NewPipelining returns a new Pipelining instance
func NewPipelining(next http.Handler) *Pipelining {
	return &Pipelining{
		next: next,
	}
}

func (p *Pipelining) ServeHTTP(rw http.ResponseWriter, r *http.Request) {
	// https://github.com/golang/go/blob/3d59583836630cf13ec4bfbed977d27b1b7adbdc/src/net/http/server.go#L201-L218
	if r.Method == http.MethodPut || r.Method == http.MethodPost {
		p.next.ServeHTTP(rw, r)
	} else {
		p.next.ServeHTTP(&writerWithoutCloseNotify{rw}, r)
	}
}

// writerWithoutCloseNotify helps to disable closeNotify
type writerWithoutCloseNotify struct {
	W http.ResponseWriter
}

// Header returns the response headers.
func (w *writerWithoutCloseNotify) Header() http.Header {
	return w.W.Header()
}

// Write writes the data to the connection as part of an HTTP reply.
func (w *writerWithoutCloseNotify) Write(buf []byte) (int, error) {
	return w.W.Write(buf)
}

// WriteHeader sends an HTTP response header with the provided
// status code.
func (w *writerWithoutCloseNotify) WriteHeader(code int) {
	w.W.WriteHeader(code)
}

// Flush sends any buffered data to the client.
func (w *writerWithoutCloseNotify) Flush() {
	if f, ok := w.W.(http.Flusher); ok {
		f.Flush()
	}
}

// Hijack hijacks the connection.
func (w *writerWithoutCloseNotify) Hijack() (net.Conn, *bufio.ReadWriter, error) {
	return w.W.(http.Hijacker).Hijack()
}
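The wrapper is an ordinary `http.Handler`, so a usage sketch looks like this; the port and the handler body are illustrative.

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/containous/traefik/middlewares/pipelining"
)

func main() {
	next := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// For GET, HEAD, and other safe methods the middleware hides CloseNotify,
		// which keeps HTTP/1.1 pipelined requests from being cancelled early.
		_, canNotify := w.(http.CloseNotifier)
		fmt.Fprintf(w, "CloseNotifier exposed: %v\n", canNotify)
	})

	log.Fatal(http.ListenAndServe(":8080", pipelining.NewPipelining(next)))
}
```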
69	middlewares/pipelining/pipelining_test.go	Normal file
@@ -0,0 +1,69 @@
|
||||
package pipelining
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
type recorderWithCloseNotify struct {
|
||||
*httptest.ResponseRecorder
|
||||
}
|
||||
|
||||
func (r *recorderWithCloseNotify) CloseNotify() <-chan bool {
|
||||
panic("implement me")
|
||||
}
|
||||
|
||||
func TestNewPipelining(t *testing.T) {
|
||||
testCases := []struct {
|
||||
desc string
|
||||
HTTPMethod string
|
||||
implementCloseNotifier bool
|
||||
}{
|
||||
{
|
||||
desc: "should not implement CloseNotifier with GET method",
|
||||
HTTPMethod: http.MethodGet,
|
||||
implementCloseNotifier: false,
|
||||
},
|
||||
{
|
||||
desc: "should implement CloseNotifier with PUT method",
|
||||
HTTPMethod: http.MethodPut,
|
||||
implementCloseNotifier: true,
|
||||
},
|
||||
{
|
||||
desc: "should implement CloseNotifier with POST method",
|
||||
HTTPMethod: http.MethodPost,
|
||||
implementCloseNotifier: true,
|
||||
},
|
||||
{
|
||||
desc: "should not implement CloseNotifier with GET method",
|
||||
HTTPMethod: http.MethodHead,
|
||||
implementCloseNotifier: false,
|
||||
},
|
||||
{
|
||||
desc: "should not implement CloseNotifier with PROPFIND method",
|
||||
HTTPMethod: "PROPFIND",
|
||||
implementCloseNotifier: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
_, ok := w.(http.CloseNotifier)
|
||||
assert.Equal(t, test.implementCloseNotifier, ok)
|
||||
w.WriteHeader(http.StatusOK)
|
||||
})
|
||||
handler := NewPipelining(nextHandler)
|
||||
|
||||
req := httptest.NewRequest(test.HTTPMethod, "http://localhost", nil)
|
||||
|
||||
handler.ServeHTTP(&recorderWithCloseNotify{httptest.NewRecorder()}, req)
|
||||
})
|
||||
}
|
||||
}
|
||||
10	mkdocs.yml
@@ -16,14 +16,11 @@ theme:
|
||||
include_sidebar: true
|
||||
favicon: img/traefik.icon.png
|
||||
logo: img/traefik.logo.png
|
||||
palette:
|
||||
primary: 'blue'
|
||||
accent: 'light blue'
|
||||
feature:
|
||||
tabs: false
|
||||
palette:
|
||||
primary: 'cyan'
|
||||
accent: 'cyan'
|
||||
feature:
|
||||
tabs: false
|
||||
i18n:
|
||||
prev: 'Previous'
|
||||
next: 'Next'
|
||||
@@ -45,7 +42,7 @@ google_analytics:
|
||||
# - type: 'slack'
|
||||
# link: 'https://traefik.herokuapp.com'
|
||||
# - type: 'twitter'
|
||||
# link: 'https://twitter.com/traefikproxy'
|
||||
# link: 'https://twitter.com/traefik'
|
||||
|
||||
extra_css:
|
||||
- theme/styles/extra.css
|
||||
@@ -101,4 +98,3 @@ pages:
|
||||
- 'Clustering/HA': 'user-guide/cluster.md'
|
||||
- 'gRPC Example': 'user-guide/grpc.md'
|
||||
- 'Traefik cluster example with Swarm': 'user-guide/cluster-docker-consul.md'
|
||||
- Benchmarks: benchmarks.md
|
||||
|
||||
@@ -7,7 +7,7 @@ import (
|
||||
"crypto/x509"
|
||||
|
||||
"github.com/containous/traefik/log"
|
||||
acme "github.com/xenolf/lego/acmev2"
|
||||
"github.com/xenolf/lego/acme"
|
||||
)
|
||||
|
||||
// Account is used to store lets encrypt registration info
|
||||
|
||||
@@ -8,7 +8,7 @@ import (
|
||||
"github.com/containous/flaeg"
|
||||
"github.com/containous/traefik/log"
|
||||
"github.com/containous/traefik/safe"
|
||||
acme "github.com/xenolf/lego/acmev2"
|
||||
"github.com/xenolf/lego/acme"
|
||||
)
|
||||
|
||||
func dnsOverrideDelay(delay flaeg.Duration) error {
|
||||
@@ -34,15 +34,9 @@ func getTokenValue(token, domain string, store Store) []byte {
|
||||
var result []byte
|
||||
|
||||
operation := func() error {
|
||||
var ok bool
|
||||
httpChallenges, err := store.GetHTTPChallenges()
|
||||
if err != nil {
|
||||
return fmt.Errorf("HTTPChallenges not available : %s", err)
|
||||
}
|
||||
if result, ok = httpChallenges[token][domain]; !ok {
|
||||
return fmt.Errorf("cannot find challenge for token %v", token)
|
||||
}
|
||||
return nil
|
||||
var err error
|
||||
result, err = store.GetHTTPChallengeToken(token, domain)
|
||||
return err
|
||||
}
|
||||
|
||||
notify := func(err error, time time.Duration) {
|
||||
@@ -60,40 +54,9 @@ func getTokenValue(token, domain string, store Store) []byte {
|
||||
}
|
||||
|
||||
func presentHTTPChallenge(domain, token, keyAuth string, store Store) error {
|
||||
httpChallenges, err := store.GetHTTPChallenges()
|
||||
if err != nil {
|
||||
return fmt.Errorf("unable to get HTTPChallenges : %s", err)
|
||||
}
|
||||
|
||||
if httpChallenges == nil {
|
||||
httpChallenges = map[string]map[string][]byte{}
|
||||
}
|
||||
|
||||
if _, ok := httpChallenges[token]; !ok {
|
||||
httpChallenges[token] = map[string][]byte{}
|
||||
}
|
||||
|
||||
httpChallenges[token][domain] = []byte(keyAuth)
|
||||
|
||||
return store.SaveHTTPChallenges(httpChallenges)
|
||||
return store.SetHTTPChallengeToken(token, domain, []byte(keyAuth))
|
||||
}
|
||||
|
||||
func cleanUpHTTPChallenge(domain, token string, store Store) error {
|
||||
httpChallenges, err := store.GetHTTPChallenges()
|
||||
if err != nil {
|
||||
return fmt.Errorf("unable to get HTTPChallenges : %s", err)
|
||||
}
|
||||
|
||||
log.Debugf("Challenge CleanUp for domain %s", domain)
|
||||
|
||||
if _, ok := httpChallenges[token]; ok {
|
||||
if _, domainOk := httpChallenges[token][domain]; domainOk {
|
||||
delete(httpChallenges[token], domain)
|
||||
}
|
||||
if len(httpChallenges[token]) == 0 {
|
||||
delete(httpChallenges, token)
|
||||
}
|
||||
return store.SaveHTTPChallenges(httpChallenges)
|
||||
}
|
||||
return nil
|
||||
return store.RemoveHTTPChallengeToken(token, domain)
|
||||
}
|
||||
|
||||
@@ -2,9 +2,11 @@ package acme
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"regexp"
|
||||
"sync"
|
||||
|
||||
"github.com/containous/traefik/log"
|
||||
"github.com/containous/traefik/safe"
|
||||
@@ -17,11 +19,12 @@ type LocalStore struct {
|
||||
filename string
|
||||
storedData *StoredData
|
||||
SaveDataChan chan *StoredData `json:"-"`
|
||||
lock sync.RWMutex
|
||||
}
|
||||
|
||||
// NewLocalStore initializes a new LocalStore with a file name
|
||||
func NewLocalStore(filename string) LocalStore {
|
||||
store := LocalStore{filename: filename, SaveDataChan: make(chan *StoredData)}
|
||||
func NewLocalStore(filename string) *LocalStore {
|
||||
store := &LocalStore{filename: filename, SaveDataChan: make(chan *StoredData)}
|
||||
store.listenSaveAction()
|
||||
return store
|
||||
}
|
||||
@@ -60,6 +63,7 @@ func (s *LocalStore) get() (*StoredData, error) {
|
||||
return nil, err
|
||||
}
|
||||
if isOldRegistration {
|
||||
log.Debug("Reset ACME account.")
|
||||
s.storedData.Account = nil
|
||||
s.SaveDataChan <- s.storedData
|
||||
}
|
||||
@@ -148,13 +152,59 @@ func (s *LocalStore) SaveCertificates(certificates []*Certificate) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// GetHTTPChallenges returns ACME HTTP Challenges list
|
||||
func (s *LocalStore) GetHTTPChallenges() (map[string]map[string][]byte, error) {
|
||||
return s.storedData.HTTPChallenges, nil
|
||||
// GetHTTPChallengeToken Get the http challenge token from the store
|
||||
func (s *LocalStore) GetHTTPChallengeToken(token, domain string) ([]byte, error) {
|
||||
s.lock.RLock()
|
||||
defer s.lock.RUnlock()
|
||||
|
||||
if s.storedData.HTTPChallenges == nil {
|
||||
s.storedData.HTTPChallenges = map[string]map[string][]byte{}
|
||||
}
|
||||
|
||||
if _, ok := s.storedData.HTTPChallenges[token]; !ok {
|
||||
return nil, fmt.Errorf("cannot find challenge for token %v", token)
|
||||
}
|
||||
|
||||
result, ok := s.storedData.HTTPChallenges[token][domain]
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("cannot find challenge for token %v", token)
|
||||
}
|
||||
return result, nil
|
||||
}
|
||||
|
||||
// SaveHTTPChallenges stores ACME HTTP Challenges list
|
||||
func (s *LocalStore) SaveHTTPChallenges(httpChallenges map[string]map[string][]byte) error {
|
||||
s.storedData.HTTPChallenges = httpChallenges
|
||||
// SetHTTPChallengeToken Set the http challenge token in the store
|
||||
func (s *LocalStore) SetHTTPChallengeToken(token, domain string, keyAuth []byte) error {
|
||||
s.lock.Lock()
|
||||
defer s.lock.Unlock()
|
||||
|
||||
if s.storedData.HTTPChallenges == nil {
|
||||
s.storedData.HTTPChallenges = map[string]map[string][]byte{}
|
||||
}
|
||||
|
||||
if _, ok := s.storedData.HTTPChallenges[token]; !ok {
|
||||
s.storedData.HTTPChallenges[token] = map[string][]byte{}
|
||||
}
|
||||
|
||||
s.storedData.HTTPChallenges[token][domain] = []byte(keyAuth)
|
||||
return nil
|
||||
}
|
||||
|
||||
// RemoveHTTPChallengeToken Remove the http challenge token in the store
|
||||
func (s *LocalStore) RemoveHTTPChallengeToken(token, domain string) error {
|
||||
s.lock.Lock()
|
||||
defer s.lock.Unlock()
|
||||
|
||||
if s.storedData.HTTPChallenges == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
if _, ok := s.storedData.HTTPChallenges[token]; ok {
|
||||
if _, domainOk := s.storedData.HTTPChallenges[token][domain]; domainOk {
|
||||
delete(s.storedData.HTTPChallenges[token], domain)
|
||||
}
|
||||
if len(s.storedData.HTTPChallenges[token]) == 0 {
|
||||
delete(s.storedData.HTTPChallenges, token)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -8,7 +8,6 @@ import (
|
||||
fmtlog "log"
|
||||
"net"
|
||||
"net/http"
|
||||
"os"
|
||||
"reflect"
|
||||
"strings"
|
||||
"sync"
|
||||
@@ -22,8 +21,11 @@ import (
|
||||
"github.com/containous/traefik/safe"
|
||||
traefikTLS "github.com/containous/traefik/tls"
|
||||
"github.com/containous/traefik/types"
|
||||
"github.com/containous/traefik/version"
|
||||
"github.com/pkg/errors"
|
||||
acme "github.com/xenolf/lego/acmev2"
|
||||
"github.com/sirupsen/logrus"
|
||||
"github.com/xenolf/lego/acme"
|
||||
legolog "github.com/xenolf/lego/log"
|
||||
"github.com/xenolf/lego/providers/dns"
|
||||
)
|
||||
|
||||
@@ -61,6 +63,8 @@ type Provider struct {
|
||||
clientMutex sync.Mutex
|
||||
configFromListenerChan chan types.Configuration
|
||||
pool *safe.Pool
|
||||
resolvingDomains map[string]struct{}
|
||||
resolvingDomainsMutex sync.RWMutex
|
||||
}
|
||||
|
||||
// Certificate is a struct which contains all data needed from an ACME certificate
|
||||
@@ -97,10 +101,11 @@ func (p *Provider) SetConfigListenerChan(configFromListenerChan chan types.Confi
|
||||
}
|
||||
|
||||
func (p *Provider) init() error {
|
||||
acme.UserAgent = fmt.Sprintf("containous-traefik/%s", version.Version)
|
||||
if p.ACMELogging {
|
||||
acme.Logger = fmtlog.New(os.Stderr, "legolog: ", fmtlog.LstdFlags)
|
||||
legolog.Logger = fmtlog.New(log.WriterLevel(logrus.DebugLevel), "legolog: ", 0)
|
||||
} else {
|
||||
acme.Logger = fmtlog.New(ioutil.Discard, "", 0)
|
||||
legolog.Logger = fmtlog.New(ioutil.Discard, "", 0)
|
||||
}
|
||||
|
||||
var err error
|
||||
@@ -114,11 +119,19 @@ func (p *Provider) init() error {
|
||||
return fmt.Errorf("unable to get ACME account : %v", err)
|
||||
}
|
||||
|
||||
// Reset Account if caServer changed, thus registration URI can be updated
|
||||
if p.account != nil && p.account.Registration != nil && !strings.HasPrefix(p.account.Registration.URI, p.CAServer) {
|
||||
p.account = nil
|
||||
}
|
||||
|
||||
p.certificates, err = p.Store.GetCertificates()
|
||||
if err != nil {
|
||||
return fmt.Errorf("unable to get ACME certificates : %v", err)
|
||||
}
|
||||
|
||||
// Init the currently resolved domain map
|
||||
p.resolvingDomains = make(map[string]struct{})
|
||||
|
||||
p.watchCertificate()
|
||||
p.watchNewDomains()
|
||||
|
||||
@@ -168,7 +181,7 @@ func (p *Provider) watchNewDomains() {
|
||||
}
|
||||
|
||||
if len(domains) == 0 {
|
||||
log.Debugf("No domain parsed in rule %q", route.Rule)
|
||||
log.Debugf("No domain parsed in rule %q in provider ACME", route.Rule)
|
||||
continue
|
||||
}
|
||||
|
||||
@@ -218,6 +231,9 @@ func (p *Provider) resolveCertificate(domain types.Domain, domainFromConfigurati
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
p.addResolvingDomains(uncheckedDomains)
|
||||
defer p.removeResolvingDomains(uncheckedDomains)
|
||||
|
||||
log.Debugf("Loading ACME certificates %+v...", uncheckedDomains)
|
||||
client, err := p.getClient()
|
||||
if err != nil {
|
||||
@@ -243,7 +259,25 @@ func (p *Provider) resolveCertificate(domain types.Domain, domainFromConfigurati
|
||||
}
|
||||
p.addCertificateForDomain(domain, certificate.Certificate, certificate.PrivateKey)
|
||||
|
||||
return &certificate, nil
|
||||
return certificate, nil
|
||||
}
|
||||
|
||||
func (p *Provider) removeResolvingDomains(resolvingDomains []string) {
|
||||
p.resolvingDomainsMutex.Lock()
|
||||
defer p.resolvingDomainsMutex.Unlock()
|
||||
|
||||
for _, domain := range resolvingDomains {
|
||||
delete(p.resolvingDomains, domain)
|
||||
}
|
||||
}
|
||||
|
||||
func (p *Provider) addResolvingDomains(resolvingDomains []string) {
|
||||
p.resolvingDomainsMutex.Lock()
|
||||
defer p.resolvingDomainsMutex.Unlock()
|
||||
|
||||
for _, domain := range resolvingDomains {
|
||||
p.resolvingDomains[domain] = struct{}{}
|
||||
}
|
||||
}
|
||||
|
||||
func (p *Provider) getClient() (*acme.Client, error) {
|
||||
@@ -315,7 +349,6 @@ func (p *Provider) getClient() (*acme.Client, error) {
|
||||
}
|
||||
p.client = client
|
||||
}
|
||||
|
||||
return p.client, nil
|
||||
}
|
||||
|
||||
@@ -496,6 +529,9 @@ func (p *Provider) AddRoutes(router *mux.Router) {
|
||||
// Get provided certificate which check a domains list (Main and SANs)
|
||||
// from static and dynamic provided certificates
|
||||
func (p *Provider) getUncheckedDomains(domainsToCheck []string, checkConfigurationDomains bool) []string {
|
||||
p.resolvingDomainsMutex.RLock()
|
||||
defer p.resolvingDomainsMutex.RUnlock()
|
||||
|
||||
log.Debugf("Looking for provided certificate(s) to validate %q...", domainsToCheck)
|
||||
var allCerts []string
|
||||
|
||||
@@ -516,6 +552,11 @@ func (p *Provider) getUncheckedDomains(domainsToCheck []string, checkConfigurati
|
||||
allCerts = append(allCerts, strings.Join(certificate.Domain.ToStrArray(), ","))
|
||||
}
|
||||
|
||||
// Get currently resolved domains
|
||||
for domain := range p.resolvingDomains {
|
||||
allCerts = append(allCerts, domain)
|
||||
}
|
||||
|
||||
// Get Configuration Domains
|
||||
if checkConfigurationDomains {
|
||||
for i := 0; i < len(p.Domains); i++ {
|
||||
@@ -533,8 +574,9 @@ func searchUncheckedDomains(domainsToCheck []string, existentDomains []string) [
|
||||
uncheckedDomains = append(uncheckedDomains, domainToCheck)
|
||||
}
|
||||
}
|
||||
|
||||
if len(uncheckedDomains) == 0 {
|
||||
log.Debugf("No ACME certificate to generate for domains %q.", domainsToCheck)
|
||||
log.Debugf("No ACME certificate generation required for domains %q.", domainsToCheck)
|
||||
} else {
|
||||
log.Debugf("Domains %q need ACME certificates generation for domains %q.", domainsToCheck, strings.Join(uncheckedDomains, ","))
|
||||
}
|
||||
|
||||
@@ -26,6 +26,7 @@ func TestGetUncheckedCertificates(t *testing.T) {
|
||||
desc string
|
||||
dynamicCerts *safe.Safe
|
||||
staticCerts map[string]*tls.Certificate
|
||||
resolvingDomains map[string]struct{}
|
||||
acmeCertificates []*Certificate
|
||||
domains []string
|
||||
expectedDomains []string
|
||||
@@ -138,17 +139,55 @@ func TestGetUncheckedCertificates(t *testing.T) {
|
||||
},
|
||||
expectedDomains: []string{"traefik.wtf"},
|
||||
},
|
||||
{
|
||||
desc: "all domains already managed by ACME",
|
||||
domains: []string{"traefik.wtf", "foo.traefik.wtf"},
|
||||
resolvingDomains: map[string]struct{}{
|
||||
"traefik.wtf": {},
|
||||
"foo.traefik.wtf": {},
|
||||
},
|
||||
expectedDomains: []string{},
|
||||
},
|
||||
{
|
||||
desc: "one domain already managed by ACME",
|
||||
domains: []string{"traefik.wtf", "foo.traefik.wtf"},
|
||||
resolvingDomains: map[string]struct{}{
|
||||
"traefik.wtf": {},
|
||||
},
|
||||
expectedDomains: []string{"foo.traefik.wtf"},
|
||||
},
|
||||
{
|
||||
desc: "wildcard domain already managed by ACME checks the domains",
|
||||
domains: []string{"bar.traefik.wtf", "foo.traefik.wtf"},
|
||||
resolvingDomains: map[string]struct{}{
|
||||
"*.traefik.wtf": {},
|
||||
},
|
||||
expectedDomains: []string{},
|
||||
},
|
||||
{
|
||||
desc: "wildcard domain already managed by ACME checks domains and another domain checks one other domain, one domain still unchecked",
|
||||
domains: []string{"traefik.wtf", "bar.traefik.wtf", "foo.traefik.wtf", "acme.wtf"},
|
||||
resolvingDomains: map[string]struct{}{
|
||||
"*.traefik.wtf": {},
|
||||
"traefik.wtf": {},
|
||||
},
|
||||
expectedDomains: []string{"acme.wtf"},
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range testCases {
|
||||
test := test
|
||||
if test.resolvingDomains == nil {
|
||||
test.resolvingDomains = make(map[string]struct{})
|
||||
}
|
||||
t.Run(test.desc, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
acmeProvider := Provider{
|
||||
dynamicCerts: test.dynamicCerts,
|
||||
staticCerts: test.staticCerts,
|
||||
certificates: test.acmeCertificates,
|
||||
dynamicCerts: test.dynamicCerts,
|
||||
staticCerts: test.staticCerts,
|
||||
certificates: test.acmeCertificates,
|
||||
resolvingDomains: test.resolvingDomains,
|
||||
}
|
||||
|
||||
domains := acmeProvider.getUncheckedDomains(test.domains, false)
|
||||
|
||||
@@ -13,6 +13,7 @@ type Store interface {
|
||||
SaveAccount(*Account) error
|
||||
GetCertificates() ([]*Certificate, error)
|
||||
SaveCertificates([]*Certificate) error
|
||||
GetHTTPChallenges() (map[string]map[string][]byte, error)
|
||||
SaveHTTPChallenges(map[string]map[string][]byte) error
|
||||
GetHTTPChallengeToken(token, domain string) ([]byte, error)
|
||||
SetHTTPChallengeToken(token, domain string, keyAuth []byte) error
|
||||
RemoveHTTPChallengeToken(token, domain string) error
|
||||
}
|
||||
|
||||
@@ -5,6 +5,7 @@ import (
|
||||
"crypto/sha1"
|
||||
"encoding/base64"
|
||||
"fmt"
|
||||
"net"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
@@ -116,7 +117,7 @@ func (p *Provider) getServer(node *api.ServiceEntry) types.Server {
|
||||
address := getBackendAddress(node)
|
||||
|
||||
return types.Server{
|
||||
URL: fmt.Sprintf("%s://%s:%d", scheme, address, node.Service.Port),
|
||||
URL: fmt.Sprintf("%s://%s", scheme, net.JoinHostPort(address, strconv.Itoa(node.Service.Port))),
|
||||
Weight: p.getWeight(node.Service.Tags),
|
||||
}
|
||||
}
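
The switch to `net.JoinHostPort` here (repeated below for the ECS, Kubernetes, Marathon, Mesos and Rancher providers, and in the Docker `getServers`) is what makes IPv6 backends such as the `::1` test node produce a valid URL. A quick standalone illustration:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Naive formatting breaks for IPv6 literals: the host must be bracketed.
	fmt.Printf("http://%s:%d\n", "::1", 80)                      // http://::1:80 (ambiguous)
	fmt.Println("http://" + net.JoinHostPort("::1", "80"))       // http://[::1]:80
	fmt.Println("http://" + net.JoinHostPort("127.0.0.1", "80")) // http://127.0.0.1:80
}
```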
|
||||
|
||||
@@ -113,6 +113,97 @@ func TestProviderBuildConfiguration(t *testing.T) {
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
desc: "Should build config containing one frontend, one IPv4 and one IPv6 backend",
|
||||
nodes: []catalogUpdate{
|
||||
{
|
||||
Service: &serviceUpdate{
|
||||
ServiceName: "test",
|
||||
Attributes: []string{
|
||||
"random.foo=bar",
|
||||
label.TraefikBackendLoadBalancerMethod + "=drr",
|
||||
label.TraefikBackendCircuitBreakerExpression + "=NetworkErrorRatio() > 0.5",
|
||||
label.TraefikBackendMaxConnAmount + "=1000",
|
||||
label.TraefikBackendMaxConnExtractorFunc + "=client.ip",
|
||||
label.TraefikFrontendAuthBasic + "=test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/,test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0",
|
||||
},
|
||||
},
|
||||
Nodes: []*api.ServiceEntry{
|
||||
{
|
||||
Service: &api.AgentService{
|
||||
Service: "test",
|
||||
Address: "127.0.0.1",
|
||||
Port: 80,
|
||||
Tags: []string{
|
||||
"random.foo=bar",
|
||||
label.Prefix + "backend.weight=42", // Deprecated label
|
||||
label.TraefikFrontendPassHostHeader + "=true",
|
||||
label.TraefikProtocol + "=https",
|
||||
},
|
||||
},
|
||||
Node: &api.Node{
|
||||
Node: "localhost",
|
||||
Address: "127.0.0.1",
|
||||
},
|
||||
},
|
||||
{
|
||||
Service: &api.AgentService{
|
||||
Service: "test",
|
||||
Address: "::1",
|
||||
Port: 80,
|
||||
Tags: []string{
|
||||
"random.foo=bar",
|
||||
label.Prefix + "backend.weight=42", // Deprecated label
|
||||
label.TraefikFrontendPassHostHeader + "=true",
|
||||
label.TraefikProtocol + "=https",
|
||||
},
|
||||
},
|
||||
Node: &api.Node{
|
||||
Node: "localhost",
|
||||
Address: "::1",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedFrontends: map[string]*types.Frontend{
|
||||
"frontend-test": {
|
||||
Backend: "backend-test",
|
||||
PassHostHeader: true,
|
||||
Routes: map[string]types.Route{
|
||||
"route-host-test": {
|
||||
Rule: "Host:test.localhost",
|
||||
},
|
||||
},
|
||||
EntryPoints: []string{},
|
||||
BasicAuth: []string{"test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/", "test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0"},
|
||||
},
|
||||
},
|
||||
expectedBackends: map[string]*types.Backend{
|
||||
"backend-test": {
|
||||
Servers: map[string]types.Server{
|
||||
"test-0-us4-27hAOu2ARV7nNrmv6GoKlcA": {
|
||||
URL: "https://127.0.0.1:80",
|
||||
Weight: 42,
|
||||
},
|
||||
"test-1-Gh4zrXo5flAAz1A8LAEHm1-TSnE": {
|
||||
URL: "https://[::1]:80",
|
||||
Weight: 42,
|
||||
},
|
||||
},
|
||||
LoadBalancer: &types.LoadBalancer{
|
||||
Method: "drr",
|
||||
},
|
||||
CircuitBreaker: &types.CircuitBreaker{
|
||||
Expression: "NetworkErrorRatio() > 0.5",
|
||||
},
|
||||
MaxConn: &types.MaxConn{
|
||||
Amount: 1000,
|
||||
ExtractorFunc: "client.ip",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range testCases {
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
package consulcatalog
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
@@ -154,14 +154,8 @@ func (p *Provider) watch(configurationChan chan<- types.ConfigMessage, stop chan
|
||||
defer close(stopCh)
|
||||
defer close(watchCh)
|
||||
|
||||
for {
|
||||
select {
|
||||
case <-stop:
|
||||
return nil
|
||||
case index, ok := <-watchCh:
|
||||
if !ok {
|
||||
return errors.New("consul service list nil")
|
||||
}
|
||||
safe.Go(func() {
|
||||
for index := range watchCh {
|
||||
log.Debug("List of services changed")
|
||||
nodes, err := p.getNodes(index)
|
||||
if err != nil {
|
||||
@@ -172,6 +166,13 @@ func (p *Provider) watch(configurationChan chan<- types.ConfigMessage, stop chan
|
||||
ProviderName: "consul_catalog",
|
||||
Configuration: configuration,
|
||||
}
|
||||
}
|
||||
})
|
||||
|
||||
for {
|
||||
select {
|
||||
case <-stop:
|
||||
return nil
|
||||
case err := <-errorCh:
|
||||
return err
|
||||
}
|
||||
@@ -255,7 +256,8 @@ func (p *Provider) watchHealthState(stopCh <-chan struct{}, watchCh chan<- map[s
|
||||
|
||||
safe.Go(func() {
|
||||
// variables to hold the previous state
|
||||
var flashback []string
|
||||
var flashback map[string][]string
|
||||
var flashbackMaintenance []string
|
||||
|
||||
options := &api.QueryOptions{WaitTime: DefaultWatchWaitTime}
|
||||
|
||||
@@ -267,19 +269,31 @@ func (p *Provider) watchHealthState(stopCh <-chan struct{}, watchCh chan<- map[s
|
||||
}
|
||||
|
||||
// Listening to all health state changes (not only `passing`) so that failing checks and maintenance mode are detected too.
|
||||
healthyState, meta, err := health.State("passing", options)
|
||||
healthyState, meta, err := health.State("any", options)
|
||||
if err != nil {
|
||||
log.WithError(err).Error("Failed to retrieve health checks")
|
||||
notifyError(err)
|
||||
return
|
||||
}
|
||||
|
||||
var current []string
|
||||
var current = make(map[string][]string)
|
||||
var currentFailing = make(map[string]*api.HealthCheck)
|
||||
var maintenance []string
|
||||
if healthyState != nil {
|
||||
for _, healthy := range healthyState {
|
||||
current = append(current, healthy.ServiceID)
|
||||
key := fmt.Sprintf("%s-%s", healthy.Node, healthy.ServiceID)
|
||||
_, failing := currentFailing[key]
|
||||
if healthy.Status == "passing" && !failing {
|
||||
current[key] = append(current[key], healthy.Node)
|
||||
} else if strings.HasPrefix(healthy.CheckID, "_service_maintenance") || strings.HasPrefix(healthy.CheckID, "_node_maintenance") {
|
||||
maintenance = append(maintenance, healthy.CheckID)
|
||||
} else {
|
||||
currentFailing[key] = healthy
|
||||
if _, ok := current[key]; ok {
|
||||
delete(current, key)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
// If LastIndex didn't change then it means `Get` returned
|
||||
@@ -302,18 +316,26 @@ func (p *Provider) watchHealthState(stopCh <-chan struct{}, watchCh chan<- map[s
|
||||
// A critical note is that the return of a blocking request is no guarantee of a change.
|
||||
// It is possible that there was an idempotent write that does not affect the result of the query.
|
||||
// Thus it is required to do an extra check for changes...
|
||||
addedKeys, removedKeys := getChangedStringKeys(current, flashback)
|
||||
addedKeys, removedKeys, changedKeys := getChangedHealth(current, flashback)
|
||||
|
||||
if len(addedKeys) > 0 || len(removedKeys) > 0 || len(changedKeys) > 0 {
|
||||
log.WithField("DiscoveredServices", addedKeys).
|
||||
WithField("MissingServices", removedKeys).
|
||||
WithField("ChangedServices", changedKeys).
|
||||
Debug("Health State change detected.")
|
||||
|
||||
if len(addedKeys) > 0 {
|
||||
log.WithField("DiscoveredServices", addedKeys).Debug("Health State change detected.")
|
||||
watchCh <- data
|
||||
flashback = current
|
||||
}
|
||||
flashbackMaintenance = maintenance
|
||||
} else {
|
||||
addedKeysMaintenance, removedMaintenance := getChangedStringKeys(maintenance, flashbackMaintenance)
|
||||
|
||||
if len(removedKeys) > 0 {
|
||||
log.WithField("MissingServices", removedKeys).Debug("Health State change detected.")
|
||||
watchCh <- data
|
||||
flashback = current
|
||||
if len(addedKeysMaintenance) > 0 || len(removedMaintenance) > 0 {
|
||||
log.WithField("MaintenanceMode", maintenance).Debug("Maintenance change detected.")
|
||||
watchCh <- data
|
||||
flashback = current
|
||||
flashbackMaintenance = maintenance
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -394,6 +416,27 @@ func getChangedStringKeys(currState []string, prevState []string) ([]string, []s
|
||||
return fun.Keys(addedKeys).([]string), fun.Keys(removedKeys).([]string)
|
||||
}
|
||||
|
||||
func getChangedHealth(current map[string][]string, previous map[string][]string) ([]string, []string, []string) {
|
||||
currKeySet := fun.Set(fun.Keys(current).([]string)).(map[string]bool)
|
||||
prevKeySet := fun.Set(fun.Keys(previous).([]string)).(map[string]bool)
|
||||
|
||||
addedKeys := fun.Difference(currKeySet, prevKeySet).(map[string]bool)
|
||||
removedKeys := fun.Difference(prevKeySet, currKeySet).(map[string]bool)
|
||||
|
||||
var changedKeys []string
|
||||
|
||||
for key, value := range current {
|
||||
if prevValue, ok := previous[key]; ok {
|
||||
addedNodesKeys, removedNodesKeys := getChangedStringKeys(value, prevValue)
|
||||
if len(addedNodesKeys) > 0 || len(removedNodesKeys) > 0 {
|
||||
changedKeys = append(changedKeys, key)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return fun.Keys(addedKeys).([]string), fun.Keys(removedKeys).([]string), changedKeys
|
||||
}
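
`getChangedHealth` above relies on the `fun` package for its set operations; a dependency-free (and slightly simplified, order-sensitive) sketch of the same three-way comparison:

```go
package main

import "fmt"

// diffHealth returns the keys only present in current (added), only present in
// previous (removed), and present in both but with a different node list (changed).
func diffHealth(current, previous map[string][]string) (added, removed, changed []string) {
	for key, nodes := range current {
		prev, ok := previous[key]
		if !ok {
			added = append(added, key)
			continue
		}
		if fmt.Sprint(nodes) != fmt.Sprint(prev) {
			changed = append(changed, key)
		}
	}
	for key := range previous {
		if _, ok := current[key]; !ok {
			removed = append(removed, key)
		}
	}
	return added, removed, changed
}

func main() {
	prev := map[string][]string{"node1-web": {"node1"}, "node2-web": {"node2"}}
	curr := map[string][]string{"node1-web": {"node1", "node3"}, "node4-web": {"node4"}}
	fmt.Println(diffHealth(curr, prev)) // [node4-web] [node2-web] [node1-web]
}
```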
|
||||
|
||||
func getChangedIntKeys(currState []int, prevState []int) ([]int, []int) {
|
||||
currKeySet := fun.Set(currState).(map[int]bool)
|
||||
prevKeySet := fun.Set(prevState).(map[int]bool)
|
||||
|
||||
@@ -2,7 +2,10 @@ package docker
|
||||
|
||||
import (
|
||||
"context"
|
||||
"crypto/md5"
|
||||
"encoding/hex"
|
||||
"fmt"
|
||||
"net"
|
||||
"strconv"
|
||||
"strings"
|
||||
"text/template"
|
||||
@@ -106,13 +109,11 @@ func (p *Provider) buildConfigurationV2(containersInspected []dockerData) *types
|
||||
}
|
||||
|
||||
func getServiceNameKey(container dockerData, swarmMode bool, segmentName string) string {
|
||||
serviceNameKey := container.ServiceName
|
||||
|
||||
if values, err := label.GetStringMultipleStrict(container.Labels, labelDockerComposeProject, labelDockerComposeService); !swarmMode && err == nil {
|
||||
serviceNameKey = values[labelDockerComposeService] + values[labelDockerComposeProject]
|
||||
if swarmMode {
|
||||
return container.ServiceName + segmentName
|
||||
}
|
||||
|
||||
return serviceNameKey + segmentName
|
||||
return getServiceName(container) + segmentName
|
||||
}
|
||||
|
||||
func (p *Provider) containerFilter(container dockerData) bool {
|
||||
@@ -169,7 +170,7 @@ func checkSegmentPort(labels map[string]string, segmentName string) error {
|
||||
func (p *Provider) getFrontendName(container dockerData, idx int) string {
|
||||
var name string
|
||||
if len(container.SegmentName) > 0 {
|
||||
name = getBackendName(container)
|
||||
name = container.SegmentName + "-" + getBackendName(container)
|
||||
} else {
|
||||
name = p.getFrontendRule(container, container.SegmentLabels) + "-" + strconv.Itoa(idx)
|
||||
}
|
||||
@@ -261,12 +262,21 @@ func isBackendLBSwarm(container dockerData) bool {
|
||||
return label.GetBoolValue(container.Labels, labelBackendLoadBalancerSwarm, false)
|
||||
}
|
||||
|
||||
func getSegmentBackendName(container dockerData) string {
|
||||
if value := label.GetStringValue(container.SegmentLabels, label.TraefikBackend, ""); len(value) > 0 {
|
||||
return provider.Normalize(container.ServiceName + "-" + value)
|
||||
func getBackendName(container dockerData) string {
|
||||
if len(container.SegmentName) > 0 {
|
||||
return getSegmentBackendName(container)
|
||||
}
|
||||
|
||||
return provider.Normalize(container.ServiceName + "-" + getDefaultBackendName(container) + "-" + container.SegmentName)
|
||||
return getDefaultBackendName(container)
|
||||
}
|
||||
|
||||
func getSegmentBackendName(container dockerData) string {
|
||||
serviceName := getServiceName(container)
|
||||
if value := label.GetStringValue(container.SegmentLabels, label.TraefikBackend, ""); len(value) > 0 {
|
||||
return provider.Normalize(serviceName + "-" + value)
|
||||
}
|
||||
|
||||
return provider.Normalize(serviceName + "-" + container.SegmentName)
|
||||
}
|
||||
|
||||
func getDefaultBackendName(container dockerData) string {
|
||||
@@ -274,19 +284,17 @@ func getDefaultBackendName(container dockerData) string {
|
||||
return provider.Normalize(value)
|
||||
}
|
||||
|
||||
if values, err := label.GetStringMultipleStrict(container.Labels, labelDockerComposeProject, labelDockerComposeService); err == nil {
|
||||
return provider.Normalize(values[labelDockerComposeService] + "_" + values[labelDockerComposeProject])
|
||||
}
|
||||
|
||||
return provider.Normalize(container.ServiceName)
|
||||
return provider.Normalize(getServiceName(container))
|
||||
}
|
||||
|
||||
func getBackendName(container dockerData) string {
|
||||
if len(container.SegmentName) > 0 {
|
||||
return getSegmentBackendName(container)
|
||||
func getServiceName(container dockerData) string {
|
||||
serviceName := container.ServiceName
|
||||
|
||||
if values, err := label.GetStringMultipleStrict(container.Labels, labelDockerComposeProject, labelDockerComposeService); err == nil {
|
||||
serviceName = values[labelDockerComposeService] + "_" + values[labelDockerComposeProject]
|
||||
}
|
||||
|
||||
return getDefaultBackendName(container)
|
||||
return serviceName
|
||||
}
|
||||
|
||||
func getPort(container dockerData) string {
|
||||
@@ -316,7 +324,7 @@ func getPort(container dockerData) string {
|
||||
func (p *Provider) getServers(containers []dockerData) map[string]types.Server {
|
||||
var servers map[string]types.Server
|
||||
|
||||
for i, container := range containers {
|
||||
for _, container := range containers {
|
||||
ip := p.getIPAddress(container)
|
||||
if len(ip) == 0 {
|
||||
log.Warnf("Unable to find the IP address for the container %q: the server is ignored.", container.Name)
|
||||
@@ -330,16 +338,30 @@ func (p *Provider) getServers(containers []dockerData) map[string]types.Server {
|
||||
protocol := label.GetStringValue(container.SegmentLabels, label.TraefikProtocol, label.DefaultProtocol)
|
||||
port := getPort(container)
|
||||
|
||||
serverName := "server-" + container.SegmentName + "-" + container.Name
|
||||
if len(container.SegmentName) > 0 {
|
||||
serverName += "-" + strconv.Itoa(i)
|
||||
serverURL := fmt.Sprintf("%s://%s", protocol, net.JoinHostPort(ip, port))
|
||||
|
||||
serverName := getServerName(container.Name, serverURL)
|
||||
if _, exist := servers[serverName]; exist {
|
||||
log.Debugf("Skipping server %q with the same URL.", serverName)
|
||||
continue
|
||||
}
|
||||
|
||||
servers[provider.Normalize(serverName)] = types.Server{
|
||||
URL: fmt.Sprintf("%s://%s:%s", protocol, ip, port),
|
||||
servers[serverName] = types.Server{
|
||||
URL: serverURL,
|
||||
Weight: label.GetIntValue(container.SegmentLabels, label.TraefikWeight, label.DefaultWeight),
|
||||
}
|
||||
}
|
||||
|
||||
return servers
|
||||
}
|
||||
|
||||
func getServerName(containerName, url string) string {
|
||||
hash := md5.New()
|
||||
_, err := hash.Write([]byte(url))
|
||||
if err != nil {
|
||||
// Impossible case
|
||||
log.Errorf("Fail to hash server URL %q", url)
|
||||
}
|
||||
|
||||
return provider.Normalize("server-" + containerName + "-" + hex.EncodeToString(hash.Sum(nil)))
|
||||
}
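
`getServerName` derives the server key from the container name plus an MD5 digest of the server URL, which is what de-duplicates containers resolving to the same URL and produces the hashed fixture names in the tests below. A standalone sketch (omitting `provider.Normalize`):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

func serverName(containerName, url string) string {
	sum := md5.Sum([]byte(url)) // hash the URL only, so identical URLs collide on purpose
	return "server-" + containerName + "-" + hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(serverName("test", "http://127.0.0.1:80"))
	// should print server-test-842895ca2aca17f6ee36ddb2f621194d, matching the test fixture below
}
```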
|
||||
|
||||
@@ -55,7 +55,7 @@ func TestDockerBuildConfiguration(t *testing.T) {
|
||||
expectedBackends: map[string]*types.Backend{
|
||||
"backend-test": {
|
||||
Servers: map[string]types.Server{
|
||||
"server-test": {
|
||||
"server-test-842895ca2aca17f6ee36ddb2f621194d": {
|
||||
URL: "http://127.0.0.1:80",
|
||||
Weight: label.DefaultWeight,
|
||||
},
|
||||
@@ -270,7 +270,7 @@ func TestDockerBuildConfiguration(t *testing.T) {
|
||||
expectedBackends: map[string]*types.Backend{
|
||||
"backend-foobar": {
|
||||
Servers: map[string]types.Server{
|
||||
"server-test1": {
|
||||
"server-test1-7f6444e0dff3330c8b0ad2bbbd383b0f": {
|
||||
URL: "https://127.0.0.1:666",
|
||||
Weight: 12,
|
||||
},
|
||||
@@ -372,10 +372,11 @@ func TestDockerBuildConfiguration(t *testing.T) {
|
||||
expectedBackends: map[string]*types.Backend{
|
||||
"backend-myService-myProject": {
|
||||
Servers: map[string]types.Server{
|
||||
"server-test-0": {
|
||||
"server-test-0-842895ca2aca17f6ee36ddb2f621194d": {
|
||||
URL: "http://127.0.0.1:80",
|
||||
Weight: label.DefaultWeight,
|
||||
}, "server-test-1": {
|
||||
},
|
||||
"server-test-1-48093b9fc43454203aacd2bc4057a08c": {
|
||||
URL: "http://127.0.0.2:80",
|
||||
Weight: label.DefaultWeight,
|
||||
},
|
||||
@@ -384,7 +385,7 @@ func TestDockerBuildConfiguration(t *testing.T) {
|
||||
},
|
||||
"backend-myService2-myProject": {
|
||||
Servers: map[string]types.Server{
|
||||
"server-test-2": {
|
||||
"server-test-2-405767e9733427148cd8dae6c4d331b0": {
|
||||
URL: "http://127.0.0.3:80",
|
||||
Weight: label.DefaultWeight,
|
||||
},
|
||||
@@ -850,8 +851,9 @@ func TestDockerGetFrontendRule(t *testing.T) {
|
||||
|
||||
func TestDockerGetBackendName(t *testing.T) {
|
||||
testCases := []struct {
|
||||
container docker.ContainerJSON
|
||||
expected string
|
||||
container docker.ContainerJSON
|
||||
segmentName string
|
||||
expected string
|
||||
}{
|
||||
{
|
||||
container: containerJSON(name("foo")),
|
||||
@@ -874,6 +876,15 @@ func TestDockerGetBackendName(t *testing.T) {
|
||||
})),
|
||||
expected: "bar-foo",
|
||||
},
|
||||
{
|
||||
container: containerJSON(labels(map[string]string{
|
||||
"com.docker.compose.project": "foo",
|
||||
"com.docker.compose.service": "bar",
|
||||
"traefik.sauternes.backend": "titi",
|
||||
})),
|
||||
segmentName: "sauternes",
|
||||
expected: "bar-foo-titi",
|
||||
},
|
||||
}
|
||||
|
||||
for containerID, test := range testCases {
|
||||
@@ -883,7 +894,8 @@ func TestDockerGetBackendName(t *testing.T) {
|
||||
|
||||
dData := parseContainer(test.container)
|
||||
segmentProperties := label.ExtractTraefikLabels(dData.Labels)
|
||||
dData.SegmentLabels = segmentProperties[""]
|
||||
dData.SegmentLabels = segmentProperties[test.segmentName]
|
||||
dData.SegmentName = test.segmentName
|
||||
|
||||
actual := getBackendName(dData)
|
||||
assert.Equal(t, test.expected, actual)
|
||||
@@ -1044,7 +1056,7 @@ func TestDockerGetServers(t *testing.T) {
|
||||
})),
|
||||
},
|
||||
expected: map[string]types.Server{
|
||||
"server-test1": {
|
||||
"server-test1-fb00f762970935200c76ccdaf91458f6": {
|
||||
URL: "http://10.10.10.10:80",
|
||||
Weight: 1,
|
||||
},
|
||||
@@ -1073,15 +1085,15 @@ func TestDockerGetServers(t *testing.T) {
|
||||
})),
|
||||
},
|
||||
expected: map[string]types.Server{
|
||||
"server-test1": {
|
||||
"server-test1-743440b6f4a8ffd8737626215f2c5a33": {
|
||||
URL: "http://10.10.10.11:80",
|
||||
Weight: 1,
|
||||
},
|
||||
"server-test2": {
|
||||
"server-test2-547f74bbb5da02b6c8141ce9aa96c13b": {
|
||||
URL: "http://10.10.10.12:81",
|
||||
Weight: 1,
|
||||
},
|
||||
"server-test3": {
|
||||
"server-test3-c57fd8b848c814a3f2a4a4c12e13c179": {
|
||||
URL: "http://10.10.10.13:82",
|
||||
Weight: 1,
|
||||
},
|
||||
@@ -1110,11 +1122,11 @@ func TestDockerGetServers(t *testing.T) {
|
||||
})),
|
||||
},
|
||||
expected: map[string]types.Server{
|
||||
"server-test2": {
|
||||
"server-test2-547f74bbb5da02b6c8141ce9aa96c13b": {
|
||||
URL: "http://10.10.10.12:81",
|
||||
Weight: 1,
|
||||
},
|
||||
"server-test3": {
|
||||
"server-test3-c57fd8b848c814a3f2a4a4c12e13c179": {
|
||||
URL: "http://10.10.10.13:82",
|
||||
Weight: 1,
|
||||
},
|
||||
|
||||
@@ -57,7 +57,7 @@ func TestSwarmBuildConfiguration(t *testing.T) {
|
||||
expectedBackends: map[string]*types.Backend{
|
||||
"backend-test": {
|
||||
Servers: map[string]types.Server{
|
||||
"server-test": {
|
||||
"server-test-842895ca2aca17f6ee36ddb2f621194d": {
|
||||
URL: "http://127.0.0.1:80",
|
||||
Weight: label.DefaultWeight,
|
||||
},
|
||||
@@ -238,7 +238,6 @@ func TestSwarmBuildConfiguration(t *testing.T) {
|
||||
ReferrerPolicy: "foo",
|
||||
IsDevelopment: true,
|
||||
},
|
||||
|
||||
Errors: map[string]*types.ErrorPage{
|
||||
"foo": {
|
||||
Status: []string{"404"},
|
||||
@@ -276,7 +275,7 @@ func TestSwarmBuildConfiguration(t *testing.T) {
|
||||
expectedBackends: map[string]*types.Backend{
|
||||
"backend-foobar": {
|
||||
Servers: map[string]types.Server{
|
||||
"server-test1": {
|
||||
"server-test1-7f6444e0dff3330c8b0ad2bbbd383b0f": {
|
||||
URL: "https://127.0.0.1:666",
|
||||
Weight: 12,
|
||||
},
|
||||
|
||||
@@ -42,22 +42,22 @@ func TestSegmentBuildConfiguration(t *testing.T) {
|
||||
),
|
||||
},
|
||||
expectedFrontends: map[string]*types.Frontend{
|
||||
"frontend-foo-foo-sauternes": {
|
||||
Backend: "backend-foo-foo-sauternes",
|
||||
"frontend-sauternes-foo-sauternes": {
|
||||
Backend: "backend-foo-sauternes",
|
||||
PassHostHeader: true,
|
||||
EntryPoints: []string{"http", "https"},
|
||||
BasicAuth: []string{},
|
||||
Routes: map[string]types.Route{
|
||||
"route-frontend-foo-foo-sauternes": {
|
||||
"route-frontend-sauternes-foo-sauternes": {
|
||||
Rule: "Host:foo.docker.localhost",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedBackends: map[string]*types.Backend{
|
||||
"backend-foo-foo-sauternes": {
|
||||
"backend-foo-sauternes": {
|
||||
Servers: map[string]types.Server{
|
||||
"server-sauternes-foo-0": {
|
||||
"server-foo-863563a2e23c95502862016417ee95ea": {
|
||||
URL: "http://127.0.0.1:2503",
|
||||
Weight: label.DefaultWeight,
|
||||
},
|
||||
@@ -132,8 +132,8 @@ func TestSegmentBuildConfiguration(t *testing.T) {
|
||||
),
|
||||
},
|
||||
expectedFrontends: map[string]*types.Frontend{
|
||||
"frontend-foo-foo-sauternes": {
|
||||
Backend: "backend-foo-foo-sauternes",
|
||||
"frontend-sauternes-foo-sauternes": {
|
||||
Backend: "backend-foo-sauternes",
|
||||
EntryPoints: []string{
|
||||
"http",
|
||||
"https",
|
||||
@@ -224,16 +224,16 @@ func TestSegmentBuildConfiguration(t *testing.T) {
|
||||
},
|
||||
|
||||
Routes: map[string]types.Route{
|
||||
"route-frontend-foo-foo-sauternes": {
|
||||
"route-frontend-sauternes-foo-sauternes": {
|
||||
Rule: "Host:foo.docker.localhost",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedBackends: map[string]*types.Backend{
|
||||
"backend-foo-foo-sauternes": {
|
||||
"backend-foo-sauternes": {
|
||||
Servers: map[string]types.Server{
|
||||
"server-sauternes-foo-0": {
|
||||
"server-foo-7f6444e0dff3330c8b0ad2bbbd383b0f": {
|
||||
URL: "https://127.0.0.1:666",
|
||||
Weight: 12,
|
||||
},
|
||||
@@ -278,7 +278,7 @@ func TestSegmentBuildConfiguration(t *testing.T) {
|
||||
),
|
||||
},
|
||||
expectedFrontends: map[string]*types.Frontend{
|
||||
"frontend-test1-foobar": {
|
||||
"frontend-sauternes-test1-foobar": {
|
||||
Backend: "backend-test1-foobar",
|
||||
PassHostHeader: false,
|
||||
Priority: 5000,
|
||||
@@ -288,18 +288,18 @@ func TestSegmentBuildConfiguration(t *testing.T) {
|
||||
EntryPoint: "https",
|
||||
},
|
||||
Routes: map[string]types.Route{
|
||||
"route-frontend-test1-foobar": {
|
||||
"route-frontend-sauternes-test1-foobar": {
|
||||
Rule: "Path:/mypath",
|
||||
},
|
||||
},
|
||||
},
|
||||
"frontend-test2-test2-anothersauternes": {
|
||||
Backend: "backend-test2-test2-anothersauternes",
|
||||
"frontend-anothersauternes-test2-anothersauternes": {
|
||||
Backend: "backend-test2-anothersauternes",
|
||||
PassHostHeader: true,
|
||||
EntryPoints: []string{},
|
||||
BasicAuth: []string{},
|
||||
Routes: map[string]types.Route{
|
||||
"route-frontend-test2-test2-anothersauternes": {
|
||||
"route-frontend-anothersauternes-test2-anothersauternes": {
|
||||
Rule: "Path:/anotherpath",
|
||||
},
|
||||
},
|
||||
@@ -308,16 +308,16 @@ func TestSegmentBuildConfiguration(t *testing.T) {
|
||||
expectedBackends: map[string]*types.Backend{
|
||||
"backend-test1-foobar": {
|
||||
Servers: map[string]types.Server{
|
||||
"server-sauternes-test1-0": {
|
||||
"server-test1-79533a101142718f0fdf84c42593c41e": {
|
||||
URL: "https://127.0.0.1:2503",
|
||||
Weight: 80,
|
||||
},
|
||||
},
|
||||
CircuitBreaker: nil,
|
||||
},
|
||||
"backend-test2-test2-anothersauternes": {
|
||||
"backend-test2-anothersauternes": {
|
||||
Servers: map[string]types.Server{
|
||||
"server-anothersauternes-test2-0": {
|
||||
"server-test2-e9c1b66f9af919aa46053fbc2391bb4a": {
|
||||
URL: "http://127.0.0.1:8079",
|
||||
Weight: 33,
|
||||
},
|
||||
@@ -326,6 +326,152 @@ func TestSegmentBuildConfiguration(t *testing.T) {
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
desc: "several segments with the same backend name and same port",
|
||||
containers: []docker.ContainerJSON{
|
||||
containerJSON(
|
||||
name("test1"),
|
||||
labels(map[string]string{
|
||||
"traefik.port": "2503",
|
||||
"traefik.protocol": "https",
|
||||
"traefik.weight": "80",
|
||||
"traefik.frontend.entryPoints": "http,https",
|
||||
"traefik.frontend.redirect.entryPoint": "https",
|
||||
|
||||
"traefik.sauternes.backend": "foobar",
|
||||
"traefik.sauternes.frontend.rule": "Path:/sauternes",
|
||||
"traefik.sauternes.frontend.priority": "5000",
|
||||
|
||||
"traefik.arbois.backend": "foobar",
|
||||
"traefik.arbois.frontend.rule": "Path:/arbois",
|
||||
"traefik.arbois.frontend.priority": "3000",
|
||||
}),
|
||||
ports(nat.PortMap{
|
||||
"80/tcp": {},
|
||||
}),
|
||||
withNetwork("bridge", ipv4("127.0.0.1")),
|
||||
),
|
||||
},
|
||||
expectedFrontends: map[string]*types.Frontend{
|
||||
"frontend-sauternes-test1-foobar": {
|
||||
Backend: "backend-test1-foobar",
|
||||
PassHostHeader: true,
|
||||
Priority: 5000,
|
||||
EntryPoints: []string{"http", "https"},
|
||||
BasicAuth: []string{},
|
||||
Redirect: &types.Redirect{
|
||||
EntryPoint: "https",
|
||||
},
|
||||
Routes: map[string]types.Route{
|
||||
"route-frontend-sauternes-test1-foobar": {
|
||||
Rule: "Path:/sauternes",
|
||||
},
|
||||
},
|
||||
},
|
||||
"frontend-arbois-test1-foobar": {
|
||||
Backend: "backend-test1-foobar",
|
||||
PassHostHeader: true,
|
||||
Priority: 3000,
|
||||
EntryPoints: []string{"http", "https"},
|
||||
BasicAuth: []string{},
|
||||
Redirect: &types.Redirect{
|
||||
EntryPoint: "https",
|
||||
},
|
||||
Routes: map[string]types.Route{
|
||||
"route-frontend-arbois-test1-foobar": {
|
||||
Rule: "Path:/arbois",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedBackends: map[string]*types.Backend{
|
||||
"backend-test1-foobar": {
|
||||
Servers: map[string]types.Server{
|
||||
"server-test1-79533a101142718f0fdf84c42593c41e": {
|
||||
URL: "https://127.0.0.1:2503",
|
||||
Weight: 80,
|
||||
},
|
||||
},
|
||||
CircuitBreaker: nil,
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
desc: "several segments with the same backend name and different port (wrong behavior)",
|
||||
containers: []docker.ContainerJSON{
|
||||
containerJSON(
|
||||
name("test1"),
|
||||
labels(map[string]string{
|
||||
"traefik.protocol": "https",
|
||||
"traefik.frontend.entryPoints": "http,https",
|
||||
"traefik.frontend.redirect.entryPoint": "https",
|
||||
|
||||
"traefik.sauternes.port": "2503",
|
||||
"traefik.sauternes.weight": "80",
|
||||
"traefik.sauternes.backend": "foobar",
|
||||
"traefik.sauternes.frontend.rule": "Path:/sauternes",
|
||||
"traefik.sauternes.frontend.priority": "5000",
|
||||
|
||||
"traefik.arbois.port": "2504",
|
||||
"traefik.arbois.weight": "90",
|
||||
"traefik.arbois.backend": "foobar",
|
||||
"traefik.arbois.frontend.rule": "Path:/arbois",
|
||||
"traefik.arbois.frontend.priority": "3000",
|
||||
}),
|
||||
ports(nat.PortMap{
|
||||
"80/tcp": {},
|
||||
}),
|
||||
withNetwork("bridge", ipv4("127.0.0.1")),
|
||||
),
|
||||
},
|
||||
expectedFrontends: map[string]*types.Frontend{
|
||||
"frontend-sauternes-test1-foobar": {
|
||||
Backend: "backend-test1-foobar",
|
||||
PassHostHeader: true,
|
||||
Priority: 5000,
|
||||
EntryPoints: []string{"http", "https"},
|
||||
BasicAuth: []string{},
|
||||
Redirect: &types.Redirect{
|
||||
EntryPoint: "https",
|
||||
},
|
||||
Routes: map[string]types.Route{
|
||||
"route-frontend-sauternes-test1-foobar": {
|
||||
Rule: "Path:/sauternes",
|
||||
},
|
||||
},
|
||||
},
|
||||
"frontend-arbois-test1-foobar": {
|
||||
Backend: "backend-test1-foobar",
|
||||
PassHostHeader: true,
|
||||
Priority: 3000,
|
||||
EntryPoints: []string{"http", "https"},
|
||||
BasicAuth: []string{},
|
||||
Redirect: &types.Redirect{
|
||||
EntryPoint: "https",
|
||||
},
|
||||
Routes: map[string]types.Route{
|
||||
"route-frontend-arbois-test1-foobar": {
|
||||
Rule: "Path:/arbois",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedBackends: map[string]*types.Backend{
|
||||
"backend-test1-foobar": {
|
||||
Servers: map[string]types.Server{
|
||||
"server-test1-79533a101142718f0fdf84c42593c41e": {
|
||||
URL: "https://127.0.0.1:2503",
|
||||
Weight: 80,
|
||||
},
|
||||
"server-test1-315a41140f1bd825b066e39686c18482": {
|
||||
URL: "https://127.0.0.1:2504",
|
||||
Weight: 90,
|
||||
},
|
||||
},
|
||||
CircuitBreaker: nil,
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
provider := &Provider{
|
||||
|
||||
@@ -2,6 +2,7 @@ package ecs
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net"
|
||||
"strconv"
|
||||
"strings"
|
||||
"text/template"
|
||||
@@ -134,7 +135,7 @@ func getServers(instances []ecsInstance) map[string]types.Server {
|
||||
|
||||
serverName := provider.Normalize(fmt.Sprintf("server-%s-%s", instance.Name, instance.ID))
|
||||
servers[serverName] = types.Server{
|
||||
URL: fmt.Sprintf("%s://%s:%s", protocol, host, port),
|
||||
URL: fmt.Sprintf("%s://%s", protocol, net.JoinHostPort(host, port)),
|
||||
Weight: label.GetIntValue(instance.TraefikLabels, label.TraefikWeight, label.DefaultWeight),
|
||||
}
|
||||
}
|
||||
|
||||
@@ -205,7 +205,7 @@ func getFuncFirstStringValueV1(labelName string, defaultValue string) func(insta
|
||||
// Deprecated
|
||||
func getFuncFirstBoolValueV1(labelName string, defaultValue bool) func(instances []ecsInstance) bool {
|
||||
return func(instances []ecsInstance) bool {
|
||||
if len(instances) < 0 {
|
||||
if len(instances) == 0 {
|
||||
return defaultValue
|
||||
}
|
||||
return getBoolValueV1(instances[0], labelName, defaultValue)
|
||||
|
||||
@@ -6,6 +6,7 @@ import (
|
||||
"errors"
|
||||
"flag"
|
||||
"fmt"
|
||||
"net"
|
||||
"os"
|
||||
"reflect"
|
||||
"strconv"
|
||||
@@ -302,7 +303,7 @@ func (p *Provider) loadIngresses(k8sClient Client) (*types.Configuration, error)
|
||||
|
||||
for _, subset := range endpoints.Subsets {
|
||||
for _, address := range subset.Addresses {
|
||||
url := protocol + "://" + address.IP + ":" + strconv.Itoa(endpointPortNumber(port, subset.Ports))
|
||||
url := protocol + "://" + net.JoinHostPort(address.IP, strconv.Itoa(endpointPortNumber(port, subset.Ports)))
|
||||
name := url
|
||||
if address.TargetRef != nil && address.TargetRef.Name != "" {
|
||||
name = address.TargetRef.Name
|
||||
|
||||
@@ -4,6 +4,7 @@ import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"math"
|
||||
"net"
|
||||
"strconv"
|
||||
"strings"
|
||||
"text/template"
|
||||
@@ -340,7 +341,7 @@ func (p *Provider) getServer(app appData, task marathon.Task) (string, *types.Se
|
||||
serverName := provider.Normalize("server-" + app.ID + "-" + task.ID + getSegmentNameSuffix(app.SegmentName))
|
||||
|
||||
return serverName, &types.Server{
|
||||
URL: fmt.Sprintf("%s://%s:%v", protocol, host, port),
|
||||
URL: fmt.Sprintf("%s://%s", protocol, net.JoinHostPort(host, port)),
|
||||
Weight: label.GetIntValue(app.SegmentLabels, label.TraefikWeight, label.DefaultWeight),
|
||||
}, nil
|
||||
}
|
||||
|
||||
@@ -3,6 +3,7 @@ package mesos
|
||||
import (
|
||||
"fmt"
|
||||
"math"
|
||||
"net"
|
||||
"strconv"
|
||||
"strings"
|
||||
"text/template"
|
||||
@@ -185,7 +186,7 @@ func (p *Provider) getServers(tasks []taskData) map[string]types.Server {
|
||||
|
||||
serverName := "server-" + getID(task)
|
||||
servers[serverName] = types.Server{
|
||||
URL: fmt.Sprintf("%s://%s:%s", protocol, host, port),
|
||||
URL: fmt.Sprintf("%s://%s", protocol, net.JoinHostPort(host, port)),
|
||||
Weight: getIntValue(task.TraefikLabels, label.TraefikWeight, label.DefaultWeight, math.MaxInt32),
|
||||
}
|
||||
}
|
||||
|
||||
@@ -2,6 +2,7 @@ package rancher
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net"
|
||||
"strconv"
|
||||
"strings"
|
||||
"text/template"
|
||||
@@ -181,7 +182,7 @@ func getServers(service rancherData) map[string]types.Server {
|
||||
|
||||
serverName := "server-" + strconv.Itoa(index)
|
||||
servers[serverName] = types.Server{
|
||||
URL: fmt.Sprintf("%s://%s:%s", protocol, ip, port),
|
||||
URL: fmt.Sprintf("%s://%s", protocol, net.JoinHostPort(ip, port)),
|
||||
Weight: weight,
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1,4 +1,39 @@
mkdocs>=0.17.3
pymdown-extensions>=1.4
mkdocs-bootswatch>=0.4.0
mkdocs-material>=2.2.6
mkdocs==0.17.5
pymdown-extensions==4.12
mkdocs-bootswatch==0.5.0
mkdocs-material==2.9.4

appdirs==1.4.4
CacheControl==0.12.6
certifi==2020.12.5
chardet==4.0.0
click==8.1.3
colorama==0.4.4
contextlib2==0.6.0
distlib==0.3.1
distro==1.5.0
html5lib==1.1
idna==3.2
importlib-metadata==4.12.0
Jinja2==3.1.2
livereload==2.6.3
lockfile==0.12.2
Markdown==3.3.7
MarkupSafe==2.1.1
msgpack==1.0.2
ordered-set==4.0.2
packaging==20.9
pep517==0.10.0
progress==1.5
Pygments==2.12.0
pymdown-extensions==4.12
pyparsing==2.4.7
PyYAML==6.0
requests==2.25.1
retrying==1.3.3
six==1.15.0
toml==0.10.2
tornado==4.5.3
urllib3==1.26.5
webencodings==0.5.1
zipp==3.8.1
|
||||
|
||||
@@ -9,7 +9,6 @@ import (
|
||||
"sort"
|
||||
"strings"
|
||||
|
||||
"github.com/BurntSushi/ty/fun"
|
||||
"github.com/containous/mux"
|
||||
"github.com/containous/traefik/types"
|
||||
)
|
||||
@@ -270,9 +269,11 @@ func (r *Rules) Parse(expression string) (*mux.Route, error) {
|
||||
// ParseDomains parses rules expressions and returns domains
|
||||
func (r *Rules) ParseDomains(expression string) ([]string, error) {
|
||||
var domains []string
|
||||
isHostRule := false
|
||||
|
||||
err := r.parseRules(expression, func(functionName string, function interface{}, arguments []string) error {
|
||||
if functionName == "Host" {
|
||||
isHostRule = true
|
||||
domains = append(domains, arguments...)
|
||||
}
|
||||
return nil
|
||||
@@ -281,5 +282,18 @@ func (r *Rules) ParseDomains(expression string) ([]string, error) {
|
||||
return nil, fmt.Errorf("error parsing domains: %v", err)
|
||||
}
|
||||
|
||||
return fun.Map(types.CanonicalDomain, domains).([]string), nil
|
||||
var cleanDomains []string
|
||||
for _, domain := range domains {
|
||||
canonicalDomain := types.CanonicalDomain(domain)
|
||||
if len(canonicalDomain) > 0 {
|
||||
cleanDomains = append(cleanDomains, canonicalDomain)
|
||||
}
|
||||
}
|
||||
|
||||
// Return an error if a Host rule is detected but no domains are parsed
|
||||
if isHostRule && len(cleanDomains) == 0 {
|
||||
return nil, fmt.Errorf("unable to parse correctly the domains in the Host rule from %q", expression)
|
||||
}
|
||||
|
||||
return cleanDomains, nil
|
||||
}
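
In short, `ParseDomains` now drops empty canonical domains and fails when a `Host` rule carried nothing usable, instead of silently returning an empty entry. A condensed, dependency-free sketch of that filtering (hypothetical helper, not the actual `Rules` API; `canonical` assumes `types.CanonicalDomain` is a trim-and-lowercase):

```go
package main

import (
	"fmt"
	"strings"
)

func canonical(domain string) string {
	return strings.ToLower(strings.TrimSpace(domain))
}

func parseHostDomains(isHostRule bool, domains []string) ([]string, error) {
	var clean []string
	for _, d := range domains {
		if c := canonical(d); len(c) > 0 {
			clean = append(clean, c)
		}
	}
	if isHostRule && len(clean) == 0 {
		return nil, fmt.Errorf("unable to parse correctly the domains in the Host rule")
	}
	return clean, nil
}

func main() {
	fmt.Println(parseHostDomains(true, []string{" Foo.Bar "})) // [foo.bar] <nil>
	fmt.Println(parseHostDomains(true, []string{" "}))         // [] plus an error, like "Host: ;Path:/test" below
}
```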
|
||||
|
||||
@@ -54,24 +54,38 @@ func TestParseDomains(t *testing.T) {
|
||||
rules := &Rules{}
|
||||
|
||||
tests := []struct {
|
||||
expression string
|
||||
domain []string
|
||||
description string
|
||||
expression string
|
||||
domain []string
|
||||
errorExpected bool
|
||||
}{
|
||||
{
|
||||
expression: "Host:foo.bar,test.bar",
|
||||
domain: []string{"foo.bar", "test.bar"},
|
||||
description: "Many host rules",
|
||||
expression: "Host:foo.bar,test.bar",
|
||||
domain: []string{"foo.bar", "test.bar"},
|
||||
errorExpected: false,
|
||||
},
|
||||
{
|
||||
expression: "Path:/test",
|
||||
domain: []string{},
|
||||
description: "No host rule",
|
||||
expression: "Path:/test",
|
||||
errorExpected: false,
|
||||
},
|
||||
{
|
||||
expression: "Host:foo.bar;Path:/test",
|
||||
domain: []string{"foo.bar"},
|
||||
description: "Host rule and another rule",
|
||||
expression: "Host:foo.bar;Path:/test",
|
||||
domain: []string{"foo.bar"},
|
||||
errorExpected: false,
|
||||
},
|
||||
{
|
||||
expression: "Host: Foo.Bar ;Path:/test",
|
||||
domain: []string{"foo.bar"},
|
||||
description: "Host rule to trim and another rule",
|
||||
expression: "Host: Foo.Bar ;Path:/test",
|
||||
domain: []string{"foo.bar"},
|
||||
errorExpected: false,
|
||||
},
|
||||
{
|
||||
description: "Host rule with no domain",
|
||||
expression: "Host: ;Path:/test",
|
||||
errorExpected: true,
|
||||
},
|
||||
}
|
||||
|
||||
@@ -81,7 +95,12 @@ func TestParseDomains(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
domains, err := rules.ParseDomains(test.expression)
|
||||
require.NoError(t, err, "%s: Error while parsing domain.", test.expression)
|
||||
|
||||
if test.errorExpected {
|
||||
require.Errorf(t, err, "unable to parse correctly the domains in the Host rule from %q", test.expression)
|
||||
} else {
|
||||
require.NoError(t, err, "%s: Error while parsing domain.", test.expression)
|
||||
}
|
||||
|
||||
assert.EqualValues(t, test.domain, domains, "%s: Error parsing domains from expression.", test.expression)
|
||||
})
|
||||
|
||||
27 server/bufferpool.go (new file)
@@ -0,0 +1,27 @@
package server

import "sync"

const bufferPoolSize int = 32 * 1024

func newBufferPool() *bufferPool {
	return &bufferPool{
		pool: sync.Pool{
			New: func() interface{} {
				return make([]byte, bufferPoolSize)
			},
		},
	}
}

type bufferPool struct {
	pool sync.Pool
}

func (b *bufferPool) Get() []byte {
	return b.pool.Get().([]byte)
}

func (b *bufferPool) Put(bytes []byte) {
	b.pool.Put(bytes)
}
|
||||
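
This pool exists so the proxy can reuse 32 KiB copy buffers instead of allocating one per request; it is handed to `forward.BufferPool` in `loadConfig` further down. A small standalone sketch of the same usage pattern:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"strings"
	"sync"
)

const bufSize = 32 * 1024

func main() {
	// Same idea as newBufferPool: amortise buffer allocations across copies.
	pool := sync.Pool{New: func() interface{} { return make([]byte, bufSize) }}

	src := strings.NewReader("hello")
	var dst bytes.Buffer

	buf := pool.Get().([]byte)
	defer pool.Put(buf)
	if _, err := io.CopyBuffer(&dst, src, buf); err != nil {
		log.Fatal(err)
	}
	fmt.Println(dst.String()) // hello
}
```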
@@ -1,13 +1,21 @@
|
||||
package server
|
||||
|
||||
import (
|
||||
"context"
|
||||
"io"
|
||||
"net"
|
||||
"net/http"
|
||||
|
||||
"github.com/containous/traefik/log"
|
||||
"github.com/containous/traefik/middlewares"
|
||||
)
|
||||
|
||||
// StatusClientClosedRequest non-standard HTTP status code for client disconnection
|
||||
const StatusClientClosedRequest = 499
|
||||
|
||||
// StatusClientClosedRequestText non-standard HTTP status for client disconnection
|
||||
const StatusClientClosedRequestText = "Client Closed Request"
|
||||
|
||||
// RecordingErrorHandler is an error handler, implementing the vulcand/oxy
|
||||
// error handler interface, which records network errors by using the netErrorRecorder.
|
||||
// In addition it sets a proper HTTP status code and body, depending on the type of error that occurred.
|
||||
@@ -33,8 +41,18 @@ func (eh *RecordingErrorHandler) ServeHTTP(w http.ResponseWriter, req *http.Requ
|
||||
} else if err == io.EOF {
|
||||
eh.netErrorRecorder.Record(req.Context())
|
||||
statusCode = http.StatusBadGateway
|
||||
} else if err == context.Canceled {
|
||||
statusCode = StatusClientClosedRequest
|
||||
}
|
||||
|
||||
w.WriteHeader(statusCode)
|
||||
w.Write([]byte(http.StatusText(statusCode)))
|
||||
w.Write([]byte(statusText(statusCode)))
|
||||
log.Debugf("'%d %s' caused by: %v", statusCode, statusText(statusCode), err)
|
||||
}
|
||||
|
||||
func statusText(statusCode int) string {
|
||||
if statusCode == StatusClientClosedRequest {
|
||||
return StatusClientClosedRequestText
|
||||
}
|
||||
return http.StatusText(statusCode)
|
||||
}
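
499 is the nginx convention for a client that closed the connection before the response was written; `statusText` only exists because `http.StatusText` has no entry for it. A minimal sketch of the same mapping (assuming a 500 fallback for the remaining cases):

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
)

func statusForError(err error) int {
	switch err {
	case io.EOF:
		return http.StatusBadGateway // backend went away mid-response
	case context.Canceled:
		return 499 // StatusClientClosedRequest: the client gave up first
	default:
		return http.StatusInternalServerError
	}
}

func main() {
	fmt.Println(statusForError(context.Canceled)) // 499
}
```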
|
||||
|
||||
@@ -44,7 +44,7 @@ func (h *headerRewriter) Rewrite(req *http.Request) {
|
||||
|
||||
err := h.ips.IsAuthorized(req)
|
||||
if err != nil {
|
||||
log.Error(err)
|
||||
log.Debug(err)
|
||||
h.secureRewriter.Rewrite(req)
|
||||
return
|
||||
}
|
||||
|
||||
158 server/server.go
@@ -11,6 +11,7 @@ import (
|
||||
stdlog "log"
|
||||
"net"
|
||||
"net/http"
|
||||
"net/http/httputil"
|
||||
"net/url"
|
||||
"os"
|
||||
"os/signal"
|
||||
@@ -31,6 +32,7 @@ import (
|
||||
"github.com/containous/traefik/middlewares/accesslog"
|
||||
mauth "github.com/containous/traefik/middlewares/auth"
|
||||
"github.com/containous/traefik/middlewares/errorpages"
|
||||
"github.com/containous/traefik/middlewares/pipelining"
|
||||
"github.com/containous/traefik/middlewares/redirect"
|
||||
"github.com/containous/traefik/middlewares/tracing"
|
||||
"github.com/containous/traefik/provider"
|
||||
@@ -75,16 +77,115 @@ type Server struct {
|
||||
metricsRegistry metrics.Registry
|
||||
provider provider.Provider
|
||||
configurationListeners []func(types.Configuration)
|
||||
bufferPool httputil.BufferPool
|
||||
}
|
||||
|
||||
func newHijackConnectionTracker() *hijackConnectionTracker {
|
||||
return &hijackConnectionTracker{
|
||||
conns: make(map[net.Conn]struct{}),
|
||||
}
|
||||
}
|
||||
|
||||
type hijackConnectionTracker struct {
|
||||
conns map[net.Conn]struct{}
|
||||
lock sync.RWMutex
|
||||
}
|
||||
|
||||
// AddHijackedConnection adds a connection to the tracked connections list
|
||||
func (h *hijackConnectionTracker) AddHijackedConnection(conn net.Conn) {
|
||||
h.lock.Lock()
|
||||
defer h.lock.Unlock()
|
||||
h.conns[conn] = struct{}{}
|
||||
}
|
||||
|
||||
// RemoveHijackedConnection removes a connection from the tracked connections list
|
||||
func (h *hijackConnectionTracker) RemoveHijackedConnection(conn net.Conn) {
|
||||
h.lock.Lock()
|
||||
defer h.lock.Unlock()
|
||||
delete(h.conns, conn)
|
||||
}
|
||||
|
||||
// Shutdown waits for the tracked connections to be closed
|
||||
func (h *hijackConnectionTracker) Shutdown(ctx context.Context) error {
|
||||
ticker := time.NewTicker(500 * time.Millisecond)
|
||||
defer ticker.Stop()
|
||||
for {
|
||||
h.lock.RLock()
|
||||
if len(h.conns) == 0 {
	h.lock.RUnlock()
	return nil
}
|
||||
h.lock.RUnlock()
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return ctx.Err()
|
||||
case <-ticker.C:
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Close closes all the connections in the tracked connections list
|
||||
func (h *hijackConnectionTracker) Close() {
|
||||
for conn := range h.conns {
|
||||
if err := conn.Close(); err != nil {
|
||||
log.Errorf("Error while closing Hijacked conn: %v", err)
|
||||
}
|
||||
delete(h.conns, conn)
|
||||
}
|
||||
}
|
||||
|
||||
type serverEntryPoints map[string]*serverEntryPoint
|
||||
|
||||
type serverEntryPoint struct {
|
||||
httpServer *http.Server
|
||||
listener net.Listener
|
||||
httpRouter *middlewares.HandlerSwitcher
|
||||
certs safe.Safe
|
||||
onDemandListener func(string) (*tls.Certificate, error)
|
||||
httpServer *http.Server
|
||||
listener net.Listener
|
||||
httpRouter *middlewares.HandlerSwitcher
|
||||
certs safe.Safe
|
||||
onDemandListener func(string) (*tls.Certificate, error)
|
||||
hijackConnectionTracker *hijackConnectionTracker
|
||||
}
|
||||
|
||||
func (s serverEntryPoint) Shutdown(ctx context.Context) {
|
||||
var wg sync.WaitGroup
|
||||
wg.Add(1)
|
||||
go func() {
|
||||
defer wg.Done()
|
||||
if err := s.httpServer.Shutdown(ctx); err != nil {
|
||||
if ctx.Err() == context.DeadlineExceeded {
|
||||
log.Debugf("Wait server shutdown is over due to: %s", err)
|
||||
err = s.httpServer.Close()
|
||||
if err != nil {
|
||||
log.Error(err)
|
||||
}
|
||||
}
|
||||
}
|
||||
}()
|
||||
wg.Add(1)
|
||||
go func() {
|
||||
defer wg.Done()
|
||||
if err := s.hijackConnectionTracker.Shutdown(ctx); err != nil {
|
||||
if ctx.Err() == context.DeadlineExceeded {
|
||||
log.Debugf("Wait hijack connection is over due to: %s", err)
|
||||
s.hijackConnectionTracker.Close()
|
||||
}
|
||||
}
|
||||
}()
|
||||
wg.Wait()
|
||||
}
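
The `Shutdown` above is the usual two-phase drain: give in-flight work until the context deadline, then hard-close whatever is left. The same pattern against a bare `*http.Server`:

```go
package main

import (
	"context"
	"log"
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080"}
	go func() { _ = srv.ListenAndServe() }()

	// Grace period first, forced close as the fallback.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("graceful shutdown did not complete: %v", err)
		_ = srv.Close()
	}
}
```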
|
||||
|
||||
// tcpKeepAliveListener sets TCP keep-alive timeouts on accepted
|
||||
// connections.
|
||||
type tcpKeepAliveListener struct {
|
||||
*net.TCPListener
|
||||
}
|
||||
|
||||
func (ln tcpKeepAliveListener) Accept() (net.Conn, error) {
|
||||
tc, err := ln.AcceptTCP()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
tc.SetKeepAlive(true)
|
||||
tc.SetKeepAlivePeriod(3 * time.Minute)
|
||||
return tc, nil
|
||||
}
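
Wrapping the listener this way turns on TCP keep-alive probes for every accepted connection, so dead peers are detected and the configured idle timeout can take effect. A self-contained wiring example using the same wrapper idea:

```go
package main

import (
	"log"
	"net"
	"net/http"
	"time"
)

type keepAliveListener struct{ *net.TCPListener }

func (ln keepAliveListener) Accept() (net.Conn, error) {
	tc, err := ln.AcceptTCP()
	if err != nil {
		return nil, err
	}
	tc.SetKeepAlive(true)
	tc.SetKeepAlivePeriod(3 * time.Minute)
	return tc, nil
}

func main() {
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	srv := &http.Server{Handler: http.DefaultServeMux, IdleTimeout: 90 * time.Second}
	log.Fatal(srv.Serve(keepAliveListener{ln.(*net.TCPListener)}))
}
```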
|
||||
|
||||
// NewServer returns an initialized Server.
|
||||
@@ -106,6 +207,8 @@ func NewServer(globalConfiguration configuration.GlobalConfiguration, provider p
|
||||
server.globalConfiguration.API.CurrentConfigurations = &server.currentConfigurations
|
||||
}
|
||||
|
||||
server.bufferPool = newBufferPool()
|
||||
|
||||
server.routinesPool = safe.NewPool(context.Background())
|
||||
server.defaultForwardingRoundTripper = createHTTPTransport(globalConfiguration)
|
||||
|
||||
@@ -239,10 +342,7 @@ func (s *Server) Stop() {
|
||||
graceTimeOut := time.Duration(s.globalConfiguration.LifeCycle.GraceTimeOut)
|
||||
ctx, cancel := context.WithTimeout(context.Background(), graceTimeOut)
|
||||
log.Debugf("Waiting %s seconds before killing connections on entrypoint %s...", graceTimeOut, serverEntryPointName)
|
||||
if err := serverEntryPoint.httpServer.Shutdown(ctx); err != nil {
|
||||
log.Debugf("Wait is over due to: %s", err)
|
||||
serverEntryPoint.httpServer.Close()
|
||||
}
|
||||
serverEntryPoint.Shutdown(ctx)
|
||||
cancel()
|
||||
log.Debugf("Entrypoint %s closed", serverEntryPointName)
|
||||
}(sepn, sep)
|
||||
@@ -355,9 +455,20 @@ func (s *Server) setupServerEntryPoint(newServerEntryPointName string, newServer
|
||||
log.Fatal("Error preparing server: ", err)
|
||||
}
|
||||
serverEntryPoint := s.serverEntryPoints[newServerEntryPointName]
|
||||
|
||||
serverEntryPoint.httpServer = newSrv
|
||||
serverEntryPoint.listener = listener
|
||||
|
||||
serverEntryPoint.hijackConnectionTracker = newHijackConnectionTracker()
|
||||
serverEntryPoint.httpServer.ConnState = func(conn net.Conn, state http.ConnState) {
|
||||
switch state {
|
||||
case http.StateHijacked:
|
||||
serverEntryPoint.hijackConnectionTracker.AddHijackedConnection(conn)
|
||||
case http.StateClosed:
|
||||
serverEntryPoint.hijackConnectionTracker.RemoveHijackedConnection(conn)
|
||||
}
|
||||
}
|
||||
|
||||
return serverEntryPoint
|
||||
}
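
Hijacked connections (WebSockets, for instance) reach `http.StateHijacked` and never receive a `StateClosed` from the standard library, which is why the tracker is fed through `ConnState` here and why `loadConfig` below fires `StateClosed` manually from the websocket-closed hook. A stripped-down version of the hook wiring:

```go
package main

import (
	"log"
	"net"
	"net/http"
	"sync"
)

func main() {
	var mu sync.Mutex
	hijacked := map[net.Conn]struct{}{}

	srv := &http.Server{Addr: ":8080"}
	srv.ConnState = func(conn net.Conn, state http.ConnState) {
		mu.Lock()
		defer mu.Unlock()
		switch state {
		case http.StateHijacked:
			hijacked[conn] = struct{}{} // remember it so shutdown can close it later
		case http.StateClosed:
			delete(hijacked, conn)
		}
	}
	log.Fatal(srv.ListenAndServe())
}
```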
|
||||
|
||||
@@ -535,7 +646,10 @@ func (s *serverEntryPoint) getCertificate(clientHello *tls.ClientHelloInfo) (*tl
|
||||
}
|
||||
|
||||
func (s *Server) postLoadConfiguration() {
|
||||
metrics.OnConfigurationUpdate()
|
||||
if s.metricsRegistry.IsEnabled() {
|
||||
activeConfig := s.currentConfigurations.Get().(types.Configurations)
|
||||
metrics.OnConfigurationUpdate(activeConfig)
|
||||
}
|
||||
|
||||
if s.globalConfiguration.ACME == nil || s.leadership == nil || !s.leadership.IsLeader() {
|
||||
return
|
||||
@@ -562,9 +676,15 @@ func (s *Server) postLoadConfiguration() {
|
||||
domains, err := rules.ParseDomains(route.Rule)
|
||||
if err != nil {
|
||||
log.Errorf("Error parsing domains: %v", err)
|
||||
} else {
|
||||
s.globalConfiguration.ACME.LoadCertificateForDomains(domains)
|
||||
continue
|
||||
}
|
||||
|
||||
if len(domains) == 0 {
|
||||
log.Debugf("No domain parsed in rule %q", route.Rule)
|
||||
continue
|
||||
}
|
||||
|
||||
s.globalConfiguration.ACME.LoadCertificateForDomains(domains)
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -795,6 +915,8 @@ func (s *Server) prepareServer(entryPointName string, entryPoint *configuration.
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
listener = tcpKeepAliveListener{listener.(*net.TCPListener)}
|
||||
|
||||
if entryPoint.ProxyProtocol != nil {
|
||||
IPs, err := whitelist.NewIP(entryPoint.ProxyProtocol.TrustedIPs, entryPoint.ProxyProtocol.Insecure, false)
|
||||
if err != nil {
|
||||
@@ -998,6 +1120,16 @@ func (s *Server) loadConfig(configurations types.Configurations, globalConfigura
|
||||
forward.ErrorHandler(errorHandler),
|
||||
forward.Rewriter(rewriter),
|
||||
forward.ResponseModifier(responseModifier),
|
||||
forward.BufferPool(s.bufferPool),
|
||||
forward.WebsocketConnectionClosedHook(func(req *http.Request, conn net.Conn) {
|
||||
server := req.Context().Value(http.ServerContextKey).(*http.Server)
|
||||
if server != nil {
|
||||
connState := server.ConnState
|
||||
if connState != nil {
|
||||
connState(conn, http.StateClosed)
|
||||
}
|
||||
}
|
||||
}),
|
||||
)
|
||||
|
||||
if err != nil {
|
||||
@@ -1015,6 +1147,8 @@ func (s *Server) loadConfig(configurations types.Configurations, globalConfigura
|
||||
})
|
||||
}
|
||||
|
||||
fwd = pipelining.NewPipelining(fwd)
|
||||
|
||||
var rr *roundrobin.RoundRobin
|
||||
var saveFrontend http.Handler
|
||||
if s.accessLoggerMiddleware != nil {
|
||||
|
||||
@@ -16,9 +16,8 @@ import (
|
||||
"github.com/containous/traefik/healthcheck"
|
||||
"github.com/containous/traefik/metrics"
|
||||
"github.com/containous/traefik/middlewares"
|
||||
"github.com/containous/traefik/provider/label"
|
||||
"github.com/containous/traefik/rules"
|
||||
"github.com/containous/traefik/testhelpers"
|
||||
th "github.com/containous/traefik/testhelpers"
|
||||
"github.com/containous/traefik/tls"
|
||||
"github.com/containous/traefik/types"
|
||||
"github.com/davecgh/go-spew/spew"
|
||||
@@ -211,9 +210,9 @@ func TestListenProvidersSkipsSameConfigurationForProvider(t *testing.T) {
|
||||
}
|
||||
}()
|
||||
|
||||
config := buildDynamicConfig(
|
||||
withFrontend("frontend", buildFrontend()),
|
||||
withBackend("backend", buildBackend()),
|
||||
config := th.BuildConfiguration(
|
||||
th.WithFrontends(th.WithFrontend("backend")),
|
||||
th.WithBackends(th.WithBackendNew("backend")),
|
||||
)
|
||||
|
||||
// provide a configuration
|
||||
@@ -252,9 +251,9 @@ func TestListenProvidersPublishesConfigForEachProvider(t *testing.T) {
|
||||
}
|
||||
}()
|
||||
|
||||
config := buildDynamicConfig(
|
||||
withFrontend("frontend", buildFrontend()),
|
||||
withBackend("backend", buildBackend()),
|
||||
config := th.BuildConfiguration(
|
||||
th.WithFrontends(th.WithFrontend("backend")),
|
||||
th.WithBackends(th.WithBackendNew("backend")),
|
||||
)
|
||||
server.configurationChan <- types.ConfigMessage{ProviderName: "kubernetes", Configuration: config}
|
||||
server.configurationChan <- types.ConfigMessage{ProviderName: "marathon", Configuration: config}
|
||||
@@ -410,7 +409,7 @@ func TestServerMultipleFrontendRules(t *testing.T) {
|
||||
t.Fatalf("Error while building route for %s: %+v", expression, err)
|
||||
}
|
||||
|
||||
request := testhelpers.MustNewRequest(http.MethodGet, test.requestURL, nil)
|
||||
request := th.MustNewRequest(http.MethodGet, test.requestURL, nil)
|
||||
routeMatch := routeResult.Match(request, &mux.RouteMatch{Route: routeResult})
|
||||
|
||||
if !routeMatch {
|
||||
@@ -491,7 +490,7 @@ func TestServerLoadConfigHealthCheckOptions(t *testing.T) {
|
||||
if healthCheck != nil {
|
||||
wantNumHealthCheckBackends = 1
|
||||
}
|
||||
gotNumHealthCheckBackends := len(healthcheck.GetHealthCheck(testhelpers.NewCollectingHealthCheckMetrics()).Backends)
|
||||
gotNumHealthCheckBackends := len(healthcheck.GetHealthCheck(th.NewCollectingHealthCheckMetrics()).Backends)
|
||||
if gotNumHealthCheckBackends != wantNumHealthCheckBackends {
|
||||
t.Errorf("got %d health check backends, want %d", gotNumHealthCheckBackends, wantNumHealthCheckBackends)
|
||||
}
|
||||
@@ -859,62 +858,88 @@ func TestServerResponseEmptyBackend(t *testing.T) {
|
||||
|
||||
testCases := []struct {
desc string
dynamicConfig func(testServerURL string) *types.Configuration
config func(testServerURL string) *types.Configuration
wantStatusCode int
}{
{
desc: "Ok",
dynamicConfig: func(testServerURL string) *types.Configuration {
return buildDynamicConfig(
withFrontend("frontend", buildFrontend(withRoute(requestPath, routeRule))),
withBackend("backend", buildBackend(withServer("testServer", testServerURL))),
config: func(testServerURL string) *types.Configuration {
return th.BuildConfiguration(
th.WithFrontends(th.WithFrontend("backend",
th.WithEntryPoints("http"),
th.WithRoutes(th.WithRoute(requestPath, routeRule))),
),
th.WithBackends(th.WithBackendNew("backend",
th.WithLBMethod("wrr"),
th.WithServersNew(th.WithServerNew(testServerURL))),
),
)
},
wantStatusCode: http.StatusOK,
},
{
desc: "No Frontend",
dynamicConfig: func(testServerURL string) *types.Configuration {
return buildDynamicConfig()
config: func(testServerURL string) *types.Configuration {
return th.BuildConfiguration()
},
wantStatusCode: http.StatusNotFound,
},
{
desc: "Empty Backend LB-Drr",
dynamicConfig: func(testServerURL string) *types.Configuration {
return buildDynamicConfig(
withFrontend("frontend", buildFrontend(withRoute(requestPath, routeRule))),
withBackend("backend", buildBackend(withLoadBalancer("Drr", false))),
config: func(testServerURL string) *types.Configuration {
return th.BuildConfiguration(
th.WithFrontends(th.WithFrontend("backend",
th.WithEntryPoints("http"),
th.WithRoutes(th.WithRoute(requestPath, routeRule))),
),
th.WithBackends(th.WithBackendNew("backend",
th.WithLBMethod("drr")),
),
)
},
wantStatusCode: http.StatusServiceUnavailable,
},
{
desc: "Empty Backend LB-Drr Sticky",
dynamicConfig: func(testServerURL string) *types.Configuration {
return buildDynamicConfig(
withFrontend("frontend", buildFrontend(withRoute(requestPath, routeRule))),
withBackend("backend", buildBackend(withLoadBalancer("Drr", true))),
config: func(testServerURL string) *types.Configuration {
return th.BuildConfiguration(
th.WithFrontends(th.WithFrontend("backend",
th.WithEntryPoints("http"),
th.WithRoutes(th.WithRoute(requestPath, routeRule))),
),
th.WithBackends(th.WithBackendNew("backend",
th.WithLBMethod("drr"), th.WithLBSticky("test")),
),
)
},
wantStatusCode: http.StatusServiceUnavailable,
},
{
desc: "Empty Backend LB-Wrr",
dynamicConfig: func(testServerURL string) *types.Configuration {
return buildDynamicConfig(
withFrontend("frontend", buildFrontend(withRoute(requestPath, routeRule))),
withBackend("backend", buildBackend(withLoadBalancer("Wrr", false))),
config: func(testServerURL string) *types.Configuration {
return th.BuildConfiguration(
th.WithFrontends(th.WithFrontend("backend",
th.WithEntryPoints("http"),
th.WithRoutes(th.WithRoute(requestPath, routeRule))),
),
th.WithBackends(th.WithBackendNew("backend",
th.WithLBMethod("wrr")),
),
)
},
wantStatusCode: http.StatusServiceUnavailable,
},
{
desc: "Empty Backend LB-Wrr Sticky",
dynamicConfig: func(testServerURL string) *types.Configuration {
return buildDynamicConfig(
withFrontend("frontend", buildFrontend(withRoute(requestPath, routeRule))),
withBackend("backend", buildBackend(withLoadBalancer("Wrr", true))),
config: func(testServerURL string) *types.Configuration {
return th.BuildConfiguration(
th.WithFrontends(th.WithFrontend("backend",
th.WithEntryPoints("http"),
th.WithRoutes(th.WithRoute(requestPath, routeRule))),
),
th.WithBackends(th.WithBackendNew("backend",
th.WithLBMethod("wrr"), th.WithLBSticky("test")),
),
)
},
wantStatusCode: http.StatusServiceUnavailable,
@@ -937,7 +962,7 @@ func TestServerResponseEmptyBackend(t *testing.T) {
"http": &configuration.EntryPoint{ForwardedHeaders: &configuration.ForwardedHeaders{Insecure: true}},
},
}
dynamicConfigs := types.Configurations{"config": test.dynamicConfig(testServer.URL)}
dynamicConfigs := types.Configurations{"config": test.config(testServer.URL)}

srv := NewServer(globalConfig, nil)
entryPoints, err := srv.loadConfig(dynamicConfigs, globalConfig)
@@ -1036,7 +1061,7 @@ func TestBuildRedirectHandler(t *testing.T) {
rewrite, err := srv.buildRedirectHandler(test.srcEntryPointName, test.redirect)
require.NoError(t, err)

req := testhelpers.MustNewRequest(http.MethodGet, test.url, nil)
req := th.MustNewRequest(http.MethodGet, test.url, nil)
recorder := httptest.NewRecorder()

rewrite.ServeHTTP(recorder, req, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@@ -1166,71 +1191,3 @@ func TestNewServerWithResponseModifiers(t *testing.T) {
})
}
}

func buildDynamicConfig(dynamicConfigBuilders ...func(*types.Configuration)) *types.Configuration {
	config := &types.Configuration{
		Frontends: make(map[string]*types.Frontend),
		Backends:  make(map[string]*types.Backend),
	}
	for _, build := range dynamicConfigBuilders {
		build(config)
	}
	return config
}

func withFrontend(frontendName string, frontend *types.Frontend) func(*types.Configuration) {
	return func(config *types.Configuration) {
		config.Frontends[frontendName] = frontend
	}
}

func withBackend(backendName string, backend *types.Backend) func(*types.Configuration) {
	return func(config *types.Configuration) {
		config.Backends[backendName] = backend
	}
}

func buildFrontend(frontendBuilders ...func(*types.Frontend)) *types.Frontend {
	fe := &types.Frontend{
		EntryPoints: []string{"http"},
		Backend:     "backend",
		Routes:      make(map[string]types.Route),
	}
	for _, build := range frontendBuilders {
		build(fe)
	}
	return fe
}

func withRoute(routeName, rule string) func(*types.Frontend) {
	return func(fe *types.Frontend) {
		fe.Routes[routeName] = types.Route{Rule: rule}
	}
}

func buildBackend(backendBuilders ...func(*types.Backend)) *types.Backend {
	be := &types.Backend{
		Servers:      make(map[string]types.Server),
		LoadBalancer: &types.LoadBalancer{Method: "Wrr"},
	}
	for _, build := range backendBuilders {
		build(be)
	}
	return be
}

func withServer(name, url string) func(backend *types.Backend) {
	return func(be *types.Backend) {
		be.Servers[name] = types.Server{URL: url, Weight: label.DefaultWeight}
	}
}

func withLoadBalancer(method string, sticky bool) func(*types.Backend) {
	return func(be *types.Backend) {
		if sticky {
			be.LoadBalancer = &types.LoadBalancer{Method: method, Stickiness: &types.Stickiness{CookieName: "test"}}
		} else {
			be.LoadBalancer = &types.LoadBalancer{Method: method}
		}
	}
}

134 testhelpers/config.go Normal file
@@ -0,0 +1,134 @@
package testhelpers

import (
	"github.com/containous/traefik/provider"
	"github.com/containous/traefik/types"
)

// BuildConfiguration is a helper to create a configuration.
func BuildConfiguration(dynamicConfigBuilders ...func(*types.Configuration)) *types.Configuration {
	config := &types.Configuration{}
	for _, build := range dynamicConfigBuilders {
		build(config)
	}
	return config
}

// -- Backend

// WithBackends is a helper to create a configuration
func WithBackends(opts ...func(*types.Backend) string) func(*types.Configuration) {
	return func(c *types.Configuration) {
		c.Backends = make(map[string]*types.Backend)
		for _, opt := range opts {
			b := &types.Backend{}
			name := opt(b)
			c.Backends[name] = b
		}
	}
}

// WithBackendNew is a helper to create a configuration
func WithBackendNew(name string, opts ...func(*types.Backend)) func(*types.Backend) string {
	return func(b *types.Backend) string {
		for _, opt := range opts {
			opt(b)
		}
		return name
	}
}

// WithServersNew is a helper to create a configuration
func WithServersNew(opts ...func(*types.Server) string) func(*types.Backend) {
	return func(b *types.Backend) {
		b.Servers = make(map[string]types.Server)
		for _, opt := range opts {
			s := &types.Server{Weight: 1}
			name := opt(s)
			b.Servers[name] = *s
		}
	}
}

// WithServerNew is a helper to create a configuration
func WithServerNew(url string, opts ...func(*types.Server)) func(*types.Server) string {
	return func(s *types.Server) string {
		for _, opt := range opts {
			opt(s)
		}
		s.URL = url
		return provider.Normalize(url)
	}
}

// WithLBMethod is a helper to create a configuration
func WithLBMethod(method string) func(*types.Backend) {
	return func(b *types.Backend) {
		if b.LoadBalancer == nil {
			b.LoadBalancer = &types.LoadBalancer{}
		}
		b.LoadBalancer.Method = method
	}
}

// -- Frontend

// WithFrontends is a helper to create a configuration
func WithFrontends(opts ...func(*types.Frontend) string) func(*types.Configuration) {
	return func(c *types.Configuration) {
		c.Frontends = make(map[string]*types.Frontend)
		for _, opt := range opts {
			f := &types.Frontend{}
			name := opt(f)
			c.Frontends[name] = f
		}
	}
}

// WithFrontend is a helper to create a configuration
func WithFrontend(backend string, opts ...func(*types.Frontend)) func(*types.Frontend) string {
	return func(f *types.Frontend) string {
		for _, opt := range opts {
			opt(f)
		}
		f.Backend = backend
		return backend
	}
}

// WithEntryPoints is a helper to create a configuration
func WithEntryPoints(eps ...string) func(*types.Frontend) {
	return func(f *types.Frontend) {
		f.EntryPoints = eps
	}
}

// WithRoutes is a helper to create a configuration
func WithRoutes(opts ...func(*types.Route) string) func(*types.Frontend) {
	return func(f *types.Frontend) {
		f.Routes = make(map[string]types.Route)
		for _, opt := range opts {
			s := &types.Route{}
			name := opt(s)
			f.Routes[name] = *s
		}
	}
}

// WithRoute is a helper to create a configuration
func WithRoute(name string, rule string) func(*types.Route) string {
	return func(r *types.Route) string {
		r.Rule = rule
		return name
	}
}

// WithLBSticky is a helper to create a configuration
func WithLBSticky(cookieName string) func(*types.Backend) {
	return func(b *types.Backend) {
		if b.LoadBalancer == nil {
			b.LoadBalancer = &types.LoadBalancer{}
		}
		b.LoadBalancer.Stickiness = &types.Stickiness{CookieName: cookieName}
	}
}

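These option builders are meant to be composed in a single expression, as the rewritten test cases above do. A minimal usage sketch, assuming the package is imported as th; the names, route rule, and server URL below are illustrative placeholders, not values from this diff:

	cfg := th.BuildConfiguration(
		th.WithFrontends(th.WithFrontend("backend",
			th.WithEntryPoints("http"),
			th.WithRoutes(th.WithRoute("/test", "Path:/test"))),
		),
		th.WithBackends(th.WithBackendNew("backend",
			th.WithLBMethod("wrr"),
			th.WithServersNew(th.WithServerNew("http://127.0.0.1:8080"))),
		),
	)
	// cfg.Frontends["backend"] and cfg.Backends["backend"] are now populated.

Note that WithServerNew keys the server map by provider.Normalize(url), so two servers that normalize to the same URL overwrite each other; tests that need several servers should pass distinct URLs.
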
@@ -7,16 +7,6 @@ import (
"net/url"
)

// Intp returns a pointer to the given integer value.
func Intp(i int) *int {
return &i
}

// Stringp returns a pointer to the given string value.
func Stringp(s string) *string {
return &s
}

// MustNewRequest creates a new http get request or panics if it can't
func MustNewRequest(method, urlStr string, body io.Reader) *http.Request {
request, err := http.NewRequest(method, urlStr, body)

@@ -46,12 +46,12 @@ type CollectingHealthCheckMetrics struct {
Gauge *CollectingGauge
}

// NewCollectingHealthCheckMetrics creates a new CollectingHealthCheckMetrics instance.
func NewCollectingHealthCheckMetrics() *CollectingHealthCheckMetrics {
return &CollectingHealthCheckMetrics{&CollectingGauge{}}
}

// BackendServerUpGauge is there to satisfy the healthcheck.metricsRegistry interface.
func (m *CollectingHealthCheckMetrics) BackendServerUpGauge() metrics.Gauge {
return m.Gauge
}

// NewCollectingHealthCheckMetrics creates a new CollectingHealthCheckMetrics instance.
func NewCollectingHealthCheckMetrics() *CollectingHealthCheckMetrics {
return &CollectingHealthCheckMetrics{&CollectingGauge{}}
}

@@ -235,7 +235,7 @@ type Configurations map[string]*Configuration
type Configuration struct {
Backends map[string]*Backend `json:"backends,omitempty"`
Frontends map[string]*Frontend `json:"frontends,omitempty"`
TLS []*traefiktls.Configuration `json:"tls,omitempty"`
TLS []*traefiktls.Configuration `json:"-"`
}

// ConfigMessage hold configuration information exchanged between parts of traefik.

51 vendor/github.com/containous/staert/kv.go generated vendored
@@ -46,16 +46,16 @@ func (kv *KvSource) Parse(cmd *flaeg.Command) (*flaeg.Command, error) {

// LoadConfig loads data from the KV Store into the config structure (given by reference)
func (kv *KvSource) LoadConfig(config interface{}) error {
pairs := map[string][]byte{}
if err := kv.ListRecursive(kv.Prefix, pairs); err != nil {
pairs, err := kv.ListValuedPairWithPrefix(kv.Prefix)
if err != nil {
return err
}
// fmt.Printf("pairs : %#v\n", pairs)

mapStruct, err := generateMapstructure(convertPairs(pairs), kv.Prefix)
if err != nil {
return err
}
// fmt.Printf("mapStruct : %#v\n", mapStruct)

configDecoder := &mapstructure.DecoderConfig{
Metadata: nil,
Result: config,
@@ -77,11 +77,11 @@ func generateMapstructure(pairs []*store.KVPair, prefix string) (map[string]inte
for _, p := range pairs {
// Trim the prefix off our key first
key := strings.TrimPrefix(strings.Trim(p.Key, "/"), strings.Trim(prefix, "/")+"/")
raw, err := processKV(key, p.Value, raw)
var err error
raw, err = processKV(key, p.Value, raw)
if err != nil {
return raw, err
}

}
return raw, nil
}
@@ -313,15 +313,23 @@ func collateKvRecursive(objValue reflect.Value, kv map[string]string, key string
func writeCompressedData(data []byte) (string, error) {
var buffer bytes.Buffer
gzipWriter := gzip.NewWriter(&buffer)

_, err := gzipWriter.Write(data)
if err != nil {
return "", err
}
gzipWriter.Close()

err = gzipWriter.Close()
if err != nil {
return "", err
}

return buffer.String(), nil
}

// ListRecursive lists all key value children under key
// Replaced by ListValuedPairWithPrefix
// Deprecated
func (kv *KvSource) ListRecursive(key string, pairs map[string][]byte) error {
pairsN1, err := kv.List(key, nil)
if err == store.ErrKeyNotFound {
@@ -342,14 +350,37 @@ func (kv *KvSource) ListRecursive(key string, pairs map[string][]byte) error {
return nil
}
for _, p := range pairsN1 {
err := kv.ListRecursive(p.Key, pairs)
if err != nil {
return err
if p.Key != key {
err := kv.ListRecursive(p.Key, pairs)
if err != nil {
return err
}
}
}
return nil
}

// ListValuedPairWithPrefix lists all key value children under key
func (kv *KvSource) ListValuedPairWithPrefix(key string) (map[string][]byte, error) {
pairs := make(map[string][]byte)

pairsN1, err := kv.List(key, nil)
if err == store.ErrKeyNotFound {
return pairs, nil
}
if err != nil {
return pairs, err
}

for _, p := range pairsN1 {
if len(p.Value) > 0 {
pairs[p.Key] = p.Value
}
}

return pairs, nil
}

func convertPairs(pairs map[string][]byte) []*store.KVPair {
slicePairs := make([]*store.KVPair, len(pairs))
i := 0

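In call-pattern form, the LoadConfig change above swaps the deprecated recursive walk for a single prefix listing. This is only a sketch assembled from the lines shown in this diff, not new API:

	// Old (deprecated): walk the tree recursively into a caller-supplied map.
	//   pairs := map[string][]byte{}
	//   err := kv.ListRecursive(kv.Prefix, pairs)
	//
	// New: one call returns only the keys under the prefix that carry a value;
	// empty, directory-style keys are skipped inside ListValuedPairWithPrefix.
	pairs, err := kv.ListValuedPairWithPrefix(kv.Prefix)
	if err != nil {
		return err
	}
	mapStruct, err := generateMapstructure(convertPairs(pairs), kv.Prefix)
	if err != nil {
		return err
	}

The decoding steps after the listing (convertPairs, generateMapstructure, mapstructure decode) are unchanged.
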
164 vendor/github.com/containous/staert/staert.go generated vendored
@@ -2,12 +2,8 @@ package staert
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"reflect"
|
||||
"strings"
|
||||
|
||||
"github.com/BurntSushi/toml"
|
||||
"github.com/containous/flaeg"
|
||||
)
|
||||
|
||||
@@ -24,10 +20,7 @@ type Staert struct {
|
||||
|
||||
// NewStaert creates and return a pointer on Staert. Need defaultConfig and defaultPointersConfig given by references
|
||||
func NewStaert(rootCommand *flaeg.Command) *Staert {
|
||||
s := Staert{
|
||||
command: rootCommand,
|
||||
}
|
||||
return &s
|
||||
return &Staert{command: rootCommand}
|
||||
}
|
||||
|
||||
// AddSource adds new Source to Staert, give it by reference
|
||||
@@ -35,40 +28,31 @@ func (s *Staert) AddSource(src Source) {
|
||||
s.sources = append(s.sources, src)
|
||||
}
|
||||
|
||||
// getConfig for a flaeg.Command run sources Parse func in the raw
|
||||
func (s *Staert) parseConfigAllSources(cmd *flaeg.Command) error {
|
||||
for _, src := range s.sources {
|
||||
var err error
|
||||
_, err = src.Parse(cmd)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// LoadConfig check which command is called and parses config
|
||||
// It returns the the parsed config or an error if it fails
|
||||
func (s *Staert) LoadConfig() (interface{}, error) {
|
||||
for _, src := range s.sources {
|
||||
//Type assertion
|
||||
f, ok := src.(*flaeg.Flaeg)
|
||||
if ok {
|
||||
if fCmd, err := f.GetCommand(); err != nil {
|
||||
// Type assertion
|
||||
if flg, ok := src.(*flaeg.Flaeg); ok {
|
||||
fCmd, err := flg.GetCommand()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
} else if s.command != fCmd {
|
||||
//IF fleag sub-command
|
||||
}
|
||||
|
||||
// if fleag sub-command
|
||||
if s.command != fCmd {
|
||||
// if parseAllSources
|
||||
if fCmd.Metadata["parseAllSources"] == "true" {
|
||||
//IF parseAllSources
|
||||
fCmdConfigType := reflect.TypeOf(fCmd.Config)
|
||||
sCmdConfigType := reflect.TypeOf(s.command.Config)
|
||||
if fCmdConfigType != sCmdConfigType {
|
||||
return nil, fmt.Errorf("command %s : Config type doesn't match with root command config type. Expected %s got %s", fCmd.Name, sCmdConfigType.Name(), fCmdConfigType.Name())
|
||||
return nil, fmt.Errorf("command %s : Config type doesn't match with root command config type. Expected %s got %s",
|
||||
fCmd.Name, sCmdConfigType.Name(), fCmdConfigType.Name())
|
||||
}
|
||||
s.command = fCmd
|
||||
} else {
|
||||
// ELSE (not parseAllSources)
|
||||
s.command, err = f.Parse(fCmd)
|
||||
// (not parseAllSources)
|
||||
s.command, err = flg.Parse(fCmd)
|
||||
return s.command.Config, err
|
||||
}
|
||||
}
|
||||
@@ -78,117 +62,19 @@ func (s *Staert) LoadConfig() (interface{}, error) {
|
||||
return s.command.Config, err
|
||||
}
|
||||
|
||||
// parseConfigAllSources getConfig for a flaeg.Command run sources Parse func in the raw
|
||||
func (s *Staert) parseConfigAllSources(cmd *flaeg.Command) error {
|
||||
for _, src := range s.sources {
|
||||
_, err := src.Parse(cmd)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Run calls the Run func of the command
|
||||
// Warning, Run doesn't parse the config
|
||||
func (s *Staert) Run() error {
|
||||
return s.command.Run()
|
||||
}
|
||||
|
||||
//TomlSource impement Source
|
||||
type TomlSource struct {
|
||||
filename string
|
||||
dirNfullpath []string
|
||||
fullpath string
|
||||
}
|
||||
|
||||
// NewTomlSource creates and return a pointer on TomlSource.
|
||||
// Parameter filename is the file name (without extension type, ".toml" will be added)
|
||||
// dirNfullpath may contain directories or fullpath to the file.
|
||||
func NewTomlSource(filename string, dirNfullpath []string) *TomlSource {
|
||||
return &TomlSource{filename, dirNfullpath, ""}
|
||||
}
|
||||
|
||||
// ConfigFileUsed return config file used
|
||||
func (ts *TomlSource) ConfigFileUsed() string {
|
||||
return ts.fullpath
|
||||
}
|
||||
|
||||
func preprocessDir(dirIn string) (string, error) {
|
||||
dirOut := dirIn
|
||||
expanded := os.ExpandEnv(dirIn)
|
||||
dirOut, err := filepath.Abs(expanded)
|
||||
return dirOut, err
|
||||
}
|
||||
|
||||
func findFile(filename string, dirNfile []string) string {
|
||||
for _, df := range dirNfile {
|
||||
if df != "" {
|
||||
fullPath, _ := preprocessDir(df)
|
||||
if fileInfo, err := os.Stat(fullPath); err == nil && !fileInfo.IsDir() {
|
||||
return fullPath
|
||||
}
|
||||
|
||||
fullPath = filepath.Join(fullPath, filename+".toml")
|
||||
if fileInfo, err := os.Stat(fullPath); err == nil && !fileInfo.IsDir() {
|
||||
return fullPath
|
||||
}
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// Parse calls toml.DecodeFile() func
|
||||
func (ts *TomlSource) Parse(cmd *flaeg.Command) (*flaeg.Command, error) {
|
||||
ts.fullpath = findFile(ts.filename, ts.dirNfullpath)
|
||||
if len(ts.fullpath) < 2 {
|
||||
return cmd, nil
|
||||
}
|
||||
metadata, err := toml.DecodeFile(ts.fullpath, cmd.Config)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
boolFlags, err := flaeg.GetBoolFlags(cmd.Config)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
flaegArgs, hasUnderField, err := generateArgs(metadata, boolFlags)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// fmt.Println(flaegArgs)
|
||||
err = flaeg.Load(cmd.Config, cmd.DefaultPointersConfig, flaegArgs)
|
||||
//if err!= missing parser err
|
||||
if err != nil && err != flaeg.ErrParserNotFound {
|
||||
return nil, err
|
||||
}
|
||||
if hasUnderField {
|
||||
_, err := toml.DecodeFile(ts.fullpath, cmd.Config)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
return cmd, nil
|
||||
}
|
||||
|
||||
func generateArgs(metadata toml.MetaData, flags []string) ([]string, bool, error) {
|
||||
var flaegArgs []string
|
||||
keys := metadata.Keys()
|
||||
hasUnderField := false
|
||||
for i, key := range keys {
|
||||
// fmt.Println(key)
|
||||
if metadata.Type(key.String()) == "Hash" {
|
||||
// TOML hashes correspond to Go structs or maps.
|
||||
// fmt.Printf("%s could be a ptr on a struct, or a map\n", key)
|
||||
for j := i; j < len(keys); j++ {
|
||||
// fmt.Printf("%s =? %s\n", keys[j].String(), "."+key.String())
|
||||
if strings.Contains(keys[j].String(), key.String()+".") {
|
||||
hasUnderField = true
|
||||
break
|
||||
}
|
||||
}
|
||||
match := false
|
||||
for _, flag := range flags {
|
||||
if flag == strings.ToLower(key.String()) {
|
||||
match = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if match {
|
||||
flaegArgs = append(flaegArgs, "--"+strings.ToLower(key.String()))
|
||||
}
|
||||
}
|
||||
}
|
||||
return flaegArgs, hasUnderField, nil
|
||||
}
|
||||
|
||||
121 vendor/github.com/containous/staert/toml.go generated vendored Normal file
@@ -0,0 +1,121 @@
|
||||
package staert
|
||||
|
||||
import (
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
"github.com/BurntSushi/toml"
|
||||
"github.com/containous/flaeg"
|
||||
)
|
||||
|
||||
var _ Source = (*TomlSource)(nil)
|
||||
|
||||
// TomlSource implement staert.Source
|
||||
type TomlSource struct {
|
||||
filename string
|
||||
dirNFullPath []string
|
||||
fullPath string
|
||||
}
|
||||
|
||||
// NewTomlSource creates and return a pointer on Source.
|
||||
// Parameter filename is the file name (without extension type, ".toml" will be added)
|
||||
// dirNFullPath may contain directories or fullPath to the file.
|
||||
func NewTomlSource(filename string, dirNFullPath []string) *TomlSource {
|
||||
return &TomlSource{filename, dirNFullPath, ""}
|
||||
}
|
||||
|
||||
// ConfigFileUsed return config file used
|
||||
func (ts *TomlSource) ConfigFileUsed() string {
|
||||
return ts.fullPath
|
||||
}
|
||||
|
||||
// Parse calls toml.DecodeFile() func
|
||||
func (ts *TomlSource) Parse(cmd *flaeg.Command) (*flaeg.Command, error) {
|
||||
ts.fullPath = findFile(ts.filename, ts.dirNFullPath)
|
||||
if len(ts.fullPath) < 2 {
|
||||
return cmd, nil
|
||||
}
|
||||
|
||||
metadata, err := toml.DecodeFile(ts.fullPath, cmd.Config)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
boolFlags, err := flaeg.GetBoolFlags(cmd.Config)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
flgArgs, hasUnderField, err := generateArgs(metadata, boolFlags)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
err = flaeg.Load(cmd.Config, cmd.DefaultPointersConfig, flgArgs)
|
||||
if err != nil && err != flaeg.ErrParserNotFound {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if hasUnderField {
|
||||
_, err := toml.DecodeFile(ts.fullPath, cmd.Config)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
return cmd, nil
|
||||
}
|
||||
|
||||
func preProcessDir(dirIn string) (string, error) {
|
||||
expanded := os.ExpandEnv(dirIn)
|
||||
return filepath.Abs(expanded)
|
||||
}
|
||||
|
||||
func findFile(filename string, dirNFile []string) string {
|
||||
for _, df := range dirNFile {
|
||||
if df != "" {
|
||||
fullPath, _ := preProcessDir(df)
|
||||
if fileInfo, err := os.Stat(fullPath); err == nil && !fileInfo.IsDir() {
|
||||
return fullPath
|
||||
}
|
||||
|
||||
fullPath = filepath.Join(fullPath, filename+".toml")
|
||||
if fileInfo, err := os.Stat(fullPath); err == nil && !fileInfo.IsDir() {
|
||||
return fullPath
|
||||
}
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func generateArgs(metadata toml.MetaData, flags []string) ([]string, bool, error) {
|
||||
var flgArgs []string
|
||||
keys := metadata.Keys()
|
||||
hasUnderField := false
|
||||
|
||||
for i, key := range keys {
|
||||
if metadata.Type(key.String()) == "Hash" {
|
||||
// TOML hashes correspond to Go structs or maps.
|
||||
for j := i; j < len(keys); j++ {
|
||||
if strings.Contains(keys[j].String(), key.String()+".") {
|
||||
hasUnderField = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
match := false
|
||||
for _, flag := range flags {
|
||||
if flag == strings.ToLower(key.String()) {
|
||||
match = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if match {
|
||||
flgArgs = append(flgArgs, "--"+strings.ToLower(key.String()))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return flgArgs, hasUnderField, nil
|
||||
}
|
||||
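Taken together, the refactored staert pieces keep the same public flow: build a root command, attach sources, then load and run. A rough, self-contained sketch of that flow, assuming flaeg's Command fields as they are used elsewhere in this vendor tree; the command name, config type, and search directories are placeholders:

	package main

	import (
		"log"

		"github.com/containous/flaeg"
		"github.com/containous/staert"
	)

	// Config is a placeholder configuration struct for this sketch.
	type Config struct {
		Debug bool `description:"Enable debug mode"`
	}

	func main() {
		cfg := &Config{}
		rootCmd := &flaeg.Command{
			Name:                  "app",
			Description:           "placeholder root command",
			Config:                cfg,
			DefaultPointersConfig: &Config{},
			Run: func() error {
				log.Printf("debug=%v", cfg.Debug)
				return nil
			},
		}

		s := staert.NewStaert(rootCmd)
		// TomlSource looks for app.toml in each directory, as findFile above shows.
		s.AddSource(staert.NewTomlSource("app", []string{"./", "$HOME/.config/"}))

		if _, err := s.LoadConfig(); err != nil {
			log.Fatal(err)
		}
		if err := s.Run(); err != nil {
			log.Fatal(err)
		}
	}
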
2 vendor/github.com/dnsimple/dnsimple-go/LICENSE.txt generated vendored
@@ -1,6 +1,6 @@
The MIT License (MIT)

Copyright (c) 2014-2017 Aetrion LLC dba DNSimple
Copyright (c) 2014-2018 Aetrion LLC dba DNSimple

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

5 vendor/github.com/dnsimple/dnsimple-go/dnsimple/accounts.go generated vendored
@@ -1,15 +1,12 @@
package dnsimple

import (
)

type AccountsService struct {
client *Client
}

// Account represents a DNSimple account.
type Account struct {
ID int `json:"id,omitempty"`
ID int64 `json:"id,omitempty"`
Email string `json:"email,omitempty"`
PlanIdentifier string `json:"plan_identifier,omitempty"`
CreatedAt string `json:"created_at,omitempty"`

175 vendor/github.com/dnsimple/dnsimple-go/dnsimple/certificates.go generated vendored
@@ -2,29 +2,31 @@ package dnsimple
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strconv"
|
||||
)
|
||||
|
||||
// CertificatesService handles communication with the certificate related
|
||||
// methods of the DNSimple API.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/domains/certificates
|
||||
// See https://developer.dnsimple.com/v2/certificates
|
||||
type CertificatesService struct {
|
||||
client *Client
|
||||
}
|
||||
|
||||
// Certificate represents a Certificate in DNSimple.
|
||||
type Certificate struct {
|
||||
ID int `json:"id,omitempty"`
|
||||
DomainID int `json:"domain_id,omitempty"`
|
||||
CommonName string `json:"common_name,omitempty"`
|
||||
Years int `json:"years,omitempty"`
|
||||
State string `json:"state,omitempty"`
|
||||
AuthorityIdentifier string `json:"authority_identifier,omitempty"`
|
||||
CreatedAt string `json:"created_at,omitempty"`
|
||||
UpdatedAt string `json:"updated_at,omitempty"`
|
||||
ExpiresOn string `json:"expires_on,omitempty"`
|
||||
CertificateRequest string `json:"csr,omitempty"`
|
||||
ID int64 `json:"id,omitempty"`
|
||||
DomainID int64 `json:"domain_id,omitempty"`
|
||||
ContactID int64 `json:"contact_id,omitempty"`
|
||||
CommonName string `json:"common_name,omitempty"`
|
||||
AlternateNames []string `json:"alternate_names,omitempty"`
|
||||
Years int `json:"years,omitempty"`
|
||||
State string `json:"state,omitempty"`
|
||||
AuthorityIdentifier string `json:"authority_identifier,omitempty"`
|
||||
AutoRenew bool `json:"auto_renew"`
|
||||
CreatedAt string `json:"created_at,omitempty"`
|
||||
UpdatedAt string `json:"updated_at,omitempty"`
|
||||
ExpiresOn string `json:"expires_on,omitempty"`
|
||||
CertificateRequest string `json:"csr,omitempty"`
|
||||
}
|
||||
|
||||
// CertificateBundle represents a container for all the PEM-encoded X509 certificate entities,
|
||||
@@ -37,9 +39,46 @@ type CertificateBundle struct {
|
||||
IntermediateCertificates []string `json:"chain,omitempty"`
|
||||
}
|
||||
|
||||
func certificatePath(accountID, domainIdentifier, certificateID string) (path string) {
|
||||
// CertificatePurchase represents a Certificate Purchase in DNSimple.
|
||||
type CertificatePurchase struct {
|
||||
ID int64 `json:"id,omitempty"`
|
||||
CertificateID int64 `json:"new_certificate_id,omitempty"`
|
||||
State string `json:"state,omitempty"`
|
||||
AutoRenew bool `json:"auto_renew,omitempty"`
|
||||
CreatedAt string `json:"created_at,omitempty"`
|
||||
UpdatedAt string `json:"updated_at,omitempty"`
|
||||
}
|
||||
|
||||
// CertificateRenewal represents a Certificate Renewal in DNSimple.
|
||||
type CertificateRenewal struct {
|
||||
ID int64 `json:"id,omitempty"`
|
||||
OldCertificateID int64 `json:"old_certificate_id,omitempty"`
|
||||
NewCertificateID int64 `json:"new_certificate_id,omitempty"`
|
||||
State string `json:"state,omitempty"`
|
||||
AutoRenew bool `json:"auto_renew,omitempty"`
|
||||
CreatedAt string `json:"created_at,omitempty"`
|
||||
UpdatedAt string `json:"updated_at,omitempty"`
|
||||
}
|
||||
|
||||
// LetsencryptCertificateAttributes is a set of attributes to purchase a Let's Encrypt certificate.
|
||||
type LetsencryptCertificateAttributes struct {
|
||||
ContactID int64 `json:"contact_id,omitempty"`
|
||||
Name string `json:"name,omitempty"`
|
||||
AutoRenew bool `json:"auto_renew,omitempty"`
|
||||
AlternateNames []string `json:"alternate_names,omitempty"`
|
||||
}
|
||||
|
||||
func certificatePath(accountID, domainIdentifier string, certificateID int64) (path string) {
|
||||
path = fmt.Sprintf("%v/certificates", domainPath(accountID, domainIdentifier))
|
||||
if certificateID != "" {
|
||||
if certificateID != 0 {
|
||||
path += fmt.Sprintf("/%v", certificateID)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func letsencryptCertificatePath(accountID, domainIdentifier string, certificateID int64) (path string) {
|
||||
path = fmt.Sprintf("%v/certificates/letsencrypt", domainPath(accountID, domainIdentifier))
|
||||
if certificateID != 0 {
|
||||
path += fmt.Sprintf("/%v", certificateID)
|
||||
}
|
||||
return
|
||||
@@ -63,11 +102,23 @@ type certificatesResponse struct {
|
||||
Data []Certificate `json:"data"`
|
||||
}
|
||||
|
||||
// ListCertificates list the certificates for a domain.
|
||||
// certificatePurchaseResponse represents a response from an API method that returns a CertificatePurchase struct.
|
||||
type certificatePurchaseResponse struct {
|
||||
Response
|
||||
Data *CertificatePurchase `json:"data"`
|
||||
}
|
||||
|
||||
// certificateRenewalResponse represents a response from an API method that returns a CertificateRenewal struct.
|
||||
type certificateRenewalResponse struct {
|
||||
Response
|
||||
Data *CertificateRenewal `json:"data"`
|
||||
}
|
||||
|
||||
// ListCertificates lists the certificates for a domain in the account.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/domains/certificates#list
|
||||
// See https://developer.dnsimple.com/v2/certificates#listCertificates
|
||||
func (s *CertificatesService) ListCertificates(accountID, domainIdentifier string, options *ListOptions) (*certificatesResponse, error) {
|
||||
path := versioned(certificatePath(accountID, domainIdentifier, ""))
|
||||
path := versioned(certificatePath(accountID, domainIdentifier, 0))
|
||||
certificatesResponse := &certificatesResponse{}
|
||||
|
||||
path, err := addURLQueryOptions(path, options)
|
||||
@@ -84,11 +135,11 @@ func (s *CertificatesService) ListCertificates(accountID, domainIdentifier strin
|
||||
return certificatesResponse, nil
|
||||
}
|
||||
|
||||
// GetCertificate fetches the certificate.
|
||||
// GetCertificate gets the details of a certificate.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/domains/certificates#get
|
||||
func (s *CertificatesService) GetCertificate(accountID, domainIdentifier string, certificateID int) (*certificateResponse, error) {
|
||||
path := versioned(certificatePath(accountID, domainIdentifier, strconv.Itoa(certificateID)))
|
||||
// See https://developer.dnsimple.com/v2/certificates#getCertificate
|
||||
func (s *CertificatesService) GetCertificate(accountID, domainIdentifier string, certificateID int64) (*certificateResponse, error) {
|
||||
path := versioned(certificatePath(accountID, domainIdentifier, certificateID))
|
||||
certificateResponse := &certificateResponse{}
|
||||
|
||||
resp, err := s.client.get(path, certificateResponse)
|
||||
@@ -100,12 +151,12 @@ func (s *CertificatesService) GetCertificate(accountID, domainIdentifier string,
|
||||
return certificateResponse, nil
|
||||
}
|
||||
|
||||
// DownloadCertificate download the issued server certificate,
|
||||
// as well the root certificate and the intermediate chain.
|
||||
// DownloadCertificate gets the PEM-encoded certificate,
|
||||
// along with the root certificate and intermediate chain.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/domains/certificates#download
|
||||
func (s *CertificatesService) DownloadCertificate(accountID, domainIdentifier string, certificateID int) (*certificateBundleResponse, error) {
|
||||
path := versioned(certificatePath(accountID, domainIdentifier, strconv.Itoa(certificateID)) + "/download")
|
||||
// See https://developer.dnsimple.com/v2/certificates#downloadCertificate
|
||||
func (s *CertificatesService) DownloadCertificate(accountID, domainIdentifier string, certificateID int64) (*certificateBundleResponse, error) {
|
||||
path := versioned(certificatePath(accountID, domainIdentifier, certificateID) + "/download")
|
||||
certificateBundleResponse := &certificateBundleResponse{}
|
||||
|
||||
resp, err := s.client.get(path, certificateBundleResponse)
|
||||
@@ -117,11 +168,11 @@ func (s *CertificatesService) DownloadCertificate(accountID, domainIdentifier st
|
||||
return certificateBundleResponse, nil
|
||||
}
|
||||
|
||||
// GetCertificatePrivateKey fetches the certificate private key.
|
||||
// GetCertificatePrivateKey gets the PEM-encoded certificate private key.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/domains/certificates#get-private-key
|
||||
func (s *CertificatesService) GetCertificatePrivateKey(accountID, domainIdentifier string, certificateID int) (*certificateBundleResponse, error) {
|
||||
path := versioned(certificatePath(accountID, domainIdentifier, strconv.Itoa(certificateID)) + "/private_key")
|
||||
// See https://developer.dnsimple.com/v2/certificates#getCertificatePrivateKey
|
||||
func (s *CertificatesService) GetCertificatePrivateKey(accountID, domainIdentifier string, certificateID int64) (*certificateBundleResponse, error) {
|
||||
path := versioned(certificatePath(accountID, domainIdentifier, certificateID) + "/private_key")
|
||||
certificateBundleResponse := &certificateBundleResponse{}
|
||||
|
||||
resp, err := s.client.get(path, certificateBundleResponse)
|
||||
@@ -132,3 +183,67 @@ func (s *CertificatesService) GetCertificatePrivateKey(accountID, domainIdentifi
|
||||
certificateBundleResponse.HttpResponse = resp
|
||||
return certificateBundleResponse, nil
|
||||
}
|
||||
|
||||
// PurchaseLetsencryptCertificate purchases a Let's Encrypt certificate.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/certificates/#purchaseLetsencryptCertificate
|
||||
func (s *CertificatesService) PurchaseLetsencryptCertificate(accountID, domainIdentifier string, certificateAttributes LetsencryptCertificateAttributes) (*certificatePurchaseResponse, error) {
|
||||
path := versioned(letsencryptCertificatePath(accountID, domainIdentifier, 0))
|
||||
certificatePurchaseResponse := &certificatePurchaseResponse{}
|
||||
|
||||
resp, err := s.client.post(path, certificateAttributes, certificatePurchaseResponse)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
certificatePurchaseResponse.HttpResponse = resp
|
||||
return certificatePurchaseResponse, nil
|
||||
}
|
||||
|
||||
// IssueLetsencryptCertificate issues a pending Let's Encrypt certificate purchase order.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/certificates/#issueLetsencryptCertificate
|
||||
func (s *CertificatesService) IssueLetsencryptCertificate(accountID, domainIdentifier string, certificateID int64) (*certificateResponse, error) {
|
||||
path := versioned(letsencryptCertificatePath(accountID, domainIdentifier, certificateID) + "/issue")
|
||||
certificateResponse := &certificateResponse{}
|
||||
|
||||
resp, err := s.client.post(path, nil, certificateResponse)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
certificateResponse.HttpResponse = resp
|
||||
return certificateResponse, nil
|
||||
}
|
||||
|
||||
// PurchaseLetsencryptCertificateRenewal purchases a Let's Encrypt certificate renewal.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/certificates/#purchaseRenewalLetsencryptCertificate
|
||||
func (s *CertificatesService) PurchaseLetsencryptCertificateRenewal(accountID, domainIdentifier string, certificateID int64, certificateAttributes LetsencryptCertificateAttributes) (*certificateRenewalResponse, error) {
|
||||
path := versioned(letsencryptCertificatePath(accountID, domainIdentifier, certificateID) + "/renewals")
|
||||
certificateRenewalResponse := &certificateRenewalResponse{}
|
||||
|
||||
resp, err := s.client.post(path, certificateAttributes, certificateRenewalResponse)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
certificateRenewalResponse.HttpResponse = resp
|
||||
return certificateRenewalResponse, nil
|
||||
}
|
||||
|
||||
// IssueLetsencryptCertificateRenewal issues a pending Let's Encrypt certificate renewal order.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/certificates/#issueRenewalLetsencryptCertificate
|
||||
func (s *CertificatesService) IssueLetsencryptCertificateRenewal(accountID, domainIdentifier string, certificateID, certificateRenewalID int64) (*certificateResponse, error) {
|
||||
path := versioned(letsencryptCertificatePath(accountID, domainIdentifier, certificateID) + fmt.Sprintf("/renewals/%d/issue", certificateRenewalID))
|
||||
certificateResponse := &certificateResponse{}
|
||||
|
||||
resp, err := s.client.post(path, nil, certificateResponse)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
certificateResponse.HttpResponse = resp
|
||||
return certificateResponse, nil
|
||||
}
|
||||
|
||||
12 vendor/github.com/dnsimple/dnsimple-go/dnsimple/contacts.go generated vendored
@@ -14,8 +14,8 @@ type ContactsService struct {
|
||||
|
||||
// Contact represents a Contact in DNSimple.
|
||||
type Contact struct {
|
||||
ID int `json:"id,omitempty"`
|
||||
AccountID int `json:"account_id,omitempty"`
|
||||
ID int64 `json:"id,omitempty"`
|
||||
AccountID int64 `json:"account_id,omitempty"`
|
||||
Label string `json:"label,omitempty"`
|
||||
FirstName string `json:"first_name,omitempty"`
|
||||
LastName string `json:"last_name,omitempty"`
|
||||
@@ -34,7 +34,7 @@ type Contact struct {
|
||||
UpdatedAt string `json:"updated_at,omitempty"`
|
||||
}
|
||||
|
||||
func contactPath(accountID string, contactID int) (path string) {
|
||||
func contactPath(accountID string, contactID int64) (path string) {
|
||||
path = fmt.Sprintf("/%v/contacts", accountID)
|
||||
if contactID != 0 {
|
||||
path += fmt.Sprintf("/%v", contactID)
|
||||
@@ -94,7 +94,7 @@ func (s *ContactsService) CreateContact(accountID string, contactAttributes Cont
|
||||
// GetContact fetches a contact.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/contacts/#get
|
||||
func (s *ContactsService) GetContact(accountID string, contactID int) (*contactResponse, error) {
|
||||
func (s *ContactsService) GetContact(accountID string, contactID int64) (*contactResponse, error) {
|
||||
path := versioned(contactPath(accountID, contactID))
|
||||
contactResponse := &contactResponse{}
|
||||
|
||||
@@ -110,7 +110,7 @@ func (s *ContactsService) GetContact(accountID string, contactID int) (*contactR
|
||||
// UpdateContact updates a contact.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/contacts/#update
|
||||
func (s *ContactsService) UpdateContact(accountID string, contactID int, contactAttributes Contact) (*contactResponse, error) {
|
||||
func (s *ContactsService) UpdateContact(accountID string, contactID int64, contactAttributes Contact) (*contactResponse, error) {
|
||||
path := versioned(contactPath(accountID, contactID))
|
||||
contactResponse := &contactResponse{}
|
||||
|
||||
@@ -126,7 +126,7 @@ func (s *ContactsService) UpdateContact(accountID string, contactID int, contact
|
||||
// DeleteContact PERMANENTLY deletes a contact from the account.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/contacts/#delete
|
||||
func (s *ContactsService) DeleteContact(accountID string, contactID int) (*contactResponse, error) {
|
||||
func (s *ContactsService) DeleteContact(accountID string, contactID int64) (*contactResponse, error) {
|
||||
path := versioned(contactPath(accountID, contactID))
|
||||
contactResponse := &contactResponse{}
|
||||
|
||||
|
||||
2 vendor/github.com/dnsimple/dnsimple-go/dnsimple/dnsimple.go generated vendored
@@ -23,7 +23,7 @@ const (
// This is a pro-forma convention given that Go dependencies
// tends to be fetched directly from the repo.
// It is also used in the user-agent identify the client.
Version = "0.14.0"
Version = "0.16.0"

// defaultBaseURL to the DNSimple production API.
defaultBaseURL = "https://api.dnsimple.com"

6 vendor/github.com/dnsimple/dnsimple-go/dnsimple/domains.go generated vendored
@@ -14,9 +14,9 @@ type DomainsService struct {

// Domain represents a domain in DNSimple.
type Domain struct {
ID int `json:"id,omitempty"`
AccountID int `json:"account_id,omitempty"`
RegistrantID int `json:"registrant_id,omitempty"`
ID int64 `json:"id,omitempty"`
AccountID int64 `json:"account_id,omitempty"`
RegistrantID int64 `json:"registrant_id,omitempty"`
Name string `json:"name,omitempty"`
UnicodeName string `json:"unicode_name,omitempty"`
Token string `json:"token,omitempty"`

18 vendor/github.com/dnsimple/dnsimple-go/dnsimple/domains_collaborators.go generated vendored
@@ -6,10 +6,10 @@ import (
|
||||
|
||||
// Collaborator represents a Collaborator in DNSimple.
|
||||
type Collaborator struct {
|
||||
ID int `json:"id,omitempty"`
|
||||
DomainID int `json:"domain_id,omitempty"`
|
||||
ID int64 `json:"id,omitempty"`
|
||||
DomainID int64 `json:"domain_id,omitempty"`
|
||||
DomainName string `json:"domain_name,omitempty"`
|
||||
UserID int `json:"user_id,omitempty"`
|
||||
UserID int64 `json:"user_id,omitempty"`
|
||||
UserEmail string `json:"user_email,omitempty"`
|
||||
Invitation bool `json:"invitation,omitempty"`
|
||||
CreatedAt string `json:"created_at,omitempty"`
|
||||
@@ -17,9 +17,9 @@ type Collaborator struct {
|
||||
AcceptedAt string `json:"accepted_at,omitempty"`
|
||||
}
|
||||
|
||||
func collaboratorPath(accountID, domainIdentifier, collaboratorID string) (path string) {
|
||||
func collaboratorPath(accountID, domainIdentifier string, collaboratorID int64) (path string) {
|
||||
path = fmt.Sprintf("%v/collaborators", domainPath(accountID, domainIdentifier))
|
||||
if collaboratorID != "" {
|
||||
if collaboratorID != 0 {
|
||||
path += fmt.Sprintf("/%v", collaboratorID)
|
||||
}
|
||||
return
|
||||
@@ -46,7 +46,7 @@ type collaboratorsResponse struct {
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/domains/collaborators#list
|
||||
func (s *DomainsService) ListCollaborators(accountID, domainIdentifier string, options *ListOptions) (*collaboratorsResponse, error) {
|
||||
path := versioned(collaboratorPath(accountID, domainIdentifier, ""))
|
||||
path := versioned(collaboratorPath(accountID, domainIdentifier, 0))
|
||||
collaboratorsResponse := &collaboratorsResponse{}
|
||||
|
||||
path, err := addURLQueryOptions(path, options)
|
||||
@@ -67,7 +67,7 @@ func (s *DomainsService) ListCollaborators(accountID, domainIdentifier string, o
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/domains/collaborators#add
|
||||
func (s *DomainsService) AddCollaborator(accountID string, domainIdentifier string, attributes CollaboratorAttributes) (*collaboratorResponse, error) {
|
||||
path := versioned(collaboratorPath(accountID, domainIdentifier, ""))
|
||||
path := versioned(collaboratorPath(accountID, domainIdentifier, 0))
|
||||
collaboratorResponse := &collaboratorResponse{}
|
||||
|
||||
resp, err := s.client.post(path, attributes, collaboratorResponse)
|
||||
@@ -81,8 +81,8 @@ func (s *DomainsService) AddCollaborator(accountID string, domainIdentifier stri
|
||||
|
||||
// RemoveCollaborator PERMANENTLY deletes a domain from the account.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/domains/collaborators#add
|
||||
func (s *DomainsService) RemoveCollaborator(accountID string, domainIdentifier string, collaboratorID string) (*collaboratorResponse, error) {
|
||||
// See https://developer.dnsimple.com/v2/domains/collaborators#remove
|
||||
func (s *DomainsService) RemoveCollaborator(accountID string, domainIdentifier string, collaboratorID int64) (*collaboratorResponse, error) {
|
||||
path := versioned(collaboratorPath(accountID, domainIdentifier, collaboratorID))
|
||||
collaboratorResponse := &collaboratorResponse{}
|
||||
|
||||
|
||||
12 vendor/github.com/dnsimple/dnsimple-go/dnsimple/domains_delegation_signer_records.go generated vendored
@@ -4,8 +4,8 @@ import "fmt"
|
||||
|
||||
// DelegationSignerRecord represents a delegation signer record for a domain in DNSimple.
|
||||
type DelegationSignerRecord struct {
|
||||
ID int `json:"id,omitempty"`
|
||||
DomainID int `json:"domain_id,omitempty"`
|
||||
ID int64 `json:"id,omitempty"`
|
||||
DomainID int64 `json:"domain_id,omitempty"`
|
||||
Algorithm string `json:"algorithm"`
|
||||
Digest string `json:"digest"`
|
||||
DigestType string `json:"digest_type"`
|
||||
@@ -14,10 +14,10 @@ type DelegationSignerRecord struct {
|
||||
UpdatedAt string `json:"updated_at,omitempty"`
|
||||
}
|
||||
|
||||
func delegationSignerRecordPath(accountID string, domainIdentifier string, dsRecordID int) (path string) {
|
||||
func delegationSignerRecordPath(accountID string, domainIdentifier string, dsRecordID int64) (path string) {
|
||||
path = fmt.Sprintf("%v/ds_records", domainPath(accountID, domainIdentifier))
|
||||
if dsRecordID != 0 {
|
||||
path += fmt.Sprintf("/%d", dsRecordID)
|
||||
path += fmt.Sprintf("/%v", dsRecordID)
|
||||
}
|
||||
return
|
||||
}
|
||||
@@ -74,7 +74,7 @@ func (s *DomainsService) CreateDelegationSignerRecord(accountID string, domainId
|
||||
// GetDelegationSignerRecord fetches a delegation signer record.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/domains/dnssec/#ds-record-get
|
||||
func (s *DomainsService) GetDelegationSignerRecord(accountID string, domainIdentifier string, dsRecordID int) (*delegationSignerRecordResponse, error) {
|
||||
func (s *DomainsService) GetDelegationSignerRecord(accountID string, domainIdentifier string, dsRecordID int64) (*delegationSignerRecordResponse, error) {
|
||||
path := versioned(delegationSignerRecordPath(accountID, domainIdentifier, dsRecordID))
|
||||
dsRecordResponse := &delegationSignerRecordResponse{}
|
||||
|
||||
@@ -91,7 +91,7 @@ func (s *DomainsService) GetDelegationSignerRecord(accountID string, domainIdent
|
||||
// from the domain.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/domains/dnssec/#ds-record-delete
|
||||
func (s *DomainsService) DeleteDelegationSignerRecord(accountID string, domainIdentifier string, dsRecordID int) (*delegationSignerRecordResponse, error) {
|
||||
func (s *DomainsService) DeleteDelegationSignerRecord(accountID string, domainIdentifier string, dsRecordID int64) (*delegationSignerRecordResponse, error) {
|
||||
path := versioned(delegationSignerRecordPath(accountID, domainIdentifier, dsRecordID))
|
||||
dsRecordResponse := &delegationSignerRecordResponse{}
|
||||
|
||||
|
||||
4 vendor/github.com/dnsimple/dnsimple-go/dnsimple/domains_dnssec.go generated vendored
@@ -1,6 +1,8 @@
package dnsimple

import "fmt"
import (
"fmt"
)

// Dnssec represents the current DNSSEC settings for a domain in DNSimple.
type Dnssec struct {

14 vendor/github.com/dnsimple/dnsimple-go/dnsimple/domains_email_forwards.go generated vendored
@@ -6,18 +6,18 @@ import (
|
||||
|
||||
// EmailForward represents an email forward in DNSimple.
|
||||
type EmailForward struct {
|
||||
ID int `json:"id,omitempty"`
|
||||
DomainID int `json:"domain_id,omitempty"`
|
||||
ID int64 `json:"id,omitempty"`
|
||||
DomainID int64 `json:"domain_id,omitempty"`
|
||||
From string `json:"from,omitempty"`
|
||||
To string `json:"to,omitempty"`
|
||||
CreatedAt string `json:"created_at,omitempty"`
|
||||
UpdatedAt string `json:"updated_at,omitempty"`
|
||||
}
|
||||
|
||||
func emailForwardPath(accountID string, domainIdentifier string, forwardID int) (path string) {
|
||||
func emailForwardPath(accountID string, domainIdentifier string, forwardID int64) (path string) {
|
||||
path = fmt.Sprintf("%v/email_forwards", domainPath(accountID, domainIdentifier))
|
||||
if forwardID != 0 {
|
||||
path += fmt.Sprintf("/%d", forwardID)
|
||||
path += fmt.Sprintf("/%v", forwardID)
|
||||
}
|
||||
return
|
||||
}
|
||||
@@ -38,7 +38,7 @@ type emailForwardsResponse struct {
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/domains/email-forwards/#list
|
||||
func (s *DomainsService) ListEmailForwards(accountID string, domainIdentifier string, options *ListOptions) (*emailForwardsResponse, error) {
|
||||
path := versioned(emailForwardPath(accountID, domainIdentifier , 0))
|
||||
path := versioned(emailForwardPath(accountID, domainIdentifier, 0))
|
||||
forwardsResponse := &emailForwardsResponse{}
|
||||
|
||||
path, err := addURLQueryOptions(path, options)
|
||||
@@ -74,7 +74,7 @@ func (s *DomainsService) CreateEmailForward(accountID string, domainIdentifier s
|
||||
// GetEmailForward fetches an email forward.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/domains/email-forwards/#get
|
||||
func (s *DomainsService) GetEmailForward(accountID string, domainIdentifier string, forwardID int) (*emailForwardResponse, error) {
|
||||
func (s *DomainsService) GetEmailForward(accountID string, domainIdentifier string, forwardID int64) (*emailForwardResponse, error) {
|
||||
path := versioned(emailForwardPath(accountID, domainIdentifier, forwardID))
|
||||
forwardResponse := &emailForwardResponse{}
|
||||
|
||||
@@ -90,7 +90,7 @@ func (s *DomainsService) GetEmailForward(accountID string, domainIdentifier stri
|
||||
// DeleteEmailForward PERMANENTLY deletes an email forward from the domain.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/domains/email-forwards/#delete
|
||||
func (s *DomainsService) DeleteEmailForward(accountID string, domainIdentifier string, forwardID int) (*emailForwardResponse, error) {
|
||||
func (s *DomainsService) DeleteEmailForward(accountID string, domainIdentifier string, forwardID int64) (*emailForwardResponse, error) {
|
||||
path := versioned(emailForwardPath(accountID, domainIdentifier, forwardID))
|
||||
forwardResponse := &emailForwardResponse{}
|
||||
|
||||
|
||||
20 vendor/github.com/dnsimple/dnsimple-go/dnsimple/domains_pushes.go generated vendored
@@ -6,19 +6,19 @@ import (
|
||||
|
||||
// DomainPush represents a domain push in DNSimple.
|
||||
type DomainPush struct {
|
||||
ID int `json:"id,omitempty"`
|
||||
DomainID int `json:"domain_id,omitempty"`
|
||||
ContactID int `json:"contact_id,omitempty"`
|
||||
AccountID int `json:"account_id,omitempty"`
|
||||
ID int64 `json:"id,omitempty"`
|
||||
DomainID int64 `json:"domain_id,omitempty"`
|
||||
ContactID int64 `json:"contact_id,omitempty"`
|
||||
AccountID int64 `json:"account_id,omitempty"`
|
||||
CreatedAt string `json:"created_at,omitempty"`
|
||||
UpdatedAt string `json:"updated_at,omitempty"`
|
||||
AcceptedAt string `json:"accepted_at,omitempty"`
|
||||
}
|
||||
|
||||
func domainPushPath(accountID string, pushID int) (path string) {
|
||||
func domainPushPath(accountID string, pushID int64) (path string) {
|
||||
path = fmt.Sprintf("/%v/pushes", accountID)
|
||||
if pushID != 0 {
|
||||
path += fmt.Sprintf("/%d", pushID)
|
||||
path += fmt.Sprintf("/%v", pushID)
|
||||
}
|
||||
return
|
||||
}
|
||||
@@ -38,13 +38,13 @@ type domainPushesResponse struct {
|
||||
// DomainPushAttributes represent a domain push payload (see initiate).
|
||||
type DomainPushAttributes struct {
|
||||
NewAccountEmail string `json:"new_account_email,omitempty"`
|
||||
ContactID string `json:"contact_id,omitempty"`
|
||||
ContactID int64 `json:"contact_id,omitempty"`
|
||||
}
|
||||
|
||||
// InitiatePush initiate a new domain push.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/domains/pushes/#initiate
|
||||
func (s *DomainsService) InitiatePush(accountID string, domainID string, pushAttributes DomainPushAttributes) (*domainPushResponse, error) {
|
||||
func (s *DomainsService) InitiatePush(accountID, domainID string, pushAttributes DomainPushAttributes) (*domainPushResponse, error) {
|
||||
path := versioned(fmt.Sprintf("/%v/pushes", domainPath(accountID, domainID)))
|
||||
pushResponse := &domainPushResponse{}
|
||||
|
||||
@@ -81,7 +81,7 @@ func (s *DomainsService) ListPushes(accountID string, options *ListOptions) (*do
|
||||
// AcceptPush accept a push for a domain.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/domains/pushes/#accept
|
||||
func (s *DomainsService) AcceptPush(accountID string, pushID int, pushAttributes DomainPushAttributes) (*domainPushResponse, error) {
|
||||
func (s *DomainsService) AcceptPush(accountID string, pushID int64, pushAttributes DomainPushAttributes) (*domainPushResponse, error) {
|
||||
path := versioned(domainPushPath(accountID, pushID))
|
||||
pushResponse := &domainPushResponse{}
|
||||
|
||||
@@ -97,7 +97,7 @@ func (s *DomainsService) AcceptPush(accountID string, pushID int, pushAttributes
|
||||
// RejectPush reject a push for a domain.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/domains/pushes/#reject
|
||||
func (s *DomainsService) RejectPush(accountID string, pushID int) (*domainPushResponse, error) {
|
||||
func (s *DomainsService) RejectPush(accountID string, pushID int64) (*domainPushResponse, error) {
|
||||
path := versioned(domainPushPath(accountID, pushID))
|
||||
pushResponse := &domainPushResponse{}
|
||||
|
||||
|
||||
25 vendor/github.com/dnsimple/dnsimple-go/dnsimple/registrar.go generated vendored
@@ -28,7 +28,7 @@ type domainCheckResponse struct {
|
||||
// CheckDomain checks a domain name.
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/registrar/#check
|
||||
func (s *RegistrarService) CheckDomain(accountID, domainName string) (*domainCheckResponse, error) {
|
||||
func (s *RegistrarService) CheckDomain(accountID string, domainName string) (*domainCheckResponse, error) {
|
||||
path := versioned(fmt.Sprintf("/%v/registrar/domains/%v/check", accountID, domainName))
|
||||
checkResponse := &domainCheckResponse{}
|
||||
|
||||
@@ -70,7 +70,7 @@ type DomainPremiumPriceOptions struct {
|
||||
// - renewal
|
||||
//
|
||||
// See https://developer.dnsimple.com/v2/registrar/#premium-price
|
||||
func (s *RegistrarService) GetDomainPremiumPrice(accountID, domainName string, options *DomainPremiumPriceOptions) (*domainPremiumPriceResponse, error) {
|
||||
func (s *RegistrarService) GetDomainPremiumPrice(accountID string, domainName string, options *DomainPremiumPriceOptions) (*domainPremiumPriceResponse, error) {
|
||||
var err error
|
||||
path := versioned(fmt.Sprintf("/%v/registrar/domains/%v/premium_price", accountID, domainName))
|
||||
priceResponse := &domainPremiumPriceResponse{}
|
||||
@@ -100,7 +100,6 @@ type DomainRegistration struct {
|
||||
State string `json:"state"`
|
||||
AutoRenew bool `json:"auto_renew"`
|
||||
WhoisPrivacy bool `json:"whois_privacy"`
|
||||
PremiumPrice string `json:"premium_price"`
|
||||
CreatedAt string `json:"created_at,omitempty"`
|
||||
UpdatedAt string `json:"updated_at,omitempty"`
|
||||
}
|
||||
@@ -122,6 +121,8 @@ type DomainRegisterRequest struct {
|
||||
// Set to true to enable the auto-renewal of the domain.
|
||||
// Default to true.
|
||||
EnableAutoRenewal bool `json:"auto_renew,omitempty"`
|
||||
// Required as confirmation of the price, only if the domain is premium.
|
||||
PremiumPrice string `json:"premium_price,omitempty"`
|
||||
}
|
||||
|
||||
// RegisterDomain registers a domain name.
|
||||
@@ -150,7 +151,6 @@ type DomainTransfer struct {
|
||||
State string `json:"state"`
|
||||
AutoRenew bool `json:"auto_renew"`
|
||||
WhoisPrivacy bool `json:"whois_privacy"`
|
||||
PremiumPrice string `json:"premium_price"`
|
||||
CreatedAt string `json:"created_at,omitempty"`
|
||||
UpdatedAt string `json:"updated_at,omitempty"`
|
||||
}
|
||||
@@ -175,6 +175,8 @@ type DomainTransferRequest struct {
|
||||
// Set to true to enable the auto-renewal of the domain.
|
||||
// Default to true.
|
||||
EnableAutoRenewal bool `json:"auto_renew,omitempty"`
|
||||
// Required as confirmation of the price, only if the domain is premium.
|
||||
PremiumPrice string `json:"premium_price,omitempty"`
|
||||
}
|
||||
|
||||
// TransferDomain transfers a domain name.
|
||||
@@ -219,13 +221,12 @@ func (s *RegistrarService) TransferDomainOut(accountID string, domainName string
|
||||
|
||||
// DomainRenewal represents the result of a domain renewal call.
|
||||
type DomainRenewal struct {
|
||||
ID int `json:"id"`
|
||||
DomainID int `json:"domain_id"`
|
||||
Period int `json:"period"`
|
||||
State string `json:"state"`
|
||||
PremiumPrice string `json:"premium_price"`
|
||||
CreatedAt string `json:"created_at,omitempty"`
|
||||
UpdatedAt string `json:"updated_at,omitempty"`
|
||||
ID int `json:"id"`
|
||||
DomainID int `json:"domain_id"`
|
||||
Period int `json:"period"`
|
||||
State string `json:"state"`
|
||||
CreatedAt string `json:"created_at,omitempty"`
|
||||
UpdatedAt string `json:"updated_at,omitempty"`
|
||||
}
|
||||
|
||||
// domainRenewalResponse represents a response from an API method that returns a domain renewal.
|
||||
@@ -239,6 +240,8 @@ type domainRenewalResponse struct {
|
||||
type DomainRenewRequest struct {
|
||||
// The number of years
|
||||
Period int `json:"period"`
|
||||
// Required as confirmation of the price, only if the domain is premium.
|
||||
PremiumPrice string `json:"premium_price,omitempty"`
|
||||
}
|
||||
|
||||
// RenewDomain renews a domain name.
|
||||
|
||||
4 vendor/github.com/dnsimple/dnsimple-go/dnsimple/registrar_whois_privacy.go generated vendored
@@ -6,8 +6,8 @@ import (

// WhoisPrivacy represents a whois privacy in DNSimple.
type WhoisPrivacy struct {
ID int `json:"id,omitempty"`
DomainID int `json:"domain_id,omitempty"`
ID int64 `json:"id,omitempty"`
DomainID int64 `json:"domain_id,omitempty"`
Enabled bool `json:"enabled,omitempty"`
ExpiresOn string `json:"expires_on,omitempty"`
CreatedAt string `json:"created_at,omitempty"`

Some files were not shown because too many files have changed in this diff.