Compare commits


18 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Ludovic Fernandez | 418dca1113 | Prepare release v1.7.8 | 2019-01-29 17:22:07 +01:00 |
| Ludovic Fernandez | ed293e3058 | Fixes docker swarm mode refresh second for KV. | 2019-01-29 17:08:08 +01:00 |
| Foivos Filippopoulos | 40bb0cd879 | Check for dynamic tls updates on configuration preload | 2019-01-29 16:46:09 +01:00 |
| Ludovic Fernandez | 32f5e0df8f | Updates lego. | 2019-01-25 10:24:04 +01:00 |
| Maarten van der Hoef | 40b8a93930 | Generic awsvpc support, not just Fargate | 2019-01-21 17:00:07 +01:00 |
| hwhelan-CB | b52b0f58b9 | Cache existing task definitions to avoid rate limiting | 2019-01-19 08:56:02 +01:00 |
| David Birks | 0016db0856 | Minor formatting fixes | 2019-01-16 20:10:04 +01:00 |
| Joost Cassee | 5aeca4507e | Support Datadog tracer priority sampling | 2019-01-16 17:08:06 +01:00 |
| Thorsten | 11bfb49e65 | Route priorities: document minimum priority value | 2019-01-16 16:44:05 +01:00 |
| Dragnucs | d0cdb5608f | Note about quotes for entrypoint definition with docker-compose | 2019-01-16 16:36:07 +01:00 |
| Timo Reimann | 8bb8ad5e02 | Assert that test timeout service is ready. | 2019-01-16 15:00:09 +01:00 |
| rbq | b488d8365c | Allow Træfik to update Ingress status | 2019-01-15 16:42:05 +01:00 |
| Ludovic Fernandez | e2bd7b45d1 | doc: more detailed info about Google Cloud DNS. | 2019-01-15 16:30:08 +01:00 |
| Tim Stackhouse | c8ea2ce703 | Tested wildcard ACME challenge with DNSimple | 2019-01-14 17:40:04 +01:00 |
| Henri Larget | 7d1edcd735 | doc missing information about statistics parameter | 2019-01-14 17:28:04 +01:00 |
| Ishaan Bahal | 9660c31d3d | Removed repeated entryPoints.http from grpc.md | 2019-01-11 16:34:04 +01:00 |
| Ludovic Fernandez | 37ac19583a | fix: update lego. | 2019-01-11 16:22:03 +01:00 |
| Emile Vauge | ca60c52199 | Happy 2019 | 2019-01-08 16:22:04 +01:00 |
47 changed files with 728 additions and 186 deletions

View File

@@ -1,5 +1,31 @@
# Change Log
## [v1.7.8](https://github.com/containous/traefik/tree/v1.7.8) (2019-01-29)
[All Commits](https://github.com/containous/traefik/compare/v1.7.7...v1.7.8)
**Bug fixes:**
- **[acme]** Updates lego. ([#4428](https://github.com/containous/traefik/pull/4428) by [ldez](https://github.com/ldez))
- **[acme]** Updates lego. ([#4376](https://github.com/containous/traefik/pull/4376) by [ldez](https://github.com/ldez))
- **[docker]** Fixes docker swarm mode refresh second for KV. ([#4420](https://github.com/containous/traefik/pull/4420) by [ldez](https://github.com/ldez))
- **[ecs]** Generic awsvpc support, not just Fargate ([#4360](https://github.com/containous/traefik/pull/4360) by [maartenvanderhoef](https://github.com/maartenvanderhoef))
- **[ecs]** Cache existing task definitions to avoid rate limiting ([#4177](https://github.com/containous/traefik/pull/4177) by [hwhelan-CB](https://github.com/hwhelan-CB))
- **[tls]** Check for dynamic tls updates on configuration preload ([#4022](https://github.com/containous/traefik/pull/4022) by [ffilippopoulos](https://github.com/ffilippopoulos))
- **[tracing]** Support Datadog tracer priority sampling ([#4359](https://github.com/containous/traefik/pull/4359) by [jcassee](https://github.com/jcassee))
**Documentation:**
- **[acme]** More detailed info about Google Cloud DNS. ([#4395](https://github.com/containous/traefik/pull/4395) by [ldez](https://github.com/ldez))
- **[acme]** Tested wildcard ACME challenge with DNSimple ([#4384](https://github.com/containous/traefik/pull/4384) by [tstackhouse](https://github.com/tstackhouse))
- **[docker]** Note about quotes for entrypoint definition with docker-compose ([#4390](https://github.com/containous/traefik/pull/4390) by [Dragnucs](https://github.com/Dragnucs))
- **[k8s]** Allow Træfik to update Ingress status ([#4397](https://github.com/containous/traefik/pull/4397) by [rbq](https://github.com/rbq))
- **[k8s]** Minor formatting fixes ([#4394](https://github.com/containous/traefik/pull/4394) by [dbirks](https://github.com/dbirks))
- **[metrics]** Missing information about statistics parameter ([#4393](https://github.com/containous/traefik/pull/4393) by [decima](https://github.com/decima))
- **[rules]** Route priorities: document minimum priority value ([#4374](https://github.com/containous/traefik/pull/4374) by [tw-360vier](https://github.com/tw-360vier))
- Removed repeated entryPoints.http from grpc.md ([#4370](https://github.com/containous/traefik/pull/4370) by [ishaanbahal](https://github.com/ishaanbahal))
- Happy 2019 ([#4367](https://github.com/containous/traefik/pull/4367) by [emilevauge](https://github.com/emilevauge))
**Misc:**
- Assert that test timeout service is ready. ([#4398](https://github.com/containous/traefik/pull/4398) by [timoreimann](https://github.com/timoreimann))
## [v1.7.7](https://github.com/containous/traefik/tree/v1.7.7) (2019-01-08)
[All Commits](https://github.com/containous/traefik/compare/v1.7.6...v1.7.7)

Gopkg.lock generated
View File

@@ -1376,7 +1376,6 @@
revision = "0c8571ac0ce161a5feb57375a9cdf148c98c0f70"
[[projects]]
branch = "master"
name = "github.com/xenolf/lego"
packages = [
"acme",
@@ -1452,9 +1451,11 @@
"providers/dns/vscale",
"providers/dns/vscale/internal",
"providers/dns/vultr",
"providers/dns/zoneee",
"registration"
]
revision = "43401f2475dd1f6cc2e220908f0caba246ea854e"
revision = "00ad82dec10ad0af1e037ee652ad1f1cc7015178"
version = "v2.1.0"
[[projects]]
branch = "master"
@@ -1619,8 +1620,8 @@
"ddtrace/opentracer",
"ddtrace/tracer"
]
revision = "48eeff27357376bcb31a15674dc4be9078de88b3"
version = "v1.5.0"
revision = "7fb2bce4b1ed6ab61f7a9e1be30dea56de19db7c"
version = "v1.8.0"
[[projects]]
name = "gopkg.in/fsnotify.v1"
@@ -1879,6 +1880,6 @@
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
inputs-digest = "4db3d8feea9f875e0c32ce071e49eceab18a89a3cef3d3b9bea59f2a992b2628"
inputs-digest = "781857167b93ec784d40c7c093a46c4d817f2d47b0dbc1f08fe3ec8ffc562cf9"
solver-name = "gps-cdcl"
solver-version = 1

View File

@@ -186,9 +186,9 @@
name = "github.com/vulcand/oxy"
[[constraint]]
branch = "master"
# branch = "master"
name = "github.com/xenolf/lego"
# version = "1.0.0"
version = "2.0.1"
[[constraint]]
name = "google.golang.org/grpc"
@@ -263,4 +263,4 @@
[[constraint]]
name = "gopkg.in/DataDog/dd-trace-go.v1"
version = "1.5.0"
version = "1.7.0"

View File

@@ -428,7 +428,7 @@ func (a *ACME) buildACMEClient(account *Account) (*lego.Client, error) {
config := lego.NewConfig(account)
config.CADirURL = caServer
config.KeyType = account.KeyType
config.Certificate.KeyType = account.KeyType
config.UserAgent = fmt.Sprintf("containous-traefik/%s", version.Version)
client, err := lego.NewClient(config)

View File

@@ -239,6 +239,7 @@ func NewTraefikDefaultPointersConfiguration() *TraefikConfiguration {
LocalAgentHostPort: "localhost:8126",
GlobalTag: "",
Debug: false,
PrioritySampling: false,
},
}

View File

@@ -235,6 +235,10 @@ func (gc *GlobalConfiguration) SetEffectiveConfiguration(configFile string) {
} else {
gc.Docker.TemplateVersion = 2
}
if gc.Docker.SwarmModeRefreshSeconds <= 0 {
gc.Docker.SwarmModeRefreshSeconds = 15
}
}
if gc.Marathon != nil {
@@ -368,6 +372,7 @@ func (gc *GlobalConfiguration) initTracing() {
LocalAgentHostPort: "localhost:8126",
GlobalTag: "",
Debug: false,
PrioritySampling: false,
}
}
if gc.Tracing.Zipkin != nil {

View File

@@ -236,7 +236,8 @@ The following rules are both `Matchers` and `Modifiers`, so the `Matcher` portio
#### Priorities
By default, routes will be sorted (in descending order) using rules length (to avoid path overlap):
`PathPrefix:/foo;Host:foo.com` (length == 28) will be matched before `PathPrefixStrip:/foobar` (length == 23) will be matched before `PathPrefix:/foo,/bar` (length == 20).
- `PathPrefix:/foo;Host:foo.com` (length == 28) will be matched before `PathPrefixStrip:/foobar` (length == 23) will be matched before `PathPrefix:/foo,/bar` (length == 20).
- A priority value of 0 will be ignored, so the default value will be calculated (rules length).
You can customize priority by frontend. The priority value override the rule length during sorting:
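As a hedged illustration of the sorting rule described above (not Traefik's actual router code), here is a minimal Go sketch that orders frontends by an explicit priority when one is set, falling back to the rule length otherwise:

```go
package main

import (
	"fmt"
	"sort"
)

// frontend is a hypothetical, simplified stand-in for a Traefik frontend.
type frontend struct {
	Rule     string
	Priority int // 0 means "not set": fall back to the rule length
}

// effectivePriority mirrors the documented behavior: a priority of 0 is
// ignored, so the rule length is used instead.
func effectivePriority(f frontend) int {
	if f.Priority > 0 {
		return f.Priority
	}
	return len(f.Rule)
}

func main() {
	frontends := []frontend{
		{Rule: "PathPrefix:/foo,/bar"},         // length 20
		{Rule: "PathPrefix:/foo;Host:foo.com"}, // length 28
		{Rule: "PathPrefixStrip:/foobar"},      // length 23
	}

	// Sort in descending order of effective priority, as described above.
	sort.Slice(frontends, func(i, j int) bool {
		return effectivePriority(frontends[i]) > effectivePriority(frontends[j])
	})

	for _, f := range frontends {
		fmt.Printf("%-30s effective priority %d\n", f.Rule, effectivePriority(f))
	}
}
```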

View File

@@ -284,11 +284,11 @@ Here is a list of supported `provider`s, that can automate the DNS verification,
| [CloudXNS](https://www.cloudxns.net) | `cloudxns` | `CLOUDXNS_API_KEY`, `CLOUDXNS_SECRET_KEY` | Not tested yet |
| [ConoHa](https://www.conoha.jp) | `conoha` | `CONOHA_TENANT_ID`, `CONOHA_API_USERNAME`, `CONOHA_API_PASSWORD` | YES |
| [DigitalOcean](https://www.digitalocean.com) | `digitalocean` | `DO_AUTH_TOKEN` | YES |
| [DNSimple](https://dnsimple.com) | `dnsimple` | `DNSIMPLE_OAUTH_TOKEN`, `DNSIMPLE_BASE_URL` | Not tested yet |
| [DNSimple](https://dnsimple.com) | `dnsimple` | `DNSIMPLE_OAUTH_TOKEN`, `DNSIMPLE_BASE_URL` | YES |
| [DNS Made Easy](https://dnsmadeeasy.com) | `dnsmadeeasy` | `DNSMADEEASY_API_KEY`, `DNSMADEEASY_API_SECRET`, `DNSMADEEASY_SANDBOX` | Not tested yet |
| [DNSPod](https://www.dnspod.com/) | `dnspod` | `DNSPOD_API_KEY` | Not tested yet |
| [DreamHost](https://www.dreamhost.com/) | `dreamhost` | `DREAMHOST_API_KEY` | YES |
| [Duck DNS](https://www.duckdns.org/) | `duckdns` | `DUCKDNS_TOKEN` | No |
| [Duck DNS](https://www.duckdns.org/) | `duckdns` | `DUCKDNS_TOKEN` | YES |
| [Dyn](https://dyn.com) | `dyn` | `DYN_CUSTOMER_NAME`, `DYN_USER_NAME`, `DYN_PASSWORD` | Not tested yet |
| External Program | `exec` | `EXEC_PATH` | YES |
| [Exoscale](https://www.exoscale.com) | `exoscale` | `EXOSCALE_API_KEY`, `EXOSCALE_API_SECRET`, `EXOSCALE_ENDPOINT` | YES |
@@ -297,7 +297,7 @@ Here is a list of supported `provider`s, that can automate the DNS verification,
| [Gandi v5](http://doc.livedns.gandi.net) | `gandiv5` | `GANDIV5_API_KEY` | YES |
| [Glesys](https://glesys.com/) | `glesys` | `GLESYS_API_USER`, `GLESYS_API_KEY`, `GLESYS_DOMAIN` | Not tested yet |
| [GoDaddy](https://godaddy.com/domains) | `godaddy` | `GODADDY_API_KEY`, `GODADDY_API_SECRET` | Not tested yet |
| [Google Cloud DNS](https://cloud.google.com/dns/docs/) | `gcloud` | `GCE_PROJECT`, `GCE_SERVICE_ACCOUNT_FILE` | YES |
| [Google Cloud DNS](https://cloud.google.com/dns/docs/) | `gcloud` | `GCE_PROJECT`, Application Default Credentials (2) (3), [`GCE_SERVICE_ACCOUNT_FILE`] | YES |
| [hosting.de](https://www.hosting.de) | `hostingde` | `HOSTINGDE_API_KEY`, `HOSTINGDE_ZONE_NAME` | Not tested yet |
| HTTP request | `httpreq` | `HTTPREQ_ENDPOINT`, `HTTPREQ_MODE`, `HTTPREQ_USERNAME`, `HTTPREQ_PASSWORD` (1) | YES |
| [IIJ](https://www.iij.ad.jp/) | `iij` | `IIJ_API_ACCESS_KEY`, `IIJ_API_SECRET_KEY`, `IIJ_DO_SERVICE_CODE` | Not tested yet |
@@ -325,8 +325,11 @@ Here is a list of supported `provider`s, that can automate the DNS verification,
| [VegaDNS](https://github.com/shupp/VegaDNS-API) | `vegadns` | `SECRET_VEGADNS_KEY`, `SECRET_VEGADNS_SECRET`, `VEGADNS_URL` | Not tested yet |
| [Vscale](https://vscale.io/) | `vscale` | `VSCALE_API_TOKEN` | YES |
| [VULTR](https://www.vultr.com) | `vultr` | `VULTR_API_KEY` | Not tested yet |
| [Zone.ee](https://www.zone.ee) | `zoneee` | `ZONEEE_API_USER`, `ZONEEE_API_KEY` | YES |
- (1): more information about the HTTP message format can be found [here](https://github.com/xenolf/lego/blob/master/providers/dns/httpreq/readme.md)
- (2): https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application
- (3): https://github.com/golang/oauth2/blob/36a7019397c4c86cf59eeab3bc0d188bac444277/google/default.go#L61-L76
#### `resolvers`

View File

@@ -301,7 +301,7 @@ curl -s "http://localhost:8080/health" | jq .
// average response time in seconds
"average_response_time_sec": 0.8648016000000001,
// request statistics [requires --statistics to be set]
// request statistics [requires --api.statistics to be set]
// ten most recent requests with 4xx and 5xx status codes
"recent_errors": [
{

View File

@@ -93,7 +93,7 @@ For more information about the CLI, see the documentation about [Traefik command
Whitespace is used as option separator and `,` is used as value separator for the list.
The names of the options are case-insensitive.
In compose file the entrypoint syntax is different:
In compose file the entrypoint syntax is different. Notice how quotes are used:
```yaml
traefik:

View File

@@ -156,4 +156,10 @@ Traefik supports three tracing backends: Jaeger, Zipkin and DataDog.
#
globalTag = ""
# Enable priority sampling. When using distributed tracing, this option must be enabled in order
# to get all the parts of a distributed trace sampled.
#
# Default: false
#
prioritySampling = false
```
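As a hedged sketch of how a `prioritySampling` flag maps onto Datadog tracer start options (mirroring the change to Traefik's Datadog tracing setup later in this diff; the agent address and service name here are placeholders):

```go
package main

import (
	datadog "gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer"
)

func main() {
	// prioritySampling stands in for the [tracing.datadog] option documented above.
	prioritySampling := true

	opts := []datadog.StartOption{
		datadog.WithAgentAddr("localhost:8126"),
		datadog.WithServiceName("traefik"),
		datadog.WithDebugMode(false),
	}
	if prioritySampling {
		// Only appended when enabled, mirroring Traefik's tracing setup.
		opts = append(opts, datadog.WithPrioritySampling())
	}

	datadog.Start(opts...)
	defer datadog.Stop()
}
```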

View File

@@ -14,7 +14,6 @@ defaultEntryPoints = ["https"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http]
[api]

View File

@@ -679,7 +679,7 @@ spec:
[examples/k8s/cheese-ingress.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/cheese-ingress.yaml)
!!! note
we list each hostname, and add a backend service.
We list each hostname, and add a backend service.
```shell
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/cheese-ingress.yaml
@@ -783,11 +783,11 @@ Traefik will now look for cheddar service endpoints (ports on healthy pods) in b
Deploying cheddar into the cheese namespace and afterwards shutting down cheddar in the default namespace is enough to migrate the traffic.
!!! note
The kubernetes documentation does not specify this merging behavior.
The kubernetes documentation does not specify this merging behavior.
!!! note
Merging ingress definitions can cause problems if the annotations differ or if the services handle requests differently.
Be careful and extra cautious when running multiple overlapping ingress definitions.
Merging ingress definitions can cause problems if the annotations differ or if the services handle requests differently.
Be careful and extra cautious when running multiple overlapping ingress definitions.
## Specifying Routing Priorities

View File

@@ -22,6 +22,12 @@ rules:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses/status
verbs:
- update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1

View File

@@ -1,5 +1,8 @@
consul:
image: consul
# use v1.4.0 because https://github.com/hashicorp/consul/issues/5270
# v1.4.1 cannot be used.
# waiting for v1.4.2
image: consul:1.4.0
command: agent -server -bootstrap-expect 1 -client 0.0.0.0 -log-level debug -ui
ports:
- "8400:8400"

View File

@@ -1,6 +1,7 @@
package integration
import (
"fmt"
"net/http"
"os"
"time"
@@ -38,6 +39,10 @@ func (s *TimeoutSuite) TestForwardingTimeouts(c *check.C) {
c.Assert(err, checker.IsNil)
c.Assert(response.StatusCode, checker.Equals, http.StatusGatewayTimeout)
// Check that timeout service is available
statusURL := fmt.Sprintf("http://%s:9000/statusTest?status=200", httpTimeoutEndpoint)
c.Assert(try.GetRequest(statusURL, 60*time.Second, try.StatusCodeIs(http.StatusOK)), checker.IsNil)
// This simulates a ResponseHeaderTimeout.
response, err = http.Get("http://127.0.0.1:8000/responseHeaderTimeout?sleep=1000")
c.Assert(err, checker.IsNil)
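As a generic, hedged sketch of the kind of readiness polling added in the hunk above, using only the standard library rather than Traefik's integration-test `try` helper (URL and timeout values are illustrative):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForStatus polls url until it answers with the wanted status code or the
// deadline passes, roughly what the readiness assertion above relies on.
func waitForStatus(url string, want int, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == want {
				return nil
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("service not ready after %s (last error: %v)", timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	err := waitForStatus("http://127.0.0.1:9000/statusTest?status=200", http.StatusOK, 60*time.Second)
	fmt.Println(err)
}
```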

View File

@@ -18,6 +18,7 @@ type Config struct {
LocalAgentHostPort string `description:"Set datadog-agent's host:port that the reporter will used. Defaults to localhost:8126" export:"false"`
GlobalTag string `description:"Key:Value tag to be set on all the spans." export:"true"`
Debug bool `description:"Enable DataDog debug." export:"true"`
PrioritySampling bool `description:"Enable priority sampling. When using distributed tracing, this option must be enabled in order to get all the parts of a distributed trace sampled."`
}
// Setup sets up the tracer
@@ -29,12 +30,16 @@ func (c *Config) Setup(serviceName string) (opentracing.Tracer, io.Closer, error
value = tag[1]
}
tracer := ddtracer.New(
opts := []datadog.StartOption{
datadog.WithAgentAddr(c.LocalAgentHostPort),
datadog.WithServiceName(serviceName),
datadog.WithGlobalTag(tag[0], value),
datadog.WithDebugMode(c.Debug),
)
}
if c.PrioritySampling {
opts = append(opts, datadog.WithPrioritySampling())
}
tracer := ddtracer.New(opts...)
// Without this, child spans are getting the NOOP tracer
opentracing.SetGlobalTracer(tracer)

View File

@@ -25,7 +25,7 @@ theme:
prev: 'Previous'
next: 'Next'
copyright: "Copyright &copy; 2016-2018 Containous SAS"
copyright: "Copyright &copy; 2016-2019 Containous"
google_analytics:
- 'UA-51880359-3'

View File

@@ -232,7 +232,7 @@ func (p *Provider) getClient() (*lego.Client, error) {
config := lego.NewConfig(account)
config.CADirURL = caServer
config.KeyType = account.KeyType
config.Certificate.KeyType = account.KeyType
config.UserAgent = fmt.Sprintf("containous-traefik/%s", version.Version)
client, err := lego.NewClient(config)

View File

@@ -19,9 +19,11 @@ import (
"github.com/containous/traefik/provider"
"github.com/containous/traefik/safe"
"github.com/containous/traefik/types"
"github.com/patrickmn/go-cache"
)
var _ provider.Provider = (*Provider)(nil)
var existingTaskDefCache = cache.New(30*time.Minute, 5*time.Minute)
// Provider holds configurations of the provider.
type Provider struct {
@@ -291,7 +293,7 @@ func (p *Provider) listInstances(ctx context.Context, client *awsClient) ([]ecsI
}
var mach *machine
if aws.StringValue(task.LaunchType) == ecs.LaunchTypeFargate {
if len(task.Attachments) != 0 {
var ports []portMapping
for _, mapping := range containerDefinition.PortMappings {
if mapping != nil {
@@ -400,16 +402,22 @@ func (p *Provider) lookupEc2Instances(ctx context.Context, client *awsClient, cl
func (p *Provider) lookupTaskDefinitions(ctx context.Context, client *awsClient, taskDefArns map[string]*ecs.Task) (map[string]*ecs.TaskDefinition, error) {
taskDef := make(map[string]*ecs.TaskDefinition)
for arn, task := range taskDefArns {
resp, err := client.ecs.DescribeTaskDefinitionWithContext(ctx, &ecs.DescribeTaskDefinitionInput{
TaskDefinition: task.TaskDefinitionArn,
})
if definition, ok := existingTaskDefCache.Get(arn); ok {
taskDef[arn] = definition.(*ecs.TaskDefinition)
log.Debugf("Found cached task definition for %s. Skipping the call", arn)
} else {
resp, err := client.ecs.DescribeTaskDefinitionWithContext(ctx, &ecs.DescribeTaskDefinitionInput{
TaskDefinition: task.TaskDefinitionArn,
})
if err != nil {
log.Errorf("Unable to describe task definition: %s", err)
return nil, err
if err != nil {
log.Errorf("Unable to describe task definition: %s", err)
return nil, err
}
taskDef[arn] = resp.TaskDefinition
existingTaskDefCache.Set(arn, resp.TaskDefinition, cache.DefaultExpiration)
}
taskDef[arn] = resp.TaskDefinition
}
return taskDef, nil
}
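A standalone, hedged sketch of the `github.com/patrickmn/go-cache` pattern used in the hunk above; the `describeTaskDefinition` helper is hypothetical and stands in for the AWS API call:

```go
package main

import (
	"fmt"
	"time"

	cache "github.com/patrickmn/go-cache"
)

// taskDefCache expires entries after 30 minutes and purges them every
// 5 minutes, the same values the provider above uses.
var taskDefCache = cache.New(30*time.Minute, 5*time.Minute)

// describeTaskDefinition is a hypothetical stand-in for the AWS DescribeTaskDefinition call.
func describeTaskDefinition(arn string) (string, error) {
	return "definition-for-" + arn, nil
}

func lookupTaskDefinition(arn string) (string, error) {
	if def, ok := taskDefCache.Get(arn); ok {
		// Cache hit: skip the remote call and avoid rate limiting.
		return def.(string), nil
	}
	def, err := describeTaskDefinition(arn)
	if err != nil {
		return "", err
	}
	taskDefCache.Set(arn, def, cache.DefaultExpiration)
	return def, nil
}

func main() {
	def, _ := lookupTaskDefinition("example-task-definition-arn")
	fmt.Println(def)
}
```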

View File

@@ -175,17 +175,36 @@ func (p *Provider) loadFileConfig(filename string, parseTemplate bool) (*types.C
} else {
configuration, err = p.DecodeConfiguration(fileContent)
}
if err != nil {
return nil, err
}
var tlsConfigs []*tls.Configuration
for _, conf := range configuration.TLS {
bytes, err := conf.Certificate.CertFile.Read()
if err != nil {
log.Error(err)
continue
}
conf.Certificate.CertFile = tls.FileOrContent(string(bytes))
bytes, err = conf.Certificate.KeyFile.Read()
if err != nil {
log.Error(err)
continue
}
conf.Certificate.KeyFile = tls.FileOrContent(string(bytes))
tlsConfigs = append(tlsConfigs, conf)
}
configuration.TLS = tlsConfigs
if configuration == nil || configuration.Backends == nil && configuration.Frontends == nil && configuration.TLS == nil {
configuration = &types.Configuration{
Frontends: make(map[string]*types.Frontend),
Backends: make(map[string]*types.Backend),
}
}
return configuration, err
return configuration, nil
}
func (p *Provider) loadFileConfigFromDirectory(directory string, configuration *types.Configuration) (*types.Configuration, error) {

View File

@@ -12,6 +12,7 @@ import (
"github.com/containous/traefik/safe"
"github.com/containous/traefik/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// createRandomFile Helper
@@ -332,3 +333,24 @@ func createProvider(t *testing.T, test ProvideTestCase, watch bool) (*Provider,
os.Remove(tempDir)
}
}
func TestTLSContent(t *testing.T) {
tempDir := createTempDir(t, "testdir")
defer os.Remove(tempDir)
fileTLS := createRandomFile(t, tempDir, "CONTENT")
fileConfig := createRandomFile(t, tempDir, `
[[tls]]
entryPoints = ["https"]
[tls.certificate]
certFile = "`+fileTLS.Name()+`"
keyFile = "`+fileTLS.Name()+`"
`)
provider := &Provider{}
configuration, err := provider.loadFileConfig(fileConfig.Name(), true)
require.NoError(t, err)
require.Equal(t, "CONTENT", configuration.TLS[0].Certificate.CertFile.String())
require.Equal(t, "CONTENT", configuration.TLS[0].Certificate.KeyFile.String())
}

View File

@@ -44,7 +44,7 @@ func New(httpClient *http.Client, userAgent string, caDirURL, kid string, privat
jws := secure.NewJWS(privateKey, kid, nonceManager)
c := &Core{doer: doer, nonceManager: nonceManager, jws: jws, directory: dir}
c := &Core{doer: doer, nonceManager: nonceManager, jws: jws, directory: dir, HTTPClient: httpClient}
c.common.core = c
c.Accounts = (*AccountService)(&c.common)

View File

@@ -10,7 +10,7 @@ import (
"fmt"
"github.com/xenolf/lego/acme/api/internal/nonces"
"gopkg.in/square/go-jose.v2"
jose "gopkg.in/square/go-jose.v2"
)
// JWS Represents a JWS.

View File

@@ -5,10 +5,10 @@ package sender
const (
// ourUserAgent is the User-Agent of this underlying library package.
ourUserAgent = "xenolf-acme/1.2.1"
ourUserAgent = "xenolf-acme/2.1.0"
// ourUserAgentComment is part of the UA comment linked to the version status of this underlying library package.
// values: detach|release
// NOTE: Update this with each tagged release.
ourUserAgentComment = "detach"
ourUserAgentComment = "release"
)

View File

@@ -17,6 +17,7 @@ import (
"github.com/xenolf/lego/certcrypto"
"github.com/xenolf/lego/challenge"
"github.com/xenolf/lego/log"
"github.com/xenolf/lego/platform/wait"
"golang.org/x/crypto/ocsp"
"golang.org/x/net/idna"
)
@@ -60,17 +61,24 @@ type resolver interface {
Solve(authorizations []acme.Authorization) error
}
type Certifier struct {
core *api.Core
keyType certcrypto.KeyType
resolver resolver
type CertifierOptions struct {
KeyType certcrypto.KeyType
Timeout time.Duration
}
func NewCertifier(core *api.Core, keyType certcrypto.KeyType, resolver resolver) *Certifier {
// Certifier A service to obtain/renew/revoke certificates.
type Certifier struct {
core *api.Core
resolver resolver
options CertifierOptions
}
// NewCertifier creates a Certifier.
func NewCertifier(core *api.Core, resolver resolver, options CertifierOptions) *Certifier {
return &Certifier{
core: core,
keyType: keyType,
resolver: resolver,
options: options,
}
}
@@ -191,7 +199,7 @@ func (c *Certifier) ObtainForCSR(csr x509.CertificateRequest, bundle bool) (*Res
func (c *Certifier) getForOrder(domains []string, order acme.ExtendedOrder, bundle bool, privateKey crypto.PrivateKey, mustStaple bool) (*Resource, error) {
if privateKey == nil {
var err error
privateKey, err = certcrypto.GeneratePrivateKey(c.keyType)
privateKey, err = certcrypto.GeneratePrivateKey(c.options.KeyType)
if err != nil {
return nil, err
}
@@ -237,9 +245,9 @@ func (c *Certifier) getForCSR(domains []string, order acme.ExtendedOrder, bundle
if respOrder.Status == acme.StatusValid {
// if the certificate is available right away, short cut!
ok, err := c.checkResponse(respOrder, certRes, bundle)
if err != nil {
return nil, err
ok, errR := c.checkResponse(respOrder, certRes, bundle)
if errR != nil {
return nil, errR
}
if ok {
@@ -247,34 +255,26 @@ func (c *Certifier) getForCSR(domains []string, order acme.ExtendedOrder, bundle
}
}
return c.waitForCertificate(certRes, order.Location, bundle)
}
func (c *Certifier) waitForCertificate(certRes *Resource, orderURL string, bundle bool) (*Resource, error) {
stopTimer := time.NewTimer(30 * time.Second)
defer stopTimer.Stop()
retryTick := time.NewTicker(500 * time.Millisecond)
defer retryTick.Stop()
for {
select {
case <-stopTimer.C:
return nil, errors.New("certificate polling timed out")
case <-retryTick.C:
order, err := c.core.Orders.Get(orderURL)
if err != nil {
return nil, err
}
done, err := c.checkResponse(order, certRes, bundle)
if err != nil {
return nil, err
}
if done {
return certRes, nil
}
}
timeout := c.options.Timeout
if c.options.Timeout <= 0 {
timeout = 30 * time.Second
}
err = wait.For("certificate", timeout, timeout/60, func() (bool, error) {
ord, errW := c.core.Orders.Get(order.Location)
if errW != nil {
return false, errW
}
done, errW := c.checkResponse(ord, certRes, bundle)
if errW != nil {
return false, errW
}
return done, nil
})
return certRes, err
}
// checkResponse checks to see if the certificate is ready and a link is contained in the response.

View File

@@ -53,9 +53,10 @@ func NewClient(config *Config) (*Client, error) {
solversManager := resolver.NewSolversManager(core)
prober := resolver.NewProber(solversManager)
certifier := certificate.NewCertifier(core, prober, certificate.CertifierOptions{KeyType: config.Certificate.KeyType, Timeout: config.Certificate.Timeout})
return &Client{
Certificate: certificate.NewCertifier(core, config.KeyType, prober),
Certificate: certifier,
Challenge: solversManager,
Registration: registration.NewRegistrar(core, config.User),
core: core,

View File

@@ -35,22 +35,30 @@ const (
)
type Config struct {
CADirURL string
User registration.User
KeyType certcrypto.KeyType
UserAgent string
HTTPClient *http.Client
CADirURL string
User registration.User
UserAgent string
HTTPClient *http.Client
Certificate CertificateConfig
}
func NewConfig(user registration.User) *Config {
return &Config{
CADirURL: LEDirectoryProduction,
User: user,
KeyType: certcrypto.RSA2048,
HTTPClient: createDefaultHTTPClient(),
Certificate: CertificateConfig{
KeyType: certcrypto.RSA2048,
Timeout: 30 * time.Second,
},
}
}
type CertificateConfig struct {
KeyType certcrypto.KeyType
Timeout time.Duration
}
// createDefaultHTTPClient Creates an HTTP client with a reasonable timeout value
// and potentially a custom *x509.CertPool
// based on the caCertificatesEnvVar environment variable (see the `initCertPool` function)
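Putting the new `Certificate` sub-config together with the Traefik-side changes earlier in this diff (`config.Certificate.KeyType` instead of `config.KeyType`), here is a hedged sketch of constructing a lego v2 client; the `user` type is a hypothetical account implementing `registration.User`:

```go
package main

import (
	"crypto"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"log"
	"time"

	"github.com/xenolf/lego/certcrypto"
	"github.com/xenolf/lego/lego"
	"github.com/xenolf/lego/registration"
)

// user is a hypothetical account type satisfying registration.User.
type user struct {
	email string
	reg   *registration.Resource
	key   crypto.PrivateKey
}

func (u *user) GetEmail() string                        { return u.email }
func (u *user) GetRegistration() *registration.Resource { return u.reg }
func (u *user) GetPrivateKey() crypto.PrivateKey        { return u.key }

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	account := &user{email: "user@example.com", key: key}

	config := lego.NewConfig(account)
	config.CADirURL = lego.LEDirectoryProduction
	// Key type and timeout now live under config.Certificate,
	// matching the acme.go changes earlier in this diff.
	config.Certificate.KeyType = certcrypto.RSA2048
	config.Certificate.Timeout = 30 * time.Second

	client, err := lego.NewClient(config)
	if err != nil {
		log.Fatal(err)
	}
	_ = client
}
```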

View File

@@ -150,7 +150,7 @@ func (d *DNSProvider) getHostedZone(domain string) (string, string, error) {
domains = append(domains, response.Domains.Domain...)
if response.PageNumber >= response.PageSize {
if response.PageNumber*response.PageSize >= response.TotalCount {
break
}

View File

@@ -7,7 +7,7 @@ import (
"sync"
"time"
"github.com/ldez/go-auroradns"
auroradns "github.com/ldez/go-auroradns"
"github.com/xenolf/lego/challenge/dns01"
"github.com/xenolf/lego/platform/config/env"
)

View File

@@ -7,7 +7,7 @@ import (
"net/http"
"time"
"github.com/cloudflare/cloudflare-go"
cloudflare "github.com/cloudflare/cloudflare-go"
"github.com/xenolf/lego/challenge/dns01"
"github.com/xenolf/lego/log"
"github.com/xenolf/lego/platform/config/env"

View File

@@ -54,6 +54,7 @@ import (
"github.com/xenolf/lego/providers/dns/vegadns"
"github.com/xenolf/lego/providers/dns/vscale"
"github.com/xenolf/lego/providers/dns/vultr"
"github.com/xenolf/lego/providers/dns/zoneee"
)
// NewDNSChallengeProviderByName Factory for DNS providers
@@ -159,6 +160,8 @@ func NewDNSChallengeProviderByName(name string) (challenge.Provider, error) {
return vultr.NewDNSProvider()
case "vscale":
return vscale.NewDNSProvider()
case "zoneee":
return zoneee.NewDNSProvider()
default:
return nil, fmt.Errorf("unrecognised DNS provider: %s", name)
}

View File

@@ -9,7 +9,7 @@ import (
"strings"
"time"
"github.com/decker502/dnspod-go"
dnspod "github.com/decker502/dnspod-go"
"github.com/xenolf/lego/challenge/dns01"
"github.com/xenolf/lego/platform/config/env"
)

View File

@@ -55,11 +55,13 @@ type DNSProvider struct {
// Project name must be passed in the environment variable: GCE_PROJECT.
// A Service Account file can be passed in the environment variable: GCE_SERVICE_ACCOUNT_FILE
func NewDNSProvider() (*DNSProvider, error) {
// Use a service account file if specified via environment variable.
if saFile, ok := os.LookupEnv("GCE_SERVICE_ACCOUNT_FILE"); ok {
return NewDNSProviderServiceAccount(saFile)
}
project := os.Getenv("GCE_PROJECT")
// Use default credentials.
project := env.GetOrDefaultString("GCE_PROJECT", "")
return NewDNSProviderCredentials(project)
}
@@ -94,15 +96,20 @@ func NewDNSProviderServiceAccount(saFile string) (*DNSProvider, error) {
return nil, fmt.Errorf("googlecloud: unable to read Service Account file: %v", err)
}
// read project id from service account file
var datJSON struct {
ProjectID string `json:"project_id"`
// If GCE_PROJECT is non-empty it overrides the project in the service
// account file.
project := env.GetOrDefaultString("GCE_PROJECT", "")
if project == "" {
// read project id from service account file
var datJSON struct {
ProjectID string `json:"project_id"`
}
err = json.Unmarshal(dat, &datJSON)
if err != nil || datJSON.ProjectID == "" {
return nil, fmt.Errorf("googlecloud: project ID not found in Google Cloud Service Account file")
}
project = datJSON.ProjectID
}
err = json.Unmarshal(dat, &datJSON)
if err != nil || datJSON.ProjectID == "" {
return nil, fmt.Errorf("googlecloud: project ID not found in Google Cloud Service Account file")
}
project := datJSON.ProjectID
conf, err := google.JWTConfigFromJSON(dat, dns.NdevClouddnsReadwriteScope)
if err != nil {

View File

@@ -180,10 +180,12 @@ func (d *DNSProvider) CleanUp(domain, token, keyAuth string) error {
// Find the challenge TXT record and remove it if found.
var found bool
for i, h := range records {
var newRecords []Record
for _, h := range records {
if h.Name == ch.key && h.Type == "TXT" {
records = append(records[:i], records[i+1:]...)
found = true
} else {
newRecords = append(newRecords, h)
}
}
@@ -191,7 +193,7 @@ func (d *DNSProvider) CleanUp(domain, token, keyAuth string) error {
return nil
}
err = d.setHosts(ch.sld, ch.tld, records)
err = d.setHosts(ch.sld, ch.tld, newRecords)
if err != nil {
return fmt.Errorf("namecheap: %v", err)
}
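The fix above filters records into a new slice instead of deleting from the slice being ranged over. A minimal, generic sketch of that idiom (illustrative only, not the provider's code):

```go
package main

import "fmt"

type record struct {
	Name string
	Type string
}

// removeTXT keeps every record except the TXT entries with the given name,
// building a new slice rather than mutating the one being iterated.
func removeTXT(records []record, name string) (kept []record, found bool) {
	for _, r := range records {
		if r.Name == name && r.Type == "TXT" {
			found = true
			continue
		}
		kept = append(kept, r)
	}
	return kept, found
}

func main() {
	records := []record{
		{Name: "_acme-challenge", Type: "TXT"},
		{Name: "www", Type: "A"},
	}
	kept, found := removeTXT(records, "_acme-challenge")
	fmt.Println(found, kept)
}
```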

View File

@@ -13,7 +13,7 @@ import (
)
const (
defaultBaseURL = "https://dns.api.cloud.nifty.com"
defaultBaseURL = "https://dns.api.nifcloud.com"
apiVersion = "2012-12-12N2013-12-16"
// XMLNs XML NS of Route53
XMLNs = "https://route53.amazonaws.com/doc/2012-12-12/"

View File

@@ -22,16 +22,19 @@ type Config struct {
PropagationTimeout time.Duration
PollingInterval time.Duration
TTL int
SequenceInterval time.Duration
DNSTimeout time.Duration
}
// NewDefaultConfig returns a default configuration for the DNSProvider
func NewDefaultConfig() *Config {
return &Config{
TSIGAlgorithm: env.GetOrDefaultString("RFC2136_TSIG_ALGORITHM", dns.HmacMD5),
TTL: env.GetOrDefaultInt("RFC2136_TTL", dns01.DefaultTTL),
PropagationTimeout: env.GetOrDefaultSecond("RFC2136_PROPAGATION_TIMEOUT",
env.GetOrDefaultSecond("RFC2136_TIMEOUT", 60*time.Second)),
PollingInterval: env.GetOrDefaultSecond("RFC2136_POLLING_INTERVAL", 2*time.Second),
TSIGAlgorithm: env.GetOrDefaultString("RFC2136_TSIG_ALGORITHM", dns.HmacMD5),
TTL: env.GetOrDefaultInt("RFC2136_TTL", dns01.DefaultTTL),
PropagationTimeout: env.GetOrDefaultSecond("RFC2136_PROPAGATION_TIMEOUT", env.GetOrDefaultSecond("RFC2136_TIMEOUT", 60*time.Second)),
PollingInterval: env.GetOrDefaultSecond("RFC2136_POLLING_INTERVAL", 2*time.Second),
SequenceInterval: env.GetOrDefaultSecond("RFC2136_SEQUENCE_INTERVAL", dns01.DefaultPropagationTimeout),
DNSTimeout: env.GetOrDefaultSecond("RFC2136_DNS_TIMEOUT", 10*time.Second),
}
}
@@ -102,13 +105,19 @@ func (d *DNSProvider) Timeout() (timeout, interval time.Duration) {
return d.config.PropagationTimeout, d.config.PollingInterval
}
// Sequential All DNS challenges for this provider will be resolved sequentially.
// Returns the interval between each iteration.
func (d *DNSProvider) Sequential() time.Duration {
return d.config.SequenceInterval
}
// Present creates a TXT record using the specified parameters
func (d *DNSProvider) Present(domain, token, keyAuth string) error {
fqdn, value := dns01.GetRecord(domain, keyAuth)
err := d.changeRecord("INSERT", fqdn, value, d.config.TTL)
if err != nil {
return fmt.Errorf("rfc2136: %v", err)
return fmt.Errorf("rfc2136: failed to insert: %v", err)
}
return nil
}
@@ -119,7 +128,7 @@ func (d *DNSProvider) CleanUp(domain, token, keyAuth string) error {
err := d.changeRecord("REMOVE", fqdn, value, d.config.TTL)
if err != nil {
return fmt.Errorf("rfc2136: %v", err)
return fmt.Errorf("rfc2136: failed to remove: %v", err)
}
return nil
}
@@ -152,7 +161,7 @@ func (d *DNSProvider) changeRecord(action, fqdn, value string, ttl int) error {
}
// Setup client
c := new(dns.Client)
c := &dns.Client{Timeout: d.config.DNSTimeout}
c.SingleInflight = true
// TSIG authentication / msg signing
@@ -167,7 +176,7 @@ func (d *DNSProvider) changeRecord(action, fqdn, value string, ttl int) error {
return fmt.Errorf("DNS update failed: %v", err)
}
if reply != nil && reply.Rcode != dns.RcodeSuccess {
return fmt.Errorf("DNS update failed. Server replied: %s", dns.RcodeToString[reply.Rcode])
return fmt.Errorf("DNS update failed: server replied: %s", dns.RcodeToString[reply.Rcode])
}
return nil

View File

@@ -5,6 +5,7 @@ import (
"errors"
"fmt"
"strings"
"sync"
"time"
"github.com/transip/gotransip"
@@ -33,8 +34,9 @@ func NewDefaultConfig() *Config {
// DNSProvider describes a provider for TransIP
type DNSProvider struct {
config *Config
client gotransip.SOAPClient
config *Config
client gotransip.Client
dnsEntriesMu sync.Mutex
}
// NewDNSProvider returns a DNSProvider instance configured for TransIP.
@@ -90,6 +92,10 @@ func (d *DNSProvider) Present(domain, token, keyAuth string) error {
// get the subDomain
subDomain := strings.TrimSuffix(dns01.UnFqdn(fqdn), "."+domainName)
// use mutex to prevent race condition from GetInfo until SetDNSEntries
d.dnsEntriesMu.Lock()
defer d.dnsEntriesMu.Unlock()
// get all DNS entries
info, err := transipdomain.GetInfo(d.client, domainName)
if err != nil {
@@ -126,6 +132,10 @@ func (d *DNSProvider) CleanUp(domain, token, keyAuth string) error {
// get the subDomain
subDomain := strings.TrimSuffix(dns01.UnFqdn(fqdn), "."+domainName)
// use mutex to prevent race condition from GetInfo until SetDNSEntries
d.dnsEntriesMu.Lock()
defer d.dnsEntriesMu.Unlock()
// get all DNS entries
info, err := transipdomain.GetInfo(d.client, domainName)
if err != nil {

View File

@@ -0,0 +1,132 @@
package zoneee
import (
"bytes"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"path"
)
const defaultEndpoint = "https://api.zone.eu/v2/dns/"
type txtRecord struct {
// Identifier (identificator)
ID string `json:"id,omitempty"`
// Hostname
Name string `json:"name"`
// TXT content value
Destination string `json:"destination"`
// Can this record be deleted
Delete bool `json:"delete,omitempty"`
// Can this record be modified
Modify bool `json:"modify,omitempty"`
// API url to get this entity
ResourceURL string `json:"resource_url,omitempty"`
}
func (d *DNSProvider) addTxtRecord(domain string, record txtRecord) ([]txtRecord, error) {
reqBody := &bytes.Buffer{}
if err := json.NewEncoder(reqBody).Encode(record); err != nil {
return nil, err
}
req, err := d.makeRequest(http.MethodPost, path.Join(domain, "txt"), reqBody)
if err != nil {
return nil, err
}
var resp []txtRecord
if err := d.sendRequest(req, &resp); err != nil {
return nil, err
}
return resp, nil
}
func (d *DNSProvider) getTxtRecords(domain string) ([]txtRecord, error) {
req, err := d.makeRequest(http.MethodGet, path.Join(domain, "txt"), nil)
if err != nil {
return nil, err
}
var resp []txtRecord
if err := d.sendRequest(req, &resp); err != nil {
return nil, err
}
return resp, nil
}
func (d *DNSProvider) removeTxtRecord(domain, id string) error {
req, err := d.makeRequest(http.MethodDelete, path.Join(domain, "txt", id), nil)
if err != nil {
return err
}
return d.sendRequest(req, nil)
}
func (d *DNSProvider) makeRequest(method, resource string, body io.Reader) (*http.Request, error) {
uri, err := d.config.Endpoint.Parse(resource)
if err != nil {
return nil, err
}
req, err := http.NewRequest(method, uri.String(), body)
if err != nil {
return nil, err
}
req.Header.Set("Content-Type", "application/json")
req.SetBasicAuth(d.config.Username, d.config.APIKey)
return req, nil
}
func (d *DNSProvider) sendRequest(req *http.Request, result interface{}) error {
resp, err := d.config.HTTPClient.Do(req)
if err != nil {
return err
}
if err = checkResponse(resp); err != nil {
return err
}
defer resp.Body.Close()
if result == nil {
return nil
}
raw, err := ioutil.ReadAll(resp.Body)
if err != nil {
return err
}
err = json.Unmarshal(raw, result)
if err != nil {
return fmt.Errorf("unmarshaling %T error [status code=%d]: %v: %s", result, resp.StatusCode, err, string(raw))
}
return err
}
func checkResponse(resp *http.Response) error {
if resp.StatusCode < http.StatusBadRequest {
return nil
}
if resp.Body == nil {
return fmt.Errorf("response body is nil, status code=%d", resp.StatusCode)
}
defer resp.Body.Close()
raw, err := ioutil.ReadAll(resp.Body)
if err != nil {
return fmt.Errorf("unable to read body: status code=%d, error=%v", resp.StatusCode, err)
}
return fmt.Errorf("status code=%d: %s", resp.StatusCode, string(raw))
}

View File

@@ -0,0 +1,134 @@
// Package zoneee implements a DNS provider for solving the DNS-01 challenge through zone.ee.
package zoneee
import (
"errors"
"fmt"
"net/http"
"net/url"
"time"
"github.com/xenolf/lego/challenge/dns01"
"github.com/xenolf/lego/platform/config/env"
)
// Config is used to configure the creation of the DNSProvider
type Config struct {
Endpoint *url.URL
Username string
APIKey string
PropagationTimeout time.Duration
PollingInterval time.Duration
HTTPClient *http.Client
}
// NewDefaultConfig returns a default configuration for the DNSProvider
func NewDefaultConfig() *Config {
endpoint, _ := url.Parse(defaultEndpoint)
return &Config{
Endpoint: endpoint,
// zone.ee can take up to 5min to propagate according to the support
PropagationTimeout: env.GetOrDefaultSecond("ZONEEE_PROPAGATION_TIMEOUT", 5*time.Minute),
PollingInterval: env.GetOrDefaultSecond("ZONEEE_POLLING_INTERVAL", 5*time.Second),
HTTPClient: &http.Client{
Timeout: env.GetOrDefaultSecond("ZONEEE_HTTP_TIMEOUT", 30*time.Second),
},
}
}
// DNSProvider describes a provider for acme-proxy
type DNSProvider struct {
config *Config
}
// NewDNSProvider returns a DNSProvider instance.
func NewDNSProvider() (*DNSProvider, error) {
values, err := env.Get("ZONEEE_API_USER", "ZONEEE_API_KEY")
if err != nil {
return nil, fmt.Errorf("zoneee: %v", err)
}
rawEndpoint := env.GetOrDefaultString("ZONEEE_ENDPOINT", defaultEndpoint)
endpoint, err := url.Parse(rawEndpoint)
if err != nil {
return nil, fmt.Errorf("zoneee: %v", err)
}
config := NewDefaultConfig()
config.Username = values["ZONEEE_API_USER"]
config.APIKey = values["ZONEEE_API_KEY"]
config.Endpoint = endpoint
return NewDNSProviderConfig(config)
}
// NewDNSProviderConfig return a DNSProvider .
func NewDNSProviderConfig(config *Config) (*DNSProvider, error) {
if config == nil {
return nil, errors.New("zoneee: the configuration of the DNS provider is nil")
}
if config.Username == "" {
return nil, fmt.Errorf("zoneee: credentials missing: username")
}
if config.APIKey == "" {
return nil, fmt.Errorf("zoneee: credentials missing: API key")
}
if config.Endpoint == nil {
return nil, errors.New("zoneee: the endpoint is missing")
}
return &DNSProvider{config: config}, nil
}
// Timeout returns the timeout and interval to use when checking for DNS propagation.
// Adjusting here to cope with spikes in propagation times.
func (d *DNSProvider) Timeout() (timeout, interval time.Duration) {
return d.config.PropagationTimeout, d.config.PollingInterval
}
// Present creates a TXT record to fulfill the dns-01 challenge
func (d *DNSProvider) Present(domain, token, keyAuth string) error {
fqdn, value := dns01.GetRecord(domain, keyAuth)
record := txtRecord{
Name: fqdn[:len(fqdn)-1],
Destination: value,
}
_, err := d.addTxtRecord(domain, record)
if err != nil {
return fmt.Errorf("zoneee: %v", err)
}
return nil
}
// CleanUp removes the TXT record previously created
func (d *DNSProvider) CleanUp(domain, token, keyAuth string) error {
_, value := dns01.GetRecord(domain, keyAuth)
records, err := d.getTxtRecords(domain)
if err != nil {
return fmt.Errorf("zoneee: %v", err)
}
var id string
for _, record := range records {
if record.Destination == value {
id = record.ID
}
}
if id == "" {
return fmt.Errorf("zoneee: txt record does not exist for %v", value)
}
if err = d.removeTxtRecord(domain, id); err != nil {
return fmt.Errorf("zoneee: %v", err)
}
return nil
}

View File

@@ -1,6 +1,7 @@
package tracer
import (
"net/http"
"os"
"path/filepath"
"time"
@@ -33,6 +34,9 @@ type config struct {
// propagator propagates span context cross-process
propagator Propagator
// httpRoundTripper defines the http.RoundTripper used by the agent transport.
httpRoundTripper http.RoundTripper
}
// StartOption represents a function that can be provided as a parameter to Start.
@@ -45,6 +49,17 @@ func defaults(c *config) {
c.agentAddr = defaultAddress
}
// WithPrioritySampling is deprecated, and priority sampling is enabled by default.
// When using distributed tracing, the priority sampling value is propagated in order to
// get all the parts of a distributed trace sampled.
// To learn more about priority sampling, please visit:
// https://docs.datadoghq.com/tracing/getting_further/trace_sampling_and_storage/#priority-sampling-for-distributed-tracing
func WithPrioritySampling() StartOption {
return func(c *config) {
// This is now enabled by default.
}
}
// WithDebugMode enables debug mode on the tracer, resulting in more verbose logging.
func WithDebugMode(enabled bool) StartOption {
return func(c *config) {
@@ -93,6 +108,15 @@ func WithSampler(s Sampler) StartOption {
}
}
// WithHTTPRoundTripper allows customizing the underlying HTTP transport for
// emitting spans. This is useful for advanced customization such as emitting
// spans to a unix domain socket. The default should be used in most cases.
func WithHTTPRoundTripper(r http.RoundTripper) StartOption {
return func(c *config) {
c.httpRoundTripper = r
}
}
// StartSpanOption is a configuration option for StartSpan. It is aliased in order
// to help godoc group all the functions returning it together. It is considered
// more correct to refer to it as the type as the origin, ddtrace.StartSpanOption.

View File

@@ -1,10 +1,13 @@
package tracer
import (
"encoding/json"
"io"
"math"
"sync"
"gopkg.in/DataDog/dd-trace-go.v1/ddtrace"
"gopkg.in/DataDog/dd-trace-go.v1/ddtrace/ext"
)
// Sampler is the generic interface of any sampler. It must be safe for concurrent use.
@@ -59,14 +62,83 @@ const knuthFactor = uint64(1111111111111111111)
// Sample returns true if the given span should be sampled.
func (r *rateSampler) Sample(spn ddtrace.Span) bool {
if r.rate == 1 {
// fast path
return true
}
s, ok := spn.(*span)
if !ok {
return false
}
r.RLock()
defer r.RUnlock()
if r.rate < 1 {
return s.TraceID*knuthFactor < uint64(r.rate*math.MaxUint64)
return sampledByRate(s.TraceID, r.rate)
}
// sampledByRate verifies if the number n should be sampled at the specified
// rate.
func sampledByRate(n uint64, rate float64) bool {
if rate < 1 {
return n*knuthFactor < uint64(rate*math.MaxUint64)
}
return true
}
// prioritySampler holds a set of per-service sampling rates and applies
// them to spans.
type prioritySampler struct {
mu sync.RWMutex
rates map[string]float64
defaultRate float64
}
func newPrioritySampler() *prioritySampler {
return &prioritySampler{
rates: make(map[string]float64),
defaultRate: 1.,
}
}
// readRatesJSON will try to read the rates as JSON from the given io.ReadCloser.
func (ps *prioritySampler) readRatesJSON(rc io.ReadCloser) error {
var payload struct {
Rates map[string]float64 `json:"rate_by_service"`
}
if err := json.NewDecoder(rc).Decode(&payload); err != nil {
return err
}
rc.Close()
const defaultRateKey = "service:,env:"
ps.mu.Lock()
defer ps.mu.Unlock()
ps.rates = payload.Rates
if v, ok := ps.rates[defaultRateKey]; ok {
ps.defaultRate = v
delete(ps.rates, defaultRateKey)
}
return nil
}
// getRate returns the sampling rate to be used for the given span. Callers must
// guard the span.
func (ps *prioritySampler) getRate(spn *span) float64 {
key := "service:" + spn.Service + ",env:" + spn.Meta[ext.Environment]
ps.mu.RLock()
defer ps.mu.RUnlock()
if rate, ok := ps.rates[key]; ok {
return rate
}
return ps.defaultRate
}
// apply applies sampling priority to the given span. Caller must ensure it is safe
// to modify the span.
func (ps *prioritySampler) apply(spn *span) {
rate := ps.getRate(spn)
if sampledByRate(spn.TraceID, rate) {
spn.SetTag(ext.SamplingPriority, ext.PriorityAutoKeep)
} else {
spn.SetTag(ext.SamplingPriority, ext.PriorityAutoReject)
}
spn.SetTag(samplingPriorityRateKey, rate)
}

View File

@@ -201,8 +201,8 @@ func (s *span) finish(finishTime int64) {
}
s.finished = true
if !s.context.sampled {
// not sampled
if s.context.drop {
// not sampled by local sampler
return
}
s.context.finish()
@@ -235,4 +235,7 @@ func (s *span) String() string {
return strings.Join(lines, "\n")
}
const samplingPriorityKey = "_sampling_priority_v1"
const (
samplingPriorityKey = "_sampling_priority_v1"
samplingPriorityRateKey = "_sampling_priority_rate_v1"
)

View File

@@ -16,9 +16,9 @@ var _ ddtrace.SpanContext = (*spanContext)(nil)
type spanContext struct {
// the below group should propagate only locally
trace *trace // reference to the trace that this span belongs too
span *span // reference to the span that hosts this context
sampled bool // whether this span will be sampled or not
trace *trace // reference to the trace that this span belongs too
span *span // reference to the span that hosts this context
drop bool // when true, the span will not be sent to the agent
// the below group should propagate cross-process
@@ -40,7 +40,6 @@ func newSpanContext(span *span, parent *spanContext) *spanContext {
context := &spanContext{
traceID: span.TraceID,
spanID: span.SpanID,
sampled: true,
span: span,
}
if v, ok := span.Metrics[samplingPriorityKey]; ok {
@@ -49,7 +48,7 @@ func newSpanContext(span *span, parent *spanContext) *spanContext {
}
if parent != nil {
context.trace = parent.trace
context.sampled = parent.sampled
context.drop = parent.drop
context.hasPriority = parent.hasSamplingPriority()
context.priority = parent.samplingPriority()
parent.ForeachBaggageItem(func(k, v string) bool {

View File

@@ -175,11 +175,11 @@ func (p *propagator) extractTextMap(reader TextMapReader) (ddtrace.SpanContext,
return ErrSpanContextCorrupted
}
case p.cfg.PriorityHeader:
ctx.priority, err = strconv.Atoi(v)
priority, err := strconv.Atoi(v)
if err != nil {
return ErrSpanContextCorrupted
}
ctx.hasPriority = true
ctx.setSamplingPriority(priority)
default:
if strings.HasPrefix(key, p.cfg.BaggagePrefix) {
ctx.setBaggageItem(strings.TrimPrefix(key, p.cfg.BaggagePrefix), v)

View File

@@ -41,6 +41,9 @@ type tracer struct {
// a synchronous (blocking) operation, meaning that it will only return after
// the trace has been fully processed and added onto the payload.
syncPush chan struct{}
// prioritySampling holds an instance of the priority sampler.
prioritySampling *prioritySampler
}
const (
@@ -112,21 +115,22 @@ func newTracer(opts ...StartOption) *tracer {
fn(c)
}
if c.transport == nil {
c.transport = newTransport(c.agentAddr)
c.transport = newTransport(c.agentAddr, c.httpRoundTripper)
}
if c.propagator == nil {
c.propagator = NewPropagator(nil)
}
t := &tracer{
config: c,
payload: newPayload(),
flushAllReq: make(chan chan<- struct{}),
flushTracesReq: make(chan struct{}, 1),
flushErrorsReq: make(chan struct{}, 1),
exitReq: make(chan struct{}),
payloadQueue: make(chan []*span, payloadQueueSize),
errorBuffer: make(chan error, errorBufferSize),
stopped: make(chan struct{}),
config: c,
payload: newPayload(),
flushAllReq: make(chan chan<- struct{}),
flushTracesReq: make(chan struct{}, 1),
flushErrorsReq: make(chan struct{}, 1),
exitReq: make(chan struct{}),
payloadQueue: make(chan []*span, payloadQueueSize),
errorBuffer: make(chan error, errorBufferSize),
stopped: make(chan struct{}),
prioritySampling: newPrioritySampler(),
}
go t.worker()
@@ -247,6 +251,7 @@ func (t *tracer) StartSpan(operationName string, options ...ddtrace.StartSpanOpt
span.Metrics[samplingPriorityKey] = float64(context.samplingPriority())
}
if context.span != nil {
// it has a local parent, inherit the service
context.span.RLock()
span.Service = context.span.Service
context.span.RUnlock()
@@ -254,9 +259,8 @@ func (t *tracer) StartSpan(operationName string, options ...ddtrace.StartSpanOpt
}
span.context = newSpanContext(span, context)
if context == nil || context.span == nil {
// this is either a global root span or a process-level root span
// this is either a root span or it has a remote parent, we should add the PID.
span.SetTag(ext.Pid, strconv.Itoa(os.Getpid()))
t.sample(span)
}
// add tags from options
for k, v := range opts.Tags {
@@ -266,6 +270,10 @@ func (t *tracer) StartSpan(operationName string, options ...ddtrace.StartSpanOpt
for k, v := range t.config.globalTags {
span.SetTag(k, v)
}
if context == nil {
// this is a brand new trace, sample it
t.sample(span)
}
return span
}
@@ -299,10 +307,13 @@ func (t *tracer) flushTraces() {
if t.config.debug {
log.Printf("Sending payload: size: %d traces: %d\n", size, count)
}
err := t.config.transport.send(t.payload)
rc, err := t.config.transport.send(t.payload)
if err != nil {
t.pushError(&dataLossError{context: err, count: count})
}
if err == nil {
t.prioritySampling.readRatesJSON(rc) // TODO: handle error?
}
t.payload.reset()
}
@@ -350,21 +361,17 @@ const sampleRateMetricKey = "_sample_rate"
// Sample samples a span with the internal sampler.
func (t *tracer) sample(span *span) {
if span.context.hasPriority {
// sampling decision was already made
return
}
sampler := t.config.sampler
sampled := sampler.Sample(span)
span.context.sampled = sampled
if !sampled {
if !sampler.Sample(span) {
span.context.drop = true
return
}
if rs, ok := sampler.(RateSampler); ok && rs.Rate() < 1 {
// the span was sampled using a rate sampler which wasn't all permissive,
// so we make note of the sampling rate.
span.Lock()
defer span.Unlock()
if span.finished {
// we don't touch finished span as they might be flushing
return
}
span.Metrics[sampleRateMetricKey] = rs.Rate()
}
t.prioritySampling.apply(span)
}

View File

@@ -2,17 +2,37 @@ package tracer
import (
"fmt"
"io"
"net"
"net/http"
"os"
"runtime"
"strconv"
"strings"
"time"
)
// TODO(gbbr): find a more effective way to keep this up to date,
// e.g. via `go generate`
var tracerVersion = "v1.5.0"
var (
// TODO(gbbr): find a more effective way to keep this up to date,
// e.g. via `go generate`
tracerVersion = "v1.7.0"
// We copy the transport to avoid using the default one, as it might be
// augmented with tracing and we don't want these calls to be recorded.
// See https://golang.org/pkg/net/http/#DefaultTransport .
defaultRoundTripper = &http.Transport{
Proxy: http.ProxyFromEnvironment,
DialContext: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
DualStack: true,
}).DialContext,
MaxIdleConns: 100,
IdleConnTimeout: 90 * time.Second,
TLSHandshakeTimeout: 10 * time.Second,
ExpectContinueTimeout: 1 * time.Second,
}
)
const (
defaultHostname = "localhost"
@@ -22,25 +42,33 @@ const (
traceCountHeader = "X-Datadog-Trace-Count" // header containing the number of traces in the payload
)
// Transport is an interface for span submission to the agent.
// transport is an interface for span submission to the agent.
type transport interface {
send(p *payload) error
// send sends the payload p to the agent using the transport set up.
// It returns a non-nil response body when no error occurred.
send(p *payload) (body io.ReadCloser, err error)
}
// newTransport returns a new Transport implementation that sends traces to a
// trace agent running on the given hostname and port. If the zero values for
// hostname and port are provided, the default values will be used ("localhost"
// for hostname, and "8126" for port).
// trace agent running on the given hostname and port, using a given
// http.RoundTripper. If the zero values for hostname and port are provided,
// the default values will be used ("localhost" for hostname, and "8126" for
// port). If roundTripper is nil, a default is used.
//
// In general, using this method is only necessary if you have a trace agent
// running on a non-default port or if it's located on another machine.
func newTransport(addr string) transport {
return newHTTPTransport(addr)
// running on a non-default port, if it's located on another machine, or when
// otherwise needing to customize the transport layer, for instance when using
// a unix domain socket.
func newTransport(addr string, roundTripper http.RoundTripper) transport {
if roundTripper == nil {
roundTripper = defaultRoundTripper
}
return newHTTPTransport(addr, roundTripper)
}
// newDefaultTransport return a default transport for this tracing client
func newDefaultTransport() transport {
return newHTTPTransport(defaultAddress)
return newHTTPTransport(defaultAddress, defaultRoundTripper)
}
type httpTransport struct {
@@ -50,7 +78,7 @@ type httpTransport struct {
}
// newHTTPTransport returns an httpTransport for the given endpoint
func newHTTPTransport(addr string) *httpTransport {
func newHTTPTransport(addr string, roundTripper http.RoundTripper) *httpTransport {
// initialize the default EncoderPool with Encoder headers
defaultHeaders := map[string]string{
"Datadog-Meta-Lang": "go",
@@ -60,34 +88,20 @@ func newHTTPTransport(addr string) *httpTransport {
"Content-Type": "application/msgpack",
}
return &httpTransport{
traceURL: fmt.Sprintf("http://%s/v0.3/traces", resolveAddr(addr)),
traceURL: fmt.Sprintf("http://%s/v0.4/traces", resolveAddr(addr)),
client: &http.Client{
// We copy the transport to avoid using the default one, as it might be
// augmented with tracing and we don't want these calls to be recorded.
// See https://golang.org/pkg/net/http/#DefaultTransport .
Transport: &http.Transport{
Proxy: http.ProxyFromEnvironment,
DialContext: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
DualStack: true,
}).DialContext,
MaxIdleConns: 100,
IdleConnTimeout: 90 * time.Second,
TLSHandshakeTimeout: 10 * time.Second,
ExpectContinueTimeout: 1 * time.Second,
},
Timeout: defaultHTTPTimeout,
Transport: roundTripper,
Timeout: defaultHTTPTimeout,
},
headers: defaultHeaders,
}
}
func (t *httpTransport) send(p *payload) error {
func (t *httpTransport) send(p *payload) (body io.ReadCloser, err error) {
// prepare the client and send the payload
req, err := http.NewRequest("POST", t.traceURL, p)
if err != nil {
return fmt.Errorf("cannot create http request: %v", err)
return nil, fmt.Errorf("cannot create http request: %v", err)
}
for header, value := range t.headers {
req.Header.Set(header, value)
@@ -96,25 +110,26 @@ func (t *httpTransport) send(p *payload) error {
req.Header.Set("Content-Length", strconv.Itoa(p.size()))
response, err := t.client.Do(req)
if err != nil {
return err
return nil, err
}
defer response.Body.Close()
if code := response.StatusCode; code >= 400 {
// error, check the body for context information and
// return a nice error.
msg := make([]byte, 1000)
n, _ := response.Body.Read(msg)
response.Body.Close()
txt := http.StatusText(code)
if n > 0 {
return fmt.Errorf("%s (Status: %s)", msg[:n], txt)
return nil, fmt.Errorf("%s (Status: %s)", msg[:n], txt)
}
return fmt.Errorf("%s", txt)
return nil, fmt.Errorf("%s", txt)
}
return nil
return response.Body, nil
}
// resolveAddr resolves the given agent address and fills in any missing host
// and port using the defaults.
// and port using the defaults. Some environment variable settings will
// take precedence over configuration.
func resolveAddr(addr string) string {
host, port, err := net.SplitHostPort(addr)
if err != nil {
@@ -127,5 +142,11 @@ func resolveAddr(addr string) string {
if port == "" {
port = defaultPort
}
if v := os.Getenv("DD_AGENT_HOST"); v != "" {
host = v
}
if v := os.Getenv("DD_TRACE_AGENT_PORT"); v != "" {
port = v
}
return fmt.Sprintf("%s:%s", host, port)
}