Previous change logs can be found at CHANGELOG-3.2.

The minimum recommended etcd versions to run in production are 3.1.11+, 3.2.26+, and 3.3.11+.


v3.3.14 (2019-08-16)

See code changes and v3.3 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read change logs below and v3.3 upgrade guide.

NOTE: This patch release had to include some new features from 3.4, while trying to minimize the differences between the client balancer implementations. This release fixes "kube-apiserver 1.13.x refuses to work when first etcd-server is not available" (kubernetes#72102).

Breaking Changes

etcd server

API

  • Add watch_id field to etcdserverpb.WatchCreateRequest to allow a user-provided watch ID to be passed to mvcc.
    • The corresponding watch_id is returned via etcdserverpb.WatchResponse, if any.
  • Add fragment field to etcdserverpb.WatchCreateRequest to request that the etcd server split watch events when the total size of the events exceeds the etcd --max-request-bytes flag value plus the 512-byte gRPC overhead.
    • The default server-side request bytes limit is embed.DefaultMaxRequestBytes, which is 1.5 MiB, plus the 512-byte gRPC overhead.
    • If the watch response events exceed this server-side request limit and the watch request was created with the fragment field set to true, the server splits the watch events into a set of chunks, each of which is a subset of the watch events below the server-side request limit.
    • Useful when the client has limited bandwidth.
    • For example, if a watch response contains 10 events of 1 MiB each and the server's etcd --max-request-bytes flag value is 1 MiB, the server sends 10 separate fragmented events to the client.
    • For example, if a watch response contains 5 events of 2 MiB each, the server's etcd --max-request-bytes flag value is 1 MiB, and clientv3.Config.MaxCallRecvMsgSize is 1 MiB, the server tries to send 5 separate fragmented events to the client, and the client fails with "code = ResourceExhausted desc = grpc: received message larger than max (...)".
    • Client must implement fragmented watch event merge (which clientv3 does in etcd v3.4).
  • Add WatchRequest.WatchProgressRequest.
    • To manually trigger broadcasting watch progress event (empty watch response with latest header) to all associated watch streams.
    • Think of it as WithProgressNotify that can be triggered manually (see the sketch after this list).
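
The new RPC fields above can be exercised directly through the generated etcdserverpb package. The following is a minimal sketch, assuming the v3.3-era github.com/coreos/etcd/etcdserver/etcdserverpb import path and the generated oneof wrapper types; it only shows how watch_id, fragment, and the progress request are populated, not a complete watch client.

```go
package main

import (
	"fmt"

	// Import path may differ depending on the vendoring in use (assumption).
	pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
)

func main() {
	// Create a watch on "foo" with a user-provided watch ID and
	// fragmentation enabled, so oversized responses are split into chunks.
	create := &pb.WatchRequest{
		RequestUnion: &pb.WatchRequest_CreateRequest{
			CreateRequest: &pb.WatchCreateRequest{
				Key:      []byte("foo"),
				WatchId:  100, // user-provided watch ID (new field)
				Fragment: true,
			},
		},
	}

	// Manually request a progress broadcast (an empty watch response with
	// the latest header) on all watch streams of this connection.
	progress := &pb.WatchRequest{
		RequestUnion: &pb.WatchRequest_ProgressRequest{
			ProgressRequest: &pb.WatchProgressRequest{},
		},
	}

	fmt.Println(create, progress)
}
```

Note that the client remains responsible for merging fragmented watch events; clientv3 only does this starting with etcd v3.4.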

Metrics, Monitoring

See List of metrics for all metrics per release.

Note that any etcd_debugging_* metrics are experimental and subject to change.

client v3

etcdctl v3

Package pkg/adt

Go


v3.3.13 (2019-05-02)

See code changes and v3.3 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read change logs below and v3.3 upgrade guide.

Improved

Metrics, Monitoring

See List of metrics for all metrics per release.

Note that any etcd_debugging_* metrics are experimental and subject to change.

client v3

Package wal

Dependency

Go


v3.3.12 (2019-02-07)

See code changes and v3.3 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read change logs below and v3.3 upgrade guide.

etcdctl

Go


v3.3.11 (2019-01-11)

See code changes and v3.3 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read change logs below and v3.3 upgrade guide.

gRPC Proxy

Security, Authentication

  • Disable CommonName authentication for gRPC-gateway requests. gRPC-gateway proxy requests to the etcd server use the etcd client-server TLS certificate; if that certificate contains a CommonName, we do not want to use it for authentication, as it could lead to permission escalation.

Go


v3.3.10 (2018-10-10)

See code changes and v3.3 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read change logs below and v3.3 upgrade guide.

Improved

Metrics, Monitoring

See List of metrics for all metrics per release.

Note that any etcd_debugging_* metrics are experimental and subject to change.

client v3

Go


v3.3.9 (2018-07-24)

See code changes and v3.3 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read change logs below and v3.3 upgrade guide.

Improved

Security, Authentication

Metrics, Monitoring

See List of metrics for all metrics per release.

Note that any etcd_debugging_* metrics are experimental and subject to change.

client v3

Go


v3.3.8 (2018-06-15)

See code changes and v3.3 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read change logs below and v3.3 upgrade guide.

Improved

Go


v3.3.7 (2018-06-06)

See code changes and v3.3 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read change logs below and v3.3 upgrade guide.

Security, Authentication

  • Support TLS cipher suite whitelisting (see the sketch below).
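
As a hedged illustration of how whitelisting can be wired up when embedding etcd, the sketch below assumes an embed.Config.CipherSuites field backing the new whitelisting support; consult the embed package documentation for the exact field name in this release.

```go
package main

import (
	"log"

	"github.com/coreos/etcd/embed"
)

func main() {
	cfg := embed.NewConfig()
	cfg.Dir = "default.etcd"
	// Restrict TLS to an explicit whitelist of cipher suites
	// (names follow the Go crypto/tls constants). Field name is an
	// assumption; check the embed package for your vendored version.
	cfg.CipherSuites = []string{
		"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
		"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
	}

	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()
	<-e.Server.ReadyNotify()
}
```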

etcdctl v3

Go


v3.3.6 (2018-05-31)

See code changes and v3.3 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read change logs below and v3.3 upgrade guide.

etcd server

Go


v3.3.5 (2018-05-09)

See code changes and v3.3 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read change logs below and v3.3 upgrade guide.

etcdctl v3

Go


v3.3.4 (2018-04-24)

See code changes and v3.3 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read change logs below and v3.3 upgrade guide.

Metrics, Monitoring

See List of metrics for all metrics per release.

Note that any etcd_debugging_* metrics are experimental and subject to change.

Security, Authentication

  • Fix TLS reload when the certificate SAN field only includes IP addresses but no domain names.
    • In Go, the server calls (*tls.Config).GetCertificate for TLS reload if and only if the server's (*tls.Config).Certificates field is not empty, or (*tls.ClientHelloInfo).ServerName is not empty with a valid SNI from the client. Previously, etcd always populated (*tls.Config).Certificates on the initial client TLS handshake, so it was non-empty. Thus, the client was always expected to supply a matching SNI in order to pass TLS verification and trigger (*tls.Config).GetCertificate to reload TLS assets.
    • However, a certificate whose SAN field does not include any domain names but only IP addresses results in a *tls.ClientHelloInfo with an empty ServerName field, thus failing to trigger the TLS reload on the initial TLS handshake; this becomes a problem when expired certificates need to be replaced online.
    • Now, (*tls.Config).Certificates is left empty on the initial TLS client handshake, first to trigger (*tls.Config).GetCertificate, and then to populate the rest of the certificates on every new TLS connection, even when the client SNI is empty (e.g. the certificate only includes IPs). See the sketch after this list.
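
To make the Go behavior above concrete, here is a minimal sketch (not etcd's actual transport code) of a tls.Config that leaves Certificates empty and reloads the key pair from disk in GetCertificate, so the callback fires even when the client sends no SNI; the file paths are hypothetical.

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

func main() {
	// Leave Certificates empty so Go always consults GetCertificate,
	// even for clients whose ClientHelloInfo.ServerName (SNI) is empty,
	// e.g. certificates whose SAN contains only IP addresses.
	cfg := &tls.Config{
		GetCertificate: func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
			// Re-read the key pair on every handshake so a replaced
			// (e.g. renewed) certificate is picked up without a restart.
			cert, err := tls.LoadX509KeyPair("server.crt", "server.key") // hypothetical paths
			if err != nil {
				return nil, err
			}
			return &cert, nil
		},
	}

	srv := &http.Server{Addr: ":2379", TLSConfig: cfg}
	// With GetCertificate set, ListenAndServeTLS may be called with empty
	// certificate file arguments.
	log.Fatal(srv.ListenAndServeTLS("", ""))
}
```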

etcd server

  • Add etcd --initial-election-tick-advance flag to configure initial election tick fast-forward.
    • By default, etcd --initial-election-tick-advance=true, so the local member fast-forwards its election ticks to speed up the "initial" leader election trigger.
    • This benefits deployments with larger election ticks. For instance, a cross-datacenter deployment may require a longer election timeout of 10 seconds. With the flag set to true, the local node does not need to wait up to 10 seconds; instead, it advances its election ticks to 8 seconds and has only 2 seconds left before triggering a leader election.
    • The major assumptions are: either the cluster has no active leader, in which case advancing ticks enables faster leader election; or the cluster already has an established leader, and a rejoining follower is likely to receive heartbeats from the leader after the tick advance and before the election timeout.
    • However, when the network from the leader to a rejoining follower is congested and the follower does not receive a leader heartbeat within the remaining election ticks, a disruptive election has to happen, affecting cluster availability.
    • Now, this can be disabled by setting --initial-election-tick-advance=false.
    • Disabling this slows down the initial bootstrap process for cross-datacenter deployments; configure etcd --initial-election-tick-advance to trade off slower initial bootstrap against fewer disruptive elections.
    • A single-node cluster advances ticks regardless.
    • This addresses disruptive rejoining of follower nodes.

Package embed

  • Add embed.Config.InitialElectionTickAdvance to enable/disable initial election tick fast-forward.
    • embed.NewConfig() returns a *embed.Config with InitialElectionTickAdvance set to true by default (see the sketch below).
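
A minimal sketch of toggling this from the embed package, assuming a cross-datacenter deployment where the default fast-forward is not wanted; only the field named above is set, and the rest of the configuration is left at its defaults.

```go
package main

import (
	"log"

	"github.com/coreos/etcd/embed"
)

func main() {
	cfg := embed.NewConfig()
	cfg.Dir = "default.etcd"
	// Defaults to true; set to false to keep the full election timeout on
	// restart and avoid a disruptive election when a rejoining follower is
	// slow to hear from the leader.
	cfg.InitialElectionTickAdvance = false

	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()
	<-e.Server.ReadyNotify()
}
```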

Go


v3.3.3 (2018-03-29)

See code changes and v3.3 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read change logs below and v3.3 upgrade guide.

Improved

  • Adjust election timeout on server restart to reduce the disruption caused by rejoining servers.
    • Previously, etcd fast-forwarded election ticks on server start, leaving only one tick before a leader election. This speeds up the start phase, without having to wait until all election ticks elapse. Advancing election ticks is useful for cross-datacenter deployments with larger election timeouts. However, it affected cluster availability if the last tick elapsed before the leader contacted the restarted node.
    • Now, when etcd restarts, it adjusts the election ticks so that more than one tick is left, giving the leader more time to contact the restarted node and prevent a disruptive restart.
  • Adjust periodic compaction retention window.
    • e.g. etcd --auto-compaction-mode=revision --auto-compaction-retention=1000 automatically Compacts at "latest revision" - 1000 every 5 minutes (when the latest revision is 30000, it compacts at revision 29000).
    • e.g. Previously, etcd --auto-compaction-mode=periodic --auto-compaction-retention=72h automatically Compacted with a 72-hour retention window every 7.2 hours. Now, Compact happens every 1 hour, but still with a 72-hour retention window.
    • e.g. Previously, etcd --auto-compaction-mode=periodic --auto-compaction-retention=30m automatically Compacted with a 30-minute retention window every 3 minutes. Now, Compact happens every 30 minutes, but still with a 30-minute retention window.
    • The periodic compactor keeps recording the latest revisions every compaction period when the given period is less than 1 hour, or every 1 hour when the given compaction period is greater than 1 hour (e.g. every 1 hour for etcd --auto-compaction-mode=periodic --auto-compaction-retention=24h).
    • Every compaction period (or hour), the compactor uses the last revision that was fetched one compaction period earlier to discard historical data.
    • The retention window thus moves forward every compaction period (or hour).
    • For instance, with 100 writes per hour and etcd --auto-compaction-mode=periodic --auto-compaction-retention=24h, v3.2.x, v3.3.0, v3.3.1, and v3.3.2 compact revisions 2400, 2640, and 2880 every 2.4 hours, while v3.3.3 or later compacts revisions 2400, 2500, and 2600 every 1 hour.
    • Furthermore, when etcd --auto-compaction-mode=periodic --auto-compaction-retention=30m and writes per minute are about 1000, v3.3.0, v3.3.1, and v3.3.2 compact revisions 30000, 33000, and 36000 every 3 minutes, while v3.3.3 or later compacts revisions 30000, 60000, and 90000 every 30 minutes. See the sketch after this list.
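
For the revision-based mode above, the retention arithmetic is simply "latest revision minus retention". The sketch below reproduces it manually with clientv3 (the endpoint and key are hypothetical); the built-in compactor does the equivalent on its 5-minute timer.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // hypothetical endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx := context.Background()

	// Fetch the latest revision from any response header.
	resp, err := cli.Get(ctx, "foo")
	if err != nil {
		log.Fatal(err)
	}
	latest := resp.Header.Revision

	// Keep the last 1000 revisions, mirroring
	// --auto-compaction-mode=revision --auto-compaction-retention=1000.
	const retention = 1000
	if target := latest - retention; target > 0 {
		if _, err := cli.Compact(ctx, target); err != nil {
			log.Fatal(err)
		}
	}
}
```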

Metrics, Monitoring

See List of metrics for all metrics per release.

Note that any etcd_debugging_* metrics are experimental and subject to change.

Go


v3.3.2 (2018-03-08)

See code changes and v3.3 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read change logs below and v3.3 upgrade guide.

etcd server

Proxy v2

Go


v3.3.1 (2018-02-12)

See code changes and v3.3 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read change logs below and v3.3 upgrade guide.

Improved

etcd server

  • Fix mvcc "unsynced" watcher restore operation.
    • An "unsynced" watcher is a watcher that still needs to catch up with events that have already happened.
    • That is, an "unsynced" watcher is a slow watcher that was requested on an old revision.
    • The "unsynced" watcher restore operation was not correctly populating its underlying watcher group.
    • This could cause missing events for "unsynced" watchers.
    • A node gets network partitioned with a watcher on a future revision, falls behind, and receives a leader snapshot after the partition is resolved. When applying this snapshot, the etcd watch storage moves the currently synced watchers to unsynced, since synced watchers might have become stale during the network partition, and resets the synced watcher group to restart the watcher routines. Previously, there was a bug when moving from the synced watcher group to unsynced, so a client could miss events when its watcher was requested against the network-partitioned node. See the sketch after this list.
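
For context, a watcher that lands in the "unsynced" group is typically one opened at an old revision that still has history to replay. The sketch below is only a client-side illustration of that scenario using clientv3 (the key and endpoint are hypothetical), not the mvcc watcher-group code the fix touches.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // hypothetical endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Watching from an old revision forces the server to replay history;
	// until the replay catches up, this watcher sits in the "unsynced"
	// group that the restore fix is about.
	wch := cli.Watch(context.Background(), "foo", clientv3.WithRev(1))
	for wresp := range wch {
		for _, ev := range wresp.Events {
			fmt.Printf("%s %q -> %q (rev %d)\n",
				ev.Type, ev.Kv.Key, ev.Kv.Value, ev.Kv.ModRevision)
		}
	}
}
```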

Go


v3.3.0 (2018-02-01)

See code changes and v3.3 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read change logs below and v3.3 upgrade guide.

Improved

Breaking Changes

  • Require google.golang.org/grpc v1.7.4 or v1.7.5.
  • Translate gRPC status error in v3 client Snapshot API.
  • v3 etcdctl lease timetolive LEASE_ID on expired lease now prints "lease LEASE_ID already expired".
    • <=3.2 prints "lease LEASE_ID granted with TTL(0s), remaining(-1s)".
  • Replace gRPC gateway endpoint /v3alpha with /v3beta.
    • To deprecate /v3alpha in v3.4.
    • In v3.3, curl -L http://localhost:2379/v3alpha/kv/put -X POST -d '{"key": "Zm9v", "value": "YmFy"}' still works as a fallback to curl -L http://localhost:2379/v3beta/kv/put -X POST -d '{"key": "Zm9v", "value": "YmFy"}', but curl -L http://localhost:2379/v3alpha/kv/put -X POST -d '{"key": "Zm9v", "value": "YmFy"}' won't work in v3.4. Use curl -L http://localhost:2379/v3beta/kv/put -X POST -d '{"key": "Zm9v", "value": "YmFy"}' instead.
  • Change etcd --auto-compaction-retention flag to accept string values with finer granularity.
    • Now that etcd --auto-compaction-retention accepts string values, the auto-compaction-retention field in the etcd YAML configuration file must be changed to string type.
    • Previously, --config-file etcd.config.yaml could have an auto-compaction-retention: 24 field; now it must be auto-compaction-retention: "24" or auto-compaction-retention: "24h".
    • If configured as etcd --auto-compaction-mode periodic --auto-compaction-retention "24h", the time duration value for the etcd --auto-compaction-retention flag must be parsable by Go's time.ParseDuration function (see the sketch after this list).
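
A minimal sketch of the validation the string form implies, assuming embed.Config fields AutoCompactionMode and AutoCompactionRetention that mirror the flags (check the embed package for the exact field names in your vendored copy):

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/coreos/etcd/embed"
)

func main() {
	retention := "24h"

	// In periodic mode a duration string like "24h" must be parsable by
	// time.ParseDuration; per the change log, a bare "24" is also accepted
	// by etcd and interpreted as hours.
	if _, err := time.ParseDuration(retention); err != nil {
		log.Fatalf("invalid retention %q: %v", retention, err)
	}

	cfg := embed.NewConfig()
	cfg.Dir = "default.etcd"
	cfg.AutoCompactionMode = "periodic"     // or "revision"
	cfg.AutoCompactionRetention = retention // string-typed since v3.3.0
	fmt.Println("auto-compaction configured:", cfg.AutoCompactionMode, cfg.AutoCompactionRetention)
}
```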

Dependency

Metrics, Monitoring

See List of metrics for all metrics per release.

Note that any etcd_debugging_* metrics are experimental and subject to change.

Security, Authentication

See security doc for more details.

etcd server

API

client v3

etcdctl v3

etcdctl v2

gRPC Proxy

gRPC gateway

  • Replace gRPC gateway endpoint /v3alpha with /v3beta.
    • To deprecate /v3alpha in v3.4.
    • In v3.3, curl -L http://localhost:2379/v3alpha/kv/put -X POST -d '{"key": "Zm9v", "value": "YmFy"}' still works as a fallback to curl -L http://localhost:2379/v3beta/kv/put -X POST -d '{"key": "Zm9v", "value": "YmFy"}', but curl -L http://localhost:2379/v3alpha/kv/put -X POST -d '{"key": "Zm9v", "value": "YmFy"}' won't work in v3.4. Use curl -L http://localhost:2379/v3beta/kv/put -X POST -d '{"key": "Zm9v", "value": "YmFy"}' instead.
  • Support "authorization" token.
  • Support websocket for bi-directional streams.
  • Upgrade gRPC gateway to v1.3.0.

etcd server

client v2

Package raft

Other

Go