Knative Serving release v0.4.1
Meta
Support for ConfigMap and Secret Volumes (Sample)
We have expanded our ConfigMap and Secret support to also allow them to be mounted as volumes. This was one of the most commonly requested features.
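As a sketch of what this enables (names and image are illustrative, following the v1alpha1 `Service` shape used in this release), a Revision template can now mount a ConfigMap as a volume:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: config-volume-example        # illustrative name
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/my-project/app   # illustrative image
            volumeMounts:
            - name: app-config
              mountPath: /etc/app-config
              readOnly: true
          volumes:
          - name: app-config
            configMap:
              name: my-configmap     # an existing ConfigMap in the namespace
```

Secrets work the same way, using a `secret:` volume source instead of `configMap:`.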
HTTP2 / GRPC support (Sample)
Knative Services can now serve HTTP2 and gRPC/HTTP2. Users will need to name their container port `h2c` in order to enable this feature. There is an outstanding issue with streaming RPC cold-starts (#3239), but folks are encouraged to try this out and give feedback.
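For example (a fragment of a Revision template; image and port number are illustrative), naming the container port `h2c` opts the Revision into HTTP2 serving:

```yaml
          container:
            image: gcr.io/my-project/grpc-server   # illustrative image
            ports:
            - name: h2c            # the magic name that enables HTTP2 / gRPC
              containerPort: 8080
```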
Enable websocket upgrades (Sample)
We have plumbed through the various layers of Serving the ability to upgrade an inbound HTTP connection to support Websockets. (#2910, #3240)
Better preserve ConfigMap updates
Starting with this release, you should be able to simply `kubectl apply -f serving.yaml` to update knative/serving. Previously this would cause problems for any ConfigMap resources customized via `kubectl edit cm`, such as configuring a different default domain.
To address this problem with updates, we have moved all of our default ConfigMap settings into our code. The ConfigMap will only contain a detailed `_example: |` block outlining how to use each ConfigMap, with (validated!) examples. To try this out, see `kubectl edit cm -n knative-serving config-autoscaler`.
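The `data` section of such a ConfigMap then looks roughly like this (the setting shown is illustrative; consult the actual `_example` block for the real keys and values):

```yaml
data:
  _example: |
    # Copy settings out of this block and paste them at the top level
    # of `data` to customize them; this block itself is documentation
    # only and has no effect on behavior.
    #
    # stable-window: "60s"   # illustrative autoscaler setting
```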
Autoscaling
Cold-start improvements (thanks @greghaynes)
- Immediately send stats from activator to autoscaler (#2659)
- Immediately scale up when autoscaler gets stats from activator (#2961, #2963)
Add “throttling” to the activator (thanks @vvraskin)
The activator now throttles the traffic it sends to pods to respect the `containerConcurrency` setting of the Revision. This avoids overloading a single pod with a low concurrency setting when a big burst of traffic arrives while scaled to zero. (#3082, #3089)
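For reference, `containerConcurrency` is set on the Revision template; a sketch (illustrative names, v1alpha1 shape):

```yaml
      revisionTemplate:
        spec:
          containerConcurrency: 1        # at most one in-flight request per pod
          container:
            image: gcr.io/my-project/app # illustrative image
```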
Use actual pod counts in the autoscaler (thanks @yanweiguo)
This release starts using the actual count of ready pods to calculate the scale-up rate, instead of the pod count observed from queue-proxy metrics data.
Core API
Support for ConfigMap and Secret Volumes (thanks @mattmoor)
See above
Support creating Services in “release” mode (thanks @vagababov)
Starting in 0.4 you can use the sentinel string `@latest` in “release” mode to send a designated portion of the main traffic to the latest ready Revision stamped out by the Service’s Configuration. (#2819)
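A sketch of a “release” mode Service using `@latest` (names, Revision, and percentage are illustrative, assuming the v1alpha1 release shape of this release):

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: my-service                 # illustrative name
spec:
  release:
    # Send 10% of the main traffic to the latest ready Revision,
    # and the remaining 90% to the pinned Revision.
    revisions: ["my-service-00001", "@latest"]
    rolloutPercent: 10
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/my-project/app   # illustrative image
```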
Phase 2 of CRD sub-resources (thanks @dprotaso)
This release contains the second phase of CRD sub-resource adoption, which consists of three parts: start using `generateName` for Revision names, stop incrementing `spec.generation` on updates, and use `metadata.generation` for the value of our Configuration generation label on Revisions. (#3179, knative/pkg#234, #3204)
Service captures author info in annotation (thanks @vagababov)
The email of a Service’s creator is now captured in the `serving.knative.dev/creator` annotation, and subsequent updates are recorded in `serving.knative.dev/lastModifier`. (#3213)
(Ongoing) Progress on implementing the Runtime Contract (thanks @dgerd)
We have triaged a significant backlog of issues related to Conformance (tracked here: https://github.com/knative/serving/projects/20), and started to close a number of the gaps in our Runtime Contract conformance and validation.
Build extensibility now uses “Aggregated Cluster Roles” (thanks @mattmoor)
We have split out the part of the ClusterRole that Serving defined to enable it to consume knative/build, and changed Serving to use an aggregated cluster role to glue together the set of capabilities that it needs.
Bug Fixes / Cleanups
- Make Conditions with `Severity: Error` less scary (#3038)
- Make our resources work properly when “finalizers” are present (#3036)
- Fix several “readiness” races (#2954, #2430, #2735)
- Strengthen our validation logic (#3034, #3019, #1299)
- Better support for on-cluster registries both with and without certificates (#2266)
- Consistently set `status.observedGeneration` on resource status (#3076)
Networking
Removed the Knative Istio IngressGateway (thanks @tcnghia)
This release removes the Knative Istio IngressGateway, for better compatibility with multiple versions of Istio and to reduce the number of LoadBalancers needed. Users upgrading from 0.3.0 need to reconfigure their DNS to point to the `istio-ingressgateway` Service IP address before upgrading, and then remove the `knative-ingressgateway` Service & Deployment.
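Those upgrade steps might look like the following (a sketch; verify resource names and namespaces against your own cluster before deleting anything):

```shell
# Find the external IP of the Istio gateway and point your DNS records at it.
kubectl get svc istio-ingressgateway --namespace istio-system

# After DNS has been updated, remove the old Knative gateway objects.
kubectl delete svc knative-ingressgateway --namespace istio-system
kubectl delete deployment knative-ingressgateway --namespace istio-system
```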
HTTP2 / GRPC support (thanks @tanzeeb)
See above
Configurable default ClusterIngress controller (thanks @tcnghia)
This release adds an option in the `config-network` ConfigMap to change the default ClusterIngress controller from Istio, if another controller is available.
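A sketch of that setting (the key follows the 0.4-era `config-network` ConfigMap; the controller class value shown is the Istio default and is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
data:
  # Which ClusterIngress controller reconciles ClusterIngress resources.
  clusteringress.class: "istio.ingress.networking.knative.dev"
```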
Default TCP ReadinessProbe for user container (thanks @markusthoemmes)
Because the Istio proxy intercepts TCP handshakes, users can’t specify a meaningful TCP readiness check. We folded this TCP handshake into the queue-proxy health check, so that even if users don’t specify any readiness probe, a basic TCP probe is provided automatically.
Enable websocket upgrades (thanks @tcnghia)
See above
Bug Fixes
- More reliably clean up ClusterIngress resources (#2570)
- Support non-default cluster service domain (#2892)
- Change the Envoy `connectionTimeout` Istio uses to reduce 503s (#2988)
Monitoring
No changes in 0.4