This doc explains how to set up a development environment so you can get started contributing to Knative Serving.
Follow the instructions below to set up your development environment. Once you meet these requirements, you can make changes and deploy your own version of Knative Serving!
Before submitting a PR, see also CONTRIBUTING.md.
Start by creating a GitHub account, then set up GitHub access via SSH.
To get started, create a codespace for this repository by clicking this 👇
A codespace will open in a web-based version of Visual Studio Code. The dev container is fully configured with the software needed for this project: it creates a local container registry and a Kind cluster, and deploys Knative Serving and Knative Ingress. If you use a codespace, you can skip directly to the Iterating section of this document.

Note: Dev Containers is an open spec supported by GitHub Codespaces and other tools.
You must install these tools:

- `go`: The language Knative Serving is built in (1.16 or later)
- `git`: For source control
- `ko`: For development
- `kubectl`: For managing development environments
- `bash` v4 or later. On macOS the default bash is too old; you can use Homebrew to install a later version.
If you're working on and changing `.proto` files:

- `protoc`: For compiling protocol buffers
- `protoc-gen-gogofaster`: For generating efficient golang code out of protocol buffers
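As a quick sanity check, a small script like the following (illustrative only, not part of the repo's `hack/` scripts) can verify that the tools are on your `PATH` and that bash is new enough:

```shell
#!/usr/bin/env bash
# Illustrative pre-flight check (hypothetical helper, not part of the repo).
# check_tools reports any listed tool missing from PATH and returns non-zero.
check_tools() {
  local missing=0 tool
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool" >&2
      missing=1
    fi
  done
  return "$missing"
}

check_tools go git ko kubectl || echo "install the tools listed above first" >&2

# macOS ships bash 3.2 by default; v4 or later is required here.
if (( BASH_VERSINFO[0] < 4 )); then
  echo "bash ${BASH_VERSION} is too old; try 'brew install bash'" >&2
fi
```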
- Set up a Kubernetes cluster
  - Minimum supported version is 1.20.0
  - Follow the instructions in the Kubernetes doc.
- Set up a docker repository for pushing images. You can use any container
  image registry by adjusting the authentication methods and repository paths
  mentioned in the sections below.
  - Google Container Registry quickstart
  - Docker Hub quickstart
- If developing locally with Docker or Minikube, you can set
  `KO_DOCKER_REPO=ko.local` (preferred) or use the `-L` flag to `ko` to build
  and push locally (in this case, authentication is not needed). If developing
  with Kind you can set `KO_DOCKER_REPO=kind.local`.
- If you encounter an `ImagePullBackOff` error while using Minikube or Kind, it
  may be because the cluster cannot pull locally built images due to image pull
  policies. To resolve this, consider enabling the local registry for Minikube
  or for Kind.
Note: You'll need to be authenticated with your `KO_DOCKER_REPO` before
pushing images. Run `gcloud auth configure-docker` if you are using Google
Container Registry or `docker login` if you are using Docker Hub.
To start your environment you'll need to set the following environment
variable (we recommend adding it to your `.bashrc`):

- `KO_DOCKER_REPO`: The docker repository to which developer images should be
  pushed (e.g. `gcr.io/[gcloud-project]`).
  - Note: if you are using Docker Hub to store your images, your
    `KO_DOCKER_REPO` variable should be `docker.io/<username>`.
  - Note: Currently Docker Hub doesn't let you create subdirs under your
    username.

`.bashrc` example:

```shell
export KO_DOCKER_REPO='gcr.io/my-gcloud-project-id'
```
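Scripts that invoke `ko` can guard against a missing value with a standard parameter-expansion check. A minimal sketch, reusing the example value above:

```shell
# Minimal guard (illustrative): abort early with a clear message if
# KO_DOCKER_REPO is unset or empty before invoking ko.
export KO_DOCKER_REPO='gcr.io/my-gcloud-project-id'  # example value
: "${KO_DOCKER_REPO:?set KO_DOCKER_REPO, e.g. gcr.io/[gcloud-project] or docker.io/<username>}"
echo "developer images will be pushed to ${KO_DOCKER_REPO}"
```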
To check out this repository:
- Create your own fork of this repo
- Clone it to your machine:
```shell
git clone [email protected]:${YOUR_GITHUB_USERNAME}/serving.git
cd serving
git remote add upstream https://github.com/knative/serving.git
git remote set-url --push upstream no_push
```

Adding the `upstream` remote sets you up nicely for regularly
syncing your fork.
Once you reach this point you are ready to do a full build and deploy as described below.
Once you've set up your development environment, stand up Knative Serving. Note that if you already installed Knative to your cluster, redeploying the new version should work fine, but if you run into trouble, you can easily clean your cluster up and try again.
Enter the `serving` directory to install the following components.
Your user must be a cluster-admin to perform the setup needed for Knative. This should be the case by default if you've provisioned your own Kubernetes cluster. In particular, you'll need to be able to create Kubernetes cluster-scoped Namespace, CustomResourceDefinition, ClusterRole, and ClusterRoleBinding objects.
Please allocate sufficient resources for Kubernetes, especially when you run a Kubernetes cluster on your local machine. We recommend allocating at least 6 CPUs and 8G memory assuming a single node Kubernetes installation, and allocating at least 4 CPUs and 8G memory for each node assuming a 3-node Kubernetes installation. Please go back to your cluster setup to reconfigure your Kubernetes cluster in your designated environment, if necessary.
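For example, with Minikube (the flags shown are standard `minikube start` options; adjust the values to your machine), a single-node cluster sized per the recommendation above could be started with:

```shell
# Single-node local cluster with the recommended 6 CPUs and 8G of memory.
minikube start --cpus=6 --memory=8g
```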
- Deploy cert-manager:

  ```shell
  kubectl apply -f ./third_party/cert-manager-latest/cert-manager.yaml
  kubectl wait --for=condition=Established --all crd
  kubectl wait --for=condition=Available -n cert-manager --all deployments
  ```
- This step includes building Knative Serving, creating and pushing developer
  images, and deploying them to your Kubernetes cluster. If you're developing
  locally, set `KO_DOCKER_REPO=ko.local` (or `KO_DOCKER_REPO=kind.local`
  respectively) to avoid needing to push your images to an off-machine registry.

- By default, `ko` will build container images for the architecture of your
  local machine, but if you need to build images for a different platform (OS
  and architecture), you can provide the `--platform` flag as follows:

  ```shell
  # Synopsis
  ko apply -f FILENAME [flags]

  # Usage
  ko apply --selector knative.dev/crd-install=true -Rf config/core/ --platform linux/arm64
  ```
Run:

```shell
ko apply --selector knative.dev/crd-install=true -Rf config/core/
kubectl wait --for=condition=Established --all crd
ko apply -Rf config/core/

# Optional steps

# Run post-install job to set up a nice sslip.io domain name. This only works
# if your Kubernetes LoadBalancer has an IPv4 address.
ko delete -f config/post-install/default-domain.yaml --ignore-not-found
ko apply -f config/post-install/default-domain.yaml
```
The above step is equivalent to applying the `serving-crds.yaml`,
`serving-core.yaml`, `serving-hpa.yaml`, and `serving-nscert.yaml` for released
versions of Knative Serving.
You can see things running with:
```shell
kubectl -n knative-serving get pods
```

```
NAME                              READY   STATUS    RESTARTS   AGE
activator-7454cd659f-rrz86        1/1     Running   0          105s
autoscaler-58cbfd4985-fl5h7       1/1     Running   0          105s
autoscaler-hpa-77964b9b8c-9sbgq   1/1     Running   0          105s
controller-847b7cc977-5mvvq       1/1     Running   0          105s
webhook-6b6c77567f-flr59          1/1     Running   0          105s
```
You can access the Knative Serving Controller's logs with:
```shell
kubectl -n knative-serving logs $(kubectl -n knative-serving get pods -l app=controller -o name) -c controller
```
If you're using a GCP project to host your Kubernetes cluster, it's good to check the Discovery & load balancing page to ensure that all services are up and running (and not blocked by a quota issue, for example).
Knative supports a variety of Ingress solutions.
For simplicity, you can just run the following command to install Kourier.
```shell
kubectl apply -f ./third_party/kourier-latest/kourier.yaml

kubectl patch configmap/config-network \
  -n knative-serving \
  --type merge \
  -p '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'
```
If you want to choose another Ingress solution, follow the instructions in the Knative installation doc to pick an alternative Ingress solution and install it.
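Once an Ingress is installed, you can optionally smoke-test the deployment with a minimal Knative Service. The service name `hello` is arbitrary and the sample image is the commonly documented Knative hello-world sample; substitute your own image if you prefer:

```shell
# Deploy a minimal Knative Service and wait for it to become Ready.
cat <<EOF | kubectl apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Knative dev setup"
EOF

kubectl wait ksvc hello --for=condition=Ready --timeout=120s
kubectl get ksvc hello   # the URL column shows the service's external address
```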
As you make changes to the code-base, there are several special cases to be aware of:
- If you change an input to generated code, then you must run
  `./hack/update-codegen.sh`. Inputs include:
  - API type definitions in pkg/apis/serving/v1/.
  - Type definitions annotated with `// +k8s:deepcopy-gen=true`.
  - The `_example` value of config maps (to keep the
    `knative.dev/example-checksum` annotations in sync). These can also be
    individually updated using `./hack/update-checksums.sh`.
  - `.proto` files. Run `./hack/update-codegen.sh` with the
    `--generate-protobufs` flag to enable protocol buffer generation.
- If you change a package's deps (including adding an external dependency),
  then you must run `./hack/update-deps.sh`.

- If you change the surface area of `PodSpec` that we allow in our resources,
  then you must update the relevant section of `./hack/schemapatch-config.yaml`
  and run `./hack/update-schemas.sh`. Additionally:
  - If the new field is added behind a feature flag, then add the
    `kubebuilder:validation:DropProperties` and
    `kubebuilder:pruning:PreserveUnknownFields` as `additionalMarkers`:

    ```yaml
    additionalMarkers:
      # Part of a feature flag - so we want to omit the schema and preserve unknown fields
      - kubebuilder:validation:DropProperties
      - kubebuilder:pruning:PreserveUnknownFields
    ```
These are all idempotent, and we expect that running them at HEAD produces no
diffs. Code generation and dependencies are automatically checked to produce no
diffs for each pull request.

`update-deps.sh` runs the go get/mod commands. In some cases, if newer
dependencies are required, you need to run `go get` manually.
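For example, to bump a single dependency explicitly before re-vendoring (the module path shown, `knative.dev/pkg`, is one of Knative Serving's dependencies; the `@main` target version is illustrative):

```shell
# Update one module explicitly, then re-vendor and regenerate.
go get knative.dev/pkg@main
./hack/update-deps.sh
./hack/update-codegen.sh
```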
Once the codegen, dependency, and schema information is correct, redeploying the controller is simply:
```shell
ko apply -f config/core/deployments/controller.yaml
```
Or you can clean your cluster up completely and redeploy Knative Serving.
To update existing dependencies, execute:

```shell
./hack/update-deps.sh --upgrade && ./hack/update-codegen.sh
```
You can delete all of the serving components with:
```shell
ko delete --ignore-not-found=true \
  -Rf config/core/ \
  -f ./third_party/kourier-latest/kourier.yaml \
  -f ./third_party/cert-manager-latest/cert-manager.yaml
```