Secrets Store CSI driver for Kubernetes secrets - integrates secrets stores with Kubernetes via a Container Storage Interface (CSI) volume.
The Secrets Store CSI driver `secrets-store.csi.k8s.io` allows Kubernetes to mount multiple secrets, keys, and certs stored in enterprise-grade external secrets stores into pods as a volume. Once the volume is attached, the data in it is mounted into the container's file system.
- Mounts secrets/keys/certs to pod using a CSI volume
- Supports CSI Inline volume (Kubernetes version v1.15+)
- Supports mounting multiple secrets store objects as a single volume
- Supports pod identity to restrict access with specific identities (Azure provider only)
- Supports multiple secrets stores as providers. Multiple providers can run in the same cluster simultaneously.
- Supports pod portability with the SecretProviderClass CRD
The diagram below illustrates how Secrets Store CSI Volume works.
Recommended Kubernetes version: v1.16.0+
NOTE: The CSI inline volume feature was introduced in Kubernetes v1.15. Version v1.15.x requires the `CSIInlineVolume` feature gate to be enabled in the cluster. Version v1.16+ does not require any feature gate.

For v1.15.x, enable the `CSIInlineVolume` feature gate:

- Update the API server manifest to append the following feature gate:

```
--feature-gates=CSIInlineVolume=true
```

- Update the kubelet manifest on each node to append the same feature gate:

```
--feature-gates=CSIInlineVolume=true
```
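As a sketch, on a kubeadm-style cluster the API server flag is added to the static pod manifest. The file path and surrounding fields below are assumptions about a typical kubeadm setup; adjust them for your cluster:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (typical kubeadm location; assumed)
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # ...existing flags...
    - --feature-gates=CSIInlineVolume=true   # enable CSI inline volumes on v1.15.x
```

The kubelet takes the same `--feature-gates=CSIInlineVolume=true` flag, appended to its startup arguments on each node.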
Make sure you already have the Helm CLI installed. To install the Secrets Store CSI driver:

```bash
NAMESPACE=dev
helm repo add secrets-store-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/secrets-store-csi-driver/master/charts
helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver --namespace $NAMESPACE
```
Expected output:

```
NAME: csi-secrets-store
NAMESPACE: dev
STATUS: DEPLOYED
RESOURCES:
==> v1/ClusterRole
NAME                        AGE
secretproviderclasses-role  1s

==> v1/ClusterRoleBinding
NAME                               AGE
secretproviderclasses-rolebinding  0s

==> v1/DaemonSet
NAME                                        AGE
csi-secrets-store-secrets-store-csi-driver  0s

==> v1/Pod(related)
NAME                                              AGE
csi-secrets-store-secrets-store-csi-driver-hb8gb  0s
csi-secrets-store-secrets-store-csi-driver-rk7hg  0s

==> v1/ServiceAccount
NAME                      AGE
secrets-store-csi-driver  1s

==> v1beta1/CSIDriver
NAME                      AGE
secrets-store.csi.k8s.io  0s

==> v1beta1/CustomResourceDefinition
NAME                                              AGE
secretproviderclasses.secrets-store.csi.x-k8s.io  1s

NOTES:
The Secrets Store CSI Driver is getting deployed to your cluster.

To verify that Secrets Store CSI Driver has started, run:

  kubectl --namespace=dev get pods -l "app=secrets-store-csi-driver"

Now you can follow these steps https://github.com/kubernetes-sigs/secrets-store-csi-driver#use-the-secrets-store-csi-driver
to create a SecretProviderClass resource, and a deployment using the SecretProviderClass.
```
You can also template this chart locally without Tiller and apply the result using `kubectl`:

```bash
helm template csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver --namespace $NAMESPACE > manifest.yml
kubectl apply -f manifest.yml
```
[ALTERNATIVE DEPLOYMENT OPTION] Using deployment YAMLs:

```bash
kubectl apply -f deploy/rbac-secretproviderclass.yaml # update the namespace of the secrets-store-csi-driver ServiceAccount
kubectl apply -f deploy/csidriver.yaml
kubectl apply -f deploy/secrets-store.csi.x-k8s.io_secretproviderclasses.yaml
kubectl apply -f deploy/secrets-store-csi-driver.yaml --namespace $NAMESPACE

# [OPTIONAL] For Kubernetes version < 1.16, running `kubectl apply -f deploy/csidriver.yaml` will fail. To install the driver, run:
kubectl apply -f deploy/csidriver-1.15.yaml
```
To validate the installer is running as expected, run the following commands:

```bash
kubectl get po --namespace $NAMESPACE
```

You should see the Secrets Store CSI driver pods running on each agent node:

```
csi-secrets-store-qp9r8         2/2     Running   0          4m
csi-secrets-store-zrjt2         2/2     Running   0          4m
```

You should see the following CRDs deployed:

```bash
kubectl get crd
```

```
NAME
secretproviderclasses.secrets-store.csi.x-k8s.io
```
- Select a provider from the list of supported providers and deploy the provider yaml:

```bash
# [REQUIRED FOR AZURE PROVIDER]
kubectl apply -f https://raw.githubusercontent.com/Azure/secrets-store-csi-driver-provider-azure/master/deployment/provider-azure-installer.yaml --namespace $NAMESPACE

# [REQUIRED FOR VAULT PROVIDER]
kubectl apply -f https://raw.githubusercontent.com/hashicorp/secrets-store-csi-driver-provider-vault/master/deployment/provider-vault-installer.yaml --namespace $NAMESPACE
```

You should see the following pods deployed for the provider(s) you selected. For example, for the Azure Key Vault provider:

```
csi-secrets-store-provider-azure-pksfd   2/2     Running   0          4m
csi-secrets-store-provider-azure-sxht2   2/2     Running   0          4m
```

- Create a `SecretProviderClass` resource to provide provider-specific parameters for the Secrets Store CSI driver. Follow the deployment steps for your selected provider to update all required fields; see the example SecretProviderClass:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: azure-kvname
spec:
  provider: azure                   # accepted provider options: azure or vault
  parameters:
    usePodIdentity: "false"         # [OPTIONAL for Azure] if not provided, will default to "false"
    keyvaultName: "kvname"          # the name of the KeyVault
    objects: |
      array:
        - |
          objectName: secret1
          objectType: secret        # object types: secret, key or cert
          objectVersion: ""         # [OPTIONAL] object versions, default to latest if empty
        - |
          objectName: key1
          objectType: key
          objectVersion: ""
    resourceGroup: "rg1"            # the resource group of the KeyVault
    subscriptionId: "subid"         # the subscription ID of the KeyVault
    tenantId: "tid"                 # the tenant ID of the KeyVault
```

- Update your deployment yaml to use the Secrets Store CSI driver and reference the `SecretProviderClass` resource created in the previous step:

```yaml
volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "azure-kvname"
```

- Deploy your resource with the inline CSI volume using the Secrets Store CSI driver:

```bash
kubectl apply -f pkg/providers/azure/examples/nginx-pod-secrets-store-inline-volume-secretproviderclass.yaml
```

- Validate the pod has access to the secret from your secrets store instance:

```bash
kubectl exec -it nginx-secrets-store-inline ls /mnt/secrets-store/
secret1
```
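The steps above fit together in a single pod spec like the following sketch. The pod name matches the example command above; the `nginx` image and the `/mnt/secrets-store` mount path are illustrative assumptions:

```yaml
# Sketch of a pod consuming the inline CSI volume; image and mount path are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-secrets-store-inline
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: secrets-store-inline
      mountPath: /mnt/secrets-store   # secrets appear here as files, e.g. /mnt/secrets-store/secret1
      readOnly: true
  volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "azure-kvname"  # the SecretProviderClass created earlier
```

The application then reads each secret as an ordinary file under the mount path, with no Kubernetes API access required.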
This project features a pluggable provider interface that developers can implement, which defines the actions of the Secrets Store CSI driver. This enables retrieval of sensitive objects stored in an enterprise-grade external secrets store into Kubernetes while continuing to manage these objects outside of Kubernetes.
Here is the list of criteria for supported providers:
- Code audit of the provider implementation to ensure it adheres to the required provider-driver interface, which includes:
- implementation of provider command args https://github.com/kubernetes-sigs/secrets-store-csi-driver/blob/master/pkg/secrets-store/nodeserver.go#L223-L236
- provider binary naming convention and semver convention
- provider binary deployment volume path
- provider logs are written to stdout and stderr so they can be part of the driver logs
- Add the provider to the e2e test suite to demonstrate it functions as expected https://github.com/kubernetes-sigs/secrets-store-csi-driver/tree/master/test/bats Please use existing providers' e2e tests as a reference.
- If any update is made by a provider (not limited to security updates), the provider is expected to update its e2e tests in this repo.
Failure to adhere to the criteria for supported providers will result in the removal of the provider from the supported list, and the provider will be subject to another review before it can be added back to the list of supported providers.
When a provider's e2e tests are consistently failing with the latest version of the driver, the driver maintainers will coordinate with the provider maintainers to provide a fix. If the test failures are not resolved within 4 weeks, then the provider will be removed from the list of supported providers.
Run unit tests locally with `make test`.

End-to-end tests run automatically on Travis CI when a PR is submitted. If you want to run them using a local or remote Kubernetes cluster, make sure to have `kubectl`, `helm` (with `tiller` running on the cluster), and `bats` set up in your local environment, and then run `make e2e`. You can find the steps for setting up your environment in .travis.yml, which uses Kind to set up a cluster.
- To troubleshoot issues with the CSI driver, you can look at logs from the `secrets-store` container of the CSI driver pod running on the same node as your application pod:

```bash
kubectl get pod -o wide
# find the secrets store csi driver pod running on the same node as your application pod
kubectl logs csi-secrets-store-secrets-store-csi-driver-7x44t secrets-store
```
Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.