The Falcon Operator offers various installation and deployment options for specific Kubernetes distributions. The guides below provide detailed instructions for each case.
Warning
If none of the guides provide installation for your specific Kubernetes distribution, use the Deployment Guide for Generic Kubernetes.
For an optimal experience, use the following preferred guides when installing on specific Kubernetes distributions:
- Deployment Guide for AKS/ACR
- Deployment Guide for EKS/ECR
- Deployment Guide for EKS Fargate
- Deployment Guide for GKE/GCR
- Deployment Guide for OpenShift
- Deployment Guide for Generic Kubernetes
Falcon Operator and sensor management and upgrades are best handled using GitOps methodologies and workflows. Multi-cluster management tools such as Red Hat Advanced Cluster Management for Kubernetes or SUSE Rancher can help when scaling management across multiple clusters from Git workflows. Using GitOps enforces several operational and security best practices in Kubernetes, since Git effectively becomes the cluster's configuration management tool:
- Containers are meant to be immutable: a container should not be modified during its life, meaning no updates, no patches, no configuration changes. Immutable containers ensure deployments are safe and consistently repeatable, and make it easier to roll back an upgrade in case of a problem. If a container is modified or drifts from its original build, this could indicate an attack or compromise.
- Kubernetes expands on container immutability with the concept of immutable infrastructure: changes such as upgrades deploy a new version rather than upgrading in place.
- The latest versions of released components should always be used, which means no more N-1, N-2, etc. for sensor deployments.
- No upgrades should happen outside the configuration management tool.
See the individual deployment guides for commands on how to upgrade the operator.
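For deployments that are not managed through OLM, upgrading generally amounts to re-applying the manifest for the newer release. A minimal sketch, assuming the upstream GitHub releases layout for the operator manifest:
kubectl apply -f https://github.com/crowdstrike/falcon-operator/releases/latest/download/falcon-operator.yaml
In a GitOps workflow, this manifest would be committed to Git and applied by your CD tooling rather than run by hand.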
To effectively deploy and use the Falcon sensor in a Kubernetes environment, the following is recommended for the reasons listed above:
- Copy the CrowdStrike sensor(s) to your own container registry.
- Use Git to store the FalconNodeSensor and/or FalconContainer Kind(s) that reference the sensor image in your internal registry (a minimal example follows this list).
- Always use the latest sensor version as soon as it is released and can be rolled out in your environment.
- As soon as the sensor version changes in Git, a CI/CD pipeline should update the FalconNodeSensor and/or FalconContainer Kind(s), which in turn causes the operator to deploy the updated version to your Kubernetes environments. This is the proper way to handle sensor updates in Kubernetes.
- Upgrades should usually happen in a rolling update manner to ensure the Kubernetes cluster and deployed resources stay accessible and operational.
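As a minimal sketch of the workflow above, the FalconNodeSensor resource below pins the sensor image to an internal registry. The registry host and the [SENSOR VERSION] tag are placeholders, and other required configuration (for example, Falcon API credentials or pull secrets) is omitted for brevity; consult the deployment guides for the full spec:
apiVersion: falcon.crowdstrike.com/v1alpha1
kind: FalconNodeSensor
metadata:
  name: falcon-node-sensor
spec:
  node:
    # Placeholder: mirrored sensor image in your internal registry
    image: registry.example.com/falcon-sensor:[SENSOR VERSION]
Bumping the image tag in Git and letting the CI/CD pipeline apply the change is what triggers the operator to roll out the new sensor version.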
- The operator image is hosted at quay.io/crowdstrike/falcon-operator. If necessary, the operator image itself can be mirrored to your registry of choice, including internally hosted registries.
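If you do mirror the operator image, any OCI-capable copy tool works. A sketch with skopeo, where registry.example.com is a hypothetical internal registry:
skopeo copy \
  docker://quay.io/crowdstrike/falcon-operator:latest \
  docker://registry.example.com/falcon-operator:latest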
- The operator must access your specific Falcon cloud region (api.crowdstrike.com or api.[YOUR CLOUD].crowdstrike.com).
- Depending on whether the image is mirrored, the operator or your nodes may need access to registry.crowdstrike.com.
- If Falcon Cloud is set to autodiscover, the operator may also attempt to reach the Falcon Cloud Region us-1.
- If a proxy is configured, ensure the appropriate connections to Falcon Cloud are allowed; otherwise, the operator or custom resource may not deploy correctly. A quick egress check is sketched after this list.
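To sanity-check egress before deploying, a throwaway pod can attempt a connection to the API endpoint. This sketch assumes the public curlimages/curl image is pullable in your cluster; any HTTP response, even an error status, indicates the endpoint is reachable:
kubectl run egress-test -n falcon-operator --rm -it --restart=Never \
  --image=curlimages/curl -- curl -sv https://api.crowdstrike.com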
To review the logs of Falcon Operator:
kubectl -n falcon-operator logs -f deploy/falcon-operator-controller-manager -c manager
If a cluster-wide nodeSelector policy is in place, it must be disabled in the namespaces where the sensors are deployed.
For example, on OpenShift:
oc annotate ns falcon-operator openshift.io/node-selector=""
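On generic Kubernetes, the mechanism depends on how the cluster-wide selector is enforced. If it comes from the PodNodeSelector admission plugin, the per-namespace annotation can be cleared the same way (shown here for the falcon-operator namespace; repeat for the sensor namespaces as needed):
kubectl annotate ns falcon-operator scheduler.alpha.kubernetes.io/node-selector=""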
If the Falcon Operator Controller Manager becomes OOMKilled on startup, it could be due to the number and size of resources in the Kubernetes cluster that it has to monitor. The OOMKilled error looks like:
$ kubectl get pods -n falcon-operator
NAME READY STATUS RESTARTS AGE
falcon-operator-controller-manager-77d7b44f96-t6jsr 1/2 OOMKilled 2 (45s ago) 98s
To remediate this problem in an OpenShift cluster, increase the memory limit of the operator by adding the desired resource configuration to the Subscription:
oc edit subscription falcon-operator -n falcon-operator
and add or edit the resource configuration under spec. For example:
spec:
  channel: certified-0.9
  config:
    resources:
      limits:
        cpu: 500m
        memory: 128Mi
      requests:
        cpu: 250m
        memory: 64Mi
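On clusters where the operator is not installed through OLM, there is no Subscription to edit; a comparable remediation, with illustrative values, is to set the resources on the Deployment directly:
kubectl set resources deployment falcon-operator-controller-manager \
  -n falcon-operator -c manager --limits=memory=256Mi --requests=memory=128Mi
Note that OLM would revert direct edits like this, which is why the Subscription-level config is the right place on OpenShift.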