While evaluating Falco on different managed k8s clusters, my team and I observed some unexpected behaviour.
Describe the bug
On an AKS k8s cluster, generated alerts are only intermittently populated with container and k8s information. Only container.id is listed; details such as the container's name and image, and the k8s pod's name and namespace, are missing.
The behaviour is not tied to the k8s namespace where the target pod is running, but the details above are most often missing for containers deployed in the kube-system namespace.
How to reproduce it
Deploy Falco in least-privileged mode in an AKS cluster
Open a shell to a container running in any kube-proxy-* pod in the kube-system namespace
Check the generated Falco alert and notice that attributes like container.name, container.image.repository, k8s.pod.name, and k8s.ns.name are set to null:
{
"hostname": "aks-lhxynh188x-14626312-vmss000014",
"output": "09:58:35.578767314: Notice A shell was spawned in a container with an attached terminal ...",
..."output_fields": {
"container.id": "446c7dbbeac8",
"container.image.repository": null,
"container.image.tag": null,
"container.name": null,
"evt.arg.flags": "EXE_WRITABLE",
"evt.time": 1718877515578767314,
"evt.type": "execve",
"k8s.ns.name": null,
"k8s.pod.name": null,
"proc.cmdline": "sh",
"proc.exepath": "/bin/dash",
"proc.name": "sh",
"proc.pname": "runc",
"proc.tty": 34816,
"user.loginuid": -1,
"user.name": "root",
"user.uid": 0
}
}
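When json_output is enabled, affected alerts can be spotted with a quick grep for null enrichment fields. A minimal sketch, assuming the alert above has been saved (abridged here) to a hypothetical alert.json:

```shell
# alert.json: the alert above saved via Falco's json_output (abridged for brevity)
cat > alert.json <<'EOF'
{"output_fields": {"container.id": "446c7dbbeac8", "container.image.repository": null, "container.image.tag": null, "container.name": null, "k8s.ns.name": null, "k8s.pod.name": null}}
EOF

# List every output field that came back null
grep -o '"[a-z0-9._]*": null' alert.json
```

This prints one line per unresolved field (container.image.repository, container.image.tag, container.name, k8s.ns.name, k8s.pod.name), which makes it easy to confirm that only the runtime-enriched fields are affected while container.id is present.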
Open a shell to the Falco container and install crictl to check whether the container runtime socket can provide the container and k8s related information:
Install curl by running apk add curl
Install crictl by following the install-crictl instructions
Run the command crictl -r /host/run/containerd/containerd.sock inspect {container.id}, replacing the {container.id} placeholder with a valid container identifier
Notice that all container and k8s related information is available in the response. See the output of the crictl inspect command in the Evidences section below.
Expected behaviour
All container and k8s related information should be reflected in the alert generated by the Falco rule to guarantee accurate traceability of the affected k8s pods and containers.
Evidences
Alert generated for an attempt to open a Shell to a running container
Executing the command crictl -r /host/run/containerd/containerd.sock inspect 446c7dbbeac8 inside the Falco container to get information about container 446c7dbbeac8 from the mounted container runtime socket:
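For reference, the pod identity that Falco fails to resolve is present in the CRI labels of the crictl inspect response. A sketch with labels abridged from a typical containerd response (the inspect.json filename and the kube-proxy-abcde pod name are illustrative):

```shell
# Abridged crictl-inspect-style output with illustrative values; the real
# response comes from: crictl -r /host/run/containerd/containerd.sock inspect <id>
cat > inspect.json <<'EOF'
{
  "status": {
    "labels": {
      "io.kubernetes.container.name": "kube-proxy",
      "io.kubernetes.pod.name": "kube-proxy-abcde",
      "io.kubernetes.pod.namespace": "kube-system"
    }
  }
}
EOF

# Extract the labels that map to Falco's container.name / k8s.pod.name / k8s.ns.name
grep -oE '"io\.kubernetes\.(container\.name|pod\.name|pod\.namespace)": "[^"]*"' inspect.json
```

The three extracted labels show that the runtime socket does expose the container name, pod name, and namespace that are null in the Falco alert.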
We had a similar experience when deploying in GKE in least-privileged mode: when all those detections are disabled, the alerts where k8s.pod.name and k8s.ns.name are set to null disappear.
Environment
Falco deployed on AKS cluster using Falco Helm Chart version 4.4.2.
Falco version:
Falco version: 0.38.0 (x86_64)
System info:
Linux falco-8b7db 5.15.0-1064-azure #73-Ubuntu SMP Tue Apr 30 14:24:24 UTC 2024 x86_64 GNU/Linux
Deployed to the k8s cluster as a DaemonSet using Helm Chart version 4.4.2 with a custom values YAML file