Cluttered JSON output in KH v0.5.2 run as K8s pod #465

Closed
danielpacak opened this issue Jun 26, 2021 · 2 comments
Labels
bug Something isn't working

Comments

danielpacak commented Jun 26, 2021

What happened

I integrated KH with Starboard so that it runs as a K8s pod with JSON output format; we then parse the pod logs. I realised that in v0.5.2 parsing the JSON output fails because of a stack trace printed by KH before the actual JSON content:

2021-06-26 14:08:07,371 ERROR kube_hunter.modules.discovery.kubernetes_client Failed to initiate Kubernetes client
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/kube_hunter/modules/discovery/kubernetes_client.py", line 13, in list_all_k8s_cluster_nodes
    kubernetes.config.load_incluster_config()
  File "/usr/local/lib/python3.8/site-packages/kubernetes/config/incluster_config.py", line 118, in load_incluster_config
    InClusterConfigLoader(
  File "/usr/local/lib/python3.8/site-packages/kubernetes/config/incluster_config.py", line 54, in load_and_set
    self._load_config()
  File "/usr/local/lib/python3.8/site-packages/kubernetes/config/incluster_config.py", line 73, in _load_config
    raise ConfigException("Service token file does not exists.")
kubernetes.config.config_exception.ConfigException: Service token file does not exists.
{"nodes": [], "services": [], "vulnerabilities": [{"location": "Local to Pod (359a3672-3ed9-40df-a3bd-b24c1cf585a6-qr95w)", "vid": "None", "category": "Access Risk", "severity": "low", "vulnerability": "CAP_NET_RAW Enabled", "description": "CAP_NET_RAW is enabled by default for pods.\n    If an attacker manages to compromise a pod,\n    they could potentially take advantage of this capability to perform network\n    attacks on other pods running on the same node", "evidence": "", "avd_reference": "https://avd.aquasec.com/kube-hunter/none/", "hunter": "Pod Capabilities Hunter"}]}

Expected behavior

Error or valid JSON output, not both. Otherwise it's hard to determine whether KH failed or succeeded.
I did not run into this issue with the previous version, v0.4.1.

@danielsagi
Contributor

@danielpacak
That's because of a bug in a new feature we've added.
In any case, from your question I understand that you run kube-hunter with --log error. A better practice, which will also solve this issue, is running with --log none.
That way, even if kube-hunter hits some internal error, it will not mess up the report output.
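
For what it's worth, here is a sketch of how a pod-side integration might invoke kube-hunter with logging disabled so that stdout carries only the JSON report. --log none is the flag suggested above; --pod and --report json are assumed from kube-hunter's CLI, and the subprocess wrapper itself is just illustrative:

import json
import subprocess

# Run kube-hunter from inside the pod with internal logging disabled, so the
# only thing written to stdout is the JSON report.
proc = subprocess.run(
    ["kube-hunter", "--pod", "--report", "json", "--log", "none"],
    capture_output=True,
    text=True,
    check=True,
)
report = json.loads(proc.stdout)
print(len(report.get("vulnerabilities", [])), "vulnerabilities reported")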

@danielpacak
Author

Got it, thanks for the hint. We'll update the log level accordingly.
