
Scan kubernetes/kubernetes with govulncheck #95

Open
2 of 3 tasks
Tracked by #3
PushkarJ opened this issue Aug 4, 2023 · 7 comments
Labels
committee/security-response Denotes an issue or PR intended to be handled by the product security committee. sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. sig/release Categorizes an issue or PR as relevant to SIG Release. sig/security Categorizes an issue or PR as relevant to SIG Security.

Comments

@PushkarJ
Member

PushkarJ commented Aug 4, 2023

Background: Today we have scanning implemented using Snyk. It has worked quite well with the addition of some smart optimizations to reduce false positives.

The Go team recently released govulncheck v1.0.0 (https://go.dev/blog/govulncheck). It promises prioritized vulnerability scanning: only CVEs that affect functions the code actually calls are reported. That should give a very low false-positive rate, which matters because vulnerability scan reports are in general notoriously hard to wrangle.

Use cases: We have three real workflows for integrating this type of scanning:

  1. On k/k PRs: Create a diff between the vulnerability scan report from the master branch and the one from the HEAD (current) branch. If the diff is non-empty, fail the pre-merge test. This can run at symbol or module level depending on the context of the PR (see the sketch after this list).
  2. On k/k master, periodically: Run every few hours to get a sense of the vulnerability impact at the tip of the tree.
  3. On k/k release branches: Run every few hours to get a sense of the vulnerability impact on release branches so cherry-picks can be created as needed.
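
As a rough illustration of workflow 1, below is a minimal sketch of a pre-merge check. It assumes govulncheck's -json mode emits finding objects with an osv ID; the jq filter, file paths, and git layout are assumptions for illustration, not a definitive implementation:

#!/usr/bin/env bash
# Sketch: fail a PR if it introduces govulncheck findings not present on master.
# Assumes the PR checkout is the working tree and master is available as origin/master.
set -eu

scan() {
  # Collect the set of OSV IDs reported for called symbols.
  # The .finding.osv path is an assumption about govulncheck's -json schema.
  govulncheck -scan symbol -json ./... | jq -r 'select(.finding != null) | .finding.osv' | sort -u
}

git checkout --quiet origin/master
scan > /tmp/master-findings.txt

git checkout --quiet -            # back to the PR HEAD
scan > /tmp/head-findings.txt

# Findings present at HEAD but not on master fail the pre-merge test.
new=$(comm -13 /tmp/master-findings.txt /tmp/head-findings.txt)
if [[ -n "${new}" ]]; then
  echo "New vulnerabilities introduced by this PR:"
  echo "${new}"
  exit 1
fi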

Tasklist

How it works

Example output on August 4, 2023:

demo$ govulncheck ./...
Using go1.20.6 and govulncheck@v1.0.0 with vulnerability data from https://vuln.go.dev (last modified 2023-08-02 20:33:39 +0000 UTC).

Scanning your code and 1968 packages across 204 dependent modules for known vulnerabilities...


Vulnerability #1: GO-2023-1987
    Large RSA keys can cause high CPU usage in crypto/tls
  More info: https://pkg.go.dev/vuln/GO-2023-1987
  Standard library
    Found in: crypto/tls@go1.20.6
    Fixed in: crypto/tls@go1.20.7
    Example traces found:
      #1: pkg/kubelet/server/server.go:234:24: server.ListenAndServePodResources calls grpc.Server.Serve, which eventually calls tls.Conn.Handshake
      #2: pkg/proxy/healthcheck/proxier_health.go:179:24: healthcheck.proxierHealthServer.Run calls http.Server.Serve, which eventually calls tls.Conn.HandshakeContext
      #3: test/e2e/framework/network/utils.go:1026:25: network.PokeHTTP calls io.ReadAll, which calls tls.Conn.Read
      #4: cmd/kubeadm/app/preflight/checks.go:544:13: preflight.SystemVerificationCheck.Check calls bufio.Writer.Flush, which calls tls.Conn.Write
      #5: test/utils/apiserver/testapiserver.go:73:59: apiserver.writeKubeConfigForWardleServerToKASConnection calls cert.GetServingCertificatesForURL, which eventually calls tls.Dial
      #6: test/e2e/framework/websocket/websocket_util.go:61:29: websocket.OpenWebSocketForURL calls websocket.DialConfig, which eventually calls tls.DialWithDialer
      #7: test/images/agnhost/inclusterclient/main.go:100:24: inclusterclient.debugRt.RoundTrip calls spdy.SpdyRoundTripper.RoundTrip, which eventually calls tls.Dialer.DialContext

=== Informational ===

Found 1 vulnerability in packages that you import, but there are no call
stacks leading to the use of this vulnerability. You may not need to
take any action. See https://pkg.go.dev/golang.org/x/vuln/cmd/govulncheck
for details.

Vulnerability #1: GO-2023-1988
    Improper rendering of text nodes in golang.org/x/net/html
  More info: https://pkg.go.dev/vuln/GO-2023-1988
  Module: golang.org/x/net
    Found in: golang.org/x/net@v0.12.0
    Fixed in: golang.org/x/net@v0.13.0

Your code is affected by 1 vulnerability from the Go standard library.

Some more examples from @liggitt: https://gist.github.com/liggitt/4674c7eb194738989183abf08feb333f
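
For reference, the symbol-level vs. module-level distinction in workflow 1 maps onto govulncheck's -scan flag (assuming the symbol and module scan levels available in govulncheck v1.0.0); the commands below are illustrative:

# Symbol-level scan (the default): reports only vulnerabilities whose affected
# functions are reachable from the scanned code; slower but most precise.
govulncheck -scan symbol ./...

# Module-level scan: reports any known vulnerability in a required module
# version without call-graph analysis; faster, suitable as a quick PR check.
govulncheck -scan module ./...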

Open Questions:

These questions need to be discussed, and a consensus reached, among the Kubernetes SRC, SIG Architecture, SIG Release, and SIG Security:

  1. Do we make the scan results available in the CI output of the tests, assuming the publicly available tool cannot provide any more information than what is already available in the CI script?
  2. Do we create two separate tests, one for triaged but not-yet-merged fixes and one for triaged and merged fixes? (Triaged issues refer to CVE IDs that have been mentioned in GitHub issues in k/k.)
  3. How often does the vulnerability database get updated?
  4. Are there any differences in refresh cycles for vulnerabilities in the Go standard library (e.g. crypto/tls) vs. Go packages on https://pkg.go.dev/ (e.g. golang.org/x/net v0.12.0)?
  5. Is there a definitive set of GOOS and GOARCH values we need to adhere to for scanning, since the scan can give different results depending on these environment variables? (See the sketch after this list.)
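
On question 5, one hedged illustration of pinning target platforms in the periodic jobs: govulncheck honors the standard GOOS and GOARCH environment variables, and the platform list below is only an example, not a decided set:

# Illustrative only: scan each target platform separately, since symbol
# reachability (and therefore findings) can differ per GOOS/GOARCH pair.
# govulncheck exits non-zero when findings exist, hence the '|| true'.
for platform in linux/amd64 linux/arm64 windows/amd64; do
  GOOS="${platform%/*}" GOARCH="${platform#*/}" \
    govulncheck -scan symbol ./... > "findings-${platform//\//-}.txt" || true
done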

Post-script
In case anyone is worried about the above output:

GO-2023-1987 fixed in:

GO-2023-1988 fixed in:

Previous discussions:

/sig security architecture release
/committee security-response

@k8s-ci-robot k8s-ci-robot added sig/security Categorizes an issue or PR as relevant to SIG Security. sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. sig/release Categorizes an issue or PR as relevant to SIG Release. committee/security-response Denotes an issue or PR intended to be handled by the product security committee. labels Aug 4, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 30, 2024
@PushkarJ
Member Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 30, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 29, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 29, 2024
@PushkarJ
Member Author

PushkarJ commented Jun 1, 2024

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jun 1, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 7, 2024
@tabbysable
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 21, 2024