
Scalability Special Interest Group

Responsible for answering scalability-related questions such as: What size clusters should Kubernetes support in the short to medium term? How performant should the control system be at scale? What resource overhead should the Kubernetes control system reasonably consume? For more details about our objectives, please review our Scaling And Performance Goals.

Meetings

Leadership

Chairs

The Chairs of the SIG run operations and processes governing the SIG.

Contact

Subprojects

The following subprojects are owned by sig-scalability:

GitHub Teams

The teams below can be mentioned on issues and PRs in order to get the attention of the right people. Note that the links displaying team membership will only work if you are a member of the org.

The Google Groups contain the archive of GitHub team notifications. Mentioning a team on GitHub will CC its group. If you are not a member of a team, monitor its group for GitHub activity.

| Team Name | Details | Google Groups | Description |
| --- | --- | --- | --- |
| @kubernetes/sig-scalability-api-reviews | link | link | API Changes and Reviews |
| @kubernetes/sig-scalability-bugs | link | link | Bug Triage and Troubleshooting |
| @kubernetes/sig-scalability-feature-requests | link | link | Feature Requests |
| @kubernetes/sig-scalability-misc | link | link | General Discussion |
| @kubernetes/sig-scalability-proposals | link | link | Design Proposals |
| @kubernetes/sig-scalability-pr-reviews | link | link | PR Reviews |
| @kubernetes/sig-scalability-test-failures | link | link | Test Failures and Triage |

Upcoming 2018 Meeting Dates

  • 1/18
  • 2/1
  • 2/15
  • 3/1
  • 3/15
  • 3/29
  • 4/12
  • 4/26
  • 5/10
  • 5/24
  • 6/7
  • 6/21

Scalability SLOs

We officially support two different SLOs:

  1. "API-responsiveness": 99% of all API calls return in less than 1s.

  2. "Pod startup time": 99% of pods (with pre-pulled images) start within 5s.

These SLOs should hold on appropriate hardware up to a 5000-node cluster with 30 pods/node. We eventually want to expand that to 100 pods/node.
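As a rough illustration of how such a percentile-based SLO can be checked, here is a minimal sketch, not the actual measurement code used by the SIG: given a list of observed latencies, it computes the 99th percentile with the nearest-rank method and compares it against the threshold. Function names and parameters are illustrative assumptions.

```python
import math

def percentile(samples, pct):
    """Return the pct-th percentile of samples using the nearest-rank method."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))  # 1-based rank of the percentile
    return ordered[rank - 1]

def meets_slo(latencies_s, pct=99.0, threshold_s=1.0):
    """True if the pct-th percentile latency is below the threshold.

    Defaults model the "API-responsiveness" SLO: 99% of calls under 1s.
    """
    return percentile(latencies_s, pct) < threshold_s

# Example: one slow call out of 101 still satisfies a 99th-percentile SLO.
print(meets_slo([0.1] * 100 + [2.0]))
```

The same helper covers the pod-startup SLO by passing `threshold_s=5.0` with per-pod startup times instead of API latencies.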

For more details on how we measure these SLOs, see: http://blog.kubernetes.io/2015_09_01_archive.html

We are working on refining existing SLOs and defining more for other areas of the system.