Workload/tooling | Short Description | Minimum Requirements |
---|---|---|
Tooling | Setup pbench instrumentation tools | Cluster-admin, Privileged Containers |
Test | Test/Run your workload from ssh Container | Cluster-admin, Privileged Containers |
Baseline | Baseline metrics capture | Tooling job* |
Scale | Scales worker nodes | Cluster-admin |
NodeVertical | Node Kubelet Density | Labeling Nodes |
PodVertical | Max Pod Density | None |
MasterVertical | Master Node Stress workload | None |
HTTP | HTTP ingress TPS/Latency | None |
Network | TCP/UDP Throughput/Latency | Labeling Nodes, See below |
Deployments Per Namespace | Maximum Deployments | None |
PVCscale | PVCScale test | Working storageclass |
Conformance | OCP/Kubernetes e2e tests | None |
Namespaces per cluster | Maximum Namespaces | None |
Services per namespace | Maximum services per namespace | None |
FIO I/O test | FIO I/O test - stress storage backend | Privileged Containers, Working storage class |
\* A Baseline job on an untooled cluster simply idles the cluster. The goal is to capture resource consumption over a period of time in order to characterize resource requirements, so the Tooling job must be run first. (For now)
The Network workload's requirements depend on the test being run:

Test | Requirement |
---|---|
Pod to Pod | Labeling Nodes |
Pod to Pod with HostNetwork | Labeling Nodes, Open firewall ports at Host |
Pod to Service | Labeling Nodes |
See this page for workload contributing guidelines.
Each workload implements a form of pass/fail criteria so that test failures are flagged in CI.
Workload/tooling | Pass/Fail |
---|---|
Tooling | NA |
Test | NA |
Baseline | NA |
Scale | Yes: Test Duration |
NodeVertical | Yes: Exit Code, Test Duration |
PodVertical | Yes: Exit Code, Test Duration |
MasterVertical | Yes: Exit Code, Test Duration |
HTTP | No |
Network | No |
Deployments Per Namespace | No |
PVCscale | No |
Conformance | No |
Namespaces per cluster | Yes: Exit code, Test Duration |
Services per namespace | Yes: Exit code, Test Duration |
FIO I/O test | No |
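The Exit Code and Test Duration criteria in the table above can be sketched roughly as follows. This is a hypothetical illustration, not the actual CI tooling: `run_workload` and the `MAX_DURATION` threshold are placeholder assumptions.

```shell
#!/usr/bin/env bash
# Hypothetical pass/fail gate for a workload run in CI (illustrative sketch).
# run_workload is a placeholder for the real workload launcher;
# MAX_DURATION is an assumed per-workload threshold, in seconds.
MAX_DURATION=3600

run_workload() {
    # Placeholder: the real tooling would launch the workload here.
    true
}

start=$(date +%s)
run_workload
exit_code=$?
duration=$(( $(date +%s) - start ))

# Exit Code criterion: any non-zero exit fails the CI job.
if [ "$exit_code" -ne 0 ]; then
    echo "FAIL: workload exited with code $exit_code"
    exit 1
fi
# Test Duration criterion: exceeding the threshold also fails.
if [ "$duration" -gt "$MAX_DURATION" ]; then
    echo "FAIL: workload took ${duration}s (limit ${MAX_DURATION}s)"
    exit 1
fi
echo "PASS: workload completed in ${duration}s"
```

Workloads marked "No" above skip this gate entirely; workloads marked "Yes" apply one or both checks.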