
Enable compliance tests to use plugins for cluster provisioning #753

Open · wants to merge 36 commits into base: main

Conversation

@tonifinger (Contributor)

This PR introduces an interface that enables the provisioning of kubeconfig files for the sonobuoy test framework. Each Kubernetes provider must derive its own specific plugin from this interface class in order to provide a cluster for testing.
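A minimal sketch of what such an interface class could look like. The `create_cluster`/`delete_cluster` method names are taken from later commits in this PR; everything else (class name, parameters) is illustrative, and the actual definition lives in Tests/kaas/plugin/interface.py:

```python
# Hypothetical sketch of the provisioning plugin interface; the real code in
# Tests/kaas/plugin/interface.py may differ in detail.

class KubernetesClusterPlugin:
    """Base class: each Kubernetes provider derives its own plugin from it."""

    def __init__(self, config):
        # provider-specific configuration, e.g. a kind cluster config file
        self.config = config

    def create_cluster(self, cluster_name, version, kubeconfig_filepath):
        # Provision a cluster and write its kubeconfig to the given path.
        raise NotImplementedError

    def delete_cluster(self, cluster_name):
        # Tear the cluster down again after the tests have run.
        raise NotImplementedError
```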

@tonifinger tonifinger self-assigned this Sep 18, 2024
@tonifinger tonifinger force-pushed the 710-feature-request-enable-compliance-tests-to-use-plugins-for-cluster-provisioning branch 2 times, most recently from d4b2b66 to bfd1a0f Compare September 18, 2024 13:57
@mbuechse
Copy link
Contributor

@tonifinger Can you please post what a run would look like in the shell? (Just paste something from your terminal.)

@mbuechse (Contributor) left a comment

I think it would be good to use the test infrastructure that we already have, i.e., the scripts and the spec files. The spec file for KaaS already includes the CNCF conformance test. What I think we could do:

  • We make a list of clusters that we need for testing purposes, and for each cluster we say where we expect the kubeconfig to sit;
  • your tool can then use the plugin to create all these clusters and the corresponding kubeconfig files at the specified locations;
  • it could also use the plugin and this specification to remove the clusters;
  • we add calls to your tool to our Zuul job BEFORE and AFTER it runs the actual tests, to create and remove clusters, respectively.
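The proposed flow can be sketched roughly as follows; this is a hypothetical illustration (cluster names, kubeconfig paths, and function names are made up), assuming a plugin object with `create_cluster`/`delete_cluster` methods:

```python
# Hypothetical driver sketch for the proposed flow: a list of clusters with
# their expected kubeconfig locations, provisioned before the test run and
# removed again afterwards.

clusters = [
    {"name": "scs-cluster-a", "kubeconfig": ".pytest-kind/scs-cluster-a/kubeconfig"},
    {"name": "scs-cluster-b", "kubeconfig": ".pytest-kind/scs-cluster-b/kubeconfig"},
]

def provision_all(plugin, clusters):
    """Use the plugin to create every listed cluster and its kubeconfig."""
    for c in clusters:
        plugin.create_cluster(c["name"], c["kubeconfig"])

def unprovision_all(plugin, clusters):
    """Use the plugin to remove all listed clusters again."""
    for c in clusters:
        plugin.delete_cluster(c["name"])
```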

@tonifinger (Contributor, Author)

@tonifinger Can you please post what a run would look like in the shell? (Just paste something from your terminal.)

This is the output to the shell using the kind plugin, with the logging level set to INFO:

No kind clusters found.
INFO:root:Creating cluster scs-cluster..
Creating cluster "scs-cluster" ...
 ✓ Ensuring node image (kindest/node:v1.25.3) 🖼
 ✓ Preparing nodes 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
Set kubectl context to "kind-scs-cluster"
You can now use your cluster with:

kubectl cluster-info --context kind-scs-cluster --kubeconfig .pytest-kind/scs-cluster/kubeconfig

Have a nice day! 👋
INFO:interface:check kubeconfig
INFO:interface:kubeconfigfile loaded successfully
Sonobuoy Version: v0.56.16
MinimumKubeVersion: 1.17.0
MaximumKubeVersion: 1.99.99
GitSHA: c7712478228e3b50a225783119fee1286b5104af
GoVersion: go1.19.10
Platform: linux/amd64
API Version:  v1.25.3
INFO:interface: invoke cncf conformance test
INFO[0000] create request issued                         name=sonobuoy namespace= resource=namespaces
INFO[0000] create request issued                         name=sonobuoy-serviceaccount namespace=sonobuoy resource=serviceaccounts
INFO[0000] create request issued                         name=sonobuoy-serviceaccount-sonobuoy namespace= resource=clusterrolebindings
INFO[0000] create request issued                         name=sonobuoy-serviceaccount-sonobuoy namespace= resource=clusterroles
INFO[0000] create request issued                         name=sonobuoy-config-cm namespace=sonobuoy resource=configmaps
INFO[0000] create request issued                         name=sonobuoy-plugins-cm namespace=sonobuoy resource=configmaps
INFO[0000] create request issued                         name=sonobuoy namespace=sonobuoy resource=pods
INFO[0000] create request issued                         name=sonobuoy-aggregator namespace=sonobuoy resource=services
14:08:41          PLUGIN                        NODE    STATUS   RESULT   PROGRESS
14:08:41    systemd-logs   scs-cluster-control-plane   running                    
14:08:41             e2e                      global   running                    
14:08:41 
14:08:41 Sonobuoy is still running. Runs can take 60 minutes or more depending on cluster and plugin configuration.
...
...
14:09:41    systemd-logs   scs-cluster-control-plane   complete                    
...
14:10:21    systemd-logs   scs-cluster-control-plane   complete   passed                         
14:10:21             e2e                      global   complete   passed   Passed:960, Failed:  0
14:10:21 Sonobuoy has completed. Use `sonobuoy retrieve` to get results.
INFO:interface: 1094 passed, 5976 failed of which 5976 were skipped
INFO:interface:removing sonobuoy tests from cluster
INFO[0000] delete request issued                         kind=namespace namespace=sonobuoy
INFO[0000] delete request issued                         kind=clusterrolebindings
INFO[0000] delete request issued                         kind=clusterroles

Namespace "sonobuoy" has status {Phase:Terminating Conditions:[]}

Namespace "sonobuoy" has status {Phase:Terminating Conditions:[{Type:NamespaceDeletionDiscoveryFailure Status:False LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:ResourcesDiscovered Message:All resources successfully discovered} {Type:NamespaceDeletionGroupVersionParsingFailure Status:False LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:ParsedGroupVersions Message:All legacy kube types successfully parsed} {Type:NamespaceDeletionContentFailure Status:False LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:ContentDeleted Message:All content successfully deleted, may be waiting on finalization} {Type:NamespaceContentRemaining Status:True LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:SomeResourcesRemain Message:Some resources are remaining: pods. has 2 resource instances} {Type:NamespaceFinalizersRemaining Status:False LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:ContentHasNoFinalizers Message:All content-preserving finalizers finished}]}

Namespace "sonobuoy" has status {Phase:Terminating Conditions:[{Type:NamespaceDeletionDiscoveryFailure Status:False LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:ResourcesDiscovered Message:All resources successfully discovered} {Type:NamespaceDeletionGroupVersionParsingFailure Status:False LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:ParsedGroupVersions Message:All legacy kube types successfully parsed} {Type:NamespaceDeletionContentFailure Status:False LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:ContentDeleted Message:All content successfully deleted, may be waiting on finalization} {Type:NamespaceContentRemaining Status:True LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:SomeResourcesRemain Message:Some resources are remaining: pods. has 1 resource instances} {Type:NamespaceFinalizersRemaining Status:False LastTransitionTime:2024-09-19 14:10:26 +0200 CEST Reason:ContentHasNoFinalizers Message:All content-preserving finalizers finished}]}
...
...
...

Namespace "sonobuoy" has been deleted

Deleted all ClusterRoles and ClusterRoleBindings.
INFO:interface:removing sonobuoy tests from cluster
INFO[0000] already deleted                               kind=namespace namespace=sonobuoy
INFO[0000] delete request issued                         kind=clusterrolebindings
INFO[0000] delete request issued                         kind=clusterroles

Namespace "sonobuoy" has been deleted

Deleted all ClusterRoles and ClusterRoleBindings.
INFO:root:Deleting cluster scs-cluster..
Deleting cluster "scs-cluster" ...

And apply static plugin

Signed-off-by: Toni Finger <[email protected]>
Signed-off-by: Toni Finger <[email protected]>
@tonifinger tonifinger force-pushed the 710-feature-request-enable-compliance-tests-to-use-plugins-for-cluster-provisioning branch from a42542d to 199fdf2 Compare September 20, 2024 12:10

mbuechse commented Oct 7, 2024

Please re-request review from me when you've reached that point.

Splitting the functionality into the handling of sonobuoys and the provision of K8s clusters.

Signed-off-by: Toni Finger <[email protected]>

tonifinger commented Oct 10, 2024

Please re-request review from me when you've reached that point.

I have reached a point where you can test the first approach by using scs-test-runner.py to run the KaaS tests on self-provisioned k8s clusters.

You should be able to test this with the following command:

./scs-test-runner.py --config ./config-kaas-example.toml --debug run --preset="all-kaas" -o report.yaml --no-upload 

This PR is still in draft as I have to:

  • Handle configurations to allow selecting cluster versions on the plugin side
  • Handle generic configuration handover
  • Add the function to delete clusters to the cleanup command
  • Finalize static_plugin to handle existing k8s clusters
  • Add integration tests

Resolved (outdated) review threads on: Tests/config-kaas-example.toml (2), Tests/kaas/plugin/interface.py (3), Tests/scs-test-runner.py (5)
tonifinger and others added 5 commits October 18, 2024 11:36
Signed-off-by: Toni Finger <[email protected]>
Signed-off-by: Matthias Büchse <[email protected]>
Rework rebase conflicts

Signed-off-by: Toni Finger <[email protected]>

mbuechse commented Nov 1, 2024

Just a quick update on ongoing work:

  • we decided that we create a 'subject' for each combination of CSP and k8s branch;
  • we decided that each such subject must be equipped with the additional information necessary for the plugin in question to work (for instance, if the CSP uses Gardener, then we might need some sort of "shoot cluster manifest");
  • if this additional information is added to this repository at all, it would either go into .zuul.d/secure.yaml or somewhere under playbooks (and then be referenced by config.toml).

In order to have one clusterspec file for each k8s version

Signed-off-by: Toni Finger <[email protected]>
Signed-off-by: Toni Finger <[email protected]>
@mbuechse (Contributor) left a comment

Just a few remarks

Resolved (outdated) review threads on: Tests/config.toml, Tests/kaas/clusterspec.yaml (2), Tests/scs-compatible-kaas.yaml
Signed-off-by: Toni Finger <[email protected]>
@mbuechse (Contributor) left a comment

See my minor remarks. Apart from that, it does look good. Now it depends on thorough testing. And of course, we should merge the CAPI plugin before we can go ahead.

tonifinger and others added 7 commits November 5, 2024 16:04
Co-authored-by: Matthias Büchse <[email protected]>
Signed-off-by: tonifinger <[email protected]>
Co-authored-by: Matthias Büchse <[email protected]>
Signed-off-by: tonifinger <[email protected]>
Add plugin_config to delete process as well
Add missing file `kind_cluster_config.yaml` as configuration example

Signed-off-by: Toni Finger <[email protected]>
@tonifinger tonifinger marked this pull request as ready for review November 6, 2024 15:31
@tonifinger (Contributor, Author)

See my minor remarks. Apart from that, it does look good. Now it depends on thorough testing. And of course, we should merge the CAPI plugin before we can go ahead.

The test with the kind plugin gives the following result:

Provisioning process with kind as the plugin:

(venv)@ThinkPad:~/standards/Tests$ ./scs-test-runner.py --config ./config.toml --debug provision --preset="all-kaas"
DEBUG: running provision for subject(s) cspA-current, cspA-current-1, cspA-current-2, num_workers: 4
INFO: Init provider plug-in of type PluginKind
No kind clusters found.
INFO: Creating cluster current-k8s-release-1..
INFO: Init provider plug-in of type PluginKind
INFO: Init provider plug-in of type PluginKind
No kind clusters found.
INFO: Creating cluster current-k8s-release..
No kind clusters found.
INFO: Creating cluster current-k8s-release-2..
Creating cluster "current-k8s-release-2" ...
Creating cluster "current-k8s-release" ...
Creating cluster "current-k8s-release-1" ...
 ✓ Ensuring node image (kindest/node:v1.30.4) 🖼
 ✓ Ensuring node image (kindest/node:v1.29.8) 🖼
 ✓ Ensuring node image (kindest/node:v1.31.1) 🖼
 ✓ Preparing nodes 📦 📦  
 ✓ Preparing nodes 📦 📦  
 ✓ Preparing nodes 📦 📦 
 ✓ Writing configuration 📜 
 ✓ Writing configuration 📜 
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌plane 🕹️ 
 ✓ Starting control-plane 🕹️   
 ✓ Installing StorageClass 💾 
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌des 🚜 
 ✓ Installing CNI 🔌des 🚜 💾 
 ✓ Installing StorageClass 💾 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
⠈⠁ Joining worker nodes 🚜 Set kubectl context to "kind-current-k8s-release"
You can now use your cluster with:

kubectl cluster-info --context kind-current-k8s-release --kubeconfig .pytest-kind/current-k8s-release/kubeconfig

Thanks for using kind! 😊
 ✓ Joining worker nodes 🚜 
⠊⠁ Joining worker nodes 🚜 Set kubectl context to "kind-current-k8s-release-1"
You can now use your cluster with:

kubectl cluster-info --context kind-current-k8s-release-1 --kubeconfig .pytest-kind/current-k8s-release-1/kubeconfig

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-current-k8s-release-2"
You can now use your cluster with:

kubectl cluster-info --context kind-current-k8s-release-2 --kubeconfig .pytest-kind/current-k8s-release-2/kubeconfig

Not sure what to do next? 😅  Check out https://kind.sigs.k8s.io/docs/user/quick-start/

(venv)@ThinkPad:~/standards/Tests$ kind get clusters
current-k8s-release
current-k8s-release-1
current-k8s-release-2

Run tests:

(venv)@ThinkPad:~/standards/Tests$ ./scs-test-runner.py --config ./config.toml --debug run --preset="all-kaas" --monitor-url localhost -o REPORT.yaml
DEBUG: running tests for scope(s) scs-compatible-kaas and subject(s) cspA-current, cspA-current-1, cspA-current-2
DEBUG: monitor url: localhost, num_workers: 4, output: REPORT.yaml
INFO: module cncf-k8s-conformance missing checks or test cases
DEBUG: running './kaas/k8s-version-policy/k8s_version_policy.py -k cspA-current-2/kubeconfig.yaml'...
INFO: module cncf-k8s-conformance missing checks or test cases
DEBUG: running './kaas/k8s-version-policy/k8s_version_policy.py -k cspA-current/kubeconfig.yaml'...
INFO: module cncf-k8s-conformance missing checks or test cases
DEBUG: running './kaas/k8s-version-policy/k8s_version_policy.py -k cspA-current-1/kubeconfig.yaml'...

DEBUG: .. rc 1, 0 critical, 1 error
DEBUG: running './kaas/k8s-node-distribution/k8s_node_distribution_check.py -k cspA-current-1/kubeconfig.yaml'...

DEBUG: .. rc 1, 0 critical, 1 error
DEBUG: running './kaas/k8s-node-distribution/k8s_node_distribution_check.py -k cspA-current-2/kubeconfig.yaml'...

DEBUG: .. rc 1, 0 critical, 1 error
DEBUG: running './kaas/k8s-node-distribution/k8s_node_distribution_check.py -k cspA-current/kubeconfig.yaml'...
ERROR: The label for regions doesn't seem to be set for all nodes.
DEBUG: .. rc 2, 0 critical, 1 error
********************************************************************************
cspA-current-2 SCS-compatible KaaS v1 (draft):
- main: FAIL (0 passed, 2 failed, 1 missing)
  - FAILED:
    - version-policy-check:
    - node-distribution-check:
  - MISSING:
    - cncf-k8s-conformance:
ERROR: The label for regions doesn't seem to be set for all nodes.
DEBUG: .. rc 2, 0 critical, 1 error
********************************************************************************
cspA-current SCS-compatible KaaS v1 (draft):
- main: FAIL (0 passed, 2 failed, 1 missing)
  - FAILED:
    - version-policy-check:
    - node-distribution-check:
  - MISSING:
    - cncf-k8s-conformance:
ERROR: The label for regions doesn't seem to be set for all nodes.
DEBUG: .. rc 2, 0 critical, 1 error
********************************************************************************
cspA-current-1 SCS-compatible KaaS v1 (draft):
- main: FAIL (0 passed, 2 failed, 1 missing)
  - FAILED:
    - version-policy-check:
    - node-distribution-check:
  - MISSING:
    - cncf-k8s-conformance:
Traceback (most recent call last):
  File "/home/tf/repos/ci/scs/01_ISSUES/standard_710/Tests/./scs-test-runner.py", line 293, in <module>
    cli(obj=Config())
  File "/home/tf/repos/ci/scs/01_ISSUES/standard_710/venv/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/tf/repos/ci/scs/01_ISSUES/standard_710/venv/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/tf/repos/ci/scs/01_ISSUES/standard_710/venv/lib/python3.10/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/tf/repos/ci/scs/01_ISSUES/standard_710/venv/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/tf/repos/ci/scs/01_ISSUES/standard_710/venv/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/tf/repos/ci/scs/01_ISSUES/standard_710/venv/lib/python3.10/site-packages/click/decorators.py", line 45, in new_func
    return f(get_current_context().obj, *args, **kwargs)
  File "/home/tf/repos/ci/scs/01_ISSUES/standard_710/Tests/./scs-test-runner.py", line 228, in run
    subprocess.run(cfg.build_sign_command(report_yaml_tmp))
  File "/usr/lib/python3.10/subprocess.py", line 503, in run
    with Popen(*popenargs, **kwargs) as process:
  File "/usr/lib/python3.10/subprocess.py", line 971, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/usr/lib/python3.10/subprocess.py", line 1863, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'args'
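For context on the traceback: subprocess raises exactly this FileNotFoundError when the first element of the argument list is not an existing executable, which suggests cfg.build_sign_command() returned an argv whose first element is the literal string 'args'. A minimal reproduction of that failure mode (hypothetical, not the actual build_sign_command code):

```python
import subprocess

# subprocess treats the first element of the argument list as the program to
# execute; if a command builder mistakenly puts the literal string 'args'
# there, Popen fails exactly as in the traceback above.
try:
    subprocess.run(["args", "--flag"])
    failed = False
except FileNotFoundError as exc:
    failed = True
    print(exc)  # [Errno 2] No such file or directory: 'args'
```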

Content of REPORT.yaml:

---
spec:
  uuid: 1fffebe6-fd4b-44d3-a36c-fc58b4bb0180
  name: SCS-compatible KaaS
  url: https://raw.githubusercontent.com/SovereignCloudStack/standards/main/Tests/scs-compatible-kaas.yaml
checked_at: 2024-11-06 16:34:28.794798
reference_date: 2024-11-06
subject: cspA-current
versions:
  v1:
    version-policy-check:
      result: -1
      invocation: c3c96581-2e66-41db-8953-01a388d233ec
    node-distribution-check:
      result: -1
      invocation: 8d0592a5-dd14-45ec-97f9-5debb21b6f51
run:
  uuid: 10489580-03bc-4b20-8bba-d556d82e02e6
  argv:
  - /home/tf/repos/ci/scs/01_ISSUES/standard_710/Tests/./scs-compatible-kaas.yaml
  - --debug
  - -C
  - -o
  - /home/tf/repos/ci/scs/01_ISSUES/standard_710/Tests/tmpuwc7uuew/report-0.yaml
  - -s
  - cspA-current
  - -a
  - os_cloud=cspA-current
  - -a
  - subject_root=cspA-current
  assignment:
    os_cloud: cspA-current
    subject_root: cspA-current
  sections: null
  forced_version: null
  forced_tests: null
  invocations:
    c3c96581-2e66-41db-8953-01a388d233ec:
      id: c3c96581-2e66-41db-8953-01a388d233ec
      cmd: ./kaas/k8s-version-policy/k8s_version_policy.py -k cspA-current/kubeconfig.yaml
      result: 0
      results:
        version-policy-check: -1
      rc: 1
      stdout:
      - 'WARNING: The EOL data in k8s-eol-data.yml isn''t up-to-date.'
      - 'INFO: Checking cluster specified by default context in cspA-current/kubeconfig.yaml.'
      - 'ERROR: The K8s cluster version 1.31.1 of cluster ''kind-current-k8s-release''
        is already EOL.'
      - 'version-policy-check: FAIL'
      stderr: []
      info: 1
      warning: 1
      error: 1
      critical: 0
    8d0592a5-dd14-45ec-97f9-5debb21b6f51:
      id: 8d0592a5-dd14-45ec-97f9-5debb21b6f51
      cmd: ./kaas/k8s-node-distribution/k8s_node_distribution_check.py -k cspA-current/kubeconfig.yaml
      result: 0
      results:
        node-distribution-check: -1
      rc: 2
      stdout:
      - 'node-distribution-check: FAIL'
      stderr:
      - 'ERROR: The label for regions doesn''t seem to be set for all nodes.'
      info: 0
      warning: 0
      error: 1
      critical: 0
---
spec:
  uuid: 1fffebe6-fd4b-44d3-a36c-fc58b4bb0180
  name: SCS-compatible KaaS
  url: https://raw.githubusercontent.com/SovereignCloudStack/standards/main/Tests/scs-compatible-kaas.yaml
checked_at: 2024-11-06 16:34:29.040151
reference_date: 2024-11-06
subject: cspA-current-1
versions:
  v1:
    version-policy-check:
      result: -1
      invocation: 64aa2078-154e-491a-b271-45daf3b2df8a
    node-distribution-check:
      result: -1
      invocation: 19ed2df3-3f70-4edf-b80d-ce2462e08f23
run:
  uuid: 499404b5-1e24-4139-8413-96f235eed565
  argv:
  - /home/tf/repos/ci/scs/01_ISSUES/standard_710/Tests/./scs-compatible-kaas.yaml
  - --debug
  - -C
  - -o
  - /home/tf/repos/ci/scs/01_ISSUES/standard_710/Tests/tmpuwc7uuew/report-1.yaml
  - -s
  - cspA-current-1
  - -a
  - os_cloud=cspA-current-1
  - -a
  - subject_root=cspA-current-1
  assignment:
    os_cloud: cspA-current-1
    subject_root: cspA-current-1
  sections: null
  forced_version: null
  forced_tests: null
  invocations:
    64aa2078-154e-491a-b271-45daf3b2df8a:
      id: 64aa2078-154e-491a-b271-45daf3b2df8a
      cmd: ./kaas/k8s-version-policy/k8s_version_policy.py -k cspA-current-1/kubeconfig.yaml
      result: 0
      results:
        version-policy-check: -1
      rc: 1
      stdout:
      - 'WARNING: The EOL data in k8s-eol-data.yml isn''t up-to-date.'
      - 'INFO: Checking cluster specified by default context in cspA-current-1/kubeconfig.yaml.'
      - 'ERROR: The K8s cluster version 1.30.4 of cluster ''kind-current-k8s-release-1''
        is outdated according to the standard.'
      - 'version-policy-check: FAIL'
      stderr: []
      info: 1
      warning: 1
      error: 1
      critical: 0
    19ed2df3-3f70-4edf-b80d-ce2462e08f23:
      id: 19ed2df3-3f70-4edf-b80d-ce2462e08f23
      cmd: ./kaas/k8s-node-distribution/k8s_node_distribution_check.py -k cspA-current-1/kubeconfig.yaml
      result: 0
      results:
        node-distribution-check: -1
      rc: 2
      stdout:
      - 'node-distribution-check: FAIL'
      stderr:
      - 'ERROR: The label for regions doesn''t seem to be set for all nodes.'
      info: 0
      warning: 0
      error: 1
      critical: 0
---
spec:
  uuid: 1fffebe6-fd4b-44d3-a36c-fc58b4bb0180
  name: SCS-compatible KaaS
  url: https://raw.githubusercontent.com/SovereignCloudStack/standards/main/Tests/scs-compatible-kaas.yaml
checked_at: 2024-11-06 16:34:28.736436
reference_date: 2024-11-06
subject: cspA-current-2
versions:
  v1:
    version-policy-check:
      result: -1
      invocation: 12be6326-2b8d-4ff8-83b3-fb0921077c03
    node-distribution-check:
      result: -1
      invocation: 62c58c79-6b32-4afc-bd2c-9db081061c22
run:
  uuid: b3c83d2c-2730-4954-aa40-fa40950e4f6a
  argv:
  - /home/tf/repos/ci/scs/01_ISSUES/standard_710/Tests/./scs-compatible-kaas.yaml
  - --debug
  - -C
  - -o
  - /home/tf/repos/ci/scs/01_ISSUES/standard_710/Tests/tmpuwc7uuew/report-2.yaml
  - -s
  - cspA-current-2
  - -a
  - os_cloud=cspA-current-2
  - -a
  - subject_root=cspA-current-2
  assignment:
    os_cloud: cspA-current-2
    subject_root: cspA-current-2
  sections: null
  forced_version: null
  forced_tests: null
  invocations:
    12be6326-2b8d-4ff8-83b3-fb0921077c03:
      id: 12be6326-2b8d-4ff8-83b3-fb0921077c03
      cmd: ./kaas/k8s-version-policy/k8s_version_policy.py -k cspA-current-2/kubeconfig.yaml
      result: 0
      results:
        version-policy-check: -1
      rc: 1
      stdout:
      - 'WARNING: The EOL data in k8s-eol-data.yml isn''t up-to-date.'
      - 'INFO: Checking cluster specified by default context in cspA-current-2/kubeconfig.yaml.'
      - 'ERROR: The K8s cluster version 1.29.8 of cluster ''kind-current-k8s-release-2''
        is outdated according to the standard.'
      - 'version-policy-check: FAIL'
      stderr: []
      info: 1
      warning: 1
      error: 1
      critical: 0
    62c58c79-6b32-4afc-bd2c-9db081061c22:
      id: 62c58c79-6b32-4afc-bd2c-9db081061c22
      cmd: ./kaas/k8s-node-distribution/k8s_node_distribution_check.py -k cspA-current-2/kubeconfig.yaml
      result: 0
      results:
        node-distribution-check: -1
      rc: 2
      stdout:
      - 'node-distribution-check: FAIL'
      stderr:
      - 'ERROR: The label for regions doesn''t seem to be set for all nodes.'
      info: 0
      warning: 0
      error: 1
      critical: 0

Unprovision process:

(venv)@ThinkPad:~/standards/Tests$  ./scs-test-runner.py --config ./config.toml --debug unprovision --preset="all-kaas"
DEBUG: running unprovision for subject(s) cspA-current, cspA-current-1, cspA-current-2, num_workers: 4
INFO: Init provider plug-in of type PluginKind
INFO: Deleting cluster current-k8s-release..
INFO: Init provider plug-in of type PluginKind
INFO: Deleting cluster current-k8s-release-2..
INFO: Init provider plug-in of type PluginKind
INFO: Deleting cluster current-k8s-release-1..
Deleting cluster "current-k8s-release" ...
Deleting cluster "current-k8s-release-1" ...
Deleting cluster "current-k8s-release-2" ...
(venv)@ThinkPad:~/standards/Tests$  kind get clusters
No kind clusters found.

@mbuechse (Contributor) left a comment

This is starting to look really good now. A few minor things though.

Resolved (outdated) review threads on: Tests/kaas/plugin/interface.py (2), Tests/kaas/plugin/plugin_kind.py (3), Tests/kaas/plugin/plugin_static.py, Tests/kaas/plugin/run_plugin.py (2)
tonifinger and others added 10 commits November 7, 2024 10:10
Make use of NotImplementedError

Signed-off-by: Toni Finger <[email protected]>
Signed-off-by: Toni Finger <[email protected]>
Co-authored-by: Matthias Büchse <[email protected]>
Signed-off-by: tonifinger <[email protected]>
Removed the return of the kubeconfig filepath from the `create_cluster`
method as we do not use this handling.

Signed-off-by: Toni Finger <[email protected]>
Signed-off-by: Toni Finger <[email protected]>
Remove default values to prevent parameters from being optional

Signed-off-by: Toni Finger <[email protected]>

Successfully merging this pull request may close these issues.

[Feature Request] enable compliance tests to use plugins for cluster provisioning