cnf ran: add ZTP verify BIOS settings test case #204

Merged · 9 commits · Sep 20, 2024
36 changes: 18 additions & 18 deletions tests/cnf/ran/README.md
For the optional inputs listed below, see [default.yaml](internal/ranconfig/defa

Currently, only the TALM tests need more than the first spoke kubeconfig.

* `KUBECONFIG`: Global input that refers to the first spoke cluster.
* `ECO_CNF_RAN_KUBECONFIG_HUB`: For tests that need a hub cluster, this is the path to its kubeconfig.
* `ECO_CNF_RAN_KUBECONFIG_SPOKE2`: For tests that need a second spoke cluster, this is the path to its kubeconfig.

#### BMC credentials

Only the powermanagement and TALM pre-cache tests need BMC credentials.

* `ECO_CNF_RAN_BMC_USERNAME`: Username used for the Redfish API.
* `ECO_CNF_RAN_BMC_PASSWORD`: Password used for the Redfish API.
* `ECO_CNF_RAN_BMC_HOSTS`: IP address (without the leading `https://`) used for the Redfish API. Can be comma separated, but only the first host IP will be used.
* `ECO_CNF_RAN_BMC_TIMEOUT`: Timeout in the form of a Go duration string to use when connecting to the Redfish API. Defaults to 15s which should usually be plenty.

#### Power management inputs

All of these inputs are optional.

* `ECO_CNF_RAN_METRIC_SAMPLING_INTERVAL`: Time between samples when gathering power usage metrics.
* `ECO_CNF_RAN_NO_WORKLOAD_DURATION`: Duration to sample power usage metrics for the no workload scenario.
* `ECO_CNF_RAN_WORKLOAD_DURATION`: Duration to sample power usage metrics for the workload scenario.
* `ECO_CNF_RAN_STRESSNG_TEST_IMAGE`: Container image to use for the workload pods during the workload scenario.
* `ECO_CNF_RAN_TEST_IMAGE`: Container image to use for testing container resource limits.
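As an illustrative sketch (the values below are placeholders, not recommendations), the optional power management inputs could be set like:

```bash
# Hypothetical values for illustration; all of these inputs are optional.
export ECO_CNF_RAN_METRIC_SAMPLING_INTERVAL=30s
export ECO_CNF_RAN_NO_WORKLOAD_DURATION=5m
export ECO_CNF_RAN_WORKLOAD_DURATION=5m
echo "sampling every ${ECO_CNF_RAN_METRIC_SAMPLING_INTERVAL}"
```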

#### TALM pre-cache inputs

These inputs are all specific to the TALM pre-cache tests. They are also all optional.

* `ECO_CNF_RAN_OCP_UPGRADE_UPSTREAM_URL`: URL of upstream upgrade graph.
* `ECO_CNF_RAN_PTP_OPERATOR_NAMESPACE`: Namespace that the PTP operator uses.
* `ECO_CNF_RAN_TALM_PRECACHE_POLICIES`: List of policies to copy for the precache operator tests.

#### ZTP generator inputs

Except for the container namespace hiding tests, a dump of relevant CRs will be

#### Running the container namespace hiding test suite

```bash
# export KUBECONFIG=</path/to/spoke/kubeconfig>
# export ECO_TEST_FEATURES=containernshide
# make run-tests
```

#### Running the power management test suite

```bash
# export KUBECONFIG=</path/to/spoke/kubeconfig>
# export ECO_TEST_FEATURES=powermanagement
# export ECO_CNF_RAN_BMC_USERNAME=<bmc username>
If using more selective labels that do not include the powersaving tests, such a

#### Running the TALM test suite

```bash
# export KUBECONFIG=</path/to/spoke/kubeconfig>
# export ECO_TEST_FEATURES=talm
# export ECO_CNF_RAN_KUBECONFIG_HUB=</path/to/hub/kubeconfig>
3 changes: 3 additions & 0 deletions tests/cnf/ran/gitopsztp/internal/tsparams/consts.go
const (
// LabelSpokeCheckerTests is the label for a particular set of test cases.
LabelSpokeCheckerTests = "ztp-spoke-checker"

// LabelBiosDayZeroTests is the label for a particular set of test cases.
LabelBiosDayZeroTests = "ztp-bios-day-zero"

// MultiClusterHubOperator is the name of the multi cluster hub operator.
MultiClusterHubOperator = "multiclusterhub-operator"
// AcmPolicyGeneratorName is the name of the ACM policy generator container.
141 changes: 141 additions & 0 deletions tests/cnf/ran/gitopsztp/tests/ztp-bios-day-zero.go
package tests

import (
"fmt"

"github.com/golang/glog"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/openshift-kni/eco-goinfra/pkg/bmh"
"github.com/openshift-kni/eco-goinfra/pkg/clients"
"github.com/openshift-kni/eco-goinfra/pkg/hive"
"github.com/openshift-kni/eco-goinfra/pkg/nodes"
"github.com/openshift-kni/eco-goinfra/pkg/reportxml"
"github.com/openshift-kni/eco-gotests/tests/cnf/ran/gitopsztp/internal/tsparams"
. "github.com/openshift-kni/eco-gotests/tests/cnf/ran/internal/raninittools"
"github.com/openshift-kni/eco-gotests/tests/cnf/ran/internal/version"
"github.com/openshift-kni/eco-gotests/tests/internal/cluster"
)

var _ = Describe("ZTP BIOS Configuration Tests", Label(tsparams.LabelBiosDayZeroTests), func() {
var (
spokeClusterName string
nodeNames []string
)

// 75196 - Check if spoke has required BIOS setting values applied
It("Verifies SNO spoke has required BIOS setting values applied", reportxml.ID("75196"), func() {
versionInRange, err := version.IsVersionStringInRange(RANConfig.ZTPVersion, "4.17", "")
Expect(err).ToNot(HaveOccurred(), "Failed to check if ZTP version is in range")

if !versionInRange {
Skip("ZTP BIOS configuration tests require ZTP version of at least 4.17")
}

spokeClusterName, err = GetSpokeClusterName(HubAPIClient, Spoke1APIClient)
Expect(err).ToNot(HaveOccurred(), "Failed to get SNO cluster name")
glog.V(tsparams.LogLevel).Infof("cluster name: %s", spokeClusterName)

nodeNames, err = GetNodeNames(Spoke1APIClient)
Expect(err).ToNot(HaveOccurred(), "Failed to get node names")
glog.V(tsparams.LogLevel).Infof("Node names: %v", nodeNames)

By("getting HFS for spoke")
hfs, err := bmh.PullHFS(HubAPIClient, nodeNames[0], spokeClusterName)
Expect(err).ToNot(
HaveOccurred(),
"Failed to get HFS for spoke %s in cluster %s",
nodeNames[0],
spokeClusterName,
)

hfsObject, err := hfs.Get()
Expect(err).ToNot(
HaveOccurred(),
"Failed to get HFS Obj for spoke %s in cluster %s",
nodeNames[0],
spokeClusterName,
)

By("comparing requested BIOS settings to actual BIOS settings")
hfsRequestedSettings := hfsObject.Spec.Settings
hfsCurrentSettings := hfsObject.Status.Settings

if len(hfsRequestedSettings) == 0 {
Skip("hfs.spec.settings map is empty")
}

Expect(hfsCurrentSettings).ToNot(
BeEmpty(),
"hfs.spec.settings map is not empty, but hfs.status.settings map is empty",
)

allSettingsMatch := true
for param, value := range hfsRequestedSettings {
setting, ok := hfsCurrentSettings[param]
if !ok {
// Params requested in spec but absent from status are logged and skipped, not counted as mismatches.
glog.V(tsparams.LogLevel).Infof("Current settings do not include param %s", param)

continue
}

requestedSetting := value.String()
if requestedSetting == setting {
glog.V(tsparams.LogLevel).Infof("Requested setting matches current: %s=%s", param, setting)
} else {
glog.V(tsparams.LogLevel).Infof(
"Requested setting %s value %s does not match current value %s",
param,
requestedSetting,
setting)
allSettingsMatch = false
}
}

Expect(allSettingsMatch).To(BeTrueBecause("one or more requested settings do not match current settings"))
})

})

// GetSpokeClusterName gets the spoke cluster name as string.
func GetSpokeClusterName(hubAPIClient, spokeAPIClient *clients.Settings) (string, error) {
spokeClusterVersion, err := cluster.GetOCPClusterVersion(spokeAPIClient)
if err != nil {
return "", err
}

spokeClusterID := spokeClusterVersion.Object.Spec.ClusterID

clusterDeployments, err := hive.ListClusterDeploymentsInAllNamespaces(hubAPIClient)
if err != nil {
return "", err
}

for _, clusterDeploymentBuilder := range clusterDeployments {
if clusterDeploymentBuilder.Object.Spec.ClusterMetadata != nil &&
clusterDeploymentBuilder.Object.Spec.ClusterMetadata.ClusterID == string(spokeClusterID) {
return clusterDeploymentBuilder.Object.Spec.ClusterName, nil
}
}

return "", fmt.Errorf("could not find ClusterDeployment from provided API clients")
}

// GetNodeNames gets node names in cluster.
func GetNodeNames(spokeAPIClient *clients.Settings) ([]string, error) {
nodeList, err := nodes.List(spokeAPIClient)

if err != nil {
return nil, err
}

nodeNames := []string{}
for _, node := range nodeList {
nodeNames = append(nodeNames, node.Definition.Name)
}

return nodeNames, nil
}