Merge pull request #354 from jwforres/fix-openshift-casing

Fix all instances of Openshift to OpenShift

openshift-merge-robot authored Jun 1, 2020
2 parents 4cf172b + fa747df commit f681a13
Showing 8 changed files with 29 additions and 29 deletions.
4 changes: 2 additions & 2 deletions enhancements/baremetal/baremetal-provisioning-config.md
@@ -158,7 +158,7 @@ with 2 members:
1. As a Deployment Operator, I want Baremetal IPI deployments to be customizable to
hardware and network requirements.

-2. As an Openshift Administrator, I want Baremetal IPI deployments to take place without
+2. As an OpenShift Administrator, I want Baremetal IPI deployments to take place without
manual workarounds like creating a ConfigMap for the config (which is the current approach
being used in 4.2 and 4.3.)

@@ -208,7 +208,7 @@ metal3 cluster has been up and a few workers have come up successfully.
### Upgrade / Downgrade Strategy

Baremetal Platform type will be available for customers to use for the first
-time in Openshift 4.3. And, when it is installed, it will always start as a
+time in OpenShift 4.3. And, when it is installed, it will always start as a
fresh baremetal installation at least in 4.3. There is no use case where a 4.2
installation would be upgraded to a 4.3 installation with Baremetal Platform
support enabled.
2 changes: 1 addition & 1 deletion enhancements/etcd/disaster-recovery-with-ceo.md
@@ -63,7 +63,7 @@ The proposal is to transfer the ownership of the DR scripts from MCO, remove all
simplify other existing scripts for backing up and restoring the cluster state.

With the simplification of the scripts, different disaster recovery scenarios are documented properly to utilize the
-simplified scripts along with other Openshift utility commands to achieve the recovery of the cluster.
+simplified scripts along with other OpenShift utility commands to achieve the recovery of the cluster.

## Implementation Plan
### Transfer the ownership of the DR files from the MCO to CEO
2 changes: 1 addition & 1 deletion enhancements/machine-api/spot-instances.md
@@ -243,7 +243,7 @@ This risk will be documented and it will be strongly advised that users do not a

#### Spot instances and Autoscaling

-The Kubernetes Cluster Autoscaler, deployed to Openshift clusters, is currently unaware of differences
+The Kubernetes Cluster Autoscaler, deployed to OpenShift clusters, is currently unaware of differences
between Spot instances and on-demand instances.

If, while the cloud provider has no capacity, or the bid price is too low for AWS/Azure,
22 changes: 11 additions & 11 deletions enhancements/machine-config/ignition-spec-dual-support.md
@@ -35,13 +35,13 @@ superseded-by:
## Summary

This enhancement proposal aims to add dual Ignition specification version 2/3
-(Ignition version 0/2) support to Openshift 4.x, which currently only support
+(Ignition version 0/2) support to OpenShift 4.x, which currently only support
Ignition version 0 spec 2 for OS provisioning and machine config updates. We
aim to introduce a method to switch all new and existing clusters to Ignition
spec version 3 at some version of the cluster, which will be performed by the
Machine-Config-Operator (Henceforth MCO). The switching will be non-breaking
for clusters that have no un-translatable configs (see below), and will have
-a grace period for admins to intervene and transition otherwise. The Openshift
+a grace period for admins to intervene and transition otherwise. The OpenShift
installer and underlying control plane/bootstrap configuration, as well as RHEL
CoreOS (Henceforth RHCOS) package version will also be updated.

@@ -67,7 +67,7 @@ All new installs will be on spec 3 only. Existing clusters that have non spec 3
Ignition config machineconfigs will not be allowed to update to the new version.

- RHCOS bootimages switches to only accept Ignition spec 3 configs
-- The Openshift installer is updated to generate spec 3 configs
+- The OpenShift installer is updated to generate spec 3 configs
- Remaining MC* components generate spec 3 only
- MCO enforces that all configs are spec 3 before allowing the CVO to start the update
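
For illustration, a minimal sketch of the same trivial file carried as a spec 2 versus a spec 3 MachineConfig; the names and contents here are hypothetical, and the visible differences are the `ignition.version` value and the dropped `filesystem` field:

```
# Hypothetical spec 2 (Ignition v0.x) MachineConfig
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-example-spec2
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
      - path: /etc/example
        filesystem: root      # required by spec 2
        mode: 0644
        contents:
          source: data:,hello
---
# Hypothetical spec 3 (Ignition v2.x) MachineConfig for the same file
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-example-spec3
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - path: /etc/example    # spec 3 drops the filesystem field
        mode: 0644
        contents:
          source: data:,hello
```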

@@ -111,7 +111,7 @@ OKD and OCP.
- [ ] The MCO gains the ability to manage installer-generated stub master/worker Ignition configs (separate enhancement proposal)
- [ ] Ignition-dracut spec 2 and spec 3 diffs are aligned
- [ ] RHCOS bootimages switches to only accept Ignition spec 3 configs
-- [ ] The Openshift installer is updated to generate spec 3 configs
+- [ ] The OpenShift installer is updated to generate spec 3 configs
- [ ] The MCC gains the ability for users to provide necessary changes to update spec 2 to spec 3
- [ ] MCO enforces that all configs are spec 3 before allowing the CVO to start the update
- [ ] Further tests/docs are added
@@ -253,7 +253,7 @@ Acceptance criteria:
Acceptance criteria:
- Essentially the same as story 1

-** As a user of Openshift, I’d like to install a fresh Ignition spec 3 cluster **
+** As a user of OpenShift, I’d like to install a fresh Ignition spec 3 cluster **

Acceptance criteria:
- The workflow remains the same for an IPI cluster
@@ -325,7 +325,7 @@ Most of the actual configs for master/workers are generated in the MCO via
templates, and served by the MCS. Thus the installer would need to, during
phase 2, generate spec 3 configs. For existing clusters, there exists a need
to update the stub master/worker configs, which today exists as secrets
-unmanaged by any component in Openshift. See
+unmanaged by any component in OpenShift. See
"Managing stub master/worker Ignition configs" section below.

At the time of writing this proposal, there exist FCOS/OKD branches for the
@@ -339,14 +339,14 @@ with this change.

Note: This will also be a separate enhancement proposal.

-Today in Openshift, the installer generates stub Ignition configs for master
+Today in OpenShift, the installer generates stub Ignition configs for master
and worker nodes. These stub configs serve as initial configs given to
RHCOS bootimages for master/worker. They function to tell Ignition that actual
Ignition configs will be served at port 22623, under /config/master or
/config/worker, for Ignition to fetch during its run.

The Ignition stub config is then saved as `master-user-data` and `worker-user-data`
-secrets in Openshift. These stub configs are defined in a MachineSet, e.g.
+secrets in OpenShift. These stub configs are defined in a MachineSet, e.g.

```
userDataSecret:
```
@@ -356,13 +356,13 @@ Which the MAO can interpret to fetch for the machine, when provisioning new
machines.
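
As a rough sketch, a spec 3 stub for workers stored in such a secret might look like the following (hypothetical cluster domain and key layout; the stub simply merges the config served by the MCS at port 22623):

```
# Hypothetical worker-user-data secret carrying a spec 3 stub Ignition config
apiVersion: v1
kind: Secret
metadata:
  name: worker-user-data
  namespace: openshift-machine-api
type: Opaque
stringData:
  userData: |
    {
      "ignition": {
        "version": "3.1.0",
        "config": {
          "merge": [
            {"source": "https://api-int.example.openshift.local:22623/config/worker"}
          ]
        }
      }
    }
```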

The issue today is that after installer creates these secrets, they are
effectively "forgotten". No componenent in Openshift manages these secrets.
effectively "forgotten". No componenent in OpenShift manages these secrets.
The only way to update these secrets would be if a user knows the name, and
manually changes it to another valid Ignition config.

When Ignition spec 3 bootimages come into the picture, there currently exists
no method to create a new MachineSet to reference new Ignition configs to serve
-to these machines. A component of the Openshift system (likely the MCO) thus
+to these machines. A component of the OpenShift system (likely the MCO) thus
needs to create new secrets/update existing secrets to point to new stub
configs with spec 3. The MCS can then serve these spec 3 configs at different
directories at the same port, and it will be up to correctly defined
@@ -406,7 +406,7 @@ There could be many edge cases we have not yet considered. There are other
potential difficulties such as serving the correct Ignition config. See above
section on risks and mitigations.

-Starting from some version of Openshift, likely v4.6, we can remove dual
+Starting from some version of OpenShift, likely v4.6, we can remove dual
support and be fully Ignition spec 3.

Kubernetes 1.16 onwards has support for CRD versioning:
6 changes: 3 additions & 3 deletions enhancements/network/allow-external-ip-overrides.md
@@ -29,7 +29,7 @@ superseded-by:

## Summary

-The Openshift API server provides a set of network admission plugins. One of these plugins is the external IP range checker.
+The OpenShift API server provides a set of network admission plugins. One of these plugins is the external IP range checker.
There is an [externalIPNetworkCIDRs](https://docs.openshift.com/container-platform/3.11/install_config/master_node_configuration.html#master-node-config-network-config "externalipnetworkcidr") parameter
that controls the allowable range of external IPs that a service can have in a cluster.
This enhancement proposal is to modify the external IP range checker admission plugin to allow an admin user with
@@ -66,7 +66,7 @@ specified at the cluster level.

## Proposal

-- As an administrator with [sufficient privilege](#admin-user) on a 4.x Openshift cluster, I want the ability to
+- As an administrator with [sufficient privilege](#admin-user) on a 4.x OpenShift cluster, I want the ability to
specify external IPs which may fall out of the range specified by cluster administrators for services belonging to my app.
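
As a rough sketch of the restriction in question (hypothetical CIDRs and names, assuming the 4.x cluster `Network` configuration API), the cluster-wide policy and a Service requesting an IP outside it could look like:

```
# Hypothetical cluster-level external IP policy
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  externalIP:
    policy:
      allowedCIDRs:
      - 192.0.2.0/24
---
# Hypothetical Service asking for an external IP outside the allowed range;
# this is the request the admission plugin would normally reject
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: my-namespace
spec:
  selector:
    app: my-app
  ports:
  - port: 80
  externalIPs:
  - 198.51.100.10
```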

This proposal is to add an RBAC check in case the external IPs specified for the service don't fall in the range of the
@@ -142,4 +142,4 @@ user has [sufficient privileges](#admin-user).

## Infrastructure Needed [optional]

-Openshift 4.x cluster
+OpenShift 4.x cluster
@@ -82,7 +82,7 @@ Numerous customer complaints have resulted in this issue being highly escalated.
### Goals

1. To enable the OpenShift 4.4 installer to successfully deploy a cluster on an OpenStack cloud that uses self-signed certificates for auth.
-2. Openshift 4.4 can successfully and stably run on OpenStack clusters that use self-signed certificates for auth.
+2. OpenShift 4.4 can successfully and stably run on OpenStack clusters that use self-signed certificates for auth.
3. Backport to 4.3

## Proposal
18 changes: 9 additions & 9 deletions enhancements/storage/csi-driver-install.md
@@ -30,7 +30,7 @@ superseded-by:

## Summary

-We want certain CSI drivers such as AWS, GCE, Cinder, Azure and vSphere to be installable on Openshift, so as
+We want certain CSI drivers such as AWS, GCE, Cinder, Azure and vSphere to be installable on OpenShift, so as
they can be used along-side in-tree drivers and when upstream enables migration flag for these volume types, their
replacement CSI drivers can take over and none of the storage features get affected.

@@ -40,8 +40,8 @@ Upstream Kubernetes is moving towards removing code of in-tree drivers and repla
current expectation is that - all in-tree drivers that depend on cloudprovider should be removed from core Kubernetes by 1.21.
This may not happen all at once and we expect migration for certain drivers to happen sooner.

-This does mean that - Openshift should be prepared to handle such migration. We have to iron out any bugs in driver themselves and
-their interfacing with Openshift. We need a way for users to use the CSI drivers and optionally enable migration from in-tree driver
+This does mean that - OpenShift should be prepared to handle such migration. We have to iron out any bugs in driver themselves and
+their interfacing with OpenShift. We need a way for users to use the CSI drivers and optionally enable migration from in-tree driver
to CSI driver. To support upstream design - we will also need a way for users to disable the migration and keep using in-tree driver, until
in-tree code is finally removed.
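
To illustrate the kind of per-cluster switch this implies, one hedged sketch (assuming the `FeatureGate` configuration API and an upstream migration gate such as `CSIMigrationAWS` are the mechanism used) is:

```
# Hypothetical sketch: opting a cluster into in-tree to CSI migration
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: CustomNoUpgrade
  customNoUpgrade:
    enabled:
    - CSIMigrationAWS
```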

@@ -87,7 +87,7 @@ We propose that - we provide each driver mentioned above as a separate operator
is responsible for its installation and release. The operator is responsible for creating storageclass that the driver provides.

The configuration of CSI driver can be done via OLM UI if required and CSI driver can access cloudprovider credentials
-from Openshift provided sources. The CR that is responsible for driver configuration can be installed by the operator
+from OpenShift provided sources. The CR that is responsible for driver configuration can be installed by the operator
itself optionally or by the user. We expect operator configuration CR to be *cluster-scoped* rather than namespace scoped.

The reason for choosing cluster-scoped CRs are two fold:
@@ -97,7 +97,7 @@ The reason for choosing cluster-scoped CRs are two fold:
User should be able to edit the CR and change log level, managementState and update credentials (if the operator configuration CR is the mechanism by which credentials are delivered to the CSI driver) required for talking to the storage backend.
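
A rough sketch of what such a cluster-scoped configuration CR could look like (the group, kind and field names below are hypothetical placeholders, not a settled API):

```
# Hypothetical cluster-scoped configuration CR for a CSI driver operator
apiVersion: csi.operator.openshift.io/v1alpha1
kind: EBSCSIDriver
metadata:
  name: cluster
spec:
  managementState: Managed
  logLevel: Normal
```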

Installation via OLM however means that, when we want to enable these CSI drivers as default drivers,
-they must be installed by default in Openshift installs. We further propose that -
+they must be installed by default in OpenShift installs. We further propose that -
Cluster Storage Operator (https://github.com/openshift/cluster-storage-operator)
could create subscriptions for these driver operators when drivers have to be installed by default.
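
For example, a Subscription created for a default-installed driver operator might look roughly like this (package, channel and namespace names are assumptions):

```
# Hypothetical OLM Subscription for a CSI driver operator
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: aws-ebs-csi-driver-operator
  namespace: openshift-operators
spec:
  channel: stable
  name: aws-ebs-csi-driver-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
```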

@@ -107,7 +107,7 @@ Expected workflow as optional driver (using EBS as an example):
4. While the operator is installed in a user-provided namespace, the CSI driver should be installed in a namespace pre-defined (for example - `openshift-csi-ebs-driver`) in the operator.
3. EBS CSI driver is installed and it creates the relevant storageclass that users can use to provision CSI EBS volumes.

-When a CSI driver operator is in technical preview, we expect that the operator will be available from a `beta` channel. Moving to a `stable` channel once a driver reaches GA will require Openshift admin to manually change subscribed channel from beta to stable. At this point we expect that, operator in GA state will simply **adopt** the resources(CRs) created by beta version of the operator.
+When a CSI driver operator is in technical preview, we expect that the operator will be available from a `beta` channel. Moving to a `stable` channel once a driver reaches GA will require OpenShift admin to manually change subscribed channel from beta to stable. At this point we expect that, operator in GA state will simply **adopt** the resources(CRs) created by beta version of the operator.


#### Uninstallation of optional CSI driver operator.
@@ -124,7 +124,7 @@ should have a finalizer which will be removed by the operator when csi-driver op

### Expected workflow as default driver:

-When these drivers become mandatory part of Openshift cluster, we need to install them by default. This section in general only applies to drivers which want to be enabled by default in Openshift installation.
+When these drivers become mandatory part of OpenShift cluster, we need to install them by default. This section in general only applies to drivers which want to be enabled by default in OpenShift installation.

1. CVO installs cluster-storage-operator.
2. cluster-storage-operator detects the cloudprovider on which the cluster is running (let's say EBS).
@@ -162,11 +162,11 @@ When a CSI driver is moved from optional to a mandatory one, existing installati
https://docs.openshift.com/container-platform/4.3/operators/olm-restricted-networks.html but index images should make it easier.


-2. We need a way for a CSI driver operator to say version range of Openshift against which it is supported.
+2. We need a way for a CSI driver operator to say version range of OpenShift against which it is supported.

A: This is less of a problem with index images because not all versions of the operator are available from the same source.

-3. Are channel to which user is subscribed to automatically upgraded when Openshift version is bumped? For example: If we install an operator from 4.2 channel on OCP-4.2 and then upgrade to OCP-4.3, is subscription updated to use channel 4.3? Or this should be handled via `skipRange`?
+3. Are channel to which user is subscribed to automatically upgraded when OpenShift version is bumped? For example: If we install an operator from 4.2 channel on OCP-4.2 and then upgrade to OCP-4.3, is subscription updated to use channel 4.3? Or this should be handled via `skipRange`?

A: Channels aren't automatically upgraded on OCP upgrade but we will be using stable and beta channel names rather than version-specific channels. As
proposed above we expect that an operator installed from stable channel will adopt resources created by beta channel.
2 changes: 1 addition & 1 deletion enhancements/storage/manila-csi-driver-operator.md
@@ -43,7 +43,7 @@ This document describes [Manila](https://docs.openstack.org/manila/latest/) [CSI

## Proposal

-Our main goal is to add RWX volume support in Openshift 4 on OpenStack. So we are going to use Manila through the CSI driver available in upstream as a part of [cloud-provider-openstack](https://github.com/kubernetes/cloud-provider-openstack/tree/master/pkg/csi/manila) repo.
+Our main goal is to add RWX volume support in OpenShift 4 on OpenStack. So we are going to use Manila through the CSI driver available in upstream as a part of [cloud-provider-openstack](https://github.com/kubernetes/cloud-provider-openstack/tree/master/pkg/csi/manila) repo.
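
To illustrate the RWX use case, a rough sketch of a claim against a Manila-backed class (the storage class name is an assumption):

```
# Hypothetical PVC requesting a shared (RWX) volume from a Manila-backed StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-manila-nfs
```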

To maintain the lifecycle of the driver we want to implement an operator, that will handle all administrative tasks: deploy, restore, upgrade, healthchecks, and so on.

