Merge pull request #13660 from roosterfish/powerflex_sdc
Storage: Add Dell PowerFlex SDC operation mode
tomponline authored Jul 1, 2024
2 parents 9b0243b + 044e88b commit c148008
Showing 9 changed files with 284 additions and 143 deletions.
2 changes: 2 additions & 0 deletions doc/.custom_wordlist.txt
@@ -126,6 +126,7 @@ macOS
macvlan
manpages
Mbit
MDM
MiB
Mibit
MicroCeph
@@ -199,6 +200,7 @@ runtime
SATA
scalable
scriptlet
SDC
SDN
SDS
SDT
2 changes: 1 addition & 1 deletion doc/config_options.txt
@@ -5467,7 +5467,7 @@ This option is required only if {config:option}`storage-powerflex-pool-conf:powe
:shortdesc: "How volumes are mapped to the local server"
:type: "string"
The mode gets discovered automatically if the system provides the necessary kernel modules.
Currently, only `nvme` is supported.
This can be either `nvme` or `sdc`.
```

```{config:option} powerflex.pool storage-powerflex-pool-conf
18 changes: 12 additions & 6 deletions doc/reference/storage_powerflex.md
@@ -3,11 +3,15 @@

[Dell PowerFlex](https://www.dell.com/en-us/shop/powerflex/sf/powerflex) is a software-defined storage solution from [Dell Technologies](https://www.dell.com/). Among other things it offers the consumption of redundant block storage across the network.

LXD offers access to PowerFlex storage clusters by making use of the NVMe/TCP transport protocol.
LXD offers access to PowerFlex storage clusters using either NVMe/TCP or Dell's Storage Data Client (SDC).
In addition, PowerFlex offers copy-on-write snapshots, thin provisioning and other features.

To use PowerFlex, make sure you have the required kernel modules installed on your host system.
To use PowerFlex with NVMe/TCP, make sure you have the required kernel modules installed on your host system.
On Ubuntu these are `nvme_fabrics` and `nvme_tcp`, which come bundled in the `linux-modules-extra-$(uname -r)` package.
LXD takes care of connecting to the respective subsystem.

When using the SDC, LXD requires it to already be connected to the Dell Metadata Manager (MDM).
Because LXD doesn't set up the SDC itself, follow the official guides from Dell for configuration details.
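
When discovering the mode, LXD prefers NVMe/TCP if the kernel modules are available and only then falls back to SDC. The following is a minimal, standalone Go sketch of such a host-side check: the `nvmeTCPAvailable` helper and the sysfs path are illustrative assumptions (the real driver attempts to load the modules rather than just checking for them); only `goscaleio.DrvCfgIsSDCInstalled` comes from the library this PR pulls in.

```go
package main

import (
	"fmt"
	"os"

	"github.com/dell/goscaleio"
)

// nvmeTCPAvailable reports whether the nvme_tcp kernel module is currently
// loaded, based on the standard sysfs location for loaded modules.
// (Illustrative only; LXD itself tries to load the modules.)
func nvmeTCPAvailable() bool {
	_, err := os.Stat("/sys/module/nvme_tcp")
	return err == nil
}

func main() {
	switch {
	case nvmeTCPAvailable():
		fmt.Println("NVMe/TCP prerequisites found; nvme mode can be used")
	case goscaleio.DrvCfgIsSDCInstalled():
		fmt.Println("SDC kernel module detected; sdc mode can be used")
	default:
		fmt.Println("Neither NVMe/TCP nor SDC prerequisites found")
	}
}
```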

## Terminology

@@ -17,9 +21,11 @@ A *protection domain* contains storage pools, which represent a set of physical
LXD creates its volumes in those storage pools.

You can take a snapshot of any volume in PowerFlex, which will create an independent copy of the parent volume.
PowerFlex volumes get added as a NVMe drive to the respective LXD host the volume got mapped to.
For this, the LXD host connects to one or multiple NVMe {abbr}`SDT (storage data targets)` provided by PowerFlex.
PowerFlex volumes get added as a drive to the respective LXD host the volume is mapped to.
In the case of NVMe/TCP, the LXD host connects to one or more NVMe {abbr}`SDT (storage data targets)` provided by PowerFlex.
Those SDT run as components on the PowerFlex storage layer.
In the case of SDC, the LXD hosts don't set up any connections themselves.
Instead, they rely on the SDC to make the volumes available on the system for consumption.

## `powerflex` driver in LXD

@@ -35,14 +41,14 @@ This driver behaves differently than some of the other drivers in that it provid
As a result and depending on the internal network, storage access might be a bit slower than for local storage.
On the other hand, using remote storage has big advantages in a cluster setup, because all cluster members have access to the same storage pools with the exact same contents, without the need to synchronize storage pools.

When creating a new storage pool using the `powerflex` driver, LXD tries to discover one of the SDT from the given storage pool.
When creating a new storage pool using the `powerflex` driver in `nvme` mode, LXD tries to discover one of the SDT from the given storage pool.
Alternatively, you can specify which SDT to use with {config:option}`storage-powerflex-pool-conf:powerflex.sdt`.
LXD instructs the NVMe initiator to connect to all the other SDT when first connecting to the subsystem.
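
For illustration, the sketch below shows how a pool pinned to a specific mode might be created through the LXD Go client instead of relying on auto-discovery. The pool name, gateway address and configuration values are placeholders, and required options such as credentials are omitted; treat it as a hedged example, not a complete setup.

```go
package main

import (
	lxd "github.com/canonical/lxd/client"
	"github.com/canonical/lxd/shared/api"
)

func main() {
	// Connect to the local LXD daemon over its Unix socket.
	c, err := lxd.ConnectLXDUnix("", nil)
	if err != nil {
		panic(err)
	}

	// Create a PowerFlex-backed pool with the mode set explicitly.
	// Values below are placeholders; additional options depend on your setup.
	err = c.CreateStoragePool(api.StoragePoolsPost{
		Name:   "powerflex-pool",
		Driver: "powerflex",
		StoragePoolPut: api.StoragePoolPut{
			Config: map[string]string{
				"powerflex.mode":      "sdc",
				"powerflex.gateway":   "https://powerflex-gateway.example.com",
				"powerflex.pool":      "sp1",
				"powerflex.user.name": "admin",
			},
		},
	})
	if err != nil {
		panic(err)
	}
}
```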

Due to the way copy-on-write works in PowerFlex, snapshots of any volume don't rely on its parent.
As a result, volume snapshots are fully functional volumes themselves, and it's possible to take additional snapshots from such volume snapshots.
This tree of dependencies is called the *PowerFlex vTree*.
Both volumes and their snapshots get added as standalone NVMe disks to the LXD host.
Both volumes and their snapshots get added as standalone disks to the LXD host.

(storage-powerflex-volume-names)=
### Volume names
1 change: 1 addition & 0 deletions go.mod
@@ -8,6 +8,7 @@ require (
github.com/canonical/go-dqlite v1.21.0
github.com/checkpoint-restore/go-criu/v6 v6.3.0
github.com/checkpoint-restore/go-criu/v7 v7.1.0
github.com/dell/goscaleio v1.14.1
github.com/digitalocean/go-qemu v0.0.0-20230711162256-2e3d0186973e
github.com/digitalocean/go-smbios v0.0.0-20180907143718-390a4f403a8e
github.com/dustinkirkland/golang-petname v0.0.0-20240428194347-eebcea082ee0
2 changes: 2 additions & 0 deletions go.sum
@@ -105,6 +105,8 @@ github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dell/goscaleio v1.14.1 h1:SCHGLoOBKxQZ8EodChOLoIghcyhepbO5MLPRd4YQZ5c=
github.com/dell/goscaleio v1.14.1/go.mod h1:h7SCmReARG/szFWBMQGETGkZObknhS45lQipQbtdmJ8=
github.com/dgryski/go-farm v0.0.0-20200201041132-a6ae2369ad13 h1:fAjc9m62+UWV/WAFKLNi6ZS0675eEUC9y3AlwSbQu1Y=
github.com/dgryski/go-farm v0.0.0-20200201041132-a6ae2369ad13/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
github.com/digitalocean/go-libvirt v0.0.0-20240610184155-f66fb3c0f6d7 h1:KMOLn19gbh7KbPEgu76ZIf/b2CnnYhC2GFLgLiN/YkA=
2 changes: 1 addition & 1 deletion lxd/metadata/configuration.json
@@ -6154,7 +6154,7 @@
{
"powerflex.mode": {
"defaultdesc": "the discovered mode",
"longdesc": "The mode gets discovered automatically if the system provides the necessary kernel modules.\nCurrently, only `nvme` is supported.",
"longdesc": "The mode gets discovered automatically if the system provides the necessary kernel modules.\nThis can be either `nvme` or `sdc`.",
"shortdesc": "How volumes are mapped to the local server",
"type": "string"
}
47 changes: 36 additions & 11 deletions lxd/storage/drivers/driver_powerflex.go
@@ -4,6 +4,8 @@ import (
"fmt"
"strings"

"github.com/dell/goscaleio"

deviceConfig "github.com/canonical/lxd/lxd/device/config"
"github.com/canonical/lxd/lxd/migration"
"github.com/canonical/lxd/lxd/operations"
@@ -18,6 +20,11 @@ const powerFlexDefaultUser = "admin"
// powerFlexDefaultSize represents the default PowerFlex volume size.
const powerFlexDefaultSize = "8GiB"

const (
powerFlexModeNVMe = "nvme"
powerFlexModeSDC = "sdc"
)

var powerFlexLoaded bool
var powerFlexVersion string

@@ -27,6 +34,10 @@ type powerflex struct {
// Holds the low level HTTP client for the PowerFlex API.
// Use powerflex.client() to retrieve the client struct.
httpClient *powerFlexClient

// Holds the SDC GUID of this specific host.
// Use powerflex.getHostGUID() to retrieve the actual value.
sdcGUID string
}
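
The `getHostGUID()` helper referenced in the struct comment is not part of this hunk. A plausible sketch, assuming it lives in the same driver package (reusing its existing imports) and uses goscaleio's `DrvCfgQueryGUID` helper, could look like this:

```go
// Plausible shape of getHostGUID (assumed; the actual implementation is not
// shown in this diff): query the SDC GUID once and cache it on the driver.
func (d *powerflex) getHostGUID() (string, error) {
	if d.sdcGUID == "" {
		// Ask the locally installed SDC for its GUID.
		guid, err := goscaleio.DrvCfgQueryGUID()
		if err != nil {
			return "", fmt.Errorf("Failed to query the SDC GUID: %w", err)
		}

		d.sdcGUID = guid
	}

	return d.sdcGUID, nil
}
```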

// load is used to run one-time action per-driver rather than per-pool.
@@ -87,9 +98,14 @@ func (d *powerflex) FillConfig() error {
d.config["powerflex.user.name"] = powerFlexDefaultUser
}

// Try to discover the PowerFlex operation mode.
// First, check whether the NVMe/TCP kernel modules can be loaded.
// Second, check whether the SDC kernel module is set up.
if d.config["powerflex.mode"] == "" {
if d.loadNVMeModules() {
d.config["powerflex.mode"] = "nvme"
d.config["powerflex.mode"] = powerFlexModeNVMe
} else if goscaleio.DrvCfgIsSDCInstalled() {
d.config["powerflex.mode"] = powerFlexModeSDC
}
}

@@ -120,15 +136,11 @@ func (d *powerflex) Create() error {
return fmt.Errorf("The powerflex.gateway cannot be empty")
}

// Fail if no PowerFlex mode can be discovered.
if d.config["powerflex.mode"] == "" {
return fmt.Errorf("Failed to discover PowerFlex mode")
}

client := d.client()

// Discover one of the storage pools SDS services.
if d.config["powerflex.mode"] == "nvme" {
switch d.config["powerflex.mode"] {
case powerFlexModeNVMe:
// Discover one of the storage pool's SDT services.
if d.config["powerflex.sdt"] == "" {
pool, err := d.resolvePool()
if err != nil {
@@ -150,6 +162,19 @@

d.config["powerflex.sdt"] = relations[0].IPList[0].IP
}

case powerFlexModeSDC:
if d.config["powerflex.sdt"] != "" {
return fmt.Errorf("The powerflex.sdt config key is specific to the NVMe/TCP mode")
}

if !goscaleio.DrvCfgIsSDCInstalled() {
return fmt.Errorf("PowerFlex SDC is not available on the host")
}

default:
// Fail if no PowerFlex mode can be discovered.
return fmt.Errorf("Failed to discover PowerFlex mode")
}

return nil
@@ -209,12 +234,12 @@ func (d *powerflex) Validate(config map[string]string) error {
"powerflex.domain": validate.Optional(validate.IsAny),
// lxdmeta:generate(entities=storage-powerflex; group=pool-conf; key=powerflex.mode)
// The mode gets discovered automatically if the system provides the necessary kernel modules.
// Currently, only `nvme` is supported.
// This can be either `nvme` or `sdc`.
// ---
// type: string
// defaultdesc: the discovered mode
// shortdesc: How volumes are mapped to the local server
"powerflex.mode": validate.Optional(validate.IsOneOf("nvme")),
"powerflex.mode": validate.Optional(validate.IsOneOf("nvme", "sdc")),
// lxdmeta:generate(entities=storage-powerflex; group=pool-conf; key=powerflex.sdt)
//
// ---
@@ -251,7 +276,7 @@ func (d *powerflex) Validate(config map[string]string) error {
// on the other cluster members too. This can be done here since Validate
// gets executed on every cluster member when receiving the cluster
// notification to finally create the pool.
if d.config["powerflex.mode"] == "nvme" && !d.loadNVMeModules() {
if d.config["powerflex.mode"] == powerFlexModeNVMe && !d.loadNVMeModules() {
return fmt.Errorf("NVMe/TCP is not supported")
}
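
To make the effect of the widened validator concrete, here is a small standalone example, assuming the usual behavior of the shared `validate` helpers (`Optional` accepts an empty value, `IsOneOf` restricts non-empty values):

```go
package main

import (
	"fmt"

	"github.com/canonical/lxd/shared/validate"
)

func main() {
	// Same rule as the driver uses for powerflex.mode.
	modeRule := validate.Optional(validate.IsOneOf("nvme", "sdc"))

	fmt.Println(modeRule(""))      // <nil>: empty keeps auto-discovery
	fmt.Println(modeRule("sdc"))   // <nil>: newly accepted by this change
	fmt.Println(modeRule("iscsi")) // error: not one of the allowed values
}
```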

