lvm-localpv + FC + PureStorage #257
Comments
@metbog This seems like a shared VG feature requirement. We had tried shared VG previously but had to shelve the task due to technical roadblocks.
If I understand the requirement correctly, the same PVC is required to be used by applications on two (or more) different nodes, where the underlying PV is a shared VG managed by LVM, and assuming the lock managers required for a shared VG are up and running on all worker nodes.
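For context, a shared VG of the kind described here is usually managed by lvmlockd with a lock manager running on every worker node. A minimal sketch of that setup, assuming sanlock as the lock manager and the device/VG names used later in this issue (/dev/mapper/sharevolume, vg1):

```sh
# On every worker node: enable lvmlockd by setting "use_lvmlockd = 1" in the
# global section of /etc/lvm/lvm.conf, then start the lock daemons.
sudo systemctl enable --now lvmlockd sanlock

# On one node: create the shared VG on the FC-attached multipath device.
sudo vgcreate --shared vg1 /dev/mapper/sharevolume

# On every node that will use the VG: start its lockspace.
sudo vgchange --lockstart vg1
```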
@abhilashshetty04 what was the technical roadblock that we identified?
@orville-wright, we had a previous employee of OpenEBS who attempted this feature. AFAIK, there were some roadblocks due to kernel semaphore dependencies. This is the PR for your reference.
Is there another way to achieve this with OpenEBS? For instance, VMware uses VMFS to reattach volumes or disks between VMs. I would like to find a way to use shared storage between nodes, and the only solution I've found so far involves replication. Is there an alternative approach?
Hi @m-czarnik-exa - I run Product Mgmt for openEBS. openEBS is primarily designed as a hyper-converged vSAN system. This means that...
Operations: there are ways to do things that are close to what you are asking for. You mention VMFS, VMware vSAN and our openEBS.
For openEBS, we have 5 Storage Engines that the user can choose to deploy. Each has different characteristics and a different backend Block Allocator kernel. In all of the above... I am referring to openEBS Mayastor (see attached pics) - not openEBS Local-PV LVM, because Mayastor is the only Storage Engine that...
openEBS Local-PV LVM utilizes the...
First of all, thank you for this elaborate explanation :)
Basically, what I'm trying to achieve is a migration from VMware CSI to open-source virtualization platforms like Proxmox/OpenNebula etc., or possibly a bare-metal solution. The setup I'm trying to configure (sorry for the simplifications, but I don't have deep expertise in the field of storage) is to connect, let's say, three k8s nodes to SAN storage over FC or iSCSI (to allocate block storage for these nodes) and be able to attach PVs created on one node to another node (VMware CSI simply reattaches the VMDK from one VM to another when a pod with the PVC starts on another node). Taking into account that the SAN storage has RAID configured and that I would like to use Velero for backups, I don't need to replicate data from one node to another, because that would affect performance. What I'm looking for is a CSI driver that handles shared storage from a disk array between nodes and is simultaneously as fast as possible.
@m-czarnik-exa Today it isn't possible for a PV (Persistent Volume) to be used by multiple nodes (or by a node other than the one where the PV was created). With the LVM-localPV engine, a PV represents the LVM logical volume created on that node. There is a slightly similar use case of using an LVM VG as shared so that multiple nodes can create PVs on the same VG. This isn't complete yet, as discussed and designed here: #134
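The node pinning described above is visible on the provisioned PV itself. A quick way to confirm it (the PV name is a placeholder, and the exact topology key may vary by driver version):

```sh
# Inspect the node affinity that lvm-localpv places on the PV; pods scheduled
# to other workers cannot satisfy this required term.
kubectl get pv <pv-name> -o jsonpath='{.spec.nodeAffinity}'
# Typically shows a required nodeSelectorTerm on the "openebs.io/nodename" key.
```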
This requirement needs LVM shared VG support; it will be tracked after #134.
Hi there,
We have a Kubernetes cluster (k8s) with BareMetal (BM) workers. These BM workers are connected via Fibre Channel (FC) to PureStorage FA. Our goal is to create a shared volume for our BM workers and use it with lvm-localpv.
PureStorage -> (BM1, BM2) -> /dev/mapper/sharevolume (attached to each BM worker via FC) -> PV -> VG1
Here is the StorageClass:
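(A representative sketch, since the exact manifest may vary; the class name below is assumed, and `volgroup` matches the VG1 from the topology above.)

```sh
# Apply a Local-PV LVM StorageClass that provisions volumes from the shared VG.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv-shared   # assumed name
provisioner: local.csi.openebs.io
allowVolumeExpansion: true
parameters:
  storage: "lvm"
  volgroup: "vg1"
EOF
```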
One idea is to make it possible to reattach the LVM volume to any of the BM workers, because currently a Persistent Volume is bound to the one worker where it was originally created. This limitation prevents pods from starting on other workers.
Is it possible to achieve this? Perhaps there is already a solution available for this issue?