Dynamically create file systems #310
This would be useful for more isolation than what access points provide.
/remove-lifecycle rotten
Am I correct that this would allow the same user experience as with EBS?
The hard limit of 120 access points can be extremely limiting in Kubernetes, since it essentially means you're limited to 120 PVCs. Then you need to create another EFS file system with another storage class to scale further... I think this defeats the purpose of the scalability aspect of EFS and will require manual intervention.
@k8s-triage-robot: Closing this issue.
Any idea why we have 120 APs as a hard limit? Was this limit chosen based on performance statistics?
It's an AWS hard limit based on use cases.
We even have a use case that requires spinning up more than 1,000 Persistent Volumes. AWS does not support nfs-subdir-external-provisioner, and switching to the AWS CSI driver, which supports dynamic provisioning, runs into this hard limit. We have even opened a case with the EFS engineers to see whether this can change in future versions. We will have to wait and see.
/remove-lifecycle rotten
@leakingtapan Can we re-open this and freeze the lifecycle, please? I think it's clear from the number of 👍 that this is desirable; ranked by +1s, this would be the second most requested feature if it were open. To give some additional context: cost tracking is important to us, so I'd like to be able to create different EFS file systems for different cost categories instead of creating a single one and reusing access points. You could create multiple file systems with Terraform, but I have three reasons why I'd prefer the driver:
@leakingtapan I'd also like to request re-opening and freezing the lifecycle, for the same reasons @Almenon mentioned.
With the newly introduced dynamic provisioner, the driver lets users associate an EFS file system (FS) with a Storage Class (SC); each Persistent Volume Claim (PVC) then results in an EFS Access Point being provisioned underneath that file system, with a unique directory/UID/GID for privacy. EFS currently has a hard limit of 120 APs per FS, so environments that need more than 120 volumes must create multiple FSs and associate them with multiple SCs.
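For reference, the existing efs-ap provisioning mode ties each SC to a single pre-created file system; a minimal sketch of such a class (the SC name and fileSystemId below are placeholders):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-ap-sc                    # placeholder name
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap           # one access point per dynamically provisioned volume
  fileSystemId: fs-0123456789abcdef0 # placeholder: an existing EFS file system
  directoryPerms: "700"
```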
We can enhance this by allowing the provisioner to both create file systems and manage multiple file systems per storage class; that way, users can provision thousands of PVCs per SC without manual intervention.
The SC definition could look something like this (new fields control file system creation and limits; since SC parameters must be strings, list-valued fields are shown as comma-separated strings):
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-fs-ap
  maxFileSystems: "50"
  maxApsPerFileSystem: "100"     # 50 FS * 100 APs = 5k potential volumes
  fileSystemMode: generalPurpose # generalPurpose or maxIo
  mountTargetSubnets: ""         # comma-separated subnet IDs (one per AZ) for mount targets; defaults to the default subnets
  securityGroup: sg-123456       # SG to apply to mount targets; defaults to the default security group
  throughputMode: bursting       # bursting (default) or an amount of provisioned throughput, e.g. 100 for 100 MB/s
  lifecyclePolicy: "7"           # default 30d; "0" turns off lifecycle management
  autobackup: "true"             # default true; "false" turns off AWS Backup
  tags: ""                       # comma-separated tags added to both FS and AP
  gidRangeStart: "1000"          # optional
  gidRangeEnd: "2000"            # optional
  directoryPerms: "777"          # optional
  basePath: "/data"              # optional
```
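A PVC against the proposed class would look the same as it does today; a minimal sketch (the claim name and size are illustrative, and EFS, being elastic, does not enforce the requested capacity):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim            # illustrative name
spec:
  accessModes:
    - ReadWriteMany          # EFS supports concurrent access from many nodes
  storageClassName: efs-sc   # the proposed SC above
  resources:
    requests:
      storage: 5Gi           # required by the API but not enforced by EFS
```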