This document defines a high-level roadmap for Rook development and upcoming releases. The features and themes included in each milestone are optimistic in the sense that many do not yet have clear owners. Community and contributor involvement is vital to successfully implementing all desired items for each release. We hope that the items listed below will inspire further engagement from the community to keep Rook progressing and shipping exciting and valuable features.
Any dates listed below and the specific issues that will ship in a given milestone are subject to change but should give a general idea of what we are planning. We use the milestone feature in GitHub, so look there for the most up-to-date plans and issues.
The following high-level features are targeted for Rook v1.4 (July 2020). For more detailed project tracking, see the v1.4 board.
- Ceph
  - Admission controller #4819
  - RGW Multi-site replication (experimental) #1584
  - Handle IPv4/IPv6 dual stack configurations #3850
  - Support for provisioning OSDs with drive groups #4916
  - Multus networking configuration declared stable (see the sketch after this list)
  - RBD Mirroring configured with a CRD
  - All Ceph controllers updated to the controller runtime
  - Uninstall options for sanitizing OSD devices
  - Enhancements to external cluster configuration
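As a point of reference for the Multus item above, the sketch below shows how a Multus-backed network configuration is expressed in the `CephCluster` CR today. The `NetworkAttachmentDefinition` names (`rook-public-net`, `rook-cluster-net`) and the Ceph image tag are illustrative assumptions, not prescribed values.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v15.2.4   # illustrative image tag
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
  network:
    # Attach the Ceph daemons to Multus-managed networks
    provider: multus
    selectors:
      # Names of NetworkAttachmentDefinition objects in the cluster
      # (hypothetical examples)
      public: rook-public-net
      cluster: rook-cluster-net
```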
The general areas for improvement include the following, though they may not yet be committed to a specific release.
- Admission Controllers
  - Improve custom resource validation for each storage provider
- Build hygiene
- Controller Runtime
  - Update remaining Rook controllers to build on the controller runtime
    - Cassandra
- Ceph
  - RGW Multi-site configurations
    - Declare the feature stable
    - Support additional scenarios
  - RBD Mirroring
    - Define CRD(s) to simplify the RBD mirroring configuration (see the first sketch after this list)
  - Disaster Recovery (DR)
    - CSI solution for application failover in the event of cluster failure
  - Disaster Recovery (Rook)
    - Simplify metadata backup and disaster recovery #592
  - Encryption
    - Data at rest encrypted for OSDs backed by PVCs (see the second sketch after this list)
    - Encryption configuration per pool or per volume via the CSI driver
  - Helm chart for the cluster CR #2109
  - CSI Driver improvements tracked in the CSI repo
    - Ceph-CSI 3.0 features
    - RGW Multi-site configurations
- CockroachDB
- EdgeFS
  - Cluster-wide SysRepCount support to enable single-node deployments with SysRepCount=1 or 2
  - Cluster-wide FailureDomain support to enable single-node deployments with FailureDomain="device"
- NFS
- YugabyteDB
  - Graduate CRDs to beta
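For the RBD mirroring theme above, one plausible shape for such a CRD, purely a hypothetical sketch since the resource is still being defined, is a small resource that declares how many rbd-mirror daemons Rook should run:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephRBDMirror   # hypothetical kind; the CRD is still being defined
metadata:
  name: rbd-mirror
  namespace: rook-ceph
spec:
  # Number of rbd-mirror daemon pods to deploy
  count: 1
```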
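Similarly, for encrypting data at rest on PVC-backed OSDs, one way the setting could surface, again an assumption for illustration only, is a flag on a `storageClassDeviceSets` entry in the `CephCluster` CR:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  storage:
    storageClassDeviceSets:
      - name: set1
        count: 3
        encrypted: true   # hypothetical flag: encrypt OSD data at rest
        volumeClaimTemplates:
          - metadata:
              name: data
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 100Gi
              storageClassName: gp2   # illustrative storage class
              volumeMode: Block
```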