Stackable Operator for Apache HDFS

The Stackable Operator for Apache HDFS (Hadoop Distributed File System) is used to set up HDFS in high-availability mode. HDFS is a distributed file system designed to store and manage massive amounts of data across multiple machines in a fault-tolerant manner. The Operator depends on the zookeeper:index.adoc to run a ZooKeeper cluster, which coordinates the active and standby NameNodes.

Getting started

Follow the Getting started guide, which walks you through installing the Stackable HDFS and ZooKeeper Operators, setting up ZooKeeper and HDFS, and writing a file to HDFS to verify that everything is set up correctly.

Afterwards you can consult the usage-guide/index.adoc to learn more about tailoring your HDFS configuration to your needs, or have a look at the demos for some example setups.

Operator model

The Operator manages the HdfsCluster custom resource. The cluster implements three roles: NameNode, DataNode and JournalNode.

A diagram depicting the Kubernetes resources created by the Stackable Operator for Apache HDFS

The operator creates the following K8S objects per role group defined in the custom resource.

  • Service - ClusterIP used for intra-cluster communication.

  • ConfigMap - HDFS configuration files like core-site.xml, hdfs-site.xml and log4j.properties are defined here and mounted in the pods.

  • StatefulSet - defines the replica count, volume mounts and more for each role group.

In addition, a NodePort service is created for each pod labeled with hdfs.stackable.tech/pod-service=true that exposes all container ports to the outside world (from the perspective of K8S).

In the custom resource you can specify the number of replicas per role group (NameNode, DataNode or JournalNode). A minimal working configuration requires:

  • 2 NameNodes (HA)

  • 1 JournalNode

  • 1 DataNode (the DataNode count should be at least the clusterConfig.dfsReplication factor)
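A minimal cluster definition along these lines might look as follows. This is a sketch: the field names follow the usual layout of Stackable custom resources, but the exact schema (e.g. the product version and the name of the ZooKeeper discovery ConfigMap, here assumed to be `simple-hdfs-znode`) depends on your operator release, so check the usage guide before applying it.

```yaml
apiVersion: hdfs.stackable.tech/v1alpha1
kind: HdfsCluster
metadata:
  name: simple-hdfs
spec:
  image:
    productVersion: 3.3.4            # example version; pick one listed under Supported Versions
  clusterConfig:
    dfsReplication: 1                # DataNode count must be at least this value
    zookeeperConfigMapName: simple-hdfs-znode  # assumed name of a ZooKeeper discovery ConfigMap
  nameNodes:
    roleGroups:
      default:
        replicas: 2                  # two NameNodes for high availability
  journalNodes:
    roleGroups:
      default:
        replicas: 1
  dataNodes:
    roleGroups:
      default:
        replicas: 1                  # matches dfsReplication above
```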

The Operator creates a service discovery ConfigMap for the HDFS instance. The discovery ConfigMap contains the core-site.xml file and the hdfs-site.xml file.
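A client workload can mount this discovery ConfigMap to obtain a ready-made HDFS client configuration. The following sketch assumes a cluster named `simple-hdfs` whose discovery ConfigMap carries the same name, and a hypothetical client image; the mount path is arbitrary as long as the Hadoop tools are pointed at it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hdfs-client
spec:
  containers:
    - name: client
      image: apache/hadoop:3         # hypothetical client image with Hadoop tooling
      env:
        - name: HADOOP_CONF_DIR      # tell Hadoop tools where the mounted config lives
          value: /stackable/conf
      volumeMounts:
        - name: hdfs-config
          mountPath: /stackable/conf
      command: ["sleep", "infinity"]
  volumes:
    - name: hdfs-config
      configMap:
        name: simple-hdfs            # discovery ConfigMap created by the operator
```

With the ConfigMap mounted, `core-site.xml` and `hdfs-site.xml` are available under `/stackable/conf`, so commands like `hdfs dfs -ls /` resolve the NameNodes without any manual configuration.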

Dependencies

HDFS depends on ZooKeeper for coordination between nodes. You can run a ZooKeeper cluster with the zookeeper:index.adoc. Additionally, the commons-operator:index.adoc and secret-operator:index.adoc are needed.
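One way to install these operators together is with the stackablectl command line tool (Helm charts are an alternative). The exact operator list may vary by release, so treat this as a sketch and consult the Getting started guide for the authoritative commands.

```shell
# Install the HDFS operator together with the operators it depends on
stackablectl operator install zookeeper hdfs commons secret
```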

Demos

Two demos that use HDFS are available.

demos:hbase-hdfs-load-cycling-data.adoc loads a dataset of cycling data from S3 into HDFS and then uses HBase to analyze the data.

demos:jupyterhub-pyspark-hdfs-anomaly-detection-taxi-data.adoc showcases the integration between HDFS and Jupyter. New York Taxi data is stored in HDFS and analyzed in a Jupyter notebook.

Supported Versions

The Stackable Operator for Apache HDFS currently supports the following versions of HDFS: