Description
As users of the HDFS operator and a Stackable-deployed HDFS, we want HDFS to ensure data locality by talking to a DataNode on the same Kubernetes node as the client first, if one exists.
Value
HDFS tries to store the first copy of a block on a "local" machine before shipping data to remote machines over the network. This relies on a simple IP address comparison in the HDFS code, which breaks on Kubernetes because pods don't share the same IP even if they run on the same Kubernetes node.
I believe we can improve this situation by changing the HDFS code to consider the Kubernetes node while looking for a "local" machine.
We already have precedent with the hdfs-topology-provider, which does something similar. I believe we can plug this logic into the chooseLocalOrFavoredStorage method of BlockPlacementPolicyDefault.
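To make the idea concrete, here is a minimal, hypothetical sketch (not the actual patch) of what a Kubernetes-node-aware locality check could look like, assuming the fabric8 Kubernetes client; the class and method names are made up for illustration, and a custom BlockPlacementPolicy could call something like this instead of the plain IP comparison:

```java
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

import java.util.Objects;

/**
 * Hypothetical helper (names made up for this sketch): decides whether a client
 * and a DataNode are "local" to each other by resolving both pod IPs to the
 * Kubernetes nodes hosting them, instead of comparing the IPs directly.
 */
public class KubernetesLocalityChecker {

    private final KubernetesClient client = new KubernetesClientBuilder().build();

    /** Resolve a pod IP to the name of the Kubernetes node hosting that pod, or null if unknown. */
    private String nodeNameForPodIp(String podIp) {
        // Listing all pods on every call is far too expensive in practice;
        // a real implementation would cache, as the hdfs-topology-provider does.
        return client.pods().inAnyNamespace().list().getItems().stream()
                .filter(p -> p.getStatus() != null && podIp.equals(p.getStatus().getPodIP()))
                .map(p -> p.getSpec().getNodeName())
                .filter(Objects::nonNull)
                .findFirst()
                .orElse(null);
    }

    /** True if the client pod and the DataNode pod run on the same Kubernetes node. */
    public boolean isLocal(String clientIp, String datanodeIp) {
        String clientNode = nodeNameForPodIp(clientIp);
        String datanodeNode = nodeNameForPodIp(datanodeIp);
        // If either lookup fails, report "not local" so the default placement logic applies.
        return clientNode != null && clientNode.equals(datanodeNode);
    }
}
```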
We want this because it will likely benefit all workloads that use HDFS with locally attached storage together with tools like Spark or HBase, where processing can happen on the same Kubernetes node as the storage. The benefit is less network traffic and a boost in performance.
Dependencies
It probably makes sense to reuse code from the hdfs-topology-provider project.
Tasks
Understand exactly what the topology provider is doing: We need a way - from within Kubernetes - to get the node a client is "calling" from (via its IP) as well as the nodes the DataNodes are running on
Decide whether this behavior is going to be enabled by default or not
Discuss the best point to plug this behavior in and create a patch or pluggable class (e.g. a BlockPlacementPolicy)
Document the behavior
Test it on a multi-node cluster and also test it with external clients (listener etc.)
Marketing: Discuss with marketing whether we should create a blog post about it
Acceptance Criteria
We have a way to compare data locality based on Kubernetes nodes that falls back to the default in case there is an error
The behavior is documented even if it is not changeable/pluggable
Release Notes
The HDFS NameNodes will now look at the Kubernetes topology when considering whether a client request is made locally or not. This means it will consider all clients "local" that are hosted on the same Kubernetes node as a DataNode.
The StackableTopologyProvider class implements DNSToSwitchMapping and uses the entry point public List<String> resolve(List<String> names) to return a topology for a given pod name/IP.
The topology is in the form /{resolved-label-1}/{resolved-label-2}/.... For example, in the integration test, labels are used to check that a topology has been created for a specific dataNode role group. Internally, StackableTopologyProvider first resolves the name (which could be an IP or a pod name) to an IP and maps that to labels identifying the Kubernetes node where that pod is running.
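For orientation, a stripped-down DNSToSwitchMapping implementation has roughly this shape; the class name SketchTopologyProvider and the lookupTopologyLabels helper are placeholders invented here, standing in for the label resolution described in the steps below, while the two reloadCachedMappings methods are simply part of Hadoop's interface:

```java
import org.apache.hadoop.net.DNSToSwitchMapping;

import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;

/** Sketch of the entry point only; the real StackableTopologyProvider adds caching, configuration, etc. */
public class SketchTopologyProvider implements DNSToSwitchMapping {

    @Override
    public List<String> resolve(List<String> names) {
        // For every pod name or IP, build a path like /{resolved-label-1}/{resolved-label-2}
        return names.stream()
                .map(name -> "/" + String.join("/", lookupTopologyLabels(name)))
                .collect(Collectors.toList());
    }

    /** Hypothetical placeholder for the Kubernetes lookups described in the steps below. */
    private List<String> lookupTopologyLabels(String nameOrIp) {
        return Collections.singletonList("default-rack");
    }

    @Override
    public void reloadCachedMappings() {
        // no cache in this sketch
    }

    @Override
    public void reloadCachedMappings(List<String> names) {
        // no cache in this sketch
    }
}
```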
The resolution steps are as follows (ignoring caching logic):
look up the IPs for the HDFS data nodes
collect the Kubernetes node labels for each (data node) IP
collect the pod labels for each (data node) IP
for each HDFS data node IP we now have a collection of its pod and node labels
the entry point's input names are resolved to a collection of either the pod names (for non-listeners) or the data node IP to which the listener resolves (for listeners)
the names are resolved to their IPs (either their own, for data nodes, or that of the data node where this pod is running)
we now have the data node IP for each input pod 🟢
labels are looked up from the pod and the pod's node to build the topology (a sketch of this lookup follows after this list)
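Put together, the label lookup for a single data node IP could be sketched as follows. This again assumes the fabric8 Kubernetes client; the class name and the label keys are illustrative placeholders, since the real keys come from the provider's configuration:

```java
import io.fabric8.kubernetes.api.model.Node;
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.KubernetesClient;

import java.util.ArrayList;
import java.util.List;

/** Sketch: map one data node pod IP to the label values used to build its topology path. */
class TopologyLabelLookup {

    static List<String> labelsForDataNodeIp(KubernetesClient client, String dataNodeIp) {
        // Find the pod that owns this IP (a real implementation would cache this mapping).
        Pod pod = client.pods().inAnyNamespace().list().getItems().stream()
                .filter(p -> p.getStatus() != null && dataNodeIp.equals(p.getStatus().getPodIP()))
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("No pod found for IP " + dataNodeIp));

        // Fetch the Kubernetes node the pod is scheduled on.
        Node node = client.nodes().withName(pod.getSpec().getNodeName()).get();

        // The label keys below are illustrative; the real ones come from the provider's configuration.
        List<String> labels = new ArrayList<>();
        labels.add(node.getMetadata().getLabels().getOrDefault("topology.kubernetes.io/zone", "default"));
        labels.add(pod.getMetadata().getLabels().getOrDefault("app.kubernetes.io/role-group", "default"));
        return labels;
    }
}
```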
We can use the information here 🟢 for this ticket.