incompatible clusterID Hadoop #55
Comments
@Atahualkpa Hi! Which docker-compose are you using? Or what is your setup? Do you persist the data to the local drive from your Docker containers, e.g. by having a volumes key?
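For reference, a minimal sketch of the kind of volumes key meant here, assuming the bde2020/hadoop-datanode image, whose data directory is typically /hadoop/dfs/data; the service name, volume name, and image tag are only examples:

```yaml
# Sketch: persist the datanode's data directory in a named volume so block
# data and the recorded clusterID survive container restarts.
version: "3"

services:
  datanode:
    image: bde2020/hadoop-datanode:2.0.0-hadoop2.7.4-java8   # example tag
    volumes:
      - hadoop_datanode:/hadoop/dfs/data   # assumed default data dir of this image

volumes:
  hadoop_datanode:
```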
Hi @earthquakesan, thanks for your answer.
I noticed, though, that I had not set a path for HDFS. I tried setting a local path, but the problem is still present.
While checking the directory I also found the folder BP-1651631011-10.0.0.12-1527073017748/current, and inside it there is another file called VERSION, which contains this:
This is the exception that was generated:
Thanks for your support.
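For context: that VERSION file is where the datanode records its clusterID, and the "incompatible clusterID" error means it no longer matches the clusterID stored by the namenode. A quick way to compare the two, assuming the bde2020 images' default /hadoop/dfs/name and /hadoop/dfs/data directories (the container names are placeholders):

```sh
# Compare the clusterID recorded by the namenode and by the datanode.
# Paths are assumptions based on the bde2020 images; replace the container names.
docker exec <namenode-container> grep clusterID /hadoop/dfs/name/current/VERSION
docker exec <datanode-container> grep clusterID /hadoop/dfs/data/current/VERSION
# If the two values differ, the datanode fails to start with "incompatible clusterIDs".
```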
@Atahualkpa How many nodes do you have in your swarm cluster? Do the containers always get allocated to the same nodes?
Right now I have three nodes in the swarm. On the leader, 6 containers are running; they are:
and on the other nodes are running:
"Do the containers always get allocated to the same nodes?" Moreover, every time I deploy the swarm, this hadoop_volume is present on every node of the swarm. Thanks.
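If it helps to verify that, a sketch of how to check each node, assuming the hadoop_datanode volume name from the issue description:

```sh
# Run on each swarm node: list the locally created volume and see where its
# data lives on that node's disk.
docker volume ls --filter name=hadoop_datanode
docker volume inspect hadoop_datanode
```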
incompatible clusterID Hadoop
Hi,
every time I reboot the swarm I run into this problem.
I solved it by deleting this Docker volume:
[ { "CreatedAt": "2018-05-10T19:35:31Z", "Driver": "local", "Labels": { "com.docker.stack.namespace": "hadoop" }, "Mountpoint": "/data0/docker_var/volumes/hadoop_datanode/_data", "Name": "hadoop_datanode", "Options": {}, "Scope": "local" } ]
However, the files I had put into HDFS are stored in this volume, so with this approach I have to put the files into HDFS again every time I deploy the swarm. I'm not sure this is the right way to solve the problem.
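For completeness, the workaround described above as shell commands; the volume and stack names are taken from the inspect output, and the redeploy line is only a sketch:

```sh
# Remove the stale datanode volume so a fresh one is created with the
# namenode's current clusterID. Warning: this deletes the HDFS block data
# stored in the volume, so the files have to be re-uploaded afterwards.
docker volume inspect hadoop_datanode   # shows the mountpoint seen above
docker volume rm hadoop_datanode        # only works once no container uses it
# Then redeploy the stack (stack name "hadoop" from the volume label above):
docker stack deploy -c docker-compose.yml hadoop
```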
Googling around, I found one solution, but I don't know how to apply it before the swarm reboots. This is the solution:
The problem is with the property name dfs.datanode.data.dir; it is misspelt as dfs.dataode.data.dir. This prevents the property from being recognised and, as a result, the default location of ${hadoop.tmp.dir}/hadoop-${USER}/dfs/data is used as the data directory.
hadoop.tmp.dir is /tmp by default; on every reboot the contents of this directory are deleted, which forces the datanode to recreate the folder on startup, and thus the incompatible clusterIDs.
Edit this property name in hdfs-site.xml before formatting the namenode and starting the services.
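As a sketch of what that quoted advice looks like in hdfs-site.xml; the /hadoop/dfs/data path is only an example and should match a directory that is backed by a persistent volume:

```xml
<!-- Correctly spelled property name; the value must point at a directory that
     survives reboots (e.g. one mounted as a Docker volume), not /tmp. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///hadoop/dfs/data</value>
</property>
```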
Thanks.