
invalid ELF header error #12

Open
Maryom opened this issue Jan 28, 2018 · 4 comments

Comments


Maryom commented Jan 28, 2018

Hi,

Thanks for this repo.

When I ran:

yarn jar BigBWA-2.1.jar com.github.bigbwa.BigBWA -D mapreduce.input.fileinputformat.split.minsize=123641127 -D mapreduce.input.fileinputformat.split.maxsize=123641127 -D mapreduce.map.memory.mb=7500 -m -p --index /Data/HumanBase/lambda_virus -r ERR000589.fqBD ExitERR000589

I got this error:

18/01/28 16:45:31 INFO mapreduce.Job: Running job: job_1517157509312_0002
18/01/28 16:45:37 INFO mapreduce.Job: Job job_1517157509312_0002 running in uber mode : false
18/01/28 16:45:37 INFO mapreduce.Job:  map 0% reduce 0%
18/01/28 16:45:42 INFO mapreduce.Job: Task Id : attempt_1517157509312_0002_m_000000_0, Status : FAILED
Error: /mnt/yarn/usercache/hadoop/appcache/application_1517157509312_0002/container_1517157509312_0002_01_000002/tmp/libbwa8142273599705513639.so: /mnt/yarn/usercache/hadoop/appcache/application_1517157509312_0002/container_1517157509312_0002_01_000002/tmp/libbwa8142273599705513639.so: invalid ELF header (Possible cause: endianness mismatch)
18/01/28 16:45:47 INFO mapreduce.Job: Task Id : attempt_1517157509312_0002_m_000000_1, Status : FAILED
Error: /mnt/yarn/usercache/hadoop/appcache/application_1517157509312_0002/container_1517157509312_0002_01_000003/tmp/libbwa7110550933609652176.so: /mnt/yarn/usercache/hadoop/appcache/application_1517157509312_0002/container_1517157509312_0002_01_000003/tmp/libbwa7110550933609652176.so: invalid ELF header (Possible cause: endianness mismatch)
18/01/28 16:45:52 INFO mapreduce.Job: Task Id : attempt_1517157509312_0002_m_000000_2, Status : FAILED
Error: /mnt/yarn/usercache/hadoop/appcache/application_1517157509312_0002/container_1517157509312_0002_01_000004/tmp/libbwa6662802721160382805.so: /mnt/yarn/usercache/hadoop/appcache/application_1517157509312_0002/container_1517157509312_0002_01_000004/tmp/libbwa6662802721160382805.so: invalid ELF header (Possible cause: endianness mismatch)
18/01/28 16:45:58 INFO mapreduce.Job:  map 100% reduce 100%
18/01/28 16:45:58 INFO mapreduce.Job: Job job_1517157509312_0002 failed with state FAILED due to: Task failed task_1517157509312_0002_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0

18/01/28 16:45:58 INFO mapreduce.Job: Counters: 16
	Job Counters 
		Failed map tasks=4
		Killed reduce tasks=1
		Launched map tasks=4
		Other local map tasks=3
		Rack-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=3085785
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=13131
		Total time spent by all reduce tasks (ms)=0
		Total vcore-milliseconds taken by all map tasks=13131
		Total vcore-milliseconds taken by all reduce tasks=0
		Total megabyte-milliseconds taken by all map tasks=98482500
		Total megabyte-milliseconds taken by all reduce tasks=0
	Map-Reduce Framework
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0

I used a Hadoop cluster on Amazon EMR.
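
One way to check whether the bundled native library matches the node architecture is to pull it out of the jar and inspect its ELF header (a sketch, assuming the jar packages the library at its root as libbwa.so):

# Extract only the native library from the jar
unzip -o BigBWA-2.1.jar libbwa.so
# Should report something like: ELF 64-bit LSB shared object, x86-64
file libbwa.so
# Prints the full ELF header; Class, Data and Machine should match the workers
readelf -h libbwa.so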


Maryom commented Jan 29, 2018

Note that I used EFS to share the index between the nodes in the cluster. As you can see, all the files are there:

[hadoop@ip-172-31-2-103 efs]$ aws s3 cp s3://mariamup/nice/lambda_virus . --recursive
download: s3://mariamup/nice/lambda_virus/lambda_virus.fa to ./lambda_virus.fa
download: s3://mariamup/nice/lambda_virus/lambda_virus.dict to ./lambda_virus.dict
download: s3://mariamup/nice/lambda_virus/lambda_virus.fa.ann to ./lambda_virus.fa.ann
download: s3://mariamup/nice/lambda_virus/lambda_virus.fa.amb to ./lambda_virus.fa.amb
download: s3://mariamup/nice/lambda_virus/lambda_virus.fa.bwt to ./lambda_virus.fa.bwt
download: s3://mariamup/nice/lambda_virus/lambda_virus.fa.sa to ./lambda_virus.fa.sa
download: s3://mariamup/nice/lambda_virus/lambda_virus.fa.fai to ./lambda_virus.fa.fai
download: s3://mariamup/nice/lambda_virus/lambda_virus.fa.pac to ./lambda_virus.fa.pac
[hadoop@ip-172-31-2-103 efs]$ ls
lambda_virus.dict  lambda_virus.fa.amb  lambda_virus.fa.bwt  lambda_virus.fa.pac
lambda_virus.fa    lambda_virus.fa.ann  lambda_virus.fa.fai  lambda_virus.fa.sa
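
(For reference, these are the standard BWA index files, so if there is any doubt about how they were built they can be regenerated in place; bwa index produces the .amb, .ann, .bwt, .pac and .sa files from the FASTA:)

# Rebuild the BWA index next to the reference; the output prefix defaults to the input file name
bwa index lambda_virus.fa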

Then, I ran:

yarn jar BigBWA-2.1.jar com.github.bigbwa.BigBWA -D mapreduce.input.fileinputformat.split.minsize=123641127 -D mapreduce.input.fileinputformat.split.maxsize=123641127 -D mapreduce.map.memory.mb=7500 -m -p --index /home/hadoop/efs/lambda_virus -r ERR000589.fqBD ExitERR00058

Then, I got this error:

Error: /mnt/yarn/usercache/hadoop/appcache/application_1517229999487_0003/container_1517229999487_0003_01_000002/tmp/libbwa9035204717035358356.so: /mnt/yarn/usercache/hadoop/appcache/application_1517229999487_0003/container_1517229999487_0003_01_000002/tmp/libbwa9035204717035358356.so: invalid ELF header (Possible cause: endianness mismatch)
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

Does BigBWA require 32-bit? Because my machines are 64-bit.
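
(A direct way to answer the 32-bit question would be to compare the host architecture with the library's ELF class; a sketch, reusing the libbwa.so extracted from the jar as above:)

# x86_64 here means the host is 64-bit
uname -m
# Class shows ELF32 or ELF64; Machine shows the target CPU
readelf -h libbwa.so | grep -E 'Class|Machine'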


jmabuin commented Mar 12, 2019

Please check your application logs to find the error:

yarn logs -applicationId application_1517157509312_0002

where application_1517157509312_0002 must be replaced with the ID of the failed application.
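
For example, to narrow the output down to the actual stack traces (using the application ID from your first failed run):

# Dump the aggregated container logs and keep only the error context
yarn logs -applicationId application_1517157509312_0002 | grep -iE -B1 -A5 'error|exception'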


Maryom commented Mar 12, 2019

I checked; here are some of the errors:

[fclose] No such file or directory

[Java_com_github_sparkbwa_BwaJni_bwa_1jni] Error saving stdout.

19/03/10 16:54:47 INFO DAGScheduler: failed: Set()


Beooz commented Mar 28, 2019

Hi, I have some problems.
The first one is:
19/03/28 13:53:47 INFO mapreduce.Job: map 100% reduce 0%
19/03/28 13:53:51 INFO mapreduce.Job: map 100% reduce 100%
19/03/28 13:53:51 INFO mapreduce.Job: Job job_1552831617942_0008 completed successfully
It said that it had completed successfully, but look at the counters:
Map input records=131250
Map output records=1
Map output bytes=30
Map output materialized bytes=38
Input split bytes=108
Combine input records=0
Combine output records=0
Reduce input groups=1
Reduce shuffle bytes=38
Reduce input records=1
Reduce output records=0
It didn't have any output. I used two smaller FASTQ files. Then I went through the files on HDFS:
hdfs dfs -ls /user/root/ExitERR000589/*
-rw-r--r-- 1 root supergroup 4919873 2019-03-28 13:53 /user/root/ExitERR000589/Input0_1.fq
-rw-r--r-- 1 root supergroup 4921106 2019-03-28 13:53 /user/root/ExitERR000589/Input0_2.fq
-rw-r--r-- 1 root supergroup 0 2019-03-28 13:53 /user/root/ExitERR000589/_SUCCESS
-rw-r--r-- 1 root supergroup 0 2019-03-28 13:53 /user/root/ExitERR000589/part-r-00000
So, it didn't have any output. But I don't know where the Input0_1.fq and Input0_2.fq files came from; I didn't create them.

The second one is about "--index". I don't know what this argument does; I changed its value and got the same result. It also throws a lot of java.lang.ArrayIndexOutOfBoundsException exceptions, but when I ran it on another computer they didn't appear.
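
For completeness, this is a sketch of how the job output can be inspected further (paths taken from the listing above):

# Sizes of everything under the output directory
hdfs dfs -du -h /user/root/ExitERR000589
# Confirm the reducer output really is empty
hdfs dfs -cat /user/root/ExitERR000589/part-r-00000 | head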
