
Crash with error "Check failed: data_ MemoryDataLayer needs to be initalized by calling Reset" #35

Closed
manishrdmishra opened this issue Jun 8, 2016 · 4 comments

Comments

@manishrdmishra

This issue is reproducible in both CPU and GPU mode.


Device information for CPU mode
OS - Ubuntu 14.04
Processor - Intel Core i3 (four cores)
RAM - 8 GB


Steps to reproduce the issue
Solver mode - CPU
Run naibaf7/caffe with the solver file at naibaf7/caffe_neural_models/net_u_9out/neuraltissue_solver.prototxt.
$ caffe train --solver=neuraltissue_solver.prototxt


Crash report

I0608 11:21:59.480715 30799 caffe.cpp:246] Starting Optimization
I0608 11:21:59.480729 30799 solver.cpp:303] Solving Neuraltissue-train
I0608 11:21:59.480739 30799 solver.cpp:304] Learning Rate Policy: inv
F0608 11:21:59.521008 30799 memory_data_layer.cpp:136] Check failed: data_ MemoryDataLayer needs to be initalized by calling Reset
*** Check failure stack trace: ***
@ 0x7ffff757adaa (unknown)
@ 0x7ffff757ace4 (unknown)
@ 0x7ffff757a6e6 (unknown)
@ 0x7ffff757d687 (unknown)
@ 0x68a34e caffe::MemoryDataLayer<>::Forward_cpu()
@ 0x6048f1 caffe::Layer<>::Forward()
@ 0x71f52c caffe::Net<>::ForwardFromTo()
@ 0x71f229 caffe::Net<>::Forward()
@ 0x71fee3 caffe::Net<>::ForwardBackward()
@ 0x73c994 caffe::Solver<>::Step()
@ 0x73c346 caffe::Solver<>::Solve()
@ 0x5ff82d train()
@ 0x601b8e main
@ 0x7ffff5196f45 (unknown)
@ 0x5fe799 (unknown)
@ (nil) (unknown)


gdb back trace
#0 0x00007ffff51abc37 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1 0x00007ffff51af028 in __GI_abort () at abort.c:89
#2 0x00007ffff7582ec3 in ?? () from /usr/lib/x86_64-linux-gnu/libglog.so.0
#3 0x00007ffff757adaa in google::LogMessage::Fail() () from /usr/lib/x86_64-linux-gnu/libglog.so.0
#4 0x00007ffff757ace4 in google::LogMessage::SendToLog() () from /usr/lib/x86_64-linux-gnu/libglog.so.0
#5 0x00007ffff757a6e6 in google::LogMessage::Flush() () from /usr/lib/x86_64-linux-gnu/libglog.so.0
#6 0x00007ffff757d687 in google::LogMessageFatal::~LogMessageFatal() () from /usr/lib/x86_64-linux-gnu/libglog.so.0
#7 0x000000000068a34e in caffe::MemoryDataLayer::Forward_cpu (this=0xbf5e30, bottom=std::vector of length 0, capacity 0, top=std::vector of length 2, capacity 2 = {...}) at /home/manish/git/caffe/src/caffe/layers/memory_data_layer.cpp:136
#8 0x00000000006048f1 in caffe::Layer::Forward (this=0xbf5e30, bottom=std::vector of length 0, capacity 0, top=std::vector of length 2, capacity 2 = {...}) at /home/manish/git/caffe/include/caffe/layer.hpp:510
#9 0x000000000071f52c in caffe::Net::ForwardFromTo (this=0xb5a870, start=0, end=60) at /home/manish/git/caffe/src/caffe/net.cpp:580
#10 0x000000000071f229 in caffe::Net::Forward (this=0xb5a870, loss=0x7fffffffd5ac) at /home/manish/git/caffe/src/caffe/net.cpp:602
#11 0x000000000071fee3 in caffe::Net::ForwardBackward (this=0xb5a870) at /home/manish/git/caffe/include/caffe/net.hpp:93
#12 0x000000000073c994 in caffe::Solver::Step (this=0xb4ddb0, iters=100000) at /home/manish/git/caffe/src/caffe/solver.cpp:245
#13 0x000000000073c346 in caffe::Solver::Solve (this=0xb4ddb0, resume_file=0x0) at /home/manish/git/caffe/src/caffe/solver.cpp:317
#14 0x00000000005ff82d in train () at /home/manish/git/caffe/tools/caffe.cpp:247
#15 0x0000000000601b8e in main (argc=2, argv=0x7fffffffdd10) at /home/manish/git/caffe/tools/caffe.cpp:518


@naibaf7
Owner

naibaf7 commented Jun 8, 2016

This is by design and not a bug. This network has a MemoryDataLayer and needs custom C++ code to train and test it (or you have to switch to using LMDB or HDF5 data layers).

The custom interface to train it is here: https://github.com/naibaf7/caffe_neural_tool. However, I am not sure whether it is still fully compatible with the current version of Caffe, as it is not officially maintained anymore.
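
For reference, the kind of custom driver this refers to is roughly sketched below. This is only a minimal sketch, not the actual caffe_neural_tool code: the layer name "data", the solver path, and the blob dimensions are assumptions, and a real trainer would fill the buffers with actual image/label data. The essential point is that MemoryDataLayer::Reset() has to be called before the first forward pass, which is exactly the call the failed CHECK is complaining about.

```cpp
// Minimal sketch of a custom trainer for a net that uses a MemoryDataLayer.
// Assumptions: the training net's memory data layer is named "data" and the
// placeholder dimensions below match its memory_data_param in the prototxt.
#include <vector>

#include <boost/shared_ptr.hpp>

#include "caffe/caffe.hpp"
#include "caffe/layers/memory_data_layer.hpp"

int main() {
  caffe::Caffe::set_mode(caffe::Caffe::CPU);

  // Load the solver definition (the path is an assumption).
  caffe::SolverParameter solver_param;
  caffe::ReadProtoFromTextFileOrDie("neuraltissue_solver.prototxt", &solver_param);
  boost::shared_ptr<caffe::Solver<float> > solver(
      caffe::SolverRegistry<float>::CreateSolver(solver_param));

  // Look up the MemoryDataLayer in the training net ("data" is an assumed name).
  boost::shared_ptr<caffe::MemoryDataLayer<float> > input_layer =
      boost::static_pointer_cast<caffe::MemoryDataLayer<float> >(
          solver->net()->layer_by_name("data"));

  // The buffers handed to Reset() must match the layer's batch_size, channels,
  // height and width (placeholder values here), n must be a multiple of
  // batch_size, and the memory must stay alive while the solver reads it.
  const int batch = 1, channels = 1, height = 512, width = 512;  // placeholders
  std::vector<float> images(batch * channels * height * width, 0.0f);
  std::vector<float> labels(batch, 0.0f);

  // Without this call, data_ is null and Forward_cpu() aborts with the
  // "needs to be initalized by calling Reset" check failure shown above.
  input_layer->Reset(images.data(), labels.data(), batch);

  solver->Solve();
  return 0;
}
```

In a real trainer you would keep feeding fresh image/label batches this way as training proceeds; the plain caffe train command never performs the Reset() call, which is why it aborts immediately after "Starting Optimization".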

@naibaf7 naibaf7 closed this as completed Jun 8, 2016
@manishrdmishra
Author

I ran it with naibaf7/caffe_neural_tool as well and I am getting the same crash.

@naibaf7
Owner

naibaf7 commented Jun 8, 2016

@manishrdmishra
OK, that is different. I will have a look later today.

@naibaf7
Owner

naibaf7 commented Jun 9, 2016

@manishrdmishra
I just tested again, using the script examples from my caffe_neural_models repository with caffe_neural_tool. They work fine when the data is present.
Make sure the data and scripts are arranged as in the dataset_01 example.
