
Expose Solver::Snapshot to pycaffe #3082

Merged 2 commits on Oct 31, 2015

Conversation

gustavla (Contributor)

This addresses #3077, making it possible to manually save a snapshot (caffemodel+solverstate) from Python.

On a related note, does it also make sense to expose Solver::TestAll, so that it's possible to manually instigate a test run through Python?

- Solver::Snapshot is made public
- It is also added as `snapshot` to pycaffe

Addressing BVLC#3077
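For reference, a minimal usage sketch of the new call (the prototxt path below is hypothetical; the snapshot file names come from `snapshot_prefix` in the solver prototxt):

import caffe

solver = caffe.SGDSolver('solver.prototxt')  # hypothetical path
solver.step(100)
# writes <snapshot_prefix>_iter_100.caffemodel and
# <snapshot_prefix>_iter_100.solverstate
solver.snapshot()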
longjon (Contributor) commented Sep 19, 2015

Looks good -- I wonder if we could have a basic test though?

Exposing TestAll in a different PR sounds good to me.

gustavla (Contributor, Author) commented Oct 6, 2015

Sorry for the delay on this. I added a test that runs solver.snapshot() and checks that the expected files were generated.
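Roughly along these lines (a sketch, not the exact test code; the prefix is whatever `snapshot_prefix` the test solver uses):

import os

solver.step(1)
solver.snapshot()
prefix = 'testnet'  # hypothetical snapshot_prefix from the test prototxt
assert os.path.isfile(prefix + '_iter_1.caffemodel')
assert os.path.isfile(prefix + '_iter_1.solverstate')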

shelhamer added a commit that referenced this pull request Oct 31, 2015
Expose `Solver::Snapshot` to pycaffe
shelhamer merged commit f5fd18b into BVLC:master Oct 31, 2015
shelhamer (Member)

Thanks for the pycaffe extension @gustavla!

mpkuse commented Aug 9, 2016

Is there also a way to load a snapshot into the solver with pycaffe?

rogertrullo

Hi @gustavla, could you please provide an example of how to use this feature?
In #3077 you mentioned that you wrote a SIGTERM interrupt handler.
Thanks!

rayryeng commented Nov 18, 2016

@mpkuse Just use the restore method that accompanies the solver. The input is the path to the .solverstate file that gets saved during snapshots: solver.restore('your_solverstate_file.solverstate'). It's also mentioned in the Python section under Interfaces in the Caffe documentation: http://caffe.berkeleyvision.org/tutorial/interfaces.html
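For example, a short round-trip sketch (paths are hypothetical; assumes a snapshot was taken at iteration 100):

import caffe

solver = caffe.SGDSolver('solver.prototxt')
solver.restore('lenet_iter_100.solverstate')  # restores weights and solver state
solver.step(1)  # training continues from iteration 100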

VasLem commented Apr 30, 2017

Is there a way to set the snapshot file name from inside a Python program, or is it necessary to have the name in the solver prototxt from the beginning? Sorry if I'm asking something obvious, but there is no documentation apart from this page and http://caffe.berkeleyvision.org/tutorial/interfaces.html.

naibaf7 (Member) commented Apr 30, 2017

@VasLem
This example demonstrates the Python interface to its full extent. Do note, however, that some options may only be available on the opencl branch and not (yet) in mainline. But it should give you a good idea:
https://github.com/naibaf7/opencl_caffe_examples/blob/master/mnist_lenet/mnist_lenet.ipynb

Coderx7 (Contributor) commented Apr 3, 2018

@naibaf7: The current snapshot seems to have a problem, or at least I can't find a way around it.
When someone tries to train using pycaffe and save the best model, one way to do so is to have a main loop like this:

for it in range(max_iter):
    solver.step(1)

    # check if it's the best so far; if so, save it
    solver.snapshot()

However, at this point solver.iter has already increased by one, so when solver.snapshot() is issued it saves the next iteration's model.
Is there a way around this, or am I doing something wrong here?
Getting the accuracy according to the examples in the IPython notebooks also doesn't work. Simply doing:

def run_test(solver, test_iter):
    acc = 0
    # batch_size_test = solver.test_nets[0].blobs['data'].data.shape[0]

    for i in range(test_iter):
        # run the test net on one batch and accumulate its accuracy
        solver.test_nets[0].forward()
        acc += solver.test_nets[0].blobs['accuracy'].data
        # or:
        # corrects += sum(solver.test_nets[0].blobs['ip1'].data.argmax(1) ==
        #                 solver.test_nets[0].blobs['label'].data)

    final_result = '{0:.4f}'.format(acc / test_iter)
    # or:
    # final_result = '{0:.4f}'.format(corrects / (test_iter * batch_size_test))
    return float(final_result)

This won't capture the best model; it captures the next immediate one instead.
Simply put, if you have set your test_interval to 500 and you get a model with accuracy x, you won't be able to save that specific model at that specific iteration.
All you can do is save the next immediate one.
Any idea how to fix this?
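Would restructuring the loop to test and snapshot before stepping work, so that the saved weights are exactly the ones just evaluated? A sketch (assumes max_iter, test_interval, and test_iter are defined, and uses the run_test helper above):

best_acc = 0.0
for it in range(max_iter):
    if it % test_interval == 0:
        acc = run_test(solver, test_iter)
        if acc > best_acc:
            best_acc = acc
            solver.snapshot()  # saves the weights that were just evaluated
    solver.step(1)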
