
Next: release candidate #1112

Merged
merged 421 commits into from
Sep 19, 2014

Conversation

shelhamer
Member

The next release packages up 400+ commits by 18 authors. Thanks all!

DOCUMENTATION: there is tutorial documentation and developer API documentation courtesy of Doxygen (thanks to Jeff!). The documentation, in particular the developer API, is still in progress, so come help and join the comment crusade!

DEPENDENCIES: CUDA 6.5 is the suggested version. cuDNN is an acceleration library for deep network operations with drop-in integration with Caffe. It is not required, but is suggested for best performance. See #1046.
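If cuDNN is installed, it can be enabled at build time; a sketch assuming the standard Makefile.config layout (paths are illustrative):

```
# In Makefile.config: uncomment to build with cuDNN support
USE_CUDNN := 1
# cuDNN headers and libraries must be on the include/library paths,
# e.g. alongside the CUDA installation.
```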

DEPRECATION: transformation parameters now have their own configuration message to reduce duplication across the data layers. For instance

layers {
  name: "mnist"
  type: DATA
  top: "data"
  top: "label"
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    backend: LMDB
    batch_size: 64
  }
  transform_param {
    scale: 0.00390625
  }
}

is now the proper format, with the transformation fields gathered in the transform_param block. Old models are currently automagically upgraded on load, but you should upgrade them permanently with the included tools upgrade_net_proto_{text,binary}.
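For comparison, a sketch of the deprecated form, which carried transformation fields such as scale directly inside data_param (field placement per the old-style definition; shown only to illustrate what the upgrade tools rewrite):

```
layers {
  name: "mnist"
  type: DATA
  top: "data"
  top: "label"
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    backend: LMDB
    batch_size: 64
    # deprecated: transformation field mixed into data_param
    scale: 0.00390625
  }
}
```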

mohomran and others added 30 commits August 31, 2014 14:05
convert MNIST demo to lmdb, fixes
longjon and others added 22 commits September 18, 2014 12:41
Note that we are dropping some checks from the LRN layer. However, these
checks are fairly redundant; something is very wrong if these layers
are producing top blobs that are different sizes than their inputs, and
tests are the right place to catch that. The thing that really should be
checked (but isn't) is that local_size needs to be odd; this will
be added in a future commit.

Strictly speaking, Reshape doesn't need to be called until the first
Forward call; however, much existing code (especially tests) assumes
that top blobs will be set up in SetUp, so we may as well do it there.

Now that top blobs are set up in Layer::Reshape, it's Reshape that is
mandatory, and simple layers often don't need to implement LayerSetUp.
Reshape is (already) declared abstract, so not implementing it is a
compile-time error.

Since we are now calling Reshape in the Forward pass, it's only fair to
include it when timing. Reshape calls should normally be four or so
orders of magnitude faster than Forward calls; this change also makes it
easy to notice a mistake that causes something slow to happen in
Reshape.

Note that it is not normally necessary to call this function when using
reshapable nets, but sometimes it can be useful to compute the sizes of
intermediate layers without waiting for the forward pass.
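The "without reallocation (where possible)" behavior can be sketched as a blob that keeps its backing storage and only reallocates when the requested element count grows past the allocated capacity. This is a toy model in Python, not Caffe's actual Blob/SyncedMemory code; the class and field names are hypothetical:

```python
import numpy as np

class Blob:
    """Toy blob: reshape reallocates only when capacity is exceeded."""

    def __init__(self):
        self.shape = ()
        self.capacity = 0  # allocated element count
        self.data_ = np.empty(0, dtype=np.float32)

    def reshape(self, *shape):
        count = int(np.prod(shape)) if shape else 0
        if count > self.capacity:
            # Growing past capacity: reallocate backing storage.
            self.data_ = np.empty(count, dtype=np.float32)
            self.capacity = count
        # Shrinking or staying within capacity: reuse existing memory.
        self.shape = shape

    @property
    def data(self):
        count = int(np.prod(self.shape))
        return self.data_[:count].reshape(self.shape)

blob = Blob()
blob.reshape(64, 1, 28, 28)   # allocates 64*1*28*28 = 50176 floats
buf = blob.data_
blob.reshape(32, 1, 28, 28)   # smaller shape: same backing buffer
assert blob.data_ is buf      # no reallocation happened
```

The payoff is that repeatedly resizing a net for different input dimensions (e.g. images of varying size) does not thrash the allocator so long as the largest shape has already been seen.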
On-the-fly net resizing, without reallocation (where possible)
  [model zoo] download gist script
- invoke by shell
- default download dir to models/
- save to flat dir of owner-gist instead of nested owner/gist
  Add contrastive loss layer, tests, and a siamese network example
shelhamer mentioned this pull request Sep 19, 2014
shelhamer added a commit that referenced this pull request Sep 19, 2014
Next: release candidate
shelhamer merged commit 737ea5e into master Sep 19, 2014
shelhamer deleted the next branch September 19, 2014 05:22
shelhamer added a commit that referenced this pull request Sep 19, 2014
mitmul pushed a commit to mitmul/caffe that referenced this pull request Sep 30, 2014
RazvanRanca pushed a commit to RazvanRanca/caffe that referenced this pull request Nov 4, 2014