BilinearUpsamplingFiller used with DeconvolutionLayer for upsampling #2213
Conversation
Here is an IPython notebook example.
I believe this is wrong. According to https://gist.githubusercontent.com/shelhamer/80667189b218ad570e82/raw/938f470ad91072929cbb9f6c739dc34488ca7d03/solve.py, it uses `net.params[l][0].data[range(m), range(k), :, :] = filt`. This doesn't fill `filt` into every channel of every kernel; it only fills the i-th channel of the i-th kernel, and all the others are 0. Bilinear interpolation is done channel-wise. It should be easy to fix.
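For reference, the bilinear kernel that the linked solve.py gist builds looks like this (a sketch reproducing the gist's `upsample_filt`; verify against the gist itself):

```python
import numpy as np

def upsample_filt(size):
    """Build a square 2D bilinear-interpolation kernel of the given size,
    as in the linked solve.py gist."""
    factor = (size + 1) // 2
    if size % 2 == 1:
        center = factor - 1
    else:
        center = factor - 0.5
    og = np.ogrid[:size, :size]
    return (1 - abs(og[0] - center) / factor) * \
           (1 - abs(og[1] - center) / factor)

# kernel for 2x upsampling: kernel_size = 2*factor - factor % 2 = 4
filt = upsample_filt(4)
```

Note that the kernel sums to `factor ** 2` (here 4.0), not 1, which is what makes it an upsampling rather than an averaging kernel.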
Thanks for commenting @weiliu89, but this is not wrong. Please note that I assume you specify EDIT:
@tnarihi Thanks for pointing out my mistake. Yours is indeed more efficient than the original Python code. I thought group was not important in this setting and thus ignored it. It is probably better to emphasize this more in the code; otherwise the filler does completely the wrong thing.
@tnarihi @weiliu89 Setting group is more obvious and uses less memory. The posted semantic segmentation FCNs will likely be switched to group deconvolution at some point for this reason. However, the current Python filler is correct: the assignment `net.params[l][0].data[range(m), range(k), :, :] = filt` indexes each
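A quick NumPy check of what that assignment actually does (a standalone sketch, no Caffe required; `C` and `K` are illustrative sizes): only the (i, i) filter slices are set, and every cross-channel slice stays zero, which is channel-wise interpolation once grouping or a diagonal weight structure is in play.

```python
import numpy as np

C, K = 3, 4                       # channels and kernel size (illustrative)
filt = np.full((K, K), 0.25)      # stand-in for the bilinear kernel
W = np.zeros((C, C, K, K))        # Deconvolution weights without group

# the net-surgery assignment: fill only the i-th input channel
# of the i-th output filter
W[range(C), range(C), :, :] = filt
```

After this, `W[i, i]` carries the kernel for every `i`, while `W[i, j]` with `i != j` is all zeros.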
@tnarihi thanks for the C++ filler, so interpolation can be defined directly instead of needing initialization by net surgery. I might call this @longjon thoughts?
Force-pushed from 15d784f to 4f249a0
@weiliu89 Your point is right. I updated the doc. @shelhamer Thanks for your attention to this PR. Yeah, I once called it
@shelhamer @tnarihi, I agree that "upsampling" is a bit overly specific, e.g., you can use this to initialize a convolution layer and do _down_sampling. Either name is okay with me though. Also consider
@longjon I am still unclear how we could create a downsampling layer equivalent to a common image-downsampling implementation using this filler, though,
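One way to see what bilinear downsampling would look like with this kind of kernel: a strided convolution with the bilinear kernel, rescaled so it sums to 1 (the filler's kernel sums to `factor ** 2`, so it must be normalized for downsampling). This is an illustrative NumPy sketch, not a Caffe layer; the kernel size, stride, and pad follow the same `2*factor - factor % 2` / `factor` / `ceil((factor - 1) / 2)` convention as the upsampling proto:

```python
import numpy as np

def bilinear_kernel(size):
    # same construction as the solve.py gist's upsample_filt
    factor = (size + 1) // 2
    center = factor - 1 if size % 2 == 1 else factor - 0.5
    og = np.ogrid[:size, :size]
    return (1 - abs(og[0] - center) / factor) * \
           (1 - abs(og[1] - center) / factor)

def downsample(img, factor):
    """Downsample a 2D array by `factor` using a normalized bilinear
    kernel and a plain strided correlation with zero padding."""
    size = 2 * factor - factor % 2
    pad = int(np.ceil((factor - 1) / 2.0))
    k = bilinear_kernel(size) / factor ** 2   # kernel now sums to 1
    padded = np.pad(img, pad, mode="constant")
    H = (padded.shape[0] - size) // factor + 1
    W = (padded.shape[1] - size) // factor + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            win = padded[i * factor:i * factor + size,
                         j * factor:j * factor + size]
            out[i, j] = (win * k).sum()
    return out

img = np.ones((8, 8))
small = downsample(img, 2)   # interior pixels of a constant image stay 1.0
```

Because of the zero padding, border pixels are darkened, which is one reason this is only an approximation of a typical image-resize implementation.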
add bilinear interpolation filler
Thanks for merging, @shelhamer!
How can this be used for downsampling with bilinear interpolation?
This does not work for 5D blobs. Does anyone know a workaround to use the
Hi, I have a problem: the result of tensorflow.image.resize_bilinear() is different from that of a Deconvolution layer with the bilinear filler in Caffe. Could you help me? Thank you very much.
This is intended to be used in `DeconvolutionLayer` acting like an `UpsamplingLayer`, which is not implemented explicitly. You can upsample a feature map by any integer factor using the following proto. Replace `{{}}` with your values. Note that the learning rate and the weight decay are set to 0 in order to keep the coefficient values of the bilinear interpolation unchanged during training. If you apply this to an image, this operation is equivalent to the following call in Python with Scikit-Image.
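The proto this refers to, as given in the `BilinearFiller` documentation in Caffe's `filler.hpp` (reproduced here from those docs; double-check against your Caffe version):

```protobuf
layer {
  name: "upsample", type: "Deconvolution"
  bottom: "{{bottom_name}}" top: "{{top_name}}"
  convolution_param {
    kernel_size: {{2 * factor - factor % 2}} stride: {{factor}}
    num_output: {{C}} group: {{C}}
    pad: {{ceil((factor - 1) / 2.)}}
    weight_filler: { type: "bilinear" } bias_term: false
  }
  param { lr_mult: 0 decay_mult: 0 }
}
```

The equivalent Scikit-Image call mentioned at the end is, per the same docs, `out = skimage.transform.rescale(img, factor, mode='constant', cval=0)`.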