
MSR weight filler #1883

Closed
wants to merge 2 commits into from

Conversation

nickcarlevaris

This PR implements the weight initialization strategy from http://arxiv.org/abs/1502.01852. It is very similar to the Xavier filler---except it is designed for ReLU instead of tanh non-linearities.

This would complement #1880, which implements the PReLU layer also proposed in that paper.
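Roughly, the fill rule looks like the sketch below. This is not the exact code in the PR; the helper name is made up for illustration, and it uses the fan_in convention discussed further down.

// Sketch of the He et al. rule: draw each weight from a zero-mean Gaussian
// whose standard deviation is sqrt(2 / fan_in). "MsrFillSketch" is a made-up
// name, not the filler class added in this PR.
template <typename Dtype>
void MsrFillSketch(Blob<Dtype>* blob) {
  CHECK(blob->count());
  int fan_in = blob->count() / blob->num();  // inputs feeding each output unit
  Dtype std = sqrt(Dtype(2) / fan_in);
  caffe_rng_gaussian<Dtype>(blob->count(), Dtype(0), std,
                            blob->mutable_cpu_data());
}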

… for use with ReLUs instead of tanh. Based on the paper: He et al., "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification," 2015.
      : Filler<Dtype>(param) {}
  virtual void Fill(Blob<Dtype>* blob) {
    CHECK(blob->count());
    int fan_in = blob->count() / blob->num();
Contributor

In my understanding, they use the number of output channels instead of input channels in order to avoid increasing/decreasing the variances of the gradients through the backward pass, don't they? In that case, it should be the following.

fan_in = blob->count() / blob->channels()

I am sorry if my understanding is not correct.

Author

Good point. The current version implements the forward propagation case, which is equation (10) in the paper. If you instead used the fan_out,

fan_out = blob->count() / blob->channels()

that would implement the backward propagation case in equation (14). They say at the end of that section that "We note that it is sufficient to use either Eqn.(14) or Eqn.(10) alone" and that "For all models in this paper, both forms can make them converge."

I don't know which is better. The current Caffe Xavier implementation only considers the fan_in, so this PR follows that lead.
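For reference, the two conditions from the paper (as I read them), with n_l the fan-in and n̂_l the fan-out of layer l:

\tfrac{1}{2}\, n_l \operatorname{Var}[w_l] = 1 \;\;\forall l \;\Rightarrow\; \text{std} = \sqrt{2 / n_l}          % Eq. (10), forward case
\tfrac{1}{2}\, \hat{n}_l \operatorname{Var}[w_l] = 1 \;\;\forall l \;\Rightarrow\; \text{std} = \sqrt{2 / \hat{n}_l}   % Eq. (14), backward case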

Contributor

@nickcarlevaris I haven't read those. It might be better to name it something like MSRForwardFiller? Anyway, this PR should be helpful since we no longer need to set the filler variances by hand. Thanks!

@sguada
Contributor

sguada commented Feb 17, 2015

@nickcarlevaris you could add a parameter to the Filler to specify which formula to use, i.e., fan_in or fan_out.

@nickcarlevaris
Author

@sguada and @tnarihi,
I added two boolean parameters to the filler, a fan_in and a fan_out.
fan_in = true uses equation (10) in the paper
fan_out = true uses equation (14) in the paper
If both fan_in and fan_out are true, it uses the mean, (fan_in + fan_out) / 2.0. They don't discuss this in the MSR paper, but the average was originally proposed in the Xavier paper.

If you think this is a sensible way to do the settings, we could allow the same options for the XavierFiller as well.
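Inside Fill(), the selection of the normalizer is roughly as follows (a sketch only; it assumes the new boolean fields are named fan_in and fan_out in FillerParameter):

// Pick the normalizer n from the two (assumed) boolean filler parameters.
int fan_in = blob->count() / blob->num();
int fan_out = blob->count() / blob->channels();
Dtype n = fan_in;  // fan_in = true alone: equation (10)
if (this->filler_param_.fan_in() && this->filler_param_.fan_out()) {
  n = (fan_in + fan_out) / Dtype(2);  // both true: average, as in the Xavier paper
} else if (this->filler_param_.fan_out()) {
  n = fan_out;  // fan_out = true alone: equation (14)
}
Dtype std = sqrt(Dtype(2) / n);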

@weiliu89

@nickcarlevaris @sguada Do you know why XavierFiller uses sqrt(3/fan_in)?
I cannot really understand how they get Eq. 15. Besides, I think the final Xavier initialization is derived in Eq. 16, which is a uniform distribution? Is there any document or reference explaining why Caffe's implementation differs? Or maybe I am missing something here...

http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf

@nickcarlevaris
Author

@weiliu89
Eq (15) is just there to show that their original heuristic, from eq (1), does not provide the same weight variance as the desired conditions in (10), (11), or (12).

Eq (16) is derived starting from the mean of the fan_in and fan_out values (12). The current XavierFiller in Caffe starts from (10) instead of (12). That is why you get
sqrt(3 / fan_in)
instead of
sqrt(6 / (fan_in + fan_out))
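The factor of 3 comes from the variance of a uniform distribution: for w ~ U(-a, a),

\operatorname{Var}[w] = \frac{(2a)^2}{12} = \frac{a^2}{3} \;\Rightarrow\; a = \sqrt{3 \operatorname{Var}[w]}

so Var[w] = 1/fan_in gives a = sqrt(3/fan_in), and Var[w] = 2/(fan_in + fan_out) gives a = sqrt(6/(fan_in + fan_out)).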

@weiliu89

@nickcarlevaris

Thanks for the explanation. Now I understand that Eq. 15 is only there to show that their initial heuristic is not good.

However, I still don't get Eq. 16 (it is a uniform distribution according to the paper, I think, not a normal distribution). According to Eq. 12, we should initialize the weights from a normal distribution with std sqrt(2/(fan_in+fan_out)). If we don't consider fan_out, then according to Eq. 10 we should initialize the weights from a normal distribution with std sqrt(1/fan_in). The Caffe implementation is a normal distribution with std sqrt(3/fan_in); you may be right that they derive it from Eq. 16 by ignoring fan_out, but that is not correct, right? At least not if we strictly follow the paper.

Besides, the MSR paper and your implementation of it use std = sqrt(2/fan_in), which they claim is larger than the sqrt(1/fan_in) from the Xavier paper, yet it is smaller than what Caffe implements?

I am a little confused here.

@weiliu89

@nickcarlevaris

Ah, I think I get it now. I always thought Caffe's Xavier filler was using a Gaussian distribution... I just checked, and it uses a uniform distribution. My bad. Sorry...

I think it would also be equivalent to initialize with a normal distribution with std sqrt(1/fan_in) for the xavier method.

@longjon
Contributor

longjon commented Feb 18, 2015

Having two separate parameters to control the sense of the normalization is a bit confusing; it's not obvious at a glance what fan_in: true fan_out: true is supposed to mean. And what if both are false?

I'd rather see an enum, with settings (perhaps) FAN_IN, FAN_OUT, or AVERAGE.

I'm also not sure that "MSR" is the right name for this; discussion is welcome.

@nickcarlevaris
Author

@longjon you're right, an enum would be clearer. Something like this?

// Normalize the filler variance by fan_in, fan_out, or their average.
// Applies to xavier and relu fillers.
enum VarianceNorm {
  FAN_IN = 0;
  FAN_OUT = 1;
  AVERAGE = 2;
}
optional VarianceNorm variance_norm = 8 [default = FAN_IN];

Would 'relu' be preferable for the filler name since it was designed for use with ReLUs?

While I'm updating the PR, should the same option apply to the xavier filler?
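For illustration, the filler body would branch on the enum roughly like this (a sketch only; the generated accessor and constant names are assumed from the proto above):

// Sketch: map the proposed VarianceNorm setting to the normalizer n
// (accessor and constant names assumed, not taken from this PR).
Dtype n = fan_in;  // FAN_IN (default)
switch (this->filler_param_.variance_norm()) {
  case FillerParameter_VarianceNorm_FAN_OUT:
    n = fan_out;
    break;
  case FillerParameter_VarianceNorm_AVERAGE:
    n = (fan_in + fan_out) / Dtype(2);
    break;
  default:
    break;  // keep fan_in
}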

@weiliu89

I have another question. According to the "xavier" paper, we need var[w] = 1/fan_in. The paper proposes using a uniform distribution with scale sqrt(3/fan_in), as implemented in Caffe. Is it better than a Gaussian distribution with std sqrt(1/fan_in)? Is there any comparison between the two initialization methods?

The same question applies to the "msr" paper, where we know we need var[w] = 2/fan_in. The paper says it uses a Gaussian distribution with std sqrt(2/fan_in). But how does it compare to a uniform distribution with scale sqrt(6/fan_in)? Theoretically they are equal, because both satisfy var[w] = 2/fan_in.

I am just curious... maybe they are practically the same as well...
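(Spelling out the "theoretically equal" part: Var[U(-sqrt(6/n), sqrt(6/n))] = (2*sqrt(6/n))^2 / 12 = 2/n, which matches the variance of a Gaussian with std sqrt(2/n).)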

@shelhamer
Member

Re: naming, "MSRA" does give attribution as "Xavier" does, but it is perhaps opaque when it comes to the actual equation and purpose. As long as the name is a clear reference, however, it should be ok -- if everyone calls it the "MSRA" there isn't so much of a problem. "ReLU" could be an accurate name, but ReLU layer vs. ReLU parameter filling vs. parametric ReLU starts to seem a bit overloaded.

@shelhamer shelhamer added the JL label Feb 22, 2015
@shelhamer
Member

@nickcarlevaris

While I'm updating the PR, should the same option apply to the xavier filler?

That seems reasonable, as long as the current behavior stays the default for xavier -- in which case, for consistency, the "MSRA" filler should perhaps have the same default. I think it is fine to only add the enum for the new filler you're contributing and keep xavier as-is for now.

p.s. It's fine if you push your change to this PR -- we can merge it to master instead -- or you can rebase to the new master and make a replacement PR. Either way is good.

@seanbell

This PR seems to copy over an existing bug from Xavier (#1575): the computations fan_in = blob->count() / blob->num() and fan_out = blob->count() / blob->channels() are both incorrect for fully connected layers. They only work for convolutional layers.

This is because fully connected layers have parameter shape 1 x 1 x output x input but convolutional layers have parameter shape output x input/group x height x width. So for fully connected layers, this formula computes fan_in = count / num = 1 * 1 * output * input / 1 = output * input, which is incorrect. For fan_out, we get the same incorrect result.

In #1575, it was suggested that the fully connected weights should be changed to output x input x 1 x 1 (and similar change to the bias) which would fix the above formulas. Alternatively, here are some other ideas for fixes: (a) extend the Layer interface to report FanIn() and FanOut() and then use that in Filler, or (b) give the layer type string to Filler so that it can branch depending on the blob layout convention.
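To make this concrete with some example (hypothetical) shapes:

// Convolution weights: output x input/group x height x width,
// e.g. 256 x 96 x 5 x 5 with group = 1:
//   fan_in  = count / num      = (256*96*5*5) / 256 = 96*5*5  = 2400   (correct)
//   fan_out = count / channels = (256*96*5*5) / 96  = 256*5*5 = 6400   (correct)
// Fully connected weights: 1 x 1 x output x input, e.g. 1 x 1 x 1000 x 4096:
//   fan_in  = count / num      = (1000*4096) / 1 = 4096000   (should be 4096)
//   fan_out = count / channels = (1000*4096) / 1 = 4096000   (should be 1000)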

@nickcarlevaris
Author

Replaced by #1946, which makes updates based on this thread and is rebased against master.

@shelhamer
Member

@seanbell Thanks for raising the conv / innerproduct fan-in issue again. We've talked about switching the parameter order before, so it should be decided for 1.0. There is also a performance consideration for the GEMMs based on the parameter shape.
