
Replace custom epsilons with numpy equivalent in LdaModel #2308

Merged
merged 3 commits into from
Jan 9, 2019

Conversation

horpto
Contributor

@horpto horpto commented Dec 24, 2018

Fix #2115

@piskvorky
Owner

@horpto please expand the PR description. What problem is this solving, what's the motivation for this PR?

@@ -668,6 +656,7 @@ def inference(self, chunk, collect_sstats=False):
# Lee&Seung trick which speeds things up by an order of magnitude, compared
# to Blei's original LDA-C code, cool!).
integer_types = six.integer_types + (np.integer,)
epsilon = np.finfo(self.dtype).eps
Owner

@piskvorky piskvorky Dec 24, 2018

I'm not sure this is a good idea. What are the guarantees for such epsilon?

If the epsilon is too close to the underflow edge, it might be silently ignored in some cases. I'd prefer an epsilon that is less ambiguous. I don't think we really care about getting the smallest possible number here.

In fact, do we need epsilon at all? It hints at some instability in the algorithm if it needs to be avoiding singularities in this way. Identifying when such singularities happen as soon as possible (is it a function of the input corpus? empty documents? something else?), and raising an exception, might be a preferable solution.
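For context, the concern about guarantees can be illustrated with a short sketch: `np.finfo(dtype).eps` is the machine epsilon of the model's dtype, and it is so small that adding it to any value much larger than 1 is silently absorbed by rounding (the variable names below are illustrative, not gensim's):

```python
import numpy as np

# Machine epsilon depends on the dtype of the model's arrays.
eps32 = np.finfo(np.float32).eps  # ~1.19e-07
eps64 = np.finfo(np.float64).eps  # ~2.22e-16

# The concern: near large values, adding eps is a no-op, because the
# spacing between adjacent floats there is far wider than eps.
x = np.float32(100.0)
print(x + eps32 == x)  # True: the epsilon is silently ignored
```

So the clamp only has an effect when the clamped quantity is already close to zero, which is exactly the singular case the code is guarding against.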

Contributor

The new epsilon is already better than what we have right now (it's larger; we could even use 3 * eps). I agree this is not the best solution (the root cause is instability in the algorithm), but it's a good workaround to avoid NaN values in models (at least this will happen less often).

LGTM for me (it improves overall model stability, though it's not a perfect solution, of course), wdyt @piskvorky?

Owner

@piskvorky piskvorky Jan 8, 2019

Well, if it's an improvement we should merge it. But I'm still wary of the implications of this. Isn't it better to just raise an exception, rather than work around x / 0.0 by doing x / eps? Isn't the user screwed anyway (no exception, but nonsense results)?

Unfortunately I no longer remember why this code needs to be there :(

Contributor

Isn't it better to just raise an exception, rather than work around x / 0.0 by doing x / eps?

No, because the exception could be raised at any moment (for example, I train a model for 10 hours and, just before the end, it raises an exception: the time is already spent and there is no model).

Isn't the user screwed anyway (no exception, but nonsense results)

Usually not: as long as there are no NaNs in the matrices, the model behaves adequately.
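A minimal sketch of why the clamp helps (the variable names are illustrative of the kind of normalization done in `LdaModel.inference`, not gensim's exact code): dividing by a zero normalizer yields inf, which later arithmetic turns into NaN that poisons the whole model, whereas a normalizer clamped by eps stays finite.

```python
import numpy as np

phinorm = np.zeros(3, dtype=np.float32)            # degenerate normalizer
counts = np.array([1.0, 2.0, 3.0], dtype=np.float32)

with np.errstate(divide="ignore"):
    bad = counts / phinorm                         # all inf; later ops produce NaN

eps = np.finfo(np.float32).eps
good = counts / (phinorm + eps)                    # large but finite, no NaN to propagate

print(np.isinf(bad).all(), np.isfinite(good).all())  # True True
```

The result is skewed rather than exact, but it keeps training alive, which is the trade-off being argued for here.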

@menshikh-iv menshikh-iv changed the title Fix #2115: Replace custom epsilons with automatic numpy equivalent Replace custom epsilons with numpy equivalent in LdaModel Jan 9, 2019
@menshikh-iv
Contributor

Thanks @horpto 👍

@menshikh-iv menshikh-iv merged commit 1b07f81 into piskvorky:develop Jan 9, 2019
@menshikh-iv menshikh-iv mentioned this pull request Jan 17, 2019
@horpto horpto deleted the I2115-nans branch January 19, 2019 12:06