Add ns_exponent parameter to control the negative sampling distribution for *2vec models. Fix #2090 (#2093)

Merged: menshikh-iv merged 4 commits into piskvorky:develop from fernandocamargoai:feature/negative_sampling_distribution_parameter on Jun 22, 2018.
Changes from 2 commits (4 commits in total):

- 35047b9 (fernandocamargoai): Adding ns_exponent parameter to control the negative sampling distrib…
- 5d45235 (fernandocamargoai): Fixed a code style problem.
- b860c50 (fernandocamargoai): Updated the documentation of the ns_exponent parameter.
- 4c72455 (fernandocamargoai): Merge branch 'develop' of github.com:RaRe-Technologies/gensim into fe…
@@ -162,8 +162,8 @@ class FastText(BaseWordEmbeddingsModel):
     """
     def __init__(self, sentences=None, sg=0, hs=0, size=100, alpha=0.025, window=5, min_count=5,
                  max_vocab_size=None, word_ngrams=1, sample=1e-3, seed=1, workers=3, min_alpha=0.0001,
-                 negative=5, cbow_mean=1, hashfxn=hash, iter=5, null_word=0, min_n=3, max_n=6, sorted_vocab=1,
-                 bucket=2000000, trim_rule=None, batch_words=MAX_WORDS_IN_BATCH, callbacks=()):
+                 negative=5, ns_exponent=0.75, cbow_mean=1, hashfxn=hash, iter=5, null_word=0, min_n=3, max_n=6,
+                 sorted_vocab=1, bucket=2000000, trim_rule=None, batch_words=MAX_WORDS_IN_BATCH, callbacks=()):
         """Initialize the model from an iterable of `sentences`. Each sentence is a
         list of words (unicode strings) that will be used for training.
@@ -210,6 +210,11 @@ def __init__(self, sentences=None, sg=0, hs=0, size=100, alpha=0.025, window=5,
             If > 0, negative sampling will be used, the int for negative specifies how many "noise words"
             should be drawn (usually between 5-20).
             If set to 0, no negative sampling is used.
+        ns_exponent : float
+            The exponent used to smooth the cumulative distribution used for negative sampling.
+            1.0 leads to sampling based purely on the frequency distribution, 0.0 makes all items be sampled
+            equally, while a negative value makes unpopular items be sampled more often than popular ones.
+            The default value of 0.75 was set empirically, following the original Word2Vec paper.
         cbow_mean : int {1,0}
             If 0, use the sum of the context word vectors. If 1, use the mean, only applies when cbow is used.
         hashfxn : function

Review comment on the added lines: "Same as above."
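The docstring above describes drawing "noise words" from a frequency distribution smoothed by `ns_exponent`. The mechanism can be sketched in plain Python as a cumulative table (a toy illustration, not gensim's actual implementation; the word counts and helper names are invented):

```python
import random

# Toy sketch of a cumulative table for negative sampling, smoothed by
# ns_exponent. Illustrative only; gensim's internals differ.
def build_cum_table(freqs, ns_exponent=0.75):
    """Cumulative weights proportional to count ** ns_exponent."""
    words, cum, total = list(freqs), [], 0.0
    for w in words:
        total += freqs[w] ** ns_exponent
        cum.append(total)
    return words, cum

def draw_noise_word(words, cum, rng):
    """Sample one noise word by binary-free linear scan of the table."""
    x = rng.uniform(0.0, cum[-1])
    for w, bound in zip(words, cum):
        if x <= bound:
            return w
    return words[-1]

words, cum = build_cum_table({"the": 1000, "cat": 50, "axolotl": 2})
rng = random.Random(0)
sample = [draw_noise_word(words, cum, rng) for _ in range(1000)]
# Frequent words still dominate the sample, but less extremely than
# their raw counts would suggest, because of the 0.75 smoothing.
```

With `ns_exponent=0.75`, "the" keeps roughly 90% of the probability mass instead of the ~95% its raw count would give, while rare words gain a few extra draws.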
@@ -267,7 +272,7 @@ def __init__(self, sentences=None, sg=0, hs=0, size=100, alpha=0.025, window=5,
         self.wv = FastTextKeyedVectors(size, min_n, max_n)
         self.vocabulary = FastTextVocab(
             max_vocab_size=max_vocab_size, min_count=min_count, sample=sample,
-            sorted_vocab=bool(sorted_vocab), null_word=null_word)
+            sorted_vocab=bool(sorted_vocab), null_word=null_word, ns_exponent=ns_exponent)
         self.trainables = FastTextTrainables(
             vector_size=size, seed=seed, bucket=bucket, hashfxn=hashfxn)
         self.wv.bucket = self.bucket
@@ -731,10 +736,10 @@ def accuracy(self, questions, restrict_vocab=30000, most_similar=None, case_inse


 class FastTextVocab(Word2VecVocab):
-    def __init__(self, max_vocab_size=None, min_count=5, sample=1e-3, sorted_vocab=True, null_word=0):
+    def __init__(self, max_vocab_size=None, min_count=5, sample=1e-3, sorted_vocab=True, null_word=0, ns_exponent=0.75):
         super(FastTextVocab, self).__init__(
             max_vocab_size=max_vocab_size, min_count=min_count, sample=sample,
-            sorted_vocab=sorted_vocab, null_word=null_word)
+            sorted_vocab=sorted_vocab, null_word=null_word, ns_exponent=ns_exponent)

     def prepare_vocab(self, hs, negative, wv, update=False, keep_raw_vocab=False, trim_rule=None,
                       min_count=None, sample=None, dry_run=False):
Review comment:
For clarity, grammar, and to give a hint of when this could be beneficially tuned, I'd reword as:
"The exponent used to shape the negative sampling distribution. A value of 1.0 samples exactly in proportion to the frequencies, 0.0 samples all words equally, while a negative value samples low-frequency words more than high-frequency words. The popular default value of 0.75 was chosen by the original Word2Vec paper. More recently, in https://arxiv.org/abs/1804.04212, Caselles-Dupré, Lesaint, & Royo-Letelier suggest that other values may perform better for recommendation applications."
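The reviewer's description of the exponent's effect can be checked numerically with a toy example (invented counts; an illustration of the math, not gensim code):

```python
# Toy numeric check of how ns_exponent reshapes the negative-sampling
# distribution. Counts and names are invented for illustration.
def noise_probs(counts, ns_exponent):
    """Normalized probabilities proportional to count ** ns_exponent."""
    weights = {w: c ** ns_exponent for w, c in counts.items()}
    z = sum(weights.values())
    return {w: v / z for w, v in weights.items()}

counts = {"frequent": 1000, "rare": 10}

by_freq = noise_probs(counts, 1.0)    # exactly proportional to frequency
default = noise_probs(counts, 0.75)   # Word2Vec's empirical default
uniform = noise_probs(counts, 0.0)    # every word equally likely
inverted = noise_probs(counts, -0.5)  # low-frequency words favoured
```

Moving the exponent from 1.0 toward 0.0 progressively flattens the distribution, and a negative value inverts it, which is the tuning space the cited recommendation-systems paper explores.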