
Alex lemmatizer classifier 2 #1422

Open

AngledLuffa wants to merge 15 commits into dev from alex_lemmatizer_classifier_2
Conversation

AngledLuffa
Collaborator

Add a word classifier to cover ambiguous lemmas such as 's

@AngledLuffa force-pushed the alex_lemmatizer_classifier_2 branch 30 times, most recently from 21cb859 to f8455f4 on September 16, 2024 02:27
@AngledLuffa force-pushed the alex_lemmatizer_classifier_2 branch 13 times, most recently from 6f61d58 to ee377f1 on November 11, 2024 16:52
… token in English or other lemmas with ambiguous resolutions

Includes a data processing class for extracting sentences of interest

Has evaluation functions for both single examples and multiple examples

Adds utility functions for loading a dataset from file and handling unknown tokens during embedding lookup
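As a rough illustration, the unknown-token fallback during embedding lookup might look like the following sketch (the vocab layout and the unk_id=0 convention are assumptions for illustration, not the PR's actual API):

```python
def lookup_ids(words, vocab, unk_id=0):
    """Map words to embedding rows, falling back to a shared UNK id
    for words missing from the vocabulary.  vocab is assumed to be a
    plain dict of lowercased word -> row index."""
    return [vocab.get(word.lower(), unk_id) for word in words]

# lookup_ids(["The", "xyzzy"], {"the": 1}) -> [1, 0]
```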

Can use charlm models for training

Includes a baseline which uses a transformer to compare against the LSTM model.
Uses AutoTokenizer and AutoModel to load the transformer; a specific model name can be provided with the --bert_model flag.
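A minimal sketch of loading the baseline with HuggingFace's Auto classes; the default model name below is only a placeholder for whatever --bert_model supplies:

```python
from transformers import AutoTokenizer, AutoModel

def load_baseline(bert_model="bert-base-cased"):
    """Load the tokenizer and encoder for the transformer baseline."""
    tokenizer = AutoTokenizer.from_pretrained(bert_model)
    model = AutoModel.from_pretrained(bert_model)
    return tokenizer, model
```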

Includes a feature to drop certain lemmas, or rather, to only accept lemmas if they match a regex.  This will be particularly useful for a language like Farsi, where the training data has only 6 and 1 examples of the 3rd and 4th most common expansions, respectively.
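In sketch form, the regex filter could be as simple as the following (the (word, lemma) pair layout is hypothetical, for illustration):

```python
import re

def keep_matching_lemmas(examples, lemma_pattern):
    """Keep only (word, lemma) pairs whose lemma matches the regex,
    dropping rare expansions such as the Farsi cases mentioned above."""
    pattern = re.compile(lemma_pattern)
    return [(word, lemma) for word, lemma in examples if pattern.match(lemma)]
```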

Automatically extracts the label information from the dataset.
Saves the label_decoder in both the regular model and the transformer baseline model.
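Conceptually the label extraction amounts to something like this sketch (function and field names are illustrative, not the model's actual attributes):

```python
def build_label_maps(lemmas):
    """Assign each distinct lemma seen in the dataset an integer label
    and build the reverse map; the label_decoder is what gets saved
    with the model so predictions can be turned back into lemmas."""
    label_encoder = {}
    for lemma in lemmas:
        if lemma not in label_encoder:
            label_encoder[lemma] = len(label_encoder)
    label_decoder = {idx: lemma for lemma, idx in label_encoder.items()}
    return label_encoder, label_decoder
```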

Word vectors are trainable in the LSTM model.
The word vectors used are the ones shipped with Stanza for the language in question, not specifically GloVe, which allows word vectors to be used for whichever language is being trained.

Model selection during the training loop is done using eval set performance, for both the baseline and the LSTM model.
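The selection loop is the standard keep-the-best-checkpoint pattern, roughly as below (train_one_epoch and score_dev are hypothetical stand-ins for the real training and eval code):

```python
import torch

def train_with_selection(model, train_one_epoch, score_dev, num_epochs, save_path):
    """Save a checkpoint only when the eval-set score improves."""
    best = float("-inf")
    for _ in range(num_epochs):
        train_one_epoch(model)
        score = score_dev(model)  # e.g. accuracy or F1 on the eval set
        if score > best:
            best = score
            torch.save(model.state_dict(), save_path)
    return best
```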

Training and testing are done via batch processing for speed.

Includes UPOS tags in data processing/loading for files.  UPOS embeddings are then used for the words in the LSTM model as an additional signal for the query word.
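A sketch of the combined embedding, assuming simple concatenation of word and UPOS embeddings (all sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

class WordUposEmbedding(nn.Module):
    """Concatenate a word embedding with a UPOS tag embedding so the
    LSTM sees the tag as an extra signal for each word."""
    def __init__(self, num_words, num_upos, word_dim=100, upos_dim=20):
        super().__init__()
        self.word_emb = nn.Embedding(num_words, word_dim)
        self.upos_emb = nn.Embedding(num_upos, upos_dim)

    def forward(self, word_ids, upos_ids):
        # -> (batch, seq, word_dim + upos_dim)
        return torch.cat([self.word_emb(word_ids), self.upos_emb(upos_ids)], dim=-1)
```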

Implements a multihead attention option for the LSTM model.
Adds positional encodings to the MultiHeadAttention layer of the LSTM model.
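In outline, the attention layer might look like the following, assuming the standard sinusoidal positional encodings (the dimensions and the exact placement relative to the LSTM are assumptions):

```python
import math
import torch
import torch.nn as nn

class AttentionOverLSTM(nn.Module):
    """Sinusoidal positional encodings followed by multihead
    self-attention, applied to the LSTM outputs."""
    def __init__(self, hidden_dim=256, num_heads=4, max_len=512):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        pe = torch.zeros(max_len, hidden_dim)
        position = torch.arange(max_len).unsqueeze(1).float()
        div_term = torch.exp(torch.arange(0, hidden_dim, 2).float()
                             * (-math.log(10000.0) / hidden_dim))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe)

    def forward(self, lstm_out):
        # lstm_out: (batch, seq, hidden_dim)
        x = lstm_out + self.pe[: lstm_out.size(1)]
        out, _ = self.attn(x, x, x)
        return out
```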

The common train() method from the two trainer classes is factored out into a shared parent class.  This should make it easier to update pieces and keep them in sync.

Keeps the dataset in a single object rather than a collection of separate lists.  This makes it easier to shuffle and keeps everything in one place.

Don't save the transformer, charlm, or original word vector file in the model files.  Word vectors are finetuned and the deltas are saved.
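The delta trick amounts to something like this sketch (function and key names are illustrative, not the PR's actual serialization code):

```python
import torch

def save_vector_deltas(finetuned, original, path):
    """Save only the difference between the finetuned embedding matrix
    and the original vectors shipped with Stanza."""
    torch.save({"vectors_delta": finetuned - original}, path)

def load_finetuned_vectors(original, path):
    """Reconstruct the finetuned vectors from the originals plus deltas."""
    return original + torch.load(path)["vectors_delta"]
```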

import full path
… to store other information with the data, such as the tag being processed
… charlms if they exist

run_lemma_classifier.py now automatically tries to pick a save name and training filename appropriate for the dataset being trained.  We still need to calculate the lemmas to predict and use a language-appropriate wordvec file before other languages can be handled, though.

Add the ability to use run_lemma_classifier.py in --score_dev mode
Add --score_test to the lemma_classifier as well

Connects the transformer baseline to the run_lemma_classifier script

Reports the dev & test scores when running in TRAIN mode
AngledLuffa and others added 12 commits November 11, 2024 23:43
…taset

fa_perdt, ja_gsd, AR, HI as current options for the lemma classifier
This requires using a target regex instead of a target word, which makes it simpler to match multiple words at once in the data preparation code
Add a sample 9/2/2 dataset and test that it gets read in a way we might like
…mmaClassifier model

Call evaluate_model just in case, although the expectation is that the F1 isn't going to be great
… Will be useful for integrating with the Pipeline

Save the target upos for a lemma classifier along with the target words
…- now running on text with a lemma trainer that has one or more of these classifiers should attach the words correctly
…ssarily require the sentences be written anywhere
…luding all of them, there seems to be enough 's -> have without adding artificial data
…r data, make run_lemma automatically attach it