
Bringing laplace-torch to foundation-model era #144

Merged: 84 commits merged into main from mc-subset2 on Apr 27, 2024
Conversation

@wiseodd (Collaborator) commented Feb 24, 2024

Main features of this pull request:

  1. Support applying Laplace only to parameters that require gradients. Use case: PEFT (like LoRA) on top of a frozen foundation model. This is more efficient than SubnetLaplace, since the latter still computes the full Jacobians. (See the first sketch after this list.)
  2. Add support for multiple leading dimensions in the classification likelihood, e.g. logits of shape (batch_size, seq_len, n_classes). Useful for language modeling and reward modeling.
    1. This PR also contains the integrate-latest-asdl changes. I tested it with my ASDL fork (only a couple of light changes to support a weight-sharing dim and ignore_index, which are crucial in language modeling): https://github.com/wiseodd/asdl/commits/dev/. Please also check this and let me know what the most elegant way to handle it is.
  3. Support Huggingface datasets. The assumption is that x is a UserDict containing input_ids, attention_mask, etc., i.e. the things produced by a HF dataloader. (See the second sketch after this list.)
  4. Add a new likelihood called reward_modeling, where the classification likelihood is used during training and the regression likelihood is used during prediction.
  5. Add torchmetrics support for the grid search. The benefit is that it supports running metrics, so there is less memory overhead compared to gathering all the logits first.
  6. Add Jacobian computation with torch.func (functorch, really) as a general Jacobian computation for the GLM predictive. Useful for Bayesian optimization/invariance learning, where you need to backprop through the variance. Much more elegant than changing ASDL. (See the third sketch after this list.)
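Below is a minimal sketch of the PEFT use case from point 1. It assumes, per this PR's description, that Laplace now only builds the posterior over parameters with `requires_grad=True`; the model and loader names are illustrative, not this PR's test code.

```python
import torch
from torch import nn
from laplace import Laplace

# Toy stand-in for "frozen foundation model + small trainable head".
backbone = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
head = nn.Linear(256, 10)
model = nn.Sequential(backbone, head)

# Freeze the backbone; only the head keeps requires_grad=True, so (per this
# PR) Laplace should only build the posterior over head.weight and head.bias,
# without computing the full Jacobians as SubnetLaplace would.
for p in backbone.parameters():
    p.requires_grad_(False)

la = Laplace(model, likelihood='classification',
             subset_of_weights='all',  # assumption: 'all' = all grad-requiring params
             hessian_structure='kron')
# la.fit(train_loader)  # train_loader: a standard PyTorch DataLoader
```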
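And a sketch of the UserDict assumption from point 3: a thin wrapper maps the dict batch produced by a HF data collator to logits, so the network exposes the usual x -> logits interface. `HFWrapper` is a hypothetical name, not a helper added by this PR.

```python
from collections import UserDict

import torch
from torch import nn

class HFWrapper(nn.Module):
    """Hypothetical adapter: HF-style dict batch in, logits out."""

    def __init__(self, hf_model: nn.Module):
        super().__init__()
        self.hf_model = hf_model

    def forward(self, x: UserDict) -> torch.Tensor:
        # x is what a HF dataloader yields: input_ids, attention_mask, etc.
        out = self.hf_model(input_ids=x['input_ids'],
                            attention_mask=x['attention_mask'])
        return out.logits  # (batch_size, seq_len, n_classes) for LM heads
```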
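Finally, a sketch of point 6's torch.func Jacobians. This is the standard `functional_call` + `jacrev` + `vmap` pattern, not this PR's exact code; since everything stays inside torch.func, the Jacobians remain differentiable, which is what lets you backprop through the GLM predictive variance.

```python
import torch
from torch import nn
from torch.func import functional_call, jacrev, vmap

model = nn.Linear(4, 3)  # toy network; f: R^4 -> R^3
params = dict(model.named_parameters())

def f(p, x):
    # Stateless call: evaluate `model` with parameters `p` on one input `x`.
    return functional_call(model, p, (x,))

x = torch.randn(8, 4)  # a batch of 8 inputs
# jacrev differentiates the output w.r.t. the params dict; vmap maps it over
# the batch dimension of x while broadcasting the shared params.
jac = vmap(jacrev(f), in_dims=(None, 0))(params, x)
# jac mirrors the params dict, e.g. jac['weight'].shape == (8, 3, 3, 4).
```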

Relevant unit tests are provided. All tests pass; the only failures are the old LowRankLaplace issues.

aleximmer and others added 30 commits, starting August 9, 2022 13:25:

- …n arbitrary parameter indices as in SubnetLaplace, but per-parameter (i.e. weight, bias) subsets.
- …softmax) in NN predictive; 3. Pass model kwargs in NN predictive.
- …rec; add support for softmax temp for classification predictive
- Add support for cross entropy loss inputs with multiple leading dimensions
@wiseodd (Collaborator, Author) commented Apr 25, 2024

I replaced get_nll in crossval with RunningNLLMetric(). So this PR will close #160.
(I previously thought RunningNLLMetric hadn't been implemented yet, so this would have involved major work. A sketch of such a running metric follows below.)
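Here is a sketch of what such a running NLL metric looks like under the torchmetrics protocol. The class name matches the comment above, but the body is illustrative rather than the PR's exact implementation; note how flattening everything but the last dim also handles the multi-leading-dim logits from point 2 of the PR description.

```python
import torch
import torch.nn.functional as F
from torchmetrics import Metric

class RunningNLLMetric(Metric):
    """Illustrative running NLL: accumulates batch by batch, so the full
    set of logits never has to be held in memory at once."""

    def __init__(self, ignore_index: int = -100):
        super().__init__()
        self.ignore_index = ignore_index
        self.add_state('nll_sum', default=torch.tensor(0.0), dist_reduce_fx='sum')
        self.add_state('n_valid', default=torch.tensor(0), dist_reduce_fx='sum')

    def update(self, probs: torch.Tensor, targets: torch.Tensor) -> None:
        # probs: (..., n_classes) predictive probabilities; targets: (...,).
        probs = probs.reshape(-1, probs.shape[-1])
        targets = targets.reshape(-1)
        mask = targets != self.ignore_index  # skip padding tokens, as in LM training
        self.nll_sum += F.nll_loss(probs[mask].log(), targets[mask], reduction='sum')
        self.n_valid += mask.sum()

    def compute(self) -> torch.Tensor:
        return self.nll_sum / self.n_valid
```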

@wiseodd linked two issues on Apr 25, 2024 that may be closed by this pull request
@wiseodd (Collaborator, Author) commented Apr 25, 2024

Might as well fix #156 while we're at it.

Review threads (all resolved): README.md, examples/huggingface_example.md (outdated), laplace/laplace.py, setup.cfg
@wiseodd (Collaborator, Author) commented Apr 26, 2024

All tasks finished!

@wiseodd (Collaborator, Author) commented Apr 27, 2024

Double-checked and everything looks fine! Merging this.

@wiseodd merged commit 76a04eb into main Apr 27, 2024
@wiseodd deleted the mc-subset2 branch April 27, 2024 18:53
Labels: enhancement (New feature or request)
4 participants