CS330 Deep Multi-Task and Meta-Learning Assignments, Stanford University


karengarm/CS330-Deep-Multi-Task-Meta-Learning


Homework 0: Multitask Training for Recommender Systems

In this assignment, we will implement a multi-task movie recommender system based on the classic matrix factorization and neural collaborative filtering algorithms. In particular, we will build a model based on the BellKor solution to the Netflix Prize challenge and extend it to predict both likely user-movie interactions and the ratings those interactions would produce.
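A minimal sketch of the multi-task idea: shared user and movie factors (with BellKor-style bias terms) feed two heads, one predicting a rating and one predicting the probability of an interaction. All sizes and initializations here are illustrative, not the assignment's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the assignment's dataset and dimensions differ.
n_users, n_movies, dim = 4, 5, 3

# Shared factors in the spirit of matrix factorization: one embedding
# per user and per movie, reused by both task heads.
U = rng.normal(scale=0.1, size=(n_users, dim))   # user factors
M = rng.normal(scale=0.1, size=(n_movies, dim))  # movie factors
b_u = np.zeros(n_users)                          # user biases (BellKor-style)
b_m = np.zeros(n_movies)                         # movie biases
mu = 3.0                                         # global mean rating

def predict_score(u, m):
    """Rating head: biased dot product of the shared factors."""
    return mu + b_u[u] + b_m[m] + U[u] @ M[m]

def predict_interaction(u, m):
    """Interaction head: probability that user u watches movie m."""
    return 1.0 / (1.0 + np.exp(-(U[u] @ M[m])))

score = predict_score(0, 1)
p = predict_interaction(0, 1)
```

Because both heads share `U` and `M`, gradients from either task update the same factors, which is the multi-task coupling the assignment explores.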

Homework 1: Data Processing and Black-Box Meta-Learning

In this assignment, we will look at meta-learning for few-shot classification:

  1. Learn how to process and partition data for meta-learning problems, where training is done over a distribution of training tasks.
  2. Implement and train memory-augmented neural networks (MANNs), a black-box meta-learner built on a recurrent neural network.
  3. Analyze the learning performance for problems of different sizes.
  4. Experiment with model parameters and explore how they affect performance.

Homework 2: Prototypical Networks and Model-Agnostic Meta-Learning

In this assignment, we will experiment with two meta-learning algorithms, prototypical networks (protonets) and model-agnostic meta-learning (MAML), for few-shot image classification on the Omniglot dataset:

  1. Implement both algorithms (given starter code).
  2. Interpret key metrics of both algorithms.
  3. Investigate the effect of task composition during protonet training on evaluation.
  4. Investigate the effect of different inner loop adaptation settings in MAML.
  5. Investigate the performance of both algorithms on meta-test tasks that have more support data than training tasks do.

Homework 3: Few-Shot Learning with Pre-trained Language Models

This assignment will explore several methods for performing few-shot (and zero-shot) learning with pre-trained language models (LMs), including variants of fine-tuning and in-context learning. The goal of this assignment is to gain familiarity with few-shot learning using pre-trained LMs, learn about the relative strengths and weaknesses of fine-tuning and in-context learning, and explore some recent methods proposed to improve on the basic forms of these algorithms.
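In-context learning requires no weight updates: the "training set" is simply concatenated into the prompt. The sketch below shows one common prompt template; the `Input:`/`Output:` markers and the sentiment example are illustrative conventions, not the assignment's exact format.

```python
def build_prompt(examples, query, instruction=""):
    """Concatenate k labeled demonstrations followed by the unlabeled query.

    The LM is then asked to continue the text after the final "Output:",
    which serves as its prediction for the query.
    """
    parts = [instruction] if instruction else []
    for x, y in examples:
        parts.append(f"Input: {x}\nOutput: {y}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

# Hypothetical 2-shot sentiment-classification prompt.
demos = [("great movie!", "positive"), ("awful plot.", "negative")]
prompt = build_prompt(demos, "loved every minute.")
```

Fine-tuning, by contrast, updates the LM's weights on the demonstrations; comparing the two regimes is the point of the assignment.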
