Scaling laws for neural language models #507

Open
1 task
irthomasthomas opened this issue Feb 4, 2024 · 0 comments
Labels

- Algorithms — Sorting, Learning or Classifying. All algorithms go here.
- llm-experiments — experiments with large language models
- MachineLearning — ML Models, Training and Inference
- Papers — Research papers
- Research — personal research notes for a topic

Comments

@irthomasthomas (Owner)
TITLE: Scaling laws for neural language models

DESCRIPTION:

Abstract
We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. Other architectural details such as network width or depth have minimal effects within a wide range. Simple equations govern the dependence of overfitting on model/dataset size and the dependence of training speed on model size. These relationships allow us to determine the optimal allocation of a fixed compute budget. Larger models are significantly more sample-efficient, such that optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence.

URL: https://openai.com/research/scaling-laws-for-neural-language-models
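The power-law dependence of loss on model size described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's code: the constants below are the approximate model-size fits reported in the paper, and `loss_from_params` is a hypothetical helper name.

```python
# Sketch of the model-size scaling law L(N) = (N_c / N)**alpha_N from the
# abstract. N_c and alpha_N are approximate fitted constants reported in
# the paper for non-embedding parameter count; treat them as illustrative.

N_C = 8.8e13      # approximate critical parameter count (from the paper)
ALPHA_N = 0.076   # approximate model-size scaling exponent (from the paper)

def loss_from_params(n_params: float) -> float:
    """Predicted cross-entropy loss as a power law in parameter count."""
    return (N_C / n_params) ** ALPHA_N

# Loss falls smoothly as the model grows, spanning orders of magnitude:
for n in (1e6, 1e8, 1e10):
    print(f"N={n:.0e}: predicted loss ~ {loss_from_params(n):.3f}")
```

Analogous power laws hold for dataset size and training compute; the paper combines them to derive the compute-optimal allocation the abstract mentions (large models, modest data, early stopping).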

Suggested labels

{ "label-name": "Scaling-Laws", "description": "Analysis of scaling laws for language models", "confidence": 69.89 }

@irthomasthomas added the Algorithms, llm-experiments, MachineLearning, Papers, and Research labels and removed the New-Label label on Feb 4, 2024.
1 participant