Pressure Testing GPT-4-128K

A simple 'needle in a haystack' analysis to test in-context retrieval ability of GPT-4-128K context

The Test

  1. Place a random fact or statement (the 'needle') in the middle of a long context window
  2. Ask the model to retrieve this statement
  3. Iterate over various document depths (where the needle is placed) and context lengths to measure performance

This is the code that backed this tweet.

If run, this script will populate results.json with evaluation information. The original results are kept in /original_results, though they contain less information than they should; the current script gathers and saves more data.
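
A rough sketch of what those three steps, plus the results.json write, might look like. The helpers below are illustrative rather than the repo's actual API: they assume the OpenAI 1.x Python client, measure length in characters instead of tokens for brevity, and use a made-up needle/question pair.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

needle = "The best thing to do in San Francisco is eat a sandwich in Dolores Park."  # illustrative
question_to_ask = "What is the best thing to do in San Francisco?"                   # illustrative

def insert_needle(haystack: str, depth_percent: int, context_length: int) -> str:
    """Trim the haystack to roughly context_length characters and place the needle
    depth_percent of the way through it (the real test counts tokens, not characters)."""
    trimmed = haystack[:context_length]
    split_at = int(len(trimmed) * depth_percent / 100)
    return trimmed[:split_at] + " " + needle + " " + trimmed[split_at:]

def run_tests(haystack: str, context_lengths: list[int], depth_percents: list[int]) -> None:
    results = []
    for context_length in context_lengths:
        for depth_percent in depth_percents:
            context = insert_needle(haystack, depth_percent, context_length)
            response = client.chat.completions.create(
                model="gpt-4-1106-preview",
                messages=[
                    {"role": "system", "content": "Answer only from the provided context."},
                    {"role": "user", "content": f"{context}\n\n{question_to_ask}"},
                ],
            )
            # An evaluation step would also attach a grade/score to each record.
            results.append({
                "context_length": context_length,
                "depth_percent": depth_percent,
                "model_response": response.choices[0].message.content,
            })
    with open("results.json", "w") as f:
        json.dump(results, f, indent=2)
```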

The key pieces:

  • needle : The random fact or statement you'll place in your context
  • question_to_ask: The question you'll ask your model which will prompt it to find your needle/statement
  • results_version: Set to 1. If you'd like to run this test multiple times for more data points, change this value to your version number
  • context_lengths (List[int]): The list of various context lengths you'll test. In the original test this was set to 15 evenly spaced iterations between 1K and 128K (the max)
  • document_depth_percents (List[int]): The list of various depths to place your random fact
  • model_to_test: The original test used gpt-4-1106-preview. You can easily change this to any OpenAI chat model, or to another provider's model with a bit of code adjustment
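
A hedged sketch of how these pieces might be filled in. The linspace call mirrors the "15 evenly spaced iterations between 1K and 128K" described above; the depth list and the needle/question strings are illustrative assumptions, not necessarily the repo's defaults.

```python
import numpy as np

config = {
    "needle": "The best thing to do in San Francisco is eat a sandwich in Dolores Park.",
    "question_to_ask": "What is the best thing to do in San Francisco?",
    "results_version": 1,  # bump this to collect additional runs of the same sweep
    "context_lengths": [int(x) for x in np.linspace(1_000, 128_000, num=15)],
    "document_depth_percents": [int(x) for x in np.linspace(0, 100, num=11)],  # every 10%
    "model_to_test": "gpt-4-1106-preview",
}
```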

Results Visualization

[Results heatmap] (made by pivoting the results, averaging the multiple runs, and adding labels in Google Slides)
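
The labels were added by hand in Google Slides, but the pivot-and-average step can be sketched with pandas. The "score" column is an assumed field name for whatever grade the evaluation step assigns to each record.

```python
import json
import pandas as pd

# Load a results.json run and average the score per (depth, context length) cell.
with open("results.json") as f:
    records = json.load(f)

df = pd.DataFrame(records)
heatmap = df.pivot_table(index="depth_percent", columns="context_length",
                         values="score", aggfunc="mean")
print(heatmap)
```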

Reproduction (2023/11/21)

Original dataset

A single re-run of the Needle in a Haystack test on the original data: [results heatmap]

File order reshuffled in dataset

In the Twitter thread it was suggested that the issue may be caused by unlucky placement of the needle inside the dataset, and that it could be interesting to run the same test on a dataset where the input files are loaded and concatenated in a different order.

This repo contributes exactly that, along with the results.json files for a single run.
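
A minimal sketch of the reshuffle, assuming the haystack is built by concatenating a directory of text files (the directory path here is a placeholder, not necessarily the repo's layout):

```python
import glob
import random

# Load the background files in a shuffled (but reproducible) order
# before concatenating them into the haystack.
paths = sorted(glob.glob("essays/*.txt"))  # placeholder path
random.seed(42)                            # fix the seed so the reshuffled run is reproducible
random.shuffle(paths)
haystack = "\n".join(open(p, encoding="utf-8").read() for p in paths)
```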

The low scores still fall within the first 50% of document depth. It may look like there are fewer such cases, but that could simply be because this plot reflects a single run; two or more runs might show the needle being missed at other locations as well.

[Results heatmap for the reshuffled dataset]
