emory-irlab/pyterrier_genrank

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

35 Commits
 
 
 
 
 
 
 
 
 
 
 
 
 
 

Repository files navigation

Version Python version License: Apache

The PyTerrier🐕 plugin for listwise generative rerankers such as RankGPT, RankVicuna, and RankZephyr. It is a PyTerrier wrapper over the implementation available in RankLLM.

Installation

pip install --upgrade git+https://github.com/emory-irlab/pyterrier_genrank.git

Example Usage

Since this implementation performs listwise reranking, it is used slightly differently from other rerankers: the candidate documents must carry their text so the LLM can rank them as a list.

import pyterrier as pt

from rerank import LLMReRanker

dataset = pt.get_dataset("irds:vaswani")

# First-stage retrieval with BM25 over the pre-built Terrier index
bm25 = pt.terrier.Retriever.from_dataset("vaswani", "terrier_stemmed", wmodel="BM25")
llm_reranker = LLMReRanker("castorini/rank_vicuna_7b_v1")

# Rerank the top 100 BM25 results after fetching each document's text
genrank_pipeline = bm25 % 100 >> pt.text.get_text(dataset, 'text') >> llm_reranker

genrank_pipeline.search('best places to have Indian food')

To use RankGPT, ensure that your API key is set in an environment file, then load the reranker with the OpenAI model string.

llm_reranker = LLMReRanker("gpt-35-turbo-1106", use_azure_openai=True)

LLMReRanker can take any 🤗 Hugging Face model id. It has been tested with the following reranking models on TREC-DL 2019:

Model nDCG@10
BM25 .48
BM25 + rank_vicuna_7b_v1 .67
BM25 + rank_zephyr_7b_v1_full .71
BM25 + gpt-35-turbo-1106 .66
BM25 + gpt-4-turbo-0409 .71
BM25 + gpt-4o-mini .71
BM25 + Llama-Spark (8B zero-shot) .61

Read the paper for detailed results here.
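Pipelines like the ones above can be compared against the BM25 baseline with PyTerrier's Experiment API. Below is a minimal sketch, assuming the standard ir_datasets identifier for TREC-DL 2019 and PyTerrier's pre-built msmarco_passage Terrier index; these identifiers are illustrative and not defined by this repository.

import pyterrier as pt
from pyterrier.measures import nDCG

from rerank import LLMReRanker

# TREC DL 2019 passage-ranking topics and qrels via ir_datasets
dl19 = pt.get_dataset('irds:msmarco-passage/trec-dl-2019/judged')

# Pre-built Terrier index for MS MARCO passages (standard PyTerrier identifier)
bm25 = pt.terrier.Retriever.from_dataset('msmarco_passage', 'terrier_stemmed', wmodel='BM25')
llm_reranker = LLMReRanker('castorini/rank_vicuna_7b_v1')
reranked = bm25 % 100 >> pt.text.get_text(dl19, 'text') >> llm_reranker

pt.Experiment(
    [bm25, reranked],
    dl19.get_topics(),
    dl19.get_qrels(),
    eval_metrics=[nDCG @ 10],
    names=['BM25', 'BM25 + rank_vicuna_7b_v1'],
)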

The reranker interface exposes additional parameters that can be modified:

# PromptMode is provided by the underlying RankLLM package
llm_reranker = LLMReRanker(model_path="castorini/rank_vicuna_7b_v1",
                           num_few_shot_examples=0,
                           top_k_candidates=100,
                           window_size=20,
                           shuffle_candidates=False,
                           print_prompts_responses=False,
                           step_size=10,
                           variable_passages=True,
                           system_message='You are RankLLM, an intelligent assistant that can rank passages based on their relevancy to the query.',
                           prefix_instruction_fn=lambda num, query: f"I will provide you with {num} passages, each indicated by number identifier []. \nRank the passages based on their relevance to query: {query}.",
                           suffix_instruction_fn=lambda num, query: f"Search Query: {query}. \nRank the {num} passages above. You should rank them based on their relevance to the search query. The passages should be listed in descending order using identifiers. The most relevant passages should be listed first. The output format should be [] > [], e.g., [1] > [2]. Only response the ranking results, do not say any word or explain.",
                           prompt_mode=PromptMode.RANK_GPT,
                           context_size=4096,
                           num_gpus=1,
                           text_key='text',
                           use_azure_openai=False)
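For instance, to rerank with the rank_zephyr model from the table above using a larger sliding window, a usage sketch based on the parameters listed here:

from rerank import LLMReRanker

llm_reranker = LLMReRanker(
    "castorini/rank_zephyr_7b_v1_full",  # model id from the results table above
    top_k_candidates=100,   # rerank the top 100 retrieved passages
    window_size=30,         # passages included in each LLM prompt
    step_size=15,           # stride of the sliding window
)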

Reference

@software{Dhole_PyTerrier_Genrank,
    author = {Dhole, Kaustubh},
    license = {Apache-2.0},
    institution = {Emory University},
    title = {{PyTerrier-GenRank: The PyTerrier Plugin for Reranking with Large Language Models}},
    url = {https://github.com/emory-irlab/pyterrier_genrank},
    year = {2024}
}
