GRIFFIN

This repository contains the implementation of GRIFFIN (Gating by Repetition In Feedforward Intermediate Neurons), an efficient method to adaptively and instantaneously prune neurons in LLM feedforward blocks, presented in "Prompt-prompted Adaptive Structured Pruning for Efficient LLM Generation".

Harry Dong, Beidi Chen, Yuejie Chi

Carnegie Mellon University

Abstract

Transformer-based large language models (LLMs) have been applied to many fields thanks to their remarkable utility, but this comes at a considerable computational cost at deployment. Fortunately, methods such as pruning or constructing a mixture of experts (MoE) aim to exploit sparsity in transformer feedforward (FF) blocks to gain speed and reduce memory requirements. However, these techniques can be very costly and inflexible in practice, as they often require training or are restricted to specific types of architectures. To address this, we introduce GRIFFIN, a novel training-free and calibration-free method that selects unique FF experts at the sequence level for efficient generation across a plethora of LLMs with different non-ReLU activation functions. This is possible due to a critical observation that many trained LLMs naturally produce highly structured FF activation patterns within a sequence, which we call flocking. Despite our method's simplicity, we show that with 50% of the FF parameters, GRIFFIN maintains the original model's performance with little to no degradation on a variety of classification and generation tasks, all while improving latency (e.g. 1.29$\times$ and 1.25$\times$ speed-ups for Gemma 7B and Llama 2 13B, respectively, on an NVIDIA L40).

Usage

GRIFFIN implementations for different models are in src/griffin/, and similar implementations for other architectures can be placed here as well. To evaluate on XSum and CNN/DailyMail summarization tasks, use src/eval_gen.py. For LM Eval Harness tasks, use src/lm_eval.py.
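To illustrate the pattern these implementations follow, below is a minimal sketch of a GRIFFIN-style gated FF block. This is an illustrative toy, not the repo's actual API: it assumes a Llama-style SiLU-gated MLP, and the class name, the prompt_phase flag, and the selection statistic are all made up for exposition. During the prompt phase it runs the full FF block and ranks intermediate neurons by their aggregate activation magnitude across the sequence (the flocking signal); during generation it computes only the selected top-k neurons, this sequence's experts.

import torch
import torch.nn as nn

class GriffinStyleMLP(nn.Module):
    # Illustrative GRIFFIN-style gated FF block (not the repo's actual class).
    def __init__(self, hidden_size, intermediate_size, sparsity=0.5):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)
        self.act = nn.SiLU()
        self.k = int(sparsity * intermediate_size)  # neurons kept for generation
        self.expert_idx = None  # selected per sequence; reset for each new prompt

    def forward(self, x, prompt_phase=True):
        if prompt_phase:
            # Full FF pass over the prompt; aggregate per-neuron activation
            # magnitude over the sequence to pick this sequence's experts.
            a = self.act(self.gate_proj(x)) * self.up_proj(x)
            score = a.abs().sum(dim=(0, 1))               # one score per neuron
            self.expert_idx = score.topk(self.k).indices
            return self.down_proj(a)
        # Generation phase: compute only the selected k neurons.
        idx = self.expert_idx
        a = self.act(x @ self.gate_proj.weight[idx].T) * (x @ self.up_proj.weight[idx].T)
        return a @ self.down_proj.weight[:, idx].T

The real implementations in src/griffin/ hook this selection into each model's own FF modules, and the paper's actual selection statistic may differ (it describes a normalized per-token statistic); the raw magnitude sum above is a simplification.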

Setup

Clone this repository, and then set up the conda environment as follows:

conda env create -f griffin.yml
conda activate griffin
cd src

Evaluation

GRIFFIN is designed for generation tasks since the algorithm distinguishes between the prompt and generation (autoregressive) phases. Example generation evaluations, located in scripts/gen/, can be run as follows:

sh scripts/gen/gemma_7b_coqa.sh 
sh scripts/gen/llama2_7b_xsum.sh 

For many classification settings, the model never enters the generation phase, meaning GRIFFIN will produce the same outputs as the full model. For these, we can simulate generation by treating the input sequence except the last token as the prompt and forcing the model to use the experts for the final token (described in more detail in the paper). This is what --mode class does, and it should be set for all such classification tasks. Examples can be found in scripts/class/:

sh scripts/class/mistral_7b_boolq.sh 
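
For intuition, here is a toy simulation of what --mode class amounts to, continuing from the illustrative GriffinStyleMLP sketch above (again an assumption about the mechanism, not the repo's actual code path): all tokens but the last act as the prompt and select the experts, and the final token is then forced through only those experts.

import torch

hidden, inter = 16, 64
mlp = GriffinStyleMLP(hidden, inter, sparsity=0.5)

seq = torch.randn(1, 10, hidden)          # toy input sequence
prompt, last = seq[:, :-1], seq[:, -1:]   # split off the final token

_ = mlp(prompt, prompt_phase=True)        # full pass selects this sequence's experts
out = mlp(last, prompt_phase=False)       # final token uses only the selected experts
print(out.shape)                          # torch.Size([1, 1, 16])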

Citation

If you find this repository helpful in your work, please cite our paper:

@inproceedings{dong2024promptprompted,
  title={Prompt-prompted Adaptive Structured Pruning for Efficient {LLM} Generation},
  author={Harry Dong and Beidi Chen and Yuejie Chi},
  booktitle={First Conference on Language Modeling},
  year={2024},
  url={https://openreview.net/forum?id=4aqq9xTtih}
}
