- Support for LLaMA-3 (2024-07-08)
- Support for JSON Schema as a constraint (2024-05-13)
- Token masking optimization (2024-04-25)
- Support for Phi (2024-04-16)
- Online demo with JSON grammar at HF Space (2024-04-10)
- Support for Unicode (multilingual) grammars (2024-02-29)
- Integration with Text-Generation-WebUI (2023-12-17)
We are thrilled to announce that transformers_cfg has been used in the Text-Generation-WebUI project. This integration enables users to utilize our CFG capabilities within the popular, 30.5K-starred web interface for text generation. For more details, see the relevant pull request.
`transformers_cfg` is an extension library for the popular Transformers library by Hugging Face, tailored for working with context-free grammars (CFGs). This package provides additional tools and functionalities to enhance your experience with natural language processing tasks involving CFGs. It was initially developed as a pull request to the Hugging Face Transformers library; see the relevant discussion here.
- You can install the stable version of `transformers-cfg` using pip:

```bash
pip install transformers-cfg
```
- For the latest code and updates, you can install directly from the GitHub repository:

```bash
pip install git+https://github.com/epfl-dlab/transformers-CFG.git@main
```

This will install the package directly from the `main` branch of the repository.
The example below can be found in `examples/generate_json.py`:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers_cfg.grammar_utils import IncrementalGrammarConstraint
from transformers_cfg.generation.logits_process import GrammarConstrainedLogitsProcessor

if __name__ == "__main__":
    # Detect if GPU is available, otherwise use CPU
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print(f"Using device: {device}")

    model_id = "mistralai/Mistral-7B-v0.1"

    # Load model and tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_id).to(device)  # Load model to the detected device
    model.generation_config.pad_token_id = model.generation_config.eos_token_id

    # Load the JSON grammar
    with open("examples/grammars/json.ebnf", "r") as file:
        grammar_str = file.read()
    grammar = IncrementalGrammarConstraint(grammar_str, "root", tokenizer)
    grammar_processor = GrammarConstrainedLogitsProcessor(grammar)

    # Generate
    prefix1 = "This is a valid json string for http request:"
    prefix2 = "This is a valid json string for shopping cart:"
    input_ids = tokenizer(
        [prefix1, prefix2], add_special_tokens=False, return_tensors="pt", padding=True
    )["input_ids"].to(device)  # Move inputs to the same device as the model

    output = model.generate(
        input_ids,
        max_length=50,
        logits_processor=[grammar_processor],
        repetition_penalty=1.1,
        num_return_sequences=1,
    )
    # Decode the output
    generations = tokenizer.batch_decode(output, skip_special_tokens=True)
    print(generations)

    """
    This is a valid json string for http request:{ "request": { "method": "GET", "headers": [], "content": "Content","type": "application" }}
    This is a valid json string for shopping cart:{ "name": "MyCart", "price": 0, "value": 1 }
    """
```
Alternatively, you can use `transformers-cfg` to perform grammar-constrained decoding with the Hugging Face pipeline API. Click here to see an example, or check it out in `examples/pipeline_json.py`:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from transformers_cfg.grammar_utils import IncrementalGrammarConstraint
from transformers_cfg.generation.logits_process import GrammarConstrainedLogitsProcessor

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_id = "mistralai/Mistral-7B-v0.1"

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
# Load model to the detected device
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

# Load grammar
with open("examples/grammars/json.ebnf", "r") as file:
    grammar_str = file.read()
grammar = IncrementalGrammarConstraint(grammar_str, "root", tokenizer)
grammar_processor = GrammarConstrainedLogitsProcessor(grammar)

# Initialize the text-generation pipeline
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device_map="auto",
    max_length=50,
    batch_size=2,
)

generations = pipe(
    [
        "This is a valid json string for http request: ",
        "This is a valid json string for shopping cart: ",
    ],
    do_sample=False,
    logits_processor=[grammar_processor],
)
```
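Each input prompt yields a list of generation dicts with a `generated_text` field (the standard text-generation pipeline output format), so the constrained outputs can be inspected with a simple loop:

```python
# Print the grammar-constrained continuation for each prompt
for result in generations:
    print(result[0]["generated_text"])
```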
- We support the EBNF grammar description format.
- We offer the same grammar interface as the llama-cpp project, allowing you to drop-in replace llama-cpp with transformers-CFG.
- We allow you to use any of the models in the 🤗 Transformers library, including the ones that are not supported by llama-cpp.
- We support multilingual grammars: you can use any character from any language in your grammar, e.g. 中文, 日本語, 한국어, हिन्दी, العربية, עברית, or emoji 🤗.
TL;DR: Think of it as an enhanced version of regular expressions.
Here is an example of a simplified JSON grammar:
```bnf
# A JSON object is the root of the grammar
root ::= object

# An object starts with "{" and ends with "}" and contains pairs separated by ","
object ::= "{" pair ("," pair)* "}"

# A pair is a string followed by a ":" and a value
pair ::= string ":" value

# A string is a sequence of alphanumeric characters enclosed in double quotes
string ::= '"' [a-zA-Z0-9]* '"'

# A value can be a string, another object, a boolean, or null
value ::= string | object | "true" | "false" | "null"
```
This grammar describes the structure of a JSON object. It specifies that a JSON object consists of one or more key-value pairs, where each key is a string and each value can be a string, another object, a boolean, or null.
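A grammar like this can be passed to `IncrementalGrammarConstraint` directly as a string instead of being read from a file. A minimal sketch (it assumes the tokenizer from the examples above is already loaded):

```python
from transformers_cfg.grammar_utils import IncrementalGrammarConstraint

simplified_json_grammar = r"""
root ::= object
object ::= "{" pair ("," pair)* "}"
pair ::= string ":" value
string ::= '"' [a-zA-Z0-9]* '"'
value ::= string | object | "true" | "false" | "null"
"""

# "root" names the start symbol of the grammar
grammar = IncrementalGrammarConstraint(simplified_json_grammar, "root", tokenizer)
```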
A grammar doesn't need to be complicated. You can use it to describe very simple but useful things, like a valid email address, a valid URL, or a phone number:

```bnf
phone_number ::= "+" [0-9]+
```
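In the same spirit, a deliberately simplified email grammar might look like the sketch below (an illustration only, far from a full RFC 5322 grammar):

```bnf
# Hypothetical email grammar: a local part, "@", and dot-separated domain labels
root ::= [a-zA-Z0-9._]+ "@" [a-zA-Z0-9]+ ("." [a-zA-Z0-9]+)+
```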
You can also force the model to generate only emojis, or only Korean characters.

['Describe your feeling with emoji: <emoji-only continuation>', 'Write a poem with emoji: <emoji-only continuation>']
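For example, a grammar that restricts generation to Korean text can be as small as a single character-range rule (a sketch; `[가-힣]` covers the precomposed Hangul syllables block):

```bnf
# Hypothetical Korean-only grammar: one or more Hangul syllables or spaces
root ::= ([가-힣] | " ")+
```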
More details can be found in this doc from the llama-cpp project. An advanced grammar debugging guide can be found here.
You can use custom grammars to constrain the output of a language model. Check out the documentation on JSON Schema to grammar conversion to learn how to automatically create custom grammars for complex JSON objects.
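To illustrate the mapping (a hand-written sketch of the idea, not the output of the conversion tooling): a schema requiring a single string field `name` corresponds to a grammar along these lines.

```bnf
# Schema: {"type": "object", "properties": {"name": {"type": "string"}}, "required": ["name"]}
root   ::= "{" ws "\"name\"" ws ":" ws string ws "}"
string ::= "\"" [a-zA-Z0-9 ]* "\""
ws     ::= [ \t\n]*
```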
We provide a collection of grammars in the `examples/grammars` folder, which are mostly identical to the grammars in the llama-cpp project. We try to keep these grammars up-to-date with the original grammars from llama-cpp, but we cannot yet guarantee that every llama-cpp grammar can be used directly in transformers-CFG.
The list of grammars contains:

- `json.ebnf`: A grammar for generating valid JSON objects.
- `json_arr.ebnf`: A grammar for generating valid JSON arrays.
- `c.ebnf`: A grammar for generating valid C programs.
- `chess.ebnf`: A grammar for generating valid chess moves.
- `arithmetic.ebnf`: A grammar for generating valid arithmetic expressions.
- LLaMa family models
- GPT family models
- Bloom family models
- Mistral family models
- Falcon family models
- ...
See `supported_models.yaml` for the full list of supported models. As a rule of thumb, all models with the same tokenizer should naturally be supported. If you find a model that is not supported, please open an issue or submit a pull request.
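Because the constraint is built from the tokenizer, switching models only requires rebuilding the grammar constraint with the new model's tokenizer (hypothetical model id shown for illustration):

```python
# Any supported causal LM works the same way; the constraint is rebuilt per tokenizer
model_id = "gpt2"  # e.g., a GPT-family model instead of Mistral
tokenizer = AutoTokenizer.from_pretrained(model_id)
grammar = IncrementalGrammarConstraint(grammar_str, "root", tokenizer)
```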
Our updates to the `transformers_cfg` library have significantly improved the performance of grammar-constrained decoding, especially for complicated grammars. Please consider citing our work if you find the provided resources useful.
```bibtex
@inproceedings{geng-etal-2023-grammar,
  title     = {Grammar-Constrained Decoding for Structured {NLP} Tasks without Finetuning},
  author    = {Geng, Saibo and Josifoski, Martin and Peyrard, Maxime and West, Robert},
  year      = 2023,
  month     = dec,
  booktitle = {Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing},
  publisher = {Association for Computational Linguistics},
  address   = {Singapore},
  url       = {https://aclanthology.org/2023.emnlp-main.674},
  editor    = {Bouamor, Houda and Pino, Juan and Bali, Kalika}
}
```
This project is licensed under the MIT License.
This project is derived from the torch-grammars project, which was derived from the llama-cpp project.