MLX-VLM

MLX-VLM is a package for running Vision LLMs locally on your Mac using MLX.

Get started

The easiest way to get started is to install the mlx-vlm package:

With pip:

pip install mlx-vlm
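
To confirm the install, you can import the package and print its version. This assumes the package exposes __version__ at the top level, which recent releases do; adjust if your version differs:

python -c "import mlx_vlm; print(mlx_vlm.__version__)"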

Inference

CLI

python -m mlx_vlm.generate --model qnguyen3/nanoLLaVA --max-tokens 100 --temp 0.0
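
Depending on the version installed, the generate CLI also accepts --image and --prompt flags so it can run non-interactively; check python -m mlx_vlm.generate --help for the exact options supported by your install. A sketch, reusing the image URL from the script example below:

python -m mlx_vlm.generate --model qnguyen3/nanoLLaVA --max-tokens 100 --temp 0.0 \
  --image http://images.cocodataset.org/val2017/000000039769.jpg \
  --prompt "What are these?"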

Chat UI with Gradio

python -m mlx_vlm.chat_ui --model qnguyen3/nanoLLaVA
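
This launches a local Gradio interface in your browser (Gradio's default address is http://127.0.0.1:7860).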

Script

from mlx_vlm import load, generate

# Load a 4-bit quantized LLaVA model and its processor from the Hugging Face Hub.
model_path = "mlx-community/llava-1.5-7b-4bit"
model, processor = load(model_path)

# Build the prompt with the model's chat template; the <image> token marks
# where the image embedding is inserted.
prompt = processor.tokenizer.apply_chat_template(
    [{"role": "user", "content": "<image>\nWhat are these?"}],
    tokenize=False,
    add_generation_prompt=True,
)

# Run inference on an image given by URL.
output = generate(
    model,
    processor,
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    prompt,
    verbose=False,
)
print(output)
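
The image argument can also be a local file path rather than a URL; recent versions of the image loader handle both, but treat this as an assumption and verify against your installed version. A minimal sketch, reusing the model and prompt from the snippet above ("cats.jpg" is a hypothetical local file):

# Assumes `model`, `processor`, and `prompt` from the snippet above.
output = generate(model, processor, "cats.jpg", prompt, verbose=False)
print(output)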
