
[Bugfix] Fix prefix strings for quantized VLMs #9772

Merged: 7 commits, merged on Oct 29, 2024
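For context: quantization configs typically decide per-layer behavior (quantize vs. skip) by matching each layer's fully-qualified, dot-separated module name, so a submodule built without its enclosing prefix produces names the config cannot match. The sketch below is a minimal illustration of that failure mode; the `QuantConfig` class and layer names here are hypothetical, not vLLM's actual API.

```python
# Minimal sketch of why prefix strings matter for quantized VLMs.
# QuantConfig is hypothetical; vLLM's real configs differ, but they
# similarly match layers by fully-qualified module name.

class QuantConfig:
    def __init__(self, ignored_prefixes):
        self.ignored_prefixes = ignored_prefixes

    def is_quantized(self, layer_name: str) -> bool:
        # Per-layer decisions key off the dotted name, so a missing
        # enclosing prefix makes every lookup miss.
        return not any(layer_name.startswith(p) for p in self.ignored_prefixes)


quant = QuantConfig(ignored_prefixes=["visual."])

# Correct: the vision tower was constructed with prefix="visual", so its
# layers are matched and skipped during quantization.
assert not quant.is_quantized("visual.blocks.0.attn.qkv")

# Buggy: the same layer constructed without its prefix is not matched and
# would be treated as quantized even though its checkpoint weights are not.
assert quant.is_quantized("blocks.0.attn.qkv")
```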

Commits on Oct 28, 2024

  1. 259bc5b
  2. Add prefix to language models
     mgoin committed Oct 28, 2024 (61c49b7)
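The "Add prefix to language models" commit threads a prefix argument through nested constructors so each layer ends up with its fully-qualified name. A minimal sketch of that pattern, using hypothetical toy classes rather than vLLM's:

```python
import torch.nn as nn


class ToyLM(nn.Module):
    """Hypothetical language model that records fully-qualified layer names."""

    def __init__(self, num_layers: int, prefix: str = ""):
        super().__init__()
        # Extend the prefix per layer, e.g. "language_model.layers.0".
        self.layer_names = [
            f"{prefix}.layers.{i}" if prefix else f"layers.{i}"
            for i in range(num_layers)
        ]


class ToyVLM(nn.Module):
    """Hypothetical VLM wrapping a language model as a submodule."""

    def __init__(self):
        super().__init__()
        # The fix: pass the attribute name as the prefix, so the nested
        # LM's layer names line up with the checkpoint and with the
        # quantization config's per-layer rules.
        self.language_model = ToyLM(num_layers=2, prefix="language_model")


vlm = ToyVLM()
print(vlm.language_model.layer_names)
# ['language_model.layers.0', 'language_model.layers.1']
```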

Commits on Oct 29, 2024

  1. Fix qwen2-vl
     mgoin committed Oct 29, 2024 (a41cff0)
  2. Add optional prefix to model_loader build_model
     Signed-off-by: mgoin <[email protected]>
     mgoin committed Oct 29, 2024 (75a4a30); a sketch of this pattern follows the list
  3. Fix internlm2
     Signed-off-by: mgoin <[email protected]>
     mgoin committed Oct 29, 2024 (2b7105f)
  4. Update internlm2_ve.py
     mgoin authored Oct 29, 2024 (1893a18)
  5. c09f1a3
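The "Add optional prefix to model_loader build_model" commit above suggests the following pattern: an optional prefix parameter that embedding callers (e.g. a VLM initializing its inner language model) can supply, while top-level callers leave it unset. This is a hedged sketch of that idea, not vLLM's actual `build_model` signature:

```python
from typing import Optional


def build_model(model_class, config, quant_config,
                prefix: Optional[str] = None):
    # Hypothetical helper mirroring the commit's intent: forward prefix
    # only when the caller supplies one, so existing top-level call sites
    # keep working while nested models get fully-qualified names.
    extra_kwargs = {}
    if prefix is not None:
        extra_kwargs["prefix"] = prefix
    return model_class(config=config, quant_config=quant_config,
                       **extra_kwargs)


# Top-level load: no prefix needed.
# model = build_model(SomeLanguageModel, cfg, quant_cfg)

# Nested inside a VLM: layer names become "language_model.layers...".
# lm = build_model(SomeLanguageModel, cfg, quant_cfg, prefix="language_model")
```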