
[Model] Initialize Phi-3-vision support #4986

Merged (38 commits) on Jun 18, 2024
Conversation

@Isotr0py (Contributor) commented May 22, 2024


FIX #4958

Note that I have only implemented the Phi-3-vision model and tested the model weights loading so far.

This PR is still at an early stage of completion.

  • Implement the Phi-3-vision model and weight-loading logic
  • Test Phi-3-vision inference and add Phi3ImageInputs etc. (a rough sketch of this structure follows the list below)
  • Add Phi-3-vision ImageProcessor (waiting for [Core] Support image processor #4197 to be merged)
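
A rough sketch of the shape Phi3ImageInputs could take, following the TypedDict pattern used by other vLLM vision-language model implementations; the field names and shapes here are assumptions rather than the final design:

# Sketch only: assumed field names/shapes, modeled on vLLM's existing
# pixel-value input TypedDicts for other VLMs.
from typing import Literal, TypedDict

import torch

class Phi3ImagePixelInputs(TypedDict):
    type: Literal["pixel_values"]
    data: torch.Tensor
    """HD-transform crops per image, e.g. (batch, num_crops, 3, 336, 336)."""
    image_sizes: torch.Tensor
    """Original (height, width) of each image, shape (batch, 2)."""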

PR Checklist

Thank you for your contribution to vLLM! Before submitting the pull request, please ensure that the PR meets the following criteria. This helps vLLM maintain code quality and improves the efficiency of the review process.

PR Title and Classification

Only specific types of PRs will be reviewed. The PR title should be prefixed appropriately to indicate the type of change. Please use one of the following prefixes:

  • [Bugfix] for bug fixes.
  • [CI/Build] for build or continuous integration improvements.
  • [Doc] for documentation fixes and improvements.
  • [Model] for adding a new model or improving an existing model. Model name should appear in the title.
  • [Frontend] For changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.)
  • [Kernel] for changes affecting CUDA kernels or other compute kernels.
  • [Core] for changes in the core vLLM logic (e.g., LLMEngine, AsyncLLMEngine, Scheduler, etc.)
  • [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD]).
  • [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the following code quality standards:

  • We adhere to the Google Python style guide and Google C++ style guide.
  • Pass all linter checks. Please use format.sh to format your code.
  • The code needs to be well-documented to ensure that future contributors can easily understand it.
  • Include sufficient tests to ensure that the project stays correct and robust. This includes both unit tests and integration tests.
  • Please add documentation to docs/source/ if the PR modifies user-facing behavior of vLLM. This helps vLLM users understand and make use of the new features or changes.

Notes for Large Changes

Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag the PR with rfc-required and may not review it.

What to Expect for the Reviews

The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient, and to make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:

  • After the PR is submitted, it will be assigned to a reviewer. Every reviewer picks up PRs based on their expertise and availability.
  • After the PR is assigned, the reviewer will provide a status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team.
  • After the review, the reviewer will add an action-required label to the PR if changes are required. The contributor should address the comments and ping the reviewer to re-review the PR.
  • Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion.

Thank You

Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!

@Reichenbachian

Woot woot! I love the fast movement on this.

@ywang96 ywang96 self-assigned this May 23, 2024
@Isotr0py (Contributor, Author)

Well, it seems that inference works now:

Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
INFO 05-23 17:49:28 llm_engine.py:103] Initializing an LLM engine (v0.4.2) with config: model='/data/LLM-model/Phi-3-vision-128k-instruct', speculative_config=None, tokenizer='/data/LLM-model/Phi-3-vision-128k-instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cpu, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), seed=0, served_model_name=/data/LLM-model/Phi-3-vision-128k-instruct)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
WARNING 05-23 17:49:28 cpu_executor.py:116] CUDA graph is not supported on CPU, fallback to the eager mode.
WARNING 05-23 17:49:28 cpu_executor.py:143] Environment variable VLLM_CPU_KVCACHE_SPACE (GB) for CPU backend is not set, using 4 by default.
INFO 05-23 17:49:28 selector.py:62] Using Torch SDPA backend.
INFO 05-23 17:49:43 phi3v.py:104] learnable separator enabled for hd transform, hd_transform_order = sub_glb
INFO 05-23 17:49:48 cpu_executor.py:72] # CPU blocks: 682
Processed prompts:   0%|                                                                                                                            | 0/1 [00:00<?, ?it/s, Generation Speed: 0.00 toks/s]torch.Size([1, 17, 3, 336, 336])
INFO 05-23 17:50:20 phi3v.py:263] img_embeds size: torch.Size([1, 17, 3, 336, 336]), image sizes: (1008, 1344) loading time 0:00:29.625456
Processed prompts: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [01:48<00:00, 108.67s/it, Generation Speed: 0.15 toks/s]
 The image shows a city street corner with a prominent red stop sign.

@Reichenbachian

Hype. How can I help out? Would love to push the ball forward on this.

@Isotr0py Isotr0py marked this pull request as ready for review May 26, 2024 06:56
@ywang96 (Member) commented Jun 7, 2024

Hey @Isotr0py - Now that we have done some refactoring around VLMs, do you have some bandwidth to update this PR so I can start reviewing? Thanks!

@Isotr0py (Contributor, Author) commented Jun 8, 2024

@ywang96 I have updated the code to make examples/phi3v_example.py work now. Can you have a look at this?

@DarkLight1337 (Member)

Can you add a test to check the model's consistency with its HuggingFace counterpart? That would be the most straightforward way to verify the correctness of your implementation.
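
A rough sketch of what such a consistency check could look like, written as a standalone comparison rather than the actual test added in this PR. The HuggingFace side follows the public transformers API for Phi-3-vision, and the vLLM side mirrors the engine arguments used later in this thread; the image path and the 64-token cap are illustrative assumptions:

# Sketch: compare greedy outputs of the HF reference implementation and vLLM.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor
from vllm import LLM, SamplingParams
from vllm.multimodal.image import ImagePixelData

MODEL_ID = "microsoft/Phi-3-vision-128k-instruct"
PROMPT = "<|user|>\n<|image_1|>\nWhat is shown in this image?<|end|>\n<|assistant|>\n"
image = Image.open("images/stop_sign.png")  # hypothetical local test image

# HuggingFace reference, greedy decoding.
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
hf_model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="cuda")
inputs = processor(PROMPT, [image], return_tensors="pt").to("cuda")
hf_ids = hf_model.generate(**inputs, max_new_tokens=64, do_sample=False)
hf_text = processor.batch_decode(
    hf_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0]

# vLLM output, greedy decoding, with the same arguments as the example in this thread.
llm = LLM(model=MODEL_ID, trust_remote_code=True, max_model_len=4096,
          image_input_type="pixel_values", image_token_id=32044,
          image_input_shape="1,3,1008,1344", image_feature_size=1921)
vllm_prompt = PROMPT.replace("<|image_1|>", "<|image|>" * 1921 + "<s>")
outputs = llm.generate({"prompt": vllm_prompt, "multi_modal_data": ImagePixelData(image)},
                       sampling_params=SamplingParams(temperature=0, max_tokens=64))
vllm_text = outputs[0].outputs[0].text

# With greedy decoding the two should closely agree, modulo the small numeric
# differences mentioned later in this thread.
print("HF:  ", hf_text)
print("vLLM:", vllm_text)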

@Youho99 commented Jun 12, 2024

I am interested in using this Phi-3-vision implementation.

When do you think it will be functional and officially added to the vLLM project?

@sung-ho-moon

I understand. Thank you.

@Youho99 commented Jun 19, 2024

Hello guys!

I installed one of the latest commits supporting Phi-3-vision in order to test it, with a view to using it later.

The implementation is functional.
However, I cannot get a long answer.

I ask the VLM to describe an image (a graph in my case).
It answers, but only with the beginning of its response.

Example: “The image shows a line graph representing the projected share of the population in extreme”

I cannot get the rest of the answer.

This is probably related to "max_tokens", so I increased it (tested with 1024/2048/4096), but I still get the same result.

Do you have any idea how to get the full answer?

Thanks!

@Isotr0py (Contributor, Author) commented Jun 19, 2024

@Youho99 Have you limited max_model_len? That can also affect the length of the result.

Though Phi-3-vision in vLLM currently has a numeric difference issue, it's unlikely that it would produce an incomplete result...

@Youho99 commented Jun 19, 2024

@Isotr0py max_model_len=4096

I use phi3v_example.py as the base for my code.

Actually, I don't think it "generates an incomplete result" per se. I rather suspect that a complete response is generated but that, for some unknown reason, only the beginning of it is displayed.
(This is just a hypothesis.)

@Isotr0py (Contributor, Author)

Can you remove or increase max_model_len=4096 to see whether it gives the full answer?

max_model_len=4096 was only there to prevent OOM with the 128k context length on my device, and I forgot to remove it...

@Youho99 commented Jun 19, 2024

Same results with max_model_len=8192 and max_model_len=16384.
OOM with max_model_len=32768.

Here is my code:

import os
import subprocess
from PIL import Image
from vllm import LLM, SamplingParams
from vllm.multimodal.image import ImagePixelData
import torch
import gc

model = None

def initialize_phi3v(model_path="microsoft/Phi-3-vision-128k-instruct"):

    global model

    # Delete previous model if it exists
    if 'model' in globals() and model is not None:
        del model
        gc.collect()
        if torch.cuda.is_available():
            torch.cuda.empty_cache()

    model = LLM(
        model=model_path,
        trust_remote_code=True,
        max_model_len=4096,
        image_input_type="pixel_values",
        image_token_id=32044,
        image_input_shape="1,3,1008,1344",
        image_feature_size=1921,
        disable_image_processor=False,
        disable_sliding_window=True, # For using Flash-Attn
    )
    return model

def run_phi3v(model, image_path, prompt):
    image = Image.open(image_path)

    # single-image prompt
    prompt = f"<|user|>\n<|image_1|>\n{prompt}<|end|>\n<|assistant|>\n"  # noqa: E501
    prompt = prompt.replace("<|image_1|>", "<|image|>" * 1921 + "<s>")

    sampling_params = SamplingParams(temperature=0, max_tokens=2048)

    outputs = model.generate({
        "prompt": prompt,
        "sampling_params": sampling_params,
        "multi_modal_data": ImagePixelData(image),
    })
    for o in outputs:
        generated_text = o.outputs[0].text
        print(generated_text)

model = initialize_phi3v()
image_path = "images/00006834003066.png"
prompt = "What is shown in this image?"

run_phi3v(model, image_path, prompt)
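
As an aside on the prompt construction above: image_feature_size must match the number of <|image|> placeholder tokens inserted into the prompt, and that number depends on the input image shape. A rough sanity check of where 1921 comes from for image_input_shape="1,3,1008,1344" (this breakdown is an assumption based on the Phi-3-vision HD transform, not something spelled out in this thread):

# Assumed breakdown (sketch): 1008x1344 is split into a 3x4 grid of 336x336
# sub-crops plus one global crop; each crop contributes 12x12 pooled patch
# tokens, with one separator per feature row and one between sub and global.
h_crop, w_crop = 1008 // 336, 1344 // 336            # 3 x 4 sub-crops
patch_tokens = (h_crop * w_crop + 1) * 12 * 12       # sub-crops + global crop
sub_newlines = h_crop * 12                           # one per row of the sub-image grid
glb_newlines = 12                                    # one per row of the global crop
glb_separator = 1                                    # separator between sub and global features
print(patch_tokens + sub_newlines + glb_newlines + glb_separator)  # -> 1921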

@Isotr0py (Contributor, Author) commented Jun 19, 2024

@Youho99 It seems that sampling_params is being passed incorrectly. This should work:

outputs = llm.generate({
        "prompt": prompt,
        "multi_modal_data": ImagePixelData(image),
    },
    sampling_params=sampling_params)
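
(The difference is that LLM.generate takes sampling_params as its own keyword argument rather than as a key inside the prompt dictionary. With the original call, the SamplingParams object was effectively ignored and the engine fell back to the default sampling parameters, whose max_tokens was 16 at the time, which is consistent with the truncated outputs reported above.)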

@Youho99 commented Jun 19, 2024

@Youho99 It seems that sampling_params is being passed incorrectly. This should work:

outputs = llm.generate({
        "prompt": prompt,
        "multi_modal_data": ImagePixelData(image),
    },
    sampling_params=sampling_params)

That works! Thanks!

@subodhchhabra

Getting an error when trying to scale with Ray (tensor_parallel_size=2); any idea what is happening here?

Traceback (most recent call last):
  File "/tmp/ray/session_2024-06-20_09-38-29_030934_8/runtime_resources/working_dir_files/_ray_pkg_0cbd81c426d5525d/src/phi3v_ext.py", line 94, in main
    parse_result(ds)
  File "/tmp/ray/session_2024-06-20_09-38-29_030934_8/runtime_resources/working_dir_files/_ray_pkg_0cbd81c426d5525d/src/phi3v_ext.py", line 50, in parse_result
    for output in ds.iter_rows():
  File "/home/ray/anaconda3/lib/python3.10/site-packages/ray/data/iterator.py", line 245, in _wrapped_iterator
    for batch in batch_iterable:
  File "/home/ray/anaconda3/lib/python3.10/site-packages/ray/data/iterator.py", line 162, in _create_iterator
    block_iterator, stats, blocks_owned_by_consumer = self._to_block_iterator()
  File "/home/ray/anaconda3/lib/python3.10/site-packages/ray/data/_internal/iterator/iterator_impl.py", line 33, in _to_block_iterator
    block_iterator, stats, executor = ds._plan.execute_to_iterator()
  File "/home/ray/anaconda3/lib/python3.10/site-packages/ray/data/exceptions.py", line 86, in handle_trace
    raise e.with_traceback(None) from SystemException()
ray.exceptions.ActorDiedError: The actor died because of an error raised in its creation task, ray::_MapWorker.__init__() (pid=812, ip=10.89.133.29, actor_id=2330fca6520bd378b441638d05000000, repr=MapWorker(MapBatches(LLMImagePredictor)))
  File "/home/ray/anaconda3/lib/python3.10/site-packages/ray/data/_internal/execution/operators/actor_pool_map_operator.py", line 350, in __init__
    self._map_transformer.init()
  File "/home/ray/anaconda3/lib/python3.10/site-packages/ray/data/_internal/execution/operators/map_transformer.py", line 126, in init
    self._init_fn()
  File "/home/ray/anaconda3/lib/python3.10/site-packages/ray/data/_internal/planner/plan_udf_map_op.py", line 118, in init_fn
    ray.data._cached_fn = op_fn(
  File "/home/ray/anaconda3/lib/python3.10/site-packages/ray/data/_internal/execution/util.py", line 70, in __init__
    super().__init__(*args, **kwargs)
  File "/tmp/ray/session_2024-06-20_09-38-29_030934_8/runtime_resources/working_dir_files/_ray_pkg_0cbd81c426d5525d/src/image_predictor.py", line 34, in __init__
    self.llm = LLM(
  File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 144, in __init__
    self.llm_engine = LLMEngine.from_engine_args(
  File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 384, in from_engine_args
    engine = cls(
  File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 230, in __init__
    self.model_executor = executor_class(
  File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/executor/distributed_gpu_executor.py", line 25, in __init__
    super().__init__(*args, **kwargs)
  File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 41, in __init__
    self._init_executor()
  File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/executor/ray_gpu_executor.py", line 40, in _init_executor
    self._init_workers_ray(placement_group)
  File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/executor/ray_gpu_executor.py", line 185, in _init_workers_ray
    self._run_workers("init_worker", all_kwargs=init_worker_all_kwargs)
  File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/executor/ray_gpu_executor.py", line 282, in _run_workers
    ray_worker_outputs = ray.get(ray_worker_outputs)
ray.exceptions.RayTaskError(RaySystemError): ray::RayWorkerWrapper.execute_method() (pid=780, ip=10.89.142.145, actor_id=060bebe8f7ccbb9727b485db05000000, repr=<vllm.executor.ray_utils.RayWorkerWrapper object at 0x7fde9b6909a0>)
At least one of the input arguments for this task could not be computed:
ray.exceptions.RaySystemError: System error: No module named 'transformers_modules'
traceback: Traceback (most recent call last):
ModuleNotFoundError: No module named 'transformers_modules'

@DarkLight1337 (Member)

This may be related to #4169 and #3593.
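
This ModuleNotFoundError usually means that the Ray worker processes cannot import the dynamically generated transformers_modules package that transformers creates when loading a trust_remote_code model. Below is only a hedged sketch of one possible mitigation, assuming transformers' dynamic_module_utils.init_hf_modules() helper is available and the model files are reachable from every node; the linked issues are the better reference for an actual fix:

# Sketch only (not a confirmed fix): make sure the dynamic-module cache for
# trust_remote_code models exists inside the Ray actor before vLLM is built.
# LLMImagePredictor is the user's class name from the traceback above.
from transformers.dynamic_module_utils import init_hf_modules
from vllm import LLM

class LLMImagePredictor:
    def __init__(self, model_path: str):
        # Create the `transformers_modules` package (under the HF modules
        # cache) in this worker process so later imports can resolve it.
        init_hf_modules()
        self.llm = LLM(
            model=model_path,
            trust_remote_code=True,
            tensor_parallel_size=2,
        )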

robertgshaw2-neuralmagic pushed a commit to neuralmagic/nm-vllm that referenced this pull request Jun 23, 2024
kzawora-intel added a commit to HabanaAI/vllm-fork that referenced this pull request Jul 2, 2024
* [Hardware][Intel] Optimize CPU backend and add more performance tips (vllm-project#4971)

Co-authored-by: Jianan Gu <[email protected]>

* [Docs] Add 4th meetup slides (vllm-project#5509)

* [Misc] Add vLLM version getter to utils (vllm-project#5098)

* [CI/Build] Simplify OpenAI server setup in tests (vllm-project#5100)

* [Doc] Update LLaVA docs (vllm-project#5437)

Co-authored-by: Roger Wang <[email protected]>

* [Kernel] Factor out epilogues from cutlass kernels (vllm-project#5391)

Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: zifeitong <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>

* [MISC] Remove FP8 warning (vllm-project#5472)

Co-authored-by: Philipp Moritz <[email protected]>

* Seperate dev requirements into lint and test (vllm-project#5474)

* Revert "[Core] Remove unnecessary copies in flash attn backend" (vllm-project#5478)

* [misc] fix format.sh (vllm-project#5511)

* [CI/Build] Disable test_fp8.py (vllm-project#5508)

* [Kernel] Disable CUTLASS kernels for fp8 (vllm-project#5505)

* Add `cuda_device_count_stateless` (vllm-project#5473)

* [Hardware][Intel] Support CPU inference with AVX2 ISA (vllm-project#5452)

* [Misc] Fix arg names in quantizer script (vllm-project#5507)

* bump version to v0.5.0.post1 (vllm-project#5522)

* [CI/Build][Misc] Add CI that benchmarks vllm performance on those PRs with `perf-benchmarks` label (vllm-project#5073)

Co-authored-by: simon-mo <[email protected]>

* [CI/Build] Disable LLaVA-NeXT CPU test (vllm-project#5529)

* [Kernel] Fix CUTLASS 3.x custom broadcast load epilogue (vllm-project#5516)

* [Misc] Fix arg names (vllm-project#5524)

* [ Misc ] Rs/compressed tensors cleanup (vllm-project#5432)

Co-authored-by: mgoin <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>

* [Kernel] Suppress mma.sp warning on CUDA 12.5 and later (vllm-project#5401)

* [mis] fix flaky test of test_cuda_device_count_stateless (vllm-project#5546)

* [Core] Remove duplicate processing in async engine (vllm-project#5525)

* [misc][distributed] fix benign error in `is_in_the_same_node` (vllm-project#5512)

* [Docs] Add ZhenFund as a Sponsor (vllm-project#5548)

* [Doc] Update documentation on Tensorizer (vllm-project#5471)

* [Bugfix] Enable loading FP8 checkpoints for gpt_bigcode models  (vllm-project#5460)

Signed-off-by: Thomas Parnell <[email protected]>

* [Bugfix] Fix typo in Pallas backend (vllm-project#5558)

* [Core][Distributed] improve p2p cache generation (vllm-project#5528)

* Add ccache to amd (vllm-project#5555)

* [Core][Bugfix]: fix prefix caching for blockv2 (vllm-project#5364)

Signed-off-by: Lei Wen <[email protected]>
Co-authored-by: Lei Wen <[email protected]>

* [mypy] Enable type checking for test directory (vllm-project#5017)

* [CI/Build] Test both text and token IDs in batched OpenAI Completions API (vllm-project#5568)

* [misc] Do not allow to use lora with chunked prefill. (vllm-project#5538)

Co-authored-by: Cyrus Leung <[email protected]>

* add gptq_marlin test for bug report vllm-project#5088 (vllm-project#5145)

* [BugFix] Don't start a Ray cluster when not using Ray (vllm-project#5570)

* [Fix] Correct OpenAI batch response format (vllm-project#5554)

* Add basic correctness 2 GPU tests to 4 GPU pipeline (vllm-project#5518)

* [CI][BugFix] Flip is_quant_method_supported condition (vllm-project#5577)

* [build][misc] limit numpy version (vllm-project#5582)

* [Doc] add debugging tips for crash and multi-node debugging (vllm-project#5581)

* Fix w8a8 benchmark and add Llama-3-8B (vllm-project#5562)

* [Model] Rename Phi3 rope scaling type (vllm-project#5595)

* Correct alignment in the seq_len diagram. (vllm-project#5592)

Co-authored-by: Liqian Chen <[email protected]>

* [Kernel] `compressed-tensors` marlin 24 support (vllm-project#5435)

* [Misc] use AutoTokenizer for benchmark serving when vLLM not installed (vllm-project#5588)

* [Hardware][Intel GPU] Add Intel GPU(XPU) inference backend (vllm-project#3814)

Co-authored-by: Jiang Li <[email protected]>
Co-authored-by: Abhilash Majumder <[email protected]>
Co-authored-by: Abhilash Majumder <[email protected]>

* [CI/BUILD] Support non-AVX512 vLLM building and testing (vllm-project#5574)

* [CI] the readability of benchmarking and prepare for dashboard (vllm-project#5571)

[CI] Improve the readability of performance benchmarking results and prepare for upcoming performance dashboard (vllm-project#5571)

* [bugfix][distributed] fix 16 gpus local rank arrangement (vllm-project#5604)

* [Optimization] use a pool to reuse LogicalTokenBlock.token_ids (vllm-project#5584)

* [Bugfix] Fix KV head calculation for MPT models when using GQA (vllm-project#5142)

* [Fix] Use utf-8 encoding in entrypoints/openai/run_batch.py (vllm-project#5606)

* [Speculative Decoding 1/2 ] Add typical acceptance sampling as one of the sampling techniques in the verifier (vllm-project#5131)

* [Model] Initialize Phi-3-vision support (vllm-project#4986)

* [Kernel] Add punica dimensions for Granite 13b (vllm-project#5559)

Signed-off-by: Joe Runde <[email protected]>

* [misc][typo] fix typo (vllm-project#5620)

* [Misc] Fix typo (vllm-project#5618)

* [CI] Avoid naming different metrics with the same name in performance benchmark (vllm-project#5615)

* [bugfix][distributed] improve p2p capability test (vllm-project#5612)

[bugfix][distributed] do not error if two processes do not agree on p2p capability (vllm-project#5612)

* [Misc] Remove import from transformers logging (vllm-project#5625)

* [CI/Build][Misc] Update Pytest Marker for VLMs (vllm-project#5623)

* [ci] Deprecate original CI template (vllm-project#5624)

Signed-off-by: kevin <[email protected]>

* [Misc] Add OpenTelemetry support (vllm-project#4687)

This PR adds basic support for OpenTelemetry distributed tracing.
It includes changes to enable tracing functionality and improve monitoring capabilities.

I've also added a markdown with print-screens to guide users how to use this feature. You can find it here

* [Misc] Add channel-wise quantization support for w8a8 dynamic per token activation quantization (vllm-project#5542)

* [ci] Setup Release pipeline and build release wheels with cache (vllm-project#5610)

Signed-off-by: kevin <[email protected]>

* [Model] LoRA support added for command-r (vllm-project#5178)

* [Bugfix] Fix for inconsistent behaviour related to sampling and repetition penalties  (vllm-project#5639)

Signed-off-by: Thomas Parnell <[email protected]>

* [Doc] Added cerebrium as Integration option (vllm-project#5553)

* [Bugfix] Fix CUDA version check for mma warning suppression (vllm-project#5642)

* [Bugfix] Fix w8a8 benchmarks for int8 case (vllm-project#5643)

* [Bugfix] Fix Phi-3 Long RoPE scaling implementation (vllm-project#5628)

* [Bugfix] Added test for sampling repetition penalty bug. (vllm-project#5659)

Signed-off-by: Thomas Parnell <[email protected]>

* [Bugfix][CI/Build][AMD][ROCm]Fixed the cmake build bug which generate garbage on certain devices (vllm-project#5641)

* [misc][distributed] use 127.0.0.1 for single-node (vllm-project#5619)

* [Model] Add FP8 kv cache for Qwen2 (vllm-project#5656)

* [Bugfix] Fix sampling_params passed incorrectly in Phi3v example (vllm-project#5684)

* [Misc]Add param max-model-len in benchmark_latency.py (vllm-project#5629)

* [CI/Build] Add tqdm to dependencies (vllm-project#5680)

* [ci] Add A100 queue into AWS CI template (vllm-project#5648)

Signed-off-by: kevin <[email protected]>

* [Frontend][Bugfix] Fix preemption_mode -> preemption-mode for CLI arg in arg_utils.py (vllm-project#5688)

* [ci][distributed] add tests for custom allreduce (vllm-project#5689)

* [Bugfix] AsyncLLMEngine hangs with asyncio.run (vllm-project#5654)

* [Doc] Update docker references (vllm-project#5614)

Signed-off-by: Rafael Vasquez <[email protected]>

* [Misc] Add per channel support for static activation quantization; update w8a8 schemes to share base classes (vllm-project#5650)

* [ci] Limit num gpus if specified for A100 (vllm-project#5694)

Signed-off-by: kevin <[email protected]>

* [Misc] Improve conftest (vllm-project#5681)

* [Bugfix][Doc] FIx Duplicate Explicit Target Name Errors (vllm-project#5703)

* [Kernel] Update Cutlass int8 kernel configs for SM90 (vllm-project#5514)

Co-authored-by: Varun Sundar Rabindranath <[email protected]>

* [Model] Port over CLIPVisionModel for VLMs (vllm-project#5591)

* [Kernel] Update Cutlass int8 kernel configs for SM80 (vllm-project#5275)

Co-authored-by: Varun Sundar Rabindranath <[email protected]>

* [Bugfix] Fix the CUDA version check for FP8 support in the CUTLASS kernels (vllm-project#5715)

* [Frontend] Add FlexibleArgumentParser to support both underscore and dash in names (vllm-project#5718)

* [distributed][misc] use fork by default for mp (vllm-project#5669)

* [Model] MLPSpeculator speculative decoding support (vllm-project#4947)

Signed-off-by: Thomas Parnell <[email protected]>

Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Davis Wertheimer <[email protected]>

* [Kernel] Add punica dimension for Qwen2 LoRA (vllm-project#5441)

* [BugFix] Fix test_phi3v.py (vllm-project#5725)

* [Bugfix] Add  fully sharded layer for QKVParallelLinearWithLora (vllm-project#5665)

Co-authored-by: Antoni Baum <[email protected]>

* [Core][Distributed] add shm broadcast (vllm-project#5399)

Co-authored-by: Cody Yu <[email protected]>

* [Kernel][CPU] Add Quick `gelu` to CPU (vllm-project#5717)

* [Doc] Documentation on supported hardware for quantization methods (vllm-project#5745)

* [BugFix] exclude version 1.15.0 for modelscope (vllm-project#5668)

* [ci][test] fix ca test in main (vllm-project#5746)

* [LoRA] Add support for pinning lora adapters in the LRU cache (vllm-project#5603)

* [CI][Hardware][Intel GPU] add Intel GPU(XPU) ci pipeline (vllm-project#5616)

* [Model] Support Qwen-VL and Qwen-VL-Chat models with text-only inputs (vllm-project#5710)

Co-authored-by: Roger Wang <[email protected]>

* [Misc] Remove vllm-project#4789 workaround left in vllm/entrypoints/openai/run_batch.py (vllm-project#5756)

* [Bugfix] Fix pin_lora error in TPU executor (vllm-project#5760)

* [Docs][TPU] Add installation tip for TPU (vllm-project#5761)

* [core][distributed] improve shared memory broadcast (vllm-project#5754)

* [BugFix] [Kernel] Add Cutlass2x fallback kernels (vllm-project#5744)

Co-authored-by: Varun Sundar Rabindranath <[email protected]>

* [Distributed] Add send and recv helpers (vllm-project#5719)

* [Bugfix] Add phi3v resize for dynamic shape and fix torchvision requirement (vllm-project#5772)

* [doc][faq] add warning to download models for every nodes (vllm-project#5783)

* post-rebase api adjustments

* [Doc] Add "Suggest edit" button to doc pages (vllm-project#5789)

* [Doc] Add Phi-3-medium to list of supported models (vllm-project#5788)

* [Bugfix] Fix FlexibleArgumentParser replaces _ with - for actual args (vllm-project#5795)

* [ci] Remove aws template (vllm-project#5757)

Signed-off-by: kevin <[email protected]>

* [Doc] Add notice about breaking changes to VLMs (vllm-project#5818)

* [Speculative Decoding] Support draft model on different tensor-parallel size than target model (vllm-project#5414)

* add pin_lora to habana components

* add WA for model loader

* fix api mismatches with ray

* tensor parallel fixes

* workers cpu alignment fix

* [Misc] Remove useless code in cpu_worker (vllm-project#5824)

* prefill/decode metadata fixes

* [Core] Add fault tolerance for `RayTokenizerGroupPool` (vllm-project#5748)

* re-enable attn metadata trimming

* worker_use_ray fix

* [doc][distributed] add both gloo and nccl tests (vllm-project#5834)

* [CI/Build] Add unit testing for FlexibleArgumentParser (vllm-project#5798)

* [Misc] Update `w4a16` `compressed-tensors` support to include `w8a16` (vllm-project#5794)

* [Hardware][TPU] Refactor TPU backend (vllm-project#5831)

* [Hardware][AMD][CI/Build][Doc] Upgrade to ROCm 6.1, Dockerfile improvements, test fixes (vllm-project#5422)

* [Hardware][TPU] Raise errors for unsupported sampling params (vllm-project#5850)

* [CI/Build] Add E2E tests for MLPSpeculator (vllm-project#5791)

Signed-off-by: Thomas Parnell <[email protected]>

* [Bugfix] Fix assertion in NeuronExecutor (vllm-project#5841)

* [Core] Refactor Worker and ModelRunner to consolidate control plane communication (vllm-project#5408)

Signed-off-by: Stephanie Wang <[email protected]>
Signed-off-by: Stephanie <[email protected]>
Co-authored-by: Stephanie <[email protected]>

* [Misc][Doc] Add Example of using OpenAI Server with VLM (vllm-project#5832)

* [bugfix][distributed] fix shm broadcast when the queue size is full (vllm-project#5801)

* [Bugfix] Fix embedding to support 2D inputs (vllm-project#5829)

* [Bugfix][TPU] Fix KV cache size calculation (vllm-project#5860)

* [CI/Build] Refactor image test assets (vllm-project#5821)

* [Kernel] Adding bias epilogue support for `cutlass_scaled_mm` (vllm-project#5560)

Co-authored-by: Chih-Chieh-Yang <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>

* [Frontend] Add tokenize/detokenize endpoints (vllm-project#5054)

* [Hardware][TPU] Support parallel sampling & Swapping (vllm-project#5855)

* [Bugfix][TPU] Fix CPU cache allocation (vllm-project#5869)

* Support CPU inference with VSX PowerPC ISA (vllm-project#5652)

* [doc] update usage of env var to avoid conflict (vllm-project#5873)

* [Misc] Add example for LLaVA-NeXT (vllm-project#5879)

* [BugFix] Fix cuda graph for MLPSpeculator (vllm-project#5875)

Co-authored-by: Abhinav Goyal <[email protected]>

* [Doc] Add note about context length in Phi-3-Vision example (vllm-project#5887)

* [VLM][Bugfix] Make sure that `multi_modal_kwargs` is broadcasted properly (vllm-project#5880)

Signed-off-by: Xiaowei Jiang <[email protected]>

* [Model] Add base class for LoRA-supported models (vllm-project#5018)

* [Bugfix] Fix img_sizes Parsing in Phi3-Vision (vllm-project#5888)

* [CI/Build] [1/3] Reorganize entrypoints tests (vllm-project#5526)

* add collective crash WA

* add comment to the weird mark_step

* [Model][Bugfix] Implicit model flags and reenable Phi-3-Vision (vllm-project#5896)

* [doc][misc] add note for Kubernetes users (vllm-project#5916)

* [BugFix] Fix `MLPSpeculator` handling of `num_speculative_tokens` (vllm-project#5876)

* [BugFix] Fix `min_tokens` behaviour for multiple eos tokens (vllm-project#5849)

* [CI/Build] Fix Args for `_get_logits_warper` in Sampler Test (vllm-project#5922)

* [Model] Add Gemma 2 (vllm-project#5908)

* [core][misc] remove logical block (vllm-project#5882)

* [Kernel][ROCm][AMD] fused_moe Triton configs v2 for mi300X (vllm-project#5932)

* [Hardware][TPU] Optimize KV cache swapping (vllm-project#5878)

* [VLM][BugFix] Make sure that `multi_modal_kwargs` can broadcast properly with ring buffer. (vllm-project#5905)

Signed-off-by: Xiaowei Jiang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>

* [Bugfix][Hardware][Intel CPU] Fix unpassed multi_modal_kwargs for CPU runner (vllm-project#5956)

* [Core] Registry for processing model inputs (vllm-project#5214)

Co-authored-by: ywang96 <[email protected]>

* Unmark fused_moe config json file as executable (vllm-project#5960)

* [Hardware][Intel] OpenVINO vLLM backend (vllm-project#5379)

* [Bugfix] Better error message for MLPSpeculator when `num_speculative_tokens` is set too high (vllm-project#5894)

Signed-off-by: Thomas Parnell <[email protected]>

* [CI/Build] [2/3] Reorganize entrypoints tests (vllm-project#5904)

* [Distributed] Make it clear that % should not be in tensor dict keys. (vllm-project#5927)

Signed-off-by: Xiaowei Jiang <[email protected]>

* [Spec Decode] Introduce DraftModelRunner (vllm-project#5799)

* [Bugfix] Fix compute datatype for cutlass 3.x epilogues (vllm-project#5931)

* [ Misc ] Remove `fp8_shard_indexer` from Col/Row Parallel Linear (Simplify Weight Loading) (vllm-project#5928)

Co-authored-by: Robert Shaw <rshaw@neuralmagic>

* [ Bugfix ] Enabling Loading Models With Fused QKV/MLP on Disk with FP8 (vllm-project#5921)

Co-authored-by: Robert Shaw <rshaw@neuralmagic>

* Support Deepseek-V2 (vllm-project#4650)

Co-authored-by: Philipp Moritz <[email protected]>

* [Bugfix] Only add `Attention.kv_scale` if kv cache quantization is enabled (vllm-project#5936)

* Unmark more files as executable (vllm-project#5962)

* [Bugfix] Fix Engine Failing After Invalid Request - AsyncEngineDeadError (vllm-project#5963)

Co-authored-by: Robert Shaw <rshaw@neuralmagic>

* [Kernel] Flashinfer for prefill & decode, with Cudagraph support for decode (vllm-project#4628)

Co-authored-by: LiuXiaoxuanPKU <[email protected]>, bong-furiosa <[email protected]>

* [Bugfix][TPU] Fix TPU sampler output (vllm-project#5978)

* [Bugfix][TPU] Fix pad slot id (vllm-project#5977)

* [Bugfix] fix missing last itl in openai completions benchmark (vllm-project#5926)

* [Misc] Extend vLLM Metrics logging API (vllm-project#5925)

Co-authored-by: Antoni Baum <[email protected]>

* [Kernel] Add punica dimensions for Granite 3b and 8b (vllm-project#5930)

Signed-off-by: Joe Runde <[email protected]>

* [Bugfix] Fix precisions in Gemma 1 (vllm-project#5913)

* [Misc] Update Phi-3-Vision Example (vllm-project#5981)

Co-authored-by: Cyrus Leung <[email protected]>

* [Bugfix] Support `eos_token_id` from `config.json` (vllm-project#5954)

* [Core] Optimize `SequenceStatus.is_finished` by switching to IntEnum (vllm-project#5974)

* [Kernel] Raise an exception in MoE kernel if the batch size is larger then 65k (vllm-project#5939)

* [ CI/Build ] Added E2E Test For Compressed Tensors (vllm-project#5839)

Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Robert Shaw <rshaw@neuralmagic>

* [CI/Build] Add TP test for vision models (vllm-project#5892)

* [ CI/Build ] LM Eval Harness Based CI Testing (vllm-project#5838)

Co-authored-by: Robert Shaw <rshaw@neuralmagic>

* [Bugfix][CI/Build][Hardware][AMD] Install matching torchvision to fix AMD tests (vllm-project#5949)

* [CI/Build] Temporarily Remove Phi3-Vision from TP Test (vllm-project#5989)

* [CI/Build] Reuse code for checking output consistency (vllm-project#5988)

* [CI/Build] [3/3] Reorganize entrypoints tests (vllm-project#5966)

* [ci][distributed] fix device count call

[ci][distributed] fix some cuda init that makes it necessary to use spawn (vllm-project#5991)

* [Frontend]: Support base64 embedding (vllm-project#5935)

Co-authored-by: Cyrus Leung <[email protected]>

* [Lora] Use safetensor keys instead of adapter_config.json to find unexpected modules.  (vllm-project#5909)

Co-authored-by: sang <[email protected]>

* [ CI ] Temporarily Disable Large LM-Eval Tests (vllm-project#6005)

Co-authored-by: [email protected] <rshaw@neuralmagic>

* [Misc] Fix `get_min_capability` (vllm-project#5971)

* [ Misc ] Refactor w8a8 to use `process_weights_after_load` (Simplify Weight Loading) (vllm-project#5940)

Co-authored-by: Robert Shaw <rshaw@neuralmagic>

* [misc][cuda] use nvml to avoid accidentally cuda initialization (vllm-project#6007)

* [Speculative Decoding 2/2 ] Integrate typical acceptance sampler into Spec Decode Worker (vllm-project#5348)

* Revert test changes

* cleanup

* llm engine cleanup

* utils.py cleanup

* custom ops refactor

* move xops to ops

* remove vllm/hpu/attn_bias.py

* whitespace fix

* revert accidental changes in rmsnorm

* Fix hpugraph hashing

* add trim_attn_metadata comment

* fix prompt bucketing:

* [ CI ] Re-enable Large Model LM Eval (vllm-project#6031)

* [doc][misc] remove deprecated api server in doc (vllm-project#6037)

* [Misc] update benchmark backend for scalellm (vllm-project#6018)

* [doc][misc] further lower visibility of simple api server (vllm-project#6041)

Co-authored-by: Simon Mo <[email protected]>

* [Bugfix] Use RayActorError for older versions of Ray in  RayTokenizerGroupPool (vllm-project#6039)

* [Bugfix] adding chunking mechanism to fused_moe to handle large inputs (vllm-project#6029)

* add FAQ doc under 'serving' (vllm-project#5946)

* [Bugfix][Doc] Fix Doc Formatting (vllm-project#6048)

* [Bugfix] Add explicit `end_forward` calls to flashinfer (vllm-project#6044)

* [BugFix] Ensure worker model loop is always stopped at the right time (vllm-project#5987)

* [Frontend] Relax api url assertion for openai benchmarking (vllm-project#6046)

* [Model] Changes to MLPSpeculator to support tie_weights and input_scale (vllm-project#5965)

Signed-off-by: Thomas Parnell <[email protected]>
Co-authored-by: Joshua Rosenkranz <[email protected]>

* [Core] Optimize block_manager_v2 vs block_manager_v1 (to make V2 default)  (vllm-project#5602)

* [Frontend] Add template related params to request (vllm-project#5709)

* [VLM] Remove `image_input_type` from VLM config (vllm-project#5852)

Signed-off-by: Xiaowei Jiang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Roger Wang <[email protected]>

* [Doc] Reinstate doc dependencies (vllm-project#6061)

* guard model loader wa for hpu

---------

Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Lei Wen <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: kevin <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Stephanie Wang <[email protected]>
Signed-off-by: Stephanie <[email protected]>
Signed-off-by: Xiaowei Jiang <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Jianan Gu <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: zifeitong <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Philipp Moritz <[email protected]>
Co-authored-by: Antoni Baum <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Allen.Dou <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: Sanger Steel <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: leiwen83 <[email protected]>
Co-authored-by: Lei Wen <[email protected]>
Co-authored-by: SangBin Cho <[email protected]>
Co-authored-by: Alexander Matveev <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Amit Garg <[email protected]>
Co-authored-by: Charles Riggins <[email protected]>
Co-authored-by: Liqian Chen <[email protected]>
Co-authored-by: zhyncs <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: Abhilash Majumder <[email protected]>
Co-authored-by: Abhilash Majumder <[email protected]>
Co-authored-by: Bruce Fontaine <[email protected]>
Co-authored-by: zifeitong <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Chang Su <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Ronen Schaffer <[email protected]>
Co-authored-by: sergey-tinkoff <[email protected]>
Co-authored-by: milo157 <[email protected]>
Co-authored-by: Shukant Pal <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
Co-authored-by: DearPlanet <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Joshua Rosenkranz <[email protected]>
Co-authored-by: Davis Wertheimer <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Jee Li <[email protected]>
Co-authored-by: rohithkrn <[email protected]>
Co-authored-by: Murali Andoorveedu <[email protected]>
Co-authored-by: Woo-Yeon Lee <[email protected]>
Co-authored-by: Matt Wong <[email protected]>
Co-authored-by: aws-patlange <[email protected]>
Co-authored-by: Stephanie Wang <[email protected]>
Co-authored-by: Stephanie <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Co-authored-by: Chih-Chieh-Yang <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: sasha0552 <[email protected]>
Co-authored-by: Chip Kerchner <[email protected]>
Co-authored-by: Abhinav Goyal <[email protected]>
Co-authored-by: xwjiang2010 <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
Co-authored-by: wangding zeng <[email protected]>
Co-authored-by: Lily Liu <[email protected]>
Co-authored-by: LiuXiaoxuanPKU <[email protected]>, bong-furiosa <[email protected]>
Co-authored-by: mcalman <[email protected]>
Co-authored-by: William Lin <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: llmpros <[email protected]>
Co-authored-by: sang <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: James Whedbee <[email protected]>
Co-authored-by: Joshua Rosenkranz <[email protected]>
Co-authored-by: danieljannai21 <[email protected]>
xjpang pushed a commit to xjpang/vllm that referenced this pull request Jul 8, 2024
xjpang pushed a commit to xjpang/vllm that referenced this pull request Jul 24, 2024
Temirulan pushed a commit to Temirulan/vllm-whisper that referenced this pull request Sep 6, 2024
Successfully merging this pull request may close these issues.

[Feature]: microsoft/Phi-3-vision-128k-instruct Vision support