GHSA-grg2-63fw-f2qr

Source
https://github.com/advisories/GHSA-grg2-63fw-f2qr
Import Source
https://github.com/github/advisory-database/blob/main/advisories/github-reviewed/2026/01/GHSA-grg2-63fw-f2qr/GHSA-grg2-63fw-f2qr.json
JSON Data
https://api.osv.dev/v1/vulns/GHSA-grg2-63fw-f2qr
Published
2026-01-13T18:44:15Z
Modified
2026-03-27T01:12:52.982581Z
Severity
  • 6.5 (Medium) CVSS_V3 - CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
Summary
vLLM is vulnerable to DoS in Idefics3 vision models via image payload with ambiguous dimensions
Details

Summary

Users can crash the vLLM engine serving multimodal models that use the Idefics3 vision model implementation by sending a specially crafted 1x1 pixel image. This causes a tensor dimension mismatch that results in an unhandled runtime error, leading to complete server termination.

Details

The vulnerability is triggered when the image processor encounters a 1x1 pixel image with shape (1, 1, 3) in HWC (Height, Width, Channel) format. Due to the ambiguous dimensions, the processor incorrectly assumes the image is in CHW (Channel, Height, Width) format with shape (3, H, W). This misinterpretation causes an incorrect calculation of the number of image patches, resulting in a fatal tensor split operation failure.
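The ambiguity can be illustrated with a minimal sketch. The heuristic and names below are illustrative, not vLLM's actual code; they show why a leading dimension of 1 makes a 1x1 RGB image look like a channel axis:

```python
def infer_layout(shape):
    """Guess whether a 3-tuple image shape is CHW or HWC.

    A common heuristic treats a leading dimension of 1, 3, or 4 as a
    channel count. For a 1x1 RGB image in HWC order, shape (1, 1, 3)
    matches that heuristic, so the image is misread as CHW.
    """
    if shape[0] in (1, 3, 4):  # looks like a channel count
        return "CHW"
    return "HWC"

shape = (1, 1, 3)  # 1x1 RGB image, actually in HWC order
layout = infer_layout(shape)

if layout == "CHW":
    c, h, w = shape  # misinterpreted: H=1, W=3
else:
    h, w, c = shape  # intended: H=1, W=1

print(layout, h, w)  # the misread dimensions feed the patch-count math
```

A larger image such as (224, 224, 3) does not trigger the heuristic, which is why only degenerate inputs expose the bug.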

Crash location: vllm/model_executor/models/idefics3.py line 672:

def _process_image_input(self, image_input: ImageInputs) -> torch.Tensor | list[torch.Tensor]:
    # ...
    num_patches = image_input["num_patches"]
    # Crashes here when num_patches does not sum to image_features' dim 0
    return [e.flatten(0, 1) for e in image_features.split(num_patches.tolist())]

The split() call fails because the computed num_patches value (17) does not match the actual tensor dimension (9):

RuntimeError: split_with_sizes expects split_sizes to sum exactly to 9 
(input tensor's size at dimension 0), but got split_sizes=[17]

This unhandled exception terminates the EngineCore process, crashing the server.
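The failure mode can be reproduced without a model or GPU, using a pure-Python stand-in for the tensor split (a sketch mirroring torch's split-size check, not vLLM code):

```python
def split_with_sizes(seq, sizes):
    """Split seq into consecutive chunks of the given sizes.

    Mirrors the precondition torch enforces: the sizes must sum
    exactly to the length of the input along the split dimension.
    """
    if sum(sizes) != len(seq):
        raise RuntimeError(
            f"split_with_sizes expects split_sizes to sum exactly to "
            f"{len(seq)} (input tensor's size at dimension 0), "
            f"but got split_sizes={sizes}"
        )
    out, i = [], 0
    for s in sizes:
        out.append(seq[i:i + s])
        i += s
    return out

features = list(range(9))  # the tensor actually has 9 rows of patch features
num_patches = [17]         # miscomputed patch count from the CHW misread
try:
    split_with_sizes(features, num_patches)
except RuntimeError as e:
    print(e)  # same mismatch as the advisory's traceback
```

With a correct patch count the split succeeds; the crash is purely a bookkeeping mismatch between the computed and actual sizes.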

Affected Models

Any model using the Idefics3 architecture. The vulnerability was tested with HuggingFaceTB/SmolVLM-Instruct.

Impact

Denial of service: a single crafted image request crashes the EngineCore process, terminating service for all users of the server.

Mitigation

Validate image dimensions before processing:

MIN_IMAGE_SIZE = 2  # illustrative threshold; reject degenerate inputs

def _validate_image_dimensions(self, image_shape):
    # Accepts (H, W, C) or (H, W) shapes
    h, w = image_shape[:2] if len(image_shape) == 3 else image_shape
    if h < MIN_IMAGE_SIZE or w < MIN_IMAGE_SIZE:
        raise ValueError(f"Image dimensions too small: {h}x{w}")

Handle the exception at the split site:

try:
    return [e.flatten(0, 1) for e in image_features.split(num_patches.tolist())]
except RuntimeError as e:
    # logger and InvalidImageError are illustrative names
    logger.error("Image processing failed: %s", e)
    raise InvalidImageError("Failed to process image features") from e

Fixes

  • https://github.com/vllm-project/vllm/pull/29881
Database specific
{
    "github_reviewed": true,
    "nvd_published_at": "2026-01-10T07:16:03Z",
    "cwe_ids": [
        "CWE-770"
    ],
    "github_reviewed_at": "2026-01-13T18:44:15Z",
    "severity": "MODERATE"
}
Affected packages

PyPI / vllm

Affected ranges
  • Type: ECOSYSTEM
  • Introduced: 0.6.4
  • Fixed: 0.12.0

Affected versions

0.*
0.6.4
0.6.4.post1
0.6.5
0.6.6
0.6.6.post1
0.7.0
0.7.1
0.7.2
0.7.3
0.8.0
0.8.1
0.8.2
0.8.3
0.8.4
0.8.5
0.8.5.post1
0.9.0
0.9.0.1
0.9.1
0.9.2
0.10.0
0.10.1
0.10.1.1
0.10.2
0.11.0
0.11.1
0.11.2
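Operators can check whether an installed version falls inside the affected range [0.6.4, 0.12.0) with a small helper. The crude parser below keeps only the numeric dot-separated parts (so "0.8.5.post1" compares as 0.8.5); in practice, prefer packaging.version for full PEP 440 handling:

```python
def _key(version: str):
    """Crude version key: numeric dot-parts only, stops at suffixes."""
    parts = []
    for p in version.split("."):
        if p.isdigit():
            parts.append(int(p))
        else:
            break  # stop at suffixes like "post1"
    return tuple(parts)

def is_affected(version: str) -> bool:
    # Affected range from this advisory: introduced 0.6.4, fixed 0.12.0
    return _key("0.6.4") <= _key(version) < _key("0.12.0")

print(is_affected("0.11.2"))  # True: affected
print(is_affected("0.12.0"))  # False: fixed
```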

Database specific

source
"https://github.com/github/advisory-database/blob/main/advisories/github-reviewed/2026/01/GHSA-grg2-63fw-f2qr/GHSA-grg2-63fw-f2qr.json"