CVE-2025-46560

Source
https://nvd.nist.gov/vuln/detail/CVE-2025-46560
Import Source
https://storage.googleapis.com/cve-osv-conversion/osv-output/CVE-2025-46560.json
JSON Data
https://api.osv.dev/v1/vulns/CVE-2025-46560
Aliases
Related
Published
2025-04-30T01:15:52Z
Modified
2025-05-29T03:17:17Z
Severity
  • 7.5 (High) CVSS_V3 - CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
Summary
[none]
Details

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.8.0 and prior to 0.8.5 are affected by a critical performance vulnerability in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio|>, <|image|>) with repeated tokens based on precomputed lengths. Due to inefficient list concatenation operations, the algorithm exhibits quadratic time complexity (O(n²)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs. This issue has been patched in version 0.8.5.
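The quadratic behavior described above can be sketched in a few lines. This is a hypothetical illustration of the general pattern, not vLLM's actual code: all names (`PLACEHOLDER`, both function names) are made up. The point is that building a token list with `+` copies the accumulated list on every placeholder expansion, giving O(n²) total work, whereas an in-place `extend` keeps it O(n).

```python
PLACEHOLDER = -1  # hypothetical sentinel id standing in for <|image|> / <|audio|>

def replace_placeholders_quadratic(token_ids, repeat_len, repl_id=0):
    """Vulnerable pattern: `out = out + [...]` allocates a new list and
    copies all prior elements on every iteration, so total work is O(n^2)."""
    out = []
    for tok in token_ids:
        if tok == PLACEHOLDER:
            out = out + [repl_id] * repeat_len  # full copy of `out` each time
        else:
            out = out + [tok]
    return out

def replace_placeholders_linear(token_ids, repeat_len, repl_id=0):
    """Patched-style pattern: in-place extend/append amortizes to O(n)."""
    out = []
    for tok in token_ids:
        if tok == PLACEHOLDER:
            out.extend([repl_id] * repeat_len)
        else:
            out.append(tok)
    return out
```

With many placeholders and a large precomputed `repeat_len`, the first variant's per-iteration copying dominates, which is why a crafted multimodal input can exhaust CPU time on the server.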

References

Affected packages

Git / github.com/vllm-project/vllm

Affected ranges

Type
GIT
Repo
https://github.com/vllm-project/vllm
Events