CVE-2025-62164

Source
https://cve.org/CVERecord?id=CVE-2025-62164
Import Source
https://storage.googleapis.com/cve-osv-conversion/osv-output/CVE-2025-62164.json
JSON Data
https://api.osv.dev/v1/vulns/CVE-2025-62164
Aliases
Related
Published
2025-11-21T01:18:38.803Z
Modified
2026-04-10T05:32:52.267393Z
Severity
  • 8.8 (High) CVSS_V3 - CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H
Summary
vLLM deserialization vulnerability leading to DoS and potential RCE
Details

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.10.2 up to but not including 0.11.1, a memory corruption vulnerability in the Completions API endpoint could lead to a crash (denial of service) and potentially remote code execution (RCE). When processing user-supplied prompt embeddings, the endpoint loads serialized tensors using torch.load() without sufficient validation. Due to a change introduced in PyTorch 2.8.0, sparse tensor integrity checks are disabled by default. As a result, maliciously crafted tensors can bypass internal bounds checks and trigger an out-of-bounds memory write during the call to to_dense(). This memory corruption can crash vLLM and may allow code execution on the server hosting vLLM. The issue has been patched in version 0.11.1.
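To make the failure mode concrete, the sketch below models the missing invariant check in pure Python. It is a hypothetical illustration, not vLLM or PyTorch code: `check_sparse_indices` stands in for the sparse-tensor integrity check that PyTorch 2.8.0 skips by default (the real toggle is `torch.sparse.check_sparse_tensor_invariants`), and `naive_to_dense` shows the densify step where an unchecked, attacker-controlled index becomes a bad write. In native code that write corrupts memory; in pure Python an out-of-range positive index merely raises, and a negative one silently wraps.

```python
def check_sparse_indices(indices, shape):
    """Validate COO-style sparse indices against the dense shape.

    Hypothetical stand-in for the integrity check disabled by default
    in PyTorch 2.8.0: every coordinate must lie within bounds before
    the tensor is densified.
    """
    for coord in indices:
        if len(coord) != len(shape):
            raise ValueError(f"rank mismatch: {coord} vs shape {shape}")
        for axis, dim in zip(coord, shape):
            if not 0 <= axis < dim:
                raise ValueError(f"index {coord} out of bounds for shape {shape}")


def naive_to_dense(indices, values, shape):
    """Densify WITHOUT bounds checks -- the unsafe pattern.

    In C/C++ kernels, an out-of-range index here is an out-of-bounds
    memory write; this Python model only shows where it happens.
    """
    rows, cols = shape
    dense = [[0] * cols for _ in range(rows)]
    for (r, c), v in zip(indices, values):
        dense[r][c] = v  # unchecked write: the vulnerable step
    return dense


# Hypothetical malicious payload: coordinate (5, 0) exceeds shape (2, 2),
# so validation rejects it before any densify/write occurs.
try:
    check_sparse_indices([(0, 0), (5, 0)], (2, 2))
except ValueError as exc:
    print("rejected:", exc)
```

A server accepting serialized tensors from clients would run such a check (or re-enable PyTorch's own invariant checking) before calling to_dense(), and load untrusted payloads with `torch.load(..., weights_only=True)` rather than trusting the pickle stream.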

Database specific
{
    "cwe_ids": [
        "CWE-123",
        "CWE-20",
        "CWE-502",
        "CWE-787"
    ],
    "cna_assigner": "GitHub_M",
    "osv_generated_from": "https://github.com/CVEProject/cvelistV5/tree/main/cves/2025/62xxx/CVE-2025-62164.json"
}
References

Affected packages

Git / github.com/vllm-project/vllm

Affected ranges

Type
GIT
Repo
https://github.com/vllm-project/vllm
Events

Database specific

source
"https://storage.googleapis.com/cve-osv-conversion/osv-output/CVE-2025-62164.json"