GHSA-wc36-9694-f9rf

Source
https://github.com/advisories/GHSA-wc36-9694-f9rf
Import Source
https://github.com/github/advisory-database/blob/main/advisories/github-reviewed/2024/09/GHSA-wc36-9694-f9rf/GHSA-wc36-9694-f9rf.json
JSON Data
https://api.osv.dev/v1/vulns/GHSA-wc36-9694-f9rf
Aliases
  • CVE-2024-8939
Published
2024-09-17T18:33:26Z
Modified
2024-09-17T22:12:20.511899Z
Severity
  • 6.2 (Medium) CVSS_V3 - CVSS:3.1/AV:L/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
  • 6.9 (Medium) CVSS_V4 - CVSS:4.0/AV:L/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N
Summary
vLLM Denial of Service via the best_of parameter
Details

A vulnerability was found in the ilab model serve component, where improper handling of the best_of parameter in the vLLM JSON web API can lead to a Denial of Service (DoS). The API used for LLM-based sentence or chat completion accepts a best_of parameter so the server can generate several candidate completions and return the best one. When this parameter is set to a large value, the API neither enforces a timeout nor bounds the resources it consumes, allowing an attacker to cause a DoS by exhausting system resources. The API then becomes unresponsive, preventing legitimate users from accessing the service.
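One common mitigation for this class of issue is to validate and cap best_of at the API boundary before the request reaches the inference engine. The sketch below is illustrative only: MAX_BEST_OF and validate_best_of are hypothetical names chosen for this example and are not part of the vLLM API, and the appropriate limit is deployment-specific.

```python
# Hypothetical request-validation sketch: cap `best_of` before it reaches
# the inference engine. Names here are illustrative, not vLLM API.
MAX_BEST_OF = 8  # assumed deployment-specific limit


def validate_best_of(payload: dict, max_best_of: int = MAX_BEST_OF) -> dict:
    """Reject completion requests whose best_of could exhaust resources."""
    best_of = payload.get("best_of", 1)
    if not isinstance(best_of, int) or best_of < 1:
        raise ValueError("best_of must be a positive integer")
    if best_of > max_best_of:
        raise ValueError(f"best_of={best_of} exceeds limit of {max_best_of}")
    return payload
```

With a guard like this in front of the completion endpoint, a request carrying an oversized best_of is rejected immediately instead of consuming GPU and memory resources generating thousands of candidate completions.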

References

Affected packages

Package
PyPI / vllm

Affected ranges
  • Type: ECOSYSTEM
  • Events:
      Introduced: 0 (unknown introduced version; all previous versions are affected)
      Last affected: 0.5.0.post1

Affected versions

0.0.1
0.1.0
0.1.1
0.1.2
0.1.3
0.1.4
0.1.5
0.1.6
0.1.7
0.2.0
0.2.1
0.2.1.post1
0.2.2
0.2.3
0.2.4
0.2.5
0.2.6
0.2.7
0.3.0
0.3.1
0.3.2
0.3.3
0.4.0
0.4.0.post1
0.4.1
0.4.2
0.4.3
0.5.0
0.5.0.post1