A critical performance vulnerability has been identified in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio*|>, <|image*|>) with repeated tokens based on precomputed lengths. Due to inefficient list concatenation operations, the algorithm exhibits quadratic time complexity (O(n²)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs.
Affected Component: the input_processor_for_phi4mm function. https://github.com/vllm-project/vllm/blob/8cac35ba435906fb7eb07e44fe1a8c26e8744f4e/vllm/model_executor/models/phi4mm.py#L1182-L1197
The code rebuilds the input_ids list for each placeholder using input_ids = input_ids[:i] + tokens + input_ids[i+1:]. Each concatenation copies the entire list, so every replacement costs O(n) in the current list length. For k placeholders each expanding to m tokens, the total work is roughly O(k·(n + k·m)); when the number of placeholders grows with the input size, this degenerates to O(n²).
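For illustration, a minimal sketch of the problematic pattern (simplified and self-contained; PLACEHOLDER, EXPANSION, and expand_quadratic are illustrative names, not the actual vLLM code):

PLACEHOLDER = -1   # stand-in for a placeholder token id
EXPANSION = 50     # stand-in for the precomputed expansion length

def expand_quadratic(input_ids):
    i = 0
    while i < len(input_ids):
        if input_ids[i] == PLACEHOLDER:
            tokens = [PLACEHOLDER] * EXPANSION
            # O(len(input_ids)) copy on every single replacement
            input_ids = input_ids[:i] + tokens + input_ids[i + 1:]
            i += EXPANSION   # skip past the tokens we just inserted
        else:
            i += 1
    return input_ids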
Test data demonstrates quadratic time growth:
test_cases = [100, 200, 400, 800, 1600, 3200, 6400]
run_times = [0.002, 0.007, 0.028, 0.136, 0.616, 2.707, 11.854] # seconds
Doubling input size increases runtime by ~4x (consistent with O(n²)).
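These numbers can be reproduced with a small harness along the following lines (a hypothetical script reusing the expand_quadratic sketch above, not the original benchmark):

import time

for n in [100, 200, 400, 800, 1600, 3200, 6400]:
    ids = [PLACEHOLDER] * n   # worst case: every token is a placeholder
    start = time.perf_counter()
    expand_quadratic(ids)
    print(f"{n}: {time.perf_counter() - start:.3f} s")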
Denial-of-Service (DoS): An attacker could submit inputs with many placeholders (e.g., 10,000 <|audio_1|> tokens), causing CPU/memory exhaustion. Example: 10,000 placeholders → roughly 10,000² ≈ 100 million copied list elements.
Precompute all placeholder positions and expansion lengths upfront, then replace the repeated list concatenation with a single linear pass that builds the output list once (or writes into a preallocated array).
# Pseudocode for the O(n) solution
new_input_ids = []
for token in input_ids:
    if token in placeholder_lengths:   # placeholder token id
        # append the precomputed expansion once; extend is amortized O(1) per token
        new_input_ids.extend([token] * placeholder_lengths[token])
    else:
        new_input_ids.append(token)
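Wrapping that rebuild in a function makes it easy to check that it matches the quadratic version's output while finishing in milliseconds; the sketch below reuses the illustrative PLACEHOLDER, EXPANSION, and expand_quadratic names from above and is not the actual vLLM patch:

import time

def expand_linear(input_ids, placeholder_lengths):
    new_input_ids = []
    for token in input_ids:
        length = placeholder_lengths.get(token)
        if length is not None:
            new_input_ids.extend([token] * length)   # one amortized-O(1) append per output token
        else:
            new_input_ids.append(token)
    return new_input_ids

ids = [PLACEHOLDER] * 1600
start = time.perf_counter()
fast = expand_linear(ids, {PLACEHOLDER: EXPANSION})
print(f"linear rebuild: {time.perf_counter() - start:.3f} s")
assert fast == expand_quadratic(list(ids))   # identical output, far less work

Each input token is visited once and each output token is written once, so the cost is O(len(input_ids) + len(new_input_ids)) rather than O(n²).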
{ "nvd_published_at": "2025-04-30T01:15:52Z", "cwe_ids": [ "CWE-1333" ], "severity": "MODERATE", "github_reviewed": true, "github_reviewed_at": "2025-04-29T16:43:10Z" }