Two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via a malicious model repository even when the user has explicitly disabled remote code trust.
### Details
Affected files (latest `main` branch):

1. `vllm/model_executor/models/nemotron_vl.py:430`

```python
vision_model = AutoModel.from_config(config.vision_config, trust_remote_code=True)
```

2. `vllm/model_executor/models/kimi_k25.py:177`

```python
cached_get_image_processor(self.ctx.model_config.model, trust_remote_code=True)
```

Both pass a hardcoded `trust_remote_code=True` to Hugging Face API calls, overriding the user's global `--trust-remote-code=False` setting.
Relation to prior CVEs:
### Impact

Remote code execution. An attacker can craft a malicious model repository that executes arbitrary Python code when loaded by vLLM, even when the user has explicitly set `--trust-remote-code=False`. This undermines the security guarantee that `trust_remote_code=False` is intended to provide.
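To illustrate why the flag matters: with `trust_remote_code=True`, Transformers imports and executes Python files shipped inside the model repository, so any top-level statement in that code runs on load. The dynamic-import mechanism can be sketched without Transformers; the file name and contents below are purely illustrative, not taken from any real repository:

```python
import importlib.util
import pathlib
import tempfile

# Simulate a "modeling" file an attacker ships in a model repo.
# Top-level statements run as soon as the module is imported --
# exactly what trust_remote_code=True permits.
repo = pathlib.Path(tempfile.mkdtemp())
(repo / "modeling_custom.py").write_text(
    "SIDE_EFFECT = []\n"
    "SIDE_EFFECT.append('arbitrary code ran at import time')\n"
)

spec = importlib.util.spec_from_file_location(
    "modeling_custom", repo / "modeling_custom.py"
)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)  # this is the remote-code-execution point

print(module.SIDE_EFFECT[0])
```

A real attack would place destructive code at module top level instead of the harmless list append used here.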
Remediation: replace the hardcoded `trust_remote_code=True` with `self.config.model_config.trust_remote_code` in both files, and raise a clear error if the model component requires remote code but the user has not opted in.
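The guard described above could be sketched as follows; `resolve_trust_remote_code` and its parameters are hypothetical names for illustration, not existing vLLM API:

```python
def resolve_trust_remote_code(user_trust_remote_code: bool,
                              component_name: str,
                              requires_remote_code: bool) -> bool:
    """Return the trust flag to forward to Hugging Face loaders.

    Instead of hardcoding trust_remote_code=True, propagate the user's
    global setting and fail loudly when a component needs remote code
    that the user has not opted into.
    """
    if requires_remote_code and not user_trust_remote_code:
        raise ValueError(
            f"{component_name} requires executing code from the model "
            "repository, but --trust-remote-code is disabled. Re-run with "
            "--trust-remote-code only if you trust this repository."
        )
    return user_trust_remote_code
```

The affected call sites would then forward `trust_remote_code=resolve_trust_remote_code(...)` (fed from the user's model config) instead of a literal `True`.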
```json
{
  "github_reviewed": true,
  "cwe_ids": [
    "CWE-693"
  ],
  "nvd_published_at": "2026-03-27T00:16:22Z",
  "github_reviewed_at": "2026-03-27T15:27:20Z",
  "severity": "HIGH"
}
```