In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec() method.
{ "nvd_published_at": "2023-04-05T02:15:00Z", "cwe_ids": [ "CWE-74", "CWE-94" ], "severity": "CRITICAL", "github_reviewed": true, "github_reviewed_at": "2023-04-05T19:39:41Z" }