An issue in LangChain (langchain-ai) v0.0.232 and earlier allows a remote attacker to execute arbitrary code via a crafted script passed to the PythonAstREPLTool._run component.
"https://github.com/pypa/advisory-database/blob/main/vulns/langchain/PYSEC-2023-147.yaml"