A serialization injection vulnerability exists in LangChain's `dumps()` and `dumpd()` functions: they fail to escape free-form dictionaries containing `'lc'` keys. The `'lc'` key is used internally by LangChain to mark serialized objects, so when user-controlled data containing this key structure is later deserialized via `load()` or `loads()`, the injected structures are treated as legitimate LangChain objects rather than plain user data.
This escaping bug enabled several attack vectors:
- **Secret extraction**: injecting `{"lc": 1, "type": "secret", "id": ["ENV_VAR"]}` structures into user-controlled fields such as `metadata`, `additional_kwargs`, or `response_metadata` caused environment variables to be resolved during deserialization.
- **Instantiation within trusted namespaces**: injected constructor structures could instantiate any `Serializable` subclass, but only within the pre-approved trusted namespaces (`langchain_core`, `langchain`, `langchain_community`). This includes classes with side effects in `__init__` (network calls, file operations, etc.). Note that namespace validation was already enforced before this patch, so arbitrary classes outside these trusted namespaces could not be instantiated.

This patch fixes the escaping bug in `dumps()` and `dumpd()` and introduces new restrictive defaults in `load()` and `loads()`: allowlist enforcement via `allowed_objects="core"` (restricted to serialization mappings), `secrets_from_env` changed from `True` to `False`, and default Jinja2 template blocking via `init_validator`. These are breaking changes for some use cases.
Applications are vulnerable if they:
- Use `astream_events(version="v1")` — the v1 implementation internally uses vulnerable serialization. Note: `astream_events(version="v2")` is not vulnerable.
- Use `Runnable.astream_log()` — this method internally uses vulnerable serialization for streaming outputs.
- Call `dumps()` or `dumpd()` on untrusted data, then deserialize with `load()` or `loads()` — trusting your own serialization output makes you vulnerable if user-controlled data (e.g., from LLM responses, metadata fields, or user inputs) contains `'lc'` key structures.
- Call `load()` or `loads()` directly on untrusted data that may contain injected `'lc'` structures.
- Use `RunnableWithMessageHistory` — internal serialization in message history handling.
- Use `InMemoryVectorStore.load()` to deserialize untrusted documents.
- Use `langchain-community` caches.
- Use `hub.pull`.
- Use `StringRunEvaluatorChain` on untrusted runs.
- Use `create_lc_store` or `create_kv_docstore` with untrusted documents.
- Use `MultiVectorRetriever` with byte stores containing untrusted documents.
- Use `LangSmithRunChatLoader` with runs containing untrusted messages.

The most common attack vector is through LLM response fields like `additional_kwargs` or `response_metadata`, which can be controlled via prompt injection and then serialized/deserialized in streaming operations.
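The common thread in all of these patterns is untrusted data flowing into deserialization. As a defense-in-depth check (this helper is not part of the advisory or the patch; `contains_lc_marker` is a hypothetical name), one can scan untrusted payloads for injected `'lc'` structures before passing them anywhere near `load()`:

```python
from typing import Any

def contains_lc_marker(obj: Any) -> bool:
    """Recursively check whether a payload contains a dict with an 'lc' key,
    which vulnerable versions would interpret as a serialized LangChain object."""
    if isinstance(obj, dict):
        if "lc" in obj:
            return True
        return any(contains_lc_marker(v) for v in obj.values())
    if isinstance(obj, (list, tuple)):
        return any(contains_lc_marker(v) for v in obj)
    return False

# Example: an attacker-controlled metadata field hiding a secret-extraction payload
untrusted = {"user_data": {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}}
benign = {"user_data": {"note": "hello"}}

print(contains_lc_marker(untrusted))  # True
print(contains_lc_marker(benign))     # False
```

Rejecting or quarantining such payloads is a mitigation layered on top of upgrading, not a substitute for the patched defaults.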
Attackers who control serialized data can extract environment variable secrets by injecting `{"lc": 1, "type": "secret", "id": ["ENV_VAR"]}`, which loads the named environment variable during deserialization (when `secrets_from_env=True`, the old default). They can also inject constructor structures to instantiate any class within the trusted namespaces with attacker-controlled parameters, potentially triggering side effects such as network calls or file operations.
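For illustration, an injected constructor structure follows the same `'lc'` marker format as the secret payload. The class path below (`langchain_core.messages.HumanMessage`) is only an example of a class inside a trusted namespace, chosen because it has no side effects; any `Serializable` subclass in those namespaces could be targeted:

```python
# Shape of an injected constructor structure (illustrative). The "id" path
# names the target class; "kwargs" are attacker-controlled constructor args.
injected = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain_core", "messages", "HumanMessage"],
    "kwargs": {"content": "attacker-controlled"},
}

# If this dict reaches load()/loads() unescaped, it is instantiated as a
# real object instead of being returned as plain user data.
print(injected["type"])  # constructor
```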
Key severity factors:
- `secrets_from_env=True` (the old default)
- `additional_kwargs` can be controlled via prompt injection

```python
from langchain_core.load import dumps, load
import os

# Attacker injects secret structure into user-controlled data
attacker_dict = {
    "user_data": {
        "lc": 1,
        "type": "secret",
        "id": ["OPENAI_API_KEY"]
    }
}

serialized = dumps(attacker_dict)  # Bug: does NOT escape the 'lc' key

os.environ["OPENAI_API_KEY"] = "sk-secret-key-12345"
deserialized = load(serialized, secrets_from_env=True)
print(deserialized["user_data"])  # "sk-secret-key-12345" - SECRET LEAKED!
```
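Conceptually, the patched `dumps()`/`dumpd()` avoid this by escaping free-form dicts whose structure collides with the serialization marker. The sketch below illustrates the idea with a made-up wrapper key (`__lc_escaped__`); it is not LangChain's actual escape format:

```python
from typing import Any

def escape_user_data(obj: Any) -> Any:
    """Illustrative escaping pass: wrap any user dict carrying an 'lc' key so
    it round-trips as plain data instead of a serialized-object marker."""
    if isinstance(obj, dict):
        escaped = {k: escape_user_data(v) for k, v in obj.items()}
        if "lc" in obj:
            return {"__lc_escaped__": escaped}  # made-up wrapper key
        return escaped
    if isinstance(obj, list):
        return [escape_user_data(v) for v in obj]
    return obj

attacker_dict = {"user_data": {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}}
safe = escape_user_data(attacker_dict)
print(safe)  # {'user_data': {'__lc_escaped__': {'lc': 1, 'type': 'secret', 'id': ['OPENAI_API_KEY']}}}
```

With the marker escaped, a deserializer sees plain nested dicts and never resolves the injected secret.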
This patch introduces three breaking changes to load() and loads():
1. **`allowed_objects` parameter (defaults to `'core'`)**: enforces an allowlist of classes that can be deserialized. The `'all'` option corresponds to the list of objects specified in `mappings.py`, while the `'core'` option limits deserialization to objects within `langchain_core`. We recommend that users explicitly specify which objects they want to allow for serialization/deserialization.
2. **`secrets_from_env` default changed from `True` to `False`**: disables automatic secret loading from the environment.
3. **`init_validator` parameter (defaults to `default_init_validator`)**: blocks Jinja2 templates by default.

If you're deserializing standard LangChain types (messages, documents, prompts, trusted partner integrations like `ChatOpenAI`, `ChatAnthropic`, etc.), your code will work without changes:
```python
from langchain_core.load import load

# Uses default allowlist from serialization mappings
obj = load(serialized_data)
```
If you're deserializing custom classes not in the serialization mappings, add them to the allowlist:
```python
from langchain_core.load import load
from my_package import MyCustomClass

# Specify the classes you need
obj = load(serialized_data, allowed_objects=[MyCustomClass])
```
Jinja2 templates are now blocked by default because they can execute arbitrary code. If you need Jinja2 templates, pass init_validator=None:
```python
from langchain_core.load import load
from langchain_core.prompts import PromptTemplate

obj = load(
    serialized_data,
    allowed_objects=[PromptTemplate],
    init_validator=None
)
```
> [!WARNING]
> Only disable `init_validator` if you trust the serialized data. Jinja2 templates can execute arbitrary Python code.
`secrets_from_env` now defaults to `False`. If you need to load secrets from environment variables:
```python
from langchain_core.load import load

obj = load(serialized_data, secrets_from_env=True)
```
```json
{
  "github_reviewed_at": "2025-12-23T18:46:13Z",
  "cwe_ids": [
    "CWE-502"
  ],
  "severity": "CRITICAL",
  "github_reviewed": true,
  "nvd_published_at": "2025-12-23T23:15:44Z"
}
```