CVE-2025-68664, codenamed “LangGrinch,” is a critical serialization injection vulnerability in LangChain Core with a CVSS score of 9.3. It affects one of the most widely deployed AI frameworks, with over 847 million downloads globally, and allows attackers to inject malicious data through AI model outputs or user inputs that LangChain applications then process as trusted commands.
The serialization flaw enables multiple exploitation scenarios, including extraction of secrets from environment variables, unauthorized object instantiation, and potential remote code execution on systems running vulnerable versions. A parallel vulnerability (CVE-2025-68665) affects LangChain.js with similar exploitation mechanics, creating coordinated risk across both the Python and JavaScript implementations.
Security researchers discovered LangGrinch on December 4, 2025, and public disclosure followed on December 23, 2025. LangChain awarded a record $4,000 bounty for the finding. Proof-of-concept exploits are publicly available and the attack is automatable, significantly increasing the urgency of remediation. Organizations must upgrade immediately to langchain-core 0.3.81 or 1.2.5 to protect against exploitation.
CVE-2025-68664, the LangGrinch vulnerability, is a critical security flaw in LangChain Core with a CVSS severity score of 9.3. LangChain is one of the most widely adopted frameworks for building AI-powered applications, with approximately 847 million total downloads across enterprise and development environments. Security researchers discovered the flaw on December 4, 2025, and subsequently received a record $4,000 bounty from LangChain. Public disclosure followed on December 23, 2025, alerting the global AI development community to the risk.
The LangGrinch flaw lies in how LangChain handles serialization, the mechanism for converting objects into storable formats and reconstructing them later. Attackers exploit it by injecting specially crafted data through AI model responses or user inputs. When a vulnerable LangChain application deserializes that data, it treats the payload as trusted internal structures rather than untrusted input, violating a critical security boundary.
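To make the serialization mechanism concrete, here is a minimal round-trip sketch using the dumps and loads helpers from langchain_core.load. It is illustrative only; the exact fields in the serialized output can differ between langchain-core versions.

```python
# Minimal sketch of LangChain Core serialization (illustrative; exact
# serialized fields may vary across langchain-core versions).
from langchain_core.load import dumps, loads
from langchain_core.messages import HumanMessage

# Serialize a message into LangChain's JSON wire format.
serialized = dumps(HumanMessage(content="hello"))
print(serialized)
# Typically something like:
# {"lc": 1, "type": "constructor",
#  "id": ["langchain", "schema", "messages", "HumanMessage"],
#  "kwargs": {"content": "hello", ...}}

# Deserialization reconstructs the object from that JSON. The "lc"/"type"/"id"
# keys tell the loader which class to instantiate, which is exactly the
# metadata an attacker tries to smuggle into data the application later loads.
restored = loads(serialized)
print(type(restored).__name__)  # HumanMessage
```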
This serialization injection enables attackers to steal secrets such as API keys and passwords from environment variables accessible to the LangChain process, instantiate unauthorized objects inside the application runtime, and potentially execute malicious code on servers running vulnerable implementations. At its core, LangGrinch is a fundamental trust boundary violation in LangChain’s serialization architecture.
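The vulnerable pattern the advisory describes looks roughly like the sketch below: an application persists model-influenced JSON and later hands it back to the loader, which honors whatever lc metadata it finds. The payload shape shown is illustrative only, based on LangChain's documented lc/type/id structure, and is not a working exploit.

```python
# Illustrative anti-pattern (not a working exploit): deserializing
# model-influenced data as if it were trusted internal state.
import json
from langchain_core.load import loads

def restore_cached_step(cached_json: str):
    # VULNERABLE PATTERN: cached_json may contain content that originated
    # from an LLM response or a user, yet it is handed straight to the
    # loader, which will instantiate whatever the "lc" metadata names.
    return loads(cached_json)

# Shape of the kind of entry an attacker aims to smuggle in. Per the advisory,
# crafted "secret" references can coerce vulnerable versions into resolving
# values such as API keys from the process environment.
hostile_entry = {
    "lc": 1,
    "type": "secret",
    "id": ["OPENAI_API_KEY"],  # name of a secret/env var the attacker targets
}
print(json.dumps(hostile_entry))
```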
Organizations at elevated risk include those whose LangChain applications process streaming outputs, store conversation history, implement caching mechanisms, or run logging pipelines, since these paths routinely persist and later deserialize model-influenced data. Because the parallel vulnerability (CVE-2025-68665) also affects LangChain.js, teams running both the Python and JavaScript implementations should treat this as a coordinated risk requiring immediate attention across both technology stacks.
Proof-of-concept exploits are publicly available and the attack can be automated, dramatically increasing the urgency of remediation. Affected versions are langchain-core releases prior to 0.3.81 on the legacy branch and prior to 1.2.5 on the current branch. Organizations must upgrade immediately, conduct comprehensive reviews of any systems that process AI-generated content, and ensure that environment variables containing secrets are properly protected against serialization injection.
Immediately upgrade all affected LangChain components to their patched versions. For Python environments, update langchain-core to version 1.2.5 or later, or to 0.3.81 or later on the legacy 0.3 branch. For JavaScript and TypeScript deployments using LangChain.js, upgrade the @langchain/core and langchain packages to their respective fixed releases. Treat this as a critical security update, given the public availability of proof-of-concept exploits and the automatable nature of the attack.
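As a quick sanity check during rollout, a script along the following lines can flag Python environments still running a vulnerable langchain-core release. It assumes the standard importlib.metadata module and the third-party packaging library, and takes its version floors from the advisory above.

```python
# Sketch: flag Python environments still running a vulnerable langchain-core.
# Assumes the "packaging" library is installed (it usually is, via pip).
from importlib.metadata import version, PackageNotFoundError
from packaging.version import Version

PATCHED_LEGACY = Version("0.3.81")   # fixed release on the legacy 0.3 branch
PATCHED_CURRENT = Version("1.2.5")   # fixed release on the current 1.x branch

def langchain_core_is_patched() -> bool:
    try:
        installed = Version(version("langchain-core"))
    except PackageNotFoundError:
        print("langchain-core is not installed in this environment")
        return True
    if installed < Version("1.0.0"):
        ok = installed >= PATCHED_LEGACY
    else:
        ok = installed >= PATCHED_CURRENT
    status = "patched" if ok else "VULNERABLE, upgrade now"
    print(f"langchain-core {installed}: {status}")
    return ok

if __name__ == "__main__":
    langchain_core_is_patched()
```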
Use the new allowed_objects parameter introduced in the patched versions to restrict which classes can be deserialized. Treat all LLM outputs as untrusted input, including metadata fields that could carry malicious serialization payloads. Sandbox deserialization operations in restricted environments to limit the impact of exploitation. Log and alert on deserialization of unexpected object types, which can indicate exploitation attempts. Audit stored or cached serialized objects for malicious payloads that could compromise application security.
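A hardening sketch appears below. The allowed_objects parameter is described above as new in the patched releases, but its exact signature is an assumption here (a list of permitted classes), so confirm the real form against the langchain-core 0.3.81 / 1.2.5 release notes before relying on it.

```python
# Hedged sketch: restrict what the loader may instantiate. The exact shape of
# the allowed_objects argument is assumed; confirm it against the patched
# release notes before use.
from langchain_core.load import loads
from langchain_core.messages import AIMessage, HumanMessage

ALLOWED = [HumanMessage, AIMessage]  # only classes the application genuinely needs

def safe_restore(untrusted_json: str):
    # Treat anything that passed through an LLM or a user as untrusted,
    # and let the loader reject everything outside the allowlist.
    return loads(untrusted_json, allowed_objects=ALLOWED)
```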
Monitor for unusual lc key structures in serialized data processed by LangChain applications, as these patterns may indicate exploitation attempts. Review logs for deserialization of unexpected object types that deviate from normal application behavior. Configure SIEM rules to detect serialization injection attempts and related indicators of compromise. Enable detailed audit logging for all serialization and deserialization events to support forensic analysis of potential incidents.
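For the log review and SIEM work above, a lightweight scanner along these lines can flag suspicious lc structures before they ever reach the loader. It is a sketch that assumes legitimate payloads in a given application reference only a known set of constructor paths; the EXPECTED_IDS allowlist is hypothetical and should be tuned to what the application actually serializes.

```python
# Sketch: scan serialized blobs for "lc" entries that fall outside an
# application-specific allowlist, and emit findings for SIEM/alerting.
import json
from typing import Any, Iterator

# Constructor paths this particular application legitimately deserializes
# (hypothetical allowlist; tune to your own traffic).
EXPECTED_IDS = {
    ("langchain", "schema", "messages", "HumanMessage"),
    ("langchain", "schema", "messages", "AIMessage"),
}

def suspicious_lc_entries(node: Any) -> Iterator[dict]:
    """Walk a decoded JSON structure and yield lc entries worth alerting on."""
    if isinstance(node, dict):
        if node.get("lc") == 1:
            lc_type = node.get("type")
            lc_id = tuple(node.get("id") or ())
            # Secret references and unexpected constructors are both worth a look.
            if lc_type == "secret" or (lc_type == "constructor" and lc_id not in EXPECTED_IDS):
                yield {"type": lc_type, "id": list(lc_id)}
        for value in node.values():
            yield from suspicious_lc_entries(value)
    elif isinstance(node, list):
        for item in node:
            yield from suspicious_lc_entries(item)

def audit_blob(raw: str) -> list[dict]:
    findings = list(suspicious_lc_entries(json.loads(raw)))
    for finding in findings:
        # Replace with your SIEM or log forwarder of choice.
        print(f"ALERT: unexpected lc entry {finding}")
    return findings
```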
Establish comprehensive AI security guidelines that document and enforce security boundaries for data flows in LangChain-based applications. Add automated vulnerability scanning that covers AI and ML framework dependencies, including LangChain, to detect future issues early. Perform regular security assessments of serialization and deserialization patterns to identify weaknesses. Educate developers on serialization injection risks in AI frameworks to build security awareness. Implement defense-in-depth layers including input validation, output encoding, and runtime monitoring. Track security advisories for LangChain and related AI frameworks to stay informed about emerging vulnerabilities.
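As a starting point for the dependency-scanning habit, the sketch below simply inventories installed AI and ML framework packages so the list can be fed into whatever scanner or advisory-tracking process a team already uses; the WATCHED_PREFIXES watchlist is hypothetical.

```python
# Sketch: inventory installed AI/ML framework packages so they can be fed into
# an existing vulnerability-scanning or advisory-tracking process.
from importlib.metadata import distributions

# Hypothetical watchlist of framework name prefixes worth tracking.
WATCHED_PREFIXES = ("langchain", "langgraph", "openai", "transformers", "llama")

def ai_framework_inventory() -> dict[str, str]:
    inventory = {}
    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        if name.startswith(WATCHED_PREFIXES):
            inventory[name] = dist.version
    return inventory

if __name__ == "__main__":
    for name, ver in sorted(ai_framework_inventory().items()):
        print(f"{name}=={ver}")
```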