GHSA-MPHV-75CG-56WG
Vulnerability from github – Published: 2026-02-25 22:59 – Updated: 2026-02-25 22:59

Summary
A redirect-based Server-Side Request Forgery (SSRF) bypass exists in RecursiveUrlLoader in @langchain/community. The loader validates the initial URL but allows the underlying fetch to follow redirects automatically, which permits a transition from a safe public URL to an internal or metadata endpoint without revalidation. This is a bypass of the SSRF protections introduced in 1.1.14 (CVE-2026-26019).
Affected Component
- Package: @langchain/community
- Component: RecursiveUrlLoader
- Configuration: preventOutside (default: true) is insufficient to prevent this bypass when redirects are followed automatically.
Description
RecursiveUrlLoader is a web crawler that recursively follows links from a starting URL. The existing SSRF mitigation validates the initial URL before fetching, but it does not re-validate when the request follows redirects. Because fetch follows redirects by default, an attacker can supply a public URL that passes validation and then redirects to a private network address, localhost, or cloud metadata endpoint.
This constitutes a “check‑then‑act” gap in the request lifecycle: the safety check occurs before the redirect chain is resolved, and the final destination is never validated.
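The advisory names `validateSafeUrl` as the check that runs before fetching. As an illustration (not the library's actual implementation), a minimal version of that kind of check rejects loopback, RFC 1918, and link-local hosts by hostname pattern; note that pattern matching alone is incomplete, since it does not resolve DNS and so cannot catch public hostnames that resolve to private addresses:

```typescript
// Hypothetical sketch of a validateSafeUrl-style check. The real
// @langchain/community implementation may differ; names are illustrative.
const BLOCKED_HOST_PATTERNS: RegExp[] = [
  /^localhost$/i,
  /^127\./,                     // loopback
  /^10\./,                      // RFC 1918: 10.0.0.0/8
  /^172\.(1[6-9]|2\d|3[01])\./, // RFC 1918: 172.16.0.0/12
  /^192\.168\./,                // RFC 1918: 192.168.0.0/16
  /^169\.254\./,                // link-local, incl. metadata 169.254.169.254
];

export function isSafeUrl(raw: string): boolean {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return false; // unparseable URLs are rejected outright
  }
  if (url.protocol !== "http:" && url.protocol !== "https:") return false;
  return !BLOCKED_HOST_PATTERNS.some((re) => re.test(url.hostname));
}
```

A check of this shape is exactly what the vulnerable flow runs once on the initial URL and never again, which is what the redirect bypass exploits.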
Impact
If an attacker can influence content on a page being crawled (e.g., user‑generated content, untrusted external pages), they can cause the crawler to:
- Fetch cloud instance metadata (AWS, GCP, Azure), potentially exposing credentials or tokens
- Access internal services on private networks (10.x, 172.16.x, 192.168.x)
- Connect to localhost services
- Exfiltrate response data through attacker-controlled redirect chains
This is exploitable in any environment where RecursiveUrlLoader runs with access to internal networks or metadata services, which includes most cloud-hosted deployments.
Attack Scenario
1. The crawler is pointed at a public URL that passes initial SSRF validation.
2. That URL responds with a 3xx redirect to an internal target.
3. The fetch follows the redirect automatically without revalidation.
4. The crawler accesses the internal or metadata endpoint.
Example redirector:
https://302.r3dir.me/--to/?url=http://169.254.169.254/latest/meta-data/
Root Cause
- SSRF validation (validateSafeUrl) is only performed on the initial URL.
- Redirects are followed automatically by fetch (redirect: "follow" default), so the request can change destinations without additional validation.
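The gap can be modeled in a few lines: the safety check runs once on the initial URL, while the redirect chain is resolved internally by fetch, so the final destination is never inspected. This is a simplified, self-contained model of the vulnerable pattern, not the library's actual code:

```typescript
// Simplified model of the check-then-act gap. `followRedirects` stands in
// for fetch's internal redirect handling (the redirect: "follow" default);
// all names here are illustrative.
type RedirectMap = Record<string, string>; // url -> Location target

function followRedirects(url: string, redirects: RedirectMap): string {
  // fetch resolves the entire chain internally; the caller sees only the end.
  let current = url;
  while (redirects[current] !== undefined) current = redirects[current];
  return current;
}

export function vulnerableCrawl(url: string, redirects: RedirectMap): string {
  // Validation runs ONCE, on the initial URL only...
  if (url.startsWith("http://169.254.")) throw new Error("Blocked");
  // ...but the destination reached after redirects is never re-checked.
  return followRedirects(url, redirects);
}
```

A public URL that passes the initial check can therefore deliver the request to a metadata endpoint that the check would have rejected if it had seen it.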
Resolution
Upgrade to @langchain/community >= 1.1.18, which validates every redirect hop by disabling automatic redirects and re-validating Location targets before following them.
- Automatic redirects are disabled (redirect: "manual").
- Each 3xx Location is resolved and validated with validateSafeUrl() before the next request.
- A maximum redirect limit prevents infinite loops.
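The fixed behaviour described above can be sketched as a manual redirect loop. This is an illustrative reconstruction under the advisory's description, not the actual patch: `isSafeUrl` stands in for the library's `validateSafeUrl()`, and the fetcher is injectable so the loop can be exercised without a network. Note that exposing 3xx headers under `redirect: "manual"` is server-runtime behaviour (Node 18+); browsers return an opaque-redirect response instead.

```typescript
// Sketch of redirect-hop validation: disable automatic redirects, re-validate
// each Location target, and cap the chain length. Names are illustrative.
const MAX_REDIRECTS = 5;
const REDIRECT_STATUSES = new Set([301, 302, 303, 307, 308]);

export async function fetchWithSafeRedirects(
  startUrl: string,
  isSafeUrl: (u: string) => boolean,
  doFetch: (u: string) => Promise<Response> = (u) =>
    fetch(u, { redirect: "manual" }) // never follow redirects automatically
): Promise<Response> {
  let current = startUrl;
  for (let hop = 0; hop <= MAX_REDIRECTS; hop++) {
    // Validate EVERY hop, not just the initial URL.
    if (!isSafeUrl(current)) throw new Error(`Blocked unsafe URL: ${current}`);
    const res = await doFetch(current);
    if (!REDIRECT_STATUSES.has(res.status)) return res;
    const location = res.headers.get("location");
    if (!location) return res;
    // Resolve relative Location values against the current URL before checking.
    current = new URL(location, current).toString();
  }
  throw new Error(`Too many redirects (> ${MAX_REDIRECTS})`);
}
```

With this structure, a redirect from a safe public URL to 169.254.169.254 is rejected at the hop where it appears, closing the check-then-act gap.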
Resources
- Original SSRF fix (CVE-2026-26019): enforced origin comparison and added initial URL validation
- https://github.com/langchain-ai/langchainjs/security/advisories/GHSA-gf3v-fwqg-4vh7
{
"affected": [
{
"database_specific": {
"last_known_affected_version_range": "\u003c= 1.1.17"
},
"package": {
"ecosystem": "npm",
"name": "@langchain/community"
},
"ranges": [
{
"events": [
{
"introduced": "0"
},
{
"fixed": "1.1.18"
}
],
"type": "ECOSYSTEM"
}
]
}
],
"aliases": [
"CVE-2026-27795"
],
"database_specific": {
"cwe_ids": [
"CWE-918"
],
"github_reviewed": true,
"github_reviewed_at": "2026-02-25T22:59:48Z",
"nvd_published_at": "2026-02-25T18:23:41Z",
"severity": "MODERATE"
},
"details": "## Summary\nA redirect-based Server-Side Request Forgery (SSRF) bypass exists in `RecursiveUrlLoader` in `@langchain/community`. The loader validates the initial URL but allows the underlying fetch to follow redirects automatically, which permits a transition from a safe public URL to an internal or metadata endpoint without revalidation. This is a bypass of the SSRF protections introduced in 1.1.14 (CVE-2026-26019).\n\n## Affected Component\n- Package: `@langchain/community`\n- Component: `RecursiveUrlLoader`\n- Configuration: `preventOutside` (default: `true`) is insufficient to prevent this bypass when redirects are followed automatically.\n\n## Description\n`RecursiveUrlLoader` is a web crawler that recursively follows links from a starting URL. The existing SSRF mitigation validates the initial URL before fetching, but it does not re-validate when the request follows redirects. Because fetch follows redirects by default, an attacker can supply a public URL that passes validation and then redirects to a private network address, localhost, or cloud metadata endpoint.\n\nThis constitutes a \u201ccheck\u2011then\u2011act\u201d gap in the request lifecycle: the safety check occurs before the redirect chain is resolved, and the final destination is never validated.\n\n## Impact\nIf an attacker can influence content on a page being crawled (e.g., user\u2011generated content, untrusted external pages), they can cause the crawler to:\n- Fetch cloud instance metadata (AWS, GCP, Azure), potentially exposing credentials or tokens\n- Access internal services on private networks (`10.x`, `172.16.x`, `192.168.x`)\n- Connect to localhost services\n- Exfiltrate response data through attacker-controlled redirect chains\n\nThis is exploitable in any environment where `RecursiveUrlLoader` runs with access to internal networks or metadata services, which includes most cloud-hosted deployments.\n\n## Attack Scenario\n1. 
The crawler is pointed at a public URL that passes initial SSRF validation.\n2. That URL responds with a 3xx redirect to an internal target.\n3. The fetch follows the redirect automatically without revalidation.\n4. The crawler accesses the internal or metadata endpoint.\n\nExample redirector:\n```\nhttps://302.r3dir.me/--to/?url=http://169.254.169.254/latest/meta-data/\n```\n\n## Root Cause\n- SSRF validation (`validateSafeUrl`) is only performed on the initial URL.\n- Redirects are followed automatically by fetch (`redirect: \"follow\"` default), so the request can change destinations without additional validation.\n\n## Resolution\nUpgrade to `@langchain/community` **\u003e= 1.1.18**, which validates every redirect hop by disabling automatic redirects and re-validating `Location` targets before following them.\n- Automatic redirects are disabled (`redirect: \"manual\"`).\n- Each 3xx `Location` is resolved and validated with `validateSafeUrl()` before the next request.\n- A maximum redirect limit prevents infinite loops.\n\n## Reources\n- Original SSRF fix (CVE-2026-26019): enforced origin comparison and added initial URL validation\n- https://github.com/langchain-ai/langchainjs/security/advisories/GHSA-gf3v-fwqg-4vh7",
"id": "GHSA-mphv-75cg-56wg",
"modified": "2026-02-25T22:59:48Z",
"published": "2026-02-25T22:59:48Z",
"references": [
{
"type": "WEB",
"url": "https://github.com/langchain-ai/langchainjs/security/advisories/GHSA-gf3v-fwqg-4vh7"
},
{
"type": "WEB",
"url": "https://github.com/langchain-ai/langchainjs/security/advisories/GHSA-mphv-75cg-56wg"
},
{
"type": "ADVISORY",
"url": "https://nvd.nist.gov/vuln/detail/CVE-2026-27795"
},
{
"type": "WEB",
"url": "https://github.com/langchain-ai/langchainjs/pull/9990"
},
{
"type": "WEB",
"url": "https://github.com/langchain-ai/langchainjs/commit/2812d2b2b9fd9343c4850e2ab906b8cf440975ee"
},
{
"type": "WEB",
"url": "https://github.com/langchain-ai/langchainjs/commit/d5e3db0d01ab321ec70a875805b2f74aefdadf9d"
},
{
"type": "PACKAGE",
"url": "https://github.com/langchain-ai/langchainjs"
},
{
"type": "WEB",
"url": "https://github.com/langchain-ai/langchainjs/releases/tag/%40langchain%2Fcommunity%401.1.14"
},
{
"type": "WEB",
"url": "https://github.com/langchain-ai/langchainjs/releases/tag/%40langchain%2Fcommunity%401.1.18"
}
],
"schema_version": "1.4.0",
"severity": [
{
"score": "CVSS:3.1/AV:N/AC:L/PR:L/UI:R/S:C/C:L/I:N/A:N",
"type": "CVSS_V3"
}
],
"summary": "LangChain Community: redirect chaining can lead to SSRF bypass via RecursiveUrlLoader"
}