GHSA-MRW7-HF4F-83PF

Vulnerability from GitHub – Published: 2025-11-20 20:59 – Updated: 2025-11-21 15:31
Summary
vLLM deserialization vulnerability leading to DoS and potential RCE
Details

Summary

A memory corruption vulnerability that can lead to a crash (denial-of-service) and potentially remote code execution (RCE) exists in vLLM versions 0.10.2 and later, in the Completions API endpoint. When processing user-supplied prompt embeddings, the endpoint loads serialized tensors using torch.load() without sufficient validation.
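
For context, a prompt embedding reaches this code path as a base64-encoded, torch.save-serialized tensor in the request body. The following client-side sketch illustrates that delivery path; the endpoint path and the "prompt_embeds" field name are assumptions about the API schema, not taken from this advisory:

    import base64
    import io

    import requests
    import torch

    # Serialize an embedding the way the endpoint expects: torch.save, then
    # base64-encode. A malicious client could substitute a crafted sparse
    # tensor at this step.
    emb = torch.randn(8, 4096, dtype=torch.float16)
    buf = io.BytesIO()
    torch.save(emb, buf)
    payload = base64.b64encode(buf.getvalue()).decode()

    # The endpoint path and the "prompt_embeds" field name are assumptions
    # about the API schema, not confirmed by this advisory.
    requests.post(
        "http://localhost:8000/v1/completions",
        json={"model": "my-model", "prompt_embeds": payload},
    )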

Due to a change introduced in PyTorch 2.8.0, sparse tensor integrity checks are disabled by default. As a result, maliciously crafted tensors can bypass internal bounds checks and trigger an out-of-bounds memory write during the call to to_dense(). This memory corruption can crash vLLM and potentially lead to code execution on the server hosting vLLM.

Details

A vulnerability that can lead to RCE exists in vLLM's Completions API endpoint: missing checks when loading user-provided tensors allow an out-of-bounds write to be triggered. Since PyTorch 2.8.0, the default behavior of torch.load(tensor, weights_only=True) is to not perform validity checks on sparse tensors; the checks must be enabled explicitly using the torch.sparse.check_sparse_tensor_invariants context manager.

The vulnerability is in the following code in vllm/entrypoints/renderer.py:148:

    def _load_and_validate_embed(embed: bytes) -> EngineEmbedsPrompt:
        # On PyTorch >= 2.8.0, torch.load(..., weights_only=True) does not
        # validate sparse tensor invariants by default.
        tensor = torch.load(
            io.BytesIO(pybase64.b64decode(embed, validate=True)),
            weights_only=True,
            map_location=torch.device("cpu"),
        )
        # The assert checks only the type and dtype; a sparse tensor with
        # out-of-range indices passes.
        assert isinstance(tensor, torch.Tensor) and tensor.dtype in (
            torch.float32,
            torch.bfloat16,
            torch.float16,
        )
        # For a crafted sparse tensor, this densification performs the
        # out-of-bounds write.
        tensor = tensor.to_dense()

Because of the missing checks, loading an invalid, user-provided prompt embedding tensor can cause an out-of-bounds write in the call to to_dense().
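
The root cause can be reproduced in isolation. The sketch below is standalone illustration, not vLLM code: a sparse COO tensor whose indices point outside its declared shape is still a plain torch.Tensor with a permitted dtype, so it passes the assert above, and densifying it is what writes out of bounds.

    import torch

    # Crafted sparse COO tensor: index 9999 is far outside the declared size
    # of 4, and nothing validates this at construction time by default.
    indices = torch.tensor([[9999]])
    values = torch.tensor([1.0])
    bad = torch.sparse_coo_tensor(indices, values, size=(4,))

    # bad is a torch.Tensor with dtype float32, so it passes the assert in
    # the vulnerable code; calling bad.to_dense() is what can write out of
    # bounds, so it is deliberately not called here.

    # With invariant checking enabled, the same construction is rejected.
    with torch.sparse.check_sparse_tensor_invariants():
        try:
            torch.sparse_coo_tensor(indices, values, size=(4,))
        except RuntimeError as exc:
            print(f"rejected: {exc}")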

Impact

Any user with access to this API can exploit the vulnerability. Unsafe deserialization of untrusted input can be abused to achieve denial of service and potentially remote code execution (RCE) in the vLLM server process. This affects deployments running vLLM as a server, as well as any instance that deserializes untrusted or model-provided payloads.

Fix

Fixed in vLLM 0.11.1 via https://github.com/vllm-project/vllm/pull/27204.
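
A minimal sketch of the mitigation described in the Details section, assuming the payload is loaded on CPU as in the vulnerable code; safe_load_embed is an illustrative name, not vLLM's API, and the pull request above is the authoritative fix:

    import io

    import torch

    def safe_load_embed(buf: bytes) -> torch.Tensor:
        # Enabling sparse tensor invariant checks makes torch.load reject
        # serialized sparse tensors whose indices fall outside the declared
        # shape, instead of deferring the problem to a later to_dense() call.
        with torch.sparse.check_sparse_tensor_invariants():
            return torch.load(
                io.BytesIO(buf),
                weights_only=True,
                map_location=torch.device("cpu"),
            )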

Acknowledgements

Finder: AXION Security Research Team (Omri Fainaro, Bary Levy), for discovery and coordinated disclosure.

Full advisory record (OSV JSON):

{
  "affected": [
    {
      "package": {
        "ecosystem": "PyPI",
        "name": "vllm"
      },
      "ranges": [
        {
          "events": [
            {
              "introduced": "0.10.2"
            },
            {
              "fixed": "0.11.1"
            }
          ],
          "type": "ECOSYSTEM"
        }
      ]
    }
  ],
  "aliases": [
    "CVE-2025-62164"
  ],
  "database_specific": {
    "cwe_ids": [
      "CWE-123",
      "CWE-20",
      "CWE-502",
      "CWE-787"
    ],
    "github_reviewed": true,
    "github_reviewed_at": "2025-11-20T20:59:34Z",
    "nvd_published_at": "2025-11-21T02:15:43Z",
    "severity": "HIGH"
  },
  "details": "### Summary\nA memory corruption vulnerability that leading to a crash (denial-of-service) and potentially remote code execution (RCE) exists in vLLM versions 0.10.2 and later, in the Completions API endpoint. When processing user-supplied prompt embeddings, the endpoint loads serialized tensors using torch.load() without sufficient validation.\n\nDue to a change introduced in PyTorch 2.8.0, sparse tensor integrity checks are disabled by default. As a result, maliciously crafted tensors can bypass internal bounds checks and trigger an out-of-bounds memory write during the call to to_dense(). This memory corruption can crash vLLM and potentially lead to code execution on the server hosting vLLM.\n\n### Details\nA vulnerability that can lead to RCE from the completions API endpoint exists in vllm, where due to missing checks when loading user-provided tensors, an out-of-bounds write can be triggered. This happens because the default behavior of `torch.load(tensor, weights_only=True)`  since pytorch 2.8.0 is to not perform validity checks for sparse tensors, and this needs to be enabled explicitly using the [torch.sparse.check_sparse_tensor_invariants](https://docs.pytorch.org/docs/stable/generated/torch.sparse.check_sparse_tensor_invariants.html) context manager.\n\nThe vulnerability is in the following code in [vllm/entrypoints/renderer.py:148](https://github.com/vllm-project/vllm/blob/a332b84578cdc0706e040f6a765954c8a289904f/vllm/entrypoints/renderer.py#L148)\n\n```python\n    def _load_and_validate_embed(embed: bytes) -\u003e EngineEmbedsPrompt:\n        tensor = torch.load(\n            io.BytesIO(pybase64.b64decode(embed, validate=True)),\n            weights_only=True,\n            map_location=torch.device(\"cpu\"),\n        )\n        assert isinstance(tensor, torch.Tensor) and tensor.dtype in (\n            torch.float32,\n            torch.bfloat16,\n            torch.float16,\n        )\n        tensor = tensor.to_dense()\n```\n\nBecause of the missing checks, loading invalid prompt embedding tensors provided by the user can cause an out-of-bounds write in the call to `to_dense` .\n\n### Impact\nAll users with access to this API are able to exploit this vulnerability. Unsafe deserialization of untrusted input can be abused to achieve DoS and potentially remote code execution (RCE) in the vLLM server process. This impacts deployments running vLLM as a server or any instance that deserializes untrusted/model-provided payloads.\n\n## Fix\n\nhttps://github.com/vllm-project/vllm/pull/27204\n\n## Acknowledgements\n\nFinder: AXION Security Research Team (Omri Fainaro, Bary Levy): discovery and coordinated disclosure.",
  "id": "GHSA-mrw7-hf4f-83pf",
  "modified": "2025-11-21T15:31:32Z",
  "published": "2025-11-20T20:59:34Z",
  "references": [
    {
      "type": "WEB",
      "url": "https://github.com/vllm-project/vllm/security/advisories/GHSA-mrw7-hf4f-83pf"
    },
    {
      "type": "ADVISORY",
      "url": "https://nvd.nist.gov/vuln/detail/CVE-2025-62164"
    },
    {
      "type": "WEB",
      "url": "https://github.com/vllm-project/vllm/pull/27204"
    },
    {
      "type": "WEB",
      "url": "https://github.com/vllm-project/vllm/commit/58fab50d82838d5014f4a14d991fdb9352c9c84b"
    },
    {
      "type": "PACKAGE",
      "url": "https://github.com/vllm-project/vllm"
    }
  ],
  "schema_version": "1.4.0",
  "severity": [
    {
      "score": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
      "type": "CVSS_V3"
    }
  ],
  "summary": "vLLM deserialization vulnerability leading to DoS and potential RCE"
}

