PYSEC-2025-62

Vulnerability from pysec - Published: 2025-02-07 20:15 - Updated: 2025-07-01 23:22
Details

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Maliciously constructed prompts can lead to hash collisions, resulting in cache reuse, which can interfere with subsequent responses and cause unintended behavior. Prefix caching makes use of Python's built-in hash() function. As of Python 3.12, hash(None) returns a predictable constant value, which makes it more feasible for someone to attempt to exploit hash collisions. The impact of a collision is the reuse of a cache entry that was generated from different content. Given knowledge of the prompts in use and the predictable hashing behavior, an attacker could intentionally populate the cache with a prompt known to collide with another prompt in use. This issue has been addressed in version 0.7.2, and all users are advised to upgrade. There are no known workarounds for this vulnerability.
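
To make the mechanism concrete, the sketch below is illustrative only: the cache dictionary, the lookup_or_compute helper, and the key scheme are assumptions, not vLLM's actual implementation. It shows why keys derived from Python's built-in hash() are predictable on Python 3.12+ and how a collision would silently reuse an entry built from different content.

import sys

# On Python 3.12 and later, hash(None) is a fixed constant rather than being
# derived from the object's memory address, so it is identical across runs
# and across hosts.
print(sys.version_info[:2], hash(None))

# Hypothetical prefix cache keyed by built-in hash(); this is NOT vLLM's code.
# Tuples of ints are not salted by PYTHONHASHSEED, and with hash(None) now
# constant the very first block's key is fully predictable, so an attacker who
# knows a victim prompt could search offline for a colliding block.
cache: dict[int, str] = {}

def lookup_or_compute(parent_key, block_tokens: tuple) -> str:
    key = hash((parent_key, block_tokens))  # parent_key is None for the first block
    if key in cache:
        return cache[key]                   # a collision here reuses an entry that
                                            # was generated from different content
    result = f"kv-for-{block_tokens}"       # stand-in for the expensive KV computation
    cache[key] = result
    return result

# First block of two different prompts: same parent sentinel (None), different tokens.
print(lookup_or_compute(None, (101, 2023, 2003)))
print(lookup_or_compute(None, (101, 7592, 2088)))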

Impacted products
Name    purl
vllm    pkg:pypi/vllm

{
  "affected": [
    {
      "package": {
        "ecosystem": "PyPI",
        "name": "vllm",
        "purl": "pkg:pypi/vllm"
      },
      "ranges": [
        {
          "events": [
            {
              "introduced": "0"
            },
            {
              "fixed": "432117cd1f59c76d97da2eaff55a7d758301dbc7"
            }
          ],
          "repo": "https://github.com/python/cpython",
          "type": "GIT"
        },
        {
          "events": [
            {
              "introduced": "0"
            },
            {
              "fixed": "0.7.2"
            }
          ],
          "type": "ECOSYSTEM"
        }
      ],
      "versions": [
        "0.0.1",
        "0.1.0",
        "0.1.1",
        "0.1.2",
        "0.1.3",
        "0.1.4",
        "0.1.5",
        "0.1.6",
        "0.1.7",
        "0.2.0",
        "0.2.1",
        "0.2.1.post1",
        "0.2.2",
        "0.2.3",
        "0.2.4",
        "0.2.5",
        "0.2.6",
        "0.2.7",
        "0.3.0",
        "0.3.1",
        "0.3.2",
        "0.3.3",
        "0.4.0",
        "0.4.0.post1",
        "0.4.1",
        "0.4.2",
        "0.4.3",
        "0.5.0",
        "0.5.0.post1",
        "0.5.1",
        "0.5.2",
        "0.5.3",
        "0.5.3.post1",
        "0.5.4",
        "0.5.5",
        "0.6.0",
        "0.6.1",
        "0.6.1.post1",
        "0.6.1.post2",
        "0.6.2",
        "0.6.3",
        "0.6.3.post1",
        "0.6.4",
        "0.6.4.post1",
        "0.6.5",
        "0.6.6",
        "0.6.6.post1",
        "0.7.0",
        "0.7.1"
      ]
    }
  ],
  "aliases": [
    "CVE-2025-25183",
    "GHSA-rm76-4mrf-v9r8"
  ],
  "details": "vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Maliciously constructed statements can lead to hash collisions, resulting in cache reuse, which can interfere with subsequent responses and cause unintended behavior. Prefix caching makes use of Python\u0027s built-in hash() function. As of Python 3.12, the behavior of hash(None) has changed to be a predictable constant value. This makes it more feasible that someone could try exploit hash collisions. The impact of a collision would be using cache that was generated using different content. Given knowledge of prompts in use and predictable hashing behavior, someone could intentionally populate the cache using a prompt known to collide with another prompt in use. This issue has been addressed in version 0.7.2 and all users are advised to upgrade. There are no known workarounds for this vulnerability.",
  "id": "PYSEC-2025-62",
  "modified": "2025-07-01T23:22:49.083695+00:00",
  "published": "2025-02-07T20:15:34+00:00",
  "references": [
    {
      "type": "ADVISORY",
      "url": "https://github.com/vllm-project/vllm/security/advisories/GHSA-rm76-4mrf-v9r8"
    },
    {
      "type": "FIX",
      "url": "https://github.com/python/cpython/commit/432117cd1f59c76d97da2eaff55a7d758301dbc7"
    },
    {
      "type": "REPORT",
      "url": "https://github.com/vllm-project/vllm/pull/12621"
    }
  ]
}
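
The ECOSYSTEM range in the record above marks every release before 0.7.2 as affected. The snippet below is an illustrative way to check a local install against that range; the package name and the 0.7.2 bound come from this record, while the code itself is not part of the advisory and assumes the third-party packaging library is available.

# Illustrative check: is the locally installed vllm inside the affected range
# ("introduced": "0", "fixed": "0.7.2") from the OSV record above?
from importlib.metadata import version, PackageNotFoundError
from packaging.version import Version  # third-party 'packaging' library

FIXED = Version("0.7.2")

try:
    installed = Version(version("vllm"))
except PackageNotFoundError:
    print("vllm is not installed in this environment")
else:
    if installed < FIXED:
        print(f"vllm {installed} is affected by PYSEC-2025-62; upgrade to {FIXED} or later")
    else:
        print(f"vllm {installed} includes the fix")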


