FKIE_CVE-2025-30202

Vulnerability from fkie_nvd - Published: 2025-04-30 01:15 - Updated: 2025-05-14 19:59
Summary
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.5.2 and prior to 0.8.5 are vulnerable to denial of service and data exposure via ZeroMQ in multi-node vLLM deployments. In a multi-node vLLM deployment, vLLM uses ZeroMQ for some inter-node communication purposes. The primary vLLM host opens an XPUB ZeroMQ socket and binds it to ALL interfaces. While the socket is always opened for a multi-node deployment, it is only used when doing tensor parallelism across multiple hosts. Any client with network access to this host can connect to this XPUB socket unless its port is blocked by a firewall. Once connected, these arbitrary clients receive the same data that is broadcast to all of the secondary vLLM hosts. This data is internal vLLM state information that is not useful to an attacker. By connecting to this socket many times and not reading the data published to them, an attacker can also cause a denial of service by slowing down or potentially blocking the publisher. This issue has been patched in version 0.8.5.
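
As a minimal illustration (not vLLM's implementation; the port, address, and payload below are hypothetical), the exposure comes down to ZeroMQ's PUB/SUB model: an XPUB socket bound to all interfaces delivers every published frame to any SUB socket that can reach the port, with no authentication step in between.

# Illustrative sketch only -- not vLLM's code. Port, address, and payload are hypothetical.
import zmq

ctx = zmq.Context()

# Primary side: an XPUB socket bound on every interface, as the advisory describes.
pub = ctx.socket(zmq.XPUB)
pub.bind("tcp://*:55555")

# Attacker side: any host that can reach the port attaches a SUB socket.
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:55555")   # an attacker would use the primary host's address
sub.setsockopt(zmq.SUBSCRIBE, b"")     # subscribe to all topics

# XPUB surfaces the new subscription as an incoming message; nothing authenticates it.
pub.recv()
pub.send(b"internal vLLM state ...")   # the broadcast reaches every connected subscriber
print(sub.recv())

The denial-of-service side of the advisory follows from the same model: subscribers that connect and never read accumulate queued messages on the publisher, which, depending on the send high-water mark and socket options such as ZMQ_XPUB_NODROP, can slow the publisher or stop it from making progress.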
Impacted products
Vendor Product Version
vllm vllm *

{
  "configurations": [
    {
      "nodes": [
        {
          "cpeMatch": [
            {
              "criteria": "cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:*",
              "matchCriteriaId": "15AC3826-5B31-40E2-9964-6F8930043285",
              "versionEndExcluding": "0.8.5",
              "versionStartIncluding": "0.5.2",
              "vulnerable": true
            }
          ],
          "negate": false,
          "operator": "OR"
        }
      ]
    }
  ],
  "cveTags": [],
  "descriptions": [
    {
      "lang": "en",
      "value": "vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.5.2 and prior to 0.8.5 are vulnerable to denial of service and data exposure via ZeroMQ on multi-node vLLM deployment. In a multi-node vLLM deployment, vLLM uses ZeroMQ for some multi-node communication purposes. The primary vLLM host opens an XPUB ZeroMQ socket and binds it to ALL interfaces. While the socket is always opened for a multi-node deployment, it is only used when doing tensor parallelism across multiple hosts. Any client with network access to this host can connect to this XPUB socket unless its port is blocked by a firewall. Once connected, these arbitrary clients will receive all of the same data broadcasted to all of the secondary vLLM hosts. This data is internal vLLM state information that is not useful to an attacker. By potentially connecting to this socket many times and not reading data published to them, an attacker can also cause a denial of service by slowing down or potentially blocking the publisher. This issue has been patched in version 0.8.5."
    },
    {
      "lang": "es",
      "value": "vLLM es un motor de inferencia y servicio de alto rendimiento y eficiente en memoria para LLM. Las versiones a partir de la 0.5.2 y anteriores a la 0.8.5 son vulnerables a denegaci\u00f3n de servicio y exposici\u00f3n de datos a trav\u00e9s de ZeroMQ en implementaciones de vLLM multinodo. En una implementaci\u00f3n de vLLM multinodo, vLLM utiliza ZeroMQ para algunos fines de comunicaci\u00f3n multinodo. El host vLLM principal abre un socket XPUB ZeroMQ y lo vincula a TODAS las interfaces. Si bien el socket siempre est\u00e1 abierto para implementaciones multinodo, solo se utiliza al realizar paralelismo tensorial en varios hosts. Cualquier cliente con acceso de red a este host puede conectarse a este socket XPUB a menos que su puerto est\u00e9 bloqueado por un firewall. Una vez conectados, estos clientes arbitrarios recibir\u00e1n los mismos datos transmitidos a todos los hosts vLLM secundarios. Estos datos son informaci\u00f3n interna del estado de vLLM que no es \u00fatil para un atacante. Al conectarse a este socket muchas veces y no leer los datos publicados, un atacante tambi\u00e9n puede causar una denegaci\u00f3n de servicio al ralentizar o incluso bloquear al publicador. Este problema se ha corregido en la versi\u00f3n 0.8.5."
    }
  ],
  "id": "CVE-2025-30202",
  "lastModified": "2025-05-14T19:59:42.390",
  "metrics": {
    "cvssMetricV31": [
      {
        "cvssData": {
          "attackComplexity": "LOW",
          "attackVector": "NETWORK",
          "availabilityImpact": "HIGH",
          "baseScore": 7.5,
          "baseSeverity": "HIGH",
          "confidentialityImpact": "NONE",
          "integrityImpact": "NONE",
          "privilegesRequired": "NONE",
          "scope": "UNCHANGED",
          "userInteraction": "NONE",
          "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
          "version": "3.1"
        },
        "exploitabilityScore": 3.9,
        "impactScore": 3.6,
        "source": "security-advisories@github.com",
        "type": "Secondary"
      },
      {
        "cvssData": {
          "attackComplexity": "LOW",
          "attackVector": "NETWORK",
          "availabilityImpact": "HIGH",
          "baseScore": 7.5,
          "baseSeverity": "HIGH",
          "confidentialityImpact": "NONE",
          "integrityImpact": "NONE",
          "privilegesRequired": "NONE",
          "scope": "UNCHANGED",
          "userInteraction": "NONE",
          "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
          "version": "3.1"
        },
        "exploitabilityScore": 3.9,
        "impactScore": 3.6,
        "source": "nvd@nist.gov",
        "type": "Primary"
      }
    ]
  },
  "published": "2025-04-30T01:15:51.800",
  "references": [
    {
      "source": "security-advisories@github.com",
      "tags": [
        "Patch"
      ],
      "url": "https://github.com/vllm-project/vllm/commit/a0304dc504c85f421d38ef47c64f83046a13641c"
    },
    {
      "source": "security-advisories@github.com",
      "tags": [
        "Issue Tracking",
        "Patch"
      ],
      "url": "https://github.com/vllm-project/vllm/pull/6183"
    },
    {
      "source": "security-advisories@github.com",
      "tags": [
        "Exploit",
        "Vendor Advisory"
      ],
      "url": "https://github.com/vllm-project/vllm/security/advisories/GHSA-9f8f-2vmf-885j"
    }
  ],
  "sourceIdentifier": "security-advisories@github.com",
  "vulnStatus": "Analyzed",
  "weaknesses": [
    {
      "description": [
        {
          "lang": "en",
          "value": "CWE-770"
        }
      ],
      "source": "security-advisories@github.com",
      "type": "Secondary"
    }
  ]
}
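
The references above include the patch commit and the GitHub advisory for the fix shipped in 0.8.5. As a generic sketch of the hardening direction (an assumption for illustration, not a restatement of the actual patch), the ZeroMQ listener can be bound to the specific internal address used for inter-node traffic rather than to all interfaces, with the port additionally firewalled off from untrusted networks.

# Hardening sketch under the assumption that restricting the bind address limits the
# exposure; the concrete change in vLLM 0.8.5 is in the patch commit referenced above.
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.XPUB)

# Bind only on the private interface used for node-to-node traffic, not on "*",
# so the socket is unreachable from arbitrary networks. Address and port are hypothetical.
pub.bind("tcp://10.0.0.1:55555")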

