GHSA-69J4-GRXJ-J64P
Vulnerability from GitHub – Published: 2025-11-20 21:26 – Updated: 2025-11-21 15:32
Summary
The /v1/chat/completions and /tokenize endpoints accept a chat_template_kwargs request parameter that is used in the code before it is properly validated against the chat template. With the right chat_template_kwargs parameters, it is possible to block processing of the API server for long periods of time, delaying all other requests.
Details
In serving_engine.py, chat_template_kwargs is unpacked into the kwargs passed to chat_utils.py apply_hf_chat_template with no validation of the keys or values in that dict. Its entries can therefore override optional parameters of the apply_hf_chat_template method, such as tokenize, changing its default from False to True. A minimal sketch of this pattern follows the links below.
https://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/openai/serving_engine.py#L809-L814
https://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/chat_utils.py#L1602-L1610
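For illustration, here is a minimal, self-contained sketch of the unsafe pattern. This is not the actual vLLM code; the function body is a stand-in that only shows how an unpacked request dict silently overrides the helper's own keyword default.

```python
def apply_hf_chat_template(conversation, *, tokenize=False, **kwargs):
    """Stand-in for chat_utils.apply_hf_chat_template (illustrative only)."""
    if tokenize:
        # In vLLM this is where blocking tokenization of the rendered prompt
        # would run, stalling the event loop for sufficiently large inputs.
        return f"<tokenized {len(conversation)} messages>"
    return f"<rendered chat template for {len(conversation)} messages>"


# What _preprocess_chat effectively does with the request-supplied field:
chat_template_kwargs = {"tokenize": True}          # attacker-controlled value
messages = [{"role": "user", "content": "hi"}]
print(apply_hf_chat_template(messages, **(chat_template_kwargs or {})))
# -> tokenize is flipped from its default of False to True
```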
Both serving_chat.py and serving_tokenization.py call this _preprocess_chat method of serving_engine.py, and both pass in chat_template_kwargs.
So a chat_template_kwargs of {"tokenize": True} makes tokenization happen as part of applying the chat template, even though that is not expected. Tokenization is a blocking operation, and with a sufficiently large input it can tie up the API server's event loop, stalling all other requests until the tokenization completes.
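As an illustration only, a request of the following shape would exercise this behavior on an unpatched server. The URL, API key, model name, and payload size below are placeholders, not values from the advisory.

```python
import requests

# Placeholder server URL, API key, and model name; payload size chosen arbitrarily.
resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    headers={"Authorization": "Bearer <api-key>"},
    json={
        "model": "<served-model-name>",
        "messages": [{"role": "user", "content": "A " * 500_000}],
        "chat_template_kwargs": {"tokenize": True},  # overrides the helper's default
    },
    timeout=600,
)
print(resp.status_code)
```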
This optional tokenize parameter to apply_hf_chat_template does not appear to be used, so one option would be to hard-code it to always be False instead of allowing callers to override it. A better option may be to pass chat_template_kwargs not as unpacked kwargs but as a dict, and only unpack it after the logic in apply_hf_chat_template that resolves the kwargs against the chat template.
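A rough sketch of that second option follows. It assumes the allowed keys can be resolved from the chat template's own variables; the names below are hypothetical, and this is not necessarily what the merged fix does.

```python
def apply_hf_chat_template(conversation, chat_template_kwargs=None):
    """Illustrative only: keep the request dict intact and filter it first."""
    tokenize = False  # hard-coded; callers can no longer override it

    # Hypothetical allow-list of variables the chat template actually declares.
    allowed_template_vars = {"enable_thinking", "custom_system_suffix"}
    resolved = {
        key: value
        for key, value in (chat_template_kwargs or {}).items()
        if key in allowed_template_vars
    }

    # ...render the template with **resolved; tokenization stays off because
    # `tokenize` is fixed above rather than taken from the request.
    return resolved, tokenize
```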
Impact
Any authenticated user can cause a denial of service to a vLLM server with Chat Completion or Tokenize requests.
Fix
https://github.com/vllm-project/vllm/pull/27205
{
"affected": [
{
"package": {
"ecosystem": "PyPI",
"name": "vllm"
},
"ranges": [
{
"events": [
{
"introduced": "0.5.5"
},
{
"fixed": "0.11.1"
}
],
"type": "ECOSYSTEM"
}
]
}
],
"aliases": [
"CVE-2025-62426"
],
"database_specific": {
"cwe_ids": [
"CWE-770"
],
"github_reviewed": true,
"github_reviewed_at": "2025-11-20T21:26:24Z",
"nvd_published_at": "2025-11-21T02:15:43Z",
"severity": "MODERATE"
},
"details": "### Summary\nThe /v1/chat/completions and /tokenize endpoints allow a `chat_template_kwargs` request parameter that is used in the code before it is properly validated against the chat template. With the right `chat_template_kwargs` parameters, it is possible to block processing of the API server for long periods of time, delaying all other requests \n\n### Details\nIn serving_engine.py, the chat_template_kwargs are unpacked into kwargs passed to chat_utils.py `apply_hf_chat_template` with no validation on the keys or values in that chat_template_kwargs dict. This means they can be used to override optional parameters in the `apply_hf_chat_template` method, such as `tokenize`, changing its default from False to True.\n\nhttps://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/openai/serving_engine.py#L809-L814\n\nhttps://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/chat_utils.py#L1602-L1610\n\nBoth serving_chat.py and serving_tokenization.py call into this `_preprocess_chat` method of `serving_engine.py` and they both pass in `chat_template_kwargs`.\n\nSo, a `chat_template_kwargs` like `{\"tokenize\": True}` makes tokenization happen as part of applying the chat template, even though that is not expected. Tokenization is a blocking operation, and with sufficiently large input can block the API server\u0027s event loop, which blocks handling of all other requests until this tokenization is complete.\n\nThis optional `tokenize` parameter to `apply_hf_chat_template` does not appear to be used, so one option would be to just hard-code that to always be False instead of allowing it to be optionally overridden by callers. A better option may be to not pass `chat_template_kwargs` as unpacked kwargs but instead as a dict, and only unpack them after the logic in `apply_hf_chat_template` that resolves the kwargs against the chat template.\n\n### Impact\n\nAny authenticated user can cause a denial of service to a vLLM server with Chat Completion or Tokenize requests.\n\n### Fix\n\nhttps://github.com/vllm-project/vllm/pull/27205",
"id": "GHSA-69j4-grxj-j64p",
"modified": "2025-11-21T15:32:03Z",
"published": "2025-11-20T21:26:24Z",
"references": [
{
"type": "WEB",
"url": "https://github.com/vllm-project/vllm/security/advisories/GHSA-69j4-grxj-j64p"
},
{
"type": "ADVISORY",
"url": "https://nvd.nist.gov/vuln/detail/CVE-2025-62426"
},
{
"type": "WEB",
"url": "https://github.com/vllm-project/vllm/pull/27205"
},
{
"type": "WEB",
"url": "https://github.com/vllm-project/vllm/commit/3ada34f9cb4d1af763fdfa3b481862a93eb6bd2b"
},
{
"type": "PACKAGE",
"url": "https://github.com/vllm-project/vllm"
},
{
"type": "WEB",
"url": "https://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/chat_utils.py#L1602-L1610"
},
{
"type": "WEB",
"url": "https://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/openai/serving_engine.py#L809-L814"
}
],
"schema_version": "1.4.0",
"severity": [
{
"score": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H",
"type": "CVSS_V3"
}
],
"summary": "vLLM vulnerable to DoS via large Chat Completion or Tokenization requests with specially crafted `chat_template_kwargs`"
}
Sightings
| Author | Source | Type | Date |
|---|---|---|---|
Nomenclature
- Seen: The vulnerability was mentioned, discussed, or observed by the user.
- Confirmed: The vulnerability has been validated from an analyst's perspective.
- Published Proof of Concept: A public proof of concept is available for this vulnerability.
- Exploited: The vulnerability was observed as exploited by the user who reported the sighting.
- Patched: The vulnerability was observed as successfully patched by the user who reported the sighting.
- Not exploited: The vulnerability was not observed as exploited by the user who reported the sighting.
- Not confirmed: The user expressed doubt about the validity of the vulnerability.
- Not patched: The vulnerability was not observed as successfully patched by the user who reported the sighting.