GHSA-45GQ-HR3J-JMRQ
Vulnerability from github – Published: 2026-02-04 18:30 – Updated: 2026-02-06 18:30
In the Linux kernel, the following vulnerability has been resolved:
vsock/virtio: cap TX credit to local buffer size
The virtio transport derives its TX credit directly from peer_buf_alloc, which is set from the remote endpoint's SO_VM_SOCKETS_BUFFER_SIZE value.
On the host side this means that the amount of data we are willing to queue for a connection is scaled by a guest-chosen buffer size, rather than the host's own vsock configuration. A malicious guest can advertise a large buffer and read slowly, causing the host to allocate a correspondingly large amount of sk_buff memory. The same thing would happen in the guest with a malicious host, since virtio transports share the same code base.
Introduce a small helper, virtio_transport_tx_buf_size(), that returns min(peer_buf_alloc, buf_alloc), and use it wherever we consume peer_buf_alloc.
This ensures the effective TX window is bounded by both the peer's advertised buffer and our own buf_alloc (already clamped to buffer_max_size via SO_VM_SOCKETS_BUFFER_MAX_SIZE), so a remote peer cannot force the local endpoint to queue more data than its own vsock settings allow.
On an unpatched Ubuntu 22.04 host (~64 GiB RAM), running a PoC with 32 guest vsock connections advertising 2 GiB each and reading slowly drove Slab/SUnreclaim from ~0.5 GiB to ~57 GiB; the system only recovered after killing the QEMU process. That said, if the QEMU process's memory is capped with cgroups, the damage is bounded by that cap.
With this patch applied:
Before:
  MemFree: ~61.6 GiB
  Slab: ~142 MiB
  SUnreclaim: ~117 MiB

After 32 high-credit connections:
  MemFree: ~61.5 GiB
  Slab: ~178 MiB
  SUnreclaim: ~152 MiB
Only ~35 MiB increase in Slab/SUnreclaim, no host OOM, and the guest remains responsive.
Compatibility with non-virtio transports:

- VMCI uses the AF_VSOCK buffer knobs to size its queue pairs per socket based on the local vsk->buffer_* values; the remote side cannot enlarge those queues beyond what the local endpoint configured.
- Hyper-V's vsock transport uses fixed-size VMBus ring buffers and an MTU bound; there is no peer-controlled credit field comparable to peer_buf_alloc, and the remote endpoint cannot drive in-flight kernel memory above those ring sizes.
- The loopback path reuses virtio_transport_common.c, so it naturally follows the same semantics as the virtio transport.
This change is limited to virtio_transport_common.c and thus affects virtio-vsock, vhost-vsock, and loopback, bringing them in line with the "remote window intersected with local policy" behaviour that VMCI and Hyper-V already effectively have.
[Stefano: small adjustments after changing the previous patch] [Stefano: tweak the commit message]
{
"affected": [],
"aliases": [
"CVE-2026-23086"
],
"database_specific": {
"cwe_ids": [],
"github_reviewed": false,
"github_reviewed_at": null,
"nvd_published_at": "2026-02-04T17:16:19Z",
"severity": null
},
"details": "In the Linux kernel, the following vulnerability has been resolved:\n\nvsock/virtio: cap TX credit to local buffer size\n\nThe virtio transports derives its TX credit directly from peer_buf_alloc,\nwhich is set from the remote endpoint\u0027s SO_VM_SOCKETS_BUFFER_SIZE value.\n\nOn the host side this means that the amount of data we are willing to\nqueue for a connection is scaled by a guest-chosen buffer size, rather\nthan the host\u0027s own vsock configuration. A malicious guest can advertise\na large buffer and read slowly, causing the host to allocate a\ncorrespondingly large amount of sk_buff memory.\nThe same thing would happen in the guest with a malicious host, since\nvirtio transports share the same code base.\n\nIntroduce a small helper, virtio_transport_tx_buf_size(), that\nreturns min(peer_buf_alloc, buf_alloc), and use it wherever we consume\npeer_buf_alloc.\n\nThis ensures the effective TX window is bounded by both the peer\u0027s\nadvertised buffer and our own buf_alloc (already clamped to\nbuffer_max_size via SO_VM_SOCKETS_BUFFER_MAX_SIZE), so a remote peer\ncannot force the other to queue more data than allowed by its own\nvsock settings.\n\nOn an unpatched Ubuntu 22.04 host (~64 GiB RAM), running a PoC with\n32 guest vsock connections advertising 2 GiB each and reading slowly\ndrove Slab/SUnreclaim from ~0.5 GiB to ~57 GiB; the system only\nrecovered after killing the QEMU process. 
That said, if QEMU memory is\nlimited with cgroups, the maximum memory used will be limited.\n\nWith this patch applied:\n\n Before:\n MemFree: ~61.6 GiB\n Slab: ~142 MiB\n SUnreclaim: ~117 MiB\n\n After 32 high-credit connections:\n MemFree: ~61.5 GiB\n Slab: ~178 MiB\n SUnreclaim: ~152 MiB\n\nOnly ~35 MiB increase in Slab/SUnreclaim, no host OOM, and the guest\nremains responsive.\n\nCompatibility with non-virtio transports:\n\n - VMCI uses the AF_VSOCK buffer knobs to size its queue pairs per\n socket based on the local vsk-\u003ebuffer_* values; the remote side\n cannot enlarge those queues beyond what the local endpoint\n configured.\n\n - Hyper-V\u0027s vsock transport uses fixed-size VMBus ring buffers and\n an MTU bound; there is no peer-controlled credit field comparable\n to peer_buf_alloc, and the remote endpoint cannot drive in-flight\n kernel memory above those ring sizes.\n\n - The loopback path reuses virtio_transport_common.c, so it\n naturally follows the same semantics as the virtio transport.\n\nThis change is limited to virtio_transport_common.c and thus affects\nvirtio-vsock, vhost-vsock, and loopback, bringing them in line with the\n\"remote window intersected with local policy\" behaviour that VMCI and\nHyper-V already effectively have.\n\n[Stefano: small adjustments after changing the previous patch]\n[Stefano: tweak the commit message]",
"id": "GHSA-45gq-hr3j-jmrq",
"modified": "2026-02-06T18:30:30Z",
"published": "2026-02-04T18:30:43Z",
"references": [
{
"type": "ADVISORY",
"url": "https://nvd.nist.gov/vuln/detail/CVE-2026-23086"
},
{
"type": "WEB",
"url": "https://git.kernel.org/stable/c/84ef86aa7120449828d1e0ce438c499014839711"
},
{
"type": "WEB",
"url": "https://git.kernel.org/stable/c/8ee784fdf006cbe8739cfa093f54d326cbf54037"
},
{
"type": "WEB",
"url": "https://git.kernel.org/stable/c/c0e42fb0e054c2b2ec4ee80f48ccd256ae0227ce"
},
{
"type": "WEB",
"url": "https://git.kernel.org/stable/c/d9d5f222558b42f6277eafaaa6080966faf37676"
},
{
"type": "WEB",
"url": "https://git.kernel.org/stable/c/fef7110ae5617555c792a2bb4d27878d84583adf"
}
],
"schema_version": "1.4.0",
"severity": []
}
Sightings

No sightings have been recorded for this vulnerability.