GHSA-qc4m-rx2p-hrfv
Vulnerability from GitHub
In the Linux kernel, the following vulnerability has been resolved:
KVM: Always flush async #PF workqueue when vCPU is being destroyed
Always flush the per-vCPU async #PF workqueue when a vCPU is clearing its completion queue, e.g. when a VM and all its vCPUs are being destroyed. KVM must ensure that none of its workqueue callbacks is running when the last reference to the KVM module is put. Gifting a reference to the associated VM prevents the workqueue callback from dereferencing freed vCPU/VM memory, but does not prevent the KVM module from being unloaded before the callback completes.
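In workqueue terms, the requirement is that every callback owned by the module has finished before the last module reference can be dropped. Below is a minimal sketch of the flush-before-free pattern this implies; apf_item, apf_execute() and destroy_vcpu_apf() are hypothetical illustrations, not the real KVM symbols.

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

/* Hypothetical stand-in for struct kvm_async_pf; the real layout differs. */
struct apf_item {
	struct work_struct work;
};

static void apf_execute(struct work_struct *work)
{
	struct apf_item *item = container_of(work, struct apf_item, work);

	/* ... fault the page in on behalf of the guest ... */
	(void)item;
}

static void destroy_vcpu_apf(struct apf_item *item)
{
	/*
	 * Wait for apf_execute() to finish before freeing the item.  Once
	 * flush_work() returns, no module-owned callback is still running,
	 * so the caller can safely drop the last reference to the module.
	 */
	flush_work(&item->work);
	kfree(item);
}

Gifting a VM reference instead keeps the vCPU/VM memory alive, but not the module text, which is exactly the gap described above.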
Drop the misguided VM refcount gifting: if kvm_put_kvm() flushes the async #PF workqueue, then calling kvm_put_kvm() from async_pf_execute() results in deadlock, as async_pf_execute() can't return until kvm_put_kvm() finishes, and kvm_put_kvm() can't return until async_pf_execute() finishes:
WARNING: CPU: 8 PID: 251 at virt/kvm/kvm_main.c:1435 kvm_put_kvm+0x2d/0x320 [kvm]
Modules linked in: vhost_net vhost vhost_iotlb tap kvm_intel kvm irqbypass
CPU: 8 PID: 251 Comm: kworker/8:1 Tainted: G W 6.6.0-rc1-e7af8d17224a-x86/gmem-vm #119
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Workqueue: events async_pf_execute [kvm]
RIP: 0010:kvm_put_kvm+0x2d/0x320 [kvm]
Call Trace:
 <TASK>
 async_pf_execute+0x198/0x260 [kvm]
 process_one_work+0x145/0x2d0
 worker_thread+0x27e/0x3a0
 kthread+0xba/0xe0
 ret_from_fork+0x2d/0x50
 ret_from_fork_asm+0x11/0x20
 </TASK>
---[ end trace 0000000000000000 ]---
INFO: task kworker/8:1:251 blocked for more than 120 seconds.
Tainted: G W 6.6.0-rc1-e7af8d17224a-x86/gmem-vm #119
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/8:1 state:D stack:0 pid:251 ppid:2 flags:0x00004000
Workqueue: events async_pf_execute [kvm]
Call Trace:
 <TASK>
 __schedule+0x33f/0xa40
 schedule+0x53/0xc0
 schedule_timeout+0x12a/0x140
 __wait_for_common+0x8d/0x1d0
 __flush_work.isra.0+0x19f/0x2c0
 kvm_clear_async_pf_completion_queue+0x129/0x190 [kvm]
 kvm_arch_destroy_vm+0x78/0x1b0 [kvm]
 kvm_put_kvm+0x1c1/0x320 [kvm]
 async_pf_execute+0x198/0x260 [kvm]
 process_one_work+0x145/0x2d0
 worker_thread+0x27e/0x3a0
 kthread+0xba/0xe0
 ret_from_fork+0x2d/0x50
 ret_from_fork_asm+0x11/0x20
 </TASK>
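The hung task above is a textbook self-flush deadlock: the work item ends up waiting on a flush of itself. Stripped of the KVM specifics, the same cycle can be written in a few lines of deliberately broken, hypothetical module code; loading such a module would wedge a kworker exactly as the trace shows.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/workqueue.h>

static struct work_struct self_flush_work;

static void self_flush_fn(struct work_struct *work)
{
	/*
	 * Deadlock: flush_work() cannot return until this callback
	 * completes, and this callback cannot complete until flush_work()
	 * returns.  In the advisory, async_pf_execute() plays this role
	 * via kvm_put_kvm() -> kvm_clear_async_pf_completion_queue() ->
	 * __flush_work().
	 */
	flush_work(work);
}

static int __init self_flush_init(void)
{
	INIT_WORK(&self_flush_work, self_flush_fn);
	schedule_work(&self_flush_work);	/* wedges a kworker forever */
	return 0;
}
module_init(self_flush_init);
MODULE_LICENSE("GPL");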
If kvm_clear_async_pf_completion_queue() actually flushes the workqueue, then there's no need to gift async_pf_execute() a reference because all invocations of async_pf_execute() will be forced to complete before the vCPU and its VM are destroyed/freed. And that in turn fixes the module unloading bug as __fput() won't do module_put() on the last vCPU reference until the vCPU has been freed, e.g. if closing the vCPU file also puts the last reference to the KVM module.
Note that kvm_check_async_pf_completion() may also take the work item off the completion queue, and so it too needs to flush the work queue, as the work will not be seen by kvm_clear_async_pf_completion_queue() (see the sketch after the snippet below). Waiting on the workqueue could theoretically delay a vCPU due to waiting for the work to complete, but that's a very, very small chance, and likely a very small delay. kvm_arch_async_page_present_queued() unconditionally makes a new request, i.e. will effectively delay entering the guest, so the remaining work is really just:
trace_kvm_async_pf_completed(addr, cr2_or_gpa);
__kvm_vcpu_wake_up(vcpu);
mmput(mm);
and mmput() can't drop the last reference to the page tables if the vCPU is still alive, i.e. the vCPU won't get stuck tearing down page tables.
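As promised above, here is a sketch of the dequeue-side flush in the completion-check path. All names are hypothetical stand-ins for the per-vCPU async #PF state, not the real KVM code; the point is that an item popped here is invisible to the teardown path, so this path must do its own flush_work() before freeing.

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

/* Hypothetical per-vCPU async #PF "done" entry. */
struct apf_item {
	struct work_struct work;
	struct list_head link;
};

static void check_async_pf_completion(struct list_head *done, spinlock_t *lock)
{
	struct apf_item *item;

	spin_lock(lock);
	if (list_empty(done)) {
		spin_unlock(lock);
		return;
	}
	item = list_first_entry(done, struct apf_item, link);
	list_del(&item->link);
	spin_unlock(lock);

	/* ... inject "page ready" into the guest ... */

	/*
	 * The item is off the queue, so the teardown path will never see
	 * it: this path must wait for the callback itself.  Only the small
	 * tail of the callback (trace + wakeup + mmput()) can still be
	 * running, so the wait is short.
	 */
	flush_work(&item->work);
	kfree(item);
}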
Add a helper to do the flushing, specifically to deal with "wakeup all" work items, as they aren't actually work items, i.e. are never placed in a workqueue. Trying to flush a bogus workqueue entry rightly makes __flush_work() complain (kudos to whoever added that sanity check).
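A sketch of what such a helper can look like, following the description above; the names and the wakeup_all discriminator are assumptions based on the commit message, not a copy of the actual fix.

#include <linux/slab.h>
#include <linux/workqueue.h>

struct apf_item {
	struct work_struct work;
	bool wakeup_all;	/* synthesized "wake all" entry, never queued */
};

static void flush_and_free_apf_item(struct apf_item *item)
{
	/*
	 * "Wakeup all" items are placed directly on the done queue and
	 * never run through the workqueue, so their work_struct was never
	 * queued; handing it to flush_work() would trip __flush_work()'s
	 * sanity check.  Only flush real work items.
	 */
	if (!item->wakeup_all)
		flush_work(&item->work);
	kfree(item);
}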
Note, commit 5f6de5cbebee ("KVM: Prevent module exit until all VMs are freed") ---truncated---
{ "affected": [], "aliases": [ "CVE-2024-26976" ], "database_specific": { "cwe_ids": [ "CWE-400" ], "github_reviewed": false, "github_reviewed_at": null, "nvd_published_at": "2024-05-01T06:15:14Z", "severity": "HIGH" }, "details": "In the Linux kernel, the following vulnerability has been resolved:\n\nKVM: Always flush async #PF workqueue when vCPU is being destroyed\n\nAlways flush the per-vCPU async #PF workqueue when a vCPU is clearing its\ncompletion queue, e.g. when a VM and all its vCPUs is being destroyed.\nKVM must ensure that none of its workqueue callbacks is running when the\nlast reference to the KVM _module_ is put. Gifting a reference to the\nassociated VM prevents the workqueue callback from dereferencing freed\nvCPU/VM memory, but does not prevent the KVM module from being unloaded\nbefore the callback completes.\n\nDrop the misguided VM refcount gifting, as calling kvm_put_kvm() from\nasync_pf_execute() if kvm_put_kvm() flushes the async #PF workqueue will\nresult in deadlock. async_pf_execute() can\u0027t return until kvm_put_kvm()\nfinishes, and kvm_put_kvm() can\u0027t return until async_pf_execute() finishes:\n\n WARNING: CPU: 8 PID: 251 at virt/kvm/kvm_main.c:1435 kvm_put_kvm+0x2d/0x320 [kvm]\n Modules linked in: vhost_net vhost vhost_iotlb tap kvm_intel kvm irqbypass\n CPU: 8 PID: 251 Comm: kworker/8:1 Tainted: G W 6.6.0-rc1-e7af8d17224a-x86/gmem-vm #119\n Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015\n Workqueue: events async_pf_execute [kvm]\n RIP: 0010:kvm_put_kvm+0x2d/0x320 [kvm]\n Call Trace:\n \u003cTASK\u003e\n async_pf_execute+0x198/0x260 [kvm]\n process_one_work+0x145/0x2d0\n worker_thread+0x27e/0x3a0\n kthread+0xba/0xe0\n ret_from_fork+0x2d/0x50\n ret_from_fork_asm+0x11/0x20\n \u003c/TASK\u003e\n ---[ end trace 0000000000000000 ]---\n INFO: task kworker/8:1:251 blocked for more than 120 seconds.\n Tainted: G W 6.6.0-rc1-e7af8d17224a-x86/gmem-vm #119\n \"echo 0 \u003e /proc/sys/kernel/hung_task_timeout_secs\" disables this message.\n task:kworker/8:1 state:D stack:0 pid:251 ppid:2 flags:0x00004000\n Workqueue: events async_pf_execute [kvm]\n Call Trace:\n \u003cTASK\u003e\n __schedule+0x33f/0xa40\n schedule+0x53/0xc0\n schedule_timeout+0x12a/0x140\n __wait_for_common+0x8d/0x1d0\n __flush_work.isra.0+0x19f/0x2c0\n kvm_clear_async_pf_completion_queue+0x129/0x190 [kvm]\n kvm_arch_destroy_vm+0x78/0x1b0 [kvm]\n kvm_put_kvm+0x1c1/0x320 [kvm]\n async_pf_execute+0x198/0x260 [kvm]\n process_one_work+0x145/0x2d0\n worker_thread+0x27e/0x3a0\n kthread+0xba/0xe0\n ret_from_fork+0x2d/0x50\n ret_from_fork_asm+0x11/0x20\n \u003c/TASK\u003e\n\nIf kvm_clear_async_pf_completion_queue() actually flushes the workqueue,\nthen there\u0027s no need to gift async_pf_execute() a reference because all\ninvocations of async_pf_execute() will be forced to complete before the\nvCPU and its VM are destroyed/freed. And that in turn fixes the module\nunloading bug as __fput() won\u0027t do module_put() on the last vCPU reference\nuntil the vCPU has been freed, e.g. if closing the vCPU file also puts the\nlast reference to the KVM module.\n\nNote that kvm_check_async_pf_completion() may also take the work item off\nthe completion queue and so also needs to flush the work queue, as the\nwork will not be seen by kvm_clear_async_pf_completion_queue(). Waiting\non the workqueue could theoretically delay a vCPU due to waiting for the\nwork to complete, but that\u0027s a very, very small chance, and likely a very\nsmall delay. 
kvm_arch_async_page_present_queued() unconditionally makes a\nnew request, i.e. will effectively delay entering the guest, so the\nremaining work is really just:\n\n trace_kvm_async_pf_completed(addr, cr2_or_gpa);\n\n __kvm_vcpu_wake_up(vcpu);\n\n mmput(mm);\n\nand mmput() can\u0027t drop the last reference to the page tables if the vCPU is\nstill alive, i.e. the vCPU won\u0027t get stuck tearing down page tables.\n\nAdd a helper to do the flushing, specifically to deal with \"wakeup all\"\nwork items, as they aren\u0027t actually work items, i.e. are never placed in a\nworkqueue. Trying to flush a bogus workqueue entry rightly makes\n__flush_work() complain (kudos to whoever added that sanity check).\n\nNote, commit 5f6de5cbebee (\"KVM: Prevent module exit until al\n---truncated---", "id": "GHSA-qc4m-rx2p-hrfv", "modified": "2024-07-03T18:38:00Z", "published": "2024-05-01T06:31:42Z", "references": [ { "type": "ADVISORY", "url": "https://nvd.nist.gov/vuln/detail/CVE-2024-26976" }, { "type": "WEB", "url": "https://git.kernel.org/stable/c/3d75b8aa5c29058a512db29da7cbee8052724157" }, { "type": "WEB", "url": "https://git.kernel.org/stable/c/4f3a3bce428fb439c66a578adc447afce7b4a750" }, { "type": "WEB", "url": "https://git.kernel.org/stable/c/82e25cc1c2e93c3023da98be282322fc08b61ffb" }, { "type": "WEB", "url": "https://git.kernel.org/stable/c/83d3c5e309611ef593e2fcb78444fc8ceedf9bac" }, { "type": "WEB", "url": "https://git.kernel.org/stable/c/a75afe480d4349c524d9c659b1a5a544dbc39a98" }, { "type": "WEB", "url": "https://git.kernel.org/stable/c/ab2c2f5d9576112ad22cfd3798071cb74693b1f5" }, { "type": "WEB", "url": "https://git.kernel.org/stable/c/b54478d20375874aeee257744dedfd3e413432ff" }, { "type": "WEB", "url": "https://git.kernel.org/stable/c/caa9af2e27c275e089d702cfbaaece3b42bca31b" }, { "type": "WEB", "url": "https://git.kernel.org/stable/c/f8730d6335e5f43d09151fca1f0f41922209a264" }, { "type": "WEB", "url": "https://lists.debian.org/debian-lts-announce/2024/06/msg00017.html" }, { "type": "WEB", "url": "https://lists.debian.org/debian-lts-announce/2024/06/msg00020.html" } ], "schema_version": "1.4.0", "severity": [ { "score": "CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H", "type": "CVSS_V3" } ] }