ghsa-q99v-mcmh-7x5m
Vulnerability from GitHub
In the Linux kernel, the following vulnerability has been resolved:
mm, kmsan: fix infinite recursion due to RCU critical section
Alexander Potapenko writes in [1]: "For every memory access in the code instrumented by KMSAN we call kmsan_get_metadata() to obtain the metadata for the memory being accessed. For virtual memory the metadata pointers are stored in the corresponding `struct page`, therefore we need to call virt_to_page() to get them.
According to the comment in arch/x86/include/asm/page.h, virt_to_page(kaddr) returns a valid pointer iff virt_addr_valid(kaddr) is true, so KMSAN needs to call virt_addr_valid() as well.
To avoid recursion, kmsan_get_metadata() must not call instrumented code, therefore ./arch/x86/include/asm/kmsan.h forks parts of arch/x86/mm/physaddr.c to check whether a virtual address is valid or not.
But the introduction of rcu_read_lock() to pfn_valid() added instrumented RCU API calls to virt_to_page_or_null(), which is called by kmsan_get_metadata(), so there is an infinite recursion now. I do not think it is correct to stop that recursion by doing kmsan_enter_runtime()/kmsan_exit_runtime() in kmsan_get_metadata(): that would prevent instrumented functions called from within the runtime from tracking the shadow values, which might introduce false positives."
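To make the cycle concrete, the sketch below is a small, compilable userspace model of that call chain, not kernel code: the *_model() functions and the dummy_page buffer are hypothetical stand-ins, and the recursion is capped so the demo terminates instead of overflowing the stack.

```c
/*
 * Userspace model of the recursion described above. Everything here is a
 * stand-in that only mirrors the call chain
 * kmsan_get_metadata -> virt_to_page_or_null -> pfn_valid -> rcu_read_lock,
 * it is not the kernel implementation.
 */
#include <stdbool.h>
#include <stdio.h>

static char dummy_page[4096];	/* stand-in for some kernel virtual memory */

static void *kmsan_get_metadata_model(void *addr);

/* Models the instrumented rcu_read_lock(): with preemptible RCU it is a
 * real (instrumented) function call, so KMSAN checks the memory it touches
 * and ends up back in kmsan_get_metadata(). */
static void rcu_read_lock_model(void)
{
	static int depth;

	if (++depth > 3) {	/* cap the demo instead of blowing the stack */
		puts("recursion would continue forever here");
		depth--;
		return;
	}
	kmsan_get_metadata_model(dummy_page);
	depth--;
}

/* Models pfn_valid() after rcu_read_lock() was added to it. */
static bool pfn_valid_model(unsigned long pfn)
{
	rcu_read_lock_model();
	/* ... mem_section lookup elided; rcu_read_unlock() recurses too ... */
	return pfn != 0;
}

/* Models kmsan_get_metadata() -> virt_to_page_or_null() -> pfn_valid(). */
static void *kmsan_get_metadata_model(void *addr)
{
	unsigned long pfn = (unsigned long)addr >> 12;

	if (!pfn_valid_model(pfn))
		return NULL;
	return addr;	/* placeholder for the shadow/origin pointer */
}

int main(void)
{
	kmsan_get_metadata_model(dummy_page);
	return 0;
}
```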
Fix the issue by switching pfn_valid() to the _sched() variant of rcu_read_lock/unlock(), which does not require calling into RCU. Given the critical section in pfn_valid() is very small, this is a reasonable trade-off (with preemptible RCU).
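As a rough illustration, a sparsemem-style pfn_valid() with that switch applied might look like the simplified sketch below. This is an assumption-laden paraphrase of include/linux/mmzone.h, not the exact upstream diff; see the git.kernel.org references for the real change.

```c
/* Simplified sketch, not the exact upstream code: pfn_valid() with the
 * RCU read lock replaced by its _sched() flavour. */
static inline int pfn_valid(unsigned long pfn)
{
	struct mem_section *ms;
	int ret;

	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
		return 0;
	ms = __pfn_to_section(pfn);

	/*
	 * rcu_read_lock_sched()/rcu_read_unlock_sched() boil down to
	 * preempt_disable()/preempt_enable() and never call into the RCU
	 * core, unlike rcu_read_lock() under CONFIG_PREEMPT_RCU, so no
	 * instrumented RCU function is reached from here.
	 */
	rcu_read_lock_sched();
	if (!valid_section(ms)) {
		rcu_read_unlock_sched();
		return 0;
	}
	ret = early_section(ms) || pfn_section_valid(ms, pfn);
	rcu_read_unlock_sched();

	return ret;
}
```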
KMSAN further needs to be careful to suppress calls into the scheduler, which would be another source of recursion. This can be done by wrapping the call to pfn_valid() into preempt_disable/enable_no_resched(). The downside is that this occasionally sacrifices scheduling guarantees by skipping the reschedule point; however, a kernel compiled with KMSAN has already given up any performance guarantees due to being heavily instrumented.
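The KMSAN-side wrapping could look roughly like the following sketch of kmsan_virt_addr_valid() from arch/x86/include/asm/kmsan.h. The non-instrumented address-range checks forked from physaddr.c are elided and the __pa()-based pfn computation is only a placeholder, so treat this as an illustration rather than the actual patch.

```c
/* Simplified sketch of the wrapping described above; the real function
 * open-codes the physaddr.c checks instead of the __pa() placeholder. */
static inline bool kmsan_virt_addr_valid(void *addr)
{
	unsigned long pfn;
	bool ret;

	/* ... non-instrumented address-range checks elided ... */
	pfn = (unsigned long)__pa(addr) >> PAGE_SHIFT;	/* placeholder */

	/*
	 * pfn_valid() now uses an RCU-sched read-side section. Re-enabling
	 * preemption with a reschedule check could call into the scheduler,
	 * which is instrumented and would recurse back into KMSAN, so the
	 * reschedule point is skipped with preempt_enable_no_resched().
	 */
	preempt_disable();
	ret = pfn_valid(pfn);
	preempt_enable_no_resched();

	return ret;
}
```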
Note, KMSAN code already disables tracing via Makefile, and since mmzone.h is included, it is not necessary to use the notrace variant, which is generally preferred in all other cases.
{ "affected": [], "aliases": [ "CVE-2024-26639" ], "database_specific": { "cwe_ids": [], "github_reviewed": false, "github_reviewed_at": null, "nvd_published_at": "2024-03-18T11:15:10Z", "severity": null }, "details": "In the Linux kernel, the following vulnerability has been resolved:\n\nmm, kmsan: fix infinite recursion due to RCU critical section\n\nAlexander Potapenko writes in [1]: \"For every memory access in the code\ninstrumented by KMSAN we call kmsan_get_metadata() to obtain the metadata\nfor the memory being accessed. For virtual memory the metadata pointers\nare stored in the corresponding `struct page`, therefore we need to call\nvirt_to_page() to get them.\n\nAccording to the comment in arch/x86/include/asm/page.h,\nvirt_to_page(kaddr) returns a valid pointer iff virt_addr_valid(kaddr) is\ntrue, so KMSAN needs to call virt_addr_valid() as well.\n\nTo avoid recursion, kmsan_get_metadata() must not call instrumented code,\ntherefore ./arch/x86/include/asm/kmsan.h forks parts of\narch/x86/mm/physaddr.c to check whether a virtual address is valid or not.\n\nBut the introduction of rcu_read_lock() to pfn_valid() added instrumented\nRCU API calls to virt_to_page_or_null(), which is called by\nkmsan_get_metadata(), so there is an infinite recursion now. I do not\nthink it is correct to stop that recursion by doing\nkmsan_enter_runtime()/kmsan_exit_runtime() in kmsan_get_metadata(): that\nwould prevent instrumented functions called from within the runtime from\ntracking the shadow values, which might introduce false positives.\"\n\nFix the issue by switching pfn_valid() to the _sched() variant of\nrcu_read_lock/unlock(), which does not require calling into RCU. Given\nthe critical section in pfn_valid() is very small, this is a reasonable\ntrade-off (with preemptible RCU).\n\nKMSAN further needs to be careful to suppress calls into the scheduler,\nwhich would be another source of recursion. This can be done by wrapping\nthe call to pfn_valid() into preempt_disable/enable_no_resched(). The\ndownside is that this sacrifices breaking scheduling guarantees; however,\na kernel compiled with KMSAN has already given up any performance\nguarantees due to being heavily instrumented.\n\nNote, KMSAN code already disables tracing via Makefile, and since mmzone.h\nis included, it is not necessary to use the notrace variant, which is\ngenerally preferred in all other cases.", "id": "GHSA-q99v-mcmh-7x5m", "modified": "2024-04-04T15:30:33Z", "published": "2024-03-18T12:30:35Z", "references": [ { "type": "ADVISORY", "url": "https://nvd.nist.gov/vuln/detail/CVE-2024-26639" }, { "type": "WEB", "url": "https://git.kernel.org/stable/c/5a33420599fa0288792537e6872fd19cc8607ea6" }, { "type": "WEB", "url": "https://git.kernel.org/stable/c/6335c0cdb2ea0ea02c999e04d34fd84f69fb27ff" }, { "type": "WEB", "url": "https://git.kernel.org/stable/c/dc904345e3771aa01d0b8358b550802fdc6fe00b" }, { "type": "WEB", "url": "https://git.kernel.org/stable/c/f6564fce256a3944aa1bc76cb3c40e792d97c1eb" } ], "schema_version": "1.4.0", "severity": [] }
Sightings
Author | Source | Type | Date
---|---|---|---
Nomenclature
- Seen: The vulnerability was mentioned, discussed, or seen somewhere by the user.
- Confirmed: The vulnerability is confirmed from an analyst perspective.
- Exploited: This vulnerability was exploited and seen by the user reporting the sighting.
- Patched: This vulnerability was successfully patched by the user reporting the sighting.
- Not exploited: This vulnerability was not exploited or seen by the user reporting the sighting.
- Not confirmed: The user expresses doubt about the veracity of the vulnerability.
- Not patched: This vulnerability was not successfully patched by the user reporting the sighting.