FKIE_CVE-2023-54013
Vulnerability from fkie_nvd - Published: 2025-12-24 11:15 - Updated: 2025-12-29 15:58
Severity: not yet assigned (no CVSS metrics; vulnerability status: Awaiting Analysis)
Summary
In the Linux kernel, the following vulnerability has been resolved:
interconnect: Fix locking for runpm vs reclaim
For cases where icc_set_bw() can be called in call paths that could
deadlock against shrinker/reclaim, such as runpm resume, we need to
decouple the icc locking. Introduce a new icc_bw_lock for cases where
we need to serialize bw aggregation and update, to decouple that from
paths that require memory allocation such as node/link creation/destruction.
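
The fix described here is a classic lock split. As a rough userspace analogue (pthreads; all names below are illustrative stand-ins, not the kernel's identifiers), the pattern looks like this: topology changes may allocate and take both locks in a fixed order, while the hot bandwidth-update path takes only the second lock and never allocates.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Two locks instead of one: "topology" may be held across allocation,
 * "bw" serializes only bandwidth aggregation and is allocation-free. */
static pthread_mutex_t topology_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t bw_lock = PTHREAD_MUTEX_INITIALIZER;

struct node {
	struct node *next;
	unsigned long bw;
};

static struct node *nodes;
static unsigned long total_bw;

/* Topology change: allocates under topology_lock, which is fine because
 * no reclaim-sensitive path ever takes topology_lock. */
static void node_create(void)
{
	pthread_mutex_lock(&topology_lock);
	struct node *n = calloc(1, sizeof(*n)); /* analogous to kzalloc() */
	if (!n)
		abort();
	pthread_mutex_lock(&bw_lock);           /* fixed nesting: topology, then bw */
	n->next = nodes;
	nodes = n;
	pthread_mutex_unlock(&bw_lock);
	pthread_mutex_unlock(&topology_lock);
}

/* Hot path (think runpm resume): takes only bw_lock and never allocates,
 * so it cannot end up waiting on reclaim while reclaim waits on it. */
static void set_bw(struct node *n, unsigned long bw)
{
	pthread_mutex_lock(&bw_lock);
	total_bw += bw - n->bw;
	n->bw = bw;
	pthread_mutex_unlock(&bw_lock);
}

int main(void)
{
	node_create();
	set_bw(nodes, 100);
	printf("aggregate bw: %lu\n", total_bw);
	return 0;
}
```

The invariant that matters is that bw_lock is never held across an allocation, so a path running in a reclaim-sensitive context can take it safely.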
The patch fixes this lockdep splat:
======================================================
WARNING: possible circular locking dependency detected
6.2.0-rc8-debug+ #554 Not tainted
------------------------------------------------------
ring0/132 is trying to acquire lock:
ffffff80871916d0 (&gmu->lock){+.+.}-{3:3}, at: a6xx_pm_resume+0xf0/0x234
but task is already holding lock:
ffffffdb5aee57e8 (dma_fence_map){++++}-{0:0}, at: msm_job_run+0x68/0x150
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #4 (dma_fence_map){++++}-{0:0}:
__dma_fence_might_wait+0x74/0xc0
dma_resv_lockdep+0x1f4/0x2f4
do_one_initcall+0x104/0x2bc
kernel_init_freeable+0x344/0x34c
kernel_init+0x30/0x134
ret_from_fork+0x10/0x20
-> #3 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}:
fs_reclaim_acquire+0x80/0xa8
slab_pre_alloc_hook.constprop.0+0x40/0x25c
__kmem_cache_alloc_node+0x60/0x1cc
__kmalloc+0xd8/0x100
topology_parse_cpu_capacity+0x8c/0x178
get_cpu_for_node+0x88/0xc4
parse_cluster+0x1b0/0x28c
parse_cluster+0x8c/0x28c
init_cpu_topology+0x168/0x188
smp_prepare_cpus+0x24/0xf8
kernel_init_freeable+0x18c/0x34c
kernel_init+0x30/0x134
ret_from_fork+0x10/0x20
-> #2 (fs_reclaim){+.+.}-{0:0}:
__fs_reclaim_acquire+0x3c/0x48
fs_reclaim_acquire+0x54/0xa8
slab_pre_alloc_hook.constprop.0+0x40/0x25c
__kmem_cache_alloc_node+0x60/0x1cc
__kmalloc+0xd8/0x100
kzalloc.constprop.0+0x14/0x20
icc_node_create_nolock+0x4c/0xc4
icc_node_create+0x38/0x58
qcom_icc_rpmh_probe+0x1b8/0x248
platform_probe+0x70/0xc4
really_probe+0x158/0x290
__driver_probe_device+0xc8/0xe0
driver_probe_device+0x44/0x100
__driver_attach+0xf8/0x108
bus_for_each_dev+0x78/0xc4
driver_attach+0x2c/0x38
bus_add_driver+0xd0/0x1d8
driver_register+0xbc/0xf8
__platform_driver_register+0x30/0x3c
qnoc_driver_init+0x24/0x30
do_one_initcall+0x104/0x2bc
kernel_init_freeable+0x344/0x34c
kernel_init+0x30/0x134
ret_from_fork+0x10/0x20
-> #1 (icc_lock){+.+.}-{3:3}:
__mutex_lock+0xcc/0x3c8
mutex_lock_nested+0x30/0x44
icc_set_bw+0x88/0x2b4
_set_opp_bw+0x8c/0xd8
_set_opp+0x19c/0x300
dev_pm_opp_set_opp+0x84/0x94
a6xx_gmu_resume+0x18c/0x804
a6xx_pm_resume+0xf8/0x234
adreno_runtime_resume+0x2c/0x38
pm_generic_runtime_resume+0x30/0x44
__rpm_callback+0x15c/0x174
rpm_callback+0x78/0x7c
rpm_resume+0x318/0x524
__pm_runtime_resume+0x78/0xbc
adreno_load_gpu+0xc4/0x17c
msm_open+0x50/0x120
drm_file_alloc+0x17c/0x228
drm_open_helper+0x74/0x118
drm_open+0xa0/0x144
drm_stub_open+0xd4/0xe4
chrdev_open+0x1b8/0x1e4
do_dentry_open+0x2f8/0x38c
vfs_open+0x34/0x40
path_openat+0x64c/0x7b4
do_filp_open+0x54/0xc4
do_sys_openat2+0x9c/0x100
do_sys_open+0x50/0x7c
__arm64_sys_openat+0x28/0x34
invoke_syscall+0x8c/0x128
el0_svc_common.constprop.0+0xa0/0x11c
do_el0_
---truncated---
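
Reading the (reverse-order) chain: stack #1 shows the runpm resume path taking icc_lock via icc_set_bw(); stack #2 shows driver probe allocating memory (hence fs_reclaim) while holding that same icc_lock; stacks #3 and #4 tie reclaim to MMU notifiers and dma_fence_map; and the reporting task already holds dma_fence_map while trying to take gmu->lock, closing the cycle. A toy pthreads program (hypothetical names; trylock is used so it reports the inversion instead of actually hanging) shows the two-lock inversion at the heart of it:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-ins for the two sides of the cycle, not kernel identifiers. */
static pthread_mutex_t icc = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t reclaim = PTHREAD_MUTEX_INITIALIZER;

/* Models the probe path: holds the icc-like lock, then allocates,
 * i.e. may need to enter reclaim. */
static void *probe_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&icc);
	usleep(100 * 1000);                  /* widen the race window */
	if (pthread_mutex_trylock(&reclaim))
		puts("probe path: blocked on reclaim while holding icc lock");
	else
		pthread_mutex_unlock(&reclaim);
	pthread_mutex_unlock(&icc);
	return NULL;
}

/* Models runpm resume running in a reclaim-sensitive context that then
 * wants the icc-like lock for a bandwidth update. */
static void *resume_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&reclaim);
	usleep(100 * 1000);
	if (pthread_mutex_trylock(&icc))
		puts("resume path: blocked on icc lock while in reclaim");
	else
		pthread_mutex_unlock(&icc);
	pthread_mutex_unlock(&reclaim);
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	pthread_create(&a, NULL, probe_path, NULL);
	pthread_create(&b, NULL, resume_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}
```

Built with `cc -pthread`, both threads will typically report being blocked on the other's lock. Lockdep flags exactly this kind of order inversion without the deadlock ever having to occur; splitting out icc_bw_lock removes the icc_lock -> fs_reclaim edge for the bandwidth-update path.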
References
- https://git.kernel.org/stable/c/2f3a124696d43de3c837f87a9f767c56ee86cf2a
- https://git.kernel.org/stable/c/af42269c3523492d71ebbe11fefae2653e9cdc78
Impacted products
| Vendor | Product | Version |
|---|---|---|

No impacted-product data has been recorded for this entry yet (vulnerability status: Awaiting Analysis).
{
"cveTags": [],
"descriptions": [
{
"lang": "en",
"value": "In the Linux kernel, the following vulnerability has been resolved:\n\ninterconnect: Fix locking for runpm vs reclaim\n\nFor cases where icc_bw_set() can be called in callbaths that could\ndeadlock against shrinker/reclaim, such as runpm resume, we need to\ndecouple the icc locking. Introduce a new icc_bw_lock for cases where\nwe need to serialize bw aggregation and update to decouple that from\npaths that require memory allocation such as node/link creation/\ndestruction.\n\nFixes this lockdep splat:\n\n ======================================================\n WARNING: possible circular locking dependency detected\n 6.2.0-rc8-debug+ #554 Not tainted\n ------------------------------------------------------\n ring0/132 is trying to acquire lock:\n ffffff80871916d0 (\u0026gmu-\u003elock){+.+.}-{3:3}, at: a6xx_pm_resume+0xf0/0x234\n\n but task is already holding lock:\n ffffffdb5aee57e8 (dma_fence_map){++++}-{0:0}, at: msm_job_run+0x68/0x150\n\n which lock already depends on the new lock.\n\n the existing dependency chain (in reverse order) is:\n\n -\u003e #4 (dma_fence_map){++++}-{0:0}:\n __dma_fence_might_wait+0x74/0xc0\n dma_resv_lockdep+0x1f4/0x2f4\n do_one_initcall+0x104/0x2bc\n kernel_init_freeable+0x344/0x34c\n kernel_init+0x30/0x134\n ret_from_fork+0x10/0x20\n\n -\u003e #3 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}:\n fs_reclaim_acquire+0x80/0xa8\n slab_pre_alloc_hook.constprop.0+0x40/0x25c\n __kmem_cache_alloc_node+0x60/0x1cc\n __kmalloc+0xd8/0x100\n topology_parse_cpu_capacity+0x8c/0x178\n get_cpu_for_node+0x88/0xc4\n parse_cluster+0x1b0/0x28c\n parse_cluster+0x8c/0x28c\n init_cpu_topology+0x168/0x188\n smp_prepare_cpus+0x24/0xf8\n kernel_init_freeable+0x18c/0x34c\n kernel_init+0x30/0x134\n ret_from_fork+0x10/0x20\n\n -\u003e #2 (fs_reclaim){+.+.}-{0:0}:\n __fs_reclaim_acquire+0x3c/0x48\n fs_reclaim_acquire+0x54/0xa8\n slab_pre_alloc_hook.constprop.0+0x40/0x25c\n __kmem_cache_alloc_node+0x60/0x1cc\n __kmalloc+0xd8/0x100\n kzalloc.constprop.0+0x14/0x20\n icc_node_create_nolock+0x4c/0xc4\n icc_node_create+0x38/0x58\n qcom_icc_rpmh_probe+0x1b8/0x248\n platform_probe+0x70/0xc4\n really_probe+0x158/0x290\n __driver_probe_device+0xc8/0xe0\n driver_probe_device+0x44/0x100\n __driver_attach+0xf8/0x108\n bus_for_each_dev+0x78/0xc4\n driver_attach+0x2c/0x38\n bus_add_driver+0xd0/0x1d8\n driver_register+0xbc/0xf8\n __platform_driver_register+0x30/0x3c\n qnoc_driver_init+0x24/0x30\n do_one_initcall+0x104/0x2bc\n kernel_init_freeable+0x344/0x34c\n kernel_init+0x30/0x134\n ret_from_fork+0x10/0x20\n\n -\u003e #1 (icc_lock){+.+.}-{3:3}:\n __mutex_lock+0xcc/0x3c8\n mutex_lock_nested+0x30/0x44\n icc_set_bw+0x88/0x2b4\n _set_opp_bw+0x8c/0xd8\n _set_opp+0x19c/0x300\n dev_pm_opp_set_opp+0x84/0x94\n a6xx_gmu_resume+0x18c/0x804\n a6xx_pm_resume+0xf8/0x234\n adreno_runtime_resume+0x2c/0x38\n pm_generic_runtime_resume+0x30/0x44\n __rpm_callback+0x15c/0x174\n rpm_callback+0x78/0x7c\n rpm_resume+0x318/0x524\n __pm_runtime_resume+0x78/0xbc\n adreno_load_gpu+0xc4/0x17c\n msm_open+0x50/0x120\n drm_file_alloc+0x17c/0x228\n drm_open_helper+0x74/0x118\n drm_open+0xa0/0x144\n drm_stub_open+0xd4/0xe4\n chrdev_open+0x1b8/0x1e4\n do_dentry_open+0x2f8/0x38c\n vfs_open+0x34/0x40\n path_openat+0x64c/0x7b4\n do_filp_open+0x54/0xc4\n do_sys_openat2+0x9c/0x100\n do_sys_open+0x50/0x7c\n __arm64_sys_openat+0x28/0x34\n invoke_syscall+0x8c/0x128\n el0_svc_common.constprop.0+0xa0/0x11c\n do_el0_\n---truncated---"
}
],
"id": "CVE-2023-54013",
"lastModified": "2025-12-29T15:58:56.260",
"metrics": {},
"published": "2025-12-24T11:15:54.270",
"references": [
{
"source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
"url": "https://git.kernel.org/stable/c/2f3a124696d43de3c837f87a9f767c56ee86cf2a"
},
{
"source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
"url": "https://git.kernel.org/stable/c/af42269c3523492d71ebbe11fefae2653e9cdc78"
}
],
"sourceIdentifier": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
"vulnStatus": "Awaiting Analysis"
}
Sightings
| Author | Source | Type | Date |
|---|---|---|---|
Nomenclature
- Seen: The vulnerability was mentioned, discussed, or observed by the user.
- Confirmed: The vulnerability has been validated from an analyst's perspective.
- Published Proof of Concept: A public proof of concept is available for this vulnerability.
- Exploited: The vulnerability was observed as exploited by the user who reported the sighting.
- Patched: The vulnerability was observed as successfully patched by the user who reported the sighting.
- Not exploited: The vulnerability was not observed as exploited by the user who reported the sighting.
- Not confirmed: The user expressed doubt about the validity of the vulnerability.
- Not patched: The vulnerability was not observed as successfully patched by the user who reported the sighting.