FKIE_CVE-2025-38397

Vulnerability from fkie_nvd - Published: 2025-07-25 13:15 - Updated: 2025-11-19 19:12
Summary
In the Linux kernel, the following vulnerability has been resolved:

nvme-multipath: fix suspicious RCU usage warning

When I run the NVME over TCP test in virtme-ng, I get the following "suspicious RCU usage" warning in nvme_mpath_add_sysfs_link():

'''
[    5.024557][   T44] nvmet: Created nvm controller 1 for subsystem nqn.2025-06.org.nvmexpress.mptcp for NQN nqn.2014-08.org.nvmexpress:uuid:f7f6b5e0-ff97-4894-98ac-c85309e0bc77.
[    5.027401][  T183] nvme nvme0: creating 2 I/O queues.
[    5.029017][  T183] nvme nvme0: mapped 2/0/0 default/read/poll queues.
[    5.032587][  T183] nvme nvme0: new ctrl: NQN "nqn.2025-06.org.nvmexpress.mptcp", addr 127.0.0.1:4420, hostnqn: nqn.2014-08.org.nvmexpress:uuid:f7f6b5e0-ff97-4894-98ac-c85309e0bc77
[    5.042214][   T25]
[    5.042440][   T25] =============================
[    5.042579][   T25] WARNING: suspicious RCU usage
[    5.042705][   T25] 6.16.0-rc3+ #23 Not tainted
[    5.042812][   T25] -----------------------------
[    5.042934][   T25] drivers/nvme/host/multipath.c:1203 RCU-list traversed in non-reader section!!
[    5.043111][   T25]
[    5.043111][   T25] other info that might help us debug this:
[    5.043111][   T25]
[    5.043341][   T25]
[    5.043341][   T25] rcu_scheduler_active = 2, debug_locks = 1
[    5.043502][   T25] 3 locks held by kworker/u9:0/25:
[    5.043615][   T25]  #0: ffff888008730948 ((wq_completion)async){+.+.}-{0:0}, at: process_one_work+0x7ed/0x1350
[    5.043830][   T25]  #1: ffffc900001afd40 ((work_completion)(&entry->work)){+.+.}-{0:0}, at: process_one_work+0xcf3/0x1350
[    5.044084][   T25]  #2: ffff888013ee0020 (&head->srcu){.+.+}-{0:0}, at: nvme_mpath_add_sysfs_link.part.0+0xb4/0x3a0
[    5.044300][   T25]
[    5.044300][   T25] stack backtrace:
[    5.044439][   T25] CPU: 0 UID: 0 PID: 25 Comm: kworker/u9:0 Not tainted 6.16.0-rc3+ #23 PREEMPT(full)
[    5.044441][   T25] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[    5.044442][   T25] Workqueue: async async_run_entry_fn
[    5.044445][   T25] Call Trace:
[    5.044446][   T25]  <TASK>
[    5.044449][   T25]  dump_stack_lvl+0x6f/0xb0
[    5.044453][   T25]  lockdep_rcu_suspicious.cold+0x4f/0xb1
[    5.044457][   T25]  nvme_mpath_add_sysfs_link.part.0+0x2fb/0x3a0
[    5.044459][   T25]  ? queue_work_on+0x90/0xf0
[    5.044461][   T25]  ? lockdep_hardirqs_on+0x78/0x110
[    5.044466][   T25]  nvme_mpath_set_live+0x1e9/0x4f0
[    5.044470][   T25]  nvme_mpath_add_disk+0x240/0x2f0
[    5.044472][   T25]  ? __pfx_nvme_mpath_add_disk+0x10/0x10
[    5.044475][   T25]  ? add_disk_fwnode+0x361/0x580
[    5.044480][   T25]  nvme_alloc_ns+0x81c/0x17c0
[    5.044483][   T25]  ? kasan_quarantine_put+0x104/0x240
[    5.044487][   T25]  ? __pfx_nvme_alloc_ns+0x10/0x10
[    5.044495][   T25]  ? __pfx_nvme_find_get_ns+0x10/0x10
[    5.044496][   T25]  ? rcu_read_lock_any_held+0x45/0xa0
[    5.044498][   T25]  ? validate_chain+0x232/0x4f0
[    5.044503][   T25]  nvme_scan_ns+0x4c8/0x810
[    5.044506][   T25]  ? __pfx_nvme_scan_ns+0x10/0x10
[    5.044508][   T25]  ? find_held_lock+0x2b/0x80
[    5.044512][   T25]  ? ktime_get+0x16d/0x220
[    5.044517][   T25]  ? kvm_clock_get_cycles+0x18/0x30
[    5.044520][   T25]  ? __pfx_nvme_scan_ns_async+0x10/0x10
[    5.044522][   T25]  async_run_entry_fn+0x97/0x560
[    5.044523][   T25]  ? rcu_is_watching+0x12/0xc0
[    5.044526][   T25]  process_one_work+0xd3c/0x1350
[    5.044532][   T25]  ? __pfx_process_one_work+0x10/0x10
[    5.044536][   T25]  ? assign_work+0x16c/0x240
[    5.044539][   T25]  worker_thread+0x4da/0xd50
[    5.044545][   T25]  ? __pfx_worker_thread+0x10/0x10
[    5.044546][   T25]  kthread+0x356/0x5c0
[    5.044548][   T25]  ? __pfx_kthread+0x10/0x10
[    5.044549][   T25]  ? ret_from_fork+0x1b/0x2e0
[    5.044552][   T25]  ? __lock_release.isra.0+0x5d/0x180
[    5.044553][   T25]  ? ret_from_fork+0x1b/0x2e0
[    5.044555][   T25]  ? rcu_is_watching+0x12/0xc0
[    5.044557][   T25]  ? __pfx_kthread+0x10/0x10
[    5.04
---truncated---
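
The splat is lockdep's "RCU-list traversed in non-reader section" check: nvme_mpath_add_sysfs_link() walks an RCU-protected list while holding head->srcu (an SRCU read lock) rather than rcu_read_lock(), and the plain RCU list iterator only recognizes classic RCU readers. The sketch below is illustrative only, not the upstream patch (see the git.kernel.org references above for the actual change); the demo_head/demo_node structures are hypothetical stand-ins. It shows the general pattern of telling lockdep that a list traversal happens under an SRCU read-side lock.

    /* Illustrative kernel-style sketch, not the actual nvme-multipath patch. */
    #include <linux/rculist.h>
    #include <linux/srcu.h>
    #include <linux/printk.h>

    /* Hypothetical structures standing in for the real multipath list. */
    struct demo_node {
            struct list_head entry;
            int id;
    };

    struct demo_head {
            struct list_head list;          /* writers serialize elsewhere */
            struct srcu_struct srcu;        /* set up with init_srcu_struct() */
    };

    static void demo_walk(struct demo_head *head)
    {
            struct demo_node *node;
            int idx;

            idx = srcu_read_lock(&head->srcu);

            /*
             * A plain list_for_each_entry_rcu(node, &head->list, entry) here
             * would trigger the "RCU-list traversed in non-reader section"
             * warning, because lockdep only sees rcu_read_lock() readers.
             *
             * The SRCU-aware iterator takes a lockdep condition asserting
             * that the SRCU read side is held, so the traversal is
             * recognized as a legitimate reader section.
             */
            list_for_each_entry_srcu(node, &head->list, entry,
                                     srcu_read_lock_held(&head->srcu))
                    pr_info("demo: visiting node %d\n", node->id);

            srcu_read_unlock(&head->srcu, idx);
    }

An equivalent way to quiet the check is to pass the same srcu_read_lock_held() expression as the optional lockdep condition of list_for_each_entry_rcu(); either form documents which read-side lock actually protects the traversal.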
Impacted products

{
  "configurations": [
    {
      "nodes": [
        {
          "cpeMatch": [
            {
              "criteria": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
              "matchCriteriaId": "575B7891-8019-4D0D-B284-1B8DA410C942",
              "versionEndExcluding": "6.15.6",
              "versionStartIncluding": "6.15",
              "vulnerable": true
            },
            {
              "criteria": "cpe:2.3:o:linux:linux_kernel:6.16:rc1:*:*:*:*:*:*",
              "matchCriteriaId": "6D4894DB-CCFE-4602-B1BF-3960B2E19A01",
              "vulnerable": true
            },
            {
              "criteria": "cpe:2.3:o:linux:linux_kernel:6.16:rc2:*:*:*:*:*:*",
              "matchCriteriaId": "09709862-E348-4378-8632-5A7813EDDC86",
              "vulnerable": true
            },
            {
              "criteria": "cpe:2.3:o:linux:linux_kernel:6.16:rc3:*:*:*:*:*:*",
              "matchCriteriaId": "415BF58A-8197-43F5-B3D7-D1D63057A26E",
              "vulnerable": true
            },
            {
              "criteria": "cpe:2.3:o:linux:linux_kernel:6.16:rc4:*:*:*:*:*:*",
              "matchCriteriaId": "A0517869-312D-4429-80C2-561086E1421C",
              "vulnerable": true
            }
          ],
          "negate": false,
          "operator": "OR"
        }
      ]
    }
  ],
  "cveTags": [],
  "descriptions": [
    {
      "lang": "en",
      "value": "In the Linux kernel, the following vulnerability has been resolved:\n\nnvme-multipath: fix suspicious RCU usage warning\n\nWhen I run the NVME over TCP test in virtme-ng, I get the following\n\"suspicious RCU usage\" warning in nvme_mpath_add_sysfs_link():\n\n\u0027\u0027\u0027\n[    5.024557][   T44] nvmet: Created nvm controller 1 for subsystem nqn.2025-06.org.nvmexpress.mptcp for NQN nqn.2014-08.org.nvmexpress:uuid:f7f6b5e0-ff97-4894-98ac-c85309e0bc77.\n[    5.027401][  T183] nvme nvme0: creating 2 I/O queues.\n[    5.029017][  T183] nvme nvme0: mapped 2/0/0 default/read/poll queues.\n[    5.032587][  T183] nvme nvme0: new ctrl: NQN \"nqn.2025-06.org.nvmexpress.mptcp\", addr 127.0.0.1:4420, hostnqn: nqn.2014-08.org.nvmexpress:uuid:f7f6b5e0-ff97-4894-98ac-c85309e0bc77\n[    5.042214][   T25]\n[    5.042440][   T25] =============================\n[    5.042579][   T25] WARNING: suspicious RCU usage\n[    5.042705][   T25] 6.16.0-rc3+ #23 Not tainted\n[    5.042812][   T25] -----------------------------\n[    5.042934][   T25] drivers/nvme/host/multipath.c:1203 RCU-list traversed in non-reader section!!\n[    5.043111][   T25]\n[    5.043111][   T25] other info that might help us debug this:\n[    5.043111][   T25]\n[    5.043341][   T25]\n[    5.043341][   T25] rcu_scheduler_active = 2, debug_locks = 1\n[    5.043502][   T25] 3 locks held by kworker/u9:0/25:\n[    5.043615][   T25]  #0: ffff888008730948 ((wq_completion)async){+.+.}-{0:0}, at: process_one_work+0x7ed/0x1350\n[    5.043830][   T25]  #1: ffffc900001afd40 ((work_completion)(\u0026entry-\u003ework)){+.+.}-{0:0}, at: process_one_work+0xcf3/0x1350\n[    5.044084][   T25]  #2: ffff888013ee0020 (\u0026head-\u003esrcu){.+.+}-{0:0}, at: nvme_mpath_add_sysfs_link.part.0+0xb4/0x3a0\n[    5.044300][   T25]\n[    5.044300][   T25] stack backtrace:\n[    5.044439][   T25] CPU: 0 UID: 0 PID: 25 Comm: kworker/u9:0 Not tainted 6.16.0-rc3+ #23 PREEMPT(full)\n[    5.044441][   T25] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011\n[    5.044442][   T25] Workqueue: async async_run_entry_fn\n[    5.044445][   T25] Call Trace:\n[    5.044446][   T25]  \u003cTASK\u003e\n[    5.044449][   T25]  dump_stack_lvl+0x6f/0xb0\n[    5.044453][   T25]  lockdep_rcu_suspicious.cold+0x4f/0xb1\n[    5.044457][   T25]  nvme_mpath_add_sysfs_link.part.0+0x2fb/0x3a0\n[    5.044459][   T25]  ? queue_work_on+0x90/0xf0\n[    5.044461][   T25]  ? lockdep_hardirqs_on+0x78/0x110\n[    5.044466][   T25]  nvme_mpath_set_live+0x1e9/0x4f0\n[    5.044470][   T25]  nvme_mpath_add_disk+0x240/0x2f0\n[    5.044472][   T25]  ? __pfx_nvme_mpath_add_disk+0x10/0x10\n[    5.044475][   T25]  ? add_disk_fwnode+0x361/0x580\n[    5.044480][   T25]  nvme_alloc_ns+0x81c/0x17c0\n[    5.044483][   T25]  ? kasan_quarantine_put+0x104/0x240\n[    5.044487][   T25]  ? __pfx_nvme_alloc_ns+0x10/0x10\n[    5.044495][   T25]  ? __pfx_nvme_find_get_ns+0x10/0x10\n[    5.044496][   T25]  ? rcu_read_lock_any_held+0x45/0xa0\n[    5.044498][   T25]  ? validate_chain+0x232/0x4f0\n[    5.044503][   T25]  nvme_scan_ns+0x4c8/0x810\n[    5.044506][   T25]  ? __pfx_nvme_scan_ns+0x10/0x10\n[    5.044508][   T25]  ? find_held_lock+0x2b/0x80\n[    5.044512][   T25]  ? ktime_get+0x16d/0x220\n[    5.044517][   T25]  ? kvm_clock_get_cycles+0x18/0x30\n[    5.044520][   T25]  ? __pfx_nvme_scan_ns_async+0x10/0x10\n[    5.044522][   T25]  async_run_entry_fn+0x97/0x560\n[    5.044523][   T25]  ? rcu_is_watching+0x12/0xc0\n[    5.044526][   T25]  process_one_work+0xd3c/0x1350\n[    5.044532][   T25]  ? 
__pfx_process_one_work+0x10/0x10\n[    5.044536][   T25]  ? assign_work+0x16c/0x240\n[    5.044539][   T25]  worker_thread+0x4da/0xd50\n[    5.044545][   T25]  ? __pfx_worker_thread+0x10/0x10\n[    5.044546][   T25]  kthread+0x356/0x5c0\n[    5.044548][   T25]  ? __pfx_kthread+0x10/0x10\n[    5.044549][   T25]  ? ret_from_fork+0x1b/0x2e0\n[    5.044552][   T25]  ? __lock_release.isra.0+0x5d/0x180\n[    5.044553][   T25]  ? ret_from_fork+0x1b/0x2e0\n[    5.044555][   T25]  ? rcu_is_watching+0x12/0xc0\n[    5.044557][   T25]  ? __pfx_kthread+0x10/0x10\n[    5.04\n---truncated---"
    },
    {
      "lang": "es",
      "value": "En el kernel de Linux, se ha resuelto la siguiente vulnerabilidad: nvme-multipath: corregir advertencia de uso sospechoso de RCU Cuando ejecuto la prueba NVME sobre TCP en virtme-ng, obtengo la siguiente advertencia de \"uso sospechoso de RCU\" en nvme_mpath_add_sysfs_link(): \u0027\u0027\u0027 [ 5.024557][ T44] nvmet: Created nvm controller 1 for subsystem nqn.2025-06.org.nvmexpress.mptcp for NQN nqn.2014-08.org.nvmexpress:uuid:f7f6b5e0-ff97-4894-98ac-c85309e0bc77. [ 5.027401][ T183] nvme nvme0: creating 2 I/O queues. [ 5.029017][ T183] nvme nvme0: mapped 2/0/0 default/read/poll queues. [ 5.032587][ T183] nvme nvme0: new ctrl: NQN \"nqn.2025-06.org.nvmexpress.mptcp\", addr 127.0.0.1:4420, hostnqn: nqn.2014-08.org.nvmexpress:uuid:f7f6b5e0-ff97-4894-98ac-c85309e0bc77 [ 5.042214][ T25] [ 5.042440][ T25] ============================= [ 5.042579][ T25] WARNING: suspicious RCU usage [ 5.042705][ T25] 6.16.0-rc3+ #23 Not tainted [ 5.042812][ T25] ----------------------------- [ 5.042934][ T25] drivers/nvme/host/multipath.c:1203 RCU-list traversed in non-reader section!! [ 5.043111][ T25] [ 5.043111][ T25] other info that might help us debug this: [ 5.043111][ T25] [ 5.043341][ T25] [ 5.043341][ T25] rcu_scheduler_active = 2, debug_locks = 1 [ 5.043502][ T25] 3 locks held by kworker/u9:0/25: [ 5.043615][ T25] #0: ffff888008730948 ((wq_completion)async){+.+.}-{0:0}, at: process_one_work+0x7ed/0x1350 [ 5.043830][ T25] #1: ffffc900001afd40 ((work_completion)(\u0026amp;entry-\u0026gt;work)){+.+.}-{0:0}, at: process_one_work+0xcf3/0x1350 [ 5.044084][ T25] #2: ffff888013ee0020 (\u0026amp;head-\u0026gt;srcu){.+.+}-{0:0}, at: nvme_mpath_add_sysfs_link.part.0+0xb4/0x3a0 [ 5.044300][ T25] [ 5.044300][ T25] stack backtrace: [ 5.044439][ T25] CPU: 0 UID: 0 PID: 25 Comm: kworker/u9:0 Not tainted 6.16.0-rc3+ #23 PREEMPT(full) [ 5.044441][ T25] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011 [ 5.044442][ T25] Workqueue: async async_run_entry_fn [ 5.044445][ T25] Call Trace: [ 5.044446][ T25]  [ 5.044449][ T25] dump_stack_lvl+0x6f/0xb0 [ 5.044453][ T25] lockdep_rcu_suspicious.cold+0x4f/0xb1 [ 5.044457][ T25] nvme_mpath_add_sysfs_link.part.0+0x2fb/0x3a0 [ 5.044459][ T25] ? queue_work_on+0x90/0xf0 [ 5.044461][ T25] ? lockdep_hardirqs_on+0x78/0x110 [ 5.044466][ T25] nvme_mpath_set_live+0x1e9/0x4f0 [ 5.044470][ T25] nvme_mpath_add_disk+0x240/0x2f0 [ 5.044472][ T25] ? __pfx_nvme_mpath_add_disk+0x10/0x10 [ 5.044475][ T25] ? add_disk_fwnode+0x361/0x580 [ 5.044480][ T25] nvme_alloc_ns+0x81c/0x17c0 [ 5.044483][ T25] ? kasan_quarantine_put+0x104/0x240 [ 5.044487][ T25] ? __pfx_nvme_alloc_ns+0x10/0x10 [ 5.044495][ T25] ? __pfx_nvme_find_get_ns+0x10/0x10 [ 5.044496][ T25] ? rcu_read_lock_any_held+0x45/0xa0 [ 5.044498][ T25] ? validate_chain+0x232/0x4f0 [ 5.044503][ T25] nvme_scan_ns+0x4c8/0x810 [ 5.044506][ T25] ? __pfx_nvme_scan_ns+0x10/0x10 [ 5.044508][ T25] ? find_held_lock+0x2b/0x80 [ 5.044512][ T25] ? ktime_get+0x16d/0x220 [ 5.044517][ T25] ? kvm_clock_get_cycles+0x18/0x30 [ 5.044520][ T25] ? __pfx_nvme_scan_ns_async+0x10/0x10 [ 5.044522][ T25] async_run_entry_fn+0x97/0x560 [ 5.044523][ T25] ? rcu_is_watching+0x12/0xc0 [ 5.044526][ T25] process_one_work+0xd3c/0x1350 [ 5.044532][ T25] ? __pfx_process_one_work+0x10/0x10 [ 5.044536][ T25] ? assign_work+0x16c/0x240 [ 5.044539][ T25] worker_thread+0x4da/0xd50 [ 5.044545][ T25] ? __pfx_worker_thread+0x10/0x10 [ 5.044546][ T25] kthread+0x356/0x5c0 [ 5.044548][ T25] ? __pfx_kthread+0x10/0x10 [ 5.044549][ T25] ? ret_from_fork+0x1b/0x2e0 [ 5.044552][ T25] ? 
__lock_release.isra.0+0x5d/0x180 [ 5.044553][ T25] ? ret_from_fork+0x1b/0x2e0 [ 5.044555][ T25] ? rcu_is_watching+0x12/0xc0 [ 5.044557][ T25] ? __pfx_kthread+0x10/0x10 [ 5.04 ---truncated---"
    }
  ],
  "id": "CVE-2025-38397",
  "lastModified": "2025-11-19T19:12:48.980",
  "metrics": {
    "cvssMetricV31": [
      {
        "cvssData": {
          "attackComplexity": "LOW",
          "attackVector": "LOCAL",
          "availabilityImpact": "HIGH",
          "baseScore": 5.5,
          "baseSeverity": "MEDIUM",
          "confidentialityImpact": "NONE",
          "integrityImpact": "NONE",
          "privilegesRequired": "LOW",
          "scope": "UNCHANGED",
          "userInteraction": "NONE",
          "vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H",
          "version": "3.1"
        },
        "exploitabilityScore": 1.8,
        "impactScore": 3.6,
        "source": "nvd@nist.gov",
        "type": "Primary"
      }
    ]
  },
  "published": "2025-07-25T13:15:29.193",
  "references": [
    {
      "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
      "tags": [
        "Patch"
      ],
      "url": "https://git.kernel.org/stable/c/a432383e6cd86d9fda00a6073ed35c1067a836d6"
    },
    {
      "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
      "tags": [
        "Patch"
      ],
      "url": "https://git.kernel.org/stable/c/d6811074203b13f715ce2480ac64c5b1c773f2a5"
    }
  ],
  "sourceIdentifier": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
  "vulnStatus": "Analyzed",
  "weaknesses": [
    {
      "description": [
        {
          "lang": "en",
          "value": "NVD-CWE-noinfo"
        }
      ],
      "source": "nvd@nist.gov",
      "type": "Primary"
    }
  ]
}

