GHSA-7XXJ-9P5P-Q5RJ

Vulnerability from github – Published: 2025-10-21 12:31 – Updated: 2025-10-21 12:31
Details

In the Linux kernel, the following vulnerability has been resolved:

rcu-tasks: Fix race in schedule and flush work

While booting secondary CPUs, cpus_read_[lock/unlock] does not keep the online cpumask stable. The transient online mask results in the call trace below.

[    0.324121] CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
[    0.346652] Detected PIPT I-cache on CPU2
[    0.347212] CPU2: Booted secondary processor 0x0000000002 [0x410fd083]
[    0.377255] Detected PIPT I-cache on CPU3
[    0.377823] CPU3: Booted secondary processor 0x0000000003 [0x410fd083]
[    0.379040] ------------[ cut here ]------------
[    0.383662] WARNING: CPU: 0 PID: 10 at kernel/workqueue.c:3084 __flush_work+0x12c/0x138
[    0.384850] Modules linked in:
[    0.385403] CPU: 0 PID: 10 Comm: rcu_tasks_rude_ Not tainted 5.17.0-rc3-v8+ #13
[    0.386473] Hardware name: Raspberry Pi 4 Model B Rev 1.4 (DT)
[    0.387289] pstate: 20000005 (nzCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[    0.388308] pc : __flush_work+0x12c/0x138
[    0.388970] lr : __flush_work+0x80/0x138
[    0.389620] sp : ffffffc00aaf3c60
[    0.390139] x29: ffffffc00aaf3d20 x28: ffffffc009c16af0 x27: ffffff80f761df48
[    0.391316] x26: 0000000000000004 x25: 0000000000000003 x24: 0000000000000100
[    0.392493] x23: ffffffffffffffff x22: ffffffc009c16b10 x21: ffffffc009c16b28
[    0.393668] x20: ffffffc009e53861 x19: ffffff80f77fbf40 x18: 00000000d744fcc9
[    0.394842] x17: 000000000000000b x16: 00000000000001c2 x15: ffffffc009e57550
[    0.396016] x14: 0000000000000000 x13: ffffffffffffffff x12: 0000000100000000
[    0.397190] x11: 0000000000000462 x10: ffffff8040258008 x9 : 0000000100000000
[    0.398364] x8 : 0000000000000000 x7 : ffffffc0093c8bf4 x6 : 0000000000000000
[    0.399538] x5 : 0000000000000000 x4 : ffffffc00a976e40 x3 : ffffffc00810444c
[    0.400711] x2 : 0000000000000004 x1 : 0000000000000000 x0 : 0000000000000000
[    0.401886] Call trace:
[    0.402309]  __flush_work+0x12c/0x138
[    0.402941]  schedule_on_each_cpu+0x228/0x278
[    0.403693]  rcu_tasks_rude_wait_gp+0x130/0x144
[    0.404502]  rcu_tasks_kthread+0x220/0x254
[    0.405264]  kthread+0x174/0x1ac
[    0.405837]  ret_from_fork+0x10/0x20
[    0.406456] irq event stamp: 102
[    0.406966] hardirqs last  enabled at (101): [<ffffffc0093c8468>] _raw_spin_unlock_irq+0x78/0xb4
[    0.408304] hardirqs last disabled at (102): [<ffffffc0093b8270>] el1_dbg+0x24/0x5c
[    0.409410] softirqs last  enabled at (54): [<ffffffc0081b80c8>] local_bh_enable+0xc/0x2c
[    0.410645] softirqs last disabled at (50): [<ffffffc0081b809c>] local_bh_disable+0xc/0x2c
[    0.411890] ---[ end trace 0000000000000000 ]---
[    0.413000] smp: Brought up 1 node, 4 CPUs
[    0.413762] SMP: Total of 4 processors activated.
[    0.414566] CPU features: detected: 32-bit EL0 Support
[    0.415414] CPU features: detected: 32-bit EL1 Support
[    0.416278] CPU features: detected: CRC32 instructions
[    0.447021] Callback from call_rcu_tasks_rude() invoked.
[    0.506693] Callback from call_rcu_tasks() invoked.

This commit therefore fixes this issue by applying a single-CPU optimization to the RCU Tasks Rude grace-period process. The key point here is that the purpose of this RCU flavor is to force a schedule on each online CPU since some past event. But the rcu_tasks_rude_wait_gp() function runs in the context of the RCU Tasks Rude grace-period kthread, so there must already have been a context switch on the current CPU since the call to either synchronize_rcu_tasks_rude() or call_rcu_tasks_rude(). So if there is only a single CPU online, RCU Tasks Rude's grace-period kthread does not need to do anything at all.

It turns out that the rcu_tasks_rude_wait_gp() function's call to schedule_on_each_cpu() causes problems during early boot. During that time, there is only one online CPU, namely the boot CPU. Therefore, applying this single-CPU optimization fixes early-boot instances of this problem.
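
The fix lands in rcu_tasks_rude_wait_gp(). Below is a minimal sketch of what such a single-CPU fast path looks like; the n_ipis counter and the rcu_tasks_be_rude() work function follow the kernel's RCU Tasks code, but treat the exact shape as an approximation rather than the verbatim upstream patch.

static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
{
	/*
	 * Single-CPU fast path (sketch). The grace-period kthread is itself
	 * running on the only online CPU, so a context switch has already
	 * occurred there since the synchronize_rcu_tasks_rude() or
	 * call_rcu_tasks_rude() call; nothing more is needed.
	 */
	if (num_online_cpus() <= 1)
		return;

	/* Otherwise, force a schedule on every online CPU. */
	rtp->n_ipis += cpumask_weight(cpu_online_mask);
	schedule_on_each_cpu(rcu_tasks_be_rude);
}

Because early boot has only the boot CPU online, the fast path means schedule_on_each_cpu() is never reached while the online cpumask is still in flux, which avoids the __flush_work() warning shown in the trace above.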


{
  "affected": [],
  "aliases": [
    "CVE-2022-49540"
  ],
  "database_specific": {
    "cwe_ids": [
      "CWE-362"
    ],
    "github_reviewed": false,
    "github_reviewed_at": null,
    "nvd_published_at": "2025-02-26T07:01:29Z",
    "severity": "MODERATE"
  },
  "details": "In the Linux kernel, the following vulnerability has been resolved:\n\nrcu-tasks: Fix race in schedule and flush work\n\nWhile booting secondary CPUs, cpus_read_[lock/unlock] is not keeping\nonline cpumask stable. The transient online mask results in below\ncalltrace.\n\n[    0.324121] CPU1: Booted secondary processor 0x0000000001 [0x410fd083]\n[    0.346652] Detected PIPT I-cache on CPU2\n[    0.347212] CPU2: Booted secondary processor 0x0000000002 [0x410fd083]\n[    0.377255] Detected PIPT I-cache on CPU3\n[    0.377823] CPU3: Booted secondary processor 0x0000000003 [0x410fd083]\n[    0.379040] ------------[ cut here ]------------\n[    0.383662] WARNING: CPU: 0 PID: 10 at kernel/workqueue.c:3084 __flush_work+0x12c/0x138\n[    0.384850] Modules linked in:\n[    0.385403] CPU: 0 PID: 10 Comm: rcu_tasks_rude_ Not tainted 5.17.0-rc3-v8+ #13\n[    0.386473] Hardware name: Raspberry Pi 4 Model B Rev 1.4 (DT)\n[    0.387289] pstate: 20000005 (nzCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)\n[    0.388308] pc : __flush_work+0x12c/0x138\n[    0.388970] lr : __flush_work+0x80/0x138\n[    0.389620] sp : ffffffc00aaf3c60\n[    0.390139] x29: ffffffc00aaf3d20 x28: ffffffc009c16af0 x27: ffffff80f761df48\n[    0.391316] x26: 0000000000000004 x25: 0000000000000003 x24: 0000000000000100\n[    0.392493] x23: ffffffffffffffff x22: ffffffc009c16b10 x21: ffffffc009c16b28\n[    0.393668] x20: ffffffc009e53861 x19: ffffff80f77fbf40 x18: 00000000d744fcc9\n[    0.394842] x17: 000000000000000b x16: 00000000000001c2 x15: ffffffc009e57550\n[    0.396016] x14: 0000000000000000 x13: ffffffffffffffff x12: 0000000100000000\n[    0.397190] x11: 0000000000000462 x10: ffffff8040258008 x9 : 0000000100000000\n[    0.398364] x8 : 0000000000000000 x7 : ffffffc0093c8bf4 x6 : 0000000000000000\n[    0.399538] x5 : 0000000000000000 x4 : ffffffc00a976e40 x3 : ffffffc00810444c\n[    0.400711] x2 : 0000000000000004 x1 : 0000000000000000 x0 : 0000000000000000\n[    0.401886] Call trace:\n[    0.402309]  __flush_work+0x12c/0x138\n[    0.402941]  schedule_on_each_cpu+0x228/0x278\n[    0.403693]  rcu_tasks_rude_wait_gp+0x130/0x144\n[    0.404502]  rcu_tasks_kthread+0x220/0x254\n[    0.405264]  kthread+0x174/0x1ac\n[    0.405837]  ret_from_fork+0x10/0x20\n[    0.406456] irq event stamp: 102\n[    0.406966] hardirqs last  enabled at (101): [\u003cffffffc0093c8468\u003e] _raw_spin_unlock_irq+0x78/0xb4\n[    0.408304] hardirqs last disabled at (102): [\u003cffffffc0093b8270\u003e] el1_dbg+0x24/0x5c\n[    0.409410] softirqs last  enabled at (54): [\u003cffffffc0081b80c8\u003e] local_bh_enable+0xc/0x2c\n[    0.410645] softirqs last disabled at (50): [\u003cffffffc0081b809c\u003e] local_bh_disable+0xc/0x2c\n[    0.411890] ---[ end trace 0000000000000000 ]---\n[    0.413000] smp: Brought up 1 node, 4 CPUs\n[    0.413762] SMP: Total of 4 processors activated.\n[    0.414566] CPU features: detected: 32-bit EL0 Support\n[    0.415414] CPU features: detected: 32-bit EL1 Support\n[    0.416278] CPU features: detected: CRC32 instructions\n[    0.447021] Callback from call_rcu_tasks_rude() invoked.\n[    0.506693] Callback from call_rcu_tasks() invoked.\n\nThis commit therefore fixes this issue by applying a single-CPU\noptimization to the RCU Tasks Rude grace-period process.  The key point\nhere is that the purpose of this RCU flavor is to force a schedule on\neach online CPU since some past event.  
But the rcu_tasks_rude_wait_gp()\nfunction runs in the context of the RCU Tasks Rude\u0027s grace-period kthread,\nso there must already have been a context switch on the current CPU since\nthe call to either synchronize_rcu_tasks_rude() or call_rcu_tasks_rude().\nSo if there is only a single CPU online, RCU Tasks Rude\u0027s grace-period\nkthread does not need to anything at all.\n\nIt turns out that the rcu_tasks_rude_wait_gp() function\u0027s call to\nschedule_on_each_cpu() causes problems during early boot.  During that\ntime, there is only one online CPU, namely the boot CPU.  Therefore,\napplying this single-CPU optimization fixes early-boot instances of\nthis problem.",
  "id": "GHSA-7xxj-9p5p-q5rj",
  "modified": "2025-10-21T12:31:27Z",
  "published": "2025-10-21T12:31:27Z",
  "references": [
    {
      "type": "ADVISORY",
      "url": "https://nvd.nist.gov/vuln/detail/CVE-2022-49540"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/1c6c3f2336642fb3074593911f5176565f47ec41"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/230bf5878af6038dfb63d9184272a58475236580"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/8f49a8758b5cd541bd7aa9a0d0d11c7426141c0e"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/ba722d061bc4b54802d701fc63fc2fd988934603"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/f75fd4b9221d93177c50dcfde671b2e907f53e86"
    }
  ],
  "schema_version": "1.4.0",
  "severity": [
    {
      "score": "CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:N/I:N/A:H",
      "type": "CVSS_V3"
    }
  ]
}








