ghsa-4qcc-96j2-gv42
Vulnerability from github
Published
2024-05-20 12:30
Modified
2024-05-20 12:30
Details

In the Linux kernel, the following vulnerability has been resolved:

ice: fix LAG and VF lock dependency in ice_reset_vf()

Since commit 9f74a3dfcf83 ("ice: Fix VF Reset paths when interface in a failed over aggregate"), the ice driver has acquired the LAG mutex in ice_reset_vf(). That commit placed this lock acquisition just prior to the acquisition of the VF configuration lock.

If ice_reset_vf() acquires the configuration lock via the ICE_VF_RESET_LOCK flag, this could deadlock with ice_vc_cfg_qs_msg(), because that path always acquires the locks in the opposite order: the VF configuration lock first, then the LAG mutex.
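
In outline, the two code paths take the same pair of mutexes in opposite orders, the classic AB-BA deadlock pattern. The sketch below is illustrative only: the struct layouts, flag value, and function shapes are simplified stand-ins for the real ice driver code (in the driver, ice_vc_cfg_qs_msg() runs with the VF configuration lock already held by its caller), but the lock names match those in the lockdep report that follows.

#include <linux/mutex.h>
#include <linux/types.h>

/* Simplified stand-ins for the relevant ice driver structures. */
struct ice_pf { struct mutex lag_mutex; };
struct ice_vf { struct mutex cfg_lock; struct ice_pf *pf; };

#define ICE_VF_RESET_LOCK 0x1	/* stand-in value for the real flag */

/* Path 1: the reset path. LAG mutex first, then (optionally) the VF
 * configuration lock. */
static void reset_path_sketch(struct ice_pf *pf, struct ice_vf *vf, u32 flags)
{
	mutex_lock(&pf->lag_mutex);
	if (flags & ICE_VF_RESET_LOCK)
		mutex_lock(&vf->cfg_lock);

	/* ... reset the VF ... */

	if (flags & ICE_VF_RESET_LOCK)
		mutex_unlock(&vf->cfg_lock);
	mutex_unlock(&pf->lag_mutex);
}

/* Path 2: the virtchnl queue-configuration path. VF configuration lock
 * first, then the LAG mutex. */
static void cfg_qs_path_sketch(struct ice_pf *pf, struct ice_vf *vf)
{
	mutex_lock(&vf->cfg_lock);
	mutex_lock(&pf->lag_mutex);

	/* ... configure the VF queues ... */

	mutex_unlock(&pf->lag_mutex);
	mutex_unlock(&vf->cfg_lock);
}

If one worker enters path 1 and takes pf->lag_mutex while another enters path 2 and takes vf->cfg_lock, each then blocks forever waiting for the lock the other holds; this is exactly the CPU0/CPU1 scenario lockdep prints below.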

Lockdep reports this violation almost immediately on creating and then removing 2 VFs:

======================================================
WARNING: possible circular locking dependency detected
6.8.0-rc6 #54 Tainted: G        W  O
------------------------------------------------------
kworker/60:3/6771 is trying to acquire lock:
ff40d43e099380a0 (&vf->cfg_lock){+.+.}-{3:3}, at: ice_reset_vf+0x22f/0x4d0 [ice]

but task is already holding lock:
ff40d43ea1961210 (&pf->lag_mutex){+.+.}-{3:3}, at: ice_reset_vf+0xb7/0x4d0 [ice]

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&pf->lag_mutex){+.+.}-{3:3}:
       __lock_acquire+0x4f8/0xb40
       lock_acquire+0xd4/0x2d0
       __mutex_lock+0x9b/0xbf0
       ice_vc_cfg_qs_msg+0x45/0x690 [ice]
       ice_vc_process_vf_msg+0x4f5/0x870 [ice]
       __ice_clean_ctrlq+0x2b5/0x600 [ice]
       ice_service_task+0x2c9/0x480 [ice]
       process_one_work+0x1e9/0x4d0
       worker_thread+0x1e1/0x3d0
       kthread+0x104/0x140
       ret_from_fork+0x31/0x50
       ret_from_fork_asm+0x1b/0x30

-> #0 (&vf->cfg_lock){+.+.}-{3:3}:
       check_prev_add+0xe2/0xc50
       validate_chain+0x558/0x800
       __lock_acquire+0x4f8/0xb40
       lock_acquire+0xd4/0x2d0
       __mutex_lock+0x9b/0xbf0
       ice_reset_vf+0x22f/0x4d0 [ice]
       ice_process_vflr_event+0x98/0xd0 [ice]
       ice_service_task+0x1cc/0x480 [ice]
       process_one_work+0x1e9/0x4d0
       worker_thread+0x1e1/0x3d0
       kthread+0x104/0x140
       ret_from_fork+0x31/0x50
       ret_from_fork_asm+0x1b/0x30

other info that might help us debug this:
 Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(&pf->lag_mutex);
                               lock(&vf->cfg_lock);
                               lock(&pf->lag_mutex);
  lock(&vf->cfg_lock);

 *** DEADLOCK ***
4 locks held by kworker/60:3/6771:
 #0: ff40d43e05428b38 ((wq_completion)ice){+.+.}-{0:0}, at: process_one_work+0x176/0x4d0
 #1: ff50d06e05197e58 ((work_completion)(&pf->serv_task)){+.+.}-{0:0}, at: process_one_work+0x176/0x4d0
 #2: ff40d43ea1960e50 (&pf->vfs.table_lock){+.+.}-{3:3}, at: ice_process_vflr_event+0x48/0xd0 [ice]
 #3: ff40d43ea1961210 (&pf->lag_mutex){+.+.}-{3:3}, at: ice_reset_vf+0xb7/0x4d0 [ice]

stack backtrace:
CPU: 60 PID: 6771 Comm: kworker/60:3 Tainted: G        W  O       6.8.0-rc6 #54
Hardware name:
Workqueue: ice ice_service_task [ice]
Call Trace:
 <TASK>
 dump_stack_lvl+0x4a/0x80
 check_noncircular+0x12d/0x150
 check_prev_add+0xe2/0xc50
 ? save_trace+0x59/0x230
 ? add_chain_cache+0x109/0x450
 validate_chain+0x558/0x800
 __lock_acquire+0x4f8/0xb40
 ? lockdep_hardirqs_on+0x7d/0x100
 lock_acquire+0xd4/0x2d0
 ? ice_reset_vf+0x22f/0x4d0 [ice]
 ? lock_is_held_type+0xc7/0x120
 __mutex_lock+0x9b/0xbf0
 ? ice_reset_vf+0x22f/0x4d0 [ice]
 ? ice_reset_vf+0x22f/0x4d0 [ice]
 ? rcu_is_watching+0x11/0x50
 ? ice_reset_vf+0x22f/0x4d0 [ice]
 ice_reset_vf+0x22f/0x4d0 [ice]
 ? process_one_work+0x176/0x4d0
 ice_process_vflr_event+0x98/0xd0 [ice]
 ice_service_task+0x1cc/0x480 [ice]
 process_one_work+0x1e9/0x4d0
 worker_thread+0x1e1/0x3d0
 ? __pfx_worker_thread+0x10/0x10
 kthread+0x104/0x140
 ? __pfx_kthread+0x10/0x10
 ret_from_fork+0x31/0x50
 ? __pfx_kthread+0x10/0x10
 ret_from_fork_asm+0x1b/0x30
 </TASK>

To avoid deadlock, we must acquire the LAG ---truncated---
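
The commit message is truncated at this point, but the standard remedy for an AB-BA inversion is to make every path take the two locks in the same order. A minimal sketch of that principle, reusing the stand-in definitions from the earlier sketch and assuming the reset path is rearranged to take the VF configuration lock before the LAG mutex (the function name below is illustrative, not the exact upstream diff referenced in the links that follow):

/* Consistent ordering: take vf->cfg_lock before pf->lag_mutex, matching the
 * order already used by the virtchnl message path. Illustrative only. */
static void reset_path_reordered_sketch(struct ice_pf *pf, struct ice_vf *vf,
					u32 flags)
{
	if (flags & ICE_VF_RESET_LOCK)
		mutex_lock(&vf->cfg_lock);
	mutex_lock(&pf->lag_mutex);

	/* ... reset the VF ... */

	mutex_unlock(&pf->lag_mutex);
	if (flags & ICE_VF_RESET_LOCK)
		mutex_unlock(&vf->cfg_lock);
}

With both paths acquiring vf->cfg_lock first and pf->lag_mutex second, the circular dependency reported above can no longer form.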



{
  "affected": [],
  "aliases": [
    "CVE-2024-36003"
  ],
  "database_specific": {
    "cwe_ids": [],
    "github_reviewed": false,
    "github_reviewed_at": null,
    "nvd_published_at": "2024-05-20T10:15:14Z",
    "severity": null
  },
  "details": "In the Linux kernel, the following vulnerability has been resolved:\n\nice: fix LAG and VF lock dependency in ice_reset_vf()\n\n9f74a3dfcf83 (\"ice: Fix VF Reset paths when interface in a failed over\naggregate\"), the ice driver has acquired the LAG mutex in ice_reset_vf().\nThe commit placed this lock acquisition just prior to the acquisition of\nthe VF configuration lock.\n\nIf ice_reset_vf() acquires the configuration lock via the ICE_VF_RESET_LOCK\nflag, this could deadlock with ice_vc_cfg_qs_msg() because it always\nacquires the locks in the order of the VF configuration lock and then the\nLAG mutex.\n\nLockdep reports this violation almost immediately on creating and then\nremoving 2 VF:\n\n======================================================\nWARNING: possible circular locking dependency detected\n6.8.0-rc6 #54 Tainted: G        W  O\n------------------------------------------------------\nkworker/60:3/6771 is trying to acquire lock:\nff40d43e099380a0 (\u0026vf-\u003ecfg_lock){+.+.}-{3:3}, at: ice_reset_vf+0x22f/0x4d0 [ice]\n\nbut task is already holding lock:\nff40d43ea1961210 (\u0026pf-\u003elag_mutex){+.+.}-{3:3}, at: ice_reset_vf+0xb7/0x4d0 [ice]\n\nwhich lock already depends on the new lock.\n\nthe existing dependency chain (in reverse order) is:\n\n-\u003e #1 (\u0026pf-\u003elag_mutex){+.+.}-{3:3}:\n       __lock_acquire+0x4f8/0xb40\n       lock_acquire+0xd4/0x2d0\n       __mutex_lock+0x9b/0xbf0\n       ice_vc_cfg_qs_msg+0x45/0x690 [ice]\n       ice_vc_process_vf_msg+0x4f5/0x870 [ice]\n       __ice_clean_ctrlq+0x2b5/0x600 [ice]\n       ice_service_task+0x2c9/0x480 [ice]\n       process_one_work+0x1e9/0x4d0\n       worker_thread+0x1e1/0x3d0\n       kthread+0x104/0x140\n       ret_from_fork+0x31/0x50\n       ret_from_fork_asm+0x1b/0x30\n\n-\u003e #0 (\u0026vf-\u003ecfg_lock){+.+.}-{3:3}:\n       check_prev_add+0xe2/0xc50\n       validate_chain+0x558/0x800\n       __lock_acquire+0x4f8/0xb40\n       lock_acquire+0xd4/0x2d0\n       __mutex_lock+0x9b/0xbf0\n       ice_reset_vf+0x22f/0x4d0 [ice]\n       ice_process_vflr_event+0x98/0xd0 [ice]\n       ice_service_task+0x1cc/0x480 [ice]\n       process_one_work+0x1e9/0x4d0\n       worker_thread+0x1e1/0x3d0\n       kthread+0x104/0x140\n       ret_from_fork+0x31/0x50\n       ret_from_fork_asm+0x1b/0x30\n\nother info that might help us debug this:\n Possible unsafe locking scenario:\n       CPU0                    CPU1\n       ----                    ----\n  lock(\u0026pf-\u003elag_mutex);\n                               lock(\u0026vf-\u003ecfg_lock);\n                               lock(\u0026pf-\u003elag_mutex);\n  lock(\u0026vf-\u003ecfg_lock);\n\n *** DEADLOCK ***\n4 locks held by kworker/60:3/6771:\n #0: ff40d43e05428b38 ((wq_completion)ice){+.+.}-{0:0}, at: process_one_work+0x176/0x4d0\n #1: ff50d06e05197e58 ((work_completion)(\u0026pf-\u003eserv_task)){+.+.}-{0:0}, at: process_one_work+0x176/0x4d0\n #2: ff40d43ea1960e50 (\u0026pf-\u003evfs.table_lock){+.+.}-{3:3}, at: ice_process_vflr_event+0x48/0xd0 [ice]\n #3: ff40d43ea1961210 (\u0026pf-\u003elag_mutex){+.+.}-{3:3}, at: ice_reset_vf+0xb7/0x4d0 [ice]\n\nstack backtrace:\nCPU: 60 PID: 6771 Comm: kworker/60:3 Tainted: G        W  O       6.8.0-rc6 #54\nHardware name:\nWorkqueue: ice ice_service_task [ice]\nCall Trace:\n \u003cTASK\u003e\n dump_stack_lvl+0x4a/0x80\n check_noncircular+0x12d/0x150\n check_prev_add+0xe2/0xc50\n ? save_trace+0x59/0x230\n ? add_chain_cache+0x109/0x450\n validate_chain+0x558/0x800\n __lock_acquire+0x4f8/0xb40\n ? 
lockdep_hardirqs_on+0x7d/0x100\n lock_acquire+0xd4/0x2d0\n ? ice_reset_vf+0x22f/0x4d0 [ice]\n ? lock_is_held_type+0xc7/0x120\n __mutex_lock+0x9b/0xbf0\n ? ice_reset_vf+0x22f/0x4d0 [ice]\n ? ice_reset_vf+0x22f/0x4d0 [ice]\n ? rcu_is_watching+0x11/0x50\n ? ice_reset_vf+0x22f/0x4d0 [ice]\n ice_reset_vf+0x22f/0x4d0 [ice]\n ? process_one_work+0x176/0x4d0\n ice_process_vflr_event+0x98/0xd0 [ice]\n ice_service_task+0x1cc/0x480 [ice]\n process_one_work+0x1e9/0x4d0\n worker_thread+0x1e1/0x3d0\n ? __pfx_worker_thread+0x10/0x10\n kthread+0x104/0x140\n ? __pfx_kthread+0x10/0x10\n ret_from_fork+0x31/0x50\n ? __pfx_kthread+0x10/0x10\n ret_from_fork_asm+0x1b/0x30\n \u003c/TASK\u003e\n\nTo avoid deadlock, we must acquire the LAG \n---truncated---",
  "id": "GHSA-4qcc-96j2-gv42",
  "modified": "2024-05-20T12:30:30Z",
  "published": "2024-05-20T12:30:30Z",
  "references": [
    {
      "type": "ADVISORY",
      "url": "https://nvd.nist.gov/vuln/detail/CVE-2024-36003"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/740717774dc37338404d10726967d582414f638c"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/96fdd1f6b4ed72a741fb0eb705c0e13049b8721f"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/de8631d8c9df08440268630200e64b623a5f69e6"
    }
  ],
  "schema_version": "1.4.0",
  "severity": []
}

