GHSA-c3vj-49c5-5wp9

Vulnerability from GitHub – Published: 2024-07-12 15:31 – Updated: 2025-10-06 21:30
Details

In the Linux kernel, the following vulnerability has been resolved:

mm: shmem: fix getting incorrect lruvec when replacing a shmem folio

When testing shmem swapin, I encountered the warning below on my machine. The reason is that replacing an old shmem folio with a new one causes mem_cgroup_migrate() to clear the old folio's memcg data. As a result, the old folio cannot get the correct memcg's lruvec needed to remove itself from the LRU list when it is being freed. This can lead to serious problems, such as LRU list crashes due to holding the wrong LRU lock, and incorrect LRU statistics.
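For context, the lookup that trips this warning lives in the lruvec path used when a folio is released. A simplified sketch, paraphrased from include/linux/memcontrol.h (not the verbatim kernel source):

/*
 * Simplified sketch of folio_lruvec(), which folio_lruvec_lock_irqsave()
 * relies on; paraphrased, not verbatim kernel code.
 */
static inline struct lruvec *folio_lruvec(struct folio *folio)
{
	struct mem_cgroup *memcg = folio_memcg(folio);

	/*
	 * With the old folio's memcg data cleared by mem_cgroup_migrate(),
	 * memcg is NULL here even though memcg is enabled, so this check
	 * fires and the root lruvec (the wrong LRU lock) is used instead.
	 */
	VM_WARN_ON_ONCE_FOLIO(!memcg && !mem_cgroup_disabled(), folio);
	return mem_cgroup_lruvec(memcg, folio_pgdat(folio));
}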

To fix this issue, we can fall back to using mem_cgroup_replace_folio() to replace the old shmem folio.
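The upstream fix (the two git.kernel.org commits listed in the references below) amounts to swapping the helper called from shmem_replace_folio(). A minimal sketch of the change, reconstructed from the commit message rather than copied verbatim from the patch:

--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ shmem_replace_folio()
-	mem_cgroup_migrate(old, new);
+	mem_cgroup_replace_folio(old, new);

Unlike mem_cgroup_migrate(), mem_cgroup_replace_folio() leaves the old folio's memcg data in place, so the old folio can still resolve the correct lruvec (and take the matching LRU lock) when it is released.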

[ 5241.100311] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x5d9960
[ 5241.100317] head: order:4 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[ 5241.100319] flags: 0x17fffe0000040068(uptodate|lru|head|swapbacked|node=0|zone=2|lastcpupid=0x3ffff)
[ 5241.100323] raw: 17fffe0000040068 fffffdffd6687948 fffffdffd69ae008 0000000000000000
[ 5241.100325] raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
[ 5241.100326] head: 17fffe0000040068 fffffdffd6687948 fffffdffd69ae008 0000000000000000
[ 5241.100327] head: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
[ 5241.100328] head: 17fffe0000000204 fffffdffd6665801 ffffffffffffffff 0000000000000000
[ 5241.100329] head: 0000000a00000010 0000000000000000 00000000ffffffff 0000000000000000
[ 5241.100330] page dumped because: VM_WARN_ON_ONCE_FOLIO(!memcg && !mem_cgroup_disabled())
[ 5241.100338] ------------[ cut here ]------------
[ 5241.100339] WARNING: CPU: 19 PID: 78402 at include/linux/memcontrol.h:775 folio_lruvec_lock_irqsave+0x140/0x150
[...]
[ 5241.100374] pc : folio_lruvec_lock_irqsave+0x140/0x150
[ 5241.100375] lr : folio_lruvec_lock_irqsave+0x138/0x150
[ 5241.100376] sp : ffff80008b38b930
[...]
[ 5241.100398] Call trace:
[ 5241.100399]  folio_lruvec_lock_irqsave+0x140/0x150
[ 5241.100401]  __page_cache_release+0x90/0x300
[ 5241.100404]  __folio_put+0x50/0x108
[ 5241.100406]  shmem_replace_folio+0x1b4/0x240
[ 5241.100409]  shmem_swapin_folio+0x314/0x528
[ 5241.100411]  shmem_get_folio_gfp+0x3b4/0x930
[ 5241.100412]  shmem_fault+0x74/0x160
[ 5241.100414]  __do_fault+0x40/0x218
[ 5241.100417]  do_shared_fault+0x34/0x1b0
[ 5241.100419]  do_fault+0x40/0x168
[ 5241.100420]  handle_pte_fault+0x80/0x228
[ 5241.100422]  __handle_mm_fault+0x1c4/0x440
[ 5241.100424]  handle_mm_fault+0x60/0x1f0
[ 5241.100426]  do_page_fault+0x120/0x488
[ 5241.100429]  do_translation_fault+0x4c/0x68
[ 5241.100431]  do_mem_abort+0x48/0xa0
[ 5241.100434]  el0_da+0x38/0xc0
[ 5241.100436]  el0t_64_sync_handler+0x68/0xc0
[ 5241.100437]  el0t_64_sync+0x14c/0x150
[ 5241.100439] ---[ end trace 0000000000000000 ]---

[baolin.wang@linux.alibaba.com: remove less helpful comments, per Matthew] Link: https://lkml.kernel.org/r/ccad3fe1375b468ebca3227b6b729f3eaf9d8046.1718423197.git.baolin.wang@linux.alibaba.com


{
  "affected": [],
  "aliases": [
    "CVE-2024-40949"
  ],
  "database_specific": {
    "cwe_ids": [],
    "github_reviewed": false,
    "github_reviewed_at": null,
    "nvd_published_at": "2024-07-12T13:15:17Z",
    "severity": "MODERATE"
  },
  "details": "In the Linux kernel, the following vulnerability has been resolved:\n\nmm: shmem: fix getting incorrect lruvec when replacing a shmem folio\n\nWhen testing shmem swapin, I encountered the warning below on my machine. \nThe reason is that replacing an old shmem folio with a new one causes\nmem_cgroup_migrate() to clear the old folio\u0027s memcg data.  As a result,\nthe old folio cannot get the correct memcg\u0027s lruvec needed to remove\nitself from the LRU list when it is being freed.  This could lead to\npossible serious problems, such as LRU list crashes due to holding the\nwrong LRU lock, and incorrect LRU statistics.\n\nTo fix this issue, we can fallback to use the mem_cgroup_replace_folio()\nto replace the old shmem folio.\n\n[ 5241.100311] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x5d9960\n[ 5241.100317] head: order:4 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0\n[ 5241.100319] flags: 0x17fffe0000040068(uptodate|lru|head|swapbacked|node=0|zone=2|lastcpupid=0x3ffff)\n[ 5241.100323] raw: 17fffe0000040068 fffffdffd6687948 fffffdffd69ae008 0000000000000000\n[ 5241.100325] raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000\n[ 5241.100326] head: 17fffe0000040068 fffffdffd6687948 fffffdffd69ae008 0000000000000000\n[ 5241.100327] head: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000\n[ 5241.100328] head: 17fffe0000000204 fffffdffd6665801 ffffffffffffffff 0000000000000000\n[ 5241.100329] head: 0000000a00000010 0000000000000000 00000000ffffffff 0000000000000000\n[ 5241.100330] page dumped because: VM_WARN_ON_ONCE_FOLIO(!memcg \u0026\u0026 !mem_cgroup_disabled())\n[ 5241.100338] ------------[ cut here ]------------\n[ 5241.100339] WARNING: CPU: 19 PID: 78402 at include/linux/memcontrol.h:775 folio_lruvec_lock_irqsave+0x140/0x150\n[...]\n[ 5241.100374] pc : folio_lruvec_lock_irqsave+0x140/0x150\n[ 5241.100375] lr : folio_lruvec_lock_irqsave+0x138/0x150\n[ 5241.100376] sp : ffff80008b38b930\n[...]\n[ 5241.100398] Call trace:\n[ 5241.100399]  folio_lruvec_lock_irqsave+0x140/0x150\n[ 5241.100401]  __page_cache_release+0x90/0x300\n[ 5241.100404]  __folio_put+0x50/0x108\n[ 5241.100406]  shmem_replace_folio+0x1b4/0x240\n[ 5241.100409]  shmem_swapin_folio+0x314/0x528\n[ 5241.100411]  shmem_get_folio_gfp+0x3b4/0x930\n[ 5241.100412]  shmem_fault+0x74/0x160\n[ 5241.100414]  __do_fault+0x40/0x218\n[ 5241.100417]  do_shared_fault+0x34/0x1b0\n[ 5241.100419]  do_fault+0x40/0x168\n[ 5241.100420]  handle_pte_fault+0x80/0x228\n[ 5241.100422]  __handle_mm_fault+0x1c4/0x440\n[ 5241.100424]  handle_mm_fault+0x60/0x1f0\n[ 5241.100426]  do_page_fault+0x120/0x488\n[ 5241.100429]  do_translation_fault+0x4c/0x68\n[ 5241.100431]  do_mem_abort+0x48/0xa0\n[ 5241.100434]  el0_da+0x38/0xc0\n[ 5241.100436]  el0t_64_sync_handler+0x68/0xc0\n[ 5241.100437]  el0t_64_sync+0x14c/0x150\n[ 5241.100439] ---[ end trace 0000000000000000 ]---\n\n[baolin.wang@linux.alibaba.com: remove less helpful comments, per Matthew]\n  Link: https://lkml.kernel.org/r/ccad3fe1375b468ebca3227b6b729f3eaf9d8046.1718423197.git.baolin.wang@linux.alibaba.com",
  "id": "GHSA-c3vj-49c5-5wp9",
  "modified": "2025-10-06T21:30:27Z",
  "published": "2024-07-12T15:31:28Z",
  "references": [
    {
      "type": "ADVISORY",
      "url": "https://nvd.nist.gov/vuln/detail/CVE-2024-40949"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/8c6c3719ebb7913f8a665d11816d2e38b0eadbab"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/9094b4a1c76cfe84b906cc152bab34d4ba26fa5c"
    }
  ],
  "schema_version": "1.4.0",
  "severity": [
    {
      "score": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H",
      "type": "CVSS_V3"
    }
  ]
}
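For reference, the CVSS 3.1 vector above (AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H) works out to a base score of 5.5: a local, low-complexity attack requiring only low privileges and no user interaction, with no confidentiality or integrity impact but high availability impact, which matches the MODERATE severity recorded by GitHub.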


