FKIE_CVE-2026-23157

Vulnerability from fkie_nvd - Published: 2026-02-14 16:15 - Updated: 2026-02-18 17:52
Severity: not yet assigned (no CVSS metrics; the record is awaiting analysis)
Summary
In the Linux kernel, the following vulnerability has been resolved:

btrfs: do not strictly require the dirty metadata threshold for metadata writepages

[BUG]
There is an internal report of over 1000 processes waiting at the io_schedule_timeout() call in balance_dirty_pages(), causing a system hang and triggering a kernel coredump. The affected kernel is based on v6.4, but the root problem applies to any upstream kernel before v6.18.

[CAUSE]
First, from Jan Kara's analysis of the dirty page balancing behavior: the cgroup dirty limit is what actually played the role here, because the cgroup had only a small amount of memory, so its dirty limit was around 16MB. Dirty throttling is responsible for enforcing that nobody dirties (significantly) more memory than the dirty limit allows. When a task is dirtying pages it therefore periodically enters balance_dirty_pages(), where it is put to sleep to slow the dirtying down. When the system is already over the dirty limit (either globally or within the running task's cgroup), the task is not allowed to leave balance_dirty_pages() until the number of dirty pages drops below the limit. In this particular case there was a cgroup with a relatively small amount of memory and, as a result, a dirty limit of about 16MB. A task from that cgroup had dirtied about 28MB worth of pages in the btrfs btree inode, and these were practically the only dirty pages in that cgroup.

That means the only way to reduce the cgroup's dirty pages is to write back the dirty pages of the btrfs btree inode; only then can those processes exit balance_dirty_pages().

Back on the btrfs side, btree_writepages() is responsible for writing back dirty btree inode pages. The problem is that btrfs has an internal threshold: if the btree inode's dirty bytes are below 32MiB, it does no writeback at all. This behavior batches as much metadata as possible, so that tree blocks are not written back only to be re-COWed again for another modification. In this case the internal 32MiB threshold is higher than the amount of dirty pages (28MiB), so no writeback ever happens, causing a deadlock between btrfs and the cgroup:

- Btrfs does not want to write back the btree inode until there are more dirty pages.
- Cgroup/MM does not want more dirty pages for the btrfs btree inode.

Thus any process touching that btree inode is put to sleep until the number of dirty pages is reduced. Many thanks to Jan Kara for the analysis of the root cause.

[ENHANCEMENT]
Since kernel commit b55102826d7d ("btrfs: set AS_KERNEL_FILE on the btree_inode"), btrfs btree inode pages are charged only to the root cgroup, which should have a much larger limit than btrfs' 32MiB threshold, so newer kernels should not be affected. All current LTS kernels, however, are affected, and backporting the whole AS_KERNEL_FILE change may not be a good idea. Even for newer kernels it is still worthwhile to get rid of the internal threshold in btree_writepages(), since in most cases cgroup/MM has a better view of overall system memory usage than btrfs' fixed threshold. Internal callers go through btrfs_btree_balance_dirty(), which already performs its own threshold check, so they do not need to change. External callers of btree_writepages(), however, now have their requests respected: whatever they ask to write back is written back, ignoring the internal btrfs threshold, to avoid this deadlock on btree inode dirty page balancing.
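To make the stalemate concrete, here is a small stand-alone C model of the two decisions described above, using the numbers from the report (a 16MB cgroup dirty limit, about 28MB of dirty btree pages, and btrfs' 32MiB background-writeback threshold). This is a toy illustration, not kernel code: the constant name BTRFS_DIRTY_METADATA_THRESH follows the upstream btrfs source, but the two helper functions only mimic the logic of btree_writepages() and of cgroup dirty throttling in balance_dirty_pages().

/* Toy user-space model of the stalemate; NOT kernel code. */
#include <stdbool.h>
#include <stdio.h>

#define MiB (1024ULL * 1024ULL)

/* Upstream btrfs skips background metadata writeback below 32MiB. */
#define BTRFS_DIRTY_METADATA_THRESH (32 * MiB)

/* Pre-fix behavior: background writeback is skipped under the threshold. */
static bool btree_writepages_would_write(unsigned long long dirty_metadata)
{
	return dirty_metadata >= BTRFS_DIRTY_METADATA_THRESH;
}

/* Dirty throttling keeps tasks inside balance_dirty_pages() over the limit. */
static bool task_stuck_in_balance_dirty_pages(unsigned long long cgroup_dirty,
					      unsigned long long cgroup_limit)
{
	return cgroup_dirty > cgroup_limit;
}

int main(void)
{
	/* Numbers from the report: 16MB cgroup limit, ~28MB dirty btree pages. */
	unsigned long long cgroup_limit = 16 * MiB;
	unsigned long long dirty_btree = 28 * MiB;

	bool writeback = btree_writepages_would_write(dirty_btree);
	bool throttled = task_stuck_in_balance_dirty_pages(dirty_btree, cgroup_limit);

	printf("background writeback of btree inode: %s\n", writeback ? "yes" : "no");
	printf("tasks throttled in balance_dirty_pages(): %s\n", throttled ? "yes" : "no");

	if (!writeback && throttled)
		printf("=> stalemate: nothing reduces the dirty pages, tasks never wake up\n");
	return 0;
}

Compiled and run, the model reports that background writeback is skipped while the tasks remain throttled, which is exactly the stalemate the patch breaks.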
Impacted products
Vendor Product Version
(no affected products are listed yet)

{
  "cveTags": [],
  "descriptions": [
    {
      "lang": "en",
      "value": "In the Linux kernel, the following vulnerability has been resolved:\n\nbtrfs: do not strictly require dirty metadata threshold for metadata writepages\n\n[BUG]\nThere is an internal report that over 1000 processes are\nwaiting at the io_schedule_timeout() of balance_dirty_pages(), causing\na system hang and trigger a kernel coredump.\n\nThe kernel is v6.4 kernel based, but the root problem still applies to\nany upstream kernel before v6.18.\n\n[CAUSE]\nFrom Jan Kara for his wisdom on the dirty page balance behavior first.\n\n  This cgroup dirty limit was what was actually playing the role here\n  because the cgroup had only a small amount of memory and so the dirty\n  limit for it was something like 16MB.\n\n  Dirty throttling is responsible for enforcing that nobody can dirty\n  (significantly) more dirty memory than there\u0027s dirty limit. Thus when\n  a task is dirtying pages it periodically enters into balance_dirty_pages()\n  and we let it sleep there to slow down the dirtying.\n\n  When the system is over dirty limit already (either globally or within\n  a cgroup of the running task), we will not let the task exit from\n  balance_dirty_pages() until the number of dirty pages drops below the\n  limit.\n\n  So in this particular case, as I already mentioned, there was a cgroup\n  with relatively small amount of memory and as a result with dirty limit\n  set at 16MB. A task from that cgroup has dirtied about 28MB worth of\n  pages in btrfs btree inode and these were practically the only dirty\n  pages in that cgroup.\n\nSo that means the only way to reduce the dirty pages of that cgroup is\nto writeback the dirty pages of btrfs btree inode, and only after that\nthose processes can exit balance_dirty_pages().\n\nNow back to the btrfs part, btree_writepages() is responsible for\nwriting back dirty btree inode pages.\n\nThe problem here is, there is a btrfs internal threshold that if the\nbtree inode\u0027s dirty bytes are below the 32M threshold, it will not\ndo any writeback.\n\nThis behavior is to batch as much metadata as possible so we won\u0027t write\nback those tree blocks and then later re-COW them again for another\nmodification.\n\nThis internal 32MiB is higher than the existing dirty page size (28MiB),\nmeaning no writeback will happen, causing a deadlock between btrfs and\ncgroup:\n\n- Btrfs doesn\u0027t want to write back btree inode until more dirty pages\n\n- Cgroup/MM doesn\u0027t want more dirty pages for btrfs btree inode\n  Thus any process touching that btree inode is put into sleep until\n  the number of dirty pages is reduced.\n\nThanks Jan Kara a lot for the analysis of the root cause.\n\n[ENHANCEMENT]\nSince kernel commit b55102826d7d (\"btrfs: set AS_KERNEL_FILE on the\nbtree_inode\"), btrfs btree inode pages will only be charged to the root\ncgroup which should have a much larger limit than btrfs\u0027 32MiB\nthreshold.\nSo it should not affect newer kernels.\n\nBut for all current LTS kernels, they are all affected by this problem,\nand backporting the whole AS_KERNEL_FILE may not be a good idea.\n\nEven for newer kernels I still think it\u0027s a good idea to get\nrid of the internal threshold at btree_writepages(), since for most cases\ncgroup/MM has a better view of full system memory usage than btrfs\u0027 fixed\nthreshold.\n\nFor internal callers using btrfs_btree_balance_dirty() since that\nfunction is already doing internal threshold check, we don\u0027t need to\nbother them.\n\nBut for external callers of btree_writepages(), just 
respect their\nrequests and write back whatever they want, ignoring the internal\nbtrfs threshold to avoid such deadlock on btree inode dirty page\nbalancing."
    },
    {
      "lang": "es",
      "value": "En el kernel de Linux, la siguiente vulnerabilidad ha sido resuelta:  btrfs: no requerir estrictamente el umbral de metadatos sucios para la escritura de p\u00e1ginas de metadatos  [ERROR] Existe un informe interno de que m\u00e1s de 1000 procesos est\u00e1n esperando en el io_schedule_timeout() de balance_dirty_pages(), causando un cuelgue del sistema y desencadenando un volcado de memoria del kernel.  El kernel est\u00e1 basado en el kernel v6.4, pero el problema ra\u00edz todav\u00eda se aplica a cualquier kernel upstream anterior a la v6.18.  [CAUSA] De Jan Kara por su sabidur\u00eda sobre el comportamiento de balanceo de p\u00e1ginas sucias primero.    Este l\u00edmite de suciedad del cgroup era lo que realmente estaba desempe\u00f1ando el papel aqu\u00ed porque el cgroup ten\u00eda solo una peque\u00f1a cantidad de memoria y por lo tanto el l\u00edmite de suciedad para \u00e9l era de aproximadamente 16MB.    La limitaci\u00f3n de suciedad es responsable de asegurar que nadie pueda ensuciar (significativamente) m\u00e1s memoria sucia de lo que hay de l\u00edmite de suciedad. As\u00ed, cuando una tarea est\u00e1 ensuciando p\u00e1ginas, entra peri\u00f3dicamente en balance_dirty_pages() y la dejamos dormir all\u00ed para ralentizar el ensuciamiento.    Cuando el sistema ya est\u00e1 por encima del l\u00edmite de suciedad (ya sea globalmente o dentro de un cgroup de la tarea en ejecuci\u00f3n), no permitiremos que la tarea salga de balance_dirty_pages() hasta que el n\u00famero de p\u00e1ginas sucias caiga por debajo del l\u00edmite.    As\u00ed que en este caso particular, como ya mencion\u00e9, hab\u00eda un cgroup con una cantidad de memoria relativamente peque\u00f1a y como resultado con un l\u00edmite de suciedad establecido en 16MB. Una tarea de ese cgroup ha ensuciado p\u00e1ginas por un valor de aproximadamente 28MB en el inodo btree de btrfs y estas eran pr\u00e1cticamente las \u00fanicas p\u00e1ginas sucias en ese cgroup.  As\u00ed que eso significa que la \u00fanica forma de reducir las p\u00e1ginas sucias de ese cgroup es realizar el writeback de las p\u00e1ginas sucias del inodo btree de btrfs, y solo despu\u00e9s de eso esos procesos pueden salir de balance_dirty_pages().  Ahora volviendo a la parte de btrfs, btree_writepages() es responsable de realizar el writeback de las p\u00e1ginas sucias del inodo btree.  El problema aqu\u00ed es que hay un umbral interno de btrfs que si los bytes sucios del inodo btree est\u00e1n por debajo del umbral de 32M, no realizar\u00e1 ning\u00fan writeback.  Este comportamiento es para agrupar la mayor cantidad posible de metadatos para que no escribamos de vuelta esos bloques de \u00e1rbol y luego los volvamos a copiar en escritura (re-COW) para otra modificaci\u00f3n.  Estos 32MiB internos son m\u00e1s altos que el tama\u00f1o de p\u00e1gina sucia existente (28MiB), lo que significa que no se realizar\u00e1 ning\u00fan writeback, causando un interbloqueo entre btrfs y cgroup:  - Btrfs no quiere realizar el writeback del inodo btree hasta que haya m\u00e1s p\u00e1ginas sucias  - Cgroup/MM no quiere m\u00e1s p\u00e1ginas sucias para el inodo btree de btrfs   As\u00ed, cualquier proceso que toque ese inodo btree es puesto a dormir hasta que el n\u00famero de p\u00e1ginas sucias se reduzca.  Muchas gracias a Jan Kara por el an\u00e1lisis de la causa ra\u00edz.  
[MEJORA] Desde el commit del kernel b55102826d7d (\u0027btrfs: establecer AS_KERNEL_FILE en el btree_inode\u0027), las p\u00e1ginas del inodo btree de btrfs solo se cargar\u00e1n al cgroup ra\u00edz, el cual deber\u00eda tener un l\u00edmite mucho mayor que el umbral de 32MiB de btrfs. As\u00ed que no deber\u00eda afectar a kernels m\u00e1s nuevos.  Pero para todos los kernels LTS actuales, todos est\u00e1n afectados por este problema, y realizar un backport de todo el AS_KERNEL_FILE puede no ser una buena idea.  Incluso para kernels m\u00e1s nuevos, sigo pensando que es una buena idea eliminar el umbral interno en btree_writepages(), ya que en la mayor\u00eda de los casos cgroup/MM tiene una mejor visi\u00f3n del uso de la memoria de todo el sistema que el umbral fijo de btrfs.  Para los llamadores internos que usan btrfs_btree_balance_dirty(), ya que esa funci\u00f3n ya est\u00e1 realizando una comprobaci\u00f3n de umbral interna, ---truncado---"
    }
  ],
  "id": "CVE-2026-23157",
  "lastModified": "2026-02-18T17:52:44.520",
  "metrics": {},
  "published": "2026-02-14T16:15:55.863",
  "references": [
    {
      "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
      "url": "https://git.kernel.org/stable/c/4e159150a9a56d66d247f4b5510bed46fe58aa1c"
    },
    {
      "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
      "url": "https://git.kernel.org/stable/c/629666d20c7dcd740e193ec0631fdff035b1f7d6"
    }
  ],
  "sourceIdentifier": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
  "vulnStatus": "Awaiting Analysis"
}
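
The [ENHANCEMENT] section of the description distinguishes two callers: internal ones, which already go through the threshold check in btrfs_btree_balance_dirty(), and external ones, whose writeback requests are now honoured regardless of the threshold. The sketch below extends the toy model from the summary to show that split; the "external_caller" flag is purely illustrative and is not the real kernel interface (the actual change is in the commits referenced above).

/* Toy model of the post-fix decision; NOT kernel code. */
#include <stdbool.h>
#include <stdio.h>

#define MiB (1024ULL * 1024ULL)
#define BTRFS_DIRTY_METADATA_THRESH (32 * MiB)

/*
 * After the fix: external writeback requests are always honoured, while
 * btrfs' own ratelimit path (btrfs_btree_balance_dirty()) keeps batching
 * metadata below the threshold to avoid re-COWing recently written tree
 * blocks.  "external_caller" is an illustrative flag only.
 */
static bool should_write_back(unsigned long long dirty_metadata,
			      bool external_caller)
{
	if (external_caller)
		return true;
	return dirty_metadata >= BTRFS_DIRTY_METADATA_THRESH;
}

int main(void)
{
	unsigned long long dirty_btree = 28 * MiB;	/* from the report */

	printf("external writeback request: %s\n",
	       should_write_back(dirty_btree, true) ? "written back" : "skipped");
	printf("internal ratelimit path:    %s\n",
	       should_write_back(dirty_btree, false) ? "written back" : "skipped");
	return 0;
}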

