DEBIAN-CVE-2025-39879

Source
https://security-tracker.debian.org/tracker/CVE-2025-39879
Import Source
https://storage.googleapis.com/debian-osv/debian-cve-osv/DEBIAN-CVE-2025-39879.json
JSON Data
https://api.osv.dev/v1/vulns/DEBIAN-CVE-2025-39879
Upstream
Published
2025-09-23T06:15:47.523Z
Modified
2026-04-28T20:30:07.979641Z
Severity
  • 5.5 (Medium) CVSS_V3 - CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
Summary
[none]
Details

In the Linux kernel, the following vulnerability has been resolved:

ceph: always call ceph_shift_unused_folios_left()

The function ceph_process_folio_batch() sets folio_batch entries to NULL, which is an illegal state. Before folio_batch_release() crashes due to this API violation, the function ceph_shift_unused_folios_left() is supposed to remove those NULLs from the array.

However, since commit ce80b76dd327 ("ceph: introduce ceph_process_folio_batch() method"), this shifting doesn't happen anymore because the "for" loop got moved to ceph_process_folio_batch(), and now the "i" variable that remains in ceph_writepages_start() doesn't get incremented anymore, making the shifting effectively unreachable much of the time.

Later, commit 1551ec61dc55 ("ceph: introduce ceph_submit_write() method") added more preconditions for doing the shift, replacing the "i" check (with something that is still just as broken):

- if ceph_process_folio_batch() fails, shifting never happens
- if ceph_move_dirty_page_in_page_array() was never called (because ceph_process_folio_batch() has returned early for some of various reasons), shifting never happens
- if processed_in_fbatch is zero (because ceph_process_folio_batch() has returned early for some of the reasons mentioned above or because ceph_move_dirty_page_in_page_array() has failed), shifting never happens

Since those two commits, any problem in ceph_process_folio_batch() could crash the kernel, e.g. this way:

 BUG: kernel NULL pointer dereference, address: 0000000000000034
 #PF: supervisor write access in kernel mode
 #PF: error_code(0x0002) - not-present page
 PGD 0 P4D 0
 Oops: Oops: 0002 [#1] SMP NOPTI
 CPU: 172 UID: 0 PID: 2342707 Comm: kworker/u778:8 Not tainted 6.15.10-cm4all1-es #714 NONE
 Hardware name: Dell Inc. PowerEdge R7615/0G9DHV, BIOS 1.6.10 12/08/2023
 Workqueue: writeback wb_workfn (flush-ceph-1)
 RIP: 0010:folios_put_refs+0x85/0x140
 Code: 83 c5 01 39 e8 7e 76 48 63 c5 49 8b 5c c4 08 b8 01 00 00 00 4d 85 ed 74 05 41 8b 44 ad 00 48 8b 15 b0 >
 RSP: 0018:ffffb880af8db778 EFLAGS: 00010207
 RAX: 0000000000000001 RBX: 0000000000000000 RCX: 0000000000000003
 RDX: ffffe377cc3b0000 RSI: 0000000000000000 RDI: ffffb880af8db8c0
 RBP: 0000000000000000 R08: 000000000000007d R09: 000000000102b86f
 R10: 0000000000000001 R11: 00000000000000ac R12: ffffb880af8db8c0
 R13: 0000000000000000 R14: 0000000000000000 R15: ffff9bd262c97000
 FS: 0000000000000000(0000) GS:ffff9c8efc303000(0000) knlGS:0000000000000000
 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 CR2: 0000000000000034 CR3: 0000000160958004 CR4: 0000000000770ef0
 PKRU: 55555554
 Call Trace:
 <TASK>
 ceph_writepages_start+0xeb9/0x1410

The crash can be reproduced easily by changing the ceph_check_page_before_write() return value to -E2BIG.

(Interestingly, the crash happens only if huge_zero_folio has already been allocated; without huge_zero_folio, is_huge_zero_folio(NULL) returns true and folios_put_refs() skips NULL entries instead of dereferencing them. That makes reproducing the bug somewhat unreliable. See https://lore.kernel.org/20250826231626.218675-1-max.kellermann@ionos.com for a discussion of this detail.)

My suggestion is to move the ceph_shift_unused_folios_left() call to right after ceph_process_folio_batch() to ensure it always gets called to fix up the illegal folio_batch state.

References

Affected packages

Debian:14 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (unknown introduced version / all previous versions are affected)
Fixed
6.16.8-1

Affected versions

6.*
6.12.38-1
6.12.41-1
6.12.43-1~bpo12+1
6.12.43-1
6.12.48-1
6.12.57-1~bpo12+1
6.12.57-1
6.12.63-1~bpo12+1
6.12.63-1
6.12.69-1~bpo12+1
6.12.69-1
6.12.73-1~bpo12+1
6.12.73-1
6.12.74-1
6.12.74-2~bpo12+1
6.12.74-2
6.13~rc6-1~exp1
6.13~rc7-1~exp1
6.13.2-1~exp1
6.13.3-1~exp1
6.13.4-1~exp1
6.13.5-1~exp1
6.13.6-1~exp1
6.13.7-1~exp1
6.13.8-1~exp1
6.13.9-1~exp1
6.13.10-1~exp1
6.13.11-1~exp1
6.14.3-1~exp1
6.14.5-1~exp1
6.14.6-1~exp1
6.15~rc7-1~exp1
6.15-1~exp1
6.15.1-1~exp1
6.15.2-1~exp1
6.15.3-1~exp1
6.15.4-1~exp1
6.15.5-1~exp1
6.15.6-1~exp1
6.16~rc7-1~exp1
6.16-1~exp1
6.16.1-1~exp1
6.16.3-1~bpo13+1
6.16.3-1
6.16.5-1
6.16.6-1
6.16.7-1

Ecosystem specific

{
    "urgency": "not yet assigned"
}

Database specific

source
"https://storage.googleapis.com/debian-osv/debian-cve-osv/DEBIAN-CVE-2025-39879.json"