In the Linux kernel, the following vulnerability has been resolved:

mm/ptdump: take the memory hotplug lock inside ptdump_walk_pgd()

Memory hot remove unmaps and tears down various kernel page table regions as required. The ptdump code can race with concurrent modifications of the kernel page tables. When leaf entries are modified concurrently, the dump code may log stale or inconsistent information for a VA range, but this is otherwise not harmful.

But when intermediate levels of the kernel page table are freed, the dump code will continue to use memory that has been freed and potentially reallocated for another purpose. In such cases, the ptdump code may dereference bogus addresses, leading to a number of potential problems.

To avoid the above mentioned race condition, platforms such as arm64, riscv and s390 take the memory hotplug lock while dumping kernel page tables via the debugfs interface /sys/kernel/debug/kernel_page_tables.

A similar race condition exists while checking for pages that might have been marked W+X via /sys/kernel/debug/kernel_page_tables/check_wx_pages, which in turn calls ptdump_check_wx(). Instead of solving this race condition again, let's just move the memory hotplug lock inside the generic ptdump_check_wx(), which will benefit both scenarios.

Drop the get_online_mems() and put_online_mems() combination from all existing platform ptdump code paths.
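
As a rough illustration of the fix (a minimal sketch, not the upstream diff), the memory hotplug lock is taken once inside the generic ptdump walker named in the subject line, so that both the kernel_page_tables dump and the W+X check are protected; the exact placement and the walk body shown here are assumptions for illustration only:

    #include <linux/memory_hotplug.h>  /* get_online_mems(), put_online_mems() */
    #include <linux/mm.h>
    #include <linux/ptdump.h>

    /*
     * Sketch only: the walk body is elided and the placement is assumed from
     * the subject line. Holding the memory hotplug lock across the walk keeps
     * memory hot remove from freeing intermediate page table levels while
     * ptdump is still traversing them.
     */
    void ptdump_walk_pgd(struct ptdump_state *st, struct mm_struct *mm, pgd_t *pgd)
    {
            get_online_mems();      /* block memory hot remove for the duration */
            mmap_write_lock(mm);

            /* ... walk st->range and report entries via st->note_page() ... */

            mmap_write_unlock(mm);
            put_online_mems();
    }

With the lock taken in generic code, the arm64, riscv and s390 ptdump paths no longer need their own get_online_mems()/put_online_mems() pairs around the walk, which is why those calls are dropped from the platform code paths.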