Commit 9e36ceb

David Hildenbrand (Arm) authored and gregkh committed
mm/memory: fix PMD/PUD checks in follow_pfnmap_start()
[ Upstream commit ffef67b ]

follow_pfnmap_start() suffers from two problems:

(1) We are not re-fetching the pmd/pud after taking the PTL

Therefore, we are not properly stabilizing what the lock actually
protects. If there is concurrent zapping, we would indicate to the
caller that we found an entry; however, that entry might already have
been invalidated, or contain a different PFN, after taking the lock.
Properly use pmdp_get() / pudp_get() after taking the lock.

(2) pmd_leaf() / pud_leaf() are not well defined on non-present entries

pmd_leaf()/pud_leaf() could wrongly trigger on non-present entries:
there is no real guarantee that they return something reasonable in
that case. Most architectures indeed either perform a present check or
make it work through smart use of flags. However, loongarch, for
example, checks the _PAGE_HUGE flag in pmd_leaf() and always sets the
_PAGE_HUGE flag in __swp_entry_to_pmd(); whereas pmd_trans_huge()
explicitly checks pmd_present(), pmd_leaf() does not.

Let's check pmd_present()/pud_present() before assuming "this is a
present PMD/PUD leaf" when spotting pmd_leaf()/pud_leaf(), like other
page table handling code that traverses user page tables does.

Given that non-present PMD entries are likely rare in VM_IO|VM_PFNMAP
mappings, (1) is likely more relevant than (2). It is questionable how
often (1) would actually trigger, but let's CC stable to be sure. This
was found by code inspection.
Link: https://lkml.kernel.org/r/20260323-follow_pfnmap_fix-v1-1-5b0ec10872b3@kernel.org
Fixes: 6da8e96 ("mm: new follow_pfnmap API")
Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
1 parent cef18bb commit 9e36ceb

1 file changed

Lines changed: 15 additions & 3 deletions

File tree

mm/memory.c

@@ -6697,11 +6697,16 @@ int follow_pfnmap_start(struct follow_pfnmap_args *args)
 
 	pudp = pud_offset(p4dp, address);
 	pud = pudp_get(pudp);
-	if (pud_none(pud))
+	if (!pud_present(pud))
 		goto out;
 	if (pud_leaf(pud)) {
 		lock = pud_lock(mm, pudp);
-		if (!unlikely(pud_leaf(pud))) {
+		pud = pudp_get(pudp);
+
+		if (unlikely(!pud_present(pud))) {
+			spin_unlock(lock);
+			goto out;
+		} else if (unlikely(!pud_leaf(pud))) {
 			spin_unlock(lock);
 			goto retry;
 		}
@@ -6713,9 +6718,16 @@ int follow_pfnmap_start(struct follow_pfnmap_args *args)
 
 	pmdp = pmd_offset(pudp, address);
 	pmd = pmdp_get_lockless(pmdp);
+	if (!pmd_present(pmd))
+		goto out;
 	if (pmd_leaf(pmd)) {
 		lock = pmd_lock(mm, pmdp);
-		if (!unlikely(pmd_leaf(pmd))) {
+		pmd = pmdp_get(pmdp);
+
+		if (unlikely(!pmd_present(pmd))) {
+			spin_unlock(lock);
+			goto out;
+		} else if (unlikely(!pmd_leaf(pmd))) {
 			spin_unlock(lock);
 			goto retry;
 		}
