
Commit 458568c

xzpeter authored and akpm00 committed
mm/hugetlb: prepare hugetlb_follow_page_mask() for FOLL_PIN
follow_page() doesn't use FOLL_PIN, meanwhile hugetlb seems to not be the target of FOLL_WRITE either. However add the checks.

Namely, either the need to CoW due to missing write bit, or proper unsharing on !AnonExclusive pages over R/O pins to reject the follow page. That brings this function closer to follow_hugetlb_page().

So we don't care before, and also for now. But we'll care if we switch over slow-gup to use hugetlb_follow_page_mask(). We'll also care when to return -EMLINK properly, as that's the gup internal api to mean "we should unshare". Not really needed for follow page path, though.

When at it, switching the try_grab_page() to use WARN_ON_ONCE(), to be clear that it just should never fail. When error happens, instead of setting page==NULL, capture the errno instead.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Mike Kravetz <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: James Houghton <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: John Hubbard <[email protected]>
Cc: Kirill A . Shutemov <[email protected]>
Cc: Lorenzo Stoakes <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Mike Rapoport (IBM) <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Yang Shi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent dd767aa commit 458568c
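The message above introduces a new return convention for hugetlb_follow_page_mask(): NULL when the walk should fall back to faulting (e.g. FOLL_WRITE against a read-only PTE), ERR_PTR(-EMLINK) when the caller should unshare, and ERR_PTR(ret) when try_grab_page() fails. Below is a minimal, hypothetical caller-side sketch of how those three outcomes would be told apart; it is not part of this commit, and it assumes the function's (vma, address, flags) signature at this point in the series:

	struct page *page;

	page = hugetlb_follow_page_mask(vma, address, flags);
	if (!page) {
		/* e.g. FOLL_WRITE on a !writable PTE: fall back and fault (CoW) */
	} else if (IS_ERR(page)) {
		if (PTR_ERR(page) == -EMLINK) {
			/* GUP-internal hint: unshare before allowing an R/O pin */
		}
		/* any other error is the errno captured from try_grab_page() */
	} else {
		/* success: try_grab_page() already took a reference on the sub-page */
	}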

File tree

1 file changed (+22, -11 lines)

mm/hugetlb.c

Lines changed: 22 additions & 11 deletions
@@ -6462,13 +6462,7 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 	struct page *page = NULL;
 	spinlock_t *ptl;
 	pte_t *pte, entry;
-
-	/*
-	 * FOLL_PIN is not supported for follow_page(). Ordinary GUP goes via
-	 * follow_hugetlb_page().
-	 */
-	if (WARN_ON_ONCE(flags & FOLL_PIN))
-		return NULL;
+	int ret;
 
 	hugetlb_vma_lock_read(vma);
 	pte = hugetlb_walk(vma, haddr, huge_page_size(h));
@@ -6478,8 +6472,23 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 	ptl = huge_pte_lock(h, mm, pte);
 	entry = huge_ptep_get(pte);
 	if (pte_present(entry)) {
-		page = pte_page(entry) +
-			((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
+		page = pte_page(entry);
+
+		if (!huge_pte_write(entry)) {
+			if (flags & FOLL_WRITE) {
+				page = NULL;
+				goto out;
+			}
+
+			if (gup_must_unshare(vma, flags, page)) {
+				/* Tell the caller to do unsharing */
+				page = ERR_PTR(-EMLINK);
+				goto out;
+			}
+		}
+
+		page += ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
+
 		/*
 		 * Note that page may be a sub-page, and with vmemmap
 		 * optimizations the page struct may be read only.
@@ -6489,8 +6498,10 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 		 * try_grab_page() should always be able to get the page here,
 		 * because we hold the ptl lock and have verified pte_present().
 		 */
-		if (try_grab_page(page, flags)) {
-			page = NULL;
+		ret = try_grab_page(page, flags);
+
+		if (WARN_ON_ONCE(ret)) {
+			page = ERR_PTR(ret);
 			goto out;
 		}
 	}
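The -EMLINK value the new code returns is described in the message as the GUP-internal way to say "we should unshare". For context only, this is roughly how the slow-GUP loop in mm/gup.c consumes that value around this series; the snippet is paraphrased from the __get_user_pages() retry logic, is not part of this diff, and exact signatures may differ between kernel versions:

		/* Paraphrased sketch of the __get_user_pages() loop (illustrative) */
		page = follow_page_mask(vma, start, foll_flags, &ctx);
		if (!page || PTR_ERR(page) == -EMLINK) {
			/* -EMLINK => retry the fault with unsharing requested */
			ret = faultin_page(vma, start, &foll_flags,
					   PTR_ERR(page) == -EMLINK, locked);
			if (!ret)
				goto retry;
		}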
