Commit aa5fe31

x-y-z authored and akpm00 committed
mips: use nth_page() in place of direct struct page manipulation
__flush_dcache_pages() is called during hugetlb migration via
migrate_pages() -> migrate_hugetlbs() -> unmap_and_move_huge_page() ->
move_to_new_folio() -> flush_dcache_folio(). With hugetlb and without
sparsemem vmemmap, struct page is not guaranteed to be contiguous beyond
a section, so use nth_page() instead.

Without the fix, a wrong address might be used for the data cache page
flush. No bug has been reported; the fix comes from code inspection.

Link: https://lkml.kernel.org/r/[email protected]
Fixes: 15fa3e8 ("mips: implement the new page table range API")
Signed-off-by: Zi Yan <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Mike Rapoport (IBM) <[email protected]>
Cc: Muchun Song <[email protected]>
Cc: Thomas Bogendoerfer <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
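For context, a rough sketch of why `page + i` and `nth_page(page, i)` can
differ: with SPARSEMEM enabled but SPARSEMEM_VMEMMAP disabled, the struct
page array is only contiguous within a memory section, so nth_page() walks
through the pfn rather than doing plain pointer arithmetic. Around this
kernel version, include/linux/mm.h defines it approximately as follows
(shown here as an illustrative sketch, not an exact quote):

    /* Sketch of nth_page(): without vmemmap, struct pages of a large
     * folio may not be virtually contiguous across section boundaries,
     * so convert to a pfn, offset, and convert back. */
    #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
    #define nth_page(page, n)  pfn_to_page(page_to_pfn((page)) + (n))
    #else
    #define nth_page(page, n)  ((page) + (n))
    #endif

With SPARSEMEM_VMEMMAP (or flatmem), the two forms are equivalent, which is
why the wrong address can only be computed in the sparsemem-without-vmemmap
configuration, and only when the hugetlb folio's struct pages happen to
cross a section boundary.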
1 parent 8db0ec7 commit aa5fe31

File tree

1 file changed: +1 −1 lines changed


arch/mips/mm/cache.c

Lines changed: 1 addition & 1 deletion
@@ -117,7 +117,7 @@ void __flush_dcache_pages(struct page *page, unsigned int nr)
 	 * get faulted into the tlb (and thus flushed) anyways.
 	 */
 	for (i = 0; i < nr; i++) {
-		addr = (unsigned long)kmap_local_page(page + i);
+		addr = (unsigned long)kmap_local_page(nth_page(page, i));
 		flush_data_cache_page(addr);
 		kunmap_local((void *)addr);
 	}
