
Commit f2d2f95

hygoni authored and akpm00 committed
mm: introduce and use {pgd,p4d}_populate_kernel()
Introduce and use {pgd,p4d}_populate_kernel() in core MM code when populating PGD and P4D entries for the kernel address space. These helpers ensure proper synchronization of page tables when updating the kernel portion of top-level page tables.

Until now, the kernel has relied on each architecture to handle synchronization of top-level page tables in an ad-hoc manner. For example, see commit 9b86152 ("x86-64, mem: Update all PGDs for direct mapping and vmemmap mapping changes").

However, this approach has proven fragile for the following reasons:

1) It is easy to forget to perform the necessary page table synchronization when introducing new changes. For instance, commit 4917f55 ("mm/sparse-vmemmap: improve memory savings for compound devmaps") overlooked the need to synchronize page tables for the vmemmap area.

2) It is also easy to overlook that the vmemmap and direct mapping areas must not be accessed before explicit page table synchronization. For example, commit 8d40091 ("x86/vmemmap: handle unpopulated sub-pmd ranges") caused crashes by accessing the vmemmap area before calling sync_global_pgds().

To address this, as suggested by Dave Hansen, introduce _kernel() variants of the page table population helpers, which invoke architecture-specific hooks to properly synchronize page tables. These are introduced in a new header file, include/linux/pgalloc.h, so they can be called from common code.

They reuse the existing infrastructure for vmalloc and ioremap: synchronization requirements are determined by ARCH_PAGE_TABLE_SYNC_MASK, and the actual synchronization is performed by arch_sync_kernel_mappings().

This change currently targets only x86_64, so only PGD and P4D level helpers are introduced. For now, these helpers are no-ops, since no architecture sets PGTBL_{PGD,P4D}_MODIFIED in ARCH_PAGE_TABLE_SYNC_MASK.

In theory, PUD and PMD level helpers can be added later if needed by other architectures. At present, 32-bit architectures (x86-32 and arm) only handle PGTBL_PMD_MODIFIED, so p*d_populate_kernel() will never affect them unless we introduce a PMD level helper.
[[email protected]: fix KASAN build error due to p*d_populate_kernel()]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 8d40091 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
Signed-off-by: Harry Yoo <[email protected]>
Suggested-by: Dave Hansen <[email protected]>
Acked-by: Kiryl Shutsemau <[email protected]>
Reviewed-by: Mike Rapoport (Microsoft) <[email protected]>
Reviewed-by: Lorenzo Stoakes <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Alistair Popple <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: "Aneesh Kumar K.V" <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: bibo mao <[email protected]>
Cc: Borislav Betkov <[email protected]>
Cc: Christoph Lameter (Ampere) <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Dev Jain <[email protected]>
Cc: Dmitriy Vyukov <[email protected]>
Cc: Gwan-gyeong Mun <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jane Chu <[email protected]>
Cc: Joao Martins <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: John Hubbard <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Liam Howlett <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Peter Xu <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Qi Zheng <[email protected]>
Cc: Ryan Roberts <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Thomas Gleinxer <[email protected]>
Cc: Thomas Huth <[email protected]>
Cc: "Uladzislau Rezki (Sony)" <[email protected]>
Cc: Vincenzo Frascino <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
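Editor's note: for illustration, here is a minimal sketch of the calling convention the new helpers expect, modeled on the pcpu_populate_pte() change in mm/percpu.c below. The function example_populate_kernel_pgtable() is hypothetical, and the P4D_TABLE_SIZE/PUD_TABLE_SIZE fallbacks are assumed to be defined as in mm/percpu.c. The key point is that the kernel virtual address being mapped is passed alongside the page table entry, so the architecture hook knows which range changed.

#include <linux/pgalloc.h>	/* pgd_populate_kernel(), p4d_populate_kernel() */
#include <linux/memblock.h>	/* memblock_alloc_or_panic() */

/*
 * Illustrative only: populate the PGD and P4D entries covering a kernel
 * virtual address, mirroring the pattern used in mm/percpu.c and
 * mm/sparse-vmemmap.c after this change.
 */
static void __init example_populate_kernel_pgtable(unsigned long addr)
{
	pgd_t *pgd = pgd_offset_k(addr);
	p4d_t *p4d;
	pud_t *pud;

	if (pgd_none(*pgd)) {
		p4d = memblock_alloc_or_panic(P4D_TABLE_SIZE, P4D_TABLE_SIZE);
		/* Was: pgd_populate(&init_mm, pgd, p4d); */
		pgd_populate_kernel(addr, pgd, p4d);
	}

	p4d = p4d_offset(pgd, addr);
	if (p4d_none(*p4d)) {
		pud = memblock_alloc_or_panic(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
		/* Was: p4d_populate(&init_mm, p4d, pud); */
		p4d_populate_kernel(addr, p4d, pud);
	}
}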
1 parent 7cc183f commit f2d2f95

File tree

5 files changed: +48 −18 lines

include/linux/pgalloc.h

Lines changed: 29 additions & 0 deletions
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_PGALLOC_H
+#define _LINUX_PGALLOC_H
+
+#include <linux/pgtable.h>
+#include <asm/pgalloc.h>
+
+/*
+ * {pgd,p4d}_populate_kernel() are defined as macros to allow
+ * compile-time optimization based on the configured page table levels.
+ * Without this, linking may fail because callers (e.g., KASAN) may rely
+ * on calls to these functions being optimized away when passing symbols
+ * that exist only for certain page table levels.
+ */
+#define pgd_populate_kernel(addr, pgd, p4d)				\
+	do {								\
+		pgd_populate(&init_mm, pgd, p4d);			\
+		if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)	\
+			arch_sync_kernel_mappings(addr, addr);		\
+	} while (0)
+
+#define p4d_populate_kernel(addr, p4d, pud)				\
+	do {								\
+		p4d_populate(&init_mm, p4d, pud);			\
+		if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_P4D_MODIFIED)	\
+			arch_sync_kernel_mappings(addr, addr);		\
+	} while (0)
+
+#endif /* _LINUX_PGALLOC_H */

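Editor's note: the helpers above are currently no-ops because no architecture sets PGTBL_PGD_MODIFIED or PGTBL_P4D_MODIFIED in ARCH_PAGE_TABLE_SYNC_MASK. As a hedged sketch, not part of this commit, an architecture such as x86-64 could opt in roughly as follows, presumably reusing sync_global_pgds() as the synchronization primitive; the file placement and exact wiring are assumptions here.

/* Sketch: arch/<arch>/include/asm/pgtable_64_types.h (hypothetical placement) */
#define ARCH_PAGE_TABLE_SYNC_MASK	(PGTBL_PGD_MODIFIED | PGTBL_P4D_MODIFIED)

/* Sketch: arch/<arch>/mm/init_64.c */
void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
{
	/*
	 * Propagate the updated top-level kernel entries covering
	 * [start, end] to all other top-level page tables in the
	 * system; on x86-64 this is what sync_global_pgds() does.
	 */
	sync_global_pgds(start, end);
}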
include/linux/pgtable.h

Lines changed: 7 additions & 6 deletions
@@ -1469,8 +1469,8 @@ static inline void modify_prot_commit_ptes(struct vm_area_struct *vma, unsigned
 
 /*
  * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
- * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
- * needs to be called.
+ * and let generic vmalloc, ioremap and page table update code know when
+ * arch_sync_kernel_mappings() needs to be called.
  */
 #ifndef ARCH_PAGE_TABLE_SYNC_MASK
 #define ARCH_PAGE_TABLE_SYNC_MASK 0
@@ -1954,10 +1954,11 @@ static inline bool arch_has_pfn_modify_check(void)
 /*
  * Page Table Modification bits for pgtbl_mod_mask.
  *
- * These are used by the p?d_alloc_track*() set of functions an in the generic
- * vmalloc/ioremap code to track at which page-table levels entries have been
- * modified. Based on that the code can better decide when vmalloc and ioremap
- * mapping changes need to be synchronized to other page-tables in the system.
+ * These are used by the p?d_alloc_track*() and p*d_populate_kernel()
+ * functions in the generic vmalloc, ioremap and page table update code
+ * to track at which page-table levels entries have been modified.
+ * Based on that the code can better decide when page table changes need
+ * to be synchronized to other page-tables in the system.
  */
 #define __PGTBL_PGD_MODIFIED	0
 #define __PGTBL_P4D_MODIFIED	1

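Editor's note: for context, the pgtbl_mod_mask machinery referenced in the updated comment is already used by the generic vmalloc/ioremap code: the p?d_alloc_track*() helpers record which levels were modified, and a single arch_sync_kernel_mappings() call is made over the affected range at the end. Below is a simplified, illustrative sketch of that pattern; the function example_map_kernel_range() is hypothetical and lower levels are elided (see mm/vmalloc.c for the real implementation).

static int example_map_kernel_range(unsigned long start, unsigned long end)
{
	pgtbl_mod_mask mask = 0;
	unsigned long addr = start, next;
	pgd_t *pgd;

	do {
		next = pgd_addr_end(addr, end);
		pgd = pgd_offset_k(addr);
		/*
		 * p4d_alloc_track() populates the PGD entry if needed
		 * and ORs PGTBL_PGD_MODIFIED into 'mask' when it does.
		 */
		if (!p4d_alloc_track(&init_mm, pgd, addr, &mask))
			return -ENOMEM;
		/* ... lower page-table levels elided ... */
	} while (addr = next, addr != end);

	/*
	 * One synchronization pass for the whole range, and only if a
	 * modified level is in the architecture's sync mask.
	 */
	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
		arch_sync_kernel_mappings(start, end);

	return 0;
}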
mm/kasan/init.c

Lines changed: 6 additions & 6 deletions
@@ -13,9 +13,9 @@
 #include <linux/mm.h>
 #include <linux/pfn.h>
 #include <linux/slab.h>
+#include <linux/pgalloc.h>
 
 #include <asm/page.h>
-#include <asm/pgalloc.h>
 
 #include "kasan.h"
 
@@ -191,7 +191,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
 			pud_t *pud;
 			pmd_t *pmd;
 
-			p4d_populate(&init_mm, p4d,
+			p4d_populate_kernel(addr, p4d,
 					lm_alias(kasan_early_shadow_pud));
 			pud = pud_offset(p4d, addr);
 			pud_populate(&init_mm, pud,
@@ -212,7 +212,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
 			} else {
 				p = early_alloc(PAGE_SIZE, NUMA_NO_NODE);
 				pud_init(p);
-				p4d_populate(&init_mm, p4d, p);
+				p4d_populate_kernel(addr, p4d, p);
 			}
 		}
 		zero_pud_populate(p4d, addr, next);
@@ -251,10 +251,10 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
 			 * puds,pmds, so pgd_populate(), pud_populate()
 			 * is noops.
 			 */
-			pgd_populate(&init_mm, pgd,
+			pgd_populate_kernel(addr, pgd,
 					lm_alias(kasan_early_shadow_p4d));
 			p4d = p4d_offset(pgd, addr);
-			p4d_populate(&init_mm, p4d,
+			p4d_populate_kernel(addr, p4d,
 					lm_alias(kasan_early_shadow_pud));
 			pud = pud_offset(p4d, addr);
 			pud_populate(&init_mm, pud,
@@ -273,7 +273,7 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
 				if (!p)
 					return -ENOMEM;
 			} else {
-				pgd_populate(&init_mm, pgd,
+				pgd_populate_kernel(addr, pgd,
 					early_alloc(PAGE_SIZE, NUMA_NO_NODE));
 			}
 		}

mm/percpu.c

Lines changed: 3 additions & 3 deletions
@@ -3108,7 +3108,7 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
 #endif	/* BUILD_EMBED_FIRST_CHUNK */
 
 #ifdef BUILD_PAGE_FIRST_CHUNK
-#include <asm/pgalloc.h>
+#include <linux/pgalloc.h>
 
 #ifndef P4D_TABLE_SIZE
 #define P4D_TABLE_SIZE PAGE_SIZE
@@ -3134,13 +3134,13 @@ void __init __weak pcpu_populate_pte(unsigned long addr)
 
 	if (pgd_none(*pgd)) {
 		p4d = memblock_alloc_or_panic(P4D_TABLE_SIZE, P4D_TABLE_SIZE);
-		pgd_populate(&init_mm, pgd, p4d);
+		pgd_populate_kernel(addr, pgd, p4d);
 	}
 
 	p4d = p4d_offset(pgd, addr);
 	if (p4d_none(*p4d)) {
 		pud = memblock_alloc_or_panic(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
-		p4d_populate(&init_mm, p4d, pud);
+		p4d_populate_kernel(addr, p4d, pud);
 	}
 
 	pud = pud_offset(p4d, addr);

mm/sparse-vmemmap.c

Lines changed: 3 additions & 3 deletions
@@ -27,9 +27,9 @@
 #include <linux/spinlock.h>
 #include <linux/vmalloc.h>
 #include <linux/sched.h>
+#include <linux/pgalloc.h>
 
 #include <asm/dma.h>
-#include <asm/pgalloc.h>
 #include <asm/tlbflush.h>
 
 #include "hugetlb_vmemmap.h"
@@ -229,7 +229,7 @@ p4d_t * __meminit vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node)
 		if (!p)
 			return NULL;
 		pud_init(p);
-		p4d_populate(&init_mm, p4d, p);
+		p4d_populate_kernel(addr, p4d, p);
 	}
 	return p4d;
 }
@@ -241,7 +241,7 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
 		void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
 		if (!p)
 			return NULL;
-		pgd_populate(&init_mm, pgd, p);
+		pgd_populate_kernel(addr, pgd, p);
 	}
 	return pgd;
 }
