
Commit 86051ca

hkamezawa authored and torvalds committed
mm: fix usemap initialization
usemap must be initialized only when pfn is within the zone. If not, it corrupts memory.

This patch also reduces the number of calls to set_pageblock_migratetype(): the condition changes from (pfn & (pageblock_nr_pages - 1)) to !(pfn & (pageblock_nr_pages - 1)), so that it is called once per pageblock rather than on every pfn except the pageblock start.

Signed-off-by: KAMEZAWA Hiroyuki <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Shi Weihua <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: Pavel Emelyanov <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
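For illustration, a minimal userspace C sketch (hypothetical, not kernel code; pageblock_nr_pages is assumed to be 512 here, i.e. 2 MB pageblocks with 4 KB pages) of why the old condition fired on nearly every pfn while the fixed one fires exactly once per pageblock:

/*
 * Userspace sketch of the condition change, not kernel code.
 * PAGEBLOCK_NR_PAGES is an assumed value; in the kernel,
 * pageblock_nr_pages is (1UL << pageblock_order).
 */
#include <stdio.h>

#define PAGEBLOCK_NR_PAGES 512UL

int main(void)
{
    unsigned long pfn;
    unsigned long old_calls = 0, new_calls = 0;

    /* Walk two pageblocks' worth of pfns, as memmap_init_zone() would. */
    for (pfn = 0; pfn < 2 * PAGEBLOCK_NR_PAGES; pfn++) {
        if (pfn & (PAGEBLOCK_NR_PAGES - 1))     /* old, buggy condition */
            old_calls++;
        if (!(pfn & (PAGEBLOCK_NR_PAGES - 1)))  /* fixed condition */
            new_calls++;
    }

    /* Prints: old fires 1022 times (every pfn except the two block
     * starts), new fires 2 times (exactly once per pageblock). */
    printf("old condition fires %lu times, new condition %lu times\n",
           old_calls, new_calls);
    return 0;
}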
1 parent a01e035 commit 86051ca


mm/page_alloc.c

Lines changed: 12 additions & 2 deletions
@@ -2524,7 +2524,9 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 	struct page *page;
 	unsigned long end_pfn = start_pfn + size;
 	unsigned long pfn;
+	struct zone *z;

+	z = &NODE_DATA(nid)->node_zones[zone];
 	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
 		/*
 		 * There can be holes in boot-time mem_map[]s
@@ -2542,7 +2544,6 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		init_page_count(page);
 		reset_page_mapcount(page);
 		SetPageReserved(page);
-
 		/*
 		 * Mark the block movable so that blocks are reserved for
 		 * movable at startup. This will force kernel allocations
@@ -2551,8 +2552,15 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		 * kernel allocations are made. Later some blocks near
 		 * the start are marked MIGRATE_RESERVE by
 		 * setup_zone_migrate_reserve()
+		 *
+		 * bitmap is created for zone's valid pfn range. but memmap
+		 * can be created for invalid pages (for alignment)
+		 * check here not to call set_pageblock_migratetype() against
+		 * pfn out of zone.
 		 */
-		if ((pfn & (pageblock_nr_pages-1)))
+		if ((z->zone_start_pfn <= pfn)
+		    && (pfn < z->zone_start_pfn + z->spanned_pages)
+		    && !(pfn & (pageblock_nr_pages - 1)))
 			set_pageblock_migratetype(page, MIGRATE_MOVABLE);

 		INIT_LIST_HEAD(&page->lru);
@@ -4464,6 +4472,8 @@ void set_pageblock_flags_group(struct page *page, unsigned long flags,
 	pfn = page_to_pfn(page);
 	bitmap = get_pageblock_bitmap(zone, pfn);
 	bitidx = pfn_to_bitidx(zone, pfn);
+	VM_BUG_ON(pfn < zone->zone_start_pfn);
+	VM_BUG_ON(pfn >= zone->zone_start_pfn + zone->spanned_pages);

 	for (; start_bitidx <= end_bitidx; start_bitidx++, value <<= 1)
 		if (flags & value)
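To make the new guard concrete, here is a hypothetical userspace sketch (struct toy_zone and pfn_in_zone() are illustrative names, not kernel API) of the range check that memmap_init_zone() now performs and that the added VM_BUG_ON()s in set_pageblock_flags_group() enforce:

/*
 * Userspace sketch of the zone-range guard, not kernel code.
 * Only the comparison logic mirrors the patch.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_zone {
    unsigned long zone_start_pfn;
    unsigned long spanned_pages;
};

/* A pfn may touch the zone's usemap only inside
 * [zone_start_pfn, zone_start_pfn + spanned_pages). */
static bool pfn_in_zone(const struct toy_zone *z, unsigned long pfn)
{
    return z->zone_start_pfn <= pfn &&
           pfn < z->zone_start_pfn + z->spanned_pages;
}

int main(void)
{
    /* A memmap rounded out for alignment may cover pfns the zone does
     * not span: here the zone spans only pfns 100..899. */
    struct toy_zone z = { .zone_start_pfn = 100, .spanned_pages = 800 };

    printf("pfn 50:  %s\n", pfn_in_zone(&z, 50)  ? "in zone" : "skip"); /* skip */
    printf("pfn 100: %s\n", pfn_in_zone(&z, 100) ? "in zone" : "skip"); /* in zone */
    printf("pfn 950: %s\n", pfn_in_zone(&z, 950) ? "in zone" : "skip"); /* skip */
    return 0;
}

Writing to the pageblock bitmap for a pfn that fails this check is exactly the out-of-zone corruption the commit message describes.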
