Commit d992895

Nick Piggin authored and Linus Torvalds committed
[PATCH] Lazy page table copies in fork()
Defer copying of ptes until fault time when it is possible to reconstruct the pte from backing store. Idea from Andi Kleen and Nick Piggin. Thanks to input from Rik van Riel and Linus and to Hugh for correcting my blundering.

Ray Fucillo <[email protected]> reports:

"I applied this latest patch to a 2.6.12 kernel and found that it does resolve the problem. Prior to the patch on this machine, I was seeing about 23ms spent in fork for every 100MB of shared memory segment. After applying the patch, fork is taking about 1ms regardless of the shared memory size."

Signed-off-by: Nick Piggin <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 4019371 commit d992895

1 file changed: +11 −0 lines changed

mm/memory.c

Lines changed: 11 additions & 0 deletions
@@ -498,6 +498,17 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
         unsigned long addr = vma->vm_start;
         unsigned long end = vma->vm_end;
 
+        /*
+         * Don't copy ptes where a page fault will fill them correctly.
+         * Fork becomes much lighter when there are big shared or private
+         * readonly mappings. The tradeoff is that copy_page_range is more
+         * efficient than faulting.
+         */
+        if (!(vma->vm_flags & (VM_HUGETLB|VM_NONLINEAR|VM_RESERVED))) {
+                if (!vma->anon_vma)
+                        return 0;
+        }
+
         if (is_vm_hugetlb_page(vma))
                 return copy_hugetlb_page_range(dst_mm, src_mm, vma);

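Note on the condition: hugetlb, nonlinear, and VM_RESERVED mappings are excluded from the shortcut because their ptes cannot simply be rebuilt by the generic fault path, and a vma with an anon_vma may contain anonymous pages whose ptes still have to be copied and write-protected for copy-on-write.

For context, here is a minimal userspace sketch (not part of the commit, and not the reporter's actual test) of the kind of measurement Ray Fucillo's numbers describe: attach and touch a large SysV shared memory segment in the parent, then time a fork(). The 400MB segment size and the use of gettimeofday() are arbitrary choices, and the test assumes the system's SHMMAX allows a segment that large.

/*
 * Hypothetical timing harness: measure how long fork() takes in a
 * parent that has populated a large shared mapping. Before this patch,
 * fork() copied the parent's ptes for the segment; with it, the copy is
 * skipped and the child refills the ptes from the page cache at fault time.
 */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        size_t size = 400UL << 20;      /* 400MB; assumes SHMMAX permits it */
        int shmid = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);
        struct timeval t0, t1;
        char *p;
        pid_t pid;

        if (shmid < 0) {
                perror("shmget");
                return 1;
        }
        p = shmat(shmid, NULL, 0);
        if (p == (void *)-1) {
                perror("shmat");
                return 1;
        }
        memset(p, 0, size);             /* populate the parent's ptes */

        gettimeofday(&t0, NULL);
        pid = fork();
        if (pid == 0)
                _exit(0);               /* child exits immediately */
        gettimeofday(&t1, NULL);
        waitpid(pid, NULL, 0);

        printf("fork() took %ld us\n",
               (long)(t1.tv_sec - t0.tv_sec) * 1000000L +
               (long)(t1.tv_usec - t0.tv_usec));

        shmdt(p);
        shmctl(shmid, IPC_RMID, NULL);
        return 0;
}

Run once on a pre-patch kernel and once on a patched kernel to see the difference the commit message reports; the timed interval grows with segment size before the patch and stays roughly constant after it.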