Commit 9f8bdb3

Hugh Dickins authored and torvalds committed
mm: make swapoff more robust against soft dirty

Both s390 and powerpc have hit the issue of swapoff hanging, when
CONFIG_HAVE_ARCH_SOFT_DIRTY and CONFIG_MEM_SOFT_DIRTY ifdefs were not
quite as x86_64 had them.

I think it would be much clearer if HAVE_ARCH_SOFT_DIRTY were just a
Kconfig option set by architectures to determine whether the
MEM_SOFT_DIRTY option should be offered, and the actual code depended
upon CONFIG_MEM_SOFT_DIRTY alone. But I won't embark on that change
myself: instead, make swapoff more robust by using
pte_swp_clear_soft_dirty() on each pte it encounters, without an
explicit #ifdef CONFIG_MEM_SOFT_DIRTY. That is a no-op, whether the
bit in question is defined as 0 or the asm-generic fallback is used,
unless soft dirty is fully turned on.

Why "maybe" in maybe_same_pte()? Rename it pte_same_as_swp().

Signed-off-by: Hugh Dickins <[email protected]>
Reviewed-by: Aneesh Kumar K.V <[email protected]>
Acked-by: Cyrill Gorcunov <[email protected]>
Cc: Laurent Dufour <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 88f306b commit 9f8bdb3

File tree

1 file changed: +4 −14 lines changed


mm/swapfile.c

Lines changed: 4 additions & 14 deletions
@@ -1111,19 +1111,9 @@ unsigned int count_swap_pages(int type, int free)
 }
 #endif /* CONFIG_HIBERNATION */
 
-static inline int maybe_same_pte(pte_t pte, pte_t swp_pte)
+static inline int pte_same_as_swp(pte_t pte, pte_t swp_pte)
 {
-#ifdef CONFIG_MEM_SOFT_DIRTY
-	/*
-	 * When pte keeps soft dirty bit the pte generated
-	 * from swap entry does not has it, still it's same
-	 * pte from logical point of view.
-	 */
-	pte_t swp_pte_dirty = pte_swp_mksoft_dirty(swp_pte);
-	return pte_same(pte, swp_pte) || pte_same(pte, swp_pte_dirty);
-#else
-	return pte_same(pte, swp_pte);
-#endif
+	return pte_same(pte_swp_clear_soft_dirty(pte), swp_pte);
 }
 
 /*
@@ -1152,7 +1142,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 	}
 
 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
-	if (unlikely(!maybe_same_pte(*pte, swp_entry_to_pte(entry)))) {
+	if (unlikely(!pte_same_as_swp(*pte, swp_entry_to_pte(entry)))) {
 		mem_cgroup_cancel_charge(page, memcg, false);
 		ret = 0;
 		goto out;
@@ -1210,7 +1200,7 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		 * swapoff spends a _lot_ of time in this loop!
 		 * Test inline before going to call unuse_pte.
 		 */
-		if (unlikely(maybe_same_pte(*pte, swp_pte))) {
+		if (unlikely(pte_same_as_swp(*pte, swp_pte))) {
 			pte_unmap(pte);
 			ret = unuse_pte(vma, pmd, addr, entry, page);
 			if (ret)

0 commit comments
