Commit 2faee8f

Dominik Dingel authored and Martin Schwidefsky committed
s390/mm: prevent and break zero page mappings in case of storage keys
As soon as storage keys are enabled we need to stop working on zero page
mappings to prevent inconsistencies between storage keys and pgste.

Otherwise the following data corruption could happen:

1) guest enables storage keys
2) guest sets storage key for a not mapped page X
   -> change goes to PGSTE
3) guest reads from page X
   -> as X was not dirty before, the page will be zero page backed;
      the storage key from the PGSTE for X goes to the storage key
      for the zero page
4) guest sets storage key for a not mapped page Y (same logic as above)
5) guest reads from page Y
   -> as Y was not dirty before, the page will be zero page backed;
      the storage key from the PGSTE for Y goes to the storage key
      for the zero page, overwriting the storage key for X

While holding the mmap_sem, we are safe against changes on entries we
have already fixed, as every fault would need to take the mmap_sem (read).

Other vCPUs executing storage key instructions will get a one-time
interception and will also be serialized via the mmap_sem.

Signed-off-by: Dominik Dingel <[email protected]>
Reviewed-by: Paolo Bonzini <[email protected]>
Signed-off-by: Martin Schwidefsky <[email protected]>
1 parent: 593befa · commit: 2faee8f
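The fix relies on a generic per-mm hook for vetoing zero page mappings. Assuming the parent commit (593befa) introduces that hook roughly along the following lines, the common-code fallback is a no-op and only architectures that override it, such as s390 here, change behaviour. This is a sketch of the expected shape, not the literal include/linux/mm.h text:

/*
 * Sketch (assumed from the generic parent change): the fallback keeps
 * zero page mappings allowed unless the architecture overrides the
 * macro in its <asm/pgtable.h>, as s390 does below.
 */
#ifndef mm_forbids_zeropage
#define mm_forbids_zeropage(X)	(0)
#endif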

2 files changed, 17 insertions(+), 1 deletion(-)

2 files changed

+17
-1
lines changed

arch/s390/include/asm/pgtable.h

Lines changed: 5 additions & 0 deletions
@@ -479,6 +479,11 @@ static inline int mm_has_pgste(struct mm_struct *mm)
 	return 0;
 }
 
+/*
+ * In the case that a guest uses storage keys
+ * faults should no longer be backed by zero pages
+ */
+#define mm_forbids_zeropage mm_use_skey
 static inline int mm_use_skey(struct mm_struct *mm)
 {
 #ifdef CONFIG_PGSTE
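For illustration, the #define above only matters because the generic read-fault path asks the mm before backing a read fault with the shared zero page. The helper below is a hypothetical stand-in for that check, not an actual mm/ function; it only sketches the condition under which a zero page mapping may still be installed:

#include <linux/mm.h>

/*
 * Hypothetical helper, for illustration only: the generic anonymous
 * read-fault path performs an equivalent check.  With
 * mm_forbids_zeropage defined to mm_use_skey on s390, this returns
 * false once storage keys are enabled, so new read faults get fresh
 * anonymous pages instead of the shared zero page.
 */
static inline bool use_zero_page_for_read(struct mm_struct *mm,
					  unsigned int fault_flags)
{
	return !(fault_flags & FAULT_FLAG_WRITE) && !mm_forbids_zeropage(mm);
}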

arch/s390/mm/pgtable.c

Lines changed: 12 additions & 1 deletion
@@ -1256,6 +1256,15 @@ static int __s390_enable_skey(pte_t *pte, unsigned long addr,
 	pgste_t pgste;
 
 	pgste = pgste_get_lock(pte);
+	/*
+	 * Remove all zero page mappings,
+	 * after establishing a policy to forbid zero page mappings
+	 * following faults for that page will get fresh anonymous pages
+	 */
+	if (is_zero_pfn(pte_pfn(*pte))) {
+		ptep_flush_direct(walk->mm, addr, pte);
+		pte_val(*pte) = _PAGE_INVALID;
+	}
 	/* Clear storage key */
 	pgste_val(pgste) &= ~(PGSTE_ACC_BITS | PGSTE_FP_BIT |
 			      PGSTE_GR_BIT | PGSTE_GC_BIT);
@@ -1274,9 +1283,11 @@ void s390_enable_skey(void)
 	down_write(&mm->mmap_sem);
 	if (mm_use_skey(mm))
 		goto out_up;
+
+	mm->context.use_skey = 1;
+
 	walk.mm = mm;
 	walk_page_range(0, TASK_SIZE, &walk);
-	mm->context.use_skey = 1;
 
 out_up:
 	up_write(&mm->mmap_sem);
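To make the second hunk easier to follow, here is the surrounding function as it reads after this change, abridged from the hunk above; the variable declarations and the mm_walk initialisation are unchanged context reconstructed here and may differ in detail from the file:

void s390_enable_skey(void)
{
	/* __s390_enable_skey() from the first hunk runs for every pte */
	struct mm_walk walk = { .pte_entry = __s390_enable_skey };
	struct mm_struct *mm = current->mm;

	down_write(&mm->mmap_sem);
	if (mm_use_skey(mm))
		goto out_up;

	/*
	 * Set the flag before the walk: mm_forbids_zeropage() is then
	 * already in force when the existing zero page mappings are
	 * broken, and faults stay serialized against the walk because
	 * they need the mmap_sem for reading.
	 */
	mm->context.use_skey = 1;

	walk.mm = mm;
	walk_page_range(0, TASK_SIZE, &walk);

out_up:
	up_write(&mm->mmap_sem);
}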
