
Commit 661b1c6

mdrothbp3tk0v authored and committed
x86/sev: Adjust the directmap to avoid inadvertent RMP faults
If the kernel uses a 2MB or larger directmap mapping to write to an address, and that mapping contains any 4KB pages that are set to private in the RMP table, an RMP #PF will trigger and cause a host crash. SNP-aware code that owns the private PFNs will never attempt such a write, but other kernel tasks writing to other PFNs in the range may trigger these checks inadvertently, by writing to those other PFNs via a large directmap mapping that happens to also map a private PFN.

Prevent this by splitting any 2MB+ mappings that might end up containing a mix of private/shared PFNs as a result of a subsequent RMPUPDATE for the PFN/rmp_level passed in.

Another way to handle this would be to limit the directmap to 4K mappings on hosts that support SNP, but that risks performance regressions for certain host workloads. Handling it as-needed results in the directmap being slowly split over time, which lessens the risk of a performance regression: the more the directmap gets split as a result of running SNP guests, the more likely the host is being used primarily to run SNP guests, where a mostly-split directmap is actually beneficial since there is less need for TLB flushing and cpa_lock contention to perform these splits.

Cases where a host knows in advance that it wants to primarily run SNP guests and wishes to pre-split the directmap can be handled by adding a tunable in the future, but preliminary testing has shown this to provide no significant benefit in the common case of guests backed primarily by 2MB THPs, so it does not seem warranted currently and can be added later if a need arises.

Signed-off-by: Michael Roth <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
1 parent 2c35819 commit 661b1c6

File tree

1 file changed

+83
-2
lines changed


arch/x86/virt/svm/sev.c

Lines changed: 83 additions & 2 deletions
@@ -368,6 +368,81 @@ int psmash(u64 pfn)
 }
 EXPORT_SYMBOL_GPL(psmash);
 
+/*
+ * If the kernel uses a 2MB or larger directmap mapping to write to an address,
+ * and that mapping contains any 4KB pages that are set to private in the RMP
+ * table, an RMP #PF will trigger and cause a host crash. Hypervisor code that
+ * owns the PFNs being transitioned will never attempt such a write, but other
+ * kernel tasks writing to other PFNs in the range may trigger these checks
+ * inadvertently due a large directmap mapping that happens to overlap such a
+ * PFN.
+ *
+ * Prevent this by splitting any 2MB+ mappings that might end up containing a
+ * mix of private/shared PFNs as a result of a subsequent RMPUPDATE for the
+ * PFN/rmp_level passed in.
+ *
+ * Note that there is no attempt here to scan all the RMP entries for the 2MB
+ * physical range, since it would only be worthwhile in determining if a
+ * subsequent RMPUPDATE for a 4KB PFN would result in all the entries being of
+ * the same shared/private state, thus avoiding the need to split the mapping.
+ * But that would mean the entries are currently in a mixed state, and so the
+ * mapping would have already been split as a result of prior transitions.
+ * And since the 4K split is only done if the mapping is 2MB+, and there isn't
+ * currently a mechanism in place to restore 2MB+ mappings, such a check would
+ * not provide any usable benefit.
+ *
+ * More specifics on how these checks are carried out can be found in APM
+ * Volume 2, "RMP and VMPL Access Checks".
+ */
+static int adjust_direct_map(u64 pfn, int rmp_level)
+{
+        unsigned long vaddr;
+        unsigned int level;
+        int npages, ret;
+        pte_t *pte;
+
+        /*
+         * pfn_to_kaddr() will return a vaddr only within the direct
+         * map range.
+         */
+        vaddr = (unsigned long)pfn_to_kaddr(pfn);
+
+        /* Only 4KB/2MB RMP entries are supported by current hardware. */
+        if (WARN_ON_ONCE(rmp_level > PG_LEVEL_2M))
+                return -EINVAL;
+
+        if (!pfn_valid(pfn))
+                return -EINVAL;
+
+        if (rmp_level == PG_LEVEL_2M &&
+            (!IS_ALIGNED(pfn, PTRS_PER_PMD) || !pfn_valid(pfn + PTRS_PER_PMD - 1)))
+                return -EINVAL;
+
+        /*
+         * If an entire 2MB physical range is being transitioned, then there is
+         * no risk of RMP #PFs due to write accesses from overlapping mappings,
+         * since even accesses from 1GB mappings will be treated as 2MB accesses
+         * as far as RMP table checks are concerned.
+         */
+        if (rmp_level == PG_LEVEL_2M)
+                return 0;
+
+        pte = lookup_address(vaddr, &level);
+        if (!pte || pte_none(*pte))
+                return 0;
+
+        if (level == PG_LEVEL_4K)
+                return 0;
+
+        npages = page_level_size(rmp_level) / PAGE_SIZE;
+        ret = set_memory_4k(vaddr, npages);
+        if (ret)
+                pr_warn("Failed to split direct map for PFN 0x%llx, ret: %d\n",
+                        pfn, ret);
+
+        return ret;
+}
+
 /*
  * It is expected that those operations are seldom enough so that no mutual
  * exclusion of updaters is needed and thus the overlap error condition below
@@ -384,11 +459,16 @@ EXPORT_SYMBOL_GPL(psmash);
 static int rmpupdate(u64 pfn, struct rmp_state *state)
 {
         unsigned long paddr = pfn << PAGE_SHIFT;
-        int ret;
+        int ret, level;
 
         if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
                 return -ENODEV;
 
+        level = RMP_TO_PG_LEVEL(state->pagesize);
+
+        if (adjust_direct_map(pfn, level))
+                return -EFAULT;
+
         do {
                 /* Binutils version 2.36 supports the RMPUPDATE mnemonic. */
                 asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFE"
@@ -398,7 +478,8 @@ static int rmpupdate(u64 pfn, struct rmp_state *state)
         } while (ret == RMPUPDATE_FAIL_OVERLAP);
 
         if (ret) {
-                pr_err("RMPUPDATE failed for PFN %llx, ret: %d\n", pfn, ret);
+                pr_err("RMPUPDATE failed for PFN %llx, pg_level: %d, ret: %d\n",
+                       pfn, level, ret);
                 dump_rmpentry(pfn);
                 dump_stack();
                 return -EFAULT;