Merge branches 'fixes', 'generic', 'misc', 'mmu', 'pat_vmx_msrs', 'selftests', 'svm' and 'vmx'

* fixes:
  KVM: SEV: Update KVM_AMD_SEV Kconfig entry and mention SEV-SNP
  KVM: SVM: Don't advertise Bus Lock Detect to guest if SVM support is missing
  KVM: SVM: fix emulation of msr reads/writes of MSR_FS_BASE and MSR_GS_BASE
  KVM: x86: Acquire kvm->srcu when handling KVM_SET_VCPU_EVENTS
  KVM: x86/mmu: Check that root is valid/loaded when pre-faulting SPTEs
  KVM: x86/mmu: Fixup comments missed by the REMOVED_SPTE=>FROZEN_SPTE rename

* generic:
  KVM: Harden guest memory APIs against out-of-bounds accesses
  KVM: Write the per-page "segment" when clearing (part of) a guest page
  KVM: Clean up coalesced MMIO ring full check
  KVM: selftests: Add a test for coalesced MMIO (and PIO on x86)
  KVM: Fix coalesced_mmio_has_room() to avoid premature userspace exit

* misc: (30 commits)
  KVM: x86: Remove some unused declarations
  KVM: x86: Forcibly leave nested if RSM to L2 hits shutdown
  KVM: x86: Add fastpath handling of HLT VM-Exits
  KVM: x86: Reorganize code in x86.c to co-locate vCPU blocking/running helpers
  KVM: x86: Exit to userspace if fastpath triggers one on instruction skip
  KVM: x86: Dedup fastpath MSR post-handling logic
  KVM: x86: Re-enter guest if WRMSR(X2APIC_ICR) fastpath is successful
  KVM: selftests: Play nice with AMD's AVIC errata
  KVM: selftests: Verify the guest can read back the x2APIC ICR it wrote
  KVM: selftests: Test x2APIC ICR reserved bits
  KVM: selftests: Skip ICR.BUSY test in xapic_state_test if x2APIC is enabled
  KVM: selftests: Add x86 helpers to play nice with x2APIC MSR #GPs
  KVM: selftests: Report unhandled exceptions on x86 as regular guest asserts
  KVM: selftests: Open code vcpu_run() equivalent in guest_printf test
  KVM: x86: Re-split x2APIC ICR into ICR+ICR2 for AMD (x2AVIC)
  KVM: x86: Move x2APIC ICR helper above kvm_apic_write_nodecode()
  KVM: x86: Enforce x2APIC's must-be-zero reserved ICR bits
  KVM: x86: Suppress userspace access failures on unsupported, "emulated" MSRs
  KVM: x86: Suppress failures on userspace access to advertised, unsupported MSRs
  KVM: x86: Hoist x86.c's global msr_* variables up above kvm_do_msr_access()
  ...

* mmu: (33 commits)
  KVM: x86/mmu: Use KVM_PAGES_PER_HPAGE() instead of an open coded equivalent
  KVM: x86/mmu: Add KVM_RMAP_MANY to replace open coded '1' and '1ul' literals
  KVM: x86/mmu: Fold mmu_spte_age() into kvm_rmap_age_gfn_range()
  KVM: x86/mmu: Morph kvm_handle_gfn_range() into an aging specific helper
  KVM: x86/mmu: Honor NEED_RESCHED when zapping rmaps and blocking is allowed
  KVM: x86/mmu: Add a helper to walk and zap rmaps for a memslot
  KVM: x86/mmu: Plumb a @can_yield parameter into __walk_slot_rmaps()
  KVM: x86/mmu: Move walk_slot_rmaps() up near for_each_slot_rmap_range()
  KVM: x86/mmu: WARN on MMIO cache hit when emulating write-protected gfn
  KVM: x86/mmu: Detect if unprotect will do anything based on invalid_list
  KVM: x86/mmu: Subsume kvm_mmu_unprotect_page() into the and_retry() version
  KVM: x86: Rename reexecute_instruction()=>kvm_unprotect_and_retry_on_failure()
  KVM: x86: Update retry protection fields when forcing retry on emulation failure
  KVM: x86: Apply retry protection to "unprotect on failure" path
  KVM: x86: Check EMULTYPE_WRITE_PF_TO_SP before unprotecting gfn
  KVM: x86: Remove manual pfn lookup when retrying #PF after failed emulation
  KVM: x86/mmu: Move event re-injection unprotect+retry into common path
  KVM: x86/mmu: Always walk guest PTEs with WRITE access when unprotecting
  KVM: x86/mmu: Don't try to unprotect an INVALID_GPA
  KVM: x86: Fold retry_instruction() into x86_emulate_instruction()
  ...

* pat_vmx_msrs:
  KVM: nVMX: Use macros and #defines in vmx_restore_vmx_misc()
  KVM: VMX: Open code VMX preemption timer rate mask in its accessor
  KVM: VMX: Move MSR_IA32_VMX_MISC bit defines to asm/vmx.h
  KVM: nVMX: Add a helper to encode VMCS info in MSR_IA32_VMX_BASIC
  KVM: nVMX: Use macros and #defines in vmx_restore_vmx_basic()
  KVM: VMX: Track CPU's MSR_IA32_VMX_BASIC as a single 64-bit value
  KVM: VMX: Move MSR_IA32_VMX_BASIC bit defines to asm/vmx.h
  KVM: x86: Stuff vCPU's PAT with default value at RESET, not creation
  x86/cpu: KVM: Move macro to encode PAT value to common header
  x86/cpu: KVM: Add common defines for architectural memory types (PAT, MTRRs, etc.)

* selftests:
  KVM: selftests: Verify single-stepping a fastpath VM-Exit exits to userspace
  KVM: selftests: Explicitly include committed one-off assets in .gitignore
  KVM: selftests: Add SEV-ES shutdown test
  KVM: selftests: Always unlink memory regions when deleting (VM free)
  KVM: selftests: Remove unused kvm_memcmp_hva_gva()
  KVM: selftests: Re-enable hyperv_evmcs/hyperv_svm_test on bare metal
  KVM: selftests: Move Hyper-V specific functions out of processor.c

* svm:
  KVM: SVM: Track the per-CPU host save area as a VMCB pointer
  KVM: SVM: Add host SEV-ES save area structure into VMCB via a union
  KVM: SVM: Add a helper to convert a SME-aware PA back to a struct page
  KVM: SVM: Remove unnecessary GFP_KERNEL_ACCOUNT in svm_set_nested_state()

* vmx:
  KVM: VMX: Set PFERR_GUEST_{FINAL,PAGE}_MASK if and only if the GVA is valid
  KVM: nVMX: Assert that vcpu->mutex is held when accessing secondary VMCSes
  KVM: nVMX: Explicitly invalidate posted_intr_nv if PI is disabled at VM-Enter
  KVM: x86: Fold kvm_get_apic_interrupt() into kvm_cpu_get_interrupt()
  KVM: nVMX: Detect nested posted interrupt NV at nested VM-Exit injection
  KVM: nVMX: Suppress external interrupt VM-Exit injection if there's no IRQ
  KVM: nVMX: Get to-be-acknowledged IRQ for nested VM-Exit at injection site
  KVM: x86: Move "ack" phase of local APIC IRQ delivery to separate API
  KVM: VMX: Also clear SGX EDECCSSA in KVM CPU caps when SGX is disabled
  KVM: VMX: hyper-v: Prevent impossible NULL pointer dereference in evmcs_load()
  KVM: nVMX: Use vmx_segment_cache_clear() instead of open coded equivalent
  KVM: nVMX: Honor userspace MSR filter lists for nested VM-Enter/VM-Exit
  KVM: VMX: Do not account for temporary memory allocation in ECREATE emulation
  KVM: VMX: Modify the BUILD_BUG_ON_MSG of the 32-bit field in the vmcs_check16 function
sean-jc committed Sep 10, 2024
9 parents 47ac09b + 5fa9f04 + 025dde5 + 4ca077f + 9a5bff7 + 566975f + c32e028 + 32071fa + f300948 commit b4298ba
Showing 58 changed files with 1,882 additions and 1,297 deletions.
23 changes: 19 additions & 4 deletions Documentation/virt/kvm/api.rst
@@ -4214,7 +4214,9 @@ whether or not KVM_CAP_X86_USER_SPACE_MSR's KVM_MSR_EXIT_REASON_FILTER is
enabled. If KVM_MSR_EXIT_REASON_FILTER is enabled, KVM will exit to userspace
on denied accesses, i.e. userspace effectively intercepts the MSR access. If
KVM_MSR_EXIT_REASON_FILTER is not enabled, KVM will inject a #GP into the guest
on denied accesses.
on denied accesses. Note, if an MSR access is denied during emulation of MSR
load/stores during VMX transitions, KVM ignores KVM_MSR_EXIT_REASON_FILTER.
See the below warning for full details.

If an MSR access is allowed by userspace, KVM will emulate and/or virtualize
the access in accordance with the vCPU model. Note, KVM may still ultimately
@@ -4229,9 +4231,22 @@ filtering. In that mode, ``KVM_MSR_FILTER_DEFAULT_DENY`` is invalid and causes
an error.

.. warning::
MSR accesses as part of nested VM-Enter/VM-Exit are not filtered.
This includes both writes to individual VMCS fields and reads/writes
through the MSR lists pointed to by the VMCS.
MSR accesses that are side effects of instruction execution (emulated or
native) are not filtered as hardware does not honor MSR bitmaps outside of
RDMSR and WRMSR, and KVM mimics that behavior when emulating instructions
to avoid pointless divergence from hardware. E.g. RDPID reads MSR_TSC_AUX,
SYSENTER reads the SYSENTER MSRs, etc.

MSRs that are loaded/stored via dedicated VMCS fields are not filtered as
part of VM-Enter/VM-Exit emulation.

MSRs that are loaded/stored via VMX's load/store lists _are_ filtered as part
of VM-Enter/VM-Exit emulation. If an MSR access is denied on VM-Enter, KVM
synthesizes a consistency check VM-Exit (EXIT_REASON_MSR_LOAD_FAIL). If an
MSR access is denied on VM-Exit, KVM synthesizes a VM-Abort. In short, KVM
extends Intel's architectural list of MSRs that cannot be loaded/saved via
the VM-Enter/VM-Exit MSR lists. It is the platform owner's responsibility
to communicate any such restrictions to their end users.

x2APIC MSR accesses cannot be filtered (KVM silently ignores filters that
cover any x2APIC MSRs).
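
To make the above concrete, here is a minimal userspace sketch (not part of
this diff; the VM fd and the choice of MSR are assumptions for illustration)
that installs a deny-by-default filter allowing only guest access to
MSR_TSC_AUX:

#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

static int set_example_msr_filter(int vm_fd)
{
	static __u8 bitmap[1] = { 0x01 };	/* bit 0: allow the base MSR */
	struct kvm_msr_filter filter;

	memset(&filter, 0, sizeof(filter));
	filter.flags = KVM_MSR_FILTER_DEFAULT_DENY;
	filter.ranges[0].flags = KVM_MSR_FILTER_READ | KVM_MSR_FILTER_WRITE;
	filter.ranges[0].base = 0xc0000103;	/* MSR_TSC_AUX */
	filter.ranges[0].nmsrs = 1;
	filter.ranges[0].bitmap = bitmap;

	return ioctl(vm_fd, KVM_X86_SET_MSR_FILTER, &filter);
}

Denied accesses then #GP the guest, or exit to userspace if
KVM_MSR_EXIT_REASON_FILTER is enabled, subject to the exceptions called out
in the warning above (instruction side effects, dedicated VMCS fields, and
x2APIC MSRs).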
1 change: 1 addition & 0 deletions arch/x86/include/asm/cpuid.h
@@ -179,6 +179,7 @@ static __always_inline bool cpuid_function_is_indexed(u32 function)
case 0x1d:
case 0x1e:
case 0x1f:
case 0x24:
case 0x8000001d:
return true;
}
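
/*
 * Illustrative sketch (not part of this diff): leaf 0x24 enumerates AVX10
 * sub-leaves in ECX, which is why it is now treated as indexed -- callers
 * must pin ECX to a sub-leaf, e.g. via cpuid_count().
 */
static inline u8 avx10_version_sketch(void)
{
	u32 eax, ebx, ecx, edx;

	cpuid_count(0x24, 0, &eax, &ebx, &ecx, &edx);

	return ebx & 0xff;	/* the AVX10 version lives in EBX[7:0] */
}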
2 changes: 1 addition & 1 deletion arch/x86/include/asm/kvm-x86-ops.h
@@ -125,7 +125,7 @@ KVM_X86_OP_OPTIONAL(mem_enc_unregister_region)
KVM_X86_OP_OPTIONAL(vm_copy_enc_context_from)
KVM_X86_OP_OPTIONAL(vm_move_enc_context_from)
KVM_X86_OP_OPTIONAL(guest_memory_reclaimed)
KVM_X86_OP(get_msr_feature)
KVM_X86_OP(get_feature_msr)
KVM_X86_OP(check_emulate_instruction)
KVM_X86_OP(apic_init_signal_blocked)
KVM_X86_OP_OPTIONAL(enable_l2_tlb_flush)
22 changes: 16 additions & 6 deletions arch/x86/include/asm/kvm_host.h
@@ -211,6 +211,7 @@ enum exit_fastpath_completion {
EXIT_FASTPATH_NONE,
EXIT_FASTPATH_REENTER_GUEST,
EXIT_FASTPATH_EXIT_HANDLED,
EXIT_FASTPATH_EXIT_USERSPACE,
};
typedef enum exit_fastpath_completion fastpath_t;
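
/*
 * Illustrative sketch (reconstructed, not verbatim from this merge): how the
 * new EXIT_FASTPATH_EXIT_USERSPACE value gets used, modeled on the "Add
 * fastpath handling of HLT VM-Exits" change.  A '0' return from
 * kvm_emulate_halt() means userspace owns the exit (e.g. KVM_EXIT_HLT).
 */
static fastpath_t handle_fastpath_hlt_sketch(struct kvm_vcpu *vcpu)
{
	int ret;

	kvm_vcpu_srcu_read_lock(vcpu);
	ret = kvm_emulate_halt(vcpu);
	kvm_vcpu_srcu_read_unlock(vcpu);

	if (!ret)
		return EXIT_FASTPATH_EXIT_USERSPACE;

	/* A pending event means the vCPU shouldn't actually stay halted. */
	if (kvm_vcpu_has_events(vcpu))
		return EXIT_FASTPATH_EXIT_HANDLED;

	return EXIT_FASTPATH_REENTER_GUEST;
}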

@@ -280,10 +281,6 @@ enum x86_intercept_stage;
#define PFERR_PRIVATE_ACCESS BIT_ULL(49)
#define PFERR_SYNTHETIC_MASK (PFERR_IMPLICIT_ACCESS | PFERR_PRIVATE_ACCESS)

#define PFERR_NESTED_GUEST_PAGE (PFERR_GUEST_PAGE_MASK | \
PFERR_WRITE_MASK | \
PFERR_PRESENT_MASK)

/* apic attention bits */
#define KVM_APIC_CHECK_VAPIC 0
/*
@@ -1727,6 +1724,8 @@ struct kvm_x86_ops {
void (*enable_nmi_window)(struct kvm_vcpu *vcpu);
void (*enable_irq_window)(struct kvm_vcpu *vcpu);
void (*update_cr8_intercept)(struct kvm_vcpu *vcpu, int tpr, int irr);

const bool x2apic_icr_is_split;
const unsigned long required_apicv_inhibits;
bool allow_apicv_in_x2apic_without_x2apic_virtualization;
void (*refresh_apicv_exec_ctrl)(struct kvm_vcpu *vcpu);
@@ -1806,7 +1805,7 @@ struct kvm_x86_ops {
int (*vm_move_enc_context_from)(struct kvm *kvm, unsigned int source_fd);
void (*guest_memory_reclaimed)(struct kvm *kvm);

int (*get_msr_feature)(struct kvm_msr_entry *entry);
int (*get_feature_msr)(u32 msr, u64 *data);

int (*check_emulate_instruction)(struct kvm_vcpu *vcpu, int emul_type,
void *insn, int insn_len);
@@ -2060,6 +2059,8 @@ void kvm_prepare_emulation_failure_exit(struct kvm_vcpu *vcpu);

void kvm_enable_efer_bits(u64);
bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer);
int kvm_get_msr_with_filter(struct kvm_vcpu *vcpu, u32 index, u64 *data);
int kvm_set_msr_with_filter(struct kvm_vcpu *vcpu, u32 index, u64 data);
int __kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data, bool host_initiated);
int kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data);
int kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data);
@@ -2136,7 +2137,15 @@ int kvm_get_nr_pending_nmis(struct kvm_vcpu *vcpu);

void kvm_update_dr7(struct kvm_vcpu *vcpu);

int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn);
bool __kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
bool always_retry);

static inline bool kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu,
gpa_t cr2_or_gpa)
{
return __kvm_mmu_unprotect_gfn_and_retry(vcpu, cr2_or_gpa, false);
}

void kvm_mmu_free_roots(struct kvm *kvm, struct kvm_mmu *mmu,
ulong roots_to_free);
void kvm_mmu_free_guest_mode_roots(struct kvm *kvm, struct kvm_mmu *mmu);
@@ -2254,6 +2263,7 @@ int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v);
int kvm_cpu_has_interrupt(struct kvm_vcpu *vcpu);
int kvm_cpu_has_extint(struct kvm_vcpu *v);
int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu);
int kvm_cpu_get_extint(struct kvm_vcpu *v);
int kvm_cpu_get_interrupt(struct kvm_vcpu *v);
void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);

34 changes: 20 additions & 14 deletions arch/x86/include/asm/msr-index.h
@@ -36,6 +36,20 @@
#define EFER_FFXSR (1<<_EFER_FFXSR)
#define EFER_AUTOIBRS (1<<_EFER_AUTOIBRS)

/*
* Architectural memory types that are common to MTRRs, PAT, VMX MSRs, etc.
* Most MSRs support/allow only a subset of memory types, but the values
* themselves are common across all relevant MSRs.
*/
#define X86_MEMTYPE_UC 0ull /* Uncacheable, a.k.a. Strong Uncacheable */
#define X86_MEMTYPE_WC 1ull /* Write Combining */
/* RESERVED 2 */
/* RESERVED 3 */
#define X86_MEMTYPE_WT 4ull /* Write Through */
#define X86_MEMTYPE_WP 5ull /* Write Protected */
#define X86_MEMTYPE_WB 6ull /* Write Back */
#define X86_MEMTYPE_UC_MINUS 7ull /* Weak Uncacheable (PAT only) */

/* FRED MSRs */
#define MSR_IA32_FRED_RSP0 0x1cc /* Level 0 stack pointer */
#define MSR_IA32_FRED_RSP1 0x1cd /* Level 1 stack pointer */
@@ -363,6 +377,12 @@

#define MSR_IA32_CR_PAT 0x00000277

#define PAT_VALUE(p0, p1, p2, p3, p4, p5, p6, p7) \
((X86_MEMTYPE_ ## p0) | (X86_MEMTYPE_ ## p1 << 8) | \
(X86_MEMTYPE_ ## p2 << 16) | (X86_MEMTYPE_ ## p3 << 24) | \
(X86_MEMTYPE_ ## p4 << 32) | (X86_MEMTYPE_ ## p5 << 40) | \
(X86_MEMTYPE_ ## p6 << 48) | (X86_MEMTYPE_ ## p7 << 56))
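
/*
 * Example expansion (worked out here for illustration): the power-on/reset
 * PAT layout -- WB, WT, UC-, UC, repeated -- with entry 0 in bits 7:0,
 * entry 1 in bits 15:8, and so on:
 *
 *	PAT_VALUE(WB, WT, UC_MINUS, UC, WB, WT, UC_MINUS, UC)
 *		== 0x0007040600070406ull
 */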

#define MSR_IA32_DEBUGCTLMSR 0x000001d9
#define MSR_IA32_LASTBRANCHFROMIP 0x000001db
#define MSR_IA32_LASTBRANCHTOIP 0x000001dc
@@ -1157,15 +1177,6 @@
#define MSR_IA32_VMX_VMFUNC 0x00000491
#define MSR_IA32_VMX_PROCBASED_CTLS3 0x00000492

/* VMX_BASIC bits and bitmasks */
#define VMX_BASIC_VMCS_SIZE_SHIFT 32
#define VMX_BASIC_TRUE_CTLS (1ULL << 55)
#define VMX_BASIC_64 0x0001000000000000LLU
#define VMX_BASIC_MEM_TYPE_SHIFT 50
#define VMX_BASIC_MEM_TYPE_MASK 0x003c000000000000LLU
#define VMX_BASIC_MEM_TYPE_WB 6LLU
#define VMX_BASIC_INOUT 0x0040000000000000LLU

/* Resctrl MSRs: */
/* - Intel: */
#define MSR_IA32_L3_QOS_CFG 0xc81
@@ -1183,11 +1194,6 @@
#define MSR_IA32_SMBA_BW_BASE 0xc0000280
#define MSR_IA32_EVT_CFG_BASE 0xc0000400

/* MSR_IA32_VMX_MISC bits */
#define MSR_IA32_VMX_MISC_INTEL_PT (1ULL << 14)
#define MSR_IA32_VMX_MISC_VMWRITE_SHADOW_RO_FIELDS (1ULL << 29)
#define MSR_IA32_VMX_MISC_PREEMPTION_TIMER_SCALE 0x1F

/* AMD-V MSRs */
#define MSR_VM_CR 0xc0010114
#define MSR_VM_IGNNE 0xc0010115
20 changes: 15 additions & 5 deletions arch/x86/include/asm/svm.h
@@ -516,6 +516,20 @@ struct ghcb {
u32 ghcb_usage;
} __packed;

struct vmcb {
struct vmcb_control_area control;
union {
struct vmcb_save_area save;

/*
* For SEV-ES VMs, the save area in the VMCB is used only to
* save/load host state. Guest state resides in a separate
* page, the aptly named VM Save Area (VMSA), that is encrypted
* with the guest's private key.
*/
struct sev_es_save_area host_sev_es_save;
};
} __packed;
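
/*
 * Illustrative accessor (not part of this diff): with the union in place,
 * the host SEV-ES state can be reached by name instead of via manual
 * pointer arithmetic on the save area.
 */
static inline struct sev_es_save_area *host_save_area_sketch(struct vmcb *vmcb)
{
	/* Valid only for a per-CPU *host* save area on an SEV-ES capable CPU. */
	return &vmcb->host_sev_es_save;
}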

#define EXPECTED_VMCB_SAVE_AREA_SIZE 744
#define EXPECTED_GHCB_SAVE_AREA_SIZE 1032
@@ -532,6 +546,7 @@ static inline void __unused_size_checks(void)
BUILD_BUG_ON(sizeof(struct ghcb_save_area) != EXPECTED_GHCB_SAVE_AREA_SIZE);
BUILD_BUG_ON(sizeof(struct sev_es_save_area) != EXPECTED_SEV_ES_SAVE_AREA_SIZE);
BUILD_BUG_ON(sizeof(struct vmcb_control_area) != EXPECTED_VMCB_CONTROL_AREA_SIZE);
BUILD_BUG_ON(offsetof(struct vmcb, save) != EXPECTED_VMCB_CONTROL_AREA_SIZE);
BUILD_BUG_ON(sizeof(struct ghcb) != EXPECTED_GHCB_SIZE);

/* Check offsets of reserved fields */
@@ -568,11 +583,6 @@ static inline void __unused_size_checks(void)
BUILD_BUG_RESERVED_OFFSET(ghcb, 0xff0);
}

struct vmcb {
struct vmcb_control_area control;
struct vmcb_save_area save;
} __packed;

#define SVM_CPUID_FUNC 0x8000000a

#define SVM_SELECTOR_S_SHIFT 4
40 changes: 30 additions & 10 deletions arch/x86/include/asm/vmx.h
@@ -122,19 +122,17 @@

#define VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR 0x000011ff

#define VMX_MISC_PREEMPTION_TIMER_RATE_MASK 0x0000001f
#define VMX_MISC_SAVE_EFER_LMA 0x00000020
#define VMX_MISC_ACTIVITY_HLT 0x00000040
#define VMX_MISC_ACTIVITY_WAIT_SIPI 0x00000100
#define VMX_MISC_ZERO_LEN_INS 0x40000000
#define VMX_MISC_MSR_LIST_MULTIPLIER 512

/* VMFUNC functions */
#define VMFUNC_CONTROL_BIT(x) BIT((VMX_FEATURE_##x & 0x1f) - 28)

#define VMX_VMFUNC_EPTP_SWITCHING VMFUNC_CONTROL_BIT(EPTP_SWITCHING)
#define VMFUNC_EPTP_ENTRIES 512

#define VMX_BASIC_32BIT_PHYS_ADDR_ONLY BIT_ULL(48)
#define VMX_BASIC_DUAL_MONITOR_TREATMENT BIT_ULL(49)
#define VMX_BASIC_INOUT BIT_ULL(54)
#define VMX_BASIC_TRUE_CTLS BIT_ULL(55)

static inline u32 vmx_basic_vmcs_revision_id(u64 vmx_basic)
{
return vmx_basic & GENMASK_ULL(30, 0);
@@ -145,9 +143,30 @@ static inline u32 vmx_basic_vmcs_size(u64 vmx_basic)
return (vmx_basic & GENMASK_ULL(44, 32)) >> 32;
}

static inline u32 vmx_basic_vmcs_mem_type(u64 vmx_basic)
{
return (vmx_basic & GENMASK_ULL(53, 50)) >> 50;
}

static inline u64 vmx_basic_encode_vmcs_info(u32 revision, u16 size, u8 memtype)
{
return revision | ((u64)size << 32) | ((u64)memtype << 50);
}
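
/*
 * Illustrative round-trip check (not part of this diff): the decode helpers
 * above invert vmx_basic_encode_vmcs_info().
 */
static inline void vmx_basic_roundtrip_sketch(void)
{
	u64 basic = vmx_basic_encode_vmcs_info(1, 0x1000, X86_MEMTYPE_WB);

	WARN_ON(vmx_basic_vmcs_revision_id(basic) != 1);
	WARN_ON(vmx_basic_vmcs_size(basic) != 0x1000);
	WARN_ON(vmx_basic_vmcs_mem_type(basic) != X86_MEMTYPE_WB);
}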

#define VMX_MISC_SAVE_EFER_LMA BIT_ULL(5)
#define VMX_MISC_ACTIVITY_HLT BIT_ULL(6)
#define VMX_MISC_ACTIVITY_SHUTDOWN BIT_ULL(7)
#define VMX_MISC_ACTIVITY_WAIT_SIPI BIT_ULL(8)
#define VMX_MISC_INTEL_PT BIT_ULL(14)
#define VMX_MISC_RDMSR_IN_SMM BIT_ULL(15)
#define VMX_MISC_VMXOFF_BLOCK_SMI BIT_ULL(28)
#define VMX_MISC_VMWRITE_SHADOW_RO_FIELDS BIT_ULL(29)
#define VMX_MISC_ZERO_LEN_INS BIT_ULL(30)
#define VMX_MISC_MSR_LIST_MULTIPLIER 512

static inline int vmx_misc_preemption_timer_rate(u64 vmx_misc)
{
return vmx_misc & VMX_MISC_PREEMPTION_TIMER_RATE_MASK;
return vmx_misc & GENMASK_ULL(4, 0);
}
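
/*
 * Usage sketch (illustrative, not part of this diff): the preemption timer
 * ticks each time TSC bit N flips, i.e. once per 2^N TSC cycles, where N is
 * the rate field decoded above, so the timer frequency is simply:
 */
static inline u64 preemption_timer_hz_sketch(u64 vmx_misc, u64 tsc_hz)
{
	return tsc_hz >> vmx_misc_preemption_timer_rate(vmx_misc);
}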

static inline int vmx_misc_cr3_count(u64 vmx_misc)
@@ -508,9 +527,10 @@ enum vmcs_field {
#define VMX_EPTP_PWL_4 0x18ull
#define VMX_EPTP_PWL_5 0x20ull
#define VMX_EPTP_AD_ENABLE_BIT (1ull << 6)
/* The EPTP memtype is encoded in bits 2:0, i.e. doesn't need to be shifted. */
#define VMX_EPTP_MT_MASK 0x7ull
#define VMX_EPTP_MT_WB 0x6ull
#define VMX_EPTP_MT_UC 0x0ull
#define VMX_EPTP_MT_WB X86_MEMTYPE_WB
#define VMX_EPTP_MT_UC X86_MEMTYPE_UC
#define VMX_EPT_READABLE_MASK 0x1ull
#define VMX_EPT_WRITABLE_MASK 0x2ull
#define VMX_EPT_EXECUTABLE_MASK 0x4ull
6 changes: 6 additions & 0 deletions arch/x86/kernel/cpu/mtrr/mtrr.c
@@ -55,6 +55,12 @@

#include "mtrr.h"

static_assert(X86_MEMTYPE_UC == MTRR_TYPE_UNCACHABLE);
static_assert(X86_MEMTYPE_WC == MTRR_TYPE_WRCOMB);
static_assert(X86_MEMTYPE_WT == MTRR_TYPE_WRTHROUGH);
static_assert(X86_MEMTYPE_WP == MTRR_TYPE_WRPROT);
static_assert(X86_MEMTYPE_WB == MTRR_TYPE_WRBACK);

/* arch_phys_wc_add returns an MTRR register index plus this offset. */
#define MTRR_TO_PHYS_WC_OFFSET 1000

6 changes: 4 additions & 2 deletions arch/x86/kvm/Kconfig
@@ -144,8 +144,10 @@ config KVM_AMD_SEV
select HAVE_KVM_ARCH_GMEM_PREPARE
select HAVE_KVM_ARCH_GMEM_INVALIDATE
help
Provides support for launching Encrypted VMs (SEV) and Encrypted VMs
with Encrypted State (SEV-ES) on AMD processors.
Provides support for launching encrypted VMs which use Secure
Encrypted Virtualization (SEV), Secure Encrypted Virtualization with
Encrypted State (SEV-ES), and Secure Encrypted Virtualization with
Secure Nested Paging (SEV-SNP) technologies on AMD processors.

config KVM_SMM
bool "System Management Mode emulation"
30 changes: 28 additions & 2 deletions arch/x86/kvm/cpuid.c
@@ -705,7 +705,7 @@ void kvm_set_cpu_caps(void)

kvm_cpu_cap_init_kvm_defined(CPUID_7_1_EDX,
F(AVX_VNNI_INT8) | F(AVX_NE_CONVERT) | F(PREFETCHITI) |
F(AMX_COMPLEX)
F(AMX_COMPLEX) | F(AVX10)
);

kvm_cpu_cap_init_kvm_defined(CPUID_7_2_EDX,
@@ -721,6 +721,10 @@ void kvm_set_cpu_caps(void)
SF(SGX1) | SF(SGX2) | SF(SGX_EDECCSSA)
);

kvm_cpu_cap_init_kvm_defined(CPUID_24_0_EBX,
F(AVX10_128) | F(AVX10_256) | F(AVX10_512)
);

kvm_cpu_cap_mask(CPUID_8000_0001_ECX,
F(LAHF_LM) | F(CMP_LEGACY) | 0 /*SVM*/ | 0 /* ExtApicSpace */ |
F(CR8_LEGACY) | F(ABM) | F(SSE4A) | F(MISALIGNSSE) |
@@ -949,7 +953,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
switch (function) {
case 0:
/* Limited to the highest leaf implemented in KVM. */
entry->eax = min(entry->eax, 0x1fU);
entry->eax = min(entry->eax, 0x24U);
break;
case 1:
cpuid_entry_override(entry, CPUID_1_EDX);
@@ -1174,6 +1178,28 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
break;
}
break;
case 0x24: {
u8 avx10_version;

if (!kvm_cpu_cap_has(X86_FEATURE_AVX10)) {
entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
break;
}

/*
* The AVX10 version is encoded in EBX[7:0]. Note, the version
* is guaranteed to be >=1 if AVX10 is supported. Note #2, the
* version needs to be captured before overriding EBX features!
*/
avx10_version = min_t(u8, entry->ebx & 0xff, 1);
cpuid_entry_override(entry, CPUID_24_0_EBX);
entry->ebx |= avx10_version;

entry->eax = 0;
entry->ecx = 0;
entry->edx = 0;
break;
}
case KVM_CPUID_SIGNATURE: {
const u32 *sigptr = (const u32 *)KVM_SIGNATURE;
entry->eax = KVM_CPUID_FEATURES;
