@Patrick just noticed xpdf silently fails to run when trying to open a PDF in 15.0.0.6.6
can you reproduce that? Any logs needed?
Scratch that, the file is malformed
@Patrick re the git instructions on the dev page: git doesn’t seem to recognize the --recursive-submodules parameter, but this worked:
git checkout --recurse-submodules 15.0.0.7.1-developers-only
Thanks, fixed.
Yes I’m using it as we speak
Could you please review the following KVM parameters and check whether we’re already using secure defaults? //cc @madaidan
kvm.ignore_msrs= [KVM] Ignore guest accesses to unhandled MSRs.
    Default is 0 (don’t ignore, but inject #GP)

kvm.enable_vmware_backdoor= [KVM] Support VMware backdoor PV interface.
    Default is false (don’t support).

kvm.mmu_audit= [KVM] This is a R/W parameter which allows audit of KVM MMU at runtime.
    Default is 0 (off)

kvm.nx_huge_pages= [KVM] Controls the software workaround for the X86_BUG_ITLB_MULTIHIT bug.
    force : Always deploy workaround.
    off : Never deploy workaround.
    auto : Deploy workaround based on the presence of X86_BUG_ITLB_MULTIHIT.
    Default is 'auto'. If the software workaround is enabled for the host, guests need not enable it for nested guests.

kvm.nx_huge_pages_recovery_ratio= [KVM] Controls how many 4KiB pages are periodically zapped back to huge pages. 0 disables the recovery; otherwise, if the value is N, KVM will zap 1/Nth of the 4KiB pages every minute. The default is 60.

kvm-amd.nested= [KVM,AMD] Allow nested virtualization in KVM/SVM.
    Default is 1 (enabled)

kvm-amd.npt= [KVM,AMD] Disable nested paging (virtualized MMU) for all guests.
    Default is 1 (enabled) if in 64-bit or 32-bit PAE mode.

kvm-arm.vgic_v3_group0_trap= [KVM,ARM] Trap guest accesses to GICv3 group-0 system registers.

kvm-arm.vgic_v3_group1_trap= [KVM,ARM] Trap guest accesses to GICv3 group-1 system registers.

kvm-arm.vgic_v3_common_trap= [KVM,ARM] Trap guest accesses to GICv3 common system registers.

kvm-arm.vgic_v4_enable= [KVM,ARM] Allow use of GICv4 for direct injection of LPIs.

kvm-intel.ept= [KVM,Intel] Disable extended page tables (virtualized MMU) support on capable Intel chips.
    Default is 1 (enabled)

kvm-intel.emulate_invalid_guest_state= [KVM,Intel] Enable emulation of invalid guest states.
    Default is 0 (disabled)

kvm-intel.flexpriority= [KVM,Intel] Disable FlexPriority feature (TPR shadow).
    Default is 1 (enabled)

kvm-intel.nested= [KVM,Intel] Enable VMX nesting (nVMX).
    Default is 0 (disabled)

kvm-intel.unrestricted_guest= [KVM,Intel] Disable unrestricted guest feature (virtualized real and unpaged mode) on capable Intel chips.
    Default is 1 (enabled)

kvm-intel.vmentry_l1d_flush= [KVM,Intel] Mitigation for L1 Terminal Fault CVE-2018-3620.
    Valid arguments: never, cond, always
    always : L1D cache flush on every VMENTER.
    cond : Flush L1D on VMENTER only when the code between VMEXIT and VMENTER can leak host memory.
    never : Disables the mitigation.
    Default is cond (do L1 cache flush in specific instances).

kvm-intel.vpid= [KVM,Intel] Disable Virtual Processor Identification feature (tagged TLBs) on capable Intel chips.
    Default is 1 (enabled)
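To make the question concrete, here is a hypothetical command-line fragment combining the hardened values of the parameters quoted above. This is a sketch for discussion, not a tested recommendation from this thread; which settings apply depends on the hardware (the kvm-intel ones are Intel-only).

```shell
# Hypothetical hardened fragment for the kernel command line, built from
# the parameters documented above; values are the "secure" choices being
# discussed, not tested recommendations.
KVM_HARDENING="kvm.enable_vmware_backdoor=false kvm.nx_huge_pages=force kvm-intel.vmentry_l1d_flush=always"
echo "$KVM_HARDENING"
```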
Could you please experiment with the kernel boot parameter
l1tf=full,force
and make sure it doesn’t break KVM hosts or guests?
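A quick way to verify what the kernel actually did after booting with l1tf=full,force is the standard sysfs vulnerabilities interface (this is a sketch; the file only exists on x86 kernels that expose it):

```shell
# Report the kernel's view of the L1TF mitigation state; the path is the
# standard sysfs vulnerabilities interface on recent x86 kernels.
f=/sys/devices/system/cpu/vulnerabilities/l1tf
if [ -r "$f" ]; then
    cat "$f"
else
    echo "no l1tf status file on this kernel/arch"
fi
```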
I disable that
Irrelevant since hugepages are disabled for guests for security reasons:
Enabled for both AMD and Intel using hap tag
The rest apply to Intel which I don’t have.
(disabled via libvirt)
Guest boot ok.
I don’t know how relevant my experience is on a non-Intel system. The mitigation is for Intel CPUs only and therefore would only be active on that hardware.
Here are some benchmarks:
KVM-related kernel parameters: it is one thing to disable these in VM XML settings, but what about custom-created VMs? Wouldn’t it still be better if we (also) set these options on the kernel boot parameter command line?
Sure. If you are talking about kernels that see wider use outside Whonix like linux-hardened then this is a very sensible thing to do.
It’s for the security-misc package, which is already used by some Debian users and Kicksecure, and will be used by Whonix-Host.
Could you please add any hardening parameters that make sense to the security-misc package?
You’ve included everything important from the L1TF thread. Reviewing the KVM options from here, I don’t think it would ever make sense in the context of a host OS. To tighten KVM options for non-Whonix guests, one would use either Kicksecure or its config, which includes my changes.
As an aside, it seems the new kernel command-line option turns on all mitigations, including disabling SMT. I think it’s more manageable if you switch to using that instead of specifying every knob:
mitigations=auto,nosmt
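For reference, wiring that up on a Debian-style system might look like the following (the file path and quiet default are assumptions about a typical setup, not taken from this thread):

```shell
# /etc/default/grub (Debian layout, assumed): append the option to the
# default command line, then regenerate with `sudo update-grub`.
GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=auto,nosmt"
```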
There is a lot of use of KVM outside of user-visible VMs/XMLs. For example, Jenkins or Docker can run in KVM. In such situations the libvirt XML files are often auto-generated, without the user being aware of any hardening that should be manually injected at XML creation time. Since the KVM-related kernel hardening parameters do essentially nothing for non-users of KVM (I think?), I am very much for enabling these by default.
answered here Should all kernel patches for CPU bugs be unconditionally enabled? Vs Performance vs Applicability - #21 by Patrick
The only KVM security options are:
kvm.enable_vmware_backdoor= [KVM] Support VMware backdoor PV interface.
    Default is false (don't support).
Here it’s already set to false. Do you still want to explicitly pass that on the command line?
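One way to confirm the running value is via the module's sysfs parameter (a sketch; the file is only present once the kvm module is loaded):

```shell
# Read the effective value of kvm.enable_vmware_backdoor; boolean module
# parameters are shown as Y/N, so "N" means the backdoor is disabled.
p=/sys/module/kvm/parameters/enable_vmware_backdoor
if [ -r "$p" ]; then
    cat "$p"
else
    echo "kvm module not loaded; parameter not visible"
fi
```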
kvm.nx_huge_pages=
[KVM] Controls the software workaround for the
X86_BUG_ITLB_MULTIHIT bug.
force : Always deploy workaround.
off : Never deploy workaround.
auto : Deploy workaround based on the presence of
X86_BUG_ITLB_MULTIHIT.
Default is 'auto'.
If the software workaround is enabled for the host,
guests need not enable it for nested guests.
Here it is already enabled whenever the erratum is detected, but we should always force it, I guess?
https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/multihit.html
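To see both whether the host CPU is affected and what the workaround is currently set to, one can check the two sysfs files below (paths per the kernel admin guide linked above; they only exist on an x86 host with the kvm module loaded, hence the guards):

```shell
# Check the ITLB_MULTIHIT erratum status and the effective
# nx_huge_pages setting side by side.
for f in /sys/devices/system/cpu/vulnerabilities/itlb_multihit \
         /sys/module/kvm/parameters/nx_huge_pages; do
    if [ -r "$f" ]; then
        printf '%s: %s\n' "$f" "$(cat "$f")"
    else
        printf '%s: not available\n' "$f"
    fi
done
```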
kvm-intel.vmentry_l1d_flush= [KVM,Intel] Mitigation for L1 Terminal Fault CVE-2018-3620.
Valid arguments: never, cond, always
always: L1D cache flush on every VMENTER.
cond: Flush L1D on VMENTER only when the code between
VMEXIT and VMENTER can leak host memory.
never: Disables the mitigation
Default is cond (do L1 cache flush in specific instances)
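If it helps the cond-vs-always comparison, the effective flush mode is visible both as a kvm_intel module parameter and in the l1tf vulnerability line (which reports something like "VMX: conditional cache flushes"). A sketch, guarded for non-Intel hosts:

```shell
# Report the current vmentry_l1d_flush mode and the kernel's l1tf summary.
p=/sys/module/kvm_intel/parameters/vmentry_l1d_flush
if [ -r "$p" ]; then
    cat "$p"
else
    echo "kvm_intel not loaded"
fi
grep -H . /sys/devices/system/cpu/vulnerabilities/l1tf 2>/dev/null || true
```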
It is unclear here what we gain from going to always from conditional.
In Linux 5.6, KVM is adding a new combined mitigation to protect against L1TF and Spectre v1 used together. AFAIK this will be baked into the code.
Xen has similar stuff added, but I have no idea how it works.
Does KVM also work on Safe use of Hyperthreading (HT) similar to Xen?