@Patrick git instruction on the dev page - git doesn’t seem to recognize the --recursive-submodules parameter but this worked:
git checkout --recurse-submodules 15.0.0.7.1-developers-only
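For reference, the same long-form flag also works at clone time. A throwaway-repository sketch demonstrating that the correct spelling is `--recurse-submodules` (not `--recursive-submodules`); the repository here is synthetic and only the branch name mirrors the thread:

```shell
# Demonstrate the --recurse-submodules flag in a throwaway repository.
# The branch name mirrors the thread; the repository itself is synthetic.
tmp="$(mktemp -d)"
git init -q "$tmp/repo" && cd "$tmp/repo"
git -c user.name=t -c user.email=t@example.org \
    commit -q --allow-empty -m init
git branch 15.0.0.7.1-developers-only
# Long form of the flag; also updates submodule worktrees when present:
git checkout -q --recurse-submodules 15.0.0.7.1-developers-only
git rev-parse --abbrev-ref HEAD   # prints 15.0.0.7.1-developers-only
```

For a fresh checkout, `git clone --recurse-submodules <url>` achieves the same in one step.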
Thanks, fixed.
Does shared folder auto-mounting still work for you in Whonix and Kicksecure? @Hulahoop
Yes, I’m using it as we speak.
Could you please review the following KVM parameters and check whether we’re already using secure defaults? //cc @madaidan
kvm.ignore_msrs= [KVM] Ignore guest accesses to unhandled MSRs. Default is 0 (don’t ignore, but inject #GP).

kvm.enable_vmware_backdoor= [KVM] Support VMware backdoor PV interface. Default is false (don’t support).

kvm.mmu_audit= [KVM] This is a R/W parameter which allows auditing the KVM MMU at runtime. Default is 0 (off).

kvm.nx_huge_pages= [KVM] Controls the software workaround for the X86_BUG_ITLB_MULTIHIT bug.
force: Always deploy workaround.
off: Never deploy workaround.
auto: Deploy workaround based on the presence of X86_BUG_ITLB_MULTIHIT.
Default is ‘auto’. If the software workaround is enabled for the host, guests do not need to enable it for nested guests.

kvm.nx_huge_pages_recovery_ratio= [KVM] Controls how many 4KiB pages are periodically zapped back to huge pages. 0 disables the recovery; otherwise, if the value is N, KVM will zap 1/Nth of the 4KiB pages every minute. The default is 60.

kvm-amd.nested= [KVM,AMD] Allow nested virtualization in KVM/SVM. Default is 1 (enabled).

kvm-amd.npt= [KVM,AMD] Disable nested paging (virtualized MMU) for all guests. Default is 1 (enabled) if in 64-bit or 32-bit PAE mode.

kvm-arm.vgic_v3_group0_trap= [KVM,ARM] Trap guest accesses to GICv3 group-0 system registers.

kvm-arm.vgic_v3_group1_trap= [KVM,ARM] Trap guest accesses to GICv3 group-1 system registers.

kvm-arm.vgic_v3_common_trap= [KVM,ARM] Trap guest accesses to GICv3 common system registers.

kvm-arm.vgic_v4_enable= [KVM,ARM] Allow use of GICv4 for direct injection of LPIs.

kvm-intel.ept= [KVM,Intel] Disable extended page tables (virtualized MMU) support on capable Intel chips. Default is 1 (enabled).

kvm-intel.emulate_invalid_guest_state= [KVM,Intel] Enable emulation of invalid guest states. Default is 0 (disabled).

kvm-intel.flexpriority= [KVM,Intel] Disable FlexPriority feature (TPR shadow). Default is 1 (enabled).

kvm-intel.nested= [KVM,Intel] Enable VMX nesting (nVMX). Default is 0 (disabled).

kvm-intel.unrestricted_guest= [KVM,Intel] Disable unrestricted guest feature (virtualized real and unpaged mode) on capable Intel chips. Default is 1 (enabled).

kvm-intel.vmentry_l1d_flush= [KVM,Intel] Mitigation for L1 Terminal Fault CVE-2018-3620.
Valid arguments: never, cond, always.
always: L1D cache flush on every VMENTER.
cond: Flush L1D on VMENTER only when the code between VMEXIT and VMENTER can leak host memory.
never: Disables the mitigation.
Default is cond (do L1 cache flush in specific instances).

kvm-intel.vpid= [KVM,Intel] Disable Virtual Processor Identification feature (tagged TLBs) on capable Intel chips. Default is 1 (enabled).
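The defaults quoted above are compile-time defaults; the values actually in effect on a given host can be audited at runtime. A minimal sketch, assuming the standard sysfs layout (which modules are loaded, and hence which parameters appear, depends on the host):

```shell
# Walk the KVM module parameter directories under /sys/module and print
# each parameter with its current value. Modules that are not loaded
# (e.g. kvm_amd on an Intel host) are reported as such.
for mod in kvm kvm_intel kvm_amd; do
    dir="/sys/module/$mod/parameters"
    if [ -d "$dir" ]; then
        for f in "$dir"/*; do
            printf '%s.%s=%s\n' "$mod" "${f##*/}" "$(cat "$f" 2>/dev/null)"
        done
    else
        echo "$mod: not loaded"
    fi
done
```

This is read-only and safe to run; changing a value requires passing the module parameter on the kernel command line or via modprobe options.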
Could you please experiment with the kernel boot parameter
l1tf=full,force
and make sure it doesn’t break KVM hosts or guests?
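One quick sanity check after rebooting with that parameter is the kernel’s own vulnerability report. This reads the standard sysfs path; the exact output text depends on kernel version and CPU:

```shell
# Report the kernel's current L1TF mitigation status. On kernels or
# architectures without this interface, say so instead of failing.
f=/sys/devices/system/cpu/vulnerabilities/l1tf
if [ -r "$f" ]; then
    cat "$f"
else
    echo "l1tf status not exposed on this kernel/architecture"
fi
```

With l1tf=full,force in effect one would expect the report to mention SMT being disabled and L1D flushes being performed.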
I disable that
Irrelevant, since hugepages are disabled for guests for security reasons.
Enabled for both AMD and Intel using hap tag
The rest apply to Intel which I don’t have.
(disabled = via libvirt)
Guest boot ok.
I don’t know how relevant my experience is on a non-Intel system. The mitigation is for Intel CPUs only and therefore would only be active on that hardware.
Here are some benchmarks:
KVM-related kernel parameters: it is one thing to disable these in VM XML settings, but what about custom-created VMs? Wouldn’t it still be better if we (also) set these options on the kernel command line?
Sure. If you are talking about kernels that see wider use outside Whonix like linux-hardened then this is a very sensible thing to do.
It’s for security-misc package which is already used by some Debian users, Kicksecure and will be used by Whonix-Host.
Could you please add any hardening parameters that make sense to the security-misc package?
You’ve included everything important from the L1TF thread. Reviewing the KVM options from here, I don’t think it would ever make sense in the context of a host OS. To tighten KVM options for non-Whonix guests, one would either use Kicksecure or its config, which includes my changes.
As an aside, it seems the new kernel command line option turns all mitigations on, including disabling SMT. I think it’s more manageable if you switch to using that instead of specifying every knob.
mitigations=auto,nosmt
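To make the switch persistent, it goes on the kernel command line via the bootloader. An illustrative GRUB fragment (merge with any existing GRUB_CMDLINE_LINUX content rather than replacing it, then regenerate the config):

```shell
# /etc/default/grub -- illustrative fragment.
# Enables all CPU vulnerability mitigations and disables SMT in one knob,
# replacing the need to specify each mitigation parameter individually.
GRUB_CMDLINE_LINUX="mitigations=auto,nosmt"
# Afterwards, as root:
#   update-grub    (Debian/Kicksecure/Whonix)
# and reboot for the change to take effect.
```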
There is a lot of use of KVM outside of user-visible VMs / XMLs. For example, Jenkins or Docker can run in KVM. In such situations the libvirt XML files are often auto-generated without the user being aware of any hardening that should be manually injected at XML creation time. Since the KVM-related kernel hardening parameters do essentially nothing for non-users of KVM (I think?), I am very much for enabling these by default.
answered here Should all kernel patches for CPU bugs be unconditionally enabled? Vs Performance vs Applicability - #21 by Patrick
The only KVM security options are:
kvm.enable_vmware_backdoor= [KVM] Support VMware backdoor PV interface. Default is false (don’t support).
Here it’s already set to false. Do you still want to explicitly pass that on the command line?
kvm.nx_huge_pages= [KVM] Controls the software workaround for the X86_BUG_ITLB_MULTIHIT bug.
force: Always deploy workaround.
off: Never deploy workaround.
auto: Deploy workaround based on the presence of X86_BUG_ITLB_MULTIHIT.
Default is ‘auto’. If the software workaround is enabled for the host, guests do not need to enable it for nested guests.
Here it is already enabled whenever the erratum is detected, but I guess we should always force it?
https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/multihit.html
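Whether the workaround is actually active can be checked at runtime. A small sketch using the standard sysfs paths (the reported values depend on the CPU and on whether the kvm module is loaded):

```shell
# Check the nx_huge_pages workaround state and the kernel's assessment
# of the underlying iTLB multihit vulnerability.
for f in /sys/module/kvm/parameters/nx_huge_pages \
         /sys/devices/system/cpu/vulnerabilities/itlb_multihit; do
    if [ -r "$f" ]; then
        printf '%s: %s\n' "$f" "$(cat "$f")"
    else
        printf '%s: not available\n' "$f"
    fi
done
```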
kvm-intel.vmentry_l1d_flush= [KVM,Intel] Mitigation for L1 Terminal Fault CVE-2018-3620.
Valid arguments: never, cond, always
always: L1D cache flush on every VMENTER.
cond: Flush L1D on VMENTER only when the code between
VMEXIT and VMENTER can leak host memory.
never: Disables the mitigation
Default is cond (do L1 cache flush in specific instances)
It is unclear here what we gain from going to always from conditional.
In Linux 5.6 KVM is adding a new super-combo mitigation to protect against L1TF and Spectre v1 used together. This will be baked into the code, AFAIK.
Xen has similar stuff added, but I have no idea how it works.
Does KVM also work on Safe use of Hyperthreading (HT) similar to Xen?
No. I guess it is very unlikely that kvm.enable_vmware_backdoor would ever become a default. If we listed all options that are implicit / default anyhow, things could get messy. Maybe we need some exception for this general rule sometimes but here I don’t see it.
If we follow our latest policy from Should all kernel patches for CPU bugs be unconditionally enabled? Vs Performance vs Applicability then yes.
What do they mean by “If the software workaround is enabled for the host”?
See also L1TF - L1 Terminal Fault — The Linux Kernel documentation
The applicable part for us seems to be:
Mitigation selection guide
- Virtualization with untrusted guests
3.1. SMT not supported or disabled
If SMT is not supported by the processor or disabled in the BIOS or by the kernel, it’s only required to enforce L1D flushing on VMENTER.
Conditional L1D flushing is the default behaviour and can be tuned. See Mitigation control on the kernel command line and Mitigation control for KVM - module parameter.
kvm-intel.vmentry_l1d_flush=always
is said to reduce performance. Therefore we shouldn’t enable it if it doesn’t benefit security at all.
Therefore I guess it’s not needed. Unless someone makes the argument that one should set kvm-intel.vmentry_l1d_flush=always for better security even though we’re already disabling SMT? Can you find any such references?
It means that if it kicks in on the host, there’s no need to have it in the VM too, as nested VMs will be protected.
Nothing I’ve read indicates that.
AFAICT, disabling SMT and flushing the L1D cache are the only ways to deal with Hyper-Threading.
https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/L1TF
Full protection from the Hyper-Thread based attack can be achieved in one of two ways. The first option requires that Hyper-Threads be disabled in the system’s BIOS, by booting with the “nosmt” kernel command line option, or by writing “off” or “forceoff” to /sys/devices/system/cpu/smt/control (this file is not persistent across reboots). The second option involves restricting guests to specific CPU cores that are not shared with the host or other guests considered to be in different trust domains. This option is more difficult to configure and may still allow a malicious guest to gain some information from the host environment.
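The runtime switch mentioned in the quote lives in sysfs. A non-destructive sketch that only reports the current state and prints the disabling command instead of executing it (actually writing to the file requires root and takes effect immediately):

```shell
# Report the current SMT state and show how it would be disabled.
# /sys/devices/system/cpu/smt/control is the standard interface; it is
# absent on kernels/architectures without SMT support.
ctl=/sys/devices/system/cpu/smt/control
echo "current SMT state: $(cat "$ctl" 2>/dev/null || echo unknown)"
echo "to disable until reboot, as root: echo off > $ctl"
```

Writing "forceoff" instead of "off" additionally prevents SMT from being re-enabled until the next reboot.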
My CPU pinning also somewhat mitigates this as no leaks between WS and GW should be possible.
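For illustration, such pinning can be applied with virsh. The domain name and host core numbers below are invented placeholders, and the commands are only printed, not executed:

```shell
# Sketch of pinning guest vCPUs to dedicated host cores so that guests in
# different trust domains never share a physical core. "Whonix-Workstation"
# and the vCPU:core pairs are placeholders; adapt to the actual topology.
dom="Whonix-Workstation"
for pin in "0:2" "1:3"; do
    vcpu="${pin%%:*}"
    core="${pin##*:}"
    echo "virsh vcpupin $dom --vcpu $vcpu --cpulist $core --config"
done
```

The --config flag makes the pinning persistent in the domain XML; equivalently, it can be expressed directly as cputune/vcpupin elements in libvirt XML.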