[Help Welcome] KVM Development - staying the course

You’ve included everything important from the L1TF thread. Reviewing the KVM options from here, I don’t think it would ever make sense in the context of a host OS. To tighten KVM options for non-Whonix guests, one would either use Kicksecure or its configuration, which includes my changes.

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/admin-guide/kernel-parameters.txt#n2080


As an aside, it seems the new kernel command line option turns all mitigations on, including disabling SMT. I think it’s more manageable if you switch to using that instead of specifying every knob.

mitigations=auto,nosmt
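For reference, a minimal sketch of how this could be set via GRUB (assuming a Debian-style /etc/default/grub layout; the exact file and variable may differ per distro):

```shell
# Hypothetical GRUB config snippet (Debian-style layout assumed).
# Enables all CPU vulnerability mitigations and disables SMT with one
# option, instead of specifying each mitigation parameter individually.
GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT mitigations=auto,nosmt"
# Afterwards regenerate the GRUB config, e.g.: sudo update-grub
```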

There is a lot of use of KVM outside of user-visible VMs / XMLs. For example, Jenkins or Docker can run in KVM. In such situations the libvirt XML files are often auto-generated without the user being aware of any hardening that should be manually injected at XML creation time. Since the KVM-related kernel hardening parameters do essentially nothing for non-users of KVM (I think?), I am very much for enabling these by default.

answered here: Should all kernel patches for CPU bugs be unconditionally enabled? Vs Performance vs Applicability - #21 by Patrick

The only kvm security options are:

kvm.enable_vmware_backdoor=[KVM] Support VMware backdoor PV interface.
				   Default is false (don't support).

Here it is already set to false. Do you still want to explicitly pass that on the command line?

kvm.nx_huge_pages=
			[KVM] Controls the software workaround for the
			X86_BUG_ITLB_MULTIHIT bug.
			force	: Always deploy workaround.
			off	: Never deploy workaround.
			auto    : Deploy workaround based on the presence of
				  X86_BUG_ITLB_MULTIHIT.

			Default is 'auto'.

			If the software workaround is enabled for the host,
			guests do need not to enable it for nested guests.

Here it is already enabled whenever the erratum is detected, but we should always force it, I guess?
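If we do force it, the parameter could be added in the same style as the existing security-misc GRUB snippets (a sketch, not the actual file contents):

```shell
# Hypothetical addition to a security-misc style GRUB config snippet.
# Always deploy the iTLB multihit software workaround, even if the
# X86_BUG_ITLB_MULTIHIT erratum is not detected on this CPU.
GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT kvm.nx_huge_pages=force"
```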

https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/multihit.html

kvm-intel.vmentry_l1d_flush=[KVM,Intel] Mitigation for L1 Terminal Fault
			CVE-2018-3620.

			Valid arguments: never, cond, always

			always: L1D cache flush on every VMENTER.
			cond:	Flush L1D on VMENTER only when the code between
				VMEXIT and VMENTER can leak host memory.
			never:	Disables the mitigation

			Default is cond (do L1 cache flush in specific instances)

It is unclear here what we gain from going from conditional to always.

In Linux 5.6, KVM is adding a new combined mitigation to protect against L1TF and Spectre v1 used together. AFAIK this will be baked into the code.

Xen has similar stuff added, but I have no idea how it works.


Does KVM also work on Safe use of Hyperthreading (HT) similar to Xen?

No. I guess it is very unlikely that kvm.enable_vmware_backdoor would ever become a default. If we listed all options that are implicit / default anyhow, things could get messy. Maybe we sometimes need an exception to this general rule, but I don’t see one here.

If we follow our latest policy from Should all kernel patches for CPU bugs be unconditionally enabled? Vs Performance vs Applicability then yes.

What do they mean by If the software workaround is enabled for the host,?

See also L1TF - L1 Terminal Fault — The Linux Kernel documentation
The applicable part for us seems to be:

Mitigation selection guide

3.1. Virtualization with untrusted guests

3.1.1. SMT not supported or disabled

If SMT is not supported by the processor or disabled in the BIOS or by the kernel, it’s only required to enforce L1D flushing on VMENTER.

Conditional L1D flushing is the default behaviour and can be tuned. See Mitigation control on the kernel command line and Mitigation control for KVM - module parameter.

kvm-intel.vmentry_l1d_flush=always is said to reduce performance. Therefore we shouldn’t enable it if it doesn’t benefit security at all.

Therefore I guess it’s not needed. Unless someone makes the argument that one should set kvm-intel.vmentry_l1d_flush=always for better security even though we’re already disabling SMT? Can you find any such references?
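The kernel reports the effective L1TF mitigation state at runtime, including which VMX flush mode cond actually resolved to on a given host, which is one way to check this (a sketch; the vulnerabilities file is only present on affected x86 systems):

```shell
# Show the effective L1TF mitigation, e.g.
# "Mitigation: PTE Inversion; VMX: conditional cache flushes, SMT disabled".
cat /sys/devices/system/cpu/vulnerabilities/l1tf 2>/dev/null \
  || echo "l1tf status not reported on this system"
```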


It means that if it kicks in on the host, there’s no need to enable it in the VM too, as nested VMs will be protected.

Nothing I’ve read indicates that.


AFAICT, disabling SMT plus L1D flushing is the only way to deal with Hyper-Threading.


https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/L1TF

Full protection from the Hyper-Thread based attack can be achieved in one of two ways. The first option requires that Hyper-Threads be disabled in the system’s BIOS, by booting with the “nosmt” kernel command line option, or by writing “off” or “forceoff” to /sys/devices/system/cpu/smt/control (this file is not persistent across reboots). The second option involves restricting guests to specific CPU cores that are not shared with the host or other guests considered to be in different trust domains. This option is more difficult to configure and may still allow a malicious guest to gain some information from the host environment.
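The SMT state the quote refers to can be inspected and toggled through that sysfs interface (a sketch):

```shell
# Read the current SMT control state: on, off, forceoff, or notsupported.
cat /sys/devices/system/cpu/smt/control 2>/dev/null \
  || echo "smt control interface not available"
# To disable SMT until next boot (not persistent), as root:
#   echo off > /sys/devices/system/cpu/smt/control
```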


My CPU pinning also somewhat mitigates this as no leaks between WS and GW should be possible.
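For illustration, CPU pinning in a libvirt domain XML looks roughly like this (a sketch with made-up CPU numbers; the actual Whonix XMLs may pin differently):

```xml
<!-- Hypothetical example: pin a 2-vCPU guest to physical cores 2 and 3,
     keeping it off cores shared with the host or other guests. -->
<vcpu placement='static'>2</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
</cputune>
```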


Please see KVM: Difference between revisions - Whonix


@Patrick where do I add the kvm kernel command?


security-misc/40_kernel_hardening.cfg at master · Kicksecure/security-misc · GitHub


Merged.


Uploaded newer images to our server. How long before they appear on download.whonix.org?

Symlink was missing. This is now fixed.

In future should be after 1-2 minutes.


We can probably disable hugepages entirely with the vm.nr_hugepages=0 sysctl rather than using the kvm.nx_huge_pages mitigation.


Alright go for it.


What’s the rationale / advantage of this?

iTLB multihit — The Linux Kernel documentation mentions only kvm.nx_huge_pages but not vm.nr_hugepages=0. It should be the same in theory, but I haven’t seen vm.nr_hugepages=0 mentioned in that context. Search term:

"iTLB multihit" "vm.nr_hugepages"

Hugepages have more security issues (see Whonix for KVM). The kvm.nx_huge_pages mitigation only fixes a specific issue by marking certain memory pages as non-executable. vm.nr_hugepages=0 disables hugepages altogether, preventing all of their issues.

https://www.kernel.org/doc/Documentation/admin-guide/mm/hugetlbpage.rst

/proc/sys/vm/nr_hugepages indicates the current number of “persistent” huge pages in the kernel’s huge page pool. “Persistent” huge pages will be returned to the huge page pool when freed by a task. A user with root privileges can dynamically allocate more or free some persistent huge pages by increasing or decreasing the value of nr_hugepages.

The default should be 0 anyway but this isn’t guaranteed on other distros.

The sysctl isn’t specific to that issue but hugepages in general.
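A sysctl drop-in for this could look like the following (a sketch; the file name is made up):

```conf
# Hypothetical /etc/sysctl.d/ snippet, e.g. 30-nohugepages.conf.
# Empty the persistent huge page pool, disabling hugepages altogether
# rather than only applying the kvm.nx_huge_pages workaround.
vm.nr_hugepages=0
```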


Speaking of KVM spectre mitigations, there’s more work planned ahead for safe HT use
