[Help Welcome] KVM Development - staying the course

Maybe it's enough if users allow autosyncing with NTP once on the host and then manually alter it a little before turning this off on the host?
NTP on the host isn't safe, since it's unauthenticated. And it's impractical to make it safe (for reasons listed on the Dev/TimeSync page).

We’re defending against two different adversary capabilities here: active attacks (disrupting the host’s NTP) and passive eavesdropping.

The latter is simpler to defeat. When the host’s clock is 1396753165.481980210 and Whonix-Workstation’s clock was 1396753165.481980210, those could be correlated. (The host’s time leaks because browsers, updaters etc. leak the TLS HELLO gmt_unix_time. Although the latest TBB patched the TLS HELLO gmt_unix_time issue, other applications may still leak it.) (JavaScript can also leak the time with high precision [Provide JS with reduced time precision (#1517) · Issues · Legacy / Trac · GitLab].) (Google / Facebook scripts alone are imho enough to be in a position to pull off a global passive attack.) As far as I understand it, this should now be defeated by bootclockrandomization.
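As a toy illustration of the idea (this is not bootclockrandomization's actual code, and the ±180 s range is a made-up example), a random offset could be picked like this:

```shell
# Toy sketch only -- not the real bootclockrandomization implementation.
# Pick a random offset between -180000 and +179999 milliseconds, so the
# guest clock no longer matches any observable host timestamp exactly.
offset_ms=$(awk 'BEGIN { srand(); print int(rand() * 360000) - 180000 }')
echo "would shift the clock by ${offset_ms} ms"
```

The point is only that the offset has sub-second granularity and is unpredictable to an observer who knows the host's time.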

The former, active attacks, are more difficult to defeat. When the host clock is off by, for example, ~35 minutes because an adversary capable of tricking NTP shifted it, then you’re always the one “whose clock is off by ~35 minutes”. Since sdwdate sets the time to something different from the host, and since the result is unpredictable, as far as I understand it, this should now be defeated by sdwdate.

Turning it off prevents active attacks in the future, and as long as the host clock is close to NTP time, I don't think users will stand out.
Active attacks: see ~35 minutes example above.
This works out because, even if an adversary has already been running global attacks at the time of a user's first sync, they would have done so subtly, not shifting the time so much as to draw attention.
I guess most users won't report it if their clock is too far off. We trained them pretty well to ignore the clock by using the UTC timezone and having no first-time wizard asking for their correct time zone.
Manual adjustment of a few seconds forwards or backwards is enough to muddy the waters.
If it includes milliseconds, yes, but I haven't seen anyone mess with milliseconds before. I am glad this is now automated.

Virtio-blk now works

Here are the steps, which I’ll throw together here before the server downtime. Will move them into the wiki later:

Boot from virtio block device - KVM < original steps that needed to be changed so they apply to Debian.

In the guest OS, change /boot/grub/device.map from "(hd0) /dev/sda" to "(hd0) /dev/vda". Also in the guest OS, change /boot/grub/grub.cfg from "root=/dev/sda1" to "root=/dev/vda1"; if you are using UUIDs, this second step is not needed.
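A sketch of the two edits as sed commands, demonstrated here on throwaway copies (on the real guest you would edit /boot/grub/device.map and /boot/grub/grub.cfg in place; the kernel command line shown is a made-up example):

```shell
# Demo on temporary copies; real paths on the guest are
# /boot/grub/device.map and /boot/grub/grub.cfg.
tmp=$(mktemp -d)
printf '(hd0)\t/dev/sda\n' > "$tmp/device.map"
printf 'linux /vmlinuz root=/dev/sda1 ro quiet\n' > "$tmp/grub.cfg"

# (hd0) /dev/sda -> (hd0) /dev/vda
sed -i 's|/dev/sda|/dev/vda|' "$tmp/device.map"
# root=/dev/sda1 -> root=/dev/vda1 (skip when root= uses UUID=)
sed -i 's|root=/dev/sda1|root=/dev/vda1|g' "$tmp/grub.cfg"

cat "$tmp/device.map" "$tmp/grub.cfg"
```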

I changed all mentions of sda1 to vda1 in the grub file. In virt-manager I changed the disk controller type to virtio, which takes care of this at the libvirt XML level. Now the VM boots normally with no hangups and feels a little faster.

A bit of advice, that I’ll add so you can decide:

bancfc: the better option would be using LABELs and UUIDs and such in the source image unless this is a one time conversion kind of thing

Not really important but is interesting:

http://www.linux-kvm.org/page/Virtio/Block/Latency

I think we should consider AppArmor advice for Debian hosts, as SELinux seems like a lot to follow.

[quote=“HulaHoop, post:63, topic:166”]I think we should consider AppArmor advice for Debian hosts as SELinux seems like a lot to follow.[/quote]

Agreed. AppArmor is better for Debian.

Commenting on Dev/KVM - Whonix … - Using labels would be good. When you download Whonix, you always end up with the same disk label / uuid.

Beginning from Whonix 9, we will be using fixed disk identifiers anyway. (Originally implemented for verifiable builds improvements [less files to manually review]. Seems like it will be useful here as well.)

VM Manager throws me an error when defining new virtual network using the settings at
https://www.whonix.org/wiki/KVM#KVM_Setup_Instructions

Go to the VMM GUI --> Edit --> Connection Details --> Add button.

  • Choose whonix as the network name
  • Edit the subnet range to 10.0.0.2/24
  • Uncheck the dhcpv4 option
  • Ignore anything to do with IPv6
  • Keep the default option of Isolated Virtual Network selected and click Finish
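For reference, the same isolated network can also be defined from a libvirt XML file instead of clicking through the GUI. A sketch (the file and bridge names are assumptions; leaving out the <forward> element makes it isolated, and leaving out <dhcp> keeps dhcpv4 off):

```xml
<!-- Hypothetical whonix.xml; load with: virsh net-define whonix.xml -->
<network>
  <name>whonix</name>
  <!-- no <forward> element: isolated virtual network -->
  <bridge name='virbr1'/>
  <!-- no <dhcp> element inside <ip>: dhcpv4 stays disabled -->
  <ip address='10.0.0.2' netmask='255.255.255.0'/>
</network>
```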

changing the address range to 10.0.0.0/24 helps

Sorry Zweeble can you please be more specific, maybe post the error message?

Don’t worry, soon these steps will be as easy as a few commands to import the settings files.

When creating the new virtual network (named whonix, according to your KVM_Setup_Instructions), the VM Manager refuses to accept the address range 10.0.0.2/24 as it is not one of the IPv4 private address ranges.

Don't worry soon these steps will be as easy as a few commands to import the settings files.

Can’t wait to see them^^ Never tried to import an OVA before and don’t get anywhere with it…

Updated libvirt and qemu (on my 64-bit server), converted the gw.ova to .qcow2 and finally got it “running” - the start took about one hour, with the CPU core given to the VM constantly at 100% …
After boot, even moving the mouse made the core jump up to 100% again :wink:
Every command takes ages - any ideas how to improve the settings? Thx

The subnet I chose was supposed to imitate the range of VirtualBox’s internal network. Choose any one that works for you, no problem.

As for the image conversion: this is no longer necessary, as Patrick has provided qcow2 images since version 8. You can get them on the Whonix SourceForge page. Forgive the obvious question, but are you running a CPU that has virtualization support? If not, that could explain the slowness you are seeing.

Thx for your answer. The server supports virtualization. I also run other VMs 32 and 64 bit, KVM with libvirt on it and never had problems like this. Weird.

Maybe it would be an idea to publish the KVM install and the detailed settings you use for the VMs?

On second thought, it's very important that you download the qcow2 images from SourceForge, because they use performance settings that you have probably left out in the manual conversion. These have a big effect on performance.

If this doesn’t work, then the qemu version you are running could be suffering from some type of regression.

General advice is to give the workstation VM a generous amount of RAM, though not so much as to cripple your host. This is not needed for the gateway, however, where you can get by assigning the recommended minimum of 256 MB.

There has been a report by zweeble about image size issues:

In response, I’ve written Whonix ™ for KVM - but it looks like only a very short-term solution.

Maybe we should drop “-o preallocation=metadata”? It seems to cause trouble, as in:

  • allegedly big file size
  • real issues with file size.

But without “-o preallocation=metadata”, is the image slow? How much was the speed difference?

Well, if it’s either issues with file size vs issues with performance, then this would be a real problem with KVM.

This is really strange.

Original is just fine:

md5sum Whonix-Gateway-8.2.qcow2
060fd6f07bbf0b2f36f95a28528e9248  Whonix-Gateway-8.2.qcow2
du -h --apparent-size ./Whonix-Gateway-8.2.qcow2 
101G    ./Whonix-Gateway-8.2.qcow2
du -h Whonix-Gateway-8.2.qcow2
2.6G    Whonix-Gateway-8.2.qcow2

But after unpacking the compressed version

gunzip ./Whonix-Gateway-8.2.qcow2.tar.gz

File size is messed up, even though md5 matches!

md5sum Whonix-Gateway-8.2.qcow2
060fd6f07bbf0b2f36f95a28528e9248  Whonix-Gateway-8.2.qcow2
du -h --apparent-size ./Whonix-Gateway-8.2.qcow2 
101G    ./Whonix-Gateway-8.2.qcow2
du -h ./Whonix-Gateway-8.2.qcow2 
101G    ./Whonix-Gateway-8.2.qcow2

Whonix’s compression code:
https://github.com/Whonix/whonix-developer-meta-files/blob/master/release/compress_qcow2

Possible solution:

export TAR_OPTIONS="--format=posix --sparse"

I really don’t want to be nasty, but I mentioned before that this +100 GB image doesn’t make sense anyway^^
Where is the problem to decrease workstation to let’s say 10 GB and gateway to 4 GB from the beginning? I’d prefer to download 2 huge files I can use instead of 2 small ones I can’t use…

[quote=“zweeble, post:76, topic:166”]I really don’t want to be nasty, but I mentioned before that this +100 GB image doesn’t make sense anyway^^
Where is the problem to decrease workstation to let’s say 10 GB and gateway to 4 GB from the beginning? I’d prefer to download 2 huge files I can use instead of 2 small ones I can’t use…[/quote]
I don’t find this question nasty/cynical or anything. I value your constructive comments. You have a valid point here.

Shipping a workstation with 10 GB max space would make users who want to exceed that limit complain. They could grow the virtual hdd size (Virtual Hard Disk Size Increase), but that is rather cumbersome and complicated due to missing easy GUI access to the required functions.
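For reference, growing the qcow2 image itself is a one-liner with qemu-img; the cumbersome part is that the partition and file system inside the guest must then be enlarged separately, e.g. with gparted. Command sketch only (the filename is an example, and it requires qemu-utils, so it is not run here):

```shell
# Command sketch (requires qemu-utils); Whonix-Workstation.qcow2 is an
# example filename. This grows only the virtual disk; the guest's
# partition and file system still need to be enlarged afterwards.
qemu-img resize Whonix-Workstation.qcow2 +10G
```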

The operating-system / rsync / browser issue is:
They don’t support sparse files well.

gzip’s issue is:
It doesn’t support sparse files (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=535987), and for that reason I should not have chosen gzip in the first place. (I took gzip because it can produce deterministic archives.)

KVM’s issue here is:
KVM has no alternative to VirtualBox’s .ova feature. If it had one, we would not have to have this discussion. That’s why we have to compress the images.

The new compression method will be:

tar \
   --create \
   --sparse \
   --xz \
   --mtime="2014-05-06 00:00:00" \
   --directory="$WHONIX_BINARY" \
   --file "$WHONIX_BINARY/Whonix-Gateway-$version.qcow2.xz" "Whonix-Gateway-$version.qcow2"

Which produces a deterministic sparse archive.

You can then unpack using:

tar xvf Whonix-Gateway-8.2.qcow2.xz

(unxz won’t work!)

And you will end up with a sparse image. Apparent size 100 GB, it can grow up to 100 GB, but it will initially take no more than ~2 GB of space after extraction. I will re-compress, sign and upload soon.
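The sparse round-trip can be checked with a throwaway file (a sketch; the sizes here are small stand-ins for the real 100 GB image):

```shell
# Create a 100 MB sparse file (apparent size 100 MB, ~0 bytes on disk),
# pack it with GNU tar's sparse detection, and unpack it elsewhere.
tmp=$(mktemp -d)
truncate -s 100M "$tmp/disk.img"
tar --create --sparse --directory "$tmp" --file "$tmp/disk.tar" disk.img
mkdir "$tmp/out"
tar --extract --directory "$tmp/out" --file "$tmp/disk.tar"
# Apparent size is still 100 MB, but actual disk usage stays near zero:
stat -c '%s' "$tmp/out/disk.img"      # apparent size in bytes
du -B1 "$tmp/out/disk.img" | cut -f1  # actual usage in bytes
```

Piping the same file through gzip instead would lose the holes on extraction, which is exactly the size blow-up reported above.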

You could still say you preferred non-compressed smaller 10 GB images that must be grown to take more than 10 GB? You would still have a point. In an ideal Whonix distro world, we would offer both types of images. In an ideal world, other projects would solve their issues with sparse files. [Best would still be, if KVM added an alternative to VirtualBox’s ova feature.]

Well, maybe we should ask around who really uses up to 10 GB with Whonix. I want to point out that using KVM opens the way to using LVM, which would improve performance considerably and make resizing the images a piece of cake. So those imo very few users who might complain about a small 10 GB image are no longer an issue, plus sparse files are no longer needed (also, sparse files present a performance handicap).
If I didn’t make a mistake, I can only add advantages:

  • no trouble with the compressing process
  • much smaller but easily resizable images
  • two possibilities to improve performance

The thing I dislike about KVM is that VirtualBox has no such issues with max disk size (which is set to 100 GB but could be increased to anything) while really using only the actually used disk space. (Well, VirtualBox has other issues, such as only supporting vmdk images for exported ovas, which makes things harder later as well.)

Want to write a blog post? (In this style?)

I want to point at the fact that using KVM opens the way for using LVM,
Since I am not that knowledgeable about KVM, I don't know what difference KVM makes in relation to LVM compared to VirtualBox. Please explain.
that would improve performance
KVM or LVM would improve performance? KVM has worse graphics performance, even when using SPICE. I am not sure whether we discussed this here or in the lost old forum. LVM slightly worsens performance (but so slightly that no one minds)?
considerably and makes resizing the images a piece of cake.
How so?

When not using LVM, I can grow my file system using GParted. When using LVM, this isn’t possible. Are there finally any GUI tools supporting growing LVM file systems?

(also sparse files present a performance handicap).
After quick research, I haven't found any references for a performance penalty for using sparse files.

Also, we’re using qcow2 images with metadata preallocation. According to this blog post (that HulaHoop shared) and earlier discussions with HulaHoop, qcow2 images using metadata preallocation are a good choice size-wise, performance-wise and feature-wise (they support snapshots).

If I didn't make a mistake, I can only add advantages: - no trouble with the compressing process
Without compression, the upload (at 100 kB/s) would take me 2-3 days for 20 GB. Still bearable if this is the best solution, but I prefer uploading the ~1 GB images.

Maybe my misunderstanding is about how you build Whonix? I thought that the KVM version is built in a qemu/KVM VM using qcow2 or a raw LV…

No need for performance tests here: image-based VMs (no matter whether OVA using sparse files or qcow2) are slower than VMs that use a raw LVM partition. Working on files inside an image file is for sure at least more disk IO.
There is a GUI to manage LVs, including resizing them, but there is also a simple command in the terminal.
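For what it's worth, the terminal route to grow an LV and its file system in one go is short. Command sketch only (the VG/LV names are made up, and it needs a real LVM setup, so it is not run here):

```shell
# Command sketch (requires an existing LVM setup).
# /dev/vg0/whonix-ws is a made-up volume group / logical volume name.
# --resizefs (-r) grows the file system along with the logical volume.
lvextend --resizefs --size +10G /dev/vg0/whonix-ws
```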

And qcow2 images can still be compressed with gzip, so no worries about huge uploads. Watch out: a gzipped qcow2 image will be about 10% smaller than one using the native qcow2 zlib compression :slight_smile: