kernel recompilation for better hardening

We can delete old versions and use --depth=1.

Downloading the full source repository will also take longer than downloading a tarball, so it would probably even out.

I think we should use the tarball. Seems simpler.
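
For illustration, the two approaches being compared might look like this (the URLs and version numbers are examples, not the actual build commands; both require network access):

```
# Shallow git clone: only the latest commit, no history
git clone --depth=1 https://github.com/anthraxx/linux-hardened.git

# Tarball: a single fixed release, e.g. from kernel.org
wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.19.60.tar.xz
```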

Yes. Always delete the old tarballs when adding a new tarball, except if we ever wanted / had to support multiple versions, such as stable and testing.

Yes.

Tons of kernel config files here:
https://github.com/a13xp0p0v/kconfig-hardened-check/tree/master/config_files

even kspp-recommendations

This work is being done by @madaidan who also contributed pull requests here.

This doesn’t really make sense as I’ve never contributed to that repo. It should be “pull requests to linux-hardened”.

Fixed.

I’ve sent 4 new pull requests. There’s been an issue with notifications before, so I’m mentioning it here to be sure you know.

There also seems to be an issue with the CI.

https://travis-ci.com/Whonix/hardened-kernel/builds/143787339

+sed -e s@^@  @g ..//*.changes

sed: can't read ..//*.changes: No such file or directory

The command "wget -O- https://raw.githubusercontent.com/adrelanos/travis.debian.net/gh-pages/script.sh | sh -" exited with 2.

There’s been an issue with notifications before so I’m saying here to be sure you know.

Yes, I missed these notifications here again, which is strange. As repository creator I should get notifications about all activity.

Always good to post links to pull requests here so everyone else who’s
watching can follow too.

I started work on the host config.

The pull request “Add host kernel config and description” (Pull Request #20, Kicksecure/hardened-kernel on GitHub) shows the diff between it and the default Debian config.

You forked the repo from my account so you didn’t actually create the repository. Maybe that’s it?

Maybe, yes. GitHub doesn’t show that it’s a fork. I’ve reset the repository to
“watching” now; seeing if that helps. Got the most recent notification.

The vivid driver is for testing. It doesn’t require any special hardware. It is shipped in Ubuntu, Debian, Arch Linux, SUSE Linux Enterprise and openSUSE. On Ubuntu the devices created by this driver are available to the normal user, since Ubuntu applies RW ACL when the user is logged in.

See the disclosure of CVE-2019-18683 which I’ve found and fixed in vivid driver:
oss-security - [ Linux kernel ] Exploitable bugs in drivers/media/platform/vivid

https://www.openwall.com/lists/oss-security/2019/11/02/1

I used the syzkaller fuzzer with custom modifications and found a bunch of 5-year old bugs in the Linux kernel.
[…]
For now I would recommend to blacklist the vivid kernel module on your machines.

CONFIG_VIDEO_VIVID doesn’t exist in the VM kernel, as camera, radio and TV support are disabled, but it does exist in the host config.

We should disable this in the host config and blacklist the module in security-misc. vivid is only required for testing. It’s not used for anything else.

https://www.kernel.org/doc/html/latest/media/v4l-drivers/vivid.html
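
A minimal sketch of what the security-misc blacklist entry could look like (the file name is an assumption, e.g. /etc/modprobe.d/vivid.conf). Note that `blacklist` alone only stops automatic loading; the `install` line makes an explicit `modprobe vivid` fail as well:

```
# Sketch: /etc/modprobe.d/vivid.conf (file name is an assumption)
# "blacklist" prevents automatic loading by alias only.
blacklist vivid
# "install" makes even an explicit "modprobe vivid" run /bin/false instead.
install vivid /bin/false
```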

We should automate the versioning so we always get the latest LTS kernel and linux-hardened patch. New releases can sometimes come fast and go unnoticed so doing it manually is unreliable.

This can get the latest version as long as it starts with “4”, which all LTS kernels will for a while:

curl https://github.com/anthraxx/linux-hardened/releases | grep "linux-hardened-4" | head -n1 | sed -e 's/.*linux-hardened-//g' | sed -e 's/\.a\.patch.*//g'

But it is obviously very hacky. I don’t know a better way though.
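
As a less fragile sketch, the tag-to-version parsing could be separated from the fetching. Fetching would still need the network, e.g. via the GitHub releases API (commented out below); the tag format “X.Y.Z.a” (kernel version plus a one-letter hardened patch revision) is an assumption here:

```shell
# Fetching the latest tag would still require the network, e.g.:
# tag="$(curl -fsS https://api.github.com/repos/anthraxx/linux-hardened/releases/latest \
#   | python3 -c 'import json,sys; print(json.load(sys.stdin)["tag_name"])')"

kernel_version_from_tag() {
    # Strip the trailing hardened patch revision, e.g. "4.19.60.a" -> "4.19.60".
    local tag="$1"
    printf '%s\n' "${tag%.*}"
}

kernel_version_from_tag "4.19.60.a"   # prints 4.19.60
```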

We shouldn’t get the versions from kernel.org though, as linux-hardened might lag behind by a few days.

madaidan via Whonix Forum:

We should automate the versioning so we always get the latest LTS kernel and linux-hardened patch. New releases can sometimes come fast and go unnoticed so doing it manually is unreliable.

This can get the latest version as long as it starts with “4” which all LTS kernels will for a while:

curl https://github.com/anthraxx/linux-hardened/releases | grep "linux-hardened-4" | head -n1 | sed -e 's/.*linux-hardened-//g' | sed -e 's/\.a\.patch.*//g'

Better not to use any networking at all during the build, as per:

still contains various TODO.

How else would we automate the versioning if not with networking?

It’s just a single curl command that downloads the HTML. It’s unlikely to fail, and even if it did, we could add error checking.

It’s not like downloading the full kernel source.

Merged. 🙂

I don’t think it’s advisable to automate fetching the newest version. It creates many follow-up issues. Error checking is very doable, but I don’t see any way to handle errors well.

  • If it’s automated, then developers cannot test the kernel before users. Any patch that results in a non-bootable kernel cannot be fixed before users, even those on the stable repository, render their machines unbootable.
  • Either fail open and miss kernel upgrades or fail closed and break apt.
    • If it fails open, then the exit codes of apt-get dist-upgrade become unreliable. Exit success (exit code 0) wouldn’t guarantee that all upgrades were installed. That breaks automation of updates and would require some status file indicating whether the upgrade succeeded or failed, which automation scripts would then check.
    • Failing closed, i.e. the compilation script exiting non-zero and thereby making APT exit non-zero, would prevent the user from installing any other packages until that is fixed. The user would have to run sudo dpkg --configure -a. And if the download location is permanently down, things get more and more complicated. Failing closed and thereby breaking APT is probably not an option.
  • Networking dependent: if networking is down, slow, etc. the update will fail.
    • (I plan to merge the tb-starter, tb-updater, tb-default-browser and open-link-confirmation packages, and to add the Tor Browser archive (and signature) to the binaries-freedom package, so that the only networking Whonix requires is APT and nothing else. I.e. once packages are fetched, no more external network connections are required. This simplifies the build environment, tunneling all connections through Tor/onions during build, and whatnot.)
  • gpg verification is a major hassle and security risk.
  • Upstreaming gets harder.
    • A package relying on resources unavailable from Debian main is disqualified from Debian main. (That is why torbrowser-launcher is in Debian contrib but not in Debian main.)
    • 2. The Debian Archive — Debian Policy Manual v4.6.2.0
    • Examples of packages which would be included in contrib are: free packages which require contrib, non-free packages or packages which are not in our archive at all for compilation or execution […]

  • Something special would be needed to cover the use case “download over onions only”.

What should be done (and is easy to implement) is letting users override variables, i.e. the ability to choose/hardcode/manually select any version numbers / URLs they desire, so there is no hard dependency on a package upgrade before testing a newer kernel.
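
A minimal sketch of such overridable versioning (variable names and values are hypothetical): the build script ships tested defaults, but an environment variable or a sourced config file can override them at any time.

```shell
# Defaults are the tested versions; users may export these variables
# beforehand (or source a config file) to pick any version they want.
kernel_version="${kernel_version:-4.19.60}"
hardened_patch="${hardened_patch:-linux-hardened-${kernel_version}.a.patch}"

echo "kernel: ${kernel_version}"
echo "patch:  ${hardened_patch}"
```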

The kernel and linux-hardened patches are already tested before being released. Using LTS makes it even more unlikely for there to be issues like that.

We can tell the users to fix their connection and update again if we can’t connect.

We can’t GPG-verify the HTML anyway.

If we do upstream our config, it’d likely be using the linux-source package, so we wouldn’t have to worry about versioning.

This config won’t be upstreamed anytime soon either.

Can’t we just display the version in a text file on the Whonix onion service? e.g.

kver="$(curl http://dds6qkxpwdeubwucdiaord2xgbbeyds25rbsgr73tbfpqpt4a6vjwsyd.onion/kver.txt)"

A cron job could regularly update it.

Then an attacker could point the URL at a malicious kernel source, or downgrade the kernel to a version with known vulnerabilities.

To mitigate the /proc/pid/sched keystroke spying proof of concept (spy-gksu), we can probably unset CONFIG_SCHEDSTATS.

We could leave this to apparmor-profile-everything, but then it would just add compile time.
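
As a kernel config fragment, disabling the option follows the usual Kconfig convention for unset symbols:

```
# With CONFIG_SCHEDSTATS unset, the detailed per-task scheduler statistics
# are not compiled in, removing the timing information the spy-gksu PoC
# reads from /proc/<pid>/sched.
# CONFIG_SCHEDSTATS is not set
```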

1 Like