
kernel recompilation for better hardening

https://salsa.debian.org/kernel-team/linux/blob/master/debian/patches/debian/version.patch hints that environment variables such as $DISTRIBUTOR and $DISTRIBUTION_VERSION are supported, which influence how the kernel packages are named.


Could you also have a look please? @HulaHoop


What is CONFIG_NET_EGRESS?


You could use “make tinyconfig”, which creates a minimal, non-bootable kernel, so you can estimate the lowest possible compilation time for your setup. If it still takes an hour or so, then cutting out more modules probably won’t help that much. Recompiling the cloud kernel could also give a good estimate of what to expect from a stripped-down kernel.
It’s maybe not a perfect comparison, but IIRC the tinyconfig is around 20-30 KB while Debian’s cloud kernel config is around 90 KB. The latter should work for the average desktop system once some GUI-relevant options are added.
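To make that estimate concrete, here is a small sketch; `make_tiny_estimate` is just an illustrative helper name, and it assumes you run it inside an extracted kernel source tree:

```shell
# Hypothetical helper: time a tinyconfig build to get a lower bound
# on compile time for this machine. Run inside a kernel source tree.
make_tiny_estimate() {
    make tinyconfig            # minimal, non-bootable configuration
    time make -j "$(nproc)"    # wall-clock lower bound for this setup
}
```

If the tinyconfig build already takes a long time, trimming modules from a full config is unlikely to get you below that floor.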


Sure but stable kernels have newer code and more attack surface.

https://www.grsecurity.net/the_truth_about_linux_4_6

The real “hard truth” about Linux kernel security is that there’s no such thing as a free lunch. Keeping up to date on the latest upstream kernel will generally net all the bug fixes that have been created thus far, but with it of course brings completely new features, new code, new bugs, and new attack surface. The majority of vulnerabilities in the Linux kernel are ones that have been released just recently, something any honest person active in kernel development can attest to.

Although, stable kernels do have more hardening features.

It’s a hard decision.

LTS kernels have fewer hardening features and not all bug fixes are backported, but they have less attack surface and potentially a lower chance of bugs.

Stable kernels have more hardening features and all bug fixes, but more attack surface and more bugs.

We should look into that. We should add a -hardened suffix or something to differentiate between our kernel and normal kernels.

Dunno. It was automatically disabled by make. The kernel devs don’t seem to think it’s important enough to warrant a description: https://github.com/torvalds/linux/blob/master/net/Kconfig#L52

Thanks! I didn’t know about this. I’ll test it later.


madaidan via Whonix Forum:

Sure but stable kernels have newer code and more attack surface.

https://www.grsecurity.net/the_truth_about_linux_4_6

The real “hard truth” about Linux kernel security is that there’s no such thing as a free lunch. Keeping up to date on the latest upstream kernel will generally net all the bug fixes that have been created thus far, but with it of course brings completely new features, new code, new bugs, and new attack surface. The majority of vulnerabilities in the Linux kernel are ones that have been released just recently, something any honest person active in kernel development can attest to.

Although, stable kernels do have more hardening features.

It’s a hard decision.

LTS kernels have fewer hardening features and not all bug fixes are backported, but they have less attack surface and potentially a lower chance of bugs.

Stable kernels have more hardening features and all bug fixes, but more attack surface and more bugs.

Let’s use LTS kernels instead, for more stability. Something good that works is better than something “perfect” that constantly breaks.

https://www.openwall.com/lists/lkrg-users/2019/12/23/1 reminds me: keeping up with kernel issues shouldn’t turn into what we spend most of our time on.


madaidan via Whonix Forum:

It might be better to use the number of cores + 1 with the -j option when compiling the kernel instead of just the number of cores although there seems to be some disagreement on this.

https://stackoverflow.com/questions/23279178/how-to-speed-up-linux-kernel-compilation

The best results are often achieved using the number of CPU cores in the machine + 1; for example, with a 2-core processor run make -j3

https://wiki.gentoo.org/wiki/Kernel/Configuration#Build

Add the option -j(<NUMBER_OF_CORES> + 1) . For example, a dual core processor contains two logical cores plus one (2 + 1):

We can change make -j $(nproc) to make -j $(($(nproc) + 1))
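As a sketch, the changed invocation would look like this (the `jobs` variable name is just for illustration):

```shell
# Use core count + 1 jobs, per the Stack Overflow / Gentoo advice
# quoted above; e.g. on a 2-core machine this runs make -j3.
jobs=$(( $(nproc) + 1 ))
echo "building with ${jobs} jobs"
# make -j "${jobs}"    # the actual kernel build step
```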

Please test and send pull request.


It doesn’t verify the kernel sources or the linux-hardened patch with gpg. I would usually do if ! gpg --verify ..., but you might prefer to use something else like https://github.com/Whonix/gpg-bash-lib
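For reference, the verification step I have in mind looks roughly like this; `verify_sig` is a hypothetical helper, and it assumes a detached `.sig` file next to the download and the signing key already imported into the local keyring:

```shell
# Hypothetical helper: verify a file against its detached gpg signature.
verify_sig() {
    local file="$1"
    if ! gpg --verify "${file}.sig" "${file}"; then
        echo "ERROR: gpg verification of ${file} failed" >&2
        return 1
    fi
}
# verify_sig linux-hardened-5.4.6.a.patch || exit 1
```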

I tested it and the tinyconfig takes a few minutes to build for me.


Many fixes for arch specific stuff, EFI, lockdown, kernel signing, bug fixes they run across (as a huge distro used everywhere they are tested all over the place), ABI maintenance and so on.


Using curl / networking during apt updates is bad.

  • Either fail open and miss kernel upgrades, or fail closed and break apt.
  • Networking dependent: if networking is down, slow, etc., the update will fail. The package will either exit non-zero and break updating, or the update will be ignored.
    • (I plan to merge the tb-starter, tb-updater, tb-default-browser and open-link-confirmation packages and add the Tor Browser archive (and signature) to the binaries-freedom package, so that APT is the only thing that requires networking and nothing else. I.e. once packages are fetched, there are no more networking dependencies. This simplifies the build environment, tunneling all connections through Tor/onions during build and whatnot.)
  • gpg verification is a major hassle and security risk.

Why do we need to use linux-hardened as a patch? Their git repository looks as if they imported the whole Linux source code from kernel.org and then modified it. Looking at https://github.com/anthraxx/linux-hardened/releases/tag/5.4.6.a they offer both a patch and the full source code. Maybe we could git clone linux-hardened, then git checkout the tag and build that instead? Thereby we could save one step: downloading from kernel.org. (Both would have to be gpg verified; double work.)
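A sketch of that clone-the-tag idea, wrapped in a hypothetical function so nothing runs on its own; the tag name is the one from the release page above, and whether the release tags are gpg-signed would need checking:

```shell
# Hypothetical helper: fetch the full linux-hardened tree at a given
# release tag, skipping the separate kernel.org download.
fetch_hardened_tag() {
    local tag="$1"    # e.g. 5.4.6.a
    git clone --branch "$tag" --depth 1 \
        https://github.com/anthraxx/linux-hardened.git linux-hardened
    # git -C linux-hardened verify-tag "$tag"   # only if tags are signed
}
# fetch_hardened_tag 5.4.6.a
```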

If my above idea works (getting the complete kernel source from linux-hardened), then maybe it would be better to fork https://github.com/anthraxx/linux-hardened on GitHub and add our compile script and config there? That would also be a good chance to merge our modifications upstream, to get more eyes on them and to reduce/nullify the delta between our fork and upstream.


It just seems like the best way to use it.

The Arch build script (maintained by anthraxx, the current maintainer of upstream linux-hardened) uses it as a patch.

We won’t need to use this though as we have https://github.com/Whonix/security-misc/blob/master/usr/lib/security-misc/hide-hardware-info which is far more flexible as it allows us to whitelist stuff.

Right.

Let’s use LTS anyhow. Reason: I don’t think we’re ready for non-LTS kernels. We don’t have automated testing, let alone on all platforms. Even Tor and Tor Browser upgrades for future versions aren’t sufficiently tested preemptively before these hit stable. When using non-LTS and an upgrade is out, there will be time pressure to upgrade. But what if that breaks Qubes, VirtualBox and/or KVM, or spice, guest additions, lkrg and/or tirdad? There would be nothing we could do to patch it quickly, and going back to LTS versions would be hard, since a version downgrade is difficult to do with apt for a Debian derivative.

Alright.

linux-hardened:
We could add linux-hardened-5.4.6.a.patch and linux-hardened-5.4.6.a.patch.sig to the hardened-kernel git repository. gpg verification can be manual, i.e. done by developers only when adding them to git. No need to automate it with a script. (Similar to electrum in the binaries-freedom package.)

Linux upstream:
Since linux-hardened shall be applied to the original kernel.org Linux, maybe it would be best to fork https://github.com/torvalds/linux, git checkout an LTS branch/release tag, create a new branch, add the linux-hardened patch, add the compile script and that’s it? That should make viewing the diff in git very easy. Our branch would just be the linux-hardened patch + compile script on top of unmodified Linux LTS. No “direct” modification of Linux LTS. Patching would happen on the user’s machine during kernel build.
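The branch layout described above could be sketched like this; the branch name and patch location are illustrative, and whether the exact LTS point-release tag lives in torvalds/linux rather than the linux-stable tree would need checking:

```shell
# Hypothetical sequence: an unmodified LTS tree plus one branch
# carrying the linux-hardened patch (and, later, our compile script).
prepare_branch() {
    git clone --branch v5.4 --depth 1 \
        https://github.com/torvalds/linux.git linux
    cd linux || return 1
    git checkout -b hardened-build      # illustrative branch name
    patch -p1 < ../linux-hardened-5.4.6.a.patch
    git commit -am "Apply linux-hardened patch"
}
```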

What do you think?


Yay! We now have a functional Debian buster based CI build. The kernel compilation is still in progress. Dunno if it will complete, but I guess so.

main page:
https://travis-ci.com/Whonix/hardened-kernel

specific build:
https://travis-ci.com/Whonix/hardened-kernel/builds/142518832

raw build log:
https://api.travis-ci.com/v3/job/270808453/log.txt

CI vs non-CI builds: CI builds run as root, not as a regular user. That could be improved if deemed useful.

Ideally we could also automate booting the kernel in VirtualBox, KVM and Qubes on the CI and see where it is functional and where it is broken, but that might be a pipe dream. https://criu.org/Continuous_integration#Kernel_testing does use kexec, but that seems hard to re-use, and it would reboot-test only KVM (which would still be awesome for a start), since Travis CI itself runs on KVM according to virt-what. Travis CI also supports artifacts (and any scripting). Build results could be sent elsewhere (another cloud service) for further automated testing, i.e. (kexec) kernel boot.


Note: CI builds are only a tool for developers. Kernels built in CI will not be used for anything except for developers to look at build logs. No user will ever be asked to use anything built on a CI. It is just an easy way to share build logs in an objective way, with fewer local system configuration quirks. Quoting http://travis.debian.net/:

Q: But wget | sh - is insecure!

A: Of course, and you should never run such a command on your own machine. However, not only does Travis-CI build within throwaway containers that you are not responsible for, cannot trust, and generally don’t care about, there is zero expectation that the resulting .deb files are to be used or installed anywhere.


Why not just add the LTS kernel tarball to hardened-kernel instead of forking the source?


That’s also an option.

Pros:

  • Preserve git history, authorship. (Although we might consider git clone --depth 1 to save space for those who git clone Whonix.)

Cons:

  • The tarballs are binary files; git cannot manage them in a disk-space-efficient way. Each one gets added as a full copy, which will balloon the size of that git repository after a few releases. We will figure out how to deal with this later. We could instruct users to do shallow clones (git clone --depth 1). We have to do this at some point for the binaries-freedom package anyhow.
  • Extracting the tarball will take longer than working with already-extracted files.

Probably the better option. Please add it if you agree.
