kernel recompilation for better hardening

madaidan via Whonix Forum:

It might be better to use the number of cores + 1 with the -j option when compiling the kernel, instead of just the number of cores, although there seems to be some disagreement on this.

time - How to speed up Linux kernel compilation? - Stack Overflow

The best results are often achieved using the number of CPU cores in the machine + 1; for example, with a 2-core processor run make -j3

Kernel/Configuration - Gentoo Wiki

Add the option -j(<NUMBER_OF_CORES> + 1). For example, a dual-core processor has two logical cores, plus one (2 + 1):

We can change make -j $(nproc) to make -j $(($(nproc) + 1))
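In shell, the change amounts to the following (a minimal sketch; `nproc` is assumed to be available, as it is on Debian):

```shell
#!/bin/sh
# Compute the make parallelism as "number of CPU cores + 1".
cores=$(nproc)
jobs=$(( cores + 1 ))
echo "cores: ${cores}, using make -j${jobs}"

# The build script change would then be, roughly:
#   make -j "${jobs}" ...
```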

Please test and send pull request.


It doesn’t verify the kernel sources or the linux-hardened patch with gpg. I would usually do something like if ! gpg --verify ..., but you might prefer to use something else, such as GitHub - Kicksecure/gpg-bash-lib: gpg file verification bash library, addresses comprehensive threat model, that covers file name tampering, indefinite freeze, rollback, endless data attacks, etc.
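For reference, the plain-gpg pattern would look roughly like this (file names are hypothetical, taken from the 5.4.6.a release mentioned below; gpg-bash-lib would replace the bare gpg call with something that also handles rollback, indefinite-freeze and similar attacks):

```shell
#!/bin/sh
# Hypothetical file names, for illustration only.
patch_file="linux-hardened-5.4.6.a.patch"
sig_file="${patch_file}.sig"

# Return non-zero if the detached signature does not verify
# (this also fails when the files simply do not exist).
verify_patch() {
    if ! gpg --verify "$2" "$1"; then
        echo "ERROR: gpg verification of $1 failed!" >&2
        return 1
    fi
}

# Usage in a build script: abort before touching the sources.
verify_patch "$patch_file" "$sig_file" || echo "refusing to build" >&2
```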

I tested it and the tinyconfig takes a few minutes to build for me.


Many fixes for arch-specific stuff, EFI, lockdown, kernel signing, bug fixes they run across (as a huge distro used everywhere, they are tested all over the place), ABI maintenance and so on.


Using curl / networking during apt updates is bad.

Why do we need to use linux-hardened as a patch? Their git repository looks as if they imported the whole Linux source code and then modified it. Looking at Release 5.4.6.a · anthraxx/linux-hardened · GitHub, they offer both a patch and the full source code. Maybe we could git clone linux-hardened, then git checkout the tag and build that instead? Thereby we could save one step: the separate download. (Both would have to be gpg verified. Double work.)

If my above idea works (getting the complete kernel source from linux-hardened), then maybe it would be better to git(hub) fork GitHub - anthraxx/linux-hardened: Minimal supplement to upstream Kernel Self Protection Project changes. Features already provided by SELinux + Yama and archs other than multiarch arm64 / x86_64 aren't in scope. Only tags have stable history. Shared IRC channel with KSPP: #linux-hardening and add our compile script and config there? That would also be a good chance to merge our modifications upstream, to get more eyes on them and to reduce/nullify the delta between our fork and upstream.


It just seems like the best way to use it.

The Arch build script (maintained by anthraxx, the current maintainer of upstream linux-hardened) uses it as a patch.

We won’t need to use this though, as what we have is far more flexible: it allows us to whitelist stuff.


Let’s use LTS anyhow. Reason: I don’t think we’re ready for non-LTS kernels. We don’t have automated testing, let alone on all platforms. Even Tor and Tor Browser upgrades for future versions aren’t sufficiently tested preemptively before they hit stable. When using non-LTS and an upgrade is out, there will be time pressure to upgrade. But what if that breaks Qubes, VirtualBox and/or KVM, or spice, guest additions, lkrg and/or tirdad? There would be nothing we could do to patch it quickly, and we could not easily go back to LTS versions, since a version downgrade is hard to do using apt for a Debian derivative.


We could add linux-hardened-5.4.6.a.patch and linux-hardened-5.4.6.a.patch.sig to the hardened-kernel git. gpg verification can be manual, i.e. done by developers only when we add the files to git. No need to automate it with a script. (Similar to electrum in the binaries-freedom package.)

Linux upstream:
Since linux-hardened shall be applied to the original Linux, maybe it would be best to fork GitHub - torvalds/linux: Linux kernel source tree, git checkout an LTS branch/release tag, create a new branch, add the linux-hardened patch, add the compile script, and that’s it? That should make viewing the diff in git very easy. Our branch would just be the linux-hardened patch + compile script on top of unmodified Linux LTS. No “direct” modification of Linux LTS. Patching would happen on the user’s machine during the kernel build.
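The build-time patching step itself is just standard patch usage. A toy demonstration (stand-in files, not the real kernel tree; the real build would run something like `patch -p1 < linux-hardened-<version>.patch` inside the checked-out source):

```shell
#!/bin/sh
# Toy demonstration of applying a patch at build time on the
# user's machine. File contents and names are made up.
set -e
workdir=$(mktemp -d)
mkdir -p "$workdir/linux"

# A stand-in for one file of the unmodified kernel tree.
printf 'CONFIG_FOO=n\n' > "$workdir/linux/Kconfig.demo"

# A stand-in for the linux-hardened patch.
cat > "$workdir/hardened.patch" <<'EOF'
--- a/Kconfig.demo
+++ b/Kconfig.demo
@@ -1 +1 @@
-CONFIG_FOO=n
+CONFIG_FOO=y
EOF

cd "$workdir/linux"
patch -p1 < ../hardened.patch
cat Kconfig.demo
```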

What do you think?


Yay! We now have a functional Debian buster based CI build. The kernel compilation process is still in progress. Dunno if it will complete but I guess so.

main page:

specific build:

raw build log:

CI vs non-CI builds: the CI builds as root, not as user. That could be improved if deemed useful.

Ideally we could also automate booting the kernel in VirtualBox, KVM and Qubes on the CI, to see where it is functional and where it is broken, but that might be a pipe dream. Continuous integration - CRIU is using kexec, but that seems hard to re-use, and it would reboot-test only KVM (which would still be awesome for a start), because Travis CI is based on KVM, as virt-what says. Travis CI also supports artifacts (and any scripting). Build results could be sent elsewhere (to another cloud service) for further automated testing, i.e. a (kexec) kernel boot.

Note: CI builds are only a tool for developers. Kernels built in CI will not be used for anything except for developers to look at build logs. No users will ever be presented with anything built on a CI. It is just an easy way to share build logs in an objective way, with fewer local system configuration quirks. Quote:

Q: But wget | sh - is insecure!

A: Of course, and you should never run such a command on your own machine. However, not only does Travis-CI build within throwaway containers that you are not responsible for, cannot trust, and generally don’t care about, there is zero expectation that the resulting .deb files are to be used or installed anywhere.


Why not just add the LTS kernel tarball to hardened-kernel instead of forking the source?


That’s also an option.


  • Preserve git history, authorship. (Although we might consider git clone --depth 1 to save space for those who git clone Whonix.)


  • The tarballs are binary files. Git cannot manage these space-efficiently: each gets added as a full copy, which will balloon the size of that git repository after a few releases. We will figure out how to deal with this later. We could instruct users to do git clone --depth 1 cloning. We have to do this at some point for the binaries-freedom package anyhow.
  • Extracting the tarball will take longer than already working with extracted files.
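As a rough illustration of why --depth 1 helps: a shallow clone fetches only the tip commit, not the full history. A toy demonstration against a local throwaway repository (names are placeholders; for the real thing, substitute the actual repository URL):

```shell
#!/bin/sh
# Demonstrates a shallow clone against a small local repository.
set -e
tmp=$(mktemp -d)

# Create a local "origin" with two commits.
git init -q "$tmp/origin"
cd "$tmp/origin"
git config user.email dev@example.com
git config user.name dev
echo v1 > file; git add file; git commit -qm "first"
echo v2 > file; git commit -qam "second"

# Shallow clone: only the most recent commit is fetched.
cd "$tmp"
git clone -q --depth 1 "file://$tmp/origin" shallow
git -C shallow rev-list --count HEAD
```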

Probably better option. Please add if you agree.


We can delete old versions and use --depth=1.

Downloading the source code will also take longer than downloading a tarball, so it would probably even out.

I think we should use the tarball. Seems simpler.


Yes. Always delete the old tarballs when adding a new tarball. Except if we ever wanted / had to support multiple versions, such as stable and testing.



Tons of kernel config files here:

even kspp-recommendations


This work is being done by @madaidan who also contributed pull requests here.

This doesn’t really make sense as I’ve never contributed to that repo. It should be “pull requests to linux-hardened”.




I’ve sent 4 new pull requests. There’s been an issue with notifications before, so I’m saying it here to be sure you know.

There also seems to be an issue with the CI.

+sed -e s@^@  @g ..//*.changes

sed: can't read ..//*.changes: No such file or directory

The command "wget -O- | sh -" exited with 2.