Derivative Maker Automated CI Builder

Per request of @Patrick, I created a CI integration for the derivative-maker repository.

What it does

This tool runs a GitHub Actions workflow when a commit is pushed.

The workflow runs an automation suite on a remote server. The suite was created with Ansible and lives in the automated_builder directory.
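At a high level, the job the workflow kicks off could be pictured like this (a hedged sketch; the inventory and playbook file names are illustrative, not necessarily the actual ones in automated_builder):

# Roughly what the CI job executes against the remote server;
# file names here are placeholders.
ansible-playbook -i automated_builder/inventory automated_builder/build.yml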

Currently I have a Debian VPS set up on DigitalOcean that I am willing to maintain for the Whonix project.

See derivative-maker/automated_builder/README.md on my branch for more technical details about how to run it.

Current blockers

When derivative-maker/derivative-maker --build is run on the most recent commit, I am met with errors:

last_failed_bash_command: "$source_code_folder_dist/packages/kicksecure/genmkfile/usr/bin/genmkfile" reprepro-remove

I tried hardcoding the tag 16.0.5.3 to be built and ran into the same errors. I ultimately went ahead and built with the latest commit, because that is the way it should function in the future. Attached is a log of the build errors.

Log details
############################################################
ERROR in ././build-steps.d/1200_create-debian-packages detected!

dist_build_version: f69a4c663d6dfa1105c145770b2ef2aea8704214
dist_build_error_counter: 1
benchmark: 00:00:00
last_failed_exit_code: 254
trap_signal_type_previous: unset
trap_signal_type_last    : ERR

process_backtrace_result:
1: : init
2: : sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
3: : sshd: ansible [priv]
4: : sshd: ansible@pts/4
5: : /bin/sh -c /usr/bin/python3 /home/ansible/.ansible/tmp/ansible-tmp-1660320029.2783656-1983-3611384331708/AnsiballZ_command.py && sleep 0
6: : /usr/bin/python3 /home/ansible/.ansible/tmp/ansible-tmp-1660320029.2783656-1983-3611384331708/AnsiballZ_command.py
7: : /bin/sh -c /home/ansible/derivative-maker/derivative-maker --flavor whonix-gateway-xfce --target virtualbox --build >> /home/ansible/build.log 2>&1
8: : /bin/bash /home/ansible/derivative-maker/derivative-maker --flavor whonix-gateway-xfce --target virtualbox --build
9: : /bin/bash ././build-steps.d/1200_create-debian-packages

function_trace_result:
main (line number: 406)
build_run_function (line number: 35)
main (line number: 402)
build_run_function (line number: 35)
create-debian-packages (line number: 392)
build_run_function (line number: 35)
create_derivative_distribution_debian_packages (line number: 354)
errorhandlergeneral (line number: 380)
errorhandlerprocessshared (line number: 209)


last_failed_bash_command: "$source_code_folder_dist/packages/kicksecure/genmkfile/usr/bin/genmkfile" reprepro-remove
############################################################
'
++ unset error_reason
++ '[' ERR = INT ']'
++ '[' ERR = TERM ']'
++ '[' ERR = ERR ']'
++ '[' '!' 0 = 0 ']'
++ true 'INFO: dist_build_auto_retry set to 0 (--retry-max). No auto retry.'
++ unset dist_build_auto_retry_counter
++ true
++ ignore_error=false
++ answer=
++ '[' ERR = ERR ']'
++ '[' '' = true ']'
++ '[' -t 0 ']'
++ true 'INFO: stdin connected to terminal, using interactive error handler.'
++ true 'ERROR in ././build-steps.d/1200_create-debian-packages detected!

Please have a look above (the block within ###...).

 - Please enter c and press enter to ignore the error and continue building. (Recommended against!)
 - Please press r and enter to retry.
 - Please press s and enter to open an chroot interactive shell.
 - Please press a and enter to abort.'
++ read -p 'Answer? ' answer
Answer?

I would upload the full log file for posterity, but it seems as though I am unable to upload a .txt file on this Discourse instance. @Patrick, any chance you could help with this?

Next steps

  1. Once the builds work successfully, we need to put them somewhere… probably as a GitHub Actions artifact. That said, they may be too large, and this might require some sort of S3 bucket or other remote place to push them (see the sketch after this list).

  2. Set up the build to load the OVA into VirtualBox on the remote server, so @Patrick can VNC into the server and test the running VMs after a build.
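For the bucket option, the upload itself could be a one-liner; a hedged sketch (the bucket name and the whonix_binary output path are assumptions, not the project's actual layout):

# Hypothetical artifact upload keyed by commit SHA; bucket name and
# OVA path are placeholders.
commit_sha="$(git -C ~/derivative-maker rev-parse HEAD)"
aws s3 cp ~/whonix_binary/Whonix-Gateway-XFCE.ova \
    "s3://derivative-maker-builds/$commit_sha/Whonix-Gateway-XFCE.ova"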

Longer term goals

  1. Run the WATS testing suite automatically on the new VMs

  2. Make WATS more useful and robust


I didn't test it yet, but it looks great!

For any build success/failure, would it be possible to post that as a comment to GitHub, GitLab or somewhere? Perhaps post the log? That would be immensely helpful.

Maybe the log is too big. It could be posted here too when using code tags.
https://www.kicksecure.com/wiki/Forum_Best_Practices#Code_Tags
Otherwise some paste page.
But no log of the broken build is required. Current git master is indeed broken. It will be fixed soonish. At the moment there's been a lot of wiki work, so it got delayed. Why the latest stable tag is broken I don't know, but instead of investigating this, I'll prioritize getting a newer, functional tag out.


Currently the build error handler being interactive (asking on stdin what to do) is an issue?
Since it's on a server and non-interactive, the good news is the build script has a non-interactive feature. By setting the environment variable dist_build_non_interactive=true, the question can be avoided and the build would just error out.
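A minimal sketch of that on the runner (the flavor, target, and log path simply mirror the build command from the log above):

# Non-interactive build; on error the script exits instead of prompting.
dist_build_non_interactive=true \
    ~/derivative-maker/derivative-maker \
    --flavor whonix-gateway-xfce --target virtualbox --build \
    >> ~/build.log 2>&1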

In any case, whether the build succeeded (exit code 0) or failed (non-zero exit code), just post a new ticket on GitHub (or GitLab or somewhere) with the result? Perhaps the title would say "build failed" or "build succeeded" plus something else useful, such as the nearest tag (git describe --always --abbrev=10000000000000).
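A hedged sketch of such a notification step, assuming an authenticated gh CLI on the runner and a build_exit_code variable captured from the build command:

# Hypothetical notification step; $build_exit_code is assumed to hold
# the exit code of the derivative-maker invocation.
nearest_tag="$(git -C ~/derivative-maker describe --always --abbrev=10000000000000)"
if [ "$build_exit_code" = 0 ]; then
    title="build succeeded: $nearest_tag"
else
    title="build failed: $nearest_tag"
fi
gh issue create --repo mycobee/derivative-maker --title "$title" --body-file ~/build.log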

Size could be an issue indeed.
The builds created on a remote server would not be redistributed to users. Those are only created locally, for better security.
A CI server is however very useful, since build failures would then be promptly reported (and therefore fixed much, much sooner).

Auto-testing the images using WATS also sounds very interesting.

@Patrick here is the log ^^


I will set the dist_build_non_interactive variable; that will make it so GitHub Actions notifies on a failed build, and you can see it quickly.

Normally, do you test things manually? If I went ahead and loaded the successful builds into VirtualBox on the VPS, could you VNC in and test? Would that be useful to you?

Also, when it comes time for you to test this functionality, let me know and I will give you the necessary environment variables for the derivative-maker repo settings.
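(Those values would presumably live in the repo's Actions secrets; a hedged sketch with hypothetical secret names:)

# Hypothetical secret names, set via the gh CLI.
gh secret set VPS_HOST --repo mycobee/derivative-maker
gh secret set VPS_SSH_KEY --repo mycobee/derivative-maker < ~/.ssh/id_ed25519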


Yes, but that surely is non-ideal.

Yes, that would certainly be very useful.

Also a succeeded-build notification somewhere would be good. It could be a comment to the same ticket (hardcoded ticket number). Otherwise I'd be wondering whether just the server is broken or the script is broken, versus the build really having succeeded.

I will make sure to implement it in a way that leaves no question about whether or not things worked.

If the build succeeds on GitHub Actions, it will mean that the OVA has built successfully and been loaded into VirtualBox on the VPS.

You will be able to connect and verify, but I will ensure it is in the artifacts as well.


Artifacts

When a build runs, you can visit that build's page and download the artifacts at the bottom of the page. The artifact will have logs from the run.

Instructions

  1. Visit the Actions menu on the repo (in this case /mycobee/derivative-maker/actions/)
  2. Click on the name of your build, e.g.
    https://github.com/Mycobee/derivative-maker/actions/runs/2869767476
  3. Scroll down to the bottom, where you see Artifacts
  4. Click logs to download the zipped folder

Within build.log, you can find the log which specifies the current issues building master.
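The same artifact can also be grabbed from the command line; a hedged sketch using the GitHub CLI (the run ID is the one from the example link above, and the artifact is assumed to be named logs):

# Download the logs artifact of a given run; requires an authenticated
# gh CLI.
gh run download 2869767476 --repo mycobee/derivative-maker --name logs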


@Patrick I am going to do a few small clean-up things over the next couple of days, but otherwise the work I am able to do on this project is blocked by the breaking build.

Once the builds succeed, my next steps are:

  1. Autoload the OVAs on the runner (see the sketch after this list)
  2. Set up the server so you can VNC in and test out the builds
  3. Push the OVAs to a storage bucket, with the commit SHA in the image names, so you can download them and work locally if desired
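For step 1, the autoload could look roughly like this (a hedged sketch; the OVA path and VM naming are placeholders):

# Hypothetical import of a freshly built OVA into VirtualBox on the VPS.
vm_name="whonix-gateway-$(git -C ~/derivative-maker rev-parse --short HEAD)"
VBoxManage import ~/whonix_binary/Whonix-Gateway-XFCE.ova --vsys 0 --vmname "$vm_name"
VBoxManage startvm "$vm_name" --type headless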

After that is complete, I will merge the commit. We can work together to get it set up for the derivative-maker repo, and then we can figure out how to make the WATS tests run via the pipeline as well.


Sorry, I didn't unbreak the build yet.

As a hack to unblock, try downloading the existing binary image as a regular download from the Whonix website?

This unfortunately broke during a major refactoring. But anyhow.

Git tag 16.0.6.5-developers-only is hopefully fixed. The build is already past the issue which you experienced. Will post again when the build has successfully completed.


Another (new) build bug fixed in 16.0.6.6-developers-only.


Yes. Should be fixed. The build of one VM flavor already succeeded. The other builds will likely succeed too.


Awesome. Thanks, Patrick.

Small request: since we are automating things to run without user interaction, can we set it up so that dist_build_non_interactive avoids this logic:

INFO: Script running as as non-root, ok.
INFO: Running 'sudo --non-interactive --validate' to test if sudo password entry prompt is needed...
sudo: a password is required
INFO: Going to run 'sudo --validate' to prompt for password...
INFO: Please enter sudo password.

I have it set up so the ansible user on the VPS doesn't need to enter any sudo password to run the necessary commands, but --validate asks for a password regardless.


That's good. I was actually thinking about updating the build documentation to recommend setting up passwordless sudo.

How did you set up passwordless sudo?

The following passwordless sudo should work:

file:

/etc/sudoers.d/passwordless

content:

%sudo ALL=(ALL:ALL) NOPASSWD:ALL

Also required to run:

sudo adduser user sudo

(Replace user with the actual user name.)

(Based on usability-misc/etc/sudoers.d/user-passwordless at master · Kicksecure/usability-misc · GitHub)

At the time of writing, Qubes uses passwordless sudo by default, and sudo --validate does not show a password prompt for me. So I am surprised that it doesn't work yet.

Not sure yet how this could be avoided. Certainly a command line parameter or environment variable could be added as a last resort, but before going to such lengths it would be good to see if there's a cleaner solution.

Ah. I answered too fast.

That should be easy. Will do now.

Hm. Not sure that would work.

Even if just running…

sudo --non-interactive --validate

does that prompt for a password for you?

Because some sort of test would be needed to make sure sudo is actually functional.

Perhaps just --validate is the issue… Could you please confirm that some other command, such as…

sudo --non-interactive test -d /usr

works for you without any sudo issues?

Working on a fix now. I set

user  ALL=(ALL) NOPASSWD:ALL

in /etc/sudoers, which basically lets me run commands without a password, but validate wasn't working.

$ sudo echo hello
hello

$ sudo --non-interactive --validate
sudo: a password is required

Going to use your method in /etc/sudoers.d/passwordless and fix it. Thanks for teaching :)


Okay, so I think --validate is the issue.

/etc/sudoers

# Allow members of group sudo to execute any command
%sudo	ALL=(ALL:ALL) ALL

vpsuser  ALL=(ALL) NOPASSWD:ALL

@includedir /etc/sudoers.d

Example shell commands as the ansible user to show the behavior:

vpsuser:~$ sudo echo 'no password supplied!'
no password supplied!

vpsuser:~$ sudo --non-interactive --validate
sudo: a password is required

vpsuser:~$ sudo cat /etc/sudoers.d/passwordless
%sudo ALL=(ALL:ALL) NOPASSWD:ALL

vpsuser:~$ groups
vpsuser sudo

vpsuser:~$ sudo --non-interactive test -d /usr
vpsuser:~$
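For what it's worth, this matches sudoers' documented verifypw behavior: with the default verifypw=all, sudo --validate only skips the password prompt when every matching sudoers entry carries NOPASSWD, and the plain %sudo ALL=(ALL:ALL) ALL line above does not. A hedged, untested sketch of a sudoers-side workaround:

# Assumption: relaxing verifypw for this user would make --validate
# passwordless too; check the file with visudo before relying on it.
echo 'Defaults:vpsuser verifypw = any' | sudo tee /etc/sudoers.d/verifypw
sudo visudo -cf /etc/sudoers.d/verifypw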

This is fixed in 16.0.6.7-developers-only.

Changed to:

Please let me know if that works for you.