Derivative Maker Automated CI Builder

Yup I will do that :slight_smile:

1 Like
---
- name: Clean existing gateway VM
  shell: "dist_build_non_interactive=true /home/ansible/derivative-maker/derivative-maker --flavor whonix-gateway-xfce --target virtualbox --clean > /home/ansible/build.log 2>&1"

- name: Clean existing workstation VM
  shell: "dist_build_non_interactive=true /home/ansible/derivative-maker/derivative-maker --flavor whonix-workstation-xfce --target virtualbox --clean >> /home/ansible/build.log 2>&1"

- name: Reboot VPS for stray loop devices
  reboot:
    reboot_timeout: 60
  become: true

- name: Build new gateway VM
  shell: "dist_build_non_interactive=true /home/ansible/derivative-maker/derivative-maker --flavor whonix-gateway-xfce --target virtualbox --build >> /home/ansible/build.log 2>&1"

- name: Build new workstation VM
  shell: "dist_build_non_interactive=true /home/ansible/derivative-maker/derivative-maker --flavor whonix-workstation-xfce --target virtualbox --build >> /home/ansible/build.log 2>&1"

Not sure: would it be wise to combine all of them into a single script in the ci folder?

Except for - name: Reboot VPS for stray loop devices, which might make more sense for the CI to take care of.

How about:

  1. ansible calls a script in the ci folder to clean the VMs (rough script sketch below)
  2. ansible reboots the machine to clear stray loop devices
  3. ansible calls a script in the ci folder to build the VMs
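Rough sketch of what those ci folder scripts could look like (the file names, paths and layout here are made up, just to illustrate the split):

#!/bin/bash
## ci/clean_vms.sh (hypothetical name): clean any existing gateway and workstation VMs
set -e
for flavor in whonix-gateway-xfce whonix-workstation-xfce ; do
   dist_build_non_interactive=true \
      /home/ansible/derivative-maker/derivative-maker \
      --flavor "$flavor" --target virtualbox --clean
done

A ci/build_vms.sh would be the same with --clean swapped for --build, and the Ansible tasks would shrink to one shell: line each, e.g. ci/clean_vms.sh > /home/ansible/build.log 2>&1 for step 1 and ci/build_vms.sh >> /home/ansible/build.log 2>&1 for step 3.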

I agree that leaving the reboot functionality in ansible is a good idea, so ansible knows to expect the connection to the machine to break during the reboot.

For longer term maintenance, I have a question.

Is there any way to speed up the builds so that they only run the steps actually affected by the code changes?

i.e., do we need to rebuild ../monero-gui_0.18.1.0-1_all.deb when only a few things in the help steps (or similar) have changed?

Would be nice to have a “light” build or something for iterating more quickly. Currently it takes over 1.5 hours to build fresh workstation and gateway VMs. I’d love it if we could make it possible to get feedback more quickly.

I guess, though, when troubleshooting you can always just SSH into the VPS, run the troublesome build step manually, and see what is causing the problem. Just a thought though, I want this CI feature to make your life easier :man_shrugging:

1 Like

We have this thingy here: Whonix build script now optionally supports installing packages from Whonix remote repository rather than building packages locally

So just by adding…

--remote-derivative-packages true

No packages should be built and all packages would be downloaded from the Whonix binary repository. It would skip all the lengthy package creation.
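For example (assuming the flag simply combines with the options already used in the playbook), a build step would just become:

dist_build_non_interactive=true /home/ansible/derivative-maker/derivative-maker \
  --flavor whonix-gateway-xfce --target virtualbox --build \
  --remote-derivative-packages true >> /home/ansible/build.log 2>&1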

How does that sound?

How often will we use --remote-derivative-packages true? Maybe use it for git commits; for git tags, do it “proper” and drop it?

Though, when using --remote-derivative-packages true, we would not notice when package builds fail. But that isn’t very likely, since if packages are updated, I need to build them locally anyhow.

Absolutely makes sense. In my previous build, a rookie mistake of forgetting $SUDO_TO_ROOT led to a failed build. Another 40 minutes of waiting until I can see whether that is fixed now, unless I do a local build with local hacks, which is exactly what we’re trying to avoid with the CI.

That’s also why I suggested Derivative Maker Automated CI Builder - #74 by Patrick: because then I would hack the build command down to a much simpler variant, to the point where only a minimal raw image gets created, with nothing useful inside, just to test the various mount / umount steps and get that fixed quickly.

1 Like

My latest commit fix · derivative-maker/derivative-maker@03f6496 · GitHub didn’t get picked up by the CI on https://github.com/Mycobee/derivative-maker/actions yet.

Last time that was faster, I think. I am not complaining about the speed. Just wondering whether that commit got lost; if it is lost rather than just not there yet, it will never come.

Maybe the automated CI reboots could lead to some commits being overlooked by the CI?

This happened because I rebased and pushed pretty quickly after you commented, and didn’t give enough time for you to finish your stuff.

It isn’t an issue with the CI or anything, simply me being a bit too trigger-happy and pushing without having rebased 03f64961 yet.

1 Like

The text below outlines longer-term goals. Initially I think we can ship this stripped down, but getting all of this running shouldn’t be too heavy of a lift.

Conditional logic for CI builds

# if ci_trigger == commit
  # run build suite using --remote-derivative-packages
  # send logs as artifacts to github actions and notify success or failure
  # nuke excess VM data (OVAs, VDI, etc.) to save space on VPS
# elsif ci_trigger == tag
  # run build suite without using remote derivative packages
  # load and start VMs in VirtualBox via VBoxManage
  # push OVAs to S3 storage bucket so Devs/Testers can download and experiment 
  # Allow VNC access to VPS and VMs for quicker testing
  # Run the WATS suite on the Tagged VM
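A very rough shell sketch of that branching, e.g. as it could look inside a GitHub Actions step (GITHUB_REF is set by Actions on each run; everything else, script names included, is a placeholder):

#!/bin/bash
## placeholder sketch of the commit-vs-tag branching
if [[ "${GITHUB_REF}" == refs/tags/* ]]; then
  # tag push: full "proper" build, keep artifacts for testers
  ci/build_vms.sh
  ci/start_vms.sh          # hypothetical: import and start the OVAs via VBoxManage
else
  # plain commit push: fast build using the remote package repository
  ci/build_vms.sh --remote-derivative-packages true
  ci/cleanup_artifacts.sh  # hypothetical: remove OVAs/VDIs to save VPS space
fi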

Thoughts @Patrick?

@Patrick it is unclear to me what happened with this build

https://github.com/Mycobee/derivative-maker/actions/runs/3023870954

Any chance we could hop on a Jitsi call at your convenience and debug together? Would love to speed up our iteration a bit

Perfect!

Except…

This one will probably not be used. We could just as well save the storage / storage costs.

(Btw, the build logs can also be trimmed. We probably don’t want to keep (all) build logs for each build forever once these take significant space. Also, an occasional manual wipe would probably be ok.)

Using S3 and cloud hosting for development-only purposes without a strong dependency and without introducing any risk that a compromised CI could break security for users is OK.

However, if any testers had to download something from S3, that would probably be criticized. Not as bad, but kinda similar to Whonix adding Google Analytics to its website (not going to happen!). Might backfire.

Found it already.

++ realpath /home/ansible/derivative-binary/Whonix-Gateway-XFCE_image/etc/network/interfaces
realpath: /home/ansible/derivative-binary/Whonix-Gateway-XFCE_image/etc/network/interfaces: No such file or directory

A test that I added for debugging the mount issue, which might no longer be needed, caused a non-zero exit code.

But indeed, finding the error in the log is non-trivial due to the log size. In the latest commit, I prefixed errors with error: or ERROR: (best to search case-insensitively). To find this issue, I actually searched the log for “detected” rather than “error”.
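For example, something like this (plain grep, nothing derivative-maker specific) finds both, regardless of case:

grep -i -n -E 'error|detected' /home/ansible/build.log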

To make the build log contain the word “error” less often, I’ll refactor and rename some functions (the error handlers). But I’ll do that only once some builds are passing completely.

Already fixed in git.

Not sure it would be faster. I am using the CI now to debug this. Now waiting for the new git commit to show up under https://github.com/Mycobee/derivative-maker/actions to see whether the last bug has been squashed. Feel free to contact me on Telegram (as previously added). For an actual call, many apps will work for me.

1 Like

Fixed for me locally.

As of git commit bf65075d4cf4edc2f3c0291dee37c80c0207c5b7 (same as git tag 16.0.7.6-developers-only), if I make a local build with --remote-derivative-packages true there is no build issue and no stray mounts. Checked. Both the gw and ws builds were successful.

(I am building the tag, not the commit. There is a slim chance this might make a difference, perhaps by triggering a bug due to overly long file names.)

The umount bug avoidance strategy of not attempting to umount non-existent mount points, combined with umount --lazy, seems to be functional.
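In shell terms the strategy boils down to roughly the following (a simplified illustration only, not the actual derivative-maker code; the variable name and path are made up):

## only attempt an unmount if something is actually mounted there,
## and fall back to a lazy unmount if the normal one fails
mount_folder="/some/chroot/mount"   # made-up example path
if mountpoint -q "$mount_folder" ; then
   umount "$mount_folder" || umount --lazy "$mount_folder"
fi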

1 Like

Git tag 16.0.7.7-developers-only has only some cosmetic build script improvements. (Less unnecessary lsof output.)

1 Like

True. In the event you needed it for some reason, you could scp it from the VPS to another machine.

Yes, I was going to keep the bucket password-protected, so that CI builds would only be shared among people using them in situations where it is known they are not “secure” builds. But since it is not needed, it doesn’t matter. Also, I was using the DigitalOcean equivalent, not quite as bad as Amazon data-collection-wise I’d imagine, but who knows what companies do behind the curtains :man_shrugging:

The logs are wiped each time a CI build run occurs. If you look at the first task in the create_vm.yml file, it redirects stdout and stderr to build.log, and the subsequent build steps append to that file. The next time a build runs, the redirect with > overwrites the log. It is a lot of output, but storage is no concern, as it heals itself.

If you want to trim the logs, I am open to suggestions on how best to do it.
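If it ever becomes worth doing, one simple option might be to archive the previous log with a timestamp before each run and keep only the newest few archives, roughly like this (paths and retention count made up, purely illustrative):

cp /home/ansible/build.log "/home/ansible/build-$(date +%Y%m%d-%H%M%S).log"
ls -1t /home/ansible/build-2*.log 2>/dev/null | tail -n +6 | xargs -r rm --   # keep the 5 newest archives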

I have to rebase my branch on your upstream master and force-push my commit manually to trigger the build, but once my automated_builder branch is merged into your master and configured in GitHub, pushing to derivative-maker will automatically trigger the build. (I pushed the rebased changes, btw.)

1 Like

Yay, build issues fixed! Build https://github.com/Mycobee/derivative-maker/actions/runs/3026018094 completed without error.

Should I merge your branch? It seems very much ready. And the changes are also zero-risk and non-controversial, as they stay contained in one folder that does not affect the build.

Small note:

  • I cannot hit the merge button for the pull request Add automated_builder ansible suite to CI by Mycobee · Pull Request #1 · Mycobee/derivative-maker · GitHub, as this is a pull request for the Mycobee github organisation, not a pull request for the derivative-maker github organisation. To be able to hit the merge button, the pull request would have to be opened against the derivative-maker github organisation.
  • There’s no need for a pull request if that creates additional work on your side. Even if you’d just say “my branch is now ready for merge” or something like that, I could add your branch, then fetch and merge using the git command line (roughly as sketched below), which I am comfortable with for such cases. Pull request or not is a matter of choice.
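I.e. roughly (the remote name here is just an example; automated_builder being your branch):

git remote add mycobee https://github.com/Mycobee/derivative-maker.git
git fetch mycobee
git merge mycobee/automated_builder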

As for git push --force… Not sure why you’re using that, but I don’t mind multiple git commits. In other words, I’ve never asked any contributors to squash commits into a single commit, to rewrite git history, or any such stuff. That’s because when I contribute to other projects and am asked to do that, it’s always a bit cumbersome, a deterrent for me. Therefore I’m keeping it simple here and not requesting git history beautification. For the future, I don’t mind merging many smaller commits either.

Ah, that sounds good. So no logs clogging the server.

The latest log being 11 MB is, I guess, still far from running into size issues. Even if it were 50 MB for a more verbose build, that wouldn’t be a space issue, I guess?
(But it would be an issue for humans checking the log.)

I have a few more things I’d like to do for tags, but I will get it done soon and let you know when it is 100% ready to go into your repo. Also, we will have to work together on something in the configuration settings of your repo to make this work. I will send those steps in a PGP message.

No worries. For me git work is automatic, and it adds minimal extra work. I don’t mind squashing things, rebasing, pull requests, etc. I appreciate clean git histories and well-written commits, but I am also happy to do what works best for you.

I do think that if the project were to grow, implementing some git best practices would be a good thing. But while only a few people who know the project well are working on it, I don’t think it is a big deal.

It’s because I keep rebasing on top of your branch. Imagine that 1-4 are the commits, in order.

My repo looks like:

  • patrick commit 1
  • patrick commit 2
  • rob commit 3

But then you do an upstream commit (3) and I want to put my commit on top of it. When I rebase, it looks like:

  • patrick commit 1
  • patrick commit 2
  • patrick commit 3 (from upstream repo)
  • rob commit 4

Now when I push, git says the histories do not line up. This requires a force push. It is just the way git works: it warns that I have changed history, which is not something it allows unless you know what you are doing (hence the -f flag).
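In command terms the cycle is roughly (remote names are just placeholders for your repo and mine):

git fetch upstream                          # upstream = your derivative-maker repo
git rebase upstream/master                  # replay my commit on top of your new commits
git push --force origin automated_builder   # history changed, so a normal push is rejected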

Compared to the size of the VMs, 50 MB is no big deal. The VPS has a 100 GB disk. The Gateway, Workstation, and codebases (binary and derivative-maker) take a much larger percentage of storage than the logs. But again, it is a self-healing non-issue.

As for making the logs readable, that is a subjective thing, and if you feel it should be done in some particular way, I am happy to oblige. I just wanna help make your life more efficient. I will likely never have the security understanding and deep knowledge of the OS that you do, so I am just trying to build tools that help you move faster with less time spent on manual building/testing.

Git used to drive me crazy, but every job I’ve worked at has had strict git guidelines, so I really don’t even think much about it anymore; it’s mostly automatic. You can fetch my branch, cherry-pick my commit, or have an upstream PR at any point you want. It is all easy peasy for me. Whatever is best for you is best for me, just ask!

Give me like a day or two to get everything tidy, and we can merge it into your master.

Cheers :slight_smile:

1 Like

@Patrick I am doing a final build after some cleanup. If it succeeds, my changes can go into the upstream master.

Currently, it only builds when you push a tag. I will eventually update it to have stripped-down builds for commits (using --remote-derivative-packages true) and conditional logic to check whether it is a tagged push or a commit push without a tag.

For now, when you push a tag, it will build the VMs and start them on the VPS. You can VNC in and test them if you like.

May I have your PGP key so I can send instructions on the config settings for your derivative-maker repo?

Thanks

1 Like

I will need to set you up with server access so that, after a successful build, you can VNC in and test the running VMs if you like.

1 Like

Sure. Always up to date here: Contact - Whonix

1 Like

Instructions sent; things have been double-checked. Feel free to cherry-pick or merge my branch whenever you are ready.

CI builds are good to go :slight_smile:

1 Like