Hope we can use something more freedom/privacy-respecting, like GitLab. I wish there were no more GitHub usage.
@mig5 Thank You! This has long been a PITA for us.
Would using an HTML minifier, and/or running the images through a lossy-compression tool before they are incorporated, help?
EDIT: Is there a way to dedupe the images by having all pages point to one common copy of the asset?
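One low-tech way to approach the dedupe idea might look like the sketch below, assuming the duplicate assets are byte-identical files in a mirrored tree (the paths and filenames here are made up for illustration): hash each image and replace duplicates with hard links to one canonical copy.

```shell
# Hypothetical sketch: deduplicate byte-identical images in a mirrored
# wiki tree by content hash, hard-linking duplicates to one canonical copy.
set -eu
mkdir -p mirror/PageA mirror/PageB mirror/.hashes
printf 'fake-png-bytes' > mirror/PageA/logo.png
printf 'fake-png-bytes' > mirror/PageB/logo.png   # identical duplicate

find mirror -name '*.png' | while read -r f; do
  h=$(sha256sum "$f" | awk '{print $1}')
  if [ -e "mirror/.hashes/$h" ]; then
    ln -f "mirror/.hashes/$h" "$f"   # duplicate: point at canonical copy
  else
    ln "$f" "mirror/.hashes/$h"      # first sighting: record canonical copy
  fi
done
```

Note this only collapses duplicates on disk; the HTML would still reference the per-page paths, so a follow-up pass rewriting `<img>` URLs to a single shared path would be needed to shrink a tarball or git repo rather than just the unpacked tree.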
If I had a way to do that I would obviously do it; that’s the conundrum. wget can’t fetch the assets as static assets because of MediaWiki’s stupid design. webpage2html is the complete opposite: it can fetch those assets, but only by loading them entirely inline per URL.
I’ll try to experiment some more with just bash/sed/awk/cut/perl hacks, to ‘grab’ the <style> tag contents and maybe load them into their own stylesheet, or at least some ‘header.html’, and then see if I can somehow ‘include’ that in each HTML file.
It’s a totally bespoke job, and I’m keen to avoid spending too much of Patrick’s money on a band-aid fix (IMO the money is better spent paying someone to move all the content out of MediaWiki entirely and into Markdown).
Size aside… I am uncomfortable creating a deb of this and installing it by default. All my packages are built from source code, except for packages installed by apt-get. The offline HTML documentation would be built on the server (a lower trust level than my local machine), so it could be compromised. Does that make sense? @HulaHoop
The only way to reach the same security level would be to build from Markdown (which can be verified not to include any strange character sequences which could exploit vulnerabilities). Then there are also the images. And some content generator from Markdown to something else would be required. Tons of work.
We can browse the Whonix documentation inside whonix-workstation, or on the host, etc., but the question is: how can we browse the documentation inside whonix-gateway? For example, if someone wants to copy/paste commands (because they are too long to type), or if there is no host or workstation, just the gateway. So I was thinking: why don’t we save the Whonix documentation as an offline wiki inside the gateway? It could be updated via apt-get dist-upgrade, with each new Whonix version (12, 13, 14, etc.), or manually by the reader (if they can).
I don’t know if this is possible, but I would find it useful. I also don’t know of any programs for doing this, how to do it, how easy it is, etc., so if anyone can enlighten me I will be thankful. I would also like to hear your suggestions on how to view the documentation inside the gateway, in case offline documentation or a wiki is a bad idea.
I have found 3 methods of doing this:
2- wget, as explained in this link for example: http://www.linuxjournal.com/content/downloading-entire-web-site-wget
3- as full-screen screenshots, for example by using this add-on: https://addons.mozilla.org/en-US/firefox/addon/fireshot/
(I would download the documentation as images, so there is no need to view the website with a browser)
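For anyone trying method 2, a typical wget mirroring invocation looks roughly like the sketch below. It is demonstrated against a throwaway local server purely to exercise the flags; the real target URL, port, and output paths are placeholders:

```shell
set -eu
# Throwaway two-page site, served locally just to demonstrate the flags.
mkdir -p site
echo '<html><body><a href="page2.html">next</a></body></html>' > site/index.html
echo '<html><body>page two</body></html>' > site/page2.html
python3 -m http.server 8031 --directory site >/dev/null 2>&1 &
SRV=$!
sleep 1

# --mirror: recurse with timestamping; --convert-links: rewrite links for
# offline viewing; --page-requisites: also grab the CSS/images each page needs.
wget --quiet --mirror --convert-links --page-requisites --no-parent \
     -e robots=off -P mirror http://127.0.0.1:8031/

kill "$SRV"
```

As noted elsewhere in this thread, this works poorly against MediaWiki specifically, since its assets are not served as plain static files.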
The remaining question is: if I download the whole documentation and upload it to a server, etc., how can I make it possible for these images to be included inside the Whonix Gateway .ova (I mean, for the users)? I think I should get permission from Whonix, and figure out how to put it in, etc.?
Screenshots are bad. Not text anymore. Not searchable. Not clickable.
Conditions for copying Whonix wiki are already explained in the wiki footer.
[html] Unless otherwise noted above, content of this page is copyrighted and licensed under the same Free (as in speech) license as Whonix itself. [/html]
These tools are a hack, not a clean, workable solution for a distribution. Maybe https://www.mediawiki.org/wiki/Extension:Offline, but there is a lack of manpower.
Thanks for your honest opinion.
I have done some truly horrible things in my shell script that will haunt me til the end of my days (or at least until I find time to do it more elegantly, whichever comes first), but all the same, I have achieved that goal of loading in a ‘common’ style.css instead of duplicating. (Well, it doesn’t quite load yet in TB, probably a bug, but I will see if I can fix that. The important thing is I can strip the css out and put it in a separate file. It works in Chromium for me so probably a TB thing…)
This has dropped the repo from being over 900MB when unpacked, to about 168MB. I can probably even remove a whole 60MB or so by deleting/re-creating the git repo again, as the .git is mostly that big due to history regarding the previous versions of the files.
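The “.git is mostly history” point can be illustrated with a toy repo: re-initialising the repository and committing the current tree once throws away all the historical objects (paths and commit messages below are made up; in practice everyone would also have to re-clone the published repo):

```shell
set -eu
# Toy repo with two commits standing in for the heavyweight history.
git init -q repo-demo
( cd repo-demo
  git config user.email demo@example.org
  git config user.name demo
  echo one > page.html && git add page.html && git commit -qm 'first import'
  echo two > page.html && git add page.html && git commit -qm 'second import'
  # Drop the history: blow away .git and commit the working tree once.
  rm -rf .git
  git init -q
  git config user.email demo@example.org
  git config user.name demo
  git add -A
  git commit -qm 'fresh import, no history'
)
```

The working tree is untouched; only the stored history (and hence most of the .git size, when the history is large binary-ish HTML diffs) is gone.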
Bravo. Impressive indeed, and although it’s not as ideal/secure as the Markdown solution, it delivers an offline and easy means to share the information.
Can this be further pruned by adding a switch to exclude the /Dev and /Deprecated pages?
These, I would guesstimate, would have a marginal effect on size. The major sources of size are duplicated contents and images.
Not a solution but a summary of the most similar we have for now:
- https://github.com/WhonixBOT/WhonixWikiBackups (database dump, ugly binary file, restoreable in other mediawiki, no automation available)
- https://github.com/WhonixBOT/whonix-wiki-backup (markdown, looks nicer but no automated path to restore. And currently broken, unfortunately, because git-mediawiki is broken.)
- https://github.com/WhonixBOT/whonix-wiki-html (no automated path to restoration anywhere else)
fix offline documentation - pdfbook
fix whonix-wiki-html backup / fix scrape-whonix-wiki.sh
This was fixed.
No idea how to fix.
A post was split to a new topic: BlackArch Offline Documentation
Would it be possible to publish a GitHub site with all of the Whonix documentation for offline viewing? Other privacy-oriented projects like QubesOS have all their documentation published this way.
Welcome to Whonix forums and thank you for your question!
Yes, that would be very good to have.
The current much less than ideal state of things is summarized in this post:
Offline Documentation Discussion
Technical challenges and limited resources prevent it from being improved. See also discussion in this forum thread. To make it better than that, someone capable needs to help.
wget can be used to download the entire site, correct?
wget is not easy; it would need scripting. But other tools, such as httrack, do this.