Hi all,
@Patrick had me look at offline documentation recently.
I’ve taken a stab at a new (but still, IMO, temporary/intermediate/imperfect) solution.
This method is as follows:
- Once a day, a server-side cronjob generates a collection of sitemap.xml files for the MediaWiki instance. The ‘parent’ sitemap file is at https://www.whonix.org/wiki/sitemap/sitemap-index-wiki.xml, and it contains an index of links to the other sitemaps - just the way MediaWiki likes to do things.
- The scrape-whonix-wiki.sh script (in the repo above) runs later in the day (still server-side) and uses the sitemap to ‘discover’ the URLs of all the wiki content. It then scrapes those pages with a Python tool called ‘webpage2html’, does a whole heap of other ugly munging to fix most links, remove irrelevant parts of the page, add .html suffixes, etc., and finally commits the new version to the repo above. The result is a server-side-generated collection of HTML pages from the wiki that more or less looks like the real thing (a rough sketch of the flow follows below).
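For illustration, the flow looks roughly like this. This is only a sketch, not the actual scrape-whonix-wiki.sh: the `<loc>` parsing is deliberately naive, the output filenames are simplified, and the webpage2html invocation assumes it writes the inlined page to stdout (check `webpage2html --help` on your install).

```bash
#!/bin/bash
# Simplified sketch of the scrape flow described above -- NOT the real
# scrape-whonix-wiki.sh.
set -eu

SITEMAP_INDEX="https://www.whonix.org/wiki/sitemap/sitemap-index-wiki.xml"
OUT_DIR="whonix-wiki-html"
mkdir -p "$OUT_DIR"

# 1. Get the child sitemap URLs from the index, then the page URLs from
#    each child sitemap (naive <loc> extraction, good enough for a sketch).
curl -s "$SITEMAP_INDEX" \
  | grep -oP '(?<=<loc>)[^<]+' \
  | while read -r child; do
      curl -s "$child" | grep -oP '(?<=<loc>)[^<]+'
    done > page-urls.txt

# 2. Scrape each page with webpage2html, which inlines CSS/JS/images into a
#    single self-contained file (assumed to write the result to stdout).
while read -r url; do
  webpage2html "$url" > "$OUT_DIR/$(basename "$url").html"
done < page-urls.txt

# 3. The real script then munges links (local .html targets, skipping
#    Special: pages, etc.) and commits the result to the repo.
```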
Why webpage2html and not wget?
Because MediaWiki loads its CSS and JavaScript assets dynamically via PHP (from a collection of different sources including MediaWiki core, the skin in use, relevant extensions, etc.), there are no ‘static’ assets that can simply be downloaded and served. A wget of the pages leaves them pointing at /w/load.php?xxxxxxxxx for their assets, which means the pages don’t render properly offline - the content ends up all messed up.
webpage2html takes a different approach: it fetches the assets and embeds them inline in the HTML file itself.
The downside is that each HTML file is rather large (1.4 MB on average). The upside is that it actually looks OK and it’s entirely local.
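A quick illustration of the difference (not part of the script; the Documentation page and output filenames are just examples, and the webpage2html invocation assumes it writes to stdout):

```bash
# A plain download still references MediaWiki's dynamic asset loader:
wget -q -O Documentation.html "https://www.whonix.org/wiki/Documentation"
grep -c 'load.php' Documentation.html   # non-zero: CSS/JS comes via /w/load.php

# webpage2html fetches those assets and embeds them inline instead,
# producing one large but self-contained file:
webpage2html "https://www.whonix.org/wiki/Documentation" > Documentation-offline.html
```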
Clone it down in a VM and then open file:///home/user/whonix-wiki-html in a browser - it works quite well.
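In other words, something like this (the repo URL is a placeholder - use the actual GitHub repo linked above):

```bash
# Clone the pre-generated HTML copy, then browse it locally.
git clone https://github.com/example/whonix-wiki-html ~/whonix-wiki-html
# Then open file:///home/user/whonix-wiki-html in the browser.
```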
So, the pros and cons of this method:
Pros:
- Automatable via cron
- Dynamic discovery of pages (e.g. any new pages created in the last 24 hours will appear in the new nightly sitemap XML files, so the subsequent crawl picks up new content along with edited content)
- Fully offline copy (some footer links etc. might still point to the main www.whonix.org site, but links from content to other content should load the respective local .html file. I skip some useless pages, such as many of the Special: ones - I think the main content is what really matters here)
- Requires no technical knowledge from the user to set it up locally, unlike the documentation at https://www.whonix.org/wiki/Dev/Replicating_whonix.org, which basically requires near-sysadmin knowledge. The user only needs to know how to git clone the GitHub repo above
- Can also be run by anyone, anywhere (no need to rely on this GitHub version)
Cons:
- Files are quite large. The repo is pretty quick to clone (50-60 MB or so), but the resulting local copy is maybe 850 MB+, due to all the duplicated images embedded in each .html file. There is nothing I can do to fix this.
- Still relies on crawling the site to fetch content, which is sort of a security risk as described above. We partially mitigate that by running the script on the server itself, so it is connecting to ‘localhost’ in essence (not literally localhost, but its own eth0 interface), making it pretty much impossible to MITM. However, whatever HTML it generates comes from the live wiki, which might already be compromised - a risk that all other ‘export from MediaWiki’ solutions face too.
Whilst this goes maybe a bit further than our previous attempts, I personally still consider it a band-aid fix.
Ultimately, the best form of ‘offline documentation’ that also resists watering-hole attacks would be to follow the QubesOS example of using a GitHub repo with Markdown docs: collaboration happens through pull requests, and the ‘published’ documentation is merely a deployed version of those docs. That turns the solution on its head, with ‘offline’ coming first and publication coming second.
The cost is:
- A slightly larger learning curve (maybe) for sending pull requests for changes.
- Maybe a reliance on GitHub, although there are ways to mitigate that too (the GitHub repo could be a means to an end - a mirror of some more ‘pristine’ copy held somewhere else - rather than a hard dependency).
It also means you could ship the offline documentation with Whonix itself (accessible through the home page of Tor Browser), making the website’s copy a fallback only.
In short, I think it’s worth the effort to move away from the current wiki approach in general, for many reasons. (Am I offering to migrate all the content? Not necessarily.) A long-term goal anyway, IMO.
My solution updates the repo above on a daily (or nightly) basis. I’m still fixing a couple of small bugs here and there, mostly relating to making sure links open the local .html version instead of the remote version (or the local version without the .html suffix - somehow a couple keep sneaking through my awful sed fu).
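The kind of rewrite involved looks roughly like this - a hedged sketch, not the exact sed from the script, and the output directory name is just an example:

```bash
# Rewrite absolute and relative wiki links to point at the local .html copies,
# while leaving any #section anchors intact.
sed -i \
  -e 's|href="https://www.whonix.org/wiki/\([^"#?]*\)|href="\1.html|g' \
  -e 's|href="/wiki/\([^"#?]*\)|href="\1.html|g' \
  whonix-wiki-html/*.html
```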