What about having all the servers in the same location? Say, for example, we use OnionBalance to run the main front-end instance on a Whonix gateway with its own workstation, and we run two separate back-end instances on individual Whonix gateways with their own respective workstations. End-user traffic then gets spread across all the backends roughly evenly (each client picks an introduction point at random, so it's not a strict round robin, but the effect is similar).
1. First, the three backend instances (which are regular onion services) publish their descriptors to the Tor directory hashring.
2. Then OnionBalance fetches the descriptors of the backend instances from the hashring.
3. OnionBalance now extracts the introduction points from the backend descriptors and creates a new superdescriptor that combines all of those introduction points. Then OnionBalance uploads the superdescriptor to the hashring.
4. Now the client, Alice, fetches the superdescriptor from the hashring by visiting the front-end instance.
5. Alice picks an introduction point from the superdescriptor and introduces herself to it. Because the introduction points actually belong to the backend instances, Alice is actually talking to, say, backend instance #2, effectively getting load-balanced.
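The steps above can be modeled with a toy Python sketch. This is not the real OnionBalance or stem API, just a hypothetical illustration of the idea: merge the backends' introduction points into one superdescriptor, then have the client pick one at random.

```python
import random

# Hypothetical toy data, not real descriptors: each backend's descriptor
# lists that backend's introduction points.
backend_descriptors = {
    "backend1.onion": ["intro-A1", "intro-A2"],
    "backend2.onion": ["intro-B1", "intro-B2"],
}

def build_superdescriptor(descriptors):
    """What the frontend does: combine every backend's introduction
    points into a single superdescriptor."""
    intro_points = []
    for points in descriptors.values():
        intro_points.extend(points)
    return intro_points

def client_pick(superdescriptor):
    """What Alice does: pick one introduction point at random. Whichever
    backend owns that point ends up serving her -- that random pick is
    where the load balancing actually happens."""
    return random.choice(superdescriptor)

superdescriptor = build_superdescriptor(backend_descriptors)
print(client_pick(superdescriptor))
```

Note that the distribution is only as even as the clients' random choices; with two backends contributing equal numbers of introduction points, each backend gets roughly half the traffic.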
Doesn't this mean we have implemented load balancing? Please refer to the diagram on the OnionBalance website for more detail; I can't add links for some reason.
In this setup we don't treat the Whonix gateway and workstation as separate entities but as one system, much like a plain Ubuntu host. The load balancing works the same way it would on Ubuntu, because all the changes are on the gateway and none on the workstation. The only difference is that the gateway acts as the application server and the workstation as the database server, whereas on Ubuntu both run on the same machine. A guide like the OP suggested would be helpful for a lot of people. This sounds easy in theory, but implementation is going to be difficult.
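For concreteness, the configuration might look roughly like the sketch below. All addresses, paths, and ports are placeholders, and this follows the shape of the v3 OnionBalance tutorial from memory, so check the official docs before relying on it:

```
# On the front-end gateway: OnionBalance config.yaml
# (one frontend key, pointing at the two backend onion addresses)
services:
- key: key.key
  instances:
  - address: <backend1-address>.onion
  - address: <backend2-address>.onion

# On each back-end gateway: torrc
HiddenServiceDir /var/lib/tor/backend
HiddenServicePort 80 127.0.0.1:80        # the workstation serves the app here
HiddenServiceOnionbalanceInstance 1

# Inside each backend's HiddenServiceDir, a file named "ob_config":
MasterOnionAddress <frontend-address>.onion
```

The Whonix wrinkle is that the `HiddenServicePort` target would point at the workstation's internal IP rather than 127.0.0.1, since the app runs on a separate VM.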