Another way to considerably speed up the website could be to switch from the standard HTTP 1.1 protocol to HTTP/2 or SPDY.

Standard HTTP 1.1 works by requesting and fetching the HTML page. The browser then parses the HTML and has to request and fetch all of the other page assets (CSS, JavaScript, images, fonts, etc.). So for a single webpage, there can be dozens of separate requests to the server for page assets, each of which must be fulfilled. This really slows down total page load times.

HTTP/2 and SPDY (the latter is being deprecated in favor of the new HTTP/2 over the next year) work differently: all of the page assets are sent to the user’s browser over one single connection, with requests and responses interleaved (multiplexing), plus compression and prioritization schemes. So the whole page arrives over one connection, instead of in many separate requests.

I’ve heard and seen that such HTTP multiplexing protocols can considerably speed up webpage load times.

See the following demo site that compares HTTP vs. HTTPS with SPDY…

Some decent overview, timeframe, and browser support info in the Wikipedia articles…

I’m not sure if all the necessary compatibility factors line up quite yet, so that would have to be considered. But I just wanted to give a heads-up that HTTP/2 is likely to be a strong option for further speeding up the website in the near future.

And, as a bonus, it is potentially a good thing for overall web anonymity, since it may make fingerprinting HTTPS webpages by netflows a bit harder.

I did some research earlier, read a lot of benchmarks and whatnot, and came to the conclusion that on our new server a very fast (though not ultra fast) and effort-wise doable combination would be nginx as an SSL terminator, in combination with varnish as a cache and apache2 with our current settings as the backend.
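For illustration, a hedged sketch of what that chain might look like. The hostname, certificate paths, and ports here are placeholders, not our actual config; 6081 is just varnish’s default listen port and 8080 an assumed apache2 port:

```nginx
# nginx: terminate SSL, forward plain HTTP to varnish (illustrative values)
server {
    listen 443 ssl;
    server_name example.org;                         # placeholder hostname
    ssl_certificate     /etc/ssl/example.org.crt;    # placeholder paths
    ssl_certificate_key /etc/ssl/example.org.key;

    location / {
        proxy_pass http://127.0.0.1:6081;            # varnish default listen port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}

# varnish (e.g. in /etc/varnish/default.vcl) would then point at apache2:
#   backend default { .host = "127.0.0.1"; .port = "8080"; }
```

So the request path would be browser → nginx (SSL) → varnish (cache) → apache2 (PHP/MediaWiki), with varnish answering cache hits without touching apache2 at all.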

[That combination has quite a few benchmarks on the web and is not that uncommon. For example, performance-wise, it beats pound.]

[And I am quite certain that nginx + varnish is faster than any other combination of nginx/lighttpd + squid/varnish.]

[It would not be ultra fast, because some things are not justified yet, such as multiple servers, load balancing, dedicated database servers, a CDN (which we probably do not want for privacy/security reasons), and whatever else professionals such as Wikimedia or Facebook use. But it would still be very fast. That means I expect page load times below 2 seconds.]

nginx alone could work as well, but since our heaviest lifting is MediaWiki, which needs a lot of PHP, and we have much more dynamic than static content, I don’t think nginx alone would do the trick here.

From past optimization work I learned that more cool stuff doesn’t always mean better performance. I don’t remember any specific examples, but sometimes an improvement sounded good in theory yet in practice slowed our site down even further.

Now, varnish doesn’t support HTTP/2 or SPDY. I haven’t found any benchmarks indicating what adding these to the combination I had in mind would do to performance, nor any instructions on whether that is even possible. What would be useful are benchmarks that apply to our case, i.e. lots of dynamic [PHP] content. Do you know of any, or do you have any experience here?

I don’t know of such benchmarks or specific instructions. Probably early days for HTTP/2 integration.

If I’m understanding Varnish as a cache correctly, it will only greatly reduce the server processing time for each page asset.

However, the page load bottleneck will still be with requesting and fetching multiple page assets over a Tor connection.

Standard HTTP Process:

  • 1st: Submitting an HTTPS request over Tor to download and then parse the webpage HTML.

  • 2nd: After completing the 1st step, submitting several more HTTPS requests over Tor for the additional page assets (CSS, JavaScript, images, fonts, etc.) to download and then parse them.

  • 3rd: After completing individual assets in the 2nd step, if any secondary page assets are imported by the primary page assets, repeating the HTTPS download and parsing process for those as well over Tor. [Not sure if there are any such secondary imported page assets.]

  • Done: Page Load Complete

So even if one assumes 0 ms server processing time, a multi-stage HTTPS request process over Tor could still put page load times above 2 seconds [I’d guess], due to network latency.
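To make that concrete, a back-of-the-envelope sketch. The 500 ms Tor round-trip time is an illustrative assumption, not a measurement, and server processing time is taken as zero (best case):

```python
# Rough estimate of page load time over a high-latency link such as Tor.
# All numbers are illustrative assumptions, not measurements.

TOR_RTT = 0.5       # assumed round-trip time over Tor, in seconds
SERVER_TIME = 0.0   # assume 0 ms server processing (best case)

def http11_load_time(stages):
    """HTTP 1.1: sequential stages (HTML, then assets, then secondary
    assets); each stage costs at least one full round trip."""
    return sum(TOR_RTT + SERVER_TIME for _ in range(stages))

def multiplexed_load_time():
    """HTTP/2 / SPDY ideal case: roughly one round trip for the whole page."""
    return TOR_RTT + SERVER_TIME

# HTML -> primary assets -> secondary assets = 3 sequential stages
print(http11_load_time(3))      # 1.5 seconds, before any transfer time
print(multiplexed_load_time())  # 0.5 seconds
```

Even this toy model ignores actual download time and TLS handshakes, so real sequential loads over Tor would be slower still; the point is just that each extra sequential stage adds at least one full round trip.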

HTTP/2 and SPDY greatly speed up the network transport of page assets, largely via multiplexing, so that in the ideal case only a single HTTPS connection/download is needed for the entire page contents.

Multiplexed HTTP Process:

  • 1st: Submitting an HTTPS request for the webpage, receiving everything in one bulk download, and parsing the page assets.

  • Done: Page Load Complete

But we can see where net page load times stand after the new server and standard HTTP 1.1 with Varnish caching are implemented. And HTTP/2 will become more standard and better supported over the next year or so, it seems.

Just wanted to put this on the radar.


Sure, I am very interested to boost the server speed further.

Here is a benchmark (not involving varnish):

Enabling it seems super simple:
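Presumably something along these lines, assuming nginx was built with the spdy module (the server name is a placeholder, and the rest of the existing server block stays as-is):

```nginx
server {
    listen 443 ssl spdy;   # adding "spdy" to the existing ssl listener enables it
    server_name example.org;
    # ... existing ssl_certificate / location blocks unchanged ...
}
```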

How that interacts when using varnish in the background, I guess we’ll see.

Glad it is a simple enable. Yeah, hopefully it interacts well with the caching setup.

Will be especially interested to learn what level of performance impact it has for browsing over Tor, given Tor’s higher latency and lower bandwidth.

It works. I just enabled SPDY in nginx. No test has shown a difference so far.

Tor Browser does not support SPDY yet:

In this video they combine Varnish and SPDY:

Will take quite some time until HTTP/2 comes to nginx:

So most likely we won’t have it before Debian stretch, since our server is based on Debian stable [now jessie, since the freeze]. Most of the time it’s not worth the extra effort to upgrade just to get such a feature.

Since Tor Browser does not speak SPDY yet, I imagine it would take quite a while until it supports HTTP/2.

A few notes:

A good HTTP/2 or SPDY speed test would involve these elements:

  • Server Support
  • Client Support
  • No Local Client Cache
  • Ensuring the Push Capability is Activated and Working

Not sure if automated speed test services would handle this protocol yet. Maybe not, since it is still a bit early.

These mention browser support for Firefox and others, so this becomes relevant whenever Tor Browser catches up to those Firefox versions.

Firefox supports HTTP/2, which has been enabled by default since version 36. Experimental support for HTTP/2 was originally added in version 34. Currently only HTTP/2 over TLS is implemented.
Firefox supports SPDY 2 from version 11, enabled by default since version 13. (Also SeaMonkey version 2.8+.) SPDY protocol functionality can be (de)activated by toggling the network.http.spdy.enabled variable in about:config. Firefox 15 added support for SPDY 3. Firefox 27 added SPDY 3.1 support. Firefox 28 removed support for SPDY 2. about:networking (or the HTTP/2 and SPDY indicator add-on) shows if a website uses SPDY.


Source: the ngx_http_spdy_module documentation. Quote:

Current implementation of SPDY protocol does not support “server push”.

Looks like this stuff will take quite some more time.


I’ve tested locally with Chromium, using the Page Load Time add-on. (Verified that Chromium has SPDY support [success], as well as that the site had SPDY support [success].) Noticed no difference.

I should re-test with the Chromium add-on Cache Killer enabled. (It disables the cache, which increases page load times by ~0.2 to ~0.3 seconds.)

Enabling/disabling SPDY in nginx is quite low effort.

SPDY is disabled for now. Would you like to run some benchmarks yourself? Then I’ll re-enable it, and you can test again and compare.

That’s okay. I don’t imagine I’d come up with much different at this point.

HTTP/2 seems to be becoming the new standard, so hopefully the tools will improve and be able to further ease the latency bottleneck.