Australian Shared Hosting Benchmark 2018 - DreamIT Host Edition

Published and retracted 2018-11-11, re-published 2018-11-28, re-published for the DreamIT Host update 2019-01-23.

Pre-Introduction and Disclaimer

This page is an update to the Australian Shared Hosting Benchmark 2018, published at the end of 2018 at https://ausdomainledger.net/aushostingbenchmark2018/

I was solicited by DreamIT Host to benchmark their cPanel Business Web Hosting service. I agreed (no compensation).

Please keep in mind that these results come with important caveats:

  • The other hosts were not forewarned of benchmarking; DreamIT Host had advance knowledge. That said, I have no reason to believe that the benchmark was manipulated.
  • This benchmark for DreamIT Host was performed over two months after those of the other providers.
  • The account on which this benchmark ran initially required admin intervention to enable PHP's OPcache module for PHP 7.1. Therefore, this result is only valid for other DreamIT Host Business Web Hosting services that also have OPcache enabled.
  • For the above reasons, this does not meet the standard of an independent benchmark worth relying on. Refer to the original (without the inclusion of DreamIT Host) if you want that.

Introduction

The purpose of this project was to figure out the following:

Who, of the most popular[1] Australian web hosts, provides the fastest cPanel hosting at the $20/month[2] price point?

It follows up on a similar question I asked in 2017: Who is the biggest cPanel web host in Australia?

It was also important for the benchmark to clearly justify all of its decisions and to provide a way for others to reproduce the results on the same or similar kinds of services. To that end, every part of this benchmark is open source, including the tools and raw data, so you can run any or all parts of it yourself.

A caution: Remember, these results only reflect around a week (~4 daily runs per provider) of testing. Providers update their services constantly, and all benchmarks in existence are flawed. Consider that when choosing how seriously to take these results in your decision-making.

Providers and Services

The specific cPanel services that were benchmarked were selected according to the Methodology, which can be approximately summed up as: take the ~10 most popular web hosts on Whirlpool and choose their ~$20/mo plan.

These are then benchmarked against a VPS service at the same price, configured in a performance-optimal way[3].

| Provider | Plan Name | Cost/Month | Hostname | CloudLinux | Webserver | PHP Version | Database Version | Kernel |
| Binary Lane (baseline) | Linux VPS (2 CPU, 2GB RAM) | $20.00 | nerve-idiom | No | nginx/1.15.5 | 7.1.23 | 5.5.5-10.2.18-MariaDB | 3.10.0-862.14.4.el7.x86_64 |
| Digital Pacific | Business Standard | $21.90 | vmcp44.digitalpacific.com.au | Yes | LiteSpeed | 7.1.24 | 5.6.41-cll-lve | 2.6.32-896.16.1.lve1.4.51.el6.x86_64 |
| DreamIT Host | Business Web Hosting | $19.95 | web5.hosting-servers.com.au | Yes | LiteSpeed | 7.1.25 | 5.5.5-10.2.21-MariaDB | 3.10.0-962.3.2.lve1.5.24.7.el7.x86_64 |
| Melbourne IT (EzyReg) | cPanel® Starter Hosting | $14.94 | cp374.ezyreg.com | Yes | Apache | 7.0.32 | 5.6.41 | 2.6.32-673.8.1.lve1.4.3.1.el6.x86_64 |
| Micron21 | cPanel - Custom Plan (200% CPU, 2GB RAM, 10GB Storage) | $23.10 | cp-kil-m-009.micron21.com | Yes | LiteSpeed | 7.1.24 | 5.6.41-log | 3.10.0-714.10.2.lve1.5.9.el7.x86_64 |
| | Business - Basic | $19.95 | b9.cpcloud.com.au | Yes | LiteSpeed | 7.1.24 | 5.5.5-10.2.18-MariaDB-cll-lve | 3.10.0-714.10.2.lve1.5.17.1.el7.x86_64 |
| | Elite | $14.99 | web16.myhost.net.au | Yes | LiteSpeed | 7.1.24 | 5.5.5-10.2.18-MariaDB-cll-lve | 3.10.0-714.10.2.lve1.5.19.7.el7.x86_64 |
| | Business Hosting - Bronze | $20.00 | vmcp22.web-servers.com.au | Yes | LiteSpeed | 7.1.24 | 5.5.5-10.0.36-MariaDB-cll-lve | 3.10.0-714.10.2.lve1.5.12.el7.x86_64 |
| | Freedom | $16.95 | s222.syd2.hostingplatform.net.au | Yes | LiteSpeed | 7.1.24 | 5.5.5-10.2.19-MariaDB | 3.10.0-714.10.2.lve1.5.19.8.el7.x86_64 |
| Zuver | Web Hosting w/ Power Pack | $6.00 | flash.zuver.net.au | Yes | LiteSpeed | 7.1.24 | 5.7.24 | 2.6.32-896.16.1.lve1.4.54.el6.x86_64 |

Synergy Wholesale, TPP Wholesale, Crazy Domains, and Net Registry were all disqualified[7], bringing the original field down to 8 providers (9 with the addition of DreamIT Host).

Results

| Provider | WP Overall Score | WP P90 1/sec | WP P90 5/sec | WP P90 10/sec | DB Insert | DB Queries | Prime Sieve | PHP.net #1 | PHP.net #2 | IO Random | IO Open | IO Seq. Write |
| Baseline: Linux VPS (Binary Lane) | 100% | 23.14ms | 35.20ms | 47.17ms | 37.00s | 52 | 8255 | 417.00ms | 2922.00ms | 438348 | 53.60ms | 189.87ms |
| DreamIT Host | 1.63x slower | 64.13ms (2.77x slower) | 55.29ms (1.57x slower) | 52.35ms (1.11x slower) | 23.63s (1.57x faster) | 77 (1.48x faster) | 6730 (1.23x slower) | 309.00ms (1.35x faster) | 2360.00ms (1.24x faster) | 62369 (7.03x slower) | 129.68ms (2.42x slower) | 739.33ms (3.89x slower) |
| Netorigin | 2.40x slower | 27.76ms (1.20x slower) | 91.65ms (2.60x slower) | 134.27ms (2.85x slower) | 21.07s (1.76x faster) | 76 (1.46x faster) | 11250 (1.36x faster) | 298.00ms (1.40x faster) | 2031.00ms (1.44x faster) | 95951 (4.57x slower) | 35.84ms (1.50x faster) | 64.76ms (2.93x faster) |
| VentraIP | 2.79x slower | 56.58ms (2.45x slower) | 107.39ms (3.05x slower) | 129.98ms (2.76x slower) | 34.15s (1.08x faster) | 47 (1.11x slower) | 6809 (1.21x slower) | 504.00ms (1.21x slower) | 3207.00ms (1.10x slower) | 99867 (4.39x slower) | 80.42ms (1.50x slower) | 120.03ms (1.58x faster) |
| Net Virtue | 3.42x slower | 95.23ms (4.12x slower) | 105.79ms (3.01x slower) | 159.74ms (3.39x slower) | 71.27s (1.93x slower) | 31 (1.68x slower) | 6210 (1.33x slower) | 915.00ms (2.19x slower) | 4331.00ms (1.48x slower) | 62396 (7.03x slower) | 652.50ms (12.17x slower) | 102.88ms (1.85x faster) |
| Zuver | 4.38x slower | 67.39ms (2.91x slower) | 175.87ms (5.00x slower) | 218.49ms (4.63x slower) | 54.80s (1.48x slower) | 41 (1.27x slower) | 6494 (1.27x slower) | 611.00ms (1.47x slower) | 3728.00ms (1.28x slower) | 3768 (116.33x slower) | 244.68ms (4.56x slower) | 31380.02ms (165.27x slower) |
| Micron21 | 4.55x slower | 78.27ms (3.38x slower) | 221.44ms (6.29x slower) | 180.86ms (3.83x slower) | 47.50s (1.28x slower) | 39 (1.33x slower) | 6502 (1.27x slower) | 511.00ms (1.23x slower) | 4097.00ms (1.40x slower) | 345645 (1.27x slower) | 91.32ms (1.70x slower) | 86.75ms (2.19x faster) |
| Digital Pacific | 6.28x slower | 99.97ms (4.32x slower) | 196.48ms (5.58x slower) | 366.08ms (7.76x slower) | 82.48s (2.23x slower) | 34 (1.53x slower) | 4844 (1.70x slower) | 926.00ms (2.22x slower) | 5442.00ms (1.86x slower) | 16685 (26.27x slower) | 114.71ms (2.14x slower) | 2554.74ms (13.46x slower) |
| Panthur | 6.79x slower | 107.07ms (4.63x slower) | 212.61ms (6.04x slower) | 396.54ms (8.41x slower) | 83.61s (2.26x slower) | 35 (1.49x slower) | 5488 (1.50x slower) | 683.00ms (1.64x slower) | 4582.00ms (1.57x slower) | 15774 (27.79x slower) | 219.78ms (4.10x slower) | 211.20ms (1.11x slower) |
| Melbourne IT | 890.99x slower | 847.87ms (36.64x slower) | 42600.00ms (1210.23x slower) | 50560.00ms (1071.87x slower) | 58.65s (1.59x slower) | 46 (1.13x slower) | 2007 (4.11x slower) | 2408.00ms (5.77x slower) | 14011.00ms (4.80x slower) | 3841 (114.12x slower) | 161.16ms (3.01x slower) | 30478.16ms (160.52x slower) |

Visualized (based on WP Overall Score)

[Bar chart of WP Overall Score by provider, fastest to slowest: DreamIT Host, Netorigin, VentraIP, Net Virtue, Zuver, Micron21, Digital Pacific, Panthur, Melbourne IT]

Methodology

Provider Selection

The selection of providers is a task that could be strongly affected by bias, so I endeavored to give it some semblance of being data-driven.

The ideal scenario was to scrape all of the threads posted during 2018 in the Whirlpool Web Hosting forum, apply named-entity recognition to the posts, and then perform a frequency count for each named entity (web host).

Unfortunately, I did not quite reach that scenario. While the scraping of the forum was easy to perform (and done with permission), the state of the art for NER requires a significant amount of training to be at all effective; e.g. "net registry" is very hard to pick up as a named entity. In the end, I (very unglamorously) used my eyeballs to identify the top 17 web hosts from the ~16k posts.

From there, permissive regular expressions were written for those 17 hosts and used to generate the frequencies.
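
By way of illustration, the counting looked something like the sketch below. The two patterns are invented stand-ins, not the actual expressions; those, along with the raw data, live in the source code repository.

    <?php
    // Hypothetical sketch of the frequency count; the patterns are invented
    // examples, not the real expressions from the repository.
    $patterns = [
        'Net Registry' => '/\bnet\s*registry\b/i',
        'VentraIP'     => '/\bventra\s*ip\b/i',
    ];

    // Stand-ins for the ~16k scraped forum posts, one string each.
    $posts = [
        'Anyone had trouble with netregistry lately?',
        'VentraIP has been solid for me.',
    ];

    $counts = array_fill_keys(array_keys($patterns), 0);
    foreach ($posts as $post) {
        foreach ($patterns as $host => $re) {
            if (preg_match($re, $post)) { // count posts mentioning the host
                $counts[$host]++;
            }
        }
    }
    arsort($counts); // highest frequency first
    print_r($counts);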

One significant mistake that I made was counting all posts regardless of whether they were made in 2018 (so threads that run for many years had all of their posts counted). A second mistake was that posts from another forum ("Server Management") accidentally leaked into the dataset during the scrape, but I suspect its impact was negligible.

You can look in the source code repository for all of the raw data and the actual frequencies.

Benchmark Selection

A meaningful measure of "performance" in shared web hosting should ideally reflect the speed at which humans experience the web applications running on their cPanel service. As such, we will run only pure-PHP benchmarks which are initiated over HTTP requests. This rules out the usual tooling like sysbench, FIO, etc. Another reason for using pure PHP is that a number of hosts have disabled system()/exec(), and it's not clear whether it's possible to fork an arbitrary ELF executable (e.g. using cgi-bin) on every host. Since that had a chance of torpedoing the entire methodology, I chose to avoid the risk.
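
To make "pure PHP, initiated over HTTP" concrete, here is the general shape of such a test; the structure and names are my illustration rather than the actual harness (which is in the repository). Timing inside the script keeps network latency out of the measurement.

    <?php
    // Illustrative shape of an HTTP-initiated pure-PHP test, not the actual
    // harness: the script is uploaded to the account, fetched over HTTP, and
    // times itself so network round-trips don't pollute the measurement.
    $start = microtime(true);

    $acc = 0;
    for ($i = 0; $i < 5000000; $i++) { // stand-in workload; real tests below
        $acc += $i % 7;
    }

    header('Content-Type: text/plain');
    printf("elapsed_ms=%.2f acc=%d\n", (microtime(true) - $start) * 1000, $acc);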

In benchmarking, relying on microbenchmarks is unavoidable, and even "realistic" tests will not truthfully predict a real-world workload. However, this is a comparative study: we are not gunning for accuracy in the absolute numbers reported by the benchmarks, just for accuracy in the comparative numbers. For this reason, the results are shown as a percentage of the performance of a baseline environment (a fast VPS).

The baseline environment is an unmanaged Linux VPS from Binary Lane, at the same cost target as the shared hosting plans. It is configured in a performance-maximizing way (nginx+PHP-FPM), with the aim of providing some guidance as to "how much slower/faster" a shared service is, compared to an optimum (though not necessarily the optimum [6]).

Each benchmark is also repeated 25-30 times across a one-week interval, in order to account for transient "noisy neighbor" effects.

The pure PHP benchmarks are described below.

1. WordPress

This is the "realistic" benchmark that primarily informs the overall result.

It involves installing an entirely vanilla[4] copy of WordPress and performing a constant-rate, concurrent HTTP siege against it using wrk2, the fork of wrk that corrects for coordinated omission. I have further forked wrk2 (the ausdomainledger/wrk2 multi-ip branch) in order to round-robin requests between IPv4 and IPv6 addresses, when available. The reason for this is that Zuver and VentraIP have extremely aggressive CSF firewall rules configured, which cause the 10/sec rate level to fail. Using multiple source IPs permits us to evade the firewall.

The aim of the benchmark is to record the latency percentiles of the site's transaction time at fixed request rates (1/sec, 5/sec, 10/sec). These should approximately reflect the usability of the website when it is being browsed by only the webmaster, by a few users, and by a few more users, respectively. Initially the top rate was 25/sec rather than 10/sec, but it became clear during benchmarking that all but one of the providers fell flat on their faces under such a workload.

Because shared hosting is a multi-tenant environment, we will focus mainly on the P90 result, as anything higher is more or less just noise. However, you may browse the raw data for a complete latency distribution for each provider.
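
For reference, a P90 is simply the latency below which 90% of sampled requests fall. A minimal nearest-rank sketch is below; wrk2 itself derives its percentiles from an HdrHistogram, so this is a simplification.

    <?php
    // Nearest-rank percentile over a latency sample; a simplified sketch of
    // what "P90" means here. wrk2's real reporting uses an HdrHistogram.
    function percentile(array $samplesMs, float $p): float {
        sort($samplesMs);
        $rank = (int) ceil(($p / 100) * count($samplesMs));
        return $samplesMs[max(0, $rank - 1)];
    }

    $latencies = [21.4, 23.1, 22.8, 95.0, 24.2, 23.9, 26.0, 22.1, 30.5, 25.3];
    printf("P90 = %.2fms\n", percentile($latencies, 90)); // P90 = 30.50ms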

The "siege" is performed from a dedicated server hosted on Servers Australia's Sydney network, and is within a millisecond latency of every benchmarked website. The first exception is Micron21, who are hosted in Melbourne, who are sieged from a separate VPS node hosted in Micron21's Kilsyth DC. The second exception is Net Origin, who are hosted in Perth, who are sieged from a separate VPS node hosted by RansomIT in Perth. In both cases, extra care was taken to ensure that the different environment did not introduce extra advantage or penalty.

2. Database

This is a second "realistic" benchmark, which focuses on the ability of the user to put the database server to work. It is important to include because each host may or may not be running CloudLinux's MySQL governor.

The benchmark is based upon the first 500k records in the ASIC Business Names dataset, with a full-text MySQL search index on the business name column. The benchmark itself reports on the throughput of INSERTs and full-text queries.
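
As a sketch of the two phases (the schema, query term, and 10-second window below are my assumptions for illustration; the real dataset and queries are in the repository):

    <?php
    // Sketch of the DB benchmark's two phases, assuming a table like:
    //   CREATE TABLE business_names (id INT AUTO_INCREMENT PRIMARY KEY,
    //     name VARCHAR(255), FULLTEXT KEY ft_name (name));
    $db = new mysqli('localhost', 'bench_user', 'bench_pass', 'bench_db');

    // Phase 1: timed INSERTs of the business-name records.
    $records = ['ACME PLUMBING PTY LTD', 'OUTBACK BAKERY']; // stand-in for 500k rows
    $stmt = $db->prepare('INSERT INTO business_names (name) VALUES (?)');
    $start = microtime(true);
    foreach ($records as $name) {
        $stmt->bind_param('s', $name);
        $stmt->execute();
    }
    printf("insert time: %.2fs\n", microtime(true) - $start);

    // Phase 2: full-text query throughput over a fixed window.
    $queries = 0;
    $deadline = microtime(true) + 10.0;
    while (microtime(true) < $deadline) {
        $db->query("SELECT id FROM business_names
                    WHERE MATCH(name) AGAINST ('plumbing') LIMIT 50");
        $queries++;
    }
    printf("full-text queries completed: %d\n", $queries);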

3. Microbenchmark: CPU - Prime Sieve

This is a better CPU microbenchmark than #4 and #5, basically because it runs for longer. It is my own port of the Sieve of Eratosthenes implementation that sysbench uses to measure single-threaded CPU performance.
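
In PHP, the sieve looks roughly like the following; the limit is an illustrative choice of mine, and the actual port is in the repository.

    <?php
    // Sieve of Eratosthenes in the sysbench style: count primes up to a limit
    // using a single thread. The limit here is illustrative.
    function countPrimes(int $limit): int {
        $composite = array_fill(0, $limit + 1, false);
        $count = 0;
        for ($n = 2; $n <= $limit; $n++) {
            if ($composite[$n]) {
                continue;
            }
            $count++; // $n is prime
            for ($m = $n * $n; $m <= $limit; $m += $n) {
                $composite[$m] = true; // strike out multiples of $n
            }
        }
        return $count;
    }

    $start = microtime(true);
    $primes = countPrimes(1000000);
    printf("%d primes <= 1000000 in %.3fs\n", $primes, microtime(true) - $start);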

4. Microbenchmark: PHP.net's bench.php

This is one of the two official microbenchmarks used by PHP to report on performance gains and regressions between PHP versions. However, it runs very quickly and is not very good at measuring any meaningful workload.

5. Microbenchmark: PHP.net's micro_bench.php

As above.

6. Microbenchmark: Disk IO

Disk IO is a terribly important subsystem when measuring performance, but it is more or less impossible[5] to accurately measure from a pure PHP benchmark. For this reason, it is heavily discounted in the overall result.

Nonetheless, an attempt is made to measure it, for curiosity's sake if nothing else. The benchmark essentially performs mass random reads, writes, and opens on files characteristic of the kind you would find in a shared hosting web application (e.g. WordPress).
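
The flavour of the test is sketched below, with assumed file counts and sizes, and subject to the caveat in footnote [5] that writes mostly land in the kernel's page cache.

    <?php
    // Illustrative sketch of the disk IO test: mass writes, opens and random
    // reads over many small files, like a WordPress tree. Counts/sizes are
    // assumptions. Per footnote [5], writes mostly hit the page cache.
    $dir = sys_get_temp_dir() . '/iobench';
    @mkdir($dir);

    // Sequential-write phase: create many small files.
    $start = microtime(true);
    for ($i = 0; $i < 1000; $i++) {
        file_put_contents("$dir/f$i.txt", str_repeat('x', 8192));
    }
    $writeMs = (microtime(true) - $start) * 1000;

    // Open + random-read phase: reopen the files in random order.
    $order = range(0, 999);
    shuffle($order);
    $start = microtime(true);
    foreach ($order as $i) {
        $fh = fopen("$dir/f$i.txt", 'rb');
        fseek($fh, random_int(0, 4096)); // seek to a random offset
        fread($fh, 4096);
        fclose($fh);
    }
    $readMs = (microtime(true) - $start) * 1000;

    printf("seq write: %.2fms, open+random read: %.2fms\n", $writeMs, $readMs);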

bUt iS iT sTaTisTiCaLlY sIgNiFiCanT?

Directly comparing samples (i.e. percentiles) across datasets is potentially problematic. I haven't proven that the measured differences did not occur purely by chance.

By running the benchmark suites 25-30 times across different days and for significant durations, I have tried to provide an accurate basis for comparison, but my good feeling about it is purely intuitive rather than the product of rigorous statistical analysis.

Ranking the Results

For the overall rankings, the sum of the medians of the 1/sec, 5/sec, and 10/sec P90s is taken.
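
Concretely, the computation is the sketch below; the per-run values are made up for illustration.

    <?php
    // Overall score: for each rate level, take the median P90 across the
    // ~25-30 runs, sum the three medians, and compare against the baseline's
    // sum. All numbers below are made up for illustration.
    function median(array $xs): float {
        sort($xs);
        $n = count($xs);
        return $n % 2 ? $xs[intdiv($n, 2)]
                      : ($xs[$n / 2 - 1] + $xs[$n / 2]) / 2;
    }

    function overall(array $p90sByRate): float {
        $sum = 0.0;
        foreach ($p90sByRate as $runs) { // one array of per-run P90s (ms) per rate
            $sum += median($runs);
        }
        return $sum;
    }

    $provider = ['1/sec' => [63.0, 64.1, 66.2], '5/sec' => [54.9, 55.3, 56.0], '10/sec' => [51.8, 52.4, 53.0]];
    $baseline = ['1/sec' => [23.0, 23.1, 23.3], '5/sec' => [35.1, 35.2, 35.4], '10/sec' => [47.0, 47.2, 47.3]];

    printf("%.2fx slower overall\n", overall($provider) / overall($baseline));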

It might be surprising to ignore the other measurements entirely, but coming up with a composite/summary score based on some kind of factor or principal component analysis proved too unwieldy for me to do correctly, and in a reasonable time.

This ranking method is also the most direct measure of the idea of "performance" that most people have in mind in the context of shared hosting, in an environment that is more or less homogeneous and optimized for WordPress workloads.

In any case, you are free to sort the results by whatever measurement you want.

Disclosures and Contact

Of all of these providers, I am affiliated only with the baseline benchmark provider, Binary Lane, as an existing customer. Their selection as the benchmark provider was made on the basis of personal preference and the fact that they are not eligible as a shared hosting provider.

With the exception of DreamIT Host (see the disclaimer above), no providers were informed about being benchmarked at any time. Care was taken not to reveal the purpose of the accounts.

All providers were paid the full amount for their services using only my personal funds.

Services were used only for the purpose of this benchmark, and then cancelled. I received an automatic refund from VentraIP & Zuver at cancellation time, apparently in accordance with their 45 day money-back guarantee.

Feedback can be provided by 📧 to _<at>ausdomainledger.net. I am happy to post factual corrections or highlight mistakes and flaws. However, no existing or additional benchmarks will be re-run or have their results updated.

All product names, logos, and brands are property of their respective owners. All company, product and service names used in this website are for identification purposes only. Use of these names, logos, and brands does not imply endorsement.


[1] Popularity is calculated by the number of mentions on the Whirlpool Web Hosting forum in 2018. See Methodology for more info.

[2] $20/month was selected as an arbitrary, comfortable price point that feels like the middle of what hosts offer and what customers choose, but it isn't backed by data.

[3] The Ansible provisioning script for the VPS baseline benchmark can be found in the source code repository.

[4] Caching plugins are entirely orthogonal to the purpose of this benchmark. We are trying to perform a comparative study of the "raw power" of each cPanel service. To that end, any caching plugins that are snuck in by the server (like LiteSpeed Cache) are forcibly removed.

[5] PHP does not expose a sufficient set of syscalls to make it possible to control whether a write is actually applied to disk. At most, disk writes will result in dirty pages in the kernel, so the benchmark results will ultimately depend on the state of the kernel page cache at any point in time (in other words, completely out of our control).

[6] The selection of Binary Lane as the benchmark baseline is not any kind of endorsement or statement about its worthiness as a web host. It is purely a question of convenience and independence from the benchmarked providers.

[7] Synergy Wholesale and TPP Wholesale were disqualified for not having any retail services available (i.e. ones not requiring pre-approval/contract signing). Crazy Domains were disqualified for requiring annual+ payment terms, which does not fit the spirit of the problem statement. Net Registry were disqualified for unironically asking me to fax them my driver's licence, but it is unlikely that their results would differ much from those of Melbourne IT.