Rearranging Raspberry Pi

February 27th, 2017


Replacing the production DB under a running system.

Two years ago we migrated Raspberry Pi from a single big server to a series of virtual machines (VMs) on an even bigger server. As time has gone on, this architecture has served us well; we’ve managed the Pi Zero launch and the busier Raspberry Pi 3 launch, and even briefly ran the website on Raspberry Pis as a test.

However, we had a number of issues with the setup that we were looking to address:

  1. We were out of space at the back end and needed more capacity;
  2. We wanted more redundancy: the setup was dependent on a single dedicated VM host in a single data centre; and
  3. There’s an apparent hardware fault on the current VM host that causes it to very occasionally reboot spontaneously (or, in one case, switch itself off altogether).

It was time for a bit of an upgrade.

Scalable WordPress

The biggest part of the Raspberry Pi configuration is the main WordPress site that serves the front page for www.raspberrypi.org. This consisted of three VMs: two web servers and one database server. WordPress doesn’t provide any built-in functionality for scaling to multiple servers, and although the vast majority of pages are driven entirely by the database, some operations, such as installing plugins or uploading media, result in the creation of local files that need to be available to all web servers.

In order to support WordPress in a multi-server configuration, we arrange the two web servers as a primary and a secondary. The primary delivers half of the public requests and also hosts the administration and content-creation side. The relevant parts of the local filesystem are then regularly rsynced to the secondary server, which serves public requests and can support the full public load of the site on its own if necessary.
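A minimal sketch of that sync, assuming conventional WordPress paths and a hypothetical hostname for the secondary (the real invocation isn’t shown here), run regularly from cron:

# hypothetical sketch: push uploads and plugins from the primary to the
# secondary so both web servers serve the same files
rsync -a --delete /var/www/wordpress/wp-content/uploads/ \
    secondary:/var/www/wordpress/wp-content/uploads/
rsync -a --delete /var/www/wordpress/wp-content/plugins/ \
    secondary:/var/www/wordpress/wp-content/plugins/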

The website is fronted by our “CDN”. This is a cluster of Mac Mini servers that we use to offload much of the static content traffic, and to load balance across the two web servers.

Step 1 : Figure out what you’re trying to do and write a plan.

More capacity meant a new VM host, and more redundancy meant that it went in a different data centre (Sovereign House, or “SOV”) to the existing VM host (Harbour Exchange, or “HEX”).

Diagnosing the fault on the current VM host is tricky, firstly because it only occurs once or twice a year, and secondly because the machine was hosting quite a busy live website. So our plan was to migrate all VMs onto entirely new hardware in HEX so that we could prod the old box at our leisure. This also gave us a handy opportunity to do something we’d been wanting to do for a while: an OS upgrade on the VM host itself.

To add redundancy to the main site, we could split the two web server VMs across the two sites, but this doesn’t help unless we also replicate the database at the new site. So overall, we wanted to move one web server VM to SOV and add a database VM there, and in HEX we wanted to move the database VM and the remaining web server VM to the new hardware. This all needed to be achieved with minimal downtime; for such a highly public site, we’d rather not have two hours of downtime if we can avoid it.

Step 2 : Move the database

We brought up a second database VM at the second site (SOV) and set it up as a MySQL replica of the primary database VM; this requires only the briefest interruption to service to configure. We then simulated a failure of the primary database server and moved all database services to the alternative site, so the database was now in SOV. Again, this caused only a very brief interruption to service (<5s).
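For the curious, a minimal sketch of pointing a freshly restored replica at the primary; the hostname, credentials and binlog coordinates are placeholders, not our real setup:

# run on the new replica after restoring a snapshot of the primary
mysql -e "CHANGE MASTER TO MASTER_HOST='db-primary.example', \
          MASTER_USER='repl', MASTER_PASSWORD='...', \
          MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=4; \
          START SLAVE;"

# confirm replication is running and Seconds_Behind_Master is falling
mysql -e "SHOW SLAVE STATUS\G"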

We then moved the old database VM to the new VM host in the original site (HEX) and reconfigured it as a MySQL replica. We now have a primary/secondary setup for MySQL on the new VM hosts, with the secondary in a different location (HEX) to the primary (SOV), and the HEX server no longer on the faulty hardware.

It’s worth noting that in normal operation, both web servers use the same database server for all queries. In similar arrangements, it’s quite common to have one or more web servers send read queries to the replica and write queries only to the primary, thereby reducing the load on the primary database server. Unfortunately, standard WordPress doesn’t support this, but there are plugins that do, which we may look at in the future.

Step 3 : Move the web servers

We shut down the secondary web server (HEX), moved it to the replacement VM host in the same data centre and brought it back up. The CDN automatically redirected all web traffic to the primary web server until the secondary came back up. Once this was complete, we took the primary web server offline, disabling all administration functions for the main website. Fortunately we’d told everyone in the Raspberry Pi office to drink coffee while we did this, so nobody complained. Again the CDN moved all production traffic, this time to the secondary web server, and we then moved the primary VM to our alternative data centre (SOV) to sit next to the primary database server.

Step 4 : Tell the CDN

Unlike many providers, we have independent routing at each of our sites. This gives us much greater resilience to network problems, but means that moving a VM between sites necessitates a change in IP address. We informed our cluster of Mac Minis in the CDN that the primary web server had moved, and the administration site sprang back into life, with traffic split evenly across the two sites.
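Our CDN isn’t necessarily running nginx, but purely as an illustration of what “telling the CDN” involves (the addresses below are from the IPv6 documentation prefix), it amounts to an address change and a reload on each balancer:

# illustration only - not our actual CDN configuration
upstream wordpress {
    server [2001:db8:84::2]:80;  # primary web server, at its new SOV address
    server [2001:db8:82::2]:80;  # secondary web server in HEX
}
# then reload the balancer to pick up the change, e.g. nginx -s reload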

Step 5 : Drink coffee

Over the course of about three hours, we’d migrated a high-volume production website from a non-redundant, single-site configuration to a geographically redundant one, moved the primary database and primary web server to a different location, and provided a capacity upgrade. This was all done in the middle of the day with no user-facing downtime, and only a modest maintenance window for the administration portal.

With that done, we can start work on the next stage of the plan: migrating the remainder of the VMs away from the old VM host in HEX.

Managed WordPress

January 23rd, 2017

Analogue photo taken with film and real chemistry. Parallax Photographic Cooperative.

WordPress is an excellent content management system that is behind around 25% of all sites on the internet. Our busiest site is Raspberry Pi, which is now constructed from multiple different WordPress installations and some custom web applications, stitched together into one nearly seamless high-traffic website.

We’ve taken the knowledge we’ve gained supporting this site and rolled it out as a managed service, allowing you to concentrate on your content whilst we take care of keeping the site up and secure. In addition to 24/7 monitoring, plugin security scans, and our custom security hardening, we’re also able to assist with improving site performance.

We’re now hosting a broad range of sites on this service. The simpler cases start with customers such as Ellexus, who make very impressive technology for IO profiling and need a reliable, managed platform that they can easily update.

At the other end of the spectrum we have the likes of Parallax Photographic, a co-operative in Brixton who sell supplies for people interested in film photography, developing photographs with real chemistry for a fully analogue feel in the resulting prints. Parallax Photographic use WordPress to host their online shop, embedding WooCommerce to create a fully functional e-commerce site.

Parallax were having performance and management issues with their existing self-managed WordPress installation. We transferred their site to our managed WordPress service, in the process adding not only faster hardware but also performance improvements to their WordPress stack, custom security hardening, managed backups and 24/7 monitoring. The final switch-over took one hour at 9am on a Sunday morning, leaving them with a faster and more manageable site. They now have more time to spend fulfilling orders and taking beautiful photographs.

Purrmetrix monitors temperature accurately and inexpensively, and, as you can see above, with excellent embeddable web analytics. In addition to hosting their website and WooCommerce shop for people to place orders, we are also customers (directly, through their website!), using their service to monitor our Raspberry Pi hosting platform. The heatmap (above) is a real-time export from their system. At the time of writing, it shows a 5°C temperature difference between the cold and hot aisles across one of our shelves of 108 Pi 3s. The service provides automated alerts: if that graph goes red, indicating an over-temperature situation, alerts start firing. During the prototyping and beta phase for our Raspberry Pi hosting platform, we’ve used their graphing to demonstrate that it takes about six hours from dual fan failure to critical temperature issues, which is long enough to make maintenance straightforward.

Also embedded in our Raspberry Pi hosting platform are multiple Power over Ethernet modules from Pi Supply, who make a variety of add-ons for the Raspberry Pi, including some decent high-quality audio adapters. With the launch of the Raspberry Pi 3 we had to do some rapid vertical scaling of the Pi Supply managed WooCommerce platform: in thirty seconds it had four times the RAM and double the CPU cores to cope with the additional customer load.


We host a wide variety of WordPress sites, including Scottish comedy club Mirth of Forth, personalised embroidery for work and leisure wear, and our own blog that you’re currently reading. So if you’d like us to run your WordPress site for you, from a simple blog to a fully managed e-commerce solution or one of the busiest sites on the Web, we’d love to hear from you at sales@mythic-beasts.com.

IPv6 Update

November 1st, 2016

Sky completed their IPv6 rollout – any device that comes with IPv6 support will use it by default.

Yesterday we attended the annual IPv6 Council to exchange knowledge and ideas with the rest of the UK networking industry about bringing forward the IPv6 rollout.

For the uninitiated, everything connected to the internet needs an address. With IPv4 there are only 4 billion addresses available which isn’t enough for one per person – let alone one each for my phone, my tablet, my laptop and my new internet connected toaster. So IPv6 is the new network standard that has an effectively unlimited number of addresses and will support an unlimited number of devices. The hard part is persuading everyone to move onto the new network.

Two years ago when the IPv6 Council first met, roughly 1 in 400 internet connections in the UK had IPv6 support. Since then, Sky have rolled out IPv6 everywhere, and by default all their customers have IPv6 connectivity. BT have rolled IPv6 out to all their SmartHub customers and will be enabling it for their Homehub 5 and Homehub 4 customers in the near future. Today 1 in 6 UK devices has IPv6 connectivity, and when BT complete their rollout it’ll be closer to 1 in 3. Imperial College also spoke about their network, which has IPv6 enabled everywhere.

Major content sources (Google, Facebook, LinkedIn) and CDNs (Akamai, Cloudflare) are all already IPv6-enabled. This means that as soon as you turn on IPv6 on an access network, over half your traffic flows over IPv6 connections. With Amazon and Microsoft enabling IPv6 in stages on their public clouds, IPv6 traffic will continue to grow by default. For some ISPs, IPv6 is already the dominant protocol. The Internet Society is predicting that IPv6 traffic will exceed IPv4 traffic around two to three years from now.

LinkedIn and Microsoft both spoke about deploying IPv6 in their corporate and data centre environments. Both companies are suffering exhaustion of private RFC1918 address space – there just aren’t enough 10.a.b.c addresses to cope with organisations of their scale so they’re moving now to IPv6-only networks.

Back in 2012 we designed and deployed an IPv6-only architecture for Raspberry Pi, and have since designed other IPv6-only infrastructures including a substantial Linux container deployment. Educating the next generation of developers about how networks will work when they join the workforce is critically important.

Sneak preview from Mythic Labs, Raspberry Pi netboot

August 5th, 2016

We don’t like to pre-announce things that aren’t ready for public consumption. It’s no secret that we’d love to offer hosted Raspberry Pis in the data centre, and in our view the blocker to this being possible is the unreliability of SD cards, which require physical attention when they fail. So we’ve provided some assistance to Gordon to help get netboot working for the Raspberry Pi. We built a sensible-looking netboot setup and spent a fair amount of time debugging and reading packets to help work out why the netboot was occasionally stopping.

This isn’t yet a production service and you can’t buy a hosted Raspberry Pi server. Yet. But if you’d be interested, we’d love to hear from you at sales@mythic-beasts.com.



This is a standard Raspberry Pi 3 with a Power over Ethernet (PoE) adapter. You have to boot the Pi once from a magic SD card which enables netboot. Then you remove the SD card and plug the Pi into the powered network port. PoE means we can power cycle it using the managed switch. At boot, it talks to a standard tftpd server and isc-dhcp-server, which deliver a kernel that runs from an NFS root. It’s a minimal Raspbian Jessie from debootstrap plus sshd, and occupies a mere 381MB versus the 1.3GB of a standard Raspbian install. The switch reports the Pi 3 consuming 2W.
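Our exact configuration isn’t published, but the isc-dhcp-server side looks roughly like this sketch; the addresses are made up, and early netboot firmware was reportedly fussy about the exact vendor option string:

# hypothetical /etc/dhcp/dhcpd.conf fragment
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.100 10.0.0.200;
  # the Pi's boot ROM looks for this vendor option before it will TFTP
  option vendor-class-identifier "PXEClient";
  option vendor-encapsulated-options "Raspberry Pi Boot";
  # TFTP server holding the boot files and kernel
  next-server 10.0.0.1;
}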

The Raspberry Pi topple is just for fun.



DNSSEC now in use by Raspberry Pi

May 12th, 2016

Over the past twelve months we’ve implemented Domain Name System Security Extensions (DNSSEC), initially by allowing the necessary records to be set with the domain registries, and then in the form of a managed service which sets the records, signs the zone files, and takes care of regular key rotation.

Our beta program has been very successful: lots of domains now have DNSSEC and we’ve seen very few issues. We thought we should do some wider testing with a larger number of users than our own website, so we asked some friends of ours with a busy website if they felt brave enough to give it a go.

Eben Upton> I think this would be worth doing.
Ben Nuttall> I'll go ahead and click the green button for each domain.
-- time passes --
Ben Nuttall> Done - for all that use HTTPS.

So now we have this lovely graph indicating that we’ve secured DNS all the way down the chain for every request. Mail servers know for definite that they have the correct address to deliver mail to, and web clients know they’re talking to the correct web server.

The only remaining task is to remove the beta label in our control panel.

Raspberry Pi DNSSEC visualisation, click for interactive version


IPv6 only hosting

April 27th, 2016

Last week at the UK Network Operators Forum, Pete gave a talk about our IPv6-only hosting: the progress we’ve made and the barriers we’ve overcome.

It’s now available to view online.

The little computer that did

April 13th, 2016

At the end of March we migrated the Raspberry Pi website from a very big multi-core server to a tiny cluster of eight Raspberry Pi 3s. Here’s a bit more detail about how it worked.

The Pi rack not fooling anyone on April 1st


Booting

For the Raspberry Pi 3 launch we tried out some Pis running in a data centre environment under high load, using the SD card for the root filesystem. They kept crashing: if you exceed the write capability of the card, the delays make the kernel think the storage has failed and the system falls over. We also wanted to be able to rebuild the filesystem remotely, so we can fix a broken Pi without physical access. So we’ve put the root filesystem on a network file server, which is accessed over NFS.

The Raspberry Pi runs the latest kernel, 4.1.18-v7+, and boots from the SD card with a configuration as follows:

dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/nfs rootfstype=nfs
  ip=10.46.189.2::10.46.189.1:255.255.255.252::eth0:off 
  nfsroot=10.46.189.1:/export/10.46.189.2 elevator=deadline 
  fsck.repair=yes rootwait

This brings up eth0 with an address from a block of four IP addresses: one for the network, one for broadcast, one for the Pi and one for the network fileserver. It then mounts the NFS filesystem at:

nfsroot=10.46.189.1:/export/10.46.189.2

and uses that as the root filesystem.
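For reference, the fields of the kernel’s ip= parameter (documented in the kernel’s nfsroot notes) map onto the values above as follows:

# ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>
#
# 10.46.189.2      the Pi itself (client-ip)
# (empty)          server-ip; the NFS server is named separately via nfsroot=
# 10.46.189.1      the fileserver, acting as gateway (gw-ip)
# 255.255.255.252  a /30, i.e. the block of four addresses
# (empty)          hostname
# eth0             the device to configure
# off              no DHCP/BOOTP autoconfiguration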

Overly simple introduction to VLANs

On a traditional switch, you plug things in and any ethernet port can talk to any other ethernet port. If you want two different networks you need two different switches, and any computer that needs to be on both networks needs two network ports. In our case we’re trying to give each Raspberry Pi its own private storage network, so each Pi would require its own switch, and the fileserver would need a separate network port for every Raspberry Pi connected to keep them apart. This would get expensive very quickly.

Instead we turn on virtual LANs (VLAN). We connect our fileserver to port 24 and create a VLAN for ports 1 & 24, another for 2&24, etc. The switch configuration for the fileserver port specifies these VLANs as “tagged”, meaning our switch adds a header to the front of every packet from a Raspberry Pi port that allows the fileserver to tell which VLAN, and therefore which Raspberry Pi, the packet came from. The fileserver can reply with the same header, and that packet will only be sent to that specific Raspberry Pi. It behaves as if each Raspberry Pi has its own switch.
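As an illustration of the scheme in a hypothetical Cisco-style CLI (our actual switch differs), the per-Pi ports are untagged access ports and the fileserver port is a tagged trunk:

! hypothetical sketch, not our actual switch configuration
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 10           ! Raspberry Pi 1: untagged
!
interface GigabitEthernet0/24
 switchport mode trunk
 switchport trunk allowed vlan 10,11 ! fileserver: one tagged VLAN per Pi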

Network on the fileserver

The fileserver sees each VLAN as a separate network card, named eth0.N where N identifies the VLAN. We can configure them like any other network interface:

auto eth0.10
iface eth0.10 inet static
	address 10.46.189.1
	netmask 255.255.255.252

auto eth0.11
iface eth0.11 inet static
	address 10.46.189.5
	netmask 255.255.255.252

eth0.10 and eth0.11 appear to be separate network cards, each with a tiny network with one Raspberry Pi on the end, but in reality there’s a single physical ethernet connection underneath them all.

Network on the Raspberry Pi

On the Raspberry Pi, eth0 is already configured by the boot line above to talk to the fileserver. In our switch configuration, we specify that the private network is “untagged” on the Raspberry Pi’s port, which means its packets carry no VLAN header, so the Pi accesses the network as “eth0” rather than “eth0.N” as we did on the fileserver.

In order to do anything useful, we also need to give the Raspberry Pis access to the public network. On our network, the public network is accessible on VLAN 131. We configure this to be a “tagged” VLAN on the Raspberry Pi port, meaning it becomes accessible on the eth0.131 interface. We can configure this in the normal way, and in keeping with other back-end servers on the Raspberry Pi setup, it only has an IPv6 address:

auto eth0.131
iface eth0.131 inet6 static
	address	2a00:1098:0:84:1000:1::2
	netmask 64
	gateway	2a00:1098:0:84::1

Effectively, the Raspberry Pi believes it has two network cards: one, eth0, on a private network shared with the fileserver; the other, eth0.131, with an IPv6 address, connected to the real internet.

Why all that configuration?

In an ideal world we’d give each Pi a single IPv6 address and mount the network filesystem over that. However, with an NFS root filesystem, anyone on the LAN who can steal your IPv6 address can potentially access your files. There’s a second complication: IPv4 support is built into the standard Raspberry Pi kernel, so the per-Pi differences are confined to the kernel command line, whereas with IPv6 we’d have to build an initrd to load the IPv6 modules and set up the NFS mounts.

Planning for the future, we’ve spoken to Gordon about how PXE boot on the Raspberry Pi will work, and it’s extremely likely to require IPv4 to pull in the bootloader, kernel and initrd. Whilst there is native IPv6 in the Raspberry Pi office, there isn’t any on their test LAN for developing the boot code, and it’s currently not a major priority for the Pi, despite around 5% of the UK having native IPv6.

So if we want to make this commercial, each Pi needs its own storage network and it needs IPv4 on the storage network.

Power over Ethernet

We’ve added a Power over Ethernet HAT to our Raspberry Pis. This means that they receive power over the ethernet cable in addition to the two separate networks. As well as reducing the amount of space used by power bricks, it also means you can power cycle a Raspberry Pi just by re-configuring the switch.
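On a Cisco-style switch, for instance, that power cycle is just two commands on the Pi’s port; this is a hypothetical sketch rather than our actual switch’s CLI:

! hypothetical sketch: cut and restore PoE power on one port
interface GigabitEthernet0/3
 power inline never   ! the Pi loses power
 power inline auto    ! and boots again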

Software

Each Raspberry Pi runs Raspbian with Apache2 installed. We pulled in PHP 7 from Debian Stretch to improve PHP performance, and copied all the files for the Raspberry Pi website onto the NFS root for each Raspberry Pi (so the fileserver effectively has eight copies, one per Pi). We then added the IPv6 addresses of the Raspberry Pis to the site’s load balancer, deleted the addresses of the main x86 servers, and waited for everything to explode.
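As a hedged sketch of how PHP 7 can be mixed into a Jessie-based system (the repository line and pin are assumptions, not our exact setup), apt pinning keeps everything else on Jessie:

# add the stretch repo, pinned low so nothing upgrades to it
# unless explicitly requested
echo 'deb http://mirrordirector.raspbian.org/raspbian/ stretch main' \
    > /etc/apt/sources.list.d/stretch.list
printf 'Package: *\nPin: release n=stretch\nPin-Priority: 100\n' \
    > /etc/apt/preferences.d/stretch
apt-get update
apt-get install -t stretch php7.0 libapache2-mod-php7.0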

Did it work?

Slightly to our surprise, yes, and well. We had a couple of issues: the Pi is much slower than the x86 servers, not only in clock speed but also in the speed of the network card used to access the filesystem and the database server. Some rarely used functions, such as registering a new Raspberry Jam, weren’t really quick enough under the new setup and gave some people error pages as connections timed out. Uploading images for new WordPress posts was similarly an issue, as receiving a 3MB file and distributing eight copies over a 100Mbps network isn’t very fast. But mostly it worked.

Did power cycling the Pis via the switch work?

We never had to use it in production: every Pi remained up and stable for the whole 3.5 days we had the system in use. In testing, it’s been fine.

Can I buy one?

Not yet. At present you can still break a Pi by destroying the flash, and the enclosure doesn’t allow for replacement without taking the whole shelf (which in production would contain 96 Pis) offline. Once we have full netboot for the Pi, it is a service we could offer.

Can I register my interest to buy a Pi in the cloud?

Sure – email us at sales@mythic-beasts.com and we’ll add you to a list to keep you up to date.

Rebuilding Raspberry Pi

March 9th, 2016

After the Raspberry Pi 2 launch in February 2015, we had a review of how to improve and scale the hosting setup for Raspberry Pi. There were two components that caused us pain during the Pi 2 launch: the main site, running WordPress, and the forums, powered by phpBB.

The first question from our review was whether we should be putting effort into scaling a WordPress site. WordPress is estimated to be powering as many as a quarter of all websites, and it’s popular for a reason: it makes site development very easy. WordPress is easily extensible through themes and plugins, it’s supported by a vast array of existing third party plugins, and it provides a good built-in framework for delegating and moderating authoring roles.

Unfortunately, this ease of development is at least in part down to a very simplistic execution model: each page is dynamically generated, executing code from every installed plugin and typically resulting in multiple database queries. When the Raspberry Pi site gets busy, it’s usually down to a huge number of visitors hitting a single page, which is essentially a static news story. WordPress provides no built-in mechanism for caching such content, so by default we’re dynamically generating many copies of identical, or near-identical, pages.

Losing the flexibility and ease of development offered by WordPress just to cope with the handful of days when the site gets very busy would be unfortunate, so we decided to put effort into making the existing site scalable.

Caching

For pretty much every WordPress problem you can imagine, there’s at least one plugin offering to solve it for you. For site performance, there are a number of plugins such as WP Super Cache, but as WordPress itself provides no framework for identifying the cacheable parts of a page, these can only take a very simple, and typically over-cautious, page-based approach.

For example, if you’re a logged-in user, you might get served a page that is in some way tailored to you, so Super Cache bypasses its cache and serves you a dynamic page. Similarly, if Super Cache sees a request that looks like a comment being posted, the cache is invalidated, and a dynamic page is served and cached for future requests.

During the Pi 2 launch, we saw significant problems with load spikes when comments were posted. Clearly, small delays in comments becoming visible on the site are a minor annoyance compared to thousands of visitors being served an error page, so we set about making our caching more aggressive.

We wrote a small hack called staticify. This fetches the key pages from the blog every 60 seconds and renders them to static HTML. That way we always have a page in our static cache, and because we’re selecting the pages that we cache, we can afford to be more brutal: we know that there’s no user-specific content on these pages, so we serve up the same cached page even if you’re logged in.
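The script itself isn’t published, but the idea, as a minimal sketch (the page list and paths are assumptions), is a cron-driven fetch with an atomic rename so a half-written page is never served:

#!/bin/sh
# sketch of the idea behind staticify, not the real script
for page in / /blog/ /downloads/; do
    out="/var/www/static${page%/}/index.html"
    curl -sf "https://www.raspberrypi.org${page}" -o "${out}.tmp" \
        && mv "${out}.tmp" "${out}"   # readers see old or new, never half
done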

More virtualisation

An important goal after the Pi 2 launch review was to split different parts of the site onto separate virtual servers. For example, having the WordPress blog and the forums software on different VMs made it much easier to experiment with HipHop VM (HHVM), which offered a significant performance boost to the blog but is incompatible with the forum software.

Although the Raspberry Pi setup runs as a private cloud on a single host machine, having different components split onto separate VMs makes it much easier to balance resources between them, and if necessary, spin up extra capacity quickly using our public cloud.

IPv6

When Raspberry Pi was hit by a DDoS we built an IPv6-only backend network for the machines to communicate with each other. In the new setup, all access to the back-end VMs comes either from one of four front-end load balancers or from a “gateway” VM, so we thought we’d remove IPv4 connectivity from the VMs entirely. For example, this is ifconfig on one of the blog PHP VMs:

eth0      Link encap:Ethernet  HWaddr 52:54:00:3f:8a:5a  
          inet6 addr: 2a00:1098:0:82:1000:x:y:z/64 Scope:Global
          inet6 addr: fe80::5054:ff:fe3f:8a5a/64 Scope:Link

The VMs occasionally need to call out over IPv4. For example, Akismet and Twitter don’t yet have full IPv6 support, so these requests go through a NAT64 gateway provided by Mythic Beasts, which proxies the connections so that the whole thing appears almost seamless to the VM. This is part of the Mythic Beasts IPv6 education project: backward ISPs claim there is no demand for IPv6, whereas we provide multiple services from IPv6-only servers and give discounts if you use IPv6-only services.

SSL

Officially we enabled SSL because we wanted to improve our Google ranking, but handy side effects include irritating the security services and preventing third-party networks from injecting adverts or corrupting downloads. The SSL decryption is done on the front-end load balancers, and as they have lots of spare CPU it incurs no performance penalty. The one thing that isn’t encrypted is the image downloads, because of incompatibilities with the current version of NOOBS. We hope to resolve this eventually.

Pi Zero Launch

On November 26th at 7am, the Pi Zero launched: a $5 computer given away on the front of a magazine. The bandwidth graph for the Raspberry Pi server did this:

Launch day bandwidth graph


It was very exciting, and quickly exceeded our previous records from the launch of the Pi 2. The two VMs that generate all the webpages for the blog and deliver all the content hummed along at 10-25% capacity. The database VM was almost completely idle: we had successfully cached almost everything, so our database server only saw load when a comment was posted or the cache was updated. Meanwhile we neatly exceeded the 4,500 simultaneous users we had for the Pi 2, breaking 10,000 simultaneous users at our peak.


A quick back-of-the-envelope calculation concluded that our staticify script had avoided executing WordPress a very large number of times, and that the following slightly dubious claim is mostly true:


The MagPi site was a bit more difficult: it hadn’t had the same level of optimisation, and it went through a number of changes throughout the day to accelerate it. However, the VM setup meant that the excess load was contained to specific virtual machines; under our original flat hosting setup, the load from the MagPi would have taken everything offline and made identifying the underlying cause much harder.

Raspbian

We now run a full mirror of the main Raspbian site, and we’ve even done a test to make sure that the failover works.

The mirror director is a critical piece of infrastructure: without it, package downloads fail and updates can’t complete. So in the event of a failure we need to bring the mirror director back up much more quickly than we can restore 4TB+ of data from backup. As a result of this work we now have a hot spare, which has been fully tested.

Does it work and is WordPress still a good idea?

We weathered the Pi Zero and Christmas Day traffic peaks with ease, and we think we can probably double or triple the number of people using the sites at peak times before we have to think much harder or add hardware. The result is a really useful and very busy site that supports our multiple contributors, moderators and users with a relatively minimal amount of engineering and administration time, on a comparatively small server setup.

Hosting the Raspberry Pi 3 launch, on a Raspberry Pi 3

February 29th, 2016

Four years ago we sat on the phone while Eben Upton pushed the button to launch his educational computer, the Raspberry Pi, and we joined them on a fairly remarkable journey.  “How do you sell and ship 10,000 Raspberry Pis?” turned into “how do you sell and ship 5,000,000 Raspberry Pis?” and “how do you contain the excitement of the internet when you put a computer on the front of a magazine?”

Today, we’re nervously watching all the server graphs as the new Raspberry Pi 3 launches and goes on sale. We’ve had one to play with for a while so we did what we do with any new shiny toy: benchmark it in a real world application.

Raspberry Pi 3

Our Raspberry Pi 3 next to a Raspberry Pi 2 serving requests for the Raspberry Pi 3 launch.

Our favourite application is rendering WordPress pages for the Raspberry Pi website, so we set up a testbed: Pi2 and Pi3 versus the virtual machines that run the blog. We picked a typical page and tried them out. Initial results weren’t promising – just one fifth the speed of the production VMs.  The VMs have the advantage of being on the same physical server as the VM that hosts the database.

Moving the Pis to the same switch as the database server, and upgrading from PHP 5.6 to PHP 7, brought Pi 3 page rendering times down to less than twice those of the production servers.

Server               Spec                     Seconds per page
Blog VM (PHP 5.6)    24 x 2.4GHz Ivy Bridge   0.4
Pi 2 (PHP 7)         4 x 0.9GHz A7            0.9
Pi 3 (PHP 7)         4 x 1.2GHz A?            0.7

That’s fast enough to be usable. Parallelising requests across all cores (four cores at 0.9 seconds per page is roughly 4.4 pages per second), we can probably sustain about 4 hits/second from the Pi 2, 6 hits/second from the Pi 3 and around 50 hits/second for the main site.

These figures are for uncached pages.  As we’ve seen in the past, 50 hits/second isn’t even close to enough to cope with launch day traffic.  In reality, the vast majority of pages we serve are cached and both Pis can adequately serve 100Mbps of cached pages (versus 4Gbps for the main host).

So we’ve done what any sensible real-world test would do: we’ve put them into the main hosting mix. If you read the headers, on some requests you’ll see

  HTTP/1.1 200 OK
...
  X-Served-By: Raspberry Pi 3
...

indicating your page request came off a Raspberry Pi 3.

We’re aiming to serve about 1 in 12 requests from a Pi 2 or a Pi 3, but may adjust this up or down to keep the Pis in action without melting under the load.

How’s it done?

The backend for the Raspberry Pi site is built from virtual machines. One VM runs the database, and a pair of VMs generate pages for the main, WordPress-based, website. One of the pair is designated as primary and also runs the admin backend for WordPress; files are synchronised from it to the other VM and now, additionally, to both Raspberry Pis. All the backend servers exist on a pure IPv6 network. We have a cluster of front-end servers that are dual stack and load balance requests through to the IPv6-only backends.

If you have IPv6 you can see the status of the two Pis here:

stats.pi2.raspberrypi.org
stats.pi3.raspberrypi.org

If you don’t have IPv6, complain to your ISP, then set up a tunnel at he.net.

The two Pis can tweet directly at @hostingpi3 and @hostingpi2. Sadly, Twitter doesn’t support IPv6, so traffic goes via our NAT64 service, which provides outbound connectivity from IPv6-only servers to legacy parts of the internet.

Decimal points are important

January 5th, 2016

Ben at Raspberry Pi wanted to use his new vanity domain, rpf.io, as a URL shortener rather than using one of the common big services. The easy solution was to use an existing service on a paid account, which gives us analytics and tracking. However, demonstrating the age-old principle that if you have to ask, you can’t afford it, his email reads…

$695/month for a .htaccess file

We like open source software, so instead of paying enough money to rent quite a nice car for a trivial .htaccess file, we chose to install YOURLS on a little IPv6-only virtual machine behind our NAT64 and IPv6 proxy services.
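For comparison, the “.htaccess file” being quoted at $695/month amounts to roughly this (the short codes and targets here are invented):

# a static redirect map - more or less the whole of the paid product
RewriteEngine On
RewriteRule ^mythic$ https://www.mythic-beasts.com/ [R=301,L]
RewriteRule ^forums$ https://www.raspberrypi.org/forums/ [R=301,L]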

We’ve done some benchmarking: out of the box we could sustain 500 hits/second, and adding php-apc boosted this to well over 2,000 hits/second, which should be enough even if Liz Upton gets very excited with the Raspberry Pi Twitter account.

You can test out the service at http://rpf.io/mythic before we start making these links public.