PROXY protocol support for our, err, proxy

April 29th, 2016 by

We’re increasingly using our IPv4 to IPv6 reverse proxy to host websites on IPv6-only virtual machines. One of the downsides of proxying is that your server doesn’t get to see the client’s real IP address. For non-SSL connections, the proxy can insert an “X-Forwarded-For” header, but SSL is increasingly becoming the norm, and one of the nice things about an SNI-aware reverse proxy is that it doesn’t need to do SSL offload: we don’t need your certificates on our proxy and your traffic stays encrypted until it hits your server. Of course, this means that we can’t go inserting any headers into your connection either.

Fortunately, there is a solution: PROXY protocol. This is a protocol-agnostic mechanism for passing information from a reverse proxy to a server, including the client IP address.

We’ve just added support for PROXY protocol to our reverse proxy:

[Screenshot: the PROXY protocol option in the control panel]

Turning this on allows your server to see the real client IP address, but as PROXY protocol is an addition to, not part of, HTTP, your server must be expecting it: enabling it in front of a standard HTTP server will result in a broken website.

Most web servers have support for this. NGINX has support built in, and just needs “proxy_protocol” added to the listen directive:

server {
    listen 80   proxy_protocol;
    listen 443  ssl proxy_protocol;
    ...
}

You will probably also want some additional configuration to actually set the IP address that gets used for logs etc., and also to ensure that you only trust proxy information from the real proxy servers.
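With nginx, the realip module can do both jobs: substitute the address from the PROXY protocol header and restrict which sources are trusted. A minimal sketch for the http or server block (the address ranges are placeholders, not our real proxy addresses):

# Trust PROXY protocol information only from the proxy servers
# (placeholder prefixes -- substitute the real ones)
set_real_ip_from 2001:db8:84::/64;
set_real_ip_from 192.0.2.0/24;

# Replace the connection address with the one from the PROXY header,
# so $remote_addr (and therefore logs, allow/deny etc.) shows the client
real_ip_header proxy_protocol;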

For Apache, support is provided by mod_proxy_protocol, which needs to be installed manually. Once done, configuration is easy:

<VirtualHost *:443>
  ...
  ProxyProtocol On

  CustomLog ${APACHE_LOG_DIR}/access.log "%a %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\""
</VirtualHost>

The CustomLog line uses %a so that Apache logs the real client IP rather than the proxy’s address. You should now see v4 addresses being happily logged on your IPv6-only server:

root@vm1:~# tail -n 1 /var/log/apache2/access.log
93.93.130.44 - - [29/Apr/2016:14:05:32 +0100] "GET / HTTP/1.1" 200 321 "-" "curl/7.26.0"

Unfortunately, the module doesn’t currently provide a way to accept PROXY protocol information only from trusted proxies. As such, you’ll probably want a firewall rule restricting HTTP/HTTPS traffic to our proxies only, as otherwise clients could easily fake their IP address.
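As a rough sketch with ip6tables (the address range below is a placeholder for the proxy addresses):

# Accept web traffic only from the proxy servers, drop it from anywhere else
ip6tables -A INPUT -p tcp -m multiport --dports 80,443 -s 2001:db8:84::/64 -j ACCEPT
ip6tables -A INPUT -p tcp -m multiport --dports 80,443 -j DROP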

One thing to watch out for is that although this is applied within a VirtualHost configuration, it’ll actually apply to all virtual hosts on the same IP address and port. This is an unavoidable side effect of the fact that the proxy information is sent before we start talking HTTP. Of course, with IPv6, throwing another IP address at the problem isn’t an issue.

IPv6 only hosting

April 27th, 2016 by

Last week at the UK Network Operators Forum, Pete gave a talk about our IPv6-only hosting: the progress we’ve made and the barriers we’ve overcome.

It’s now available to view online.

Let’s Encrypt IPv6-only

April 18th, 2016 by

Let’s Encrypt on a v6-only host

One of the much-requested features for Let’s Encrypt free SSL certificates is support for IPv6-only hosts. Whilst this is promised in the very near future, we’re happy to say that IPv6-only hosts behind our NAT64 and Proxy services already work out of the box with Let’s Encrypt.

To test it we took the traditional dogfood approach: this website runs on an IPv6-only VM, and we’ve just enabled Let’s Encrypt SSL support on our own blog. As soon as Let’s Encrypt offers certificates to IPv6-only hosts with no proxy and no NAT64, we’ll give that a try too.

DNS-based domain validation (dns-01)

An alternative approach would be to use dns-01 validation via our DNS API. Our API speaks native v6, so that should work just fine on a truly single-stack IPv6 host.
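As a rough sketch, with an ACME client that supports manual dns-01 validation (hostname purely illustrative), the flow looks like this:

certbot certonly --manual --preferred-challenges dns -d www.example.com
# The client prints a token to publish as a TXT record at
# _acme-challenge.www.example.com (which could be automated via the DNS API),
# after which validation completes without ever touching IPv4.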

The little computer that did

April 13th, 2016 by

At the end of March we migrated the Raspberry Pi website from a very big multi-core server to a tiny cluster of eight Raspberry Pi 3s. Here’s a bit more detail about how it worked.

The Pi rack not fooling anyone on April 1st

Booting

For the Raspberry Pi 3 launch we tried out some Pis running in a data centre environment under high load, using the SD card for the root filesystem. They kept crashing: if you exceed the write capability of the card, the resulting delays make the kernel think the storage has failed and the system falls over. We also want to be able to rebuild the filesystem remotely, so we can fix a broken Pi without touching it. So we’ve put the root filesystem on a network file server, accessed over NFS.

The Raspberry Pi runs the latest kernel, 4.1.18-v7+ and boots from the SD card with a configuration as follows:

dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/nfs rootfstype=nfs
  ip=10.46.189.2::10.46.189.1:255.255.255.252::eth0:off 
  nfsroot=10.46.189.1:/export/10.46.189.2 elevator=deadline 
  fsck.repair=yes rootwait

This brings up eth0 with a block of four IP addresses: one for the network, one for broadcast, one for the Pi and one for the network fileserver. It then mounts the NFS filesystem at:

nfsroot=10.46.189.1:/export/10.46.189.2

and uses that as the root filesystem.
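On the fileserver side, each Pi gets a matching one-line entry in /etc/exports; a sketch (the export options shown are illustrative, not our exact ones):

/export/10.46.189.2  10.46.189.2(rw,sync,no_root_squash,no_subtree_check)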

Overly simple introduction to VLANs

On a traditional switch, you plug things in and any ethernet port can talk to any other ethernet port. If you want two different networks you need two different switches, and any computer that needs to be on both networks needs two network ports. In our case we’re trying to give each Raspberry Pi its own private storage network, so each Pi would require its own switch, and the fileserver would need its own network port for every Raspberry Pi connected, to keep them separate. This is going to get expensive very quickly.

Instead we turn on virtual LANs (VLAN). We connect our fileserver to port 24 and create a VLAN for ports 1 & 24, another for 2&24, etc. The switch configuration for the fileserver port specifies these VLANs as “tagged”, meaning our switch adds a header to the front of every packet from a Raspberry Pi port that allows the fileserver to tell which VLAN, and therefore which Raspberry Pi, the packet came from. The fileserver can reply with the same header, and that packet will only be sent to that specific Raspberry Pi. It behaves as if each Raspberry Pi has its own switch.

Network on the fileserver

The fileserver sees each VLAN as a separate network card, named eth0.N where N identifies the VLAN. We can configure them like any other network interface:

auto eth0.10
iface eth0.10 inet static
	address 10.46.189.1
	netmask 255.255.255.252

auto eth0.11
iface eth0.11 inet static
	address 10.46.189.5
	netmask 255.255.255.252

eth0.10 and eth0.11 appear to be separate network cards, each with a tiny network containing a single Raspberry Pi on the end, but in reality there’s just one physical ethernet connection underneath them all.
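These stanzas just create 802.1Q sub-interfaces; the equivalent done by hand with iproute2 (assuming the 8021q module is available) would be something like:

modprobe 8021q
ip link add link eth0 name eth0.10 type vlan id 10
ip addr add 10.46.189.1/30 dev eth0.10
ip link set eth0.10 up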

Network on the Raspberry Pi

On the Raspberry Pi, eth0 is already configured by the boot line above to talk to the fileserver. In our switch configuration, we specify that the private network is “untagged” on the Raspberry Pi’s port, which means packets won’t carry a VLAN header and the network is accessed as “eth0” rather than “eth0.N” as on the fileserver.

In order to do anything useful, we also need to give the Raspberry Pis access to the public network. On our network, the public network is accessible on VLAN 131. We configure this to be a “tagged” VLAN on the Raspberry Pi port, meaning it becomes accessible on the eth0.131 interface. We can configure this in the normal way, and in keeping with other back-end servers on the Raspberry Pi setup, it only has an IPv6 address:

auto eth0.131
iface eth0.131 inet6 static
	address	2a00:1098:0:84:1000:1::2
	netmask 64
	gateway	2a00:1098:0:84::1

Effectively the Raspberry Pi believes it has two network cards: one, eth0, on a private network shared with the fileserver, and one, eth0.131, which has an IPv6 address and is connected to the real internet.

Why all that configuration?

In an ideal world we’d give each Pi a single IPv6 address and mount the network filesystem over that. However, with an NFS root filesystem, another user on the LAN who managed to steal your IPv6 address could potentially access your files. There’s a second complication: IPv4 support is built into the standard Raspberry Pi kernel and the per-Pi differences are confined to the kernel command line, whereas with IPv6 we’d have to build an initrd to load the IPv6 modules and set up the NFS mounts.

Planning for the future, we’ve spoken to Gordon about how PXE boot on the Raspberry Pi will work, and it’s extremely likely that it’s going to require IPv4 to pull in the bootloader, kernel and initrd. Whilst there is native IPv6 in the Raspberry Pi office, there isn’t any IPv6 on their test LAN for developing the boot code, and it’s currently not a major priority for them, despite around 5% of the UK having native IPv6.

So if we want to make this commercial, each Pi needs its own storage network and it needs IPv4 on the storage network.

Power over Ethernet

We’ve added a Power over Ethernet HAT to our Raspberry Pis. This means that they receive power over the ethernet cable in addition to the two separate networks. As well as reducing the amount of space used by power bricks, it also means you can power cycle a Raspberry Pi just by re-configuring the switch.

Software

Each Raspberry Pi runs Raspbian with Apache2 installed. We’ve pulled in PHP 7 from Debian Stretch to improve PHP performance, then copied all the files for the Raspberry Pi website onto the NFS root of each Raspberry Pi (so the fileserver effectively holds eight copies – one per Pi). We then added the IPv6 addresses of the Raspberry Pis to the site’s load balancer, deleted the addresses of the main x86 servers and waited for everything to explode.
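The copy itself is nothing clever; on the fileserver it amounts to something like this (paths illustrative, not our real layout):

# Push the current site into each Pi's NFS root
for root in /export/10.46.189.*; do
    rsync -a /srv/raspberrypi-site/ "$root"/var/www/html/
done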

Did it work?

Slightly to our surprise, yes, and well. We had a couple of issues – the Pi is much slower than the x86 servers, not only in clock speed but also in the speed of the network card used to reach the filesystem and the database server. Some rarely used functions, such as registering a new Raspberry Jam, weren’t really quick enough under the new setup and gave some people error pages as the connections timed out. Uploading images for new WordPress posts was similarly an issue, as receiving a 3MB file and distributing eight copies of it over a 100Mbps network isn’t very fast. But mostly it worked.

Did power cycling the Pis via the switch work?

We never had to use it in production: every Pi remained up and stable for the whole 3.5 days the system was in use. In testing it worked fine.

Can I buy one?

Not yet. At present you can still break a Pi by destroying the flash, and the enclosure doesn’t allow for replacement without taking the whole shelf (which in production would contain 96 Pis) offline. Once we have full netboot for the Pi, it is a service we could offer.

Can I register my interest to buy a Pi in the cloud?

Sure – email us at sales@mythic-beasts.com and we’ll add you to a list to keep you up to date.

Hosting the Raspberry Pi 3 launch, on a Raspberry Pi 3

February 29th, 2016 by

Four years ago we sat on the phone while Eben Upton pushed the button to launch his educational computer, the Raspberry Pi, and we joined them on a fairly remarkable journey.  “How do you sell and ship 10,000 Raspberry Pis?” turned into “how do you sell and ship 5,000,000 Raspberry Pis?” and “how do you contain the excitement of the internet when you put a computer on the front of a magazine?”

Today, we’re nervously watching all the server graphs as the new Raspberry Pi 3 launches and goes on sale. We’ve had one to play with for a while so we did what we do with any new shiny toy: benchmark it in a real world application.

Our Raspberry Pi 3 next to a Raspberry Pi 2 serving requests for the Raspberry Pi 3 launch.

Our favourite application is rendering WordPress pages for the Raspberry Pi website, so we set up a testbed: a Pi 2 and a Pi 3 versus the virtual machines that run the blog. We picked a typical page and tried them out. Initial results weren’t promising – just one-fifth of the speed of the production VMs. The VMs have the advantage of being on the same physical server as the VM that hosts the database.

Moving the Pis to the same switch as the database server and upgrading from PHP 5.6 to PHP 7 brought Pi 3 page rendering times to less than twice those of the production servers:

Server               Spec                      Seconds per page
Blog VM (PHP 5.6)    24 x 2.4GHz Ivy Bridge    0.4
Pi 2 (PHP 7)         4 x 0.9GHz A7             0.9
Pi 3 (PHP 7)         4 x 1.2GHz A?             0.7

That’s fast enough to be usable. Parallelising requests across all cores, we can probably sustain about 4 hits/second from the Pi 2, 6 hits/second from the Pi 3 and around 50 hits/second for the main site.
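If you want to reproduce this sort of figure, ApacheBench reports requests per second directly; a quick sketch against an uncached page (URL illustrative), with concurrency matched to the four cores:

ab -n 200 -c 4 http://www.example.org/blog/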

These figures are for uncached pages.  As we’ve seen in the past, 50 hits/second isn’t even close to enough to cope with launch day traffic.  In reality, the vast majority of pages we serve are cached and both Pis can adequately serve 100Mbps of cached pages (versus 4Gbps for the main host).

So we’ve done what any sensible real-world test would do: we’ve put them into the main hosting mix. If you read the headers, you’ll see on some requests:

  HTTP/1.1 200 OK
...
  X-Served-By: Raspberry Pi 3
...

indicating your page request came off a Raspberry Pi 3.
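You can check from the command line which server handled a particular request (only a fraction of requests hit a Pi, so it may take a few attempts):

curl -sI https://www.raspberrypi.org/ | grep -i x-served-by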

We’re aiming to serve about 1 in 12 requests from a Pi 2 or a Pi 3, but may adjust this up or down to keep the Pis in action without melting under the load.

How’s it done?

The backend for the Raspberry Pi site is built from virtual machines. One VM runs the database, and a pair of VMs generate pages for the main WordPress-based website. One of the pair is designated as primary and also runs the admin backend for WordPress; it synchronises files to the other VM and now, additionally, to both of the Raspberry Pis. All the backend servers sit on a pure IPv6 network. We have a cluster of front-end servers that are dual stack and load balance requests through to the IPv6-only backends.

If you have IPv6 you can see the status of the two Pis here:

stats.pi2.raspberrypi.org
stats.pi3.raspberrypi.org

If you don’t have IPv6, complain to your ISP, then set up a tunnel at he.net.

The two Pis can tweet directly as @hostingpi3 and @hostingpi2. Sadly, Twitter doesn’t support IPv6, so traffic goes via our NAT64 service, which provides outbound connectivity for IPv6-only servers to legacy parts of the internet.

Decimal points are important

January 5th, 2016 by

Ben at Raspberry Pi wanted to use his new vanity domain, rpf.io, as a URL shortener rather than using one of the common big services. The easy solution was to use an existing service on a paid account, which would give us analytics and tracking. However, demonstrating the age-old principle of “if you have to ask, you can’t afford it”, his email reads…

$695/month for a .htaccess file

We like open source software, so instead of paying enough money to rent quite a nice car for a trivial .htaccess file, we chose to install YOURLS on a little IPv6-only virtual machine behind our NAT64 and IPv6 Proxy services.
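For reference, a sketch of the kind of .htaccess rule involved (redirect target made up) really is this small:

RewriteEngine On
RewriteRule ^mythic$ https://www.mythic-beasts.com/ [R=301,L]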

We’ve done some benchmarking: out of the box we could sustain 500 hits/second, and adding php-apc boosted this to well over 2,000 hits/second, which should be enough even if Liz Upton gets very excited with the Raspberry Pi Twitter account.

You can test out the service at http://rpf.io/mythic before we start making these links public.

IPv4 is so last century

November 11th, 2015 by

A scary beast that lives in the Fens.

Fenrir is the latest addition to the Mythic Beasts family. It’s a virtual machine in our Cambridge data centre which runs our blog. What’s interesting about it is that it has no IPv4 connectivity.

eth0 Link encap:Ethernet HWaddr 52:54:00:39:67:12
     inet6 addr: 2a00:1098:0:82:1000:0:39:6712/64 Scope:Global
     inet6 addr: fe80::5054:ff:fe39:6712/64 Scope:Link
     UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

It is fronted by our Reverse Proxy service – any connection over IPv4 or IPv6 arrives at one of our proxy servers and is forwarded on over IPv6 to fenrir which generates and serves the page. If it needs to make an outbound connection to another server (e.g. to embed our Tweets) it uses our NAT64 service which proxies the traffic for it.

All of our standard management services are running: graphing, SMS monitoring, nightly backups, security patches, and the firewall configuration is simpler because we only need to write a v6 configuration. In addition, we don’t have to devote an expensive IPv4 address to the VM, slightly reducing our marketing budget.

For any of our own services, IPv6 only is the new default. Our staff members have to make a justification if they want to use one of our IPv4 addresses for a service we’re building. We now also need to see how many addresses we can reclaim from existing servers by moving to IPv6 + Proxy.

IPv6 Graphing

October 15th, 2015 by

it’s a server graph!

One of the outstanding tasks for full IPv6 support within Mythic Beasts was to make our graphing server support IPv6-only hosts. In theory this is trivial; in practice it required a bit more work.

Our graphing service uses munin, and we built it on munin 1.4 nearly five years ago; we scripted all the configuration and it has basically run itself ever since. When we added our first IPv6 only server it didn’t automatically get configured with graphs. On investigation we discovered that munin 1.4 just didn’t support IPv6 at all, so the first step was to build a new munin server based on Debian Jessie with munin 2.0.

Our code generates the configuration file by printing a line for each server to be monitored, including its IP address. For IPv4 you print the address as normal, e.g. 127.0.0.1; for IPv6 you have to enclose the address in square brackets, e.g. [2a00:1098:0:82:1000:0:1:1]. A small patch later to spot which type of address is which, and we have a valid configuration file.
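The resulting entries in munin.conf look like this (hostnames are examples):

[vm-v4.example.com]
    address 127.0.0.1

[vm-v6.example.com]
    address [2a00:1098:0:82:1000:0:1:1]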

Lastly we needed to add the IPv6 address of our munin server into the configuration file of all the servers that might be talked to over IPv6. Once this was done, as if by magic, thousands of graphs appeared.

IPv4 to IPv6 Reverse Proxy & Load Balancer

October 5th, 2015 by

IPv6-only in the cloud just became possible

We have been offering IPv6-only Virtual Servers for some time, but until now they’ve been of limited use for public-facing services as most users don’t yet have a working IPv6 connection.

Our new, free IPv4 to IPv6 Reverse Proxy service provides a shared front-end server with an IPv4 address that will proxy requests through to your IPv6-only server. The service will proxy both HTTP and HTTPS requests.  For HTTPS, we use the SNI extension so the proxy can direct the traffic without needing to decrypt it. This means that the proxy does not need access to your SSL keys, and the connection remains end-to-end encrypted between the user’s browser and your server.

The service allows you to specify multiple backend servers, so if you have more than one server with us, it will load balance across them.

The IPv4 to IPv6 Reverse Proxy can be configured through our customer control panel. Front ends can be configured for hostnames within domains that are registered with us, or for which we provide DNS.

UK IPv6 Council Forum, 2nd Annual Meeting

September 24th, 2015 by

September 2015: 2.5% of the UK has native IPv6

Yesterday was the second meeting of the UK IPv6 Council. Eleven months ago, Mythic Beasts went along to hear what the leading UK networks were doing about IPv6 migration; mostly they had plans and trials. However, the council is clearly useful: last year Nick Chettle from Sky promised that Sky would be enabling IPv6 in 2015. His colleague, Ian Dickinson, gave a follow-up talk yesterday, and in the past two months UK IPv6 usage has grown from 0.2% to 2.6%. We think somebody had to enable IPv6 to make his graph look good for today’s presentation…

In the meantime, Mythic Beasts has made some progress beyond having an IPv6 website, email and our popular IPv6 Health Check. Here’s what we’ve achieved, and not achieved in the last twelve months.

Customer Facing Successes

  • IPv6 support for our control panel.
  • IPv6 support for our customer wiki.
  • Offered IPv6-only hosting services that customers have actually bought.
  • Added NAT64 for hosted customers to access other IPv4-only services.
  • Added multiple downstream networks, some of which are IPv6-only.
  • Raspberry Pi has a large IPv6-only internal network – 34 real and virtual servers but only 15 IPv4 addresses – and integrations with other parts of their ecosystem (e.g. Raspbian) are also IPv6.
  • IPv6 for all DNS servers, authoritative and resolvers.
  • IPv6 for our single sign-on authentication service (this was one of the hardest bits).
  • Our SMS monitoring fully supports IPv6-only servers (this was quite important).
  • Our backup service fully supports IPv6-only servers (this was very important).
  • Direct Debits work over IPv6, thanks to GoCardless.

Internal Successes

  • IPv6 on our own internal wiki, MRTG and IRC channels.
  • Full IPv6 support for connectivity to and out of our gateway server.
  • IPv6 rate limiting to prevent outbound spam being relayed.
  • Everything works from IPv6 with NAT64.

Prototypes

  • Shared IPv4/IPv6 load balancer for providing v4 connectivity to v6 only hosted services.

Still to do

  • Our card payment gateway doesn’t support IPv6.
  • Our graphing service doesn’t yet support IPv6-only servers (edit – implemented 15th October 2015).
  • Automatic configuration for an IPv6-only primary DNS server which slaves to our secondary DNS service.
  • Billing for IPv6 traffic.
  • One shared hosting service still has incomplete IPv6 support. One shared hosting service has optional instead of mandatory IPv6 support.
  • Automatic IPv6 provisioning for existing server customers.
  • Make sure everything works from IPv6 with no NAT64.

Waiting on others

  • The management interfaces for our DNS wholesalers don’t support IPv6.
  • Nor our SSL certificate providers.
  • Nor our SMS providers.
  • Nor our card payment gateway.