IPv4 is so last century

November 11th, 2015 by
A scary beast that lives in the Fens.

Fenrir is the latest addition to the Mythic Beasts family. It’s a virtual machine in our Cambridge data centre which runs our blog. What’s interesting about it is that it has no IPv4 connectivity.

eth0 Link encap:Ethernet HWaddr 52:54:00:39:67:12
     inet6 addr: 2a00:1098:0:82:1000:0:39:6712/64 Scope:Global
     inet6 addr: fe80::5054:ff:fe39:6712/64 Scope:Link

It is fronted by our Reverse Proxy service – any connection over IPv4 or IPv6 arrives at one of our proxy servers and is forwarded on over IPv6 to fenrir, which generates and serves the page. If it needs to make an outbound connection to another server (e.g. to embed our Tweets), it uses our NAT64 service, which proxies the traffic for it.
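As a rough illustration of what the front end does – a hypothetical sketch, not our actual proxy configuration, with the hostname assumed and the backend address taken from the interface output above – an nginx server listening on both protocols and forwarding over IPv6 might look like:

```nginx
# Hypothetical sketch: a dual-stack front end forwarding HTTP over IPv6
# to the v6-only backend. Not the actual Mythic Beasts proxy config.
server {
    listen 80;          # IPv4
    listen [::]:80;     # IPv6
    server_name blog.mythic-beasts.com;   # assumed hostname, for illustration

    location / {
        # fenrir's global address, from the interface output above
        proxy_pass http://[2a00:1098:0:82:1000:0:39:6712]:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```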

All of our standard management services are running: graphing, SMS monitoring, nightly backups, security patches, and the firewall configuration is simpler because we only need to write a v6 configuration. In addition, we don’t have to devote an expensive IPv4 address to the VM, slightly reducing our marketing budget.

For any of our own services, IPv6-only is the new default. Our staff have to justify using one of our IPv4 addresses for any service we’re building. We now also need to see how many addresses we can reclaim from existing servers by moving to IPv6 + Proxy.

Rebuilding Software RAID 1 refused to boot

October 30th, 2015 by

Dear LazyWeb,

Yesterday we did a routine disk replacement on a machine with software RAID. It has two mirrored disks, sda and sdb, with a software RAID 1 array, /dev/md1, mirrored across /dev/sda3 and /dev/sdb3. We took the machine offline and replaced /dev/sda. In netboot recovery mode we set up the partition table on /dev/sda, then set the array off rebuilding as normal:

mdadm --manage /dev/md1 --add /dev/sda3

This was expected to take around three hours to complete, so we told the machine to boot up normally and rebuild in the background while operational. This failed – during bootup in the initrd, the kernel (Debian 3.16) was bringing up the array with /dev/sda3 but not /dev/sdb3, claiming it didn’t have enough disks to start the array, and refusing to boot.

Within the initrd if I did:

mdadm --assemble /dev/md1 /dev/sda3 /dev/sdb3

the array refused to start, claiming that it didn’t have sufficient disks to bring itself online, but if I did:

mdadm --assemble /dev/md1 /dev/sdb3
mdadm --manage /dev/md1 --add /dev/sda3

within the initrd it would bring up the array and start it rebuilding.

Our netboot recovery environment (same kernel) meanwhile correctly identifies both disks, and leaves the array rebuilding happily.

To solve it we left the machine rebuilding in network recovery mode until the array was fully redundant, at which point it booted without issue. This wasn’t a problem for us – the machine is a member of a cluster, so downtime didn’t matter – but in general it’s supposed to work better than that.
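That workaround – leave the array rebuilding and only boot once it’s redundant – can be automated by polling /proc/mdstat. A minimal sketch (hypothetical, not our actual tooling):

```shell
#!/bin/sh
# Hypothetical sketch: wait in the recovery environment until the md array
# has finished rebuilding, after which it is safe to reboot normally.

mdstat_rebuilding() {
    # exit 0 if the given mdstat text still shows a recovery/resync in progress
    printf '%s\n' "$1" | grep -q -e 'recovery' -e 'resync'
}

wait_for_sync() {
    while mdstat_rebuilding "$(cat /proc/mdstat)"; do
        sleep 60    # a full rebuild takes hours; poll once a minute
    done
}
```

Running `wait_for_sync && reboot` in the netboot environment would have done by script exactly what we did by hand.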

It’s the first time we’ve ever seen this happen and we’re short on suggestions as to why – we’ve done hundreds of software RAID1 disk swaps before and never seen this issue.

Answers or suggestions in an email or tweet.

If you put your mind to it

October 21st, 2015 by

With today being Back To The Future day, it’s worth reflecting on two pieces of advice I received in the mid 1980s. The best piece of advice was definitely from the film:

‘If you put your mind to it, you can accomplish anything.’

The worst was from my mother:

‘Stop playing on your Spectrum and go and do your piano practice’

I’m not certain this generalises; I think that, by and large, your parents do give better advice than Hollywood scriptwriters.

IPv6 Graphing

October 15th, 2015 by
it's a server graph!

One of the outstanding tasks for full IPv6 support within Mythic Beasts was to make our graphing server support IPv6-only hosts. In theory this is trivial; in practice it required a bit more work.

Our graphing service uses munin, and we built it on munin 1.4 nearly five years ago; we scripted all the configuration and it has basically run itself ever since. When we added our first IPv6-only server, it didn’t automatically get configured with graphs. On investigation we discovered that munin 1.4 just didn’t support IPv6 at all, so the first step was to build a new munin server based on Debian Jessie with munin 2.0.

Our code generates the configuration file by printing a line for each server to monitor, including its IP address. For IPv4 you print the address as normal; for IPv6 you have to enclose the address in square brackets: [2a00:1098:0:82:1000:0:1:1]. A small patch later to spot which type of address is which, and we had a valid configuration file.
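The address-formatting part of that patch amounts to very little. A hypothetical sketch (our real generator is larger and not shown here):

```shell
#!/bin/sh
# Hypothetical sketch: format a host address for a munin.conf "address" line.
# munin 2.0 expects IPv6 literals wrapped in square brackets.

munin_address() {
    case "$1" in
        *:*) printf '[%s]' "$1" ;;   # contains a colon: IPv6 literal
        *)   printf '%s' "$1" ;;     # IPv4 address or hostname, pass through
    esac
}
```

So `munin_address 2a00:1098:0:82:1000:0:1:1` prints `[2a00:1098:0:82:1000:0:1:1]`, while IPv4 addresses and hostnames pass through unchanged.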

Lastly we needed to add the IPv6 address of our munin server into the configuration file of all the servers that might be talked to over IPv6. Once this was done, as if by magic, thousands of graphs appeared.

Professor Cathie Clarke, Ada Lovelace day

October 13th, 2015 by

Unless you really like maths, you’re probably better off just looking at the pictures.

Today is Ada Lovelace day, where we celebrate the achievements of women in the traditionally male dominated fields of Science, Technology, Engineering and Maths.

Mythic Beasts came about as a side project of a bunch of students, most of whom studied at Clare College, Cambridge. As a Cambridge student you receive supervisions – hour-long tutorials in sets of two or three. You also live in the college for three years, and some fellows of the college have rooms in the same accommodation. As luck would have it, our director Pete was partnered with another of our founders, Richard, for supervisions in Physics. In the second year they were jointly supervised by Professor Cathie Clarke, and it turned out that her college room was directly opposite Pete’s in Clare Old Court.

This led to a slightly unusual arrangement: rather than everyone trekking into the department for supervisions, they decided to hold them in Pete’s room at 8am, usually accompanied by a bacon sandwich and strong black coffee before heading off to lectures. Cathie was a superb teacher, neatly covering dynamics and orbits, and effortlessly showing why practically the whole of spaceflight involves pointing your rocket motor in obviously the wrong direction in order to get to where you want to go. She also neatly answered other questions, including one on electromagnetism that had stumped pretty much the whole year in Clare and all the supervisors in that subject, despite having had about sixty seconds’ notice and only half a cup of coffee.

Pete’s room was in the top right-hand corner of this photograph, the bacon-cooking kitchen roughly above the passageway.

There was one particularly memorable supervision, when Richard overslept and arrived very late at Pete’s room, anxious that he’d missed the supervision. On waking Pete up, they jointly discovered the note on the door from Cathie apologising that she’d overslept and that the supervision would need to be rescheduled. A coincidence for which all parties were grateful.

So whilst her impressive CV gives away the huge publication list, the professorship, and that she’s the course coordinator for astrophysics, in person we had the privilege of knowing that she is also a superb prize-winning teacher, a gifted researcher and, somehow on top of all that, a lovely human being who occasionally oversleeps just like the rest of us.

IPv4 to IPv6 Reverse Proxy & Load Balancer

October 5th, 2015 by

IPv6-only in the cloud just became possible

We have been offering IPv6-only Virtual Servers for some time, but until now they’ve been of limited use for public-facing services as most users don’t yet have a working IPv6 connection.

Our new, free IPv4 to IPv6 Reverse Proxy service provides a shared front-end server with an IPv4 address that will proxy requests through to your IPv6-only server. The service proxies both HTTP and HTTPS requests. For HTTPS, we use the SNI extension so the proxy can direct the traffic without needing to decrypt it. This means that the proxy does not need access to your SSL keys, and the connection remains end-to-end encrypted between the user’s browser and your server.

The service allows you to specify multiple backend servers, so if you have more than one server with us, it will load balance across them.
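By way of illustration, SNI-based passthrough with load balancing can be expressed in a few lines of haproxy configuration. This is a hypothetical sketch with assumed names and addresses, not our production setup:

```haproxy
# Hypothetical sketch: TCP-mode TLS routing by SNI, with no decryption,
# load-balanced across two IPv6-only backends. Not the production config.
frontend https-in
    mode tcp
    bind :443
    bind :::443
    # wait for the TLS ClientHello so the SNI field can be read
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend v6_site if { req_ssl_sni -i www.example.com }

backend v6_site
    mode tcp
    balance roundrobin
    server v6a [2001:db8::1]:443 check
    server v6b [2001:db8::2]:443 check
```

Because the proxy only inspects the unencrypted ClientHello, the TLS session itself terminates on the backend, which is why no keys are needed on the front end.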

The IPv4 to IPv6 Reverse Proxy can be configured through our customer control panel. Front ends can be configured for hostnames within domains that are registered with us, or for which we provide DNS.

iOS 9 and SSL

September 28th, 2015 by
We're still installing iOS9 for testing reasons onto this Apple Device

tl;dr iOS9 applications only work with the newest SHA-256 certificates. If your iOS9 application or website is showing certificate errors and you’d like some help, contact support@mythic-beasts.com

iOS9 was recently released, bringing a number of changes. In addition to the widely publicised changes around IPv6 (iOS9 prefers IPv6, and all applications in the App Store must function without issue on an IPv6-only network), Apple has forced the obsolescence of older types of SSL certificate.

SSL certificates use hashing functions to provide security. The Secure Hash Algorithm 1 (SHA-1) was designed by the NSA and published in 1995 as the standard for secure authentication. The first theoretical attacks were shown in 2005, leading to a recommendation in 2010 that we abandon SHA-1 and move to SHA-256. In 2014 Google set a sunset date for SHA-1 of December 2016 – if your website uses an SHA-1 certificate past this date, Chrome refuses to regard your site as secure.

With iOS9, Apple pulled forward the date at which everyday software stops working with SHA-1. If your website or application is secured with a SHA-1 certificate, iOS9 gives warnings and errors. The fix is easy: we can provide or re-issue your existing certificate with an iOS9-compatible – and, more importantly, more secure – SHA-256 certificate.
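To check which algorithm an existing certificate uses, one approach – a sketch assuming a PEM certificate file and a stock openssl install – is:

```shell
#!/bin/sh
# Hypothetical sketch: report the signature hash algorithm of a certificate.
# "sha1WithRSAEncryption" means it needs replacing for iOS9;
# "sha256WithRSAEncryption" is fine.

cert_sig_algorithm() {
    openssl x509 -in "$1" -noout -text \
        | awk '/Signature Algorithm/ { print $3; exit }'
}
```

For a live site, the certificate can first be fetched with `openssl s_client -connect host:443 -servername host` and piped into the same `openssl x509` check.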

UK IPv6 Council Forum, 2nd Annual Meeting

September 24th, 2015 by
2.5% of the UK had native IPv6 enabled by September 2015

Yesterday was the second meeting of the UK IPv6 Council. Eleven months ago Mythic Beasts went along to hear what the leading UK networks were doing about IPv6 migration; mostly they had plans and trials. However, the council is clearly useful: last year Nick Chettle from Sky promised that Sky would be enabling IPv6 in 2015. His colleague Ian Dickinson gave a follow-up talk yesterday, and in the past two months UK IPv6 usage has grown from 0.2% to 2.6%. We think somebody had to enable IPv6 to make his graph look good for today’s presentation…

In the meantime, Mythic Beasts has made some progress beyond having an IPv6 website, email and our popular IPv6 Health Check. Here’s what we’ve achieved, and not achieved, in the last twelve months.

Customer Facing Successes

  • IPv6 support for our control panel.
  • IPv6 support for our customer wiki.
  • Offered IPv6-only hosting services that customers have actually bought.
  • Added NAT64 for hosted customers to access other IPv4 only services.
  • Added multiple downstream networks, some of which are IPv6 only.
  • Raspberry Pi has a large IPv6 only internal network – 34 real and virtual servers but only 15 IPv4 addresses, and integrations with other parts of their ecosystem (e.g. Raspbian) are also IPv6.
  • IPv6 for all DNS servers, authoritative and resolvers.
  • IPv6 for our single sign on authentication service (this was one of the hardest bits).
  • Our SMS monitoring fully supports IPv6 only servers (this was quite important).
  • Our backup service fully supports IPv6 only servers (this was very important).
  • Direct Debits work over IPv6, thanks to GoCardless.

Internal Successes

  • IPv6 on our own internal wiki, MRTG and IRC channels.
  • Full IPv6 support for connectivity into and out of our gateway server.
  • IPv6 rate limiting to prevent outbound spam being relayed.
  • Everything works from IPv6 with NAT64.

  • Shared IPv4/IPv6 load balancer for providing v4 connectivity to v6 only hosted services.

Still to do

  • Our card payment gateway doesn’t support IPv6.
  • Our graphing service doesn’t yet support IPv6-only servers (edit – implemented 15th October 2015).
  • Automatic configuration for an IPv6-only primary DNS server which slaves to our secondary DNS service.
  • Billing for IPv6 traffic.
  • One shared hosting service still has incomplete IPv6 support. One shared hosting service has optional instead of mandatory IPv6 support.
  • Automatic IPv6 provisioning for existing server customers.
  • Make sure everything works from IPv6 with no NAT64.

Waiting on others

  • The management interfaces for our DNS wholesalers don’t support IPv6.
  • Nor our SSL certificate providers.
  • Nor our SMS providers.
  • Nor our card payment gateway.

Selling hardware into the cloud

September 22nd, 2015 by

A Cambridge start-up approached us with an interesting problem. In this age of virtualisation, they have a new and important service, but one which can’t be virtualised as it relies on trusted hardware. They know other companies will want to use their service from within their private networks within the big cloud providers, but they can’t co-locate their hardware within Amazon or Azure.

This picture is a slight over-simplification of the process

The interesting thing here is that the solution is simple. It is possible to link directly into Amazon via Direct Connect and to Azure via Express Route. To use Direct Connect or Express Route within the UK you need to have a telco circuit terminating in a Telecity data centre, or to physically colocate your servers. As many of you will know, Mythic Beasts are physically present in three such data centres, the most important of which is Telecity Sovereign House, the main UK point of presence for both Amazon and Microsoft.

So our discussion here is nice and straightforward. Our future customer can co-locate their prototype service with Mythic Beasts in our Telecity site in Docklands. They can then connect to Express Route and Direct Connect over dedicated fibre within the datacentre when they’re ready to take on customers. Their customers then have to set up a VPC peering connection and the service is ready to use. This is dedicated, specialised hardware accessible from the inside of ‘the cloud’, and it’s something we can offer to all manner of companies, start-up or not, from any dedicated or colocated service. You only need ask.

Ethernet Speeds: expect 2.5Gbps on copper, 25Gbps on fibre

September 18th, 2015 by

Recently we went to UKNOF where Alcatel Lucent gave a helpful presentation on new ethernet speeds.

Currently most network connectivity is 1Gbps ethernet over Cat5e copper, which stretches up to 100m. For higher speeds there is an infrequently used standard for 10Gbps over Cat6 copper at up to 55m.

Now demand is starting to appear for faster than 1Gbps speeds, and it’s very attractive to do this without replacing the installed base of Cat5e and Cat6 cabling. There are new standards in the pipeline for 2.5Gbps and 5Gbps ethernet over Cat5e/Cat6 cabling.

In the data centre it’s common to have 10Gbps over SFP+ direct attach for short interconnects (up to 10m) and 1Gbps/10Gbps/40Gbps/100Gbps over fibre for longer distances. 1Gbps and 10Gbps are widely adopted. 40Gbps and 100Gbps are a different design, implemented by combining multiple lanes of traffic at 10Gbps to act as a single link. 100Gbps has changed to be 4 lanes at 25Gbps rather than 10 at 10Gbps.

The more lanes you have in use, the more switches and switching chips you need – effectively this means that 40Gbps has the same cost in port count as 100Gbps. The next generation of 100Gbps switching hardware will consist of a large number of lanes that run at either 10Gbps or 25Gbps. With current interfaces, you’d use 4 lanes for 100Gbps, 4 lanes for 40Gbps or 1 lane for 10Gbps. The obvious gap is a single-lane 25Gbps standard, so you can connect vastly more devices at greater-than-10Gbps speeds.

So in the near future, we’re expecting to see 2.5Gbps and 25Gbps ethernet becoming available, and in the longer term work has now started on 400Gbps standards.