IPv4 to IPv6 Proxy API

April 21st, 2023

We’ve been offering IPv6-only hosting for eight years now, and have demonstrated that many websites can forgo the expense of an IPv4 address pretty easily. You can read more about how we do this in this blog post from 2020. This blog post itself is being served from an IPv6-only server!

A key part of this is our IPv4-to-IPv6 proxy. This listens for incoming traffic on a shared IPv4 address and forwards it to your IPv6-only server. In order to use the proxy, you need to tell it which hostnames to listen for, and which server or servers to forward traffic to. This can be done using our control panel, and as of today, it can also be done via an API.

Having an API for proxy configuration makes it possible to automatically add or remove backend servers, allowing you to spin up additional servers, or take servers out of service for failover or maintenance.

The API can also be used to add and remove the hostnames handled by the proxy, allowing you to automate the provisioning of new services.

Fine-grained access controls

As with our DNS API and Domain API, the Proxy API provides fine-grained access control for API keys. For example, you can create an API key that only has access to a specified domain or hostname, or a read-only API key if you only need to read the current configuration.
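
To give a flavour of what this makes possible, here’s a minimal sketch in Python. The endpoint path, authentication header and JSON fields below are illustrative assumptions, not the documented interface (see the Proxy API documentation on our support pages for the real details):

    import requests

    # Hypothetical endpoint and fields, for illustration only.
    BASE = "https://api.mythic-beasts.com/proxy"
    AUTH = {"Authorization": "Bearer YOUR-API-KEY"}  # key scoped to one hostname

    # Read the current proxy configuration (a read-only key is enough).
    config = requests.get(f"{BASE}/www.example.com", headers=AUTH)
    config.raise_for_status()
    print(config.json())

    # Add a backend server, e.g. after spinning up extra capacity.
    added = requests.post(
        f"{BASE}/www.example.com/backends",
        headers=AUTH,
        json={"backend": "2001:db8::1", "port": 443},
    )
    added.raise_for_status()

The corresponding DELETE request would take a backend out of service again for failover or maintenance.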

Getting started

Our IPv4-to-IPv6 proxy is available to all customers with a Mythic Beasts server, including virtual servers, Raspberry Pi servers, dedicated servers and colocation. You can find more information on the proxy service and the Proxy API on our support pages.

MagPi magazine: how to host a website on a Raspberry Pi

October 9th, 2020

The MagPi Magazine has published a new article on how to set up a web server using a Raspberry Pi hosted in our Pi Cloud.

The article walks through all the steps necessary from ordering a server on our website to getting WordPress installed and running.

It’s also a great demonstration of how easy it is to host a website on an IPv6-only server such as those in our Pi Cloud. In fact, it’s so easy that the article doesn’t even mention that the Pi doesn’t have a public IPv4 address. An SSH port-forward on our gateway server provides IPv4 access for remote administration, and our v4 to v6 proxy relays incoming HTTP requests from those still using a legacy internet connection.

You can read the article on the MagPi site or order a server to try it out yourself.

We have Pi 3 and Pi 4 servers available now, and the option of per-second billing means you can try this without any ongoing commitment.

IPv4/IPv6 transit in HE Fremont 2

September 18th, 2020

Back in 2018, we acquired BHost, a virtual hosting provider with a presence in the UK, the Netherlands and the US. Since the acquisition, we’ve been working steadily to upgrade the US site from a single transit provider with incomplete IPv6 networking and a mixture of container-based and full virtualisation to what we have now:

  • Dual redundant routers
  • Two upstream network providers (HE.net, CenturyLink)
  • A presence on two internet exchanges (FCIX/SFMIX)
  • Full IPv6 routing
  • All customers on our own KVM-based virtualisation platform

With these improvements to our network, we’re now able to offer IPv4 and IPv6 transit connectivity to other customers in Hurricane Electric’s Fremont 2 data centre. We believe that standard services should have a standard price list, so here’s ours:

Transit Price List

Prices start at £60/month on a one month rolling contract, with discounts for longer commits. You can order online by hitting the big green button, we’ll send you a cross-connect location within one working day, and we’ll have your session up within one working day of the cross connect being completed. If we don’t hit this timescale, your first month is free.

We believe that ordering something as simple as IP transit should be this straightforward, but it seems that it’s not the norm. Here’s what it took for us to get our second 10G transit link in place:

  • 24th April – Contact sales representative recommended by another ISP.
  • 1st May – Contact different sales representative recommended by UKNOF as one of their sponsors.
  • 7th May – 1 hour video conference to discuss our requirements (a 10Gbps link).
  • 4th June – Chase for a formal quote.
  • 10th June – Provide additional details required for a formal quote.
  • 10th June – Receive quote.
  • 1st July – Clarify further details on quote, including commit.
  • 2nd July – Approve quote, place order by email.
  • 6th July – Answer clarifications, push for contract.
  • 7th July – Quote cancelled. Provider realises that Fremont is in the US and they have sent EU pricing. Receive and accept higher revised quote.
  • 10th July – Receive contract.
  • 14th July – Return signed contract. Ask for cross connect location.
  • 15th July – Reconfirm the delivery details from the signed contract.
  • 16th July – Send network plan details for setting up the network.
  • 27th July – Send IP space justification form. They remind us to provision a cross connect, we ask for details again.
  • 6th August – Chase for cross connect location.
  • 7th August – Delivery manager allocated who will process our order.
  • 11th August – Ask for a cross connect location.
  • 20th August – Ask for a cross connect location.
  • 21st August – Circuit is declared complete within the 35-working-day setup period. Billing for the circuit starts.
  • 26th August – Receive a Letter Of Authorisation allowing us to arrange the cross connect. We immediately place order for cross connect.
  • 26th August – Data centre is unable to fulfil cross connect order because the cross connect location is already in use.
  • 28th August – Provide contact at data centre for our new provider to work out why this port is already in use.
  • 1st September – Receive holding mail confirming they’re working on sorting our cross connect issue.
  • 2nd September – Receive invoice for August + September. Refuse to pay it.
  • 3rd September – Cross connect location resolved, circuit plugged in, service starts functioning.

Shortly after this we put our order form live and improved our implementation. We received our first order on 9th September and provisioned it a few days later. Our third transit customer is up and live: order form to fully working was just under twelve hours, comfortably within our promise of two working days.

Raspberry Pi 4 now available in our Pi Cloud

June 17th, 2020
Pi 4 with PoE HAT

Our Pi 4 servers all wear the Power over Ethernet HAT to provide power and cooling to the CPU.

We’re now offering these in our Raspberry Pi Cloud starting from £7.50/month or 1.2p/hour.

Since the release of the Raspberry Pi 4 last year, it’s been an obvious addition to our Raspberry Pi cloud, but it’s taken us a little while to make it happen. Our Raspberry Pi Cloud relies on network boot in order to ensure that customers can’t brick or compromise servers and, at launch, the Pi 4 wasn’t able to network boot. We now have a stable replacement firmware with full PXE boot support.

The Pi 4 represents a significant upgrade over the Pi 3; it is over twice as fast, has four times the RAM and the network card runs at full gigabit speed. On a network-booted server this gives you much faster file access in addition to more bandwidth out to the internet. We’ve done considerable back-end work to support the Pi 4. We’ve implemented:

  • New operating system images that work on the Pi 4 for 32 bit Raspberry Pi OS and Ubuntu.
  • A significant file server upgrade for faster IO performance.
  • Support for the different PXE boot mode of the Pi 4, without impacting our Pi 3 support.

Ben Nuttall has been running some secret beta testing with his project piwheels, which builds Python packages for the Raspberry Pi. We’re grateful for his help.

Is it any good?

tl;dr – YES

We’ve historically used WordPress as a benchmarking tool, mostly because it’s representative of web applications in general and as a hosting company we manage a lot of those. So we put the Raspberry Pi 4 up against a Well Known Cloud Provider that offers ARM instances. We benchmarked against both first generation (a1) and second generation (m6g) instances.

Our test was rendering 10,000 pages from a default WordPress install at a concurrency level of 50.
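
For the curious, here’s a rough Python sketch of equivalent methodology (the URL is a placeholder, and a dedicated benchmarking tool is the more usual choice for a test like this):

    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "http://www.example.com/"   # placeholder for the WordPress test site
    TOTAL = 10_000                    # pages to render
    CONCURRENCY = 50                  # simultaneous requests

    def fetch(_):
        started = time.monotonic()
        requests.get(URL).raise_for_status()
        return time.monotonic() - started

    began = time.monotonic()
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        timings = sorted(pool.map(fetch, range(TOTAL)))
    elapsed = time.monotonic() - began

    print(f"Requests per second: {TOTAL / elapsed:.0f}")
    print(f"Mean request time: {sum(timings) / TOTAL * 1000:.0f}ms")
    print(f"99th percentile: {timings[int(TOTAL * 0.99)] * 1000:.0f}ms")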

                                  Raspberry Pi 4      a1.large            m6g.medium
    Spec                          4 cores @ 1.5GHz    2 cores             1 core
                                  4GB RAM             4GB RAM             4GB RAM
    Monthly price                 £8.63               $45.35 (~£36.09)    $34.69 (~£27.61)
    Requests per second           107                 52                  57
    Mean request time             457ms               978ms               868ms
    99th percentile request time  791ms               1247ms              1056ms

In both cases the Pi 4 is approximately twice as fast at a quarter of the price.

Notes:

  • Raspberry Pi 4 monthly price based on on-demand per-second pricing.
  • USD to GBP conversion from Google on 17th June 2020

IPv6-only hosting in 2020

February 28th, 2020

It’s now nearly five years since we started offering IPv6-only hosting, and what started out as a source of interesting projects for enthusiastic early-adopters has become our default for most hosting requirements.

A few things have changed over the years that have made this possible:

  • The death of Windows XP, the last significant OS with a browser that didn’t support SNI (Server Name Indication). SNI makes it possible for us to proxy encrypted connections to IPv6-only hosts.
  • The widespread adoption of secure services. This means that protocols that don’t have their own proxying features (such as POP3 or IMAP) can be proxied in their encrypted form thanks to SNI.
  • Improvements to our hosting services, such as our SSH port forwarder.

This post gives a quick run-down of how we make IPv6-only hosting a reality.

Getting bytes in

There’s no getting away from the fact that an IPv6-only hosting server still needs to be able to talk to IPv4-only clients, but there’s now a good solution for doing so for pretty much all common scenarios.

Web traffic

This is the most common requirement, and also probably the easiest, as it can be handled by our v4 to v6 proxy.  The proxy is a set of servers with both IPv4 and IPv6 addresses that accept traffic for various protocols and forward it to an IPv6-only server.

The DNS for the hosted site points at our proxy servers, by means of either an ANAME or CNAME record to proxy.mythic-beasts.com.

Unencrypted HTTP traffic is easy to proxy, as HTTP/1.1 is designed to support multiple websites on a single IP address.

HTTPS is also easy to proxy, thanks to the now-ubiquitous support for SNI (its successor, ESNI, may complicate this a bit in the future, but we’ll tackle that in a separate post).

Our proxy also supports PROXY protocol, which is a standard way of communicating the original client’s IP address on a proxied connection. Support for PROXY protocol is now a standard feature of NGINX and Apache.
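
Version 1 of the PROXY protocol is simply a human-readable line prepended to the TCP stream by the proxy. In practice you’d enable the built-in support in NGINX or Apache rather than handle it yourself, but a minimal Python sketch shows how little there is to it:

    # A PROXY protocol v1 header is a single CRLF-terminated ASCII line,
    # for example: PROXY TCP4 192.0.2.1 203.0.113.7 56324 443\r\n
    def parse_proxy_v1(line: bytes):
        parts = line.rstrip(b"\r\n").decode("ascii").split(" ")
        if parts[0] != "PROXY":
            raise ValueError("not a PROXY protocol v1 header")
        if parts[1] == "UNKNOWN":
            return None  # the proxy couldn't determine the original addresses
        proto, src_ip, dst_ip, src_port, dst_port = parts[1:6]
        return {
            "proto": proto,  # "TCP4" or "TCP6"
            "client": (src_ip, int(src_port)),
            "server": (dst_ip, int(dst_port)),
        }

    print(parse_proxy_v1(b"PROXY TCP4 192.0.2.1 203.0.113.7 56324 443\r\n"))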

IPv6 traffic can either follow the same route as IPv4 traffic through the proxy, or be routed directly to the hosting server by setting the AAAA records for the site to point at the server rather than the proxy.

This provides a slightly more direct route for IPv6 traffic, but can make the configuration on the server a little more complicated, particularly if you’re using PROXY protocol.

IMAP and POP3

These can both be proxied in their secure forms (IMAPS and POP3S) thanks to SNI, and thankfully these secure variants are now the default choice for all popular email clients.

SSH

Our customers typically want to administer their servers via SSH, and can’t guarantee that they’ll always be connecting from a v6-enabled network. The SSH protocol isn’t built on TLS/SSL so doesn’t have SNI support, and doesn’t have any equivalent features of its own.

We work around this by providing a port-forward to all virtual servers and Raspberry Pi servers from a host with a v4 IP address, so customers can make a connection to a different host on a non-standard port, and the connection will be forwarded to the IPv6 server on port 22. Details of the host and port can be found in our customer control panel.

SMTP

SMTP is a bit awkward. It’s used in two common scenarios:

  1. “Submission”, where an end-user client sends outgoing mail using authenticated SMTP
  2. Server-to-server delivery of email.

It has multiple ports in common use:

  • 25 – the standard port for server-to-server email
  • 465 – a port for SMTP over SSL
  • 587 – the standard SMTP submission port

Port 25 doesn’t use SSL/TLS at connection time, but can be upgraded to a secure connection via the STARTTLS command, which means it can’t be proxied using SNI.

Port 465 has a confused history, having been allocated by IANA for secure SMTP, then revoked in favour of STARTTLS and allocated to a different service, and then reinstated for secure SMTP submission by RFC 8314.  Port 465 is supported by our proxy, and is a good choice for SMTP submission.

Port 587 was historically plain SMTP (RFC 2476) with STARTTLS, but is being migrated to SSL by default (RFC 8314) which is proxyable thanks to SNI.  Our proxy assumes that port 587 traffic is encrypted (because it can’t do anything useful if it’s not) and as such can also be used for SMTP submission, provided you use SSL/TLS rather than STARTTLS.
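
For a client, using implicit TLS rather than STARTTLS is usually a one-line configuration change. As a sketch in Python (hostname and credentials are placeholders):

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "me@example.com"
    msg["To"] = "you@example.com"
    msg["Subject"] = "Sent via an IPv6-only server"
    msg.set_content("Submitted over implicit TLS on port 465.")

    # SMTP_SSL negotiates TLS at connection time (sending SNI), so the
    # proxy can route the connection to an IPv6-only mail server.
    with smtplib.SMTP_SSL("mail.example.com", 465) as smtp:
        smtp.login("me@example.com", "app-password")
        smtp.send_message(msg)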

For server-to-server delivery, it’s possible to use our dual-stack MX servers to handle incoming mail. This can be done by having the highest priority MX record point to the v6-only server, and then a lower priority record pointing to our MX servers. v4-only servers will deliver to our MX servers, and we’ll then pass the mail on to your v6-only server.
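
In zone-file terms, that setup looks something like this (both mail server names are placeholders; a lower preference value means higher priority):

    example.com.  IN MX 10 mail.example.com.      ; your IPv6-only server, tried first
    example.com.  IN MX 20 mx.mythic-beasts.com.  ; placeholder name for our dual-stack MX servers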

This isn’t a perfect solution, as it means you can’t do connection-time filtering of incoming mail.

Our MX servers need to be configured to accept mail for your domain. At present, this needs to be done by emailing support.

Getting bytes out

Your server may need to make outgoing connections to v4-only servers. Fortunately this is straightforward using our NAT64 resolvers. These are DNS resolvers that, when asked for the address of a host that has no AAAA records, will return an IPv6 address mapped to the host’s v4 address. The v6 address is actually an address on one of our NAT servers, which then forwards the traffic to the v4 address.

There’s a 1:1 mapping between v4 addresses and v6 addresses on the NAT server – with IPv6 we can easily allocate the equivalent of the full 32-bit IPv4 address space to a single server!
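
To make the mapping concrete, here’s a short Python sketch using the well-known NAT64 prefix 64:ff9b::/96 from RFC 6052 (an illustrative assumption; a deployment can equally use a prefix of its own):

    import ipaddress

    # Embed an IPv4 address in the low 32 bits of a /96 NAT64 prefix.
    def nat64_map(v4: str, prefix: str = "64:ff9b::") -> ipaddress.IPv6Address:
        prefix_bits = int(ipaddress.IPv6Address(prefix))
        return ipaddress.IPv6Address(prefix_bits | int(ipaddress.IPv4Address(v4)))

    # A v6-only client looking up a v4-only host gets back something like:
    print(nat64_map("192.0.2.1"))   # 64:ff9b::c000:201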

NAT64 works very well in almost all cases. We have come across a few bits of software which explicitly request an A record when doing a DNS lookup, which obviously doesn’t work.

As with any NAT configuration, you’re sharing a v4 address with other users, which can cause issues for sites that perform IP-based filtering or rate limiting.

Make the switch

Like most providers, we now charge for IPv4 addresses, but unlike most other providers, it’s a tax you probably don’t need to pay. We offer IPv6-only versions of all of our virtual and dedicated servers, and our Raspberry Pi servers are all IPv6-only.

Learn more

If you’d like to hear more, here are some videos of a presentation that Pete gave at the UK Network Operators Forum (UKNOF).

IPv6 updates

December 16th, 2019

Last Thursday we went to the IPv6 Council to speak about IPv6-only hosting and to exchange information with other networks about the state of IPv6 in the UK.

IPv4 address exhaustion is becoming ever more real: the USA and Europe have now run out, and Asia, Africa and Latin America all have less than a year of highly-restricted supply left.

Perhaps unsurprisingly, we’re now seeing real progress in deploying IPv6 across the board.

The major connectivity providers gave an update on their progress. Sky already have an effectively complete deployment across their UK network, so instead they told us about the Sky Italia build-out that launches early next year. It will initially be 100% dual stack, but they’re planning to migrate to single-stack IPv6, with IPv4 access provided by MAP-T, as soon as possible. BT/EE have IPv6 virtually everywhere, and take-up is rising as HomeHubs are retired and replaced with SmartHubs. Three are actively enabling IPv6 over their network, as we noticed last month.

Smaller providers are also making progress; Hyperoptic and Community Fibre have both essentially completed their dual stack rollout this year, with both organisations having to consider Network Address Translation due to lack of IPv4 addresses.

We’ve been working hard for many years to make IPv6-only hosting a practical option for our customers, allowing us to considerably expand the lifespan of our IPv4 allocation (which, thanks to a few acquisitions and being a relatively old company, is a reasonable size).

We heard from Ungleich, who started more recently and don’t have a large historical allocation of IPv4 addresses. They gave an interesting talk about their IPv6-only hosting and how it’s an urgent requirement for a new entrant, because a RIPE final allocation of 1,024 addresses isn’t enough to start a traditional hosting company. Thanks to RIPE running out last month, any new competitor has it four times harder, with only 256 addresses to get them started.

We also had interesting updates from Microsoft about their continuing journey to an IPv6-only corporate network, and the pain of continuing to support IPv4 private addressing. When they acquire a company, the acquired company’s internal networks overlap with their own, and making internal services available to the wider organisation is an ongoing, difficult challenge.

There was also a fascinating talk from SITA about providing networks and infrastructure to aviation. There is a huge amount of networking involved, and the RFC 1918 private IPv4 address space is no longer large enough to network a large airport. They have a very strong push to use IPv6, even on networks not connected to the public internet.

VMHaus services now available in Amsterdam

July 3rd, 2019

Integration can be hard work

Last year we had a busy time acquiring Retrosnub, BHost and VMHaus. We’ve been steadily making progress in the background, integrating the services the companies provide to reduce the cost and complexity of management. We can now also announce our first significant feature upgrade for VMHaus: we’ve deployed a new virtual server cluster to our Amsterdam location, and VMHaus services are now available in Amsterdam. VMHaus uses Mythic Beasts for colocation and networking, and in Amsterdam gains access to our extensive set of peers at AMSIX, LINX and LoNAP. Per-hour-billed virtual servers are available from VMHaus, with payment through PayPal.

As you’d expect, every VM comes with a /64 of IPv6 space.

In the background we’ve also been migrating former-BHost KVM-based services to Mythic Beasts VM services in Amsterdam. Shortly we’ll be starting to migrate former-BHost and VMHaus KVM-based services in London to new VM clusters in the Meridian Gate data centre.

Retrosnub Acquisition

June 4th, 2018

A Mythic Beast eating a Retrosnub (artist’s impression)

Just before Christmas we were approached by Malcolm Scott, director of Retrosnub, a small cloud hosting provider in Cambridge. His existing connectivity provider had run out of IPv4 addresses. They’d decided to deal with this issue by adding charges of £2 per IPv4 address per month to encourage existing customers to return unused IPv4 addresses to them. As a cloud hosting provider with a substantial number of virtual machines (VMs) on a small number of hosts this had the result of tripling the monthly colocation bill of Retrosnub.

Aware of my presentation on IPv6-only hosting at UKNOF, Malcolm knew that opportunities for significant expansion were severely limited due to the difficulty of obtaining large amounts of IPv4 address space. Retrosnub faced a future of bankruptcy or remaining a very niche provider. His connectivity providers seemed strongly in favour of Retrosnub going bust so they could reclaim and re-sell the IPv4 space for higher margin services.

There are no expansion opportunities for new cloud hosting providers.

As a larger provider with our own address space, we had sufficient spare capacity in our virtual machine cloud to absorb the entire customer base of Retrosnub with no additional expenditure. Our work in supporting IPv6-only virtual machines will also make it easier to significantly reduce the number of IPv4 addresses required to support Retrosnub services. We formed a deal and agreed to buy the customer base of Retrosnub.

Combining operations

Since agreeing the deal, we’ve been working hard to merge our operations with minimum disruption.

The top priority was the domain name services because domains expire if you don’t renew them. Doing a bulk transfer of domain names between registrars is something which Nominet, the body responsible for UK domains, makes extremely easy, as it just requires changing the “tag” on all the domains.

Unfortunately, just about all other TLDs follow a standard ICANN process, which requires that a domain be renewed for a year at the time of transfer, and that the owner of the domain approves the process. If you were designing a process to destroy competition in a market by making it hard for resellers to move between registrars, it would look quite like this.

We’ve now got the bulk of domains transferred, and the next steps will be to migrate the DNS records from Retrosnub to Mythic Beasts so that our control panel can be used to change the records.

At the same time, we rapidly formulated a plan to migrate all the virtual machines in, to stem the financial losses. Moving the VMs required an unavoidable change in IP address, and we also wanted to migrate them from their existing platform (Citrix XenServer with para-virtualisation) to our own platform (KVM with full hardware virtualisation).

In order to ease the transition, we arranged for a pair of servers to do IP forwarding: a server in our cloud that forwarded the new IP to the VM in the Retrosnub cloud until it was migrated in, and another in the Retrosnub cloud that forwarded the old IP after the server had been moved. By doing this we were able to give customers a one week window in which to complete their IP migration, rather than forcing it to be done at the time that we actually moved the VM.

In the process of this migration, all customers received a significant bandwidth upgrade, and the majority received disk, RAM and CPU upgrades too.

We completed this on schedule before the quarterly colocation bill arrived, so instead of paying the much increased bill, we cancelled the contract and removed the servers from the facility.

Next steps

Our next step will be to migrate all the web and email hosting customers into our standard shared hosting environment. This has some time pressure as Google have plans for Chrome to start marking all non-HTTPS websites as insecure. We offer one click HTTPS hosting using Let’s Encrypt on all of our hosting accounts.

Capacity upgrades, cheaper bandwidth and new fibre

December 8th, 2017

We don’t need these Giant Scary Laser stickers yet.

We’ve recently upgraded both of our LONAP connections to 10Gbps at our two London POPs, bringing our total external capacity to 62Gbps.

We’ve been a member of LONAP, the London Network Access Point, since we first started running our own network. LONAP is an internet exchange, mutually owned by several hundred members. Each member connects to LONAP’s switches and can arrange to exchange traffic directly with other members without passing through another internet provider. This makes our internet traffic more stable because we have more available routes, faster because our connections go direct between source and recipient with fewer hops and usually cheaper too.

Since we joined, both we and LONAP have grown. Initially we had two 1Gbps connections, one in each of our two core sites. If one failed the other could take over the traffic. Recently we’ve been running both connections near capacity much of the time and in the event of failure of either link we’d have to fall back to a less direct, slower and more expensive route. Time to upgrade.

The upgrade involved moving from a copper CAT5e connection to optic fibre. As a company run by physics graduates, this is an excellent excuse to add yet more LASERs to our collection. Sadly the LASERs aren’t very exciting: at 1310nm they’re invisible to the naked eye, and for safety reasons they’re very low powered (~1mW). Not only will they not set things on fire (bad), they also won’t blind you if you accidentally look down the fibre (good). This is not universally true for all optic fibre though; DWDM systems can have nearly 100 invisible laser beams in the same fibre, each at 100x the power output. Do not look down optic fibre!

The first upgrade, at Sovereign House, went smoothly, bringing online the first 10Gbps LONAP link. The upgrade in Harbour Exchange proved a little more problematic. We initially had a problem with an incompatible optical transceiver. Once it was replaced, we saw a further issue with the link being unstable, which was resolved by changing the switch port and optical transceiver at LONAP’s end. We then had further low-level bit errors resulting in packet loss for large packets; this was eventually traced to a marginal optical patch lead. Many thanks to Rob Lister of LONAP support for quickly resolving this for us.

With the upgrade completed, we now have two 10Gbps connections to LONAP, in addition to our two 10Gbps connections into the London Internet Exchange and multiple 10Gbps transit uplinks, as well as some 1Gbps private connections to some especially important peers.

To celebrate this we’re dropping our bandwidth excess pricing to 1p/GB for all London based services.  The upgrades leave us even better placed to offer very competitive quotes on high bandwidth servers, as well as IPv6 and IPv4 transit in Harbour Exchange, Meridian Gate and Sovereign House.  Please contact us at sales@mythic-beasts.com for more information.

FRμIT: Federated RaspberryPi MicroInfrastructure Testbed

July 3rd, 2017

The participants of the FRμIT project, distributed Raspberry Pi cloud.

FRμIT is an academic project that looks at building and connecting micro-data-centres together, and at what can be achieved with this kind of architecture. Currently they have hundreds of Raspberry Pis, and they’re aiming for 10,000 by the project’s end. They invited us to join them: we’ve already solved the problem of building a centralised Raspberry Pi data centre, and they wanted to know if we could advise and assist their project. We recently joined them in the Cambridge University Computer Lab for their first project meeting.

Currently we centralise computing in data centres, as it’s cheaper to pick up the computers and move them to the heart of the internet than it is to bring extremely fast (10Gbps+) internet everywhere. This model works brilliantly for many applications, because a central computing resource can support large numbers of users, each connecting with their own smaller connection. It works less well when the source data is large and somewhere with poor connectivity, for example a video stream from a nature reserve. There are also other types of application, such as SETI@home, which have huge computational requirements on small datasets, and for which distributing work over slow links is effective.

Gbps per GHz

At a recent UK Network Operator Forum meeting, Google gave a presentation about their data centre networking where they built precisely the opposite architecture to the one proposed here. They have a flat LAN with the same bandwidth between any two points so that all CPUs are equivalent. This involves around 1Gbps of bandwidth per 1GHz of CPU. This simplifies your software stack as applications don’t have to try and place CPU close to the data but it involves an extremely expensive data centre build.

This isn’t an architecture you can build with the Raspberry Pi. Our Raspberry Pi cloud is about as close as you can manage, with 100Mbps per 4×1.2GHz cores. This is about 1/40th of the network capacity required to run Google-architecture applications. But that’s okay; other applications are available. As FRμIT scales geographically, bandwidth will become much more constrained – it’s easy to imagine a cluster of 100 Raspberry Pis sharing a single low-bandwidth uplink back to the core.

This immediately leads to all sorts of interesting and hard questions about how to write a scheduler, as you need to know in advance the likely CPU/bandwidth mix of your distributed application in order to work out where it can run. Local data distribution becomes important – 100+ Pis downloading updates and applications may saturate the small backbone links. They also have a variety of hardware types, ranging from the original Pi Model B to the newer and faster Pi 3, and possibly even some Pi Zero Ws.

Our contribution

We took the members of the project through how our Raspberry Pi Cloud is built, including how a Pi is provisioned, how the network and operating system are provisioned, and the back-end for the entire process from clicking “order” to a booted Pi awaiting customer login.

In discussions of how to manage a large number of federated Raspberry Pis, we were pleased to find considerable agreement with our method of managing lots of servers: use OpenVPN to build a private network and route a /48 of IPv6 space to it. This enables standard server management tools to work, even where the Raspberry Pis are geographically distributed behind NAT firewalls and other creative network configurations.

Donate your old Pi

If you have an old Raspberry Pi, perhaps because you’ve upgraded to a new Pi 3, you can donate it directly to the project through PiCycle. They’ll then recycle your old Raspberry Pi into the distributed compute cluster.

We’re looking forward to their discoveries and enjoyed working with the researchers. When we build solutions for customers we’re aiming to minimise the number of unknowns to de-risk the solution. By contrast tackling difficult unsolved problems is the whole point of research. If they knew how to build the system already they wouldn’t bother trying.