Understanding Latency 3.0, a conference focused on network latency, has started its second day. There’s still time to join the rest of day two and day three.
https://understandinglatency.com
I’ve posted a few bits related to the conference.
It appears that Network Neutrality will be dealt a severe blow at the FCC soon. Although some smaller ISPs are supportive of this change, I believe that it represents a major threat to the long term health of the small ISP industry.
I love small ISPs. A long time ago I owned one, I’ve worked as a network admin for one, and I spend some of my spare time on the board of directors for a small co-operative ISP. After a seven-year foray into building products for the largest ISPs in the world, I’m extremely happy to be 100% focused on building products for small ISPs. Whether this makes me biased or just passionate probably depends on your point of view.
Firstly, why do some small ISPs support the pending changes? I think this breaks down into a few high level categories:
It’s natural for the independent, entrepreneurial types that often run small ISPs to hold these views, but in this instance government is the only entity with the power to put rules in place that protect small ISPs from the large ISPs and from other large companies on the content side.
Before explaining, there is one fact that many ISPs don’t want to face but need to understand:
Your customers don’t pay for access to your network; there is nothing on your network they care about. Your customers pay you to connect them to the services and people they wish to communicate with.
Consider an alternate reality where the Internet did not have strong Network Neutrality norms and rules, and a content service such as Netflix is born. In this world, a giant like Comcast would probably ignore the new service (cable TV is a cash cow, after all) until they see strong results from Netflix, or perhaps early indications that customers prefer this new form of content delivery.
In this alternate reality, the rational business decision is to slow down, block, or charge hefty fees to this new content upstart. Most consumers don’t have multiple real choices for Internet service, so where will they go? Better yet, if the new upstart is killed or weakened early enough, consumers won’t even know an alternative exists, so they won’t look for another provider that doesn’t take these types of actions. The second rational business decision, after hurting this new competitor, would be to start a competing service. As an aside, 99% of the time, betting against any person or company acting in their best financial interest is a stupid idea.
So how does this relate to small providers? They don’t have this kind of market power.
Your customers pay you to connect them to the services and people they wish to communicate with.
In this reality, Netflix doesn’t exist, at least not at the scale and competitiveness that it does today. Maybe even Google doesn’t.
If these services and others like them don’t exist, then why would consumers pay to connect to a small ISP’s network? There is nothing on the network the consumer cares about and nothing competitive they can reach through it. They are better off with a poor connection to a network with stuff they want than a great connection to a network with nothing they care about.
ISPs have some right to complain about the pain caused by the growth of video streaming. It’s not cheap to keep up with that rate of growth. However, the alternative of having no demand to connect to your network is much, much worse.
Another way to think about this is that it is natural for ISPs to want to have content and services that no other network does. This is simply fighting the “there is nothing on your network they care about” problem. Small ISPs can’t hope to compete in this area and to whatever extent large ISPs are successful, it will come at the cost of the smaller providers.
I don’t expect the world to end the day that Network Neutrality rules are removed. However, I do believe that this change adds risk to the long term viability of small ISPs and the industry that supports them. So, if you run a small ISP, or sell to small ISPs and support the pending changes, please think hard about this. The ‘freedom’ you are asking for extends to entities with much bigger boots than you.
Why The Internet Only Just Works
A paper which provides an overview of some of the problems in the Internet architecture.
A talk describing Google’s new TCP congestion control algorithm, BBR, is now online.
Such a beautiful and simple solution to a long-standing problem. This is one of those situations where you have to wonder why it wasn’t done before.
On a related note, it’s interesting how BBR separates the retransmission and congestion control (rate) logic. There is a section in An Engineering Approach to Computer Networking where the author specifically calls out that it’s easier to solve both problems if they are considered separately vs. using the retransmission window size to control the rate as most TCP congestion control solutions do. This struck me as very interesting and I’m excited to see it demonstrated by sch_fq and BBR now.
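On Linux, picking the congestion control algorithm is exposed per-socket through the TCP_CONGESTION socket option; a minimal sketch of opting a connection into BBR (whether the tcp_bbr module is loaded on a given kernel is an assumption, so the code falls back gracefully):

```python
import socket

# Linux-only sketch: the kernel's congestion control algorithm is
# selectable per-socket via TCP_CONGESTION (Python 3.6+ exposes it).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
except OSError:
    pass  # tcp_bbr not loaded on this kernel; the default stays in effect
# Read back whichever algorithm the socket actually has.
algo = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
print(algo.split(b"\x00")[0].decode())
s.close()
```

The same option can be read without setting it, which is a handy way to check a distribution’s default (often cubic).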
A while ago the Packet Pushers had Geoff Huston on as a guest in the future of networking series. There are lots of good ideas and contrarian opinions in that podcast episode – go listen to it.
During the episode, Geoff mentioned a book that had a big influence on him called An Engineering Approach to Computer Networking. Naturally, I ordered a copy.
The first thing you’ll notice reading this book is that some aspects are dated – it was written in 1996 after all. This becomes obvious early on when ATM is presented as the likely replacement for Ethernet, one that will play a major role in the future – obviously that didn’t work out.
Fortunately, most of the content is much more timeless.
Almost all networking books that try to be computer science or engineering textbooks are much closer to descriptions of how IP networking works than to really teaching the science behind networking. This book is the opposite – it’s not the book to read if you want to win an IP networking quiz show.
The principles discussed throughout the book underlie all circuit and packet switched networks so digging into details that may seem out of date is still well worth the time. This helps to build a solid foundation and gives a bit of perspective on how and why so much has stayed the same.
I’m not going to bother giving this book any kind of rating. If you have an interest in computer networking you should read it. For an even more abstract and science-based view of computer networking you should also read my favourite networking book – Patterns in Network Architecture.
I recently finished reading The Art of Network Architecture. If I remember correctly, I found out about this book during an episode of the Packet Pushers where the author participated.
I ordered the book based on the promise of discussion of SDN use cases and SDN networking in general. It turns out that this wasn’t the best book to dig into that area, but it does offer a nice overview and reminder of networking concepts across all areas of networking, from design to management. So while parts are a bit fluffy and common-sense, it was worth reading in the same way a good survey paper is.
Part 1: Internet Redundancy, Or Not
Part 2: Redundant Connections to a Single Host?
In the last post I discussed how devices like your laptop and mobile phone are computing devices with multiple Internet connections not all that different from a network with multiple connections. The anecdote about Skype on a mobile phone reconnecting a call after you leave the range of Wi-Fi alludes to one key difference. That is, a device directly connected to a particular network connection can easily detect a total failure of said connection. In the example, this allowed Skype to quickly try to reconnect using the phone’s cellular connection.
Think back to our initial problem: how can a normal business get redundant Internet connections? The simplest, and at best a half solution, is a router with two WAN connections, NATing out each port.
Now imagine you are using a laptop which is connected to a network with dual NATed WAN connections and you are in the middle of a Skype call. The connection associated with the Skype call will use one of the two WAN network ports and since NAT is used, the source address of the connection will be the IP address associated with the chosen WAN port. As we discussed before, this ‘binds’ the connection to the given WAN connection.
In our previous example of a phone switching to its cellular connection when the Wi-Fi connection drops, Skype was able to quickly decide to try to open another connection. This was possible because when the Wi-Fi connection dropped, Skype got a notification that its connection(s) were terminated.
In the case of a device, like our laptop, which is behind the gateway, there is no such notification because no network interface attached to the local device failed. All Skype knows is that it has stopped receiving data – it has no idea why. This could be a transient error, or perhaps the whole Internet died. This forces applications to rely on keep-alive messages to determine when the network has failed. When a failure determination occurs, the application can try to open another connection. In the case of our dual NATed WAN connected network, this new connection will succeed because it will be NATed out the second WAN interface.
In the meantime, the user experienced an outage even though the network still had an active connection to the Internet. The duration of this outage depends on how aggressive the application timeouts are. It can have short timeouts and risk flapping between connections, or longer timeouts and provide a poorer experience. Of course, this also assumes that the application includes this non-trivial functionality; most don’t.
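The keep-alive logic applications are forced to implement can be sketched like this (the class name and the timeout value are illustrative, not taken from any particular application):

```python
import time

class KeepAliveTracker:
    """Infers peer liveness from application-level keep-alive replies.

    A host behind a NAT gateway gets no link-down notification when an
    upstream WAN link fails, so the application must infer failure from
    silence. The timeout is the trade-off discussed above: short values
    risk flapping between connections, long values mean a longer outage.
    """

    def __init__(self, timeout=15.0, now=time.monotonic):
        self.timeout = timeout
        self.now = now          # injectable clock, handy for testing
        self.last_heard = now()

    def heard_from_peer(self):
        """Call whenever any data or keep-alive reply arrives."""
        self.last_heard = self.now()

    def is_alive(self):
        """False once silence exceeds the timeout; time to reconnect."""
        return (self.now() - self.last_heard) < self.timeout
```

When `is_alive()` turns false, the application opens a fresh connection, which the gateway will NAT out the surviving WAN link.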
Isn’t delivering packets the network’s job, not the application’s?
Part 1: Internet Redundancy, Or Not
Previously I wrote about how true redundancy for Internet connections is only available to Internet providers and very large enterprises. This post continues from there.
I would guess that the fact that it’s not possible to get redundant Internet access is a big surprise to people who haven’t looked into it in detail. Surprisingly though, if you have a smart phone or a laptop that you plug into an Ethernet port, you live through the problems caused by this Internet protocol design flaw every day. These problems seem so normal that you may have never considered that reality could be otherwise.
Let’s start with the example of a laptop attached to a docking station at your desk. It’s very common to use the wired Internet connection at your desk vs. wireless because it typically offers faster, more consistent service. Consider the case of needing to transfer a large file while working at your desk. You start the transfer, it’s humming along in the background, and you switch over to another task. A few minutes later you remember that you have a meeting so you yank the laptop from its docking station and walk to the meeting room.
What just happened to the file transfer? The answer of course is that the file transfer died and you may be thinking, “Of course it died. The laptop lost its connection”. This seems normal because we’re all used to this brokenness.
Think back to the previous blog post. We were trying to get redundant Internet links to small businesses and households and found that it’s not possible with the Internet protocols as they exist today. Now think about what the laptop is – it’s a computing system with two Internet connections. This really isn’t very different from a network with two connections. Ultimately, the file transfer died because of all the same limitations discussed in the last post related to network redundancy. That is, solving the problem of enabling multiple redundant connections for a network solves the problem for individual hosts as well.
Now consider a smart phone user that starts a Skype voice or video conversation while in the office and then heads to the car to go meet a client. If you’ve accidentally tried this before you know that as you leave the range of the office Wi-Fi, the connection drops. In the particular case of Skype, it may be able to rejoin the conversation after the phone switches to the cellular data connection but most applications don’t even make an attempt at this. Like the laptop, a smart phone is just a computing device with multiple Internet connections.
One last example: your office server that runs Active Directory or performs some other important function. You probably would like network redundancy for this as well, right? This also isn’t possible without low level ‘hacks’ like bonding two Ethernet ports connected to the same switch.
Not only does the Internet not allow for true redundancy for networks, the lack of this functionality causes trouble for end hosts as well.
Imagine you are a business that wants to have redundant connections to the Internet. Given the importance of an active Internet connection for many businesses this is a reasonable thing for an IT shop or business owner to ask for. One could also consider the serious home gamer who can’t risk being cut off as another use case.
Let’s dig into the technical options for achieving Internet redundancy.
The first and most obvious path would be to purchase a router that has two WAN ports and order Internet service from two different providers. Bam, you are ready to go, right? Well… not really.
The way this typically works is that the router will choose one of the two Internet connections for a given outbound connection. The policy could be to always use connection A until it fails, or be more dynamic and take some advantage of both connections at the same time. The problem with this approach is that because the traffic will be NATed towards each Internet provider, there is no way to fail a given connection over from one Internet provider to the other. So the failure of one of the Internet connections means that your voice call, SQL connection, or game session will die, probably after some annoyingly lengthy TCP or application level timeout expires. If the site is strictly doing short outbound connections, as is the case with HTTP 1.1 traffic, this isn’t such a big deal.
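A minimal sketch of why an established connection is ‘bound’ to one WAN address (loopback stands in for a WAN link here; this illustrates the general socket behaviour, not any particular router):

```python
import socket

# Once connect() completes, the connection's source address and port are
# fixed; the remote end identifies the flow by this exact 4-tuple
# (src addr, src port, dst addr, dst port). A dual-WAN NAT router cannot
# move an established connection to its other WAN address without
# breaking that tuple - which is why the connection dies on failover.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
conn, peer = server.accept()

# The server-side view of the client must match the client's bound tuple.
assert peer == client.getsockname()
print("connection source:", client.getsockname())

conn.close()
client.close()
server.close()
```

Swap either half of the client’s address for the other WAN IP and, from the server’s perspective, it is simply a different (unknown) connection.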
So the ‘get two standard Internet connections and a dual port WAN router’ approach sort of works. Let’s call it partially redundant.
How do we get to true redundancy that can survive a connection failure without dropping connections? For this, we need the site’s network to be reachable through multiple paths. The standard way to do this is to obtain IP address space from one of the service providers or get provider independent IP space from one of the registries (such as ARIN). Given that IPv4 addresses are in short supply, this isn’t a trivial task. The conditions that have to be met to get address space are well out of the reach of small and medium businesses. Even when the barriers can be met, it’s still archaic to have to do a bunch of paperwork with a third party for something that is so obviously needed.
The real kicker is that the lack of IP space is only part of the problem. IPv6’s huge 128-bit address space doesn’t really help at all because to use both paths, the site or home’s IP prefix needs to exist in the global routing table. That is, every core router on the Internet needs an entry that tells it how to reach this newly announced chunk of address space. The specialized memory (CAM) used by these routers isn’t cheap so there is a strong incentive within the Internet operations community to keep this kind of redundancy out of the reach of everyone except other ISPs and large businesses.
So the simple option doesn’t really solve the problem and ‘true’ redundancy isn’t possible for most businesses. What about something over the top?
Consider a router that is connected to multiple standard Internet connections. It could maintain two tunnels, one over each connection, towards another router somewhere else on the Internet. To the rest of the Internet, this second router is where the business is connected to the Internet. If one of the site’s Internet connections fails, the routers can simply continue passing packets over the remaining live tunnel, thereby maintaining connectivity to the end site. From an end user’s perspective, this solution mostly works, but let’s think about the downsides. We’ve essentially made our site’s redundancy dependent on the tunnel termination router and its Internet connectivity, whereas without this we are just at the mercy of the ISP’s network. Also, unless the end site obtains its own address space, this approach has all the downsides of the first approach, except the NAT related problems occur at the tunnel termination router instead of on-site. Finally, if the site can get its own address space, why do the tunnel approach at all?
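The failover decision at the tunnel-terminating router can be sketched in a few lines (the tunnel names and liveness flags are illustrative; a real implementation would derive liveness from keep-alives over each tunnel):

```python
# Sketch of the failover decision at the tunnel-terminating router: the
# site's public address lives at the terminator, so it simply keeps
# forwarding the site's traffic over whichever tunnel is still alive.
def pick_live_tunnel(tunnels):
    """Return the first live tunnel's name, or None if all are down."""
    for name, alive in tunnels:
        if alive:
            return name
    return None

# Both site WAN links up: prefer the primary tunnel.
print(pick_live_tunnel([("tun-wan1", True), ("tun-wan2", True)]))   # tun-wan1
# WAN link 1 fails: traffic shifts to the surviving tunnel, and
# established connections survive because the site's public address,
# anchored at the terminator, never changes.
print(pick_live_tunnel([("tun-wan1", False), ("tun-wan2", True)]))  # tun-wan2
```

This is exactly what makes the approach attractive and also what makes the terminator a single point of failure.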
I should note, because someone will point it out in the comments, that for very large organizations it’s possible to get layer two connectivity to each site and essentially build their own internal internet. If they have enough public IP space they can achieve redundancy to the end site for connections with hosts on the public Internet. With private IP space, they can achieve redundancy for connections within their own network. Without public IP space, even these networks suffer from the NAT related failure modes.
To summarize, if you aren’t a very large business, there is no way to get true Internet connection redundancy with the current Internet protocols. That’s kinda sad.
My article on Packet Queueing in the Linux Kernel appeared in the July 2013 issue of Linux Journal. Now that a month has passed, Linux Journal’s great copyright policy allows me to post the content. You can find the full article at the URL below.