Is the transition to IPv6 inevitable?
39 points by andyc
The argument for a while has been that there will be inflection points where large groups are v6-only and then you really want to use v6 to be reachable by them. This hasn’t happened much. We have CGNAT for v4 and it’s awful, but there are two big commercial incentives against v6:
If addresses are scarce, it’s easy to justify not being able to run servers on consumer connections. Hosting providers love this. With 1Gig symmetric FTTP connections being rolled out (not ubiquitous, but not ludicrously expensive outliers anymore), most people’s personal hosting requirements can easily be served by their own local network with a $25 server. Why do you need YouTube when you can share videos from your own connection without ads or dependence on a third party that can decide to ban you for no reason?
If addresses are scarce, you can sell them for more. Vultr has a cheaper tier that is v6-only, but hosting companies that have a load of IPv4 addresses don’t want to incentivise the transition they want to sell more. Azure’s pricing for v4 and v6 addresses is the same. A v4 address doesn’t cost the same amount as a v6 /48 or a /64, it costs the same as a /127. This makes no sense unless you realise that it’s easier to sell v4 addresses if people don’t transition to v6.
If addresses are scarce, you can sell them for more.
I think it’s more insidious than that. The big cloud computing providers and ISPs have, and continue to, gobble up a large fraction of the IPv4 address space. While that is also a big cost to them, they will have been able to acquire far more addresses earlier and cheaper than any of their competitors. If you wanted to compete with them, you would have to buy those addresses at far higher costs.
IPv4 Addresses are a competitive moat.
most people’s personal hosting requirements can easily be served by their own local network with a $25 server.
IPv4 Addresses are a competitive moat.
I 100% believe this (almost-not-even) conspiracy theory, especially when it comes to ISPs whose business models have not kept up with reality and where switching options for customers are limited or even non-existent.
Tbh I feel like it’s anti-competitive
most people’s personal hosting requirements can easily be served by their own local network with a $25 server. Why do you need YouTube when you can share videos from your own connection without ads or dependence on a third party that can decide to ban you for no reason?
When a particular solution has been widely available for decades yet has seen zero market penetration, it’s important to double-check whether the problems it solves are the same problems that people want solved.
Most people are uninterested in being a sysadmin. If I have a video I want to share then I can upload it into YouTube and not worry about keeping a self-hosted ffmpeg
transcoding pipeline up to date. YouTube won’t page me in the middle of the night if there’s an operational problem.
Popularity of always-on plug-in machines in private homes is low and falling. Most people’s primary computers are phones or tablets, with laptops making up most of the rest. Self-hosting requires purchasing dedicated hardware (probably something like a Synology NAS appliance), and may involve running cables in an otherwise WiFi-only household.
People that want to upload videos to YouTube without ads will disable monetization when they upload their videos. People who are bothered by ads install uBlock and forget the ads exist. People who aren’t bothered by ads don’t feel any particular need to reduce the ads other people see on YouTube.
The prevalence of content on YouTube that is copyright infringement, politically controversial, or grotesque leads most people to be unconcerned about the chance of being banned for uploading home videos of their cats.
In the ideal world where everyone uses IPv6, I’m sure they’ll also be using things like PeerTube and other P2P solutions that will work around that potential issue. Also, the likelihood that any video you publish will ever get enough attention that your 1 Gbps connection (still in the ideal world here) won’t be able to handle it is rather remote. No offense intended to your video making skills, it’s just that the odds are against it.
I have definitely maxed out my home internet connection just serving video to my friends. Where I live, nearly everybody is on DSL and upload bandwidth is severely restricted - 30 Mbps is the most you’ll get. I have easily maxed this out just serving video to a couple friends from my little $200 server.
$25 gets you a Pi Zero 2 with 1.0 GHz quad core Arm A53 and 0.5 GB RAM.
$30 gets an Orange Pi RV2 with eight 1.6 GHz RISC-V cores similar to Arm A55, and 2 GB RAM.
There’s a pretty big space between sharing video with my friends and running YouTube. IPv6 helps a lot with that job. I rather like the fact that now that I have IPv6, I can share digital objects with my friends, or with myself when I’m not home. I like the fact that IPv6 makes it easy to do when I want to, and I don’t have to worry too much about the details. I do agree that I’m not most people, but I would say that just because a market isn’t large doesn’t mean it’s not viable. You say that there is zero market penetration here, but I do this and I know other people who do it, which means that by “zero” you really mean “a really small number”. In my circle, and I don’t cultivate my circle for self-hosters, I find that somewhere around 1 in 20 people I know want to self-host something.
there will be inflection points where large groups are v6-only and then you really want to use v6 to be reachable by them. This hasn’t happened much
… in western countries. But it is happening elsewhere, mostly out of necessity.
Really? What countries have users without CGNAT?
Mobile providers in SEA are the classic example, though with the important caveat that an IPv6-only network doesn’t mean an IPv6-only device. You can have the local device OS convert IPv4 to IPv6 and then convert back on the ISP end, so each device is a full IPv6 participant and can also be an IPv4 client.
See https://datatracker.ietf.org/doc/html/rfc6877 (“464XLAT”)
Mobile providers in SEA are the classic example […] See https://datatracker.ietf.org/doc/html/rfc6877 (“464XLAT”)
I’m a bit confused. I thought 464XLAT uses a stateful NAT64 on the carrier side? How is that not a CGNAT?
CGNAT typically describes a scheme where non-globally-routable IPv4 addresses are provisioned to customer devices. You open the iPhone settings and there’ll be a 10.0.0.0/8 or whatever IPv4 address in there.
464XLAT provisions IPv6 addresses; the network is entirely IPv6, and any IPv4 support is closer to a VPN between the customer device and an ISP-provided proxy layer.
CGNAT (Carrier-Grade NAT) for me means there is a NAT running at the carrier side. If I have a local IPv4 address or if that translation happens at the carrier level isn’t all that relevant. From the perspective of an IPv4 device on the other side, I show up with an IP+port tuple that it can reply to.
From the perspective of an IPv4 device on the other side, I show up with an IP+port tuple that it can reply to.
It seems like you’re thinking of “NAT” as a synonym of “network proxy layer”, which is not how that term is typically understood in networking.
Pretty much any L3 proxy will look the same from the perspective of the remote peer; the difference between (for example) a NAT, a VPN, and a MITM is the relationship between the proxy and the peer being proxied. So when people talk about NAT, they’re referring to a proxy that remaps the (address, port)
tuple while keeping the rest of the IP packet intact[0]. The packet sizes don’t change, which is a distinguishing feature of NAT vs encapsulation (e.g. VPNs, or 6to4) or protocol translation (SIIT).
[0] I’m skipping over some of the practical details here. NATs need to inspect the IP packet’s payload because the “port” doesn’t exist at the IP layer, it’s part of the TCP/UDP/SCTP layer. So you’ll often see NAT implementations that parse IP+{ICMP,TCP,UDP}
and drop anything else on the floor, or only support IP packets that don’t have extension fields.
If I have a local IPv4 address or if that translation happens at the carrier level isn’t all that relevant.
It’s relevant to software on your device (which can’t open an IPv4 socket if there’s no local IPv4 address) and to any intermediate routing hardware (which needs to support IPv6). If you connect a device without the correct configuration to the network then it won’t be able to connect to IPv4 services, which can be relevant for hotspots and tethering.
You are hung up on a technicality that’s not really relevant to the original point.
I sympathize with your meaning, but this is a technical discussion that hinges on specific minutiae of networking protocols, posted on a site dedicated to computer programming. A certain amount of terminological precision (or annoying pedantry, if you prefer) is to be expected.
I think you’ve only described a NAT in your first paragraph, often performed by the CPE. A CGNAT describes NAT done at the regional or multi-regional scale, e.g. Comcast NATing the v4 packets of every customer in Denver through a single rack.
Originally the idea was that NAT’ing the remaining dwindling v4 usage would be something to isolate and consolidate as you tore down your v4 infra, but now that we’ve done the work to put a NAT into every cable modem and wifi router there’s no reason to do that consolidation vs keeping it at the edge.
I think you should re-read my post.
but now that we’ve done the work to put a NAT into every cable modem and wifi router there’s no reason to do that consolidation vs keeping it at the edge.
Mobile providers provide connections to mobile devices (phones, tablets, hotspots). There is no separate CPE for the modem/switch/router; your phone connects to the tower which tells it “you are 10.80.90.100” or whatever. In a CGNAT IPv4 configuration the IP address assigned by the ISP to the device is not globally routable, it’s part of the RFC 1918 10.0.0.0/8
and RFC 6598 100.64.0.0/10
address ranges.
I believe you are technically correct (which is, of course, the best kind of correct), but the point that matters for users is:
If I have a client application that speaks IPv4 and IPv6, can it connect to an IPv4-only server without my having to do any manual configuration?
If the answer is ‘no’, then there’s a big incentive for the server to support IPv6. If the answer is ‘yes’, IPv6 is extra complexity but its lack doesn’t lose you customers. You have three kinds of customers: those who can only reach you over IPv4, those who can only reach you over IPv6, and those who can use either.
IPv6 becomes very important for a service provider once a non-trivial number of your potential customers are in the second category. If a lot are in the first category, IPv6 is a cost. If everyone is in the second or third category, it’s probably a small cost saving.
You’re still thinking in terms of IPv4-only vs IPv6-only, but for customer devices (assumed to not accept incoming connections) the actual categories are “IPv4-only” and “IPv6-capable”. All devices that are IPv6-capable can make outgoing connections to IPv4-only endpoints by mapping the IPv4 address space into IPv6 and then letting the ISP’s network translate to/from IPv4. Or CGNAT. Or just being really aggressive about dynamically-allocating IPv4 addresses (no more long-lived leases).
It’s similar to how IPv6 tunnels used to be necessary to access IPv6 sites via the IPv4 internet, but the situation is now reversed. IPv4 is being tunneled over an increasingly IPv6-native backhaul.
IPv6 becomes very important for a service provider once a non-trivial number of your potential customers are in the second category. If a lot are in the first category, IPv6 is a cost. If everyone is in the second or third category, it’s probably a small cost saving.
The second category (IPv6-only client endpoints) can be ignored, because the only people who are operating IPv6-only clients are weird techies browsing your site from Lynx on their jailbroken Dreamcast.
The third category is growing because IPv4 performance is decreasingly important (all the sites that regular people spend time on are IPv6-capable) and IPv4 ASNs are increasingly valuable, so ISPs have cost incentives to lease them out (or auction them off). See https://www.kentik.com/blog/exodus-of-ipv4-from-war-torn-ukraine/ for a recent example of using IPv4 address space to raise funds.
As one additional data point, the ISP I use in Tokyo advertises higher throughput and lower latency for IPv6 connections. They have a customer-facing marketing page and brochures and everything (https://asahi-net.jp/service/option/ipv6/). The monthly bill is split into $50 to NTT (who operate the physical infrastructure) and $5 to Asahi (who operate the customer-facing parts of an ISP), so the profit margins must be fairly thin, and the cost of IPv4 address space is probably big enough to matter.
You’re still thinking in terms of IPv4-only vs IPv6-only, but for customer devices (assumed to not accept incoming connections) the actual categories are “IPv4-only” and “IPv6-capable”.
That was my point. When you have clients that are in these two categories, the cost of not doing IPv4 is very high (you lose customers). The cost of not doing IPv6 is low (you don’t lose customers). The cost of doing IPv6 is, at least, non-zero.
20 years ago, we were told that there would be a load of users coming online that were v6-only because we’d run out of v4 addresses for them. We all must support v6 on the server to be able to reach them. These users do not, as you say, exist in a statistically significant way. So that big inflection point where you lose customers if you don’t support IPv6 is not there.
Big sites like GitHub can afford to be IPv4-only. And GitHub is one of the few sites where ‘weird techies browsing your site from Lynx on their jailbroken Dreamcast’ might actually be a noticeable percentage of your userbase.
The third category is growing because IPv4 performance is decreasingly important (all the sites that regular people spend time on are IPv6-capable) and IPv4 ASNs are increasingly valuable, so ISPs have cost incentives to lease them out (or auction them off)
But that’s exactly where the counter-incentives that I started this thread with come from. IPv4 ASNs are valuable, but some companies already own a load of them. These companies are incentivised to make IPv6 as annoying as possible (see: Azure VMs can use IPv6 only via a NAT and you pay as much for an IPv6 /127 as for a v4 address).
As one additional data point, the ISP I use in Tokyo advertises higher throughput and lower latency for IPv6 connections
This is interesting and is the kind of thing that would make a difference for big sites. Google famously cares about 10ms additional delays in load times, for example, but sites that big are (with a few notable exceptions) already on IPv6. The problem with adoption is the long tail. And there, again, the incentives end up being the wrong way around. If I’m running a site for a SME with a few hundred or a few thousand customers, then the fact that clients with very fast connections can use a fraction of their bandwidth to hit my site is a feature: it prevents accidental DDoS. Loading my site won’t use more than a fraction of their bandwidth and an extra 50ms of latency won’t make a meaningful difference to bounce rates. At the same time, not having to deal with IPv6 reduces my costs. Not having to deal with IPv4 might reduce my costs more, but my choice isn’t between supporting v4 or supporting v6, it’s supporting v4 or supporting both.
Tackling that long tail is hard. Peer-to-peer systems were originally going to be the killer apps for IPv6, because they’re painful to work with NAT, but the Internet moved in a client-server direction, which reduced that need.
20 years ago, we were told that there would be a load of users coming online that were v6-only […] So that big inflection point where you lose customers if you don’t support IPv6 is not there.
While I don’t want to overly disparage the no-doubt skilled technologists who were trying their best to predict an uncertain future, this position would have been as silly 20 years ago as it is today. By 2005 it was already considered archaic to assign globally-routable IPv4 addresses to individual devices. The idea that someone in Africa wouldn’t be able to connect to an IPv4-only site due to lacking a globally-routable IPv4 address would have been like saying someone in a city without apartment addresses can’t send or receive mail.
I do remember a lot of discussions about what would happen when the IPv4 address space started running low, and it was common knowledge that large early adopters (HP, IBM, various American universities, DARPA) each had multiple /8s that cumulatively would have extended the runway indefinitely had those organizations switched to local addressing.
Maybe 30 years ago (1995)? Before my time, but that would have been right in the middle of the IPX->IP and CIDR transitions, Windows 95 had native TCP/IP, and I think DNS was taking off around that time too. I don’t know if work on IPng had started at that point, but surely someone had done the math and realized 2**32 wasn’t going to be enough if India and China industrialized. That would have also pre-dated the adoption of consumer routing hardware, so there wouldn’t have been as much awareness that 1:M topologies (NAT) were practical.
Also, as a somewhat meta point, I don’t believe that “at least one person made uneducated claims about a topic, therefore those claims should be interpreted as representative of the educated consensus” is a valid form of argument. Too many memories of people on the TV claiming Y2K would cause banks to zero deposit accounts, too many phpbb forums full of sockpuppets, too many Twitter threads full of markov chains.
Google famously cares about 10ms additional delays in load times, for example, but sites that big are (with a few notable exceptions) already on IPv6. The problem with adoption is the long tail. And there, again, the incentives end up being the wrong way around. If I’m running a site for a SME with a few hundred or a few thousand customers, […]
IPv6 adoption is measured by client support, not server support. It doesn’t really matter whether a website supports IPv6 or not – sure, it’s nice and gives a certain kind of person the same good feelings as scoring A+ on ssllabs or having an .onion
address, but at the end of the day it is client support that decides the migration timeline.
At some point in the future client support for IPv6 will be >95% and websites will start turning off their IPv4 support to save $4/month, but if someone wants to keep using IPv4 indefinitely then there’s nothing stopping them – it’s like how websites can still be written in HTML 3.2 if the author likes that vibe.
Peer-to-peer systems were originally going to be the killer apps for IPv6, because they’re painful to work with NAT, but the Internet moved in a client-server direction, which reduced that need.
I’ve been hearing that peer-to-peer networking is the future for longer than I’ve known what networking is, along with other weird de-centralized phantasms like web-of-trust and mesh networking. Every time there’s a new technology (IPv6 included) somebody will show up to claim that peer-to-peer whatever will be the killer app for it, and meanwhile the rest of the world decides that paying someone $10/month (or $0) is better than having to participate in a peer-to-peer ecosystem.
I don’t mean to take this out on you, exactly, but it’s your thread and in the OP you imply that self-hosting is something that people would voluntarily do, so here we are.
Do you think that a typical person would self-host videos (or photos, documents, etc) if only their residential internet connection had enough bandwidth and a globally-routable stable IPv4 address?
Everything I have experienced in my career has lead me to the opposite conclusion, which is that if you give the average person a choice between self-hosting and becoming an Amish turnip farmer they would choose the turnips.
Work on CIDR and IPng started at basically the same time in 1992/3:
https://www.rfc-editor.org/rfc/rfc1367 - restrictions on classful address allocation are introduced
https://www.rfc-editor.org/rfc/rfc1380 - IESG makes plans
https://www.rfc-editor.org/rfc/rfc1454 - three IPng candidates are established
https://www.rfc-editor.org/rfc/rfc1467 - classful restrictions and CIDR and BGP4 deployment proceeding as planned
https://www.rfc-editor.org/rfc/rfc1519 - CIDR becomes official
https://www.rfc-editor.org/rfc/rfc1550 - IPng official solicitation for proposals
While I don’t want to overly disparage the no-doubt skilled technologists who were trying their best to predict an uncertain future, this position would have been as silly 20 years ago as it is today.
I’m not good at time. I suspect I’m thinking of the period around 1998-2000, which (it turns out) is more than 20 years ago.
Even in 2002, my dial-up ISP (Demon) gave me a stable IPv4 address and most customers had a single machine connected to their modem. We had a LAN, but I was living in a shared house with three other geeks, so we were outliers. I moved to cable around then. My cable ISP gave us one IP and required NAT for residential users, but if you went to their commercial offering then they’d give you multiple public IPs. I consulted for a couple of small companies a year or two later that were doing exactly this: every machine in their network had a publicly routable IP (though they had firewalls that blocked all incoming connections).
Do you think that a typical person would self-host videos (or photos, documents, etc) if only their residential internet connection had enough bandwidth and a globally-routable stable IPv4 address?
It depends on how it’s done. If self hosting means get a thing that looks like a computer and configure it like a server? Absolutely not.
If self hosting means you buy an ApplePrivateSharingWidget™, connect it to your WiFi, and have a way of sharing things with your friends and family, and publishing public things, that doesn’t depend on any third-party service and is advertised with strong privacy guarantees? Absolutely. A few companies have tried to offer things like this and the problem hasn’t been lack of demand, it’s been that they don’t work (and often violate ISP T&Cs, which prohibit running servers on consumer-grade services) and they’re killed by high return rates.
It’s not v6-only, so not really what you’re looking for, but Japan does have a chunk of users that use “v6-plus” (aka MAP-E) instead of CGNAT, where the user gets assigned a subset of ports on a public IPv4 address, and the end-user does client-side SNAT to clamp IPv4 traffic down to just those ports on their side before putting it over an IPv4-over-IPv6 IPIP tunnel.
I know you meant “what countries have v6-only users”, so this doesn’t qualify, but it is an ISP offering that’s IPv6-first and without CGNAT (but still with IPv4 access).
To the second point, I don’t think the scarcity of IPv4 addresses is very profitable for providers since the scarcity makes them expensive on the supply side too.
Address scarcity might someday push the prices high enough to force adoption. First it would be small projects that save money with IPv6-only plans… This would hopefully open the floodgates of it becoming a prerequisite to use the internet without disruption.
I wouldn’t bet on this happening anytime soon. IPv4 addresses are still pretty cheap. And things like CGNAT reduce the scarcity.
v4 addresses are expensive on the supply side, but unevenly so. A new player coming in today has to pay a lot to get a decent pool of addresses. Existing big players are probably sitting on huge allocations.
There is also the big issue of network configuration.
If you’re just running your own home network, it’s easy. But if you’ve ever deployed bigger firewalls and networks, you will probably have hit the issue of getting it right for a dual stack. And since there are still people without IPv6, you will have to support v4 anyway, while v6 is effectively optional.
My current ISP raised their prices a lot, so I’m looking for a new provider (hooray for open access fiber networks) but almost all of them are defaulting to CGNAT! That used to be extremely rare in Sweden. Fortunately most of them still give you the option to have a public IP (either as a cheap add-on, like $2/month, or by simply calling customer service after signing up). One provider I looked at said that they’re transitioning to CGNAT because IPv6 will provide public IPs for everyone. Small problem: the fiber network refuses to support IPv6 for non-corporate customers (“hooray” for privately owned fiber networks with as many corners cut as possible).
I only have a German source for that, but apparently ISPs are obliged to give you a free public (dynamic) IPv4 address in the EU, if you request it.
NAT is more expensive (for ISPs) than routing. It’s just more computation required. Remember, they don’t normally firewall. This load scales with number of packets. And ISPs like to sell higher speeds.
On top of that, the number of connections still increases and that puts pressure on the outbound address space.
Offloading part of that traffic to IPv6 is the way to deal with both.
An acquaintance who runs a small ISP has told me the real blockers for them are the customer boxes and network management tools. Lack of investment cash hurts their bottom line in the long run.
Also, another small ISP I know is apparently enabling IPv6 this year. They upped their offering recently: 1Gbps symmetric for $18/month and 2Gbps symmetric for $36/month. I guess they no longer like the peak loads on their equipment.
From my experience running library WiFi for 1400+ students connected at once, almost a decade back, 75%+ of packets went over IPv6 when given the option.
almost a decade back, 75%+ of packets went over IPv6 when given the option.
It’s generally around ~90% these days. This is the irony of IPv6 adoption statistics: While adoption among hosts and networks is terrible, the large CDNs that all of the traffic goes through do support it.
This includes stuff like, say, netflix and youtube. But it also includes stuff like website CDNs: If your website uses something like jsdelivr, google fonts, s3, akamai, etc. a large fraction of the total bytes might be sent over v6 even if your own server does not enable it.
A slight tangent, but I think it explains some of the ignorance I’ve experienced:
A certain program kept timing out and needing to reconnect, making interactive use a PITA. I was asked to diagnose. The people before me did all of the basic things (uninstalling & reinstalling, updating, contacting the company that makes the service and software), but nothing worked. However, I knew that their office was using SonicWall.
I had installed a NetBSD machine to do NAT / firewalling / secure port forwarding over key-based ssh, et cetera, as a backup to the SonicWall. I set the NetBSD’s address as the gateway for the client, and the software never timed out and worked perfectly.
Nobody had thought that an “industry standard” firewall device would be the culprit (although SonicWalls are such shit they really should’ve), so several people before me spent days trying to diagnose, with lots of theories but no results.
Recently we’re starting to see wide scale deployment of CG-NAT, which, depending on the implementation, can be quite shit. I haven’t seen anything as bad as SonicWall, but I’ve seen active sessions with active data going over them get killed while in active use, because, it seems, they had simply been open for a long time. I’ve seen UDP get lost for a handful of seconds every several minutes. I’ve seen machines waiting for new port connections to open for seconds, making everything seem frozen. I’ve seen all open connections get killed at the same time for no good reason.
These kinds of things will get better should people complain, but just like with our cell phones, when client machines connect over IPv6 and we don’t even know that’s happening behind the scenes, IPv6 can reduce or cover up the problems with shitty CG-NAT. Even when that’s not the case, most people don’t even know the issue is CG-NAT, much less know to complain about it.
So we have corporate people who swear they don’t need IPv6 because they’ve “never used it”, unaware of what their cell phones do, and we have people who think CG-NAT is perfectly suitable, because they’re blissfully unaware of some of the issues that IPv6 helps cover up. So should the people who don’t understand the problem have a say? No, but unfortunately, that’s not the case.
I’ve been running IPv6 for all of my services for 25 years. Every excuse I hear from people who say it’s too hard is bull, except one. The one excuse that is true? It’s hard to get our ISPs to give us IPv6.
Are IPv6-only internet exchange points coming? Yes, contrary to Betteridge’s law: there are now standards for advertising IPv4 connectivity over IPv6, and for tunnelling IPv4 over IPv6 between different ISPs.
I’m going to share my controversial opinion, but I think that IPv6 is a failure and should be scrapped. It suffers from what I call the “Jurassic Park” syndrome, mixed with “better is the enemy of good”.
Regarding the last one, IPv4 is good. Only a few people have an incentive to migrate to IPv6 (e.g. the category mentioned in the article: IoT device developers). So why would people migrate? It’s working fine for now; there is no need to do “better.”
As for what I call the “Jurassic Park” syndrome, it refers to the famous quote “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
This is how I feel about IPv6. There are many features in it that I find overkill. Don’t get me wrong, as an engineer, I love it, I love well designed over-engineered stuff. But did we need encapsulation? Couldn’t we have just created IPSec6 with the same ideas as IPSec instead of using extension headers? Did we need jumbograms? …
In my opinion this complexity (in addition to the new address format which makes it hard to read and reason quickly, especially with abbreviated addresses) is hindering its adoption.
For the record, I love the concept of IPv6. However, having set it up a few times on servers with VMs, I find it way too complicated and demotivating to set up.
As a final thought, I think the scarcity of IPv4 is manufactured; there is a lot of speculation on IPv4 blocks. A lot of companies and private equity funds bought IPv4 blocks just to speculate on them. I think the regional internet registries should start by reclaiming all the class A IPv4 blocks (IIRC Apple and a few other companies still have /8 blocks). They should also crack down on the speculation on IPv4 blocks; I’m not sure exactly what that would look like, but I have a few ideas. We should try to find an alternative solution to CGNAT as well. Moreover, the article mentions moving away from well-known ports, and I think that’s a great idea; moving to SRV records for every protocol might be easier than migrating to IPv6, especially since the burden is on the sysadmin, not the user.
This is how I feel about IPv6. There are many features in it that I find overkill. Don’t get me wrong, as an engineer, I love it, I love well designed over-engineered stuff. But did we need encapsulation? Couldn’t we have just created IPSec6 with the same ideas as IPSec instead of using extension headers? Did we need jumbograms?
I don’t really agree with this at all. Many aspects of IPv6 are actually simpler than their IPv4 counterparts, much in part due to lessons learned.
Extension headers are a good example of this: they are there for future proofing, but routers can and will ignore nearly all of them, reducing the computational cost of forwarding. Likewise standardising the AH/ESP encapsulation in IPv6 is a good thing, given how widely IPsec is deployed in places like cellular networks for IMS/voice, rather than it being “bolted on” like it is in IPv4. ICMPv6 is another example, concentrating just about all of the important control messages down into a single protocol.
People love to spread FUD around IPv6 for some reason but it doesn’t take much looking beyond the surface to realise that in many ways it’s a pretty big improvement architecturally.
Woah, people are actually running IPSec in the wild? Whenever I tried to do anything with it, it felt prohibitively overcomplicated. But maybe in the ISP context it has unique benefits that make all that worth it?
IPsec is indeed widely used. It’s mandatory for 3GPP VoWiFi, it’s very widely used in 3GPP VoLTE, it’s widely used under the hood for corporate VPNs, cross-cloud VPN gateways, etc.
I agree that IPSec is a complicated mess that’s easy to get wrong, but unfortunately it’s still widely deployed. We try to advocate for WireGuard everywhere we can, but so many commercial firewalls out in the wild don’t support it yet, and only support IPSec.
IPSec is a different problem that’s mostly orthogonal to the topic of this discussion. It should be sufficient to say that the reason for IPSec’s lack of success is because all the vendors in the design committee said that they wanted interoperability but they all stood to profit greatly if their implementation of IPSec wasn’t interoperable. In my experience, when you want to connect two places with a VPN and you control the equipment at both ends, IPSec is really simple.
Couldn’t we have just created IPSec6 with the same ideas as IPSec instead of using extension headers?
Remember, IPv6 was the second system syndrome follow-up to IPv4, so its strategy for protocol design is “IPv4 but bigger and better and doing all the things that other protocol stacks say the Internet can’t do”.
Among those things was autoconfiguration. IPv4 and IPv6 do it differently because DHCP and SLAAC were designed basically in parallel. DHCP was later ported forwards to IPv6 because SLAAC was designed without enough practical experience of what autoconfiguration needs, so it doesn’t cover all the requirements.
Another thing was “security” which in the 1990s meant sprinkling cryptography everywhere. IPsec was originally intended to be an attractive feature that would encourage people to upgrade to IPv6, but it was backported to IPv4 and so that incentive went away. (Then SSL came along and solved the most important channel security problems without getting tangled in the IP stack, leaving IPsec irrelevant except for VPNs.)
Another thing was mobility, and along with it things like easy renumbering. But that turned out to be impossible with the internet’s address allocation and routing system. It took a very long time to realise that IPv6 would not be able to do better than IPv4, and that IPv6 would have to work in basically the same way as IPv4. And that delayed serious work on transition mechanisms. (Observe how late happy eyeballs was.)
We should try to find an alternative solution to CGNAT as well.
It’s called NAT64.
I think the IETF’s institutional dislike for NAT also caused a lot of delay to work on transition mechanisms because they didn’t want to put a middlebox into it.
the article mentions moving away from well-known ports, I think that’s a great idea, moving to SRV records for every protocol might be easier than migrating to IPv6
The main problem with that was web browsers refused to support SRV. But the web recently gained its own souped-up SRV records, called SVCB and HTTPS, which include transport protocol and encryption capabilities as well as alternative port numbers.
IPsec was originally intended to be an attractive feature that would encourage people to upgrade to IPv6,
This is the most damning aspect of this affair by far. The idea that anybody would not just voluntarily, but enthusiastically deploy IPSec is simply hilarious. It has a billion knobs to twiddle that both sides of the tunnel need to agree on, and the de-facto “protocol” to negotiate those is to go back and forth via email for a couple weeks. SSL won not just because the web ate everything or because people are lazy, but because it actually works.
To be fair, I don’t think anyone knew how to design a secure protocol back then, so it kind of makes sense for IPsec to be designed as a bag of parts, “some assembly required”, more like lab equipment than consumer product. And key management is the hardest part of a protocol, and the successful 1990s ones either half-arse it (SSL is one-sided) or YOLO it (ssh tofu).
My browser is currently displaying lobste.rs via 2604:a880:400:d0::2082:1001. I have had the ability to connect to IPv6 sites for over 10 years now.
Hosting SSH & VPN on IPv6 means my bot traffic is almost zero.
My phone calls go over an IPv6 & IPsec tunnel back to one of the two crap phone providers in Australia.
Why is this such a big deal still?
Try deploying IPv6 on Azure. By default, you get a publicly routable IPv4 address with a VM. If you want an IPv6 address, you can pay for an IPv6 address (not a /64). You can run IPv6 on your VLAN but your public v6 address is not the one that the VM is assigned, it’s NAT’d. This makes configuring IPv6 as complicated as IPv4, and more difficult because none of the IPv6 tooling is designed for this kind of environment.
Azure isn’t the only cloud service available. AWS supports IPv6 for no extra charge. My VPS host is Vultr, and they also support IPv6. Vultr assigns a /64 automatically. If I remember correctly, AWS does the same.
Plenty of other providers get it right, but plenty of providers give me a single v6 IP too.
Because there are services and users without IPv6 support. Of course, when your home provider enables IPv6 it (most of the time) just works. But if you’re providing services, you need to also enable and configure IPv4 or you’ll lose the IPv4-only users. Also, when you have a somewhat bigger network you need to think about IPv4 anyway, because there are some bigger services which only provide IPv4 DNS entries (e.g. GitHub, Twitter). There is also some software whose (default) config only uses IPv4 addresses (or, even worse, IPv4 literals). So even when you don’t need to access external IPv4 services, you might need to set up IPv4 internally to get this software working.
For the end user this is mostly irrelevant. But when you need to administer your network or service, this all comes with the cost of managing IPv4 and IPv6 in parallel. So some people conclude they should wait on IPv6 until it’s “finished”. What they miss is that when everyone thinks this way, nobody adds IPv6 to their service/network and we stay stuck with IPv4. The problem there is that IPv4 is not free. Stateful NAT is a growing cost factor in networking. Also, setting up a slightly bigger network with IPv4 is quite terrible, because deciding how many addresses you need in each network segment is hard. Either you just put everything in one big “vlan” or you end up renumbering every half a year.
Don’t get me wrong: IPv6 is not perfect, but with the bigger address space it’s a lot simpler to manage. And of course, changing a given setup from IPv4 to dual stack or to IPv6-only is not easy.
I’m saddened that people still consider IPv6 adoption “inevitable” instead of “required”. I believe that lazy local ISPs, plus some effort from businesses making money on leasing IPv4 subnets, are the main problems blocking what I believe is the reasonable course.
Back in 6bone times some systems had issues which made users “fix it by turning IPv6 off”, but that’s likely not a large enough crowd to explain the current delay. I haven’t heard the “something wasn’t working but I fixed it by turning IPv6 off” argument for years now. And both my home FTTH and, separately, my mobile ISP natively give me an IPv6 address via DHCP/Router Solicitation. I haven’t had to enable it. They do this for all clients.
It’s supported in a way transparent to users; obtaining v6 has worked natively on all major desktop and mobile OSs for years and there are no issues. ISPs shield users from external connections coming to their addresses (an external host connecting to a “LAN” machine that now has a public v6 address), but users have an option to get incoming SYNs routed to their local systems. Unaware users never even see these options, so they don’t enable this by mistake. Seriously: v6 has worked transparently for users for years now, on both mobile and cable/FTTH home links.
Most modern OSs attempt v6 and then v4 connections if the target host resolves to both A and AAAA records. And there’s a reason for this preference being the default. I won’t oversell it, but in certain scenarios IPv6 is more efficient (see below for a related anecdata story ;)
Then, there are also cell and FTTH providers where I live which requested and got their IPv6 /32 subnets (the typical 2^96 addresses assigned to ISPs) from RIPE a decade ago and have no intention of using them. You call their support and sometimes even the 2nd support line responds in a way that makes it clear they have no idea what the ticket is about.
It’s weird. It’s unnatural. I consider all the arguments for delaying IPv6 adoption… stupid. Either “nuisance”, “not supported by our or our clients’ devices”, or similar bullshit. I tell them that serving DHCP (v4) along with RS (v6) won’t break anything on the client side if their devices don’t understand those packets, but I’m not getting through. When we’re talking about a LAN, or something like v6 inside a Kubernetes cluster, I see either laziness, blank stares (think: Musk disconnected from reality and flying on ketamine+ecstasy in the Oval Office) or these “nuisance” comments.
And we’re talking about switching from the 44-year-old, exhausted IPv4 address space to the 27-year-old IPv6… And I’ve literally been told that IPv6 is something untested and thus unsupported, by people YOUNGER THAN IPV6 itself!
On the ISP side it’s either “done and working” (both my FTTH and mobile ISPs, for many years) or, more usually, complete ignorance and a lack of understanding rooted in the lack of profit from the man-hours spent. On a more public scale, I really (tinfoil hat on) believe that the IPv4 leasing business goes so well that it may be sabotaging v6 adoption in some ways. Again, no proof, just a tinfoil-hat idea: this business only works until v6 adoption.
To finish my rant on a positive note, some anecdata: during XP/NT4 times I worked for a company running a factory plant producing heavy machinery, running huge steel-cutting laser machines, etc. Some time after I joined this company I was given a “The best way to get something done is to give it to someone who doesn’t know it can’t be done”-type task.
Specifically, a few factory workers were forced to transfer a bunch of data before they were able to clock off, and day after day they were waiting for this process to finish after working hours. It was a ton of really small files pulled from something that only shared them via the SMB protocol, so the quest was to somehow boost the throughput from something like <10% of the available network connection. The host was a closed system embedded in one of these huge factory machines. No access, as it could break warranty agreements. We were unsure what it was and what it was actually running. The MAC address OUI was not in the official OUI database, and 2 different versions of nmap gave only low-probability guesses that it was either Windows NT4/2000/XP or some ancient Linux. Again: a closed solution exposing just Samba.
Windows Server admins tried some tuning of the client XP systems with no success and finally passed this task on to the fresh meat: me. Long story short, NetBIOS was resolving the source host (the one only exposing SMB) to both v4 and v6 addresses. Before trying anything drastic I brought in and connected my private FreeBSD laptop to verify that the host also served SMB on its v6 address. Yes, lame, but there was simply no host capable of v6 communication in this network of 400+ computers, all running 32-bit XP.
The drastic measure was: reinstall 32-bit Windows XP to 64-bit Windows XP which was capable of using v6 after you typed “cmd /k ipv6 install”. Boom!
I was never able to measure exactly why it was faster, but I believe it was v6’s larger frames. WinXP 64 had the same poor throughput via v4, but it was more efficient via v6.
tl;dr: I improved something by replacing IPv4 with IPv6 in an industrial facility of publicly traded corporation ~19 years ago already. I never saw IPv4 having any kind of worse parameters than IPv6.
Oh, and even though I had just done the job the company was paying me for, it’s Poland, so these guys from the factory bought me a 1-liter bottle of vodka a few days later. They were able to go home 20-25 minutes earlier from then on.
IPv6 is very hard to configure.
ndppd is just too buggy.
IPv6 is a technology. As a technology, it has a learning curve. I’ve been running IPv6 for nearly 20 years and I only began to understand some of the more obvious stuff in the past few years. A great example of this is address assignments. On my IPv6 day-0 I treated IPv6 just like IPv4 and assigned a static address to each of my machines manually. It took getting familiar with SLAAC to understand that a lot of things I had learned were “wrong” not in an engineering sense; they were “wrong” because they played to a weakness of IPv4, the lack of address space. Today I understand that even if I limit myself to a /64, I’m never going to run out of address space. Today, IP-based web hosting (one address per site) seems like an anachronism. That’s not because it was a bad idea from an engineering perspective; it’s because IPv4 addresses are rare, and as such valuable. IPv6 addresses are plentiful.
Your account or IP address has been blocked.
The block was placed by administrator Balledur. The stated reason: proxy.
Block start: 14:05, 25 January 2025. Block end: 14:05, 25 January 2027. Block target: 2001:0:0:0:0:0:0:0/19
Undeleted after talking to sugaryboa: the missing context is that this is the error message returned by this link; it’s blocking a lot of IPv6.