Sorry, I just gotta rant a bit... this is a really bad hack that I wouldn't trust on a production system. Instead of doubling down and working on better IPv6 support with providers and in software configuration, and defining best practices for working with IPv6, they just kinda gloss over it with a 'not supported yet' and develop a whole system that will very likely break things in random ways.
> More importantly, we can route to these addresses much more simply, with a single route to the “fan” network on each host, instead of the maze of twisty network tunnels you might have seen with other overlays.
Maybe I haven't seen the other overlays (they mention flannel), but how does this not become a series of twisty network tunnels? Except now you have to manually add addresses (static IPv4 addresses!) of the hosts in the route table? I see this as a huge step backwards... now you have to maintain address space routes amongst a bunch of container hosts?
Also, they mention having up to 1000s of containers on laptops, but then their solution scales only to ~250 containers per host before you need to set up another route + multi-homed IP? Or wipe out entire /8s?
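For concreteness, here's a minimal sketch (in Python, not Ubuntu's actual implementation) of the address arithmetic the FanNetworking wiki describes: a host at 172.16.x.y on a /16 underlay owns the 10.x.y.0/24 slice of the 10.0.0.0/8 overlay, which is where both the "single route per host" claim and the ~250-containers-per-host ceiling come from. The prefixes are the wiki's example values, not anything you have to use.

    import ipaddress

    # Sketch of the fan mapping as described on the FanNetworking wiki: the low
    # 16 bits of a host's underlay address become the middle 16 bits of the /24
    # overlay slice that host owns. Prefixes here are example values only.
    UNDERLAY = ipaddress.ip_network("172.16.0.0/16")   # physical host network
    OVERLAY = ipaddress.ip_network("10.0.0.0/8")       # fan (container) network

    def slice_for_host(host_ip: str) -> ipaddress.IPv4Network:
        """The /24 of overlay space a given host owns (~250 usable addresses)."""
        offset = int(ipaddress.ip_address(host_ip)) - int(UNDERLAY.network_address)
        return ipaddress.ip_network((int(OVERLAY.network_address) + (offset << 8), 24))

    def host_for_container(container_ip: str) -> ipaddress.IPv4Address:
        """Recover the underlay host that owns an overlay address. Because this
        is pure arithmetic, a single route to 10.0.0.0/8 on each host suffices;
        no per-peer routes or shared lookup database are needed."""
        offset = (int(ipaddress.ip_address(container_ip)) - int(OVERLAY.network_address)) >> 8
        return ipaddress.ip_address(int(UNDERLAY.network_address) + offset)

    print(slice_for_host("172.16.3.4"))      # 10.3.4.0/24
    print(host_for_container("10.3.4.17"))   # 172.16.3.4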
> If you decide you don’t need to communicate with one of these network blocks, you can use it instead of the 10.0.0.0/8 block used in this document. For instance, you might be willing to give up access to Ford Motor Company (19.0.0.0/8) or Halliburton (34.0.0.0/8). The Future Use range (240.0.0.0/8 through 255.0.0.0/8) is a particularly good set of IP addresses you might use, because most routers won't route it; however, some OSes, such as Windows, won't use it. (from https://wiki.ubuntu.com/FanNetworking)
Why are they reusing IP address space marked 'not to be used'? Surely there will be some router, firewall, or switch that will drop those packets arbitrarily, resulting in very-hard-to-debug errors.
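As a quick illustration of why "borrowing" those blocks is risky, Python's ipaddress module (used here purely as a convenient registry lookup, nothing fan-specific) already flags the Future Use range as IETF-reserved, while 19.0.0.0/8 and 34.0.0.0/8 are ordinary global unicast space that a remote peer may legitimately be using:

    import ipaddress

    # Sanity-check the /8 blocks the wiki suggests repurposing. The Future Use
    # range shows up as IETF-reserved (many stacks and routers special-case it),
    # while 19/8 and 34/8 are plain global unicast that real hosts can live in.
    for block in ("10.0.0.0/8", "19.0.0.0/8", "34.0.0.0/8", "240.0.0.0/8"):
        net = ipaddress.ip_network(block)
        print(f"{block:>13}  reserved={net.is_reserved}  global={net.is_global}")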
--
This problem is already solved with IPv6. Please, if you have this problem, look into using IPv6. This article has plenty of ways to solve this problem using IPv6:
https://docs.docker.com/articles/networking/
If your provider doesn't support IPv6, please try to use a tunnel provider to get your very own IPv6 address space, like https://tunnelbroker.net/
Spend the time to learn IPv6, you won't regret it 5-10 years down the road...
> Sorry, I just gotta rant a bit... this is a really bad hack that I wouldn't trust on a production system. Instead of doubling down and working on better IPv6 support with providers and in software configuration, and defining best practices for working with IPv6, they just kinda gloss over it with a 'not supported yet'
Yeah, how pragmatic of them. Instead of pie-in-the-sky "let's all get together and pressure people to improve tons of infrastructure we don't own" action (which can always happen in parallel anyway), they solved their real problem NOW.
> and develop a whole system that will very likely break things in random ways.
Citation needed, or else it's just FUD. The linked pages explain how it works in enough detail to judge that.
The overlay networks are not necessarily hacks - they're just souped-up, more distributed, auto-configured VPNs. Also, especially in flannel's case, you hand it IPv4 address space to use for the whole network, so there is a bit more coordination of which space gets used.
Even so, there are lots of ways to get IPv6 now. I would think that anywhere you could use this fan solution to change firewall settings and route tables on the host, you could also set up an IPv6 tunnel or address space, even with some workarounds for not having a whole routed subnet, like using Proxy NDP.
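To make the Proxy NDP workaround concrete, here's a rough sketch of the host-side steps (wrapped in Python only to stay consistent with the other snippets; eth0, lxcbr0, and the 2001:db8:: documentation addresses are placeholders, and the exact layout depends on your provider):

    import subprocess

    def sh(*cmd):
        """Run one host configuration command, echoing it first."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Give a container a public IPv6 address out of the host's on-link /64
    # without a routed subnet, by having the host answer neighbor discovery
    # for it (Proxy NDP). Names and addresses below are placeholders.
    sh("sysctl", "-w", "net.ipv6.conf.eth0.proxy_ndp=1")
    sh("ip", "-6", "neigh", "add", "proxy", "2001:db8:1::100", "dev", "eth0")
    sh("ip", "-6", "route", "add", "2001:db8:1::100/128", "dev", "lxcbr0")
    sh("sysctl", "-w", "net.ipv6.conf.all.forwarding=1")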
It seems like a much more future-proof solution than working with something like this. Just my 2c...
You do not appear to be familiar with the problem domain that this addresses, and I think the fan device addresses the problem very very well compared to its competitors! It's nothing like a VPN, it's just IP encapsulation without any encryption or authentication. And it's far far far less of a hack than the distributed databases currently used for network overlays, like Calico or MidoNet or all those other guys, IMHO. For example take this sentence from the article:
> Also, IPv6 is nowhere to be seen on the clouds, so addresses are more scarce than they need to be in the first place.
There are a lot of people who are using AWS and not in control of the entire network. If they were in control of the entire network, they could just assign a ton of internal IP space to each host. IPv6 is great, sure, but if it's not on the table, it's not on the table.
We will be testing the fan mechanism very soon, and it will likely be used as part of any LXC/Docker deploy, if we ever get to deploying them in production.
No, I get what this addresses... It is really just a wrapper around building a network bridge and having the host do routing / packet forwarding and encapsulation. Nothing a little iptables can't handle :)
I wasn't aware that AWS still doesn't have support for IPv6, that's just amazingly bad in 2015. I'll shift my blame onto them then for spawning all these crazy workarounds.
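For what it's worth, here is roughly what the "bridge plus host routing" plumbing mentioned above looks like by hand (again wrapped in Python for consistency; fanbr0, the prefixes, and the peer address are made-up examples, and the real fan replaces the per-peer routes with a single route plus in-kernel encapsulation):

    import subprocess

    def sh(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Hand-rolled equivalent of the host-side plumbing: a bridge for local
    # containers, IP forwarding, and a route per peer host's slice. Names,
    # prefixes, and the peer address are illustrative only.
    sh("ip", "link", "add", "name", "fanbr0", "type", "bridge")
    sh("ip", "addr", "add", "10.3.4.1/24", "dev", "fanbr0")
    sh("ip", "link", "set", "fanbr0", "up")
    sh("sysctl", "-w", "net.ipv4.ip_forward=1")
    sh("ip", "route", "add", "10.3.5.0/24", "via", "172.16.3.5")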
IPv6 is hard. It's hard to optimize, it's hard to harden, and it's hard to protect against.
One small example: How do you implement an IPv6 firewall which keeps all of China and Russia out of your network? (My apologies to folks living in China and Russia; I've just seen a lot of viable reasons to do this in the past.)
Another small example: How do you enable "tcp_tw_recycle" or "tcp_tw_reuse" for IPv6 in Ubuntu?
Maybe we should start thinking of security in terms of 'how can we build things that are actually secure by design' instead of 'how can we use stupid IP-level hacks to block things because our stuff is swiss cheese'?
None of this really applies to VPC (which is a private virtual network containing only your own hosts, where access is restricted lower down than at the IP layer). You actually can have a public IPv6 address on AWS, it just has to go through ELB.
To be clear, I was not saying that you can give an ELB in a VPC an IPv6 address. I was saying you can give a non-VPC ELB an IPv6 address. Basically I was pointing out that, however imperfect, Amazon has chosen to prioritize public access to IPv6 over private use of it.
This is only for containers communicating with each other within the FAN. Any traffic bound to external networks would have to be NAT'd, which is fine.
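A sketch of what that NAT step would look like on each host (the iptables rule is the standard masquerade pattern, not anything fan-specific; the 10.3.4.0/24 slice is the same made-up example as above):

    import subprocess

    # Masquerade container traffic leaving the overlay; traffic that stays
    # inside 10.0.0.0/8 is left alone so fan addresses are preserved end to end.
    subprocess.run(["iptables", "-t", "nat", "-A", "POSTROUTING",
                    "-s", "10.3.4.0/24", "!", "-d", "10.0.0.0/8",
                    "-j", "MASQUERADE"], check=True)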