Hacker News

Why do people keep giving whole IP addresses to every little container? It's a terrible management paradigm compared to service discovery and using host ports for every address.


Because the alternative methods for handling intra-container communication are even bigger messes.

The Docker 1.6 solution involved a double NAT and relied on iptables, resulting in some fairly serious bottlenecks and pathological edge cases. It also required a third-party solution for discovering the IPs of services.

Opening up containers to access the host network interfaces breaks the encapsulation promises of containers, and is thus not an option for everyone. Conceptually, it also creates holes in the idempotent service model, since services have to be aware of port conflicts.
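The port-conflict problem on a shared host network is easy to demonstrate: two processes (stand-ins for two containers here; the services and port numbers are hypothetical) cannot both claim the same port.

```python
import socket

# Sketch: on the host network, two "containers" cannot both bind the
# same port. Bind an ephemeral port, then try to bind it again.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))          # kernel picks a free port
a.listen()
port = a.getsockname()[1]

b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
conflict = False
try:
    b.bind(("127.0.0.1", port))   # second "container" loses the race
except OSError:
    conflict = True               # EADDRINUSE
finally:
    a.close()
    b.close()

print("port conflict:", conflict)  # → port conflict: True
```

With an IP per container, both services could bind their natural port on their own address and no coordination is needed.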

The one-IP-per-container model (VXLAN overlays, Flannel, Kubernetes, Docker 1.7, etc.) is one of the more effective methods of countering the problem, at the cost of guzzling IP address space, and requiring a gateway to escape the virtual network tunnel.
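The address-space cost is easy to put numbers on. A common overlay layout (the 10.244.0.0/16 range is illustrative, similar in spirit to Flannel's defaults) carves a cluster-wide /16 into a /24 per host:

```python
import ipaddress

# Hypothetical overlay: one /16 for the whole cluster,
# one /24 slice handed to each host for its containers.
cluster = ipaddress.ip_network("10.244.0.0/16")
per_host = list(cluster.subnets(new_prefix=24))

print(len(per_host))                  # → 256 host-sized slices
print(per_host[0].num_addresses - 2)  # → 254 usable container IPs per host
```

So a single /16 caps you at 256 hosts, and every container burns an address whether it needs one or not, which is what "guzzling IP address space" looks like in practice.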


> Opening up containers to access the host network interfaces breaks the encapsulation promises of containers

Who made that promise? It was never a feature of Linux containers to virtualize the NIC.


You would be surprised how many people don't understand that containers are a form of namespacing + isolation, not a form of virtualisation.


Because a lot of software is written without intrinsic support for service discovery, which means all kinds of hacks (e.g. proxying; dynamically rewriting config files and reloading) to work around it if the ports may change. With an IP per container, something like low-TTL DNS tied to a service registry (e.g. Consul, or SkyDNS with something to update etcd) is viable and often a much easier alternative.

I agree with you from a purity point of view that proper service discovery is better, but in terms of practicality, IP per container is often a lot simpler to implement.


This is all for your internal network though, which you should be able to control, right?


Having control of your internal network does not fix missing service discovery support in all the applications you're running that like to assume that port numbers are static and unchanging.


Because service discovery itself is ridiculous. I have to deploy three or more Consul/etcd nodes to run service discovery for my handful of EC2 instances and containers?


Of course not. You can maintain configuration files if you like, but anything beyond a relatively small network is going to be a pain to maintain. It's also not going to react very quickly to node failures or topology changes.



