Hacker News

But at some point doesn't one need to think at the granularity of clusters? In other words, there are natural limits to how big the hosting environment can grow, at which point performance and efficiency with respect to network topology and other host-system factors (in the broadest sense, e.g. at the level of data centers and the components from which they're built) must be considered.

For reasons of convenience, in the case of some (many?) applications, perhaps those considerations can be ignored and the "container fabric" can simply be thought of as perfectly homogeneous, is that the idea? And if one grows beyond the point where that simplifying assumption holds, then it's time to switch to an architecture where clustering is explicitly addressed?



Broadly yes.

You could apply a similar argument to DigitalOcean or other providers. In principle you could outgrow any provider, but in practice they would be able to scale before you hit that problem.

Is that what you meant or did I misunderstand?


In that sense, should we care whether the EC2 instances in the ECS cluster are located on the same physical server, or in the same rack?


It would depend on "how things are wired together". Now, it's great if they are "wired together" in such a way that one need not give it any thought. But whether we're talking EC2 instances or containers, at some point one has to think about it, e.g. two instances talking to each other, one on the US East Coast and the other in the Midwest or on the West Coast, vs. both being in the same datacenter. Those are the extremes, for sure, but maybe even intra-DC clustering has to be considered explicitly for certain applications? Maybe not?
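For what it's worth, ECS does let you express this kind of topology preference declaratively instead of reasoning about racks yourself: task placement strategies and constraints. A minimal sketch (cluster and task definition names are placeholders; the final `run_task` call is shown as a comment since it requires AWS credentials):

```python
# Sketch: asking ECS to spread task copies across Availability Zones
# (separate failure domains) and to keep each copy on a distinct
# container instance, rather than caring which rack anything is in.

placement_strategy = [
    # "spread" over the built-in availability-zone attribute
    {"type": "spread", "field": "attribute:ecs.availability-zone"},
]
placement_constraints = [
    # no two copies of the task on the same EC2 instance
    {"type": "distinctInstance"},
]

run_task_params = {
    "cluster": "my-cluster",            # placeholder
    "taskDefinition": "my-service:1",   # placeholder
    "count": 2,
    "placementStrategy": placement_strategy,
    "placementConstraints": placement_constraints,
}

# In real use, roughly:
#   import boto3
#   boto3.client("ecs").run_task(**run_task_params)
print(run_task_params["placementStrategy"][0]["field"])
```

So within one provider's cluster abstraction you can state "keep these apart" or "keep these together" without knowing the physical layout; the cross-region case in the comment above is a different problem that the scheduler won't solve for you.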


The point is that you should stop thinking in terms of VM clusters and start thinking in terms of container (application) clusters, aka microservices.



