"Containers" are just a pattern, the real technology behind it is kernel namespaces (plus ordinary process boundaries). (For that matter container security is still half-baked in many ways. The kernel namespacing has been added after-the-fact and it can't really be trusted to make an effective "sandbox" on its own.)
Everyone keeps saying that. But containers have been battle-tested for years at this point, and I think all the low-hanging vulnerabilities have been fixed. You can reasonably expect that your container IS secure if your system is up to date: every container escape is a kernel bug, and those get fixed quickly. And 0-days apply to everything — virtualization won't save you there, since there are 0-days in virtualization layers too. Yes, the attack surface is significantly larger with containers, so the chance of a 0-day is larger, but for most companies, which don't actually run attacker-controlled code in their containers, this danger is overstated. Container security is good enough for most workloads.
Well, not so fast. Ubuntu's apport (its coredump collector and crash reporter) had a series of bugs over multiple years that could be used to escape from containers. The root cause is that the kernel's coredump handler configuration is still a global setting, not a per-namespace one: a crash inside a container gets handled by a helper running on the host.
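A minimal Linux-only sketch of that root cause (the function name is mine, and I'm assuming the standard sysctl location): the coredump handler is configured by a single system-wide value, `/proc/sys/kernel/core_pattern`, which is not namespaced. A containerized process that crashes therefore triggers the *host's* handler.

```python
def read_core_pattern():
    """Read the system-wide coredump handler configuration.
    This file is global: the same value is visible from every
    container, because core_pattern is not namespaced."""
    with open("/proc/sys/kernel/core_pattern") as f:
        return f.read().strip()

if __name__ == "__main__":
    pattern = read_core_pattern()
    # A leading "|" means coredumps are piped to a userspace helper
    # (apport on Ubuntu, systemd-coredump on many other distros) that
    # runs on the host, outside whatever container just crashed.
    if pattern.startswith("|"):
        print("coredumps piped to host helper:", pattern)
    else:
        print("coredumps written to file pattern:", pattern)
```

That host-side helper parsing crash data produced by a potentially malicious container is exactly the boundary the apport bugs crossed.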
I agree with you — it's actually more secure than people think it is. A rule of thumb: treat it the same as running multiple applications under different users. Not a real sandbox, yeah, but these days there aren't many low-hanging fruits left, either.
Yes, and that "everyone" include kernel developers. That's my point. Now, does this mean containers are totally useless, of course not. Especially if you can prevent running attacker code via other means.