
There are overheads in server workloads that scale with the number of machines (network traffic, serializing/deserializing requests). There are also fixed costs per server that don't scale with core count, or at least scale sublinearly (storage, physical data center space, motherboard, ease of maintenance). So running 10 machines with 100 cores each can be cheaper and more performant than running 1,000 machines with 1 core each, even if $/core is higher. And of course individual cores can be beefier: wider SIMD units, application-specific extensions like bfloat16 support for ML workloads, etc.
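A toy cost model makes the point concrete. The function and all the numbers below are made up for illustration; the only claim is structural: once each server carries a fixed overhead, fewer, bigger machines can win even at a worse $/core.

```python
# Hypothetical cost model: each server has a fixed per-server overhead
# (chassis, motherboard, rack space, maintenance) plus a per-core cost.
# All prices are illustrative, not real cloud/hardware pricing.

def total_cost(servers, cores_per_server, fixed_per_server, cost_per_core):
    return servers * (fixed_per_server + cores_per_server * cost_per_core)

# 10 machines x 100 cores at a HIGHER $/core ($30/core):
big = total_cost(10, 100, fixed_per_server=500, cost_per_core=30)

# 1,000 machines x 1 core at a LOWER $/core ($20/core):
small = total_cost(1000, 1, fixed_per_server=500, cost_per_core=20)

print(big)    # 35000  -> 10 * (500 + 100*30)
print(small)  # 520000 -> 1000 * (500 + 20)
```

The fixed overhead is paid 10 times in one fleet and 1,000 times in the other, which is exactly why cores/$ alone is a misleading metric.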

Of course Moore's law is slowing down, but cores/$ is an extremely silly metric to use.


