> Today, Pinterest's memcached fleet spans over 5000 EC2 instances across a variety of instance types optimized along compute, memory, and storage dimensions. Collectively, the fleet serves up to ~180 million requests per second and ~220 GB/s of network throughput over a ~460 TB active in-memory and on-disk dataset, partitioned among ~70 distinct clusters.
Wow.
Assuming $0.09 per GB egress on EC2, that's $51,321,600/mo. Of course, they must be on some enterprise plan, but how large a discount must they get to make it affordable?
By comparison, 180m requests per second on an "egress-free" serverless compute like Workers would cost $77,760,000/mo (assuming 6m per $1) or $233,280,000/mo (2m per $1).
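The arithmetic behind those figures can be checked directly. A minimal sketch, using the numbers from the quote and the pricing assumptions stated above (public list prices, not any enterprise-discounted rate):

```python
# Back-of-envelope monthly cost estimates from the figures in the thread.
# Rates are the assumptions stated above, not actual negotiated pricing.

SECONDS_PER_MONTH = 86_400 * 30  # 30-day month

# EC2 egress: 220 GB/s at $0.09/GB
egress_cost = 220 * SECONDS_PER_MONTH * 0.09
print(f"EC2 egress:       ${egress_cost:,.0f}/mo")    # ~$51,321,600/mo

# Workers-style per-request pricing: 180M requests/s
requests_per_month = 180e6 * SECONDS_PER_MONTH
workers_low = requests_per_month / 6e6   # assuming 6M requests per $1
workers_high = requests_per_month / 2e6  # assuming 2M requests per $1
print(f"Workers (6M/$1):  ${workers_low:,.0f}/mo")    # ~$77,760,000/mo
print(f"Workers (2M/$1):  ${workers_high:,.0f}/mo")   # ~$233,280,000/mo
```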
Hi, I'm the original author of this article, though I have left Pinterest since this article was published.
Many customers with large AWS footprints, Pinterest included, have enterprise plans with highly custom pricing. It is often the case that general public pricing isn't directly comparable to enterprise pricing on an individual component level.
On the topic of network transfer, many of our highest-bandwidth memcached clusters are replicated with an egress routing policy that applies availability-zone affinity [0]. For the most efficient clusters, this means that 99.9+% of network bandwidth remains in the client-colocated AZ (within the same region and VPC), which is free [1].
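The AZ-affinity policy described above can be sketched client-side: prefer a replica in the caller's own AZ (same-AZ traffic is free) and fall back to remote AZs only when no local replica exists. The replica map and endpoint names below are illustrative, not Pinterest's actual topology or routing implementation:

```python
import random

# Hypothetical replica map: memcached endpoints tagged by availability zone.
REPLICAS = {
    "us-east-1a": ["mc-a-001:11211", "mc-a-002:11211"],
    "us-east-1b": ["mc-b-001:11211"],
    "us-east-1c": ["mc-c-001:11211"],
}

def pick_endpoint(client_az: str) -> str:
    """Prefer a replica colocated in the client's AZ; transfer within
    the same AZ (in the same region/VPC) is free. Only fall back to a
    cross-AZ replica if the local pool is empty."""
    local = REPLICAS.get(client_az, [])
    if local:
        return random.choice(local)
    remote = [ep for az, eps in REPLICAS.items()
              if az != client_az for ep in eps]
    return random.choice(remote)

print(pick_endpoint("us-east-1a"))  # a same-AZ mc-a-* endpoint
```

With a replicated dataset, this is essentially a read-routing preference; writes would still need to fan out across AZs to keep replicas in sync.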
Cloud is wild.