
> Most interesting is a new caching algorithm

S4LRU (or 4QLRU, as mentioned in the sibling comment) is not new.

The concept is one of several in a bag of tricks the Advection video delivery network used to manage HD video edge caches since the early-to-mid 2000s. Though not discussed in this paper, we also stored the content of each of our S4LRU-like queues in storage levels, each capable of higher concurrency: origin storage cluster, local NAS, host JBOD HDD, host SSD, host ramdisk. We didn't use LRU, but something that works notably better for video and is still not found in papers.
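
For readers unfamiliar with it: S4LRU splits a cache into four stacked LRU queues; new items enter the bottom queue, a hit promotes an item one queue up, and an item evicted from a queue's tail is demoted to the head of the queue below (or out of the cache entirely from the bottom queue). A minimal single-tier sketch in Python (my own illustration, not Advection's or Facebook's code; the class and parameter names are made up):

```python
from collections import OrderedDict

class S4LRU:
    """Sketch of S4LRU: four stacked LRU segments. New items enter
    the lowest segment; a hit promotes an item one segment up;
    overflow from a segment demotes its LRU item to the segment
    below, falling out of the cache from the lowest segment."""

    def __init__(self, per_segment_capacity, levels=4):
        self.cap = per_segment_capacity
        # segments[0] is the lowest (probationary) level
        self.segments = [OrderedDict() for _ in range(levels)]

    def _find(self, key):
        for i, seg in enumerate(self.segments):
            if key in seg:
                return i, seg
        return None, None

    def get(self, key):
        i, seg = self._find(key)
        if seg is None:
            return None
        value = seg.pop(key)
        # A hit promotes one level, capped at the top segment.
        self._insert(min(i + 1, len(self.segments) - 1), key, value)
        return value

    def put(self, key, value):
        i, seg = self._find(key)
        if seg is not None:
            seg.pop(key)
            self._insert(i, key, value)  # update in place at same level
        else:
            self._insert(0, key, value)  # new items start at the bottom

    def _insert(self, level, key, value):
        seg = self.segments[level]
        seg[key] = value
        seg.move_to_end(key, last=False)  # most-recent at the front
        # Cascade demotions: overflow falls to the head of the level below.
        while level >= 0 and len(self.segments[level]) > self.cap:
            old_key, old_val = self.segments[level].popitem()  # LRU end
            level -= 1
            if level >= 0:
                below = self.segments[level]
                below[old_key] = old_val
                below.move_to_end(old_key, last=False)
```

In the multi-tier setup I describe above, each segment would instead live on a different storage device (ramdisk, SSD, HDD, NAS), so a "promotion" or "demotion" is an actual copy between devices rather than a pointer move, which is why churn is so expensive for large video files.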

HD video files are very large compared to the storage available for cache queues, and moving HD content among layers is expensive and slow. LRU causes a lot of churn, and efficiently distributing large content across "collaborative" collections of edge servers with comparatively smaller caches is a tough packing problem.

As a self-funded video delivery network bootstrapping off our own organic revenue, we had to solve these problems in order to wholesale our video delivery, at huge scale, to CDNs that private-labeled and resold it. Our combination of approaches let us beat all published papers on cache efficiency (as of the mid 2000s), using a set of computationally inexpensive heuristics.

It's been cool to watch CloudFlare independently discover and publish ideas from that bag of tricks, while the traditional CDNs continue to operate less efficiently. I was surprised by this paper's discovery that FB is still not using what this paper calls "collaborative" edge caching. One day the big guys may wake up and find themselves no longer competitive.


