
Nice algorithm. The diagrams in the paper are wonderfully clear and useful.

My instinct on reading it is that allowing readers to continue during a resize is a solid improvement, but that any approach based on buckets of linked pointers (open chaining) should be considered "low performance" by definition.

Do you think this approach is enough to make them competitive? Is there a parallel approach that would work for faster hashes that don't chase pointers? Would there be a way to "compact" the hash on resize so that at least the pointers are tightly arranged?



> Nice algorithm. The diagrams in the paper are wonderfully clear and useful.

Thanks! LaTeX and TikZ are awesome; I would not have wanted to draw those diagrams in any kind of WYSIWYG diagramming tool.

> My instinct on reading it is that allowing readers to continue during a resize is a solid improvement, but that any approach based on buckets using linked pointers (open chaining) should be considered "low performance" by definition.

See my comment at https://news.ycombinator.com/item?id=8365436 ; for many of the hashes in the Linux kernel, the hash nodes are too large to store directly in the buckets, and other parts of the system need to maintain references to them, so a resize cannot copy them and free the originals. And if you're storing a pointer to a node rather than a node, then you'll have one indirection (with associated memory/cacheline fetch) in each bucket anyway, whether you use a closed (non-chaining) table or an open (chaining) table.



