
> If you have an O(N) algorithm that walks the data set in the order that it's laid out in memory (assuming contiguous data), it will beat your O(NlogN)

Not sure what you're trying to say with this particular statement. Of course it will: N log N grows way faster than N.
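
To make the growth-rate point concrete, here's a hypothetical micro-benchmark (all names and sizes are illustrative, not from the thread): one contiguous O(N) pass versus an O(N log N) std::sort over the same array.

    #include <algorithm>
    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <numeric>
    #include <random>
    #include <vector>

    // One O(N) scan vs one O(N log N) sort over the same data.
    // As N grows, the gap widens roughly like log N, on top of
    // whatever constant factors each side has.
    int main() {
        const size_t n = 1 << 24;
        std::vector<uint32_t> v(n);
        std::mt19937 rng(42);
        std::generate(v.begin(), v.end(), rng);

        auto t0 = std::chrono::steady_clock::now();
        uint64_t sum = std::accumulate(v.begin(), v.end(), uint64_t{0});
        auto t1 = std::chrono::steady_clock::now();
        std::sort(v.begin(), v.end());
        auto t2 = std::chrono::steady_clock::now();

        using us = std::chrono::microseconds;
        std::printf("scan: %lld us (sum=%llu)\n",
            (long long)std::chrono::duration_cast<us>(t1 - t0).count(),
            (unsigned long long)sum);
        std::printf("sort: %lld us\n",
            (long long)std::chrono::duration_cast<us>(t2 - t1).count());
        return 0;
    }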

And radix sort is of course drastically faster than comparison sorts if the word size is smaller than log N, regardless of the particulars of the implementation.
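
For concreteness, a minimal LSD radix sort sketch, assuming unsigned 32-bit keys and byte-sized digits (one common formulation, not the only one). Four passes of O(n) each, i.e. O(n * w/8) total, versus O(n log n) comparisons:

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // LSD radix sort for 32-bit keys: four stable counting-sort passes
    // over 8-bit digits.
    void radix_sort(std::vector<uint32_t>& a) {
        std::vector<uint32_t> buf(a.size());
        for (int shift = 0; shift < 32; shift += 8) {
            std::array<size_t, 257> count{};  // count[d+1] = occurrences of digit d
            for (uint32_t x : a) ++count[((x >> shift) & 0xFF) + 1];
            for (int d = 0; d < 256; ++d) count[d + 1] += count[d];  // prefix sums -> bucket starts
            for (uint32_t x : a) buf[count[(x >> shift) & 0xFF]++] = x;  // stable scatter
            a.swap(buf);
        }
        // Four swaps total, so the sorted result ends up back in `a`.
    }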

People don't and shouldn't use big O to describe absolute performance, but it's a great place to start, and you can't reach the level of implementation-detail performance tuning if you can't understand or formalize the basics.

But you do have a point overall.

In practice, performance may differ from what the asymptotics predict. Big O assumes a particular 'virtual machine' with a particular word size and a fixed cost per instruction.



Yeah, I meant to omit the NlogN there; I blame it on lack of coffee.

My meta point is that the constant factors big O throws out usually end up dominating real-world performance.
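
A small illustration of that, assuming a typical cache hierarchy: two traversals with identical big O but very different constants.

    #include <list>
    #include <numeric>
    #include <vector>

    // Both functions are O(n), but the contiguous std::vector scan is
    // typically several times faster: the hardware prefetcher streams
    // the array through cache, while the std::list traversal chases
    // pointers and can miss cache on every node.
    long long sum_vector(const std::vector<int>& v) {
        return std::accumulate(v.begin(), v.end(), 0LL);
    }

    long long sum_list(const std::list<int>& l) {
        return std::accumulate(l.begin(), l.end(), 0LL);
    }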


That's why some Data Structures courses use tilde notation instead (same as big O, but the leading constant is preserved).
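
Roughly, the notation works like this (the insertion-sort count is the usual textbook figure, quoted from memory):

    % Tilde notation keeps the leading constant that big O discards:
    % f(n) ~ g(n) means f(n)/g(n) -> 1 as n -> infinity.
    C(n) \sim \frac{n^2}{4} \quad\text{(insertion sort, average compares)}
    \qquad\text{vs.}\qquad C(n) = O(n^2)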


In a big-O sense, radix sort is one of the more persistent misconceptions. At scale, radix sort really is O(n log n), because you should consider the possibility that all your keys are distinct: n distinct keys need at least log n bits each, so the "constant" word size actually grows with n.
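
The same argument in one line, writing w for the key width in bits and b for the digit width:

    % Radix sort does w/b passes of O(n) work each.  With n distinct
    % keys, w >= log2(n), so for constant digit width b:
    T(n) = \Theta\!\left(n \cdot \frac{w}{b}\right)
         = \Theta\!\left(n \cdot \frac{\log n}{b}\right)
         = \Theta(n \log n)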

This doesn't even touch the reality that memory access is really O(n^(1/3)), addition of n-bit numbers is O(n), and a lot of other things we assume are O(1) are not.
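
The n^(1/3) claim comes from a physical argument, sketched here under the assumption that storage density and signal speed are both bounded:

    % n bits of storage occupy a 3D volume proportional to n, so the
    % distance a signal must travel (and hence the latency, bounded by
    % the speed of light) scales with the cube root of the data size:
    d(n) \propto V^{1/3} \propto n^{1/3}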



