
The issue here is mostly about what you are counting.

Almost every instance I've seen of big-O scaling not matching reality counts every memory access as one operation. In reality, memory accesses vary hugely in how long they take, and that variation depends on the program's access pattern.

This is of course due to caches. As a result, an O(n^2) algorithm with very local memory accesses can actually beat an O(n) algorithm that spreads its memory accesses randomly over the entire memory space.

It has come to the point where I care more about the expected cache hits of an algorithm than the big O scaling.
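A minimal sketch of the effect: the two functions below perform the same number of additions, so an operation-counting model treats them as identical, but one walks memory sequentially while the other jumps around in a shuffled order. (This is an illustrative toy, not from the comment itself; in Python the interpreter overhead mutes the gap, so a C version over a large array would show the cache effect far more dramatically.)

```python
import random
import time

def sequential_sum(data):
    # Cache-friendly: walks memory in order, so cache lines and
    # hardware prefetching are used effectively.
    total = 0
    for x in data:
        total += x
    return total

def shuffled_sum(data, order):
    # Same O(n) addition count, but the access pattern jumps
    # around memory, defeating cache locality.
    total = 0
    for i in order:
        total += data[i]
    return total

n = 1_000_000
data = list(range(n))
order = list(range(n))
random.shuffle(order)

t0 = time.perf_counter()
s1 = sequential_sum(data)
t1 = time.perf_counter()
s2 = shuffled_sum(data, order)
t2 = time.perf_counter()

# Both compute the identical result with the identical operation
# count; only the memory access pattern differs.
print(f"sequential: {t1 - t0:.3f}s, shuffled: {t2 - t1:.3f}s")
```

The point is that "n operations" is the same in both cases under the usual cost model, yet the wall-clock times diverge as the working set outgrows each cache level.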


