Hacker News

A fantastic result, and a nice story behind it. Per the abstract of Tang's paper, the current best classical results run in time linear in m and n.


Prof. Aaronson said the best upper bound they have so far is O(log(n)^33)... to give that more meaning, log(n)^33 exceeds n until about n = 10^15, so "not practical" is a bit of an understatement!

I still get why this is an interesting result, though. And if Scott thinks a lower exponent is achievable, he is probably right.
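A quick numeric sanity check of the claim above, assuming base-2 logarithms (the comment doesn't specify the base): at n = 10^15, the polylog term still dwarfs n.

```python
import math

# Compare (log2 n)^33 against n itself at n = 10^15.
# Base 2 is an assumption; the comment does not specify a base.
n = 10**15
polylog = math.log2(n) ** 33  # log2(10^15) is about 49.8, so this is roughly 10^56

print(polylog > n)  # prints True: the polylog term still dominates at this scale
```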


10^15 is not that big. We could construct and compute on sparse matrices that large today.


I'm glad to know where the crossover point is, and yeah, hearing that it's around 10^15 tells me it's just a few orders of magnitude of improvement away from being a practical machine learning tool.




