
I wonder if the bad performance is due to the <1 KB requirement.


Almost certainly. It would be tempting just to pre-render this to an HD video otherwise.


In case you missed the link on the page, there's a 1080p version on YouTube: https://www.youtube.com/watch?v=NnZUUSdpt-k


The point was that it's probably much less CPU-intensive to decode a 1080p stream than to render the same frames live from 1 KB of compressed JavaScript.
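
For intuition, here's a minimal sketch of the kind of per-pixel software loop a 1k canvas demo runs every frame; the effect and the names are placeholders, not the actual demo's code:

    // Assumes a <canvas> element already exists in the page.
    const canvas = document.querySelector('canvas');
    const ctx = canvas.getContext('2d');
    const img = ctx.createImageData(canvas.width, canvas.height);

    function frame(t) {
      const d = img.data;
      for (let y = 0, i = 0; y < canvas.height; y++) {
        for (let x = 0; x < canvas.width; x++, i += 4) {
          // Placeholder effect: one sin() per pixel, ~2M calls per 1080p frame
          const v = 128 + 127 * Math.sin(x * 0.05 + y * 0.05 + t * 0.002);
          d[i] = d[i + 1] = d[i + 2] = v;
          d[i + 3] = 255;
        }
      }
      ctx.putImageData(img, 0, 0);
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);

All of that work happens on the CPU in JavaScript, while a video element is usually decoded by dedicated hardware.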


Ah, I suppose. I like the performance aspect of it though. It feels like something my computer is doing live, as I watch it, instead of a recording of something done in the past.


Hah, just found this 2006 thread via Twitter: "Coding routines for a non-interactive demo is just a special case of procedural video-compression anyway." http://ada.untergrund.net/?p=boardthread&id=200#msg1961


I'm not sure what the point of the comparison is. Yes, some of our encoding technology beats some algorithmic content generation on playback performance. If the goal is encoding size, though, it doesn't come close.

Tricks like this can help show how less than a gig of data is enough to encode an operating system. Or a person.
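
Rough numbers on that size gap, with the runtime and bitrate as assumptions rather than measurements of the linked encode:

    // Back-of-envelope: a ~2 minute demo recorded as 1080p H.264
    // at ~4 Mbit/s, versus its 1 KB of source. Both figures are guesses.
    const seconds = 120;
    const bitsPerSecond = 4e6;
    const videoBytes = bitsPerSecond * seconds / 8; // 60,000,000 bytes
    const demoBytes = 1024;
    console.log(videoBytes / demoBytes);            // ~58,600x larger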


I really hope you are right about encoding a person in less than a gig, but what about the associated genome (3000 megabases), episodic memories and/or neuron connectivity?! Surely this is data that would be essential, yet very hard to algorithmically encode?
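
The genome part, at least, is easy to put a rough number on (raw encoding only, ignoring compression or diffing against a reference genome):

    // 4 nucleotides -> 2 bits per base, uncompressed
    const bases = 3e9;            // ~3000 megabases
    const bytes = bases * 2 / 8;  // 750,000,000
    console.log(bytes / 1e9);     // 0.75 GB for the sequence alone

So the raw sequence alone nearly fills the gig before any memories or connectivity come into it.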


I don't know enough to weigh in on whether a person is just a gig. I'm just going off of this[1] presentation.

[1] http://www.infoq.com/presentations/We-Really-Dont-Know-How-T...



