Hacker News | efskap's comments

Cool stuff, I think I'll never be able to unsee the extra top padding all over the web now haha

I don't even know if the golden ratio itself is that magical, but I do see a lot of value in picking one ratio and sticking to it everywhere.


You've got it. It doesn't have to be golden; that ratio was just useful for other, personal-preference reasons.

It was cool to see subreddit simulators evolve alongside progress in text generation, from Markov chains, to GPT-2, to this. But as they made huge leaps in coherence, a wonderful sort of chaos was lost. (nb: the original sub is now written by a generic foundation LLM)


What's an American name? Are you referring to WASP (White, Anglo-Saxon, Protestant) names?


Cool! I like the way this kind of demo breaks the fourth wall of its medium. I'm actually surprised I haven't really seen this kind of thing in the demoscene, where it's always just OpenGL craziness within the confines of a window, which doesn't really "take advantage" of living in a windowing operating system.


Yeah, they confirm that at the bottom of the linked page:

> Furthermore, by leveraging tools like MapAnything to generate metric points, ShapeR can even produce metric 3D shapes from monocular images without retraining.


ELI5 has meant friendly simplified explanations (not responses aimed at literal five-year-olds) since forever, at least on the subreddit where the concept originated.

Now, perhaps referring to differentiability isn't layperson-accessible, but this is HN after all. I found it to be the perfect degree of simplification personally.


I hate to sound like a webdev stereotype, but surely the selector-parsing step of querySelector, which is cached anyway, isn't slow enough to warrant maintaining such a build step.


Some things you build not because they are necessary, but because you can.


Would it really be infeasible to take a sample and do a search over an indexed training set? Maybe a bloom filter can be adapted
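For what it's worth, the bloom-filter idea can be sketched roughly like this (a toy illustration, not from any actual dedup pipeline; the bit-array size, hash count, and n-gram scheme are all arbitrary choices of mine): hash each indexed training n-gram into a shared bit array, then test n-grams from a sample for membership.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash probes into a shared bit array.

    Membership tests have no false negatives and a tunable false
    positive rate (governed by size and k).
    """
    def __init__(self, size=1 << 20, k=5):
        self.size = size          # number of bits in the array
        self.k = k                # number of hash probes per item
        self.bits = bytearray(size // 8)

    def _probes(self, item):
        # Derive k independent probe positions from salted SHA-256.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._probes(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._probes(item))

bf = BloomFilter()
for ngram in ["the quick brown", "quick brown fox"]:
    bf.add(ngram)

print("the quick brown" in bf)  # True: added items are always found
print("fox jumps over" in bf)   # False with overwhelming probability
```

Note that this only tests for *literal* n-grams: a paraphrase would (correctly, from the filter's point of view) come back absent, which is exactly the crux of the objection about LLMs rephrasing their training data.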


It's not the searching that's infeasible. Efficient algorithms for massive-scale full-text search are available.

The infeasibility is searching over the (unknown) set of transformations that the LLM would put that data through. Even if you posit that the weights encode only basic symbolic LUT mappings (they don't), there's no good way to enumerate them anyway. The model might as well be a learned hash function that maintains semantic identity while utterly eradicating literal symbolic equivalence.


I smell the bias-variance tradeoff. By underfitting more, they get closer to the degenerate case of a model that only knows one perfect photo.


A lot of medical devices still run XP as well, unfortunately, because of old proprietary software for expensive equipment that no longer receives updates.

