
Some of these samples are shocking. How do these models answer chart-based questions when they can't even count the intersections between two lines?


Same way they answer any question: piece together a statistically probable sequence of words to follow the prompt. All they know about an image is a handful of words a classifier might choose to describe it. If those words have nothing to do with the question being asked, they can't nudge the model in the general direction of a correct answer, so it's a crapshoot, even more so than usual.
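By "statistically probable sequence" I just mean plain autoregressive next-token sampling. A rough sketch in PyTorch, with `model` standing in for any causal LM that returns logits over a vocabulary:

    import torch
    import torch.nn.functional as F

    # Illustrative only: `model` is a stand-in for any causal LM that
    # maps a (batch, seq_len) tensor of token ids to logits of shape
    # (batch, seq_len, vocab_size).
    def generate(model, tokens, max_new=50, temperature=1.0):
        for _ in range(max_new):
            logits = model(tokens)[:, -1, :]            # next-token logits
            probs = F.softmax(logits / temperature, dim=-1)
            next_tok = torch.multinomial(probs, 1)      # sample one token
            tokens = torch.cat([tokens, next_tok], dim=-1)
        return tokens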


That’s not at all how multi-modal LLMs work - their visual input is not words generated by a classifier. Instead, the image is divided into patches and tokenised by a visual encoder (essentially, it is compressed), then fed directly into the model as a sequence.
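A minimal sketch of that patch-tokenisation step in PyTorch (sizes are illustrative, loosely ViT-style; real encoders also add positional embeddings and run many transformer layers on top):

    import torch
    import torch.nn as nn

    PATCH = 16        # patch side length in pixels (illustrative)
    EMBED_DIM = 768   # embedding dimension (illustrative)

    # A Conv2d with kernel == stride splits the image into
    # non-overlapping patches and linearly projects each patch
    # to an embedding vector in one step.
    patchify = nn.Conv2d(3, EMBED_DIM, kernel_size=PATCH, stride=PATCH)

    image = torch.randn(1, 3, 224, 224)          # one RGB image
    patches = patchify(image)                    # (1, 768, 14, 14)
    tokens = patches.flatten(2).transpose(1, 2)  # (1, 196, 768)

    # These 196 continuous vectors, not words from a classifier, are
    # what gets interleaved with the text tokens in the model's input.
    print(tokens.shape)  # torch.Size([1, 196, 768])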


The dataset most likely contains chart descriptions that describe the underlying raw data, but not pixel-level visual relationships like where two lines cross.



