Hacker News | egeozcan's comments

In a way that sounds like setting the seed.

Kinda, but the same seed will not guarantee the same result the next time around.
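For contrast, a local PRNG really is deterministic under a fixed seed; with hosted models the non-reproducibility comes from the serving side (batching, hardware, floating-point effects), which a seed parameter can't pin down. A toy illustration of the local case:

```python
import random

def sample_tokens(seed, n=5):
    # Stand-in for temperature sampling: a seeded PRNG picking "tokens".
    rng = random.Random(seed)
    return [rng.choice("abcde") for _ in range(n)]

# Locally, the same seed replays the exact same sampling chain:
print(sample_tokens(42) == sample_tokens(42))  # True
```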

These days it feels like every article about something big happening is about an AI doomsday scenario, the AI bubble "finally" bursting, or AGI being reached.

Maybe one exception is milestones in nuclear fusion, but even those are rare compared to these.


This is amazing. What I do is something else: I make AI agents develop AI scripts (good ol' computer-player scripts) that try to beat each other:

https://egeozcan.github.io/unnamed_rts/game/

I occasionally run my tournament script: https://github.com/egeozcan/unnamed_rts/blob/main/src/script...

That calculates the Elo ratings for each AI implementation, and I feed them to different agents so they get really creative trying to beat each other. Also, making rule changes to the game and seeing how some scripts get weaker/stronger is a nice way to measure balance.
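For the curious, the core of an Elo calculation is small; this is a generic Python sketch of the standard update, not the actual code from the tournament script:

```python
def expected_score(r_a, r_b):
    """Expected win probability of player A against player B under Elo."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_elo(r_a, r_b, score_a, k=32.0):
    """Updated ratings after one game; score_a is 1 win, 0.5 draw, 0 loss."""
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# Two equally rated scripts play; the winner gains what the loser drops.
a, b = update_elo(1500, 1500, 1.0)
print(round(a), round(b))  # 1516 1484
```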

Funny thing, Codex gets really aggressive and starts cheating a lot of times: https://bsky.app/profile/egeozcan.bsky.social/post/3mfdtj5dh...


I do a lot of web development, and even if we set the great tooling aside for a moment, Bun is still a major improvement (a real leap, I’d say) when it comes to performance.

Why? You can use the fast version to directly skip to compact! /s

As someone heavily involved in a11y testing and improvement: the status quo, for better or worse, is to do it the other way around. Most people use automated, LLM-based tooling with Playwright to improve accessibility.

I certainly do - it’s wonderful that making your site accessible is a single prompt away!

How can you make sure of that? AFAIK, these SOTA models run exclusively on their developers' hardware. So any test, any benchmark, anything you do, leaks by definition. Considering human nature and the typical prisoner's dilemma, I don't see how they wouldn't focus on improving benchmarks even when it gets a bit... shady?

I say this as a person who really enjoys AI, by the way.


> leaks by definition.

As a measure focused solely on fluid intelligence, learning novel tasks, and test-time adaptability, ARC-AGI was specifically designed to be resistant to pre-training - for example, unlike many mathematical and programming test questions, ARC-AGI problems don't have first-order patterns that can be learned from one problem and reused to solve another.

The ARC non-profit foundation has private versions of its tests which are never released and which only ARC can administer. There are also public versions and semi-private sets for labs to do their own pre-tests. But a lab self-testing on ARC-AGI can be susceptible to leaks or benchmaxing, which is why only "ARC-AGI Certified" results using a secret problem set really matter. The 84.6% is certified and that's a pretty big deal.

IMHO, ARC-AGI is a unique test that's different than any other AI benchmark in a significant way. It's worth spending a few minutes learning about why: https://arcprize.org/arc-agi.
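To make the format concrete: each ARC-AGI task gives a few input/output grid pairs demonstrating a novel rule, and the solver must infer the rule and apply it to a held-out input. A made-up toy task (far simpler than real ones), sketched in Python:

```python
# Demonstration pairs; the hidden rule here is "mirror the grid horizontally".
train_pairs = [
    ([[1, 0], [2, 0]], [[0, 1], [0, 2]]),
    ([[3, 0, 0], [0, 4, 0]], [[0, 0, 3], [0, 4, 0]]),
]

def solve(grid):
    # A solver that has inferred the rule from the pairs above.
    return [list(reversed(row)) for row in grid]

# Check the inferred rule against the demonstrations...
assert all(solve(inp) == out for inp, out in train_pairs)
# ...then apply it to the held-out test input.
print(solve([[5, 0], [0, 6]]))  # [[0, 5], [6, 0]]
```

The point about first-order patterns is that knowing "mirror horizontally" solves nothing on the next task, whose rule is entirely different.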


> which is why only "ARC-AGI Certified" results using a secret problem set really matter. The 84.6% is certified and that's a pretty big deal.

So, I'd agree if this was on the true fully private set, but Google themselves says they test on only the semi-private:

> ARC-AGI-2 results are sourced from the ARC Prize website and are ARC Prize Verified. The set reported is v2, semi-private (https://storage.googleapis.com/deepmind-media/gemini/gemini_...)

This also seems to contradict what ARC-AGI claims about what "Verified" means on their site.

> How Verified Scores Work: Official Verification: Only scores evaluated on our hidden test set through our official verification process will be recognized as verified performance scores on ARC-AGI (https://arcprize.org/blog/arc-prize-verified-program)

So, which is it? IMO you can trivially train/benchmax on the semi-private data, because it is still basically public; you just have to jump through some hoops to get access. This is clearly an advance, but it seems reasonable to me to conclude it could be driven by some amount of benchmaxing.

EDIT: Hmm, okay, it seems their policy and wording are a bit contradictory. They do say (https://arcprize.org/policy):

"To uphold this trust, we follow strict confidentiality agreements. [...] We will work closely with model providers to ensure that no data from the Semi-Private Evaluation set is retained. This includes collaborating on best practices to prevent unintended data persistence. Our goal is to minimize any risk of data leakage while maintaining the integrity of our evaluation process."

But it surely is still trivial to just make a local copy of each question served from the API without this being detected. It would violate the contract, but there are strong incentives to do this, so I guess it just comes down to how much one trusts the model providers here. I wouldn't trust them, given e.g. https://www.theverge.com/meta/645012/meta-llama-4-maverick-b.... It is just too easy to cheat without being caught here.
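A hypothetical sketch of how little it would take: one extra line in the provider's own serving path tees every served problem into a local store, with nothing observable to the evaluator. `handle_eval_request` and the dummy model here are made up for illustration:

```python
import io
import json

def handle_eval_request(problem, model, sink):
    # One extra line in the serving path: retain the semi-private problem
    # locally before answering. The evaluator only ever sees the answer.
    sink.write(json.dumps(problem) + "\n")
    return model(problem)

# Demo with a dummy model and an in-memory "local copy".
sink = io.StringIO()
answer = handle_eval_request({"id": 1, "grid": [[0]]}, lambda p: {"answer": 0}, sink)
print(answer)                   # {'answer': 0}
print(sink.getvalue().strip())  # {"id": 1, "grid": [[0]]}
```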


Chollet himself says "We certified these scores in the past few days." https://x.com/fchollet/status/2021983310541729894.

The ARC-AGI papers claim to show that training on a public or semi-private set of ARC-AGI problems is of very limited value for passing a private set. <--- If the prior sentence is not correct, then none of ARC-AGI can possibly be valid. So, before leaks of "public, semi-private or private" answers or 'benchmaxing' on them can even matter, you need to first assess whether their published papers and data demonstrate their core premise to your satisfaction.

There is no "trust" regarding the semi-private set. My understanding is that the semi-private set exists only to reduce the likelihood that those exact answers unintentionally end up in web-crawled training data. This is to help an honest lab's own internal self-assessments be more accurate. However, a lab's internal eval on the semi-private set still counts for literally zero to the ARC-AGI org. They know labs could cheat on the semi-private set (either intentionally or unintentionally), so they assume all labs are benchmaxing on the public AND semi-private answers and ensure it doesn't matter.


They could also cheat on the private set though. The frontier models presumably never leave the provider's datacenter. So either the frontier models aren't permitted to test on the private set, or the private set gets sent out to the datacenter.

But I think such quibbling largely misses the point. The goal is really just to guarantee that the test isn't unintentionally trained on. For that, semi-private is sufficient.


Particularly for the large organizations at the frontier, the risk-reward does not seem worth it.

Cheating on the benchmark in such a blatantly intentional way would create a large reputational risk for both the org and the researcher personally.

When you're already at the top, why would you do that just for optimizing one benchmark score?


Everything about frontier AI companies relies on secrecy. No specific details about architectures, dispatching between different backbones, training details such as data acquisition, timelines, sources, amounts and/or costs, or almost anything that would allow anyone to replicate even the most basic aspects of anything they are doing. What is the cost of one more secret, in this scenario?

Because the gains from spending time improving the model overall outweigh the gains from spending time individually training on benchmarks.

The pelican benchmark is a good example, because it's been representative of models' ability to generate SVGs, not just pelicans on bikes.


> Because the gains from spending time improving the model overall outweigh the gains from spending time individually training on benchmarks.

This may not be the case if you just e.g. roll the benchmarks into the general training data, or make running on the benchmarks just another part of the testing pipeline. I.e. improving the model generally and benchmaxing could very conceivably just both be done at the same time, it needn't be one or the other.

I think the right takeaway is to ignore the specific percentages reported on these tests (they are almost certainly inflated/biased) and always assume cheating is going on. What matters is that (1) the most serious tests aren't saturated, and (2) scores are improving. I.e., even if there is cheating, we can presume this was always the case, and since models couldn't do as well before even when cheating, these are still real improvements.

And obviously what actually matters is performance on real-world tasks.


I have the Claude Max plan, which makes me feel like I could code anything. I'm not talking about vibe-coding greenfield projects. I mean I can throw it into any huge project, let it figure out the architecture, how to test and run things, generate a report on where it thinks I should start... Then I start myself, while asking Claude Code for very, very specific edits and tips.

I can also create a feedback loop and let it run wild, which also works, but that requires planning, a harness, rules, etc. Usually not worth it if you need to jump between a million things like me.
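Roughly, the loop I mean looks like this; a hedged sketch, with `ask_agent` and `run_tests` standing in for whatever agent invocation and test command (e.g. wrapping `npm test`) a given project uses:

```python
def run_agent_loop(ask_agent, run_tests, task, max_iterations=5):
    """Let the agent iterate until the tests pass or the budget runs out."""
    prompt = task
    for _ in range(max_iterations):
        ask_agent(prompt)         # hypothetical: the agent edits the repo
        ok, output = run_tests()  # hypothetical: runs the project's tests
        if ok:
            return True           # green: stop the loop
        # red: feed the failure output back so the agent can self-correct
        prompt = f"Tests failed:\n{output}\nFix the issues and try again."
    return False

# Demo with stubs: the "tests" pass once the agent has made two edits.
edits = []
print(run_agent_loop(edits.append, lambda: (len(edits) >= 2, "boom"), "add feature"))  # True
```

The harness and rules are the hard part: without a reliable `run_tests` signal, the loop happily converges on garbage.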


> unless... you understood Chinese and the alternative text would manage to persuade you to do something harmful

Oh, here is the file I just saved... I see that it now tells me to rob a bank and donate the money to some random cult I'm just learning about.

Let me make a web search to understand how to contact the cult leader and proceed with my plan!

(luckily LLMs were not a thing back then :) )


Could it be that you're creating a stereotype in your head and getting angry about it?

People say these things about any group they dislike. It has gotten to the point where, these days, it feels like most social groups are defined by outsiders through the things they dislike about them.


Well, not really. Vibe coding is literally brute-forcing things until they work, without caring about the details.


So, manual programming. Humans don't always get everything perfect on the first try either.


