
Pointy haired bosses be looking for results.

Engineers be loving the craft.

It's a dance, but unfortunately AI is watching us dance while it builds a factory.


4 day work week would be so rad.

Am I going to finally get a robot to fold my clothes?

Based on how they conducted the test:

- used an IQ / analytical test

- correlated that to whether people cared about flowery language

Generally speaking, analytical people care more about numbers than words, so isn't this more of an 'expected result'?


If anyone saw that LinkedIn post about someone at Block resigning out of guilt after being offered a raise and retention package following the layoffs, I'd say that's a signal that tech is heading down.

Most people would be thankful to have a secure, well-paying job in the post-AI blow-off; increasingly it's going to get harder to differentiate yourself from anyone else using AI. That we have people still in the thick of AI who don't understand that is a strong signal that the AI boom is still going to come take some jobs.

If you're in a software related role and AI isn't making you more productive, it's on YOU as a dev to figure things out quickly.

AI is coming for your job so you can either be an AI manager, or you can get managed out for AI.

caveat: This is my take as someone who used to do a lot of hand coding, and now regularly has a small team of AI agents doing anything that would normally have required mostly brute coding strength but not much thought: faceted plots, refactoring libraries, improving pipeline efficiency, adding parallelization where possible, building presentations, adding test coverage.


mmhmm. That's a lot of me me me. Are you reviewing others' work who produce the same output as you?

Haha. This title: Nobody Gets Promoted for Simplicity

This was the model at my last job. The "director" of software had strong opinions about the most random topics and talked about them like they would be revolutionary. His team was so far from the product teams they would just build random crap that was unhelpful but technically demo'ed well. Never put into practice. Promoted for 4 years, then fired.


I am excited about game dev with AI, but the games you posted are kind of a joke.

My kids made similar games with Claude code in js.

Was hoping to see some serious indie games, but these looked pretty terri-bad.

Is anyone building the next SimCity, Civilization, etc.?


I largely agree with you, but that's honestly the part I like. Just like your kids, there are folks here getting value out of something that wasn't accessible to them before. My most cherished artwork isn't the tasteful stuff, it's the crayon drawing my niece made for me on my wedding day. The low-stakes nature of this AI content feels similar; people are doing it for the sheer pleasure and aren't afraid of meeting anyone's bar. A lot of it is noise right now, but I suspect we'll see it develop into something really interesting if the pattern continues.

Shameless plug - and nothing so grandiose as SimCity but I built a pretty substantial 2D/3D blindfold trainer chess game. It's by no means "vibe coded" though, and there's a fair bit of manual work around the 3d modeling that I had to roll myself.

Even with that I'd still say 70% of the code was written using LLMs with the opencode agent.

https://shahkur.specr.net


I spent quite a bit of time making games/social WebXR stuff in Three.js. There is unfortunately a ceiling to what commercial success you can have there. Sure, there are all the .io games, which see a bunch of kids playing from their school computers, but as much as I'd like to see it, web games aren't going to be as impressive as native PC/console games. It's not really the technology; mostly the market just isn't there.

The models are pretty decent at building simple Three.js/Phaser games, but if you want to work with Unity/Unreal/Godot you're going to need an MCP or other tool to get them to work with the engine's tooling/context. I just so happen to work on one for Unity: https://bezi.com

I will say, while I think the current models are very impressive at generating code for most game mechanics, they are still terrible at spatial awareness. Gemini Pro 3.1 is showing some promise here; the latest Opus/Sonnet models are...ok. But there's still a lot left to be desired. You also still really need to know how to make games, both creatively and technically, to pull off prompting a game into existence.

So are you going to vibecode your way to the next SimCity / Civ without knowing some game dev? Probably not right now and I think that's for the best. People want games that are creative and unique. But a passionate hobbyist who has never made a game, knows some programming, and has a vision for a great game now has an amazing tool set to build their dream game and that's pretty cool!


There is a Civilization-style AI game funded by Y Combinator.

Can someone help me understand when these neural engines kick in in open source software?

I typically use python ML libraries like lightgbm, sklearn, xgboost etc.

I also use numpy for large correlation matrices, covariance etc.

Are these operations accelerated? Is there a simple way to benchmark?
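For the benchmarking question, a minimal sketch (array sizes and the timing approach are arbitrary choices, not from any official benchmark suite): time a large correlation matrix and compare the same script across machines. Whatever speedup shows up comes from the BLAS library NumPy was built against, not necessarily the neural engine.

```python
import time
import numpy as np

# Quick-and-dirty timing of a large correlation matrix.
# np.corrcoef dispatches to whatever BLAS NumPy was linked
# against (e.g. Accelerate on Apple silicon wheels, OpenBLAS
# elsewhere) -- CPU libraries, not the ANE.
x = np.random.default_rng(0).standard_normal((20_000, 500))

t0 = time.perf_counter()
c = np.corrcoef(x, rowvar=False)  # columns are variables
dt = time.perf_counter() - t0

print(f"corrcoef on {x.shape}: {dt:.3f}s, result {c.shape}")
```

Running the identical script on Intel vs. Apple silicon at least isolates the hardware/BLAS difference, even if it says nothing about the ANE specifically.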

I see a lot of benchmarks on what look like C functions, but in my job today I rely on higher-level libraries. I don't know if they perform any better on Apple HW, and unless they have a flag like use_ane I'm inclined to think they don't.

Of course ChatGPT suggested I benchmark an Intel Mac vs. newer Apple silicon. Thanks, ChatGPT; there's a reason people still hate AI.


> when these neural engines kick in in open source software?

It mostly doesn't, because NPUs are bespoke and vendor-specific (which incentivizes neglect by software devs working on open-source numerics and ML/AI infrastructure), and the Apple ANE is no exception. Part of this effort is most likely about fixing that for the specific case of the Apple ANE.


Part of which effort? The blog article about reverse engineering it so it can be used?

I just think: great, it seems like I'm paying for a hardware accelerator that makes Siri go faster. And I've used Siri on my laptop exactly 0 times in the last infinite years.


It also makes a lot of really useful features work: on-device OCR, captions, voice isolation, temporal antialiasing in MetalFX, an enormous host of things in the Apple pro apps, etc.

Yeah, I don't use any of those features. So it sounds like it's for folks who are creatives running Lightroom or Apple's movie app, or some kind of Apple sound program?

I'm a dev, not a creative, unfortunately. I don't use other people's software, I generally write my own (or used to before Claude took over my world).


You bought a truck. Obviously there will be some part of it you don’t use.

I think it's more like a truck that advertises it can climb hills in crawl mode, but it turns out it's only specific hills.

So fundamentally, it still comes down to CPUs + RAM.


Yes. Numpy will accelerate if it detects hardware that it supports.

I can't find any docs saying numpy will do this.

Is https://opensource.apple.com/projects/mlx/ needed to do this?


Normally I’d help a bro out but I started and googled and got hundreds of results and realized why does everyone need to be spoon fed. Please go do some work yourself mkay?

This is getting silly, guys. We're all on the same team. Need to have a c.t.j. meeting.

Am I the only one who remembers the prime directive of Google? Much easier to understand than 'organizing the world's information' etc. It was simpler:

Don't be evil.

