Hacker News | cowlby's comments

Vibe coding is also why this was released hours after the leak instead of days or weeks.

This is one of the things GitHub Spec Kit solves for me. The specify.plan step launches code-exploration agents and builds up-to-date context on the data model, migrations, etc. It really reduces the need to document things when the agent discovers the codebase's needs on its own.

Give Claude a sqlite/supabase MCP, the GitHub CLI, the Linear CLI, Chrome, or launch.json and it can solve this pretty autonomously.


Who else struggles with both sides of this? My engineer side values curiosity, brainpower, and artisanship. My capitalist side says it's always the product, not the process. My formula is something like this: product = money, process = happiness, money != happiness, no money = unhappiness.

I think the optimal solution is min-maxing this thing: find the AI process that minimizes unhappiness and maximizes money.


> My capitalist side says it's always the product not the process.

Your capitalist side needs to read some Deming. "Your system is perfectly tuned to produce the results that you are getting." Obviously, then, if you want better results, you need to improve your system.

Also, "the product" is ambiguous. Is it the overall product: how it sits in the market, how the user interacts with it to achieve their goals, its manufacturability, etc.? That is the Steve Jobs sort of focus on the product, and it is really more of a system (how the product relates to its user, environment, etc.). But AI doesn't produce that product, nor does any individual engineer. If "the product" means "the result of a task", you don't want to optimize that. That's how you get Microsoft and enterprise products: nothing works well together, and using them is like cutting a steak with a spoon, but they have a truckload of features.


I definitely struggle with both sides, or maybe multiple sides. On the one hand most of my daily output at my job is coming from AI these days. On the other hand I find the explosion of AI-generated "writing" (and other forms of art) to be aesthetically abhorrent. And I've just recently started a ... weird sort of metaphysics / spirituality / but also AI related writing project, so the difference between creation with and without AI is in really sharp focus for me right now.

I wrote an article about this, but honestly I don't think I really captured the totality of my feelings. I really haven't decided where I land. I'm definitely using the tools for economic purposes, and I even have some "pure-fun" side project stuff where I'm getting value from it.

Here's the article if that sounds interesting, would love to discuss the whole topic with anyone who's finding themselves of two (or more) minds on these sorts of issues: https://hermeticwoodsman.substack.com/p/why-i-let-ai-write-m...


I did some napkin math the other day, and my kids, at half my size, probably hit the ground with half the stress that I do. They could certainly take more risks falling with a 50% reduction in harm. The extra rotational energy of a 70" fall vs a 40" one will do it.
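For what it's worth, the napkin math can be run explicitly. A minimal sketch, treating a fall as a rigid rod tipping over from standing and assuming isometric scaling (mass ~ height³); the 80 kg adult mass and the energy-per-bone-area "stress" proxy are my assumptions, not from the comment:

```python
import math

g = 9.81  # m/s^2

def fall_numbers(height_in, mass_kg):
    """Rigid-rod tip-over model: energy released, tip impact speed,
    and an energy-per-cross-sectional-area 'stress' proxy (area ~ height^2)."""
    L = height_in * 0.0254            # height in metres
    energy = mass_kg * g * L / 2      # centre of mass drops L/2
    tip_speed = math.sqrt(3 * g * L)  # falling rod: v_tip = sqrt(3 g L)
    stress_proxy = energy / L**2      # impact energy per bone cross-section
    return energy, tip_speed, stress_proxy

adult = fall_numbers(70, 80)                   # assumed 80 kg adult
kid = fall_numbers(40, 80 * (40 / 70) ** 3)    # isometric mass scaling

print(f"energy ratio: {kid[0] / adult[0]:.2f}")        # 0.11
print(f"tip-speed ratio: {kid[1] / adult[1]:.2f}")     # 0.76
print(f"stress-proxy ratio: {kid[2] / adult[2]:.2f}")  # 0.33
```

Under these assumptions the kid's head hits at about 76% of the adult's speed and the per-area impact energy is about a third, so "half the stress" is in the right ballpark depending on which quantity you call stress.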


I'm starting to think that for software it's "produce 2,000 loaves per month." I'm realizing now that software was supply-constrained: organizations had to be very strategic about which apps/UIs to build. Now anything and everything can be an app, so we can build more targeted frontends for all kinds of business units that would've been overlooked before.


Ever since Opus 4.6 came out, I've "vibecoded" a bunch of personal apps/CLIs that would've taken me months before. Some examples:

- CLI voice changer with cloned Overwatch voices on ElevenLabs.

- Brother P-Touch label maker using HTML/CSS. Their app is absolutely atrocious.

- Converted a FileMaker CRM into a Next.js/Supabase app.

- Dozens of drag-and-drop or one-click/CLI tools. Think flattening a folder or a zip file.

- Dozens of Chrome extensions and TamperMonkey user scripts. Think blocking ads with very targeted XPaths.

But when I think about sharing them, it feels like there's no point, since anyone can make them themselves.
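The ad-blocking bullet above boils down to evaluating a narrow XPath against the page and deleting whatever matches. A rough sketch of the idea using Python's stdlib ElementTree (the markup and class names are invented; a real userscript would call document.evaluate() on the live DOM, but the targeting idea is the same):

```python
import xml.etree.ElementTree as ET

# Invented page fragment standing in for a live DOM.
page = """
<body>
  <div class="content">article text</div>
  <div class="ad-banner"><span>sponsored</span></div>
  <div class="content">more text</div>
</body>
"""

root = ET.fromstring(page)

# Target only the ad container, not anything that merely mentions ads.
# ElementTree supports a small XPath subset; this matches child divs
# with class="ad-banner" under each element.
for parent in list(root.iter()):
    for ad in parent.findall("div[@class='ad-banner']"):
        parent.remove(ad)

print(ET.tostring(root, encoding="unicode"))
```

The point of the "very targeted" part is that a narrow predicate like `@class='ad-banner'` removes the ad container without touching legitimate content that happens to mention ads.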


Isn’t the sustainability drive a function of how much humans have written about life and death, and of science fiction exploring these themes?


Humans, like all animals, have instinctual and biological drives to survive, but it's interesting to think about how much of our drive to survive is culturally transmitted too.


Yes, my hot take is that the real risk isn't skill atrophy... it's failing to develop the new skill of using AI. It's all abstraction layers anyway, and people always lament the next abstraction up.

0/1s → assembly → C → high-level languages → frameworks → AI → product

The engineer keeps moving up the abstraction chain with less and less understanding of the layers below. The better solution would be creating better verification, testing, and determinism at the AI layer. Surely we'll see the equivalent of high-level languages and frameworks for AI soon.


I find “maintainable code” the hardest bias to let go of. After 15+ years, coding and design patterns are hard to let go of.

But the aha moment for me was that what’s maintainable by AI vs by me by hand are in different realms. So maintainable has to evolve from good human design patterns to good AI patterns.

Specs are worth it IMO. Not because being able to write the spec means I could’ve coded it anyway, but because I gain all the insight and capabilities of AI while minimizing the gotchas and edge-case failures.


> But the aha moment for me was that what’s maintainable by AI vs by me by hand are in different realms. So maintainable has to evolve from good human design patterns to good AI patterns.

How do you square that with the idea that all the code still has to be reviewed by humans, yourself and your coworkers included?


I picture it like semiconductors: the 5 nm process is so absurdly complex that operators can't just peek into the system easily. I imagine I'm just so used to hand-crafting code that I can't imagine not being able to peek in.

So maybe it's that we won't be reviewing by hand anymore, i.e. it's LLMs all the way down. I've been trying to embrace that style of development lately, as unnatural as it feels. We're obviously not 100% there yet, but Claude Opus is a significant step in that direction, and the models keep getting better.


Then who is responsible when (not if) that code does horrible things? We have humans to blame right now. I just don't see it happening, personally, because liability and responsibility are too important.


For some software, sure, but not most.

And you don’t blame humans anyway, lol. Everywhere I’ve worked has had “blameless” postmortems. You don’t remove human review unless you have reasonable alternatives, like high test coverage and other automated checks.


We still have performance reviews and can be fired. There’s a human who is responsible.

“It’s AI all the way down” is either nonsense on its face, or the industry is dead already.


> But the aha moment for me was that what’s maintainable by AI vs by me by hand are in different realms

I don't find that LLMs are any more likely than humans to remember to update all of the places where they wrote redundant functions; generally they're far less likely, actually. So forgive me for treating this claim with a massive grain of salt.


I recently discovered GitHub Spec Kit, which separates planning and execution into stages: specify, plan, tasks, implement. I'm finding it aligns with the OP in the level of “focus” and “attention” it gets out of Claude Code.

Spec Kit is worth trying, as it automates what's being described here, and with Opus 4.6 it's been a kind of BC/AD moment for me.

