Hacker News | mikert89's comments

The thing is, this management philosophy worked when AWS knew what they needed to build and just needed to execute with top notch operations.

But now with AI, they are getting disrupted. Most AWS services might become obsolete: why does an AI need the janky higher-level abstractions AWS piles on?

So now they need innovation, but the company isn't set up for it. They are forcing short deadlines for product launches that don't matter.


It's not even AI. Most of the cloud offerings are commodities now.

The marginal technological direction is determined by middle managers whose primary motivation is "what new customer-facing feature can I launch at this year's re:Invent and build a little empire" (of course this is a shrinking offering as tech debt and complexity pile up).

Junior engineers are burned and churned on execution, seniors are project managers, and principals just do high-level reviews and high-level firefighting (note: not actually leading the tech).

Directors and above just spend their time on "what to kill" or "who to fire" as priorities change every six months.


People seriously underestimate how many founders were just in the right place at the right time, or had their startups pumped full of VC cash. Meet some of these unicorn founders in person, behind closed doors, and it will throw you into an existential crisis.

The fun part: it doesn't matter if his professional reputation is that of a clown. He is still a billionaire clown, and you, my friend, may be a genius but dependent on the hand that feeds you for the rest of your existence.

Grifting is very profitable. Some risk of getting caught, of course, but in our high-trust society you can usually get away with it.

I'd love to be a billionaire.

Hard to tell if it's clickbait or if these people can't project into the future.

It's extremely common for people to be unable to project into the future when a bias is in the way. Anytime you see someone blatantly fail to look beyond the tip of their nose, it's almost always their own biases getting in the way (i.e., it's irrationality: they're giving up reason in exchange for not having to challenge their own positions).

The other side of that irrationality coin is 2D extrapolation: a thing happened (or a context is such-and-such), therefore I shall extrapolate it happening again (once or many times) into the future on a smooth line, so as to fit my bias.


Have you ever been in one of these mansions? My hot take is that people seriously underestimate how great being rich is, and how enjoyable some oceanside mansions are.

Ah, that's why the lifetime earnings for a big tech CEO are about $50-100M. It's enough to afford one of these mansions and a few additional properties around the world -- about as much wealth as any individual human being needs or could possibly spend in one lifetime.

Right?


As AI improves, most tasks will become something like this: environments set up where the model learns through trial and error.

Any human endeavor that can be objectively verified in an environment like this can be completely automated.
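The loop being described is, at its core, just this shape: an environment that can score attempts objectively, plus an agent that keeps whatever scores best. A toy sketch (all names and numbers here are invented for illustration; real setups use gradient-based RL on a model, not random search):

```python
import random

# Toy "objectively verifiable" environment: there is a hidden target,
# and any attempt can be scored exactly. That exact scoring is what
# makes trial-and-error automation possible at all.
class GuessEnv:
    def __init__(self, target=42, low=0, high=100):
        self.target, self.low, self.high = target, low, high

    def reward(self, guess):
        return -abs(guess - self.target)  # 0 is a perfect score

def train(env, episodes=5000, seed=0):
    """Crudest possible 'learning': random search, keep the best attempt."""
    rng = random.Random(seed)
    best_guess, best_reward = None, float("-inf")
    for _ in range(episodes):
        guess = rng.randint(env.low, env.high)
        r = env.reward(guess)
        if r > best_reward:
            best_guess, best_reward = guess, r
    return best_guess

print(train(GuessEnv()))
```

The substantive point survives the toy: whenever `reward` can be computed without a human in the loop, the whole loop can run unattended.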


What's really interesting is that the LLMs are getting better and better at setting up the environments and tasks themselves. I had a surreal experience the other day while writing a prompt0n.md file (I try to log all my prompts in a .folder to keep track of what I prompt and the results I get): the autocomplete in Antigravity more or less wrote the entire prompt by itself. Granted, it had all the previous prompts in the same folder (I don't know exactly what it pulls into context by itself) and I was working on the next logical step, but it kept extracting the "good bits" from them and following the pattern quite nicely. I only edited minor things, and refused one line completion in the entire prompt.

It's probably not long until frontier AI companies automate AI research. Then we get recursive self-improvement and, eventually, superintelligence. The singularity is near. Only a few years, perhaps.

Forgot the /s

I'm currently working on a project that is self-improving most of the time. Most of the plans for next steps are written by the agent itself, and executed by the agent itself, and the result feeds into choosing which plans to pursue next. It's not 100% autonomous yet, but self-improvement loops are real, and essential to getting the most out of AI.
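A loop like the one described can be sketched in a few lines. `ask_model` here is a stand-in for whatever LLM call you use (entirely hypothetical, not a real API); the key structural point is that evaluation results feed back into choosing the next plan:

```python
def self_improvement_loop(ask_model, evaluate, goal, rounds=5):
    """Plan -> execute -> evaluate, with results feeding the next plan.

    `ask_model` and `evaluate` are caller-supplied stand-ins; nothing
    here is a real API, just the shape of the loop.
    """
    history = []  # (plan, score) pairs, best first
    for _ in range(rounds):
        plan = ask_model(
            f"Goal: {goal}\nPast results: {history}\nPropose the next plan."
        )
        result = ask_model(f"Execute this plan and report the outcome:\n{plan}")
        score = evaluate(result)  # objective check keeps the loop honest
        history.append((plan, score))
        history.sort(key=lambda pair: pair[1], reverse=True)
    return history[0]  # best (plan, score) found so far
```

The "not 100% autonomous yet" part usually lives in `evaluate`: as long as scoring a result needs a human, the loop stalls at that step.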

AI currently lacks agency, but if it can achieve greater goal-setting and agency, I can't see why self-improvement could not be achieved.

I think the most disappointing thing will be that even if we do achieve ASI, everything will carry on as business as usual for a while before it starts making an economic impact, because of how resistant to change we have made society.


This is something that I have been wondering about. SuperIntelligence or not, it's clear that significant change is going to happen.

There are a lot of people working on the cause of the change. There are a lot of people criticising the nature of the change. There are a lot of people rejecting the change.

How many are there preparing the world for the change?

Some form of change is coming; how are we preparing society to deal with what is happening?

Job losses due to technology have happened over and over again, rendering particular forms of employment redundant (typing pools, clearing horse manure, video rental store clerks, and of course, the loom). Most agree that the world is better off when those jobs no longer need to be done. It's the livelihood of the workers that is the concern.

Instead of fighting the change, we need to accept the inevitability of change and our responsibility to those it will affect.


Short for /superintelligence.

So much this.

People make fun of prompt engineering, but I think "AI ops" will eventually become a real role at most if not all software companies. Harness Engineers and Agent Reliability Engineers will be just as important as something like DevOps is now.


Prompt engineering is already dying. AI has become great at inferring what you mean even without being incredibly explicit and creates its own detailed plan to follow. Harnesses will also be developed by AI.

Counter data point: the quality delta between a raw prompt and a well-structured one (same model) is still significant in my experience. "AI inferring intent" works fine for simple tasks, but for complex multi-constraint outputs -- code generation with specific constraints, structured data extraction, agent instructions -- structure still matters a lot.

What seems to be dying is hand-crafted one-off prompts. What's growing is structured prompt templates that encode intent precisely. I built flompt (https://flompt.dev / https://github.com/Nyrok/flompt) around exactly that thesis — visual prompt structuring, not prompt guessing.
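The "structured template over one-off prompt" idea is easy to illustrate without any particular tool -- the slot names below are invented, and this uses only the standard library:

```python
from string import Template

# A structured prompt template: intent is encoded in named slots that
# get filled the same way every time, rather than re-improvised in
# freehand prose per prompt. All field names here are made up.
EXTRACT_TEMPLATE = Template(
    "Task: extract fields from the text below.\n"
    "Fields: $fields\n"
    "Output format: JSON object with exactly those keys.\n"
    "Constraints: $constraints\n"
    "Text:\n$text\n"
)

prompt = EXTRACT_TEMPLATE.substitute(
    fields="name, date, amount",
    constraints="dates in ISO 8601; amounts as numbers, no currency symbols",
    text="Invoice from Acme, March 3rd 2024, total $1,200.",
)
print(prompt)
```

The structure (fields, format, constraints) is what survives across tasks; only the slot values change, so the intent stays encoded precisely.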


it's called reinforcement learning

don't forget the size of the search space...

This is why big tech is spending $500B on GPUs

that they don't even have the datacenters to plug in, nor the power generation needed to run them if they did.

I personally like the $100 one from Claude, but the GPT-4 Pro can be very good.

It's wild how far off a lot of the mainstream "consensus" takes on this are.

Yeah I have an M1 Max, and I really want to upgrade, but there’s no reason to.

I love how people think the company that basically invented AI is going out of business. Clearly OpenAI is a massive success and will continue to be.

That's what my Uber driver told me last night; not sure how he was able to get his hands on some stock!

"Basically invented AI" by running on principles that Minsky wrote about in the 80s, and improvements Google developed in the early 10s, on bigger and bigger computers. But "Basically invented".

I strongly believe that this is a false and outdated take.

Code being the easy part was predicated on how long it took to build a product, and the impact that had on product management, sales, and marketing.

When the time to build collapses, all product/sales/design/marketing mistakes are forgiven. You can pivot so fast that mistakes in other domains don't matter as much and are reversible.

All of the axioms we previously held true need to be rethought


don't worry that we got the wrong requirements from the customer, chose an impossible deadline, priced it wrong, and there's no market, we can just vibe code our way out of it??


The point is that even in case of total product management failure, the cost of failing is much lower both in time and money.


I don't see it. In my experience with AI/Claude so far, building something with AI and then changing direction halfway through is a great way to generate garbage structure and garbage code. It takes time to dig yourself out of that hole, possibly more than if you had just slowly built by hand from the beginning. Maybe I'm holding it wrong.


If you switch directions with hand crafted code you have a mess too and a large amount of tedious refactoring work to do.

Which should be perfect work for AI.


It might make failure faster, but that doesn't mean it's cheaper.

Users will churn quickly if you aren't reliable or useful, and a security incident can be company-ending for a startup.


Company-ending is a form of failure. The quicker you do that, the quicker you can start your next company.


In an odd way you’re absolutely proving the article’s point. The requirements, deadline, pricing, idea, implementation, customer story, these are the things that matter and are hard. Compared to that, code is easy.


>When the time to build collapses, all product/sales/design/marketing mistakes are forgiven

I must be living in topsy-turvy land, because this is literally the opposite of what is true. When the time to build collapses, those other things become the critical part of the entire product. From a customer's perspective, those were always the things that mattered: the customer story. No customer cares how a thing was coded; they've ALWAYS cared about all those other things.


Nah, you're missing it: if the time to build is 9 months, you'd better get the product right.

If the time to build is 2 months, just build it and iterate.

Or just rebuild the product to the customer's liking...


I’m not missing anything. Customers getting what they want earlier is a good thing. If the product is flawed, it will be outcompeted by a product that is better designed for customers. You’re sidestepping like crazy my dude


Because we don't need to worry about uptime, customer satisfaction, or data integrity.

