
That's Amazon in a nutshell though. Create conflicting metrics for performance, push credit up and responsibility down, punish everyone below you for not meeting the double standards.

> Create conflicting metrics for performance, push credit up and responsibility down, punish everyone below you for not meeting the double standards

This resonates with my experience.

The only thing you forgot is that you can also use the 12^H^H 14 leadership principles to argue whatever you want (and then the opposite of what you argued last month, still using the same leadership principles).


Got a project finished early? Well, you didn't insist on the highest standards. Made sure things were held to a high standard? Well, you weren't biased for action.

Were you a knowledge source for the entire team? Well, you weren't learning and being curious. Did you ask a lot of questions to learn everything? Well, then you weren't "are right a lot".

Did you think big and come up with an architecture that saved Amazon a lot of money? Then you weren't inventing and simplifying. Build something simple to get it out the door quick? Well, you weren't thinking big.

Did you act quickly without consulting others to fix an issue? Well you weren't earning trust. Did you consult people to make sure they were happy with the solution? Well you weren't biased for action.

That's just a few examples; there are so many more.


Very nice, I can imagine someone turning it into a little satirical webpage, which implements a kind of decision tree:

1. Choose from a set of challenge types (e.g. meeting a deadline, reliability)

2. Choose whether the challenge was "met" or "failed".

3. Choose whether you want to make the person look good or bad, by following/ignoring a principle.

4. Results: A list of relevant principles with short rationalizations.

I'm almost tempted to try, except perhaps I should treasure my ignorance.

If a tool like that gets popular enough that most employees are using it for office-politics, it might even start to deflate the whole Leadership Principles thing.


Not only is having too many comments on your PRs bad for you, but so is not leaving comments on other people's PRs. Both are metrics used in performance reviews.

I'd leave lots of comments out of spite whenever I would feel my PRs had been treated unfairly. If I am going down, you all are coming with me.

I specifically look at the quality / substance of the comments when I'm reviewing someone for promo/transfer/fire.

Welcome to Amazon, you'll fit right in.

> previously having overhired

Funny way of saying that Jassy told people he doesn't like the culture of a larger Amazon.

Also, if we overhired in 2020-2022, why the hell are we still correcting it in 2026? Did none of the layoffs from 2023 onward do the job?

Just an all around failure of leadership with no ownership.


> Did none of the layoffs from 2023 onward do the job?

No, because the calculus of layoffs shifted. Briefly, there is always a natural attrition rate A%, but whenever companies do an X% layoff they expect a smaller Y% additional attrition (due to morale etc.) So they expect an overall (A + X + Y)% reduction in headcount within a few months of the layoffs.

However, the job market swung so rapidly from pro-employee to pro-employer in that timeframe that the Y% never happened, and in fact there was even a drop in A%. And so companies still ended up with more employees than planned and had to scramble to achieve their headcount goals using other means (RTO mandates, shifting headcount offshore, further layoffs with AI washing, etc.)
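The attrition arithmetic described above can be sketched with a toy calculation. All the specific percentages here are made up for illustration; only the A/X/Y framing comes from the comment.

```python
# Toy sketch of the layoff attrition arithmetic from the comment above.
# A = natural attrition, X = layoff size, Y = layoff-induced extra attrition.
# All numbers below are illustrative, not real figures.

def expected_reduction(natural_pct, layoff_pct, induced_pct):
    """Expected overall headcount reduction (in %) a few months after layoffs."""
    return natural_pct + layoff_pct + induced_pct

# Planned scenario: 8% natural attrition + 5% layoff + 3% induced attrition.
planned = expected_reduction(8, 5, 3)   # 16% expected reduction

# What the comment argues actually happened: the job market flipped, the
# induced attrition (Y) never materialized, and natural attrition (A) fell.
actual = expected_reduction(5, 5, 0)    # only 10% reduction

# The gap is the headcount companies then scrambled to cut by other means
# (RTO mandates, offshoring, further layoffs).
shortfall = planned - actual            # 6 percentage points
```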

A bit more detail on the calculus in this comment: https://news.ycombinator.com/item?id=46142948


Oh believe me, I am not defending the prick.

We are in a thread about Amazon holding engineering meetings after AI-related outages, after laying off 30k people.

If anything, this highlights the gross incompetence of a moronic leadership. It should be them being laid off.

If overhiring indeed happened, it is also a failure of leadership. Hiring too many people and then firing a bunch of people causes friction, loss of knowledge, decreased morale, etc.


But like, what if we did the layoffs bit by bit and told people each time that there would be more, so stay tuned? Surely that's a sign of strong leadership. Just like "muscle confusion" for workouts! Can't let people feel too safe or stable.

Over the weekend I was trying to return a pair of shoes and get a different size and I kept getting 500s trying to go to the store page for the shoes.

Funny, I was automatically refunded for a pair of shoes that Amazon thought I never received even though I’m wearing them right now. I couldn’t even find a way to dispute the refund so I just took the win…

That explains why it kept changing the estimated received date. It was doing weird things.

As a junior dev, I loved to ask interview candidates to implement merge sort or quick sort on whiteboards.

As a non-junior dev I realize how stupid that was.


I think the first enlightenment is that software engineers should be able to abstract away these algorithms to reliable libraries.

The second enlightenment is that if you don't understand what the libraries are doing, you will probably ship things that assemble the libraries in unreasonably slow/expensive ways, lacking the intuition for how "hard" the overall operation should be.
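As a toy illustration of that second point (my own example, not one from the thread): both of these functions are correct uses of the standard library, but without intuition for how "hard" the operation should be, it's easy to ship the first one.

```python
# Two correct ways to assemble library calls to get the k smallest items;
# one does far more work than the problem requires.

import heapq

def k_smallest_naive(xs, k):
    # Sorts the entire list: O(n log n), even though only k items are needed.
    return sorted(xs)[:k]

def k_smallest_better(xs, k):
    # heapq.nsmallest runs in O(n log k) -- much cheaper when k << n.
    return heapq.nsmallest(k, xs)

data = [9, 1, 7, 3, 8, 2]
assert k_smallest_naive(data, 2) == k_smallest_better(data, 2) == [1, 2]
```

Both rely on reliable libraries; knowing roughly what those libraries do internally is what separates the two.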


I don't know where you live, but I've literally never seen one outside a display in Best Buy.

> Despite what the courts may say

When the rubber meets the road, what the courts say is all that matters.


The reason for that phrase is that, no, Mother Nature's laws are all that matters. Unlike our puny laws, hers are inherent properties of the universe; no need for enforcement, because you literally can't break them. A court can insist that up is down, but it ain't.

Where, pray tell, do physical laws of nature come into relevance in a discussion about Terms and Agreements?

They come into relevance about the time the phrase "despite what the courts may say" was uttered. The intent behind the phrase "you can pry it from my cold dead hands" is roughly the same.

Of course I think that armed revolution over ToS is utterly laughable. But I'm merely answering your question.

For an example of a situation the phrase actually applies to, consider "despite what the courts may say, we are removing the Flock cameras".


That is an amazing read, thank you for sharing. It's not often you see a landfill for welfare recipients turned into a holy place that popes visit and where wealthy people store their wine.

That's the thing about AI writing though. Those tropes are things humans do too. But like once or twice in an article, not every single freaking paragraph.

I also think you can easily get overzealous with it and diagnose increasingly large percentages of ordinary human language as "tropified" due to being part of recognizable cadences. I think most of the things on the list are legit but I think it starts to get to a gray area where it's borrowing ordinary mannerisms of speech that aren't necessarily egregious.

Yes, and it's a detection loop without feedback. You can never verify that a piece of work in the wild is actually AI. The poster is the only one who really knows, and they'll always say it's not.

This is a problem, because you can easily get stuck in a self-reinforcing loop. You feel strengthened in your convictions that you're good at ferreting out LLM-speak because you've found so much of it. And you find so much of it because you feel confident you're good at it. Nobody ever corrects you when you're wrong.

Combine that with general overconfidence and you get threads where every other post with correct grammar gets "called out" as AI generated. It's pretty boring.

There's a similar effect with contentious subjects. You get reams and reams of posts calling the other side out for being part of a Russian/Israeli/Iranian/Chinese troll network. There's no independent falsification or verification for that, so people just get strengthened in their existing beliefs.


>Yes, and it's a detection loop without feedback. You can never verify that a piece of work in the wild is actually AI. The poster is the only one who really knows, and they'll always say it's not.

Yes. People keep saying, in response to points like this, "oh but you/I can tell pretty easily." But it's not the detection, it's the verification! (see what I did there)

Where I'd push back is the idea that the problem is the boring "call out" discourse that follows each accusation. The problem of verifying human provenance is fundamental to the discussion of trust and argumentation, but the simple "the zone is flooded" problem is also an ecological one. There's terrible air/water/soil quality in the metro area I live in; people have to live with it w/o regard to how invested they are in changing it.


Ever since the sloppification of the internet began, I’ve called out hundreds of LLM slop posts. I’ve gotten about 50 responses back from the author, most of them admitting to LLM usage, with only a single one initially vehemently denying it, but then later admitting it.

I cannot know what this says about my false negative rate, but at the very least I am confident in my false positive rate.


At this point it’s pretty easy to detect unaltered LLM output because it is such bad writing. That will change over time with training I would hope. At some point I imagine it will be hard to tell.

I honestly don’t know what sites like this will do when that happens and the only ways of detecting LLMs are that they’re subtly wrong or post too much; we’d be overrun with them.

Not sure if we should be hopeful or fearful that they will improve to be undetectable, but I suspect they will.


> That will change over time with training I would hope.

There's precious little training material left that isn't generated by LLMs themselves.

Consider this to be model collapse (i.e. we might be at the best SOTA possible with the approach we use today - any further training is going to degrade it).


> There's precious little training material left that isn't generated by LLMs themselves.

Percentage-wise this is quite exaggerated.

> Consider this to be model collapse (i.e. we might be at the best SOTA possible with the approach we use today - any further training is going to degrade it).

You consider this above factor to lead to model collapse? You’ve only mentioned one factor here; this isn’t enough. I’m aware of the GIGO factor, yes. Still there are at least ~5 other key factors needed to make a halfway decent scaling prediction.

It is worth mentioning one outside view here: any one human technology tends to advance as long as there are incentives and/or enthusiasts that push it. I don’t usually bet against motivated humans eventually getting somewhere, provided they aren’t trying to exceed the actual laws of physics. There are bets I find interesting: future scenarios, rates of change, technological interactions, and new discoveries.

Here are two predictions I have high uncertainty about. First, the transformer as an architectural construct will NOT be tossed out within the next five years because something better at the same level is found. Second, SoTA AI performance advances probably due to better fine-tuning training methods, hybrid architectures, and agent workflows.


> There's precious little training material left that isn't generated by LLMs themselves.

> Percentage-wise this is quite exaggerated.

How exaggerated?

a) The percentage is not static, but continuously increasing.

b) Even if it were static, you only need a few generations for even a small percentage to matter.
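Point (b) can be sketched with a toy compounding model (illustrative only, and deliberately simplistic: it assumes synthetic text accumulates in the corpus rather than being filtered out).

```python
# Toy model: if a fraction p of each training generation's corpus is
# synthetic and it is never filtered, the clean share shrinks geometrically.

def clean_share(p, generations):
    """Fraction of the corpus still human-written after n generations."""
    return (1 - p) ** generations

# Even a modest 10% synthetic share per generation leaves under 60%
# clean text after 5 generations.
share = clean_share(0.10, 5)   # 0.9**5, roughly 0.59
```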

> You consider this above factor to lead to model collapse? You’ve only mentioned one factor here; this isn’t enough. I’m aware of the GIGO factor, yes. Still there are at least ~5 other key factors needed to make a halfway decent scaling prediction.

What are those other factors, and why isn't GIGO sufficient for model collapse?


I wouldn't say it's "bad writing", but rather that the sheer volume of it allows the attentive reader to quickly identify the tropes and get bored of them.

Similar to how you can watch one fantastic western/vampire/zombie/disaster/superhero movie and love it, but once Hollywood has decided that this specific style is what brings in the money, they flood the zone with westerns, or superhero movies or whatever, and then the tropes become obvious and you can't stand watching another one.

If (insert your favorite blogger) had secret access to ChatGPT and was the only person in the world with access to it, you would just assume that it's their writing style now, and be ok with it as long as you liked the content.


It is objectively bad writing:

Overly focussed on style over content

Melodrama even when discussing the mundane

Attention grabbing tricks like binary opposites overused constantly

Overuse of adjectives and adverbs in particularly inappropriate places.

Lack of coherence if you’re generating large bits of text

General dull tone and lack of actual content in spite of the tricks above

Re your assertion at the end: sure, if I didn’t know, I’d think it was a particularly stupid, melodramatic human who never got to the point, and I’d probably avoid their writing at all costs.


Sites like this will have to start using bot detection. Captchas, Anubis.

> At this point it’s pretty easy to detect unaltered LLM output because it is such bad writing.

And yet people seem to still be terrible at that. Someone uses an em-dash and there's always a moron calling it out as AI.

> I honestly don’t know what sites like this will do when that happens and the only way of detecting LLMs is that they are subtly wrong or post too much, we’d be overrun with them.

My personal take is that it doesn't really matter. Most posts are already knee-jerk reactions with little value. Speaking just to be talking. If LLMs make stupid posts, it'll be basically the same as now: scroll a bit more. And if they chance upon saying something interesting then that's a net gain.


Never seen this in the wild, but that sounds unfortunate about em-dashes.

Personally, I think it will matter deeply if sites like this are overrun by bots. If you believe your description, why are you here?


> borrowing ordinary mannerisms of speech that aren't necessarily egregious

That's how a trope starts. When a minority of writers are using a particular pattern, it's personalized style. When a majority of writers in a genre adopt the same personalized style, it's a trope.

We find AI tropes especially annoying because there are three frontier LLMs producing a sizable chunk of text we read (maybe even a majority of text, for some people) lately. It would also be annoying if a clique of three humans were producing most of the text we read; we'd start to find their personal styles annoying and overdone. Even before LLMs, that was a thing that happened in some "slop" fiction genres where a particularly active author would churn out dozens of novels per year in one style (often via ghostwriters, but still with a single style and repetitive plot pattern).


Perhaps the problem is SEO for persuasive writing, LinkedIn-spiration for “business” writing, and school papers for research. The machines read a lot more of this than you would. So for them human writing would appear overwhelmingly troped. Whatever works, right?

It also gets RLHFed into it by people who think the "better" sentence is the one with more puffery, and crucially it tries to cram the semantic patterns in whether appropriate or not, because it's been trained to write in ways which aren't perceived as bland.

Puffery about "rich cultural heritage," a "tapestry" of sights "from the Colosseum to the Pantheon" and how they "serve as potent symbols" probably is better writing than "Rome is a city in the Lazio region of Italy with a population of 4m. It is the capital of Italy". It doesn't work quite so well when it's trying to fit the pattern to the two competing diners of Bumfuck, Ohio and how the rich cultural heritage of its municipal library underscores its status as the third largest city in its county.


I guess we shouldn't do this then. If it doesn't completely solve the problem.
