Hacker News | rsynnott's comments

I'm not sure that posting deranged tweets at three in the morning _really_ qualifies as work.

Ireland has both paper only voting, and a PR-STV voting system. Counting can take, literally, days (the most recent EU election took five days to fill all the seats). It is a spectator sport for a certain type of nerd.

> I can glance over code and know "if this compiles and the tests succeed, it will work", even if I didn't have the knowledge to write it myself.

... Errr... Yeah, that's not a great approach, unless you are defining 'work' extremely vaguely.


Haha I have usually found myself on the conservative side of any engineering team I’ve been on, and it’s refreshing to catch some flak for perceived carelessness.

I still make an effort to understand the generated code. If there’s a section I don’t get, I ask the LLM to explain it.

Most of the time it’s just API conventions and idioms I’m not yet familiar with. I have strong enough fundamentals that I generally know what I’m trying to accomplish and how it’s supposed to work and how to achieve it securely.

For example, I was writing some backend code that I knew needed a nonce check but I didn’t know what the conventions were for the framework. So I asked the LLM to add a nonce check, then scanned the docs for the code it generated.
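(For readers unfamiliar with the pattern, a framework-agnostic nonce check is roughly the following. This is a minimal sketch, not any particular framework's API; the key, TTL, and helper names are illustrative.)

```python
import hashlib
import hmac
import secrets
import time

SECRET = b"example-secret"  # illustrative only; load a real key from config


def make_nonce(ttl: int = 300) -> str:
    """Issue a nonce: random token plus expiry, signed with HMAC-SHA256."""
    expires = str(int(time.time()) + ttl)
    token = secrets.token_hex(8)
    payload = f"{token}.{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"


def check_nonce(nonce: str) -> bool:
    """Verify the signature and expiry; reject anything malformed."""
    try:
        token, expires, sig = nonce.rsplit(".", 2)
    except ValueError:
        return False  # wrong shape entirely
    payload = f"{token}.{expires}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # signature mismatch: tampered or forged
    return int(expires) >= time.time()  # reject expired nonces
```

Real frameworks (WordPress, Django's CSRF middleware, etc.) each have their own conventions for this, which is exactly the kind of thing worth checking against the docs.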


I mean, er, see Iraq, Afghanistan (multiple times), Ukraine, etc etc. Invading places tends to, in practice, be rather difficult.

If you’re a senior at Amazon and your whole job becomes reviewing slop, well, you can likely get another job which does not revolve around reviewing slop. The current market is not great, but it’s disproportionately painful for juniors.

It’s really not. McDonald’s’ whole thing is consistency. It’s never going to be good, but nor is it going to be that terrible.

That is, ah, very much not the case for AI slop.


I don’t totally buy this. If you’re Amazon, there’s only so buggy you can get before you start losing huge amounts of money.

99% of software is not Amazon.

This article is about Amazon.

Yes, and my point was about the dangers of generalising from this instance.

I wonder what senior means here. Like, unless it’s fairly junior seniors, the ratios are going to make that impossible.

Even if they have an army of senior engineers, reviewing AI generated code is fundamentally different than reviewing code written without AI. The change is usually larger, looks good on the surface and there are stupid mistakes in it. It's like reviewing someone's code whose only goal is to get an approval on the change.

So exactly like reviewing a code change from an outsourced development team in another country?

I'm not sure. How common is it to review the outsourced development team's code? My guess is that there is rarely any review. They usually ship the whole software and are responsible for it.

I spent years doing it.

Yeah, “you must use LLMs, but also pls don’t use them for important stuff” is a difficult circle to square.

Who said you can’t use it for important stuff? Just because SOME people are screwing up doesn’t mean everyone is.

Of course you can use them for whatever you want. It's also indisputable that some people will be more careful than others. The issue, however, is that the people who pushed for widespread usage of AI in companies, i.e. clueless MBAs, have also pushed it onto exactly the types you are mentioning: the ones who will screw things up because they are incompetent, or don't care, or, most likely, both. So it's not a criticism of people who are careful in their usage of LLMs in critical scenarios; it's a criticism of the people who bought into the AI hype and really believe an LLM will produce Terraform code as good as that previously written by 10 engineers, at 1% of the cost.

Absolutely. We need to get a Hello, World equivalent of something a person should be able to do with AI before they are allowed to decide AI projects.

Well, the website is called lesswrong.com, and not correct.com.

I tried not to comment directly on the site because I wanted my points to stand on their own. However, LessWrong has a long history on the internet. It’s part of the “rationalist” writing sphere, which has become oddly preoccupied with topics like race and IQ, eugenics-adjacent topics, and never-ending flirtations with reactionary ideologies.

That is true but also a bit unfair, they've also been oddly preoccupied with topics like trying to help the most people and frequently promote giving money to efficient charities to fight against malaria, vitamin A deficiencies and help vaccinate children in very poor countries.

That's their marketing pitch, but revealed preferences are stronger signals than stated ones.

I agree that revealed preferences are stronger signals than stated ones. https://funds.effectivealtruism.org/ shows 52000 donors for $110M, https://www.givingwhatwecan.org/ says more than 10000 donors and more than $490M given.

Oh, yeah, I'm aware.

They could have called it morewrong.com or morallywrong for all the right mathematical reasons instead. Their eugenics agenda is really more than a little bit tiresome at this point.
