Ireland has both paper-only voting and a PR-STV voting system. Counting can, literally, take days (the most recent EU election took five days to fill all the seats). It is a spectator sport for a certain type of nerd.
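Part of why the count takes days is that PR-STV proceeds in rounds: the lowest-polling candidate is eliminated and their ballots transfer to each voter's next preference. A toy sketch of one elimination round, under simplified assumptions (hypothetical ballot lists, no surplus transfers from elected candidates, which the real count also has to do):

```python
from collections import Counter

def eliminate_round(ballots):
    """One simplified PR-STV round: drop the candidate with the fewest
    first preferences and transfer their ballots to the next choice.
    Ballots are ranked lists of candidate names; exhausted ballots
    (empty lists) no longer count."""
    tallies = Counter(b[0] for b in ballots if b)
    loser = min(tallies, key=lambda c: tallies[c])
    # Transferring = simply striking the loser from every ballot,
    # so each affected voter's next preference moves to the front.
    transferred = [[c for c in b if c != loser] for b in ballots]
    return transferred, loser
```

The real rules (quota calculation, surplus distribution, fractional transfers) add far more bookkeeping per round, which is where the days go.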
Haha I have usually found myself on the conservative side of any engineering team I’ve been on, and it’s refreshing to catch some flak for perceived carelessness.
I still make an effort to understand the generated code. If there’s a section I don’t get, I ask the LLM to explain it.
Most of the time it’s just API conventions and idioms I’m not yet familiar with. I have strong enough fundamentals that I generally know what I’m trying to accomplish and how it’s supposed to work and how to achieve it securely.
For example, I was writing some backend code that I knew needed a nonce check but I didn’t know what the conventions were for the framework. So I asked the LLM to add a nonce check, then scanned the docs for the code it generated.
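Framework conventions vary, but the underlying pattern is simple: issue a single-use token, then accept it at most once within a time window. A minimal framework-agnostic sketch (hypothetical names; a real app would store nonces in the framework's session or a shared cache rather than a module-level dict):

```python
import secrets
import time

# Hypothetical in-memory store mapping nonce -> issue time.
# Assumption: single process; use Redis or the session in production.
_issued = {}
NONCE_TTL = 300  # seconds a nonce stays valid

def issue_nonce():
    """Generate an unguessable single-use token and record its issue time."""
    nonce = secrets.token_urlsafe(32)
    _issued[nonce] = time.monotonic()
    return nonce

def check_nonce(nonce):
    """Accept a nonce at most once, and only within its TTL."""
    issued_at = _issued.pop(nonce, None)  # pop => replay is rejected
    if issued_at is None:
        return False
    return (time.monotonic() - issued_at) <= NONCE_TTL
```

The key details to verify against the framework docs are exactly the ones a generated version can get wrong: using a CSPRNG (`secrets`, not `random`), rejecting replays, and expiring stale tokens.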
If you’re a senior at Amazon and your whole job becomes reviewing slop, well, you can likely get another job which does not revolve around reviewing slop. The current market is not great, but it’s disproportionately painful for juniors.
Even if they have an army of senior engineers, reviewing AI-generated code is fundamentally different from reviewing code written without AI. The change is usually larger, looks good on the surface, and has stupid mistakes hiding in it. It's like reviewing code from someone whose only goal is to get the change approved.
I'm not sure. How common is it to review the outsourced development team's code? My guess is that there is rarely any review. They usually ship the whole software and are responsible for it.
Of course you can use them for whatever you want. It's also indisputable that some people will be more careful than others. The issue, however, is that the idiots who pushed for widespread usage of AI in companies, i.e. clueless MBAs, have also pushed it onto exactly the types you are mentioning: the ones who will screw things up because they are incompetent, or don't care, or, most likely, both. So it's not a criticism of people who are careful in their usage of LLMs in critical scenarios; it's a criticism of the morons who bought into the AI hype and really believe an LLM will produce Terraform code as good as what 10 engineers previously wrote, at 1% of the cost.
I tried not to comment directly on the site because I wanted my points to stand on their own. However, LessWrong has a long history on the internet. It's part of the "rationalist" writing sphere, which has become oddly preoccupied with topics like race and IQ, eugenics-adjacent topics, and never-ending flirtations with reactionary ideologies.
That is true but also a bit unfair; they've also been oddly preoccupied with trying to help the most people, frequently promoting donations to efficient charities that fight malaria and vitamin A deficiency and help vaccinate children in very poor countries.
They could have called it morewrong.com or morallywrong for all the right mathematical reasons instead. Their eugenics agenda is really more than a little bit tiresome at this point.