Hacker News | captainbland's comments

For Nvidia's part, they're just giving money to one of their largest customers. They make money back even if they "lose" the bet.

It's like government XX giving "help" or "grants" to countries at war so they can purchase weapons from XX.

Selling shovels is quite lucrative whether there's an actual mining business or just a gold rush.

At some point Jensen Huang will be out (retired or forced by stagnating sales) and can definitely look back on a very successful career. That much is certain.


Strictly speaking governments can and do create new things, e.g. NASA.

A few reasons, "AI" as used by non-experts often has correctness and security issues. Even when it doesn't, its outputs are often not reproducible/predictable because they're probabilistic systems.

AI systems are also prone to writing code which they can't effectively refactor themselves, implying that many of these codebases are fiscal time bombs where human experts are required to come fix them. If the service being replaced has transactional behaviour, does the AI-produced solution? Does the person using it know what that means?
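As a concrete illustration of that transactional point (a hypothetical sketch using Python's stdlib sqlite3; the table and names are made up): a money transfer must either happen fully or not at all, and a generated solution that issues two independent autocommitted writes can silently violate that.

```python
import sqlite3

# Hypothetical example of "transactional behaviour": a transfer's two
# writes must commit together or not at all. Table and names are
# illustrative only, not from any real system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

def transfer(conn, src, dst, amount):
    # 'with conn' opens a transaction: committed on success,
    # rolled back automatically if an exception escapes the block.
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        (balance,) = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                                  (src,)).fetchone()
        if balance < 0:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))

try:
    transfer(conn, "alice", "bob", 150)  # fails: debit is rolled back
except ValueError:
    pass

transfer(conn, "alice", "bob", 40)       # succeeds atomically
print(dict(conn.execute("SELECT name, balance FROM accounts")))
# → {'alice': 60, 'bob': 40}
```

If a generated solution instead ran the two UPDATEs on an autocommitting connection, the failed balance check would leave alice debited with no credit to bob: exactly the kind of regression a reviewer has to know to look for.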

The other side is that AI as an industry still needs to recoup trillions in investment, and enterprise users are potential whales for that. Today's good prices for AI systems are not guaranteed to last, because even with hardware improvements these systems need to make back the money that has been invested in them.


Some of that latter part depends on how good and cheap open weight systems get. The ability to deploy your own will strictly limit the price of closed models if they aren't dominant in functionality.

I think the particular problem is AI producing large volumes of unnecessary code because the LLM is simply not able to create a more concise solution. If so, these LLM-generated solutions are likely creating tech debt faster than anyone is ever likely to be able to resolve it. Maybe people are banking on LLMs themselves one day being sophisticated enough to clean it up, though that would also be the perfect time to price-gouge them.

Agree. We've seen cowboy developers who move fast by producing unreadable code and cutting every corner. And sometimes that's OK: say you want a proof of concept to validate demand and iterate on feedback. But we want maintainable, reliable production code we can reason about and grasp quickly. Tech debt has a price, and it looks like LLM abusers are on a path to waking up with a heavy hangover :)

We hired some external LLM-cowboy developers who were pushing out a plethora of PRs daily, and at one point a large portion of our team's time was dedicated entirely to doing PR reviews. Eventually we let them go, and the last few months for us have been dedicated to cleaning up the vast quantities of unmaintainable LLM code that entered our codebase.

I think it's still early days, and it's probably the case that a lot of software development teams have yet to realize that a team basically just doing PR reviews is a strong indication that a codebase is very quickly trending away from maintainability. Our team is still heavily using LLMs and coding agents, but our PR backlog recently has been very manageable.

I suspect we'll start seeing a lot of teams realize they're inundated with tech debt as soon as it becomes difficult for even LLMs to maintain their codebases. The "go fast and spit out as much code as humanly possible" trend that I think has infected software development will eventually come back to bite quite a few companies.


Yep, it's the early days. Eventually we'll work out something like Design Patterns for Hybrid Development, where humans are responsible for software architecture, breaking requirements into maintainable SOLID components, and defining pass/fail criteria. Armed with that, LLMs will do the actual boilerplate implementation and serve as our Rubber Ducky Council for Brainstorming :)

This is more like neat-scrolling, I like it


I guess the good news for Apple is that their margins are so high to begin with that they can probably swallow it for a while before pushing increases onto their consumers, who will probably largely be happy that it improves the perceived status of their favoured brand.


In fairness we essentially ban scooters from practically every public path/road but they're still everywhere


What's wild is so many of these are from prestigious universities. MIT, Princeton, Oxford and Cambridge are all on there. It must be a terrible time to be an academic who's getting outcompeted by this slop because somebody from an institution with a better name submitted it.


I'm going to be charitable and say that the papers from prestigious universities were honest mistakes rather than paper mill university fabrications.

One thing that has bothered me for a very long time is that computer science (and I assume other scientific fields) has long since decided that English is the lingua franca, and if you don't speak it you can't be part of it. Can you imagine being told that you could only do your research if you were able to write technical papers in a language you didn't speak, maybe even using glyphs you didn't know? It's crazy when you think about it even a little bit, but we ask it of so many. And that's before considering that 90% of the English-speaking population couldn't crank out a paper to the required vocabulary level anyway.

A very legitimate, not-trying-to-cheat use for LLMs is translation. While it would be an extremely broad and dangerous brush to paint with, I wonder if there is a correlation between English-as-a-second (or even third) language authors and the hallucinations. That would indicate they were trying to use LLMs to help craft the paper to the expected writing level. The only problem being that it sometimes mangles citations, and if you've done good work and have 25+ citations, it's easy for those errors to slip through.


I can't speak for the American universities, but remember there is no entrance exam for UK PhDs: you usually just require a 2:1 or first-class bachelor's degree or a master's (going straight in without a master's is becoming more common), which is trivial to obtain. The hard part is usually getting funding, but if you provide your own funding you can go to any university you want. These are only really hard universities to get into for a bachelor's, not for a master's or PhD, where you are more of a money/labour source than anything else.


Yeah, in principle funded PhD positions are quite competitive, and as I understand it you tend to be interviewed and essentially ranked against other candidates. But I guess if you're paying for yourself to be there you'll face lower scrutiny.


https://now.synthetic.services

It pretty much has one post explaining what the blog is (it's a custom system) and why it exists. Enjoy?


> That's because starting JVM 22, the previous so-called "dynamic attachment of agents" is put behind a flag.

OK, am I being stupid, or is the pragmatic solution not just to enable this flag for test runs etc. and leave it off in prod?


I think the problem is that it's on his users to enable this flag, not something that can be done by Mockito automatically.


Most people want their test suite to pass. If they upgrade Java and Mockito prints out a message that they need to enable '--some-flag' while running tests, they're just going to add that flag to surefire in their pom. Seems like quite a small speedbump.
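Concretely (a sketch, assuming the flag Mockito asks for is JDK 21+'s `-XX:+EnableDynamicAgentLoading`; the actual message may name something else), the pom addition would look like:

```xml
<!-- pom.xml sketch: the flag applies only to the forked test JVM;
     production startup scripts are untouched. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <argLine>-XX:+EnableDynamicAgentLoading</argLine>
  </configuration>
</plugin>
```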

