Hacker News | pennomi's comments

This one is an ablative heat shield, but it’s supposed to flake off gracefully, not break off in large chunks.

An entirely different form of research could be done by sending large quantities of normal people into space. Astronauts are such a small sample size (and so thoroughly vetted) that you get a different statistical view.


Because the average voter cannot see past the price at the pump. People are remarkably uninformed about how the world works.


The price at the pump affects not only a voter's commuter car, but also every truck that delivers goods across the US. This may have a much larger knock-on effect.

OTOH the US is the largest oil producer in the world [1]. Theoretically the US could keep domestic prices in check, but that would require rather drastic administrative pressure, likely only legal in wartime.

[1]: https://www.eia.gov/todayinenergy/detail.php?id=61545


It'd also require completely different refineries. Most U.S. oil is light sweet crude, versus the heavy crude we import and refine from overseas.


It's not only that. Oil prices also greatly increase the price of logistics, mining, metallurgy and fertilisers.


Plastic packaging in food is about to shoot up.


The food in plastic packages is about to shoot up.


That raises the question: given the amount of media and propaganda, is it a failure, or a result of that media and propaganda?


What should they see in this case, in your opinion?


I mean, look at the Hacker News feed and you’ll get a pretty good sample of new apps and features written by LLMs.

Are they good apps and features? Ehhhh. But let’s not pretend that they’re missing.


How is creation of a Gestapo analogue NOT a step towards Nazi-style authoritarianism?


Need a d10 roll? Just look at the last digit of the current second on your clock. Is it random? No, but it approximates randomness if you only make a roll sporadically.
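The clock-as-d10 trick can be sketched in a few lines of Python (reading a last digit of 0 as 10, the usual d10 convention, is my assumption):

```python
from datetime import datetime

def clock_d10():
    """Approximate a d10 roll from the wall clock.

    Seconds run 0-59, so each last digit 0-9 occurs exactly six
    times per minute -- roughly uniform if you sample sporadically.
    """
    digit = datetime.now().second % 10
    return 10 if digit == 0 else digit  # d10s typically read 0 as 10
```

The uniformity only holds if rolls are spaced out unpredictably; two rolls a second apart are obviously correlated.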


Use minutes if you need a D12 and are playing very slowly. ;)


Wouldn’t that be hours, and really, really slowly? You could do seconds mod 12 (or any other factor of 60, of which there are a lot).


Use hundredths of a second on the stopwatch. With a little math, and throwing out invalid results, you can generate a random number in any range < 100.

Though I do wonder if the hundredths are true or just for show. Maybe they're randomly chosen. :)
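The "throwing out invalid results" step is rejection sampling. A minimal sketch, using the system clock's sub-second digits as a stand-in for a stopwatch (the function name and the retry pause are mine):

```python
import time

def stopwatch_roll(sides):
    """Roll an n-sided die (sides < 100) from stopwatch hundredths.

    Hundredths run 0-99. Readings at or above the largest multiple
    of `sides` are rejected, so the kept outcomes stay uniform.
    """
    limit = (100 // sides) * sides  # e.g. sides=12 -> limit=96
    while True:
        hundredths = int(time.time() * 100) % 100
        if hundredths < limit:
            return hundredths % sides + 1
        time.sleep(0.01)  # let the "stopwatch" tick before re-reading
```

For sides that divide 100 evenly (like 4, 20, or 50), nothing is ever rejected; for a d12 you throw out 96-99.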


How is it not random?


It’s pseudorandom. It’s predictable in theory because if you had another watch, or an amazing sense of time, you could predict it. Is that realistic? Not really.

Computers use their clock to generate pseudorandom numbers all the time (hehe). It’s great randomness for something like shuffling songs or a sorting algorithm. You don’t want to use it for anything “adversarial”, like online poker.
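A minimal illustration of that distinction (the playlist contents are made up):

```python
import random
import time

# Seeding a PRNG from the clock: perfectly fine for low-stakes
# shuffling, but anyone who can guess the seed time can reproduce
# the sequence -- which is why you'd never do this for online poker.
rng = random.Random(int(time.time()))
playlist = ["intro", "verse", "chorus", "bridge"]
rng.shuffle(playlist)
print(playlist)  # same songs, clock-dependent order
```

Real systems that face adversaries use an OS entropy source (e.g. Python's `secrets` module) instead of a clock seed.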


For sure. The more specialized or obscure the things you have to do, the less LLMs help you.

Building a simple marketing website? Don’t waste your time - an LLM will probably be faster.

Designing a new SLAM algorithm? LLMs will probably spin around in circles helplessly. That said, that was my experience several years ago… maybe the state of the art has changed in the computer vision space.


> The more specialized or obscure the things you have to do, the less LLMs help you.

I've been impressed by how this isn't quite true. A lot of my coding life is spent in the popular languages, which the LLMs obviously excel at.

But a robotics language that dates to the '80s (Karel)? I unfortunately have to use it sometimes, and Claude ingested a PDF manual hundreds of pages long and is now better at the language than I am. It doesn't even have a compiler to test against, and still it rarely makes mistakes.

I think the trick with a lot of these LLMs is just figuring out the best techniques for using them. Fortunately a lot of people are working all the time to figure this out.


Agreed. The sentiment you are replying to is a common one, and it's just people self-aggrandizing. No, almost nobody is working on code novel enough to be difficult for an LLM. All code projects build on things LLMs understand very well.

Even if your architectural idea is completely unique... a never-before-seen magnum opus, the building blocks are still Legos.


The building blocks never were the hard part, though.


> I've been impressed by how this isn't quite true.

I’d say it’s true, but the LLMs and humans don’t have the exact same definition of what “obscure” is.

Karel is almost a subset of Pascal with some keyword swaps. And there’s a LOT of Pascal (and similar languages) around.

From the PoV of a statistical based tool like an LLM, Karel is just another flavor of a very popular structure.


Specialized is probably not the word I'd use, because LLMs are generally useful for understanding more specialized/obscure topics. For example, I've never heard people randomly talking about the DICOM standard, yet LLMs have no trouble with it.


I think there is a sweet spot in the training data for these LLMs where there is basically only "professional"-level documentation and chatter, without the layman stuff being picked up from Reddit and GitHub etc.

I was trying to remember/figure out an obscure hardware communication protocol in order to work out enumeration of a hardware bus on some servers. Feeding Codex a few RFC URLs and other such information, plus telling it to search the internet, resulted in extremely rapid progress versus having to wade through 500 pages of technical jargon and specification documents.

I'm sure if I was extending the spec to a 3.0 version in hardware or something it would not be useful, but for someone who just needs to understand the basics to get some quick tooling stood up it was close to magic.


The standard for obscurity is different for LLMs; something can be very widespread and public without the average person knowing about it. DICOM is used at practically every hospital in the world, there are whole websites dedicated to browsing its documentation, companies employ people solely for DICOM work, there are popular maintained libraries for several different languages, etc., so the LLM has an enormous amount of it in its training data.

The question relevant for LLMs would be "how many high-quality results would I get if I googled something related to this", and for DICOM the answer is "many". As long as that is the case, LLMs will not have trouble answering questions about it either.


> llms are generally useful to understand more specialized / obscure topics

A very simple kind of query that in my experiences causes problems to many current LLMs is:

"Write {something obscure} in the Wolfram programming language."


One tendency I've noticed is that LLMs struggle with creativity. If you give them a language with extremely powerful and expressive features, they'll often fail to use them to simplify other problems the way a good programmer does. Wolfram is a language essentially designed around that.

I wasn't able to replicate this in my own testing, though. Do you know if it also fails for "Mathematica" code? There's much more text online about that.


> Do you know if it also fails for "mathematica" code?

My experience concerning using "Mathematica" instead of "Wolfram" in AI tasks is similar.


Several years ago is ancient, given the rate of advancement LLMs have had recently.


> Building a simple marketing website? Probably don’t waste your time - an LLM will probably be faster.

This is actually where I would be most reluctant to use an LLM. Your website represents your product, and you probably don’t want to give it the scent of homogenized AI slop. People can tell.


They can tell if you let it use whatever CSS it wants (Claude will nearly always make a purple or blue website with gross rainbow gradients). They can also tell if you let it write your marketing copy.

If you decide on your own brand colors and wording, there’s very little left about the code that can’t be done instantly by an LLM (at least on a marketing website).


I just read Claude's front-end design instructions, and it now explicitly bans purple gradients. Curious to see what new pattern it will latch on to.


Honestly that’s one of the best potential uses for LLMs, translating code that was cleverly designed by brilliant humans into lower level languages.

I don’t trust LLM API design in the slightest, but they are decent at the brute force coding part, especially if you can replicate the testing suite.


I was asking earlier if DHI has tests - seems they do. As does the FastAPI refactor.

Would be nice, though, if there were an exact test suite that ran against both and could demonstrate parity. It would be onerous to compare both libraries' tests to see which one is more comprehensive (believable) - by default I'd expect the original's to be the most believable (duh), mostly because its tests represent historical cases that came up and broke it.


Time for the PSF to consider something inspired by uv as a native solution.


The core-adjacent people have completely failed to produce reasonable packaging tools for decades; why would you want another new tool from them?


Who said anything about it coming from core-adjacent people?

Was Kenneth Reitz "core-adjacent" when Requests was brought under the PSF umbrella?


Is there anyone seriously involved in packaging who is neither working at Astral nor a PyPA member?


Well, if I didn't have a bunch of other ideas for things to work on, and if I felt like anyone cared, there would be me....


It makes sense that a next token predictor could execute assembly code. This is fascinating work, especially with the memory implementation.

