> However small latency is, power failure is still faster.
A fancy switching power supply with a friendly power factor (one that looks like a resistive load rather than drawing more amps during the lower-voltage parts of the waveform) will actually have a non-zero fall time when suddenly unplugged.
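For a rough sense of how long that fall time can be: the ride-through comes from the energy stored in the supply's bulk capacitor. A back-of-envelope sketch (Python; the function name and all numbers below are illustrative assumptions, not specs for any real supply):

    # Rough hold-up time for a switching PSU after mains is cut.
    # The DC-DC stage keeps regulating while the bulk cap drains from
    # its normal voltage down to the stage's dropout voltage.
    def holdup_time_s(c_farads, v_bulk, v_min, p_out_w, efficiency=0.9):
        usable_joules = 0.5 * c_farads * (v_bulk**2 - v_min**2)
        p_in_w = p_out_w / efficiency  # power actually drawn from the cap
        return usable_joules / p_in_w

    # e.g. 470 uF charged to ~340 V, dropout at 240 V, 60 W load:
    print(holdup_time_s(470e-6, 340, 240, 60))  # ~0.2 s of ride-through

So the output decays over milliseconds to hundreds of milliseconds depending on the cap and the load, rather than instantly.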
Once you move from the abstract to the practical - say, having legislators or regulators make rules based on The Science, or relying personally on more facts than you have time to independently verify - then yes, you do need trustworthy people.
I'm pretty sure they were asking for a pinned date for definitions of "economically valuable" and "most (of total economic value)", specifically because, as previous comments noted, the definition and quantity of "economic value" vary over time. If AI hype is to be believed, and if we assume AGI has a slow takeoff, the economy will look very different in 2030, significantly shifting the goalposts for AGI relative to the same definition as of 2026.
Well, if humans can do economically valuable mental work that the AI can't, then it's not AGI, don't you think? An AGI could learn that new job too and replace the human, so as long as there is still economically valuable mental work that only humans can do, we haven't reached AGI.
This is a strange binary I don't understand. There are humans who can't do the work of some other humans. Intelligence is, clearly, a spectrum. I don't see why a general intelligence would need capabilities far beyond a human's, when just replacing somewhat lacking humans could upend large portions of the economy. Again, "it's not AGI" arguments will eventually require that some humans aren't considered intelligent, and that's the point at which we'll all be able to agree "ok, this is AGI".
As catlifeonmars noted, what's valuable changes over time.
But beyond that, part of the nature of that change over time is that things tend to be valuable because they're scarce.
So the definition from upthread becomes roughly "highly autonomous systems that outperform humans at [useful things where the ability to do those things is scarce]", or alternatively "highly autonomous systems that outperform humans at [useful things that can't be automated]".
Which only makes sense if the reflexive part that I'm substituting in brackets (it depends on the thing being observed) is pinned to a specific as-of date. Because if it's floating - if it references whatever date the definition is being evaluated on - the definition is nonsensical.
I'd argue it's so vague it's already nonsensical. Can we not declare Google (search) AGI? It sure does a hell of a lot of stuff better than any human I know. Same with the calculator in my desk drawer. Even my broom does a far better job sweeping than I do. My hands just aren't made for sweeping.
But to extend your point, I think we really need to be explicit about the assumptions being made. Everyone loves to say intelligence is easy to define, but if it were, we'd have a definition. Either "you" have figured it out and it's so simple that "we" are all too dumb and it needs better explaining for our poor simple minds, or there are a lot of details that make it hard to pin down, and that's why there's no definition of it yet. Kinda like how there's no formal definition of life.
I think you're conflating "knowledge" with "intelligence". And, "agency" seems to be a missing concept, which is the only way for something intelligent to apply its knowledge to achieve something practical, on its own.
Google search can't achieve anything practical, because it has no agency. It has no agency partly because it doesn't have the intelligence required to do anything on its own, other than display results for something else - something that does have agency - to use.
The applicable definitions, from the dictionary:
Knowledge: facts, information, and skills acquired through experience or education; the theoretical or practical understanding of a subject.
Intelligence: the ability to acquire and apply knowledge and skills.
Agency: the ability to make decisions and act independently.
No, it's that people keep misusing that word for a broader and broader class of things. Pushing back on dilution of meaning isn't a lack of understanding.
It's certainly entertaining to read about ancient industry history, with people on DARPA grants objecting to military interest in the stuff the military was paying them to do.
> why this is so huge is fascinating. i suspect it is not really about the age gap, but rather
Alternate theory: it's a genre tag that implies a whole pile of arbitrary features. Kind of along similar lines as how calling a movie a "space western" tells you quite a lot about it, despite making absolutely no sense if you try to take it strictly literally.
> In this case, it feels natural to me that the line for images should be aligned with the line for the act itself.
Why? Things are made illegal because someone involved is (presumed to be) harmed. That assumption doesn't hold if everyone involved was hired to pretend for the camera, or at least it doesn't hold in the same way. Maybe ban the movie industry as a whole over its reputation for chewing people up?
We don't use standard time because it works best; we use it because it's "correct" relative to the position of the sun.
Now, standard business hours (9-5 or whatever) were probably chosen for working well in the circumstances where they became standard, and it might be interesting to watch for whether tweaking the clocks leads to tweaking the nominal time of things...
The GitHub issue is AI generated. In my experience triaging these in other projects, you can’t really trust anything in them without verifying. The users will make claims and then the AI will embellish to make them sound more important and accurate.
Making them look more accurate is not the same as being more accurate, and LLMs are pretty good at the former.
Imagine a user has a vague idea of something that is broken; the LLM will then interpret their comment as whatever it thinks is the most likely underlying problem, without actually checking anything.
“Seem important and accurate” is correct. It doesn’t imply actual accuracy; the LLM will just use figures that resemble an actual calculation, hiding that they are wild guesses.
I’ve run into this issue trying to use Claude to instrument and analyze some code for performance. It would make claims like “around 500 MB of RAM are being used in this allocation” without evidence.
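For contrast, the evidence such a claim needs is cheap to actually produce. A minimal sketch of measuring an allocation with Python's stdlib tracemalloc (the workload here is a made-up stand-in, not the code from that session):

    import tracemalloc

    tracemalloc.start()

    data = [bytes(1024) for _ in range(10_000)]  # the allocation under suspicion

    current, peak = tracemalloc.get_traced_memory()  # both in bytes
    print(f"current: {current / 1e6:.1f} MB, peak: {peak / 1e6:.1f} MB")
    tracemalloc.stop()

An LLM figure that isn't backed by a measurement like this is just a plausible-sounding number.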