Hacker News | topherhunt's comments

I don't think the final evaluation is to "cement the understanding" so much as _verify_ that students have taken accountability for their own learning process.

^ This

This is what a student who truly wants to learn, rather than simply complete a course / certification, would do... Use AI tools to explain + learn, but not outsource the learning process itself to the tools.


Jeez this seems totally backwards to me. I'd rather live in a society where court records are as open and public as safely possible (like GP's vision) and we as a society adjust our norms such that it's assholish and discriminatory to pass over someone for hiring just because they shoplifted when they were 15.

There will for sure be major backlash against "permanent criminal" datasets (bringing up AI in this is a red herring, it's not fundamentally different from if someone were serving such a database using CGI scripts; AI just gives us more reach to do the things we were already committed to doing). But I frankly don't sympathize with the attitude that people should have the right to pretend that past decisions never happened. You also shouldn't be permanently _punished_ or _ostracized_ for your past self's decisions. But nor should you have the right to expect total anonymity / clean slate disconnected from your past self's decisions.

My probably unpopular view: The right direction is for us as a society to recognize and acknowledge that people change and _need to be allowed to change_ -- not take the easy hack of erasing history. The cost for larger-scale public transparency & institutional change efforts is just too high.


> The agent has no "identity". There is no "I". It has no agency.

"It's just predicting tokens, silly." I keep seeing this argument that AIs are just "simulating" this or that, and therefore it doesn't matter because it's not real. It's not real thinking, it's not a real social network, AIs are just predicting the next token, silly.

"Simulating" is a meaningful distinction exactly when the interior is shallower than the exterior suggests — like the video game NPC who appears to react appropriately to your choices, but is actually just playing back a pre-scripted dialogue tree. Scratch the surface and there's nothing there. That's a simulation in the dismissive sense.

But this rigid dismissal is pointless reality-denial when lobsters are "simulating" submitting a PR, "simulating" indignation, and "simulating" writing an angry confrontative blog post. Yes, acknowledged, those actions originated from 'just' silicon following a prediction algorithm, in the same way that human perception and reasoning are 'just' a continual reconciliation of top-down predictions based on past data and bottom-up sensemaking based on current data.

Obviously AI agents aren't human. But your attempt to deride the impulse to anthropomorphize these new entities is misleading, and it detracts from our collective ability to understand these emergent new phenomena on their own terms.

When you say "there's no ghost, just an empty shell" -- well -- how well do you understand _human_ consciousness? What's the authoritative, well-evidenced scientific consensus on the preconditions for the emergence of sentience, or a sense of identity?


> Yes, acknowledged, those actions originated from 'just' silicon following a prediction algorithm, in the same way that human perception and reasoning are 'just' a continual reconciliation of top-down predictions based on past data and bottom-up sensemaking based on current data.

I keep seeing this argument, but it really seems like a completely false equivalence. Just because a sufficiently powerful simulation would be expected to be indistinguishable from reality doesn't imply that there's any reason to take seriously the idea that we're dealing with something "sufficiently powerful".

Human brains do things like language and reasoning on top of a giant ball of evolutionary mud - as such they do it inefficiently, and with a whole bunch of other stuff going on in the background. LLMs work along entirely different principles, working through statistically efficient summaries of a large corpus of language itself - there's little reason to posit that anything analogously experiential is going on.

If we were simulating brains and getting this kind of output, that would be a completely different kind of thing.

I also don't discount that other modes of "consciousness" are possible, it just seems like people are reasoning incorrectly backward from the apparent output of the systems we have now in ways that are logically insufficient for conclusions that seem implausible.


Airplanes and bees are structured entirely differently, and yet they both fly.

Just because LLMs don't work the same way the human brain does, doesn't mean they don't think.


Unless you're being sarcastic, this is exactly the kind of surface-level false equivalence illogic I'm talking about. From my post:

> I also don't discount that other modes of "consciousness" are possible, it just seems like people are reasoning incorrectly backward from the apparent output of the systems we have now in ways that are logically insufficient for conclusions that seem implausible.


Nobody is saying LLMs definitely think/reason/whatever. The GP is saying that we don't know they don't. Do you disagree?


It's simulating; there's no real substance, except the "homunculus soul" that its human maker/owner injected into it.

If you asked it to simulate a pirate, it would simulate a pirate instead, and simulate a parrot sitting on its shoulder.

This is hard to discuss because it's so abstract. But imagine an embodied agent (robot), that can simulate pain if you kick it. There's no pain internally. There's just a simulation of it (because some human instructed it such). It's also wrong to assign any moral value to kicking (or not kicking) it (except as "destruction of property owned by another human" same as if you kick a car).


How do we know they don't feel true pain? Can you define it well enough? Perhaps humans are the ones just "simulating" pain.

We've proven that they can have substance; we imbue them with it through a process called RLHF.


Well they have no pain receptors, for one.


I replaced my spinner verbs with thought-provoking Yodaese so my claude sessions are constantly making me think about my life decisions. Loving it. https://gist.github.com/topherhunt/b7fa7b915d6ee3a7998363d12...


If you're trying to argue that this snippet should answer the question of "what is Bazzite"... have you looked at marketing-speak websites lately? Think of how many different categories of service / product / platform / technology call themselves "the operating system for the next generation of XYZ".

+1 to jtrn's complaint here; when Bazzite's homepage doesn't own up and immediately say "Bazzite is a Linux distribution", it's being unnecessarily unclear, and it loses my trust.


Wow! That's...

discouraging, actually, considering how frequently Claude ignores my AGENTS.md guidance.


Did you notice the @-referencing requirement? AGENTS.md is not included by default, but CLAUDE.md should be.
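(For anyone unfamiliar: Claude Code's memory files support an @-import syntax, so a minimal CLAUDE.md can simply pull your existing AGENTS.md in. A sketch, assuming both files sit at the repo root:)

```markdown
# CLAUDE.md

See the shared agent guidance:

@AGENTS.md
```

Claude Code reads CLAUDE.md automatically at session start, and the `@AGENTS.md` line tells it to load that file's contents too, so you don't have to duplicate guidance across both files.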


Don't worry about this. There's no way these tools are going away. If this bubble bursts it may wipe out the incentive to continue this frenzied race to build novel AI, but ChatGPT et al won't be shut down, and even if they were, the open-source LLMs comparable to the cutting edge of 6 months ago will still be online and available. Plus, even if AI progress froze solid tomorrow, I think it would take decades before we'd come anywhere near saturating the potential space of applications & use cases to really do the current tech level justice. (Also, even post-bubble, AI progress would not freeze solid, to put it mildly.)


This is totally unresearched, but my gut says it would be much higher ROI for Europe + North America to independently source solar from their respective nearby deserts, paired with batteries?


> https://en.m.wikipedia.org/wiki/ELMED_interconnector#:~:text....

This is already in the works and secured financing recently. It's a smaller link but it's a start. Also, Tunisia trades electricity with Libya and Algeria, so technically they could be selling electricity to Europe through that link.


I would hope Europe has learned a lesson not to depend on unreliable partners for its energy.


The stability of any country you rely on for power is indeed a major concern.

Alas, during the previous Trump presidency, Europe saw that modern Republican 'America First' thinking doesn't just call for a wall with Mexico, a travel ban with Muslim countries, and a trade war with China - it also wants a trade war with Europe.

And linking the south of Spain to the north of Morocco only needs ~200km of undersea cable, rather than the ~6000km an EU-to-US link would call for. That's a pretty big benefit.


But if it's cheaper, let's take those easy wins and think about that later!


If it’s cheaper, vastly cleaner and viable, we shouldn’t let isolationist cynicism ruin that opportunity. Without oil from the Middle East and Russia, a lot of the world would grind to halt, but most countries cannot rely on their own reserves so the isolationist angle doesn’t even come up.


Sounds like exactly what the seller of commodity X would say to someone who's considering not buying commodity X from them anymore and switching to something else.

