Ah, yeah. I spent the early 2010s writing front-ends in AS3, so imagine how that turned out. I wrote my own event system too when I was forced to head to javascript, but in the end I mostly just used jquery's, and it's still what I use. I agree the event-driven paradigm leads to sloppy code, but static event names are enough of a clue to what's invoked most of the time, even in relatively large projects. And most things can sensibly just be promisified now anyway, besides user interactions.
I thought it was funny that you wrote this way back when:
>> I've often seen projects where I think "what talks to what and how? What is the separation of concerns and where does this code live?"
they probably also acknowledge pytorch, numpy, R ... but we don't attribute those tools as the agent who did the work.
I know we've been primed by sci-fi movies and comic books, but like pytorch, gpt-5.2 is just a piece of software running on a computer instrumented by humans.
I don't see the authors of those libraries getting a credit on the paper, do you ?
>I know we've been primed by sci-fi movies and comic books, but like pytorch, gpt-5.2 is just a piece of software running on a computer instrumented by humans.
And we are just a system running on carbon-based biology in our physics computer run by whomever. What makes us special, to say that we are different than GPT-5.2?
> And we are just a system running on carbon-based biology in our physics computer run by whomever. What makes us special, to say that we are different than GPT-5.2?
Do you really want to be treated like an old PC (dismembered, stripped for parts, and discarded) when your boss is done with you (i.e. not treated specially compared to a computer system)?
But I think if you want a fuller answer, you've got a lot of reading to do. It's not like you're the first person in the world to ask that question.
You misunderstood; I am pro-humanism. My comment was about challenging the belief that models can't be as intelligent as we are, which can't be answered definitively, though a lot of empirical evidence seems to point to the fact that we are not fundamentally different intelligence-wise. Just closing our eyes will not help preserve humanism, so we have to shape the world with models in a human-friendly way, aka alignment.
It's always a value decision. You can say shiny rocks are more important than people and worth murdering over.
Not an uncommon belief.
Here you are saying you personally value a computer program more than people
It exposes a value that you personally hold and that's it
That is separate from the material reality that all this AI stuff is ultimately just computer software... It's an epistemological tautology in the same way that say, a plane, car and refrigerator are all just machines - they can break, need maintenance, take expertise, can be dangerous...
LLMs haven't broken the categorical constraints - you've just been primed to think such a thing is supposed to be different through movies and entertainment.
I hate to tell you but most movie AIs are just allegories for institutional power. They're narrative devices about how callous and indifferent power structures are to our underlying shared humanity
The separation between private and the government is purely theatrics - a mere administrative shell.
I really don't understand why people treat it with such sacrosanct reverence.
It reminds me of a cup and ball street scam. Opportunistic people move things around and there's a choir of true believers who think there's some sacred principles of separation to uphold as they defend the ornamental labels as if they're some divine decree.
In some cases yes, especially when it comes to surveillance, the distinction doesn't feel like very much. When the government hires a contractor specifically because they break the spirit of the 4th amendment, it's hard to argue that it's not the government breaking the law.
I've been looking for a cheap, optionally non-cloud camera recently and cycled through 15 different vendors on amazon, buying, testing, probing, and returning.
Here's what I found.
If you don't want to pay a lot, there's something called "wansview", a white-label brand behind a number of cheap amazon cameras (sub-$20). You can do ONVIF and RTSP on any of the wansview-firmware devices and then knock them off the internet to keep everything local.
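To make the "local only" part concrete, here's a minimal sketch of pointing a player at such a camera's RTSP endpoint. The host, credentials, and stream path are placeholders (cameras differ; check your unit's docs), and RTSP's conventional port 554 is assumed:

```python
# Sketch: build the LAN RTSP URL for a camera that exposes ONVIF/RTSP.
# Host, user, password, and path below are made-up placeholders; RTSP
# conventionally listens on port 554. Feed the resulting URL to ffplay,
# VLC, or any NVR software -- no cloud account involved.
def rtsp_url(host, user, password, port=554, path="live/ch0"):
    """RTSP URL for a camera on the local network."""
    return f"rtsp://{user}:{password}@{host}:{port}/{path}"

url = rtsp_url("192.168.1.50", "admin", "secret")
print(url)  # open this in ffplay/VLC to verify the stream stays local
```

Once that works, firewall the camera's WAN access at the router and it keeps streaming on the LAN.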
Most recommendations of cameras for things like home assistant point to things at rolls-royce prices (sometimes ~20x the cost of the cheap consumer ones).
You shouldn't have to pony up a 2,000% markup for the feature "has tcp port open for rtsp"
You can do on-device storage and stream over network ... no cloud subscription needed and no huge price tag.
If you're looking for others, you don't even need to buy the camera and check. Just scroll through the marketing jpegs on the amazon page. If they have screenshots with wansview you're good.
It's the only vendor I've found that does this.
This should be stable long term. If they decided to remove it you'd have to manually "upgrade" the firmware, which you don't have to do.
They seem to come with the same capabilities. I assume they farm out the firmware to someone else that they license from, getting it basically for free in the hope that the firmware maker can push their cloud product, but this is all speculation.
These are all working "on the reservation" - I'm not flashing anything. There's always a risk but I think these are just cameras.
They might be running a spy-rig side hustle, but they do open up their ports. I haven't gotten two-way audio or the camera motor working over the ONVIF protocol yet, but there are "profiles" that can do this ... I'll just have to see if the cameras respond to those profiles.
If not I'll contact Wansview.
In my experience with Chinese companies when I contact them about things like this they treat me as if I'm about to pull the trigger on 100,000 of them so who knows, maybe wansview is the win here.
And it made almost zero impact; it was just a bigger version of DeepSeek V2 and went mostly unnoticed because its performance wasn't particularly notable, especially for its size.
It was R1, with its RL training, that made the news and crashed the stock market.
It will often chomp whitespace differently, but the main problems are:
1. Track alignment, with the lines being tracks (hashing fixes that)
2. Content alignment, with the model not losing focus (hamming/levenshtein or other similarity scores fix that)
If we demand exact matches we're simply not going to get them.
(Combining both methods might be good, I hadn't thought of that)
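The content-alignment idea can be sketched with the stdlib's `difflib`, whose ratio is a Levenshtein-style similarity score. The 0.8 threshold here is an arbitrary choice for illustration, not a tuned value:

```python
# Sketch of fuzzy content alignment: instead of demanding an exact match,
# find the file line most similar to what the model thinks is there, and
# accept the edit only if the similarity clears a threshold. difflib's
# ratio() is a Levenshtein-like score in [0, 1]; 0.8 is arbitrary.
import difflib

def best_match(expected, lines, threshold=0.8):
    """Return (index, score) of the line most similar to `expected`,
    or (None, score) if nothing clears the threshold."""
    scored = [
        (i, difflib.SequenceMatcher(None, expected.strip(), line.strip()).ratio())
        for i, line in enumerate(lines)
    ]
    i, score = max(scored, key=lambda t: t[1])
    return (i, score) if score >= threshold else (None, score)

lines = ["def foo():", "    return  1  ", "print(foo())"]
idx, score = best_match("return 1", lines)  # matches line 1 despite whitespace
```

A hash over normalized lines handles the track alignment; this handles near-miss content.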
Another crucial point: the error message "Content mismatch. Reread the file" matters. Errors should suggest descriptive remediation actions.
So even with crappy models it does this automatically and will tool loop accordingly.
Asking it to do smaller edits is no good. Many smaller models will go down to single-line edits, look around for blank lines, and just inject garbage. So don't suggest it. Larger models that can succeed at small edits already know to do them; smaller models that can't won't attempt them unless you suggest it.
Seriously this thing works with 4B models
I also combine it with a toolcall hack for models that don't support tool calling
It injects the tool description into the system prompt after probing the model's capabilities and then runs a simple response router.
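A minimal sketch of that fallback, assuming a made-up JSON wire format and a hypothetical `read_file` tool (the real hack would use whatever convention the model follows best):

```python
# Sketch of the prompt-based tool-call fallback: describe the tool in the
# system prompt, then route any response that parses as a JSON tool call.
# Tool name, schema, and wire format here are invented for illustration.
import json

TOOLS = {"read_file": lambda path: f"<contents of {path}>"}

SYSTEM_PROMPT = (
    "You can call tools by replying with JSON only: "
    '{"tool": "read_file", "args": {"path": "..."}}. '
    "Otherwise answer normally."
)

def route(response):
    """Run the tool if the response is a tool call, else pass text through."""
    try:
        msg = json.loads(response)
        return TOOLS[msg["tool"]](**msg["args"])
    except (ValueError, KeyError, TypeError):
        return response  # plain text, not a tool call

out = route('{"tool": "read_file", "args": {"path": "a.txt"}}')
```

The tool result gets appended to the conversation and the loop continues, same as with native tool calling.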
I haven't found a model within reason that this doesn't work with (I'm sure if you intentionally throw some fine tune botch up that's emitting garbage it'll break - that's not the claim)
For those who don't know, star catalogs were basically GPS systems. With a calendar, sextant and an accurate catalog, you should be able to know where you are.
The Apollo 13 astronauts used this method; it works pretty well.
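A simplified version of the idea, for the sun rather than a star: at local noon, your latitude follows directly from the measured altitude plus the declination listed in an almanac for that date. The numbers below are illustrative, not a real sight reduction (which also corrects for refraction, dip, and so on):

```python
# Back-of-envelope celestial fix: latitude from the sun's noon altitude,
# for an observer in the northern hemisphere with the sun to the south.
# The declination comes from an almanac/catalog for the date; values here
# are illustrative only.
def latitude_from_noon_sight(altitude_deg, declination_deg):
    """lat = 90 - observed noon altitude + solar declination (degrees)."""
    return 90.0 - altitude_deg + declination_deg

# e.g. noon altitude 50 deg with declination +10 deg puts you near 50 N
lat = latitude_from_noon_sight(50.0, 10.0)
```

Star sights work the same way in principle; the catalog supplies each star's coordinates, and the sextant plus clock supply the rest.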
I was trying to get a hold of him for years. People who knew him kept saying they'd get me in touch, never did.
His name pops up a lot during the 60s and 70s as an author on numerous articles about networks, often regarding the many competing, now-defunct alternatives to the Internet.
IP-Asia met every week via Zoom. Several other people whose names appear in the same literature frequented it too. Pop in tonight for the final session?
Thanks for that link. I attended that at 4am california time. So sorry I didn't ever get to talk to him.
It was like attending a funeral for someone I never was able to track down. Feel kinda terrible about all of it. Really sucks, sounds like he was a very friendly guy.
The separation is already there
People have just failed to understand it