wouldn't that defeat the privacy purpose? wouldn't someone be able to see that it was your card in the ATM, when they traced back the monero as exchanged for a coin that was exchanged for your fiat?
ETA: just to be clear - that's a genuine question. I don't know much about monero, so if it really is possible to have untraceable money, that seems like a prudent investment for precaution. I've just always assumed that digital money is inherently traceable, so I always assumed genuine privacy is a mirage. I assume I'm wrong about that, somehow, so I'm curious about the mechanisms of that anonymity.
so would that be a feature of monero-to-monero transactions? I'm still confused as to how it would actually be anonymous. like if I used another coin to exchange for monero, that's obviously traceable. so then I use monero to purchase something else which I then sell for other monero (or I just trade monero directly? if that's possible?). and I'm to believe that there's no way to trace that back and say "okay, monero from wallet X was traded to wallet Y" or whatever other intermediate steps (like "monero was spent on X from wallet A, and then X was resold using monero from wallet B")? like, assuming they don't get into my wallet, no one would be able to track down a transaction on the chain to a wallet? or they would be able to track it to a wallet, but they couldn't tie that wallet to me for... some reason?
sorry to ask, but the website seems very light on any actual technical detail about how they are achieving their privacy claims - at least in terms I can parse to make them understandable to me.
very cool, thanks! And since I can't respond to that poster, I'll say it here: thanks for that detailed answer! That definitely seems like a pretty anonymous system. I'm convinced that monero is a pretty private coin!
completely fair, and I agree. but let's talk 6 months/a year down the line - when a local LLM will be able to offer what claude code does, only slower and with a smaller context window. then do you whip out the local llm to handle the project, or is it still objectionable?
The front page is currently home to the announcement of Qwen 3.6 35B, which has comparable performance to the flagship coding models of a few months ago, and can be run at home by those with a gaming computer or MBP from the last five years. It is happening, but there will always be some lag.
Yes, but every time the capabilities, security, accuracy, or any other quality of LLMs is challenged, the default answer is that we'll essentially have AGI in a quarter or two. It's very tiring to try to argue with people about current quality, when the argument is always to wait and/or pay for a super expensive model.
That's not what the grandparent poster was saying, but sure. They have been steadily improving across those metrics, as Opus 4.6 / 4.7 / Mythos demonstrate. They're certainly not perfect, and I understand your fatigue (it is certainly fatiguing to follow, even if interested!), but each new release pushes it that bit further, and the improvements percolate downwards to the cheaper models.
right on. I certainly empathize with your frustrations about "AGI". but rest assured, I'm firmly in the camp of "not in my lifetime" and even further in the camp of "not without at least 3 more massive breakthroughs about things we currently do not understand at all". so sorry if it sounded like I was asking "what about when local llms get SUPER GOOD", or something. that's not at all what I meant. All I was asking was - "Claude Code can currently be pointed to a directory and then be chatted with about what it needs to do in that directory to make a full code project. That ability is already available on local machines through a ton of convoluted setup, but it's almost certainly going to be a packaged solution within a year (and possibly within the next few months/weeks/days). So when that packaged solution arrives and the choices are 'use the llm for scaffolding which takes 3 hours of unattended time' or 'build the scaffolding myself which takes 6 hours of deep focus time', what will still be objectionable about choosing the former?"
and, to be clear, it's an earnest question. like I've said elsewhere, I have concerns about over-reliance on the tech, but once it all moves local, a lot of those concerns become much more trivial. so I'm curious if other people have concerns that remain pressing and practical.
ETA: I'm aware that Claude wouldn't take 3 hours to do this, while using its massive warehouses of GPUs. I'm estimating what I think is a reasonable time for a single-gpu device to produce something workable.
Claude Code was released in February 2025; how can it have been years since we were promised competitive local models?
(Do you not realize how crazy the entire premise here is? Imagine someone in 1975 saying that ARPANET has been up for years so everything there is to know about networking technology has probably been found already.)
the epilogue is what speaks to me most. all of the work I've done with llms takes that same kind of approach. I never link them to a git repo and I only ever ask them to make specific, well-formatted changes so that I can pick up where they left off. my general feeling is that LLMs make the bullshit I hate doing a lot easier - project setup, theme integration, preparing/packaging resources for installability/portability, basic dependency preparation (vite for js/ts, ui libs for c#, stuff like that), ui layout scaffolding (main panel, menu panel, theme variables), auto-update fetch-and-execute loops, etc...
and while I know they can do the nitty gritty ui work fine, I feel like I can work just as fast, or faster, on UI without them than I can with them. with them it's a lot of "no, not that, you changed too much/too little/the wrong thing", but without them I just execute because it's a domain I'm familiar with.
So my general idea of them is that they are "90% machines". Great at doing all of the "heavy lifting" bullshit of initial setup or large structural refactoring (that doesn't actually change functionality, just prepares for it) that I never want to do anyway, but not necessary and often unhelpful for filling in that last 10% of the project just the way I want it.
of course, since any good PM knows that 90% of the code written only means 50% of the project finished (at best), it still feels like a hollow win. So I often consider the situation in the same way as that last paragraph. Am I letting the ease of the initial setup degrade my ability to set up projects without these tools? does it matter, since project setup and refactoring are one-and-done, project-specific, configuration-specific quagmires where the less thought about fiddly perfect text-matching, the better? can I use these things and still be able to use them well (direct them on architecture/structure) if I keep using them and lose grounded concepts of what the underlying work is? good questions, as far as I'm concerned.
asking as someone who is writing a game engine in javascript with the intention to 'transpile' the games' source into a C# project for a native runtime: this provides a map that allows automated translation from javascript source to C# source, right?
No. JSIR is primarily for JS -> IR -> JS analysis and source-to-source transformation. It's not a ready-made bridge for emitting other languages.
You could use it as an intermediate form in a JS->C# pipeline, but you'd still have to define a subset of JavaScript that lowers cleanly to your target C# runtime and implement the IR->C# lowering yourself.
I'd imagine the hard part is not the IR, but aligning JavaScript's semantics (object model, closures, prototypes, etc.) with C#'s (static type system, different execution model, etc.).
Right on. That makes sense. Thanks for spelling it out!
I do think aligning the semantics will be the easier part, honestly, because I'm only trying to transpile the supported source for the game engine. Since that's all written in typescript and I'm not guaranteeing full parity if you are trying to transpile arbitrary ts/js (only the source that can be parsed the same way the game engine is parsed), I'm expecting it to be a near 1-to-1 conversion. I started writing everything in C# and copied the structure to JS, knowing that this was the eventual plan, so the JS can actually be re-written as C# with a pretty simple regex tokenizer.
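The "pretty simple regex tokenizer" idea can be sketched like this - a hypothetical illustration in TypeScript, where the declaration forms and the `number` -> `double` style type mappings are invented for the example, not taken from the engine's actual grammar:

```typescript
// Illustrative line-oriented rewriter for a constrained TypeScript subset.
// Only handles two made-up patterns: typed variable declarations and
// typed function signatures. A real engine subset would need more rules.

const typeMap: Record<string, string> = {
  number: "double",
  string: "string",
  boolean: "bool",
};

function tsLineToCs(line: string): string {
  return line
    // `let x: number = 1;` -> `double x = 1;`
    .replace(
      /\b(?:let|const)\s+(\w+)\s*:\s*(\w+)\s*=/g,
      (_, name, type) => `${typeMap[type] ?? type} ${name} =`
    )
    // `function f(a: number): void {` -> `void f(double a) {`
    .replace(
      /\bfunction\s+(\w+)\s*\(([^)]*)\)\s*:\s*(\w+)/g,
      (_, name, params, ret) => {
        const csParams = params
          .split(",")
          .filter((p: string) => p.trim())
          .map((p: string) => {
            const [pname, ptype] = p.split(":").map((s: string) => s.trim());
            return `${typeMap[ptype] ?? ptype} ${pname}`;
          })
          .join(", ");
        return `${typeMap[ret] ?? ret} ${name}(${csParams})`;
      }
    );
}

console.log(tsLineToCs("let speed: number = 3;"));
// -> "double speed = 3;"
console.log(tsLineToCs("function move(dx: number): void {"));
// -> "void move(double dx) {"
```

This works exactly because the source is constrained to a known subset written 1-to-1 against the C# structure; it falls over the moment arbitrary JS shows up.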
My hope, here, was that by having the code morphed into an IR, the IR would be some kind of well-known IR that - for instance - C# could also be morphed into and - therefore - would allow automatic parsing back and forth. From what you're saying, though, it sounds like IRs don't use a common structure for describing code (I'm guessing because of the semantic misalignment you mention between a wide variety of different paradigms?), so this would only work if I made the map from IR to C#, which would be just as complex as (or more so than) just regexing my JS into C#. If I've got that right, that's a bummer, but understandable. If I'm wrong, though, happy to learn more!
I don't see anything wrong that would disqualify your plan.
But if the alternative is regex, and you're already writing in TypeScript, you might take a look at ts-morph [0]. TS has very good compiler APIs, and that gets you something much safer than text-based replacement while still staying relatively small for a constrained subset. ts-morph wraps those APIs cleanly.
Btw, JS doesn't even have an official bytecode. The spec is defined at the language semantics level, so each engine/toolchain invents its own internal representation.
A solid suggestion, but a big point of porting it to C# is the performance gains, which the CLR would mitigate. I know it'll be faster than running in a browser - where the game will also run - but if you're offering something for "performance", I don't think the time is best spent on making my job of composing the package easier. I think I'd rather try to figure out how to go whole-hog and compile as much of the game into an AOT package as possible. But, for what it's worth, the entire game engine was written in C# and ported into JS for the express purpose of being able to back-port the packaged code into C#. So I'm hoping it's not too onerous to do the native transpilation, either.
you don't have to be a scientist to directly engage with the literature. from mathematical proofs to directly observed phenomena to statistical certainties - it's all out there for you to engage with and feel secure in your findings just by having an internet connection. there's a qualitative difference in that evidence from the "sides" and therefore there is a qualitative and practical difference in the "more intelligent" side. "truth" is not incidental to the situation, it's the entire point of making claims at all. So a side that is making claims that turn out to not be true - whether you personally verify that or not - is a worse side, intellectually, than another.
If the side you follow says the science community is political and biased, then "just look at the literature" isn't going to help. It's like telling an atheist they'd believe in Jesus if they'd just read the bible.
We are lied to constantly by people who influence our lives. You can't even go to the grocery store without being lied to - being told breakfast cereals are healthy, that low fat options will make you less fat, shrinkflation, misleading unit pricing. It's no wonder people are so distrusting
Even if you're a democrat you still have to admit that democrats lie, a ton, and it's super obvious. Maybe if our leadership in general, on both sides, was capable of being decent humans then we'd be able to build trust and stop doing dumb shit as a civilization
Unfortunately at some level, as usual, it comes down to game theory
If you tell the nuanced truth and lose, and your opponent tells simplified untruths and wins, where does that leave you?
As I understand it (obviously a gross simplification), Jimmy Carter attempted to treat Americans like adults, but Americans did not want to inconvenience themselves by wearing sweaters
please engage in good faith. if you think mathematical proofs will be an issue when I tell someone to "look at the literature", you either don't know what a mathematical proof is, or are too far abstracted from reality to influence any practical action. yes, we're being lied to. no, they don't fuck up the science in order to lie to you. they just expect you not to read the science. because, truthfully, it's rare that the people who are lying to you would even know how to fuck up the science in their favor. so they bet on your ignorance, based on their ignorance, and they usually win the bet. but not if you just go look it up and engage with it. it's not about reading a single paper; it's about always reading every paper (on topics you have decided you are going to have an opinion about) with a keen and unshakeable focus on practical effect. anything else is an academic boondoggle.
That’s a pretty weak argument. What percentage of people actually have the qualifications to understand and verify a research paper? And how much can you even trust the raw data? At the end of the day, it’s just a matter of faith—whether you choose to believe the guy in the church or the guy at the university.
You don't need any advanced science to understand climate change. The basic chemistry and physics of it are readily accessible at a high school level.
Current research papers are far more advanced, but they're about the details of climate change. The basic facts of it were established two centuries ago.
We know that we are putting CO2 into the atmosphere. We know that CO2 absorbs heat. That's not a matter of believing an expert. At this point, anybody still denying it is deliberately choosing what somebody else tells them.
The economic effects of that are harder to model, but denialism is still stuck on whether the effect is real. There is no way to include people in that camp in any coherent discussion of what to do about it.
nailed it. I see this odd "eugenics" framing all the time, and all I can think is 'ooh la la, somebody's gonna get laid in college." you can argue the academia until you are blue in the face, but the real-world statistics show that less educated people have more children and that education quality in the US has been declining. It's not a foregone conclusion that one causes the other, but there's a cogent argument to be made that it's about the culture of poorer people vs the culture of richer people - and they even spell out that angle in the movie. They show how reticent the rich couple is to have a child, and how eager the poor couple is to do the same. It's about what their cultures value about children and legacy. It's not "dumb people make dumb kids", it's "dumb people won't educate their kids past their own knowledge who, in turn, won't educate their kids past their own knowledge." The movie even goes on to resolve with the "dumb" descendants learning (from the protagonist) when they have anyone willing to make that a point of the culture. So I can't read a clean "eugenics" take from the film; I only find that take in misreadings of the intro, personally.
I agree the eugenics thing is tangential. It's just there as an easy way to advance the plot to the point where the real story can start without too much work.
You could drop the eugenics thing, replace it with cultural indoctrination of some sort, and re-frame it so that instead of shitting on white-trash culture it shits all over college-educated, white-collar white people culture, and you'd have the same movie - down to the "culture has so thoroughly run amok that even the black president is white in a bad way" trope, the trash piling up because we don't know what to do with it, and the heroes being a hooker and a lazy army private. Maybe you'd have to replace the demo derby with a committee hearing full of say-nothing corporate speak, and some other minor details.
not quite; spell it out for me. are you suggesting that the onion has never, under any circumstances, been funny and therefore are guilty of having pretentious opinions that are "not funny", which makes them bad? Or is it that you're suggesting that you are the sole arbiter of what is and isn't funny, so you're the only person who gets to determine the worth of specific types of humor? Sorry, I have a hard time distinguishing which type of childish, smug bullshit I'm dealing with, so any help you can provide would be appreciated.
In any case, I've never laughed as hard at anything Lenny fucking Bruce said as I did at The Onions "Sony Releases Stupid Piece Of Shit That Doesn't Fucking Work" bit. So if you've got some favorite bruce bits, I'd love to get educated on what is hilarious about 60 year old observational standup.
it's amazing how much asking someone to actually explain what they are trying to imply will completely shut them up. Thanks for playing! I hope your next one is so pithy that I'll rue the day I spoke against you. fingers crossed
I know a comedian who is very good at absurdity. He's been doing it for ages (he kind of popularized it in my country), and he generally attracts right-wingers. I don't appreciate all his humor - as in, I don't find it all funny, because the goal often seems to be to shock (kind of like Goatse, which was also a joke/meme riddled with a political message). I do find it political and humorous, though, since I can clearly see the intent is (at least also) to be funny, and I can also recognize the political virtue signaling within. I have also, at times, found him funny.
Whether something is humor can be objectively established by disassembling the structure of the content. 'Whether you find it funny' is personal, while 'whether it is funny' is a summary of whether a certain group (such as 'the general public', whatever that may be) finds it so.
As such, I believe the expression of not finding someone or something funny is a red herring. Different emotions obviously flourish, and the person who expresses that they don't find it funny simply finds those (more) important.
The red herring here isn't whether The Onion is funny or not (that's personal), and it isn't whether it is humor or not (it is - specifically, satire). It is that you fundamentally disagree with the political message it carries. Which you are allowed to do, but in a discussion it is useful to recognize that a significant number of people do find it funny, and either have no problem with its political messages (tolerance) or agree with them (acceptance).
Demanding respect for a claim is a political act in itself.
Something being 'political' or not is a red herring. Politics is deeply ingrained in our society. How ingrained? It is a spectrum, not a binary proposition. Trying to portray it as a binary proposition is oversimplifying and removing nuance.
All it does is ask people to ignore issues and let different political wings try to live in 'harmony' with each other by pretending the other side doesn't exist. That strategy doesn't work, and it will come back around like a boomerang.
Truly fantastic work! "Holy Grail" is right! Terrain generation just got an upgrade so the tooling is about to start producing some really beautiful results in real time. That's going to be a blast to work with. Thanks!
Speaking of which... when people talk about "replacing" humans with AI, it makes me wonder if there's some kind of law we can push for that says "if you are part of the chain of command that signs off on AI being able to make final determinations, and that causes legal issues, you will be legally liable in place of the AI, since computers cannot be liable." Let a jury decide who, in the chain, bears what burden, case by case, but provide for prima facie liability for all parties in the chain, when a valid suit is tried. I want to see how strong the push is for AI when it's the CEO's personal money on the line.
The chain of responsibility must include the AI vendor. If vendors aren't liable for malpractice, there will be less incentive for all due diligence when lives are on the line.
Honestly yes, you are 100% right that it should be a responsibility thing. I remember back in the day it was said that self-driving car companies would have legal responsibility in case of an accident. I remember that kind of put a damper on the rollout and also took a lot of hype and focus away from the whole industry.