I'm glad Rust is working well for them. Everyone should use tools they love.
That said, you can write an inaccessible reference cycle that never calls drop() in Rust just as well as you can in most non-GC languages. Leaks are explicitly defined as "safe" in Rust, and IME they're quite common in the wild. Even major projects like Actix had known leaks for several major versions, and you still need to run sanitizers and whatnot to probabilistically find them.
Interestingly, the notion of dropping precisely when something goes out of scope is itself a performance footgun. The pattern of "defer" in Zig encourages thinking in terms of lifetimes and groups of objects, which is much cheaper and easier to reason about.
You can absolutely do that in Rust too, but the language defaults encourage a slightly different programming model, so I see more code that's distasteful to my eyes in that respect in Rust codebases.
> More generally, RAII is a feature that exists in tension with the approach of operating on items in batches, which is an essential technique when writing performance-oriented software.
> And it doesn’t end here: operating in batches by using memory arenas, for example, is also a way to reduce memory ownership complexity, since you are turning orchestration of N lifetimes into 1.
> In this video Casey Muratori describes how going from thinking about individual allocations to thinking in batches is a natural form of progression for a programmer.
> Extremely popular talk on the advantages of looking at problems as data transformation pipelines, where Mike Acton shows how common approaches in C++ (RAII being one of them) are antithetical to the goal of creating performant code.
i actually like C (despite its warts) because the mental model of the compiler is small and i can usually reason correctly about what will happen.
in rust, it's certainly possible for an inexperienced developer to attain the same level of intuitive reasoning, but the mental model for the compiler is orders of magnitude larger. ultimately, rust wants the user to trust the compiler, rather than understand the compiler.
zig embraces the small mental model and gets rid of almost every C wart (in my eyes). fewer built-in guarantees, but less magic.
Read a bunch of your other comments, so I know this wasn't meant in a zealous way.
-
I wouldn't pick my languages based on someone else's hot take, or any Manga avatar, actually. I also honestly doubt "Lina" is unbiased here. They sound like they leaned in heavy from C++, good on them. But many who are drawn to zig, like me, rejected C++ in the first place.
Rust does seem like a nice step up for systems programmers who used C++, namely because it seems to expand on and refine so many ideas from there. But if you disagree with the kitchen-sink approach to language features, then Rust might not be for you anyway.
> All this stuff is just done automatically in Rust. If you have to write out the code by yourself, that's more work, and a huge chance for bugs. The compiler is doing some very complex tracking to figure out what to drop when and where. You'd have to do that all in your head correctly without its help.
What prevents anyone from dedicating a Zig memory allocator to the job (and all of its subtasks), and simply freeing the entire allocator at the end of the job? No baby-sitting needed.
Or, if the mindset is really that you need assistance because it's all "very complex" and too "much work", you may as well use a garbage-collected language.
> It's knowing the compiler is on your side and taking care of all this that makes it magical.
Until you get used to it, trust it completely, and then it misses something - either after a compiler update, or after some unsupported code gets introduced - and that shit takes down prod on a Friday. I'm not going to take that chance, thank you; I can call free() and run valgrind.
> What prevents anyone from dedicating a Zig memory allocator to the job (and all of its subtasks), and simply freeing the entire allocator at the end of the job? No baby-sitting needed.
Given the whole ecosystem is built around the Allocator interface, it's entirely feasible for the consumer of a library to pass in a Garbage Collecting allocator, and let it do the job of deciding what gets dropped or not.
Downside is that this is all at runtime, and you don't get the compile-time memory management that Rust has.
she is mostly hacking on open-source apple-gpu drivers (on the apple silicon platform) for linux (not that she needs any introduction in these parts)