They are playing a bit fast and loose with the word "banned".
> Your smartphone contains materials processed through semiconductor fabrication, chemical etching, metal anodizing, glass tempering, and electroplating — none of which you could start a new facility for in California without years of litigation.
I agree that we should make it easier to do things, specifically by decreasing the amount of litigation involved in doing stuff. But the risk of a bunch of litigation isn't a ban, right? I get that it's trying to be attention-grabbing, but calling it a ban when it's not just sort of confuses the issue.
Being unable to start a project without doing 5 years of legal wrangling once you put shovel to earth may not be a "ban", but it sure doesn't encourage development.
There are other states without the regulations that these businesses apparently find offensive. Why can't the manufacturing be spun up in those states?
> But the risk of a bunch of litigation isn't a ban, right?
Funny enough, I've known some people over the years who have explicitly viewed litigation as a reasonable alternative to regulation. Their logic was that we should just let people and companies do whatever they want. Then, if it turns out a company is dumping mercury in the river or whatever, you litigate based on the damages. Better than regulation, they assured me.
Agreed, words matter.
There are a lot of smart people out there, and the writer of this site makes me skeptical when they exaggerate, omit, or spin info. Tell us all the facts at least, so we can trust you.
True, keeping a reader engaged is important, but at least for me, I don't want spin on the actual facts. I want to know what the actual facts are so I can make an informed decision. Otherwise, it's just the writer using salesmanship to sell their own personal beliefs.
And, from the writer's perspective, spin is definitely a powerful technique (it seems to be making America more polarized), but personally, I'd like to think I try to see through it as much as possible (in any form, coming from the political left or right).
I think it is. It keeps listeners engaged because what they love most is telling you that you might be wrong and looking for ways to show it. A listener should make up their own mind anyway and double check -- if what you say is 99% right, better they take that away than you be 100% right and not be heard at all. I also just respect people more who can be bold with their points rather than hiding behind some chicken shit nuance that always covers them if what they really meant to postulate was wrong.
I have tried many times to do this, but lack even the minor discipline required. I inevitably make changes to the commands I want to run at the command line, rather than in the script, and then later forget to edit them in the script.
Instead, I now swear by atuin.sh, which just remembers every command I've typed. It's sort of bad, since I never actually get nice scripts, just really long commands, but it gets you 50% of the way there with 0 effort. When leaving my last job, I even donated my (very long) atuin history to my successor, which I suspect was more useful than any document I wrote.
My only hot tip: atuin overrides the up-arrow by default, which is really annoying, so do `atuin init zsh --disable-up-arrow` to make it only run on Ctrl-R.
I was most surprised by the fact that it only took 40 examples for a Qwen finetune to match the style and quality of (interactively tuned) Nano Banana. Certainly the end result does not look like the stock output of open-source image generation models.
I wonder if for almost any bulk inference / generation task, it will generally be dramatically cheaper to (use fancy expensive model to generate examples, perhaps interactively with refinements) -> (fine tune smaller open-source model) -> (run bulk task).
In my experience image models are very "thirsty" and can often learn the overall style of an image from far fewer examples. Even Qwen is a HUGE model, relatively speaking.
Interestingly enough, the model could NOT learn how to reliably generate trees or water no matter how much data and/or strategies I threw at it...
This to me is the big failure mode of fine-tuning - it's practically impossible to understand what will work well and what won't and why
I see, yeah, I can see how if it's matching some parts of the style 100% but then failing completely on other parts, it's a huge pain to deal with. I wonder if a bigger model could loop here - like, have GPT 5.2 compare the fine-tune output and the Nano Banana output, notice that trees + water are bad, select more examples to fine-tune on, and then retry. Perhaps noticing that the trees and water are missing or bad is a more human judgement, though.
Interestingly enough even the big guns couldn't reliably act as judges. I think there are a few reasons for that:
- the way they represent image tokens isn't conducive to this kind of task
- text-to-image space is actually quite finicky, it's basically impossible to describe to the model what trees ought to look like and have them "get it"
- there's no reliable way to few-shot prompt these models for image tasks yet (!!)
I reset my network settings and updated before realizing this is just an outage. I kept searching “<MY LOCATION> Verizon outage” but did not even consider it could be nationwide. I guess it shows how rare nationwide outages are.
Google (and Vercel) are great for doing this! I would like to see Anthropic and OpenAI do something similar, since they too greatly benefit from Tailwind CSS.
> PEP 658 went live on PyPI in May 2023. uv launched in February 2024. The timing isn’t coincidental. uv could be fast because the ecosystem finally had the infrastructure to support it. A tool like uv couldn’t have shipped in 2020. The standards weren’t there yet.
How/why did the package maintainers start using all these improvements? Some of them sound like a bunch of work, and getting a package ecosystem to move is hard. Was there motivation to speed up installs across the ecosystem? If setup.py was working okay for folks, what incentivized them to start using pyproject.toml?
> If setup.py was working okay for folks, what incentivized them to start using pyproject.toml?
It wasn't working okay for many people, and many others haven't started using pyproject.toml.
For what I consider the most egregious example: Requests is one of the most popular libraries, under the PSF's official umbrella, which uses only Python code and thus doesn't even need to be "built" in a meaningful sense. It has a pyproject.toml file as of the last release. But that file isn't specifying the build setup following PEP 517/518/621 standards. That's supposed to appear in the next minor release, but they've only done patch releases this year and the relevant code is not at the head of the repo, even though it already caused problems for them this year. It's been more than a year and a half since the last minor release.
... Ah, I got confused for a bit. When I first noticed the `pyproject.toml` deficiency, it was because Requests was affected by the major Setuptools 72 backwards incompatibility. Then this year they were hit again by the major Setuptools 78 backwards incompatibility (which the Setuptools team consciously ignored in testing because Requests already publishes their own wheel, so this only affected the build-from-source purists like distro maintainers). See also my writeup https://lwn.net/Articles/1020576/ .
I should have mentioned one of the main reasons setup.py turns out not okay for people (aside from the general unpleasantness of running code to determine what should be, and mostly is, static metadata): in the legacy approach, Setuptools has to get `import`ed from the `setup.py` code before it can run, but running that code is the way to find out the dependencies. Including build-time dependencies. Specifically Setuptools itself. Good luck if the user's installed version is incompatible with what you've written.
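The standards-based approach avoids that bootstrapping problem precisely because the build-time dependencies are declared statically, so the installer can provision a known-compatible Setuptools in an isolated build environment before any project code runs. A minimal sketch (hypothetical package name and version pins, but the fields are the standard PEP 517/518/621 ones):

```toml
# Build-time dependencies are declared statically (PEP 518), so the
# installer can set them up *before* executing any build code.
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

# Static metadata (PEP 621) -- readable without running anything.
[project]
name = "example-package"
version = "1.0.0"
dependencies = ["requests>=2.28"]
```

Contrast with legacy `setup.py`, where you only learn what Setuptools version the project needs by importing whatever Setuptools the user already has.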
I tend to avoid sci-fi that hits too close to home (don't love any of the AI/internet/crypto classics, same reason I can't bear to watch Silicon Valley), so I was a little bored by the top of the list.
But, there's really good stuff that I've loved just a bit down the list: Foundation, The Left Hand Of Darkness, The Dispossessed, Stories of Your Life and Others, Exhalation, Children Of Time, Dune.
Was surprised the Mars trilogy was pretty low (might be the keyword indexing?) - highly recommend, as long as you don't get too bored by descriptions of rock.
rustc can use a few different backends. By my understanding, the LLVM backend is fully supported, the Cranelift backend is either fully supported or nearly so, and there's a GCC backend in the works. In addition, there's a separate project to create an independent Rust frontend as part of GCC.
Even then, there are still some systems that will support C but won't support Rust any time soon: systems with old compilers/compiler forks, and systems with unusual data types that violate Rust's assumptions (Rust assumes 8-bit bytes, IIRC).
Many organizations and environments will not switch to LLVM just to shoehorn in compiled Rust code. Nor does the fact that LLVM supports something in principle mean that it's installed on the relevant OS distribution.
Using LLVM somewhere in the build doesn't require that you compile everything with LLVM. It generates object files, just like GCC, and you can link together object files compiled with each compiler, as long as they don't use compiler-specific runtime libraries (like the C++ standard library, or a polyfill compiler-rt library).
If you're developing, you generally have control over the development environment (+/-) and you can install things. Plus that already reduces the audience: set of people with oddball hardware (as someone here put it) intersected with the set of people with locked down development environments.
Let alone the fact that, conceptually, people with locked-down environments are precisely those who would really want the extra safety offered by Rust.
I know that real life is messy but if we don't keep pressing, nothing improves.
> If you're developing, you generally have control over the development environment
If you're developing something individually, then sure, you have a lot of control. When you're developing as part of an organization or a company, you typically don't. And if there's non-COTS hardware involved, you are even more likely not to have control.
AES and RSA had enough public scrutiny to make backdooring them imprudent.
The standardization of an obviously weaker option than more established ones is difficult to explain with security reasons, so the default assumption should be that there are insecurity reasons.
There was lots of public scrutiny of Kyber (ML-KEM); DJB made his own submission to the NIST PQC standardization process. A purposely introduced backdoor in Kyber makes absolutely no sense; it was submitted by 11 respected cryptographers, and analyzed by hundreds of people over the course of standardization.
I disagree that ML-KEM is "obviously weaker". In some ways, lattice-based cryptography has stronger hardness foundations than RSA and EC (specifically, average -> worst case reductions).
ML-KEM and EC are definitely complementary, and I would probably only deploy hybrids in the near future, but I don't begrudge others who wish to do pure ML-KEM.
I don't think anyone is arguing that Kyber is purposefully backdoored. They are arguing that it (and basically every other lattice-based method) has lost a minimum of ~50-100 bits of security in the past decade (and half of the round 1 algorithms were broken entirely). The reason I can only give ~50-100 bits as the amount Kyber has lost is because attacks are progressing fast enough, and analysis of attacks is complicated enough, that no one has actually published a reliable estimate of how strong Kyber is putting together all known attacks.
I have no knowledge of whether Kyber at this point is vulnerable given whatever private cryptanalysis the NSA definitely has done on it, but if Kyber is adopted now, it will definitely be in use 2 decades from now, and it's hard to believe that it won't be vulnerable/broken then (even with only publicly available information).
Source for this loss of security? I'm aware of the MATZOV work but you make it sound like there's a continuous and steady improvement in attacks and that is not my impression.
Lots of algorithms were broken, but so what? Things like Rainbow and SIKE are not at all based on the hardness of solving lattice problems.
> AES and RSA had enough public scrutiny to make backdooring them imprudent.
Can you elaborate on the standard of scrutiny that you believe AES and RSA (which were standardized at two very different maturation points in applied cryptography) met that hasn't been applied to the NIST PQ process?
I think it's established that the NSA backdoors things. That doesn't mean they backdoor everything. But scrutiny is merited for each new thing the NSA endorses, and if we can't explain why something is a certain way and not another, we should be cautious and call it out. This is how they've operated for decades.
Sure. I'm not American either. I agree, maximum scrutiny is warranted.
The thing is these algorithms have been under discussion for quite some time. If you're not deeply into cryptography it might not appear this way, but these are essentially iterations on many earlier designs and ideas and have been built up cumulatively over time. Overall it doesn't seem there are any major concerns that anyone has identified.
But that's not what we're actually talking about. We're talking about whether creating an IETF RFC for people who want to use solely ML-KEM is acceptable or not - and given that the most famous organization proposing to do this is the US Federal Government, it seems bizarre in the extreme to accuse them of backdooring what they actually intend to use for themselves. As I said, though, this does not preclude the rest of the industry having and using hybrid KEMs, which given what Cloudflare, Google etc. are doing, we likely will.