> users should be curious and actively attempting to understand how it works
Have you ever talked with users?
> this is an endless job
Indeed. If we spend all our time learning what changed in all our tooling, whenever it changes without proper documentation, then we spend our working lives keeping up instead of doing our actual jobs.
There are general users of the average SaaS, and there are claude code users. There's no doubt in my mind that our expectations should be somewhat higher for CC users re: memory. I'm personally not completely convinced that cache eviction should be part of their thought process while using CC, but it's not _that_ much of a stretch.
Personally I've never thought about cache eviction as it pertains to CC. It's just not something that I ever needed to think about. Maybe I'm just not a power user but I just use the product the way I want to and it just works.
Well, sure, if you put it that way, they're similar. But either you don't see it and get surprised by increased quota usage, or you do see it and know what it means. Bonus points if they let you turn it off.
Plenty of room for a middle ground, like a static timestamp per session that shows expiration time, without the distraction of a constantly changing UI element.
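To make the idea concrete, here's a minimal sketch assuming a hypothetical 5-minute cache TTL (the constant and names are mine, not Claude Code's actual behavior): compute the expiration once at session start and render it as static text, instead of a live countdown.

```typescript
// Hypothetical sketch: compute the cache-expiration timestamp once at
// session start and show it as static text, not a ticking countdown.
const CACHE_TTL_MS = 5 * 60 * 1000; // assumed 5-minute prompt-cache TTL

const sessionStart = Date.now();
const expiresAt = new Date(sessionStart + CACHE_TTL_MS);

// Rendered once and never updated, so the UI stays calm.
console.log(`Prompt cache expires at ${expiresAt.toLocaleTimeString()}`);
```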
I like the idea of a cooldown. But my next question is: would this have been caught if no one had updated? I know in practice not everyone would be on a cooldown, but presumably this compromise was only found out because a lot of people did update.
> presumably this compromise was only found out because a lot of people did update
This was supposedly discovered by "Socket researchers", and the product they're selling is proactive scanning to detect/block malicious packages, so I'd assume this would've been discovered even if no regular users had updated.
But I'd claim even for malware that's only discovered due to normal users updating, it'd generally be better to reduce the number of people affected with a slow roll-out (which should happen somewhat naturally if everyone sets, or doesn't set, their cool-down based on their own risk tolerance/threat model) rather than everyone jumping onto the malicious package at once and having way more people compromised than was necessary for discovery of the malware.
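As a sketch of what a self-serve cooldown could look like: the npm registry's public package metadata includes a `time` map of version to publish timestamp, so a client could refuse anything younger than its owner's threshold. The function name and the 14-day threshold below are illustrative assumptions, not a real tool.

```typescript
// Hedged sketch of a per-user cooldown: only consider versions that have
// been public for at least `minAgeDays`, using the npm registry's public
// `time` field (version -> publish timestamp).
async function versionsOlderThan(pkg: string, minAgeDays: number): Promise<string[]> {
  const res = await fetch(`https://registry.npmjs.org/${pkg}`);
  const meta = (await res.json()) as { time: Record<string, string> };
  const cutoff = Date.now() - minAgeDays * 24 * 60 * 60 * 1000;
  return Object.entries(meta.time)
    // "created" and "modified" are bookkeeping keys, not versions.
    .filter(([version]) => version !== "created" && version !== "modified")
    .filter(([, published]) => new Date(published).getTime() <= cutoff)
    .map(([version]) => version);
}

// Example: treat anything younger than 14 days as still in quarantine.
versionsOlderThan("left-pad", 14).then((versions) => console.log(versions));
```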
That assumes discovering a security bug is random and it could happen to anyone, so more shots on goal is better. But is that a good way to model it?
It seems like if you were at all likely to give dependencies the extra scrutiny that discovers a problem, you'd already know it? Most of the people who upgraded didn't help; they just got owned.
A cooldown gives anyone who does investigate more time to do their work.
A cooldown sounds like a good idea ONLY IF these so-called security companies can actually catch malicious dependencies during the cooldown period. Are they doing that, or do individual researchers find the malware and these companies just make the headlines?
For researchers who notice new releases as soon as they are published and discover malice based on that alone, I agree, and every step of that can be automated to some level of effectiveness.
But for researchers who aren't sufficiently effective until the first victim starts shouting that something went sideways, the malicious actor would be wise to simply ensure no victim is aware until well after the cooldown period, implementing novel obfuscation that evades static analysis and the like.
It's a trade-off for sure. Maybe companies could have "honeypot" environments where they update everything, deploy their code, and try to monitor for sneaky behavior.
If I were in charge of a package manager I would be seriously looking into automated and semi-automated exploit detection, so that people didn't have to YOLO new packages to find out if they are bad. The checking would itself become an attack vector, but you could mitigate that too. I'm just saying _something_ is possible.
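For illustration only, a crude first pass might look like the sketch below. The rule names and patterns are invented, and a real system would pair static checks like these with sandboxed dynamic analysis:

```typescript
import { readFileSync } from "node:fs";

interface Finding { rule: string; detail: string; }

// Flag install-time hooks, a common delivery vehicle for npm malware.
function scanManifest(path: string): Finding[] {
  const pkg = JSON.parse(readFileSync(path, "utf8"));
  const findings: Finding[] = [];
  for (const hook of ["preinstall", "install", "postinstall"]) {
    if (pkg.scripts?.[hook]) {
      findings.push({ rule: "install-hook", detail: `${hook}: ${pkg.scripts[hook]}` });
    }
  }
  return findings;
}

// Crude static patterns; trivially evaded by obfuscation, which is exactly
// why this would only be one layer of a real detection pipeline.
function scanSource(code: string): Finding[] {
  const suspicious = [/eval\s*\(/, /child_process/, /Buffer\.from\([^)]*base64/];
  return suspicious
    .filter((re) => re.test(code))
    .map((re) => ({ rule: "suspicious-pattern", detail: re.source }));
}
```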
Good insight. It's always easy to blame that which you don't understand. I know nothing about k8s, and my eyes kinda glaze over when our staff engineer talks about pods and clusters. But it works for our team, even if not everyone understands it.
When all you have is a hammer, every problem starts to look like a nail. And the people with axes are wondering how (or indeed even why) so many people are trying to chop wood with a hammer. Further, some axewielders are wondering why they are losing their jobs to people with hammers when an axe is the right tool for the job. Easy to hate the hammer in this case.
Yeah, I would attribute that to tribalism. There's an intense amount of dogma in the Kubernetes community, likely stemming from the billions of dollars that get fed into the ecosystem by Big Tech. I genuinely think people adopt it as part of their identity and then become hostile to anyone who "doesn't understand the excellence of Kubernetes." I only say this because I've had many lunch time conversations with random strangers at the various KubeCon conferences I've attended - and let's just say some were pretty eye opening.
I would also say that a lot of people, even people who are professional k8s operators, don't understand enough of the "theory" behind it. The "why and how", to put it shortly.
And the end result is often that you have two tribes that have a totally incorrect idea of even what tools they themselves are using and how, as if someone swapped in an intentionally wrong dictionary, like in a Monty Python sketch.
I think it's an interesting hypothesis, but I don't think it works out like that. AI isn't priced in relation to the work it does; it's priced per token (input/output). As long as it's cheaper to use those tokens than to pay a dev, dev salaries will likely fall. Whenever it becomes cheaper to hire a dev than to use AI, a company will likely just hire a dev. But AI prices won't fall just because dev salaries have.
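To make that break-even concrete, here's a back-of-the-envelope sketch where every number is an assumption for illustration, not a real price:

```typescript
// All figures below are made-up assumptions, not actual market prices.
const costPerMillionTokens = 15;   // assumed blended $/1M tokens
const tokensPerTask = 200_000;     // assumed tokens to finish one task
const devCostPerTask = 150;        // assumed loaded dev cost per task, $

const aiCostPerTask = (tokensPerTask / 1_000_000) * costPerMillionTokens;
console.log(`AI: $${aiCostPerTask.toFixed(2)} vs dev: $${devCostPerTask}`);
// While aiCostPerTask < devCostPerTask, the argument above predicts
// downward pressure on dev salaries; if the inequality flips, hire devs.
```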
I think revenue is common to talk about because profit is also meaningless when a company spends every penny it earns to grow (new engineers, marketing, etc.). IIRC Amazon made zero profit for quite some time.
Also, revenue is a signal for product-market fit. Is it a great one? Dunno. But for example, I'd be hard pressed to sell $1 billion of anything, even if I had something everyone wanted.
But I think your point about burn rate is important. How long can they bleed cash like this before they collapse?
I mean, the financials just don't look great either way.
Their main product is part VS Code fork, a market that's almost impossible to make money in, and part reselling already-expensive LLM tokens.
You can look at more parameters and judge how well a company could do in the future. For Amazon, you can predict that once they stop growing, they can make a pretty penny.
But with Cursor that doesn't seem likely. Even if they had the talent for training models from scratch, which I don't think they do, and IF inference makes money, which is not clear at all, training models is still a huge money sink.
So for them, getting bought out by xAI, which has a base model they can use, makes sense. But what does xAI get here? Another endless money pit?
You're right. I was commenting mostly on why companies usually talk about revenue rather than profit.
I think the truth is that it's a new frontier. No one knows if any of this will make money. Investors are just betting that someone else will learn to monetize sometime soon.
I always get sad when I create a ticket, see the "ticket created" toast, realize "oh shoot, I forgot to add a screenshot," and go to click the toast to get to my ticket, but the toast has already disappeared. Because then I know I'm going to waste the next five minutes of my life looking for it.
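A hedged sketch of the obvious fix, assuming a plain DOM toast (every name here is made up): keep the link alive until the user dismisses it, instead of auto-removing it on a timer.

```typescript
// Hypothetical sketch: a "created" toast whose link stays reachable.
interface Toast { message: string; href: string; dismiss(): void; }

function showCreatedToast(ticketUrl: string): Toast {
  const el = document.createElement("a");
  el.textContent = "Ticket created: open it";
  el.href = ticketUrl;
  document.body.appendChild(el);
  // Deliberately no setTimeout(remove, 5000): the link persists until
  // the user explicitly dismisses it.
  return { message: el.textContent, href: ticketUrl, dismiss: () => el.remove() };
}
```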
FWIW GitHub has a similarly shitty search interface. Not sure why.
At least for consumer products you can just issue a chargeback if the company does shitty things that prevent you from canceling. I've done that many times. And you can probably file a complaint with the FTC. I've never done that though.
> If a model finally comes out that produces an excellent SVG of a pelican riding a bicycle you can bet I’m going to test it on all manner of creatures riding all sorts of transportation devices.
This relies on the false premise that, if they included it in their training dataset, the result would be perfect. All they need is to be good enough and better than the others, not perfect.
I'm not sure we can have a "perfect" pelican riding a bicycle. Like, I could probably commission a highly experienced artist to draw one and I don't think it would be perfect. The legs would probably have to be too long, or the pedals oddly placed, or the handlebars strange, or the wings would have hands.
Based on the one Simon commented though, I'd say we're in decent territory to try the latter part of his hypothesis.
> The legs would probably have to be too long, or the pedals oddly placed, or the handlebars strange, or the wings would have hands.
In all seriousness, that's what makes it an interesting test: it's asking for something technically impossible, that requires artistic license to make coherent.
Making specific choices on where to bend reality (and where not to) is a big chunk of visual art.