Sure, adults should be able to take PEDs if they want to. But there's no reason to allow doping cheaters to enter sanctioned competitive events. It's no different from forcing all competitors to follow equipment rules. Like for the discus throw everyone has to use the same weight. Or for bike racing you can't install a motor.
Would you think it a poor dynamic if a company offered to pay people a good salary simply to be heavy sustained drinkers, but only for some limited amount of time? I'd say the problem is that the Moloch attractor tends to undermine this lofty ideal of "freedom of choice".
At least that produces tangible value for the rest of us.
The current idea of sports is that athletes wreck themselves for mere performance value (and money for the people who set it up, with a bit trickling down to the athletes for enabling it all). As far as I can tell, nothing they directly do is otherwise reusable by anyone else.
I’d rather watch a live commercial for human enhancement industries. At least that’s something that eventually becomes available to everyone.
The "multiplier" on GitHub Copilot went from 3 to 7.5. Nice to see that the improvement is actually only 20-30%, with Microsoft wanting to lose money slightly slower.
Yep, and I just made a recommendation that was essentially "never enable Opus 4.7" to my org as a direct result. We have Opus 4.6 (3x) and Opus 4.5 (3x) enabled currently. They are worth it for planning.
At 7.5x for 4.7, heck no. It isn't even clear it is an upgrade over Opus 4.6.
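For concreteness, here is the multiplier math as I understand it. The monthly quota below is a hypothetical figure for illustration; only the 3x and 7.5x multipliers come from the thread:

```python
# Rough sketch of how premium-request multipliers translate into usable
# model calls. MONTHLY_QUOTA is an assumed plan allowance, not a
# confirmed Copilot number; the multipliers are the ones discussed above.
MONTHLY_QUOTA = 1500

def calls_per_month(multiplier: float, quota: int = MONTHLY_QUOTA) -> int:
    """Each model invocation burns `multiplier` premium requests."""
    return int(quota // multiplier)

print(calls_per_month(3.0))   # Opus 4.5/4.6 at 3x -> 500 calls
print(calls_per_month(7.5))   # Opus 4.7 at 7.5x -> 200 calls
print(7.5 / 3.0)              # effective price increase: 2.5x
```

Same plan, 60% fewer Opus calls, for a model that may not even be better.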
> Over the coming weeks, Opus 4.7 will replace Opus 4.5 and Opus 4.6 in the model picker for Copilot Pro+.
> This model is launching with a 7.5× premium request multiplier as part of promotional pricing until April 30th
TBF, it's a rumour that they are switching to per-token price in May, but it's from an insider (apparently), and seeing how good of a deal the current per-request pricing is, everyone expects them to bump prices sometime soon or switch to per-token pricing.
The per-request pricing is ridiculous (in a good way, for the user). You can get so much done on a single prompt if you build the right workflow. I'm sure they'll change it soon.
Yeah, it seems insane to me too that it's priced this way. Using Sonnet/Opus through a ~$40/month Copilot plan gives me at least an order of magnitude more usage than a ~$40/month Claude Code plan (the usage limits on the latter are so low that it's effectively not viable, at least for my use cases).
The models are limited to 160k token context length but in practice that's not a big deal.
Unless MS has a very favourable contract with Anthropic, or is running the models on its own hardware, there's no way they're making money on this.
Yeah, you can even write your own harness that spawns subagents for free, and get essentially free Opus calls too. Insane value; I'm not at all surprised they're making changes. Oh well. It was a pain in the ass to use Copilot since it had a slightly different protocol and OAuth, so it wasn't supported in a lot of tools. Now I'll probably go with Ollama Cloud, which is supported by pretty much everything.
Manage the budget, not the implementation. Top-down decisions like "use a cheap model" risk optimizing for the wrong things: if we lose a 90% cache-hit rate on the expensive models by context-switching to a cheap one, there are no savings. Set the budget and let the devs optimize.
I don't know how you guys are not seeing 4.7 as an upgrade, it just does so much more, so much better. I guess lower complexity tasks are saturated though.
Opus 4.6 also just got dumber. It's dismissive, hand-wavy, jumps to conclusions way too quickly, skips reasoning... The bubble is going to burst: either some big breakthrough comes along, or we're going to see very fast enshittification.
> This creates a recurring pattern on r/LocalLLaMA: new model launches, people try it through Ollama, it’s broken or slow or has botched chat templates, and the model gets blamed instead of the runtime.
Seems like maybe, at least some of the time, you're being underwhelmed by Ollama, not the model.
The better performance alone seems worth switching away for.
I follow the llama.cpp runtime improvements, and it's also true for that project. They may rush a bit less, but you still have to wait a few days after a model release to get a working runtime with most features.
With Ollama, the initial one-time setup is a little easier, and the CLI is useful, but is it worth dysfunctional templates, worse performance, and the other issues? Not to me.
Jinja templates are very common, and Jinja is not always losslessly convertible to the Go template syntax expected by Ollama. This means that some models simply cannot work correctly with Ollama. Sometimes the effects of this incompatibility are subtle and unpredictable.
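A toy illustration of the gap (assuming jinja2 is installed; the template, tokens, and fallback prompt are invented for this sketch, not taken from any real model):

```python
from jinja2 import Template  # third-party: pip install jinja2

# Toy chat template leaning on Jinja features (`selectattr` with the
# `equalto` test, the `trim` filter, inline loop filtering) that have no
# one-to-one counterpart in Go's text/template. A mechanical conversion
# to Go template syntax has to approximate these, which is where subtle
# prompt-format breakage can creep in.
tmpl = Template(
    "{% set sys = (messages | selectattr('role', 'equalto', 'system') | list | first) %}"
    "<|system|>{{ sys.content if sys else 'You are helpful.' }}<|end|>"
    "{% for m in messages if m.role != 'system' %}"
    "<|{{ m.role }}|>{{ m.content | trim }}<|end|>"
    "{% endfor %}"
)

# With no system message, the template falls back to a default prompt:
print(tmpl.render(messages=[{"role": "user", "content": "  Hi  "}]))
# With one, it is picked out of the list wherever it appears:
print(tmpl.render(messages=[
    {"role": "system", "content": "Be terse."},
    {"role": "user", "content": "Hi"},
]))
```

If a converted template silently drops the `trim` or mishandles the missing-system fallback, the model still runs; it just sees a prompt format it was never trained on.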
Does it have a model registry with an API and hot swapping, or do you still have to use something like llama-swap, as suggested in the article? Or is it CLI-only?
It's perhaps naive, but couldn't he create a new organisation, like a "TotallyNotVeraCrypt" French loi 1901 association, at a different address, and create a new Microsoft account, making sure it passes all the requirements?
Yeah but isn't the point of these certificates to express trust?
The point isn't (or: shouldn't be) to forcefully find your way through some back alley to make it look legit. It's to certify that the software is legit.
Trust goes both ways: we ought to trust Microsoft to act as a responsible CA. Obfuscating why they revoked trust (as is apparently the case) and leaving the phone ringing is hurting trust in MS as a CA and as an organization.
There are different types of trust, but at the very least with such a signature you can trust that the piece of software is really from Veracrypt and not from a malicious third party.
A signature is a signal, not an absolute. Although, to be fair, if Microsoft (or most other CAs) had done a better job, then that trust would have carried more weight than it does currently.
Trust isn't binary, it's a spectrum. A signature is a signal that should increase trustworthiness. Not the strongest signal, perhaps even a weak one, but it's not zero.
That's what VeraCrypt is: a fork of the original TrueCrypt after all the drama, security doubts, and eventual discontinuation. It took a long time and two independent audits to establish trust in it.
Hard pass.