Hacker News | speedgoose's comments

Yeah so athletes with more money and better access to doping products win instead.

Hard pass.


Also will encourage athletes to give themselves long term health issues for short term performance gains.

I'm of the "your body, your choice" mindset.

To me the decision to take PEDs doesn't feel different from being an alcoholic or having an abortion.

I wouldn't recommend anyone become an alcoholic, but it's their life and people ought to have the freedom of choice.


I’m not sure kids in competitive sports will be able to make an informed decision without any pressure.

Sure, adults should be able to take PEDs if they want to. But there's no reason to allow doping cheaters to enter sanctioned competitive events. It's no different from forcing all competitors to follow equipment rules. Like for the discus throw everyone has to use the same weight. Or for bike racing you can't install a motor.

Would you think it a poor dynamic if a company offered to pay people a good salary simply to be heavy sustained drinkers, but only for some limited amount of time? I'd say the problem is that the Moloch attractor tends to undermine this lofty ideal of "freedom of choice".

At least that way it produces tangible value for the rest of us.

The current idea of sports is that athletes wreck themselves for mere performance value (and money for the people who set it all up, with a bit trickling down to the athletes for enabling it). As far as I understand, nothing they directly do is otherwise reusable to anyone else.

I’d rather watch a live commercial for human enhancement industries. At least that’s something that eventually becomes available to everyone.


The "multiplier" on Github Copilot went from 3 to 7.5. Nice to see that the improvement is actually only 20-30%, and that Microsoft wants to lose money slightly more slowly.

https://docs.github.com/fr/copilot/reference/ai-models/suppo...


Yep, and I just made a recommendation that was essentially "never enable Opus 4.7" to my org as a direct result. We have Opus 4.6 (3x) and Opus 4.5 (3x) enabled currently. They are worth it for planning.

At 7.5x for 4.7, heck no. It isn't even clear it is an upgrade over Opus 4.6.


7.5 is a promotional rate; it will go up to 25. And in May you will be switched to per-token billing.

Opus 4.5 and 4.6 will be removed very soon.

So what is your contingency plan?


Are you saying github copilot is switching to a per token billing model? If so, you have a link to that?

Can you link to a source for anything you're claiming?

https://github.blog/changelog/2026-04-16-claude-opus-4-7-is-...

> Over the coming weeks, Opus 4.7 will replace Opus 4.5 and Opus 4.6 in the model picker for Copilot Pro+.

> This model is launching with a 7.5× premium request multiplier as part of promotional pricing until April 30th

TBF, it's a rumour that they are switching to per-token pricing in May, but it comes from an (apparent) insider, and seeing how good a deal the current per-request pricing is, everyone expects them to either bump prices or switch to per-token pricing sometime soon.


The per-request pricing is ridiculous (in a good way, for the user). You can get so much done on a single prompt if you build the right workflow. I'm sure they'll change it soon

Yeah it seems insane that it's priced this way to me too. Using sonnet/opus through a ~$40 a month copilot plan gives me at least an order of magnitude more usage than a ~$40 a month claude code plan (the usage limits on the latter are so low that it's effectively not a viable choice, at least for my use cases).

The models are limited to 160k token context length but in practice that's not a big deal.

Unless MS has a very favourable contract with Anthropic or they're running the models on their own hardware there's no way they're making money on this.
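A back-of-the-envelope sketch with made-up numbers (the token count, per-token price, and per-request price below are all hypothetical, not Copilot's or Anthropic's actual figures) of why a single premium request driving a long agentic session is such a good deal compared to per-token billing:

```python
# All numbers are invented, purely to illustrate the pricing gap.
tokens_per_agentic_request = 400_000  # one long agent session on one premium request
per_million_token_price = 5.00        # made-up blended $/1M tokens
per_request_price = 0.12              # made-up effective price of one premium request

# What the same session would cost under per-token billing.
per_token_cost = tokens_per_agentic_request / 1_000_000 * per_million_token_price
print(f"per-token billing: ${per_token_cost:.2f} vs per-request: ${per_request_price:.2f}")
```

With these invented numbers the same session costs roughly 17x more under per-token billing, which is why a pricing change is widely expected.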


Yeah, you can even write your own harness that spawns subagents for free, and get essentially free Opus calls too. Insane value; I'm not at all surprised they're making changes. Oh well. It was a pain in the ass to use Copilot since it had a slightly different protocol and OAuth, so it wasn't supported in a lot of tools. Now I'll probably go with Ollama cloud, which is supported by pretty much everything.

Microsoft are going to be removing Opus 4.5 and 4.6 from Copilot soon so I'd enjoy the lower cost while it lasts.

Manage the budget, not the implementation. Top-down decisions like "use a cheap model" risk optimizing for the wrong things. If we lose a 90% cache hit rate on the expensive models by context-switching to a cheap one, there are no savings. Set the budget and let the devs optimize.

In Copilot, I find it hard to justify using Opus at even 3x versus just using GPT 5.4 high at 1x.

I went from plan with opus, implement with claude, to simply plan and implement with GPT 5.4

It's a very good model for a very good price


What is "claude"?

I don't know how you guys are not seeing 4.7 as an upgrade, it just does so much more, so much better. I guess lower complexity tasks are saturated though.

Anecdotally, I've been leaning on 4.6 heavily, and today 4.7 hallucinated on some agentic research it was doing. I hadn't seen it do that before.

When pushed, it did the ol' "whoopsie, silly me"; it turned out the hallucination had been flagged by the agent and ignored by Opus.

Makes it hard to trust it, which sucks as it's a heavy part of my workflow.


This article is only about the tokenizer. It doesn't measure the number of tokens needed for each request, which could be higher or lower overall.
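A toy calculation (all numbers invented) of why tokenizer density alone doesn't determine request cost: a denser tokenizer can still produce more tokens per request if the new model emits longer outputs.

```python
# Invented numbers: the new tokenizer packs 5 characters per token instead of 4
# (denser), but the new model also emits twice as many output characters.
prompt_chars = 10_000
old_output_chars, new_output_chars = 4_000, 8_000
old_chars_per_token, new_chars_per_token = 4, 5

old_request_tokens = (prompt_chars + old_output_chars) // old_chars_per_token
new_request_tokens = (prompt_chars + new_output_chars) // new_chars_per_token
print(old_request_tokens, new_request_tokens)  # 3500 3600
```

Despite the 20% denser tokenizer, the request as a whole uses more tokens, which is the part the article doesn't measure.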

And that is temporary pricing. Looking at 4.6 fast, I'm assuming this price will go up to 15 once the promo ends

oh wow, that is very telling!

Opus 4.6 also just got dumber. It's dismissive, hand-wavy, jumps to conclusions way too quickly, and skips reasoning... The bubble is going to burst: either some big breakthrough comes up or we are going to see a very fast enshittification.

I prefer Ollama over the suggested alternatives.

I will switch once we have a good user experience for simple features.

A new model is released on HF or the Ollama registry? One `ollama pull` and it's available. It's underwhelming? `ollama rm`.


> This creates a recurring pattern on r/LocalLLaMA: new model launches, people try it through Ollama, it’s broken or slow or has botched chat templates, and the model gets blamed instead of the runtime.

Seems like maybe, at least some of the time, you're being underwhelmed by Ollama, not the model.

The better performance alone seems like reason enough to switch away.


I follow the llama.cpp runtime improvements, and it's also true for this project. They may rush a bit less, but you also have to wait a few days after a model release to get a working runtime with most features.

Model authors are welcome to add support to llama.cpp before release like IBM did for granite 4 https://github.com/ggml-org/llama.cpp/pull/13550

`wget https://huggingface.co/[USER]/[REPO]/resolve/main/[FILE_NAME...`

`rm [FILE_NAME]`

With Ollama, the initial one-time setup is a little easier, and the CLI is useful, but is it worth dysfunctional templates, worse performance, and the other issues? Not to me.

Jinja templates are very common, and Jinja is not always losslessly convertible to the Go template syntax expected by Ollama. This means that some models simply cannot work correctly with Ollama. Sometimes the effects of this incompatibility are subtle and unpredictable.
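As a made-up illustration (this fragment is hypothetical, not from any real model): Jinja whitespace control, filters, and loop state have no one-to-one equivalent in Go's text/template syntax, so a conversion has to approximate them.

```jinja
{#- Hypothetical chat template fragment -#}
{%- for message in messages -%}
  {%- if loop.last and message.role == 'user' -%}
<|user|>{{ message.content | trim }}<|end|>
  {%- endif -%}
{%- endfor -%}
```

Go templates express these ideas differently (range actions, function calls instead of `| filter` pipes, `{{- -}}` trim markers) and have no direct counterpart to `loop.last`, which is exactly where the subtle, unpredictable breakage creeps in.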


you can pull directly from huggingface with llama.cpp, and it also has a decent web chat included

Does it have a model registry with an API and hot swapping, or do you still have to use something like llama-swap as suggested in the article? Or is it CLI-only?

You can now have multiple models served, with loading/unloading, using just the server binary.

https://github.com/ggml-org/llama.cpp/blob/master/tools/serv...


It only lacks the automatic FIFO loading/unloading then. Maybe it will be there in a few weeks.

You have no idea what you are downloading with such a pull. At least LM Studio gives you access to all the different versions of the same model.

https://ollama.com/library/gemma4/tags

I see quite a few versions, and I can also use hugging face models.


I used to joke that by using Google products, the NSA backs up my data, but I'm not sure I like ICE having access to my YouTube history.

Just get a friend overseas to email you and it kicks off the backup. Best UX of any Google product.

I browse old.reddit.com on mobile.

Not the person you responded to, but in my experience the answer is a big yes.

It's perhaps naive, but could he create a new organisation, like a "TotallyNotVeraCrypt" French loi 1901 association at a different address, and create a new Microsoft account, making sure it passes all the requirements?

Yeah but isn't the point of these certificates to express trust?

The point isn't (or: shouldn't be) to forcefully find your way through some back alley to make it look legit. It's to certify that the software is legit.

Trust goes both ways: we ought to be able to trust Microsoft to act as a responsible CA. Obfuscating why they revoked trust (as is apparently the case) and leaving the phone ringing hurts trust in MS as a CA and as an organization.


who on planet earth trusts a piece of software because Microsoft signed it?

There are different types of trust, but at the very least, with such a signature you can trust that the piece of software really comes from VeraCrypt and not from a malicious third party.

For one: Most if not all virus scanners.

A signature is a signal, not an absolute. Although, to be fair, if Microsoft (or most other CAs) had done a better job, then that trust would have carried more weight than it does currently.


Trust isn't binary, it's a spectrum. A signature is a signal that should increase trustworthiness. Not the strongest signal, perhaps even a weak one, but it's not zero.

That's what VeraCrypt is: a fork of the original TrueCrypt after all the drama, security doubts, and eventual discontinuation. It took a long time and two independent audits to establish trust in it.

Probably not French, though, given how hostile France appears to be to encryption/security-related projects (GrapheneOS had good arguments about that).

The author is now based in Japan, and even owns a veracrypt.jp domain. Meanwhile, the old veracrypt.fr domain redirects to veracrypt.io.

Seems rather clear that he doesn't want French jurisdiction.


And Microsoft will be happy to shut that one down too, because of their incompetence.

So we'd better find a real solution now.


If they don’t think to mention the country and write in English, we know where they are from.

I guess using French words is safe for now.


TranslateGemma is great.

