Hacker News | linolevan's comments

How much is OpenAI paying you for this?

Absolutely nothing. I have active subscriptions for both. Claude is better at FE stuff. Codex is better at actual programming.

https://tangled.org/ <--- GitHub on ATProto

that's all I'm aware of

(edit) Oops, just saw that you mentioned it; your first line confused me then. Tangled is awesome!



Where on earth are you living with that kind of price point? Unreal.

Italy, France and Spain have 200GB+ plans for 10€. Romania reportedly has unlimited for 4€ but I don't know which operator.

US plans just aren't comparable, as they've historically been f'd with astronomical monthly payments.


> Romania reportedly has unlimited for 4€ but I don't know which operator.

Orange Yoxo is the only one that's actually unlimited; all the others have fine print somewhere along the lines of "up to X GB/month, then bandwidth is severely throttled".

I'm using the 4.9€ plan for a mountain webcam[1] and they have been true to their word, no throttling so far.

[1] https://ignis.maramures.io/


Played around with the code to implement a little bit of SIMD. Was able to squeeze out a decent improvement: ~250 fps avg, ~140 low, ~333 high (on an M4). Looks pretty straightforward to do threading with as well. Cool stuff! Could work to bring more GPU stuff back down to the CPU.

Oops! Looks like we posted at the same time.

> Does bonus usage count against my weekly usage limit?

> No. The additional usage you get during off-peak hours doesn’t count toward any weekly usage limits on your plan.


So the first 100% of the five-hour usage limit counts against the weekly limit at normal rates, but the additional 100% doesn't?

I just watched my "weekly limit" get used while I ran a claude code command.

I'm not sure how to square that with the quote you gave.


Did you exhaust the five-hour usage limit already? As I understand it, the "additional usage" refers to anything beyond the standard five-hour usage limit.

Did… you copy paste this from another discussion? I’ve read this comment before.

Me too. This is funny.

According to the providers that I keep track of, Cumulus is typically pretty price competitive, except for MiniMax, where DeepInfra and Together are much cheaper, and GLM-5, where DeepInfra and z.AI's own hosting are much cheaper.

(Also, technically Qwen3 8B via Novita takes first place, but only barely.)


Can we get context length / output length docs? You mention a "Max tokens (chat)" of 128k, but it's unclear what that means. Also, your docs page looks out of date compared to your playground page.

Also, a piece of feedback: it kind of sucks to have glm/minimax/kimi on separate API endpoints. I assume it's a game you play to get lower latency on routing for popular models, but from a consumer perspective it's not great.


Thank you for the feedback. Taking note of this!
