Hacker News | gunalx's comments

Qwen 3.5 is really decent, outside of some weird failures on some scaffolding with seemingly differently trained tools.

Strong vision and reasoning performance, and the 35-A3B model runs pretty OK on a 16 GB GPU with some CPU layers.
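As a sketch of that kind of split, partial GPU offload with llama.cpp looks roughly like this; the model filename and layer count here are assumptions, not the commenter's actual setup:

```shell
# Sketch: run a quantized model with partial GPU offload via llama.cpp.
# Filename and layer count are assumptions; lower -ngl (GPU layers)
# until the model fits in 16 GB of VRAM, the remaining layers run on CPU.
llama-server -m qwen3.5-35b-a3b-q4_k_m.gguf -ngl 28 -c 8192
```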


For privacy-preserving direct inference: Fireworks AI or Nebius.

Otherwise OpenRouter for routing to lots of different providers.
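For illustration, OpenRouter exposes an OpenAI-compatible endpoint, so a routed request can look like the following; the model slug is just an example, and `OPENROUTER_API_KEY` is assumed to be set in the environment:

```shell
# Sketch: chat completion via OpenRouter's OpenAI-compatible API.
# The model slug is an example; OPENROUTER_API_KEY must be set.
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen/qwen3-235b-a22b",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```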


Sad to not see smaller distills of this model released alongside the flagship. That has historically been why I liked Qwen releases (lots of different sizes to pick from on day one).


Judging by the code in the HF transformers repo[1], smaller dense versions of this model will most likely be released at some point. Hopefully soon.

[1]: https://github.com/huggingface/transformers/tree/main/src/tr...


Per https://github.com/QwenLM/Qwen3.5, more are coming:

> News

> 2026-02-16: More sizes are coming & Happy Chinese New Year!


I get the impression the multimodal stuff might make it a bit harder?


It does have dotfiles (.config/noctalia), and it is possible to configure it mostly declaratively if you want.


You can also kinda read the three categories as Office, Azure, and Windows, but that is a gross oversimplification.


GLM models with vision end in a V.



Agreed, it felt a little less packed than before. The H building was brutal, though.


That is exactly the only feature I would like to disable (I have custom theming already).


Check out the Settings panel; this feature is shown there in the GitHub screencast.


Well yes, but it isn't too cheap for how old it is.


Some dude literally gave away a couple of terabytes on the homelab subreddit on Reddit the other day.

