I had a similarly bad experience running Qwen 3.5 35b a3b directly through llama.cpp. It would massively overthink every request. Somehow in OpenCode it just worked.
I think it comes down to temperature and other sampling settings (see Daniel's post), but I haven't messed with it enough to be sure.
It's a point update to the closed-weight Qwen3.5-Plus. Of course there are no weights. Alibaba has consistently not released weights for their best models.
I missed this. So they didn’t really solve much at all. I guess at least it’s compatible with other runtimes. But yeah, who would’ve guessed that Cloudflare software would (besides being vibeslop) prefer Cloudflare infra. This, of course, makes the software quite hard to adopt.