This is basically a GPT-4-level model that runs (quantized) on a laptop with 32 GB of RAM.
Yes, it doesn't recall facts from its training data as well, but with tool use (e.g. Wikipedia lookup) that's not a problem, and arguably even preferable to a larger model.
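To make that concrete, here's a minimal sketch of the Wikipedia-lookup pattern; the endpoint, port, and qwen3:30b tag are assumptions, placeholders for whatever you actually run locally:

```python
# Minimal tool-use sketch: a local Qwen3 server behind an OpenAI-compatible
# API (Ollama's default port assumed here) plus a Wikipedia lookup to cover
# the weaker fact recall of a small model.
import json
import requests
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

def wikipedia_search(query: str) -> str:
    """Return the top search snippet from the public MediaWiki API."""
    r = requests.get("https://en.wikipedia.org/w/api.php", params={
        "action": "query", "list": "search",
        "srsearch": query, "format": "json",
    })
    hits = r.json()["query"]["search"]
    return hits[0]["snippet"] if hits else "no results"

tools = [{
    "type": "function",
    "function": {
        "name": "wikipedia_search",
        "description": "Look up a topic on Wikipedia",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "When was the Eiffel Tower completed?"}]
resp = client.chat.completions.create(
    model="qwen3:30b", messages=messages, tools=tools)

# If the model requests the tool, run it and feed the result back.
msg = resp.choices[0].message
if msg.tool_calls:
    messages.append(msg)
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        messages.append({"role": "tool", "tool_call_id": call.id,
                         "content": wikipedia_search(args["query"])})
    resp = client.chat.completions.create(model="qwen3:30b", messages=messages)

print(resp.choices[0].message.content)
```

The model only has to know when to look something up, not memorize the fact itself, which is why weaker recall stops mattering.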
It’s interesting how the Qwen team more or less proved that hybrid reasoning doesn’t work and makes things worse. The fact that this model is almost on par with the bigger model in non-thinking mode (the old hybrid release; they've since released non-hybrid models) is crazy.
Qwen3 32B is a hybrid reasoning model and is very good. You have to generate a lot of think tokens for any agentic activity, but you will probably run the model locally, so it won't be a problem. If you need something quick and simple, /no_think is good enough in my experience. It might also be because it's not a MoE architecture.
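For anyone who hasn't tried the soft switch, a sketch of what /no_think looks like in practice (OpenAI-compatible local endpoint assumed; the port and model tag are illustrative):

```python
# The hybrid Qwen3 models accept a /no_think soft switch in the prompt,
# which skips the <think> block entirely; handy for quick, simple queries.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

resp = client.chat.completions.create(
    model="qwen3:32b",
    messages=[{"role": "user",
               "content": "Convert 72F to Celsius. /no_think"}],
)
print(resp.choices[0].message.content)
```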
Qwen3 32B was a hybrid model that came out in April, but these new Qwen July models have all ditched the hybrid mechanism and are either thinking or non-thinking.
By Qwen3-32B you mean the first released version from late April? I don't think Qwen3-32B-2507 has been released yet.
I agree with GP that since Qwen is now releasing updated Qwen3 versions without hybrid reasoning, and seeing a significant performance boost in the process, it likely means the hybrid reasoning experiment was a failure.
Isn't that because all "reasoning" approaches are very much fake? The model cannot internalise the concepts it has to reason about. For instance, if you ask it why water feels wet, it is unable to grasp the concept of feeling and the sensation of wetness, but it will for sure "decompress" learned knowledge of people describing what it is like to feel water.
Everything about LLMs is fake. The "reasoning" trick is still demonstrably useful - the benchmarks consistently show models using that trick performing better at harder code challenges, for example.
I'd argue that what's generally considered "reasoning" isn't actually rooted in understanding either. It's just the process you apply to get to a conclusion; expressed more abstractly, it's about drawing logical connections between points and extrapolating from them.
To quote the definition: "the action of thinking about something in a logical, sensible way."
I believe it's rooted in mathematics, not physics. That's probably why there is such a focus on the process instead of the result.
I’ve used it with Aider (32B and 30B, the previous 30B one; haven't tried this fully non-thinking one yet) and 4B with Home Assistant. Both work great in terms of tool calling.
This model is truly the best for local document processing. It’s super fast, very smart, has a low hallucination rate, and has great long context performance (up to 256k tokens). The speed makes it a legitimate replacement for those closed, proprietary APIs that hoard your data.
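As a rough sketch of the workflow (endpoint, model tag, and filename are assumptions, and the server needs to be launched with a large enough context window):

```python
# Local document processing: the file never leaves your machine.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
document = open("report.txt").read()  # processed entirely locally

resp = client.chat.completions.create(
    model="qwen3:30b",
    messages=[
        {"role": "system",
         "content": "Summarize the key facts. Do not invent any."},
        {"role": "user", "content": document},
    ],
)
print(resp.choices[0].message.content)
```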
Can't wait for it to be available in Ollama so that I can run my spam-filtering benchmarks against it. qwen3:30b-a3b-q4_K_M was very good, bested only by gemma3:27b-it-qat for spam filtering. But gemma3 is much slower. Looking forward to trying this!
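In case it's useful, the core of the benchmark loop is basically this; a sketch using the ollama Python client, with the new model tag guessed until the 2507 update actually lands:

```python
# Spam classification via the local Ollama API; the model tag below is
# the old quant mentioned above. Swap in the 2507 tag once it's published.
from ollama import chat

def is_spam(email_text: str) -> bool:
    resp = chat(model="qwen3:30b-a3b-q4_K_M", messages=[
        {"role": "system",
         "content": "Classify this email as SPAM or HAM. Answer with one word."},
        {"role": "user", "content": email_text},
    ])
    return "SPAM" in resp["message"]["content"].upper()

print(is_spam("You have won a FREE cruise! Click here to claim your prize."))
```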
As jasonjmcghee says, they're available... but if you go to ollama.com and sort models by "newest", you'll see Mistral (specifically mistral-small3.2 at this writing), because they seem to sort by the newest model family rather than the newest update. So you need to scroll down to "qwen3" to see that it's been updated.
I got a cute pelican out of it (with a smile!) https://simonwillison.net/2025/Jul/29/qwen3-30b-a3b-instruct...
I ran a version of it on my Mac using https://huggingface.co/lmstudio-community/Qwen3-30B-A3B-Inst... - it uses 30 GB of RAM, so it probably needs 48 GB for comfort.
Can you share more insights on this? Going by @simonw's testing, the quantized model doesn't seem close to GPT-4 level.
It would be nice to have a fast local model that is good at using tools.
Have you tried using them with something like Claude Code or aider?
My notes (pelican and space invaders included) here: https://simonwillison.net/2025/Jul/30/qwen3-30b-a3b-thinking...
This is the 5th model from Qwen in 9 days!
Qwen3-235B-A22B-Instruct-2507 - 21st July
Qwen3-Coder-480B-A35B-Instruct - 22nd July
Qwen3-235B-A22B-Thinking-2507 - 25th July
Qwen3-30B-A3B-Instruct-2507 - 29th July
Qwen3-30B-A3B-Thinking-2507 - today
https://ollama.com/library/qwen3:30b
Slightly frustrating. But good to know.