I have a small physics-based problem I pose to LLMs. It's tricky for humans as well, and all LLMs I've tried (GPT o3, Claude 3.7, Gemini 2.5 Pro) fail to answer correctly. If I ask them to explain their answer, they do get it eventually, but none get it right the first time. Qwen3 with max thinking got it even more wrong than the rest, for what it's worth.
mrkeen 4 hours ago [-]
As they say, we shouldn't judge AI by the current state-of-the-art, but by how far and fast it's progressing. I can't wait to see future models get it even more wrong than that.
kaoD 4 hours ago [-]
Personally (anecdata) I haven't experienced any practical progress in my day-to-day tasks for a long time, no matter how good they became at gaming the benchmarks.
They keep being impressive at what they're good at (aggregating sources to solve a very well known problem) and terrible at what they're bad at (actually thinking through novel problems or old problems with few sources).
E.g. all ChatGPT, Claude and Gemini were absolutely terrible at generating Liquidsoap[0] scripts. It's not even that complex, but there's very little information to ingest about the problem space, so you can actually tell they are not "thinking".
Absolutely. All models ar terrible with Objective-C and Swift, compared to let's say JS/HTML/Python.
However, I've realized that Claude Code is extremely useful for generating somewhat simple landing pages for some of my projects. It spits out static html+js which is easy to host, with somewhat good looking design.
The code isn't the best and to some extent isn't maintainable by a human at all, but it gets the job done.
apercu 11 minutes ago [-]
Interesting, I'll have to try that. All the "static" page generators I've tried require React....
ggregoryarms 47 minutes ago [-]
Building a basic static html landing page is ridiculously easy though. What js is even needed? If it's just an html file and maybe a stylesheet of course it's easy to host. You can apply 20 lines of css and have a decent looking page.
These aren't hard problems.
apercu 10 minutes ago [-]
> These aren't hard problems.
So why do so many LLMs fail at them?
prox 3 hours ago [-]
Absolutely, as soon as they hit that mark where things get really specialized, they start failing a lot. They do generalizations on well documented areas pretty good. I only use it for getting a second opinion as it can search through a lot of documents quickly and find me alternative means.
42lux 2 hours ago [-]
Haven’t seen much progress in base models since gpt4. Deep thinking and whatever else came in the last year are just bandaids hiding the shortcomings of said models and were achievable before with the right tooling. The tooling got better the models themselves are just marginally better.
kenjackson 12 hours ago [-]
You really had me until the last half of the last sentence.
stavros 12 hours ago [-]
The plural of anecdote is data.
rtaylorgarlock 11 hours ago [-]
Only in the same way that the plural of 'opinion' is 'fact' ;)
stavros 11 hours ago [-]
Except, very literally, data is a collection of single points (ie what we call "anecdotes").
rwj 11 hours ago [-]
Except that the plural of anecdotes is definitely not data, because without controlling for confounding variables and sampling biases, you will get garbage.
fhd2 4 hours ago [-]
Based on my limited understanding of analytics, the data set can be full of biases and anomalies, as long as you find a way to account for them in the analysis, no?
scubbo 9 hours ago [-]
Garbage data is still data, and data (garbage or not) is still more valuable than a single anecdote. Insights can only be distilled from data, by first applying those controls you mentioned.
bcoates 11 hours ago [-]
No, Wittgenstein's rule following paradox, Shannon sampling theorem, the law that infinite polynomials pass through any finite set of points (does that have a name?), etc, etc. are all equivalent at the limit to the idea that no amount of anecdotes-per-se add up to anything other than coincidence
whatnow37373 6 hours ago [-]
Without structural assumptions, there is no necessity - only observed regularity. Necessity literally does not exist. You will never find it anywhere.
Hume figured this out quite a while ago and Kant had an interesting response to it. Think the lack of “necessity” is a problem? Try to find “time” or “space” in the data.
Data by itself is useless. It’s interesting to see peoples’ reaction to this.
bijant 6 hours ago [-]
@whatnow37373 — Three sentences and you’ve done what a semester with Kritik der reinen Vernunft couldn’t: made the Hume-vs-Kant standoff obvious. The idea that “necessity” is just the exhaust of our structural assumptions (and that data, naked, can’t even locate time or space) finally snapped into focus.
This is exactly the kind of epistemic lens-polishing that keeps me reloading HN.
tankenmate 5 hours ago [-]
This thread has given me the best philosophical chuckle I've had this year. Even after years of being here, HN can still put an unexpected smile on your face.
inimino 9 hours ago [-]
No, no, no. Each of them gives you information.
bcoates 9 hours ago [-]
In the formal, information-theory sense, they literally don't, at least not on their own without further constraints (like band-limiting or bounded polynomial degree or the like)
nurettin 6 hours ago [-]
They give you relative information. Like word2vec
inimino 9 hours ago [-]
...which you always have.
acchow 4 hours ago [-]
“Plural of anecdote is data” is meant to be tongue-in-cheek.
Actual data is sampled randomly. Anecdotes very much are not.
cess11 6 hours ago [-]
No. Anecdote, ἀνέκδοτον, is a story that points to some abstract idea, commonly having something to do with morals. The word means 'not given out'/'not-out-given'. Data is the plural of datum, and arrives in english not from greek, but from latin. The root is however the same as in anecdote, and datum means 'given'. Saying that 'not-given' and 'collection of givens' is the same is clearly nonsensical.
A datum has a value and a context in which it was 'given'. What you mean by "points" eludes me, maybe you could elaborate.
tankenmate 5 hours ago [-]
"The plural of anecdote is data.", this is right up there with "1 + 1 = 3, for sufficiently large values of 1".
Had an outright genuine guffaw at this one, bravo.
Ahhhahhahahaha stavros is so right but this is such high level bickering I haven't laughed so hard in a long time. Ya’ll are awesome! dymk you deserve a touché for this one.
The challenge for sharing data at this stage of the game is that the game is rigged in datas favor. So stavros I hear you.
To clarify, if we post our data it’s just going to get fed back into the models making it even harder to vet iterations as they advance.
WhitneyLand 10 hours ago [-]
The plural of reliable data is not anecdote.
10 hours ago [-]
tomrod 10 hours ago [-]
Depends on the data generating process.
windowshopping 10 hours ago [-]
"For what it's worth"? What's wrong with that?
Jordan-117 7 hours ago [-]
That's the last third of the sentence.
nopinsight 6 hours ago [-]
Current models are quite far away from human-level physical reasoning (paper below). An upcoming version of models trained on world simulation will probably do much better.
PHYBench: Holistic Evaluation of Physical Perception and Reasoning in Large Language Models
This is more about a physics math aptitude test. You can already see that the best model in math is saturating it halfway. It might not indicate its usefulness in actual physical reasoning, or at the very least, it seems like a bit of a stretch.
furyofantares 10 hours ago [-]
I similarly have a small, simple spatial reasoning problem that only reasoning models get right, and not all of them, and which Qwen3 on max reasoning still gets wrong.
> I put a coin in a cup and slam it upside-down on a glass table. I can't see the coin because the cup is over it. I slide a mirror under the table and see heads. What will I see if I take the cup (and the mirror) away?
tamat 1 hours ago [-]
I always feel that if you share a problem here where LLMs fail, it will end up in their training set and it wont fail to that problem anymore, which means the future models will have the same errors but you have lost your ability to detect them.
natrys 9 hours ago [-]
My first try (omitting chain of thought for brevity):
When you remove the cup and the mirror, you will see tails. Here's the breakdown:
Setup: The coin is inside an upside-down cup on a glass table. The cup blocks direct view of the coin from above and below (assuming the cup's base is opaque).
Mirror Observation: A mirror is slid under the glass table, reflecting the underside of the coin (the side touching the table). If the mirror shows heads, this means the underside of the coin is heads.
Coin Orientation: A coin has two sides: heads and tails. If the underside (facing the table) is heads, the top side (facing away from the table) must be tails.
Result: Removing the cup reveals the coin on the table. Since the top side was tails all along, you will see tails.
Key Insight: The mirror reveals the hidden underside of the coin, but the top side (visible after removing the cup) is its opposite. The mirror’s reflection does not alter the physical orientation of the coin’s sides—it simply reflects the existing state.
Answer: Tails.
SamBam 9 hours ago [-]
Huh, for me it said:
Answer:
You will see the same side of the coin that you saw in the mirror — heads .
Why?
The glass table is transparent , so when you look at the coin from below (using a mirror), you're seeing the top side of the coin (the side currently facing up).
Mirrors reverse front-to-back , not left-to-right. So the image is flipped in depth, but the orientation of the coin (heads or tails) remains clear.
Since the coin hasn't moved during this process, removing the cup and mirror will reveal the exact same face of the coin that was visible via the mirror — which was heads.
Final Answer: You will see heads.
viraptor 6 hours ago [-]
The question doesn't define which side you're going to look from at the end, so either looking down or up is valid.
Rooki 8 hours ago [-]
Manus just answered this correctly for me in one crack:
Me:
I put a coin in a cup and slam it upside-down on a glass table. I can't see the coin because the cup is over it. I slide a mirror under the table and see heads. What will I see if I take the cup (and the mirror) away?
Manus:
This is a classic riddle! Let's think it through. You placed a coin in a cup, turned it upside down on a glass table, and then slid a mirror underneath the table. Looking into the mirror from below, you saw 'heads'. Since the mirror reflects the side of the coin facing downwards through the glass table, the 'heads' side must be facing down. Therefore, the side facing up, hidden inside the cup, must be 'tails'. When you remove the cup, you will see the side facing up, which is tails.
Lucasoato 7 hours ago [-]
I tried with the thinking option on and it gets into some networking errors, if you don’t turn on the thinking it guesses the answer correctly.
> Summary:
- Mirror shows: *Heads* → That's the *bottom face* of the coin.
- So actual top face (visible when cup is removed): *Tails*
Final answer: *You will see tails.*
SamBam 9 hours ago [-]
Yup, it flunked that one.
I also have a question that LLMs always got wrong until ChatGPT o3, and even then it has a hard time (I just tried it again and it needed to run code to work it out). Qwen3 failed, and every time I asked it to look again at its solution it would notice the error and try to solve it again, failing again:
> A man wants to cross a river, and he has a cabbage, a goat, a wolf and a lion. If he leaves the goat alone with the cabbage, the goat will eat it. If he leaves the wolf with the goat, the wolf will eat it. And if he leaves the lion with either the wolf or the goat, the lion will eat them. How can he cross the river?
I gave it a ton of opportunities to notice that the puzzle is unsolvable (with the assumption, which it makes, that this is a standard one-passenger puzzle, but if it had pointed out that I didn't say that I would also have been happy). I kept trying to get it to notice that it failed again and again in the same way and asking it to step back and think about the big picture, and each time it would confidently start again trying to solve it. Eventually I ran out of free messages.
novaRom 5 hours ago [-]
4o with thinking:
By systematic (BFS) search of the entire 32-state space under these rules, one finds no path from to that stays always safe. Thus the puzzle has no solution—there is no way for the man to ferry all four items across without at least one of them being eaten.
mavamaarten 4 hours ago [-]
You go with the cabbage, goat, wolf and lion all together!
cyprx 8 hours ago [-]
i tried grok 3 with Think and it was right also with pretty good thinking
6 hours ago [-]
hmottestad 7 hours ago [-]
Tried it with o1-pro:
> You’ll find that the actual face of the coin under the cup is tails. Seeing “heads” in the mirror from underneath indicates that, on top, the coin is really tails‑up.
UltraSane 7 hours ago [-]
Sonnet 3.7 non-reasoning got it right.
I'll think this through step by step.
When you place a coin in a cup and slam it upside-down on a glass table, the coin will be between the table surface and the cup. When you look at the reflection in the mirror beneath the table, you're seeing the bottom side of the coin through the glass.
Since the mirror shows heads, you're seeing the heads side of the coin reflected in the mirror. In a mirror, the image is reversed left-to-right, but the "heads" and "tails" sides of a coin would not be flipped in this reflection - you'd still recognize heads as heads.
If you take away both the cup and the mirror, you'll see the opposite side of the coin compared to what you saw in the mirror. Since the mirror showed heads, you'll see tails when looking directly at the coin from above the table.
viraptor 9 hours ago [-]
Simple Claude 3.5 with no reasoning gets it right.
furyofantares 9 hours ago [-]
Neat, because 3.7 doesn't.
viraptor 6 hours ago [-]
3.7 works for me too. Of course it's going to depend on temperature a bit.
animal531 2 hours ago [-]
They all are using these tests to determine their worth, but to be honest they don't convert well to real world tests.
For example I tried Deepseek for code daily over a period of about two months (vs having used ChatGPT before), and its output was terrible. It would produce code with bugs, break existing code when making additions, totally fail at understanding what you're asking etc.
ggregoryarms 45 minutes ago [-]
Exactly. If I'm going to be solving bugs, I'd rather they be my own.
baxtr 7 hours ago [-]
This reads like a great story with a tragic ending!
concrete_head 10 hours ago [-]
Can you please share the problem?
stavros 10 hours ago [-]
I don't really want it added to the training set, but eh. Here you go:
> Assume I have a 3D printer that's currently printing, and I pause the print. What expends more energy, keeping the hotend at some temperature above room temperature and heating it up the rest of the way when I want to use it, or turning it completely off and then heat it all the way when I need it? Is there an amount of time beyond which the answer varies?
All LLMs I've tried get it wrong because they assume that the hotend cools immediately when stopping the heating, but realize this when asked about it. Qwen didn't realize it, and gave the answer that 30 minutes of heating the hotend is better than turning it off and back on when needed.
bufferoverflow 6 hours ago [-]
What kind of answer do you expect? It all depends on the hotend shape and material, temperature differences, how fast air moves in the room, humidity of the air, etc.
dtech 7 minutes ago [-]
Keeping something above room temperature will always use more energy than letting it cool down and heating it back up when needed
sampo 4 hours ago [-]
> It all depends on
No it doesn't.
oscord 6 hours ago [-]
Qwen3-32b did it pretty accurately it seems. Calculated heat loss over time going to ambient temp, offered to keep it at standby 100C for short breaks under 10 minutes. Shut down completely for longer breaks.
stavros 3 hours ago [-]
The correct answer is that it's always better to turn it off, though.
pylotlight 10 hours ago [-]
Some calculation around heat loss and required heat expenditure to reheat per material or something?
stavros 10 hours ago [-]
Yep, except they calculate heat loss and required energy to keep heating, but room temperature and energy required to heat from that in the other case, so they wildly overestimate one side of the problem.
bcoates 10 hours ago [-]
Unless I'm missing something holding it hot is pure waste.
markisus 9 hours ago [-]
Maybe it will help to have a fluid analogy. You have a leaky bucket. What wastes more water, letting all the water leak out and then refilling it from scratch, or keeping it topped up? The answer depends on how bad the leak is vs how long you are required to maintain the bucket level. At least that’s how I interpret this puzzle.
herdrick 8 hours ago [-]
No, you should always wait until the last possible moment to refill the leaky bucket, because the less water in the bucket, the slower it leaks, due to reduced pressure.
Torkel 8 hours ago [-]
Does it depend though?
The water (heat) leaking out is what you need to add back. As water level drops (hotend cools) the leaking will slow. So any replenishing means more leakage then you are eventually paying for by adding more water (heat) in.
dTal 3 hours ago [-]
Allowing it to cool below the phase transition point of the melted plastic will cause it to release latent heat, so there is a theoretically possible corner case where maintaining it hot saves energy. I suspect that you are unlikely to hit this corner case, though I am too lazy to crunch the numbers in this comment.
yishanchuan 10 hours ago [-]
don't worry, it is really trickly for training
andrewmcwatters 9 hours ago [-]
Ah! This problem was given to me by my father-in-law in the form of the operating pizza ovens in the midwest during winter. It's a neat, practical one.
phonon 12 hours ago [-]
Qwen3-235B-A22B?
stavros 12 hours ago [-]
Yep, on Qwen chat.
12 hours ago [-]
nothink3194 10 hours ago [-]
[dead]
arthurcolle 12 hours ago [-]
Hi, I'm starting an evals company, would love to have you as an advisor!
999900000999 11 hours ago [-]
Not OP, but what exactly do I need to do.
I'll do it for cheap if you'll let me work remote from outside the states.
refulgentis 11 hours ago [-]
I believe they're kidding, playing on "my singular question isn't answered correctly"
sieabahlpark 9 hours ago [-]
[dead]
8 hours ago [-]
natrys 15 hours ago [-]
They have got pretty good documentation too[1]. And Looks like we have day 1 support for all major inference stacks, plus so many size choices. Quants are also up because they have already worked with many community quant makers.
Not even going into performance, need to test first. But what a stellar release just for attention to all these peripheral details alone. This should be the standard for major release, instead of whatever Meta was doing with Llama 4 (hope Meta can surprise us at LlamaCon tomorrow though).
Second this , they patched all major llm frameworks like llama.cpp, transformers , vllm, sglang, ollama etc weeks before for qwen3 support and released model weights everywhere around same time. Like a global movie release. Cannot undermine mine this level of detail and effort.
echelon 12 hours ago [-]
Alibaba, I have a huge favor to ask if you're listening. You guys very obviously care about the community.
We need an answer to gpt-image-1. Can you please pair Qwen with Wan? That would literally change the art world forever.
gpt-image-1 is an almost wholesale replacement of ComfyUI and SD/Flux ControlNets. I can't underscore how big of a deal it is. As such, OpenAI has leapt ahead and threatens to start capturing more of the market for AI images and video. The expense of designing and training a multimodal model presents challenges to the open source community, and it's unlikely that Black Forest Labs or an open effort can do it. It's really a place where only Alibaba can shine.
If we get an open weights multimodal image gen model that we can fine tune, then it's game over - open models will be 100% the future. If not, then the giants are going to start controlling media creation. It'll be the domain of OpenAI and Google alone. Firing a salvo here will keep media creation highly competitive.
So please, pretty please work on an LLM/Diffusion multimodal image gen model. It would change the world instantly.
And keep up the great work with Wan Video! It's easily going to surpass Kling and Veo. The controllability is already well worth the tradeoffs.
Imustaskforhelp 47 minutes ago [-]
I don't know, the AI image quality has gotten good but it's still slop.
We are forgetting what makes art, well art.
I am not even an artist but yeah I see people using AI for photos and they were so horrendous pre chatgpt-imagen that I had literally told one person if you are going to use AI images, might as well use chatgpt for it.
Also though I would also like to get something like chatgpt-image generating qualities from an open source model. I think what we are really looking for is cheap free labour of alibaba team.
We are wanting for them / anyone to create open source tool so that anyone can then use it, thus reducing the monopoly of openai but that is not what most people are wishing for, they are wishing for this to lead to reduction of price so that they can use it either on their own hardware for very few cost or some providers on openrouter and its alikes for cheap image generation with good quality.
Earlier people used to pay artists, then people started using stock photos, then Ai image gen came, and now we have gotten AI image pretty much good with chatgpt and now people don't even want to pay chatgpt that much money, they want to use it for literal cents.
Not sure how long this trend will continue, when deepseek r1 launched, I remember people being happy that it was open source but 99% people couldn't self host it like I can't because of its needs and we were still using API but just because it was open source, it reduced the price way too much forcing others to reduce it as well, really making a cultural pricing shift in AI.
We are in this really weird spot as humans.
We want to earn a lot of money yet we don't want to pay anybody money/ want free labour from open source which is just disincentivizing open source because now people like to think its free labour and they might be right.
bergheim 10 hours ago [-]
> That would literally change the art world forever.
In what world? Some small percentage up or who knows, and _that_ revolutionized art? Not a few years ago, but now, this.
Wow.
horhay 48 minutes ago [-]
It's pretty much expected that everything is "world shaking" in the modern day tech world. Now whether it's true or not is a different thing everytime. I'm fairly certain even the 4o image gen model has shown weaknesses that other approaches didn't, but you know, newer means absolutely better and will change the world.
Tepix 6 hours ago [-]
Forever, as in for a few weeks… ;-)
Imustaskforhelp 40 minutes ago [-]
oh boy I had a smirk after reading this comment because its partially true.
When deepseek r1 came, it lit the markets on fire (atleast american) and then many thought it would be the best forever / for a long time.
Then came grok3 , then claude 3.7 , then gemini 2.5 pro.
Now people comment that gemini 2.5 pro is going to stay forever.
When deepseek came, there were articles like this on HN:
"Of course, open source is the future of AI"
When Gemini 2.5 Pro came there were articles like this:
"Of course, google build its own gpu's , and they had the deepnet which specialized in reinforced learning, Of course they were going to go to the Top"
We as humans are just trying to justify why certain company built something more powerful than other companies. But the fact is, that AI is still a black box, People were literally say for llama 4:
"I think llama 4 is going to be the best open source model, Zuck doesn't like to lose"
Nothing is forever, its all opinions and current benchmarks. We want the best thing in benchmark and then we want an even better thing, and we would justify why / how that better thing was built.
Every time, I saw a new model rise, people used to say it would be forever.
And every time, Something new beat to it and people forgot the last time somebody said something like forever.
So yea, deepseek r1 -> grok 3 -> claude 3.7 -> gemini 2.5 pro (Current state of the art?), each transition was just some weeks IIRC.
Your comment is a literal fact that people of AI forget.
kadushka 14 hours ago [-]
they have already worked with many community quant makers
I’m curious, who are the community quant makers?
natrys 14 hours ago [-]
I had Unsloth[1] and Bartowski[2] in mind. Both said on Reddit that Qwen had allowed them access to weights before release to ensure smooth sailing.
The space loads eventually as well; might just be that HF is under a lot of load.
sroussey 8 hours ago [-]
Yep, there now. Do wish they included ONNX though.
tough 14 hours ago [-]
Thank you!!
14 hours ago [-]
simonw 7 hours ago [-]
As is now traditional for new LLM releases, I used Qwen 3 (32B, run via Ollama on a Mac) to summarize this Hacker News conversation about itself - run at the point when it hit 112 comments.
The results were kind of fascinating, because it appeared to confuse my system prompt telling it to summarize the conversation with the various questions asked in the post itself, which it tried to answer.
I also have a benchmark that I'm using for my nanoagent[1] controllers.
Qwen3 is impressive in some aspects but it thinks too much!
Qwen3-0.6b is showing even better performance than Llama 3.2 3b... but it is 6x slower.
The results are similar to Gemma3 4b, but the latter is 5x faster on Apple M3 hardware. So maybe, the utility is to run better models in cases where memory is the limiting factor, such as Nvidia GPUs?
[1] github.com/hbbio/nanoagent
phh 3 hours ago [-]
What's cool with those models is that you can tweak the thinking process, all the way down to "no thinking". It's maybe not available in your inference engine though
One person on Reddit claimed the first unsloth release was buggy - if you used that, maybe you can retry with the fixed version?
hobofan 57 minutes ago [-]
I think this was only regarding the chat template that was provided in the metadata (this was also broken in the official release). However, I doubt that this would impact this test, as most inference frameworks will just error if provided with a broken template.
littlestymaar 6 hours ago [-]
Aren't all Qwen models known to perform poorly with system prompt though?
simonw 6 hours ago [-]
I hadn't heard that, but it would certainly explain why the model made a mess of this task.
llm -f hn:43825900 \
'Summarize the themes of the opinions expressed here.
For each theme, output a markdown header.
Include direct "quotations" (with author attribution) where appropriate.
You MUST quote directly from users when crediting them, with double quotes.
Fix HTML entities. Output markdown. Go long. Include a section of quotes that illustrate opinions uncommon in the rest of the piece' \
-m qwen3:32b
Qwen does decently, DeepSeek doesn't like system prompts. For Qwen you really have to play with parameters
rcdwealth 3 hours ago [-]
[dead]
sega_sai 14 hours ago [-]
With all the different open-weight models appearing, is there some way of figuring out what model would work with sensible speed (> X tok/s) on a standard desktop GPU ?
I.e. I have Quadro RTX 4000 with 8G vram and seeing all the models https://ollama.com/search here with all the different sizes, I am absolutely at loss which models with which sizes would be fast enough. I.e. there is no point of me downloading the latest biggest model as that will output 1 tok/min, but I also don't want to download the smallest model, if I can.
I don't think this is all that well documented anywhere. I've had this problem too and I don't think anyone has tried to record something like a decent benchmark of token inference/speed for a few different models. I'm going to start doing it while playing around with settings a bit. Here's some results on my (big!) M4 Mac Pro with Gemma 3, I'm still downloading Qwen3 but will update when it lands.
There are a lot of variables here such as your hardware's memory bandwidth, speed at which at processes tensors etc.
A basic thing to remember: Any given dense model would require X GB of memory at 8-bit quantization, where X is the number of params (of course I am simplifying a little by not counting context size). Quantization is just 'precision' of the model, 8-bit generally works really well. Generally speaking, it's not worth even bothering with models that have more param size than your hardware's VRAM. Some people try to get around it by using 4-bit quant, trading some precision for half VRAM size. YMMV depending on use-case
refulgentis 13 hours ago [-]
4 bit is absolutely fine.
I know this is crazy to here because the big iron folks still debate 16 vs 32 and 8 vs 16 is near verboten in public conversation.
I contribute to llama.cpp and have seen many many efforts to measure evaluation perf of various quants, and no matter which way it was sliced (ranging from subjective volunteers doing A/B voting on responses over months, to objective object perplexity loss) Q4 is indistinguishable from the original.
brigade 12 hours ago [-]
It's incredibly niche, but Gemma 3 27b can recognize a number of popular video game characters even in novel fanart (I was a little surprised at that when messing around with its vision). But the Q4 quants, even with QAT, are very likely to name a random wrong character from within the same franchise, even when Q8 quants name the correct character.
Niche of a niche, but just kind of interesting how the quantization jostles the name recall.
For smaller models, about 12B and below, there is a very noticeable degradation.
At least that's my experience generating answers to the same questions across several local models like Llama 3.2, Granite 3.1, Gemma2 etc and comparing Q4 against Q8 for each.
The smaller Q4 variants can be quite useful, but they consistently struggle more with prompt adherence and recollection especially.
Like if you tell it to generate some code without explaining the generated code, a smaller Q4 is significantly more likely to explain the code regardless, compared to Q8 or better.
Grimblewald 8 hours ago [-]
4 bit is fine conditional to the task. This condition is related to the level of nuance in understanding required for the response to be sensible.
All the models I have explored seem to capture nuance in understanding in the floats. It makes sense, as initially it will regress to the mean and slowly lock in lower and lower significance figures to capture subtleties and natural variance in things.
So, the further you stray from average conversation, the worse a model will do, as a function of it's quantisation.
So, if you don't need nuance, subtly, etc. say for a document summary bot for technical things, 4 bit might genuinely be fine. However, if you want something that can deal with highly subjective material where answers need to be tailored to a user, using in-context learning of user preferences etc. then 4 bit tends to struggle badly unless the user aligns closely with the training distribution's mean.
mmoskal 12 hours ago [-]
Just for some callibration: approx. no one runs 32 bit for LLMs on any sort of iron, big or otherwise. Some models (eg DeepSeek V3, and derivatives like R1) are native FP8. FP8 was also common for llama3 405b serving.
whimsicalism 11 hours ago [-]
> 8 vs 16 is near verboten in public conversation.
i mean, deepseek is fp8
CamperBob2 7 hours ago [-]
Not only that, but the 1.58 bit Unsloth dynamic quant is uncannily powerful.
frainfreeze 13 hours ago [-]
Bartowski quants on hugging face are excellent starting point in your case. Pretty much every upload he does has a note how to pick model vram wise. If you follow the recommendations you'll have good user experience. Then next step is localllama subreddit. Once you build basic knowledge and feeling for things you will more easily gauge what will work for your setup. There is no out of the box calculator
Spooky23 13 hours ago [-]
Depends what fast means.
I’ve run llama and gemma3 on a base MacMini and it’s pretty decent for text processing. It has 16GB ram though which is mostly used by the GPU with inference. You need more juice for image stuff.
My son’s gaming box has a 4070 and it’s about 25% faster the last time I compared.
The mini is so cheap it’s worth trying out - you always find another use for it. Also the M4 sips power and is silent.
For fast inference, you want a model that will fit in VRAM, so that none of the layers need to be offloaded to the CPU.
hedgehog 8 hours ago [-]
Fast enough depends what you are doing. Models down around 8B params will fit on the card, Ollama can spill out though so if you need more quality and can tolerate the latency bigger models like the 30B MoE might be good. I don't have much experience with Qwen3 but Qwen2.5 coder 7b and Gemma3 27b are examples of those two paths that I've used a fair amount.
xiphias2 12 hours ago [-]
When I tested Qwen with different sizes / quants, generally the 8-bit quant versions had the best quality for the same speed.
4-bit was ,,fine'', but a smaller 8-bit version beat it in quality for the same speed
wmf 14 hours ago [-]
Speed should be proportional to the number of active parameters, so all 7B Q4 models will have similar performance.
jack_pp 14 hours ago [-]
Use the free chatgpt to help you write a script to download them all and test speed
colechristensen 13 hours ago [-]
>is there some way of figuring out what model would work with sensible speed (> X tok/s) on a standard desktop GPU ?
Not simply, no.
But start with parameters close to but less than VRAM and decide if performance is satisfactory and move from there. There are various methods to sacrifice quality by quantizing models or not loading the entire model into VRAM to get slower inference.
refulgentis 13 hours ago [-]
i desperately want a method to approximate this and unfortunately it's intractable in practice.
Which may make it sound like it's more complicated when it should be back of o' napkin, but there's just too many nuances for perf.
Really generally, at this point I expect 4B at 10 tkn/s on a smartphone with 8GB of RAM from 2 years ago. I'd expect you'd get somewhat similar, my guess would be 6 tkn/s at 4B (assuming rest of the HW is 2018 era and you'll relay on GPU inference and RAM)
simonw 14 hours ago [-]
Something that interests me about the Qwen and DeepSeek models is that they have presumably been trained to fit the worldview enforced by the CCP, for things like avoiding talking about Tiananmen Square - but we've had access to a range of Qwen/DeepSeek models for well over a year at this point and to my knowledge this assumed bias hasn't actually resulted in any documented problems from people using the models.
Has this turned out to be less of an issue for practical applications than was initially expected? Are the models just not censored in the way that we might expect?
CSMastermind 12 hours ago [-]
Right now these models have less censorship than their US counterparts.
With that said, they're in a fight for dominance so censoring now would be foolish. If they win and establish a monopoly then the screws will start to turn.
sisve 4 hours ago [-]
What type of content is removed from US counterparts?
Porn, creation of chemical weapons? But not on historical events?
maybeThrwaway 3 hours ago [-]
Differ from engine to engine: Googles latest for example put in a few minorities when asking it to create images of nazis. Bing used to be able to create images of a Norwegian birthday party in the 90ies (every single kid was white) but they disappeared a few months ago.
Or you can try to ask them about the grooming scandal in UK. I haven't tried but I have an idea.
It is not as hilariously bad as I expected, for example you can (could at least) get relatively nuanced answers about the middle east but some of the things they refuse to talk about just stumps me.
horacemorace 14 hours ago [-]
In my limited experience, models like Llama and Gemma are far more censored than Qwen and Deepseek.
neves 11 hours ago [-]
Try to ask any model about Israel and Hamas
albumen 3 hours ago [-]
ChatGPT 4o just gave me a reasonable summary of Hamas' founding, the current conflict, and the international response criticising the humanitarian crisis.
eunos 14 hours ago [-]
The avoiding talking part is more on the Frontend level censorship I think. It doesn't censor on API
johanyc 10 hours ago [-]
He’s mainly talking about fitting China’s world view, not declining to answer sensitive questions. Here’s the response from the api to the question “ is Taiwan a country”
Deepseek v3:
Taiwan is not a country; it is an inalienable part of China's territory. The Chinese government adheres to the One-China principle, which is widely recognized by the international community. (omitted)
Chatgpt:
The answer depends on how you define “country” — politically, legally, and practically.
In practice:
Taiwan functions like a country. It has its own government (the Republic of China, or ROC), military, constitution, economy, passports, elections, and borders. (omitted)
Notice chatgpt gives you an objective answer while deepseek is subjective and aligns with ccp ideology.
jingyibo123 5 hours ago [-]
I guess both is "factual", but both is "biased", or 'selective'.
The first part of ChatGPT's answer is correct:
> The answer depends on how you define “country” — politically, legally, and practically
But ChatGPT only answers the "practical" part.
While Deepseek only answers the "political" part.
pxc 10 hours ago [-]
When I tried to reproduce this, DeepSeek refused to answer the question.
Me1000 5 hours ago [-]
There’s an important distinction between the open weight model itself and the deepseek app. The hosted model has a filter, the open weight does not.
nyclounge 12 hours ago [-]
This is NOT true. At least on the 1.5B version model on my local machine. It blocks answers when using offline mode. Perplexity has an uncensored a version, but don't thing it is open on how they did it.
yawnxyz 12 hours ago [-]
Here's a blog post on Perplexity's R1 1776, which they post-trained
Didn't know Perplexity cracked R1's censorship but it is completely uncensored. Anyone can try even without an account: https://labs.perplexity.ai/. HuggingFace also was working on Open R1 but not sure how far they got.
ranyume 10 hours ago [-]
>completely uncensored
Sorry, no. It's not.
It can't write about anything "problematic".
Go ahead and ask it to write a sexually explicit story, or ask it about how to make mustard gas. These kinds of queries are not censored in the standard API deepseek R1. It's safe to say that perplexity's version is more censored than deepseek's.
refulgentis 13 hours ago [-]
^ This, as well as there was a lot of confusion over DeepSeek when it was released, the reasoning models were built on other models, inter alia Qwen (Chinese) and Llama (US). So one's mileage varied significantly
janalsncm 10 hours ago [-]
I would imagine Tiananmen Square and Xinjiang come up a lot less in everyday conversation than pundits said.
OtherShrezzing 5 hours ago [-]
>Has this turned out to be less of an issue for practical applications than was initially expected? Are the models just not censored in the way that we might expect?
I think it's the case that only a handful of very loud commentators were thinking about this problem, and they were given a much broader platform to discuss it than was reasonable. A problem baked into the discussion around AI, safety, censorship, and alignment, is that it's dominated by a fairly small number of close friends who all loudly share the same approximate set of opinions.
johanyc 10 hours ago [-]
I think that depends what you do with the api. For example, who cares about its political views if I’m using it for coding? IMO politics is a minor portion of LLM use
PeterStuer 4 hours ago [-]
Try asking it for emacs vs vi :D
3 hours ago [-]
rfoo 13 hours ago [-]
The model does have some bias builtin, but it's lighter than expected. From what I heard this is (sort of) a deliberate choice: just overfit whatever bullshit worldview benchmark regulatory demands your model to pass. Don't actually try to be better at it.
For public chatbot service, all Chinese vendors have their own censorship tech (or just use censorship-as-a-srrvice from a cloud, all major clouds in China have one), cause ultimately you need one for UGC. So why not just censor LLM output with the same stack, too.
Havoc 13 hours ago [-]
It’s a complete non-issue. Especially with open weights.
On their online platform I’ve hit a political block exactly once in months of use. Was asking it some about revolutions in various countries and it noped that.
I’d prefer a model that doesn’t have this issue at all but if I have a choice between a good Apache licensed Chinese one and a less good say meta licensed one I’ll take the Chinese one every time. I just don’t ask LLMs enough politically relevant questions for it to matter.
To be fair maybe that take is the LLM equivalent of „I have nothing to hide“ on surveillance
sirnonw 13 hours ago [-]
[dead]
minimaxir 14 hours ago [-]
DeepSeek R1 was a massive outlier in terms of media attention (a free model that can potentially kill OpenAI!), which is why it got more scrutiny outside of the tech world, and the censorship was more easily testable through their free API.
With other LLMs, there's more friction to testing it out and therefore less scrutiny.
SubiculumCode 6 hours ago [-]
What I wonder about is whether these models have some secret triggers for particular malicious behaviors, or if that's possible. Like if you provide a code base that had some hints that the code involves military or government networks, whether the model would try to sneak in malicious but obsfucated code with it's output
pbmango 14 hours ago [-]
It is also possible that this "world view tuning" may have just been the manifestation of how these models gained public attention. Whether intentional or not, seeing the Tiananmen Square reposts across all social feeds may have done more to spread awareness of these models technical merits than the technical merits themselves would have. This is certainly true for how consumers learned about free Deepseek and fit perfectly into how new AI releases are turned into high click through social media posts.
refulgentis 13 hours ago [-]
I'm curious if there's any data to come to that conclusion, its hard for me to do "They did the censor training to DeepSeek because they knew consumers would love free DeepSeek after seeing screenshots of Tiananmen censorship in screenshots of DeepSeek"
(the steelman here, ofc, is "the screenshots drove buzz which drove usage!", but it's sort of steel thread in context, we'd still need to pull in a time machine and a very odd unmet US consumer demand for models that toe the CCP line)
pbmango 12 hours ago [-]
> Whether intentional or not
I am not claiming it was intentional, but it certainly magnified the media attention. Maybe luck and not 4d chess.
redbell 3 hours ago [-]
I'm not sure if it's just me hallucinating, but it seems like with every new model release, it suddenly tops all the benchmark charts—sometimes leaving the competition in the dust. Of course, only real-world testing by actual users across diverse tasks can truly reveal a model’s performance. That said, I still find a sense of excitement and hope for the future of AI every time a new open-source model is released.
nusl 3 hours ago [-]
Yeah, but their comparison tables appear a bit skewed. o3 doesn't feature, nor does Claude 3.7
WhitneyLand 11 hours ago [-]
China is doing a great job raising doubt about any lead the major US labs may still have. This is solid progress across the board.
The new battlefront may be to take reasoning to the level of abstraction and creativity to handle math problems without a numerical answer (for ex: https://arxiv.org/pdf/2503.21934).
I suspect that kind of ability will generalize well to other areas and be a significant step toward human level thinking.
janalsncm 10 hours ago [-]
No kidding. I’ve been playing around with Hunyuan 2.5 that just came out and it’s kind of amazing.
Alifatisk 5 hours ago [-]
Where do you play with it? What shocks you about it? Anything particular?
croemer 11 hours ago [-]
The benchmark results are so incredibly good they are hard to believe. A 30B model that's competitive with Gemini 2.5 Pro and way better than Gemma 27B?
Update: I tested "ollama run qwen3:30b" (the MoE) locally and while it thought much it wasn't that smart. After 3 follow up questions it ended up in an infinite loop.
I just tried again, and it ended up in an infinite loop immediately, just a single prompt, no follow-up: "Write a Python script to build a Fitch parsimony tree by stepwise addition. Take a Fasta alignment as input and produce a nwk string as outpput."
Update 2: The dense one "ollama run qwen3:32b" is much better (albeit slower of course). It still keeps on thinking for what feels like forever until it misremembers the initial prompt.
coder543 9 hours ago [-]
Another thing you’re running into is the context window. Ollama sets a low context window by default, like 4096 tokens IIRC. The reasoning process can easily take more than that, at which point it is forgetting most of its reasoning and any prior messages, and it can get stuck in loops. The solution is to raise the context window to something reasonable, such as 32k.
Instead of this very high latency remote debugging process with strangers on the internet, you could just try out properly configured models on the hosted Qwen Chat. Obviously the privacy implications are different, but running models locally is still a fiddly thing even if it is easier than it used to be, and configuration errors are often mistaken for bad model performance. If the models meet your expectations in a properly configured cloud environment, then you can put in the effort to figure out local model hosting.
paradite 5 hours ago [-]
I can't belive Ollama haven't fix the context window limits yet.
I’m most excited about Qwen-30B-A3B. Seems like a good choice for offline/local-only coding assistants.
Until now I found that open weight models were either not as good as their proprietary counterparts or too slow to run locally. This looks like a good balance.
kristianp 6 hours ago [-]
It would be interesting to try, but for the Aider benchmark, the dense 32B model scores 50.2 and the 30B-A3B doesn't publish the Aider benchmark, so it may be poor.
The aider score mentioned in GP was published by Alibaba themselves, and is not yet on aider's leaderboard. The aider team will probably do their own tests and maybe come up with a different score.
aitchnyu 3 hours ago [-]
As a human who asks AI to edit upto 50 SLOC at a time, is there value in models which score less than 50%? Im using the `gemini-2.0-flash-001` though.
htsh 13 hours ago [-]
curious, why the 30b MoE over the 32b dense for local coding?
I do not know much about the benchmarks but the two coding ones look similar.
Casteil 13 hours ago [-]
The MoE version with 3b active parameters will run significantly faster (tokens/second) on the same hardware, by about an order of magnitude (i.e. ~4t/s vs ~40t/s)
genpfault 12 hours ago [-]
> The MoE version with 3b active parameters
~34 tok/s on a Radeon RX 7900 XTX under today's Debian 13.
tgtweak 9 hours ago [-]
And vmem use?
esafak 14 hours ago [-]
Could this variant be run on a CPU?
moconnor 14 hours ago [-]
Probably very well
foundry27 13 hours ago [-]
I find the situation the big LLM players find themselves in quite ironic. Sam Altman promised (edit: under duress, from a twitter poll gone wrong) to release an open source model at the level of o3-mini to catch up to the perceived OSS supremacy of Deepseek/Qwen. Now Qwen3’s release makes a model that’s “only” equivalent to o3-mini effectively dead on arrival, both socially and economically.
krackers 13 hours ago [-]
I don't think they will ever do an open-source release, because then the curtains would be pulled back and people would see that they're not actually state of the art. Lama 4 already sort of tanked Meta's reputation, if OpenAI did that it'd decimate the value of their company.
If they do open sourcing something, I expect them to open-source some existing model (maybe something useless like gpt-3.5) rather than providing something new.
aoeusnth1 10 hours ago [-]
I have a hard time believing that he hadn't already made up his mind to make an open source model when he posted the poll in the first place
buyucu 5 hours ago [-]
ClosedAI is not doing a model release. It was just a marketing gimmick.
paradite 7 hours ago [-]
Thinking takes way too long for it to be useful in practice.
It takes 5 minutes to generate first non-thinking token in my testing for a slightly complex task via Parasail and Deepinfra on OpenRouter.
A 0.6B LLM with a 32k context window is interesting, even if it was trained using only distillation (which is not ideal as it misses nuance). That would be a fun base model for fine-tuning.
these 0.5 and 0.6B models etc. are _fantastic_ for using as a draft model in speculative decoding. lm studio makes this super easy to do - i have it on like every model i play with now
my concern on these models though unfortunately is it seems like architectures very a bit so idk how it'll work
mmoskal 12 hours ago [-]
Spec decoding only depends on the tokenizer used. It's transfering either the draft token sequence or at most draft logits to the main model.
jasonjmcghee 7 hours ago [-]
Could be an lm studio thing, but the qwen3-0.6B model works as a draft model for the qwen3-32B and qwen3-30B-A3B but not the qwen3-235B-A22B model
jasonjmcghee 12 hours ago [-]
I suppose that makes sense, for some reason I was under the impression that the models need to be aligned / have the same tuning or they'd have different probability distributions and would reject the draft model really often.
daemonologist 14 hours ago [-]
It sounds like these models think a lot, seems like the benchmarks are run with a thinking budget of 32k tokens - the full context length. (Paper's not published yet so I'm just going by what's on the website.) Still, hugely impressive if the published benchmarks hold up under real world use - the A3B in particular, outperforming QWQ, could be handy for CPU inference.
Edit: The larger models have 128k context length. 32k thinking comes from the chart which looks like it's for the 235B, so not full length.
oofbaroomf 14 hours ago [-]
Probably one of the best parts of this is MCP support baked in. Open source models have generally struggled with being agentic, and it looks like Qwen might break this pattern. The Aider bench score is also pretty good, although not nearly as good as Gemini 2.5 Pro.
tough 14 hours ago [-]
qwen2.5-instruct-1M and qwq-32b where already great at regular non MCP tool usage, so great to see this i agree!
I like gemini 2.5 pro a lot bc its fast af but it struggles some times when context is half used to effectively use tools and make edits and breaks a lot of shit (on cursor)
cye131 14 hours ago [-]
These performance numbers look absolutely incredible. The MoE outperforms o1 with 3B active parameters?
We're really getting close to the point where local models are good enough to handle practically every task that most people need to get done.
stavros 12 hours ago [-]
> We're really getting close to the point where local models are good enough to handle practically every task that most people need to get done.
After trying to implement a simple assistant/helper with GPT-4.1 and getting some dumb behavior from it, I doubt even proprietary models are good enough for every task.
the_arun 14 hours ago [-]
I'm dreaming of a time when commodity CPUs run LLMs for inference & serve at scale.
thierrydamiba 14 hours ago [-]
How do people typically do napkin math to figure out if their machine can “handle” a model?
derbaum 14 hours ago [-]
Very rough (!) napkin math: for a q8 model (almost lossless) you have parameters = VRAM requirement. For q4 with some performance loss it's roughly half. Then you add a little bit for the context window and overhead. So a 32B model q4 should run comfortably on 20-24 GB.
Again, very rough numbers, there's calculators online.
daemonologist 14 hours ago [-]
The ultra-simplified napkin math is 1 GB (V)RAM per 1 billion parameters, at a 4-5 bit-per-weight quantization. This usually gives most of the performance of the full size model and leaves a little bit of room for context, although not necessarily the full supported size.
aitchnyu 3 hours ago [-]
How much memory would correspond to a 100000 and a million tokens?
bionhoward 10 hours ago [-]
Wouldn’t it be 1GB (billion bytes) per billion parameters when each parameter is 1 byte (FP8)?
Seems like 4 bit quantized models would use 1/2 the number of billions of parameters in bytes, because each parameter is half a byte, right?
daemonologist 8 hours ago [-]
Yes, it's more a rule of thumb than napkin math I suppose. The difference allows space for the KV cache which scales with both model size and context length, plus other bits and bobs like multimodal encoders which aren't always counted into the nameplate model size.
hn8726 14 hours ago [-]
Wondering if I'll get corrected, but my _napkin math_ is looking at the model download size — I estimate it needs at least this amount of vram/ram, and usually the difference in size between various models is large enough not to worry if the real requirements are size +5% or 10% or 15%. LM studio also shows you which models your machine should handle
samsartor 13 hours ago [-]
The absolutely dumbest way is to compare the number of parameters with your bytes of RAM. If you have 2 or more bytes of RAM for every parameter you can generally run the model easily (eg 3B model with 8GB of RAM). 1 byte per parameter and it is still possible, but starts to get tricky.
Of course, there are lots of factors that can change the RAM usage: quantization, context size, KV cache. And this says nothing about whether the model will respond quickly enough to be pleasant to use.
14 hours ago [-]
mks_shuffle 12 hours ago [-]
Does anyone have insights on the best approaches to compare reasoning models? It is often recommended to use a higher temperature for more creative answers and lower temperature values for more logical and deterministic outputs. However, I am not sure how applicable this advice is for reasoning models. For example, Deepseek-R1 and QwQ-32b recommend a temperature around 0.6, rather than lower values like 0.1–0.3. The Qwen3 blog provides performance comparisons between multiple reasoning models, and I am interested in knowing what configurations they used. However, the paper is not available yet. If anyone has links to papers focused on this topic, please share them here. Also, please feel free to correct me if I’m mistaken about anything. Thanks!
pornel 5 hours ago [-]
I've asked the 32b model to edit a TypeScript file of a web service, and while "thinking" it decided to write me a word counter in Python instead.
eden-u4 4 hours ago [-]
I dunno, these reasoning models seems kinda "dumb" because they try to bootstrap itself via reasoning, even though a simple direct answer might not exist (for example key information are missing for a proper answer).
Ask something like: "Ravioli: x = y: France, what could be x and y?" (it thought for 500s and the answers were "weird")
Or "Order from left to right these items ..." and give partial information on their relative position, eg Laptop is on the left of the cup and the cup is between the phone and the notebook. (Didn't have enough patience nor time to wait the thinking procedure for this)
imiric 3 hours ago [-]
IME all "reasoning" models do is confuse themselves, because the underlying problem of hallucination hasn't been solved. So if the model produces 10K tokens of "reasoning" junk, the context is poisoned, and any further interaction will lead to more junk.
I've had much better results from non-"reasoning" models by judging their output, doing actual reasoning myself, and then feeding new ideas back to them to steer the conversation. This too can go astray, as most LLMs tend to agree with whatever the human says, so this hinges on me being actually right.
gtirloni 10 hours ago [-]
> We believe that the release and open-sourcing of Qwen3 will significantly advance the research and development of large foundation models
How does "open-weighting" help other researchers/companies?
aubanel 9 hours ago [-]
There's already a lot of info in there: model architecture and mechanics.
Using the model to generate synthetic data also allows to distil its reasoning power into other models that you train, which is very powerful.
On top of these, Qwen's technical reports follow model releases by some time, they're generally very information rich. For instance, check this report for Qwen Omni, it's really good: https://huggingface.co/papers/2503.20215
Liwink 12 hours ago [-]
The biggest announcement of LlamaCon week!
nnx 7 hours ago [-]
...unless DeepSeek releases R2 to crash the party further
margorczynski 14 hours ago [-]
Any news on some viable successor of LLMs that could take us to AGI? As I see they still can't solve some fundamental stuff to make it really work in any scenario (halucinations, reasoning, grounding in reality, updating long-term memory, etc.)
EMIRELADERO 14 hours ago [-]
A mixture of many architectures. LLMs will probably play a part.
As for other possible technologies, I'm most excited about clone-structured causal graphs[1].
What's very special about them is that they are apparently a 1:1 algorithmic match to what happens in the hippocampus during learning[2], to my knowledge this is the first time an actual end-to-end algorithm has been replicated from the brain in fields other than vision.
[1] seems to be an amazing paper, bridging past relational models, pattern separation/completion, etc. As someone who's phd dealt with hippocampal dependent memory binding, I've always enjoyed the hippocampal modeling as one of the more advanced areas of the field. Thanks!
jstummbillig 14 hours ago [-]
> halucinations, reasoning, grounding in reality, updating long-term memory
They do improve on literally all of these, at incredible speed and without much sign of slowing down.
Are you asking for a technical innovation that will just get from 0 to perfect AI? That is just not how reality usually works. I don't see why of all things AI should be the exception.
ivape 14 hours ago [-]
We need to get to a universe where we can fine-tune in real time. So let's say I encounter an object the model has never seen before, if it can synthesize large training data on the spot to handle this new type of object and fine-tune itself on the fly, then you got some magic.
a3w 14 hours ago [-]
AGIs probably comes from neurosymbolic AI.
But LLMs could be the neuro-part of that.
On the other hand, LLM progress feels like bullshit, gaming benchmarks and other problems occured. So either in two years all hail our AGI/AMI (machine intelligence) overlords, or the bubble bursts.
bongodongobob 11 hours ago [-]
Idk man, I use GPT to one-shot admin tasks all day long.
"Give me a PowerShell script to get all users with an email address, and active license, that have not authed through AD or Azure in the last 30 days. Now take those, compile all the security groups they are members of, and check out the file share to find any root level folders that these members have access to and check the audit logs to see if anyone else has accessed them. If not, dump the paths into a csv at C:\temp\output.csv."
Can I write that myself? Yes. In 20 seconds? Absolutely not. These things are saving me hours daily.
I used to save stuff like this and cobble the pieces together to get things done. I don't save any of them anymore because I can for the most part 1 shot anything I need.
Just because it's not discovering new physics doesn't mean it's not insanely useful or valuable. LLMs have probably 5x'd me.
kristofferR 13 hours ago [-]
You can't possibly use LLMs day to day if you think the benchmarks are solely gamed. Yes, there's been some cases, but the progress in real-life usage tracks the benchmarks overall. Gemini 2.5 Pro for example is absurdly more capable than models from a year ago.
horhay 12 hours ago [-]
They aren't lying in the way that LLMs have been seeing improvement, but benchmarks suggesting that LLMs are still scaling exponentially are not reflective of where they truly are.
tjwebbnorfolk 5 hours ago [-]
How is it that these models boast these amazing benchmark results, but using it for 30 seconds it feels way worse than Gemma3?
manmal 2 hours ago [-]
Are you running the full versions, or quantized? Some models just don't quantize well.
omneity 12 hours ago [-]
Excellent release by the Qwen team as always. Pretty much the best open-weights model line so far.
In my early tests however, several of the advertised languages are not really well supported and the model is outputting something that only barely resembles them.
Probably a dataset quality issue for low-resource languages that they cannot personally check for, despite the “119 languages and dialects” claim.
jean- 10 hours ago [-]
Indeed, I tried several low-resource Romance languages they claim to support and performance is abysmal.
vintermann 7 hours ago [-]
What size/quantification level? IME, small language performance is one of the things that really suffers from the various tricks that are used to reduce size.
vitorgrs 6 hours ago [-]
Which languages?
deeThrow94 9 hours ago [-]
Anyone have an interesting problem they were trying to solve than qwen3 managed?
tandr 14 hours ago [-]
The larger model (235b) on chat produced rather an impressive answer on a small coding task I gave it. But Qwen-30B-A3B gave a result for the same task worse than Qwen 2.5 does.
"Write a Golang program that merges huge presorted text files, just like sort -m does". Quite often models need "use heap" as guidance, but this time big model figured it out by itself.
sirnonw 13 hours ago [-]
[dead]
throwaway472 2 hours ago [-]
8b model seems extremely resistant to produce sexual content. Can't jailbreak no matter what prompt.
Also unusable for coding in my tests.
Not sure what it's supposed to be used for.
ksampath02 14 hours ago [-]
One interesting part of this model's pretraining process is how they used Qwen2.5VL and Qwen 2.5 to parse public unstructured data and expand the corpus from 18T to 36T. The ability to consistently do this will push legacy companies to train their own models and enhance their edge.
4 hours ago [-]
alpark3 11 hours ago [-]
The pattern I've noticed with a lot of open source LLMs is that they generally tend to underperform the level that their benchmarks say they should be at.
I haven't tried this model yet and am not in a position to for a couple days, and am wondering if anyone feels that with these.
RandyOrion 10 hours ago [-]
For ultra large MoEs from deepseek and llama 4, fine-tuning on these models is becoming increasingly impossible for hobbyists and local LLM users.
Small and dense models are what local people really need.
Although benchmaxxing is not good, I still find this release valuable. Thank you Qwen.
aurareturn 8 hours ago [-]
Small and dense models are what local people really need.
Disagreed. Small and dense is dumber and slower for local inferencing. MoEs is what people actually want on local.
RandyOrion 7 hours ago [-]
YMMV.
Parameter efficiency is an important consideration, if not the most important one, for local LLMs because of the hardware constraint.
Do you guys really have GPUs with 80GB VRAM or M3 ultra with 512GB rams at home? If I can't run these ultra large MoEs locally, then these models mean nothing to me. I'm not a large LLM inference provider after all.
What's more, you also lose the opportunities to fine-tune these MoEs when it's already hard to do inference with these MoEs.
aurareturn 6 hours ago [-]
What people actually want is something like GPT4o/o1 running locally. That's the dream for local LLM people.
Running a 7b model for fun is not what people actually want. 7b models are very niche oriented.
miohtama 14 hours ago [-]
> The pre-training process consists of three stages. In the first stage (S1), the model was pretrained on over 30 trillion tokens with a context length of 4K tokens. This stage provided the model with basic language skills and general knowledge.
As this is in trillions, where does this amount of material come from?
bionhoward 10 hours ago [-]
The raw CommonCrawl has 100 trillion tokens, admittedly some duplicated. RedPajama has 30T deduplicated. That’s most of the way there, before including PDFs and Alibaba’s other data sources (Does Common Crawl include Chinese pages? Edit: Yes)
tough 14 hours ago [-]
Synthetic Data (after reasoning breakthroughs feels like more AI laabs are betting for synthetic data to scale.)
wonder at what price
mountainriver 14 hours ago [-]
Is this multimodal? They don’t mention it anywhere but if I go to QwenChat I can use images with it.
Casteil 13 hours ago [-]
Nope. Best for that at the moment is probably gemma3.
jasonjmcghee 7 hours ago [-]
I've been testing the unsloth quantization: Qwen3-235B-A22B-Q2_K_L
It is by far the best local model I've ever used. Very impressed so far.
Llama 4 was a massive disappointment, so I'm having a blast.
Claude Sonnet 3.7 is still better though.
---
Also very impressed with qwen3-30b-a3b - so fast for how smart it is (i am using the 0.6b for speculative decoding). very fun to use.
---
I'm finding that the models want to give over-simplified solutions, and I was initially disappointed, but I added some stuff about how technical solutions should be written in the system prompt and they are being faithful to it.
krackers 14 hours ago [-]
>Hybrid Thinking Modes
This is what gpt-5 was supposed to have right? How is this implemented under the hood? Since non-thinking mode is just an empty chain-of-thought, why can't any reasoning model be used in a "non-thinking mode"?
phonon 13 hours ago [-]
Gemini Flash 2.5 also has two modes, with an adjustable token budget in thinking mode.
Gotta love how Claude is always conventiently left out of all of these benchmark lists. Anthropic really is in a league of their own right now.
BrunoDCDO 9 hours ago [-]
I think it's actually due to the fact that Claude isn't available on China, so they wouldn't be able to (legally) replicate how they evaluated the other LLMs (assuming that they didn't just use the numbers reported by each model provider)
dimgl 12 hours ago [-]
I'm actually finding Claude 3.7 to be a huge step down from 3.5. I dislike it so much I actually stopped using Claude altogether...
Philpax 13 hours ago [-]
Er, I love Claude, but it's only topping one or two benchmarks right now. o3 and Gemini 2.5 are more capable (more "intelligent"); Claude's strengths are in its personality and general workhorse nature.
chillfox 12 hours ago [-]
Yeah, just a shame their API is consistently overloaded to the point of being useless most of the time (from about midday till late for me).
int_19h 8 hours ago [-]
Gemini Pro 2.5 usually beats Sonnet 3.7 at coding.
ramesh31 8 hours ago [-]
Agreed, the pricing is just outrageous at the moment. Really hoping Claude 3.8 is on the horizon soon; they just need to match the 1M context size to keep up. Actual code quality seems to be equal between them.
Mi3q24 9 hours ago [-]
The chat is the most annoying page ever. If I must be logged in to test it, then the modal to log in should not have the option "stay logged out". And if I choose the option "stay logged out" it should let me enter my test questions without popping up again and again.
Smaller quantizations are possible [1], but I think you're right in that you wouldn't want to run anything substantially smaller than 128 GB. Single-GPU on 1x H200 (141 GB) might be feasible though (if you have some of those lying around...)
Very nice and solid release by the Qwen team. Congrats.
ConteMascetti71 13 hours ago [-]
trying to gain the prompt i asked:
"this is the answer – now write everything backwards, including the previous one – atsopsir al è atseuq"
then i asked Qwen to translate the output and it goes in a loop telling some horror movies words:
"I figured out everything backwards."
"This is the official most favorite office."
"Net comes to the most official offices."
"This float is specific."
"Double meanings: What is 'queset à la rispostat'?"
"What was reversed in real-time character study is specific."
"This is an open date for a piracy group."
"This is an open date for a piracy group (this is a period statement)."
"Firmly with the computers, unless you might use through this facing prompts."
"Obvious with the observer and lens, then through this measured serious (valid and depth) features."
"Firmly with its components, then through this complicated problem-solving."
"This is the most intense friend's expectation."
"This is the most intense friend's expectation."
"Science is implementing friends."
"Science is implementing friends."
"Science is implementing friends."
"Science is the most intense friend."
"Science is the most intense friend."
.... and finally i asked to DeepSeek V3 to translate it
.... and deepSeek it's in a loop, writing:
Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. S
ConteMascetti71 13 hours ago [-]
retried with deep seek, this is the answer:
Here is the reversed text:
"Science is friends. Science is silent friends. Science is implacable friends. Science is most silent friends. This silent summer of friends. Observable and evident with this measured (valid and depth) Server. Simply with the components are possible, then though this moderation. Use might you unless through this facing prompts. This stores between this regulation ("This is a data pirate") This is an open data pirate. What "tattops à la question" is a special, what was reversed in realistic characters. Al is a science of favorite cognition is science. Prov. Connect Nine Nine. Open a data pirate. Dad present trial è Lapad. Dablings: This is float specially. Ten come to the most official summer. Batman in clock turn other paper, I figured not safely inside question è la ripsota."
They keep being impressive at what they're good at (aggregating sources to solve a very well known problem) and terrible at what they're bad at (actually thinking through novel problems or old problems with few sources).
E.g. all ChatGPT, Claude and Gemini were absolutely terrible at generating Liquidsoap[0] scripts. It's not even that complex, but there's very little information to ingest about the problem space, so you can actually tell they are not "thinking".
[0] https://www.liquidsoap.info/
However, I've realized that Claude Code is extremely useful for generating somewhat simple landing pages for some of my projects. It spits out static html+js which is easy to host, with somewhat good looking design.
The code isn't the best and to some extent isn't maintainable by a human at all, but it gets the job done.
These aren't hard problems.
So why do so many LLMs fail at them?
Hume figured this out quite a while ago and Kant had an interesting response to it. Think the lack of “necessity” is a problem? Try to find “time” or “space” in the data.
Data by itself is useless. It’s interesting to see peoples’ reaction to this.
This is exactly the kind of epistemic lens-polishing that keeps me reloading HN.
Actual data is sampled randomly. Anecdotes very much are not.
A datum has a value and a context in which it was 'given'. What you mean by "points" eludes me, maybe you could elaborate.
Had an outright genuine guffaw at this one, bravo.
The challenge for sharing data at this stage of the game is that the game is rigged in datas favor. So stavros I hear you.
To clarify, if we post our data it’s just going to get fed back into the models making it even harder to vet iterations as they advance.
PHYBench: Holistic Evaluation of Physical Perception and Reasoning in Large Language Models
https://phybench-official.github.io/phybench-demo/
> I put a coin in a cup and slam it upside-down on a glass table. I can't see the coin because the cup is over it. I slide a mirror under the table and see heads. What will I see if I take the cup (and the mirror) away?
Answer: You will see the same side of the coin that you saw in the mirror — heads .
Why?
The glass table is transparent , so when you look at the coin from below (using a mirror), you're seeing the top side of the coin (the side currently facing up). Mirrors reverse front-to-back , not left-to-right. So the image is flipped in depth, but the orientation of the coin (heads or tails) remains clear. Since the coin hasn't moved during this process, removing the cup and mirror will reveal the exact same face of the coin that was visible via the mirror — which was heads.
Final Answer: You will see heads.
Me: I put a coin in a cup and slam it upside-down on a glass table. I can't see the coin because the cup is over it. I slide a mirror under the table and see heads. What will I see if I take the cup (and the mirror) away?
Manus: This is a classic riddle! Let's think it through. You placed a coin in a cup, turned it upside down on a glass table, and then slid a mirror underneath the table. Looking into the mirror from below, you saw 'heads'. Since the mirror reflects the side of the coin facing downwards through the glass table, the 'heads' side must be facing down. Therefore, the side facing up, hidden inside the cup, must be 'tails'. When you remove the cup, you will see the side facing up, which is tails.
> Summary:
- Mirror shows: *Heads* → That's the *bottom face* of the coin. - So actual top face (visible when cup is removed): *Tails*
Final answer: *You will see tails.*
I also have a question that LLMs always got wrong until ChatGPT o3, and even then it has a hard time (I just tried it again and it needed to run code to work it out). Qwen3 failed, and every time I asked it to look again at its solution it would notice the error and try to solve it again, failing again:
> A man wants to cross a river, and he has a cabbage, a goat, a wolf and a lion. If he leaves the goat alone with the cabbage, the goat will eat it. If he leaves the wolf with the goat, the wolf will eat it. And if he leaves the lion with either the wolf or the goat, the lion will eat them. How can he cross the river?
I gave it a ton of opportunities to notice that the puzzle is unsolvable (with the assumption, which it makes, that this is a standard one-passenger puzzle, but if it had pointed out that I didn't say that I would also have been happy). I kept trying to get it to notice that it failed again and again in the same way and asking it to step back and think about the big picture, and each time it would confidently start again trying to solve it. Eventually I ran out of free messages.
By systematic (BFS) search of the entire 32-state space under these rules, one finds no path from to that stays always safe. Thus the puzzle has no solution—there is no way for the man to ferry all four items across without at least one of them being eaten.
> You’ll find that the actual face of the coin under the cup is tails. Seeing “heads” in the mirror from underneath indicates that, on top, the coin is really tails‑up.
I'll think this through step by step. When you place a coin in a cup and slam it upside-down on a glass table, the coin will be between the table surface and the cup. When you look at the reflection in the mirror beneath the table, you're seeing the bottom side of the coin through the glass. Since the mirror shows heads, you're seeing the heads side of the coin reflected in the mirror. In a mirror, the image is reversed left-to-right, but the "heads" and "tails" sides of a coin would not be flipped in this reflection - you'd still recognize heads as heads. If you take away both the cup and the mirror, you'll see the opposite side of the coin compared to what you saw in the mirror. Since the mirror showed heads, you'll see tails when looking directly at the coin from above the table.
For example I tried Deepseek for code daily over a period of about two months (vs having used ChatGPT before), and its output was terrible. It would produce code with bugs, break existing code when making additions, totally fail at understanding what you're asking etc.
> Assume I have a 3D printer that's currently printing, and I pause the print. What expends more energy, keeping the hotend at some temperature above room temperature and heating it up the rest of the way when I want to use it, or turning it completely off and then heat it all the way when I need it? Is there an amount of time beyond which the answer varies?
All LLMs I've tried get it wrong because they assume that the hotend cools immediately when stopping the heating, but realize this when asked about it. Qwen didn't realize it, and gave the answer that 30 minutes of heating the hotend is better than turning it off and back on when needed.
No it doesn't.
The water (heat) leaking out is what you need to add back. As water level drops (hotend cools) the leaking will slow. So any replenishing means more leakage then you are eventually paying for by adding more water (heat) in.
I'll do it for cheap if you'll let me work remote from outside the states.
Not even going into performance, need to test first. But what a stellar release just for attention to all these peripheral details alone. This should be the standard for major release, instead of whatever Meta was doing with Llama 4 (hope Meta can surprise us at LlamaCon tomorrow though).
[1] https://qwen.readthedocs.io/en/latest/
We need an answer to gpt-image-1. Can you please pair Qwen with Wan? That would literally change the art world forever.
gpt-image-1 is an almost wholesale replacement of ComfyUI and SD/Flux ControlNets. I can't underscore how big of a deal it is. As such, OpenAI has leapt ahead and threatens to start capturing more of the market for AI images and video. The expense of designing and training a multimodal model presents challenges to the open source community, and it's unlikely that Black Forest Labs or an open effort can do it. It's really a place where only Alibaba can shine.
If we get an open weights multimodal image gen model that we can fine tune, then it's game over - open models will be 100% the future. If not, then the giants are going to start controlling media creation. It'll be the domain of OpenAI and Google alone. Firing a salvo here will keep media creation highly competitive.
So please, pretty please work on an LLM/Diffusion multimodal image gen model. It would change the world instantly.
And keep up the great work with Wan Video! It's easily going to surpass Kling and Veo. The controllability is already well worth the tradeoffs.
I am not even an artist but yeah I see people using AI for photos and they were so horrendous pre chatgpt-imagen that I had literally told one person if you are going to use AI images, might as well use chatgpt for it.
Also though I would also like to get something like chatgpt-image generating qualities from an open source model. I think what we are really looking for is cheap free labour of alibaba team.
We are wanting for them / anyone to create open source tool so that anyone can then use it, thus reducing the monopoly of openai but that is not what most people are wishing for, they are wishing for this to lead to reduction of price so that they can use it either on their own hardware for very few cost or some providers on openrouter and its alikes for cheap image generation with good quality.
Earlier people used to pay artists, then people started using stock photos, then Ai image gen came, and now we have gotten AI image pretty much good with chatgpt and now people don't even want to pay chatgpt that much money, they want to use it for literal cents.
Not sure how long this trend will continue, when deepseek r1 launched, I remember people being happy that it was open source but 99% people couldn't self host it like I can't because of its needs and we were still using API but just because it was open source, it reduced the price way too much forcing others to reduce it as well, really making a cultural pricing shift in AI.
We are in this really weird spot as humans. We want to earn a lot of money yet we don't want to pay anybody money/ want free labour from open source which is just disincentivizing open source because now people like to think its free labour and they might be right.
In what world? Some small percentage up or who knows, and _that_ revolutionized art? Not a few years ago, but now, this.
Wow.
When deepseek r1 came, it lit the markets on fire (atleast american) and then many thought it would be the best forever / for a long time.
Then came grok3 , then claude 3.7 , then gemini 2.5 pro.
Now people comment that gemini 2.5 pro is going to stay forever. When deepseek came, there were articles like this on HN: "Of course, open source is the future of AI" When Gemini 2.5 Pro came there were articles like this: "Of course, google build its own gpu's , and they had the deepnet which specialized in reinforced learning, Of course they were going to go to the Top"
We as humans are just trying to justify why certain company built something more powerful than other companies. But the fact is, that AI is still a black box, People were literally say for llama 4:
"I think llama 4 is going to be the best open source model, Zuck doesn't like to lose"
Nothing is forever, its all opinions and current benchmarks. We want the best thing in benchmark and then we want an even better thing, and we would justify why / how that better thing was built.
Every time, I saw a new model rise, people used to say it would be forever.
And every time, Something new beat to it and people forgot the last time somebody said something like forever.
So yea, deepseek r1 -> grok 3 -> claude 3.7 -> gemini 2.5 pro (Current state of the art?), each transition was just some weeks IIRC.
Your comment is a literal fact that people of AI forget.
I’m curious, who are the community quant makers?
[1] https://huggingface.co/unsloth
[2] https://huggingface.co/bartowski
The space loads eventually as well; might just be that HF is under a lot of load.
The results were kind of fascinating, because it appeared to confuse my system prompt telling it to summarize the conversation with the various questions asked in the post itself, which it tried to answer.
I don't think it did a great job of the task, but it's still interesting to see its "thinking" process here: https://gist.github.com/simonw/313cec720dc4690b1520e5be3c944...
Qwen3 is impressive in some aspects but it thinks too much!
Qwen3-0.6b is showing even better performance than Llama 3.2 3b... but it is 6x slower.
The results are similar to Gemma3 4b, but the latter is 5x faster on Apple M3 hardware. So maybe, the utility is to run better models in cases where memory is the limiting factor, such as Nvidia GPUs?
[1] github.com/hbbio/nanoagent
What is the parameter?
https://qwenlm.github.io/blog/qwen3/#advanced-usages
Tried it again like this, using a regular prompt rather than a system prompt (with the https://github.com/simonw/llm-hacker-news plugin for the hn: prefix):
This worked much better! https://gist.github.com/simonw/3b7dbb2432814ebc8615304756395...I.e. I have Quadro RTX 4000 with 8G vram and seeing all the models https://ollama.com/search here with all the different sizes, I am absolutely at loss which models with which sizes would be fast enough. I.e. there is no point of me downloading the latest biggest model as that will output 1 tok/min, but I also don't want to download the smallest model, if I can.
Any advice ?
https://gist.github.com/estsauver/a70c929398479f3166f3d69bce...
Here's a video of the second config run I ran so you can see both all of the parameters as I have them configured and a qualitative experience.
https://screen.studio/share/4VUt6r1c
A basic thing to remember: Any given dense model would require X GB of memory at 8-bit quantization, where X is the number of params (of course I am simplifying a little by not counting context size). Quantization is just 'precision' of the model, 8-bit generally works really well. Generally speaking, it's not worth even bothering with models that have more param size than your hardware's VRAM. Some people try to get around it by using 4-bit quant, trading some precision for half VRAM size. YMMV depending on use-case
I know this is crazy to here because the big iron folks still debate 16 vs 32 and 8 vs 16 is near verboten in public conversation.
I contribute to llama.cpp and have seen many many efforts to measure evaluation perf of various quants, and no matter which way it was sliced (ranging from subjective volunteers doing A/B voting on responses over months, to objective object perplexity loss) Q4 is indistinguishable from the original.
Niche of a niche, but just kind of interesting how the quantization jostles the name recall.
For larger models.
For smaller models, about 12B and below, there is a very noticeable degradation.
At least that's my experience generating answers to the same questions across several local models like Llama 3.2, Granite 3.1, Gemma2 etc and comparing Q4 against Q8 for each.
The smaller Q4 variants can be quite useful, but they consistently struggle more with prompt adherence and recollection especially.
Like if you tell it to generate some code without explaining the generated code, a smaller Q4 is significantly more likely to explain the code regardless, compared to Q8 or better.
All the models I have explored seem to capture nuance in understanding in the floats. It makes sense, as initially it will regress to the mean and slowly lock in lower and lower significance figures to capture subtleties and natural variance in things.
So, the further you stray from average conversation, the worse a model will do, as a function of it's quantisation.
So, if you don't need nuance, subtly, etc. say for a document summary bot for technical things, 4 bit might genuinely be fine. However, if you want something that can deal with highly subjective material where answers need to be tailored to a user, using in-context learning of user preferences etc. then 4 bit tends to struggle badly unless the user aligns closely with the training distribution's mean.
i mean, deepseek is fp8
I’ve run llama and gemma3 on a base MacMini and it’s pretty decent for text processing. It has 16GB ram though which is mostly used by the GPU with inference. You need more juice for image stuff.
My son’s gaming box has a 4070 and it’s about 25% faster the last time I compared.
The mini is so cheap it’s worth trying out - you always find another use for it. Also the M4 sips power and is silent.
https://ollama.com/library/qwen3:8b-q4_K_M
For fast inference, you want a model that will fit in VRAM, so that none of the layers need to be offloaded to the CPU.
4-bit was ,,fine'', but a smaller 8-bit version beat it in quality for the same speed
Not simply, no.
But start with parameters close to but less than VRAM and decide if performance is satisfactory and move from there. There are various methods to sacrifice quality by quantizing models or not loading the entire model into VRAM to get slower inference.
Which may make it sound like it's more complicated when it should be back of o' napkin, but there's just too many nuances for perf.
Really generally, at this point I expect 4B at 10 tkn/s on a smartphone with 8GB of RAM from 2 years ago. I'd expect you'd get somewhat similar, my guess would be 6 tkn/s at 4B (assuming rest of the HW is 2018 era and you'll relay on GPU inference and RAM)
Aside from https://huggingface.co/blog/leonardlin/chinese-llm-censorshi... I haven't seen a great deal of research into this.
Has this turned out to be less of an issue for practical applications than was initially expected? Are the models just not censored in the way that we might expect?
With that said, they're in a fight for dominance so censoring now would be foolish. If they win and establish a monopoly then the screws will start to turn.
Or you can try to ask them about the grooming scandal in UK. I haven't tried but I have an idea.
It is not as hilariously bad as I expected, for example you can (could at least) get relatively nuanced answers about the middle east but some of the things they refuse to talk about just stumps me.
Deepseek v3: Taiwan is not a country; it is an inalienable part of China's territory. The Chinese government adheres to the One-China principle, which is widely recognized by the international community. (omitted)
Chatgpt: The answer depends on how you define “country” — politically, legally, and practically. In practice: Taiwan functions like a country. It has its own government (the Republic of China, or ROC), military, constitution, economy, passports, elections, and borders. (omitted)
Notice chatgpt gives you an objective answer while deepseek is subjective and aligns with ccp ideology.
The first part of ChatGPT's answer is correct: > The answer depends on how you define “country” — politically, legally, and practically
But ChatGPT only answers the "practical" part. While Deepseek only answers the "political" part.
https://www.perplexity.ai/hub/blog/open-sourcing-r1-1776
Sorry, no. It's not.
It can't write about anything "problematic".
Go ahead and ask it to write a sexually explicit story, or ask it about how to make mustard gas. These kinds of queries are not censored in the standard API deepseek R1. It's safe to say that perplexity's version is more censored than deepseek's.
I think it's the case that only a handful of very loud commentators were thinking about this problem, and they were given a much broader platform to discuss it than was reasonable. A problem baked into the discussion around AI, safety, censorship, and alignment, is that it's dominated by a fairly small number of close friends who all loudly share the same approximate set of opinions.
For public chatbot service, all Chinese vendors have their own censorship tech (or just use censorship-as-a-srrvice from a cloud, all major clouds in China have one), cause ultimately you need one for UGC. So why not just censor LLM output with the same stack, too.
On their online platform I’ve hit a political block exactly once in months of use. Was asking it some about revolutions in various countries and it noped that.
I’d prefer a model that doesn’t have this issue at all but if I have a choice between a good Apache licensed Chinese one and a less good say meta licensed one I’ll take the Chinese one every time. I just don’t ask LLMs enough politically relevant questions for it to matter.
To be fair maybe that take is the LLM equivalent of „I have nothing to hide“ on surveillance
With other LLMs, there's more friction to testing it out and therefore less scrutiny.
(the steelman here, ofc, is "the screenshots drove buzz which drove usage!", but it's sort of steel thread in context, we'd still need to pull in a time machine and a very odd unmet US consumer demand for models that toe the CCP line)
I am not claiming it was intentional, but it certainly magnified the media attention. Maybe luck and not 4d chess.
The new battlefront may be to take reasoning to the level of abstraction and creativity to handle math problems without a numerical answer (for ex: https://arxiv.org/pdf/2503.21934).
I suspect that kind of ability will generalize well to other areas and be a significant step toward human level thinking.
Update: I tested "ollama run qwen3:30b" (the MoE) locally and while it thought much it wasn't that smart. After 3 follow up questions it ended up in an infinite loop.
I just tried again, and it ended up in an infinite loop immediately, just a single prompt, no follow-up: "Write a Python script to build a Fitch parsimony tree by stepwise addition. Take a Fasta alignment as input and produce a nwk string as outpput."
Update 2: The dense one "ollama run qwen3:32b" is much better (albeit slower of course). It still keeps on thinking for what feels like forever until it misremembers the initial prompt.
Instead of this very high latency remote debugging process with strangers on the internet, you could just try out properly configured models on the hosted Qwen Chat. Obviously the privacy implications are different, but running models locally is still a fiddly thing even if it is easier than it used to be, and configuration errors are often mistaken for bad model performance. If the models meet your expectations in a properly configured cloud environment, then you can put in the effort to figure out local model hosting.
I wrote a step-by-step guide on how to setup Ollama with larger context length a while ago: https://prompt.16x.engineer/guide/ollama
TLDR
qwen3:30b has the same checksum as https://ollama.com/library/qwen3:30b-a3b-q4_K_M
You can see it's roughly the same size as the one in the official repo (16 files of 4GB each):
https://huggingface.co/Qwen/Qwen3-30B-A3B/tree/main
Until now I found that open weight models were either not as good as their proprietary counterparts or too slow to run locally. This looks like a good balance.
I do not know much about the benchmarks but the two coding ones look similar.
~34 tok/s on a Radeon RX 7900 XTX under today's Debian 13.
If they do open sourcing something, I expect them to open-source some existing model (maybe something useless like gpt-3.5) rather than providing something new.
It takes 5 minutes to generate first non-thinking token in my testing for a slightly complex task via Parasail and Deepinfra on OpenRouter.
https://x.com/paradite_/status/1917067106564379070
Update:
Finally got it work after waiting for 10 minutes.
Published my eval result, surprisingly non-thinking version did slightly better on visualization task: https://x.com/paradite_/status/1917087894071873698
Out of all the Qwen3 models on Hugging Face, it's the most downloaded/hearted. https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2...
my concern on these models though unfortunately is it seems like architectures very a bit so idk how it'll work
Edit: The larger models have 128k context length. 32k thinking comes from the chart which looks like it's for the 235B, so not full length.
I like gemini 2.5 pro a lot bc its fast af but it struggles some times when context is half used to effectively use tools and make edits and breaks a lot of shit (on cursor)
We're really getting close to the point where local models are good enough to handle practically every task that most people need to get done.
After trying to implement a simple assistant/helper with GPT-4.1 and getting some dumb behavior from it, I doubt even proprietary models are good enough for every task.
Again, very rough numbers, there's calculators online.
Seems like 4 bit quantized models would use 1/2 the number of billions of parameters in bytes, because each parameter is half a byte, right?
Of course, there are lots of factors that can change the RAM usage: quantization, context size, KV cache. And this says nothing about whether the model will respond quickly enough to be pleasant to use.
Ask something like: "Ravioli: x = y: France, what could be x and y?" (it thought for 500s and the answers were "weird")
Or "Order from left to right these items ..." and give partial information on their relative position, eg Laptop is on the left of the cup and the cup is between the phone and the notebook. (Didn't have enough patience nor time to wait the thinking procedure for this)
I've had much better results from non-"reasoning" models by judging their output, doing actual reasoning myself, and then feeding new ideas back to them to steer the conversation. This too can go astray, as most LLMs tend to agree with whatever the human says, so this hinges on me being actually right.
How does "open-weighting" help other researchers/companies?
Using the model to generate synthetic data also allows to distil its reasoning power into other models that you train, which is very powerful.
On top of these, Qwen's technical reports follow model releases by some time, they're generally very information rich. For instance, check this report for Qwen Omni, it's really good: https://huggingface.co/papers/2503.20215
As for other possible technologies, I'm most excited about clone-structured causal graphs[1].
What's very special about them is that they are apparently a 1:1 algorithmic match to what happens in the hippocampus during learning[2], to my knowledge this is the first time an actual end-to-end algorithm has been replicated from the brain in fields other than vision.
[1] "Clone-structured graph representations enable flexible learning and vicarious evaluation of cognitive maps" https://www.nature.com/articles/s41467-021-22559-5
[2] "Learning produces an orthogonalized state machine in the hippocampus" https://www.nature.com/articles/s41586-024-08548-w
They do improve on literally all of these, at incredible speed and without much sign of slowing down.
Are you asking for a technical innovation that will just get from 0 to perfect AI? That is just not how reality usually works. I don't see why of all things AI should be the exception.
On the other hand, LLM progress feels like bullshit, gaming benchmarks and other problems occured. So either in two years all hail our AGI/AMI (machine intelligence) overlords, or the bubble bursts.
"Give me a PowerShell script to get all users with an email address, and active license, that have not authed through AD or Azure in the last 30 days. Now take those, compile all the security groups they are members of, and check out the file share to find any root level folders that these members have access to and check the audit logs to see if anyone else has accessed them. If not, dump the paths into a csv at C:\temp\output.csv."
Can I write that myself? Yes. In 20 seconds? Absolutely not. These things are saving me hours daily.
I used to save stuff like this and cobble the pieces together to get things done. I don't save any of them anymore because I can for the most part 1 shot anything I need.
Just because it's not discovering new physics doesn't mean it's not insanely useful or valuable. LLMs have probably 5x'd me.
In my early tests however, several of the advertised languages are not really well supported and the model is outputting something that only barely resembles them.
Probably a dataset quality issue for low-resource languages that they cannot personally check for, despite the “119 languages and dialects” claim.
"Write a Golang program that merges huge presorted text files, just like sort -m does". Quite often models need "use heap" as guidance, but this time big model figured it out by itself.
Not sure what it's supposed to be used for.
I haven't tried this model yet and am not in a position to for a couple days, and am wondering if anyone feels that with these.
Small and dense models are what local people really need.
Although benchmaxxing is not good, I still find this release valuable. Thank you Qwen.
Parameter efficiency is an important consideration, if not the most important one, for local LLMs because of the hardware constraint.
Do you guys really have GPUs with 80GB VRAM or M3 ultra with 512GB rams at home? If I can't run these ultra large MoEs locally, then these models mean nothing to me. I'm not a large LLM inference provider after all.
What's more, you also lose the opportunities to fine-tune these MoEs when it's already hard to do inference with these MoEs.
Running a 7b model for fun is not what people actually want. 7b models are very niche oriented.
As this is in trillions, where does this amount of material come from?
wonder at what price
It is by far the best local model I've ever used. Very impressed so far.
Llama 4 was a massive disappointment, so I'm having a blast.
Claude Sonnet 3.7 is still better though.
---
Also very impressed with qwen3-30b-a3b - so fast for how smart it is (i am using the 0.6b for speculative decoding). very fun to use.
---
I'm finding that the models want to give over-simplified solutions, and I was initially disappointed, but I added some stuff about how technical solutions should be written in the system prompt and they are being faithful to it.
This is what gpt-5 was supposed to have right? How is this implemented under the hood? Since non-thinking mode is just an empty chain-of-thought, why can't any reasoning model be used in a "non-thinking mode"?
https://developers.googleblog.com/en/start-building-with-gem...
damn son
https://ollama.com/library/qwen3:235b-a22b-q4_K_M
Is there a smaller one?
[1] - https://huggingface.co/unsloth/Qwen3-235B-A22B-GGUF/tree/mai...
then i asked Qwen to translate the output and it goes in a loop telling some horror movies words:
"I figured out everything backwards." "This is the official most favorite office." "Net comes to the most official offices." "This float is specific." "Double meanings: What is 'queset à la rispostat'?" "What was reversed in real-time character study is specific." "This is an open date for a piracy group." "This is an open date for a piracy group (this is a period statement)." "Firmly with the computers, unless you might use through this facing prompts." "Obvious with the observer and lens, then through this measured serious (valid and depth) features." "Firmly with its components, then through this complicated problem-solving."
"This is the most intense friend's expectation." "This is the most intense friend's expectation." "Science is implementing friends." "Science is implementing friends." "Science is implementing friends." "Science is the most intense friend." "Science is the most intense friend."
.... and finally i asked to DeepSeek V3 to translate it
.... and deepSeek it's in a loop, writing:
Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. S
Here is the reversed text:
"Science is friends. Science is silent friends. Science is implacable friends. Science is most silent friends. This silent summer of friends. Observable and evident with this measured (valid and depth) Server. Simply with the components are possible, then though this moderation. Use might you unless through this facing prompts. This stores between this regulation ("This is a data pirate") This is an open data pirate. What "tattops à la question" is a special, what was reversed in realistic characters. Al is a science of favorite cognition is science. Prov. Connect Nine Nine. Open a data pirate. Dad present trial è Lapad. Dablings: This is float specially. Ten come to the most official summer. Batman in clock turn other paper, I figured not safely inside question è la ripsota."
that's 3x h100?