However, even here on Hacker News, I’ve noticed that some people reject LLMs on seemingly fundamental grounds, pointing out minor issues while disregarding any benefits.
What drives this level of criticism? I’d like to understand the underlying mechanisms that fuel such strong reactions.
Is it existential discomfort around intelligence no longer being uniquely human? Is it anxiety about potential job displacement? Or is it perhaps a concern about intelligence itself losing its perceived special status? Or are people simply trolling?
Am I the only one utterly tired of the LLM boosters who write derisive things about LLM skeptics like this? The question has already been sincerely answered by many people over and over, yet they insist it is because skeptics are "scared of LLMs". This baiting is not worthy of a response.
So on one level you have semantic purists. Obviously some of those are here.
But on another level it's the broad (under- and mis-) understandings around the ML capabilities that now exist.
Basically, the ML does not constitute nor embody "intelligence".
Even if it can program software, make photos and video from prose, transcribe and summarize audio, perform "business analysis", act as a therapist/friend, retrieve and distill "web search" results, translate languages, etc.
Yes, it can feign intelligence. Really, really well.
But the real critics in camp A are pointing out that that's just technology, not intelligence.
And camp B may be saying in some capacity, as a corollary, it's important to not misunderstand or misuse this, because of hypothetical consequences C, D, or E.
I won't defend any of these positions, here and now. I'm just answering the OP's question, "What drives this level of criticism?", in case they are not aware of these general positions and wish to explore them further.
So when people push back against using LLMs in areas where they do not excel, that is not skepticism, that is realism. Take away the hype, take away the emotion, and they are what many people have been saying all along - just a tool. Use it where it makes sense, leave it alone where it does not.
Of course, I also do things that aren't the normal/common case, so the learning isn't there.
Simple things, like basic CRUD operations, work well, but my apps aren't basic. =P
It can invent text, images, video and code. It looks very convincing. Sometimes it even works correctly.
Sometimes when you ask it a question it can answer it correctly. Sometimes when you ask it the same question it will give you a different answer, a false one. Some may say that humans do the same thing. But computers and computer programs shouldn't always rely on fuzzy logic. Sometimes there is only one right answer, and I like my computer programs to be deterministic so that I know I can trust them.
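As a toy illustration of that point (invented probabilities, not a real model), the difference comes down to sampling versus greedy decoding over a next-token distribution:

    # Toy sketch: why sampled output varies between runs while greedy decoding does not.
    import random

    # Hypothetical next-token probabilities for the same prompt.
    next_token_probs = {"Paris": 0.6, "Lyon": 0.25, "Marseille": 0.15}

    def sample_token(probs):
        # Weighted random draw, like sampling with temperature > 0:
        # repeated calls can return different tokens.
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    def greedy_token(probs):
        # Greedy decoding (roughly "temperature 0"): always the most
        # probable token, so repeated calls return the same thing.
        return max(probs, key=probs.get)

    print([sample_token(next_token_probs) for _ in range(5)])  # may differ each run
    print([greedy_token(next_token_probs) for _ in range(5)])  # always 'Paris'

Even with greedy decoding, hosted models aren't guaranteed to be bit-for-bit reproducible, but this is roughly why the same question can come back with different answers.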
It also doesn't "learn". It forgets everything when the tokens are cleared from memory. And memory is limited. It only "learns" when the model is trained again by its maintainers.
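A minimal sketch of what that "memory" actually is, with a placeholder generate() standing in for the model call (an assumption, not any particular API): the chat only "remembers" whatever conversation text gets re-sent inside the context window each turn, and anything trimmed to fit the budget is simply gone.

    # Sketch: chat "memory" is just re-sent context; old turns get trimmed and forgotten.
    MAX_WORDS = 50  # toy stand-in for a token budget

    def generate(context: str) -> str:
        # Placeholder for a real model call; it can only see `context`.
        return f"(reply based only on: {context!r})"

    history = []  # full transcript kept by the client, not the model

    def chat(user_message: str) -> str:
        history.append("User: " + user_message)
        # Rebuild the context from the newest lines backwards until the
        # budget is hit; older lines are dropped, i.e. "forgotten".
        context, words = [], 0
        for line in reversed(history):
            words += len(line.split())
            if words > MAX_WORDS:
                break
            context.insert(0, line)
        reply = generate("\n".join(context))
        history.append("Assistant: " + reply)
        return reply

Persistent "memory" features in chat products typically just re-inject saved notes into that same window; the model's weights only change when the maintainers retrain it.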
And many other things.
For me, it's none of those things. But the first three questions seem to be assuming that these tools are not only intelligent, but intelligent enough to challenge some human "special status". I don't think that's anywhere close to being true.
It's more that the harm I see already happening appears to be greater than the benefit, and the last thing we need, particularly lately, is more harm.
1. We tried LLMs
2. They told us something false or that didn't work
3. We tried LLMs again
4. They failed again by telling us something false or that didn't work
5. We lost trust in them, at least for the time being
1. Lack of True Understanding or Reasoning
LLMs generate text by identifying patterns in massive datasets, not by truly understanding the world or reasoning in the human sense. They often appear intelligent but can make basic logical errors or confabulate facts, especially outside their training data. This raises doubts about whether they’re reliable for tasks requiring critical thinking, judgment, or common sense.
2. Opacity and Explainability
LLMs are "black boxes"; it’s hard to know why they produce a particular output. This makes them difficult to audit, trust, or verify, especially in high-stakes applications (e.g., law, medicine).
3. Bias and Fairness
LLMs reflect and sometimes amplify biases present in their training data. Examples include racial, gender, cultural, and other biases. Even well-intentioned outputs can contain harmful stereotypes, making deployment risky.
4. Misinformation and Hallucination
LLMs can generate plausible-sounding but false or misleading content ("hallucinations"). They might confidently assert fabricated facts, citations, or details, making them dangerous as a source of truth.
5. Ethical Concerns
Issues include plagiarism, data privacy (they may memorize sensitive info), and use in deceptive applications (e.g., deepfakes, fake news, spam). Their ability to mimic human language raises concerns about manipulation and autonomy.
6. Resource Intensiveness and Environmental Impact
Training LLMs consumes massive energy and computational resources. This raises questions about the sustainability and equity of LLM development (access is mostly controlled by wealthy tech companies).
7. Overhype and Misuse
Marketing often oversells LLMs as "intelligent agents" or "thinking machines." There’s skepticism about whether current LLMs justify the hype; some see them as autocomplete on steroids, not a step toward general intelligence.
8. Dependency and De-skilling
Overreliance on LLMs might reduce critical thinking, writing, or research skills in professionals and students. This leads to concerns about human agency, education quality, and intellectual laziness.
9. Unclear Societal Impact
LLMs are evolving rapidly, and society hasn’t caught up in terms of laws, norms, or governance. Critics fear social disruption, job loss, and power concentration in a few AI labs.
10. Limits to Generalization
LLMs trained on past data struggle with novelty, non-textual reasoning, or dynamic real-world environments. They’re not grounded in perception or physical experience, which limits their general intelligence.
>> Or is it perhaps a concern about intelligence itself losing its perceived special status?
Seriously, though, intelligence and knowledge are the only reason humanity survives. If we hamstring those, or limit those to a select few, we decay, because without intelligence and knowledge humans are weaker than every other species and most bacteria on this planet. Unfree knowledge, whether A) physically locked up in guilds, B) legally locked up by draconian intellectual property laws, or C) obfuscatorily locked up in seductive electronic systems that could lie to you, is a form of hamstringing. The eventual result is brittle societies. Large societies falling apart in the modern age can be extremely dangerous to humanity due to nuclear weapons.
I see that as delivering enough