This leaves many fearing they will be next on the chopping block. Many assume physical tasks will fall later, since building, verifying, and testing humanoid robots takes longer than spinning up a virtual AI agent. However, many believe the writing is on the wall either way, and that those who work with their hands or bodies will only get a few more years than the formerly employed white-collar class.
Which skills then, or combinations of skills, do you believe will be safest for staying employed and useful if AI continues improving at the rate it has been for the past few years?
Actually very little debate. We get a lot of unsubstantiated hype from companies like OpenAI, Anthropic, Google, Microsoft. So-called AI has barely made a dent in economic activities, and no company makes money from it. Tech journalism repeatedly fails to question the PR narrative (read Ed Zitron).
> Regardless of whether this will happen, or when, many people already have lost their jobs in part due to the emerging capabilities of AI models…
Consider the more likely explanation: many companies over-hired a few years ago and have since cut jobs. Focus on stock price in an uncertain economy leads to layoffs. It's easier to blame AI for layoffs than to admit C-suite incompetence. Fear of the AI boogeyman gives employers the upper hand in hiring and salary negotiations, and keeps employees in line.
It couldn't be that people lost jobs because of the policies they voted for.
Would you really consider the Nobel laureates Geoffrey Hinton¹, Demis Hassabis² and Barack Obama³ not worth listening to on this matter? Demis is the only one with an ulterior motive to hype it up, but compared to the typical tech CEO he has quite a bit of proven impact (AlphaFold, AlphaZero) that makes him worth listening to.
> AI has barely made a dent in economic activities
AI companies' revenues are growing rapidly, reaching the tens of billions. The claim that it's just a scapegoat for inevitable layoffs seems fanciful when there are many real-life cases of AI tools performing equivalent person-hours work in white-collar domains.
https://www.businessinsider.com/how-lawyer-used-ai-help-win-...
To claim it is impossible that AI could be at least a partial cause of layoffs requires an unshakable belief that AI tools could not even be labor-multiplying (as in allowing one person to perform more work at the same level of quality than they would otherwise). To assume that this has never happened by this point in 2025 requires a heavy amount of denial.
That being said, I could cite dozens of articles, numerous takes from leading experts, scientists, legitimate sources without conflicts of interest, and I'm certain a fair portion of the HN regulars would not be swayed one inch. Lively debate is the lifeblood of any domain that prides itself on intellectual rigor, but a lot of the dismissal of the actual utility of AI, the impending impacts, and its implications feels like reflexive coping.
I would really, really love to hear an argument that convinced me that AGI is impossible, or far away, or that all the utility I get from Claude, o3, or Gemini is just a trick of scale and memorization, entirely orthogonal to something akin to general human-like intelligence. However, I have not heard a good argument. The replies I get are largely ad hominems toward tech CEOs, dismissive characterizations of the tech industry at large, and thought-terminating quips that hold no ontological weight.
1: https://www.wired.com/story/plaintext-geoffrey-hinton-godfat...
2: https://www.axios.com/2025/05/21/google-sergey-brin-demis-ha...
3: https://www.youtube.com/watch?v=72bHop6AIcc
4: https://www.cio.com/article/4012162/ai-begins-to-reshape-the...
Obama, no. Geoff Hinton has his opinions and I’ve listened to them. For every smart person who believes in AI and AGI happening soon, you can find other smart people who argue the other way.
> AI companies' revenues are growing rapidly, reaching the tens of billions.
Trading stock and Azure credits don’t equal revenue. OpenAI, the leader in the AI industry, loses billions every quarter. Microsoft and Google and Meta subsidize their work from other profitable activities. The profit isn’t there.
> The claim that it's just a scapegoat for inevitable layoffs seems fanciful when there are many real-life cases of AI tools performing equivalent person-hours work in white-collar domains. https://www.businessinsider.com/how-lawyer-used-ai-help-win-...
A few questionable anecdotes? Given the years since ChatGPT and the billions invested I’d expect more tectonic changes than “It wrote my term paper.” Companies have not replaced employees with AI doing the same job at any scale. You simply can’t find honest examples of that except for call centers that got automated and offshored decades ago.
> To claim it is impossible that AI could be at least a partial cause of layoffs requires an unshakable belief that AI tools could not even be labor-multiplying (as in allowing one person to perform more work at the same level of quality than they would otherwise). To assume that this has never happened by this point in 2025 requires a heavy amount of denial.
I might agree, but I didn’t make that claim. AI tools probably can add value as tools. If you find a real example of AI taking over a professional job at scale let us know.
> That being said, I could cite dozens of articles, numerous takes from leading experts, scientists, legitimate sources without conflicts of interest, and I'm certain a fair portion of the HN regulars would not be swayed one inch. Lively debate is the lifeblood of any domain that prides itself on intellectual rigor, but a lot of the dismissal of the actual utility of AI, the impending impacts, and its implications feels like reflexive coping.
We could have better discussions if the AI industry wasn’t led by chronic liars and frauds, people making ridiculous self-serving predictions not backed by anything that resembles science. AI gets literally shoved down our throats with no demonstrated or measurable benefit, poor accuracy, severe limitations, heavy costs that get subsidized by investors and the public. Forget about the energy and environmental impact. Which “side” acts in good faith in the so-called debate?
> I would really, really love to hear an argument that convinced me that AGI is impossible, or far away, or that all the utility I get from Claude, o3, or Gemini is just a trick of scale and memorization, entirely orthogonal to something akin to general human-like intelligence. However, I have not heard a good argument.
I have. You just need to take the critics seriously. No one can even define intelligence or AGI, but they sure can sell it to FOMO CIOs.
Like right now, native mobile jobs are mostly unaffected by AI. Gemini, despite all the data in the community, doesn't do a decent job at it. If you ask it to build an app from scratch, the architecture will be off and it'll use an outdated tech stack from 2022. It will 'correct' perfectly good data to an older form, and if you ask it to hunt for bugs in cutting-edge tech, it might rip out the new tech and replace it with old. It often confuses methods that look alike across languages, like .contains(); see the sketch below.
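To make that concrete, here's a hypothetical TypeScript sketch (the array and values are made up) of the kind of cross-language mix-up I mean: Kotlin and Java collections expose a .contains() method, but TypeScript/JavaScript arrays use .includes(), so a model pattern-matching across languages can emit a call that doesn't even type-check.

    // Hypothetical sketch of the .contains() mix-up described above.
    const frameworks: string[] = ["Compose", "SwiftUI", "Flutter"];

    // What a model steeped in Kotlin/Java might write. Array has no
    // .contains() in TypeScript, so this fails to compile:
    //   if (frameworks.contains("Compose")) { ... }

    // The actual TypeScript/JavaScript array method is .includes():
    if (frameworks.includes("Compose")) {
      console.log("Compose is available");
    }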
But if very high-quality data is easily accessible, e.g. writing, digital art, voice acting, that work becomes viable for AI to clone. There's little animation data, and there's even less oil painting data, so something like oil painting will be far more resistant than digital art. It's top tier at Python and yet it struggles with Ren'Py.
Anthropic released experiment results from getting it to manage a vending machine: https://www.anthropic.com/research/project-vend-1
This is a fairly simple task for a human, and Claudius has plenty of reasoning ability and financial data to draw on. But it can't reason its way through running a vending machine, because it doesn't have data on how to run vending machines.
Out of date after only 3 years?
But I think what the poster meant was that the AI is not always "up to date". So if my C compiler is a 2025 version and my code makes use of features added since 2022, then the AI can "retrogress" the code simply because it isn't aware of the newer features.
Or another example: imagine when JavaScript Promises were new. There were a lot more examples of "not using promises" than "using promises", so the AI is likely to use the old pattern.
If you're writing "up to the minute" code because you put in the effort to keep abreast of new stuff, then this retrograding will seem, well, frustrating.
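As a minimal sketch of that Promise example (names and timings are made up for illustration): the callback idiom dominates older training data, so a model may default to it even when the modern Promise/async style is what you want.

    // Hypothetical TypeScript sketch: old callback style vs. Promises.

    // Pre-Promise idiom, overrepresented in older code, so a model
    // trained mostly on it is likely to reach for this pattern:
    function fetchUserLegacy(
      id: number,
      callback: (err: Error | null, name?: string) => void
    ): void {
      setTimeout(() => callback(null, `user-${id}`), 100);
    }

    // The newer pattern the poster is describing:
    function fetchUser(id: number): Promise<string> {
      return new Promise((resolve) => {
        setTimeout(() => resolve(`user-${id}`), 100);
      });
    }

    async function main(): Promise<void> {
      const name = await fetchUser(42); // reads top-to-bottom, no nesting
      console.log(name);
    }

    main();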
Plumbing.
Embalming and funeral direction.
Childrearing, especially of toddlers. Therapy for complex psychological conditions/ones with complications. Anything else that requires strong emotional and interpersonal judgement and the ability to think outside the box.
Politics/charisma. Influencers. Cult leaders. Anything else involving a cult of personality.
Stand up comics/improv artists. Nobody’s going to pay to sit in a room with other people and listen to a computer tell jokes.
World class athletes.
Top tier salespeople.
TV news anchors, game show hosts, and the like.
Also note that a bunch of these (and other jobs) may vanish if the vast majority of the population is unemployed and only a few handfuls of billionaires can afford to pay anyone for services.
I’d also note that a lot of jobs will stay safe for much longer than we fear if AI continues to be unable to actually reason and can only handle patterns / extrapolations of patterns it’s already seen.
For example, conflict resolution, therapy, and coaching depend on nuance, empathy, and trust.
Skilled trades: plumbing, electrical work, HVAC repair, auto mechanics, elevator technicians.
Roles that combine physical presence with knowledge: emergency responders (firefighters, EMTs), disaster relief coordinators.
Masses losing their jobs due to AI (or for whatever reason) will have a widespread effect on every other sector, because at the end of the day a huge part of the economy is based on people just spending their money.
That’s why if you look at the leveling guidelines for any well-known tech company, “codez real gud” only makes a difference between junior and mid-level developers. After that it’s about “scope”, “impact” and “dealing with ambiguity”.
Yes, I realize that there are still some “hard problems” that command a premium for people who can solve them via code - that’s the other 10%, and I’m being generous.