Ask HN: Has your opinion on AI changed over the past year?

As in, were you originally optimistic about AI, but are now pessimistic, or were once pessimistic but are now optimistic? Did you originally see it as a huge boon to your work, but now find it a hindrance, or were you once dismissive of it, but now find it indispensable?

Just curious who has changed their mind/outlook and what precipitated the change.

5 points | by atleastoptimal 23 hours ago

12 comments

  • jlengrand 2 hours ago
    Not quite changed, but I also realized that I pretty much cannot afford to just plain skip it and refuse it. So I go along, and use it where it makes sense. Still pretty sure we're about to lose much more than we'll gain overall, though.
  • sankha93 18 hours ago
    A few months back I was pessimistic about AI, and now I am the opposite. The perspective change happened when I realized that giving it an entire problem and expecting it to solve it is unrealistic. The real value comes from using AI at the right steps of your workflow or larger system.

    I did a PhD in program synthesis (programming languages techniques), and one of the tricks there was to efficiently prune the space of programs. With LLMs you are much more likely to start with an almost-correct guess, so the burden now shifts to lighter verification methods.

    I still do not believe in the AGI hype. But I am genuinely excited. Computing has always been humans writing precise algorithms and getting correct answers. The current generation of LLMs is the opposite: you can be imprecise, but the answers can be wrong. We have to figure out what interesting systems we can build with it.
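    A minimal sketch of that propose-and-verify shift (hypothetical; the LLM call is stubbed out with hard-coded candidates, and the "lighter verification" is a check against input/output examples):

```python
# Propose-and-verify: instead of enumerating and pruning a program space,
# a generator (an LLM in practice, stubbed here) proposes a few candidate
# programs, and a cheap verifier filters them.

def propose_candidates(spec):
    # Stand-in for an LLM call: returns plausible implementations of `spec`.
    return [
        lambda xs: sorted(xs),          # likely-correct guess
        lambda xs: list(reversed(xs)),  # plausible but wrong guess
    ]

def verify(candidate, examples):
    # Lightweight verification: check the candidate against examples.
    return all(candidate(inp) == out for inp, out in examples)

def synthesize(spec, examples):
    # Return the first proposed candidate that passes verification.
    for cand in propose_candidates(spec):
        if verify(cand, examples):
            return cand
    return None

fn = synthesize("sort a list ascending", [([3, 1, 2], [1, 2, 3])])
```

    In a real system the verifier might be a test suite, a type checker, or a lightweight formal check, but the division of labor is the same: the model supplies near-correct guesses, and verification does the pruning.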

  • codingdave 23 hours ago
    Nope - I always thought it was over-hyped, and applied to too many scenarios where it is not the right tool... yet it is not useless. I still think that.
  • drzzhan 15 hours ago
    For most aspects my opinions haven't changed. But on coding, they have changed a lot. Two years ago I would have dismissed the possibility that an LLM could write even a simple function. Now it has proven me wrong.
  • rossdavidh 23 hours ago
    Not hugely. I think we're headed for another "AI winter" (https://en.wikipedia.org/wiki/AI_winter), which will be either our third or fourth depending on whether or not you count the one after IBM Watson. But who knows how long that will take; you can know that a bubble will burst, but not when.

    Which is a shame, because neural networks are useful tools to have in your toolbox, they're just not the entire toolbox. I wish we could just use them like "finite state machine" or "model-view-controller" or any other useful-but-not-overhyped algorithm or code pattern.

  • eajr 18 hours ago
    Yes, a year ago I thought these things were just toys and wouldn't amount to much, and I've done a complete 180 on that stance. I started using Cursor about 9 months ago and saw some potential, but my mind completely changed when I tried Claude Code for the first time. We've got a long way to go until this tech is usable on large production codebases, but for small greenfield stuff, it is killer.
  • brudgers 21 hours ago
    Currently I believe AI is ideology not technology. Success is solely achieved by declaration.

    I am not sure when I read what I read to make me subscribe to that idea. Might have been in the last year. Probably was a bit longer ago.

    [Caveat] I have not been terribly excited about AI since expert systems but A* is pretty useful.

    [Clarification] I think AGI is analogous to transmutation of lead into gold. The literal goal is incompatible with reality, but it is the right problem to work on because the exhaust fumes are useful.

  • OccamsMirror 19 hours ago
    I really love it for tasks that I wouldn't have bothered doing myself in the past. Janky automation scripts. Porting a protocol to another language. Mindless work like that.

    I find myself regretting its use when I apply it too aggressively to my products.

  • ungreased0675 23 hours ago
    I’m using it more at work, but I’m also predicting that LLMs will ruin the internet. Social media is increasingly full of slop, and so are search results. AI pollution seems on track to push all useful content out of sight online.
  • afaxwebgirl 23 hours ago
    I was always somewhat cautious about it. If anything, I would say that caution has increased.
  • Bender 22 hours ago
    Not really. It's still just big data combined with language models. It will be great for number crunching and sifting through petabytes of garbage, at the risk of garbage in, garbage out. I think it's great that there is a popular thing giving data scientists work to do, given that so many others are out of work.

    It is still my opinion that if a job can be replaced by big data + LLMs, then that position was already at risk and probably did not need to exist in the first place. I see AI as an optimization of existing data that likely could have been performed by companies on a smaller scale, so this is really just a sweep and clean. I have not yet seen anything earth-shattering come of it.

    I fully expect AI to be weaponized once it reaches critical mass, much like social media algorithms weaponized the internet, and reviewing the source code will mean nothing. The risks will be in how each instance is tuned and operated, like any other tool.
  • forgotpass012 19 hours ago
    Speaking about this latest wave of LLMs only…

    I was super pessimistic and I now recognize that there is value.

    I am saving about 20% of my effort coding at work. However, coding is not exactly the main thing that takes time at work. It’s figuring out what should be done, getting everyone on board, coming up with the high level architecture.

    So I’ve gone from “this is terrible” to acknowledging that it has limited use. I am still sick of AI slop, and even when I use it for code at work, AI has a tendency to generate mountains of garbage.