The problem: You can't win anymore.
The old way: You'd think about the problem. Draw some diagrams. Understand what you're actually trying to do. Then write the code. Understanding was mandatory. You solved it.
The new way: The entire premise of AI coding tools is to automate the thinking, not just the typing. You're supposed to describe a problem and get a solution without understanding the details. That's the labor-saving promise.
So I feel pressure to always, always start by info-dumping the problem description to AI and gambling for a one-shot. Voice transcription for 10 minutes, hit send, hope I get something on the first try; if not, hope I can iterate until something works. And even when something does work, there's zero satisfaction, because I don't have the same depth of understanding of the solution. It's no longer my code, my idea. It's just some code I found online. `import solution from chatgpt`
If I think about the problem, I feel inefficient. "Why did you waste 2 hours on that? AI would've done it in 10 minutes."
If I use AI to help, the work doesn't feel like mine. When I show it to anyone, the implicit response is: "Yeah, I could've prompted for that too."
The steering and judgment I apply to AI outputs is invisible. Nobody sees which suggestions I rejected, how I refined the prompts, or what decisions I made. So all credit flows to the AI by default.
The result: Nothing feels satisfying anymore. Every problem I solve by hand feels too slow. Every problem I solve with AI feels like it doesn't count. There's this constant background feeling that whatever I just did, someone else would've done it better and faster.
I was thinking of all the classic exploratory learning blog posts. Things that sounded fun. Writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle. It bothers me that my reaction to these blog posts has changed so much. 3 years ago I would be bookmarking a blog post to try it out for myself that weekend. Now those 200 lines of simple code feel only a one-sentence prompt away, and thus a waste of time.
Am I alone in this?
Does anyone else feel this pressure to skip understanding? Where thinking feels like you're not using the tool correctly? In the old days, I understood every problem I worked on. Now I feel pressure to skip understanding and just ship. I hate it.
It’s a different, less enjoyable, type of work in my opinion.
This is an elegant way of putting it. I like it
Whereas the vibe in the lecture theatre 4 years ago was far more nerdy and enthusiastic. It makes me feel very sorry for this new generation that they will never get to enjoy the same feeling of satisfaction from solving a hard problem with code you thought and wrote from scratch.
Ironically, I've had to incorporate some AI stuff in my course as a result of needing to remain "current", which almost feels like it validates that cynical sentiment that this soulless way is the way to be doing things now.
And can we assume that because AI has made it easy to solve some hard problems, other hard problems won't arise?
Not that I don't agree
And hasn't the internet generally added to this attitude?
And if it makes you feel any better, as someone around that age, this environment seems to have also led some of us to go out of our way to not outsource all our thinking
> import solution from chatgpt
Which reminded me of all the students in classes (and online forums) mocking non-nerds who wanted easy answers to programming problems. It would seem the non-nerds are getting their way now.
Where are the labor saving _measurements_? You said it yourself:
> You'd think about the problem. Draw some diagrams. Understand what you're actually trying to do.
So why are we relying on "promises?"
> If I use AI to help, the work doesn't feel like mine.
And when you're experiencing an emergency and need to fix or patch it this comes back to haunt you.
> So all credit flows to the AI by default.
That's the point. Search for some of the code it "generates." You will almost certainly find large parts of it, verbatim, inside a github repository or on an author's webpage. AI takes the credit so you don't get blamed for copyright theft.
> Am I alone in this?
I find the thing to be an overhyped scam at this point. So, no, not at all.
Only if you're doing something trivial or highly common, in which case it's boilerplate that shouldn't be copyrighted. We already had this argument when Oracle sued Google over Java. We already had the "just stochastic parrots" conversation too, and concluded it's a specious argument.
"It's boilerplate therefore it isn't IP" isn't the argument that was made by Google, nor is it the argument that the case was decided upon.
It was decided that Google's use of the API met the four determining factors used by courts to ascertain whether use of IP is fair use. The court found that even though it was Oracle's copyrighted IP, it was still fair use to use it in the way Google did.
https://en.wikipedia.org/wiki/Google_LLC_v._Oracle_America,_...
Let's say it's boilerplate code filled with comments that are designed to assist in understanding the API being written against. Are the comments somehow not covered because they were added to "boilerplate code?" Even if they're reproduced verbatim as well?
> We already had the "just stochastic parrots" conversation too
Oh, I was not part of those conversations, perhaps you can link me to them? The mere stated existence of them is somewhat underwhelming and entirely unconvincing. Particularly when it seems easy to ask an LLM to generate code and then to search for elements of that code on the Internet. With that methodology you wouldn't need to rely on conversations but on actual hard data. Do you happen to know if that is also available?
I'd disagree. For me, I direct the AI to implement my plan - it handles the trivia of syntax and boilerplate etc.
I now work kinda at the "unit level" rather than the "syntax level" of old. AI never designs the code for me, more fills in the gaps.
I find this quite satisfying still - I get stuff done but in half the time because it handles all the boring crap - the typing - while I still call the shots.
Do all the stuff you mention the old way. If I have a specific, crappy API that I have to deal with, I'll ask AI to generate the specific functionality I want with it (no more than a method or two). When it comes to testing, I'll write a few tests (some simple, some complicated) and then ask AI to generate a set of tests based on those examples. I then run and audit the tests to make sure they are sensible. I always end my prompts with "use the simplest, minimal code possible"
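For instance, a minimal sketch in pytest of what those seed tests look like (the `parse_duration` helper here is a stand-in I wrote purely for illustration; in practice it's whatever small function I'm actually testing):

```python
import re
import pytest


def parse_duration(text: str) -> int:
    """Convert strings like '1h 30m' into seconds (illustrative helper, not real code)."""
    units = {"h": 3600, "m": 60, "s": 1}
    cleaned = text.strip().lower()
    if not re.fullmatch(r"(\s*\d+\s*[hms]\s*)+", cleaned):
        raise ValueError(f"unparseable duration: {text!r}")
    return sum(int(n) * units[u] for n, u in re.findall(r"(\d+)\s*([hms])", cleaned))


# The hand-written "seed" tests I give the AI as examples of the style I want...
def test_parse_simple_minutes():
    assert parse_duration("5m") == 300


def test_parse_mixed_units_and_whitespace():
    assert parse_duration(" 1h 30M ") == 5400


# ...plus one failure case, so the generated tests also cover error handling.
def test_parse_rejects_garbage():
    with pytest.raises(ValueError):
        parse_duration("soon-ish")
```

The AI then extends this pattern to the remaining edge cases, and I audit its output.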
I am mostly keeping the joy of programming while still being more productive in areas I'm not great at (exhaustive testing, having patience with crappy APIs)
Not world changing, but it has increased my productivity I think.
You're having an imposter-syndrome-type response to AI's ability to outcode a human.
We don't look at compilers and beat our fists that we can't write in assembly... why expect your human brain to code as easily or quickly as AI?
The problem you are solving now becomes the higher-level problem. You should absolutely be driving the projects and outcomes, but using AI along the way for programming is part of the satisfaction of being able to do so much more as one person.
That’s the promise, but not the reality :) Try this: pick a random startup idea from the internet, something that would normally take 3–6 months to build without AI. Now go all in with AI. Don’t worry about enjoyment; just try to get it done.
You’ll notice pretty quickly that it doesn’t get you very far. Some things go faster, until you hit a wall (and you will hit it). Then you either have to redo parts or step back and actually understand what the AI built so far, so you can move forward where it can’t.
>I was thinking of all the classic exploratory learning blog posts. Things that sounded fun. Writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle.
It was "stupid" then - better alternatives already existed, but you do it to learn.
> Am I alone in this?
Absolutely not, but understand it is just a tool, not a replacement. Use it and you will soon find the joy again; it is there.
But the whole blog is a consequence of exactly this that you are describing.
It’s a huge help for diving into new frameworks, troubleshooting esoteric issues (even if it can’t solve it, it's a great rubber duck and usually highlights potential areas of concern for me to study), and just generally helping me get in the groove of actually DOING something instead of just thinking about it. And, once I do know what I’m doing and can guide it method by method and validate/correct what it outputs, it’s pretty good at typing faster than I can.
Why didn't the fact that Redis already existed make the whole thing feel pointless before? You could just go to github and copy the thing. I don't get why AI is any different in this regard.
It’s just going to take time for “best practice” to come around with this. It’s like outsourcing, for a while it seems like a good idea and it might be for very fixed tasks that you don’t really care about, but nobody does it now for important work because of the lack of control and understanding which is exactly where AI will end up. I think for coding tasks you can almost interchangeably use AI and outsourcing and preserve the meaning.
I used to work in land surveying, entering that field around the turn of the millennium just as digitalisation was hitting the industry in a big way. A common feeling among existing journeymen was one of confusion. Fear and dislike of these threatening changes, which seemed to neutralise all the hard-won professional skills. Expertise with the old equipment. Understanding of how to do things closer to first-principles. Ability to draw plans by hand. To assemble the datasets in the complex and particular old ways. And of course, to mentor juniors in the same.
Suddenly, some juniors coming in were young computer whizzes. Speeding past their seniors in these new ways. But still only juniors, for all that - still green, no matter what the tech. With years and decades yet to earn their stripes, their professionalism in all its myriad aspects. And for the seniors, their human aptitudes (which got them there in the first place) didn't vanish. They absorbed the changes, stuck with their smart peers, and evolved to match the environment. Would they have rathered that everything in the world had stayed the same as before? Of course. But is that a valid choice, professionally speaking? Or in life itself? Not really.
wrote recently about it https://punkx.org/jackdoe/misery.html
now at night i just play my walkman (fiio cp13) and work on my OS. i managed to record some good cassettes with non-AI-generated free music from youtube :) and it's pretty chill
PS: use before:2022 to search
I’ve never met so many people that hate programming so much.
You get the same thing with artists. Some product manager executive thinks their ideas are what people value. Automating away the frustration of having to manage skilled workers is costly and annoying. Nobody cares how it was made. They only care about the end result. You’re not an artist if all you had to do was write a prompt.
Every AI-bro rant is about how capital-inefficient humans are. About how fallible we are. About how replaceable we are.
The whole aesthetic has a "good art vs. bad art" parallel to it. Where people who think for themselves and write code in service of their work and curiosity are portrayed as inferior and unfit. Anyone who is using AI workflows is proper and good. If you are not developing software using this methodology then you are a relic, unfit, unstable, and undesirable.
Of course it’s all predicated on being dependent on a big tech firm, paying subscription fees and tokens to take a gamble at the AI slot machine in hopes that you’ll get a program that works the way you want it to.
Just don’t play the game. Keep writing useless programs by hand. Implement a hash table in C or assembly if you want. Write a parser for a data format you use. Make a Doom clone. Keep learning and having fun. Satisfaction comes from mastery and understanding.
Understanding fundamental algorithms, data structures, and program composition never gets old. We still use algebra today. That stuff is hundreds of years old.
Ok, you don't like a particular way of working or a particular tool. In any other era, we would just stop using that tool or method. Who is saying you cannot? Is that a real constraint or a perceived one?
Regardless, I understand the need to understand what you built. So you have a few options. You can study it (with the agent's help?), you can write your own tests / extensions for it to make sure you really get it, or you can write it yourself. I honestly think that most of those take about as long. It's only shorter when you don't want to understand it, so then we're back to the main question: Why not?
Note: I don't vibe-code or use agents. Just standard JetBrains IDEs, and a GPT-5-thinking window open for C+P.
When I need something to work that hasn't been done before, I absolutely have to craft most of the solution myself, with some minor prompts for more boilerplate things.
I see it as a tool similar to a library. It solves things that are already well known, so I can focus on the interesting new bits.
Most commenters comment because it makes them feel good inside. If a comment helps you.. well, that’s a rare side-effect.
To truly broaden your perspective - instead of just feeling good inside - you must do more than Ask HN.
You have to understand your problem and solution inside and out. This means thinking deeply about your solution, along with drawing the boxes and lines. And only then do you go to the LLM and have it implement your solution!
I heavily use LLMs daily, but if you don't truly understand the problem and solution, you're going to have a bad time.
This isn't accurate.
> So I feel pressure to always, always start by info-dumping the problem description to AI and gambling for a one-shot. Voice transcription for 10 minutes, hit send, hope I get something on the first try; if not, hope I can iterate until something works.
These things have planning modes - you can iterate on a plan all you want, make changes when ready, make changes one at a time etc. I don't know if the "pressure" is your own psychological block or you just haven't considered that you can use these tools differently.
Whether it feels satisfying or not - that's a personal thing, some people will like it, some won't. But what you're describing is just not using your tools correctly.
Yes, you still decompose problems. But what's the decomposition for? To create sub-problems small enough that the AI can solve them in one shot. That's literally what planning mode does - help you break things down into AI-solvable chunks.
You might say "that's not real thinking, that's just implementation details." But look who came up with the plan in the first place << It's the AI! Plan mode is partial automation of the thinking there too (improving every month)
When Claude Code debugs something, it's automating a chain of reasoning: "This error message means execution reached this file. That implies this variable has this value. I can test this theory by sending this HTTP request. The logs show X, so my theory was wrong. Let me try Y instead."
To get it done correctly, that's always what it's been about.
I don't feel that code I write without assistance is mine, or some kind of achievement to be proud of, or something that inflates my own sense of how smart I am. So when some of the process is replaced by AI, there isn't anything in me that can be hurt by that, none of this is mine and it never was.
This is not a technical problem or an AI problem, it’s a cultural problem where you work
We have the opposite - I expect all of our devs to understand and be responsible for AI generated code
The job now feels quite different than the one I signed up for a decade+ ago. The only options I see are to accept that with a sigh or reject automation of the fun part and lose employability (worst case) or be nagged by anxiety that eventually that’ll happen.
I've coded in win32, XWindows, GTK, UIKit, Logo, Smalltalk, QT, and others since '95. I had various (and sometimes serious) issues with any of these as I worked in them. No other mechanism of helping humans interact with computation has been more frustrating and disappointing than the web. Pointing out how silly it all is (really, I have to use 3 separate languages with different computation models, plus countless frameworks, and that's just on the client side???) never makes me popular with people who have invested huge amounts of time and energy into mastering ethereal library idioms or modern "best practices" which will be different next month. And the documentation? Find someone who did a quick blog on it, trying to get their name out there. Good luck.
The fact that an AI is an efficient but lossy compression of the big pile, helping me churn through it faster, is actually kind of refreshing for me. Any confidence that I was doing the Right Thing in this domain always made me wonder how "imagined" it was. The fact that I have a stochastic parrot with sycophantic confidence to help me hallucinate through it all? That just takes it to 11.
I thought when James Mickens wrote "To Wash It All Away" (https://scholar.harvard.edu/files/mickens/files/towashitalla...), maybe someday things would get better. 10 years later, the furniture has moved and changed color some, but its still the same shitty experience.
How can I choose my political views and preferences if I need to consult an LLM about them?
It's important to remember, at times like these, that the LLM is not thinking. You can't persuade it of anything; you're looking at a convincing response based on patterns in language.
LLM code is extremely "best practices" or even worse because of what it's trained on. If you're doing anything uncommon, you're going to get bad code.
Aside from regular arguments and slinging insults at chatgpt, I've been enjoying being able to be way more productive on my personal projects.
I've been using agentic AI to explore ESP32 in Arduino IDE. I'm learning a ton and I'm confident I could write some simpler firmware at this point and I regularly make modifications to the code myself.
But damn if it isn't amazing to have zero clue how to rewrite low level libraries for a little known sensor and within an hour have a working rewrite of the library that works perfectly with the sensor!
I'll say though, this is all hobby stuff. If my day job was professional chatgpt wrangler I think I'd be pretty over it pretty quickly. Though I'm burnt out to hell. So maybe it's best.
Here's where I'm at:
- Your subjective taste will become more important than ever, be it graphic design, code architecture, visual art, music, and so on for each domain that AI becomes good at. People with better taste will produce better results. If you have bad taste, you can't steer _any_ tool (AI or otherwise) into producing good outputs. So refining your taste and expanding it will become more important. re: "Yeah, I could've prompted for that too.", I see a parallel to Stable Diffusion visual art. Sure, anyone _can_ make _anything_, but getting certain types of artistic outputs is still an exercise in skill and knowledge. Without the right skill expression, they won't have the same outputs.
- Delegating the things where "I don't have time to think about that right now" feels really good. As an analog, e.g., importing lodash and using one of their functions instead of writing your own. With AI, it's like getting magical bespoke algorithms tailored exactly to your needs (but unlike lodash, I actually see the underlying src!). Treat it like a black box until it stops working for you. I think "use AI vs not" is similar to "use a library or not": you kinda still have to understand what you need to do before picking up the tool. You don't have to understand any tool perfectly to make effective use out of it.
- AI is a tremendous help at getting you over blockers. Previous procrastination is eliminated when you can tell AI to just start building and making forward progress, or if you ask it for a high level overview on how something works to demystify something you previously perceived as insurmountable or tough.
> Nothing feels satisfying anymore
You still have to realize that were it not for you guiding the process, the thing in question would not exist. e.g., if you vibecode a videogame, you start to realize that there's no way (today) that a model is 1-shotting that. At least, it isn't 1-shotting it exactly to your vision. You and AI compile an artifact together that's greater than the sum of both of you. I find that satisfying and exciting. Eventually you will have to fix it (and so come to understand parts you neglected to earlier).
It's incredibly satisfying when AI writes the tedious test cases for things I write personally (including all edge cases) and I just review and verify they are correct.
I still find I regret in the long term cases where I vibe-accept the code it produces without much critical thought, because when I need to finesse those, I can see how it sometimes produces a fractal of bad designs/implementations.
In a real production app with stakes and consequences you still need to be reading and understanding everything it produces imo. If you don't, it's at your own peril.
I do worry about my longterm memory though. I don't think that purely reading and thinking is enough to drill something into your brain in a way that allows you to accurately produce it again later. Probably would screw me over in a job interview without AI access.
Use it in the precise, augmenting, accelerating way.
Do your own design and architecture (it sucks at that anyway) and use AI to tab complete the work you already thought through and planned.
This can preserve your ability to reason about the project and troubleshoot, improve your productivity while not turning your brain off.
I do not want to be a programmer anymore
https://news.ycombinator.com/item?id=45481490
I Don't Want to Code with LLM's
https://news.ycombinator.com/item?id=45332448
What I like is problem solving.
Coding is 90% syntax, 10% thinking.
AI is taking away the 90% garbage, so we can channel that 90% into problem solving.
So I'm more productive, but at what cost...
For side projects no, but I use it at the level that feels like it enhances my workflow and manually write the other bits since I don’t have productivity software tracking if I’m adopting AI hard enough
AI coding fixed that. Pre-AI I loved using all of the features of an IDE with an intention of speeding up my coding. Now with AI, it's just that much faster.
>The result: Nothing feels satisfying anymore. Every problem I solve by hand feels too slow. Every problem I solve with AI feels like it doesn't count. There's this constant background feeling that whatever I just did, someone else would've done it better and faster.
I've had so much satisfaction since AI coding. I've had greater satisfaction, even.
The only exception here is learning (solving a solved problem so you can internalize it).
There are tons of problems that LLMs can't tackle. I chose two of those: polyglot programs (I already worked on them before AI) and bootstrapping from source (AI can't even understand what the problem is). The progress I can get in those areas is not improved by using LLMs, and it feels good. I am sure there are many more such problems out there.
You said "the only exception here is learning" - and that exception was my hobby. Programming simple things wasn't work for me. It was entertainment. It was what I did for fun on weekends.
Reading a blog post about writing a toy database or a parser combinator library and then spending a Saturday afternoon implementing it myself. That was like going to an amusement park. It was a few hours of enjoyable, bounded exploration. I could follow my curiosity, learn something new, and have fun doing it.
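To give a sense of scale, here is the kind of seed such a Saturday project grows from: a tiny parser-combinator sketch in Python (the names are mine, for illustration, not from any particular blog post):

```python
# Three basic combinators: each parser takes a string and returns
# (parsed_value, remaining_input) on success, or None on failure.
def char(c):
    def parse(s):
        return (c, s[1:]) if s.startswith(c) else None
    return parse

def seq(*parsers):
    def parse(s):
        out = []
        for p in parsers:
            r = p(s)
            if r is None:
                return None
            value, s = r
            out.append(value)
        return out, s
    return parse

def many(p):
    def parse(s):
        out = []
        while (r := p(s)) is not None:
            value, s = r
            out.append(value)
        return out, s
    return parse

def digit(s):
    return (s[0], s[1:]) if s and s[0].isdigit() else None

number = many(digit)
print(seq(char("("), number, char(")"))("(42)"))
# (['(', ['4', '2'], ')'], '')
```

From there the afternoon is spent adding choice, mapping, error messages, and a small grammar on top.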
And you're right: if an LLM can solve it with the same quality, it's not a problem worthy of human effort. I agree with that logic. I've internalized it from years in the industry, from working with AI, from learning to make decisions about what to spend time on.
But here's what's been lost: that logic has closed the amusement park. All those simple, fun learning projects now feel stupid. When I see those blog posts now, my gut reaction is "why would I waste time on that? That's one prompt away." The feeling that it's "not worthy" has completely drained the joy out of it.
I can't turn off that instinct anymore. I know those 200 lines of code are trivial. I know AI can generate them. And so doing it myself feels like I'm deliberately choosing to be inefficient, like I'm LARPing at being a programmer instead of actually learning something valuable.
The problem isn't that I disagree with you. The problem is that I agree with you so completely that I can no longer have fun. The only "worthy" problems left are the hard ones AI can't do. But those require months of serious investment, not a casual Saturday afternoon.
It was never "worthy". With the proliferation of free, quality, open source software, what's now a prompt away has been a github repo away for a long time. It's just that, before, you chose to ignore the existence of github repos and enjoy your hobby. Now you're choosing not to ignore the AI.
People have plenty of hobbies that are not the most "efficient" way to solve a problem. There are planes, but some people ride bikes across continents. Some walk.
LLMs exist, you can choose to what level you use them. Maybe you need to detox for a weekend or two.
The only thing you cannot do anymore is show off such projects. The portfolio of mini-tutorials is definitely a bygone concept. I actually like that part of how the culture has changed.
Another interesting challenge is to set yourself up to outperform the LLM. Golf with it. The LLM can do a parser? Okay, I'll make a faster one instead. Fewer lines of code. There are tons of learning opportunities in that.
> The only "worthy" problems left are the hard ones
That's not true. There are also unexplored problems for which the AI doesn't have enough training data to be useful.
I already learned to appreciate working with code from "others" by working in teams and leading teams in a past life. So I don't feel as personally attached to code that comes from my own fingertips anymore, or the need for the value of my work to be expressed that way.
Before this "AI" I had to do the mundane tasks of boilerplate. Now I don't. That's a win for me. The grand thinking and the whole picture of the projects is still mine, and I keep trying to give it to "AI" from time to time, except each time it spits BS. Also it helps that as a freelancer my stuff gets used by my client directly in production (no manager above, that has a group leader, that has a CEO, that has client's IT department, that finally has the client as final user). That's another good feeling. Corporations with layers above layers are the soul sucking of programming joy. Freelancing allowed me to avoid that.
I ask because I've worked across different domains: V8 bytecode optimizations, HPC at Sandia (differential equations on 50k nodes, adaptive mesh refinement heuristics), resource allocation and admission control for CI systems, a custom UDP network stack for mobile apps https://neumob.com/. In every case in my memory, the AI coding tools of today would have been useful.
You say your work is "very specific" and AI is "too stupid" for it. This just makes me very curious what does that look like concretely? What programming task exists that can't be decomposed into smaller problems?
My experience as an engineer is that I'm already just applying known solutions that researchers figured out. That's the job. Every problem I've encountered in my professional life was solvable - you decompose it, you research up an algorithm (or an approximation), you implement it. Sometimes the textbook says the math is "graduate-level" but you just... read it and it's tractable. You linearize, you approximate, you use penalty barrier methods. Not a theoretically optimal solution, but it gets the job done.
I don't see a structural difference between "turning JSON into pretty HTML" and using OR-tools to schedule workers for a department store. Both are decomposable problems. Both are solvable. The latter just has more domain jargon.
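To make that second example concrete, here's roughly the shape of it: a minimal CP-SAT sketch (assuming OR-tools is installed via `pip install ortools`; the sizes and constraints are illustrative, not a real system):

```python
from ortools.sat.python import cp_model

num_workers, num_days, num_shifts = 5, 7, 3

model = cp_model.CpModel()
# work[w, d, s] == 1 if worker w takes shift s on day d
work = {
    (w, d, s): model.NewBoolVar(f"work_{w}_{d}_{s}")
    for w in range(num_workers)
    for d in range(num_days)
    for s in range(num_shifts)
}

# every shift on every day is covered by exactly one worker
for d in range(num_days):
    for s in range(num_shifts):
        model.Add(sum(work[w, d, s] for w in range(num_workers)) == 1)

# a worker takes at most one shift per day
for w in range(num_workers):
    for d in range(num_days):
        model.Add(sum(work[w, d, s] for s in range(num_shifts)) <= 1)

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("feasible schedule found")
```

The real work is in the domain constraints and objective you layer on top, but structurally it's the same decompose-research-implement loop.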
So I'm asking: what's the concrete example? What code would you write that's supposedly beyond this?
I frequently see this kind of comment in AI threads, that there are more sophisticated kinds of AI-proof programming out there.
Let me try to clarify another way. Are you claiming that, say, 50% of total economic activity is beyond AI? Or is it some sort of niche role that only contributes 3% to GDP? Because it's very different if this "difficult" job is everywhere or only in a few small locations.
That's the "promise", but in practice it's exactly what you don't want to do.
Models can't think. Logic, accuracy, truth, etc are not things models understand, nor do they understand anything. It's just a happy accident that sometimes their output makes sense to humans based on the statistical correlations derived during training.
> The result: Nothing feels satisfying anymore. Every problem I solve by hand feels too slow. Every problem I solve with AI feels like it doesn't count. There's this constant background feeling that whatever I just did, someone else would've done it better and faster.
Am I the only one who is not totally impressed by the quality of code LLMs generate? I've used Claude, Copilot, Codex and local options, all with latest models, and I have not been impressed on the greenfield projects I work on.
Yes, they're good for rote work, especially writing tests, but if you're doing something novel or off the beaten path, then just lol.
> I was thinking of all the classic exploratory learning blog posts. Things that sounded fun. Writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle. It bothers me that my reaction to these blog posts has changed so much. 3 years ago I would be bookmarking a blog post to try it out for myself that weekend. Now those 200 lines of simple code feel only a one-sentence prompt away, and thus a waste of time.
If you don't understand these things yourself, how do you know the LLM is "correct" in what it outputs?
I'd venture to say the feeling that models can do it better than you comes from exactly that problem: you don't know enough to have educated opinions and insights into the problem you're addressing with LLMs, and thus can't accurately judge the quality of their solutions. Not that there's anything wrong with not knowing something, and this is not meant to be a swipe at you, your skills or knowledge, nor is my intention to make assumptions about you. It's just that when I use LLMs for non-trivial tasks that I'm intimately familiar with, I am not impressed. The more that I know about a domain, the more nits I can pick with whatever LLMs spew out, but when I don't know the domain, it seems like "magic", until I do some further research and find problems.
To address the bad feelings: I work with several AI companies, the ones that actually care about quality were very, very adamant about avoiding AI for development outside of doing augmented searches. They actively filtered out candidates that used AI for resumes and had AI slop code contributions, and do the same with their code base and development process. And it's not about worrying about their IP being siphoned off to LLM providers, but about the code quality in itself and the fact that there is deep value in the human beings working at a company understanding not only the code they write, but how the system works in the micro and macro levels. They're acutely aware of models' limitations and they don't want them touching their code capital.
--
I think these tools have value, I use them and reluctantly pay for them, but the idea that they're going to replace development with prompt writing is a pipe dream. You can only get so far with next-token generators.
If this seems interesting to me, and I have time, I will do it.
If it is uninteresting to me, or turns out to be uninteresting, or the schedule does not fit with mine, someone else can do it.
Exactly the same deal with how I use AI in general, not just in coding.