Ask HN: How are you actively keeping your thinking sharp while using LLMs daily?
I've noticed AI tools have made me lazier. I used to think and reason through problems deeply, writing out my thoughts and going back and forth on ideas. With LLMs so easily available, I find myself having a conversation with Claude instead. I'm concerned that I'm not exercising my deep-thinking muscles much.
Anyone else feel this? And more specifically — what deliberate practices are you using to keep your reasoning sharp?
When I use LLMs heavily, I definitely stop exercising parts of my thinking I used to rely on. That can look like regression.
But consider: Stone Age humans spent all their energy just getting food. No capacity left for higher-order thinking. Programming and document creation worked the same way for me - they consumed the mental bandwidth that could have gone elsewhere.
Now that AI handles that layer, I can think at a different level. In my case, handing all the coding to Claude has freed up real mental space for product strategy - decisions that actually matter.
Maybe the question isn't whether we're thinking deeply, but whether we're thinking deeply about the right things.
Quite the opposite. Review is the bottleneck. LLMs generate more code, hence more cognitive load to comprehend and review it all. Review isn't optional if it's a production system that other people use.
After seeing how astonishingly poor LLMs are at decision-making while writing code (even the best ones, like Claude Opus or GPT 5.4), I naturally stop trusting them enough in other areas of life to "just have a conversation with them and get all the answers".
It's all fun and games while the stakes are non-existent, but if the question really matters, would you trust an LLM fully as much as to not exercise thinking at all?
I've found that daily LLM use has sharpened my mind considerably; it forces a shift from 'production' to 'judgment'. Because AI commoditizes average output, the value moves to evaluation and learning to me-speak-words-good. To stay sharp, I started building tools to audit and refine the agent's reasoning cycles (an observability framework). When you treat the LLM as a junior dev, you end up thinking much harder about systemic constraints, edge cases, and the taste of the final product.
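The comment doesn't describe the framework, but the core idea of auditing an agent's outputs can be sketched minimally. Everything below (the `ReasoningTrace` and `AuditLog` names, the verdict states) is hypothetical, a sketch of the pattern rather than anyone's actual tool: record every prompt/response pair, and force an explicit review verdict before anything counts as done.

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class ReasoningTrace:
    """One audited LLM interaction: prompt, response, reviewer verdict."""
    prompt: str
    response: str
    started: float = field(default_factory=time.time)
    verdict: str = "unreviewed"  # later set to "accepted" or "rejected"
    notes: str = ""

class AuditLog:
    """Append-only log of traces; surfaces everything not yet reviewed."""
    def __init__(self):
        self.traces = []

    def record(self, prompt: str, response: str) -> ReasoningTrace:
        trace = ReasoningTrace(prompt, response)
        self.traces.append(trace)
        return trace

    def pending(self) -> list:
        # The review queue: nothing ships while this is non-empty.
        return [t for t in self.traces if t.verdict == "unreviewed"]

    def review(self, trace: ReasoningTrace, verdict: str, notes: str = ""):
        trace.verdict = verdict
        trace.notes = notes

    def dump(self) -> str:
        # Serialize for later analysis of where the agent goes wrong.
        return json.dumps([asdict(t) for t in self.traces], indent=2)

log = AuditLog()
t = log.record("Refactor the parser", "def parse(src): ...")
log.review(t, "rejected", "missed an edge case in quote handling")
```

The point of the pattern is that judgment becomes a first-class, logged step rather than an implicit click of "approve"; reviewing the dumped notes over time is what sharpens the reviewer.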
I turned off autocomplete in my IDE back in August. I absolutely hate it. I can defer work to an LLM and review it later, but I really can't stand the autocomplete built into macOS and iOS. Recently I tried Grammarly, as I'm building my own text editor for non-coding use cases, and that is driving me crazy too. Everything I type is wrong. Some suggestions are genuinely good, catching obvious typos or misspellings I simply missed, but a lot of the suggestions are subjective.
I like AI use cases where I can offload large tasks and follow up to review the output critically. I think there is a ton of opportunity to leverage yourself with AI, as all of the hype posts say you can. But most tools are simply invasive and, to me, the equivalent of social media 'junk food for the mind', where instead of doom scrolling you turn your brain off and start clicking approve, approve, copy, paste, etc.
Or worse, you delegate an easy task to an LLM agent and start doom scrolling while you wait on it. No bueno.
The actual thinking task is still very much mine.
LLM helps with solving the task faster, but here too, with lots of corrections and involving an extremely solid supervision process.
So, I kind of doubt the sincerity of your post. In fact, I doubt it very much.