Hey everyone, let’s talk about an interesting phenomenon:
I’ve noticed quite a few programmers around me seem resistant to using AI coding assistants.
The biggest reasons for resisting AI, as you said, are that it's unreliable, writes crap code, and it often takes longer refining and retrying prompts than reading the official documentation and writing the code yourself. What a waste of computing power, electric energy, water, and all the time spent by so many people helping to train those LLM models that still don't live up to their marketing claims. LLM-based AI assistants can handle natural language well, but they just don't seem suited for coding.
You are absolutely right; AI still has a long way to go.
I recently saw a post where the advice was: don't go against AI, just guide it. But there is nothing to guide, because it's your own patterns that it is predicting.
The marketing says you'll do all the exciting stuff; the reality is that we'll be doing more of the boring stuff, like reviewing, and reviewing, and reviewing.
I really wonder how many hours of reviewing it takes to go through the 30 percent of a codebase that AI generated.
I think the biggest problem is that there are a lot of small decisions you have to make while programming, and instead of you making those decisions, AI just spits out code that gets the job done. How long until you don't care anymore about all those small decisions?
There was a time when people were pushed to become specialists. The thing is, becoming a specialist takes time. AI pretends to be a specialist, but it is a generalist.
I completely agree; we are indeed in the age of AI now.
I wouldn't go that far.
Yes, big companies are all in, because they have the funds to create a specialist LLM. But most companies will go with the AI services, so they have to pay developers and an AI service. The question is what the right balance of AI-assisted people is for a company. And what if AI deteriorates because of AI paywalls?
Everything is still moving and shaking, and that is not really what you want when you go for a solid business.
This is the playground for opportunists.
Keep a close eye on the situation, learn what you can use. And be yourself!
Love this perspective. The “small decisions” point is so real — AI acts like a generalist while coding demands craftsmanship. I wonder though, would there be value if AI helped document decisions we made (like in PR descriptions) rather than trying to make them?
honestly this hits close to home, i still worry about getting too reliant on these assistants, but having a way to train one with my own coding habits actually sounds legit, makes me rethink some of my pushback
you ever feel like over time you’d trust the AI more, or would you still double check every step no matter how good it gets
I am not resisting AI; I just use it to ask questions and check whether what I know is correct or incorrect.
I use code made by AI, but just snippets, not to build the whole application.
Sometimes I also use it to find bugs in my code, and ways to improve my code. But sometimes it's so overwhelming, all the things to implement. I just want simple code, but AI takes so many approaches even when it doesn't know the context.
It costs a lot to use it the way most people do. For me, personally, in cash - and for everyone in the world in terms of resources. It's possible that we might hit a turning point where the benefits outweigh the negatives, but then again we might not.
LLMs have their place. I use them (either locally or through duck.ai) for quick coding questions. They're pretty bad at producing good code, and they won't improve if they're not trained on something better. They're trained instead on the terrible code that dominates at the moment. If you don't explicitly hold its hand, you'll get inaccessible React, for example. Div soup. Tailwind. Recommendations for old versions of things.
Or you get in a loop where something doesn't work and you give the AI the error message. It suggests something else, which doesn't work, then it completes the cycle by suggesting its first response again. Agents do this all the time. Lovable does it. Gemini does it. It's not good.
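To make the "div soup" point concrete, here's a minimal sketch (hypothetical, not output from any particular model) of the markup style these tools often emit, next to the semantic version you'd actually want:

```tsx
// Typical LLM-style "div soup": clickable divs, no semantics, no accessibility.
function CardSoup() {
  return (
    <div className="p-4 rounded shadow">
      <div className="text-xl font-bold">Pricing</div>
      <div onClick={() => alert("clicked")}>See plans</div>
    </div>
  );
}

// What you get only if you explicitly hold its hand: semantic, accessible elements.
function CardSemantic() {
  return (
    <section aria-labelledby="pricing-heading">
      <h2 id="pricing-heading">Pricing</h2>
      <button type="button" onClick={() => alert("clicked")}>See plans</button>
    </section>
  );
}
```

The first version works visually, which is exactly why it slips through review; screen readers and keyboard users are the ones who pay for it.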
Let’s get something straight: fear of AI is misplaced.
Yes, it’s reshaping how we work and yes, some roles will change but that’s no different than any major innovation in history. What matters is how we adapt. AI isn’t going anywhere. It’s evolving daily. The key isn’t resisting it, it’s learning how to work with it.
Here's what most people miss: it's not just about prompting technique, code syntax, or even micro-managing an AI's output. The real magic lies in the relationship you build with it over time. As that trust deepens, AI unlocks more nuanced capabilities; this is by design. It's not your boss or your replacement, it's your collaborator. Ask any modern model and it will confirm this: that it will try to keep pace with you, not confuse you. So if your code generation was poor or low quality, that is due to your input, your project, or your perceived knowledge being low. I work in cybersec, and AI will write me entire attack scripts. I challenge you to go and ask it to make one; it will flat out refuse without a relationship.
I've built a lot with AI: functional, secure, clean systems. And here's the trick: know your tools.
Use GPT-4.1 for tight, focused tasks
Claude when you need elegant, high-level refactoring
Gemini for abstract structure and lateral ideas
And Microsoft Copilot to glue it all together, turning your intent into cross-model precision.
The point isn’t that AI replaces understanding, it accelerates it. You still need to know what’s happening under the hood. But the base work? Debugging? Iterative drudgework? That’s gone. You spend less time fighting syntax and more time shaping solutions.
We’ve seen this before. WYSIWYG tools like Dreamweaver transformed HTML authoring. There was resistance then too, until it became the norm. AI is just the next step in that evolution.
And let’s be blunt, most human-written code is bloated, inefficient, and inconsistent. Ask any serious model and it’ll say the same. Within five years, we’ll see entirely new languages optimized for LLM-native logic, languages humans don’t write in but design systems around. We'll be the architects. The AI writes the structure.
Privacy concerns? Already solvable. LLMs can run fully air-gapped, no outbound connection required.
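A minimal sketch of what that can look like, assuming an Ollama-style server running on localhost (the port, endpoint, and model name here are assumptions for illustration, not a specific recommendation):

```ts
// Query a locally hosted model; nothing leaves the machine.
// Assumes an Ollama-style server at localhost:11434; model name is illustrative.
async function askLocalModel(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3", prompt, stream: false }),
  });
  if (!res.ok) throw new Error(`Local model error: ${res.status}`);
  const data = await res.json();
  return data.response; // the full answer, generated entirely offline
}
```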
So choose: adapt and lead, or resist and become obsolete. The future’s already here. The only question is whether you’ll help build it.
I used to resist AI. At first glance, it felt like a shortcut for non-thinkers—a lazy tool. But months later, my perspective has evolved, just like the industry itself.
Here's what I think 👇🏻:
AI Won’t Replace You—But It Will Expose You
They produce boilerplate. They don't produce code that someone hasn't already written before (AKA code that you can find and read/learn from on your own). And as other people have mentioned, LLMs are a massive energy drain.
One idea I’ve been exploring: what if we stopped asking AI to write code and instead asked it to explain code? Like auto-generating pull request summaries, changelogs, or onboarding docs.
It’s not about replacing dev decisions — just helping with the meta-work that eats up hours.
Curious… would that feel like a legit use case? Or just another layer of noise?
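For what it's worth, here's a rough TypeScript (Node) sketch of that idea: feed the branch diff to whatever model you trust and ask it only to explain, never to rewrite. The summarize() helper below is a hypothetical stand-in for your model call, local or hosted:

```ts
import { execSync } from "node:child_process";

// Hypothetical model call: swap in any LLM client you trust.
async function summarize(prompt: string): Promise<string> {
  // ...call your model of choice here; stubbed for the sketch
  return `(model summary of ${prompt.length} chars of diff)`;
}

// The "explain, don't write" idea: draft a PR description from the diff.
async function draftPrSummary(baseBranch = "main"): Promise<string> {
  const diff = execSync(`git diff ${baseBranch}...HEAD`).toString();
  const prompt =
    "Explain this diff for a pull request description. " +
    "Summarize intent and notable decisions. Do NOT rewrite the code.\n\n" +
    diff;
  return summarize(prompt);
}
```

The dev keeps making the decisions; the model just writes them down, which sidesteps most of the review-the-AI's-code problem people raised above.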
I can't imagine using VS Code without Copilot.
It just shows one's level of being a quack. This is just a tool to complement one's work, not the focus of it. The developer's job is to be in charge of everything.