DEV Community

I Don’t Understand: Why Do So Many Programmers Not Like Using AI to Assist in Coding?

Elfreda on July 03, 2025

Hey everyone, let’s talk about an interesting phenomenon: I’ve noticed quite a few programmers around me seem resistant to using AI coding assist...
Ingo Steinke, web developer • Edited

The biggest reasons for resisting AI are, as you said, that it's unreliable, writes crap code, and often takes longer refining and retrying prompts than reading the official software documentation and writing the code yourself. What a waste of computing power, electricity, water, and of all the time spent by so many people helping to train LLMs that still don't live up to their marketing claims. LLM-based AI assistants handle natural language well, but they just don't seem suitable for coding.

Elfreda

You are absolutely right; AI still has a long way to go.

david duymelinck • Edited

I recently saw a post where the advice was: don't go against AI, just guide it. But there is nothing to guide, because it is your own patterns that it is predicting.
The marketing says you will do all the exciting stuff; the reality is that we will be doing more of the boring stuff, like reviewing, and reviewing, and reviewing.
I really wonder how many hours of reviewing it takes to go through the 30 percent of a codebase that AI generated.

I think the biggest problem is that there are a lot of small decisions you have to make when programming, and instead of letting you make those decisions, AI just spits out code that gets the job done. How long will it take before you don't care anymore about all those small decisions?

There was a time when people were pushed to become specialists. The thing is, becoming a specialist takes time. AI pretends to be a specialist, but it is a generalist.

Elfreda

I completely agree; we are indeed in the age of AI now.

david duymelinck

I wouldn't go that far.

Yes, big companies are all in, because they have the funds to create a specialist LLM. But most companies will be going with AI services, so they have to pay for developers and an AI service. The question is what the right balance of AI-assisted people is for a company. And what if AI deteriorates because of AI paywalls?
Everything is still moving and shaking, and that is not really what you want when you go for a solid business.
This is the playground for opportunists.

Keep a close eye on the situation, learn what you can use. And be yourself!

Urvisha Maniar

Love this perspective. The “small decisions” point is so real — AI acts like a generalist while coding demands craftsmanship. I wonder though, would there be value if AI helped document decisions we made (like in PR descriptions) rather than trying to make them?

Nathan Tarbert

Honestly, this hits close to home. I still worry about getting too reliant on these assistants, but having a way to train one on my own coding habits actually sounds legit; it makes me rethink some of my pushback.
Do you ever feel like, over time, you'd trust the AI more, or would you still double-check every step no matter how good it gets?

matt vid

I am not resisting AI; I just use it to ask questions and to check whether what I know is correct or incorrect.
I use code made by AI, but just snippets, not to build the whole application.

Sometimes I also use it to find bugs in my code and ways to improve it. But sometimes all the things it suggests implementing are overwhelming. I just want simple code, but the AI takes so many approaches, even when it doesn't know the context.

Ben Sinclair

It costs a lot to use it the way most people do. For me personally, in cash; for everyone in the world, in terms of resources. It's possible that we might hit a turning point where the benefits outweigh the negatives, but then again we might not.

LLMs have their place. I use them (either locally or through duck.ai) for quick coding questions. They're pretty bad at producing good code, and they won't improve if they're not trained on something better; instead, they're trained on the terrible code that dominates at the moment. If you don't explicitly hold their hand, you'll get inaccessible React, for example. Div soup. Tailwind. Recommendations for old versions of things.

Or you get stuck in a loop: something doesn't work, you give the AI the error message, it suggests something else, which also doesn't work, and then it completes the cycle by suggesting its first response again. Agents do this all the time. Lovable does it. Gemini does it. It's not good.

Oscar

They produce boilerplate. They don't produce anything that someone hasn't written before (AKA code that you can find and read/learn from on your own). And as other people have mentioned, LLMs are a massive energy drain.

Urvisha Maniar

One idea I’ve been exploring: what if we stopped asking AI to write code and instead asked it to explain code? Like auto-generating pull request summaries, changelogs, or onboarding docs.

It’s not about replacing dev decisions — just helping with the meta-work that eats up hours.

Curious… would that feel like a legit use case? Or just another layer of noise?
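To make the idea concrete, here is a minimal sketch of what "explain, don't write" could look like. Everything here is an assumption for illustration: `collect_diff`, `build_pr_summary_prompt`, and the stubbed `summarize` are hypothetical names, and the actual LLM call is deliberately left unimplemented so you can plug in whatever service you use.

```python
import subprocess

def collect_diff(base: str = "main") -> str:
    """Gather the diff between the working tree and a base branch via git."""
    return subprocess.run(
        ["git", "diff", base], capture_output=True, text=True, check=True
    ).stdout

def build_pr_summary_prompt(diff: str) -> str:
    """Ask the model to explain decisions, not to write code."""
    return (
        "Summarize this diff as a pull request description.\n"
        "Focus on *why* each change was made and list any decisions "
        "a reviewer should double-check. Do not restate the code.\n\n"
        "```diff\n" + diff + "\n```"
    )

def summarize(prompt: str) -> str:
    """Hypothetical stub: send the prompt to your LLM service of choice."""
    raise NotImplementedError("call your LLM service here")
```

The point of the structure: the developer still makes the decisions, and the model is only asked to narrate them after the fact, which keeps the failure mode harmless (a bad summary, not bad code).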

Vincent Tommi

It just shows one's level of being a quack. This is a tool to complement one's work, not the focus of it. The developer's job is to be in charge of everything.

Ali Farhat

I cannot imagine using VS Code without Copilot.