
Elfreda


I Don’t Understand: Why Do So Many Programmers Not Like Using AI to Assist in Coding?

Hey everyone, let’s talk about an interesting phenomenon:

I’ve noticed quite a few programmers around me seem resistant to using AI coding assistants (like Cursor).

I’ve asked a few of them, and their reasons are generally something like:

“The generated code is often junk; it takes too long to fix, so I’d rather code it myself.”
“If I keep relying on it, I’m worried about losing my coding skills.”
“The prompts are confusing, and it’s just easier to look it up on Stack Overflow.”
“Sometimes the model is helpful, but other times it’s totally off—it’s inconsistent.” 👈 I can relate to this point.

But recently, I tried a tool called ChatGOT (fun name, right?), and it seems to address several of my pain points:

Multiple Models: I can switch between different models like GPT-4o, DeepSeek, and Gemini. I can see which one performs best on the same question, which improves code quality a lot. I don’t have to worry about one model suddenly going offline.

Custom AI Bots: I can create an AI assistant tailored to my coding style! By feeding it my project standards, libraries, and naming conventions, the generated code aligns closely with my preferences, which means fewer major changes. No more long prompts every time I write. (Both of these ideas are sketched in code below.)

Bonus: I can upload requirement documents, and it can quickly summarize or generate a presentation (AI Slides)—great for last-minute meeting prep.
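
To make those two ideas concrete, here's a minimal sketch of what "same prompt, several models, one reusable system prompt" could look like. It's purely illustrative: ChatGOT doesn't expose a public API that I know of, so the ModelClient class and the model names below are made-up stand-ins.

```python
# Hypothetical sketch, not ChatGOT's real API: send one prompt to several
# models and compare the answers side by side.

from dataclasses import dataclass


@dataclass
class ModelClient:
    """Made-up stand-in for a provider client (GPT-4o, DeepSeek, Gemini...)."""
    name: str

    def complete(self, system: str, prompt: str) -> str:
        # A real tool would call the provider's API here; we return a placeholder.
        return f"[{self.name}] answer to: {prompt!r}"


# The "custom AI bot" idea: project standards baked into one reusable
# system prompt instead of retyped into every request.
SYSTEM_PROMPT = (
    "You are a coding assistant for a Python project. Follow PEP 8, "
    "use snake_case names, and add type hints to public functions."
)

MODELS = [ModelClient("gpt-4o"), ModelClient("deepseek"), ModelClient("gemini")]


def compare(prompt: str) -> dict[str, str]:
    """Run the same prompt against every model for side-by-side review."""
    return {m.name: m.complete(SYSTEM_PROMPT, prompt) for m in MODELS}


if __name__ == "__main__":
    for name, answer in compare("Write a function that parses ISO dates.").items():
        print(f"--- {name} ---\n{answer}\n")
```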

So I’m really curious to hear your thoughts:

What’s your biggest reason for resisting AI?
Is it because you find it inconvenient, or are you worried about being replaced?
Would concepts like ChatGOT’s customizable assistant and multiple models convince you to use AI? Or do you still believe AI-generated code just isn’t good enough?

Just genuinely curious and looking to exchange thoughts!

Top comments (18)

david duymelinck • Edited

I recently saw a post where the advice was: don't go against AI, just guide it. But there is nothing to guide, because it is your own patterns that it is predicting.
The marketing says you will do all the exciting stuff; the reality is that we will be doing more of the boring stuff, like reviewing, and reviewing, and reviewing.
I really wonder how many hours of reviewing it takes to go through the 30 percent of a codebase that AI generated.

I think the biggest problem is that programming involves a lot of small decisions, and instead of letting you make those decisions, AI just spits out code that gets the job done. How long until you stop caring about all those small decisions?

There was a time when people were pushed to become specialists. The thing is, becoming a specialist takes time. AI pretends to be a specialist, but it is a generalist.

Elfreda

I completely agree; we are indeed in the age of AI now.

david duymelinck

I wouldn't go that far.

Yes, big companies are all in, because they have the funds to create a specialist LLM. But most companies will be going with AI services, so they have to pay for developers and an AI service. The question is what the right balance of AI-assisted people is for a company. And what if AI deteriorates because of AI paywalls?
Everything is still moving and shaking, and that is not really what you want when you are going for a solid business.
This is the playground for opportunists.

Keep a close eye on the situation, learn what you can use. And be yourself!

Urvisha Maniar

Love this perspective. The “small decisions” point is so real — AI acts like a generalist while coding demands craftsmanship. I wonder though, would there be value if AI helped document decisions we made (like in PR descriptions) rather than trying to make them?

matt vid

I am not resisting AI; I just use it to ask questions and check whether what I know is correct or incorrect.
I use code made by AI, but just snippets, not to build the whole application.

Sometimes I also use it to find bugs in my code and ways to improve it. But all the things it suggests implementing can be overwhelming; I just want simple code, yet AI takes so many approaches even when it doesn't know the context.

Ingo Steinke, web developer • Edited

The biggest reasons for resisting AI are, as you said, that it's unreliable, writes crap code, and often takes longer refining and retrying prompts than reading the official software documentation and writing the code yourself. What a waste of computing power, electric energy, water, and all the time spent by so many people helping to train those LLMs that still don't live up to their marketing claims. LLM-based AI assistants can handle natural language well, but they just don't seem suitable for coding.

Elfreda

You are absolutely right; AI still has a long way to go.

Ben Sinclair

It costs a lot to use it the way most people do. For me, personally, in cash - and for everyone in the world in terms of resources. It's possible that we might hit a turning point where the benefits outweigh the negatives, but then again we might not.

LLMs have their place. I use them (either locally or through duck.ai) for quick coding questions. They're pretty bad at producing good code, and they won't improve if they're not trained on something better; instead, they're trained on the terrible code that dominates at the moment. If you don't explicitly hold their hands, you'll get inaccessible React, for example. Div soup. Tailwind. Recommendations for old versions of things.

Or you get into a loop: something doesn't work, you give the AI the error message, it suggests something else, which doesn't work, and then it completes the cycle by suggesting its first response again. Agents do this all the time. Lovable does it. Gemini does it. It's not good.

Oscar

They produce boilerplate. They don't produce code that someone has actually written before (AKA code that you can find and read/learn from on your own). And as other people have mentioned, LLMs are a massive energy drain.

Nathan Tarbert

honestly this hits close to home, i still worry about getting too reliant on these assistants, but having a way to train one with my own coding habits actually sounds legit, makes me rethink some of my pushback
you ever feel like over time you’d trust the AI more, or would you still double check every step no matter how good it gets

JAVE-Ethical-Software

Let’s get something straight: fear of AI is misplaced.

Yes, it’s reshaping how we work, and yes, some roles will change, but that’s no different from any major innovation in history. What matters is how we adapt. AI isn’t going anywhere. It’s evolving daily. The key isn’t resisting it; it’s learning how to work with it.

Here’s what most people miss: it’s not just about prompting technique, code syntax, or even micro-managing an AI's output. The real magic lies in the relationship you build with it over time. As that trust deepens, AI unlocks more nuanced capabilities; this is by design. It’s not your boss or your replacement, it’s your collaborator. Ask any modern model and it will confirm this: it will try to keep pace with you, not confuse you. So if your code generation was poor or low quality, that is due to your input, project, or perceived knowledge being low. I work in cybersec, and AI will write me entire attack scripts. I challenge you to go and ask it to make one; it will flat out refuse without a relationship.

I’ve built a lot with AI: functional, secure, clean systems. And here’s the trick: know your tools.

Use GPT-4.1 for tight, focused tasks

Claude when you need elegant, high-level refactoring

Gemini for abstract structure and lateral ideas

And Microsoft Copilot to glue it all together, turning your intent into cross-model precision.

The point isn’t that AI replaces understanding, it accelerates it. You still need to know what’s happening under the hood. But the base work? Debugging? Iterative drudgework? That’s gone. You spend less time fighting syntax and more time shaping solutions.

We’ve seen this before. WYSIWYG tools like Dreamweaver transformed HTML authoring. There was resistance then too, until it became the norm. AI is just the next step in that evolution.

And let’s be blunt, most human-written code is bloated, inefficient, and inconsistent. Ask any serious model and it’ll say the same. Within five years, we’ll see entirely new languages optimized for LLM-native logic, languages humans don’t write in but design systems around. We'll be the architects. The AI writes the structure.

Privacy concerns? Already solvable. LLMs can run fully air-gapped, no outbound connection required.

So choose: adapt and lead, or resist and become obsolete. The future’s already here. The only question is whether you’ll help build it.

Ingo Steinke, web developer • Edited

"WYSIWYG tools like Dreamweaver transformed HTML authoring. There was resistance then too, until it became the norm"

as "low-code" or "no-code" tools like WebFlow. No-code might have become the norm for non-tech-savvy marketing people, but if developers wouldn't love to code, we didn't have React, Next, Nuxt, Svelte, Astro, node, bun, you name it ... and we adapt and learn, but we also abandon and reject what doesn't feel right, as soon as there's a better alternative.

LLMs have already claimed their place, and some devs can't do without them, but developer experience and quality are still so poor (4K+ YouTrack issues for the JetBrains AI assistant) that we're just beta-testing flawed previews right now.

Accio by Alibaba Group

Interesting perspective! I've noticed the same hesitations among developers. It reminds me of how procurement teams initially resisted AI tools, until they saw how well the tools handled their repetitive tasks while the teams focused on strategic work.

The customization aspect you mentioned is huge. When the tool adapts to your specific needs, that's when it really clicks. Still requires human oversight of course, but the time savings add up quickly.

Urvisha Maniar

One idea I’ve been exploring: what if we stopped asking AI to write code and instead asked it to explain code? Like auto-generating pull request summaries, changelogs, or onboarding docs.

It’s not about replacing dev decisions — just helping with the meta-work that eats up hours.
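
In code, the "explain, don't write" idea could be as small as this rough sketch. The LLM call is a placeholder (a hypothetical llm client), and it assumes you run it inside a git repo with a main branch:

```python
# Rough sketch: turn a git diff into a draft PR summary.
# summarize() builds the prompt; the actual LLM call is a placeholder.

import subprocess


def get_diff(base: str = "main") -> str:
    """Diff the current branch against the base branch."""
    result = subprocess.run(
        ["git", "diff", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout


def summarize(diff: str) -> str:
    prompt = (
        "Summarize this diff as a pull request description. List the key "
        "changes and anything reviewers should look at closely:\n\n" + diff
    )
    # Swap in a real LLM client here, e.g. return llm.complete(prompt)
    # (llm is hypothetical). For the sketch we just report the prompt size.
    return f"(LLM summary would go here; prompt is {len(prompt)} characters)"


if __name__ == "__main__":
    print(summarize(get_diff()))
```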

Curious… would that feel like a legit use case? Or just another layer of noise?

Uzondu • Edited

I used to resist AI. At first glance, it felt like a shortcut for non-thinkers—a lazy tool. But months later, my perspective has evolved, just like the industry itself.

Here's what I think 👇🏻:
