Hey everyone, let’s talk about an interesting phenomenon:
I’ve noticed quite a few programmers around me seem resistant to using AI coding assistants (like Cursor).
I’ve asked a few of them, and their reasons are generally something like:
“The generated code is often junk; it takes too long to fix, so I’d rather code it myself.”
“If I keep relying on it, I’m worried about losing my coding skills.”
“The prompts are confusing, and it’s just easier to look it up on Stack Overflow.”
“Sometimes the model is helpful, but other times it’s totally off—it’s inconsistent.” 👈 I can relate to this point.
But recently, I tried a tool called ChatGOT (fun name, right?), and it seems to address several of my pain points:
Multiple Models: I can switch between different models like GPT-4o, DeepSeek, and Gemini. I can see which one performs best on the same question, which improves code quality a lot. I don’t have to worry about one model suddenly going offline.
Custom AI Bots: I can create an AI assistant tailored to my coding style! By feeding it my project standards, libraries, and naming conventions, the generated code aligns closely with my preferences, which means fewer major changes. No more writing long prompts every time (there's a rough sketch of what I feed it after this list).
Bonus: I can upload requirement documents, and it can quickly summarize or generate a presentation (AI Slides)—great for last-minute meeting prep.
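About that custom bot point: here's roughly the kind of conventions note I feed it. To be clear, this is just an illustrative sketch written as a TypeScript object so it's easy to keep in the repo; it is not ChatGOT's actual configuration format, and every name and field here is my own invention. The bot just receives it as pasted text.

```typescript
// Illustrative only: my personal conventions, kept as a plain TypeScript object
// in the repo and pasted into the custom bot's instructions as text.
// This is NOT an official ChatGOT format; every field name here is made up.
const projectConventions = {
  language: "TypeScript, strict mode, no `any`",
  naming: {
    functions: "camelCase",
    reactComponents: "PascalCase, one component per file",
    constants: "SCREAMING_SNAKE_CASE",
  },
  libraries: ["React 18", "TanStack Query", "Zod for input validation"],
  style: [
    "prefer async/await over .then() chains",
    "return early instead of nesting if/else",
    "every exported function gets a short doc comment",
  ],
};

export default projectConventions;
```

With that pinned to the bot, short prompts are usually enough; the conventions do the rest.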
So I’m really curious to hear your thoughts:
What’s your biggest reason for resisting AI?
Is it because you find it inconvenient, or are you worried about being replaced?
Would concepts like ChatGOT’s customizable assistant and multiple models convince you to use AI? Or do you still believe AI-generated code just isn’t good enough?
Just genuinely curious and looking to exchange thoughts!
Top comments (14)
The biggest reasons for resisting AI are, as you said, that it's unreliable, writes crap code, and often takes longer to refine and retry prompts than to read the official documentation and write the code yourself. What a waste of computing power, electric energy, water, and all the time spent by so many people helping to train those LLMs that still don't live up to their marketing claims. LLM-based AI assistants can handle natural language well, but they just don't seem suitable for coding.
You are absolutely right; AI still has a long way to go.
I recently saw a post where the advice was: don't go against AI, just guide it. There is nothing to guide, because it is your patterns that it is predicting.
The marketing says you will do all the exciting stuff; the reality is that we will be doing more boring stuff like reviewing, and reviewing, and reviewing.
I really wonder how many hours of reviewing it takes to go through the 30 percent of a codebase that AI generated.
I think the biggest problem is that there are a lot of small decisions you have to make when programming, and instead of making those decisions, AI just spits out code that gets the job done. How long is it going to take before you don't care anymore about all those small decisions?
There was a time when people were pushed to become specialists. The thing is, becoming a specialist takes time. AI pretends to be a specialist, but it is a generalist.
I completely agree; we are indeed in the age of AI now.
I wouldn't go that far.
Yes, big companies are all in, because they have the funds to create a specialist LLM. But most companies will be going with AI services, so they have to pay both developers and an AI service. The question is what the right balance of AI-assisted people is for a company. And what if AI deteriorates because of AI paywalls?
Everything is still moving and shaking, and that is not really what you want when you're trying to build a solid business.
This is the playground for opportunists.
Keep a close eye on the situation, learn what you can use. And be yourself!
Love this perspective. The “small decisions” point is so real — AI acts like a generalist while coding demands craftsmanship. I wonder though, would there be value if AI helped document decisions we made (like in PR descriptions) rather than trying to make them?
honestly this hits close to home, i still worry about getting too reliant on these assistants, but having a way to train one with my own coding habits actually sounds legit, makes me rethink some of my pushback
you ever feel like over time you’d trust the AI more, or would you still double check every step no matter how good it gets
I am not resisting AI; I just use it to ask questions and check whether what I know is correct or incorrect.
I use code made by AI, but just snippets, not to build the whole application.
Sometimes I also use it to find bugs in my code and ways to improve it. But sometimes all the things it wants me to implement are overwhelming; I just want simple code, but AI takes so many approaches even when it doesn't know the context.
It costs a lot to use it the way most people do. For me, personally, in cash - and for everyone in the world in terms of resources. It's possible that we might hit a turning point where the benefits outweigh the negatives, but then again we might not.
LLMs have their place. I use them (either locally or through duck.ai) for quick coding questions. They're pretty bad at producing good code, and they won't improve if they're not trained on something better; instead they're trained on the terrible code that dominates at the moment. If you don't explicitly hold their hand, you'll get inaccessible React, for example. Div soup. Tailwind. Recommendations for old versions of things.
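To make that concrete, here's the kind of difference I mean. This is a made-up sketch, not output from any particular model:

```tsx
// Made-up example, not real model output.
import React from "react";

const submit = () => console.log("saved");

// "Div soup": what you tend to get unguided. A clickable div has no button
// semantics, no keyboard focus, and screen readers won't announce it as a control.
const SaveSoup = () => (
  <div className="flex items-center rounded bg-blue-500 p-2" onClick={submit}>
    <div className="text-white">Save</div>
  </div>
);

// What you usually have to ask for explicitly: a real, accessible button.
const SaveButton = () => (
  <button type="submit" onClick={submit}>
    Save
  </button>
);

export { SaveSoup, SaveButton };
```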
Or you get in a loop where something doesn't work and you give the AI the error message. It suggests something else, which doesn't work, then it completes the cycle by suggesting its first response again. Agents do this all the time. Lovable does it. Gemini does it. It's not good.
They produce boilerplate, not code that someone has actually written before (the kind you could find, read, and learn from on your own). And as other people have mentioned, LLMs are a massive energy drain.
One idea I’ve been exploring: what if we stopped asking AI to write code and instead asked it to explain code? Like auto-generating pull request summaries, changelogs, or onboarding docs.
It’s not about replacing dev decisions — just helping with the meta-work that eats up hours.
Curious… would that feel like a legit use case? Or just another layer of noise?
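For what it's worth, the plumbing for that could be tiny. Here's a hypothetical sketch; the endpoint, model name, and response shape are placeholders I made up, not any real provider's API:

```typescript
// Hypothetical sketch: summarize a diff instead of generating code.
// Usage: git diff main...HEAD | npx tsx summarize-pr.ts
// The URL, model name, and response shape below are placeholders, not a real API.
import { readFileSync } from "node:fs";

async function summarizeDiff(diff: string): Promise<string> {
  const response = await fetch("https://llm.example.com/v1/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "placeholder-model",
      messages: [
        {
          role: "system",
          content:
            "Write a concise pull request description for this diff. Explain the intent and notable decisions; do not rewrite the code.",
        },
        { role: "user", content: diff },
      ],
    }),
  });
  const data = (await response.json()) as { text: string };
  return data.text;
}

// Read the diff from stdin (file descriptor 0) and print the summary.
const diff = readFileSync(0, "utf8");
summarizeDiff(diff).then(console.log);
```

The decisions stay with the humans; the model only writes up the explanation.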
It just shows one's level of being a quack; this is just a tool to complement one's work, not the focus of it. The developer's job is to be in charge of everything.
I cannot imagine using VS Code without Copilot.