I'm going to say something that will annoy people on both sides of the AI coding debate.
AI-generated code is not good or bad by itself. It's a mirror. It reflects the skill level of the person using it and it does so at about three times the speed and ten times the volume. A skilled developer using AI writes better code faster. An inexperienced developer using AI writes worse code faster. The AI doesn't know the difference. It just outputs.
I've been using local models in my daily workflow for months now, running them in tools like LM Studio and Ollama. I've watched what they do to code quality when I use them versus when junior developers on client projects use them. The gap is not subtle.
What Actually Happens When a Junior Uses AI
Ask a junior developer to build a user authentication system without AI. They'll spend time reading the Laravel docs, looking at examples, making mistakes, and learning from them. The output will be imperfect but they'll understand it. They can debug it when it breaks.
Now give that same developer Cursor or GitHub Copilot. They describe what they want. The AI generates a complete auth system in thirty seconds. It looks professional. The tests pass. It ships.
Six months later that system has a session fixation vulnerability nobody noticed because nobody fully read the code. There's no CSRF protection on one of the token refresh endpoints because the developer didn't know to ask for it, and the AI didn't volunteer it. The password reset flow has a subtle timing attack vector.
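To make the timing-attack point concrete: here is a hedged sketch, with hypothetical function names, of how a reset-token check goes from leaky to safe in plain PHP. This is illustrative, not the code from any real project.

```php
<?php
// Hypothetical reset-token check -- names are illustrative.

// Vulnerable: a plain string comparison can return as soon as the first
// byte differs, so response time leaks how much of the token matched.
// An attacker measuring latency can recover the token byte by byte.
function tokenMatchesNaive(string $supplied, string $stored): bool
{
    return $supplied === $stored;
}

// Safer: hash_equals() compares in constant time regardless of where
// the strings differ, closing the timing side channel.
function tokenMatchesSafe(string $supplied, string $stored): bool
{
    return hash_equals($stored, $supplied);
}
```

The fix is one built-in function call, which is exactly why it's so easy to miss: the naive version works perfectly in every test you'd think to write.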
None of this is the AI's fault, exactly. The AI generated plausible code for the prompt it was given. The prompt didn't mention security requirements because the person writing it didn't know those requirements existed. The AI can't tell you what you forgot to ask for.
This is the core problem. AI is very good at answering questions. It is completely useless at telling you which questions you should have asked.
What It Looks Like When a Senior Uses It
Here's what my actual workflow looks like when I use AI for a Laravel feature.
I already know what the feature needs. I've thought about the data model, the edge cases, the security boundaries. I know which Eloquent methods are injectable if misused. I know which middleware needs to be on which routes. I know what the failure modes are.
I use AI to generate the boilerplate: the migration, the resource class, the form request skeleton. Stuff that's mechanical and correct-by-default. I review every line. I spot when the generated code makes an assumption I don't agree with. I change it. I add the validation rule the AI skipped. I move the authorization check the AI put in the wrong place.
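Those fixes are small, which is why they're easy to miss in review. A hedged sketch of what they might look like in a Laravel form request — the class and field names here are hypothetical, and this fragment only runs inside a Laravel app:

```php
<?php

namespace App\Http\Requests;

use Illuminate\Foundation\Http\FormRequest;

class UpdateInvoiceRequest extends FormRequest
{
    public function authorize(): bool
    {
        // The authorization check belongs here, not buried in the
        // controller where a later refactor can route around it.
        return $this->user()->can('update', $this->route('invoice'));
    }

    public function rules(): array
    {
        return [
            // 'max' is the kind of rule generated code tends to skip:
            // a client shouldn't be able to post a value the UI never allows.
            'amount'   => ['required', 'numeric', 'min:0', 'max:100000'],
            'due_date' => ['required', 'date', 'after_or_equal:today'],
        ];
    }
}
```

None of this is clever. It's just the difference between accepting the skeleton and finishing it.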
The AI made me faster at the parts that didn't require my judgment. My judgment handled everything that mattered.
That's a completely different activity than what the junior developer was doing. It just looks the same from the outside: both of us typed a prompt and got code back.
The Confidence Problem
What makes this genuinely dangerous isn't the bad code. It's that the bad code *looks* exactly like good code.
Handwritten junior code usually has tells. Inconsistent patterns. Awkward variable names. Missing error handling that's obviously missing. An experienced developer reviewing a PR can see the skill level of the author and calibrate their scrutiny accordingly.
AI-generated code has none of those tells. It's stylistically consistent. It uses the right method names. The error handling is *there*; it's just handling the wrong errors in some cases. The structure follows conventions. It passes linting. It looks, at a glance, like it was written by someone who knew what they were doing.
That's the trap. Code review gets less rigorous because the code looks rigorous. Static analysis catches some of it — which is exactly why I started running Semgrep on every MR regardless of who wrote the code. But tooling isn't a full substitute for the reviewer understanding what they're looking at.
But AI Will Get Better
Sure. Models improve. Reasoning gets sharper. Context windows get longer.
But "better at generating code" and "better at generating secure, maintainable production code for your specific application's threat model" are not the same thing. The second one requires understanding your business logic, your deployment environment, your user base, your compliance requirements, and a hundred other things that live outside the prompt.
A model that can generate a theoretically correct authentication system still can't tell you that your specific application handles healthcare data and therefore needs specific protections the generic implementation doesn't include. You have to know that. You have to bring it to the prompt. And knowing what to bring requires exactly the kind of hard-won experience that AI is supposedly making unnecessary.
The people confidently saying "AI will replace senior developers in two years" are, in my observation, mostly people who haven't spent much time debugging AI-generated code in production. That experience has a way of clarifying your views.

What I Tell Junior Developers
Use it. It's a genuinely useful tool and pretending otherwise is just nostalgia dressed up as professionalism.
But use it like a calculator, not like an oracle. A calculator gives you the right answer if you set up the equation correctly. Set it up wrong and you get a confident, precise, completely wrong answer. The calculator doesn't know your equation was wrong.
Understand what the AI generates before you ship it. Not just "does it work": understand *why* it works and what happens when the inputs are different from what the AI assumed. If you can't explain a piece of generated code to a teammate, you're not done yet.
And build the fundamentals anyway. The developers who get the most out of AI tools are the ones who would be competent without them. The shortcuts are only useful if you know what you're shortcutting.
The Honest Bottom Line
AI coding tools are productivity multipliers. The problem with multipliers is that they scale whatever you already have: skill and judgment as much as speed and volume. Give a skilled developer a 3x multiplier and you get excellent code, faster. Give an inexperienced developer the same multiplier and you get more code, faster, with the same proportion of problems as before, except the problems are now harder to spot because the code looks polished.
The technology is not the variable here. The developer is.
I'll keep using AI in my workflow. I'll keep running Semgrep on everything it touches. I'll keep reviewing every line before it goes to production. And I'll keep being slightly suspicious of any team that's shipping faster than before but hasn't gotten noticeably better at knowing what to ship.
Speed without judgment isn't progress. It's just more surface area for things to go wrong.