No Skill, No Taste: What AI Can't Hide for You
Prologue
Everyone thinks AI has democratized coding.
I think it's revealed something far more uncomfortable: most people have neither the taste nor the skill to build anything worth using.
Here's the paradox: the barrier to entry has never been lower, yet software quality has never been worse.
41% of all code written in 2026 is AI-generated[1]. 92% of US developers use AI coding tools daily[1]. We're in the golden age of productivity, right?
Wrong.
63% of organizations now spend MORE time debugging AI code than they would have spent writing it manually[2]. 45% of AI-generated code fails basic security tests[3]. Experienced developers are actually 19% SLOWER when using AI tools[4].
The tools got better. The output got worse.
Why?
Because we confused access with ability.
I – The Magic Quadrant No One Wants to Talk About
I've spent two decades working with code and teaching thousands of students.
What I've noticed is this: there's a quadrant that separates the people who build things that matter from those who just add to the noise.
```mermaid
graph TD
    A["Low Skill<br>Low Taste"] -->|"Vibe Coding"| B["Derivative Slop"]
    C["High Skill<br>Low Taste"] -->|"Enterprise"| D["Technically Sound<br>Aesthetically Dead"]
    E["Low Skill<br>High Taste"] -->|"Scrappy Indie"| F["Viral Simplicity"]
    G["High Skill<br>High Taste"] -->|"Rare"| H["Products People Love"]
    style A fill:#ff6b6b
    style E fill:#4ecdc4
    style G fill:#95e1d3
    style C fill:#f9ca24
```
Taste is knowing what's worth building and how it should feel.
Skill is the ability to execute that vision without breaking everything.
Most people overestimate both.
Here's what I mean: someone built "This Website Will Self-Destruct" — a site that would delete itself if no one visited for 24 hours. Simple concept. Clean execution. Went viral.
Low technical complexity. Pure taste.
Then there's OpenClaw — technically messy but captured the exact aesthetic and feel users wanted. People loved it despite the code quality.
High taste. Medium skill. Still worked.
Now compare that to the average AI-generated SaaS template. Perfect TypeScript. Beautiful separation of concerns. Generic as hell. No one cares.
"The sin isn't using LLMs. The sin is lacking the taste and skill to cross the quality threshold."
II – Vibe Coding Is Counterfeit Competence
Let's be honest about what's happening.
Vibe coding — copy-pasting AI suggestions without understanding them — has created an entire generation of developers who can't read their own code.
40% of junior developers now deploy AI-generated code they don't fully understand[2].
That's not learning. That's cargo culting.
Here's the brutal truth: LLMs are pattern-matching engines trained on average code from GitHub. They produce statistically probable solutions, not good ones.
86% failure rate on XSS prevention[3]. 69 vulnerabilities found across five major vibe coding tools in December 2025 alone[5].
And yet people keep shipping it.
Why?
Because it feels like progress. The code runs. The tests pass. The pull request gets merged. But only 30% of AI-suggested code gets accepted by human reviewers who actually know what they're looking at[1].
The other 70%? Derivative slop.
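To make that XSS statistic concrete, here is a minimal Python sketch of the failure mode. The function names are hypothetical and the example is deliberately tiny; the point is the pattern, not any particular tool's output:

```python
import html

def render_comment_unsafe(user_input: str) -> str:
    # The statistically probable suggestion: interpolate user input
    # straight into markup. A <script> payload executes in the browser.
    return f"<div class='comment'>{user_input}</div>"

def render_comment_safe(user_input: str) -> str:
    # The boring fix: escape before interpolating.
    return f"<div class='comment'>{html.escape(user_input)}</div>"

payload = "<script>steal(document.cookie)</script>"
print(render_comment_unsafe(payload))  # script tag survives intact
print(render_comment_safe(payload))    # &lt;script&gt;... rendered as inert text
```

The unsafe version is what pattern-matching on average GitHub code produces; the safe version is what a reviewer with taste insists on. The diff is one function call, but you have to know to ask for it.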
| Tasteful Developer | Vibe Coding |
|---|---|
| Knows why the code works | Knows that the code works (sometimes) |
| Questions AI suggestions | Accepts them blindly |
| Debugs by understanding | Debugs by regenerating |
| Builds for users | Builds for completion |
| Ships simple, coherent experiences | Ships feature lists |
The gap isn't just technical. It's philosophical.
One group sees code as a tool to solve human problems. The other sees it as a means to appear productive.
"LLMs amplify the gap. If you have taste, they help you execute faster. If you don't, they help you produce garbage at scale."
III – The Taste Gap Is Widening
Here's what no one wants to admit: AI tools are making good developers better and bad developers worse.
The METR study showed experienced developers getting 19% slower with AI assistance[4]. That seems counterintuitive until you realize what's happening.
Good developers were already optimizing for the right things: clarity, maintainability, taste.
AI tools optimize for completion.
When you're trying to build something meaningful, "done" is not the same as "good."
Bad developers don't have the taste to distinguish between them. So they ship faster. And the output is worse.
53% of organizations discovered security issues in AI code that passed initial review[2]. That's not a tooling problem. That's a judgment problem.
What struck me was how similar this feels to the 2017 crypto boom.
Everyone thought they could get rich. Most didn't.
Everyone thinks they can build software now. Most won't build anything that matters.
The barrier to entry dropped. But the barrier to quality didn't move at all.
IV – What Actually Works
If you're reading this and feeling defensive, good. That means you're paying attention.
Here's what I believe: taste is learnable, but not from AI.
Taste comes from exposure and failure.
You build taste by:
- Using great products and asking why they feel great
- Building bad things and watching people ignore them
- Reading code from people who care about craft
- Saying no to features that don't serve the core experience
Skill comes from deliberate practice.
You build skill by:
- Writing code without the assistant first
- Understanding your dependencies, not just importing them
- Debugging without regenerating
- Shipping small, coherent experiences instead of kitchen-sink MVPs
The best developers I know use AI like they use Stack Overflow: as a reference, not a crutch.
They ask it to generate boilerplate. They use it to explore API patterns. They let it handle the tedious parts.
But they don't let it think for them.
Your situation is no different.
If you're using AI to avoid learning, you're building on sand. If you're using it to accelerate what you already understand, you're leveraging a tool.
The difference is obvious to anyone who's been doing this long enough.
💭 Questions Worth Sitting With
- If AI has lowered the barrier to coding, why is software quality getting worse?
- Is taste something you can learn, or is it a byproduct of failure and experience?
- If LLMs amplify the skill gap, how should the junior developer growth path be redesigned?
Share your thoughts in the comments.
Conclusion: The Question No One Asks
Here's the timely question: "Will AI replace coding?"
But underneath that lives a deeper, eternal question:
"What are you optimizing for?"
If you're optimizing for speed, AI is great. You'll ship fast. Most of it will be forgettable.
If you're optimizing for quality, AI is a tool. One of many. Taste and skill still determine the outcome.
If you're optimizing for learning, AI is a trap. It lets you skip the struggle that builds judgment.
The technology isn't the problem. The delusion is.
We've convinced ourselves that the hard part of software is writing code. It's not. The hard part is knowing what to build and why it matters.
That requires taste. And taste doesn't come from a prompt.
"In the magic quadrant of taste and skill, the market never deceives itself. Only we do."
Sources
If this essay resonated with you, share it with one friend who needs to hear it.