Budding · Planted January 16, 2026 · 7 min read

Whether You're Vibing or Not, You're Going to Use Coding Agents

Counterintuitive lessons from 125 active repositories and 50k+ commits in 2025

So here are some counterintuitive things I've learned, 125 active repositories and 50k+ commits into 2025.

This is stuff I’ve learned building with Claude Code, Cursor, Copilot, Cline, and honestly whatever else comes out at this point. Some of it I’m still figuring out.


1. Be a Detective, Not a Typist

Writing software with coding agents feels less like programming and more like being an investigator. You know how when you go to the doctor and say “my shoulder hurts,” they don’t just hand you painkillers? They ask what you did three days ago. What you slept on. Whether you lifted something weird.

Actually, I don’t know if that analogy totally holds. The point is more like: someone says “the button is broken” and your instinct is to jump in and fix the button. But I’ve wasted so many hours doing exactly that, only to find out the button was fine, there was some race condition upstream, or a state thing, or whatever. The symptom is almost never the actual problem. I try to ask a bunch of precursor questions now before I send the agent off to fix one thing and lose whatever else it was working on.


2. Your First 10 Projects Are Drafts

Nobody told me this and I wish they had: your first ten, maybe fifteen projects built with AI coding tools (codebases, web apps, desktop apps) will feel broken. Not completely broken. They’ll look right. They’ll demo fine. But then you actually use them for a week and all the gaps show up.

I kept trying to perfect my first few projects and it was such a waste of time. You learn way more from finishing 10 rough things than polishing one of them. Just get through them. Build it, use it, find where it falls apart, build the next one knowing that.


3. Love the Problem, Not the Tool

Every time a new framework drops, every time a new AI coding agent and tool launches, I get this pull to use it everywhere. I definitely did this when I started. New paradigm? Let me shove it into every project whether it makes sense or not. MCP / ACP / Skills / Whatever is next, Pills!

But like… React and Docker were hot, then they were old news, now they’re just kind of there. Same thing will happen with all the current stuff. The thing that doesn’t go stale is understanding state management, how data flows, why systems fail. I’m trying to get better at staying curious about those things instead of chasing whatever’s new.


4. Read the Code Like It’s From a Stranger

I mass approved Claude Code’s output for like two weeks straight when I first got comfortable and confident with the results. It looked right. Good variable names, reasonable structure, whatever. Then I hit this bug that took me four days to find. Turns out the agent had been silently swallowing errors in a try-catch somewhere and I’d just never noticed.

Now I treat it like a PR from a new contractor. Not because it’s bad, but because it’s confident in a way that doesn’t match how much it actually knows about my specific codebase. It doesn’t know that we never return null from that function, or that this other service expects a specific error format. You have to be the one asking “wait, what happens if this is empty.”
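The bug was roughly this shape. This is a made-up sketch (the `parseConfig` function is invented, not from my actual codebase), but it shows the pattern to look for in agent output: a catch block that returns a default and hides the failure, next to a version that surfaces it.

```typescript
// Hypothetical example of the error-swallowing pattern, not real project code.

// The dangerous version: callers get an empty object and never learn
// the input was bad. This is the kind of thing that hides for weeks.
function parseConfigSwallowed(raw: string): Record<string, unknown> {
  try {
    return JSON.parse(raw);
  } catch {
    return {}; // silently swallowed
  }
}

// The honest version: surface the failure with context instead of hiding it.
function parseConfig(raw: string): Record<string, unknown> {
  try {
    return JSON.parse(raw);
  } catch (err) {
    throw new Error(`parseConfig: invalid JSON input: ${(err as Error).message}`);
  }
}
```

Both compile, both look reasonable in a diff. Only one of them tells you when something is wrong, which is exactly the kind of difference a skim of a 400-line diff will miss.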


5. Small Asks Beat Ambitious Asks

You could prompt “Build me a complete user authentication system with OAuth, email verification, password reset, and session management.” And you’ll get something. It might even look impressive. People love that, because companies keep telling them that one-shotting a whole system is impressive. It isn’t!

I’ve learned it’s usually a mess when you dig in. One piece doesn’t quite fit, so you ask the agent to fix it, which breaks something else, and now you’re in this spiral where you don’t even know what’s changed anymore.

I’ve had way better luck going slow. “Set up the basic auth routes.” Then I actually review it, test it. “Now add email verification.” Review, test. It feels slower but it’s not. Each piece actually works before you stack the next one. I don’t know, maybe this is obvious but it took me a while to figure out.


6. Show, Don’t Tell

You can spend a paragraph explaining the exact coding pattern you want. The naming conventions, the error handling style, how you structure functions.

Or you can just paste an example and say “Here’s how we do it in userService.ts. Follow this pattern for the new orderService.ts.”

The second way works dramatically better. The coding agent mirrors the pattern and does exactly that. I’ve mostly stopped trying to describe patterns in words; instead I keep them in a mini library or pull examples from MIT-licensed open source projects.
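To make that concrete, here’s the kind of pattern file I mean. Everything in it (the `Result` type, the `getUser` function, the names) is illustrative, not a convention from any real codebase; the point is that pasting something like this says more than a paragraph of description would.

```typescript
// Illustrative pattern file: a service that returns a Result instead of
// throwing. Paste this and say "follow this pattern for orderService.ts".

type Result<T> = { ok: true; value: T } | { ok: false; error: string };

interface User {
  id: string;
  name: string;
}

async function getUser(id: string): Promise<Result<User>> {
  // Validate inputs up front and return a typed error, never throw.
  if (!id) {
    return { ok: false, error: "getUser: id is required" };
  }
  // ...real fetch logic would go here; a placeholder stands in for it...
  return { ok: true, value: { id, name: "placeholder" } };
}
```

The example encodes the conventions implicitly: the `Result` union, input validation first, typed error strings. The agent picks all of that up from one file far more reliably than from prose.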


7. Know When to Burn It Down

You’re 45 minutes deep. The agent fixed a bug, which introduced a new bug, which you asked it to fix, which broke something else. You’ve got a diff that’s 400 lines of changes you don’t fully understand anymore.

I used to just keep pushing. Sunk cost or whatever. But honestly the move is usually to just git reset to the last working version and start over. With better context, a cleaner prompt, a different approach. Often gets you to working code in like 10 minutes.

It’s weirdly hard to do this. It feels like you’ve come so far. But you haven’t really, you’ve just made a mess. Starting over isn’t failure, it’s the faster path.


8. The Coding Agent Decides “What,” You Decide “Whether”

Ask it to build a feature, it’ll build the feature. Ask it to add complexity, it’ll add complexity. It does whatever you ask.

What it won’t do is tell you that you shouldn’t. It won’t say “this feature makes your app worse” or “you’re over-engineering this” or “this problem doesn’t exist.” It has no opinions about what’s worth building.

So you still need taste. You still need judgment. I’ve definitely built features I didn’t need because the agent made it easy and I didn’t stop to ask if I should.


9. Give the Why, Not Just the What

“Add error handling to this function.”

You’ll get generic try-catch blocks. Some console.logs maybe. Technically works, practically useless.

But if you say “Add error handling to this function. Users upload CSV files that are sometimes malformed, like wrong number of columns, weird encoding, missing headers. We need to catch these cases and tell them specifically which row failed and why.”

Now you get something actually useful. I’ve started explaining context way more than I used to. Feels redundant sometimes but the output is so much better.
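Here’s a sketch of what that context buys you. The expected column count and the error shape are assumptions I made up for the example, but this is the difference between a generic try-catch and validation that can tell a user which row failed and why.

```typescript
// Minimal sketch of context-aware validation for malformed CSV uploads.
// expectedCols and the RowError shape are illustrative assumptions.

interface RowError {
  row: number;    // 1-based row number, so users can find it in their file
  reason: string; // human-readable explanation of what went wrong
}

function validateCsv(text: string, expectedCols = 3): RowError[] {
  const lines = text.trim().split("\n");
  if (lines.length === 0 || lines[0].trim() === "") {
    return [{ row: 0, reason: "missing header row" }];
  }

  const errors: RowError[] = [];
  lines.forEach((line, i) => {
    const cols = line.split(",");
    if (cols.length !== expectedCols) {
      errors.push({
        row: i + 1,
        reason: `expected ${expectedCols} columns, got ${cols.length}`,
      });
    }
  });
  return errors;
}
```

A prompt with the context from above tends to produce something in this direction; the generic prompt produces a try-catch around the whole parse that can only say “upload failed.”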


10. Learn From It, Not Just Through It

The irony of AI coding agents and tools is that people use them to skip learning. But I think you get way more out of them if you use them to learn faster.

Like after the agent generates something, ask it to explain why it structured it that way. What the alternatives were. What would break if you changed something. It’s weirdly patient. It doesn’t judge you for not knowing stuff.

I’m still trying to get better at this. It’s easy to just take the output and move on. But that’s kind of a waste.


The Bottom Line

I don’t know if I have a clean takeaway. Programming with coding agents (aka vibe coding) is weird. You’re thinking more, but differently. You’re the architect and the detective and the editor and sometimes just the person saying “no wait, go back.”

The coding agent is really good at execution. But execution without direction builds the wrong thing faster. So like… plan before you prompt, build a bunch of stuff, stay skeptical, give context, keep learning. That’s basically it, I think. This took way longer to write than I wanted, but here it is, because I’m learning to tighten my grammar and write more coherently for the Internet and for agents.