Make an Honest Resume for Your Coding Agent
Why giving your coding agent an honest map of what you know and don't know changes everything about how it helps you
Here is my latest favorite hack. Make an honest resume for yourself. Not the one you put on LinkedIn. Not the one you polish for recruiters. The one that says what you’re actually good at, what you’re bad at, what you’ve never done, and where your blind spots are. Don’t share it publicly. Share it with your coding agent.
I did this a few months ago and it changed how I work with Claude Code in ways I didn’t expect.
The job search
I made a brutally honest resume. All the jobs I’d had, what I actually did at each one, where I excelled, where I struggled. Not the “increased revenue by 40%” version. The version that says “I’m strong at data infrastructure and ML but I’ve never managed a team larger than 8 people” or “I can architect a system but I’m slow at frontend work.”
Then I had a coding agent pull in job listings and evaluate which ones I’d actually be good at, not which ones I wanted to be good at. The agent used my real experience to match against requirements. It didn’t just keyword-match skills to job descriptions. It understood context. It knew that my background in offensive security meant I could handle a DevSecOps role even though I’d never had that exact title.
That process led to my current role. I’m not going to name specifics, but the point is: the honest resume got me a better match than the polished one ever did.
And here’s the thing that surprised me. The agent didn’t just match me to jobs. It started suggesting projects I could build to demonstrate skills I claimed to have. Like, if I said I was good at building data pipelines, it would suggest a project that would prove it. That’s a different kind of help than “write me a cover letter.”
The technical skills map
I also made a technical version. This one is detailed. Every area of DevOps I’m comfortable in. Every area of backend work. Python, security, web dev, system design. Rated honestly. Not “expert in everything” but “strong at X, intermediate at Y, have never touched Z.”
When I use this with Claude Code to start a new project, something changes. The agent stops playing guessing games. It stops suggesting things and waiting for me to say “no, I already know that” or “no, that’s too advanced for me.” Instead it can actually reason about trade-offs. Like: “You know Python well and you’ve deployed FastAPI services before, but you’ve never used Rust. For this project, the performance requirements probably justify learning Rust, but here’s what that learning curve looks like given where you are.”
That’s useful. That’s not the agent spoon-feeding me or assuming I’m a beginner. It’s the agent knowing my actual parameters and making real recommendations based on them.
The blind spots
This is the part that matters most. When you give the agent an honest picture of yourself, it can tell you things you didn’t ask about.
I didn’t ask “what are my blind spots?” But because the agent had my actual skill profile, it could see patterns. It noticed I had a lot of experience building things but almost none in monitoring and observability. It noticed I’d done security work but never set up proper CI/CD security scanning. It noticed I’d built ML models but never dealt with data drift in production.
These are things I knew on some level but never articulated. The honest resume turned them into visible gaps instead of vague feelings. And once they’re visible, you can do something about them. The agent can suggest learning paths, projects that fill the gaps, or at least warn you when you’re about to hit a blind spot in the middle of building something.
How Mixture of Experts works (and why this matters)
Side tangent, but I think it’s useful. If you’re working with coding agents, it helps to understand how the models behind them actually work.
Not all LLMs are built the same way. Some are “dense,” meaning every parameter is active for every token. But a lot of the newer models use something called Mixture of Experts, or MoE. The idea is that instead of one big feedforward block in each transformer layer, you have several smaller ones called experts, plus a router that decides which ones to activate for each token.
Mixtral 8x7B has 47 billion total parameters but only 13 billion are active for any given token. It has 8 experts and the router picks 2 per token. DeepSeek-V2 has 236 billion total but only 21 billion active, with 160 routed experts and 6 selected per token. Qwen3-235B-A22B has 235 billion total, 22 billion active. The naming convention literally tells you: the “A22B” means 22 billion active.
This matters because the router’s job is to figure out which experts are relevant to the current input. The better the input, the better the routing. Garbage in, garbage routing, garbage out.
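The routing step is simpler than it sounds. Here’s a minimal sketch of top-k expert routing with toy dimensions and random weights — this illustrates the mechanism, not any real model’s architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_forward(x, experts, router_w, top_k=2):
    """Route one token through a top-k Mixture of Experts layer.

    x: (d,) token hidden state
    experts: list of (W, b) feedforward weights, one per expert
    router_w: (n_experts, d) router projection
    """
    logits = router_w @ x                      # score every expert for this token
    top = np.argsort(logits)[-top_k:]          # keep only the k best
    gate = np.exp(logits[top])
    gate /= gate.sum()                         # softmax over the selected experts
    out = np.zeros_like(x)
    # Only the chosen experts run; the rest stay inactive for this token.
    for g, i in zip(gate, top):
        W, b = experts[i]
        out += g * np.maximum(W @ x + b, 0.0)  # gated sum of expert outputs
    return out, top

d, n_experts = 16, 8
experts = [(rng.standard_normal((d, d)) * 0.1, np.zeros(d)) for _ in range(n_experts)]
router_w = rng.standard_normal((n_experts, d)) * 0.1
x = rng.standard_normal(d)
y, active = moe_forward(x, experts, router_w)
print(f"active experts: {sorted(active.tolist())} of {n_experts}")
```

With 8 experts and top-2 routing, 6 of the 8 expert networks do nothing for any given token — that’s the whole trick behind “47 billion total, 13 billion active.”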
When you give a coding agent an honest resume, you’re doing something similar. You’re giving the model better input for its own internal routing. The model doesn’t need to waste capacity figuring out who you are, what you know, what level to pitch its explanations at. You’ve already told it. It can spend its active parameters on the actual problem instead of on guessing your context.
I’m not saying your brain is a MoE architecture. It’s not. But the principle of “better context produces better routing” applies to both the model and to you. The honest resume is context. The model uses it to route better. And you use the model’s output to route your own attention to your actual blind spots instead of spinning on things you already know.
This connects to the navigator pattern I wrote about in building-erdos-navigator. The Erdos database gives the agent a structured environment to explore math problems. The honest resume gives the agent structured context about you. Both make the agent’s routing better.
The action space
Think about what a coding agent actually does. It operates in a loop. At each step it can take one of a few actions: look something up, call a tool, read a file, write code, ask a question. That’s the action space.
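A toy version of that loop makes the point concrete. Everything here is hypothetical — the action names, the string-matching “skill check” — real agents choose actions with the model itself, but the shape of the decision is the same:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Toy agent loop: each step picks one action from a small action space."""
    profile: str = ""          # the honest resume, injected as context
    history: list = field(default_factory=list)

    ACTIONS = ("read_file", "write_code", "call_tool", "ask_question", "explain")

    def choose_action(self, task):
        # Without a profile, the agent hedges: explain first, then ask.
        if not self.profile:
            return "explain" if not self.history else "ask_question"
        # With a profile, it can skip what the user already knows.
        if any(skill in task for skill in self.profile.lower().split(", ")):
            return "write_code"   # known territory: go straight to work
        return "explain"          # a stated gap: slow down and teach

    def step(self, task):
        action = self.choose_action(task)
        self.history.append(action)
        return action

blind = Agent()
informed = Agent(profile="python, fastapi, data pipelines")
print(blind.step("build a fastapi service"))      # explain
print(informed.step("build a fastapi service"))   # write_code
```

The blind agent burns its first turns establishing context; the informed one spends them on the task.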
Without context about you, the agent is choosing actions blindly. It doesn’t know if it should explain something or skip ahead. It doesn’t know if it should suggest the tool you’re familiar with or the one that’s technically better. It doesn’t know if you’ll struggle with the deployment step or breeze through it.
The honest resume constrains the action space in a good way. The agent stops wasting cycles on things you already know. It spends more time on the parts where you actually need help. It makes better trade-off recommendations because it knows which trade-offs you can actually evaluate.
It’s like the difference between giving someone directions in a city they’ve never visited versus giving directions to someone who lives there but doesn’t know the east side. You don’t explain what a traffic light is. You focus on the part they don’t know.
How to actually do this
Make a document. Be honest. Structure it however makes sense for you. Here’s roughly what mine looks like:
Strong at: list the things you can do in your sleep. Be specific. Not “Python” but “Python data pipelines, FastAPI services, async programming, pandas/numpy.”
Intermediate: things you can do, but slowly or with the docs open. This is the biggest section for most people and that’s fine.
Beginner or never done: things you’ve heard of but never shipped. Be honest here. This is where the value is.
Blind spots you’re aware of: things you know you should know but don’t. Monitoring, testing, accessibility, performance optimization, whatever.
Work history (honest version): what you actually did at each job. Not what your title was. What you built, what you struggled with, what you’d do differently.
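If you want the document machine-readable, the sections above can live as structured data that gets rendered into a prompt preamble. The section names and the `to_preamble` helper here are just a sketch of one possible shape, not a required format:

```python
# Hypothetical skill profile; sections mirror the structure described above.
profile = {
    "strong": ["Python data pipelines", "FastAPI services", "async programming"],
    "intermediate": ["Docker", "Terraform"],
    "never_done": ["Rust", "Kubernetes operators"],
    "blind_spots": ["monitoring/observability", "CI/CD security scanning"],
}

def to_preamble(p):
    """Render the profile as a short block to paste into a system prompt."""
    lines = ["User skill profile (honest self-assessment):"]
    for section, items in p.items():
        lines.append(f"- {section.replace('_', ' ')}: {', '.join(items)}")
    return "\n".join(lines)

print(to_preamble(profile))
```

Plain prose works just as well; the structure only matters if you want to reuse the same profile across projects without retyping it.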
Save it somewhere private. Reference it in your Claude Code CLAUDE.md, in a system prompt, or just paste it in when you start a new project or learning session. You don’t need to share it with anyone else. The agent doesn’t judge. It just routes better.
Where this leaves me
I keep coming back to this idea that the bottleneck in working with coding agents isn’t the agent’s capability. It’s the context you give it. A blank environment produces blank advice. A polished LinkedIn resume produces polished generic advice. An honest map of who you actually are produces something useful.
I don’t think most people will do this. There’s an ego hit in writing down “I’ve never properly set up monitoring” or “I’m slow at frontend work.” But the people who do it will get more out of their agents than the people who don’t. Not because the agent is sentient or cares about your feelings. Because better input produces better output. That’s it.
Or maybe I’m overcomplicating this and you should just start every session with “here’s what I know and here’s what I don’t.” That works too. The format doesn’t matter. The honesty does.