Video Games Were Training Us for This
How puzzles, strategy games, and chemical mixing in Resident Evil prepared me for coding agents
I was maybe 11 or 12, stuck on a puzzle in Alone in the Dark on PlayStation 1. The game wanted me to do math. Actual math. Something involving the Pythagorean theorem to figure out a combination or unlock a door; I don’t remember exactly. What I remember is sitting there, no internet to look up the answer, trying to work it out on paper.
I didn’t finish the game. Got stuck somewhere else later. But that moment stayed with me. A video game asking me to think, not just react.
Fast forward to now. I’m using Claude Code daily, and I keep noticing that the skills I use aren’t programming skills. They’re the same skills I built playing games for twenty-something years.
The games that taught me to think
Resident Evil 3 had this section where you’re mixing chemicals to create something; I think it was to purify water or make a vaccine. You had these colored liquids and you had to combine them in the right order. Get it wrong and nothing happens. Or it explodes. I spent a lot of time on that puzzle. Trial and error. What if I try this combination? Okay, that didn’t work. What about this one?
That’s exactly what prompt engineering feels like. You try something, it doesn’t quite work, you adjust, you try again. The feedback loop is the same.
Metal Gear Solid looked straightforward when you first played it. Sneak past guards, get to the objective. But then you realize there’s stuff everywhere. Hidden items. Codec frequencies that unlock things. Entire storylines you miss if you don’t explore. The obvious path through the game is maybe 30% of what’s actually there.
Same with coding agents. The obvious use case is “write this function for me.” But there’s so much more if you dig. Subagents, custom commands, MCP servers, different permission modes. I wrote about some of this in my cheat codes post. Most people never find the hidden stuff.
Fog of war
Age of Empires and Red Alert shaped how I think about information.
You start the game and most of the map is black. You don’t know where the enemy is. You don’t know where the resources are. You don’t know the terrain. Your first job isn’t to build an army. Your first job is to scout. Send units out. Reveal the map. Get information before you commit resources.
I do the same thing with codebases now. Before I ask Claude to change anything, I need to understand what’s there. Where are the important files? How does data flow? What patterns does this codebase use?
The Anthropic team talks about this in their best practices post. They recommend an “Explore, Plan, Code, Commit” workflow. Research first, then implement. Clear the fog of war before you attack.
When I skip this step, I waste time. Claude makes changes that don’t fit the existing patterns. I have to undo things. It’s like rushing your army into unexplored territory and walking into a chokepoint.
The cheat code car
There’s a cheat in Age of Empires 2. You type “how do you turn this on” and a Shelby Cobra appears. It has a machine gun or a rocket launcher or something. It’s completely broken. You can drive around and destroy anything.
But here’s the thing. The car doesn’t win the game for you.
You still need to know where the enemy base is. You still need to manage your economy. You still need to understand the objectives. The car is overpowered but it’s just a tool. If you don’t know what you’re doing, you drive around shooting at nothing while your base gets destroyed from the other side.
That’s what LLM coding agents feel like to me. You have this absurdly powerful thing. It can write code faster than you ever could. It knows more languages and frameworks than you’ll ever learn. But if you don’t understand the system you’re building, you’re just driving the cheat car in circles.
I talked about this in my post about how coding agents made me a better programmer. The agent handles syntax. You handle architecture. That division of labor only works if you actually understand architecture.
This isn’t just cope
I know how this sounds. “Video games were actually good for me” is something every gamer wants to believe. But researchers have actually studied this.
Green and Bavelier published a paper in Current Biology in 2012 showing that action video games cause improvements in attention and perception. Not correlation. Causation. They ran controlled experiments.
Kokkinakis and colleagues found in 2017 that performance in games like League of Legends correlates with fluid intelligence measured in lab conditions. MOBA performance also declines with age in a pattern that mirrors the decline of fluid intelligence.
Manero’s team published in PLOS ONE in 2023 showing that video game play predicts fluid intelligence, mental flexibility, planning, and working memory.
Maybe my parents were wrong about games rotting my brain. Or maybe I’m cherry-picking research to justify my childhood. Probably both. But the connection feels real when I’m using these tools.
Pattern recognition
Here’s something specific.
When you play enough games, you start recognizing patterns. This enemy always attacks the same way. This puzzle type always has the same solution structure. This level design trick always means there’s a secret nearby.
Coding agents have patterns too. When Claude starts suggesting the same fix repeatedly, it’s stuck in a loop. When the responses get shorter and less specific, it’s lost context. When it says “I apologize” and restates what you just said, it doesn’t actually understand the problem.
I notice these patterns faster than people who didn’t grow up gaming. I don’t have data on this, it’s just what I’ve observed. Gamers seem to pick up agent behavior faster. They’re used to learning opaque systems through trial and error.
The Anthropic team wrote about tool design for agents and how agents need clear feedback to work well. Games figured this out decades ago. Good games give you immediate, clear feedback. Bad games leave you guessing. Same principle applies to agent interfaces.
Save states
One habit from gaming that transferred directly: save often.
In games you save before boss fights. Before you try something risky. Before you enter a new area. Because you might die and lose progress.
With coding agents I commit constantly. Before I ask Claude to refactor something big. Before I try a different approach. Before I let it touch files I don’t fully understand.
When things go wrong, and they do go wrong, I can reset. Git is just save states for code. The long-running agents post from Anthropic talks about this. They have agents commit after every successful change. Leave the codebase in a mergeable state. It’s the same instinct.
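The save-state habit is easy to script. Here’s a minimal sketch of the loop, using a throwaway repo so it runs anywhere; the file name and commit message are just examples, not anything from the Anthropic posts.

```python
# Sketch of the "save before the boss fight" habit, scripted.
# Assumes git is installed; everything else is illustrative.
import os
import subprocess
import tempfile

repo = tempfile.mkdtemp()  # scratch repo so the demo is self-contained
os.chdir(repo)

def git(*args):
    subprocess.run(["git", *args], check=True, capture_output=True)

git("init")
git("config", "user.email", "demo@example.com")
git("config", "user.name", "demo")

# Save state: commit the known-good version before the agent touches it.
with open("main.py", "w") as f:
    f.write("stable code\n")
git("add", "-A")
git("commit", "-m", "checkpoint: before agent refactor")

# The risky change goes wrong...
with open("main.py", "w") as f:
    f.write("broken agent output\n")

# ...so reload the save instead of untangling the mess.
git("reset", "--hard", "HEAD")
print(open("main.py").read().strip())  # stable code
```

The point isn’t the script itself, it’s the reflex: checkpoint before anything risky, so “reset to last save” is always one command away.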
I wrote about this in my vibe coding post too. Know when to burn it down. If you’re 45 minutes deep in a mess of changes you don’t understand, sometimes the move is to reset to the last save and try again with better context.
The system design level
This is the part I’m still working on.
Strategy games taught me to think about systems. In Age of Empires, villagers gather resources, resources go to buildings, buildings produce units, units need to be positioned somewhere useful. Everything connects. You can’t just optimize one part. You have to think about how the pieces interact.
Codebases work the same way. This function calls that function which updates this state which triggers that side effect. When Claude makes a change, I think about what else it might affect. What depends on this? What does this depend on?
The building effective agents post talks about orchestrator-worker patterns. A lead agent coordinates while specialized subagents do specific tasks. It sounds complicated but it’s just… knowing which piece does what. Like knowing your barracks makes infantry and your archery range makes archers and your stable makes cavalry. Different buildings, different units, different purposes. You have to know what each thing does or you just build randomly and lose.
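In code, the pattern is just a dispatcher. This is a toy sketch of the idea, not Anthropic’s actual API; the worker names and task strings are made up for illustration.

```python
# Toy orchestrator-worker pattern: a lead function routes each subtask
# to a specialist, the way different buildings produce different units.

def research_worker(task: str) -> str:
    # Stand-in for a subagent that explores and gathers context.
    return f"findings: {task}"

def code_worker(task: str) -> str:
    # Stand-in for a subagent that writes the actual change.
    return f"patch: {task}"

# Each worker handles one kind of task: barracks vs. archery range.
WORKERS = {"research": research_worker, "implement": code_worker}

def orchestrator(plan):
    """Dispatch each (kind, task) pair to the matching specialist."""
    return [WORKERS[kind](task) for kind, task in plan]

results = orchestrator([
    ("research", "how does auth middleware work"),
    ("implement", "add rate limiting"),
])
print(results)
```

All the orchestrator really knows is which specialist handles which kind of task. That’s the whole trick, and it’s the same knowledge that keeps you from building randomly in an RTS.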
The multi-agent research post goes deeper. They found that token usage alone explains 80% of performance variance in their evaluations. Resource management. Just like in an RTS.
What I’m still figuring out
I don’t have this fully worked out.
Sometimes the gaming instincts help. Sometimes they mislead. Games have designed solutions. Real problems don’t. Games give you all the pieces you need. Real codebases have missing documentation and outdated comments and code that nobody remembers writing.
But the core skills transfer. Exploring before committing. Managing limited resources. Recognizing patterns. Saving your progress. Thinking in systems.
I spent a lot of hours on video games as a kid. My parents thought it was a waste of time. Maybe they were right about some of it. But some of those hours were training for something I couldn’t have predicted.
Or maybe I’m just rationalizing. That’s also possible.
References
Peer-reviewed research:
- Green, C.S. & Bavelier, D. (2012). Learning, Attentional Control, and Action Video Games. Current Biology, 22(6), R197-R206.
- Kokkinakis, A.V. et al. (2017). Exploring the relationship between video game expertise and fluid intelligence. PLOS ONE, 12(11).
- Manero, B. et al. (2023). Video games and board games: Effects of playing practice on cognition. PLOS ONE, 18(3).
- Bediou, B. et al. (2023). Effects of Action Video Game Play on Cognitive Skills: A Meta-Analysis. Translational Issues in Psychological Science.
- Oei, A.C. & Patterson, M.D. (2013). Enhancing Cognition with Video Games: A Multiple Game Training Study. PLOS ONE, 8(3).
Anthropic engineering posts: