Budding · Planted February 5, 2026 · 8 min read

The Missing Toolbox: Civil Engineering, Ethics, and Creative Writing Classes

Why agent builders need lateral thinking from civil engineering, creative writing, and ethics

I don’t know what a software engineer is. I’m not being cute about it. Programmer, developer, engineer, senior, junior. These words get thrown around like they mean something specific and they don’t. I’ve never worked at a FAANG company. If that’s your bar for taking someone seriously, cool, stop reading.

What I have done is contribute to open source, work as a research engineer, deploy things across 50 machines with load balancers, build scrapers, accidentally download way too much of Wikipedia. The kind of stuff where you learn by doing something slightly irresponsible and then figuring out why it broke.

But here’s what’s been bugging me. If you look at what a CS degree actually gives you, once you strip away the calc and maybe some EE or physics electives, it’s mostly algorithm classes. And I love algorithms. I’m Iraqi. I’ve walked the streets of Baghdad. The word “algorithm” comes from Al-Khwarizmi, a ninth-century mathematician from my part of the world. We’re all still using leftovers of his work. I’m proud of that.

But algorithms were never the whole thing. They were one tool in a much bigger toolbox that nobody bothered to assemble. And now that AI agents are writing the code, that missing toolbox is becoming a problem.

The missing toolbox

Software engineering as a discipline is thin. I don’t mean the work is easy. I mean the education is narrow. You get algorithms, data structures, maybe an OS class, maybe a networks class. Then you graduate and figure out the rest on the job. Compare that to civil engineering, where you spend years studying structural analysis, construction management, failure modes, load planning, and how to verify that what the computer tells you actually makes sense.

Civil engineers have been doing complex systems work for centuries. Long before anyone wrote a line of code, they were coordinating teams, managing dependencies, planning for things to go wrong, and building stuff that couldn’t afford to fall down. A bridge doesn’t get to crash gracefully. There’s no rollback on a collapsed building.

And here’s what got me thinking about this. When you build an agent system, you’re basically playing three roles at once. You’re the architect deciding what the system should look like. You’re the civil engineer figuring out if it can actually hold weight. And you’re managing the construction crew, which in this case is the agents themselves. Claude Code, Codex, Cursor, whatever you’re using.

Civil engineers don’t just trust the simulation output. They develop intuition for what a correct answer looks like before they check it. If the computer says a beam needs to be 2 inches thick for a 40-story building, they know something is wrong without running the numbers again. That instinct is exactly what’s missing when people work with coding agents. They see output that looks right, demos fine, and they ship it. Then it falls over in a week.
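That plausibility instinct can be made mechanical. Here's a minimal sketch of the habit, using the beam example above; the function name and the 0.5-inches-per-story heuristic are invented for illustration, not real structural code values:

```python
# A smell test for output, in the civil-engineering spirit:
# decide what a plausible answer looks like BEFORE trusting the computed one.

def sane_beam_thickness(stories: int, thickness_in: float) -> bool:
    """Rough plausibility gate: taller buildings need thicker members.
    The 0.5 in/story floor is a made-up heuristic for illustration."""
    return thickness_in >= 0.5 * stories

# The 2-inch beam for a 40-story building should fail the smell test.
assert not sane_beam_thickness(40, 2.0)
```

The same move works on agent output: write down the invariants a correct answer must satisfy, and check them before you ship what demos fine.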

I wrote about some of this in vibe-coding-or-not-youre-going-to-use-coding-agents. Your first 10 or 15 apps built with AI tools will feel broken. Not completely broken. They’ll look right. But the gaps show up when you actually use them.

World building and narrative

This is where it gets weird. I think agent builders should take creative writing classes.

Not because agents need to write poetry. Because designing an agent system is world building. You’re defining rules, constraints, characters, and how they all interact. You’re deciding what’s possible and what isn’t. Fiction writers have been doing this forever. They call it world building. We call it system design. It’s the same skill.

And then there’s the communication problem. Agents need to talk to people. They need to adapt tone, explain what they’re doing, tell a coherent story about why they made a decision. If you’ve never thought about narrative structure, about how to move someone from point A to point B without losing them, your agent’s output is going to feel like reading a terms of service agreement. Technically complete. Nobody wants to read it.

Prompt design is writing. I don’t mean it’s like writing. It is writing. You’re choosing words, setting context, establishing voice, managing ambiguity. Every English teacher who told you to be specific and show don’t tell was accidentally training you for this.
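To make that concrete, here's a hedged sketch of a prompt assembled like a piece of writing. The persona, rules, and helper name are invented examples; the point is that every line is a word-choice decision:

```python
# Prompt design as writing: voice, constraints, and ambiguity handling
# are each deliberate sentences, composed like paragraphs in an essay.

VOICE = "You are a support agent for a small bakery. Be warm but brief."
CONSTRAINT = "Answer in two sentences or fewer."
AMBIGUITY = "If you don't know, say so. Never invent an order number."

def build_prompt(question: str) -> str:
    # Order matters here the way it does in prose: voice first,
    # then rules, then the thing the reader actually cares about.
    return "\n".join([VOICE, CONSTRAINT, AMBIGUITY, f"Customer: {question}"])
```

Swap any one line and the output's character changes, which is exactly how editing works.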

Ethics as a design constraint

If your agent makes decisions on behalf of humans, you need to understand ethics. Not as philosophy trivia you memorize for an exam. As design constraints.

Consequentialism, deontology, virtue ethics. These aren’t abstract frameworks. They’re different ways of answering “what should my agent optimize for?” A consequentialist agent maximizes outcomes. A deontological agent follows rules regardless of outcomes. A virtue ethics agent asks what a good person would do. Most agent builders don’t think about this at all, and then they’re surprised when their system does something that’s technically correct but feels wrong.
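The difference between two of those framings fits in a few lines of code. This is a toy sketch with an invented scenario and made-up benefit scores, not a real policy engine:

```python
# Same options, two ethical framings. The scenario and scores are invented.

def consequentialist(options):
    # Maximize the outcome, whatever it takes.
    return max(options, key=lambda o: o["benefit"])

def deontological(options, forbidden):
    # Rules first: drop forbidden actions, then pick among what's left.
    allowed = [o for o in options if o["action"] not in forbidden]
    return max(allowed, key=lambda o: o["benefit"]) if allowed else None

options = [
    {"action": "share_user_data", "benefit": 9},
    {"action": "ask_for_consent", "benefit": 6},
]

best_outcome = consequentialist(options)  # picks share_user_data
rule_bound = deontological(options, forbidden={"share_user_data"})  # picks ask_for_consent
```

Same inputs, different answers. "What should my agent optimize for?" is a choice between functions like these, whether you write them down or not.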

And then there’s the stuff that’s harder to ignore. Bias. Fairness. Accountability. Consent. The gap between a demo and something you can actually ship to real people is almost entirely made of these questions. You can build an agent that works perfectly in your test environment and still causes harm because you never thought about who’s affected when it makes a mistake.

This isn’t optional anymore. It probably never was, but now that agents are making decisions at scale, the consequences of skipping this part are getting harder to rationalize away.

The actual bottleneck

Here’s the part that nobody wants to hear. The thing that limits you as an engineer, especially now, is not technical skill. It’s not how well you understand transformers or how fast you can ship. It’s whether people want to work with you.

And this isn’t just for engineers. PMs, biz dev, “idea guys,” everyone. Your intelligence is not a magic pill. Being the person who has ideas is not a job. If your whole contribution is vision and vibes while you’re still using Cursor in 2026, get off your high horse. The tools are available to everyone now. Having access to them doesn’t make you special. Knowing how to think, communicate, and actually build with other people does.

Can you explain an idea clearly? Can you listen to someone who disagrees with you without getting defensive? Can you get your point across to a team without being an asshole about it?

There’s only so much optimization can get you. People are obsessed with it. Squeeze out another 10% here, another 15% there. And then they wonder why their perfectly optimized system doesn’t get adopted, or their team falls apart, or nobody wants to collaborate with them on the next thing.

Civil engineers figured this out a long time ago. A project with 200 people and a two-year timeline doesn’t survive on technical brilliance alone. It survives on scheduling, communication, managing egos, and the boring work of making sure everyone is building toward the same thing. Construction project management is just orchestration with humans in the loop. Sound familiar?

The actual classes

If you want specifics, here’s what I’d look at.

Art and Creative Writing

  • Creative Writing: agents need to communicate clearly, adapt tone, and tell coherent stories. Understanding narrative structure helps you design better prompts and outputs.
  • World Building: designing agent environments is world building. You’re defining rules, constraints, characters, and how they interact. Fiction writers have been doing this forever.

Ethics

  • Intro to Ethics: if your agent makes decisions on behalf of humans, you need to understand frameworks like consequentialism, deontology, and virtue ethics. Not as philosophy trivia, but as design constraints.
  • Tech Ethics or AI Ethics: bias, fairness, accountability, consent. The stuff that turns a demo into something you can actually ship responsibly.

Civil Engineering

  • Structural Analysis: knowing what correct output looks like before you check it. Civil engineers validate computer models by intuition first. Agent builders need the same instinct.
  • Construction Project Management: scheduling, dependencies, resource allocation. This is orchestration thinking.
  • Structural Design: systematic load analysis before you build anything. Understand requirements and failure modes first.
  • Construction Engineering: maximizing efficiency through sequencing and planning.

Civil engineers have spent centuries planning complex systems, managing failure modes, and verifying that what the computer says actually makes sense.

Where this leaves me

I’m not saying go enroll in a civil engineering program. I’m saying the skills that are about to matter most aren’t the ones we’ve been training for. The algorithm classes were great. Al-Khwarizmi’s legacy runs deep. But the next layer is communication, ethics, structural thinking, and the ability to design systems that hold up when real people use them.

Or maybe I’m wrong. Maybe you can keep grinding LeetCode and prompting your way to success. I just think the people who bother to look sideways, to borrow from disciplines that have been solving coordination and integrity problems for centuries, are going to have an easier time building things that actually work.

And honestly, being nice helps. I know that sounds stupid. But in a world where everyone has access to the same AI tools, the person who can explain their ideas without making everyone in the room feel dumb is going to win more often than the person who can’t.

Don’t be a dick. It’s engineering advice.