Money spent is obvious—we burn through tokens like a hedge fund manager through investor capital, exhausting our weekly quotas by Tuesday. Time, however, drains away invisibly, in something I call the Anti-AI Paradox: the creeping realization that you could have hand-coded the entire feature in half the time it took to “collaborate” with your AI assistant. Let me save you some grief.
1. Being Super Vague
AI models are getting smarter by the day. But they can’t read tea leaves like some digital oracle you summoned from Silicon Valley. “My ‘Schedule’ button isn’t scheduling the post.” Sure, Einstein. I can see that. Revolutionary observation.
Give it more context. What do you see in the logs? What did you expect to happen? What happened instead? Did it fail silently? Throw an error? Launch the nuclear codes? Eric S. Raymond’s “How To Ask Questions The Smart Way” is still devastatingly relevant after all these years, but apparently nobody got the memo.
The AI isn’t a mind reader—it’s a very expensive pattern matcher. Treat it accordingly.
2. Vibe Coding in the Truest Spirit
I’m going to hit “Accept” until my fingers are sore or the code does what I want. Whichever comes first. It’s like Russian roulette, but with merge conflicts.
No. Take a step back. Talk with your tool about what needs to be implemented and what the approach should be. It is imperative—not optional, not nice-to-have—that you understand it. Ask questions until you do. Don’t allow a single line of code to be written without you knowing the consequences.
Don’t pay the ignorance tax. The interest rates are criminal.
3. Don’t Read the Code Written by AI
Again, just hit “Accept” and pray to whatever deity oversees production deployments. Why read? Why think? Why have standards?
You need to know the consequences: how this piece affects other parts of your codebase, how adding a new feature might break something else that’s been working fine for three months. Even AIs aren’t good at this kind of second-order thinking. So many “You’re absolutely right!” responses to things that were obvious in hindsight but that the AI somehow missed. This Reddit thread is in equal parts hilarious and terrifying.
Code review exists for a reason. Even if the author is artificial.
4. Do Multiple Changes in One Session
This is a surefire way to confuse the heck out of the AI, and eventually yourself. Congratulations, you’ve achieved parity—you’re both lost.
Have one Claude/Cursor session per unit of work. It can be a simple fix for a broken sidebar, or preparation for something monumental, like refactoring all functions to use JWT. How do you figure out the right unit of work? It depends on context. (Your mileage may vary, batteries not included, void where prohibited.) There are tools that divide your “build me an image editing tool” prompt into units of work that both AI and human can digest.
In the JWT example, maybe all the functions are in the auth module only. Not a lot of context switching—for both carbon and silicon-based lifeforms. Even if it breaks, you can purge that commit off the face of the earth, go back to the drawing board, recoup, rethink, and re-execute.
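Git makes that escape hatch cheap. A minimal sketch, using a throwaway repo; the paths and commit messages below are invented for illustration:

```shell
set -e
# throwaway demo repo standing in for your project (assumes git is installed)
rm -rf /tmp/purge-demo && mkdir -p /tmp/purge-demo && cd /tmp/purge-demo
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "refactor auth module to JWT (good)"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "half-baked experiment (bad)"

# the unit of work went sideways? purge the offending commit entirely,
# then go back to the drawing board with a clean history
git reset --hard -q HEAD~1
git log --oneline   # only the good commit remains
```

`git reset --hard` rewrites local history, so only do this to commits you haven’t pushed; for shared branches, `git revert` is the polite equivalent.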
But what if you lump both examples into the same session? A broken sidebar comes across as a seemingly harmless fix, until you discover that the “fix” isn’t responsive, so you need to add a new library, as a consequence of which npm run build fails spectacularly. You debug this rabbit hole for an hour. Your context window explodes like a supernova. You didn’t fix the sidebar. Two hours and eight dollars in credits flew by. (Just sayin’.)
Which brings us to...
5. Choke the Context Window
We need to be strategic with our prompts and questions. They need surgical precision, not the intellectual equivalent of a shotgun blast.
When you say “the register endpoint in @app.py returns 500 if the email already exists, but @utils.py already has a check for that,” you’re sending the entire 1,000-line app.py file and the 1,500-line utils.py file into your precious context window. Congratulations, you just spent $2 to ask a $0.50 question.
Even better: when you code app.py and utils.py in the first place, don’t let them sprawl to 5,000 lines. If the AI has to pull any of these files into context, it will cause context hemorrhage sooner or later. Give clear instructions to the AI not to make your files and modules read like Homer’s Iliad. Nobody wants to read that much Python in one sitting.
If your codebase was written by humans (remember those?), create units of work to refactor these monstrosities. Your future self will thank you profusely. The AI, even more so.
6. Don’t Use Parallel Sessions
When you’re building a feature—say, WebSocket integration—and have another in the pipeline, you fire up your IDE, give clear instructions, rightsized units of work, and then... you twiddle your thumbs while the AI finishes, right? Making coffee? Checking Twitter? Contemplating the heat death of the universe?
Have you considered using git worktrees and parallel sessions so that they can execute independently of each other? Later, you can merge both feature branches into the parent branch. Revolutionary concept, I know.
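A minimal sketch of that workflow, using a throwaway repo; the project and branch names are invented for illustration:

```shell
set -e
# throwaway repo standing in for your project (assumes git is installed)
rm -rf /tmp/worktree-demo && mkdir -p /tmp/worktree-demo && cd /tmp/worktree-demo
git init -q proj && cd proj
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# one worktree (and branch) per unit of work; each gets its own AI session
git worktree add ../proj-websockets -b feature/websockets
git worktree add ../proj-jwt-auth   -b feature/jwt-auth

# ...run the parallel sessions, each committing inside its own worktree...

# merge each finished feature branch back into the parent branch
git merge -q feature/websockets
git merge -q feature/jwt-auth

# clean up the worktrees when both are merged
git worktree remove ../proj-websockets
git worktree remove ../proj-jwt-auth
```

Each worktree is a full checkout with its own working files but shared history, so the two sessions can’t clobber each other’s edits.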
Here’s another kicker: you can orchestrate an AI agent to do this entire thing for you—split the work into parallel units, create git worktrees, run the parallel sessions, review and merge the code back, clean up. It isn’t a stretch to say we’re approaching the technological singularity. The robots are already doing the DevOps we were too lazy to automate properly.
7. MCP Overuse
MCPs (Model Context Protocol servers) consume tokens. A lot of them. They’re the SUVs of the API world—powerful, useful, and absolute gas guzzlers.
Use prudence and exercise your own judgment here. For example, to scaffold a GitHub repo with a FastAPI boilerplate, the GitHub MCP is the slowest and costliest way. You’re better off using the gh CLI. Or, you know, copying a template. Revolutionary, I know.
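For comparison, a hand-rolled sketch of that scaffold; the file contents are my own minimal assumption of what a FastAPI boilerplate looks like, and the `gh repo create` line is commented out because it needs authentication:

```shell
set -e
# scaffold a minimal FastAPI project locally: zero MCP round-trips
rm -rf /tmp/fastapi-demo && mkdir -p /tmp/fastapi-demo/app && cd /tmp/fastapi-demo
cat > app/main.py <<'EOF'
from fastapi import FastAPI

app = FastAPI()


@app.get("/health")
def health() -> dict:
    return {"status": "ok"}
EOF
printf 'fastapi\nuvicorn\n' > requirements.txt

# then one gh CLI call publishes it (run `gh auth login` first):
# gh repo create my-fastapi-app --private --source . --push
```

Two commands, a few seconds, and not a single tool-call token burned.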
Not all MCP use is stupid. But some of it is really stupid.
8. MCP Underuse
The Context7 MCP lets your AI pull up-to-date documentation for the libraries and APIs you use. The GitHub or JIRA MCP can update the task you’ve been working on as you go. The utility value of MCPs is staggering.
If you’re not using MCPs where they make sense, you’re simply leaving time and money on the table. It’s like having a Swiss Army knife and only using the bottle opener. Sure, it works, but you’re missing out.
9. Don’t Write Tests
Did you finish a unit of work? Did you forget to write tests for it? Superb! Because this is going to come back to haunt you weeks—possibly months—later, when you’re shipping something else on a tight deadline and this thing you wrote many lifetimes ago is suddenly, inexplicably broken.
Every unit of work that goes in as a git commit must be tested. At least manually. Preferably with actual test cases that run in CI/CD and don’t just live in your head as “yeah, I’m pretty sure this works.”
Future you is going to hunt down present you with a very particular set of grievances. Don’t give them ammunition.
10. Don’t Update Your Project’s Context
Something AIs and humans have in common: context is everything. And both forget it constantly.
Picture this: you’re three months into a project. You open a file. “Wait, the architecture document says we use Redis to store the messages. But here we are using ZeroMQ. Let me do a git blame.”
Ah. You did it three weeks back. Right before going on vacation. The context that seemed so obvious at the time has evaporated like morning dew. You’re now an archaeologist excavating your own code, trying to understand why Past You made these decisions.
Update your project’s context. Maintain a living document—a README, an architecture decision record, inline comments that aren’t just “// fix later” (spoiler: you won’t). Explain why you made certain choices. Your AI needs this context to give you useful suggestions. Your human collaborators need it to not send you passive-aggressive Slack messages. Your future self needs it to avoid existential crises at 2 AM.
“We switched from Redis to ZeroMQ because the message ordering guarantees were critical for the event sourcing pattern we implemented in sprint 12.” There. Was that so hard? Now everyone—human and AI alike—can work with the actual state of the world instead of a beautiful fiction from three months ago.
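One lightweight way to make that stick is an architecture decision record committed alongside the code. The `docs/adr` path and numbering below are just one common convention, not a prescription:

```shell
set -e
rm -rf /tmp/adr-demo && mkdir -p /tmp/adr-demo/docs/adr && cd /tmp/adr-demo
cat > docs/adr/0012-redis-to-zeromq.md <<'EOF'
# 12. Switch message transport from Redis to ZeroMQ

Status: accepted

## Context
The message ordering guarantees were critical for the event
sourcing pattern we implemented in sprint 12.

## Decision
Use ZeroMQ for inter-service messaging; Redis stays for caching.

## Consequences
The architecture document and deployment scripts must be updated;
new sessions, human or AI, should read this record first.
EOF
```

One short markdown file per decision, numbered and immutable once accepted, and `git blame` stops being an archaeology expedition.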
The Bottom Line
AI coding assistants are powerful tools. Emphasis on tools. They’re not magic. They won’t read your mind, fix your architecture problems, or absolve you of the responsibility to understand your own codebase.
Use them strategically. Be precise. Maintain context. Write tests. Don’t let the robots drive—you’re still the one who has to explain to your manager why the production database got dropped.
And for the love of all that is holy, read the code before you merge it.
Your token budget will thank you. Your future self will thank you. And your AI assistant will stop generating those “You’re absolutely right!” responses that make you question everything.