The danger of shipping AI code without review
OpenClaw showed the world you can build an entire project by letting an AI agent write and ship code without reviewing it. It went viral. People loved it. And I get it — it’s impressive, it’s fun, and it makes for a great demo.
But here’s the thing: carrying that mindset into professional work is reckless.
If you’re building a personal project, a toy, something for fun — go for it. Experiment. Let the agent run wild. But the moment you have real clients, production systems, and a team relying on the code — you can’t afford to ship what you haven’t read.
The tendency to slack off
There’s a growing pattern I see in teams adopting AI coding tools: the more the agent produces, the less people review. It starts small — you skip a self-review because the agent “knows what it’s doing.” Then you stop reading the diffs. Then you’re shipping features you don’t fully understand.
Fall for it, and you’ll get bitten. Maybe not today, maybe not next week. But eventually you’ll be debugging something the agent wrote and realise you have no idea why it made the choices it did, because you never looked.
Supply chain risks are real
AI models hallucinate. They suggest packages that don’t exist, or worse, packages with names close enough to real ones that you don’t notice the difference. Typosquatting attacks are not hypothetical — they happen, and AI tools make it easier to fall for them.
When an agent adds a dependency, it’s not doing the checks you should be doing. It’s not reviewing the source code, checking maintenance status, or verifying the author. It’s pattern-matching on training data. That’s not good enough when you’re shipping to production. Choose dependencies wisely — that responsibility doesn’t go away because an AI picked the package.
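To make that concrete, here’s a minimal sketch in Python of the kind of sanity check worth running before installing an AI-suggested dependency. It uses PyPI’s public JSON API; the package name at the bottom is a placeholder, and passing these checks is a floor, not a clearance.

```python
# Minimal sketch: sanity-check an AI-suggested dependency against PyPI
# before installing it. Passing is a floor, not a clearance.
import json
import urllib.error
import urllib.request

def vet_package(name: str) -> None:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        # A 404 is the classic hallucination signal: the package may not
        # exist, or you may be one typo away from a squatted name.
        print(f"{name}: not on PyPI ({err.code}) -- do not install")
        return

    info = data["info"]
    # Newest upload across all releases, as a rough maintenance signal.
    uploads = [
        f["upload_time_iso_8601"]
        for files in data["releases"].values()
        for f in files
    ]
    print(f"{name}: version {info['version']}, last upload {max(uploads, default='never')}")
    print(f"  author: {info.get('author') or info.get('maintainer') or 'unknown'}")
    print(f"  homepage: {info.get('home_page') or 'none listed'}")

vet_package("requests")  # swap in whatever the agent just suggested
```

Even this much catches an outright hallucinated name. The rest of it, reading the code and checking the repo and its maintainers, is still on you.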
Your machine is not a sandbox
Most developers run AI agents directly on their work machine. That machine has access to a lot: credentials, dotfiles, SSH keys, other projects, production environments. If an agent goes rogue — whether through a malicious skill, a compromised MCP server, or just a bad prompt — it’s not just one project at risk. It’s everything.
It’s different if you’re running in a self-contained environment — Docker, a separate machine, a locked-down CI pipeline. But be honest with yourself: are you? Most of us aren’t.
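For what it’s worth, the cheapest self-contained setup is a throwaway container. This is a sketch, not a hardened sandbox: the image is a placeholder, and `--network=none` also cuts off any API access the agent itself needs, so you’d relax that deliberately rather than by default.

```bash
# Run the agent in a disposable container that can only see the
# current project: no dotfiles, no SSH keys, no other repos.
docker run --rm -it \
  --network=none \
  -v "$PWD":/work \
  -w /work \
  python:3.12 bash
```

The exact command matters less than the principle: the blast radius of a rogue agent should be one directory, not your whole machine.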
We’ll get there, but we’re not there yet
Autonomous coding is the future. I genuinely believe that. But treating it as if we’re already there is reckless at this stage. The tools are impressive but not infallible. The models are powerful but not trustworthy enough to run unsupervised on production codebases.
We are professionals. Own your contributions. If your name is on the commit, you’re responsible for what it does — regardless of who or what wrote it.
What to do
- Review what AI gives you — read the code as if you wrote it
- Review your AI Skills — understand what instructions your agent follows
- Vet your AI tools — check the tools before they check your codebase
- Know your stakes — adjust your rigour to the context
- Self-review — the practice hasn’t changed, even if the author has
- Consider this your AI coding warning — keep your standards