With the increase in velocity from coding assistants such as Claude Code and Copilot, automated review has become a necessity. Since we rolled out Coderabbit on our project, almost everybody has stopped reviewing code. It didn't happen overnight; it happened organically.
As it turns out, Coderabbit is a formidable reviewer: with only a bit of context in the commit messages and a well-written AGENTS.md, it is often more thorough than most of my colleagues. And if that weren't enough, Coderabbit is almost always available (98.3% of the time over the last 30 days; poor by typical SLA standards, but far better than any colleague) and at a fraction of their cost.
Eventually everybody stopped reviewing code… Save for me.
For one thing, coding assistants can hallucinate blunders that neither humans—regrettably—nor Coderabbit will catch:
const NON_EDITABLE_BOOKING_STATUSES = new Set([
'cancelled',
'canceled',
'declined',
'withdrawn',
]);

I took that snippet from a merge request submitted yesterday. We do not have any declined or canceled statuses anywhere in our domain model; they simply don't exist. I chose this example because it's one of the simplest. The day before, a merge request contained a complete re-implementation of a 200-plus-line utility we already had. This happens all the time.
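One way to make this particular class of hallucination fail fast, sketched here under the assumption of a TypeScript codebase (the BookingStatus union and its members below are hypothetical, not our real domain model): type the set against the domain union, so an invented status becomes a compile-time error instead of dead code waiting for a reviewer.

```typescript
// Hypothetical domain model: the only booking statuses that exist.
type BookingStatus = 'pending' | 'confirmed' | 'cancelled' | 'completed';

// Typing the Set against the union means an entry like 'declined' or
// 'canceled' (which the domain does not have) fails to compile.
const NON_EDITABLE_BOOKING_STATUSES: Set<BookingStatus> = new Set([
  'cancelled',
  'completed',
]);

function isEditable(status: BookingStatus): boolean {
  return !NON_EDITABLE_BOOKING_STATUSES.has(status);
}
```

It doesn't replace a reviewer, but it turns this kind of blunder into a red squiggle in the assistant's own editor rather than a line item in someone's review.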
But most importantly, if the phone rings because of an issue on the system, I'm the one who has to get on the call and fix it. When my work can spill into my personal life and my weekends with family, there's no AI I will trust enough. And every alert, whether it fires during working hours or not, still costs the whole team velocity as we scramble to handle unplanned work.
Ultimately, our customers don’t care who wrote the code: “We’re sorry, Claude wrote that line and Coderabbit didn’t catch the problem” is not a good look. No matter how powerful our tools have become, responsibility is still mine. So I’ll keep on reviewing code.