The Mandate Is Clear. The Security Process Is Not.
Something notable happened in the last twelve months. Major technology companies stopped asking developers if they wanted to use AI coding tools and started requiring them. The productivity gains are real. Developers using Copilot, Cursor and Claude Code report shipping features faster. Managers see velocity metrics climb. Executives see headcount projections flatten.
The mandate makes business sense. The security implications have not caught up.
When the CEO Codes Again
Mark Zuckerberg told the world he is writing code again using AI tools. Linus Torvalds acknowledged that AI-generated contributions are finding their way into the Linux kernel. These are not fringe signals. When the people who built the foundations of modern technology embrace AI coding, adoption is not a question of if. It is a question of how fast.
For enterprise security teams, the question is different: who reviews what the AI produces?
Traditional code review processes assume a human author who understands the code they wrote, can explain their design decisions and can be held accountable for security implications. AI-generated code breaks all three assumptions. The developer who accepted the suggestion may not fully understand what it does. The design decisions were made by a model, not a person. And when a vulnerability is exploited, the accountability chain is unclear.
The Shadow AI Problem
Before companies mandated AI coding tools, they had a shadow AI problem. Developers were already using them without approval, on codebases they should not have shared, through personal accounts without enterprise data processing agreements.
Mandating approved tools actually helps with this. It moves AI usage from shadow IT into sanctioned channels where security teams can apply controls. But mandating tools without updating the security review process just formalizes the risk.
Here is what we see in enterprise engagements at Sherlock Forensics:
- AI coding tools are approved at the executive level
- Developers adopt them immediately and enthusiastically
- Code review processes remain unchanged from the pre-AI era
- SAST and DAST scanners catch some AI-introduced vulnerabilities but miss the most dangerous ones
- Security teams are not trained on AI-specific vulnerability patterns
- Nobody is measuring the security impact of AI adoption on the codebase
What AI Code Review Misses
The standard code review process was designed for human-written code. A developer opens a pull request. A reviewer reads the diff. They check for obvious issues, verify the logic makes sense and approve the merge.
This process breaks down with AI-generated code for three reasons:
Volume. AI tools generate more code per developer per day than manual coding. Review fatigue sets in faster. Reviewers start skimming AI-generated PRs because the code "looks right" even when it contains subtle vulnerabilities.
Pattern blindness. AI-generated code follows consistent patterns that look professional and well-structured. Reviewers learn to trust the pattern and stop examining individual lines. The vulnerability hiding in line 47 of a 200-line AI-generated function gets approved because the overall structure looks competent.
Knowledge gaps. AI introduces vulnerability patterns that human developers do not typically create. Hallucinated dependencies, overly verbose API responses, authorization checks that verify authentication but not permissions. Reviewers are not trained to look for these AI-specific issues.
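The authentication-versus-authorization pattern is worth seeing concretely. Below is a minimal sketch of the flaw: the handler confirms the caller is logged in but never checks whether they own the resource. All names and data here are illustrative, not from any real codebase.

```python
# Hypothetical token and invoice stores standing in for a real database.
USERS = {"tok-alice": {"id": 1}, "tok-bob": {"id": 2}}
INVOICES = {101: {"owner_id": 1, "amount": 500}}

def get_invoice_vulnerable(token, invoice_id):
    user = USERS.get(token)          # authentication: is the caller logged in?
    if user is None:
        return 401, None
    # MISSING authorization: no check that the caller owns this invoice,
    # so any authenticated user can read any invoice.
    return 200, INVOICES.get(invoice_id)

def get_invoice_fixed(token, invoice_id):
    user = USERS.get(token)          # authentication
    if user is None:
        return 401, None
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        return 404, None
    if invoice["owner_id"] != user["id"]:  # authorization: does the caller own it?
        return 403, None
    return 200, invoice
```

A reviewer skimming the first handler sees a credential check and moves on; the missing ownership comparison is exactly the kind of line-level detail that pattern trust hides.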
The Gap Between Mandate and Audit
The gap is this: corporations are mandating AI coding tools to increase developer productivity, but they are not updating their security infrastructure to audit the output at the same rate.
Productivity mandates are driven by engineering leadership and measured in velocity metrics. Security updates are driven by security teams and measured in risk reduction. These two groups operate on different timelines, report to different executives and optimize for different outcomes.
The result is predictable. AI tool adoption happens in weeks. Security process updates happen in quarters. The gap between those timelines is where vulnerabilities accumulate.
What Needs to Change
Enterprise security teams need to treat AI-generated code as a distinct risk category, not an extension of developer-written code. This means:
AI-specific security testing. Add test cases for the vulnerability patterns AI tools commonly introduce. Broken access control, hardcoded secrets, SQL injection via string concatenation, missing rate limiting and hallucinated dependencies need to be explicit items on your testing checklist.
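One of those checklist items, SQL injection via string concatenation, can be demonstrated and tested in a few lines. This sketch uses an in-memory SQLite database with an illustrative schema; the point is that the concatenated form leaks every row while the parameterized form does not.

```python
import sqlite3

def find_user_vulnerable(conn, name):
    # The concatenated form AI tools frequently emit: the input is
    # spliced directly into the SQL string, so it is injectable.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(conn, name):
    # Parameterized form: the driver treats the value as data, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

def demo():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(1, "alice"), (2, "bob")])
    payload = "x' OR '1'='1"                     # classic injection payload
    leaked = find_user_vulnerable(conn, payload)  # matches every row
    safe = find_user_safe(conn, payload)          # matches nothing
    return leaked, safe
```

A test like this belongs in the suite as a regression gate: if an AI-generated refactor reintroduces the concatenated form, the assertion on the leaked row count catches it.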
Dependency verification gates. Implement automated checks in your CI/CD pipeline that verify every dependency exists on the official registry before installation. Block builds that include unverifiable packages.
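Such a gate can be sketched against PyPI's public JSON API, which returns 404 for packages that do not exist. This is a simplified illustration: the requirements parsing handles only common specifier forms, and a production gate would also pin versions and hashes.

```python
import re
import urllib.error
import urllib.request

def parse_name(requirement_line):
    """Extract the bare package name from a line like 'requests==2.31.0'."""
    return re.split(r"[=<>!~\[; ]", requirement_line.strip(), maxsplit=1)[0].lower()

def exists_on_pypi(name):
    """Return True if the package is published on the official registry."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # 404 means the package does not exist on PyPI: a likely
        # hallucinated dependency, so the build should be blocked.
        return False

def check_requirements(lines):
    """Return the list of declared packages that cannot be verified."""
    return [n for n in map(parse_name, lines) if n and not exists_on_pypi(n)]
```

Wired into CI, a non-empty return value from `check_requirements` fails the build before any unverifiable package is ever installed, which is the moment a typosquatted or hallucinated dependency would otherwise execute code.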
Updated code review training. Train your reviewers on AI-specific vulnerability patterns. The review process for AI-generated code should be more thorough than for human-written code, not less.
Regular security assessments. Conduct periodic security assessments focused specifically on AI-generated components. The vulnerability patterns shift as AI models update, so a one-off point-in-time assessment is not enough; assessments need to recur.
Accountability frameworks. Define who is responsible when AI-generated code introduces a vulnerability. The developer? The team lead? The tool vendor? Without clear accountability, vulnerabilities become everyone's problem, which means they are no one's priority.
The Opportunity
This is not an argument against AI coding tools. The productivity benefits are real. The technology is improving. The adoption curve is irreversible.
It is an argument for matching the speed of AI adoption with the speed of security adaptation. Every organization mandating AI coding tools should simultaneously invest in the security processes needed to audit the output.
The organizations that get this right will have both the productivity gains and the security posture. The ones that do not will discover the gap the hard way, usually through an incident.
If your organization is mandating AI coding tools and your security processes have not been updated, start with our enterprise AI coding security framework. For a hands-on assessment, scope an engagement starting at $12,000 CAD.