AI coding agents now account for 27% of all production code, and teams with high AI adoption merge 98% more pull requests while PR review time rises 91%. The security pipeline designed for human-paced development cannot absorb a near-doubling of review volume. In April 2026, Forrester coined Agentic Development Security, the first formal model to acknowledge this structural gap. The question is not whether traditional AppSec is failing. The question is what replaces it.

The Speed Ceiling of Traditional AppSec

Application security testing assumes code moves through a sequential pipeline at human speed: a developer writes code, commits it, a scanner runs, a reviewer triages findings, and a fix is deployed. AI coding agents violate every assumption in that model simultaneously.

The volume problem is measurable. DX's Q1 2026 analysis of 500+ organizations found that AI-generated code represents 27% of all production code, up from 22% the prior quarter. Faros AI analyzed data from over 10,000 developers and found that teams with high AI adoption merge 98% more pull requests, but their PR review time is 91% higher. Traditional security approaches cannot absorb a near-doubling of review volume without either degrading review quality or creating bottlenecks that negate the speed gains agents provide.

As one practitioner told Checkmarx: "If you are generating code ten times faster and your security team is not getting ten times faster, then you are going to have to make difficult decisions, and that is where risk emerges."

Where Each Tool Category Fails

The failures are not theoretical. Each traditional AppSec tool category has a specific, documented blind spot when confronting agentic development patterns.

| Failure Mode | Mechanism | Tool Category Affected |
| --- | --- | --- |
| Scale mismatch | Code volume grows faster than review capacity | All tools requiring human triage |
| SAST context blindness | Text-based analysis cannot evaluate architectural correctness | SAST |
| SCA hallucination gap | Phantom packages undetectable before manifest commitment | SCA |
| Non-determinism defeats signatures | Same prompt produces different code; rules cannot generalize | SAST signature-based rules |
| Provenance and context loss | No tracking of AI authorship; positionally wrong code | SAST, DAST, code review |
| Agentic privileged actor risk | Agents operate with developer-level access scopes | No existing tool category |

SAST scanners miss context-dependent flaws because static analysis assumes security properties can be determined by examining code as text, independently of runtime context — an assumption AI-generated code systematically violates. Checkmarx's analysis of DevSecOps in the age of AI states: "Limitations arising from AI-generated code stem from design assumptions — that code is deterministic, flows are reproducible, and the primary risks lie in traditional vulnerabilities. In AI, these assumptions no longer hold."
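
To see the failure concretely, consider a minimal sketch of a multi-tenant lookup (all names hypothetical). Both functions contain a parameterized query and the same tenant check, so a text-level scanner sees identical safe patterns in each; the flaw in the second is purely positional.

```python
def get_invoice(db, cache, user, invoice_id):
    """Correct ordering: the tenant check runs before anything is returned."""
    row = db.execute(
        "SELECT tenant_id, body FROM invoices WHERE id = ?", (invoice_id,)
    ).fetchone()
    if row is None or row["tenant_id"] != user.tenant_id:
        raise PermissionError("invoice belongs to another tenant")
    cache.set(invoice_id, row["body"])
    return row["body"]

def get_invoice_agent(db, cache, user, invoice_id):
    """Plausible agent-written 'optimization': the cache hit returns before
    the tenant check runs, so an invoice cached by one tenant's request is
    served to every other tenant. No individual line is unsafe; the bug is
    where the early return sits relative to the authorization check."""
    cached = cache.get(invoice_id)
    if cached is not None:
        return cached  # authorization never evaluated on this path
    row = db.execute(
        "SELECT tenant_id, body FROM invoices WHERE id = ?", (invoice_id,)
    ).fetchone()
    if row is None or row["tenant_id"] != user.tenant_id:
        raise PermissionError("invoice belongs to another tenant")
    cache.set(invoice_id, row["body"])
    return row["body"]
```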

SCA tools miss hallucinated dependencies. A USENIX study on package hallucinations found that AI model recommendations point to non-existent packages approximately 5.2% of the time for commercial models and 21.7% for open-source models. Traditional SCA scans manifests for known-vulnerable packages; it cannot intercept a hallucinated dependency at the point of AI recommendation because the hallucinated name does not appear in any manifest until a developer has already acted on the suggestion.
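
The interception point therefore has to move to the moment of suggestion. A minimal sketch, assuming PyPI as the registry and using its public JSON endpoint, of a gate that refuses to install a package name that does not resolve:

```python
import json
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Hallucinated names typically 404; real packages return a record."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            json.load(resp)  # confirm a parseable package record
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other registry errors: surface them, do not guess

def gate_agent_suggestion(name: str) -> None:
    """Run before the agent's install step, not after the manifest commit."""
    if not package_exists_on_pypi(name):
        raise RuntimeError(
            f"refusing to install {name!r}: not found on PyPI "
            "(possible hallucinated dependency)")
```

An existence check only catches hallucinations. It does nothing against a registered malicious package, which is exactly the gap the next incident illustrates; that requires additional signals such as package age, download history, and maintainer reputation.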

The Bitwarden CLI supply chain attack of April 2026 demonstrated the practical consequence: a malicious npm package impersonating @bitwarden/cli harvested cloud credentials and propagated itself through developer environments during a 93-minute window before deprecation. AI coding tools accelerated the spread because agents auto-resolve and auto-install dependencies without human review of the package name.

What Forrester's ADS Proposes

In April 2026, Forrester Senior Analyst Janet Worthington published a post arguing that AppSec needs a new operating model. The result, Agentic Development Security (ADS), is not a single product category or a rebranding of existing tools. It is a security paradigm that protects AI-powered software development end to end.

ADS spans four pillars: prevention, detection, prioritization, and remediation. The critical distinction from traditional AppSec is that ADS treats security decisions as autonomous, policy-driven actions — not alerts handed to overburdened teams. Security interventions happen at machine speed because the code they protect is generated at machine speed.

Prevention: Policy-Driven Quality Gates

Under ADS, policy-driven software development lifecycle quality gates are enforced by autonomous agents rather than manual review. When an AI coding agent produces a pull request, the prevention layer evaluates the code against organizational security policies before it enters the review pipeline. Code that fails policy checks is blocked or auto-remediated — not queued for a human who is already 91% slower at reviewing.
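
What such a gate looks like depends on the policy engine, but the shape is simple. A minimal sketch, with hypothetical policies and a simplified PR model; a production system would express these rules in a policy-as-code engine and run them in CI:

```python
from dataclasses import dataclass, field

@dataclass
class PullRequest:
    author_type: str                  # "human", "ai-assisted", "ai-autonomous"
    files_changed: list[str]
    added_dependencies: list[str] = field(default_factory=list)
    secrets_findings: int = 0

# Each policy is a named predicate over the PR; all names are illustrative.
POLICIES = [
    ("no-secrets", lambda pr: pr.secrets_findings == 0),
    ("agents-add-no-deps",
     lambda pr: pr.author_type == "human" or not pr.added_dependencies),
    ("agents-stay-out-of-auth",
     lambda pr: pr.author_type == "human"
     or not any(f.startswith("auth/") for f in pr.files_changed)),
]

def evaluate(pr: PullRequest) -> list[str]:
    """Return violated policy names; an empty list admits the PR to review."""
    return [name for name, check in POLICIES if not check(pr)]
```

A PR that fails `evaluate` is blocked or routed to auto-remediation; only clean PRs consume human review capacity.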

Detection: Beyond Static Analysis

ADS detection extends beyond the code-as-text model. It incorporates runtime signals, dependency provenance validation, and behavioral analysis of AI agent outputs. The OWASP Top 10 for LLM Applications provides the initial detection taxonomy: prompt injection, insecure output handling, excessive agency, and missing controls. As agentic applications mature, detection must extend beyond single-model interactions to analyze multi-agent workflows, tool invocation chains, autonomous decision paths, and policy enforcement gaps.
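
One behavioral detection from that list can be sketched directly: scanning an agent's tool-invocation trace for a chain where untrusted content is ingested and a privileged tool fires afterward, the classic injection-to-action path. The tool names and trace format below are assumptions, not any framework's real API:

```python
UNTRUSTED_SOURCES = {"web.fetch", "email.read", "issue.read"}
PRIVILEGED_TOOLS = {"shell.exec", "deps.install", "repo.push"}

def flag_risky_chains(trace: list[str]) -> list[tuple[str, str]]:
    """Return (source, tool) pairs where a privileged tool ran after the
    agent's context was tainted by untrusted content."""
    flagged: list[tuple[str, str]] = []
    tainted_by = None
    for call in trace:
        if call in UNTRUSTED_SOURCES:
            tainted_by = call                   # context is now tainted
        elif call in PRIVILEGED_TOOLS and tainted_by:
            flagged.append((tainted_by, call))  # possible injection-to-action
    return flagged

# Fetching a web page and then pushing to the repo should be flagged:
assert flag_risky_chains(["web.fetch", "repo.push"]) == [("web.fetch", "repo.push")]
```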

Prioritization: Context Over Volume

Forrester's central thesis is that detection alone is no longer a source of durable differentiation. SAST, DAST, SCA, secrets scanning, infrastructure-as-code scanning, and container image scanning are table stakes. What separates leaders from laggards is the ability to correlate findings with real-world context: exploitability, reachability, runtime exposure, and business impact. ADS platforms must continuously rank findings based on exposure and business impact rather than severity alone.
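
A toy version of that ranking makes the distinction from severity-sorting visible. The fields and weights below are illustrative placeholders, not any vendor's scoring model:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    severity: float          # 0-10, scanner-reported (CVSS-like)
    exploit_available: bool  # known public exploit
    reachable: bool          # vulnerable code reachable from an entry point
    internet_exposed: bool   # runtime exposure of the affected service
    business_impact: float   # 0-1, e.g. derived from data classification

def contextual_score(f: Finding) -> float:
    """Severity is the starting point; context multiplies it up or down."""
    score = f.severity
    score *= 1.5 if f.exploit_available else 0.7
    score *= 1.0 if f.reachable else 0.2    # unreachable findings sink
    score *= 1.3 if f.internet_exposed else 0.8
    return score * (0.5 + f.business_impact)

def rank(findings: list[Finding]) -> list[Finding]:
    return sorted(findings, key=contextual_score, reverse=True)
```

Under this scheme a reachable, internet-exposed medium outranks an unreachable critical, which is exactly the behavior severity-only triage cannot produce.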

Remediation: Autonomous Fix Validation

ADS calls for automated remediation that produces validated fixes preserving functionality. The remediation pillar is where LLMs provide the most direct value: correlating disparate data sources (code repositories, dependency data, security scanners, runtime signals, and workflows) into coherent, trustworthy fixes with fewer false positives. The goal is fewer alerts requiring human action, not more sophisticated alerting.
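
The trust problem this creates is taken up below, but the validation step itself is mechanical. A hedged sketch, assuming a git repository and an existing test suite (paths and commands are illustrative): apply the candidate patch in a disposable worktree and accept it only if the suite still passes.

```python
import os
import subprocess
import tempfile

def validate_fix(repo: str, patch_file: str, test_cmd: list[str]) -> bool:
    """Apply patch_file (absolute path) in an isolated worktree of `repo`
    and run test_cmd there; the main checkout is never touched."""
    with tempfile.TemporaryDirectory() as tmp:
        wt = os.path.join(tmp, "candidate")
        subprocess.run(
            ["git", "-C", repo, "worktree", "add", "--detach", wt], check=True)
        try:
            applied = subprocess.run(
                ["git", "-C", wt, "apply", patch_file]).returncode == 0
            # "Preserving functionality" means the tests still pass, not
            # merely that the scanner finding disappears.
            return applied and subprocess.run(test_cmd, cwd=wt).returncode == 0
        finally:
            subprocess.run(
                ["git", "-C", repo, "worktree", "remove", "--force", wt])
```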

AEGIS: The Broader Governance Layer

ADS addresses the development pipeline. Forrester's companion framework, AEGIS — Agentic AI Guardrails for Information Security — addresses the enterprise governance layer. AEGIS covers six domains: Governance, Risk, and Compliance; Identity and Access Management; Data Security and Privacy; Application Security; Threat Management; and Zero Trust Architecture.

The key principle AEGIS introduces is least agency: granting AI agents the minimum set of permissions and autonomy required for their task, analogous to least privilege for human users but extended to encompass decision-making scope, tool access, and data reach. AEGIS also introduces continuous assurance and explainable outcomes as governing principles.
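
Least agency can be enforced without waiting for vendor tooling. A minimal sketch, assuming a hypothetical tool-call interface: each agent task receives an explicit grant of tools and data reach, and anything outside the grant is denied by default.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgencyGrant:
    tools: frozenset[str]        # e.g. {"fs.read", "tests.run"}
    paths: tuple[str, ...] = ()  # data reach: repo subtrees the task may touch

class LeastAgencyProxy:
    """Mediates every tool call an agent makes; denies by default."""

    def __init__(self, grant: AgencyGrant):
        self.grant = grant

    def invoke(self, tool: str, path: str = "") -> None:
        if tool not in self.grant.tools:
            raise PermissionError(f"tool {tool!r} is outside this task's grant")
        if path and not any(path.startswith(p) for p in self.grant.paths):
            raise PermissionError(f"path {path!r} is outside this task's data reach")
        ...  # forward the call to the real tool here

# A refactoring task gets read and test access to one service, nothing else:
grant = AgencyGrant(frozenset({"fs.read", "tests.run"}), ("services/billing/",))
proxy = LeastAgencyProxy(grant)
proxy.invoke("fs.read", "services/billing/models.py")  # allowed
```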

Together, ADS and AEGIS form a two-layer model: ADS secures the development workflow where AI agents generate and modify code; AEGIS governs the enterprise perimeter where AI agents operate as autonomous actors. Teams deploying AI coding agents without ADS have an unguarded development pipeline. Teams deploying AI agents in production without AEGIS have ungoverned autonomous actors. Both conditions are now common.

Exceptions and Limits

ADS is not a panacea. Several constraints limit its immediate applicability.

No single vendor delivers the full ADS vision. Forrester explicitly acknowledges this. Some vendors excel at code analysis, others at supply chain analysis, and others at runtime intelligence or governance. What is missing is a unified operating model that treats security as an autonomous, continuous function. Organizations must assemble ADS capability from multiple vendors, which introduces integration complexity and coverage gaps.

Policy-driven enforcement requires mature security policy. Teams without documented, codified security policies cannot enforce them autonomously. The prevention pillar assumes policies exist in machine-readable form. Organizations still relying on spreadsheets and tribal knowledge must invest in policy formalization before ADS provides value.

Small teams with low AI adoption may not need ADS yet. If AI-generated code represents less than 10% of production commits and review capacity is adequate, traditional AppSec with incremental improvements remains sufficient. ADS addresses scale, and scale is a threshold effect.

Autonomous remediation carries trust risk. Teams must validate that auto-generated fixes do not introduce regressions, break functionality, or create new vulnerabilities. The remediation pillar works when fixes are small, scoped, and validated in isolated environments. It fails when fixes require architectural reasoning that current LLMs cannot reliably perform.

The Honest Assessment

| Dimension | Traditional AppSec | ADS |
| --- | --- | --- |
| Speed | Human-paced scan-and-review cycles | Autonomous, policy-driven, continuous |
| Scope | Code and dependencies | Code, dependencies, workflows, and running applications |
| Detection model | Pattern and signature matching | Contextual: exploitability, reachability, runtime exposure |
| Remediation | Alert with human fix | Validated autonomous fix with human oversight |
| AI authorship tracking | None | Provenance and attribution required |
| Agent access control | No category exists | Least agency governance via AEGIS |
| Vendor maturity | Established, fragmented, commoditized | Emerging, no single-vendor coverage |

The data from Wiz Research's 2026 State of AI in the Cloud report underscores the urgency: 80% of organizations have developers using AI IDE extensions, 71% have at least one AI coding assistant, and 57% have deployed self-hosted AI agent technologies. MCP servers appear in 80% of environments, introducing control-plane risks that no traditional AppSec category was designed to address. The adoption curve has already outpaced the security model.

Actionable Takeaways

  • Audit AI-generated code volume before choosing a path. Measure what percentage of production commits come from AI coding agents. If it exceeds 20%, traditional scan-and-review workflows are already degrading. If it exceeds 40%, the degradation is structural and incremental improvements will not restore coverage.
  • Formalize security policies in machine-readable form. ADS prevention requires codified policies. Teams whose policies are not documented in a format an autonomous system can evaluate must invest in policy formalization first. Policy-as-code frameworks such as OPA (Rego policies) and Kyverno (YAML policies) provide the enforcement substrate.
  • Implement provenance tracking for AI-generated code. Tag every commit, PR, and dependency addition with its origin: human-written, AI-assisted, or AI-autonomous. Without provenance, detection and prioritization cannot differentiate reliable patterns from high-risk ones. Git commit trailers and CI metadata provide the tracking mechanism; a sketch follows this list.
  • Apply least agency to AI coding agents immediately. Restrict agent access scopes to the minimum required for their task. Isolate worktrees so one agent's changes cannot affect another's workspace. Validate agent outputs before they surface to developers. These are AEGIS principles that do not require vendor support to implement.
  • Demand contextual prioritization from security vendors. Severity-based triage is insufficient when AI agents produce findings at machine speed. Require vendors to correlate findings with exploitability, reachability, and business impact. If a vendor cannot demonstrate lower false-positive rates through contextual analysis, it is not an ADS-capable platform.
  • Prepare for autonomous remediation with validation gates. Auto-generated fixes must be tested in isolated environments before deployment. Establish canary pipelines and rollback mechanisms for security patches. Treat autonomous remediation the same way you treat autonomous deployment: with verification, not blind trust.
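
For the provenance bullet above, a small sketch of what commit-trailer tagging can look like, assuming the trailer key `Code-Origin` as a team convention rather than a git standard:

```python
import subprocess

ORIGINS = {"human", "ai-assisted", "ai-autonomous"}

def commit_with_origin(message: str, origin: str) -> None:
    """Record origin as a git trailer (a key-value line in the final
    paragraph of the commit message)."""
    assert origin in ORIGINS
    subprocess.run(
        ["git", "commit", "-m", message, "-m", f"Code-Origin: {origin}"],
        check=True)

def origin_of(commit: str = "HEAD") -> str:
    """Read the trailer back in CI to pick review and scan depth per origin."""
    out = subprocess.run(
        ["git", "log", "-1",
         "--format=%(trailers:key=Code-Origin,valueonly)", commit],
        capture_output=True, text=True, check=True).stdout.strip()
    return out or "human"  # untagged commits default to human-authored
```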