The Security Team That Deleted 90% of Their Alerts (And Caught More Threats)
Your SOC processes 4,000 alerts daily. Your analysts investigate 12% of them. The rest sit in queues, auto-close after 72 hours, or get dismissed in bulk during Monday morning triage. This isn't security operations—it's alert manufacturing. The team I'm going to describe did something radical: they deleted 90% of their detection rules and started over. Six months later, they were catching more threats with less noise, retaining analysts who previously burned out, and spending 60% less time in the SIEM. Here's exactly what they did—and what you can do on Monday.
The Problem Is the Model, Not the Volume
Before I describe the solution, understand the diagnosis. Most SOCs operate on a detection-maximization model: generate as many alerts as possible, prioritize by severity, investigate the top of the stack. This model assumes that more detection equals more security. It doesn't. More detection equals more noise, analyst fatigue, and missed threats hiding in the volume.
The team I studied—mid-size financial services, 12-person SOC—was typical. They had 400+ detection rules across their SIEM, EDR, and cloud security tools. Daily alert volume: 3,800-4,200. True positive rate: 8%. Analyst turnover: 40% annually. Mean time to detect actual incidents: 187 days (they measured this retrospectively by analyzing breaches that bypassed their alerts entirely).
Their breakthrough wasn't buying better tools. It was recognizing that their detection strategy was producing haystacks and calling it security. They needed to flip the model: minimize noise to maximize attention on what matters.
Phase 1: The Alert Audit (Week 1)
On Monday morning, their SOC manager pulled the last 30 days of alert data and asked three questions about every detection rule:
Question 1: Did this rule ever result in a confirmed incident? Not "was it investigated"—did an alert from this rule actually detect a real security event? For 60% of their rules, the answer was no. These rules had generated thousands of alerts over months without once identifying a genuine threat.
Question 2: What's the false positive rate? Rules with >95% false positives got flagged immediately. These weren't "noisy but valuable"—they were noise machines consuming analyst cycles for no security benefit.
Question 3: What would we miss if this rule disappeared? This was the critical question. It forced honest assessment of actual coverage gaps versus theoretical detection capability.
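The first two questions can be answered in an afternoon against an alert export. Here's a minimal sketch in Python, assuming a CSV with one row per alert and hypothetical `rule_id` and `disposition` columns; rename them to match whatever your SIEM export actually produces.

```python
# audit_rules.py - a rough sketch of questions 1 and 2 of the audit.
# Assumes a 30-day export with one row per alert and (hypothetical) columns:
#   rule_id, disposition  where disposition is something like
#   "true_positive", "false_positive", "benign", or "not_investigated".
import csv
from collections import defaultdict

alerts_per_rule = defaultdict(int)
true_positives_per_rule = defaultdict(int)

with open("alerts_last_30_days.csv", newline="") as f:
    for row in csv.DictReader(f):
        rule = row["rule_id"]
        alerts_per_rule[rule] += 1
        if row["disposition"] == "true_positive":
            true_positives_per_rule[rule] += 1

for rule, total in sorted(alerts_per_rule.items(), key=lambda kv: -kv[1]):
    tps = true_positives_per_rule[rule]
    fp_rate = 1 - tps / total
    never_caught = tps == 0          # Question 1: never produced a confirmed incident
    noise_machine = fp_rate > 0.95   # Question 2: >95% false positives
    flag = "CANDIDATE FOR DELETION" if (never_caught or noise_machine) else ""
    print(f"{rule}\t{total} alerts\t{tps} true positives\t{fp_rate:.0%} FP\t{flag}")

# Question 3 ("what would we miss?") doesn't script well. That one is the
# honest conversation about coverage gaps, rule by rule.
```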
The result: 347 detection rules became 38. The deleted rules weren't "disabled pending review"—they were removed entirely from the SIEM. No safety net. No gradual phase-out. Deleted.
Alert volume dropped from 4,000 daily to 280. Analysts panicked initially. "We're blind now," one said. They weren't. They were finally able to see.
Phase 2: The Metric Shift (Weeks 2-4)
With alert volume manageable, they changed what they measured. Out: alerts generated, mean time to respond, cases opened. In: mean time to anticipate, threat hypotheses validated, coverage gaps closed.
Mean time to anticipate (MTTA): How long between a new threat emerging and your team having detection for it? Most SOCs don't measure this—they react to threats after they're seen in the wild. This team started tracking how quickly they could anticipate attack patterns and build coverage before seeing them in their environment.
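Tracking MTTA needs nothing more than two dates per threat: when it became known and when detection coverage for it shipped. A minimal sketch, with placeholder threat names and dates rather than the team's real data:

```python
# mtta.py - mean time to anticipate, as this team defined it: days between
# a threat or technique becoming known and detection coverage existing for it.
# All entries below are illustrative placeholders.
from datetime import date

tracked_threats = [
    # (label, date the threat emerged/was published, date detection was deployed)
    ("example-phishing-kit",        date(2024, 3, 1),  date(2024, 3, 9)),
    ("example-cloud-token-abuse",   date(2024, 3, 15), date(2024, 4, 2)),
    ("example-lolbin-technique",    date(2024, 4, 10), date(2024, 4, 13)),
]

gaps = [(deployed - emerged).days for _, emerged, deployed in tracked_threats]
print(f"Mean time to anticipate: {sum(gaps) / len(gaps):.1f} days "
      f"across {len(gaps)} tracked threats")
```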
Threat hypotheses validated: Instead of reactive alert investigation, analysts spent time forming and testing hypotheses: "We believe attackers might use X technique against our Y assets. Let's validate whether we'd detect it." Validated hypotheses became new, high-fidelity detection rules. Invalidated hypotheses revealed coverage gaps they actively closed.
Coverage gaps closed: Measured monthly. How many MITRE ATT&CK techniques could they detect with high confidence? They started at 23% coverage. Six months later: 67%. Not by adding 500 low-quality rules—by adding 40 high-fidelity ones and removing 300 noise generators.
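The monthly coverage number itself is just two curated lists: the techniques you consider relevant to your environment, and the subset you can detect with validated confidence. A sketch, using a handful of real ATT&CK technique IDs purely as illustration:

```python
# coverage.py - monthly MITRE ATT&CK coverage check.
# Both sets are hand-curated; the technique IDs shown are real but the
# selection is illustrative, not a recommendation.
relevant_techniques = {
    "T1566",  # Phishing
    "T1078",  # Valid Accounts
    "T1059",  # Command and Scripting Interpreter
    "T1055",  # Process Injection
    "T1021",  # Remote Services
}
high_confidence_detection = {"T1566", "T1078"}  # validated, not assumed

covered = relevant_techniques & high_confidence_detection
gaps = relevant_techniques - high_confidence_detection

print(f"Coverage: {len(covered) / len(relevant_techniques):.0%}")
print("Gaps to feed back into threat research:", ", ".join(sorted(gaps)))
```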
Phase 3: The Analyst Transformation (Months 2-6)
With 90% fewer alerts, analysts had time for different work. The team restructured into three functions:
Threat researchers (40% of team): No longer chasing SIEM alerts. Instead, studying threat actor TTPs, analyzing attack trends, and building detection for techniques before they appear in telemetry. These were previously the analysts most likely to burn out—high performers overwhelmed by alert volume.
Detection engineers (30% of team): Focused on rule development, testing, and optimization. Their job: make the 38 rules as precise as possible, adding new coverage only when threat research identified genuine gaps.
Incident responders (30% of team): Still responding to alerts, but now handling 280 daily instead of 4,000. Mean time to respond: 8 minutes instead of 4 hours. Alert quality: 35% true positive rate instead of 8%. When they investigated, they found things.
The Tooling Transition (No New Vendors)
Here's what surprised me: they didn't buy new tools. They used what they had differently.
SIEM: Still their primary platform, but configured for 38 rules instead of 347. Query performance improved dramatically. Storage costs dropped 40% because they weren't retaining noise telemetry.
EDR: Shifted from "alert on everything suspicious" to "alert on high-confidence indicators only." Tuned out behavior-based heuristics that generated thousands of false positives. Focused on known-bad indicators and anomalous patterns with high correlation to actual threats.
Threat intel platform: Previously used for alert enrichment ("this IP is suspicious"). Now used for proactive hunting—feeding IOCs into threat research, validating coverage against MITRE ATT&CK, identifying techniques they couldn't detect.
The only "new" tool: A simple wiki where threat researchers documented attack techniques, detection gaps, and coverage plans. Low-tech, high-value.
The Monday Checklist
Here's what you can actually do this week:
Monday: Export 30 days of alert data. Count alerts per rule. Identify your top 10 noisiest rules. For each, answer: did this ever catch something real?
Tuesday: Disable (don't delete yet) the 5 noisiest rules that have never caught a real incident. Monitor for 48 hours. Notice what happens: nothing. This builds confidence for bigger cuts.
Wednesday: Review your "critical" and "high" severity alerts from the last 30 days. How many were actually critical? If <50%, your severity ratings are broken. Fix them (the sketch after this list shows one way to run the numbers).
Thursday: Pick one MITRE ATT&CK technique your team believes is relevant to your environment. Test: if someone used this technique today, would you detect it? Be honest.
Friday: Calculate your actual true positive rate. Most teams never do this math. It's humbling. It's also the baseline for improvement.
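For the Wednesday and Friday math, a minimal sketch against the same kind of export as Monday's step, with hypothetical `severity` and `disposition` columns (treating "was it a true positive" as the proxy for "was it actually critical"):

```python
# tp_rate.py - overall true positive rate, plus how many critical/high
# alerts were actually real. Column names are placeholders; adjust to
# whatever your export calls them.
import csv

total = true_positives = criticals = criticals_real = 0
with open("alerts_last_30_days.csv", newline="") as f:
    for row in csv.DictReader(f):
        total += 1
        is_tp = row["disposition"] == "true_positive"
        true_positives += is_tp
        if row["severity"].lower() in ("critical", "high"):
            criticals += 1
            criticals_real += is_tp

print(f"True positive rate: {true_positives / total:.1%} ({true_positives}/{total})")
if criticals:
    print(f"Critical/high alerts that were actually real: {criticals_real / criticals:.0%}")
```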
The Maturity Model
They developed a simple framework for measuring progress:
Level 1 (Reactive): Alert maximization. High volume, low quality. Analysts chase noise. Burnout is common.
Level 2 (Triage): Alert reduction. Cut noisy rules. Focus on high-confidence detection. Volume drops, quality improves.
Level 3 (Proactive): Threat anticipation. Analysts research threats before they arrive. Detection engineering builds precise coverage. Incident response handles fewer, higher-quality alerts.
Level 4 (Predictive): Continuous validation. Regular purple teaming, attack simulations, coverage measurement. Security posture is data-driven, not fear-driven.
Most SOCs are Level 1, think they're Level 2, and want to be Level 4. You can't skip Level 2. Alert reduction is a prerequisite for everything else.
The Results
Six months after deleting 90% of their alerts:
- Alert volume: 4,000/day → 280/day
- True positive rate: 8% → 35%
- Analyst turnover: 40% → 12%
- Mean time to detect (actual incidents): 187 days → 11 days
- MITRE ATT&CK coverage: 23% → 67%
- Incidents detected proactively: 0 → 4 (previously unknown threats identified through threat research)
The CFO noticed too: SIEM licensing dropped 40% (less data ingestion), analyst overtime dropped 75%, and they stopped paying incident response retainers for breaches they were catching themselves.
The Hard Truth
This team's success wasn't about technology. It was about courage—the willingness to admit that most of their detection was theater, and the discipline to cut it.
The security industry sells detection quantity as a proxy for security quality. "We have 400 detection rules" sounds better than "we have 38 detection rules"—unless you know that most of those rules never caught anything.
Your SOC doesn't need more alerts. It needs fewer, better alerts and analysts who have time to think. The threats you're missing aren't hiding in the 4,000th alert—they're in the techniques you never learned to detect because you were too busy chasing noise.
Delete the rules. Trust your analysts. Build detection for what matters. The security improvement will surprise you.