Your security team gets alerts about a new vulnerability. You read the threat intel: it's actively exploited, targets your tech stack, no patch available yet. But you have no idea if it's hitting your infrastructure. Your detection rules can't correlate it. Your response playbook doesn't fit. The intel is correct—your workflow isn't.

Most organizations treat threat intelligence as a passive input: something to read, file, and hope someone remembers when an incident happens. The real value is operational. When threat intel flows through detection and response systems, it changes what you can actually defend.

This is the difference between knowing about a threat and knowing how to act on it.

Why Most Intel Doesn't Drive Defense

Threat intelligence exists in abundance. Public vulnerability feeds, paid intel platforms, security vendor reports, industry advisories—a mature organization easily ingests 50+ intel sources. But data volume doesn't equal operational value.

The problem is structural. Intelligence and defense operate as separate systems:

  • Intel teams collect and analyze — but their output is a report, a CSV, or a dashboard.
  • Detection teams write rules — but they don't have systematic access to contextualized threat data when they need it.
  • Response teams handle incidents — but they don't feed back what they learned to prioritize future intel collection.

The workflow breaks because intel and defense speak different languages. A threat report describes TTPs, malware families, and actor motivations. Your SIEM needs IP addresses, domain names, file hashes, and behavioral patterns. Translating between them is manual, slow, and inconsistent.

Intel-driven defense fixes this by making the workflow explicit: collect, normalize, detect, respond, and loop back. Each step has a concrete output that feeds the next.

1. Collection: Decide What You Actually Need

Collection seems obvious—just subscribe to everything relevant. But without a purpose, you collect noise. The input to collection should be your threat model: what threats could materially impact your business, and which are you not yet equipped to defend against?

For a SaaS company running on AWS, that might mean:

  • Supply chain attacks on dependencies you use
  • Cloud infrastructure misconfigurations targeting your platform
  • Credential stuffing and account takeover campaigns
  • Third-party provider breaches affecting your infrastructure

Each of these has specific intel sources: GitHub security advisories and npm audit logs for dependencies, cloud security research for infrastructure, breach databases and credential leaks for account takeover, and vendor security notifications for third-party risk.

The wrong approach is subscribing to general threat feeds—vulnerability databases, darknet forums, APT tracking—hoping something relevant appears. The right approach is asking: what decisions do I need intel to inform, and what data makes that decision possible?

Once you know that, collection becomes selective. You're not collecting everything; you're collecting for a purpose.

2. Normalization: Make Intel Detectable

Raw threat intelligence is heterogeneous. A vulnerability report might describe a flaw, its CVSS score, affected vendors, and exploitation in the wild. A malware analysis might catalog C2 infrastructure, TTPs, and file hashes. A credential leak might be a database of email/password pairs. Each format requires different handling.

Normalization translates all of this into a common schema that your detection and response systems can understand.

Indicators of Compromise (IOCs): These are the atomic units—concrete observables that indicate a threat.

  • IP addresses (attacker infrastructure)
  • Domain names (C2 servers, phishing domains)
  • File hashes (malware, backdoors)
  • Email addresses (campaign senders, compromise indicators)
  • URLs (malicious pages, exploit kits)
  • SSL certificate fingerprints

When you normalize threat data into IOCs, you can feed them directly into detection systems. A SIEM can alert when an IP address appears in your logs. An EDR can alert when a file hash matches. A firewall or proxy can block a domain outright.

Tactics, Techniques, and Procedures (TTPs): These describe how attackers operate—not specific artifacts, but behavioral patterns. MITRE ATT&CK is the standard taxonomy.

A TTP might be "Credential Dumping" (T1003) or "Masquerading" (T1036). TTPs map to detection rules: you can't search your logs for "Defense Evasion," but you can search for process relationships that indicate legitimate tools being misused (for example, svchost.exe launched by something other than services.exe).

Metadata: Where the intelligence came from, how confident you are in it, whether it's still relevant, and what action you should take.

  • Confidence: high, medium, low
  • Severity: critical, high, medium, low (based on your threat model, not the vendor's)
  • Source: which intel feed provided this
  • Age: when was this first observed, last confirmed
  • Action: block, alert, monitor, investigate

Organizations that normalize intelligence into this structure can integrate it into tooling. Teams without it spend time manually transcribing data between systems and miss the window when intelligence is actionable.
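To make the schema concrete, here is one way to represent it in code. This is a minimal sketch, not a standard: the field names are illustrative, and in practice you would align them with a format like STIX 2.1 or your threat intelligence platform's native schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Literal

@dataclass
class Indicator:
    """A normalized threat indicator with scoring metadata.

    Field names are illustrative; align them with whatever schema
    your SIEM or intel platform actually expects.
    """
    value: str          # e.g. "203.0.113.7" or a SHA-256 hash
    ioc_type: Literal["ip", "domain", "hash", "email", "url", "cert_fingerprint"]
    confidence: Literal["high", "medium", "low"]
    severity: Literal["critical", "high", "medium", "low"]  # scored against YOUR threat model
    source: str         # which intel feed provided this
    first_seen: datetime
    last_confirmed: datetime
    action: Literal["block", "alert", "monitor", "investigate"]

# A hypothetical C2 server reported by a paid feed:
c2_server = Indicator(
    value="203.0.113.7",
    ioc_type="ip",
    confidence="high",
    severity="critical",
    source="vendor-feed-a",
    first_seen=datetime(2024, 5, 1),
    last_confirmed=datetime(2024, 5, 20),
    action="block",
)
```

Once every feed is parsed into this one shape, the rest of the pipeline (deduplication, expiry, export to detection tools) only has to handle a single format.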

3. Detection: Translate Intelligence Into Rules

Once intelligence is normalized, it becomes raw material for detection rules. This is where intel-driven defense becomes operational.

IOC-Based Detection: The simplest translation is direct. You have a list of malicious IPs. You write a rule: "alert if traffic to these IPs appears in network logs." You have file hashes of known malware. Rule: "alert if these hashes are executed." This is low-effort, high-signal detection—but it's reactive. You're looking for known threats, not potential ones.

Deploy IOC-based detection in your SIEM, EDR, proxy, and firewall. The implementation differs by tool, but the logic is identical: check observed artifacts against the intelligence list.
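The core of IOC matching is a set lookup. A minimal sketch, assuming JSON-lines network logs with a dest_ip field; the field name and log shape are assumptions to adapt to your SIEM:

```python
import json

# Hypothetical: loaded from your normalized intel store, filtered to
# indicators whose action is "alert" or "block" and that haven't expired.
MALICIOUS_IPS = {"203.0.113.7", "198.51.100.23"}

def scan_network_logs(log_lines):
    """Yield log events whose destination IP matches a known-bad IP."""
    for line in log_lines:
        event = json.loads(line)
        if event.get("dest_ip") in MALICIOUS_IPS:
            yield event

sample = [
    '{"dest_ip": "203.0.113.7", "src_host": "web-01"}',
    '{"dest_ip": "192.0.2.10", "src_host": "web-02"}',
]
for hit in scan_network_logs(sample):
    print("ALERT:", hit)
```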

TTP-Based Detection: TTPs are behavioral patterns. A TTP-based rule looks for specific sequences of activity that indicate an attack, even if the specific tools or artifacts are different.

Example: An intelligence report describes a campaign using legitimate tools for command execution (living off the land). The relevant techniques are T1059.001 (Command and Scripting Interpreter: PowerShell) and T1036 (Masquerading). The detection rule might be: "alert when PowerShell executes a script from a web download, followed by process creation with an unusual parent process, followed by an outbound connection to a non-whitelisted domain on port 443."

TTP-based detection is more complex to implement and requires tuning to avoid false positives. But it's forward-looking: new tools and malware variants that use the same techniques will trigger the same rule.
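One building block of such a rule is a parent-process check. A minimal sketch, assuming process-creation events of the kind Sysmon or EDR telemetry provides; the expected-parent table is illustrative and should be baselined from your own environment:

```python
# Processes and the parents they are normally launched by.
# Illustrative values; derive the real table from your own baseline.
EXPECTED_PARENTS = {
    "svchost.exe": {"services.exe"},
    "powershell.exe": {"explorer.exe", "cmd.exe"},
}

def unusual_parent(event):
    """Flag a process whose parent falls outside its expected set."""
    expected = EXPECTED_PARENTS.get(event["process"])
    return expected is not None and event["parent"] not in expected

events = [
    {"process": "powershell.exe", "parent": "winword.exe"},  # Office macro spawning a shell
    {"process": "svchost.exe", "parent": "services.exe"},    # normal
]
for e in events:
    if unusual_parent(e):
        print("ALERT: unexpected parent:", e)
```

A production rule would chain checks like this one with the download and outbound-connection conditions; the pattern, not the individual artifact, is what the rule encodes.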

Scenario-Based Detection: Combine multiple intelligence signals (TTPs, IOCs, contextual data) into detection scenarios that represent realistic attack chains.

Example: Threat intel indicates a credential-stuffing campaign targeting your sector. The IOCs are attacker-controlled IP addresses. The TTPs are brute force and credential stuffing (T1110 and its sub-techniques). The scenario is: "high volume of failed login attempts from the same IP to multiple accounts, followed by successful login from that IP." Your detection rule alerts when this sequence appears, not just when you see any failed login.

This reduces false positives (many failed logins are legitimate) while improving signal (the specific pattern matches the threat intelligence).
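A minimal sketch of that correlation, assuming time-ordered auth events with ip, user, and success fields; the field names and the threshold are assumptions to tune for your environment:

```python
from collections import defaultdict

FAILED_THRESHOLD = 20  # tune against your baseline of normal failed logins

def detect_credential_stuffing(auth_events):
    """Alert when one IP fails against many accounts, then succeeds."""
    failures = defaultdict(set)  # ip -> accounts it has failed against
    for event in auth_events:
        ip = event["ip"]
        if not event["success"]:
            failures[ip].add(event["user"])
        elif len(failures[ip]) >= FAILED_THRESHOLD:
            yield {"ip": ip,
                   "accounts_tried": len(failures[ip]),
                   "compromised_user": event["user"]}

events = [
    *({"ip": "198.51.100.23", "user": f"user{i}", "success": False} for i in range(25)),
    {"ip": "198.51.100.23", "user": "user3", "success": True},
]
for alert in detect_credential_stuffing(events):
    print("ALERT:", alert)
```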

4. Response: Feedback Into Better Defense

Detection and response are tightly coupled. A good detection rule without clear response procedures is noise. A response procedure without clear detection is reactive guesswork.

When an alert fires, your response playbook should specify (see the sketch after this list):

  • Severity and urgency: Does this require immediate action, or can it be queued for triage?
  • Investigation steps: What telemetry do you need to assess if this is real?
  • Containment actions: If confirmed, what immediate steps reduce harm? (block the IP, isolate the host, reset credentials, etc.)
  • Root cause analysis: What enabled this attack, and what do you change to prevent it next time?
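Playbooks work best as structured data in version control, where changes are reviewed like code. A minimal sketch; the keys and steps are illustrative:

```python
# One playbook entry for the ransomware-hash detection rule.
# Keys and steps are illustrative; encode your own procedures.
RANSOMWARE_HASH_MATCH = {
    "severity": "critical",
    "urgency": "immediate",  # page on-call rather than queueing for triage
    "investigate": [
        "Pull the EDR process tree for the matching host",
        "Confirm whether the file executed or was only written to disk",
    ],
    "contain": [
        "Isolate the host via EDR",
        "Block the hash across the fleet",
        "Reset credentials for sessions active on the host",
    ],
    "root_cause": "Identify the delivery vector (email, download, lateral movement)",
}
```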

The feedback loop is critical: each incident teaches you something about your threat model, your detection gaps, or your response procedures.

Example workflow:

  1. Detection rule fires: malware hash matches known ransomware.
  2. Response team investigates: it was a false positive (the file was in a quarantine folder, not executing).
  3. Feedback: the detection rule needs context filtering. Update the rule to alert only when the file is in executable directories, not in quarantine (sketched after this list).
  4. Next time the same malware hash appears, the rule won't fire on quarantined files.
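The step-3 rule update might look like the following. A minimal sketch; the hash set and quarantine paths are placeholders:

```python
# Before: alert on any file whose hash matches known ransomware.
# After the false positive: ignore files sitting in quarantine
# directories, which AV products write there by design.
KNOWN_BAD_HASHES = {"0" * 64}  # placeholder; load from your intel store
QUARANTINE_PREFIXES = ("/var/quarantine/", "C:\\Quarantine\\")  # placeholder paths

def should_alert(file_event):
    """Alert only on hash matches outside quarantine paths."""
    if file_event["sha256"] not in KNOWN_BAD_HASHES:
        return False
    return not file_event["path"].startswith(QUARANTINE_PREFIXES)
```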

Without feedback loops, your detection program becomes increasingly noisy and operationally burdensome.

5. Closing the Loop: From Response Back to Collection

The most mature organizations let response outcomes drive collection. When an incident occurs, you learn where your detection or intelligence fell short. Those gaps inform what you collect next.

Example: You experience an attack using a new malware variant. Your detection rules don't catch it. Your threat intelligence feed didn't warn you. Post-incident, you have three actions:

  • Update detection rules — Write rules for the TTPs you observed, the malware hash, the C2 infrastructure.
  • Update threat collection — Subscribe to feeds that would have warned you about this malware family, or this threat actor.
  • Update your threat model — Adjust your assumptions about which threats are likely to hit you. This threat was real; prioritize detection for similar attacks.

This loop—incident → detection update → intelligence update → threat model refinement—is how defense matures over time.

Practical Starting Points

If you don't have threat intelligence integrated at all: Start with IOC feeds. Services like Abuse.ch (malware hashes), Team Cymru (malicious IPs), and Shadowserver provide free or low-cost IOC data. Get one feed into your SIEM and write basic detection rules. Don't optimize yet—just establish the pipeline.
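A minimal sketch of that pipeline's first stage, assuming a feed that exports one indicator per line over HTTPS; the URL is a placeholder, so check each feed's documentation for its actual export format:

```python
import urllib.request

FEED_URL = "https://feeds.example.com/bad-ips.txt"  # placeholder URL

def fetch_indicators(url=FEED_URL):
    """Fetch a newline-delimited indicator list, skipping comment lines.

    Many free feeds publish plaintext exports in roughly this shape,
    but formats vary; verify against the feed's own docs.
    """
    with urllib.request.urlopen(url) as resp:
        lines = resp.read().decode("utf-8").splitlines()
    return {ln.strip() for ln in lines if ln.strip() and not ln.startswith("#")}
```

From there, push the resulting set into your SIEM as a lookup table and write the first rule against it.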

If you have feeds but no response workflows: Document what happens when a detection fires. Who is notified? What do they check? What are the containment actions? Make response procedures explicit, even if they're simple. This forces you to prioritize which alerts matter.

If you have detection but it's noisy: Build a feedback loop from response back to detection tuning. After each alert review, ask: was this useful? If not, why? Use the answer to refine the rule. Signal quality improves faster with feedback than with more intelligence data.

If you want to scale beyond IOCs: Map your threat model to MITRE ATT&CK. For each threat you care about, identify which TTPs are relevant. Write behavioral rules for those TTPs. This takes more effort upfront but generalizes better as attackers evolve.
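A minimal sketch of such a mapping; the technique IDs below are real ATT&CK identifiers, but the threat names and rule references are illustrative:

```python
# Threat model -> ATT&CK techniques -> detection rules, kept alongside
# the rules themselves so coverage gaps stay visible.
THREAT_MODEL = {
    "credential stuffing": {
        "techniques": ["T1110.004"],           # Brute Force: Credential Stuffing
        "rules": ["auth-fail-then-success"],
    },
    "living off the land": {
        "techniques": ["T1036", "T1059.001"],  # Masquerading; PowerShell
        "rules": ["unexpected-parent-process"],
    },
    "supply chain compromise": {
        "techniques": ["T1195.002"],           # Compromise Software Supply Chain
        "rules": [],                           # gap: no behavioral rule yet
    },
}

for threat, coverage in THREAT_MODEL.items():
    if not coverage["rules"]:
        print(f"Coverage gap: no detection for '{threat}'")
```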

The Core Principle

Intel-driven defense isn't about having the most threat intelligence. It's about making intelligence systematic and actionable. The workflow is: collect deliberately, normalize for tools, translate into detection, respond with context, and loop back to improve collection.

Organizations that follow this workflow stop treating threat intelligence as optional background reading. It becomes the foundation of measurable defense.