Breach & Attack Simulation (BAS)
What It Does
Breach & Attack Simulation (BAS) validates whether your defensive tools actually detect attack techniques. It closes the loop between offense and defense by testing your EDR, SIEM, and NDR against real attack patterns and telling you exactly what was detected, what was missed, and what to fix.
Guard executes attack techniques from the MITRE ATT&CK framework against your environment, then checks your security stack for matching alerts. The result is a concrete, per-technique detection scorecard—not a theoretical assessment.
Feature gate: BAS is gated behind the enable_breach_and_attack feature flag. Contact your account team to enable it.
Why It Matters
Having an EDR deployed doesn’t mean it’s detecting attacks. Detection rules drift, get disabled, or simply don’t cover new techniques. Security teams often discover these gaps only during a real incident—the worst possible time.
Guard tells you exactly which compromise paths your defenses miss, continuously, not just during a periodic pentest.
“You’d never run your EDR for two weeks a year. Why are you still running offensive security that way?”
BAS transforms offensive security from a point-in-time exercise into a continuous validation loop. Every time you tune a detection rule or deploy a new control, you can re-run the simulation and measure improvement.
Detection Outcomes
Every simulated attack technique receives one of six detection outcomes, scored on a 0–1 scale, ranging from Prevented at the best end to Undetected at the worst.
EDR/SIEM Integrations
Guard integrates with your existing security stack to pull alerts and match them against simulated attacks. Supported platforms include Microsoft Defender, CrowdStrike, SentinelOne, and others; see the EDR Integration Guides for setup details.
How Detection Matching Works
Guard uses a hybrid approach that combines deterministic rules with AI-assisted classification:
Rule-Based Matching
Hostname filter: Case-insensitive substring match against the target host in alerts.
File hash filter: Matches MD5, SHA-1, or SHA-256 hashes from alert artifacts against simulation payloads.
MITRE technique ID filter: Matches technique IDs (e.g., T1059.001) with support for parent and sub-technique matching—an alert tagged T1059 will match a simulation of T1059.001.
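The three deterministic filters can be sketched as follows. This is an illustrative model only: the Alert and Simulation shapes are assumptions for the example, not Guard's actual data structures.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    hostname: str
    file_hashes: set[str]    # MD5 / SHA-1 / SHA-256 hex digests from alert artifacts
    technique_ids: set[str]  # e.g. {"T1059"}

@dataclass
class Simulation:
    target_host: str
    payload_hashes: set[str]
    technique_id: str        # e.g. "T1059.001"

def hostname_match(alert: Alert, sim: Simulation) -> bool:
    # Case-insensitive substring match against the target host.
    return sim.target_host.lower() in alert.hostname.lower()

def hash_match(alert: Alert, sim: Simulation) -> bool:
    # Any shared digest between alert artifacts and simulation payloads counts.
    return bool({h.lower() for h in alert.file_hashes}
                & {h.lower() for h in sim.payload_hashes})

def technique_match(alert: Alert, sim: Simulation) -> bool:
    # Exact match, or parent/sub-technique match in either direction:
    # an alert tagged T1059 matches a simulation of T1059.001.
    sim_parent = sim.technique_id.split(".")[0]
    for tid in alert.technique_ids:
        if tid == sim.technique_id:                 # exact
            return True
        if tid == sim_parent:                       # alert carries the parent technique
            return True
        if tid.split(".")[0] == sim.technique_id:   # simulation is the parent of the alert's sub-technique
            return True
    return False
```

An alert passes the rule-based stage when the relevant filters agree, e.g. `hostname_match(alert, sim) and technique_match(alert, sim)`.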
LLM-Assisted Classification
When an alert doesn’t match any deterministic rule, Guard falls back to AI classification:
Model: Llama 4 Scout, running at temperature 0.0 for deterministic results.
Output: A confidence score (0–100) plus reasoning text explaining the match rationale.
Rate limit: 100 LLM classification calls per cron run to control cost.
Escalation: Alerts that the model cannot confidently classify are flagged for manual review.
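Putting those pieces together, the fallback stage might look like the sketch below. The `llm_classify` callable, the 70-point confidence cutoff, and the function names are all assumptions for illustration; only the 100-call budget and the 0–100 score come from the behavior described above.

```python
LLM_CALL_BUDGET = 100       # max classification calls per cron run (cost control)
CONFIDENCE_THRESHOLD = 70   # assumed cutoff on the 0-100 confidence score

def classify_unmatched(alerts, sim, llm_classify):
    """Classify alerts that matched no deterministic rule.

    llm_classify(alert, sim) -> (confidence 0-100, reasoning text);
    the model runs at temperature 0.0, so repeated runs are deterministic.
    """
    calls = 0
    matched, manual_review = [], []
    for alert in alerts:
        if calls >= LLM_CALL_BUDGET:
            manual_review.append(alert)   # budget exhausted: defer to a human
            continue
        calls += 1
        confidence, reasoning = llm_classify(alert, sim)
        if confidence >= CONFIDENCE_THRESHOLD:
            matched.append((alert, confidence, reasoning))
        else:
            manual_review.append(alert)   # low confidence: escalate for review
    return matched, manual_review
```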
MITRE ATT&CK Coverage
Guard provides comprehensive mapping to the MITRE ATT&CK and D3FEND frameworks:
198 ATT&CK techniques + 400+ sub-techniques (aligned to v14.1) available for simulation.
92 MITRE D3FEND defensive controls mapped across 334 total technique-to-control relationships.
Gap analysis: Identifies techniques with zero detection coverage in your environment.
Remediation recommendations: For each gap, suggests specific D3FEND controls to deploy (e.g., “Deploy File Content Rules to detect T1027 Obfuscated Files”).
Prevalence weighting: Techniques are weighted by real-world threat frequency using data from the Picus Red Report 2024, so you prioritize the gaps that matter most.
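Conceptually, gap analysis with prevalence weighting reduces to ranking undetected techniques by how often they appear in real-world incidents. A minimal sketch, assuming prevalence weights are available as a simple mapping:

```python
def prioritize_gaps(simulated, detected, prevalence):
    """Rank techniques with zero detection coverage by real-world prevalence.

    simulated:  technique IDs that were simulated
    detected:   technique IDs that produced a matched alert
    prevalence: technique ID -> weight (higher = more common in the wild)
    """
    gaps = [t for t in simulated if t not in detected]
    # Highest-prevalence gaps first, so remediation targets the gaps that matter most.
    return sorted(gaps, key=lambda t: prevalence.get(t, 0.0), reverse=True)
```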
Security Posture Score
Guard computes a composite Security Posture Score that combines offensive and defensive coverage into a single metric:
Formula
(Attack Coverage × 0.4) + (Defense Coverage × 0.35) + (Effective Defense × 0.25)
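The formula is a straightforward weighted sum. Assuming each component is expressed as a 0–1 coverage ratio, it can be computed as:

```python
def security_posture_score(attack_cov, defense_cov, effective_def):
    """Composite Security Posture Score.

    Each input is assumed to be a 0-1 coverage ratio; the weights
    (0.40 + 0.35 + 0.25 = 1.00) keep the result on the same 0-1 scale.
    """
    return 0.40 * attack_cov + 0.35 * defense_cov + 0.25 * effective_def
```

For example, 50% attack coverage, 40% defense coverage, and 20% effective defense yields a score of 0.39.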
Rating Scale
BAS Workflow
A typical BAS monitoring session follows these steps:
Create a BAS session — Define scope with filters: target hostnames, file hashes, and/or specific MITRE techniques to simulate.
Execute attack simulation — Guard schedules and runs the selected attack techniques against your environment.
Poll for alerts — Guard queries your connected EDR/SIEM/NDR platforms for alerts generated during the simulation window.
Detection matching — Alerts are matched to simulated techniques using rule-based filters first, then LLM-assisted classification as a fallback.
Score results — Each technique receives a detection outcome (Prevented through Undetected) with latency metrics showing how long detection took.
Gap analysis dashboard — Review what was detected, what was missed, and which MITRE techniques have no coverage.
Remediate — Deploy recommended D3FEND controls, tune detection rules, or adjust alert priorities based on the results.
Re-run to validate — Execute the same simulation again to confirm that your changes improved detection coverage.
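The automated portion of the loop (steps 2 through 6) can be sketched as a single driver function. Every callable here is a hypothetical stand-in for the corresponding step; none of these names come from Guard's API.

```python
def run_bas_cycle(session_cfg, run_simulation, poll_alerts,
                  match_alerts, score, report):
    """One iteration of the BAS validation loop (steps 2-6 above).

    Remediation (step 7) happens outside this function; re-running it
    afterward is step 8, confirming that changes improved coverage.
    """
    results = run_simulation(session_cfg)            # execute attack simulation
    alerts = poll_alerts(session_cfg["window"])      # query EDR/SIEM/NDR for alerts
    matches = match_alerts(alerts, results)          # rules first, LLM fallback
    scorecard = score(matches, results)              # per-technique detection outcome
    report(scorecard)                                # feed the gap analysis dashboard
    return scorecard
```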
What’s Next
EDR Integration Guides — Step-by-step setup for connecting Microsoft Defender, CrowdStrike, SentinelOne, and other platforms.
MITRE ATT&CK Visualization — Explore the ATT&CK matrix heatmap showing your detection coverage across all techniques.
AI Enablement — Enable AI-powered features for automated validation, anomaly detection, and continuous posture monitoring.