Abstract
The federal cybersecurity mandate has shifted materially since 2021. Executive Order 14028, M-22-09, and the NIST and CISA Zero Trust guidance that frames them have moved the government from a compliance-checkbox posture toward something closer to genuine accountability for security outcomes. Agencies are now expected to detect anomalous behavior continuously, log all access and activity against defined standards, maintain verifiable audit trails for AI-assisted decisions, and demonstrate measurable improvement in detection and response timelines.
Causal security intelligence, as a technical discipline, satisfies each of these requirements in ways that conventional SIEM, XDR, and behavioral analytics do not. This paper examines the specific federal mandates, maps them to causal intelligence capabilities, and describes the procurement and authorization path for agencies considering this class of technology.
1. The Federal Mandate Has Changed
Cybersecurity has always been a federal priority, but the character of that priority changed fundamentally with Executive Order 14028 in May 2021. The EO was a response to a specific set of failures: SolarWinds, Microsoft Exchange, Colonial Pipeline. What unified those incidents was not a failure of perimeter controls or endpoint protection. It was a failure of detection: attackers were present in federal networks for months, generating telemetry, and no system assembled that telemetry into a coherent picture of what was happening.
The EO's response to this was to mandate outcomes, not just controls. Agencies were directed to adopt Zero Trust Architecture, implement endpoint detection and response capabilities across their environments, centralize log management with defined retention standards, and establish the ability to detect anomalous behavior at both the endpoint and network level. The requirement was not to purchase specific products. It was to be able to see attacks in progress and demonstrate that capability through audit.
M-22-09, the Office of Management and Budget's Zero Trust strategy memorandum published in January 2022, added specificity. Federal civilian agencies were given concrete targets: specific identity maturity levels, device visibility requirements, network segmentation milestones, and application security standards. The targets were graded against a maturity model with defined timelines. Agencies that had previously satisfied security requirements through policy documentation now had to satisfy them through demonstrated capability.
The practical consequence of this shift is that the question agencies face in procurement has changed. The old question was: does this product satisfy control X? The new question is: does this product contribute to the demonstrated outcomes the EO and M-22-09 require? Those are different conversations, and they favor different technology architectures.
2. EO 14028: What the Specific Requirements Mean
The EO contains several requirements that map directly to causal intelligence capabilities.
Section 7, Improving Detection of Cybersecurity Vulnerabilities and Incidents on Federal Government Networks, is the most operationally significant for security technology. It directs agencies to deploy endpoint detection and response tools and to deploy them in a manner that enables continuous threat detection, threat hunting, and analysis and remediation. The standard is continuous, not periodic. Analysis is explicitly included, not just detection.
Conventional EDR tools detect. They generate alerts. Analysis of those alerts, including the construction of causal narratives connecting individual endpoint events into attack chains, has traditionally been manual work performed by human analysts. The EO's requirement for continuous analysis creates a technology gap that point-in-time alerting does not fill.
Causal intelligence closes this gap by making the analysis continuous and machine-performed. Every event ingested is evaluated against the causal graph. Chains are constructed incrementally. The analysis is not a periodic batch job performed by an analyst on a queue of alerts. It is a live, continuously maintained model of what is happening across the environment and why.
Section 8, Improving the Federal Government's Investigative and Remediation Capabilities, directs agencies to maintain logs sufficient to enable investigation of incidents, with specific requirements around log retention, centralization, and accessibility. The section also directs agencies to implement event logging that captures sufficient context to support forensic reconstruction of incidents.
This is precisely where the gap between conventional logging and causal logging becomes procurement-relevant. Conventional log management centralizes raw events. Causal logging maintains the inferred relationships between events: the graph structure that explains why event B happened in terms of event A. Forensic reconstruction of an incident from raw logs is a labor-intensive analytical exercise. Forensic reconstruction from a causal graph, with its explicit edge weights and evidence grades, is substantially faster and produces outputs that can be presented to oversight bodies with confidence grades attached.
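A sketch of what causal-graph forensic reconstruction looks like, assuming hypothetical edge records that carry a weight and an evidence grade; the letter grades and field names here are illustrative, not the platform's schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CausalEdge:
    cause: str     # antecedent event id
    effect: str    # consequent event id
    weight: float  # inferred strength of the causal link, 0..1
    grade: str     # evidence grade, e.g. "A" direct, "B" corroborated, "C" inferred

def reconstruct(edges: list[CausalEdge], incident: str) -> list[str]:
    """Walk causes back from an incident event and emit one graded
    narrative line per hop, suitable for an oversight report."""
    by_effect = {e.effect: e for e in edges}
    lines, cur = [], incident
    while cur in by_effect:
        e = by_effect[cur]
        lines.append(f"{e.cause} -> {e.effect} (weight {e.weight:.2f}, grade {e.grade})")
        cur = e.cause
    return list(reversed(lines))
```

Reconstruction from raw logs would require re-deriving each of these hops manually; here they are read directly off the stored graph, confidence grades included.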
3. NIST SP 800-53 Rev 5 and Causal Intelligence Alignment
NIST SP 800-53 Rev 5 is the control catalog that underlies FedRAMP authorization and serves as the baseline for most federal agency security programs. Several control families have direct causal intelligence implications.
IR-4, Incident Handling, requires agencies to implement an incident handling capability that includes preparation, detection, analysis, containment, recovery, and lessons learned. The analysis component is worth noting specifically. NIST defines analysis as the examination of available information and supporting evidence to determine the extent, nature, and cause of an incident. Determining cause, in a large and complex environment, requires causal reasoning. Correlation-based detection tools surface alerts. They do not determine cause. The analysis activity described by IR-4 is what causal intelligence automates.
SI-4, System Monitoring, requires continuous monitoring of the information system to detect attacks, indicators of potential attacks, and anomalous activity. The enhancement SI-4(16), Correlate Monitoring Information, specifically requires that monitoring tools correlate information from multiple sources. Correlation is used broadly in that context, but the intent is clear: monitoring systems should be able to connect events from different sources into coherent pictures of potential adversary activity. A causal graph that connects endpoint events to identity events to network events to cloud API activity directly satisfies this enhancement.
AU-12, Audit Record Generation, and the broader AU control family require comprehensive audit logging with sufficient detail to establish what actions were taken, by whom, when, and from where. The AU family also includes AU-6, Audit Record Review, Analysis, and Reporting, which requires regular review of audit records for inappropriate activity and reporting of findings.
Where causal intelligence adds specific value to AU compliance is in AI-assisted analysis. When AI reasoning sessions are used to analyze causal chains, every session must be logged with sufficient detail to constitute an audit record in its own right. TRA-CE's AI_Run table logs every reasoning session: inputs provided, tool calls made, evidence retrieved, conclusions reached, citations validated or rejected. This creates an audit trail for AI-assisted analysis that satisfies AU requirements and provides accountability for AI-generated findings that most current AI security tools entirely lack.
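The shape of such an audit record can be sketched as follows. The column names are inferred from the fields described above (inputs, tool calls, evidence, conclusions, citations) and are illustrative, not TRA-CE's actual schema; SQLite stands in for the production PostgreSQL layer:

```python
import datetime
import json
import sqlite3

# Illustrative AI_Run-style table: one row per reasoning session.
DDL = """CREATE TABLE IF NOT EXISTS ai_run (
    run_id     TEXT PRIMARY KEY,
    started_at TEXT NOT NULL,
    inputs     TEXT NOT NULL,  -- JSON: prompt and context provided
    tool_calls TEXT NOT NULL,  -- JSON: every tool invocation made
    evidence   TEXT NOT NULL,  -- JSON: evidence items retrieved
    conclusion TEXT NOT NULL,  -- finding reached by the session
    citations  TEXT NOT NULL   -- JSON: citations validated or rejected
)"""

def log_ai_run(conn, run_id, inputs, tool_calls, evidence, conclusion, citations):
    """Persist one reasoning session as an audit record in its own right."""
    conn.execute(DDL)
    conn.execute(
        "INSERT INTO ai_run VALUES (?,?,?,?,?,?,?)",
        (run_id,
         datetime.datetime.now(datetime.timezone.utc).isoformat(),
         json.dumps(inputs),
         json.dumps(tool_calls),
         json.dumps(evidence),
         conclusion,
         json.dumps(citations)))
    conn.commit()
```

Because each row captures the full input, tool, and citation context, an auditor can replay how a finding was reached without access to the live model.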
RA-5, Vulnerability Monitoring and Scanning, together with its enhancements RA-5(2), Update Vulnerabilities to Be Scanned, and RA-5(5), Privileged Access, connects directly to feed intelligence capabilities. Continuous ingestion of the CISA Known Exploited Vulnerabilities catalog, NVD CVE data, and vendor security advisories, matched against the organization's actual asset exposure, satisfies the continuous vulnerability monitoring intent of RA-5 in a way that periodic scanning alone does not.
4. M-22-09 and Zero Trust Maturity
M-22-09 established the Federal Zero Trust Architecture Strategy and set specific maturity targets for federal civilian agencies across five pillars: Identity, Devices, Networks, Applications, and Data.
The Identity pillar is where causal security intelligence connects most directly to federal ZT compliance. M-22-09 requires agencies to reach a specific identity maturity level that includes enterprise-wide MFA, integration of identity signals into access decisions, and continuous monitoring of identity behavior for anomalies.
The phrase continuous monitoring of identity behavior for anomalies appears in the M-22-09 guidance and is subsequently elaborated in CISA's Zero Trust Maturity Model (ZTMM) version 2.0. The ZTMM's Optimal level for identity, which represents the target maturity for agencies seeking to demonstrate genuine Zero Trust implementation, specifically calls for automated continuous monitoring, risk scoring of identities based on behavioral signals, and the use of those scores as inputs to dynamic access policy decisions.
Trust Drift, as implemented in a causal intelligence platform, is the technical mechanism for achieving this. Continuous trust trajectory computation per identity, updated in real time from behavioral signals with causal antecedent awareness, with scores published to an API that ZT enforcement points query at access decision time, satisfies the ZTMM Optimal requirements for identity-level continuous monitoring.
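One simple way to realize a continuously updated trust trajectory is an exponentially weighted moving average over behavioral risk signals. The smoothing factor, the 0-to-1 scale, and the class interface below are illustrative assumptions, not the platform's actual model:

```python
class TrustDrift:
    """Sketch: per-identity trust trajectory maintained as an
    exponentially weighted moving average of behavioral risk signals
    (risk 0 = benign, 1 = maximally anomalous). Alpha is illustrative."""

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha
        self.trust: dict[str, float] = {}

    def observe(self, identity: str, risk_signal: float) -> float:
        prev = self.trust.get(identity, 1.0)  # identities start fully trusted
        # Higher risk signals pull trust down; quiet behavior recovers it.
        updated = (1 - self.alpha) * prev + self.alpha * (1.0 - risk_signal)
        self.trust[identity] = updated
        return updated

    def score(self, identity: str) -> float:
        """What a ZT enforcement point would query at access decision time."""
        return self.trust.get(identity, 1.0)
```

A policy enforcement point would call `score()` (or its HTTP equivalent) at access decision time and compare the result against policy thresholds.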
The gap between ZTMM Advanced and Optimal levels, for identity, is largely about whether behavioral monitoring is static or dynamic. Advanced requires behavioral analysis with defined thresholds. Optimal requires that thresholds adapt based on the identity's causal risk context. An identity appearing in an active credential theft causal chain should have a different behavioral threshold than an identity with a stable trust trajectory. The ZTMM Optimal level requires this. Conventional UEBA provides behavioral analysis against static baselines and does not satisfy the requirement. Causal trust drift modeling does.
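The Advanced-versus-Optimal distinction can be made concrete with a small sketch: a static baseline threshold is tightened when the identity appears in an active causal chain or its trust trajectory has degraded. The tightening factors and the 0.6 trust cutoff below are illustrative, not prescribed by the ZTMM:

```python
def anomaly_threshold(base: float, in_active_chain: bool, trust: float) -> float:
    """Dynamic threshold sketch: static baseline (Advanced level),
    adapted by causal risk context (Optimal level). Factors illustrative."""
    t = base
    if in_active_chain:
        t *= 0.5   # halve the anomaly tolerance during an active chain
    if trust < 0.6:
        t *= 0.75  # a degraded trust trajectory tightens it further
    return t
```

A static-baseline UEBA corresponds to always returning `base`; the causal inputs are what move the behavior from Advanced to Optimal.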
5. CISA Known Exploited Vulnerabilities as a Feed Source
CISA's KEV catalog, established in November 2021 under Binding Operational Directive 22-01, is the federal government's authoritative list of vulnerabilities known to be actively exploited. Agencies are required to remediate KEV entries on defined timelines: under the directive, two weeks for vulnerabilities assigned CVE identifiers in 2021 or later and six months for older CVEs, with the binding due date for each entry published in the catalog itself.
The compliance obligation creates a workflow, and the workflow creates a technology requirement. To demonstrate compliance with BOD 22-01 timelines, agencies need to: continuously ingest KEV updates, match new entries against their asset inventory, determine which assets are affected, prioritize by exploitability and exposure, and track remediation status.
A causal intelligence platform with integrated feed ingestion can automate the first three steps entirely. New KEV entries are ingested on publication, IOCs and CVE references are extracted and normalized, and matching runs against the security event graph to identify which hosts have exhibited indicators associated with the vulnerability or have confirmed exposure. The output is an asset-specific vulnerability posture report with causal context: not just "this host has this vulnerability" but "this host has this vulnerability, and here is the causal chain evidence suggesting it may already have been exploited."
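A minimal sketch of the matching step, using field names from the published KEV JSON feed (`cveID`, `product`, `dueDate`) against a hypothetical inventory keyed by installed product name. Production matching would use CPE identifiers and version ranges; exact name equality here is a deliberate simplification:

```python
def match_kev(kev_entries: list[dict], inventory: dict[str, set[str]]) -> list[dict]:
    """Match KEV catalog entries against an asset inventory.
    inventory maps host -> set of installed product names."""
    findings = []
    for entry in kev_entries:
        product = entry["product"].lower()
        for host, products in inventory.items():
            if product in {p.lower() for p in products}:
                findings.append({
                    "host": host,
                    "cve": entry["cveID"],
                    "due": entry["dueDate"],  # BOD 22-01 remediation deadline
                })
    return findings
```

In the full workflow described above, each finding would then be joined against the causal graph to check whether the affected host already appears in a chain consistent with exploitation.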
This enrichment of vulnerability posture with causal context is the specific capability that justifies causal intelligence in federal vulnerability management workflows, rather than conventional vulnerability management tools. The conventional tool tells you what is exposed. The causal tool tells you what is exposed and what has already been touched.
6. Data Sovereignty and Infrastructure Requirements
Federal security technology procurement must satisfy data sovereignty requirements that commercial enterprise procurement does not face. Specifically, federal data, particularly data at FedRAMP Moderate and High impact levels, must reside in infrastructure with defined sovereignty characteristics: US-based storage, US personnel with appropriate clearance levels for High impact systems, and supply chain requirements that extend to infrastructure providers.
PostgreSQL running on Amazon Web Services GovCloud (US) or Azure Government satisfies the sovereignty requirements for most federal use cases at the FedRAMP Moderate and High impact levels. Both cloud environments are FedRAMP authorized, maintain US-based data sovereignty, and operate under cleared personnel requirements for applicable service tiers.
TRA-CE's architecture, with PostgreSQL as the production database layer, directly maps to this compliance path. The database layer is the only component that holds persistent security event data and audit records. All other components, the causal graph engine, the feed ingestion scheduler, the AI reasoning layer, are stateless compute that can run in FedRAMP-authorized compute environments without data sovereignty complications.
The AI routing layer introduces a specific sovereignty consideration: cloud AI providers (Anthropic, OpenAI, Google) are not FedRAMP authorized for all data types, and security event data containing PII requires routing to local or agency-controlled inference infrastructure. TRA-CE's PII-aware routing handles this by automatically routing PII-containing payloads to locally deployed models, with cloud models used only for aggregated, de-identified analysis. This routing architecture satisfies the data sovereignty requirements for AI-assisted analysis in federal environments.
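The routing decision can be sketched as a gate in front of model selection. The two regex patterns below (a US SSN shape and an email address) are illustrative stand-ins for a real PII classifier, and the route names are hypothetical:

```python
import re

# Illustrative PII patterns; a production system would use a proper
# classifier rather than two regexes.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN shape
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def route(payload: str) -> str:
    """PII-aware routing sketch: payloads containing PII go to locally
    deployed inference; de-identified payloads may use a cloud model."""
    if any(p.search(payload) for p in PII_PATTERNS):
        return "local-model"
    return "cloud-model"
```

The design point is that the sovereignty decision is made per payload, mechanically, before any data leaves the FedRAMP boundary, rather than relying on analysts to remember which data may be sent where.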
7. FedRAMP Authorization Considerations
FedRAMP authorization for a security intelligence platform at Moderate or High impact level requires satisfying the applicable SP 800-53 control baselines as described in the FedRAMP Security Controls Baseline documents. The Rev 5 Moderate baseline includes 323 controls; the High baseline includes 410. For a platform that processes security event data and produces AI-assisted analysis, the control families with the highest implementation complexity are typically: AU (Audit and Accountability), AC (Access Control), IA (Identification and Authentication), SC (System and Communications Protection), and SI (System and Information Integrity).
The SI family is worth examining in depth for a causal intelligence platform. SI-7, Software, Firmware, and Information Integrity, requires protection against unauthorized modification of software and data, with integrity verification mechanisms. For a causal graph that serves as the evidentiary foundation for security decisions, integrity of the graph data is a security requirement in its own right. Causal edges that are modified or deleted without audit trail could allow an attacker, if they gained access to the platform, to erase the causal evidence of their own activity. The AI_Run audit trail, combined with append-only logging patterns for causal graph events, provides the integrity controls SI-7 requires.
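An append-only pattern with the integrity property SI-7 asks for can be sketched as a hash chain: each logged mutation commits to its predecessor's hash, so any silent edit or deletion is detectable on verification. This is a generic construction, not TRA-CE's documented implementation:

```python
import hashlib
import json

class AppendOnlyLog:
    """Hash-chained, append-only record of causal graph mutations:
    each entry's hash covers its predecessor's hash, so tampering
    with any earlier entry breaks verification of the chain."""

    def __init__(self):
        self.entries: list[tuple[str, str]] = []  # (record_json, chained_hash)

    def append(self, record: dict) -> str:
        prev = self.entries[-1][1] if self.entries else "genesis"
        blob = json.dumps(record, sort_keys=True)
        h = hashlib.sha256((prev + blob).encode()).hexdigest()
        self.entries.append((blob, h))
        return h

    def verify(self) -> bool:
        prev = "genesis"
        for blob, h in self.entries:
            if hashlib.sha256((prev + blob).encode()).hexdigest() != h:
                return False
            prev = h
        return True
```

An attacker with platform access could still append misleading records, but could not silently rewrite or erase the causal evidence of their earlier activity without failing verification.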
SI-4(11), Analyze Communications Traffic Anomalies, requires analyzing network traffic to detect and report anomalies. In the context of a causal intelligence platform, network anomaly detection is a sub-function of the broader causal graph construction process. Network events are ingested, causally linked to process and identity events, and evaluated against behavioral baselines. Anomalies are surfaced as nodes in causal chains with appropriate confidence grades, satisfying the enhancement in a way that is more contextually rich than conventional network monitoring alone.
8. The Procurement Conversation
Federal technology procurement for security tools typically runs through one of several vehicles: GSA Schedules (primarily Schedule 70/IT), IDIQs with GWAC holders, agency-specific BPAs, or direct acquisition under simplified acquisition thresholds for smaller engagements. FedRAMP authorization status is a baseline requirement for cloud offerings at most agencies; some agencies accept FedRAMP In-Process designations for pilot deployments.
The procurement conversation for causal security intelligence differs from the procurement conversation for conventional SIEM or XDR in a specific way: the value proposition cannot be demonstrated by feature comparison against a checklist. The value is in what the platform does that the existing stack cannot do, specifically the construction of causal chains connecting events that the existing stack treats as isolated, and the production of confidence-graded findings with counterfactual analysis.
Demonstrating this value in a federal procurement context requires access to representative data. The most effective proof-of-concept deployment approach for federal agencies is to ingest a defined period of historical telemetry and run retrospective causal chain construction against incidents that were investigated manually. The comparison between what the manual investigation found and what the causal platform surfaces, in terms of timing, completeness, and counterfactual analysis quality, is the evidence that closes the procurement conversation.
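The comparison itself reduces to simple set arithmetic over the event identifiers each approach surfaced. A sketch, with hypothetical field names:

```python
def compare_retrospective(manual: set[str], causal: set[str]) -> dict:
    """Compare event ids surfaced by a manual investigation against
    those surfaced by retrospective causal chain construction."""
    found_both = manual & causal
    return {
        # Fraction of the manual findings the platform also surfaced.
        "recall_of_manual": len(found_both) / len(manual) if manual else 1.0,
        "new_findings": sorted(causal - manual),  # events the manual pass missed
        "missed": sorted(manual - causal),        # events the platform missed
    }
```

In a proof of concept, the `new_findings` set, reviewed and confirmed by the agency's own analysts, is typically the decisive evidence.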
For agencies subject to FISMA reporting requirements, the causal intelligence platform also provides direct value in the production of the annual FISMA metrics. Detection time, time to respond, and coverage of continuous monitoring requirements are all metrics that a causal intelligence platform with complete audit trails can report with higher accuracy and lower manual compilation effort than conventional tools.
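Computing those metrics from a complete audit trail is mechanical once per-incident timestamps exist. A sketch, assuming illustrative field names rather than any FISMA-mandated schema:

```python
from statistics import mean

def fisma_metrics(incidents: list[dict]) -> dict:
    """Mean time-to-detect and time-to-respond from per-incident
    timestamps (epoch seconds). Field names are illustrative."""
    mttd = mean(i["detected"] - i["occurred"] for i in incidents)
    mttr = mean(i["resolved"] - i["detected"] for i in incidents)
    return {"mttd_seconds": mttd, "mttr_seconds": mttr}
```

The accuracy claim in the text rests on the `occurred` timestamp: a causal chain anchors it at the earliest antecedent event, whereas manual compilation often defaults to the first alert.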
9. Conclusion
The federal cybersecurity mandate of 2021 to 2025 has created genuine requirements for technology capabilities that conventional security tools were not designed to provide: continuous behavioral analysis with causal context, AI-assisted analysis with complete audit trails, identity trust modeling that adapts based on causal risk signals, and vulnerability exposure assessment integrated with live threat intelligence.
Causal security intelligence satisfies each of these requirements. The architecture, built on a sovereign-capable database layer, evidence-locked AI reasoning with complete AI_Run logging, continuous feed ingestion from CISA KEV and related sources, and Trust Drift identity modeling, maps directly to the specific mandates of EO 14028, M-22-09, and the NIST and CISA guidance documents that implement them.
Federal agencies evaluating security technology in the current mandate environment should evaluate causal intelligence not as an incremental improvement to their existing SIEM or XDR investment, but as the analytical layer that makes continuous monitoring meaningful. Logging everything is a prerequisite. Analyzing everything continuously with causal context is the requirement. They are different capabilities, and only one of them satisfies what the mandates actually require.
References
- Executive Order 14028, Improving the Nation's Cybersecurity, May 12, 2021.
- OMB M-22-09, Moving the U.S. Government Toward Zero Trust Cybersecurity Principles, January 26, 2022.
- NIST SP 800-207, Zero Trust Architecture, August 2020.
- NIST SP 800-53 Rev 5, Security and Privacy Controls for Information Systems and Organizations, September 2020.
- CISA, Zero Trust Maturity Model Version 2.0, April 2023.
- CISA Binding Operational Directive 22-01, Reducing the Significant Risk of Known Exploited Vulnerabilities, November 3, 2021.
- FedRAMP Rev 5 Security Controls Baselines, Moderate and High, General Services Administration, 2023.
TRA-CE.ai | Causal Security Intelligence | tra-ce.ai
Research Division | March 2026