Policy Briefs
Classifying AI Cybersecurity Systems as Critical Infrastructure Software
Abstract
This policy brief examines the AI cybersecurity systems that protect essential national infrastructure, including government, energy, and financial networks. It identifies a major oversight gap: these AI systems are not held to the rigorous safety and testing standards legally required of other critical infrastructure. The analysis centers on a policy solution in which the federal government formally classifies this AI software as critical infrastructure itself, thereby mandating enforceable security standards.
External Document
This policy analysis is derived from my research on trustworthy machine learning systems, presented at the University of California, Santa Barbara.
Mandatory Explainability for AI Cybersecurity Enforcement Systems
Abstract
This policy brief examines AI systems used to automatically enforce cybersecurity rules, such as locking user accounts or blocking network access. It identifies a critical operational gap: these AI tools make high-stakes decisions without providing understandable explanations, eroding trust and causing costly disruptions. The analysis focuses on the policy response of mandating explainability and human review, which would transform these systems into accountable, reliable partners for security teams.