ONG-ISAC & MITRE Briefing in Outlook


MITRE Adversarial Threat Landscape for AI Systems (ATLAS™) is a globally accessible, living knowledge base of adversary tactics and techniques based on real-world attack observations and realistic demonstrations from artificial intelligence (AI) red teams and security groups. There is a growing number of vulnerabilities in AI-enabled systems, as the incorporation of AI increases the attack surface of existing systems beyond that of traditional cyberattacks. MITRE developed ATLAS to raise community awareness and readiness for these unique threats, vulnerabilities, and risks in the broader AI assurance landscape.

Christina discussed the latest ATLAS community efforts focused on capturing cross-community data on real-world AI incidents, growing understanding of vulnerabilities that can arise when using open-source models or data, building new open-source tools for threat emulation and AI red teaming, and developing mitigations to defend against AI security threats.


Dr. Christina Liaghati, Trustworthy & Secure AI Department Manager and MITRE ATLAS Lead

Working across a collaborative global community of industry, government, and academia, Dr. Liaghati leads MITRE’s Trustworthy & Secure AI Department and MITRE ATLAS, where she passionately drives research and development in trustworthy and secure AI for everyone working to leverage AI-enabled systems. Leading her department of 50+ scientists and engineers and serving the community with the not-for-profit, objective MITRE perspective, she is dedicated to working together to create and openly share actionable tools, capabilities, data, and frameworks for trustworthy and secure AI, such as ATLAS, an ATT&CK-style framework of the threats and vulnerabilities of AI-enabled systems.

As Dr. Liaghati has worked across the community to improve the common understanding of AI security concerns, her work has quickly come to overlap with broader AI assurance concerns, which include AI equitability, interpretability, reliability, robustness, safety, and privacy enhancement. As a result of this expansion beyond AI security into these elements of trustworthy AI and AI assurance, her current focus under ATLAS and across the international community is to build a protected mechanism for increased knowledge and incident sharing across government and industry, covering both AI security and the broader areas of AI assurance.

Dr. Liaghati also chairs the NATO Science and Technology Organization Research Task Group on AI Assurance and Security, which is focused on fostering an enduring collaborative community of NATO organizations and industry partners, leveraging the Science and Technology Organization to shape future interoperable capability developments in AI security and assurance.