Investigating the Effects of the National Institute of Standards and Technology's (NIST) New Standards for Cybersecurity, Privacy, and Artificial Intelligence
In a move to address the unprecedented challenges that AI systems pose to cybersecurity and privacy, the U.S. National Institute of Standards and Technology (NIST) has launched a comprehensive Cybersecurity, Privacy, and AI program.
The program focuses on developing standards, frameworks, and guidelines to manage risks related to AI systems, cybersecurity, and privacy in organisations. A key feature is the Artificial Intelligence Risk Management Framework (AI RMF), published by NIST in 2023, which provides practical guidance across industries on identifying, assessing, and mitigating AI-related risks.
The program addresses AI-related risks by promoting rigorous AI evaluations, enhancing cyber incident response protocols, integrating AI risk management with existing cybersecurity and privacy frameworks, collaborating with defense and intelligence agencies, and encouraging organisations to continuously evaluate AI privacy implications.
Organisations must adapt their defensive strategies to counter AI-enabled cyberattacks effectively, including updating security awareness training programs to cover emerging threats. Machine learning models, inference engines, and AI-powered applications create unique vulnerabilities, including attacks on model weights, training data, and the APIs serving AI functions.
The complexity of AI supply chains compounds these vulnerabilities significantly: modern AI systems often incorporate numerous third-party libraries, pretrained models, and cloud services, each potentially harboring hidden weaknesses. Maintaining data integrity during storage and transport requires robust cryptographic measures, including cryptographic hashes, checksums, and digital signatures.
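As a minimal sketch of the hash-and-verify pattern described above, the Python example below records a SHA-256 checksum for every file in a dataset directory and re-verifies those checksums after transport. It uses only the standard library; the `training_data` directory and manifest filename are hypothetical, and in practice the manifest itself would also be digitally signed (e.g. with Ed25519) so that its own integrity can be checked.

```python
import hashlib
import json
from pathlib import Path


def sha256_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def write_manifest(data_dir: Path, manifest_path: Path) -> None:
    """Record a checksum for every file in the dataset directory."""
    manifest = {
        str(p.relative_to(data_dir)): sha256_digest(p)
        for p in sorted(data_dir.rglob("*"))
        if p.is_file()
    }
    # In a real pipeline, sign this manifest before distributing the dataset.
    manifest_path.write_text(json.dumps(manifest, indent=2))


def verify_manifest(data_dir: Path, manifest_path: Path) -> list[str]:
    """Return files that are missing or whose digest no longer matches the manifest."""
    manifest = json.loads(manifest_path.read_text())
    bad = []
    for name, expected in manifest.items():
        path = data_dir / name
        if not path.is_file() or sha256_digest(path) != expected:
            bad.append(name)
    return bad


if __name__ == "__main__":
    data_dir = Path("training_data")                   # hypothetical dataset directory
    manifest = Path("training_data.manifest.json")
    write_manifest(data_dir, manifest)                 # run when the dataset is published
    tampered = verify_manifest(data_dir, manifest)     # run after transport, before training
    print("tampered or missing files:", tampered or "none")
```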
Data teams face unprecedented responsibilities in ensuring the integrity and security of AI training datasets, with validation and sanitisation processes becoming continuous rather than periodic. Controlling privileged access to training data, enforcing least privilege for both human and nonhuman identities, and continuously monitoring for anomalous behaviour are practical, achievable steps organisations can take.
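To illustrate least-privilege enforcement and anomaly logging for training-data access, the sketch below grants access only when an identity has an explicit entry for a dataset and action, and logs every denial for the monitoring pipeline. The identities, dataset names, and policy structure are hypothetical stand-ins for whatever IAM system an organisation actually uses.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("training-data-access")

# Hypothetical least-privilege policy: each identity (human or service account)
# is mapped only to the datasets and actions it strictly needs.
POLICY: dict[str, dict[str, set[str]]] = {
    "data-engineer-alice":   {"curated-corpus": {"read", "write"}},
    "training-pipeline-svc": {"curated-corpus": {"read"}},
}


@dataclass
class AccessRequest:
    identity: str
    dataset: str
    action: str   # e.g. "read", "write", "delete"


def is_allowed(req: AccessRequest) -> bool:
    """Grant access only if the identity has an explicit entry for this dataset and action."""
    allowed_actions = POLICY.get(req.identity, {}).get(req.dataset, set())
    if req.action in allowed_actions:
        log.info("ALLOW %s: %s on %s", req.identity, req.action, req.dataset)
        return True
    # Denials are logged so the monitoring pipeline can flag repeated attempts
    # or requests from unknown identities as anomalous behaviour.
    log.warning("DENY  %s: %s on %s (possible anomaly)", req.identity, req.action, req.dataset)
    return False


if __name__ == "__main__":
    is_allowed(AccessRequest("training-pipeline-svc", "curated-corpus", "read"))    # allowed
    is_allowed(AccessRequest("training-pipeline-svc", "curated-corpus", "delete"))  # denied, flagged
```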
The program will be implemented as a community profile within NIST's Cybersecurity Framework (CSF) 2.0. Separately, the National Security Agency's Artificial Intelligence Security Center (AISC) has released a Cybersecurity Information Sheet, "AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems".
AI-specific incident response procedures are a critical gap in many organisations' security postures, requiring incident response planning tailored to AI system architectures. The new guidance focuses on three main areas of AI data security: risks in the data supply chain, maliciously modified or "poisoned" data, and data drift.
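Data drift, the last of those areas, is commonly monitored by comparing feature distributions over time. The sketch below computes the Population Stability Index (PSI) for a single numeric feature; this is one common drift metric rather than anything prescribed by the guidance, numpy is assumed to be available, and the 0.2 threshold is only a widely used rule of thumb.

```python
import numpy as np


def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample and a current sample of one numeric feature."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions; a small epsilon avoids division by zero and log(0).
    eps = 1e-6
    base_pct = base_counts / max(base_counts.sum(), 1) + eps
    curr_pct = curr_counts / max(curr_counts.sum(), 1) + eps
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(loc=0.0, scale=1.0, size=10_000)  # feature values at training time
    current = rng.normal(loc=0.4, scale=1.2, size=10_000)   # shifted values seen in production
    psi = population_stability_index(baseline, current)
    # Rule of thumb: PSI > 0.2 suggests significant drift worth investigating.
    print(f"PSI = {psi:.3f} -> {'drift suspected' if psi > 0.2 else 'stable'}")
```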
The program aims to harmonise AI risk management with established cybersecurity and privacy standards, providing industry-tailored frameworks for organisations. NIST is developing the Cyber AI Profile based on its landmark Cybersecurity Framework, with a planned release within the next six months. Adopting quantum-resistant cryptographic standards also helps future-proof systems against emerging threats.
In sum, the program combines standards and processes that promote trustworthy AI development and deployment with coordinated federal initiatives, evaluation ecosystems, and practical frameworks that organisations can implement to proactively manage AI-related cybersecurity and privacy risks.
- The Artificial Intelligence Risk Management Framework (AI RMF), published by NIST in 2023, provides practical guidance for industries on managing risks related to data security, machine learning, and AI systems.
- Organisations must address AI-enabled cyberattacks by continuously evaluating AI privacy implications, implementing AI-specific incident response procedures, and controlling privileged access to training data.
- The implementation of the Cybersecurity, Privacy, and AI program by NIST involves harmonising AI risk management with established cybersecurity and privacy standards, and developing industry-tailored frameworks, such as the Cyber AI Profile.