NIST Opens AI Cybersecurity Framework Profile for Public Comment

After years of developing extensive cybersecurity guidance, the National Institute of Standards and Technology (NIST) has released a preliminary draft of its Cybersecurity Framework Profile for Artificial Intelligence. The document, formally known as NISTIR 8596, was published on December 17, 2025, and serves as a companion to NIST’s widely used Cybersecurity Framework 2.0. The new profile is designed to help organizations manage the cybersecurity challenges associated with AI systems.

The preliminary draft addresses several key focus areas for securing AI systems. Organizations are advised to issue unique identities to AI systems, restrict execution of arbitrary code by AI agents, and maintain protected backups of critical AI assets. The framework also emphasizes the importance of new monitoring requirements to track actions taken by AI.
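To make these controls concrete, the sketch below is a minimal, hypothetical Python guard, not anything defined in the NIST profile, that tags each request with an agent identity, refuses commands outside an allowlist, and writes every decision to an audit log; the names ALLOWED_COMMANDS and execute_agent_command are invented for illustration.

```python
import logging
from datetime import datetime, timezone

# Hypothetical allowlist of actions an AI agent may take; anything else is refused.
ALLOWED_COMMANDS = {"list_open_tickets", "summarize_log", "fetch_asset_inventory"}

# Audit log so every agent action (allowed or denied) is recorded and attributable.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_agent_audit")


def execute_agent_command(agent_id: str, command: str) -> str:
    """Run a command on behalf of an identified AI agent only if it is allowlisted."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if command not in ALLOWED_COMMANDS:
        audit_log.info(f"{timestamp} DENIED agent={agent_id} command={command!r}")
        return "denied: command not on allowlist"
    audit_log.info(f"{timestamp} ALLOWED agent={agent_id} command={command!r}")
    # A real deployment would dispatch to a vetted handler here.
    return f"executed: {command}"


if __name__ == "__main__":
    print(execute_agent_command("agent-soc-01", "summarize_log"))
    print(execute_agent_command("agent-soc-01", "rm -rf /"))
```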


NIST’s profile highlights how AI can strengthen cyber defense capabilities. It suggests augmenting cybersecurity teams with AI agents and using AI to conduct governance checks. AI tools can spot anomalies, correlate suspicious behaviors, and identify unusual patterns faster than humans and traditional automated tools.
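As a toy illustration of the kind of anomaly spotting described above, the hedged sketch below flags a spike in failed logins using a simple z-score baseline; the sample data and the flag_anomalies helper are invented for this example, and AI-assisted detection in practice would be considerably more sophisticated.

```python
from statistics import mean, stdev

# Invented hourly counts of failed logins for one account; the final value is a spike.
failed_logins = [2, 3, 1, 2, 4, 2, 3, 48]


def flag_anomalies(counts: list[int], threshold: float = 3.0) -> list[int]:
    """Return indices whose z-score against the remaining values exceeds the threshold."""
    anomalies = []
    for i, value in enumerate(counts):
        baseline = counts[:i] + counts[i + 1:]  # compare each point with its peers
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(value - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies


if __name__ == "__main__":
    print(flag_anomalies(failed_logins))  # flags index 7, the sudden spike
```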

The framework also addresses threats from adversarial AI. Personnel may face AI-enabled phishing or deepfake attacks, while AI-powered cyberattacks could exploit vulnerabilities in third-party updates. The profile recommends developing AI-specific threat information sharing channels so organizations can respond to these emerging threats.

The profile calls on organizations to integrate AI-specific risks into their formal risk appetite statements. It builds upon NIST’s 2023 AI Risk Management Framework and maps AI considerations onto every item in the Cybersecurity Framework, creating common AI cybersecurity target outcomes for strategic planning. The document was developed through extensive collaboration with over 6,500 contributors from industry, government, and academia.
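The mapping itself can be pictured as a simple lookup from each CSF 2.0 function to AI-specific target outcomes. The sketch below is illustrative only, built from the practices mentioned in this article rather than from the profile’s actual text; the AI_PROFILE_OUTCOMES structure and its entries are placeholders, not quotations from NIST.

```python
# Illustrative only: example AI-specific target outcomes, drawn from this article,
# keyed by the six CSF 2.0 functions. The NIST profile defines its own mappings.
AI_PROFILE_OUTCOMES: dict[str, list[str]] = {
    "Govern":   ["AI-specific risks reflected in the formal risk appetite statement"],
    "Identify": ["Inventory of AI systems, each with a unique identity"],
    "Protect":  ["Execution of arbitrary code by AI agents restricted"],
    "Detect":   ["Actions taken by AI systems monitored and logged"],
    "Respond":  ["AI-specific threat information shared through dedicated channels"],
    "Recover":  ["Protected backups of critical AI assets maintained"],
}

if __name__ == "__main__":
    for function, outcomes in AI_PROFILE_OUTCOMES.items():
        for outcome in outcomes:
            print(f"{function}: {outcome}")
```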

The profile follows a structured “Secure, Defend, Thwart” approach to comprehensively address AI-specific cybersecurity concerns across organizations. NIST has established a 45-day comment period for public feedback, with comments due by January 30, 2026. This input will inform the development of an initial public draft scheduled for release later in 2026. The agency plans to continue refining the profile as an ongoing resource for the cybersecurity community.
