
Once a frontier of experimentation, AI is now embedded in core systems and actively augmenting human decision-making across industries. With its growing presence comes a growing list of concerns, particularly around security, privacy, and ethics. In response, oversight models are emerging, from United States (U.S.) policy initiatives and the European Union (EU) AI Act to international standards like ISO/IEC 42001. As this guidance shapes the future of AI, staying informed is essential for meeting compliance requirements and safeguarding data.
The NIST AI Risk Management Framework
The NIST AI Risk Management Framework stands out as a leading effort to help organizations navigate and mitigate the risks associated with AI. Published in January 2023, it is a voluntary framework that can be applied across sectors and organizational sizes. It is divided into two main parts: an explanatory section and an actionable one. The first covers core risk management concepts, identifies the stakeholders needed to manage AI risks across an AI system’s lifecycle, and defines the characteristics of trustworthy AI. The second provides actionable guidance organized into four functions: Govern, Map, Measure, and Manage.
- The Govern function focuses on establishing roles and responsibilities for effective AI risk management.
- The Map function provides guidance for identifying and contextualizing risks within an organization’s unique environment.
- The Measure function supports the assessment and prioritization of those risks.
- The Manage function provides strategies for mitigating, monitoring, and continuously addressing those risks.
This framework serves as a valuable resource for organizations seeking to understand and manage AI risks in their environment and strengthen their ability to deploy AI systems securely.
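The framework itself is a prose document and prescribes no particular tooling, but its four functions lend themselves to a simple working structure. The sketch below is a minimal, hypothetical Python illustration of a risk register tagged by function; every class, field, and entry is an assumption made for demonstration, not part of the NIST framework.

```python
from dataclasses import dataclass
from enum import Enum

# The four NIST AI RMF functions. The framework defines these in prose;
# this enum is purely an illustrative convenience.
class RmfFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

@dataclass
class RiskRegisterEntry:
    """A hypothetical AI risk-register entry tagged by RMF function."""
    description: str
    function: RmfFunction
    owner: str   # accountable role, in the spirit of the Govern function
    status: str  # e.g., "open" or "closed"

entries = [
    RiskRegisterEntry("Define AI accountability roles", RmfFunction.GOVERN, "CISO", "open"),
    RiskRegisterEntry("Inventory model training data sources", RmfFunction.MAP, "Data Lead", "open"),
    RiskRegisterEntry("Score bias risk for the hiring model", RmfFunction.MEASURE, "ML Lead", "open"),
    RiskRegisterEntry("Schedule quarterly model drift reviews", RmfFunction.MANAGE, "ML Ops", "open"),
]

# Group open items by function to see where remediation effort concentrates.
for fn in RmfFunction:
    open_items = [e for e in entries if e.function is fn and e.status == "open"]
    print(f"{fn.value}: {len(open_items)} open item(s)")
```

However an organization records its risks, the framework’s point stands: governance, identification, assessment, and treatment each need an explicit home and owner.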
The EU AI Act
Another prominent governance model is the EU’s AI Act, which adopts a risk-based approach to AI regulation. Under this model, AI systems are evaluated for the level of risk they pose, ranging from minimal to unacceptable. Each classification carries with it corresponding regulatory requirements. On one end of the spectrum, minimal-risk AI systems face no regulation; on the other, unacceptable-risk AI systems are prohibited entirely. Between the two extremes lie limited-risk and high-risk AI systems. High-risk systems attract the most regulatory attention due to their potential to affect individuals’ health, safety, and rights. They are commonly used in sensitive sectors like healthcare, education, employment, and critical infrastructure.
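The Act expresses this tiering in legal text rather than code, but the structure is essentially a mapping from risk tier to obligations. The sketch below is a hypothetical, heavily simplified Python illustration of that mapping; the tier names follow the Act, while the obligation summaries are informal paraphrases, not legal requirements.

```python
from enum import IntEnum

# The EU AI Act's four risk tiers, ordered from least to most restricted.
class RiskTier(IntEnum):
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2
    UNACCEPTABLE = 3

# Informal paraphrases of the regulatory consequences -- not legal text.
OBLIGATIONS = {
    RiskTier.MINIMAL: "No specific obligations under the Act",
    RiskTier.LIMITED: "Transparency duties (e.g., disclose that users are interacting with AI)",
    RiskTier.HIGH: "Risk management, conformity assessment, human oversight, logging",
    RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market",
}

def obligations_for(tier: RiskTier) -> str:
    """Look up the (paraphrased) obligations for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    # Hypothetical triage: a CV-screening tool used in hiring would
    # typically land in the high-risk tier under the Act.
    print(obligations_for(RiskTier.HIGH))
```

The ordering in the enum mirrors the Act’s logic: as the tier rises, so does the regulatory burden, culminating in an outright ban.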
The EU AI Act entered into force in August 2024 and is being implemented in phases, with prohibitions on unacceptable-risk systems applying from early 2025 and most remaining obligations taking effect by August 2026. The Act applies to both providers and deployers of AI within the EU, as well as those outside the EU whose systems – or their outputs – are used within the region. As one of the most comprehensive AI regulations to date, the EU AI Act sets a key precedent for global AI governance. Aligning with it can position organizations as leaders in secure and responsible AI deployment.
ISO/IEC 42001
Closely aligned with the EU AI Act is ISO/IEC 42001, a voluntary international standard published in December 2023 that offers guidance for managing AI systems throughout their lifecycle. Its recommendations are organized into four annexes.
- Annex A outlines the foundational controls for effective and secure AI governance.
- Annex B provides practical guidance for implementing those controls to effectively operationalize AI governance.
- Annex C helps organizations assess AI risks by identifying AI-related objectives and risk sources.
- Annex D highlights the need to consider broader ISO management system standards, since AI systems draw on diverse technologies and components and may process sensitive data.
For organizations and security professionals, aligning with this standard supports responsible AI use and helps prepare for evolving regulatory expectations, like the EU AI Act.
California's Leading Role in U.S. AI Regulation
Within the U.S., California is leading the way on AI regulation, with several new laws that took effect in January 2025. These laws address the use of AI across key sectors, including social media, entertainment, politics, and healthcare. One notable law, AB 2885, establishes a standard definition of AI, enabling a consistent understanding across legislation. In addition, California has amended the California Consumer Privacy Act (CCPA) to clarify that AI-generated data is considered personal information. The amendment acknowledges that AI systems can infer personal details from existing data or through educated guesses. Now, any personal data created by AI receives the same legal protections as traditionally collected personal data. California’s AI legislation may set the tone for broader legislative efforts across the United States.
Start Preparing Now
As AI becomes increasingly embedded in critical systems and everyday decision-making, governance models like the NIST AI Risk Management Framework, the EU AI Act, ISO/IEC 42001, and California’s emerging legislation are laying the groundwork for responsible AI development and deployment. While these models vary in scope and enforcement, they share a common goal: ensuring that AI systems are deployed securely and that data remains protected. These early governance models are only the beginning; engaging with them now means being prepared for the rapidly evolving future of AI governance.
K logix can help you navigate this landscape and prepare your cybersecurity program for a secure and compliant AI implementation. For more information, please contact one of our experts: info@klogixsecurity.com.