The rapid advancement of artificial intelligence (AI) has brought about a golden age of human possibility, but it has also raised concerns that necessitate greater AI Compliance regulation, with the EU AI Act at the forefront. This new era of technological growth comes at a price: the potential loss of traditional jobs and the rise of bad actors capable of endangering lives through the misuse of AI.
The Cry for Greater Regulation of AI
Sam Altman, the CEO of OpenAI, has emphasized the significance of this issue, stating that the immense power of AI systems demands special concern and global cooperation. Many experts, including IBM’s Christina Montgomery, have warned about the existential threat posed by AI and have called for addressing it to be a global priority alongside other large-scale risks like pandemics and nuclear war.
The scale of potential job displacement is also alarming: Goldman Sachs estimates that as many as 300 million full-time jobs worldwide could eventually be automated by generative AI, including tools like ChatGPT that are already used in the GRC arena to develop policies. The World Economic Forum's report reinforces this concern, predicting a net loss of 14 million jobs within the next five years alone, raising the obvious question: should AI be regulated? Given these mounting challenges, comprehensive regulation of AI has become imperative to safeguard society from unintended consequences and to ensure the technology's responsible and beneficial application.
How Should AI Be Regulated?
While AI regulation in the US is still in the planning stages, the European Union (EU) has made significant progress: the Council of the EU adopted its common position on the AI Act in December 2022, and the European Parliament adopted its negotiating position, with further amendments, in June 2023. The proposed EU AI regulation focuses on developers, deployers, and users, aiming to address the varied challenges associated with the technology. However, the rules have faced pushback from top business leaders who argue that they could harm the EU bloc's competitiveness and trigger an exodus of investment. The EU Artificial Intelligence Act, which encompasses generative AI and foundation models, is seen by these critics as going too far, imposing high Compliance costs and liability risks on companies. Their concerns highlight the potential impact on innovation and on the region's international standing if Europe is constrained by overly stringent regulation.
On the other hand, the US Senate has announced a plan for regulating AI, recognizing the need to catch up with the technology's rapid advancement. While the EU has taken concrete steps toward AI Compliance with the EU AI Act, a tangible set of rules, the US is still forming panels and debating the key questions and challenges that AI poses. The bottom line is that impacted organizations in both regions can expect to face high Compliance costs.
What Can We Expect for New AI Compliance Regulations?
The EU's proposed regulatory framework on AI stands out as one of the most comprehensive and far-reaching efforts to govern the use of AI technologies, aiming to manage risks while promoting innovation and ensuring transparency and accountability. Here are some of the main differentiators between this proposal and other AI laws:
Scope of the AI Act (EU):
The EU AI Act applies to a wide range of AI systems, including those developed and deployed in the EU and those used outside the EU but impacting EU citizens or fundamental rights within the EU. This extraterritorial reach sets it apart from many other AI regulations that are limited to domestic use.
EU AI Act Risk Categories:
The EU AI Act categorizes AI systems into four risk levels: Unacceptable, High, Limited, and Minimal. This categorization determines the level of regulation and requirements for Compliance based on the potential harm an AI system could cause.
The following AI systems are included in the EU AI Act’s risk categories (a brief code sketch of the tiers follows the list):
- Unacceptable Risk: AI systems that are considered a clear threat to the safety, rights, and freedoms of individuals are prohibited. This includes systems that manipulate human behavior to cause harm.
- High Risk: AI systems that have the potential to cause significant harm or impact critical areas like healthcare, transportation, or law enforcement fall under this category. They must undergo a conformity assessment and meet specific requirements.
- Limited Risk: AI systems with limited potential for harm that must nonetheless meet transparency obligations, such as informing users that they are interacting with an AI.
- Minimal Risk: AI systems that pose minimal risk to individuals or society are subject to lighter regulation or no specific requirements.
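This tiered structure lends itself to straightforward triage in an AI system inventory. The minimal Python sketch below is our own illustration, not terminology or logic from the Act itself; the class names, example systems, and tier assignments are all hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Simplified mirror of the EU AI Act's four risk levels."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # conformity assessment required
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # little or no specific requirements

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier  # assigned during the organization's own risk review

def requires_conformity_assessment(system: AISystem) -> bool:
    """Only high-risk systems trigger a conformity assessment."""
    return system.tier is RiskTier.HIGH

# Hypothetical inventory entries for illustration
inventory = [
    AISystem("resume-screener", "rank job applications", RiskTier.HIGH),
    AISystem("support-chatbot", "answer customer FAQs", RiskTier.LIMITED),
]
for s in inventory:
    print(f"{s.name}: {s.tier.value}, assessment needed: {requires_conformity_assessment(s)}")
```

Recording the tier per system, rather than per vendor or per team, matches how the Act attaches obligations: to the system and its intended use.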
Bans and Restrictions:
The proposed EU AI regulation includes a list of prohibited AI practices, such as using AI to manipulate human behavior in a deceptive manner, deploying AI for social scoring, and using AI for subliminal techniques that exploit vulnerabilities. High-risk AI systems, like those used in critical infrastructures or law enforcement, face additional restrictions.
Transparency and Accountability:
The EU Artificial Intelligence Act emphasizes transparency and accountability for high-risk AI systems. Developers and deployers must provide clear information about an AI system's capabilities and limitations, and they are required to establish robust governance, risk, and Compliance (GRC) management processes.
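In practice, this "clear information" is often captured in a structured transparency record (sometimes called a model card). The sketch below is purely illustrative; the field names and the example system are our own assumptions, not a format mandated by the Act:

```python
# Illustrative transparency record for a hypothetical high-risk system.
transparency_record = {
    "system": "resume-screener",
    "intended_purpose": "rank job applications for human review",
    "capabilities": ["skills extraction", "candidate ranking"],
    "known_limitations": [
        "not validated on non-English resumes",
        "may reflect biases present in historical hiring data",
    ],
    "human_oversight": "a recruiter reviews every automated ranking",
}

for field_name, value in transparency_record.items():
    print(f"{field_name}: {value}")
```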
Oversight and Compliance:
The proposal introduces the concept of "conformity assessments" for high-risk AI systems, which involves third-party assessment bodies verifying AI Compliance with the EU AI Act. Additionally, there are strict requirements for documentation, data quality, and human oversight in high-risk AI systems.
Process for AI Compliance with the EU AI Act
To comply with the proposed regulations, organizations will have to meet multiple requirements, including those below (a simple tracking sketch follows the list):
- Categorization: Determine the risk level of the AI system based on the potential harm it could cause. High-risk AI systems must undergo a conformity assessment.
- Data and Documentation: Ensure that data used in training AI models is of high quality and that documentation is maintained, providing details on the AI system's functionality, purpose, and intended use.
- Transparency: High-risk AI systems must provide users with clear information about the AI's capabilities and limitations to enable informed decision-making.
- Human Oversight: Implement mechanisms for human oversight, allowing users to intervene or override AI decisions in high-risk scenarios.
- Governance and Risk Management: Develop and apply robust governance and risk management practices for high-risk AI systems.
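One practical way to operationalize these requirements is an auditable per-system checklist. Here is a minimal sketch in Python, with requirement names of our own choosing; the Act does not prescribe this structure:

```python
from dataclasses import dataclass, field

# Hypothetical checklist items loosely mirroring the requirements above
REQUIREMENTS = [
    "risk_categorization",
    "data_quality_review",
    "technical_documentation",
    "user_transparency_notice",
    "human_oversight_mechanism",
    "governance_and_risk_process",
]

@dataclass
class ComplianceRecord:
    system_name: str
    completed: set = field(default_factory=set)

    def mark_done(self, requirement: str) -> None:
        if requirement not in REQUIREMENTS:
            raise ValueError(f"Unknown requirement: {requirement}")
        self.completed.add(requirement)

    def outstanding(self) -> list:
        """Requirements still open before the system is deployment-ready."""
        return [r for r in REQUIREMENTS if r not in self.completed]

record = ComplianceRecord("resume-screener")
record.mark_done("risk_categorization")
print(record.outstanding())  # five items remain
```

A structure like this makes it easy to report outstanding obligations to auditors and to block deployment until the list is empty.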
How Can Organizations Prepare for the EU AI Act & AI Compliance?
There is much work to be done. According to a Stanford study, most AI models do not comply with the draft EU AI Act, including OpenAI's GPT-4. Organizations will most likely need to begin demonstrating Compliance sometime in 2024. Anecdotes, the leader in helping enterprises achieve their Compliance goals, has stepped in to help.
Anecdotes has developed a generative AI GRC Toolkit that gives organizations a clear framework for integrating generative AI securely into existing operations without impeding security and Compliance processes or obligations. By implementing the framework, security and Compliance leaders take an essential step toward keeping their organization at the forefront of modern technology while limiting excessive risk exposure and conforming to best practices. Discover how the Anecdotes AI Framework (AAIF) can help ensure your organization’s AI Compliance with the EU AI Act, and access the toolkit to get started today.