Great innovations bring great risks, and artificial intelligence is no exception.
You may be aware of AI’s cutting-edge capabilities, but have you considered its risks? A few weeks ago, my colleague wrote about the dangers of ChatGPT, focusing on the risk of adding third parties to your ecosystem.
Since then, things have only gotten worse.
What Is AI?
First, let’s explore just what AI is and why you should care. AI is the simulation of human intelligence processes by machines, especially computer systems.
Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.
When people refer to AI, they are often referring to one popular component of the technology, such as machine learning. According to IBM, machine learning is a branch of AI and computer science that focuses on using data and algorithms to imitate how humans learn, gradually improving its accuracy.
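To make that definition concrete, here is a minimal sketch in Python (using scikit-learn and a synthetic dataset, neither of which appears in the article) of the same idea: the same algorithm, given more data, gradually improves its accuracy.

```python
# A minimal, hypothetical sketch of "learning from data": the same algorithm
# is trained on progressively more examples, and its test accuracy improves.
# Assumes scikit-learn is installed; the dataset is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic dataset: 5,000 labeled examples with 20 features each.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Train on increasingly large slices of the training data.
for n in (50, 500, 4000):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>4} examples -> test accuracy {acc:.2f}")
```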
That all sounds interesting — so, what could possibly go wrong?!
What Are the Risks of AI?
At the end of March 2023, ChatGPT’s creator, OpenAI, confirmed that a data breach had occurred, caused by a bug in an open-source library. The breach was confirmed on the same day that a security firm reported seeing the use of a component affected by an actively exploited vulnerability.
The bug resulted in ChatGPT users being shown chat data belonging to other users. It also exposed payment-related information belonging to subscribers, including PII such as first and last name, email address, payment address, payment card expiration date and the last four digits of the customer’s card number.
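To illustrate the class of failure involved (this is not OpenAI’s actual code, which relied on an open-source library), here is a hypothetical Python sketch of how a shared response cache that ignores the requesting user can serve one person’s billing data to another:

```python
# Purely illustrative sketch of cross-user data exposure: the cache key is
# built from the endpoint only, so a later user receives an earlier user's
# cached response. The names and data below are made up.
_response_cache: dict[str, str] = {}

def billing_summary(user_id: str, endpoint: str = "/billing/summary") -> str:
    # BUG: the cache key omits the requesting user.
    key = endpoint
    if key not in _response_cache:
        _response_cache[key] = f"card ending 1234 for {user_id}"  # pretend database lookup
    return _response_cache[key]

print(billing_summary("alice"))  # alice's own data
print(billing_summary("bob"))    # also alice's data -- exposure
```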
Additionally, another vulnerability was revealed that could be leveraged to obtain secret keys and root passwords.
These are just two examples from one application that justify concerns about AI technology.
How Are AI Risks Being Managed?
The risks of AI have not gone unnoticed by influential members of the technology community and regulators.
Open Letters
More than 30,000 people have expressed their concerns about AI by signing an open letter calling for a six-month halt to work on AI systems that can compete with human-level intelligence.
Tesla CEO Elon Musk and Apple co-founder Steve Wozniak are among the signatories of the letter, which the Future of Life Institute published on March 22, 2023. The letter expressed fear that AI programs could have negative consequences if left unchecked, from widespread disinformation to the ceding of human jobs to machines.
The letter suggests that “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”
That raises the question: what (if any) safety protocols and controls exist today?
Regulatory Frameworks
Regulators and standards bodies moved quickly to release AI risk management guidance following ChatGPT’s rapid adoption in the fall of 2022.
NIST AI Risk Management Framework
In January 2023, NIST released AI 100-1, the Artificial Intelligence Risk Management Framework (AI RMF 1.0). The goal of the AI RMF is to offer a resource to organizations designing, developing, deploying or using AI systems, helping them manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.
It is a voluntary framework and “is intended to be practical, to adapt to the AI landscape as AI technologies continue to develop and to be operationalized by organizations in varying degrees and capacities so society can benefit from AI while also being protected from its potential harms.”
The Core of the AI RMF consists of four functions: govern, map, measure and manage. Each of these high-level functions is broken down into categories and subcategories, which are then further subdivided into specific actions and outcomes.
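As a hypothetical illustration of that hierarchy, an organization might track its coverage of the four functions in a simple structure like the Python sketch below; the category labels and statuses are illustrative placeholders, not NIST’s official category text.

```python
# Hypothetical internal tracker for AI RMF coverage. The four function names
# come from the framework; the labels and statuses are illustrative only.
ai_rmf_tracker = {
    "govern":  {"example category": "AI risk policies and accountability",      "status": "in progress"},
    "map":     {"example category": "Context and intended use documented",       "status": "not started"},
    "measure": {"example category": "Risks analyzed and tracked with metrics",   "status": "not started"},
    "manage":  {"example category": "Risks prioritized and responded to",        "status": "not started"},
}

for function, details in ai_rmf_tracker.items():
    print(f"{function:>7}: {details['example category']} [{details['status']}]")
```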
SCF’s AI Risk Management Domain
Another response to the unique risks of AI is the addition of a new Artificial Intelligence and Autonomous Technologies (AAT) domain to the Secure Controls Framework (SCF). SCF version 2023.1 added this domain, which is mapped to the NIST AI RMF described above.
The SCF’s AAT domain and controls aim to “help organizations ensure AI and autonomous technologies are designed to be reliable, safe, fair, secure, resilient, transparent, explainable and privacy-enhanced.”
In addition, AI-related risks are governed according to technology-specific considerations to minimize emergent properties and unintended consequences. Controls include evaluating the security and resilience of AI systems to be deployed, along with examining them for fairness and bias.
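As a simplified, hypothetical example of what a fairness and bias check might look like in practice, the Python sketch below compares a model’s approval rate across two groups using made-up sample data; a large gap would be flagged for review before deployment.

```python
# Hypothetical pre-deployment fairness check: compare approval rates between
# two groups (a rough demographic-parity gap). All data below is made up.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # 1 = approved by the AI system
groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def approval_rate(group: str) -> float:
    picks = [p for p, g in zip(predictions, groups) if g == group]
    return sum(picks) / len(picks)

gap = abs(approval_rate("a") - approval_rate("b"))
print(f"group a: {approval_rate('a'):.0%}, group b: {approval_rate('b'):.0%}, gap: {gap:.0%}")
# A large gap would be flagged for review under a fairness/bias control.
```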
The NIST AI RMF and the SCF’s new domain are just the beginning; more safeguards to mitigate the unique risks of AI are sure to follow.
Beat AI Risk Today
There is no question that AI is rapidly transforming our world. We must be diligent in monitoring the ethical and technical risks that this change brings so we can mitigate them before they wreak havoc.
Your first step?
Gain real-time insights into your AI risk within the context of your strategic business priorities. That’s exactly what ZenGRC is designed to do. See it in action now!