
        Getting Started on Governing AI Issues

        Published March 6, 2023 • By Matt Kelly • Blog

        This article first appeared on radicalcompliance.com on February 20, 2023.

        Today we are going to keep looking at artificial intelligence and how corporations can get ahead of the risks thereof. Our previous post on AI was primarily a list of potential risks that could run rings around your company if you’re not careful; so what steps can the board and senior executives take to prevent all that?

        Well, first things first. AI is a new technology. So the first question governance and risk assurance teams should ask themselves is simply: how did you manage the adoption of new technologies in the past?

        Plenty of people will answer, “Poorly,” or “I don’t know” or some similar answer. Those answers actually demonstrate an important point. Lots of previous technologies were adopted haphazardly by employees first; and then senior management woke up to the need for committees and workstreams and SWOT analyses and all that fun stuff.

        The goal today is to avoid a repeat of that dynamic — and the person who can help the board and senior management avoid it will be a valuable person indeed. So where can risk, audit, and compliance professionals turn for advice, and how can you put that advice to work in your own company?

        Enter the risk management frameworks.

        The most notable AI risk management framework right now comes from NIST, which released version 1.0 of its voluntary AI Risk Management Framework in January. A few other AI frameworks are already out there as well:

        • COSO released guidance in 2021 to apply its enterprise risk management framework specifically to AI;
        • That COSO guidance was partially derived from the Trustworthy AI framework developed by Deloitte;
        • Microsoft publishes a list of Responsible AI Principles, and since Microsoft is bankrolling ChatGPT, the principles it’s following are worth a look;
        • Google has its own set of Responsible AI Practices;
        • The Partnership on AI, funded by a consortium of tech interests, lists several broad tenets for AI development and use (although it’s a little light on risk management practices).

        From the above list, the NIST and COSO frameworks are the most useful for compliance and audit executives because they are true risk management tools that help you understand how to implement AI at an actual corporation. The others are worth reading, but they’re more a collection of good ideas for how AI should work, or pitfalls of bad AI that you want to avoid. That’s nice, but someone still needs to put structure and discipline around all that awareness; COSO and NIST help you do that.

        Mapping Out AI Risks

        I don’t know about you, but one thing that intimidates me about AI is the sheer number of issues that it poses to corporations. This isn’t like switching from Oracle to SAP to run your business systems, or moving from an in-house email system to Gmail. Those business processes are already mature and well understood; you’re simply switching around the technology that the humans use to run those systems.

        Artificial intelligence will let corporations design entirely new business processes. It’s more akin to the adoption of cloud computing or the arrival of mobile devices. It will allow you to set new strategic goals, change your financial targets, and redefine your human capital needs. That said, AI will also change how your company interacts with customers, employees, and third parties — which, in turn, will create new operational and compliance risks.

        Simply put, you’ll need to think about how you’ll use AI and how others will use it. You’ll need to consider how others’ use of AI affects you, and how your use of AI affects them.

        To that end, I cooked up this risk-reward matrix:

        Risks:
        • Risks we pose to ourselves by using AI
        • Risks we pose to others by our use of AI
        • Risks others pose to us by their use of AI

        Benefits:
        • Benefits we can bring to ourselves by using AI
        • Benefits we can bring to others by our use of AI
        • Benefits we can gain from others’ use of AI

        The above matrix is one example of how an in-house risk committee could start to game out the implications of AI. Bring together the people within your enterprise who’d have good insight for each of those categories, such as:

        • IT
        • Sales & marketing
        • HR
        • Finance
        • Compliance & privacy
        • Legal

        Then start brainstorming. Or assign people to the categories most relevant to them, and have each work up a list of potential risks and benefits. For example, compliance teams would presumably have lots to say about the risks the company poses to itself and others; sales would have better insight into the benefits of your using AI and the risks of others using it.

        Then the committee can reconvene to compare notes. See where risks and benefits overlap, or which risks and benefits are the largest, and therefore should get the most attention. Start to develop a process to manage AI’s arrival in your enterprise and your broader world.
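        For committees that want to track this exercise somewhere more durable than a whiteboard, the matrix lends itself to a simple data structure. The sketch below is illustrative only: the cell names, owner assignments, and example entries are hypothetical placeholders I invented for the example, not part of the exercise described above.

        ```python
        from collections import defaultdict

        # The six cells of the risk-reward matrix, keyed by (kind, direction).
        CELLS = [
            ("risk", "we pose to ourselves"),
            ("risk", "we pose to others"),
            ("risk", "others pose to us"),
            ("benefit", "we bring to ourselves"),
            ("benefit", "we bring to others"),
            ("benefit", "we gain from others"),
        ]

        # Hypothetical owner assignments: each function works the cells
        # closest to its own expertise, per the committee approach above.
        OWNERS = {
            ("risk", "we pose to ourselves"): "Compliance & privacy",
            ("risk", "we pose to others"): "Compliance & privacy",
            ("risk", "others pose to us"): "Sales & marketing",
            ("benefit", "we bring to ourselves"): "Sales & marketing",
            ("benefit", "we bring to others"): "Sales & marketing",
            ("benefit", "we gain from others"): "IT",
        }

        entries = defaultdict(list)

        def add_entry(kind, direction, note):
            """Record one brainstormed risk or benefit in its matrix cell."""
            if (kind, direction) not in OWNERS:
                raise ValueError(f"unknown cell: {(kind, direction)}")
            entries[(kind, direction)].append(note)

        # Example entries a committee might collect (invented for illustration).
        add_entry("risk", "we pose to ourselves", "AI-generated code leaks secrets")
        add_entry("benefit", "we bring to ourselves", "Faster first-draft contracts")

        # Summarize each cell for the reconvened committee meeting.
        for cell in CELLS:
            kind, direction = cell
            print(f"{kind.title()}s {direction} ({OWNERS[cell]}): {len(entries[cell])} item(s)")
        ```

        However you record it, the point is the same: every cell has a named owner and a running list, so nothing falls through the cracks between meetings.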

        A Word on Governance

        Risk management frameworks always start with governance, and for good reason: governance creates the systems that steer your employees toward a few basic goals, even when the day-to-day steps of that journey feel rocky and improvisational.

        So when we talk about a risk-reward matrix and in-house risk committees, we’re really talking about establishing a governance process to manage your company’s embrace of artificial intelligence. A few points then come to mind.

        First, you should establish some sort of governance process because that’s something the board will want to see. Technically, the board doesn’t establish that governance process itself; it exists to assure that you, the management team, have established a sensible governance process. If you haven’t, and your company slowly finds itself outflanked by competitors who are embracing AI smartly, it’s not the board’s job to step in and develop that AI governance process. It’s the board’s job to replace the management team with new managers who can.

        Second, establish a governance process because without one, employees in your enterprise will start implementing AI on their own. That creates the one risk that senior managers hate most of all: that they get surprised by something they didn’t know their company was doing.

        I can recall one of the largest fast food businesses in the world (won’t name them here, but you’ve eaten there) grappling with social media in the early 2010s. The company settled on a policy that when local units wanted to try something new on social media, they first had to review that project with a team at corporate HQ composed of legal and IT executives. Once corporate approved the local team’s idea, that idea became an “approved use” that any other local team could adopt freely. That sort of approach would fit well with AI too.

        Anyway, that’s enough for today. We’ve barely begun with artificial intelligence and there will be plenty more to say about it in the future.

