
The concept of automation has been around for decades in the software field, but recent advancements in machine learning and natural language processing have led to huge breakthroughs. We’ve gone from machines that complete rules-based, predetermined tasks to a new generation of software that “learns” from huge sets of data so that it can make predictions — collectively known as artificial intelligence (AI).
Most people are probably familiar with “narrow” AI — applications such as Apple’s Siri or Amazon’s Alexa. This form of AI is designed to answer basic questions when prompted or to follow basic commands.
Generative AI goes a step further, analyzing large amounts of raw data and producing probable or possible outcomes and solutions. Applications include ChatGPT, Bard, DALL-E, and many others.
Process AI — also known as workflow automation — allows companies to create rule sets, and then program machines to follow those rules and produce repeatable results.
Each category has its own unique benefits and unseen risks.
The Role and Benefits of AI in Cybersecurity
Reduced Manual Tasks
AI is most commonly used in cybersecurity to automate manual tasks. This works best when there is a clear “output” and parameters for meeting it. For example, say you want to automate the collection of evidence to demonstrate compliance with a privacy regulation. If you know what evidence you need and the source of the data, you can use process automation to retrieve that information. In many cases, you can also use AI to determine whether the evidence satisfies the regulation based on a predetermined rule.
Perhaps you have a control that requires all certificates on production servers to be updated every 90 days. Organizations can use process automation to run a check every month to see whether all certificates are less than 90 days old. If the AI identifies one outside the parameter, it can report back a non-conformity. This not only saves time in collecting and assessing evidence, but it also increases the accuracy and timeliness of your control testing.
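To make that concrete, here is a minimal Python sketch of such a check. The hostnames and the 90-day threshold are illustrative assumptions; a real deployment would pull the server list from an asset inventory or CMDB and feed the findings into your compliance tooling.

```python
# A minimal sketch of the 90-day certificate control check described above.
# The host list is a hypothetical placeholder, not a real inventory.
import socket
import ssl
from datetime import datetime, timezone

MAX_AGE_DAYS = 90  # the control's threshold
PRODUCTION_HOSTS = ["app.example.com", "api.example.com"]  # hypothetical inventory


def certificate_age_days(host: str, port: int = 443) -> int:
    """Return how many days ago the host's TLS certificate was issued."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    issued = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notBefore"]), tz=timezone.utc
    )
    return (datetime.now(timezone.utc) - issued).days


def run_monthly_check() -> list[str]:
    """Flag hosts whose certificates are older than the control allows."""
    findings = []
    for host in PRODUCTION_HOSTS:
        try:
            age = certificate_age_days(host)
        except (OSError, ssl.SSLError) as exc:
            findings.append(f"{host}: check failed ({exc})")
            continue
        if age > MAX_AGE_DAYS:
            findings.append(f"{host}: certificate is {age} days old")
    return findings


if __name__ == "__main__":
    for finding in run_monthly_check():
        print("NON-CONFORMITY:", finding)
```

A scheduler (cron, a CI job, or a GRC platform’s built-in automation) could run this monthly and attach the output as evidence for the control.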
Improved Threat Response
Another great benefit of AI is improved threat detection and response. For example, generative AI can provide updates and alerts on new or changing threats. Combining this with API-driven integrations allows organizations to set parameters and default responses — “if X event occurs, respond in Y way”.
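As a rough illustration of that pattern, the sketch below maps event types to default response actions. The event names and response functions are hypothetical placeholders; in practice each action would call the API of your firewall, identity provider, or SOAR tool.

```python
# A minimal sketch of "if X event occurs, respond in Y way" rules.
# The event types, actions, and print statements stand in for real
# API-driven integrations, which are assumptions, not a specific product.
from typing import Callable


def block_source_ip(event: dict) -> None:
    print(f"Blocking IP {event['source_ip']} at the firewall")  # firewall API call here


def force_password_reset(event: dict) -> None:
    print(f"Forcing password reset for {event['user']}")  # IAM API call here


def open_ticket(event: dict) -> None:
    print(f"Opening a ticket for human review: {event}")  # ticketing API call here


# Default responses keyed by event type: "if X occurs, respond in Y way".
PLAYBOOK: dict[str, Callable[[dict], None]] = {
    "brute_force_login": block_source_ip,
    "impossible_travel": force_password_reset,
}


def handle_event(event: dict) -> None:
    action = PLAYBOOK.get(event["type"], open_ticket)  # unknown events go to a human
    action(event)


handle_event({"type": "brute_force_login", "source_ip": "203.0.113.7"})
handle_event({"type": "unknown_anomaly", "user": "jdoe"})
```

Note the fallback: anything outside the predefined rules is routed to a person rather than handled automatically.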
Further, AI unlocks the ability to analyze statistical trends in near-real time and identify outliers for action in a way that would be impossible to do manually. These insights enable AI to produce smarter, more accurate recommendations for reducing threats because they are contextualized to the business. It’s important to remember that threats don’t stand alone, so we must focus on the relationships between them. This ultimately drives the ability to set common resolutions and unlock process AI for threat reduction.
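A minimal sketch of the outlier idea, assuming a simple z-score test over hourly failed-login counts; real systems use richer statistical or machine-learning models, but the principle of flagging values far from the norm for action is the same.

```python
# A minimal sketch of flagging statistical outliers for action.
# The sample data and the 2-sigma threshold are illustrative assumptions.
from statistics import mean, stdev


def find_outliers(samples: list[float], threshold: float = 2.0) -> list[int]:
    """Return indexes of samples more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples) if abs(x - mu) / sigma > threshold]


# Hourly failed-login counts; the spike at index 5 is the one worth investigating.
hourly_failed_logins = [12, 9, 14, 11, 10, 220, 13, 12]
print(find_outliers(hourly_failed_logins))  # -> [5]
```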
Streamlined Incident Response
Similar to the benefits noted for threat detection, AI can greatly improve incident response time as well. Generative AI can provide updates and alerts on new or trending incidents and breaches, including the vulnerabilities that caused them. By creating triggerable workflows with automatic containment response actions, incident response becomes more streamlined and efficient.
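Below is a minimal sketch of such a triggerable workflow. The `isolate_host` integration, the incident categories, and the severity cut-off are all hypothetical assumptions; in practice they would map to your EDR or network tooling and be tuned to your own risk appetite.

```python
# A minimal sketch of a triggerable containment workflow.
# isolate_host() and notify_responders() are placeholders for real
# EDR, network, and paging integrations.
from dataclasses import dataclass


@dataclass
class Incident:
    host: str
    category: str
    severity: int  # 1 (low) to 5 (critical)


def isolate_host(host: str) -> None:
    print(f"[containment] isolating {host} from the network")  # EDR/network API call here


def notify_responders(incident: Incident) -> None:
    print(f"[notify] paging on-call team about {incident.category} on {incident.host}")


# Containment triggers: which categories warrant automatic isolation,
# and the severity at which the workflow fires without waiting on a human.
AUTO_CONTAIN_CATEGORIES = {"ransomware", "credential_theft"}
AUTO_CONTAIN_SEVERITY = 4


def on_incident(incident: Incident) -> None:
    notify_responders(incident)
    if incident.category in AUTO_CONTAIN_CATEGORIES and incident.severity >= AUTO_CONTAIN_SEVERITY:
        isolate_host(incident.host)


on_incident(Incident(host="fileserver01", category="ransomware", severity=5))
on_incident(Incident(host="laptop-042", category="phishing", severity=2))
```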
In addition, generative AI can parse through data much faster than a human, ultimately leading to faster resolution of incidents. Combining that with AI note-taking assures that all relevant data is captured during the incident response process and that detailed after-action reports can be generated quickly.
This data can then be repurposed to train models for simulated events, which makes future incident response exercises more valuable. All of this means organizations can make smarter, more informed decisions throughout the incident response lifecycle.
Common Misconceptions
AI Can Replace Humans
The most common misconception about AI is that it will replace humans. That fear is not new; when personal computers started to gain traction in the 1980s and 90s, people worried about mass layoffs due to these advancements. While some changes do happen with new technologies, and sometimes those changes are disruptive, in the long run, new technology increases productivity and innovation. That ultimately leads to the creation of new jobs and new industries.
AI Is Unbiased and Fair
Another common misconception is that AI is unbiased and fair, because it isn’t corrupted or affected by human influence. Ironically, an equally large group of people believe artificial intelligence is biased and unfair. In reality, AI is neither biased nor unbiased — but AI can be manipulated or influenced by the humans who design and deploy the technology and select the data AI uses to learn.
AI Is Complicated, Expensive, and Intrusive
Plenty of people are interested in AI but believe it’s too complicated or expensive. (Or they simply don’t know where to start.) This often happens when someone takes too big a bite of AI at once. As noted earlier, AI exists in many forms and has many uses within the cybersecurity world. Take baby steps into AI, so that you can find the right “amount” of AI and the best use cases for your business.
The Unseen Risks of AI
Over-Reliance
As new technologies become more popular, it’s easy to rely on them too much. In our personal lives, programming a coffee pot to brew at a certain time each day delivers efficiency in your morning — but what happens if it fails and the coffee doesn’t brew? On a larger scale, think about what happens at your organization if Slack goes down.
With AI, the stakes can be even higher. First, without continuous tuning and adjustment, AI can produce false positives and false negatives (especially considering how complex the threat landscape is and how rapidly it changes). AI systems may struggle to keep up with emerging threats because they require frequent updates and adjustments to stay current.
For example, you might have an automated system to identify and patch vulnerabilities in your IT environment. But new vulnerabilities are discovered all the time; if you don’t have a human overseeing that automated AI system, it might miss a critical new vulnerability and let that weakness go unpatched. Then your company can become a target for hackers.
This is not a hypothetical; it happened to consumer credit reporting agency Equifax in 2017, leading to a breach that exposed the sensitive personal information of 147 million people.
The incident serves as a stark reminder that overreliance on automation without proper oversight, monitoring, and human intervention can leave organizations vulnerable to known threats. It also emphasizes the importance of maintaining a balance between automation and human expertise in cybersecurity, assuring that automated systems are regularly evaluated, updated, and complemented with human analysis to identify and mitigate risks effectively.
Ethical Considerations of AI
As noted earlier, although AI is not inherently biased or unfair, the humans who design, train, and deploy it shape what it does. The output is only as accurate and reliable as the data fed into it. If you teach AI that 18 + 1 = 20, it will report that as the correct answer.
Consider a common generative AI use case: content creation. Using AI to draft a policy, report, or blog post may seem innocent enough. But if the content isn’t factual (or worse, is being manipulated by others), AI can lead organizations astray. Assuring accountability and transparency in how the organization uses AI is key to its successful use.
Being cautious, especially with automated decision-making tools, reduces the risk of injecting biases into AI. In 2018, it was revealed that Amazon had developed an automated system to review job applicants’ resumes and provide recommendations for hiring. However, the system exhibited gender bias, penalizing resumes that included terms associated with women.
According to a report by Reuters in 2018, Amazon’s automated system learned from resumes submitted to the company over 10 years. Due to the historically male-dominated tech industry, the majority of resumes used for training the system were from male applicants. As a result, the system developed a bias against resumes that contained terms commonly found in women’s resumes, such as “women’s college” or membership in women’s organizations.
The unintended consequence of this bias was that the automated system systematically downgraded resumes from female applicants, leading to gender discrimination in the recruitment process. Thankfully, Amazon recognized the bias (and the potential legal implications) and stopped using the system. But this is a great example of how AI can wander into unethical situations.
Limited Contextual Insight
AI relies on predefined rules and algorithms, but those rules often lack context, so organizations need to strike a balance between automation and human oversight to mitigate unseen risks. The easiest way to do that is to look at where and how automation is being used in your company, and then assure the risks don’t outweigh the rewards. For example, if an organization uses AI to improve incident response but unintentionally shuts down the business in the process, is it actually improving security?
That’s exactly what happened during the WannaCry outbreak in 2017. Some security organizations and automated systems indiscriminately shut down or blocked network traffic to mitigate the spread of the ransomware, resulting in unintended disruptions.
Some automated security systems (particularly intrusion prevention systems and firewalls) responded to the WannaCry outbreak by applying broad rules to block suspicious network activity. However, due to the lack of proper context and understanding of the malware’s behavior, legitimate network traffic, including critical services and systems, was mistakenly blocked. This caused significant operational disruptions and financial losses for those organizations.
The unintended consequences of the automated response in this case highlighted the importance of considering the context and potential impact of automated actions in cybersecurity. Blindly applying broad rules without proper analysis and understanding of the specific threat can result in unnecessary disruptions and collateral damage.
Best Practices for Implementing AI
Phase 1: Objectives and Use Cases
- Identify specific tasks or processes to be automated
- Evaluate which problems or issues the AI should resolve
- Determine the desired outcome and benefits
- Develop a mechanism to measure the change
Phase 2: Research and Define Options
- Evaluate your existing technology
- Identify gaps and necessary additions
- Define workflows, trigger points, and integrations
- Assess functionality and security
RiskInsider Tip: As new AI technologies emerge, there are numerous open-source and free options available. And because they are easy to obtain, oftentimes these tools bypass corporate third-party risk management processes and are not included in company-wide security controls. If an employee creates a free ChatGPT account, uploads the organization’s strategic plan, and asks ChatGPT to create a slide deck, will the organization ever know? Be sure to include frequent education and monitoring for new tools and unusual usage.
Phase 3: Monitor and Scale
- Assess the outputs and functionality of the AI
- Determine the effectiveness of the AI
- Fine-tune and adjust AI to scale and improve
- Provide training and education related to AI
As organizations deploy, measure, and adjust AI usage, new and engaging use cases will likely emerge that will challenge the cybersecurity industry to stay on its toes. Don’t take this journey alone. Read more in our RiskInsiders’ blogs or request a demo of the RiskOptics Solutions today.
—
Sources:
- National Institute of Standards and Technology (NIST). “The Role of Automation in Cybersecurity: Benefits and Challenges.” https://www.nist.gov/publications/role-automation-cybersecurity-benefits-and-challenges
- Center for Strategic and International Studies (CSIS). “Automation in Cybersecurity: Risks and Best Practices.” https://www.csis.org/analysis/automation-cybersecurity-risks-and-best-practices
- United States Computer Emergency Readiness Team (US-CERT). “Automation and Security.” https://www.us-cert.gov/ncas/tips/ST18-003
- SANS Institute. “Balancing Automation and Human Judgment in Security Operations.” https://www.sans.org/reading-room/whitepapers/threats/balancing-automation-human-judgment-security-operations-38980
- Carnegie Endowment for International Peace. “The Ethical Implications of Artificial Intelligence in Cybersecurity.” https://carnegieendowment.org/2019/11/05/ethical-implications-of-artificial-intelligence-in-cybersecurity-pub-80260
- Reuters. (2018). “Amazon Scraps Secret AI Recruiting Tool that Showed Bias Against Women.” https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
- Wired. (2017). “How an Accidental ‘Kill Switch’ Slowed Friday’s Massive Ransomware Attack.” https://www.wired.com/2017/05/wannacry-ransomware-ddos-attacks/
- MITRE. (2020). “Playbook Development for Cybersecurity Automation.” https://www.mitre.org/sites/default/files/publications/pr-20-1166-playbook-development-for-cybersecurity-automation.pdf
- IBM. “What Is Artificial Intelligence?” https://www.ibm.com/topics/artificial-intelligence
- ZDNET. “ChatGPT can finally access the internet in real time, but there’s a catch.” https://www.zdnet.com/article/chatgpt-can-finally-access-the-internet-in-real-time-but-theres-a-catch/
- Dark Reading. (2023). “Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears.” https://www.darkreading.com/risk/employees-feeding-sensitive-business-data-chatgpt-raising-security-fears