AI-supported cybersecurity: Three essential elements that demand your attention.
In early 2023, we witnessed a surge in the use of OpenAI’s ChatGPT, accompanied by public anxiety about an impending Artificial General Intelligence (AGI) revolution and predicted disruption across markets. Undoubtedly, AI is poised to have a profound and transformative influence on many aspects of our lives. However, it is now crucial to adopt a more measured and thoughtful perspective on how AI will reshape the world, particularly in the realm of cybersecurity. But before delving into that, let’s take a moment to discuss chess.
In 2018, one of us had the privilege of hearing from and briefly conversing with Garry Kasparov, the former world chess champion who held the title from 1985 to 2000. He recounted losing for the first time to Deep Blue, IBM’s chess-playing supercomputer, describing it as a crushing defeat. At first he bounced back and won more often than not, but over time the balance shifted and the machine consistently emerged victorious. Kasparov made a crucial observation: “For a period of about ten years, the world of chess was dominated by computer-assisted humans.” Eventually, AI took the lead on its own, and it’s worth noting that today, the strategies employed by AI in many games perplex even the greatest chess masters.
The key takeaway here is that AI-assisted humans possess an advantage. AI is essentially a toolbox composed primarily of machine learning and large language models (LLMs), many of which have been applied for over a decade to tractable problems such as novel malware detection and fraud prevention. However, we are now in an era where advancements in LLMs surpass previous achievements. Even if the AI market bubble bursts, AI has become an integral part of our world, and cybersecurity will undergo profound changes as a result. Before we proceed, it’s essential to acknowledge a critical point, borrowed from Daniel Miessler: AI currently exhibits understanding but lacks reasoning, initiative, and sentience. This distinction dispels fears and exaggerations about machines taking over, reassuring us that we are not yet in an age where silicon minds operate without the involvement of human minds.
Now, let’s explore three aspects at the intersection of cybersecurity and AI: the security of AI, AI in defense, and AI in offense.
The Security of AI:
Companies find themselves in a predicament similar to the early days of instant messaging, search engines, and cloud computing. They must embrace and adapt to AI or risk falling behind competitors who gain a disruptive technological edge. This means they cannot simply block AI adoption. As with the advent of cloud computing, the initial step is to create private instances of LLMs, especially as public AI offerings rush to adapt to market demands.
Drawing parallels with the cloud revolution, those contemplating private, hybrid, or public AI deployments must carefully consider various issues, including privacy, intellectual property, and governance.
There are also concerns related to social justice: datasets can carry biases at ingestion, models can inherit or amplify those biases, and outputs can produce unforeseen consequences. In this context, the following considerations are critical:
- Ethical Use Review Board: A governing body should oversee and monitor the correct and ethical use of AI, similar to how other industries regulate research, such as healthcare’s oversight of cancer research.
- Data Sourcing Controls: Copyright and privacy considerations must be addressed when ingesting data. Anonymization is crucial, even though inferential techniques may still re-identify data, along with safeguards against poisoning attacks and sabotage.
- Access Controls: Access should be granted for specific research purposes and restricted to individuals and systems with unique identities, all subject to post-facto accountability measures. This includes data grooming, tuning, and maintenance.
- Specific and Targeted Outputs: AI outputs should be intended for specific business-related applications, and general interrogation or open API access should only be allowed under strict controls and management of the agents using the API.
- Security Role of AI: Consider appointing a dedicated AI security and privacy manager. This individual’s responsibilities include safeguarding against model inversion and extraction attacks (recovering the features and inputs used for model training), monitoring for undesirable outcomes (hallucination, misinformation, etc.), and ensuring long-term privacy and manipulation prevention. They also oversee contractual agreements, collaborate with legal teams, work with supply chain security experts, coordinate with AI toolkit teams, verify factual marketing claims, and more.
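The access-control and targeted-output points above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the identities, purposes, and `call_private_llm` placeholder are invented for this sketch, not a real API): every query is tied to a unique identity, checked against a purpose-specific grant, and logged either way for post-facto accountability.

```python
# Hypothetical sketch: gating access to a private LLM by identity and purpose,
# with an audit trail for post-facto accountability. All names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

def call_private_llm(prompt: str) -> str:
    # Placeholder standing in for the actual private model endpoint.
    return f"[model output for: {prompt}]"

@dataclass
class AccessGate:
    grants: dict = field(default_factory=dict)    # identity -> approved purposes
    audit_log: list = field(default_factory=list)

    def grant(self, identity: str, purpose: str) -> None:
        self.grants.setdefault(identity, set()).add(purpose)

    def query(self, identity: str, purpose: str, prompt: str) -> str:
        allowed = purpose in self.grants.get(identity, set())
        # Every attempt is logged, allowed or not, so reviewers can audit later.
        self.audit_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "who": identity,
            "purpose": purpose,
            "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"{identity} lacks a grant for {purpose!r}")
        return call_private_llm(prompt)
```

In practice the grant table would live in an identity provider and the log in a tamper-evident store; the point is simply that open interrogation of the model is never the default path.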
AI in Defense:
In the realm of cybersecurity, there are also practical applications of AI that enhance the practice itself. This is where we must consider the AI-assisted human approach when envisioning the future of security services. While the possibilities are numerous, wherever there are routine tasks in cybersecurity, from queries and scripting to integration and repetitive data analysis, there exists an opportunity for targeted AI application. When a human with a carbon-based brain is tasked with executing detailed work on a large scale, the potential for human errors increases, and the human becomes less efficient.
Human minds excel at creativity and inspiration, and retain what a silicon-based brain still lacks: reasoning, sentience, and initiative. The primary potential for AI in cyber defense lies in improving process efficiencies, extrapolating insights from datasets, and eliminating repetitive tasks, among other things. This potential can be harnessed effectively as long as the pitfalls of “leaky abstraction” are avoided, ensuring that users understand the actions performed by the machine on their behalf.
For instance, there’s an opportunity for guided incident response that can anticipate an attacker’s next moves, facilitate faster learning for security analysts, and enhance the efficiency of human-machine interactions through a co-pilot (not auto-pilot) approach. However, it’s essential to ensure that those receiving assistance in incident response comprehend the information presented to them, have the capacity to disagree with suggestions, make corrections, and apply their uniquely human creativity and insight.
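The co-pilot (not auto-pilot) distinction can be made concrete with a small sketch. The rule-based `suggest_next_step` below is a stand-in for a real model, and the alert types and actions are invented for illustration; the essential shape is that the machine proposes a step with a stated rationale, and the human decision function may accept, reject, or replace it before anything is executed.

```python
# Hypothetical co-pilot loop for incident response. The AI side proposes a next
# step with a rationale the analyst can inspect; the human side decides what
# actually runs. A trivial rule stands in for a real model's suggestion.

def suggest_next_step(alert: dict) -> dict:
    if alert.get("type") == "credential-stuffing":
        return {"action": "force-password-reset",
                "rationale": "repeated failed logins across many accounts"}
    return {"action": "escalate-to-tier2",
            "rationale": "no playbook match for this alert type"}

def copilot_review(alert: dict, analyst_decision) -> dict:
    suggestion = suggest_next_step(alert)
    # The human stays in the loop: analyst_decision may return the suggested
    # action unchanged, or override it with the analyst's own choice.
    final = analyst_decision(suggestion)
    return {"suggested": suggestion["action"], "executed": final}
```

For example, `copilot_review(alert, lambda s: s["action"])` accepts the suggestion, while `copilot_review(alert, lambda s: "isolate-host")` records the machine’s proposal but executes the analyst’s correction instead.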
If this begins to resemble our previous discussion on automation, that’s because it should. Many of the challenges mentioned in that context, such as the potential for predictability exploited by attackers through automation, can now be addressed using AI technology. In essence, AI can make the automation mindset more practical and effective. Furthermore, AI can enhance the implementation of a zero-trust platform for managing the intricacies of the IT landscape, as discussed in our previous article on network visibility. It’s important to note that these benefits are not automatically granted upon deploying LLMs and other AI tools, but they become manageable and achievable projects.
AI in Offense:
The practice of security itself must transform, because adversaries are employing AI tools to enhance their own capabilities. Just as businesses cannot ignore AI without risking disruption from competitors, the cybersecurity field is compelled to evolve because its adversaries are already leveraging it. This means that individuals within security architecture groups must collaborate with the corporate AI review boards mentioned earlier, and potentially take a leading role in considering the adoption of AI:
- Red teams must utilize the same tools as adversaries.
- Blue teams should employ AI tools in incident response.
- Governance, Risk Management, and Compliance (GRC) teams can gain efficiencies in interpreting natural language policies using AI.
- Data protection teams must utilize AI to gain a deeper understanding of data flow.
- Identity and access management teams need AI to drive zero trust principles and establish progressively unique and specific entitlements in near real-time.
- Deception technologies can employ AI to establish a negative trust in infrastructure to thwart adversaries.
In conclusion, we are entering an era characterized not by AI dominance over humans but by the triumph of AI-assisted human capabilities. We cannot exclude AI toolkits, because competitors and adversaries will employ them; the central challenge is establishing appropriate guidelines and thriving in this new landscape. In the short term, adversaries in particular will become more adept at activities like phishing and malware generation. In the long term, however, AI’s defensive applications give those who innovate in the digital realm the ability to prevail in cyber conflicts against those who seek to breach our defenses.