Companies are in a cybersecurity arms race. Attackers have easy access to more tools as the lines between state actors and criminal gangs fade. Malware and identity theft kits are easy to find and inexpensive to buy on dark web exchanges. AI-enabled attack kits are on the way, and we can expect that they will be readily available at commodity prices in the next few years.
The list of actual AI applications is already long and growing. Faster and more accurate credit scoring for banks, improved disease diagnosis and treatment development for health care companies, and enhanced engineering and production capabilities for manufacturers are just a few examples. A 2017 survey by BCG and MIT Sloan Management Review found that about 20% of companies had already incorporated AI into some offerings or processes and that 70% of executives expected AI to play a significant role at their companies within five years.
Is AI a threat…
With all the benefits, however, come substantial risks.
First, AI systems are generally empowered to make deductions and decisions automatically, without day-to-day human involvement. They can be compromised, and the compromise can go undetected for a long time. Second, the reasons that a machine-learning or AI program makes particular deductions and decisions are not always immediately clear to overseers: the underlying decision-making models and data are not necessarily transparent or quickly interpretable (although significant effort is underway to improve the transparency of such tools). This means that even if a violation is detected, its purpose can remain opaque. As more machine-learning or AI systems are connected to, or placed in control of, physical systems, the risk of serious consequences – including injury and death – from malevolent interference rises. And we have already seen that while cybersecurity concerns factor into the adoption of AI, especially for pioneers in the field, they matter less to companies that are lagging behind.
Companies’ AI initiatives present an array of potential vulnerabilities, including malicious corruption or manipulation of training data, implementations, and component configurations. No industry is immune, and machine learning and AI already play a role in many areas, heightening several categories of risk. For example:
- Financial (credit fraud might be easier, for example)
- Brand or reputational (a company might appear discriminatory)
- Safety, health, and environment (systems might be compromised that control cyber-physical devices that manage traffic flow, train routing, or dam overflow)
- Patient safety (interference might occur in medical devices or recommendation systems in a clinical setting)
- Intervention in, or meddling with, devices connected to the Internet of Things (IoT) that use machine learning or AI systems
… or a solution?
The good news for companies is that they can tap the power of AI to both upgrade their cybersecurity capabilities and protect their AI initiatives (so long as they layer in appropriate protections to the AI systems being used for defense). Moreover, investments in AI will likely have multiple forms of payback.
For one, companies can build in better protection and at least the potential to stay even with the bad guys. AI not only enhances existing detection and response capabilities but also enables new abilities in preventive defense. Companies can also streamline and improve the security operating model by reducing time-consuming, complex manual inspection and intervention processes and redirecting human effort to supervisory and problem-solving tasks. Consider the number of cyber incidents the average large bank deals with every day, from the ordinary and innocent (customers mistyping passwords, for example) to attempted attacks. Banks need automated systems to separate the truly dangerous signal from the more easily addressed noise. In the medium to long term, companies that invest in AI can realize operational efficiencies and potential operating-expense savings.
To enhance existing cybersecurity systems and practices, organizations can apply AI at three levels.
1) Prevention and Protection. For some time, researchers have focused on AI’s potential to stop cyberintruders. While it is still early days, the future of cybersecurity will likely benefit from more AI-enabled prevention and protection systems that use advanced machine learning techniques to harden defenses. These systems will also likely allow humans to interact flexibly with algorithmic decision making.
2) Detection. AI enables some fundamental shifts. One is from signature-based detection (a set of static rules that relies on always being up-to-date and recognizing an attack signature) to more flexible and continuously improving methods that understand what baseline, or normal, network and system activity looks like. AI algorithms can detect any changes that appear abnormal – without needing an advance definition of abnormal. Another shift is away from classic machine-learning approaches that require large, curated training datasets. Some companies have employed machine-learning programs in their security systems for several years, and more advanced AI-based detection technologies (such as reinforcement learning and deep neural networks) are now gaining traction, especially in IoT applications. AI can also provide insights into sources of potential threats from internal and external sensors or small pieces of monitoring software that evaluate digital traffic by performing deep packet inspection. Note that for most companies, AI-based detection and potential automated attribution will require careful policy design and oversight to conform with laws and regulations governing data use.
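The shift from static signatures to learned baselines can be illustrated with a minimal sketch. This is not any vendor's product – it simply learns the statistics of hypothetical historical activity (here, per-minute login-failure counts) and flags deviations, with no predefined definition of "abnormal":

```python
import statistics

def fit_baseline(samples):
    """Learn what 'normal' activity looks like from historical counts."""
    return statistics.fmean(samples), statistics.pstdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag activity that deviates sharply from the learned baseline,
    without any predefined attack signature."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical per-minute login-failure counts from past logs
history = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4, 3, 5]
baseline = fit_baseline(history)

print(is_anomalous(5, baseline))   # ordinary traffic -> False
print(is_anomalous(90, baseline))  # sudden burst of failures -> True
```

Production systems replace the simple threshold with richer models (the reinforcement learning and deep neural networks mentioned above), but the principle – compare live activity against a learned baseline rather than a signature list – is the same.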
3) Response. AI can reduce the workload for cybersecurity analysts by helping to prioritize the risk areas for attention and intelligently automating the manual tasks they typically perform (such as searching through log files for signs of compromise), thus redirecting human effort toward higher-value activities. AI also can facilitate intelligent responses to attacks, either outside or inside the perimeter, based on shared knowledge and learning. For example, today we have the technology to deploy semiautonomous, intelligent lures or “traps” that duplicate the environment to be infiltrated, make attackers believe they are on the intended path, and then use the deceit to identify the culprit. AI-enabled response systems can segregate networks dynamically to isolate valuable assets in safe “places” or redirect attackers away from vulnerabilities or valuable data. This helps with efficiency, as analysts can focus on investigating high-probability signals rather than spending time finding them.
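The log-triage idea above can be sketched in a few lines. The indicator patterns and weights below are hypothetical stand-ins (a deployed system would learn or continually update them); the point is only the workflow: score each log line, discard the noise, and hand analysts a ranked queue:

```python
import re

# Hypothetical indicator patterns with hand-assigned weights; a real
# system would learn these rather than hard-code them.
INDICATORS = [
    (re.compile(r"failed password", re.I), 1),
    (re.compile(r"authentication failure", re.I), 3),
    (re.compile(r"segfault|core dumped", re.I), 5),
]

def triage(log_lines):
    """Score each line against the indicators and return matching
    lines ordered highest-risk first for analyst review."""
    scored = []
    for line in log_lines:
        score = sum(w for pat, w in INDICATORS if pat.search(line))
        if score:                      # drop lines with no indicators
            scored.append((score, line))
    return sorted(scored, reverse=True)

logs = [
    "cron[311]: job started",
    "Failed password for root from 203.0.113.7",
    "sudo: alice : authentication failure",
]
for score, line in triage(logs):
    print(score, line)
```

Here the cron entry is filtered out entirely, while the two suspicious lines are surfaced in priority order – the "filter signal from noise" step that frees analysts for investigation.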
On the dark web today, anyone can buy a tailor-made virus guaranteed not to be detected by the 10 or 20 major antivirus tools. Defensive systems gain knowledge over time, but that knowledge could be thwarted by an AI algorithm that adds to the stealthiness of a malware kit over time, masking the malware’s identity based on what it learns defense systems are detecting.
AI raises the stakes, with an advantage for the attackers: they need to get it right only once to score, while defenders need to defend successfully 24/7/365.
The intelligence may be “artificial,” but the risks are all too real. Companies can use powerful new capabilities to enhance their overall cybersecurity efforts and stay even with the bad guys in the security arms race. They also need to evaluate how AI is used in their products and services and implement specific security measures to protect against new forms of attack. More and more cybersecurity products will incorporate AI capabilities, and external partners can help integrate this capability into cybersecurity portfolios. Companies can start with an objective assessment of where they stand, using the questions outlined above. There is no good reason for delay.