For people responsible for corporate security – everyone from CIOs to CISOs and CROs – AI presents two types of risk that change the nature of their jobs. The first is that criminals, hostile state actors, unscrupulous competitors, and insider threats will manipulate their companies’ fledgling AI programs. The second is that attackers will use AI in a variety of ways to exploit vulnerabilities in their victims’ defenses. The open question is whose AI will come out ahead – the defender’s or the attacker’s.
Companies are locked in a cybersecurity arms race. Attackers have easy access to an expanding toolset as the lines between state actors and criminal gangs blur. Malware and identity-theft kits are easy to find and inexpensive to buy on dark-web exchanges. AI-enabled attack kits are on the way, and we can expect them to be readily available at commodity prices within the next few years.