I was recently asked how artificial intelligence would impact cybersecurity.
It struck me that the question, phrased this way, oversimplifies what AI is. In reality, the future will see multiple AIs impacting the entire cybersecurity ecosystem. And every other ecosystem, for that matter.
First, we must remember that AI is not a single homogeneous entity. Of the multiple AIs we can expect to see emerging, some will be good and others not so good; some will be high-performing and others badly designed. At the moment, most people don't really appreciate how much AI systems can differ from one another.
Consider These Three Issues
First, measurement. One of the main issues is how to measure an AI system's outputs and outcomes. Conventionally, we use certification to measure, test, and evaluate tools. Shoehorning in an AI system because "we all need it," without a strategy for measuring its efficacy, is a recipe for failure.
Second, data quality. Even a very high-performing AI can acquire bad habits through its machine learning capabilities. If we feed an AI bad, inaccurate, or incomplete data, it is likely to generate sub-optimal or even harmful outputs. Who will tell the AI what it should be learning and what it shouldn't? Think back, for example, to the controversy over AI performing racial profiling. One thing we must avoid is scaling biases through poor-quality data. How good is your data?
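To make the bias-scaling point concrete, here is a deliberately minimal toy sketch (all names and numbers are hypothetical, not from any real system). The "model" simply learns the most frequent label in its training data, standing in for any system that overfits to the dominant pattern it was fed:

```python
# Toy sketch: how skewed training data scales bias.
# All labels and proportions are hypothetical.
from collections import Counter

def train_majority_classifier(labels):
    """'Learns' only the most frequent label -- a stand-in for any
    model that latches onto the dominant pattern in its training set."""
    return Counter(labels).most_common(1)[0][0]

# Biased sample: 95% of logged incidents come from one user group,
# purely because that group was monitored more heavily.
biased_training = ["group_a"] * 95 + ["group_b"] * 5

model = train_majority_classifier(biased_training)

# The model now predicts "group_a" for every future incident,
# amplifying the original monitoring bias at machine speed.
print(model)  # -> group_a
```

A real model is far more sophisticated, but the failure mode is the same: whatever skew exists in the data gets reproduced, and then applied automatically at scale.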
Ideally, your business should be generating good-quality data from your own network, so that your AI system learns the normal working patterns of your company and can rapidly flag anomalies. Consideration must also be given to cybersecurity intelligence to combat threats through detection, network visualization, and network-wide collaboration. High data quality will yield better results at lower cost, while poor-quality data may see your AI working against you.
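The "learn normal, flag the abnormal" idea above can be sketched in a few lines. This is a simplified illustration, not a production detector: the traffic figures and the z-score threshold are invented for the example, and real systems use far richer features than a single hourly byte count:

```python
# Sketch: flagging anomalies against a baseline of "normal" network
# activity learned from your own network. All values are hypothetical.
import statistics

def find_anomalies(baseline, observed, z_threshold=3.0):
    """Flag observations more than z_threshold standard deviations
    from the baseline mean -- a minimal model of 'normal working patterns'."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observed if abs(x - mean) / stdev > z_threshold]

# Baseline: typical outbound traffic per hour (MB) on your network.
baseline_mb = [102, 98, 110, 95, 105, 99, 101, 97, 104, 100]

# Today's readings include a 900 MB spike -- possible exfiltration.
today_mb = [101, 99, 900, 103]

print(find_anomalies(baseline_mb, today_mb))  # -> [900]
```

Note that the quality of the baseline is everything here: if the "normal" data already contains attacker traffic, the detector learns to treat it as normal, which is exactly the bad-data problem described above.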
Third, the need for AI. A pressing cybersecurity issue is assessing whether companies that already follow best practices, built on mature strategies, policies, and processes, need to deploy AI at all. And for a company that is lagging behind on cybersecurity, would an AI system simply inherit its legacy bad habits?
You Have No Real Choice
It’s inevitable that hackers will employ the best algorithms available to attack your system, probe the strength of your cybersecurity defenses, and quickly learn your system’s capabilities. The more data they steal, the better their algorithms will get.
And the cost of neglecting cybersecurity is high. Accenture reported in 2017 that the average cost of cyber-attacks to organizations in industries like Financial Services, Utilities, and Energy runs to US$17 million.
That means you cannot sit out the race when it comes to AI and cybersecurity. Either you need your own AI system or a hired gun. We need strong technology at the base. We need fast machine learning. We need skilled personnel to set the direction in which the AI learns. And finally, we need a large volume of high-quality data.