AI & Machine Learning as a Force Multiplier

One of the terms most abused and misused by information security vendors is Artificial Intelligence (AI). AI and machine learning (ML) are, however, not a passing fad. Let me define these terms in more detail.

Artificial intelligence has been the subject of sci-fi novels for several decades now: computer systems with “brains” that rival or surpass those of humans. The idea that AI systems are “thinking machines” that in any way rival the human brain is fiction, at least for the near future.

Expert systems are rule-based decision trees that capture the decision-making of experts. For example, a help desk manager might develop a flow chart for troubleshooting a customer’s computer problems over the phone: “Is your computer plugged in?” “Are there any error codes?” Expert systems codify a person’s expertise so that others, or even computer automation, can apply it without an expert present. Expert systems follow very specific rules, without exception.
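To make that concrete, here is a minimal sketch of an expert system as a fixed, rule-based decision tree. The questions, error codes, and fixes are hypothetical, invented only to mirror the help-desk flow chart described above:

```python
# Hedged sketch of an expert system: a fixed, rule-based decision tree
# encoding a help-desk expert's flow chart. Questions, error codes, and
# fixes here are hypothetical, not taken from any real product.
from typing import Optional

def troubleshoot(plugged_in: bool, error_code: Optional[str]) -> str:
    # Each branch mirrors one question in the expert's flow chart.
    if not plugged_in:
        return "Plug the computer in and try again."
    if error_code is None:
        return "No error code shown: escalate to a technician."
    known_fixes = {
        "E01": "Reseat the memory modules.",
        "E02": "Replace the power supply.",
    }
    # Anything outside the predetermined alternatives stumps the system.
    return known_fixes.get(error_code, "Unknown code: escalate to a technician.")

print(troubleshoot(plugged_in=True, error_code="E01"))
```

Note that the rules never adapt: an error code the expert didn’t anticipate can only be escalated, which is exactly the shortcoming discussed next.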

An expert system has the shortcoming that it can only make decisions among predetermined alternatives. Faced with an unexpected response, it doesn’t know what to do. An applied AI system has some ability to make decisions when uncertainty or new situations arise. Think of the brains behind a self-driving car (autonomous vehicle): it can identify and classify objects (a human vs. a chicken crossing the road), it may recognize speech or gauge the driver’s mood from facial expressions, and it makes predictions and takes autonomous action to simultaneously follow the rules of the road, optimize passenger comfort, maximize fuel economy, and avoid hitting other cars and objects. A self-driving car is an example of generalized (broad) AI, making many decisions based on varied inputs. It uses machine learning to learn from experience and improve. AI is adaptive, compared to the rigid rule base of an expert system.

Machine learning improves when you have more extensive data sets, realistic scenarios, and experts to help in the learning process. A narrow AI system might be designed only to distinguish between objects: “Which is a person, a chicken, or a toaster?” The learning process might involve many photos from the Internet and human assistance in labeling the objects, until the training is good enough for the AI system to classify objects on its own. This is precisely what search engines do to classify images and present you with pictures of a “red ball” when you perform such a search.
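A minimal sketch of that supervised-learning loop follows, assuming the images have already been reduced to labeled feature vectors. The data here is random stand-in noise and the labels are the toy classes from the text, so only the workflow (train on labeled examples, then classify unseen ones) is the point:

```python
# Hedged sketch of supervised classification. The feature vectors are
# random stand-ins for real image features, and the labels are the toy
# classes from the text; only the train-then-predict workflow is real.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((600, 1024))                                  # 600 "images", 1,024 features each
y = rng.choice(["person", "chicken", "toaster"], size=600)   # human-supplied labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)            # "training" on the labeled examples

# Once trained, the model classifies new images on its own.
print(clf.predict(X_test[:5]))
print("accuracy:", clf.score(X_test, y_test))
```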

An example of a more complex AI system is a GPS application (a brain in the cloud) that can tell you, in real time, the fastest route home from work during rush hour. You can imagine the many inputs and variables needed to make reasonably accurate real-time predictions. While the physics of moving traffic is well understood, your GPS still cannot predict when humans will make the decisions that lead to accidents, snarl traffic, and force you onto a secondary route.
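As an illustration of that kind of many-input prediction, here is a hedged sketch of a travel-time model. The features (hour of day, traffic volume, active incidents) and the synthetic data are my own assumptions, not how any real GPS service works:

```python
# Hedged sketch: predicting commute time from a few traffic features.
# Feature names, data, and penalties are hypothetical assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 2000
hour = rng.integers(0, 24, n)          # hour of day
vehicles = rng.integers(50, 500, n)    # vehicles/hour on the route
incidents = rng.integers(0, 3, n)      # active incidents reported
# Synthetic target: base time plus congestion and incident penalties.
minutes = 20 + 0.05 * vehicles + 8 * incidents + rng.normal(0, 2, n)

X = np.column_stack([hour, vehicles, incidents])
model = GradientBoostingRegressor().fit(X, minutes)

# Predicted commute for 5 p.m., heavy traffic, one incident (hypothetical).
print(model.predict([[17, 420, 1]]))
```

The model learns the known physics-like relationships from data, but, as the text notes, an unforeseen accident is an input it simply never sees in time.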

Deep learning is a subset of machine learning that uses algorithms and artificial neural networks to mimic the structure of the brain. As opposed to a single layer of machine learning, it has multiple layers. To return to our image-recognition example: rather than learning objects by taking data from a photo and asking, “Is this a red ball? Y/N”, deep learning may have layers that look at edges, shapes, textures, colors, and so on. Deep learning requires much more data and more intensive training than single-layer machine learning, but it may provide the nuanced answers needed to build highly complex, independent AI systems in the future.
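A hedged sketch of the single-layer vs. multi-layer distinction: the network below has two hidden layers that loosely stand in for the edge/shape/texture hierarchy described above. The data is synthetic and the layer sizes are arbitrary:

```python
# Hedged sketch: a multi-layer ("deep") neural network classifier.
# The two hidden layers loosely stand in for the edge/shape/texture
# feature hierarchy from the text; data and sizes are synthetic choices.
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=64, n_informative=16,
                           n_classes=3, random_state=0)

deep = MLPClassifier(hidden_layer_sizes=(128, 64),  # two hidden layers
                     max_iter=500, random_state=0)
deep.fit(X, y)
print("training accuracy:", deep.score(X, y))
```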

Figure 1: Relationship between AI, ML, and DL.

If we accept that autopilot in airplanes, autonomous vehicles, and predicting how long it will take you to get home in rush-hour traffic are valid uses of AI, why would we hesitate to look at the benefits AI can bring to cybersecurity? It only makes sense to gather “intelligence” from lower in the stack and roll it up into something actionable to put in front of a human (security analyst).

Large, diverse, and complex network environments can be rife with misconfigurations and vulnerable to attacks and insider threats. A large enterprise network carries many legacy applications, non-compliant systems, IoT, OT, and rogue devices. Add to that the difficulty of monitoring cloud deployments. There are threats we can identify because we “know we know” them, and those we “know we don’t know”; as the threat landscape grows at an exponential rate, more and more will fall into the third category: what we “don’t know we don’t know”.

Just as the threats are growing, so is the impact on factory systems, critical infrastructure, medical equipment and personal health devices, and our own safety and privacy. It is projected that we will have perhaps 30 billion IoT devices connected to the Internet within the next two years. Most IoT devices ship with default passwords, no secure update process, and other undiscovered vulnerabilities. In other words, nothing we connect to any network is truly secure, and every insecure thing puts every other insecure thing at risk. We can’t protect everything equally well (nor should we), so we need segmentation/isolation and monitoring to shorten the time it takes to detect, stop, and recover from an attack. If well trained, AI is well suited to help us identify vulnerabilities and detect attacks in this complex system of systems we have created.

Without the benefits of AI, with validated models and expert training, I will claim it is impossible to keep up with the volume, velocity, and veracity of network, logging, and other diverse security big data. The data needs to be in the right format before a SIEM can ingest it. Feeding it into a SIEM without any machine learning relies only on custom-written triggers. In other words, you write scripts to catch the confluence of the attack patterns you “know you know”. The growing number of SIEM rules (which adds overhead) and the glut of SIEM events and alerts only motivate hiring more and more SOC analysts. Meanwhile, the signal-to-noise ratio is low, and of the hundreds or thousands of daily alerts that humans must track down, only a few will matter. The SOC model of pulling low-level data together and stitching it into something useful at the high-level SIEM is broken – it won’t scale. What is needed is more intelligence in the various systems, lower in the stack, so that only the meaningful events boil up to be inspected. AI can play an important role in solving this problem, and in easing the projected shortage of SOC analysts in the coming years. Let AI detect, correlate, and report actionable intelligence to the security analyst. Let people do what they do best, and leverage AI where it adds value.
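For a sense of what those hand-written triggers look like, here is a hedged sketch of a classic SIEM-style rule: flag a possible brute-force attack when one source IP produces many failed logins and then a success within a short window. The log format and thresholds are hypothetical; the point is that every such rule must be written, tuned, and maintained by hand:

```python
# Hedged sketch of a hand-written SIEM-style trigger for one known pattern.
# Event format and thresholds are hypothetical, not from any real SIEM.
from collections import defaultdict
from datetime import datetime, timedelta

FAIL_THRESHOLD = 5
WINDOW = timedelta(minutes=10)

def detect_bruteforce(events):
    """events: time-ordered iterable of (timestamp, src_ip, outcome) tuples."""
    failures = defaultdict(list)   # src_ip -> recent failure timestamps
    alerts = []
    for ts, ip, outcome in events:
        # Keep only failures inside the sliding window.
        failures[ip] = [t for t in failures[ip] if ts - t <= WINDOW]
        if outcome == "fail":
            failures[ip].append(ts)
        elif outcome == "success" and len(failures[ip]) >= FAIL_THRESHOLD:
            alerts.append((ts, ip, "possible brute force"))
    return alerts

t0 = datetime(2018, 4, 9, 9, 0)
sample = [(t0 + timedelta(seconds=30 * i), "10.0.0.7", "fail") for i in range(6)]
sample.append((t0 + timedelta(minutes=4), "10.0.0.7", "success"))
print(detect_bruteforce(sample))
```

Multiply this by hundreds of rules for hundreds of known patterns and the overhead, and the flood of alerts, becomes clear.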

Just as the traditional SIEM model is broken, behavioral or anomaly detection that only looks for variances from an established baseline is inadequate. Analysts don’t have time to chase down every instance of “anomalous behavior”; the signal-to-noise ratio is too low. AI can augment behavioral/anomaly detection to deliver what is meaningful for the analyst to investigate. It can be a force multiplier.
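A hedged sketch of that idea: learn a baseline with an off-the-shelf anomaly detector, then surface only the few most anomalous sessions instead of every deviation. The features (data transferred, hour of activity) and data are illustrative assumptions:

```python
# Hedged sketch: baseline anomaly detection with an Isolation Forest,
# surfacing only the top outliers for the analyst. Features and data
# (MB transferred, hour of activity) are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
# Normal behavior: modest transfers during business hours.
normal = np.column_stack([rng.normal(50, 10, 500),   # MB transferred
                          rng.normal(13, 2, 500)])   # hour of activity
# A few suspicious sessions: large transfers at 3 a.m.
odd = np.array([[900, 3], [750, 2], [820, 4]])
X = np.vstack([normal, odd])

model = IsolationForest(random_state=0).fit(X)
scores = model.score_samples(X)        # lower = more anomalous

# Hand the analyst only the 3 most anomalous sessions, not 503 alerts.
top = np.argsort(scores)[:3]
print(X[top])
```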

Is this perfect? Nope, not by a long shot, but it is the direction in which we should be working. As our systems get “smarter”, train them to do what machines do well. Don’t take the vendors at their word; ask them to prove it. Ask for value-added use cases from other happy customers. Don’t buy vaporware or empty promises. But don’t slam the door in their face, either.

April 9th, 2018