No More Black Boxes

Andrew Fast, Chief Data Scientist, CounterFlow AI

Machine learning can be challenging to adopt in a cybersecurity context because of an initial lack of trust on the part of security analysts. We believe that machine learning is a transformational technology for cybersecurity and that it is most effective when used in conjunction with human intelligence. In this post, we look under the hood of OPNids and the Dragonfly MLE to gain insight into the strategies we are using to increase trust in, and adoption of, machine-learning techniques.

Explainable AI is an Imperative

Model explainability is a hot topic in machine learning right now. With the rise of deep learning techniques, model performance has improved dramatically, often at the cost of understanding how the model arrives at a decision. For some tasks, such as algorithmic trading or image recognition, how the algorithm comes to a decision may not matter as long as the decision is correct. For many others, including cyber threat hunting, medicine, and credit scoring, the model cannot be used to take action directly but instead serves as input to a related human process. For these latter tasks, explainability is of the utmost importance for building trust in and acceptance of the new process. In a recent article in the Wall Street Journal, Rob Alexander, the CIO at Capital One, put it this way: “Until you have confidence in explainability, you have to be cautious about the algorithms you use” (WSJ, 09/26/2018).

Open, from the source code up

Building trust is one of the main reasons we chose to release the Dragonfly MLE and OPNids under an open-source license. The code behind these tools is always available on GitHub, so interested parties can dig into the code itself to understand what is happening as the software runs. Even in an adversarial environment, this openness remains viable for a machine-learning application, because the specific models and uses of the tool depend on the configuration of the analyzers, which is specific to each organization using the MLE.

Explainable Techniques Required

The best machine-learning techniques are able to infer patterns from many different variables. Though the ability to find multivariate correlations is one of the primary strengths of machine learning, it is also one of the largest challenges for developing explainable techniques, because it is difficult to understand where the more complex correlations come from. Our preferred strategy for solving this problem is to build individual analyzers that are understandable on their own and then combine their results into more complex models. Drawing on the machine-learning idea of ensembles, this "building block" approach can effectively identify complex correlations from combinations of explainable models.
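
To make the idea concrete, here is a minimal Python sketch of the building-block approach. Everything in it is a hypothetical illustration: the analyzer names, event fields, and scoring rules are invented for this example and are not the Dragonfly MLE's actual API or configuration.

```python
# Hypothetical sketch of the "building block" ensemble idea.
# Each analyzer is simple and explainable on its own; the ensemble
# combines their scores while preserving the per-analyzer evidence.

from typing import Callable, Dict

# An analyzer maps a network event (a dict of fields) to a score in [0, 1].
Analyzer = Callable[[Dict], float]

def rare_port_score(event: Dict) -> float:
    """Flag connections to uncommon destination ports."""
    common_ports = {80, 443, 53, 22, 25}
    return 0.0 if event.get("dest_port") in common_ports else 0.8

def long_connection_score(event: Dict) -> float:
    """Flag unusually long-lived flows (the one-hour scale is illustrative)."""
    return min(event.get("duration_sec", 0) / 3600.0, 1.0)

ANALYZERS: Dict[str, Analyzer] = {
    "rare_port": rare_port_score,
    "long_connection": long_connection_score,
}

def score_event(event: Dict) -> Dict:
    """Run every analyzer and keep each score alongside the combined one,
    so an analyst can see *why* the ensemble scored the event highly."""
    scores = {name: fn(event) for name, fn in ANALYZERS.items()}
    combined = sum(scores.values()) / len(scores)  # simple average ensemble
    return {"per_analyzer": scores, "combined": combined}

if __name__ == "__main__":
    event = {"dest_port": 4444, "duration_sec": 7200}
    print(score_event(event))
    # {'per_analyzer': {'rare_port': 0.8, 'long_connection': 1.0}, 'combined': 0.9}
```

Because the per-analyzer scores travel with the combined score, an analyst can always trace a high ensemble score back to the simple, explainable rules that produced it.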

User-Defined Policies

One of the most harmful myths about AI is that the machine will make the decision for you, leading to costly errors. While errors in any system (human or machine) are inevitable, allowing users to determine the threshold and the action taken is a necessary part of any explainable system. Our example analyzers in the Dragonfly MLE use a "decorator" pattern to report results to downstream applications. Rather than picking a threshold and passing along only the events that score above it, our strategy is to report all scores and let the user and the situation determine how to respond. This dovetails with the "building block" approach described above, since each analyzer can be used on its own or combined with other analyzers. As a result, analysts have full control over which analyzers process their traffic and which thresholds trigger further action, making the process more explainable and more defensible.
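
Below is a companion sketch, again in hypothetical Python rather than the MLE's real interfaces, showing the decorator pattern paired with a user-defined policy: the analyzer annotates the event with its score, and a separate, user-controlled policy decides the threshold and the action.

```python
# Hypothetical sketch of the "decorator" reporting pattern: the analyzer
# annotates the event with its score instead of filtering events out.
# Field names and thresholds are illustrative, not the MLE's actual schema.

from typing import Dict

def decorate(event: Dict, analyzer_name: str, score: float) -> Dict:
    """Attach an analyzer's score to the event and pass the whole event
    downstream; no events are dropped at this stage."""
    decorated = dict(event)                       # shallow copy of the event
    analytics = dict(decorated.get("analytics", {}))
    analytics[analyzer_name] = score
    decorated["analytics"] = analytics
    return decorated

def user_policy(event: Dict, threshold: float = 0.7) -> str:
    """Example user-defined policy: escalate events whose highest
    analyzer score crosses the user-chosen threshold; log the rest."""
    top_score = max(event.get("analytics", {}).values(), default=0.0)
    return "escalate" if top_score >= threshold else "log"

if __name__ == "__main__":
    event = {"src_ip": "10.0.0.5", "dest_port": 4444}
    event = decorate(event, "rare_port", 0.8)     # score from an upstream analyzer
    print(user_policy(event))                     # -> escalate
```

The key design point is that the threshold lives in the policy, not in the analyzer, so two organizations can run the same analyzers and still respond very differently.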

The Journey of Explainability

Building explainable AI is a journey, not a destination. It requires both advances in techniques and a greater understanding of how those techniques work. The Dragonfly MLE uses open-source technology to support explainable techniques and user-defined policies. Try it out at opnids.io.