Abstract: Anomaly detection has limitations when used in cybersecurity applications. With human oversight and baseline management, SOCs can mitigate those limitations for a more efficient, capable, and accurate detection process.
There is an enormous amount of data flowing through servers globally. Most of it can be classified as benign, fitting within an acceptable set of parameters defined as normal or expected activity. But what about activities or events that are unexpected? These are considered anomalies, and security operations centers (SOCs) have turned to AI and machine learning to build systems that detect activity deviating from the norm. There is, however, a caveat to the whole process: the technological tools aren’t quite smart enough to do it on their own yet.
Anomaly detection, also referred to as outlier detection, is a relatively new way to identify and classify rare events or observations that don’t fit the norm. Data scientists and cybersecurity analysts apply it to unlabelled network data to identify intrusions, cyberattacks, and misuse of systems, including information leaks and fraud. By design, anomaly detection promises to be unsupervised.
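To make the idea concrete, here is a minimal sketch of unsupervised outlier detection in Python, using scikit-learn’s IsolationForest on simulated network-flow features. The features and numbers are invented for illustration and don’t reflect any particular product:

```python
# A minimal sketch of unsupervised outlier detection: no labels are
# given; the model learns what "normal" looks like and scores
# deviations from it. Features (bytes sent, connection count) are
# illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated "normal" traffic: two features per event.
normal = rng.normal(loc=[500, 20], scale=[50, 5], size=(1000, 2))
# A handful of unusual events mixed in.
outliers = rng.normal(loc=[5000, 200], scale=[100, 10], size=(5, 2))
events = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.01, random_state=0)
flags = model.fit_predict(events)  # -1 = anomaly, 1 = normal

print(f"Flagged {np.sum(flags == -1)} of {len(events)} events as anomalous")
```

Note that the model was never told which events were attacks; it only flagged what looked statistically unusual. That distinction matters for everything that follows.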
Critical to the definition and classification of anomalies is that they are rare, infrequent deviations from normal behavior and existing data sets. But not every rare event that gets flagged is necessarily bad, so quick, intelligent intervention is needed. Given the volume of network data flowing in and out of any given SOC, the number of attacks analysts have actually seen is relatively low, and attacks evolve every day.
Hence, an algorithm is the best bet for finding that dangerous needle in a cyber-haystack, so that an analyst can address the anomaly.
As promising as it sounds to rely on intelligent technology, AI, and Machine Learning to detect these new and diverse types of attacks, there are several procedural factors that should be considered.
As mentioned earlier, not every deviation from the norm is a bad one. Large computer networks have a pattern or rhythm; processes run periodically, the same users do the same things every day, and so on. But network traffic is far from regular at the best of times.
The routine is often interrupted by operational events stemming from system errors and misconfigurations, or by system changes and patches deployed to address security vulnerabilities and add new features. Even job responsibilities change in the natural cycle of hiring, promotions, and resignations. With so many irregular changes, is it difficult to classify these events as “anomalies”? The answer is yes. And no.
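A toy example illustrates the problem. A simple rolling z-score baseline over hourly event counts (invented numbers, not a production technique) flags a legitimate change in behavior, such as a patch rollout, just as loudly as it would an intrusion:

```python
# Toy baseline: flag any hour whose event count deviates more than
# 3 standard deviations from the previous 48 hours. A benign shift
# in behavior trips the alarm exactly like an attack would.
import numpy as np

rng = np.random.default_rng(1)
hourly_logins = rng.poisson(lam=100, size=200).astype(float)
hourly_logins[150:] += 60  # a patch rollout shifts "normal" upward

window = 48
for t in range(window, len(hourly_logins)):
    baseline = hourly_logins[t - window:t]
    z = (hourly_logins[t] - baseline.mean()) / (baseline.std() + 1e-9)
    if abs(z) > 3:
        print(f"hour {t}: z={z:.1f}, anomalous, but is it malicious?")
```

The alerts fire until the rolling window absorbs the new normal; nothing in the statistics distinguishes the patch from a breach.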
Every network ecosystem is constantly changing, and relying on an anomaly detection algorithm to detect cyberattacks with 100% accuracy will always be a challenge. When it comes to intrusion detection, not every unexpected piece of data is bad. If your software doesn’t know any better, it might flag a benign action as something more serious.
As discussed above, anomaly detection techniques rely on statistical measures of normality, baselines, that must be learned from data or programmed into the algorithm. And as we also mentioned, benign activity can sometimes be classified as an anomaly even when it poses no security risk. Not every rare event is bad, so the goal is to find the “rare rare” events and behaviors that are malicious. This leads to the all-too-common problem of a large number of false positives, each of which requires an analyst, or a senior member of the team, to verify it manually through a costly and time-consuming investigation. It also requires a full roster of capable, but possibly overworked, analysts in your SOC to conduct those investigations.
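The arithmetic behind this flood of false positives is the base-rate effect. With illustrative (assumed) numbers, even a detector that catches 99% of attacks and wrongly flags only 1% of benign events produces alerts that are overwhelmingly benign:

```python
# Back-of-the-envelope Bayes calculation with assumed numbers.
attack_rate = 0.001         # 0.1% of events are actually malicious
true_positive_rate = 0.99   # detector catches 99% of attacks
false_positive_rate = 0.01  # and wrongly flags 1% of benign events

flagged_attacks = attack_rate * true_positive_rate
flagged_benign = (1 - attack_rate) * false_positive_rate
precision = flagged_attacks / (flagged_attacks + flagged_benign)

print(f"Share of alerts that are real attacks: {precision:.1%}")
# roughly 9%: about 10 of every 11 alerts still land on an analyst's
# desk only to be investigated and dismissed.
```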
Consequently, when an analyst is presented with a set of anomalies, they may have no idea why the items were considered anomalous: some algorithms rely on an underlying detection technique that is a black box and does not reveal which features led to the alert. Newer anomaly detection platforms may incorporate techniques such as Sequential Feature Explanation, which directs the analyst’s attention only to the data that triggered the alert and saves the time otherwise spent on unnecessary investigation.
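As a rough sketch of the idea behind Sequential Feature Explanation (a simplification, not the published algorithm in full), one can rank the features of a flagged event by how anomalous each one is under a per-feature baseline, so the analyst sees the most incriminating evidence first. The feature names and the Gaussian baseline here are assumptions for illustration:

```python
# Rank the features of a flagged event by their individual anomaly
# contribution (squared z-score under a per-feature baseline), so the
# analyst reads the strongest evidence first. All values are invented.
import numpy as np

feature_names = ["bytes_out", "login_count", "dest_ports", "cpu_load"]
baseline_mean = np.array([500.0, 20.0, 3.0, 0.4])
baseline_std = np.array([50.0, 5.0, 1.0, 0.1])

alert = np.array([510.0, 22.0, 45.0, 0.9])  # the flagged event

contributions = ((alert - baseline_mean) / baseline_std) ** 2

for i in np.argsort(contributions)[::-1]:
    print(f"{feature_names[i]}: contribution {contributions[i]:.1f}")
```

Here the explanation immediately points the analyst at the unusual number of destination ports rather than leaving them to sift through every field of the event.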
This is clearly far from the fully unsupervised anomaly detection process a SOC hoped for.
The final limitation follows logically from the first two. Anomaly detection relies on the key ingredients of rich data sources, data science, domain knowledge of attack behavior, and predictive AI and machine learning algorithms, but as we’ve indicated above, that alone is not enough. The next challenge for security analysts is finding ways to continuously manage baselines and handle exceptions without relying on cybersecurity experts and people power. That aspiration, however, is proving elusive.
There’s no debating that the growing sophistication of AI technologies for predictive risk intelligence offers promise. A report by Deloitte posits that AI has reached the point where it can generate its own hypotheses, predict attack techniques, and provide recommendations against them. While this may be true, it is an imperfect system. At present, smart cyber technologies and AI complement existing security controls to detect progressive, emergent, and unknown threats. The human element is still needed to teach, train, and manage the AI-powered detection system to ensure maximum efficacy.
As we’ve mentioned in previous blogs, the optimal future for a secure infrastructure is AI-Assisted Cybersecurity as part of a hybrid model with analysts. A future-proof SOC can emerge by combining an AI capable of deep learning and anomaly detection with the human creativity, common sense, and knowledge that analysts contribute.
The goal is to manage those baselines, the source of false positives and hard-to-interpret anomalies, without requiring extensive expert knowledge. This is where artificial intelligence offers assistance. Once an anomaly is detected and an alert issued, an AI-Assisted Cybersecurity tool such as Arcanna.ai streamlines the time-consuming triage work that would otherwise consume a large share of the cybersecurity workforce, freeing up analysts’ capacity to deal with real threats. This is the critical strength of relying on AI: analysts’ institutional knowledge is continually fed back into the AI-Assisted Cybersecurity tool to get the most out of the algorithm.
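Conceptually, that feedback loop looks something like the following sketch. This is a generic human-in-the-loop pattern, not Arcanna.ai’s actual API, and the features and labels are invented:

```python
# Generic human-in-the-loop sketch: analyst verdicts on past alerts
# train a classifier that learns to separate benign anomalies from
# malicious ones, so institutional knowledge accumulates in the model.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)

# Initial batch of triaged alerts: feature vectors plus the analyst's
# verdict (0 = benign anomaly, 1 = malicious).
alerts = np.array([[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]])
verdicts = np.array([0, 1, 0, 1])
model.partial_fit(alerts, verdicts, classes=[0, 1])

# Each new analyst decision is folded back in incrementally,
# refining the model without retraining from scratch.
new_alert = np.array([[0.85, 0.15]])
print("model verdict:", model.predict(new_alert))  # escalate or dismiss?
model.partial_fit(new_alert, np.array([1]))        # analyst confirms it
```

Over time, alerts the team has repeatedly dismissed stop demanding attention, while the rare genuinely malicious patterns are escalated first.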
Learn more about how Arcanna.ai can help your SOC