
Written by Horia Sibinescu | Dec 18, 2021 12:45:00 PM

From attack initiation to response in under a minute

Brute force attacks are among the most common weapons hackers use to gain control over user or admin accounts and perform disruptive actions. The concept is simple: the attacker tries to guess the password by cycling through character combinations and the most commonly used passwords.

As simple as this concept is, however, the effects of a successful brute force attack can be devastating. If attackers gain access to an admin account, they can create other accounts for future attacks, steal sensitive data, stop critical applications or services and so on. The key to stopping such an attack is speed, both in detection and in providing an immediate response. Additionally, such attacks can generate alert storms that are difficult to handle without a tool capable of reducing alert noise, and they need to be analyzed further after they are stopped.

Defining Machine Learning Jobs

The first step in configuring a Machine Learning job to detect possible brute force attacks is to make sure that we are looking at the right data. To do this, we first create a view of the index that filters the data down to only the events indicating denied access to the host, i.e. failed logins.
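
To make this concrete, here is a minimal sketch of the kind of filter such a view can be built on. The ECS-style field names (event.category, event.outcome), the auth-logs-* index pattern and the credentials are assumptions, not the exact configuration from our environment; adjust them to match your own data.

```python
# A minimal sketch of the filter behind the failed-login view,
# assuming ECS-style field names and an auth-logs-* index pattern.
import requests

ES_URL = "http://localhost:9200"        # assumed local cluster
AUTH = ("elastic", "changeme")          # assumed credentials

failed_logins = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"event.category": "authentication"}},
                {"term": {"event.outcome": "failure"}}
            ]
        }
    }
}

# Preview the events the Machine Learning job will be fed.
resp = requests.post(f"{ES_URL}/auth-logs-*/_search",
                     json=failed_logins, auth=AUTH)
print(resp.json()["hits"]["total"])
```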

We selected a time bucket of 10 minutes, as the attacker will most likely launch several attempts in a short time window. The data is therefore split into 10-minute buckets that are compared to each other based on the ML function selected in the detector.

As a detector, we used high_count by source IP. This function looks at every single source IP and compares its number of events (failed logins in our case, because the view filters the data specifically for these events) with that IP’s previous behavior. If an IP usually fails to log in once or twice in 10 minutes (we all misspell sometimes) but has more than 5 failed login attempts in the last 10 minutes, that is considered an anomaly and flagged as such.

For our influencers, we selected the name of the host which is the target of the attack, as well as the source IP. This will enable us to see which host is being attacked and from which IP when viewing the anomaly in the Kibana anomaly viewer.

It’s important to note that even though it is not visible in the screenshot above, we do suggest that you store the ML Job results in a dedicated index. Doing so will simplify the process of alerting and response.
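
For reference, here is a sketch of how a job along the lines described above can be created through the Elasticsearch anomaly detection API: 10-minute buckets, a high_count detector split by source IP, host name and source IP as influencers, and a dedicated results index. The job ID, field names and results index name are assumptions rather than our exact configuration.

```python
# A sketch of the anomaly detection job described above.
import requests

ES_URL = "http://localhost:9200"
AUTH = ("elastic", "changeme")

job = {
    "description": "Brute force detection - failed logins per source IP",
    "analysis_config": {
        "bucket_span": "10m",
        "detectors": [{
            "function": "high_count",
            "by_field_name": "source.ip",
            "detector_description": "high_count by source.ip"
        }],
        "influencers": ["host.name", "source.ip"]
    },
    "data_description": {"time_field": "@timestamp"},
    # Dedicated results index, stored as .ml-anomalies-custom-bruteforce
    "results_index_name": "bruteforce"
}

requests.put(f"{ES_URL}/_ml/anomaly_detectors/bruteforce-logins",
             json=job, auth=AUTH)
```

Attaching a datafeed that points the job at the filtered failed-login data, then opening and starting the job, completes the setup.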

Configuring Watcher – automated alerting and attack response

Now that we have our Machine Learning Jobs running, we should make sure we are notified when an attack takes place.

Watcher has four segments that need to be configured in order for it to work:

Trigger – defines when the Watcher should trigger its defined action (actions will be explained below). In our case, we set the Watcher to run every 10 minutes. We can think of this as similar to the bucket in the Machine Learning Job.

Input – defines which index the Watcher should look at and what filters should be applied. This is where the dedicated index option from the Machine Learning job comes into play. We point our Watcher at that particular index and tell it to always take into consideration the last 10 minutes’ worth of data.

Condition – defines in which situations Watcher should take its defined action. For our Brute Force Attack use case, we set Watcher to trigger when it finds an anomaly with a score greater than or equal to 50. This threshold can be modified depending on your appetite for risk.

Action – defines what the Watcher does when the condition is met. For this first Watcher, the action is to send an email notification about the detected anomaly. A sketch of the complete Watcher definition follows below.
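
Putting the four segments together, a Watcher definition along these lines could be created through the Watcher API. The watch ID, results index name and email address are assumptions based on the setup described above, and an email account must already be configured in Elasticsearch for the email action to work. The score threshold is applied in the search itself; the condition then simply checks whether anything matched.

```python
# A sketch of the first Watcher: runs every 10 minutes, searches the
# dedicated ML results index for the last 10 minutes, and emails when a
# record anomaly score of 50 or more is found.
import requests

ES_URL = "http://localhost:9200"
AUTH = ("elastic", "changeme")

watch_50 = {
    "trigger": {"schedule": {"interval": "10m"}},      # run every 10 minutes
    "input": {
        "search": {
            "request": {
                "indices": [".ml-anomalies-custom-bruteforce"],
                "body": {
                    "query": {
                        "bool": {
                            "filter": [
                                {"term": {"result_type": "record"}},
                                {"range": {"timestamp": {"gte": "now-10m"}}},
                                {"range": {"record_score": {"gte": 50}}}
                            ]
                        }
                    }
                }
            }
        }
    },
    "condition": {"compare": {"ctx.payload.hits.total": {"gt": 0}}},
    "actions": {
        "notify_soc": {
            "email": {
                "to": ["soc@example.com"],             # assumed address
                "subject": "Possible brute force attack detected",
                "body": "Anomaly with record score >= 50 in the last 10 minutes."
            }
        }
    }
}

requests.put(f"{ES_URL}/_watcher/watch/bruteforce-alert-50",
             json=watch_50, auth=AUTH)
```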

Similar to the Watcher above, we will define another one with slightly different settings:

Condition – we will set the threshold for this second instance to anomalies with a score greater than or equal to 75

Action – in addition to sending an email notifying of this event, we will also include a script to be run. The script’s role is to block the source IP from which the attack is being launched, offering a first response until further actions can be taken. A sketch of this second Watcher follows below.
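
Since Watcher has no built-in "run a local script" action, one common way to wire in the blocking step is a webhook action that calls an internal endpoint (for example a firewall or SOAR API) which performs the block. The sketch below follows that pattern; the endpoint, email address and field paths are hypothetical, and the article does not specify how the script is actually invoked in our environment.

```python
# A sketch of the second Watcher: same structure as the first, with the
# score threshold raised to 75 and a webhook action added to block the
# offending source IP via a hypothetical internal service.
import requests

ES_URL = "http://localhost:9200"
AUTH = ("elastic", "changeme")

watch_75 = {
    "trigger": {"schedule": {"interval": "10m"}},
    "input": {
        "search": {
            "request": {
                "indices": [".ml-anomalies-custom-bruteforce"],
                "body": {
                    "query": {
                        "bool": {
                            "filter": [
                                {"term": {"result_type": "record"}},
                                {"range": {"timestamp": {"gte": "now-10m"}}},
                                {"range": {"record_score": {"gte": 75}}}
                            ]
                        }
                    }
                }
            }
        }
    },
    "condition": {"compare": {"ctx.payload.hits.total": {"gt": 0}}},
    "actions": {
        "notify_soc": {
            "email": {
                "to": ["soc@example.com"],              # assumed address
                "subject": "Brute force attack - blocking source IP",
                "body": "Record score >= 75 in the last 10 minutes."
            }
        },
        "block_source_ip": {
            "webhook": {
                "scheme": "https",
                "host": "firewall.internal.example",    # hypothetical blocking service
                "port": 443,
                "method": "POST",
                "path": "/block",
                # by_field_value holds the anomalous source IP from the ML record
                "body": "{\"ip\": \"{{ctx.payload.hits.hits.0._source.by_field_value}}\"}"
            }
        }
    }
}

requests.put(f"{ES_URL}/_watcher/watch/bruteforce-block-75",
             json=watch_75, auth=AUTH)
```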

Custom-built dashboard for deep analysis

Whether you are confronted with brute force attacks frequently or rarely, you need to go further than just stopping the attack if you really want to improve security within your organization. For this purpose, deep analysis of the events becomes critical.

Using Kibana, we can build an easy-to-navigate dashboard with point-and-click filtering of the data, giving you a better understanding of how the attack took place.

When building such a dashboard, two things should be taken into consideration:

Fields of interest – in our case the most important ones should be user account, source IP and hostname

Visual type – different visualization types are useful in different situations. A pie chart makes filtering data easy because you can simply click on a segment, while a bar chart shows the correlation between two fields.

Additionally, because of Elasticsearch’s nature, events from different indexes can be correlated, enabling us to include in our dashboards visuals from other indexes for a better understanding of the underlying problem that led to the attack. If a particular user account was used in several distinct attacks, it might be worth looking at that account’s activity in other areas such as applications or servers.
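
Kibana visuals are built point-and-click, but under the hood they are aggregations, so the same questions can also be asked directly against the data. As a sketch, the query below returns the top source IPs with failed logins, broken down by targeted host; the index and field names are the same assumptions as in the earlier examples.

```python
# A sketch of the aggregation behind a "failed logins per source IP" visual,
# broken down by targeted host.
import requests

ES_URL = "http://localhost:9200"
AUTH = ("elastic", "changeme")

top_attackers = {
    "size": 0,
    "query": {
        "bool": {
            "filter": [
                {"term": {"event.category": "authentication"}},
                {"term": {"event.outcome": "failure"}}
            ]
        }
    },
    "aggs": {
        "by_source_ip": {
            "terms": {"field": "source.ip", "size": 10},
            "aggs": {
                "by_host": {"terms": {"field": "host.name", "size": 5}}
            }
        }
    }
}

resp = requests.post(f"{ES_URL}/auth-logs-*/_search",
                     json=top_attackers, auth=AUTH)
print(resp.json()["aggregations"]["by_source_ip"]["buckets"])
```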

 

Contact an expert like Arcanna.ai who can help prevent brute force attacks.