- Malware gets the headlines, but the bigger threat is hands-on-keyboard adversary activity, which can evade traditional security solutions and present significant detection challenges.
- Machine learning (ML) can predict and proactively protect against emerging threats by using behavioral event data.
- CrowdStrike’s artificial intelligence (AI)-powered indicators of attack (IOAs) use ML to detect and predict adversarial patterns in real time, regardless of the tools or malware used, stopping breaches.
With news headlines like “A massive ransomware attack hit hundreds of businesses” becoming common, concern about malware has never been higher. High-profile examples of malware like DarkSide and REvil have been profiled so many times that not only are cybersecurity professionals on edge: every organization with on-premises or in-the-cloud workloads is concerned.
However, even though malware attacks are making headlines, an increasing number of cyberattacks are malware-free. According to the CrowdStrike® 2022 Global Threat Report, 62% of attacks in 2021 did not use malware but instead were carried out through hands-on-keyboard activity. In addition, so-called “living off the land” (LotL) attacks misuse legitimate tools like PowerShell to carry out an attack and are therefore much more difficult to detect.
This is the reality security professionals face: hands-on-keyboard activity poses a far greater threat than a piece of malware.
Most traditional security solutions scan for known malware or packers, so hands-on-keyboard attacks can go unnoticed until it’s too late and a breach occurs. It can happen quickly — as reported in the CrowdStrike 2022 Global Threat Report, an adversary takes an average of 1 hour and 38 minutes to move laterally from the moment of initial access to the moment they can infect additional critical endpoints.
AI is a cybersecurity game-changer given ML’s ability to detect behavior-based IOAs in real time. Cloud-native ML models trained on the rich telemetry of the CrowdStrike® Security Cloud can deliver threat intelligence pointing to a hands-on-keyboard attack to security teams in real time, before a breach occurs.
Using AI in Cybersecurity: How ML Can Detect and Prevent Hands-on-Keyboard Activity Patterns
Let’s look at a typical attack scenario that exploits legitimate tools and services to infiltrate a network. Once a foothold is gained and persistence is established, bad actors can use valid credentials to move laterally through the target network, gain access to critical systems and steal or compromise data. They can also use that access to download and install malware and additional tooling without detection, holding the victim organization hostage. The dangerous aspect of hands-on-keyboard attacks is that adversaries can exploit stolen credentials (such as compromised account passwords) and then leverage legitimate tools and software to establish their presence.
Since this type of adversarial tradecraft is mostly malware-free or fileless, there is no “traditional” malware being downloaded for legacy security software to intercept. Because these LotL attacks use legitimate tools and software, such as abusing PowerShell and Windows Management Instrumentation (WMI) to run scripts or using existing remote desktop services, detecting them with signature-based approaches is extremely difficult.
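To make the detection problem concrete, here is a minimal sketch of heuristic flagging of LotL-style process events. The event schema (parent, image, command_line) and the patterns are hypothetical examples for illustration, not CrowdStrike detection logic; real behavioral detection considers far more context than a single command line.

```python
# Minimal sketch (not CrowdStrike's implementation): flagging LotL-style
# process events, such as encoded PowerShell or WMI spawning a process.
# The event schema and patterns below are hypothetical examples.

import re

SUSPICIOUS_PATTERNS = [
    re.compile(r"powershell(\.exe)?\s+.*-enc(odedcommand)?\s+", re.IGNORECASE),
    re.compile(r"powershell(\.exe)?\s+.*-nop\b.*-w(indowstyle)?\s+hidden", re.IGNORECASE),
    re.compile(r"wmic\s+.*process\s+call\s+create", re.IGNORECASE),
]

def score_event(event: dict) -> int:
    """Return a simple heuristic score for one process-creation event."""
    score = 0
    cmd = event.get("command_line", "")
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(cmd):
            score += 1
    # A script host spawned by an Office application is another common LotL signal.
    if event.get("parent", "").lower() in {"winword.exe", "excel.exe"} and \
       event.get("image", "").lower() in {"powershell.exe", "wscript.exe", "cmd.exe"}:
        score += 2
    return score

# Example: a hidden, encoded PowerShell command launched from a Word document.
event = {
    "parent": "WINWORD.EXE",
    "image": "powershell.exe",
    "command_line": "powershell.exe -nop -w hidden -enc SQBFAFgAIAAoAE4AZQB3AC0A...",
}
print(score_event(event))  # well above the zero baseline of routine activity
```

The weakness of this kind of rule is exactly what the article describes: each individual action also appears in legitimate administration, so static rules either miss attacks or flood analysts with false positives. That is the gap behavioral ML models are meant to close.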
The use of AI, specifically ML, is a highly effective way to detect hands-on-keyboard activities as they develop, preventing the disaster of a breach.
Detecting the signs of a hands-on-keyboard attack before it progresses requires understanding patterns and behavior. Using the rich telemetry of the CrowdStrike Security Cloud and the expertise of our threat hunting teams, we developed and trained ML models that can both identify and predict adversarial behavior patterns related to hands-on-keyboard activity.
The challenge is that because this type of adversary activity uses legitimate tools and processes, it is extremely difficult to assess whether any single action is part of an attack. In addition, false positives (incorrectly alerting that an action is part of an attack) are a problem: they can force analysts to spend time bringing systems back online after incorrectly triggered automated remediation. Security resources are limited, and false alarms divert staff, lower confidence in the solution and carry a real cost to resolve.
With these challenges in mind, ML is well suited to detecting and helping to prevent hands-on-keyboard intrusion activity.
ML models are trained using expertly curated data sets of malicious threats and benign activities, including data generated from real-life cyberattacks. They “learn” to recognize behavioral patterns and to accurately assess a threat level as a sequence of events progresses.
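As an illustrative sketch of that idea, the toy example below trains a small classifier on features extracted from sequences of behavioral events and re-scores the sequence as each new event arrives. The feature set, event fields (tool, remote_logon, host) and labels are invented for the example; production models such as CrowdStrike's are trained on far richer, expertly curated telemetry and are not represented here.

```python
# Illustrative sketch only: a tiny classifier over behavioral-event sequences.
# Features, fields and labels are invented for demonstration purposes.

from sklearn.linear_model import LogisticRegression

def featurize(events):
    """Turn an event sequence into a fixed-length feature vector."""
    return [
        sum(e["tool"] == "powershell" for e in events),  # script-host usage
        sum(e["tool"] == "wmi" for e in events),         # WMI usage
        sum(e["remote_logon"] for e in events),          # lateral-movement hints
        len({e["host"] for e in events}),                # distinct hosts touched
    ]

# Tiny labeled set: 1 = hands-on-keyboard intrusion, 0 = benign admin work.
sequences = [
    ([{"tool": "powershell", "remote_logon": 1, "host": "A"},
      {"tool": "wmi", "remote_logon": 1, "host": "B"}], 1),
    ([{"tool": "powershell", "remote_logon": 0, "host": "A"}], 0),
    ([{"tool": "wmi", "remote_logon": 1, "host": "A"},
      {"tool": "powershell", "remote_logon": 1, "host": "C"}], 1),
    ([{"tool": "other", "remote_logon": 0, "host": "A"}], 0),
]
X = [featurize(seq) for seq, _ in sequences]
y = [label for _, label in sequences]
model = LogisticRegression().fit(X, y)

# Score an unfolding sequence after each new event: the probability rises as the
# behavior starts to look like lateral movement rather than routine admin work.
live = [{"tool": "powershell", "remote_logon": 0, "host": "A"},
        {"tool": "wmi", "remote_logon": 1, "host": "B"},
        {"tool": "powershell", "remote_logon": 1, "host": "C"}]
for i in range(1, len(live) + 1):
    print(model.predict_proba([featurize(live[:i])])[0][1])
```

The key point the sketch illustrates is that the threat assessment is made over the whole sequence of behavior, not over any single tool invocation, which is what allows a model to separate an intrusion from an administrator doing similar-looking work.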
The CrowdStrike Falcon® platform was built from the ground up to leverage the power of ML and the cloud, demonstrating in both simulated tests and real-world environments that it is highly effective at defending against malware-free, hands-on-keyboard activity. For example, our cloud ML models known as AI-powered IOAs have identified over 20 new behavior-based indicator patterns indicative of post-exploitation payloads or the use of LotL tools like PowerShell. When malicious activity is detected, the attack is stopped before a breach occurs. The organization’s security team is then alerted, giving them the ability to immediately investigate the threat and take any action required.
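A hypothetical response flow along these lines might look like the sketch below. This is not the Falcon platform's actual API: score_sequence, block_process_tree and send_alert are stand-ins for whatever scoring and response hooks a real deployment provides, and the thresholds are arbitrary illustrative values.

```python
# Hypothetical detect-and-respond flow; all names and thresholds are stand-ins.

BLOCK_THRESHOLD = 0.90   # act automatically only on high-confidence detections
ALERT_THRESHOLD = 0.60   # surface lower-confidence activity for analyst review

def handle_detection(sequence, score_sequence, block_process_tree, send_alert):
    """Decide what to do with an unfolding behavioral event sequence."""
    confidence = score_sequence(sequence)            # e.g., an ML model's probability
    if confidence >= BLOCK_THRESHOLD:
        # Stop the attack before a breach occurs, then alert the security team.
        block_process_tree(sequence[-1]["process_id"])
        send_alert(severity="critical", confidence=confidence, events=sequence)
    elif confidence >= ALERT_THRESHOLD:
        # Not confident enough to block automatically: hand it to an analyst.
        send_alert(severity="medium", confidence=confidence, events=sequence)
    # Below both thresholds, keep watching: later events may push the score up.
```

Splitting the decision into a high-confidence automated-block tier and a lower-confidence analyst-review tier is one common way to balance fast prevention against the false-positive cost discussed above.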
To see just how effective the Falcon platform is in action, consider the results of a recent MITRE Engenuity ATT&CK Enterprise Evaluation. The Falcon platform went up against emulated attacks from the highly sophisticated WIZARD SPIDER and VOODOO BEAR (Sandworm Team) adversaries, achieving 100% automated prevention across all of the evaluation steps. This level of protection prevented the attackers from gaining access to the test environment, with the Falcon platform’s AI-powered automated defenses stopping the attack before it could even start.
Malware is dangerous and a constant threat, but it’s relatively easy to prevent with the right tools. Hands-on-keyboard activities are costly, relentless, high volume and virtually impossible to detect until it’s too late when using traditional security solutions. AI and ML are the right answers to defend against these increasingly sophisticated attacks.
But this doesn’t mean AI and ML can completely replace people or that human security staff are no longer needed. On the contrary, ML is a powerful tool that augments an organization’s security team. Employing the Falcon platform with its advanced AI and ML increases the effectiveness of the security team, cutting response time and providing the advanced tools needed to combat the most sophisticated adversaries.