Ruslan Rakhmetov, Security Vision
Real-time protection tools are often powerless against zero-day vulnerabilities, low-and-slow attacks, and Advanced Persistent Threats (APT). Retrospective analysis, which we discussed earlier, is a critical tool for closing this security gap. In this article, we look in more detail at the goals and results of such analysis and at the tools that help information security teams strengthen their defenses.
The main strategic goal of periodic retrospective analysis is to reduce the attacker's dwell time in the infrastructure. By detecting hidden threats after the fact, organizations can prevent or significantly reduce the damage from long-term compromises such as APT attacks before the attacker gains full control over critical resources. Such analysis pursues several interrelated goals that together strengthen the organization's overall security:
- detection of malware, APT group activity, insider and hidden threats that bypassed the initial lines of defense and went undetected by real-time monitoring systems;
- reconstruction of the full chronology and picture of the attack, including the initial penetration vector, the sequence of the attacker's actions, and the complete list of compromised systems and data;
- identification of ineffective security measures, weaknesses in the monitoring system, and gaps in correlation rules, which makes it possible to develop more reliable and relevant countermeasures, configure stricter security policies, and improve the overall protection strategy;
- provision of evidence for audits and confirmation of compliance with regulatory requirements, making it possible to demonstrate precisely which data was accessed, stolen, or blocked within a given period of time.
This approach also changes how threat data is perceived: an indicator of compromise (IoC) or a new detection rule is no longer valuable only for preventing future attacks. It acquires a new dimension of value, becoming a key that can "unlock" the past and reveal previously unseen compromises. New threat data triggers a retrospective search, which in turn can generate unique, organization-specific threat data, further enriching the knowledge base. Thus, both historical data and the constant flow of new threat information become critically important, interdependent assets.
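The idea of a new indicator "unlocking" the past can be illustrated with a minimal sketch: a freshly received IoC (here a hypothetical C2 domain) is matched against an archive of historical DNS records. All log fields, hostnames, and the indicator itself are illustrative assumptions, not real data.

```python
# Minimal sketch of a retrospective IoC sweep: a newly learned indicator
# is replayed against historical DNS logs collected long before the
# indicator existed. Field names and values are hypothetical.
new_iocs = {"evil-update.example"}  # freshly published indicator (hypothetical)

dns_archive = [
    {"ts": "2024-01-12T09:15:00", "host": "wks-042", "query": "cdn.vendor.example"},
    {"ts": "2024-03-03T02:41:00", "host": "srv-db1", "query": "evil-update.example"},
    {"ts": "2024-03-04T02:40:00", "host": "srv-db1", "query": "evil-update.example"},
]

def retro_sweep(archive, iocs):
    """Return historical records that match indicators learned only later."""
    return [r for r in archive if r["query"] in iocs]

hits = retro_sweep(dns_archive, new_iocs)
for h in hits:
    print(h["ts"], h["host"], h["query"])
```

Here the host `srv-db1` was talking to the malicious domain months before the indicator was published, which is exactly the kind of finding a retrospective search surfaces.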
Modern cyber attacks are often designed for stealth and a long-term presence in the victim's infrastructure: attackers can remain unnoticed on the network for weeks, months, or even years, conducting reconnaissance, moving laterally, escalating privileges, and searching for valuable data. Effective retrospective analysis therefore increasingly relies on a centralized ecosystem that makes it possible to search data from dozens of systems in a single interface. At the heart of this ecosystem is a centralized, searchable repository of security telemetry, often referred to as a "security data lake." Whether it is implemented within SIEM, XDR, or SOAR, this unified storage is a prerequisite for conducting retrospective analysis on an industrial scale.
SIEM (Security Information and Event Management)
SIEM systems are the cornerstone of retrospective analysis. Their main function is the aggregation, normalization, and long-term storage of event logs from the entire IT infrastructure. They provide the data repository and the powerful query engine needed to search through historical events. For example, the no-code correlation rule builder in the Security Vision platform lets you define event sequences and set various conditions, calculating a "weight" for each event. This mechanism makes it possible to apply new or updated correlation rules to the entire archive of accumulated events and identify incidents that were missed earlier because no appropriate rule existed at the time they occurred.
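The weighted-rule replay described above can be sketched as follows. This is not Security Vision's actual engine, only an illustration of the principle: a rule assigns a weight to each event type, and an incident is raised for a host whose accumulated weight crosses a threshold when the rule is replayed over the archive. All event types, hosts, and numbers are assumptions.

```python
# Illustrative sketch of replaying a weighted correlation rule over an
# archive of normalized events (hypothetical rule and data).
from collections import defaultdict

RULE = {
    "weights": {"failed_login": 1, "priv_escalation": 5, "new_service": 3},
    "threshold": 7,  # incident when a host's total weight reaches this
}

archive = [
    {"host": "srv-01", "type": "failed_login"},
    {"host": "srv-01", "type": "failed_login"},
    {"host": "srv-01", "type": "priv_escalation"},
    {"host": "srv-01", "type": "new_service"},
    {"host": "wks-17", "type": "failed_login"},
]

def replay(rule, events):
    """Accumulate event weights per host; return hosts over the threshold."""
    scores = defaultdict(int)
    for e in events:
        scores[e["host"]] += rule["weights"].get(e["type"], 0)
    return [h for h, s in scores.items() if s >= rule["threshold"]]

print(replay(RULE, archive))  # srv-01 scores 1+1+5+3 = 10, over the threshold
```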
Imagine a security console in a large apartment complex: data from all video surveillance cameras, motion sensors, intercoms, and access control systems streams into it. The guard (the correlation algorithm) watches all the monitors at once, and if he sees someone trying to pick a lock, climb over the fence, or simply loiter suspiciously at a neighbor's door, he immediately sends a patrol to check. SIEM does the same thing in the digital world: it "sees" everything that happens on the network and reacts to anomalies.
EDR (Endpoint Detection and Response)
EDR-class solutions provide visibility into activity on endpoints (workstations and servers), where malicious code is often executed. EDR agents record telemetry on process creation, file access, network connections, and registry changes. This archive of endpoint data is invaluable for tracing malware execution and its lateral movement across the network. The automatic logging performed by EDR greatly simplifies reconstruction of the kill chain. In the Security Vision solution, EDR is part of SOAR and takes the form of a connector service installed on an IT asset: it is not a standalone program but part of a larger automated response system, analogous to XDR (eXtended Detection and Response).
It is as if you had a personal guard in every room who knows exactly what you usually do there. If someone starts drilling into the wall of your safe in the living room, the guard will not wait for a general security alert: he will immediately neutralize the intruder and close the living room door so that he cannot escape to other rooms. He does not monitor the entire house at once (like SIEM) but is responsible for the security of one specific room, the "endpoint".
NDR/NTA (Network Detection and Response / Network Traffic Analysis)
These systems focus on capturing and analyzing network traffic: they can store full packet dumps or enriched metadata about network interactions. This data is extremely difficult for an attacker to forge, so it provides irrefutable evidence of communications with command-and-control (C2) servers, data exfiltration, and lateral movement attempts. Thanks to the low-code connector builder and the ability to collect raw data directly into Security Vision solutions, such data becomes part of a shared database for analysts.
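One retrospective use of stored flow metadata is hunting for C2 beaconing: connections to one destination recurring at near-constant intervals. The sketch below flags such destinations; the timestamps, addresses, and thresholds are all illustrative assumptions.

```python
# Sketch: spotting beacon-like traffic in archived flow metadata by checking
# whether connections to a destination recur at near-constant intervals.
from statistics import pstdev

flows = [  # (epoch seconds, dst) pairs from a hypothetical flow archive
    (1000, "203.0.113.7"), (1300, "203.0.113.7"),
    (1600, "203.0.113.7"), (1900, "203.0.113.7"),
    (1050, "198.51.100.2"), (1700, "198.51.100.2"),
]

def beacon_candidates(flows, max_jitter=5.0, min_conns=4):
    """Return destinations contacted regularly enough to look like beacons."""
    by_dst = {}
    for ts, dst in sorted(flows):
        by_dst.setdefault(dst, []).append(ts)
    out = []
    for dst, times in by_dst.items():
        if len(times) < min_conns:
            continue
        gaps = [b - a for a, b in zip(times, times[1:])]
        if pstdev(gaps) <= max_jitter:  # near-constant interval
            out.append(dst)
    return out

print(beacon_candidates(flows))  # ['203.0.113.7']
```

Real beacons add jitter on purpose, which is why the tolerance is a parameter rather than an exact-interval match.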
It is like a system of cameras and sensors installed on all the roads and intersections of a big city. The system does not know who exactly is sitting in a car (unlike the EDR guard in the room), but it sees the entire traffic flow: if it notices a garbage truck driving in the opposite lane towards the bank, or the same suspicious car without license plates circling the same block for the third hour, it raises an alarm. NDR does not watch the "rooms" but the "roads" between them.
DLP (Data Loss Prevention)
These systems create a historical archive of confidential data movements inside and outside the organization. Such an archive is critically important when investigating data leak incidents.
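During a leak investigation, that archive answers one question quickly: through which users and channels did a specific document leave the perimeter? A minimal sketch, with a wholly hypothetical archive layout, file name, and user names:

```python
# Sketch: querying a hypothetical DLP movement archive to list every
# channel through which a given document crossed the perimeter.
archive = [
    {"user": "a.petrov",  "doc": "contract_2024.docx", "channel": "email", "external": True},
    {"user": "a.petrov",  "doc": "contract_2024.docx", "channel": "usb",   "external": True},
    {"user": "i.sidorov", "doc": "contract_2024.docx", "channel": "share", "external": False},
]

def leak_trail(archive, doc):
    """Return (user, channel) pairs where the document left the organization."""
    return [(r["user"], r["channel"]) for r in archive
            if r["doc"] == doc and r["external"]]

print(leak_trail(archive, "contract_2024.docx"))
```

Internal movements (the file share above) stay out of the result but remain in the archive for reconstructing how the document reached the leaker.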
Imagine a very strict guard at the exit of a secret library or archive who asks everyone to show the contents of their bag. If he sees that you are trying to take out a unique antique book or a secret drawing, he will politely but persistently ask you to return it to its place and will not let you leave. The DLP system is that digital security guard, checking all outgoing traffic for leaks of valuable information.
Sandbox
Sandbox systems can re-analyze previously saved file objects using new signatures and behavioral rules. For example, a file declared "clean" six months ago may retrospectively be identified as malicious after a new signature for its family appears. Using sandboxes, for example through integration with VirusTotal, allows a file to be "run" through a variety of algorithms to verify its safety.
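The re-scan principle can be shown with a hash-based sketch: archived objects are re-hashed and compared against an updated signature set, so a file stored months ago gets a fresh verdict. The file contents, hashes, and verdict name below are invented for illustration.

```python
# Sketch: retrospective re-scan of previously stored file objects against
# an updated signature set (all objects and signatures are hypothetical).
import hashlib

stored_files = {  # object store: filename -> bytes captured months ago
    "invoice.pdf": b"%PDF-1.4 benign content",
    "update.exe": b"MZ old dropper body",
}

new_signatures = {  # freshly published family signature (hypothetical)
    hashlib.sha256(b"MZ old dropper body").hexdigest(): "Trojan.Dropper.Gen",
}

def rescan(files, signatures):
    """Re-hash archived objects and flag any that match new signatures."""
    verdicts = {}
    for name, blob in files.items():
        digest = hashlib.sha256(blob).hexdigest()
        verdicts[name] = signatures.get(digest, "clean")
    return verdicts

print(rescan(stored_files, new_signatures))
```

A real sandbox adds behavioral detonation on top of hash matching, but the retrospective logic is the same: old objects, new knowledge, new verdicts.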
Imagine that you have received a strange, ticking package. You're not going to open it right in the living room, are you? A reasonable solution would be to take it to a barn or to a special sapper laboratory, where it can be safely opened. If it explodes, there will be no damage to your house. The sandbox is such a secure "shed" for checking suspicious digital parcels, where they will not be able to harm the main system.
Real-time monitoring handles immediate threats, while retrospective analysis finds the hidden ones that slipped through. The process works like this: everything happening across various systems and services is monitored on one screen, an additional security "butler" guards the important network nodes, all data movement routes are explored and searched for important information, and a secure laboratory checks incoming files. All of this can work together when assembled into a common ecosystem. And if you add threat intelligence tools and indicators of compromise (for example, a TIP module), you can build a truly powerful system for both reactive and proactive analysis.