
Bad advice on automation

31.07.2025

Eva Belyaeva, Security Vision

 

Introduction


I was lucky enough to take part in AM Live three times on the topic of automation in information security. As we sorted through everything related to it (advantages, disadvantages, the use of AI, and real-life examples), I felt a growing desire to collect anti-examples of automation from my work experience and to systematize them.


In just under 12 years in information security, I have taken part in both successful and not-so-successful projects. Fortunately, by the time I joined Security Vision three years ago, I already had a basic understanding of what works and what does not. Learning from the mistakes of previous years and drawing on the experience of our whole team, we now manage to strike a balance: we automate IT and security processes effectively and can tell right away whether a given automation is worth doing at all. But it was not always so.


As for the anti-examples, some are banal and some less so. Everything I describe below happened within the last 5-10 years, and I will anonymize the cases just in case. It does not matter which customer it was or what kind of project; what matters is that any failure helps us draw conclusions and move on.


When is automation useless or harmful?


I think it will be a little easier if we try to classify the most common automation errors and approach them from the perspective of the tasks involved: what these tasks were, how they differ from similar ones, and, most importantly, what the consequences are if they are automated incorrectly.


One-time tasks


If you ask anyone, even someone outside our field, what kind of automation would be the most useless, the simplest and most obvious answer is: one-time tasks.


That is, tasks you will never repeat and for which no process needs to be formalized at all; the process does not exist as such, because these tasks fall into the "just go and do it" category. This can be called automation without a purpose, done only to, say, try out a framework you like or to practice writing scripts, nothing more.


When this is done for yourself, for some pet project, it is still understandable. But if you try it in an enterprise, where alongside such tasks there are plenty that genuinely need automation, the story turns sad. It takes time, it wastes a valuable employee's effort, and the output is small: with the same or even less effort, someone could simply have done the task and never returned to it.


The same applies to simple tasks that seem hard to solve manually, intellectually or in one sitting, but are actually much easier and faster to do by hand than to automate. Such a task might be classic monkey work with an obvious deadline, or it might be partly creative, in which case automation would still require a live person's involvement. If you know for sure that this task will not come up again, and nothing similar sits in the backlog, the most effective option is simply to solve it manually.


One way or another, if the task is one-time, there is simply no point in spending effort on automating it.


Tasks that require more resources to automate than manual work


The resource here can be money as well as time. This is the second class of examples: automating tasks whose implementation demands great effort, large financial investment, or too much time from a specialist who could be doing something else, for instance automating other tasks. When planning resources for automation, always weigh them against the cost of performing the same tasks manually. This is another truism that everyone knows and repeats, yet does not always apply in practice when business is involved.


As for the consequences: many years ago there was a customer who wanted to set up an integration to transfer tickets from one system to another. The customer considered both a self-written integration script and purchasing a data bus as part of an existing solution. What happened? The customer sat down and calculated how much the automation would cost versus how much it would cost to pay a specialist to enter those tickets manually.


Given that there were very few tickets and the flow was small, the calculation over 5 or even 10 years showed it was cheaper to hire an intern or a student.
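A back-of-the-envelope version of that calculation can be sketched in a few lines. All figures below are hypothetical placeholders, not numbers from the actual case.

```python
# Break-even sketch: one-off automation cost plus upkeep versus
# recurring manual effort. All figures are hypothetical.

def breakeven_years(automation_cost: float,
                    yearly_maintenance: float,
                    manual_hours_per_year: float,
                    hourly_rate: float) -> float:
    """Years until automation pays off; inf if it never does."""
    yearly_manual_cost = manual_hours_per_year * hourly_rate
    yearly_saving = yearly_manual_cost - yearly_maintenance
    if yearly_saving <= 0:
        return float("inf")
    return automation_cost / yearly_saving

# A small ticket flow: 2 hours/week of manual entry at $30/hour,
# versus a $20,000 integration with $2,000/year upkeep.
years = breakeven_years(20_000, 2_000, 2 * 52, 30)
print(f"Pays off after {years:.1f} years")
```

With numbers like these the payback horizon runs well past a decade, which is exactly the situation where an intern wins.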


The same is true when solving a task requires raising employees' competencies: just yesterday anyone could generate and send reports, even a junior fresh from an internship, and today that person needs training to work with new software. How important and how critical is that for the task at hand? Better to determine this before launching the automation effort.


The automated process is unclear or not formalized


In contrast to simple tasks, tasks can of course be complex. What is a complex task in this context? Either it requires intellectual effort just to understand what is going on, or it is complex compared to similar simple ones: the simple ones have fewer steps, or are already solved by hand, and that path has been walked by someone.


If no one knows how to do the task manually, or at least has an approximate algorithm of actions, problems arise. Any automation requires a process approach: an algorithm for the whole thing to work. Whether you need to write a script or integrate an open-source solution, your first steps will be formalizing the process and defining the end goal.


There were projects where these steps were simply skipped: people do it "somehow" now, just start and something will work out. Usually it didn't. Sometimes the problem was that the process (for example, annual internal testing of subordinates) was run differently in different departments, from notifying employees to recording the results; even the process artifacts at each stage differed. Alas, although it would have been more logical to reach internal agreement first and only then bolt on automation, it was done the other way around.


The result was a very strange application, full of details unconnected to each other; not a single team was comfortable working this way, and gradually everyone returned to sheets of paper and manually maintained records.


Automating a complex task while simpler ones remain unsolved


Let's continue with tasks that are complex compared to their peers. These could be, for example, tasks of automating incident detection, i.e. writing correlation rules. It is important to understand that without experience handling simple cases, it is not quite right to take on complex ones straight away.


What are complex correlation rules, put simply? They are sets of atomic conditions: the same simple rules, only linked by a larger number of conditions. So if we undertake to write complex correlation rules, we may hit a problem: we do not know how to find these atomic pieces or how to connect them together.
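As an illustration of this idea (not any specific SIEM's rule syntax), a composite rule can be modeled as a conjunction of atomic predicates over an event. The field names, account names, and hours below are invented for the example.

```python
# Sketch: a "complex" correlation rule as a conjunction of atomic
# predicates over a single event. Field names are illustrative only.
from typing import Callable

Event = dict
Condition = Callable[[Event], bool]

def all_of(*conds: Condition) -> Condition:
    """Combine atomic conditions into one composite rule (AND)."""
    return lambda e: all(c(e) for c in conds)

# Atomic conditions: each one is a simple rule in its own right,
# and each can be tested and debugged in isolation.
failed_login  = lambda e: e.get("action") == "login_failed"
admin_account = lambda e: e.get("user") in {"root", "admin"}
off_hours     = lambda e: not 9 <= e.get("hour", 12) < 18

# The composite rule only makes sense once each atom is trusted.
suspicious_admin_login = all_of(failed_login, admin_account, off_hours)

event = {"action": "login_failed", "user": "root", "hour": 3}
print(suspicious_admin_login(event))  # True
```

The point of the sketch is the order of work: until each atomic predicate fires correctly on its own, debugging the conjunction means debugging everything at once.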


Accordingly, without working things out on simpler examples first, we will get more and more false positives and will have to return to the simple examples for debugging anyway. Here automation does not simplify anything; it only eats valuable time.


The second example is application testing. Automation can be useless if you start writing automated tests before you have written test cases and walked at least part of the path manually: without a process, without a formalized approach to testing, starting on automated tests is rather pointless. You do not know what the result should look like, you make things harder for yourself, and when it is time for real testing you may have to throw all that work away and start writing test cases from scratch.
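To make the order of work concrete, here is a minimal sketch: the test case exists first as agreed-upon data, and only then is the check automated. The function under test and its cases are hypothetical stand-ins.

```python
# Sketch: test cases are formalized first as data, then reused by
# the automated check. The function under test is a stand-in.

def normalize_username(name: str) -> str:
    """Hypothetical function under test."""
    return name.strip().lower()

# Step 1: the test case as a written artifact, with inputs and
# expected results agreed on before any automation is attempted.
TEST_CASES = [
    ("  Alice ", "alice"),
    ("BOB",      "bob"),
]

# Step 2: only once the cases exist does automating them make sense.
def run_cases() -> bool:
    return all(normalize_username(raw) == expected
               for raw, expected in TEST_CASES)

print(run_cases())  # True
```

The automation here is trivial precisely because the hard part, deciding what correct behavior looks like, was done before a line of test code was written.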


Automating overly complex tasks with extensive context and variations (too many exceptions)


The task itself can be complex: it can be done manually, and the process can even be understood and described, but there can be any number of branches within it, because working nuances are everywhere.


A classic example is automating incident response. It is impossible to describe once and for all every condition and option under which one can draw an unambiguous conclusion about which action to apply and whether applying it is appropriate at all. We got out of this situation through the concept of dynamic playbooks, but even here it is clear that not everything can be accounted for. Even though the new process can adapt to the infrastructure, respect analysts' permissions, and consider the context of the asset being handled, sometimes an action will still turn out to be irrelevant.


In that case, you need at least to ensure there are mechanisms for bypassing and disabling automated rules, plus the option of manual analysis when necessary: a kind of switch that can be used to avoid negative consequences.
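Such a switch can be as simple as a per-action flag checked before every automated step, with a fallback to a human queue. The sketch below is a minimal illustration with invented action names, not a description of any real product.

```python
# Sketch of a manual-override "switch" for automated response:
# every automated step checks a flag and, if the flag is off,
# hands the action to an analyst instead. Names are invented.

AUTOMATION_ENABLED = {"isolate_host": True, "block_account": False}
manual_review_queue: list[dict] = []

def respond(action: str, target: str) -> str:
    """Run an action automatically only if its switch is on;
    otherwise queue it for manual analysis."""
    if AUTOMATION_ENABLED.get(action, False):
        # ... here the actual security tool would be called ...
        return f"auto: {action} applied to {target}"
    manual_review_queue.append({"action": action, "target": target})
    return f"queued for manual review: {action} on {target}"

print(respond("isolate_host", "srv-01"))   # runs automatically
print(respond("block_account", "jdoe"))    # goes to an analyst
```

Defaulting unknown actions to "off" means a forgotten flag degrades to manual work rather than to an unwanted automated action.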


This example can still be implemented if you put real effort in and think it through properly. But there are cases where doing something like this is unreasonably difficult.


The second example of exceptions concerns reporting on the results of source-code testing. Yes, in general it is possible to generate a report skeleton automatically and then fill it with the necessary data. On the one hand, this is painstaking work that demands a specialist's full concentration: in the end, the result will be checked by a laboratory or a regulator, and there must be no errors.


On the other hand, there is a desire to automate it somehow, to reduce the probability of error and speed up the process. The trouble is that such reporting also imposes a number of conditions on selecting the content, and working out those rules will probably take more time than filling the report in yourself.


Thus we gradually arrive at the point that it is not necessary to automate everything one hundred percent; sometimes it is enough to automate a preparatory part, giving the specialist the opportunity to step in later.


Automation without regard to business processes and strategic goals


Let's imagine you have a process that already works effectively. You want to automate it so that people do not have to process a lot of information manually, but it is hard to foresee the consequences. What if automation actually complicates the entire process?


If you do not ask in time why the process exists at all and what it delivers, you can end up having expended a lot of effort without making people any more efficient.


For example, one department regularly spent two days a week manually recording and updating the detailed task schedule and the hours spent. Keeping it all in Excel was tedious, people often made mistakes and typos, and the result pleased no one. It was decided to implement a time-tracking system. Only after implementation did it turn out that data on all projects was already being recorded, just... by a different team, which spent far less time on it. The task was already being solved; the process was simply organized differently and did not require automation. Had they figured that out first, it would never have come to a complex implementation.


Another example is about goals. A customer had, and still has, its own center of expertise, which includes virus analysts. The work is useful and well understood. Then it suddenly wanted to add a service to this center: generating pulses with indicators of compromise for its subsidiaries and other clients. But there was no understanding of how clients should use this information.


As a result, automation of data collection on indicators and threats is being added without a clear understanding of how this data will improve protection or speed up incident investigation for each client. The data is collected but neither analyzed nor applied in practice; it simply sits in storage until better times.


When a process is improved simply to be "more efficient", and something new appears "so that we have it too", without a defined end goal, the result will match. You must first understand where you are going and why; then automation will most likely bring nothing but benefit.


There is not enough data for automation


When we start automating a task, we must also think about what will happen to that automation afterward.


One example concerns what can happen to the data an automation system needs as input. There was a customer who wanted to build a Data Lake collecting information from both IT and information security. The goal was precisely to unite these departments and their data flows: to correlate anomalies with incidents, act proactively, and work on infrastructure protection and hardening.


It was expected that after the project, data would flow into the lake from all available IT and security systems, and those systems in turn could query all the accumulated data on demand. While the project was under way, test data was available, and the system was successfully developed, tested, and verified.


The main problems began after go-live, in the form of missing data: the two departments could not come to terms with each other and did not exchange data. In the end, the Data Lake held only a tiny flow of events agreed on by both sides, and even querying that flow required permission from the neighboring department.


As a result, an idea that was supposed to be useful ended with everything formally done well: the system solved its tasks. In practice, however, process problems and plain human ones prevent the automation from being used.


There is no one to support the algorithm


The question of ongoing support is standard for automation: even before development starts, it is worth thinking about what will happen to the application or script over time.


This is also one of my personal pains. Automation was needed for transferring data from one system to another. It worked for a while, until the customer's entire team changed, after which there was no one to support the system and it became far less useful. The applications connected by the integration kept being updated, and no one was left to fix the errors that arose.


As a result, we either have to say goodbye to the development or rebuild it from scratch with new people.


In this case, of course, the problem can be avoided by using a vendor or open-source solution: there will always be a team or community to keep the project alive and relevant.


How to evaluate whether a task needs automation?


To conclude, let's assemble from these anti-examples a rough algorithm for evaluating whether automation is needed, so as not to waste time and money. The most important thing, of course, is to make sure the task is not a one-time action but a genuine process.


First, try to understand the end goal: why we are doing all this work and how the new project can be useful.


Second, look at the process itself: can it be formalized, are there pitfalls, do several teams see it differently? Check whether a complex task is being tackled ahead of simpler ones, whether the process actually exists, and what results it brings.


Third, assess what data the new system will require: whether you can not only obtain it for testing but also keep access to it after implementation. At the same time, estimate how complex the task is and whether it requires working out non-obvious nuances.


Fourth, ask yourself about support: will the new system have its own dedicated team, or will a partner handle it?


Fifth, evaluate how much easier breaking the process into components and automating it will be compared to manual labor. Perhaps the time for automation has not yet come.
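The five checks above can be condensed into a simple pre-automation checklist. The question wording is a paraphrase of the criteria, not an exact formula.

```python
# The article's five checks, condensed into a pre-automation
# checklist. Question wording paraphrases the criteria above.

CHECKLIST = [
    "Is this a recurring process rather than a one-off task?",
    "Is the end goal of automating it clearly defined?",
    "Can the process be formalized and agreed on by all teams?",
    "Will the required input data stay available after go-live?",
    "Is there a team (or partner) to support the result long-term?",
]

def should_automate(answers: list[bool]) -> bool:
    """A 'no' to any question is a reason to pause, not to code."""
    return len(answers) == len(CHECKLIST) and all(answers)

print(should_automate([True, True, True, True, True]))   # True
print(should_automate([True, False, True, True, True]))  # False
```

A failed check does not forbid automation forever; it marks the question that has to be answered before the project starts.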
