Testing strategies, usually performed by experts or executed by automated processes, can be rigid and struggle to adapt to fast-paced attacks. AI should be considered to facilitate the detection and protection processes by making them easier, faster, and more responsive. In addition, AI, as an autonomous system, is able to generate new defence strategies and test cases. The broad range of fundamental techniques (from decision trees to neural networks) helps address the majority of these challenges. Today, additional techniques based on reinforcement learning and cognitive approaches dramatically extend the art of the possible.
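To make the idea concrete, here is a minimal sketch of the simplest of these fundamental techniques: a single decision-tree split (a "stump") separating suspicious from benign connection records. The feature name, the sample values, and the labels are invented for illustration.

```python
def best_stump(samples, labels, feature):
    """Find the threshold on `feature` that best separates the labels."""
    best = (None, -1.0)
    for t in sorted({s[feature] for s in samples}):
        preds = [1 if s[feature] > t else 0 for s in samples]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best[1]:
            best = (t, acc)
    return best  # (threshold, training accuracy)

# Synthetic connection records: failed login attempts per minute.
samples = [{"failed_logins": v} for v in [0, 1, 2, 30, 45, 60]]
labels = [0, 0, 0, 1, 1, 1]  # 1 = suspicious

threshold, acc = best_stump(samples, labels, "failed_logins")
```

A full decision tree repeats this search recursively on each side of the split; the point here is only that the rule ("more than N failed logins per minute") is learned from data rather than fixed by hand.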
Nevertheless, while these potential benefits should definitely be considered, AI is not always well suited. This limitation is due to the lack of explainability and interpretability of AI models. Explainability is the ability of a model to provide an explanation to a business expert; interpretability is the analogous ability towards a data scientist. The most effective models behave as black boxes, providing an answer without clear reasoning. Fortunately, a growing number of techniques facilitates the understanding of these models.
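One such technique is permutation importance: shuffle a single input feature across samples and measure how much a black-box model's accuracy drops. A large drop means the model relies heavily on that feature. The "model" and the features below are illustrative stand-ins, not a real detector.

```python
import random

def black_box(sample):
    # Opaque scoring logic standing in for a trained model.
    return 1 if sample["bytes_out"] > 1000 and sample["port"] != 443 else 0

samples = [
    {"bytes_out": 50, "port": 443}, {"bytes_out": 5000, "port": 8080},
    {"bytes_out": 2000, "port": 22}, {"bytes_out": 10, "port": 80},
]
labels = [0, 1, 1, 0]

def accuracy(data):
    return sum(black_box(s) == y for s, y in zip(data, labels)) / len(labels)

def permutation_importance(feature, seed=0):
    rng = random.Random(seed)
    shuffled = [s[feature] for s in samples]
    rng.shuffle(shuffled)
    permuted = [dict(s, **{feature: v}) for s, v in zip(samples, shuffled)]
    return accuracy(samples) - accuracy(permuted)

drops = {f: permutation_importance(f) for f in ("bytes_out", "port")}
```

The same probe works on any model that exposes only a predict function, which is exactly the black-box situation described above.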
These capabilities make it possible to:
- Confirm existing knowledge
A keystone of cyber security is to continuously validate existing strategies and confirm the knowledge base, making sure it is still applicable and relevant. Experts seek to confirm the accuracy of the rules in place. An AI solution, acting as an automated agent, can perform effective penetration tests and, based on the results, increase the cyber awareness of the team.
- Challenge existing knowledge
A second role of AI is to challenge the knowledge base with the objective of extending the list of potential explanations for threats. An AI solution can ingest a much wider set of characteristics, which naturally broadens the perspective on and understanding of an event. Moreover, this analysis can be performed in a much shorter period of time.
- Generate new assumptions
The third main role is assumed when more autonomy is given to the AI solution. This autonomy can be achieved by using multiple agents acting as a team and evaluating various strategies “randomly”. Each agent has a specific area of expertise. This approach is much more complex and requires an agent coordinator. Typically, cognitive methods are used with the objective of mimicking human behaviour.
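The coordinated multi-agent idea can be sketched as follows: each agent owns one area of expertise and proposes test strategies "randomly" within it, and a coordinator gathers one proposal per agent for each evaluation round. The agent names and strategies are invented for illustration.

```python
import random

class Agent:
    def __init__(self, expertise, strategies):
        self.expertise = expertise
        self.strategies = strategies

    def propose(self, rng):
        # Pick a strategy "randomly" within this agent's own expertise.
        return (self.expertise, rng.choice(self.strategies))

class Coordinator:
    def __init__(self, agents, seed=42):
        self.agents = agents
        self.rng = random.Random(seed)

    def round(self):
        # Gather one proposal per agent for this evaluation round.
        return [a.propose(self.rng) for a in self.agents]

agents = [
    Agent("network", ["port-scan", "dns-tunnel probe"]),
    Agent("web", ["sql-injection probe", "xss probe"]),
    Agent("identity", ["password spray", "token replay"]),
]
proposals = Coordinator(agents).round()
```

A real system would add the evaluation loop (scoring each strategy's outcome and feeding it back), which is where the cognitive methods mentioned above come in.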
Adding mechanisms and elements that enhance detection capabilities helps prove scenarios and train teams.
When talking about security monitoring, one of the key points is to build confidence that the detection rules will fire during real cases. Indeed, nothing would be worse than setting up detection mechanisms that do not trigger an alert at the right time. SOC teams have a quality process that allows them to validate the proper functioning of detection mechanisms whenever rules are changed or created. While this is a critical task, it is tedious and must be done carefully, choosing generic use cases.
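That quality process can be sketched as a regression harness: each detection rule is re-validated against known-good and known-bad sample events whenever it changes. The rule and the log lines below are simplified illustrations, not a real SOC ruleset.

```python
import re

RULES = {
    # Hypothetical rule: flag failed SSH logins.
    "ssh-bruteforce": re.compile(r"Failed password .* from \S+", re.I),
}

VALIDATION_CASES = {
    "ssh-bruteforce": {
        "must_match": ["Failed password for root from 10.0.0.5"],
        "must_not_match": ["Accepted password for alice from 10.0.0.9"],
    },
}

def validate(rules, cases):
    """Return a list of (rule, problem, line) for every failed check."""
    failures = []
    for name, rule in rules.items():
        for line in cases[name]["must_match"]:
            if not rule.search(line):
                failures.append((name, "missed", line))
        for line in cases[name]["must_not_match"]:
            if rule.search(line):
                failures.append((name, "false positive", line))
    return failures

failures = validate(RULES, VALIDATION_CASES)
```

Running this harness on every rule change is exactly the tedious but critical task described above; automating it frees the team to focus on the specific attack patterns discussed next.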
But what about the ability to detect specific attack patterns?
This is where AI can play an important role, by simulating attack patterns to validate the proper functioning of the rules in place. AI capabilities provide automation and diversification possibilities that are a perfect complement to the formal validation schemes already in place within the SOC. Indeed, AI, once sufficiently “trained”, offers the possibility of being used not only as a means of defence but also as a means of testing and validation. It is possible to use AI to validate the implementation of predefined security rules and to develop new scenarios based on tests or trends.
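A minimal sketch of this testing role: generate evasion variants of a known attack pattern and check whether an existing detection rule still fires. The rule and the mutation strategies are simplified illustrations; a trained system would generate far more diverse variants.

```python
import re

# Hypothetical detection rule for SQL injection attempts.
detection_rule = re.compile(r"union\s+select", re.IGNORECASE)

def mutate(payload):
    """Yield simple evasion variants of an attack payload."""
    yield payload
    yield payload.upper()               # case change
    yield payload.replace(" ", "/**/")  # comment-based spacing
    yield payload.replace(" ", "%20")   # URL encoding

base = "union select password from users"
results = {v: bool(detection_rule.search(v)) for v in mutate(base)}
missed = [v for v, caught in results.items() if not caught]
```

Here the comment-based and URL-encoded variants evade the rule, pinpointing exactly where it needs hardening, which is the kind of feedback the SOC validation scheme alone does not provide.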