
Exploring generative AI in cybersecurity

Updated: Aug 22, 2023

In our last blog article, we talked about the role of automation in cybersecurity and Malizen's vision for it. How could we resist digging deeper and sharing our position on generative AI? There have been quite a few developments in generative AI applied to cybersecurity in recent months, and it is crucial to stay aware of the challenges that come with them. So let's talk about it!


The good and the bad of generative AI in cybersecurity …


The competition among cybersecurity vendors to adopt generative AI is gaining momentum, as demonstrated at the recent RSA Conference. Multiple companies, such as SentinelOne, Google Cloud, and Recorded Future, unveiled new security solutions incorporating large language models (LLMs) like OpenAI's GPT-4. Microsoft, a prominent supporter of OpenAI, also announced its plans to integrate generative AI technology across its extensive range of security products. While generative AI may be trendy, it is essential to examine its actual performance and its implications for cybersecurity.


LLMs, the technology underlying ChatGPT, differ from earlier deep learning systems, which typically analyze words or tokens in small, local clusters. LLMs can instead discover connections across vast collections of unstructured data and produce content that closely resembles human-written text. For cybersecurity, this is a significant advantage.


With an LLM, users can simply provide a natural language prompt, which can help address the global shortage of cybersecurity professionals: natural language lowers the barrier to entry for newcomers to the field. Generative AI can also analyze and process large volumes of information, enabling faster response times and a sharper focus on critical threats. It makes the difficult task of sifting through disparate data and tools more manageable for analysts during incident response, and it simplifies automated response processes. The introduction of GPT-4 therefore represents a significant step towards a more robust defensive posture, especially for smaller organizations with limited resources and expertise.
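To make this concrete, here is a minimal sketch of the kind of prompt-driven triage described above. It uses the OpenAI Python SDK; the model name, prompt, and log line are illustrative only and do not represent any vendor's actual product.

```python
# Hypothetical sketch: asking an LLM to triage a raw log line in plain
# English. Model choice, prompt, and log content are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

log_line = "Jan 12 03:11:02 sshd[2210]: Failed password for root from 203.0.113.7"

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a SOC triage assistant. Be concise."},
        {"role": "user",
         "content": f"Assess severity and suggest next steps:\n{log_line}"},
    ],
)
print(response.choices[0].message.content)
```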


However, LLMs are not without their challenges. They are prone to generating "hallucinations," where the models produce false or misleading content that appears convincing. To mitigate this, it is crucial to build systems based on relevant and reliable data, with human validation playing a critical role.
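As a rough illustration of that mitigation, the sketch below gates model output behind two checks: the finding must cite at least one piece of source evidence, and an analyst must sign off. The structure is a hypothetical example, not a prescribed design.

```python
# Hypothetical guardrail: an LLM-generated finding is only acted on if it
# is grounded in cited evidence AND a human analyst has approved it.
from dataclasses import dataclass, field

@dataclass
class Finding:
    summary: str
    evidence: list[str] = field(default_factory=list)  # log lines, alert IDs

def actionable(finding: Finding, analyst_approved: bool) -> bool:
    grounded = bool(finding.evidence)  # reject unsourced claims outright
    return grounded and analyst_approved

f = Finding("Possible brute force on sshd", evidence=["auth.log:3121"])
print(actionable(f, analyst_approved=True))                        # True
print(actionable(Finding("Vague claim"), analyst_approved=True))   # False
```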


Maintaining trust is paramount in adopting generative AI. Without complete transparency, doubts may persist within the market. Users may worry about hackers exploiting GPT-based platforms to create targeted attacks by leveraging the platform's knowledge. Microsoft addresses these concerns by explicitly stating that its Security Copilot will not be trained on customers' vulnerability data. Nevertheless, for LLMs to reach their full potential in cybersecurity, they must be tailored to security with training on specific and valuable data sources. There is a concern about how these systems can acquire knowledge from the latest attacks if they do not train on real-world customer incident and vulnerability data...


While generative AI can lower the barrier on the defensive side, there is a parallel concern regarding its potential misuse for offensive purposes. Some research reports suggest that generative AI can empower cyber attackers, particularly those with limited programming abilities or technical skills. It is conceivable that ChatGPT, for example, could become a tool of choice for script kiddies. Therefore, it is essential to carefully consider the implications of generative AI in both defensive and offensive cybersecurity strategies.


What about Malizen?


From our standpoint, we firmly believe that humans are, and will remain, indispensable to effective cybersecurity processes. With this in mind, we have chosen to adopt "old-fashioned" AI approaches, using models such as expert systems and Bayesian networks designed to simulate the expertise of human professionals.
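For readers unfamiliar with that style of model, here is the single Bayes'-rule update that a Bayesian network chains together at scale. The numbers are invented for illustration and have nothing to do with Malizen's actual models.

```python
# A single Bayesian update: how likely is a host compromised, given that
# a detection rule fired? All probabilities below are made-up examples.

p_compromise = 0.01                # prior: P(host compromised)
p_alert_given_compromise = 0.90    # sensitivity of the detection rule
p_alert_given_clean = 0.05         # false-positive rate

# Total probability of seeing the alert at all
p_alert = (p_alert_given_compromise * p_compromise
           + p_alert_given_clean * (1 - p_compromise))

# Posterior: P(compromised | alert) via Bayes' rule
posterior = p_alert_given_compromise * p_compromise / p_alert
print(f"P(compromised | alert) = {posterior:.2%}")   # ~15.38%
```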

The central intelligence of our co-pilot module relies on our knowledge graph, which captures the relationships between the high-level pieces of information that cybersecurity analysts work with. This abstraction-based approach keeps the graph resilient over time and makes it easy to adapt to different information systems and investigations.
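The sketch below conveys the general idea of such a graph using networkx. The entities, relations, and schema are invented for illustration; Malizen's actual knowledge graph is not described here.

```python
# Hypothetical security knowledge graph: nodes are high-level concepts,
# edges carry the relation used to suggest the next investigative step.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("failed_login", "brute_force", relation="indicates")
kg.add_edge("brute_force", "credential_theft", relation="precedes")
kg.add_edge("credential_theft", "lateral_movement", relation="enables")

def next_hypotheses(observation: str) -> list[tuple[str, str]]:
    """Suggest what to investigate next from one observed event type."""
    return [(succ, kg[observation][succ]["relation"])
            for succ in kg.successors(observation)]

print(next_hypotheses("failed_login"))
# [('brute_force', 'indicates')]
```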


With a knowledge-based recommendation system, we can address challenges such as "blank page syndrome" by providing relevant suggestions even when there are no past investigations or exploration traces to draw on. Our system is also hybrid: it combines context-based recommendations from the ongoing investigation with recommendations learned from past investigations through machine learning. By learning from three different sources (events, context, and analyst behavior), our co-pilot helps both senior and junior analysts make prompt decisions. As a next step, we are working on integrating recommendations based on analysts' exploration intentions.
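A hybrid recommender of this kind can be pictured as a weighted blend of per-source scores. The sketch below is purely illustrative; the weights, candidate actions, and scores are assumptions, not Malizen's implementation.

```python
# Illustrative only: rank candidate next steps by blending three signals,
# mirroring the three sources named above (events, context, behavior).
def blended_score(event_s: float, context_s: float, behavior_s: float,
                  weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    w_e, w_c, w_b = weights
    return w_e * event_s + w_c * context_s + w_b * behavior_s

candidates = {
    # candidate action: (event score, context score, behavior score)
    "pivot on source IP":  (0.9, 0.6, 0.8),
    "chart failed logins": (0.7, 0.8, 0.4),
    "inspect DNS queries": (0.2, 0.3, 0.5),
}
ranked = sorted(candidates,
                key=lambda c: blended_score(*candidates[c]), reverse=True)
print(ranked)  # ['pivot on source IP', 'chart failed logins', ...]
```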

As we have mentioned in previous blog articles, we strongly believe at Malizen that the success of a team comprising both humans and machines rests on the trust established between the human and machine teammates. Because analysts' trust in our recommendations is critical, we make sure those recommendations are never shrouded in a black box: every recommendation is justified and explained to the analyst, enabling informed decisions.

The potential of AI is vast. Today it is a tool that will undeniably impact our lives, for better or worse, and the outcome depends on how we nurture this emerging technology. Every coin has two sides, and recognizing that AI can both mitigate cyber risks and create new ones is a promising beginning :-)
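In code, "not a black box" can be as simple as never emitting a suggestion without a justification the analyst can read. The structure below is a hypothetical illustration of that contract, not our actual data model.

```python
# Hypothetical shape of an explainable recommendation: the justification
# field is mandatory, so no suggestion reaches the analyst unexplained.
from dataclasses import dataclass

@dataclass(frozen=True)
class Recommendation:
    action: str
    justification: str  # plain-language reason shown alongside the action

rec = Recommendation(
    action="Filter the timeline on host web-01",   # placeholder value
    justification="Similar past investigations narrowed to the host "
                  "generating the most failed logins at this step.",
)
print(f"{rec.action}\n  why: {rec.justification}")
```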

