A new way to quickly evaluate the performance of new ML models?

When developing new Machine Learning algorithms in cybersecurity, it is often difficult to find data to train models. There is also a risk that the learning process itself is biased and does not perform correctly on real data.


This research paper presents a new interface that allows users to quickly evaluate the performance of new ML models while maintaining confidence that the data underlying the evaluation is valid. It also allows users to immediately identify the root cause of an observed problem. This article introduces the tool and some real-world examples where it has proven useful for model evaluation.
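The paper's interface itself is visual, but the underlying idea of pairing a performance score with sanity checks on the evaluation data can be sketched in a few lines. The snippet below is only an illustration of that idea, not the paper's tool; all names and thresholds are hypothetical.

```python
# Hypothetical sketch: evaluate a model's predictions while also
# sanity-checking the evaluation data itself, so a surprising score
# can be traced back to a root cause (e.g. a heavily imbalanced set).
from collections import Counter

def evaluate(y_true, y_pred):
    """Return accuracy plus simple data-health diagnostics."""
    assert len(y_true) == len(y_pred) and y_true, "labels must align"
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    counts = Counter(y_true)
    majority_share = max(counts.values()) / len(y_true)
    return {
        "accuracy": accuracy,
        "class_counts": dict(counts),
        # Accuracy is misleading when one class dominates the eval set:
        "imbalance_warning": majority_share > 0.9,
    }

report = evaluate(
    y_true=["benign", "benign", "malware", "benign"],
    y_pred=["benign", "malware", "malware", "benign"],
)
print(report["accuracy"])  # 0.75
```

Surfacing the class counts alongside the score is what lets a reviewer distinguish "the model is bad" from "the evaluation data is skewed".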


We love being able to evaluate our Machine Learning models without worrying about the underlying datasets!


https://arxiv.org/abs/2110.07028


Malizen, cybersecurity operations, France

Follow our adventures!

  • Discord
  • Twitter
  • LinkedIn
