
A Timeline of Large Transformer Models for Speech

This week, we are looking not at a research article but at a blog post. And a good one!

Since the arrival of the Transformer architecture in 2017, transformer-based models have come a long way in machine learning. A Transformer is a sequence-to-sequence encoder-decoder model, mainly used for advanced applications in natural language processing (NLP). As in NLP, tech companies are now building larger and larger speech Transformer models. This blog post covers some of the most popular ones, starting in 2019, when things began to take off in the speech processing field.

We found it really interesting to see how much has been done in that field. The article highlights some quite important work. We particularly recommend reading the part on HuBERT. We have already told you about our love of BERT, short for "Bidirectional Encoder Representations from Transformers". Today, it has become a state-of-the-art NLP tool. It has revolutionized the field of SEO, but not only that: it is also used in cybersecurity research for static analysis of Android application source code, enabling malware detection.

We look forward to seeing what innovations in speech processing will come next!
