How can we make sure that machines learn good representations of data? Labelling the data is the typical solution, but it is often costly and time-consuming: models such as neural networks typically require thousands to millions of examples to solve a task. Is it possible to learn good representations without having to label every one of these examples?
Representation learning offers promising answers through semi-supervised and self-supervised learning. With these techniques, it becomes increasingly realistic to solve complex tasks with few labelled examples. In some cases, we also want these representations to be understandable by human users, which creates a significant overlap with explainable artificial intelligence.
My research focuses on explainable artificial intelligence, representation learning, and robust machine learning.