Research

We are now in the process of putting an AI on every desk and in every home, which puts intense pressure on practitioners to consistently deliver empirical breakthroughs and on regulators to safeguard users. My research develops novel theoretical solutions to guide practitioners, to safeguard users, and to pave the way toward truly autonomous AI. I believe that self-contained, provably safe, and energy-efficient designs will be the leap from current human-guided AI to AGI.

AI pipelines involve many moving pieces, from the data (collection, cleaning, augmentation, batching) and the architecture (deep network design, parametrization, distillation, pruning, quantization) to the loss (supervised, autoencoding, self-supervised). Practical theoretical results need to account for the interactions between these pieces, which is why I pursue research threads along each of those directions.

Bio

I have been doing research in learnable signal processing since 2013, in particular on learnable parametrized wavelets, which were later extended to deep wavelet transforms. The latter has found many applications, e.g., marsquake detection on NASA's Mars rover. In 2016, when I joined Rice University for a PhD with Prof. Richard Baraniuk, I broadened my scope to explore deep networks from a theoretical perspective by employing affine spline operators. This led me to revisit and improve state-of-the-art methods, e.g., batch normalization and generative networks. In 2021, when I joined Meta AI Research (FAIR) for a postdoc with Prof. Yann LeCun, I further broadened my research interests to include self-supervised learning and the biases emerging from data augmentation and regularization, leading to many publications and conference tutorials. In 2023, I joined GQS, Citadel, to work on highly noisy and nonstationary financial time series and to provide AI solutions for prediction and representation learning. This industry exposure drives my research agenda of providing practical solutions from first principles, which I have been pursuing every day for the last 10 years.

Media
Publications
Venues: ICLR, NeurIPS, ICML, CVPR, ECCV, MSML, Nature, IEEE, Springer.