I am interested in answering questions in differential privacy, trustworthy AI, uncertainty quantification, and federated learning, using theoretical tools from statistics and optimization, complemented by rigorous experimentation. More concretely, in the near future, I am interested in the following directions:

  1. Auditing, or using attacks to measure empirical privacy leakage in modern ML models (foundation models), with the goal of understanding leakage under realistic attack scenarios. In a similar vein, moving beyond differential privacy to find application-relevant definitions of privacy, fairness, and copyright, backed by rigorous experiments.
  2. Building better (trustworthy) algorithms for practically relevant ML and data-analytics tasks such as recommendation, ranking, and frequency estimation.

Check Google Scholar for an up-to-date list of my publications.