Dr Parpas is a Senior Lecturer in the Computational Optimisation Group of the Department of Computing at Imperial College London. Before joining Imperial College, he was a research fellow at MIT (2009-2011), and before that a quantitative associate at Credit Suisse (2007-2009). He completed his PhD in computational optimisation at Imperial College in 2006. He is interested in the development and analysis of algorithms for large-scale optimisation problems, and in exploiting the structure of large-scale models arising in applications such as machine learning and finance.

Stability and Uncertainty Quantification in Deep Neural Networks
Breakthroughs in modern Neural Network (NN) architectures and related algorithms in Machine Learning (ML) have transformed entire areas of computer science, such as computer vision and natural language processing. Unfortunately, both theoretical and empirical results have shown that neural networks compute unstable classifiers. An unstable classifier is vulnerable to adversarial attacks and malicious exploitation. The perturbations needed to fool ML classifiers are small and indistinguishable from noise, and are therefore difficult to detect. A necessary condition for the successful deployment of ML systems in real-world applications is that the underlying system is stable. Without resolving this challenging problem, it is not possible to make meaningful progress in critical application areas such as the explainability and interpretability of machine learning algorithms, or efficient and robust training methods for reinforcement learning. Despite several attempts, the problem remains open. In this talk, I review some recent developments that attempt to address it.
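To make the instability concrete, here is a minimal sketch (illustrative only, not material from the talk) of the Fast Gradient Sign Method (FGSM) of Goodfellow et al., a standard way to construct the small, noise-like perturbations the abstract refers to. The model, labels, and the step size epsilon are placeholder assumptions.

```python
# Illustrative sketch (not from the talk): an FGSM adversarial perturbation.
# Assumes `model` is any differentiable PyTorch classifier returning logits,
# and that inputs are images scaled to [0, 1]; epsilon is a hypothetical
# perturbation budget.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return x + epsilon * sign(grad_x loss): a small perturbation chosen
    to increase the classification loss as fast as possible."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()  # one step in the worst-case direction
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```

Even for an epsilon small enough that the perturbed input is visually indistinguishable from the original, the predicted class of a typical trained network often changes, which is exactly the instability the talk addresses.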