Robustness Evaluation and Neural Network Verification
Robustness of machine learning models has become an important issue in many real-world systems, such as self-driving cars and predictive healthcare. In this talk, I will start from the basic definition of robustness in the Lp-norm perturbation setting and discuss algorithms for robustness evaluation, both certified and uncertified. I will then present our initial attempts at making robustness evaluation more generalizable and automatic.
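To make the Lp-norm setting concrete, here is a minimal sketch (not from the talk; a hypothetical toy linear classifier, illustrative only) of the two evaluation flavors mentioned above: an uncertified check that searches for an adversarial perturbation within an L-infinity ball, and a certified check that proves no such perturbation exists. For a linear model both are available in closed form.

```python
import numpy as np

# Toy binary classifier f(x) = sign(w.x + b), label y in {-1, +1}.
# "Uncertified" evaluation: find a perturbation delta with
# ||delta||_inf <= eps that flips the prediction (an adversarial attack).
# For a linear model the worst-case L_inf step is delta = -y*eps*sign(w).
# "Certified" evaluation: margin - eps*||w||_1 > 0 proves robustness,
# since eps*||w||_1 bounds the worst-case margin drop.

def margin(w, b, x, y):
    """Signed margin; positive means x is correctly classified."""
    return y * (np.dot(w, x) + b)

def linf_attack(x, w, y, eps):
    """Worst-case L_inf perturbation of radius eps for a linear model."""
    return x - y * eps * np.sign(w)

def certified_robust(w, b, x, y, eps):
    """True iff no L_inf perturbation of radius eps can flip the label."""
    return margin(w, b, x, y) - eps * np.sum(np.abs(w)) > 0

w = np.array([1.0, -2.0])
b = 0.5
x = np.array([0.3, -0.2])
y = 1

print(margin(w, b, x, y))                       # clean margin: 1.2
x_adv = linf_attack(x, w, y, eps=0.5)
print(margin(w, b, x_adv, y))                   # worst-case margin: -0.3
print(certified_robust(w, b, x, y, eps=0.3))    # True: 1.2 - 0.3*3 > 0
```

For deep networks neither step has a closed form, which is exactly why attack algorithms (uncertified) and neural network verifiers (certified) are needed.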
Cho-Jui Hsieh is an assistant professor in the UCLA Computer Science Department. He obtained his Ph.D. from the University of Texas at Austin in 2015. His work mainly focuses on improving the efficiency and robustness of machine learning systems. In particular, his work on neural network verification won the 2021 International Verification of Neural Networks Competition (VNN-COMP), and the LAMB optimizer he developed has become the default optimizer in MLPerf for distributed BERT training. He is the recipient of the NSF CAREER Award, the Samsung AI Researcher of the Year award, the Okawa fellowship, and best paper awards at KDD, ICDM, ICLR, and ICPP.