Advances in machine learning have led to rapid and widespread deployment of learning-based inference and decision making for safety-critical applications, such as autonomous driving and security diagnostics. Current machine learning systems, however, assume that training and test data follow the same, or similar, distributions, and do not consider active adversaries manipulating either distribution.
Recent work has demonstrated that motivated adversaries can circumvent anomaly detection and other machine learning models at test time through evasion attacks, or can inject well-crafted malicious instances into training data to induce errors at inference time through poisoning attacks. In this talk, I will describe my recent research on security and privacy problems in machine learning systems. In particular, I will introduce several adversarial attacks across different domains and discuss potential defensive approaches and principles, including game-theoretic and knowledge-enabled robust learning paradigms, towards developing practical robust learning systems with robustness guarantees.
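For readers unfamiliar with these attack classes, the sketch below illustrates a test-time evasion attack in the spirit of the fast gradient sign method (FGSM) against a toy logistic-regression model. All weights, inputs, and the perturbation budget epsilon are illustrative placeholders chosen for this example, not material from the talk.

```python
# Minimal sketch of a test-time evasion attack (FGSM-style) on a toy
# logistic-regression model. All numbers are illustrative placeholders.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """Shift x by epsilon in the direction of the sign of the loss
    gradient, pushing the model's prediction away from the true label y."""
    p = sigmoid(w @ x + b)   # model's predicted P(y = 1 | x)
    grad_x = (p - y) * w     # gradient of cross-entropy loss w.r.t. x
    return x + epsilon * np.sign(grad_x)

# Toy model and a correctly classified input (values are hypothetical).
w = np.array([3.0, -3.0, 3.0])
b = 0.0
x = np.array([0.2, -0.2, 0.2])
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.25)
print("clean score:", sigmoid(w @ x + b))          # ~0.86, class 1
print("adversarial score:", sigmoid(w @ x_adv + b))  # ~0.39, flipped to class 0
```

A poisoning attack is the training-time counterpart: rather than perturbing a test input, the adversary injects crafted instances into the training set so that the learned parameters (w and b above) are themselves corrupted.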
Speaker Bio:
Dr. Bo Li is an assistant professor in the Department of Computer Science at the University of Illinois Urbana-Champaign. She is the recipient of the Symantec Research Labs Fellowship, a Rising Stars award, the MIT Technology Review TR-35 award, the Intel Rising Star award, the NSF CAREER Award, research awards from tech companies such as Amazon, Facebook, Google, and IBM, and best paper awards at several machine learning and security conferences. Her research focuses on both theoretical and practical aspects of trustworthy machine learning, security, privacy, and game theory. She has designed several scalable frameworks for robust machine learning and privacy-preserving data publishing systems. Her work has been featured by major publications and media outlets such as Nature, Wired, Fortune, and the New York Times.
Zoom Link: https://argonne.zoomgov.com/j/1604373309?pwd=QWVEeUczei9lS0lvR1Z2QTVGR3…