Feizi Funded by NIST to Develop Standard Evaluations of Machine Learning Robustness

Feizi writing equations on a whiteboard.
Tue Nov 17, 2020

A University of Maryland expert in machine learning is being funded by the National Institute of Standards and Technology (NIST) to develop metrics that will bridge the knowledge gap between empirical and certifiable defenses against adversarial attacks.

Soheil Feizi, assistant professor of computer science with an appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS), is principal investigator of the $387K two-year project.

An adversarial attack makes small, carefully crafted changes to a machine learning system's input data in order to confuse the algorithm, resulting in flawed outputs. Some of these changes are so small they go undetected, posing a serious security risk for AI systems that are increasingly being deployed in industrial settings, medicine, information analysis and more.
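As a rough sketch of the idea, the toy example below perturbs the input of an invented linear classifier in the direction that most reduces its confidence (in the spirit of gradient-sign attacks); the model, data and epsilon value here are all made up for illustration, not drawn from Feizi's work.

```python
import numpy as np

# Toy linear model and a clean input (all values invented for this sketch).
rng = np.random.default_rng(0)
w = rng.normal(size=20)   # weights of a hypothetical linear classifier
x = rng.normal(size=20)   # a clean input example
y = 1.0                   # true label, encoded as +1/-1

def margin(x_in):
    # Signed score: larger positive values mean more confident,
    # correct classification for this label.
    return y * np.dot(w, x_in)

# The gradient of the loss (-margin) with respect to x is -y * w.
# Stepping a small amount along its sign yields a perturbation that is
# tiny per coordinate but systematically lowers the model's confidence.
epsilon = 0.1
x_adv = x + epsilon * np.sign(-y * w)

# Each coordinate moved by at most epsilon, yet the margin shrinks.
print(margin(x), margin(x_adv))
```

Even though no coordinate of the input changed by more than 0.1, the classifier's margin drops; with enough such pressure a correct prediction can flip, which is the behavior that both empirical and certifiable defenses try to rule out.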

Both empirical and certifiable defenses have recently gained attention in the machine learning community for showing success against adversarial attacks, says Feizi. However, the scalability and usefulness of the provable defense methods, especially in deep and practical networks, are still not well understood.

Feizi’s goal is to bridge that gap by designing metrics that evaluate a defense’s robustness, its scalability to deep networks and its transferability across models.

“These metrics will be used to develop models that enjoy both provable robust predictions as well as robust interpretations, in addition to developing evaluation standards for deep learning systems,” he says.

Story by Maria Herd