“Opening the Black Box of Machine Learning: Interactive, Interpretable Interfaces for Exploring Linguistic Tasks”
Location: LTS Auditorium, 8080 Greenmead Drive
Speaker:
Jordan Boyd-Graber
Associate professor, Department of Computer Science and UMIACS
Abstract:
Machine learning is ubiquitous, but most users treat it as a black box: a handy tool that suggests purchases, flags spam, or autocompletes text. I present the properties that ubiquitous machine learning should have to enable a future of fruitful, natural interactions with humans: interpretability, interactivity, and an understanding of human qualities.
After introducing these properties, I present machine learning applications that begin to fulfill them. I start with a traditional information-processing task, making sense of and categorizing large document collections, and show that machine learning methods can provide interpretable, efficient techniques for doing so with a human in the loop.
From there, I turn to language-based games that require machines and humans to compete and cooperate, and I discuss how such games can both improve and measure interpretability in machine learning.
Speaker Bio:
Jordan Boyd-Graber is an associate professor of computer science and a member of the Computational Linguistics and Information Processing (CLIP) Laboratory, with additional appointments in the iSchool and the Language Science Center.
Jordan’s research focuses on applying machine learning and Bayesian probabilistic models to problems that help us better understand social interaction or the human cognitive process.
He and his students have won “best of” awards at NIPS (2009, 2015), NAACL (2016), and CoNLL (2015).
Additionally, Jordan is the recipient of the British Computer Society’s 2015 Karen Spärck Jones Award and a 2017 NSF CAREER award.
He received his doctorate in computer science from Princeton University in 2010.