I am an associate professor in the University of Maryland Computer Science Department (tenure home), the Institute for Advanced Computer Studies, the iSchool, and the Language Science Center. Previously, I was an assistant professor in the University of Colorado's Department of Computer Science (tenure granted in 2017). Before that, I was a graduate student at Princeton, advised by David Blei.

My research focuses on making machine learning more useful, more interpretable, and able to learn from and interact with humans. This helps users sift through decades of documents; discover when individuals lie, reframe, or change the topic in a conversation; or compete against humans in games based on natural language.

Sign up for an appointment

Recent Publications

  • Quynh C. Nguyen, Elizabeth M. Aparicio, Michelle Jasczynski, Amara Channell Doig, Xiaohe Yue, Heran Mane, Neha Pundlik Srikanth, Francia Ximena Marin Gutierrez, Nataly Delcid, Xin He, and Jordan Boyd-Graber. Randomized Pilot of Rosie, a Health Education Question-and-Answer Chatbot for New Mothers. JMIR Formative Research, 2024. [Bibtex]
  • Zongxia Li, Andrew Mao, Daniel Kofi Stephens, Pranav Goel, Emily Walpole, Juan Francisco Fung, Alden Dima, and Jordan Lee Boyd-Graber. TENOR: Topic Enabled Neural Organization and Recommendation: Evaluating Topic Models in Task Based Settings. European Chapter of the Association for Computational Linguistics, 2024. [Bibtex]
  • Ishani Mondal, Shwetha S, Anandhavelu Natarajan, Aparna Garimella, Sambaran Bandyopadhyay, and Jordan Lee Boyd-Graber. Presentations by the People, for the People: Harnessing LLMs for Generating Persona-Aware Slides from Documents. European Chapter of the Association for Computational Linguistics, 2024. [Bibtex]
  • Chenglei Si, Zhe Gan, Zhengyuan Yang, Shuohang Wang, Jianfeng Wang, Jordan Boyd-Graber, and Lijuan Wang. Prompting GPT-3 To Be Reliable. International Conference on Learning Representations, 2023. [Code] [Bibtex]
  • Chenglei Si, Weijia Shi, Chen Zhao, Luke Zettlemoyer, and Jordan Lee Boyd-Graber. Getting MoRE out of Mixture of Language Model Reasoning Experts. Findings of Empirical Methods in Natural Language Processing, 2023. [Video] [Bibtex]
    Accessible Abstract: A computer can be asked many kinds of questions: general knowledge, common sense, or math. Each type of question is best answered by a particular kind of expert. This paper investigates whether we can automatically detect which kind of expert is best suited to answer a question and route the question to that expert.
  • Yoo Yeon Sung, Naeemul Hassan, and Jordan Boyd-Graber. Not all Fake News is Written: A Dataset and Analysis of Misleading Video Headlines. Empirical Methods in Natural Language Processing, 2023. [Video] [Bibtex]
    Accessible Abstract: Misinformation online is not all text-based. More information is being consumed in video form, and both social media companies and external monitors need to know when misleading videos are being shared online. We create a new dataset of misleading videos and describe what makes the problem so challenging.
  • Sander V Schulhoff, Jeremy Pinto, Anaum Khan, Louis-François Bouchard, Chenglei Si, Jordan Lee Boyd-Graber, Svetlina Anati, Valen Tagliabue, Anson Liu Kost, and Christopher R Carnahan. Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs Through a Global Prompt Hacking Competition. Empirical Methods in Natural Language Processing, 2023. [Prerecorded Video] [Data] [Award Video] [Bibtex] This paper was selected as the Best Theme Paper at EMNLP 2023 (1 of 4,909 submissions).
    Accessible Abstract: As more online AI services are provided by prompted language models, we need to be aware of these models' weaknesses and exploits. We present the HackAPrompt competition, which helped elicit a broad array of exploits that get around large language models.
  • HyoJung Han, Marine Carpuat, and Jordan Boyd-Graber. Automatic Explicitation to Bridge the Background Knowledge Gap in Translation and its Evaluation with Multilingual QA. Empirical Methods in Natural Language Processing, 2023. [Video] [Bibtex]
    Accessible Abstract: Sometimes when you are translating from one language to another, a literal translation is not enough: to actually understand what is being said, the listener needs additional context. Professional translators know this, and the process they use to capture cultural differences between source and target audiences is called "explicitation". We introduce techniques for automatically generating explicitations, motivated by WikiExpl (a dataset collected from Wikipedia and annotated by human translators), and evaluate the explicitations with multilingual question answering.
  • Anna Rogers, Marzena Karpinska, Jordan Boyd-Graber, and Naoaki Okazaki. Program Chairs' Report on Peer Review at ACL 2023. Association for Computational Linguistics, 2023. [Bibtex]
  • Benjamin Börschinger, Jordan Boyd-Graber, Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Michelle Chen Huebscher, Wojciech Gajewski, Yannic Kilcher, Rodrigo Nogueira, and Lierni Sestorain Saralegu. Meta Answering for Machine Reading. ArXiv, Preprint. [Preprint] [Bibtex]
  • Pedro Rodriguez, Shi Feng, Mohit Iyyer, He He, and Jordan Boyd-Graber. Quizbowl: The Case for Incremental Question Answering. ArXiv, Preprint. [Webpage] [Bibtex]