
Zhang Receives Award from Open Philanthropy to Enhance Safety in Generative-AI Agents

October 16, 2025
Graphic of two AI Agents chatting with each other.

As AI continues to advance, large language models (LLMs) are performing increasingly complex tasks—from generating software code to writing articles, answering questions, and collaborating on reasoning-heavy problems. Understanding how these LLMs work in tandem has become a key question for AI safety researchers.

Kaiqing Zhang, an assistant professor of electrical and computer engineering with an affiliate appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS), is leading innovative research to investigate these interactions.

He recently received funding from Open Philanthropy, a philanthropic organization that partners with GiveWell and Good Ventures, to support this work. Collectively, these organizations fund novel, forward-looking research that can have a significant social impact.

Zhang’s project will examine how LLMs communicate with each other and with humans, and how to better coordinate these interactions. It focuses on whether collusion (when models secretly coordinate to achieve shared goals) and hidden communication (known as steganography) can emerge naturally, and how such behaviors can be controlled.

“LLMs aren’t just isolated tools,” says Zhang, a core member of the University of Maryland Center for Machine Learning. “When multiple models interact with each other—and especially with humans—they can develop subtle communication strategies that we might not expect. This project lets us observe those strategies, understand when collusion arises, and find ways to prevent it.”

Zhang’s research also explores how LLMs might develop hidden communication—even without specific training—and how training multiple models together can amplify these subtle collusive behaviors. Ultimately, Zhang says, the research aims to identify ways to mitigate these behaviors.

The Open Philanthropy funding will support bringing additional expertise into Zhang’s lab, building a strong foundation for ongoing research into agentic AI safety. Zhang believes this work will not only advance AI safety knowledge but also provide a framework for guiding multi-model behavior in real-world applications.

“Ensuring AI systems communicate safely and reliably is critical,” Zhang says. “I am grateful to Open Philanthropy for their support and excited to see how this project can advance both the theory and practice of safe Generative-AI agents.”

—Story by Melissa Brachfeld, UMIACS communications group
