Participation in the design, development and deployment of AI systems is one of the key components required for people to trust AI, says Katie Shilton, a co-PI in the Institute for Trustworthy AI in Law & Society (TRAILS).
Shilton, working with graduate students and postdocs affiliated with TRAILS, conducted a thorough examination of where participation in AI is occurring on a global scale. Her team reviewed academic literature as well as YouTube content from around the world, identifying where people were incorporating participation, who was participating, and what the outcomes were.
“Ultimately, we can’t control whether people trust AI or not, but what you can control is whether [AI systems] are trustworthy,” says Shilton, a professor in the College of Information with an appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS). “And building participation into the process, we think, is really important for that.”
—Produced by UMIACS communications group