Computing & Society

When the Feed Becomes the Front Line

May 6, 2026
[Illustration: personnel in a dimly lit command center monitoring a large wall of digital screens displaying world maps and complex data.]

When the port in Beirut exploded in August 2020, the blast was captured from what seemed like every angle. Within hours, thousands of videos and photos flooded social media—shot from balconies, dashcams, phones held out of windows across the city. For Cody Buntain, an assistant professor at the University of Maryland College of Information, that torrent of imagery wasn’t just documentation. It was data.

“We saw so many photos of the aftermath, that one could be very certain about the impact of the event,” says Buntain, who has an appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS) and is active in the Institute for Trustworthy AI in Law & Society (TRAILS).

The volume and variety of the images, cross-referenced, produced something approaching ground truth, he added.

Buntain has spent years asking what becomes possible when you apply that same logic to war. He studies how social media behaves during crises—natural disasters, political uprisings, full-scale conflicts—and builds AI tools to make sense of it at a scale no human analyst team could match.

As conflicts involving the U.S., Israel and Iran dominate global headlines and the information environment around modern war grows harder to navigate, that work has taken on new urgency. His tools are designed for analysts, journalists, and humanitarian workers trying to understand what’s happening on the ground when getting there isn’t possible.

But building the tools, it turns out, is the easier part. Deciding when—and whether—to trust them is harder.

Reading the Visual Record of War

Conflicts are now documented in real time—by soldiers, civilians, journalists, and propagandists alike. That documentation creates an open-source intelligence opportunity, and a data problem no human team can handle at scale. Buntain’s approach is to train object-detection models to scan massive image collections, flagging those that contain specific weapon systems or insignia. His research shows that the timing of when weapon imagery appears online can correlate with offline conflict data, suggesting that the digital record can, under the right conditions, reflect physical reality.

Knowing that signal exists raises a more difficult question: should anyone act on it? “Especially when lives are on the line,” he says, “where is the root of trust here?”

Models can identify weapons in an image or estimate the severity of a strike, but the real question isn’t whether they perform well on a benchmark. It’s whether a human working alongside one makes better decisions than a human working without it.

That means studying the human as much as the system: how long a task takes without AI assistance, how that changes with a model in the loop, and how trust forms—and sometimes misfires—over time.

That problem doesn’t get easier when the underlying data is in question.

Fake Images, Real Stakes

The spread of AI-generated imagery has made an already chaotic information environment harder to navigate. But misinformation itself isn’t new. In past crises, the same doctored images resurfaced again and again. What’s changed is the scale, the speed, and the erosion of a signal researchers once relied on.

Early social media had a visible correction mechanism: false claims attracted public refutation, and that refutation was itself useful data. 

“Now you can have an audience of any number of people from anywhere in the world telling you something didn’t happen for ideological or monetary reasons,” Buntain says. The correction still exists. It just can’t be trusted the way it once was.

Instead, Buntain’s work looks beyond the content to the context surrounding it—who posted it, how old the account is, what it has shared before, how the information spread. “These things don’t emerge in a vacuum,” he says.
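A minimal sketch of what context-over-content scoring might look like, with fully invented signals and weights (account age, posting history, reshare depth); Buntain's actual features and models are not specified in this story:

```python
from datetime import datetime

def context_score(account_created, past_posts, reshare_depth, now=None):
    """Crude credibility heuristic: reward older, established accounts and
    penalize posts seen only deep in a reshare chain. Weights are illustrative."""
    now = now or datetime(2026, 5, 6)
    age_days = (now - account_created).days
    age_signal = min(age_days / 365, 1.0)             # account age, capped at one year
    history_signal = min(len(past_posts) / 100, 1.0)  # prior activity, capped
    spread_penalty = min(reshare_depth / 10, 1.0)     # far from the original source
    return round(0.5 * age_signal + 0.3 * history_signal - 0.2 * spread_penalty, 3)

# A week-old, empty account reached via a long reshare chain vs. a seasoned poster.
fresh = context_score(datetime(2026, 4, 30), past_posts=[], reshare_depth=8)
seasoned = context_score(datetime(2020, 1, 1), past_posts=["p"] * 250, reshare_depth=1)
print(fresh, seasoned)
```

The design choice this illustrates: none of these signals inspects the image or claim itself, only the surrounding behavior, which is harder to fake at scale.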

Context has limits too, though—especially when the data itself is thin.

When Conflicts Go Dark

Not all conflicts produce a dense digital record, but Buntain pushes back on the assumption that restricted environments mean no data at all. Armed groups and political actors tend to maintain an online presence even under suppression—for propaganda, coordination, and international reach. And absence itself can be meaningful. “When communities that are usually active are now quiet,” he says, “identifying those black holes is itself important.”
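Identifying those black holes can be framed as simple anomaly detection on posting volume. This sketch flags days that fall far below a trailing baseline; the window and z-score cutoff are assumptions for illustration, not a method from the research:

```python
from statistics import mean, stdev

def find_blackouts(daily_posts, window=7, z_cut=-2.0):
    """Flag indices where posting volume drops far below the trailing baseline.
    A 'black hole' is a day whose z-score against the prior `window` days
    falls under z_cut. Thresholds here are illustrative."""
    quiet = []
    for i in range(window, len(daily_posts)):
        base = daily_posts[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and (daily_posts[i] - mu) / sigma < z_cut:
            quiet.append(i)
    return quiet

# A community posting ~50 times a day suddenly goes near-silent on day 10.
series = [48, 52, 50, 47, 53, 49, 51, 50, 52, 48, 3, 2, 1]
print(find_blackouts(series))
```

Note the limitation visible even in the toy data: once the quiet days enter the baseline, its variance balloons and later silent days stop looking anomalous, so a real system would need a more robust baseline.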

The deeper problem is bias. In lower-resource settings, the content that does exist tends to come from those with the most access—typically the most powerful or organized actors. The data isn’t missing so much as skewed. To account for that, Buntain partners with organizations like the International Red Cross and Red Crescent, whose staff on the ground can help contextualize what the data can and can’t say.

Even with strong data and well-tested models, none of it matters if the tools aren’t accessible.

Who Gets the Tools

The most unsettled question in Buntain’s research isn’t technical. It’s structural. If AI models can help interpret conflict imagery, who actually has access to them? Running large-scale systems requires computational resources that aren’t evenly distributed between countries—or even within them. Some communities can build and deploy advanced tools; others lack the infrastructure to use them at all.

Buntain expects costs to come down and points to smaller, more efficient models as part of the solution. But access isn’t just a technical problem—it’s a question of investment, capacity, and who gets to shape the tools in the first place.

What comes next depends less on model performance and more on whether these systems are trustworthy—and who can actually use them.

Story by Laurie Robinson, College of Information
