Generative artificial intelligence systems such as ChatGPT and DALL·E are often perceived as intelligent, but their outputs are the result of pattern recognition rather than true understanding, according to David Broniatowski, deputy director of the Institute for Trustworthy AI in Law & Society (TRAILS).
Broniatowski, a professor of engineering management and systems engineering at George Washington University, said generative AI operates by identifying patterns in massive datasets and using those patterns to create new content. Unlike traditional systems that retrieve existing information, these models generate responses based on statistical relationships learned during training.
To learn these patterns, AI systems are fed vast amounts of data—often including large portions of publicly available internet text. Advanced computing systems analyze this data to detect patterns in language and structure. When a user inputs a prompt, the model produces a response that aligns with those patterns.
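The pattern-learning process described above can be illustrated in miniature. The sketch below is a toy bigram model, not how production systems like ChatGPT work (those use large neural networks trained on billions of examples), and the tiny corpus is invented for illustration. But the core idea is the same: tally statistical relationships between words during "training," then generate new text by repeatedly sampling a plausible next word.

```python
import random
from collections import defaultdict

# Toy "training data" -- real models ingest a large fraction of the
# public internet; this corpus is a made-up illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": record which words tend to follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Generate text by repeatedly sampling a statistically
    plausible next word -- pattern-following, not understanding."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the"))
```

The output reads like grammatical English because every adjacent word pair occurred in the training text, which is precisely Broniatowski's point: fluent-sounding output requires only learned patterns, not cognition.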
Because the outputs are typically coherent and well-structured, users may assume the system has some form of intelligence or reasoning. Broniatowski cautions that this is a misconception. “All it’s doing is generating text following patterns that make it sound like it’s intelligent,” he said, noting that these systems do not possess cognition in the way humans do.
The reliance on large, unfiltered datasets also introduces risks. Training data can include misleading or harmful content, meaning AI systems may generate responses that sound plausible but are inaccurate. This can contribute to the spread of misinformation, particularly when users place undue trust in the system’s outputs.
The scale and complexity of these models further complicate oversight. With enormous datasets and computational processes involved, even developers may not fully understand how specific outputs are produced.
Broniatowski said these challenges highlight the need for safeguards, including clearer risk assessment and accountability measures, as generative AI continues to be integrated into everyday tools and services.
TRAILS is a collaboration of four universities—the University of Maryland, George Washington University, Morgan State University and Cornell University—with UMD serving as the lead institution. TRAILS receives administrative and technical support from the University of Maryland Institute for Advanced Computer Studies (UMIACS).