
UMD Group Shares Sparsity Expertise at Upcoming Symposium on Microarchitecture

September 26, 2025
MICRO 2025 poster in shades of blue showing buildings in Seoul, South Korea.

Sparsity, the presence of many zero or near-zero values in data, is fast becoming a cornerstone of advanced computing: by exploiting it, systems can skip unnecessary work and reduce processing, storage and energy demands.

Sparsity is especially critical for on-device machine learning applications, where computational resources are often highly constrained. 
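To make the idea concrete in general terms, the minimal Python sketch below (not drawn from any of the papers discussed here; the matrix size and density are hypothetical) uses SciPy's compressed sparse row format to show how storing and operating on only the nonzero entries shrinks both memory footprint and arithmetic.

```python
import numpy as np
from scipy.sparse import random as sparse_random

# Hypothetical example: a 10,000 x 10,000 matrix in which only 1% of
# entries are nonzero, multiplied by a dense vector.
rng = np.random.default_rng(0)
A_sparse = sparse_random(10_000, 10_000, density=0.01, format="csr", random_state=rng)
x = rng.standard_normal(10_000)

# Dense storage would hold every entry, zeros included;
# CSR stores only the nonzero values plus their index structure.
dense_bytes = 10_000 * 10_000 * 8
sparse_bytes = A_sparse.data.nbytes + A_sparse.indices.nbytes + A_sparse.indptr.nbytes
print(f"dense: {dense_bytes / 1e6:.0f} MB, CSR: {sparse_bytes / 1e6:.0f} MB")

# The sparse matrix-vector product touches only nonzero entries,
# so the work scales with the number of nonzeros rather than n^2.
y = A_sparse @ x
```

On resource-constrained devices, that difference between "all entries" and "only the nonzeros" is often what makes a workload feasible at all.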

University of Maryland researchers are set to focus their attention on this topic, and more, at an upcoming symposium on microarchitecture.

Bahar Asgari, an assistant professor of computer science, and her students are presenting four papers on sparsity at this year’s IEEE/ACM International Symposium on Microarchitecture (MICRO). The annual event, considered a premier forum for breakthroughs in computer architecture, takes place from October 18–22 in Seoul, South Korea.

Asgari, who has an appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS), is also co-leading a workshop on sparsity, underscoring her role as a leading voice on this topic.

Bahar Asgari (right) works with Ph.D. student Helya Hosseini in the Computer Architecture and Systems Lab. The two, along with other lab members, are presenting their work at the upcoming MICRO 2025 symposium.

She says she is grateful to the MICRO organizing committee for highlighting work by her group and others that is centered on sparsity, noting that there are few professional workshops within the microarchitecture community dedicated to the subject.

“MICRO provides the perfect stage to bring together world-class researchers, spark new collaborations, and catalyze a community-wide effort to advance sparsity in computing,” Asgari says.

Much of the work behind the UMD papers took place in Asgari’s Computer Architecture and Systems Lab (CASL), reflecting the group’s expertise in designing systems that handle massive workloads more efficiently.

The lab’s four featured papers highlighting innovative strategies for harnessing sparsity are:

• “Boötes: Boosting the Efficiency of Sparse Accelerators Using Spectral Clustering,” by Sanjali Yadav, lead author and a second-year computer science student, and Asgari, uses a scalable clustering technique to reorder matrix rows in sparse computations, cutting memory traffic and offering up to 11.6 times speedup (a general sketch of this row-reordering idea appears after this list).

• “Chasoň: Supporting Cross-HBM Channel Data Migration to Enable Efficient Sparse Algebraic Acceleration,” co-authored by Ubaid Bakhtiar, lead author and a fourth-year electrical and computer engineering student, Amirmahdi Namjoo, a second-year computer science student, and Asgari, introduces a scheduling technique for sparse accelerators that improves resource use and achieves up to 14.6 times better energy efficiency than prior CPU and GPU solutions. 

• “Misam: Machine Learning–Assisted Dataflow Selection in Accelerators for Sparse Matrix Multiplication,” co-authored by Yadav, Namjoo, and Asgari, leverages machine learning to dynamically select optimal computation strategies, achieving up to 10.7 times speedup with minimal reconfiguration overhead.

• “Coruscant: Co-Designing GPU Kernel and Sparse Tensor Core to Advocate Unstructured Sparsity in Efficient LLM Inference,” co-authored by Donghyeon Joo and Helya Hosseini, both third-year computer science students, along with Asgari and Ramyad Hadidi, a machine learning computer architect at d-Matrix, pairs a specialized GPU kernel with hardware support to process compressed data directly, reducing memory needs and running nearly three times faster than conventional GPU software.
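As a rough illustration of the row-reordering idea behind the first paper above, and not the Boötes implementation itself, the sketch below uses scikit-learn's SpectralClustering to group rows of a hypothetical sparse matrix by shared column structure and then permutes the matrix so that similar rows sit next to each other, which tends to improve memory locality.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.cluster import SpectralClustering

# Hypothetical sparse matrix; in practice this would come from the workload.
rng = np.random.default_rng(1)
A = sparse_random(200, 200, density=0.05, format="csr", random_state=rng)

# Similarity between rows based on shared column structure:
# rows that use many of the same columns get a high score.
pattern = (A != 0).astype(float)
similarity = (pattern @ pattern.T).toarray()

# Cluster rows by that similarity, then build a permutation that places
# rows from the same cluster next to each other.
labels = SpectralClustering(
    n_clusters=8, affinity="precomputed", random_state=0
).fit_predict(similarity)
perm = np.argsort(labels, kind="stable")
A_reordered = A[perm, :]
```

In a real accelerator setting, the clustering step and the resulting permutation would be tuned to the hardware's tiling and memory hierarchy; this sketch only shows the general mechanics.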

For Asgari, one of the most rewarding aspects of the upcoming symposium is seeing her students effectively tackle complex challenges in sparsity and advanced system design. Their work demonstrates both technical skill and the collaborative, idea-sharing culture that drives CASL’s success, she says.

“Ultimately, this moment is about more than the papers themselves,” she adds. “It’s about demonstrating how innovations in sparsity—an essential driver of faster, more efficient computing and advanced AI workloads—can shape the future of systems, and how our students are helping lead that effort.”

—Story by Melissa Brachfeld, UMIACS communications group
