Memon Wins Most Influential Paper Award at ASE 2017

Tue Oct 31, 2017

A University of Maryland expert in software applications is being recognized this week for the lasting impact of an academic paper he published 14 years ago.

Atif Memon, a professor of computer science with an appointment in the University of Maryland Institute for Advanced Computer Studies, will receive the Most Influential Paper award at the 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE 2017).

The award recognizes papers deemed most influential to the field of automated software engineering that were published approximately 15 years (give or take a year) before the current ASE conference.

"What Test Oracle Should I Use for Effective GUI Testing?"—authored in 2003 by Memon and former UMD graduate students Ishan Banerjee and Adithya Nagarajan—examines new methods to improve the effectiveness and reduce the cost of software testing.

The Maryland team specifically looked at test oracles, mechanisms that determine whether software has executed correctly for a given test case. They determined that by splitting the oracle into two parts—oracle information that represents the expected output, and an oracle procedure that compares the oracle information with the actual output—they could achieve better results.
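That two-part split can be illustrated with a minimal sketch. The function names and the toy system under test below are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch of a test oracle split into two parts:
# "oracle information" (the expected output) and an "oracle
# procedure" (the comparison against the actual output).

def oracle_procedure(oracle_information, actual_output):
    """Compare the expected output with the actual output."""
    return oracle_information == actual_output

# Hypothetical system under test: a GUI action that
# uppercases the contents of a text field.
def uppercase_field(text):
    return text.upper()

oracle_information = "HELLO"  # expected output for input "hello"
actual_output = uppercase_field("hello")

verdict = oracle_procedure(oracle_information, actual_output)
print("PASS" if verdict else "FAIL")  # prints "PASS"
```

Keeping the comparison logic separate from the expected values is what lets testers vary the cost and complexity of each part independently.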

Their findings have since provided valuable guidelines to software testers. If testers have a small number of short test cases, for example, they can improve their testing process by using complex test oracles. On the other hand, if they have generated test cases using an automated tool, they can use cheaper and simpler test oracles to conserve resources.

The ASE 2017 award committee says the paper by Memon was chosen from an “exceptionally strong field of papers” that were published at ASE conferences between 2001 and 2003.

Memon’s paper has garnered 138 citations, according to Google Scholar.

"Before this paper was published, software testers would look toward test coverage alone to determine the adequacy and effectiveness of their test cases," Memon says. "Our paper led to a major paradigm shift in software testing by giving testers another, more accurate, measure of test effectiveness."