Former CLIP Lab Members Explore the Use of AI to Enhance TV Viewing Experience

Aug 15, 2018

Two former researchers in the University of Maryland’s Computational Linguistics and Information Processing (CLIP) Lab are using the power of artificial intelligence to enhance the way people interact with their televisions.

Jimmy Lin, the David R. Cheriton Chair in the David R. Cheriton School of Computer Science at the University of Waterloo, and Jinfeng Rao, who recently graduated with a doctoral degree in computer science at UMD, are collaborating with the Comcast Applied AI Research Lab to expand the voice query understanding capabilities of the Comcast Xfinity X1 entertainment platform.

In their paper, the researchers explain how they improved the AI system behind Comcast’s voice remote to more accurately understand viewers’ requests by drawing on context. They accomplished this with hierarchical recurrent neural networks, a deep learning architecture that models language at multiple levels of granularity, such as the characters within a word and the words within a query.
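The paper’s exact architecture isn’t reproduced here, but the general shape of a hierarchical recurrent network can be sketched with a toy, untrained model: a character-level RNN encodes each word, and a word-level RNN then encodes the query from those word vectors. All sizes, weights, and function names below are illustrative assumptions, not the production system.

```python
import numpy as np

rng = np.random.default_rng(0)
HID = 8  # toy hidden size shared by both RNN levels

def rnn_encode(inputs, W_x, W_h):
    """Run a simple tanh RNN over a sequence of vectors; return the final hidden state."""
    h = np.zeros(HID)
    for x in inputs:
        h = np.tanh(W_x @ x + W_h @ h)
    return h

# Lower level: encode each word from its characters (one-hot over a-z).
char_W_x = rng.normal(scale=0.1, size=(HID, 26))
char_W_h = rng.normal(scale=0.1, size=(HID, HID))

def encode_word(word):
    chars = [np.eye(26)[ord(c) - ord('a')] for c in word if c.isalpha()]
    return rnn_encode(chars, char_W_x, char_W_h)

# Upper level: encode the whole query from its word vectors.
word_W_x = rng.normal(scale=0.1, size=(HID, HID))
word_W_h = rng.normal(scale=0.1, size=(HID, HID))

def encode_query(query):
    word_vecs = [encode_word(w) for w in query.lower().split()]
    return rnn_encode(word_vecs, word_W_x, word_W_h)

vec = encode_query("chicago fire")
print(vec.shape)  # (8,)
```

In a trained system, the resulting query vector would feed a classifier that predicts the viewer’s intent; here the weights are random, so only the structure of the two-level encoding is meaningful.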

The team will present their work at the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. This premier scientific conference in data mining will be held from August 19–23 in London.

Comcast’s Xfinity X1 platform comes with a voice remote that accepts spoken queries. Your wish is its command—tell your TV to change channels, ask it about free kids’ movies, or even check the weather forecast. However, the platform previously relied on an older style of AI based on pattern matching, which didn’t always correctly interpret user intent.

“Say the viewer asks for ‘Chicago Fire,’ which refers to both a drama series and a soccer team – how does the system determine what you want to watch?” says Rao. “What’s special about this approach is that we take advantage of context—such as previously watched shows and favorite channels—to personalize results, significantly increasing accuracy.”

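The disambiguation Rao describes can be pictured as a reranking step: candidate interpretations that match the spoken words equally well are separated by contextual signals such as watch history and favorite channels. The following is a minimal sketch under assumed data structures and weights; none of the field names or numbers come from Comcast’s system.

```python
# Toy context-aware rerank: combine a text-match score with viewer context.
# Weights and the candidate/context structure are illustrative assumptions.

def rerank(candidates, context, w_text=1.0, w_history=0.5, w_channel=0.3):
    """Score each candidate by text match plus context signals; best first."""
    def score(c):
        s = w_text * c["text_match"]
        if c["title"] in context["watched"]:          # previously watched shows
            s += w_history
        if c["channel"] in context["favorite_channels"]:
            s += w_channel
        return s
    return sorted(candidates, key=score, reverse=True)

# Both readings of "Chicago Fire" match the words equally well...
candidates = [
    {"title": "Chicago Fire (soccer)", "channel": "ESPN", "text_match": 0.9},
    {"title": "Chicago Fire (drama)",  "channel": "NBC",  "text_match": 0.9},
]
# ...so the viewer's context breaks the tie.
context = {"watched": {"Chicago Fire (drama)"}, "favorite_channels": {"NBC"}}
print(rerank(candidates, context)[0]["title"])  # Chicago Fire (drama)
```

In the deployed system this signal combination is learned by the neural model rather than hand-weighted, but the effect is the same: identical words, different answers for different viewers.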

The researchers’ new neural network model was deployed in production last January and now answers millions of queries from users per day.

Not content with this success, the team has already begun developing an even richer model, which is also outlined in their paper. By analyzing queries from multiple perspectives, the system can better understand what the viewer is saying. This model is currently being readied for deployment.

“This work is a great example of a successful collaboration between academia and industry that yielded significant real-world impact,” says Lin. “My research group aims to build intelligent agents that can interact with humans in natural ways, and this project provides a great example of how we can deploy AI technologies to improve the user experience.”

About CLIP: The Computational Linguistics and Information Processing (CLIP) Laboratory at the University of Maryland is engaged in designing algorithms and building systems that allow computers to effectively and efficiently perform language-related tasks. CLIP is one of 16 labs and centers in the University of Maryland Institute for Advanced Computer Studies.