Ching Lik Teo
I am a Ph.D. student in the Department of Computer Science at the University of Maryland, College Park. I am also affiliated with the Computer Vision Lab at UMIACS. My advisors are Yiannis Aloimonos and Cornelia Fermüller.
I am interested in Computer Vision, Computational Linguistics, and how these two emerging fields combine to solve difficult problems that model human understanding, attention, and recognition. I am also interested in robotic perception, especially in how we can model Vision and Language on mobile active agents.
I am a recipient of the Qualcomm Innovation Fellowship 2011, the UMD CS Department Fellowship award, and the DSO National Laboratories Post-graduate Scholarship.
I am one of the main organizers for the UMD Computer Vision Student Seminars (CVSS) series.
I am working on developing theories and algorithms that integrate Language with Vision beyond the semantic (label) level. This extends previous work on the "Cognitive Dialog" framework by considering the integration of vision and language across the entire spectrum of visual processes: from high-level (semantic) to low-level (signal). We are currently developing efficient methods for influencing low-level to mid-level visual processes (edge detection, contour grouping, and segmentation) using high-level representations derived from language. More information can be found here.
Previous Research
Software and data
Several datasets that we have used in our work are listed and made available here.
We are actively using the Robot Operating System (ROS) to develop code for the Pioneer 3-DX (previously Videre Erratic) robot platform. See our ROS hints to get acquainted; they include useful READMEs, links, and sample ROS code.
Ching L. Teo, Austin Myers, Cornelia Fermüller, Yiannis Aloimonos. Embedding High-Level Information into Low Level Vision: Efficient Object Search in Clutter. IEEE International Conference on Robotics and Automation, ICRA. 2013.
Douglas Summers-Stay, Ching L. Teo, Yezhou Yang, Cornelia Fermüller, Yiannis Aloimonos. Using a Minimal Action Grammar for Activity Understanding in the Real World. IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS. 2012.
Ching L. Teo, Yezhou Yang, Hal Daumé III, Cornelia Fermüller, Yiannis Aloimonos. Towards a Watson That Sees: Language-Guided Action Recognition for Robots. IEEE International Conference on Robotics and Automation, ICRA. 2012.
Ching L. Teo, Yezhou Yang, Hal Daumé III, Cornelia Fermüller and Yiannis Aloimonos. A Corpus-Guided Framework for Robotic Visual Perception. AAAI Workshop on Language-Action Tools for Cognitive Artificial Agents. 2011.
Robots Need Language, Qualcomm Innovation Fellowship Winners Day. Sep 2012. slides
The Telluride Neuromorphic Workshop 2011: Our Experience, UMD Computer Vision Students Seminar (CVSS). Feb 2012. slides
Integrating Language into Computer Vision, NUS Department of Mathematics Weekly Seminar. Jan 2012. slides
Robots Need Language: A computational model for the integration of vision, language and action, Qualcomm Innovation Fellowship Finalist Presentation. Apr 2011. Slides available upon request
Current Courses and Teaching
Last Updated: Apr 03, 2013