Ching Lik Teo

I am a Ph.D. student in the Department of Computer Science at the University of Maryland, College Park. I am also affiliated with the Computer Vision Lab at UMIACS. My advisors are Yiannis Aloimonos and Cornelia Fermüller.

I am interested in Computer Vision, Computational Linguistics and how these two emerging fields combine to solve difficult problems that model human understanding, attention and recognition. I am also interested in robotic perception, especially in how we can model Vision and Language on mobile active agents.

I am a recipient of the Qualcomm Innovation Fellowship 2011, the UMD CS Department Fellowship award and the DSO National Laboratories Post-graduate Scholarship.

I am one of the main organizers for the UMD Computer Vision Student Seminars (CVSS) series.

Here's my updated CV and my Google Scholar profile.

Latest News

2013

2012

Previous News

Current Research

I am working on developing theories and algorithms that integrate Language with Vision beyond the semantic (label) level. This extends previous work on the "Cognitive Dialog" framework by considering the integration of vision and language across the entire spectrum of visual processes, from high-level (semantic) to low-level (signal). We are currently developing efficient methods for influencing low-level and mid-level visual processes (edge detection, contour grouping and segmentation) using high-level representations derived from language; a toy sketch of the idea is shown below. More information can be found here.
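To make the idea concrete, here is a minimal toy sketch (my own illustration, not the actual method from our papers). It assumes we already have a per-pixel prior for the target object derived from language (for example, from attributes or likely context), and uses it to reweight a generic edge map so that edges in likely object regions are emphasized before grouping or segmentation. The function name and the blending parameter alpha are made up for illustration.

    import numpy as np

    def language_biased_edges(edge_map, object_prior, alpha=0.7):
        # edge_map:     HxW array in [0, 1], output of any generic edge detector.
        # object_prior: HxW array in [0, 1], per-pixel belief (derived from
        #               language) that the target object occupies that pixel.
        # alpha:        how strongly the top-down prior modulates the edges
        #               (a made-up knob for this sketch).
        weight = (1.0 - alpha) + alpha * object_prior
        # Edges inside likely object regions keep their strength; the rest
        # are attenuated before any grouping or segmentation step.
        return np.clip(edge_map * weight, 0.0, 1.0)

    # Toy usage: random "edges" and a Gaussian prior peaked at the image centre.
    edges = np.random.rand(64, 64)
    yy, xx = np.mgrid[0:64, 0:64]
    prior = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 12.0 ** 2))
    biased = language_biased_edges(edges, prior)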

Previous Research

Software and data

Several datasets that we have used in our work are listed and made available here.
  • UMD Complex Activities. 16 sequences of synced Kinect and SR4000 data of hand manipulation actions with a variety of tools and objects, collected from 4 different actors.
  • POETICON video dataset consisting of 6 complex activities involving 2 persons. Fully annotated and hand-segmented.
  • UMD-Telluride Kinect Dataset consisting of RGB-Depth Kinect data of 11 kitchen actions using different tools.
  • UMD Sushi-Making dataset consisting of synced RGB + MOCAP data of 4 actors performing various actions connected with making sushi.

We are actively using the Robot Operating System (ROS) to develop code for the Pioneer 3-DX (previously Videre Erratic) robot platform. See our ROS hints page to get acquainted; it contains a useful README, links and sample ROS code. A small example of the kind of node covered there is sketched below.
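For readers new to ROS, here is a minimal node of the kind the hints page walks through. This is a generic sketch of my own rather than code from our repository, and the command topic name is an assumption that depends on the driver you run (RosAria, for instance, typically exposes /RosAria/cmd_vel).

    #!/usr/bin/env python
    # Minimal rospy node that drives the robot forward at a gentle speed.
    import rospy
    from geometry_msgs.msg import Twist

    def main():
        rospy.init_node('pioneer_forward_demo')
        # Topic name is an assumption: adjust it to whatever your driver uses.
        pub = rospy.Publisher('/RosAria/cmd_vel', Twist)
        rate = rospy.Rate(10)  # publish at 10 Hz
        cmd = Twist()
        cmd.linear.x = 0.1     # forward speed in m/s
        while not rospy.is_shutdown():
            pub.publish(cmd)
            rate.sleep()

    if __name__ == '__main__':
        main()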

Refereed Publications

Ching L. Teo, Austin Myers, Cornelia Fermüller, Yiannis Aloimonos. Embedding High-Level Information into Low Level Vision: Efficient Object Search in Clutter. IEEE International Conference on Robotics and Automation, ICRA. 2013.

  • pdf slides dataset

Yezhou Yang, Ching L. Teo, Cornelia Fermüller, Yiannis Aloimonos. Robots with Language: Multi-Label Visual Recognition Using NLP. IEEE International Conference on Robotics and Automation, ICRA. 2013.

  • pdf slides dataset

Douglas Summers-Stay, Ching L. Teo, Yezhou Yang, Cornelia Fermüller, Yiannis Aloimonos. Using a Minimal Action Grammar for Activity Understanding in the Real World. IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS. 2012.

  • pdf slides dataset

Ching L. Teo, Yezhou Yang, Hal Daumé III, Cornelia Fermüller, Yiannis Aloimonos. Towards a Watson That Sees: Language-Guided Action Recognition for Robots. IEEE International Conference on Robotics and Automation, ICRA. 2012.

  • pdf slides dataset supplementary results

Ching L. Teo, Yezhou Yang, Cornelia Fermüller, Yiannis Aloimonos. Synergistic Methods for using Language in Robotics. Performance Metrics for Intelligent Systems Workshop, PerMIS. 2012.

  • pdf slides dataset

Xiaodong Yu, Cornelia Fermüller, Ching L. Teo, Yezhou Yang, Yiannis Aloimonos. Active Scene Recognition with Vision and Language. International Conference on Computer Vision, ICCV. 2011.

  • pdf poster

Ching L. Teo, Yezhou Yang, Hal Daumé III and Yiannis Aloimonos. Corpus-Guided Sentence Generation of Natural Images. Conference on Empirical Methods in Natural Language Processing, EMNLP. 2011.

  • pdf slides results

Ching L. Teo, Yezhou Yang, Hal Daumé III, Cornelia Fermüller and Yiannis Aloimonos. A Corpus-Guided Framework for Robotic Visual Perception. AAAI Workshop on Language-Action Tools for Cognitive Artificial Agents. 2011.

  • pdf slides dataset

Ching L. Teo, S. Li, L-F. Cheong and J. Sun. 3D Ordinal Constraints in Spatial Configuration for Robust Scene Recognition. 19th International Conference on Pattern Recognition, ICPR. 2008.

  • IEEE Xplore poster

Other Publications

Talks

Robots Need Language, Qualcomm Innovation Fellowship Winners Day. Sep 2012. slides

The Telluride Neuromorphic Workshop 2011: Our Experience, UMD Computer Vision Student Seminars (CVSS). Feb 2012. slides

Integrating Language into Computer Vision, NUS Department of Mathematics Weekly Seminar. Jan 2012. slides

Robots Need Language: A computational model for the integration of vision, language and action, Qualcomm Innovation Fellowship Finalist Presentation. Apr 2011. Slides available upon request

Current Courses and Teaching

Fall 2012

  • BMGT 808E: Stochastic Optimization by Ilya Ryzhov

List of courses taken/taught in previous semesters


Last Updated: Apr 03, 2013