News:

 

  1. Our group will be involved in the 2011 Telluride Neuromorphic Cognition Engineering Workshop.

    Within the workshop, together with Andreas Andreou from JHU, we are organizing a working group entitled

    “A Cognitive Robot Detecting Objects using Sound, Language, and Vision”.

    The project is about combining natural language processing with computer vision and sound processing. The general idea is to use natural language processing tools to introduce higher-level knowledge for localizing and recognizing objects, as well as the actions of humans using those objects. We will work with a simple wheeled robot platform equipped with cameras on a pan-tilt unit, a laser range sensor, a Kinect camera, and microphones. The work will be structured around the following two tasks: 1. The robot will look at a table scene and interpret the objects and the activity of a human. 2. The robot will be instructed to find an object in a room-like setting.

  2. Our students Ching Lik Teo and Yezhou Yang won the prestigious Qualcomm Innovation Fellowship 2011 with a proposal entitled:

    "Robots Need Language: A computational model for the integration of vision, language and action."

  3. We successfully transferred our segmentation and visual filter software to the humanoid robots at the Italian Institute of Technology under the Project Poeticon, a consortium of European universities together with the University of Maryland. This project is about the development of robots that reason about their visual environment. See this video from the final review of the project, where the robot integrates Vision, Language, Cognition, and Motor Control.

    Click here to view the video, or play it below.