Yiannis "John" Aloimonos is a professor of computer science and director of the Computer Vision Laboratory.
He has written more than 200 research publications on computer vision, especially on active vision. Aloimonos has contributed to the theory of computational vision in various ways, including the discovery of the trilinear constraints (with M. Spetsakis) and the mathematics of the stability of motion analysis as a function of the field of view (with Cornelia Fermüller), which contributed to the development of omnidirectional sensors. He serves on the editorial boards of several journals (such as IEEE PAMI, CVIU, the Visual Computer, and Pattern Recognition); has chaired several international and national conferences (CVPR, ICPR, 3DPVT); and is the co-author of four books, including one textbook on artificial intelligence.
Aloimonos has received awards for his work, including the Marr Prize Honorable Mention Award in 1987, the Presidential Young Investigator Award from President Bush in 1990, and the Bodossaki Prize in AI and Computer Vision in 1994. His research has been supported over the years by NSF, NIH, ONR, DARPA, IBM, Honeywell, Dassault, Westinghouse, Google, Honda, and the European Union. For the past five years, he has been working on cognitive systems under the project POETICON, and more recently under the NSF Cyber-Physical Systems Program.
Aloimonos joined the University of Maryland in 1986 and received his doctorate in computer science from the University of Rochester in 1987. In 1993, he was a visiting professor at the Royal Institute of Technology in Stockholm, Sweden, and in 1994, he served as a visiting professor at the Institute FORTH in Crete, Greece.
2011. A Corpus-Guided Framework for Robotic Visual Perception. Workshops at the Twenty-Fifth AAAI Conference on Artificial Intelligence.
2011. Active scene recognition with vision and language. 2011 IEEE International Conference on Computer Vision (ICCV). :810-817.
2011. Visual Scene Interpretation as a Dialogue between Vision and Language. Workshops at the Twenty-Fifth AAAI Conference on Artificial Intelligence.
2010. An Experimental Study of Color-Based Segmentation Algorithms Based on the Mean-Shift Concept. Computer Vision – ECCV 2010. 6312:506-519.
2010. Learning shift-invariant sparse representation of actions. 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). :2630-2637.
2009. Measuring 1st Order Stretch with a Single Filter. Relation. 10(1.132):691-691.
2009. Sensory grammars for sensor networks. Journal of Ambient Intelligence and Smart Environments. 1(1):15-21.
2009. Active segmentation for robotics. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009. IROS 2009. :3133-3139.
2009. Image Transformations and Blurring. IEEE Transactions on Pattern Analysis and Machine Intelligence. 31(5):811-823.
2009. Real-time shape retrieval for robotics using skip Tri-Grams. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009. IROS 2009. :4731-4738.
2008. Measuring 1st order stretch with a single filter. IEEE International Conference on Acoustics, Speech and Signal Processing, 2008. ICASSP 2008. :909-912.
2008. Who killed the directed model? Computer Vision and Pattern Recognition, IEEE Computer Society Conference on. :1-8.
2007. A Language for Human Action. Computer. 40(5):42-51.
2007. A probabilistic framework for correspondence and egomotion. Dynamical Vision. :232-242.
2007. A probabilistic notion of camera geometry: Calibrated vs. uncalibrated. PHOTOGRAMMETRIE FERNERKUNDUNG GEOINFORMATION. 2007(1):25-25.
2007. Multiple View Image Reconstruction: A Harmonic Approach. IEEE Conference on Computer Vision and Pattern Recognition, 2007. CVPR '07. :1-8.
2007. A roadmap to the integration of early visual modules. International Journal of Computer Vision. 72(1):9-25.
2007. View-invariant modeling and recognition of human actions using grammars. Dynamical Vision. :115-126.
2006. A probabilistic notion of correspondence and the epipolar constraint. Proceedings of the Third International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT'06). :41-48.
2006. Human activity language: Grounding concepts with a linguistic framework. Semantic Multimedia. :86-100.
2006. Towards a sensorimotor WordNet SM: Closing the semantic gap. Proc. of the International WordNet Conference (GWC).
2006. A sensory grammar for inferring behaviors in sensor networks. Proceedings of the 5th international conference on Information processing in sensor networks. :251-259.
2006. Understanding visuo‐motor primitives for motion synthesis and analysis. Computer Animation and Virtual Worlds. 17(3‐4):207-217.
2006. A Sensory-Motor Language for Human Activity Understanding. 2006 6th IEEE-RAS International Conference on Humanoid Robots. :69-75.
2006. Deformation and viewpoint invariant color histograms. British Machine Vision Conference. 2:509-518.
2006. Integration of visual and inertial information for egomotion: a stochastic approach. Proceedings 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 2006. :2053-2059.
2005. Discovering a language for human activity. Proceedings of the AAAI 2005 Fall Symposium on Anticipatory Cognitive Embodied Systems, Washington, DC.
2005. Robust Contrast Invariant Stereo Correspondence. Proceedings of the 2005 IEEE International Conference on Robotics and Automation, 2005. ICRA 2005. :819-824.
2005. Chromatic Induction and Perspective Distortion. Journal of Vision. 5(8):1026-1026.
2005. Motion segmentation using occlusions. IEEE Transactions on Pattern Analysis and Machine Intelligence. 27(6):988-992.
2005. Detecting Independent 3D Movement. Handbook of Geometric Computing. :383-401.
2005. Shape and the stereo correspondence problem. International Journal of Computer Vision. 65(3):147-162.
2004. The influence of shape on image correspondence. 2nd International Symposium on 3D Data Processing, Visualization and Transmission, 2004. 3DPVT 2004. Proceedings. :945-952.
2004. Compound eye sensor for 3D ego motion estimation. 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2004. (IROS 2004). Proceedings. 4:3712-3717.
2004. A hierarchy of cameras for 3D photography. Computer Vision and Image Understanding. 96(3):274-293.
2004. Structure from motion of parallel lines. Computer Vision-ECCV 2004. :229-240.
2004. The Argus eye, a new tool for robotics. IEEE Robotics and Automation Magazine. 11(4):31-38.
2004. Stereo Correspondence with Slanted Surfaces: Critical Implications of Horizontal Slant. Computer Vision and Pattern Recognition, IEEE Computer Society Conference on. 1:568-573.
2003. Computational video. The Visual Computer. 19(6):355-359.
2003. Eye design in the plenoptic space of light rays. Ninth IEEE International Conference on Computer Vision, 2003. Proceedings. 2:1160-1167.
2003. New eyes for robotics. 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2003. (IROS 2003). Proceedings. 1:1018-1023.
2003. Polydioptric camera design and 3D motion estimation. 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003. Proceedings. 2:II-294-301.
2002. Bias in visual motion processes: A theory predicting illusions. Statistical Methods in Video Processing.(in conjunction with European Conference on Computer Vision).
2002. Visual space-time geometry - A tool for perception and the imagination. Proceedings of the IEEE. 90(7):1113-1135.
2002. Eyes from eyes: new cameras for structure from motion. Third Workshop on Omnidirectional Vision, 2002. Proceedings. :19-26.
2002. Polydioptric Cameras: New Eyes for Structure from Motion. Pattern Recognition. 2449:618-625.
2002. Spatio-temporal stereo using multi-resolution subdivision surfaces. International Journal of Computer Vision. 47(1):181-193.
2001. Statistics Explains Geometrical Optical Illusions. Foundations of Image Understanding. 628:409-445.
2001. Towards the ultimate motion capture technology. Deformable Avatars: IFIP TC5/WG5.10 DEFORM'2000 Workshop, November 29-30, 2000, Geneva, Switzerland and AVATARS'2000 Workshop, November 30-December 1, 2000, Lausanne, Switzerland. 68:143-143.
2001. A spherical eye from multiple cameras (makes better models of the world). Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2001. CVPR 2001. 1:I-576-I-583.
2001. Eyes from Eyes. 3D Structure from Images — SMILE 2000. 2018:204-217.
2001. Geometry of Eye Design: Biology and Technology. Multi-Image Analysis. 2032:22-38.
2001. The Statistics of Optical Flow. Computer Vision and Image Understanding. 82(1):1-32.
2001. Animated heads: From 3D motion fields to action descriptions. Proceedings of the IFIP TC5/WG5.10. :1-11.
2000. Multi-camera networks: eyes from eyes. IEEE Workshop on Omnidirectional Vision, 2000. Proceedings. :11-18.
2000. New eyes for building models from video. Computational Geometry. 15(1–3):3-23.
2000. A New Framework for Multi-camera Structure from Motion. Mustererkennung 2000, 22. DAGM-Symposium. :75-82.
2000. New Eyes for Shape and Motion Estimation. Biologically Motivated Computer Vision. 1811:23-47.
2000. Observability of 3D Motion. International Journal of Computer Vision. 37(1):43-63.
2000. The statistics of optical flow: implications for the process of correspondence in vision. 15th International Conference on Pattern Recognition, 2000. Proceedings. 1:119-126.
2000. Detecting independent motion: The statistics of temporal continuity. IEEE Transactions on Pattern Analysis and Machine Intelligence. 22(8):768-773.
2000. Structure from motion: Beyond the epipolar constraint. International Journal of Computer Vision. 37(3):231-258.
2000. The Ouchi illusion as an artifact of biased flow estimation. Vision Research. 40(1):77-95.
2000. Analyzing Action Representations. Algebraic Frames for the Perception-Action Cycle. 1888:1-21.
1999. Motion Segmentation: A Synergistic Approach. Computer Vision and Pattern Recognition, IEEE Computer Society Conference on. 2:2226-2226.
1999. Active Perception. Wiley Encyclopedia of Electrical and Electronics Engineering.