Multi-camera networks: eyes from eyes

Title: Multi-camera networks: eyes from eyes
Publication Type: Conference Paper
Year of Publication: 2000
Authors: Fermüller C, Aloimonos Y, Baker P, Pless R, Neumann J, Stuart B
Conference Name: IEEE Workshop on Omnidirectional Vision, 2000. Proceedings
Date Published: 2000
Publisher: IEEE
ISBN Number: 0-7695-0704-2
Keywords: biosensors, cameras, computer vision, eyes, image sequences, intelligent systems, layout, machine vision, robot vision systems, robustness, spatiotemporal phenomena, video cameras, virtual reality
Abstract

Autonomous or semi-autonomous intelligent systems, in order to function appropriately, need to create models of their environment, i.e., models of space-time. These are descriptions of objects and scenes, and descriptions of changes of space over time, that is, events and actions. Despite the large amount of research on this problem, as a community we are still far from developing robust descriptions of a system's spatiotemporal environment using video input (image sequences). Undoubtedly, some progress has been made in understanding how to estimate the structure of visual space, but it has not led to solutions for specific applications. There is, however, an alternative approach which is in line with today's “zeitgeist.” The vision of artificial systems can be enhanced by providing them with new eyes. If conventional video cameras are put together in various configurations, new sensors can be constructed that have much more power, and the way they “see” the world makes it much easier to solve problems of vision. This research is motivated by examining the wide variety of eye designs in the biological world and obtaining inspiration for an ensemble of computational studies that relate how a system sees to what that system does (i.e., relating perception to action). This, coupled with the geometry of multiple views, which has flourished in terms of theoretical results in the past few years, points to new ways of constructing powerful imaging devices that suit particular tasks in robotics, visualization, video processing, virtual reality, and various computer vision applications better than conventional cameras. This paper presents a number of new sensors that we built using common video cameras and shows their superiority with regard to developing models of space and motion.

DOI: 10.1109/OMNVIS.2000.853797