CfAR Seminars: "Event-based Learning and Computation for Computer Vision" by Michael Pfeiffer - University of Zurich

Wed Apr 03, 2013 2:00 PM

Location: A.V. Williams Building, Room 3450

Speaker:
Michael Pfeiffer
Institute of Neuroinformatics
University of Zurich and ETH Zurich

Abstract:
The concept of an image or a video frame as a matrix of intensity or color values, all recorded at the same time throughout the visual field, has become so central to computer vision that we take it for granted. However, biological organisms do not see still images like a camera does: already at the lowest sensory level, in the retina, the visual input is heavily pre-processed, spatial and temporal redundancy is removed, and information is sent towards the visual cortex in the form of afferent spike trains, in which the precise timing of spikes plays a crucial role.
In recent years, silicon retina sensors have been developed that support this style of processing: they asynchronously emit events at individual pixels whenever temporal contrast is detected, at very low data rates, with low latency and very high temporal resolution, thereby providing an alternative to conventional frame-based cameras. I will first present this "Dynamic Vision Sensor" developed by Tobi Delbruck at the Institute of Neuroinformatics, and discuss the advantages and potential application areas of such unconventional sensors in high-speed vision, tracking, and classification.

The major challenge of event-based vision is that this new sensory representation of spatio-temporal events also requires a completely new algorithmic framework for visual processing, in particular if event streams are to be processed in real time. Learning plays a particularly important role, because detecting recurring patterns in high-dimensional event streams is a difficult task in which we must learn to cope with various sources of noise and uncertainty. Most standard mechanisms from machine learning and computer vision cannot be applied directly to event data, but in my talk I will present our recent results linking spike-based learning rules in networks of spiking neurons to Bayesian inference and to unsupervised learning techniques such as the EM algorithm. I will also discuss how such approaches can be extended towards learning generative models of spatio-temporal sequences, and how event-based computation provides a novel paradigm for processing in deep neural network architectures for object recognition and sensory fusion.
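To make the event representation concrete, here is a minimal Python sketch, not from the talk itself: the (x, y, timestamp, polarity) event fields and the 128x128 resolution are assumptions based on published descriptions of the Dynamic Vision Sensor. It folds an asynchronous event stream into an exponentially decaying activity map, one simple way for downstream algorithms to consume events without ever assembling a frame.

import numpy as np

# One DVS-style event: pixel coordinates, a microsecond timestamp, and a
# polarity bit giving the sign of the detected temporal-contrast change.
# (Field names and resolution are illustrative assumptions.)
event_dtype = np.dtype([("x", np.uint16), ("y", np.uint16),
                        ("t", np.uint64), ("p", np.int8)])

def decayed_activity(events, tau=10e3, shape=(128, 128)):
    """Fold an event stream into an exponentially decaying activity map.

    Every pixel's value decays with time constant tau (microseconds) and
    is bumped by +/-1 per event.
    """
    activity = np.zeros(shape)
    last_t = None
    for ev in events:
        t = float(ev["t"])
        if last_t is not None:
            activity *= np.exp(-(t - last_t) / tau)   # decay since last event
        activity[ev["y"], ev["x"]] += float(ev["p"])  # integrate new event
        last_t = t
    return activity

The link between spike-based learning and expectation-maximization can be caricatured just as compactly. In the spirit of published results connecting STDP in winner-take-all circuits to EM (the rule form and all constants below are illustrative assumptions, not the speaker's formulation), each input spike vector triggers the sampling of one winning neuron from a softmax over membrane potentials (an E-step), and only the winner's weights receive an STDP-like nudge (an M-step):

import numpy as np

def wta_em_step(w, y, rng, eta=0.05):
    """One online update of a soft winner-take-all layer of spiking neurons.

    w is a (K, N) weight matrix and y a binary length-N array of
    presynaptic activity in a short time window. Under this update the
    weights tend toward log conditional probabilities, which is what
    gives the rule its EM interpretation.
    """
    u = w @ y                                   # membrane potentials
    p = np.exp(u - u.max())
    p /= p.sum()                                # posterior over hidden causes
    k = rng.choice(len(p), p=p)                 # winner spikes: a sampled E-step
    w[k] += eta * (y * np.exp(-w[k]) - 1.0)     # STDP-like M-step for the winner
    return w, k

Run over a long event stream, updates of this kind perform unsupervised clustering of recurring spatio-temporal input patterns, which is the flavor of result the talk connects to Bayesian inference.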