This paper presents an adaptive cooperative approach to 3D reconstruction tailored for a bio-inspired depth camera: the stereo dynamic vision sensor (DVS). The DVS consists of self-spiking pixels that asynchronously generate events upon relative changes in light intensity. These sensors simultaneously offer high temporal resolution (better than 10μs) and wide dynamic range (>120dB) with a sparse data representation, a combination that is not possible with frame-based cameras. To exploit the potential of the DVS and benefit from its features, depth calculation should take into account the spatiotemporal and asynchronous nature of the data provided by the sensor. This work develops such an asynchronous, event-driven stereo algorithm. We propose a modification of the cooperative network in which the history of recent activity in the scene is stored to serve as spatiotemporal context for the disparity calculation of each incoming event. The network evolves continuously in time as events are generated. In our work, not only is the spatiotemporal character of the data preserved, but the matching itself is also performed asynchronously. The experimental results show that the proposed approach is well suited to DVS data and can be successfully used for our efficient passive depth camera.
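To illustrate the idea of an event-driven cooperative network with decaying spatiotemporal context, here is a minimal sketch. It is not the authors' implementation: the class and method names (`CooperativeStereo`, `on_left_event`), the exponential decay with time constant `tau`, the rectified-geometry assumption (matches searched along the same row), and the 3×3 neighbourhood excitation are all illustrative assumptions.

```python
import numpy as np
from collections import deque

class CooperativeStereo:
    """Hypothetical sketch of per-event cooperative disparity estimation.

    Keeps a 3D support volume (y, x, disparity) that decays over time,
    so only recent activity contributes spatiotemporal context.
    """

    def __init__(self, width, height, max_disp=16, tau=10e-3):
        self.max_disp = max_disp
        self.tau = tau                                       # decay constant (s), assumed
        self.support = np.zeros((height, width, max_disp))   # cooperative network state
        self.last_t = 0.0
        self.right_recent = deque()                          # recent right-camera events

    def _decay(self, t):
        # Exponentially fade stored support so old activity loses influence.
        self.support *= np.exp(-(t - self.last_t) / self.tau)
        self.last_t = t

    def on_right_event(self, x, y, t):
        # Buffer right-camera events; drop those outside the temporal window.
        self.right_recent.append((x, y, t))
        while self.right_recent and t - self.right_recent[0][2] > self.tau:
            self.right_recent.popleft()

    def on_left_event(self, x, y, t):
        """Return a disparity estimate for an incoming left-camera event."""
        self._decay(t)
        h, w = self.support.shape[:2]
        for xr, yr, tr in self.right_recent:
            d = x - xr
            # Rectified geometry assumed: candidate matches lie on the same row.
            if yr == y and 0 <= d < self.max_disp:
                # Excitation: temporal coincidence plus support from the
                # 3x3 spatial neighbourhood at the same disparity.
                y0, y1 = max(0, y - 1), min(h, y + 2)
                x0, x1 = max(0, x - 1), min(w, x + 2)
                neighbour = self.support[y0:y1, x0:x1, d].mean()
                self.support[y, x, d] += np.exp(-(t - tr) / self.tau) + neighbour
        return int(np.argmax(self.support[y, x]))
```

For example, feeding a right event at x=5 and then a left event at x=8 on the same row within the time window would yield a disparity estimate of 3. The key design point the paper argues for is visible here: disparity is computed per event, as it arrives, from a continuously decaying activity volume, rather than from batched frames.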
E. Piatkowska, A. Belbachir, M. Gelautz: "Asynchronous Stereo Vision for Event-Driven Dynamic Stereo Sensor Using an Adaptive Cooperative Approach"; Talk: IEEE International Conference on Computer Vision (ICCV) Workshops: 3rd Workshop on Consumer Depth Cameras for Computer Vision (CDC4CV), Sydney; 12-02-2013; in: "IEEE International Conference on Computer Vision (ICCV) Workshops", (2013), 5 pages.