About this Project
The aim of this project is to develop new technologies for creating high-quality 3D video by combining active and passive 3D sensors. The major goals are to analyze the strengths and weaknesses of active and passive depth sensors; to design a novel, state-of-the-art fusion algorithm that exploits the synergy between the different sensing methods; to represent fuzzy objects, which are very difficult to capture with conventional depth estimation methods; and to achieve high-quality 3D video by incorporating both spatial and temporal coherence.
To achieve these aims, we will first determine the hardware configuration of the active and passive sensors that yields accurate system calibration and robust depth acquisition. In the setup and calibration phase, we will perform photometric, colorimetric, and geometric calibration of each depth sensor, as well as a geometric calibration of the entire system. Next, when designing new fusion models, we will pursue several directions based on insights gained during the calibration phase. We expect a first version of the framework approximately nine months after the start of the project; as soon as the first results of our benchmark tests are available, we will extend the experiments to include other existing methods. Finally, the experimental phase aims to provide a competitive performance evaluation.
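To illustrate the fusion idea in its simplest form (a minimal sketch, not the project's actual algorithm): one baseline way to combine an active depth map, e.g. from a time-of-flight camera, with a passive one, e.g. from stereo matching, is a per-pixel confidence-weighted average. The function name and the weighting scheme below are assumptions chosen for illustration.

```python
import numpy as np

def fuse_depth(active_depth, passive_depth, active_conf, passive_conf):
    """Confidence-weighted fusion of two depth maps (illustrative baseline).

    All inputs are arrays of the same shape; confidences are >= 0.
    Pixels where both confidences are zero are set to 0 (no estimate).
    """
    total = active_conf + passive_conf
    weighted = active_conf * active_depth + passive_conf * passive_depth
    # Guard the division where neither sensor provides a confident value.
    return np.where(total > 0, weighted / np.maximum(total, 1e-9), 0.0)
```

For example, a pixel measured as 2.0 m by the active sensor and 4.0 m by the passive one, with equal confidence, fuses to 3.0 m; real fusion methods additionally exploit spatial and temporal coherence rather than treating each pixel independently.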
We expect that the proposed method, which combines both active and passive depth sensing, will perform better than either sensing method alone. If successfully developed, this approach will improve the accuracy and robustness of depth maps. Furthermore, we believe that accurate estimation of depth information will benefit all image-based depth technologies, since their results depend heavily on the quality of the estimated depth map. Finally, we expect the proposed research to have a significant impact on the broadcasting industry, since the fast and easy generation of 3D video will open up new ways of producing 3D content.
Funding provided by
FWF - Austrian Science Fund
The project has been funded by the Austrian Science Fund (FWF) under project M1383-N23.