The generation of novel views is a crucial processing step in 3D content generation, since it allows the amount of depth impression on (auto-)stereoscopic devices to be controlled and enables free-viewpoint video viewing. A critical problem in novel view generation is the occurrence of disocclusions caused by a change in the viewing direction: areas in the novel views may become visible that were either covered by foreground objects or located outside the image borders in the original views. In this paper, we propose a depth-guided inpainting approach which relies on efficient patch matching to complete disocclusions along foreground objects and close to the image borders. Our method adapts its patch sizes depending on the disocclusion sizes and incorporates the depth information by focusing on the background scene content for patch selection. A subjective evaluation based on a user study demonstrates the effectiveness of the proposed approach in terms of quality of the 3D viewing experience.
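The abstract describes depth-guided, exemplar-based inpainting in which candidate source patches are restricted to background scene content. The following is a minimal grayscale sketch of that idea only, not the authors' implementation: the greedy onion-peel fill order, the fixed patch size, and the background threshold `bg_thresh` (assuming larger depth values mean farther from the camera) are all illustrative assumptions.

```python
import numpy as np

def depth_guided_fill(image, depth, mask, patch=3, bg_thresh=None):
    """Fill the disoccluded region `mask` (True = hole) in a grayscale image.

    Illustrative simplification of depth-guided exemplar inpainting:
    candidate source patches are only taken from background pixels, i.e.
    pixels whose depth is at least `bg_thresh` (assumed: larger depth =
    farther away), so foreground content is not copied into the hole.
    """
    img = image.astype(np.float64).copy()
    hole = mask.copy()
    h, w = hole.shape
    r = patch // 2
    if bg_thresh is None:
        # Assumption: the scene median separates fore- and background.
        bg_thresh = np.median(depth[~mask])

    # Source patch centers: background depth, hole-free neighborhood,
    # and far enough from the image border to extract a full patch.
    srcs = [(y, x)
            for y in range(r, h - r)
            for x in range(r, w - r)
            if depth[y, x] >= bg_thresh
            and not hole[y - r:y + r + 1, x - r:x + r + 1].any()]

    while hole.any():
        progressed = False
        for y, x in zip(*np.nonzero(hole)):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            known = ~hole[y0:y1, x0:x1]
            if not known.any():          # interior pixel, fill later
                continue
            tgt = img[y0:y1, x0:x1]
            best, best_cost = None, np.inf
            for sy, sx in srcs:
                # Source window aligned with the target window.
                cand = img[sy - (y - y0):sy + (y1 - y),
                           sx - (x - x0):sx + (x1 - x)]
                cost = ((cand - tgt)[known] ** 2).sum()
                if cost < best_cost:
                    best_cost, best = cost, (sy, sx)
            if best is not None:
                img[y, x] = img[best]    # copy the matched center pixel
                hole[y, x] = False
                progressed = True
        if not progressed:               # no background sources found
            break
    return img
```

The brute-force search over `srcs` stands in for the efficient patch matching mentioned in the abstract; a practical implementation would replace it with an approximate nearest-neighbor search and adapt `patch` to the disocclusion size.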
T. Rittler, M. Nezveda, F. Seitner, M. Gelautz: "Depth-guided Disocclusion Inpainting for Novel View Synthesis"; Talk: OAGM & ARW Joint Workshop 2017, Wien; 05-10-2017 - 05-12-2017; in: "Proceedings of the OAGM & ARW Joint Workshop Vision, Automation and Robotics", Verlag der Technischen Universität Graz, (2017), ISBN: 978-3-85125-524-9; 160 - 164.