We present a novel method for rendering and compositing video in augmented reality. We focus on computing the physically correct depth of field caused by a lens with a finite-sized aperture. To simulate light transport correctly, we use ray tracing combined with differential rendering in a single pass to composite the final augmented video. The image is fully rendered on the GPU, so the augmented video can be produced at interactive frame rates in high quality. Our method runs on the fly; no video post-processing is needed. In addition, we evaluated the user experience with our rendering system under the hypothesis that a depth of field effect in augmented reality increases the perceived realism of the composited video. Results with 30 users show that 90% perceived videos with depth of field as considerably more realistic.
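The paper does not include implementation details here, but the depth of field effect it describes is conventionally obtained in a ray tracer with a thin-lens camera model: each primary ray starts from a random sample point on a finite lens disk and is aimed at the point where the corresponding pinhole ray crosses the plane of focus. The sketch below illustrates that standard sampling step (all function and parameter names are hypothetical, not taken from the paper):

```python
import math
import random

def thin_lens_ray(pixel_dir, aperture_radius, focal_distance, rng=random):
    """Sample one thin-lens primary ray (camera space, +z is the view axis).

    pixel_dir      -- direction of the pinhole ray through the pixel
    aperture_radius -- lens radius; 0 reduces to a pinhole camera
    focal_distance -- distance to the plane of focus along +z
    """
    # Point where the pinhole ray intersects the plane of focus.
    t = focal_distance / pixel_dir[2]
    focus_point = (pixel_dir[0] * t, pixel_dir[1] * t, pixel_dir[2] * t)

    # Uniformly sample a point on the lens disk (polar coordinates,
    # sqrt on the radius to keep the area density uniform).
    r = aperture_radius * math.sqrt(rng.random())
    theta = 2.0 * math.pi * rng.random()
    lens_point = (r * math.cos(theta), r * math.sin(theta), 0.0)

    # The new ray goes from the lens sample toward the focus point, so
    # geometry on the plane of focus stays sharp and everything else blurs.
    d = tuple(f - l for f, l in zip(focus_point, lens_point))
    norm = math.sqrt(sum(c * c for c in d))
    direction = tuple(c / norm for c in d)
    return lens_point, direction
```

Averaging many such rays per pixel converges to the physically correct circle of confusion; points on the focal plane are hit by every lens sample and therefore remain in focus.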
P. Kán, H. Kaufmann: "Physically-Based Depth of Field in Augmented Reality"; in: "Proceedings of EUROGRAPHICS 2012", Eurographics Association, 2012, 89 - 92.