Interactive Media Systems, TU Wien

Vision-based Autonomous Feeding Robot

By Matthias Schörghuber, Marco Wallner, Roland Jung, Martin Humenberger, and Margrit Gelautz

Abstract

This paper tackles the problem of vision-based indoor navigation for robotic platforms. In contrast to methods that rely on adaptations of the infrastructure (e.g., magnets or rails), vision-based methods use natural landmarks for localization. This, however, poses the challenge of robustly establishing correspondences between query images and the natural environment, which can then be used for pose estimation. We propose a monocular and stereo VSLAM algorithm that, first, generates a map of the target environment and, second, uses this map to robustly localize a robot. Our hybrid VSLAM approach utilizes map points from the previously generated map to (i) increase the robustness of its local mapping in challenging situations such as rapid movements, dominant rotations, motion blur, or inappropriate exposure times, and to (ii) continuously assess the quality of the local map. We evaluated our approach in a real-world environment as well as on public benchmark datasets. The results show that our hybrid approach improves performance compared to VSLAM without an offline map.
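To make the hybrid idea more concrete, the Python sketch below shows one way matches against the live local map and points from a previously generated offline map might be fused, together with a simple local-map quality score. This is a minimal, hypothetical illustration only; the names (`MapPoint`, `Frame`, `hybrid_track`) and the quality metric are assumptions introduced here and are not taken from the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class MapPoint:
    """A 3-D landmark, either from the live local map or the offline map."""
    pid: int
    position: Tuple[float, float, float]


@dataclass
class Frame:
    """Hypothetical per-frame container holding 2D-3D correspondences."""
    local_matches: List[MapPoint] = field(default_factory=list)
    offline_matches: List[MapPoint] = field(default_factory=list)


def hybrid_track(frame: Frame) -> Tuple[List[MapPoint], float]:
    """Fuse correspondences from both maps and score the local map.

    Offline-map matches keep tracking anchored when the local map degrades
    (rapid motion, blur, bad exposure). The returned score is a placeholder
    quality measure: the share of matches explained by the local map.
    """
    matches = frame.local_matches + frame.offline_matches
    if not matches:
        return [], 0.0
    local_share = len(frame.local_matches) / len(matches)
    return matches, local_share


# Toy usage: a frame with 6 local-map and 4 offline-map correspondences.
frame = Frame(
    local_matches=[MapPoint(i, (0.0, 0.0, float(i))) for i in range(6)],
    offline_matches=[MapPoint(100 + i, (1.0, 0.0, float(i))) for i in range(4)],
)
matches, quality = hybrid_track(frame)
print(len(matches), round(quality, 2))  # -> 10 0.6
```

In a full system, the robot pose would then be estimated from the fused correspondences (e.g., via PnP with local bundle adjustment), and a persistently low local-map score could be used to flag degraded mapping conditions.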

Reference

M. Schörghuber, M. Wallner, R. Jung, M. Humenberger, M. Gelautz: "Vision-based Autonomous Feeding Robot"; Talk: OAGM Workshop 2018 Medical Image Analysis, Hall; 15-16 May 2018; in: "Proceedings of the OAGM Workshop 2018 Medical Image Analysis", Verlag der Technischen Universität Graz, 2018, pp. 111-115.
