About this Project
Although content-based image and video analysis has been used successfully to solve a range of real-world tasks in recent years, approaches that employ such techniques for video retrieval are still significantly outperformed by purely text-based approaches operating on manually generated annotations. In this dissertation project, we aim to improve the state of the art in video retrieval within specific video domains. To achieve this, we will work on a novel video annotation approach in which automatically detected objects-of-interest can be annotated by a user quickly and comfortably. A prototype will be developed to demonstrate the resulting video retrieval facility.
From a technical point of view, this object-of-interest detection will be realized using local features as a starting point. Such features have already been studied extensively, and we expect them to be suitable for automatically finding correspondences between video frames; such correspondences can then be used to detect objects that appear repeatedly in a video. To improve the results of this baseline system, our research activities will focus on the following points:
1) Combine local features into semantically higher-level features representing single objects
2) Use domain-specific knowledge, with emphasis on the relations between objects
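As a rough illustration of the baseline correspondence step described above, local feature descriptors extracted from two frames can be matched by nearest-neighbour search with a ratio test. The sketch below uses plain NumPy with randomly generated descriptors standing in for real local features; the function name and the descriptor dimensionality are illustrative assumptions, not part of the project's actual implementation:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping only matches that pass a Lowe-style ratio test
    (best distance clearly smaller than second-best distance)."""
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor d to every descriptor in desc_b
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy example: "frame B" contains slightly perturbed copies of two of
# "frame A"'s descriptors plus unrelated ones, so only two correspondences
# should survive the ratio test.
rng = np.random.default_rng(0)
frame_a = rng.normal(size=(5, 32))
frame_b = np.vstack([frame_a[1] + 0.01,
                     frame_a[3] + 0.01,
                     rng.normal(size=(3, 32))])
print(match_descriptors(frame_a, frame_b))
```

Repeating this matching across many frame pairs and clustering the resulting correspondences is one plausible way such a baseline could identify repeatedly appearing objects; in practice a robust system would also verify matches geometrically.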
In this context, we will investigate different techniques to model, collect, and use the visual appearance of objects within videos, as well as the relations between objects in specific video domains, for object-of-interest detection. This basic research fits the application of video annotation and retrieval well, since we further plan to integrate the generation of such domain-specific knowledge (itself an open research issue) into the semi-automatic video annotation process.
Joanneum Research Forschungsgesellschaft m. b. H. (Graz)
Funding provided by
Österreichische Forschungsförderungsgesellschaft mbH (FFG)