
Interactive Media Systems, TU Wien

Recording Depth with Dual-Camera Smartphone

Practicum

About This Topic

Dual-camera devices have recently been introduced to the smartphone market and enable end users to create stunning photographs with emulated telephoto lenses and bokeh effects. The basis of these effects is stereo matching applied to the two RGB images, which produces a depth image. In computer vision, depth data is useful for a range of algorithms, from 3D reconstruction and visual odometry to object detection and recognition. Hence, we are interested in utilizing the novel sensor setups of the latest smartphone generation for future applications in research.

Additional Information

For Apple iPhone devices in particular, API functionality for reading depth data from the stereo cameras in custom applications has recently been made available. We would like to build a video pipeline that allows recording of RGB and depth data from the dual cameras on Apple iPhone devices.
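
As a rough starting point, the following Swift sketch shows how such a capture pipeline could be set up with AVFoundation's AVCaptureDepthDataOutput (see the AVDepthData link below). It assumes iOS 11 or later and a dual-camera device such as the iPhone 7 Plus; class and queue names are placeholders, and permission handling and error checks are omitted:

import AVFoundation

// Minimal sketch: configure a capture session that delivers depth frames
// from the built-in dual camera via AVCaptureDepthDataOutput.
final class DepthCaptureSetup: NSObject, AVCaptureDepthDataOutputDelegate {

    let session = AVCaptureSession()
    private let depthOutput = AVCaptureDepthDataOutput()
    private let depthQueue = DispatchQueue(label: "depth.output.queue")

    func configure() throws {
        session.beginConfiguration()
        session.sessionPreset = .photo

        // The dual (wide + tele) back camera is the device that can produce depth data.
        guard let dualCamera = AVCaptureDevice.default(.builtInDualCamera,
                                                       for: .video,
                                                       position: .back) else {
            fatalError("No dual camera available on this device")
        }
        let input = try AVCaptureDeviceInput(device: dualCamera)
        session.addInput(input)

        // Depth (disparity) frames are delivered through AVCaptureDepthDataOutput.
        session.addOutput(depthOutput)
        depthOutput.isFilteringEnabled = true          // temporally smoothed depth
        depthOutput.setDelegate(self, callbackQueue: depthQueue)

        session.commitConfiguration()
    }

    // Called for every depth frame the camera produces.
    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        // depthData.depthDataMap is a CVPixelBuffer (disparity/depth);
        // convert, record, or stream it as needed.
    }
}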

Goals:

  • Research the capabilities of the current API
  • Record RGB and depth images/video to files
  • Stream RGB and depth video to a computer for simultaneous processing
  • Implement an example algorithm of your choice to utilize the acquired data (a sketch of reading raw depth values follows below)
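
To give an idea of how the acquired depth frames could be processed further, here is a sketch of converting an AVDepthData frame into 32-bit float values, for example as a first step towards writing depth frames to a file or streaming them to a computer. The function name is a placeholder and error handling is omitted:

import AVFoundation

// Sketch: turn an AVDepthData frame into raw Float32 values.
func depthValues(from depthData: AVDepthData) -> [Float] {
    // The camera usually delivers half-float disparity; convert to
    // 32-bit depth for easier processing.
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let buffer = converted.depthDataMap

    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

    let width = CVPixelBufferGetWidth(buffer)
    let height = CVPixelBufferGetHeight(buffer)
    let rowBytes = CVPixelBufferGetBytesPerRow(buffer)
    let base = CVPixelBufferGetBaseAddress(buffer)!

    var values = [Float]()
    values.reserveCapacity(width * height)
    for y in 0..<height {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
        for x in 0..<width {
            values.append(row[x])   // depth in meters after conversion to DepthFloat32
        }
    }
    return values
}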

Requirements:

  • Experience in Swift and/or Objective-C
  • Bring your own devices. We can provide you with an iPhone 7 Plus, but you would need your own Mac for development.

Feel free to bring in your own ideas. We look forward to discussing the details with you!

Further Links:

https://developer.apple.com/documentation/avfoundation/avdepthdata
https://developer.apple.com/library/content/samplecode/AVCam/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010112-Intro-DontLinkElementID_2
https://www.youtube.com/watch?v=kbsDyTf7k2I
https://www.youtube.com/watch?v=_JxlOn7HpXM