About This Topic
Dual-camera devices have recently entered the smartphone market, enabling end users to create stunning photographs with emulated telephoto lenses and bokeh effects. These effects are based on applying stereo matching to the two RGB images, which produces a depth image. In computer vision, depth data is useful for a range of algorithms, from 3D reconstruction and visual odometry to object detection and recognition. Hence, we are interested in utilizing the novel sensor setups of the latest smartphone generation for future research applications.
For the Apple iPhone in particular, API functionality to read depth data from the stereo cameras in custom applications has recently been opened. We would like to build a video pipeline that allows recording of RGB and depth data from the dual cameras on Apple iPhone devices. The topic comprises the following tasks:
- Research the capabilities of the current API
- Record RGB and depth images/video to files
- Stream RGB and depth video to a computer for simultaneous processing
- Implement an example algorithm of your choice to utilize acquired data
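As a starting point for the recording tasks, depth frames can be accessed through AVFoundation's `AVCaptureDepthDataOutput` (available since iOS 11). The following is a minimal sketch, assuming a dual-camera device such as the iPhone 7 Plus; the class and delegate names follow Apple's API, while the error handling and the logging in the callback are only illustrative:

```swift
import AVFoundation

// Minimal capture setup for RGB + depth streaming, assuming iOS 11+
// and a dual-camera device (e.g. iPhone 7 Plus).
class DepthCaptureController: NSObject, AVCaptureDepthDataOutputDelegate {
    let session = AVCaptureSession()
    let videoOutput = AVCaptureVideoDataOutput()
    let depthOutput = AVCaptureDepthDataOutput()
    private let queue = DispatchQueue(label: "depth.capture.queue")

    func configure() throws {
        session.beginConfiguration()
        session.sessionPreset = .photo

        // The dual camera derives depth via stereo matching between its two lenses.
        guard let device = AVCaptureDevice.default(.builtInDualCamera,
                                                   for: .video,
                                                   position: .back) else {
            fatalError("No dual camera available on this device")
        }
        session.addInput(try AVCaptureDeviceInput(device: device))

        session.addOutput(videoOutput)
        session.addOutput(depthOutput)
        depthOutput.isFilteringEnabled = true  // hole-filled, temporally smoothed depth
        depthOutput.setDelegate(self, callbackQueue: queue)

        session.commitConfiguration()
        session.startRunning()
    }

    // Called for every depth frame. depthData.depthDataMap is a CVPixelBuffer
    // (disparity or depth, depending on the device) that could be written to
    // disk or sent over the network for processing on a computer.
    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        let map = depthData.depthDataMap
        print("Depth frame \(CVPixelBufferGetWidth(map))x\(CVPixelBufferGetHeight(map)) at \(timestamp.seconds)")
    }
}
```

For frame-accurate pairing of the RGB and depth streams, Apple additionally provides `AVCaptureDataOutputSynchronizer`, and recorded frames can be written to a file with `AVAssetWriter`; evaluating these is part of the API research task above.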
Requirements:
- Experience in Swift and/or Objective-C
- Bring your own device: we can provide you with an iPhone 7 Plus, but you would need your own Mac for development.
Feel free to bring in your own ideas. We look forward to discussing the details with you!