Most existing Virtual Dressing Rooms are based on one of a few approaches, primarily virtual avatars or fiducial markers, and mainly enhance the experience of online shops. Their main drawbacks are an inaccurate capturing process and the lack of a virtual mirror, which limits the presentation to a virtual avatar. With depth cameras such as the Microsoft Kinect, it becomes possible to track the movements of a body, extract body measurements and, furthermore, create a virtual mirror from the corresponding video stream: the video image is merged with a piece of clothing frame by frame. The project is implemented in Unity, a development environment for 3D applications. OpenNI and the NITE middleware provide various fundamental functions and handle the tracking process in combination with the Microsoft Kinect. As for the results, several 3D cloth models were created and textured; the pieces of garment are based on a female 3D model. The clothes are adapted at runtime to the body of the user in front of the Kinect, and cloth physics ensure a realistic representation. Furthermore, trying on clothes in front of different backgrounds and surroundings (e.g. at night) is supported, and particular attention is paid to the interaction aspects of the Virtual Dressing Room.
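The runtime adaptation step described above (fitting a garment to the tracked skeleton) can be sketched as follows. This is a minimal illustration in Python rather than the thesis's Unity/C# implementation; the joint names follow the NITE-style skeleton convention, and the reference measurements of the base female model are hypothetical assumptions, not values from the thesis:

```python
import math

def distance(a, b):
    """Euclidean distance between two 3D joint positions (metres)."""
    return math.sqrt(sum((ax - bx) ** 2 for ax, bx in zip(a, b)))

def garment_scale(joints, model_shoulder_width=0.38, model_torso_length=0.55):
    """Derive per-axis scale factors for a garment mesh from tracked joints.

    `joints` maps joint names (as delivered by a NITE-style skeleton
    tracker) to (x, y, z) positions in metres.  The reference
    measurements of the base 3D model are illustrative defaults.
    """
    shoulder_width = distance(joints["left_shoulder"], joints["right_shoulder"])
    torso_length = distance(joints["neck"], joints["torso"])
    # Scale the garment horizontally and vertically so it matches the
    # user's proportions relative to the base model it was made for.
    return (shoulder_width / model_shoulder_width,
            torso_length / model_torso_length)

# Example: a user slightly wider and shorter than the base model.
joints = {
    "left_shoulder": (-0.21, 1.45, 2.0),
    "right_shoulder": (0.21, 1.45, 2.0),
    "neck": (0.0, 1.50, 2.0),
    "torso": (0.0, 1.00, 2.0),
}
sx, sy = garment_scale(joints)
```

In the actual system such factors would be recomputed (or smoothed) each frame and applied to the garment's transform before the cloth simulation runs, so the clothing follows the user seen in the mirrored video stream.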
P. Presle: "A Virtual Dressing Room based on Depth Data"; Supervisor: H. Kaufmann; Institut für Softwaretechnik und Interaktive Systeme, 2012; final examination: 10-08-2012.