At a conference held in Vancouver, the team described a method for turning a Galaxy Nexus into a depth-sensing camera: they removed the infrared filter that normally blocks unwanted infrared light from reaching the sensor and attached a small ring of infrared LEDs near the camera, allowing the phone to record depth information. Conventionally, depth sensing requires many visual inputs, sometimes as many as six cameras. Instead, the team successfully combined low-cost infrared LEDs with machine learning to approximate depth.
The technique estimates absolute, per-pixel depth using an ordinary monocular 2D camera with only small modifications. In their experiments, the team was able to sense depth and motion at 220 frames per second, relying on the relative intensity of reflected infrared light to determine the distance to each point.
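The physical intuition behind intensity-based ranging is that reflected infrared light falls off roughly with the square of distance, so brighter pixels read as closer. The sketch below illustrates only that inverse-square idea; the function name, the calibration constant `k`, and the toy image are illustrative assumptions, and the actual system learns the intensity-to-depth mapping with machine learning rather than using a fixed formula, since surface reflectance and LED geometry vary per scene.

```python
import numpy as np

def estimate_depth_from_ir(intensity, k=1.0, eps=1e-6):
    """Approximate per-pixel depth from reflected IR intensity.

    Assumes a simple inverse-square falloff, I ~ k / d**2,
    which rearranges to d ~ sqrt(k / I). This is a toy model
    of the intensity cue, not the team's learned mapping.
    """
    intensity = np.asarray(intensity, dtype=float)
    # Clamp intensity away from zero to avoid division by zero
    # in fully dark pixels.
    return np.sqrt(k / np.maximum(intensity, eps))

# Toy 2x2 "IR image": brighter pixels are treated as closer.
ir = np.array([[1.0, 0.25],
               [0.04, 0.01]])
depth = estimate_depth_from_ir(ir, k=1.0)
# depth: [[1., 2.], [5., 10.]]
```

In practice a learned model replaces the constant `k`, because albedo and lighting make raw intensity an ambiguous distance cue on its own.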
The Microsoft team believes this approach will, at the very least, enable a greater number of prototypes for depth-sensing applications. The method cannot yet replace commodity depth sensors, but the team hopes it will enable 3D-sensing and active systems in novel contexts.