One of the many problems hampering the mixed reality experience is that the face of the person inside the system cannot be seen: it is obscured by the virtual reality headset, which blocks the person's facial expressions from being rendered. To remedy this, Google Research and Google Daydream Labs have worked together to develop a more realistic solution.
In a post on Google Research's official blog, the team said it achieved the breakthrough using "a combination of 3D vision, machine learning and graphics techniques," which "is best explained in the context of enhancing Mixed Reality video."
The breakthrough consists of three important components: dynamic face model capture; calibration and alignment; and compositing and rendering.
The first step, dynamic face model capture, entails creating a 3D model of the user's face to serve as a proxy while they are wearing the VR headset, according to The Verge. The proxy is then used when rendering the final mixed reality video, creating the illusion that the VR headset has been removed. The capture takes less than a minute and builds a database of the user's blinks and eye-gaze directions.
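In spirit, the capture database can be thought of as a lookup from tracked gaze direction to a previously captured eye appearance. The sketch below is only an illustration of that idea, assuming a simple nearest-neighbour lookup; the class and names are hypothetical and not Google's actual implementation.

```python
import math

# Hypothetical sketch: during the sub-minute capture session, eye-region
# captures are stored keyed by gaze direction, so the proxy face can later
# show a plausible eye appearance for whatever gaze the headset tracks.
class GazeDatabase:
    def __init__(self):
        # maps (yaw, pitch) gaze angles in degrees -> captured eye image id
        self.samples = {}

    def record(self, yaw, pitch, image_id):
        self.samples[(yaw, pitch)] = image_id

    def lookup(self, yaw, pitch):
        # nearest-neighbour search over the captured gaze directions
        return min(
            self.samples.items(),
            key=lambda kv: math.hypot(kv[0][0] - yaw, kv[0][1] - pitch),
        )[1]

db = GazeDatabase()
db.record(0.0, 0.0, "eyes_center")
db.record(20.0, 0.0, "eyes_right")
db.record(0.0, 15.0, "eyes_up")

print(db.lookup(18.0, 2.0))  # nearest capture: "eyes_right"
```

At runtime, the headset's eye tracker supplies the gaze angles, and the closest stored capture is pasted into the proxy face before rendering.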
The next step, calibration and alignment, synchronizes the camera's video stream with the headset's coordinate system. This step requires pinpoint accuracy, as the developers must align the 3D face model with the camera stream precisely to render a seamless result.
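The geometric core of this alignment is a rigid transform: the tracked headset pose moves every vertex of the 3D face model into the external camera's coordinate system. The following is a minimal sketch of that idea, reduced to a yaw rotation plus a translation; the pose values and function name are assumptions for illustration.

```python
import math

def transform_vertex(v, yaw_rad, translation):
    """Rotate a vertex (x, y, z) about the vertical axis, then translate.

    A real pipeline would use a full 6-degree-of-freedom pose from the
    headset tracker; this sketch keeps only one rotation axis for clarity.
    """
    x, y, z = v
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    rotated = (c * x + s * z, y, -s * x + c * z)
    return tuple(r + t for r, t in zip(rotated, translation))

# A face-model vertex one unit in front of the model origin, with the
# headset yawed 90 degrees and offset two units along the camera's x axis.
v_cam = transform_vertex((0.0, 0.0, 1.0), math.pi / 2, (2.0, 0.0, 0.0))
print(tuple(round(c, 6) for c in v_cam))  # (3.0, 0.0, 0.0)
```

If this transform is even slightly off, the rendered face drifts against the live video, which is why the article stresses pinpoint accuracy in this step.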
The last step, compositing and rendering, produces a render of the 3D face model that is consistent with the camera stream. Using the data from the first two steps, the developers create a synchronized mixed reality video that shows the user's face.
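One common way to composite a render over live video, and a reasonable reading of this step, is per-pixel alpha blending: where the headset occludes the face, the rendered proxy pixel is blended with the camera pixel so the headset appears translucent rather than crudely pasted over. The pixel values and blending weight below are assumptions for this sketch.

```python
def composite(camera_px, render_px, alpha):
    """Alpha-blend one RGB pixel of the face render over the camera frame.

    alpha = 0 keeps the raw camera pixel; alpha = 1 shows only the render.
    Intermediate values make the headset region look see-through.
    """
    return tuple(
        round((1 - alpha) * c + alpha * r)
        for c, r in zip(camera_px, render_px)
    )

camera_pixel = (40, 40, 40)     # dark headset surface in the camera frame
render_pixel = (200, 160, 140)  # skin tone from the rendered face model
print(composite(camera_pixel, render_pixel, 0.8))  # (168, 136, 120)
```

Running this blend over every pixel of the headset region, frame by frame, is what yields the final mixed reality video in which the user's face stays visible.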