Google has published a Research Blog post about the Google Camera's portrait mode. The Pixel's portrait mode needs only a single camera lens, unlike many other OEMs' phones, which require a second camera to map out depth and synthesize the bokeh (blur) effect.
Google's camera uses semantic image segmentation to make this happen: it maps out which pixels belong to the subject and which belong to the background. Google has released this technology as open source, so any phone maker can implement it in its own smartphones, and app developers can include it in their own apps.
The semantic image segmentation model can also label pixels with categories such as road, person, dog, or sky. Check out the source link below for more details on how Google's Semantic Image Segmentation with DeepLab works.
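To make the mask-to-bokeh idea concrete, here is a minimal sketch of how a per-pixel segmentation mask can drive a portrait-style blur. This is not Google's implementation (the real pipeline uses a trained DeepLab-style neural network and additional depth cues); the function name, the grayscale simplification, and the simple box blur are illustrative assumptions.

```python
import numpy as np

def portrait_blur(image, mask, k=5):
    """Blur background pixels, keep subject pixels sharp.

    image: (H, W) float array (grayscale, for simplicity)
    mask:  (H, W) array of 0/1 labels, e.g. "person" pixels
           as predicted by a segmentation model (hypothetical input)
    k:     side length of the box-blur window
    """
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    # Box blur: average each pixel over its k x k neighborhood.
    blurred = np.zeros_like(image, dtype=float)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= k * k
    # Composite: sharp subject over blurred background.
    return np.where(mask == 1, image, blurred)
```

The key point is the final compositing step: once the model has labeled each pixel as subject or background, producing the bokeh effect is a simple per-pixel selection between the original and a blurred copy.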
Source: Google Research Blog