Google Research explains the Pixel 2 portrait mode

Photo: dpreview.com

Standard image (left) and depth mask (right), Image: Google

Background-blurring portrait or bokeh modes have pretty much become a standard feature on dual-camera phones. Similar effects can be achieved with single-lens devices, but operation tends to be more cumbersome, requiring more manual intervention, and the results are less realistic than on dual-camera setups.

However, on the new Pixel 2 models, Google has implemented a portrait mode on a single-lens phone that can compete with the dual-camera competition in terms of both operation and image results. Now Marc Levoy and Yael Pritch, two of the engineers behind the Pixel 2 portrait mode, have taken the time to explain how in a comprehensive post on the Google Research Blog.

HDR+ picture without (left) and with (right) portrait mode, Image: Google

The Google Pixel 2 offers portrait mode on both its rear-facing and front-facing cameras, using a machine-learned neural network to generate a foreground-background segmentation mask. The front camera relies on this segmentation alone, while the rear camera additionally computes a depth map from the phase information captured by the image sensor's dual-pixel autofocus technology.
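
The dual-pixel depth estimate works like a miniature stereo pair: each pixel on the sensor is split into two photodiodes that see the scene through opposite halves of the lens aperture, and the tiny shift between the two resulting half-images encodes distance. As a rough illustration of the idea (not Google's implementation), here is a toy block-matching sketch in Python; the function name, tile size, and integer-only shifts are our assumptions, and a real pipeline would add subpixel precision and heavy smoothing of the noisy raw estimates.

```python
import numpy as np

def dual_pixel_disparity(left, right, max_shift=3, tile=8):
    """Toy depth-from-dual-pixels: block-match the two half-images.

    left, right -- H x W float arrays, the views seen through the left
                   and right halves of the lens aperture (the baseline is
                   under a millimeter, so shifts are only a few pixels)
    Returns a coarse (H//tile) x (W//tile) map of horizontal shifts;
    the shift magnitude is a proxy for distance from the focal plane.
    """
    h, w = left.shape
    disp = np.zeros((h // tile, w // tile))
    for ty in range(h // tile):
        for tx in range(w // tile):
            y, x = ty * tile, tx * tile
            ref = left[y:y + tile, x:x + tile]
            best_err, best_shift = np.inf, 0
            # Try each candidate shift and keep the one with the lowest
            # sum-of-squared-differences (SSD) matching error.
            for s in range(-max_shift, max_shift + 1):
                if x + s < 0 or x + s + tile > w:
                    continue
                cand = right[y:y + tile, x + s:x + s + tile]
                err = float(np.sum((ref - cand) ** 2))
                if err < best_err:
                    best_err, best_shift = err, s
            disp[ty, tx] = best_shift
    return disp
```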

In a final step, the segmentation mask and the depth map are combined to determine how much blur to apply to each pixel of the image, generating the end result.
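
To make that final step concrete, here is a minimal NumPy/SciPy sketch of the combination. The function name, the simple sharp/blurred two-image blend, and the Gaussian kernel are illustrative assumptions on our part; Google's actual pipeline renders a proper spatially varying defocus kernel from the depth map rather than blending a single blurred copy.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_bokeh(image, seg_mask, depth, focal_depth, max_sigma=8.0):
    """Blend a sharp photo with a blurred copy, weighted by mask and depth.

    image       -- H x W x 3 float array, the sharp input photo
    seg_mask    -- H x W floats in [0, 1]; 1.0 marks the foreground subject
    depth       -- H x W floats, relative scene depth
    focal_depth -- the depth value that should remain perfectly sharp
    """
    # Blur grows with distance from the in-focus plane...
    blur_weight = np.clip(np.abs(depth - focal_depth), 0.0, 1.0)
    # ...but anything the segmentation mask labels as subject stays sharp.
    blur_weight *= 1.0 - seg_mask

    # One uniformly blurred copy stands in for the true spatially varying
    # defocus kernel; sigma=0 on the last axis avoids mixing color channels.
    blurred = gaussian_filter(image, sigma=(max_sigma, max_sigma, 0))

    w = blur_weight[..., np.newaxis]  # broadcast the weight over RGB
    return (1.0 - w) * image + w * blurred
```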

If you are interested in a more detailed description of the process, you can find it, along with a range of sample images and illustrations, on the Google Research Blog. Or stick around DPReview, because we'll be doing a deep technical dive on all things 'Portrait Mode' very soon!

2017-10-18 15:04
