Video: Google's Super Resolution algorithm explained in three minutes

Photo: dpreview.com

Space constraints in the thin bodies of modern smartphones mean camera engineers are limited in terms of the size of image sensors they can use in their designs. Manufacturers have therefore been pushing computational imaging methods to improve the quality of their devices' image output.

Google's Super Resolution algorithm is one such method. It involves shooting a burst of raw photos every time the shutter is pressed and takes advantage of the user's natural hand-shake, however slight. The sub-pixel differences between the frames in the burst allow several of them to be merged into a single output file with optimized detail at each pixel location.
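
To make the idea more concrete, here is a minimal, hypothetical sketch (not Google's implementation) of how sub-pixel-shifted burst frames can be accumulated onto a finer output grid. The function name merge_burst, the assumption that the per-frame shifts are already known, and the simple averaging are all illustrative; a real pipeline estimates the shifts by aligning the frames and weights each contribution more carefully.

```python
import numpy as np

def merge_burst(frames, shifts, scale=2):
    """Toy multi-frame super-resolution by scattering shifted samples.

    frames -- list of H x W float arrays (the burst)
    shifts -- (dy, dx) offset of each frame relative to frame 0, in input
              pixels; assumed known here, estimated by alignment in practice
    scale  -- super-resolution factor of the output grid
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    for frame, (dy, dx) in zip(frames, shifts):
        # Each sample lands at its sub-pixel position on the finer grid.
        ty = np.clip(np.round((yy + dy) * scale).astype(int), 0, h * scale - 1)
        tx = np.clip(np.round((xx + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (ty, tx), frame)
        np.add.at(weight, (ty, tx), 1.0)
    # Average overlapping samples; grid cells no frame happened to hit stay
    # zero and would be filled by interpolation in a real pipeline.
    return np.where(weight > 0, acc / np.maximum(weight, 1.0), 0.0)
```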

An illustration that shows how multiple frames are aligned to create the final image.

Google uses Super Resolution in the Night Sight and Super Res Zoom features of its Pixel 3 devices and has previously published an in-depth article about it on its blog. Our own Rishi Sanyal has also taken a close look at the technology and the features in which it has been implemented.

A visual representation of the steps used to create the final image from a burst of raw input images.

Now Google has published the video above, which provides a great overview of the technology in just over three minutes.

'This approach, which includes no explicit demosaicing step, serves to both increase image resolution and boost signal to noise ratio,' write the Google researchers in the paper the video is based on. 'Our algorithm is robust to challenging scene conditions: local motion, occlusion, or scene changes. It runs at 100 milliseconds per 12-megapixel RAW input burst frame on mass-produced mobile phones.'
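
The robustness the researchers mention can be illustrated with a toy per-pixel weighting scheme like the sketch below. This is an assumed stand-in for the paper's actual robustness model: the function name, the noise_sigma parameter, and the box-filter smoothing are illustrative choices, but the principle is the same, which is that pixels that still disagree with the base frame after alignment (local motion, occlusion, scene changes) contribute little to the merge.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def robustness_weight(base, aligned, noise_sigma=0.02):
    """Toy per-pixel merge weight for an aligned burst frame.

    Pixels where the aligned frame still disagrees with the base frame,
    e.g. because something in the scene moved, get weights near zero so
    they do not ghost into the merged result.
    """
    # Locally averaged absolute difference between aligned frame and base.
    diff = uniform_filter(np.abs(aligned - base), size=3)
    # Gaussian falloff: small differences (noise) keep full weight,
    # large differences (scene changes) are effectively rejected.
    return np.exp(-(diff ** 2) / (2.0 * noise_sigma ** 2))
```

In a merge loop, each frame's samples would simply be multiplied by this weight before being accumulated, so static regions benefit from every frame while changed regions fall back to the base frame.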


