Google loves computational photography and has shown it across several generations of Pixel phones.
From single-lens bokeh to near-dark shots and multi-frame super-zoom, the company has consistently invested in image processing.
Now it is about to take another step with the upcoming Pixel 6 and Pixel 6 Pro. According to demonstrations Google has given to some outlets (The Verge, Wired), the Pixel 6's new camera system will solve a common problem: blurred faces when people are on the move.
Moving subjects are difficult to capture for a number of reasons, but Google is trying to solve this on the Pixel 6 through a combination of software and hardware.
When a user photographs a person in motion, the Pixel 6 will capture one shot with the main sensor at a normal exposure and another with the ultra-wide lens at a much faster shutter speed.
The phone can then use the new Tensor chip to combine the two, taking color and detail from the longer exposure while keeping the face sharp thanks to the shorter one.
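Google has not published how this fusion works, so the following is only a minimal conceptual sketch in NumPy. Everything here is an assumption for illustration: the frame names, the fixed shutter times, and the soft per-pixel `face_mask` stand in for what a real pipeline would compute with alignment, denoising, and learned masks.

```python
import numpy as np

def fuse_exposures(long_exp, short_exp, face_mask, t_long=1/60, t_short=1/480):
    """Blend a long exposure (good color and low noise) with a short
    exposure (sharp motion) using a soft face mask in [0, 1].

    Illustrative only: shutter times and the mask are hypothetical,
    and real pipelines blend in a different color space after
    aligning and denoising the frames.
    """
    # Brightness-match the short exposure: a faster shutter gathers
    # proportionally less light, so scale by the exposure-time ratio.
    gain = t_long / t_short
    short_matched = np.clip(short_exp.astype(np.float64) * gain, 0.0, 1.0)

    # Per-pixel linear blend: take the sharp frame where the mask says
    # "face", and the well-exposed frame everywhere else.
    mask = face_mask[..., np.newaxis]  # broadcast over the RGB channels
    return mask * short_matched + (1.0 - mask) * long_exp

# Tiny synthetic example: 2x2 RGB frames, mask covering one pixel.
long_exp = np.full((2, 2, 3), 0.5)    # normal exposure
short_exp = np.full((2, 2, 3), 0.05)  # darker: 8x faster shutter
face_mask = np.array([[1.0, 0.0],
                      [0.0, 0.0]])
fused = fuse_exposures(long_exp, short_exp, face_mask)
```

In this toy example the masked pixel comes from the gain-corrected short exposure (0.05 × 8 = 0.4) and every other pixel keeps the long exposure's value of 0.5.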
Alongside the image-blending algorithm there is also machine learning: a face-detection model that tries to keep the subject's face sharp, as well as models that counter common problems like blur caused by the photographer's hand shake.
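Countering hand shake in a multi-frame pipeline starts with aligning the frames before blending. As a hedged illustration of that step (not Google's actual method), here is a classic phase-correlation shift estimator in NumPy; real pipelines align per-tile and at sub-pixel precision.

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the integer (dy, dx) cyclic translation between two
    grayscale frames via phase correlation.

    Returns the shift to apply to `moved` (with np.roll) to line it
    up with `ref`. Illustrative only: a stand-in for the alignment a
    handheld multi-frame pipeline performs before fusing exposures.
    """
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks past the midpoint to negative shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# Synthetic check: shift a random frame and recover the displacement.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
moved = np.roll(ref, shift=(3, -2), axis=(0, 1))
dy, dx = estimate_shift(ref, moved)
```

Rolling `moved` back by the estimated `(dy, dx)` reproduces `ref`, which is exactly the registration a blending stage needs as input.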
It’s hard to judge the results without seeing them firsthand, but from what we know so far, everything looks pretty impressive.