Camera software used on the Google Pixel/Pixel XL is a legacy of Google Glass
The cameras on the Google Pixel and Google Pixel XL have received plenty of compliments. Last year, DxOMark called the Pixel cameras “the best smartphone cameras ever made.” The 12MP rear camera uses a Sony IMX378 sensor, and the 8MP front-facing camera sports a Samsung S5K4H8/OmniVision OV8856 sensor. In our review of the Pixel XL, we noted that Google gets the most out of these sensors by deploying special software. As it turns out, that special software might be a legacy of Google Glass.
Google’s issue with the Google Glass camera was its size. While the Search giant was hoping to put a camera on its wearable that matched the quality of smartphone cameras, the sensor used in Glass was smaller than those employed in a handset. That meant Google engineers had to figure out a way for the smaller sensor to capture enough light so that Glass users wouldn’t have to wear a miner’s helmet on their heads. Eventually, the engineers developed a solution called Gcam.
Since the hardware couldn’t be changed, Google worked on the software, and more precisely, the image processing. Gcam is similar to HDR in that multiple shots of the same scene are snapped consecutively and then merged together. The theory is that the final picture offers the best of each frame captured. This is so similar to HDR that on the Pixel, it is called HDR+. The technology is beginning to find its way into other Android devices and Google-owned apps like YouTube and Google Photos.
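To see why merging a burst of frames helps a small sensor, consider the simplest possible version of the idea: averaging several noisy captures of the same scene. The sketch below is a hypothetical illustration, not Google's actual Gcam/HDR+ pipeline (which also aligns frames and merges them far more cleverly); it just demonstrates that averaging N frames shrinks random sensor noise by roughly the square root of N.

```python
import random

def merge_burst(frames):
    """Average a burst of equal-length pixel lists, pixel by pixel.

    Toy stand-in for burst merging: real pipelines first align the
    frames to compensate for hand shake before combining them.
    """
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

random.seed(42)

# A "true" scene of 1,000 pixel values (illustrative, 0-255 scale).
scene = [random.uniform(0, 255) for _ in range(1000)]

# Each frame in the burst is the scene plus independent read noise.
burst = [[p + random.gauss(0, 20) for p in scene] for _ in range(8)]

merged = merge_burst(burst)

# Mean absolute error of one raw frame vs. the merged result.
err_single = sum(abs(a - b) for a, b in zip(burst[0], scene)) / len(scene)
err_merged = sum(abs(a - b) for a, b in zip(merged, scene)) / len(scene)
```

With eight frames, the merged image lands noticeably closer to the true scene than any single exposure, which is the same trade Gcam makes: spend capture time instead of sensor area to gather light.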
It will be interesting to compare pictures taken with the Pixel models against those snapped by the upcoming BlackBerry KEYone, which will employ the exact same sensors as the Pixel does for both of its cameras. Such a comparison would show how much this software really adds to the high-quality snapshots that the Pixel and Pixel XL are known for.