Table of contents:
- This is how Google's algorithm works
- Context-aware automatic correction
- We may see this technology in the Pixel
Researchers from Google and MIT have developed an algorithm that corrects the distortion typical of wide-angle shots.
You may have noticed that some people's faces look stretched, slightly squashed or otherwise distorted in photos. While this can sometimes be blamed on the photographer's skill, the truth is that wide-angle shots from mobile devices usually distort objects or people near the edges of the image.
Several methods attempt to solve this problem, but so far none has been as effective as Google's new proposal. While it may seem easy to fix, it is not: it requires complex local edits that must not affect the rest of the objects in the photo.
This is how Google's algorithm works
As the researchers explain, the algorithm detects faces and builds a mesh that automatically reverses this type of distortion when a shot is taken with a wide-angle lens, as illustrated in the image:
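Conceptually, the researchers' paper describes blending two radial projection models: the perspective mapping (which keeps background lines straight) with a stereographic mapping (which preserves the local shape of faces), letting the mesh decide the blend at each point. The sketch below is a simplified per-point illustration of that idea, not Google's actual mesh optimization; the function names and the scalar `face_weight` are assumptions for illustration:

```python
import math

def perspective_radius(theta, f):
    # Perspective projection: a ray at angle theta from the optical axis
    # lands at radius r = f * tan(theta). Keeps straight lines straight,
    # but stretches faces near the edges of a wide-angle frame.
    return f * math.tan(theta)

def stereographic_radius(theta, f):
    # Stereographic projection: r = 2f * tan(theta / 2). Locally
    # shape-preserving, so faces keep their natural proportions.
    return 2.0 * f * math.tan(theta / 2.0)

def blended_radius(theta, f, face_weight):
    # Inside face regions (face_weight -> 1) use the shape-preserving
    # stereographic mapping; elsewhere (face_weight -> 0) keep the
    # perspective mapping so the background is untouched. The real
    # algorithm computes a smooth weight field over a mesh instead of
    # a single scalar.
    return ((1.0 - face_weight) * perspective_radius(theta, f)
            + face_weight * stereographic_radius(theta, f))
```

Note how, for the same incoming ray, the stereographic radius is smaller than the perspective one: that difference is exactly the edge stretching the algorithm undoes inside face regions.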
We can see the difference the algorithm makes in the following images: a selfie taken with a wide-angle lens with a 97° field of view.
The first image shows the distortion in the faces, and the second shows how the algorithm restored the face shapes to their natural state.
In other words, the process is triggered automatically when the wide-angle lens is used, applying this special treatment to faces without altering the rest of the image. And the speed of the correction (about 920 milliseconds) makes it practically imperceptible to users.
According to their tests, the algorithm works reliably for fields of view between 70° and 120°, covering almost every wide-angle lens found on mobile devices.
To take advantage of the algorithm, the user would not have to take any action or enable a special camera mode: the correction would be applied automatically whenever the camera detects that the wide-angle lens is in use.
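As a trivial illustration of that automatic gating, one could imagine a check of the active lens's field of view against the 70°–120° range reported in the tests. The constants and function name below are hypothetical, not part of any real camera API:

```python
# Reported range in which the algorithm works reliably (assumed as a
# simple activation window; the real trigger logic is not public).
MIN_FOV_DEG = 70.0
MAX_FOV_DEG = 120.0

def should_correct(lens_fov_deg: float) -> bool:
    """Return True when the active lens's field of view falls inside
    the range where the correction is reported to work."""
    return MIN_FOV_DEG <= lens_fov_deg <= MAX_FOV_DEG
```

Under this assumption, the 97° selfie lens from the example above would trigger the correction, while a standard ~65° main lens would not.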
Context-aware automatic correction
We can see the dynamics of this algorithm in the following video:
The algorithm corrects automatically while taking the rest of the objects in the scene into account, keeping the whole context of the photograph consistent. The results look natural, with no details that give away that the photograph has been modified.
The team behind the project has shared a photo gallery on Flickr showing how different methods tackle this problem compared to their algorithm. There are more than 160 photographs (like the one at the beginning of this article) that help us evaluate the results.
We may see this technology in the Pixel
We can hope that this algorithm, or some technology derived from it, will be applied in the next generation of Pixel, since the team behind the project works at Google.
However, the paper they have shared does not mention anything about that. These first tests of the algorithm have nevertheless been successful, showing once again how artificial intelligence can improve mobile photography and make life easier for users.
Without a doubt, having a feature like this on our phones would save plenty of headaches and the time spent editing out these distortions ourselves.