Google AI is starting to figure out where a photo was taken based on the background.
Over the past few years, artificial intelligence has advanced dramatically, fueling rapid progress in image recognition software. Wolfram Alpha is one of the companies putting neural nets to the test in image recognition, but as usual Google is a step ahead: it has already taught its systems to find objects in photos, such as recognizing faces across different images. Now Google is turning to recognizing the backgrounds of photos.
Google's neural net determines where a photo was taken by recognizing its background. To do that, Google built a database of 126 million images, each previously tagged with its location, effectively creating a visual map of the world. About 91 million of those images were used to train the AI; the rest were held back to validate and calibrate it.
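The "visual map" idea can be pictured as turning geolocation into classification: divide the globe into cells and have the network pick the cell a photo belongs to. The real system uses an adaptive subdivision with finer cells in photo-dense regions; the fixed-size grid below is only a toy sketch of the concept, with the cell size chosen arbitrarily for illustration.

```python
def latlon_to_cell(lat, lon, cell_deg=5.0):
    """Map a latitude/longitude pair to a coarse grid-cell index.

    A simplified stand-in for an adaptive partitioning: each cell
    becomes one class label, so predicting a location means
    predicting a cell.
    """
    row = int((lat + 90.0) // cell_deg)   # 0..35 for 5-degree cells
    col = int((lon + 180.0) // cell_deg)  # 0..71
    cols = int(360.0 / cell_deg)
    return row * cols + col

# Two nearby points in Paris land in the same cell...
assert latlon_to_cell(48.86, 2.35) == latlon_to_cell(48.85, 2.29)
# ...while New York falls in a different one.
assert latlon_to_cell(48.86, 2.35) != latlon_to_cell(40.71, -74.01)
```

Framing the problem this way lets an ordinary image classifier output a probability over places instead of regressing raw coordinates.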
Not sure where you shot that landscape or selfie? Using some clues in the image itself, Google will remember that for you.
After training, the AI was tested on about 2.3 million geotagged photos from Flickr. It guessed the photo's location correctly 3.6% of the time down to the city block, 10.1% at city level, 28.4% at country level, and 48% at continent level.
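Accuracy at these different granularities can be measured by checking how far each guess lands from the true location. A minimal sketch, assuming illustrative distance thresholds for each level (the paper's exact cutoffs may differ):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Assumed thresholds (km) for each granularity level.
LEVELS = {"street": 1, "city": 25, "country": 750, "continent": 2500}

def accuracy_by_level(pred, truth):
    """Fraction of guesses within each distance threshold of the truth."""
    dists = [haversine_km(*p, *t) for p, t in zip(pred, truth)]
    return {name: sum(d <= km for d in dists) / len(dists)
            for name, km in LEVELS.items()}
```

For example, a guess in Paris scored against a ground truth in New York counts as a miss at every level, while an exact hit counts at all of them; over millions of photos these fractions give the per-level percentages quoted above.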
These results may seem low, but the neural net, called PlaNet, beat humans in 56% of head-to-head match-ups, and with more robust image sets its accuracy should climb far higher. With a few tweaks and more photos, PlaNet may eventually be able to recognize indoor locations as well.
The interesting part is that the entire model fits in 377MB, so no supercomputer is needed. The database of images it was trained on is, of course, far larger than that.
You can read Google's entire paper here.
Source: MIT Tech Review.