Wednesday, May 17, 2017

Google's new visual recognition app can identify flowers you don't know

Caroline Plouff

Google (GOOGL, GOOG) CEO Sundar Pichai announced a visual recognition product called Google Lens on Wednesday at the company's Google I/O developer conference.

The Lens feature, built for phones, recognizes what the camera sees and provides information about the object. In a demonstration, Pichai showed the app correctly identifying a flower, entering a Wi-Fi router’s password and SSID from the sticker on the router, and pulling up a restaurant’s Google rating and reviews, all when the phone’s camera was pointed at each object. Google wants to pre-empt your googling.

Google Lens follows visual recognition products recently released by other tech companies. Amazon, for instance, has long had a product recognition tool built into its shopping app, letting users see how much the company will undercut brick-and-mortar competitors on the same item. Samsung’s Bixby app can scan a photo of a business card and save the information as a contact, something more closely aligned with Google’s new capabilities.



Powering all this is new hardware from Google: Tensor Processing Units, or TPUs, the chips behind Google’s AI training systems. Users will never see these “deep learning” systems, however, because Google is all about the cloud doing the heavy lifting it takes for a computer to identify real-life stuff through its camera.

As the HBO show “Silicon Valley” illustrated in a recent episode with its “food Shazam” app, getting a camera to identify real-life stuff from a variety of angles, in different lighting situations, and with different phone cameras is quite the computational challenge. This time, however, Google isn’t buying these processors from Nvidia (NVDA), but is making its own, optimized for its software. (Nvidia was Yahoo Finance’s company of the year in 2016.)

For More Information: Ethan Wolff-Mann
