Google has announced a new search feature in Google Lens that lets you search using text and images at the same time, refining your query to get exactly what you need. It’s powered by AI, and Google describes it as a way to “go beyond the search box and ask questions about what you see”.
To get started, open the Google app on Android or iOS, tap the Lens camera icon on the right-hand side of the search box, and either take a photo of something around you or select one of the photos in your camera roll. Then tap the ‘Add to your search’ button to add text to your query, for example an attribute like color.
Here’s how the flow works:

*Image source: Google*
Google gave some examples of how you might refine a query:
- Screenshot a stylish orange dress and add the query “green” to find it in another color
- Snap a photo of your dining set and add the query “coffee table” to find a matching table
- Take a picture of your rosemary plant and add the query “care instructions”
Google says multisearch uses its latest advances in AI, and that it is exploring ways the feature could be enhanced by MUM (Multitask Unified Model), Google’s AI technology focused on Search.
Multisearch is available in beta, in English, in the US, and Google says it currently works best for shopping searches.
It’s a new way for consumers to search for products, and another step forward in Google’s development of a more visual, intuitive, AI-enhanced search experience.