
Taking visual search to the next three stages

Whilst it’s great to see that Tesco Wine Finder has had a positive reception across the web, some of my more wily readers have emailed me, having seen into my thinking about the potential of automated object recognition of the sort that Cortexica Vision Systems has built for us.

So rather than remain implicit, I’ll state it explicitly here: using barcodes to recognise a product is frankly a compromise. In effect it says that computers are too ‘thick’ to understand what they are seeing, so this visual representation of data is needed instead. However, the technology works and we use it everywhere. If you’re a Tesco Finder user you’ll be trying this out for yourself soon!

Customers have to pick up the product and fiddle with it to get the barcode in front of the camera. Often phone cameras don’t allow close-up (macro) photography, which means that as soon as the barcode fills the screen it can be out of focus. It’s not a great experience, even if customers understand it.

I really want to go in a different direction and focus on the Tesco customer’s actual use-case: as the customer, I want to know whether Tesco stocks ‘this’ product. If so, what is its price, and where is the nearest store selling it? And whether or not Tesco stocks it, what credible alternatives are there (cheaper / on-offer / more Clubcard points / premium version / greener / healthier)?

Using their phone camera and suitable software, customers take a pic of any product – anything – and the system recognises it and gives these answers within seconds. It’s simple, accurate and useful. Actually it doesn’t have to be the product itself – it could be a magazine, newspaper, poster or web-based advert showing one in an image.
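To make the flow concrete, here’s a minimal sketch of how that recognise-then-answer pipeline might hang together. Everything in it is invented for illustration – the catalogue, the `recognise` function and the pricing data are stand-ins, not real Tesco or Cortexica APIs.

```python
# Hypothetical sketch of the visual product search flow: photo in,
# product + price + credible alternatives out. All data is invented.
from dataclasses import dataclass


@dataclass
class Product:
    name: str
    price: float  # GBP
    tags: set     # rough category labels used to find alternatives


# Toy catalogue standing in for the real product database.
CATALOGUE = {
    "sku-1001": Product("Finest Rioja 75cl", 8.50, {"wine", "premium"}),
    "sku-1002": Product("House Rioja 75cl", 5.00, {"wine", "value"}),
}


def recognise(image_bytes: bytes) -> str:
    """Stand-in for the image-recognition service: returns a matched SKU."""
    return "sku-1001"  # pretend the photo matched the Finest Rioja


def cheaper_alternatives(sku: str):
    """Credible alternatives: overlapping category, lower price."""
    target = CATALOGUE[sku]
    return [
        p for other_sku, p in CATALOGUE.items()
        if other_sku != sku and (p.tags & target.tags) and p.price < target.price
    ]


def visual_search(image_bytes: bytes) -> dict:
    sku = recognise(image_bytes)
    product = CATALOGUE[sku]
    return {
        "product": product.name,
        "price": product.price,
        "alternatives": [p.name for p in cheaper_alternatives(sku)],
    }


result = visual_search(b"...photo bytes...")
print(result)
```

The point of the sketch is the shape of the answer, not the recognition itself: once the photo resolves to a SKU, everything else (price, nearest store, alternatives) is ordinary catalogue lookup.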

That’s just the first stage! The second stage is a more subtle representation of the product: a plate showing a cooked version of it, a model wearing it if it’s an item of clothing, or a device pictured in use when you’re thinking of getting new batteries, a printer cartridge or other consumables for it.

The third stage is even more subtle: I open the fridge door or food cupboard, take a photo of what I see, and the system shows me recipes that use those ingredients, or some suggestions for other food that these items ‘go well with’, or healthier alternatives. I take a photo of my shower cubicle and the system suggests products to keep it clean, get me clean, and cheaper energy suppliers to reduce the bills on the hot water it needs.
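A rough sketch of that third-stage idea, reduced to its simplest form: match the items recognised in a scene against recipes and suggest the ones you could (nearly) make. The recipe data and the scene recogniser below are invented placeholders.

```python
# Hypothetical sketch: recognised fridge contents -> recipe suggestions.
# RECIPES and recognise_items are invented stand-ins for illustration.

RECIPES = {
    "Omelette": {"eggs", "butter", "cheese"},
    "Tomato pasta": {"pasta", "tomatoes", "garlic"},
    "Cheese toastie": {"bread", "cheese", "butter"},
}


def recognise_items(photo: bytes) -> set:
    """Stand-in for scene recognition: ingredients spotted in the photo."""
    return {"eggs", "cheese", "butter", "bread"}


def suggest_recipes(photo: bytes, max_missing: int = 0) -> list:
    """Recipes whose ingredients are covered by what was spotted,
    allowing up to max_missing items the customer would need to buy."""
    have = recognise_items(photo)
    return sorted(
        name for name, needs in RECIPES.items()
        if len(needs - have) <= max_missing
    )


suggestions = suggest_recipes(b"...fridge photo...")
print(suggestions)
```

Raising `max_missing` is where the retail angle comes in: a recipe one ingredient short is exactly the moment to suggest the product that completes it.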

So the new customer use-case is: As the customer I want to take a photo of this ‘scene’, and Tesco works out the context to help me with suggested products and services that make it better, simpler and cheaper.

That’s where I want to go. I just need the technology to do it!
