Anjali Ramachandran, Head of Innovation, writes about a few new developments in the field of scanning technology.
Scanning technology has come a long way in the last couple of years.
Last week, Shazam announced that they’ve ventured into visual scanning, building on top of their already popular listening technology. It works on any image that has the Shazam logo on it, such as this poster for Disney’s George Clooney-starrer Tomorrowland, where scanning the poster with the Shazam app throws up interactive content related to the film on your smartphone. Interesting, but my quibble here is that image recognition is tied to using both the Shazam logo and app.
We also have Blippar, initially known for their QR-scanning technology, which (don’t get me started on it!) hardly anyone uses properly – even now. Blippar acquired Layar last year to become the biggest augmented reality technology provider out there, bringing static images to life for a combined user base of 50 million. At PHD, Blippar competitor Zappar was even integrated into one of our books, 2016: Beyond the Horizon by PHD Worldwide Strategy & Planning Director Mark Holden, a few years ago. Pointing the app at an infographic on a page brought it to life – fascinating even then, to me at least.
Then things on the augmented reality front quietened down. The more interesting executions were artistic commissions like Chris O’Shea’s Hand from Above.
Till now. In March this year, Blippar raised $45 million in funding. They have some big plans, including becoming a visual search engine – a sort of visual Wikipedia where you point your camera at something you want information about and it automatically pulls up relevant tagged information on your phone. Not only that, it can start linking outward to further information from that one image. Jessica Butcher, one of the co-founders of Blippar, spoke about their plans at TEDx London Business School in April.
In other words, you don’t need a QR code or specific logo to trigger the information anymore, just the Blippar app and your smartphone camera. Definite progress.
However, the frontrunner for me at the moment is Internet of Things company Evrythng’s newest evolution of the technology. Pull up a specific URL on your smartphone, open your smartphone camera, point at a thing (anything, as long as it’s sufficiently recognisable, whether a logo or an object) and automatically get linked information. It can even be configured to trigger messages based on location and specific conditions (‘if this image was taken in London, show X content, but if it was taken in New York, show Y’). Crucially, it doesn’t depend on you using a specific app or logo – just visiting a URL on your smartphone.
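To make the location-based triggering concrete, here is a minimal sketch – not Evrythng’s actual API, and all names and URLs below are purely illustrative – of the kind of rule described above, where the same recognised image resolves to different content depending on where it was scanned:

```python
def resolve_content(image_tag: str, city: str) -> str:
    """Return the content URL for a recognised image, varying by location.

    `image_tag` stands in for whatever identifier the image-recognition
    step produced; the rules and URLs here are hypothetical examples.
    """
    rules = {
        ("poster-123", "London"): "https://example.com/content-x",
        ("poster-123", "New York"): "https://example.com/content-y",
    }
    # Fall back to a default page when no location-specific rule matches.
    return rules.get((image_tag, city), "https://example.com/default")


# The same scan yields different content in different cities:
print(resolve_content("poster-123", "London"))    # location-specific content
print(resolve_content("poster-123", "Paris"))     # no rule, so the default
```

The point of this shape – a lookup table of (image, condition) pairs with a fallback – is that brands can add or change conditional content without touching the scanning step itself.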
Step by step, over the months, the barriers to customers using this kind of technology are going down. That’s what’s important. I’m not going to download an app just because you tell me to; mostly I can’t be bothered – my life isn’t run by your requests or commands. That’s what so many technology companies get wrong: they think it’s a privilege for customers to use their technology but it’s about the privilege customers give *them* by *allowing* them into their busy lives.
So doing something I do fairly often – taking a picture on my phone – is a much easier ask, even if I do have to go to a web page first. It’s not perfect yet, but for now visiting web pages is a reasonably natural behaviour as well, so it will do.
We’re nowhere near the end of this journey yet. Who knows what the next few years will bring? That’s what makes this whole space interesting at the moment: the potential to find out.