Search engines have come a long way since the mid-1990s, from AltaVista's meta-tag-based criteria to Google's link-based algorithm. Launched in 1998, Google's algorithm has gone from strength to strength. So where do we go from here?

We can't see into the future, but we can predict it by understanding the challenges that search engines face today. Thanks to advances in cloud computing and mathematics, search engines can find better ways of helping users shorten their search for an answer.

Radically personal

The data scientists working on search engines like Google are already developing algorithms that predict the kind of results each user wishes to see. Before long, two people searching for the same keyword will no longer see the same results: search engines will show unique results based on what each individual user wants.

For example, user A and user B may both be searching for holidays in Greece. The difference is that Google may infer, using artificial intelligence, that user A shops at Harrods and wears Burberry, so the results will include the websites of luxury tour operators. For user B, who shops most frequently at Tesco, Google may infer that search results should be weighted towards mass-market brands like First Choice Holidays.

This will be made possible by the data collected by search engines – particularly Google. The fact that Google's Chrome web browser enjoys a 65% market share (W3Schools), and that many users of services like Gmail are signed in, enables Google to collect a lot of information about a user's website preferences. This information allows Google to build a reliable dataset and predict which results will satisfy the user's search.
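As a minimal sketch of how such personalisation might work, the snippet below re-ranks one set of results for two different users. The segment labels, affinity scores and example domains are all invented for illustration; real systems would infer these signals from far richer behavioural data.

```python
# Hypothetical sketch: re-ranking the same keyword's results for
# different users based on an inferred "brand affinity" profile.
def rerank(results, user_profile):
    """Sort results so those matching the user's inferred segment rank first.

    results: list of (url, segment) tuples.
    user_profile: dict mapping segment -> affinity score inferred from
    browsing history (all values here are invented for illustration).
    """
    return sorted(results, key=lambda r: user_profile.get(r[1], 0.0), reverse=True)

results = [
    ("massmarket-holidays.example", "mass-market"),
    ("luxury-greek-villas.example", "luxury"),
]

user_a = {"luxury": 0.9, "mass-market": 0.2}   # inferred: shops at Harrods
user_b = {"luxury": 0.1, "mass-market": 0.8}   # inferred: shops at Tesco

print(rerank(results, user_a)[0][0])  # luxury site ranks first for user A
print(rerank(results, user_b)[0][0])  # mass-market site ranks first for user B
```

Both users typed the same query; only the inferred profile changed the ordering.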

On another level, a user searching for articles could be shown articles that their peers, friends or family have liked or shared on Twitter or other social media. Search engines will statistically infer that these articles are likely to be relevant to the user.

Getting sentimental

Social media signals already play a major part in explaining why some content outranks others, even though links still carry important weight (the more links from authoritative websites like the BBC, the more credible your site's content appears). If that weren't enough, search engines already use sentiment analysis to further qualify inbound links to your website.

Sentiment analysis is a relatively new way of helping computers understand the attitude of a piece of content towards a topic. For example, "that was an amazing holiday" would likely score as positive, because the words 'holiday' and 'amazing' are predicted to carry positive sentiment.
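The idea can be sketched with a simple lexicon-based scorer. Real systems use far richer statistical models; the word scores below are invented purely for illustration.

```python
# Toy sentiment lexicon: word -> score in [-1, 1] (values are invented).
LEXICON = {"amazing": 1.0, "holiday": 0.3, "terrible": -1.0, "awful": -0.9}

def sentiment_score(text):
    """Average the scores of known words; > 0 is positive, < 0 negative."""
    words = text.lower().split()
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

print(sentiment_score("that was an amazing holiday") > 0)  # True: positive
print(sentiment_score("an awful terrible trip") < 0)       # True: negative
```

Averaging over matched words is the crudest possible approach; it ignores negation ("not amazing") and context, which is exactly what more advanced models are built to handle.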

In the next five years, search engines will go further, evaluating the sentiment of tweets, images and mobile app content about sites and brands. This analysis will give search engines a better sense of what is being publicly said about sites on social media and in mobile apps. For example, if there are many negative tweets about a luxury furniture retailer then, despite the volume of attention the brand is getting, its site may not show up in a user's search for a hand-made bed. The company is relevant to the user's demographic, but a more qualitative assessment has changed the ranking.
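The furniture retailer scenario above can be sketched as a ranking adjustment: a relevant result is demoted when public sentiment about the brand is strongly negative. The penalty weight and scores are invented for illustration.

```python
# Hypothetical sketch: combining a base relevance score with the mean
# sentiment of tweets about a brand. Negative sentiment lowers the rank.
def adjusted_rank(relevance, tweet_sentiments, penalty=0.5):
    """relevance: 0..1 score from keyword/link analysis.
    tweet_sentiments: per-tweet scores in [-1, 1] (empty list = no change).
    """
    if not tweet_sentiments:
        return relevance
    mean = sum(tweet_sentiments) / len(tweet_sentiments)
    return relevance + penalty * mean

# A highly relevant retailer, but most tweets about it are negative:
print(adjusted_rank(0.9, [-0.8, -0.6, -0.9]) < 0.9)  # True: demoted
```

The additive penalty here is the simplest possible blend; a production system would learn how much weight social signals deserve relative to links and keywords.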

In effect, sentiment analysis could not only help the future search engine improve relevancy, but also turn search engines into artificially intelligent advisors.

Contextual image classification

If a picture is worth a thousand words to search engines, we are also likely to see sentiment analysis of images and videos. Today there are just over 2,000 machine learning experts specialising in computer vision, and their work is already furthering search engines' capability to understand images.

These computer vision experts are helping search engines like Blippar and Yossarian Lives to classify images the way an everyday person would. Visual search engines can already recognise everyday objects such as an apple by scanning a live image on a mobile phone, or generate image suggestions from a typed-in metaphor.

The progress being made will likely see search engines able to add much more context to images – their sentiment, the types of objects they contain and so forth. This will give search engines much more information to go on when serving users the best and most original content.

In five years…

Many of these advances are likely to be achieved not just because of advances in technology, but also because of the imagination of users and marketers who find new ways of searching and communicating. The next ten years will become even more exciting as wearable technology makes more information available for collection and analysis. This will no doubt present challenges for search engines and SEOs alike.