Adapting content strategy to keep up with the “Knowledge Graph”
The Knowledge Graph (KG) is an increasingly common feature of Google’s search results. Even if most users haven’t heard the phrase, they’ve probably seen it in action in all the information and answer boxes that have been popping up among the search results:
This new format of search result is an early signal of a deeper change with profound implications for publishers and for content/SEO strategy alike. Underlying it is Google’s desire to deliver richer results based on entities and inferred context, rather than on query strings or simple keyword matching.
In other words, Google’s technology can now comprehend real-world objects rather than just their keyword analogues, and it can understand the intent behind search queries to help answer questions faster and more effectively.
By connecting data to real world entities, the search engine is able to create a much richer set of results for its users, or in Google’s words:
The Knowledge Graph enables you to search for things, people or places that Google knows about—landmarks, celebrities, cities, sports teams, buildings, geographical features, movies, celestial objects, works of art and more—and instantly get information that’s relevant to your query. This is a critical first step towards building the next generation of search, which taps into the collective intelligence of the web and understands the world a bit more like people do.
It’s this technology that, for example, allows Google to take a search query like “Smithsonian opening hours” and instantly display this box right at the top of the page:
Why send users to another webpage (and lose potential ad clicks) when you could just answer their question instantaneously?
Much has been said about the threat this poses to publishers, particularly to sites whose data is being scraped by the KG without their permission (most notably Wikipedia), but also to the huge range of informational sites that may find their search traffic drying up as Google starts to answer their users’ questions for them.
On the other hand, the KG could also present significant opportunities for publishers who are able to adapt their content strategies and make it easier for Google to understand connections between their content and the entities it describes.
A signal of things to come has been the quiet inclusion of other sources of content in KG results. Google is no longer pulling (scraping) only from big reference sites like Wikipedia; it is starting to display content from websites across the rest of its enormous index too, and in a way that could revolutionise organic search.
In an article for Moz.com, Pete Meyers gives a few examples of how this might work. For instance, the query “social security tax rate” returns this KG result:
Crucially, these results are displayed with full attribution to the source site and a link that looks just like a regular organic result, yet they appear at the very top of the page, above both the regular organic results and the ads. In short, KG results like this are the new #1 spot for their search queries.
In some cases the difference can be drastic. Meyers gives another example, the search query “richest man in the world,” which returns this result:
Note that the first KG result is pulling from a page on Time.com, which previously sat at #8 in the organic results.
These types of KG results seem to be at an early, experimental stage, and so far we’ve been unable to reproduce them with any travel-specific queries. But the nature of the industry is such that we have a huge number of potential entities that could begin to appear in KG results: destinations, events, local businesses, personalities and more are all on the KG radar.
Additionally, although KG results are still fairly new and likely (guaranteed) to change in the coming months, it’s safe to assume that the KG will continue to focus on answering short, quick and simple questions, while Google maintains its emphasis on quality, detail and authority in the “regular” search results. The introduction of In Depth articles reflects this trend, as does the frequency with which KG results offer “related topics” or “also searched for” queries.
Staying ahead of the curve
Given the experimental nature of these results, it’s probably not wise to start optimising for this new environment just yet. However, there’s no doubt that these are early, tentative steps towards a changing model of how search (and therefore SEO and content strategy) will work in the near future.
Many publishers have a justifiable grievance over the way Google is pulling this data from their sites (see this hilarious tweet for example) but the only alternative is to block Googlebot from your site entirely and wave goodbye to a massive chunk of organic search traffic. A more pragmatic course would be to think about how your existing and future content can be adapted to keep up with this changing search landscape.
AJ Kohn has written a detailed introduction to the nascent concept of “Knowledge Graph Optimisation” which delves deep into the theory as well as providing some practical steps publishers can implement now to prepare:
Use nouns in your content: similar in concept to the use of keywords in traditional SEO – make it clear what entities your content describes by using nouns and actual names clearly and unambiguously.
Build contextual links: forget the old concept of hoarding “link juice” – linking to other relevant and contextual sites allows search engines to build connections between related entities and understand your content better.
Mark up your data: old meta tags have evolved into “structured data”, which allows you to give search engines much more context on the specific entities your content represents. Schema.org is a good starting point; see this previous article on applications for travel sites.
Get listed: the KG’s big data sources (currently) are Google+, Wikipedia and Freebase, all of which allow you to create and maintain an informational page on your brand (some more easily than others). Include as many references and links as possible: reviews on G+, links to other authoritative and relevant sites on your Wikipedia page, and all your social media URLs on Freebase.
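To make the structured-data point above concrete, here is a minimal sketch of what a schema.org entity description could look like for a travel site, embedded as JSON-LD. The hotel name, address and chosen properties are invented for illustration; schema.org’s Hotel type documents the full vocabulary, and the exact formats Google supports are subject to change:

```python
import json

# Illustrative entity data only -- the names and values here are
# assumptions, not taken from any real listing.
structured_data = {
    "@context": "http://schema.org",
    "@type": "Hotel",
    "name": "Example Grand Hotel",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Barcelona",
        "addressCountry": "ES",
    },
    "telephone": "+34-000-000-000",
}

# Embed the data as a JSON-LD script block, which can be placed in the
# page's <head> so crawlers can parse the entity unambiguously.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(structured_data, indent=2)
    + "\n</script>"
)
print(snippet)
```

The idea is simply that the page declares, in machine-readable form, which real-world entity it describes, rather than leaving the search engine to infer it from keywords alone.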
To that I would add one final, and rather inconvenient, observation: so far, KG results and In Depth articles have pulled exclusively from the highest-authority sites. This obviously makes sense from Google’s perspective, but it does put a dampener on any notion that the points above are all you need. Those points are about providing context and subject matter to the search engine; there’s no doubt you’ll still need to graft your way to earning sufficient authority for Google to take notice of your content.
Knowledge Graph 2.0: Now Featuring Your Knowledge: Moz.com, March 25, 2014
How To Tell Search Engines What “Entities” Are On Your Web Pages: Search Engine Land, March 21, 2014 (Good tactical guide to markup.)
Semantic SEO: Making the Shift from Strings to Things: Seoskeptic.com, October 2, 2013 (A very thorough analysis on the theory of the KG and tactical suggestions for KGO.)