Profile Information

Born in Wales. Degree from over the bridge. Digital Marketing & SEO type fella at http://www.andrewisidoro.co.uk

Blog Bio:

Andrew Isidoro is the founder of Typefonts, SEO Manager at Gocompare.com (a major UK price comparison website), and an SEO freelancer. You can find him on his blog talking about digital marketing and the state of semantic search, or on Twitter: @andrew_isidoro.

Over the past few weeks, while looking into how the Knowledge Graph pulls data for certain sources, I have made a few general observations and have been tracking what, if any, impact certain practices have on the display of information panels.

Really nice article, Tom. Google are the kings of thinking "outside of the box" and have spoken about creating this Star Trek-style ubiquitous system for a long time. I spoke recently at the Digital Marketing Show about something similar, but you've really taken it on a step.

I think the interesting area will be how Google monetise a physical web presence; or indeed, whether they can without damaging the end product. We're of course a long time (at least in digital terms) away from this becoming a reality, but strategising now for potential disruptions in search is what we're all here for.

In all seriousness, you're correct: humans created the data on the web and there are certainly a lot of errors in there, but as with any corpus of data, the strength is in consistency. A few months back Google relaxed their need for validation from multiple sources, and it led to this post on manipulating the Knowledge Graph.

As ever, I have no doubt that this is Google testing scenarios and measuring the effect; but in the meantime we are stuck with lower-quality panels drawn from a dirty dataset.

One of the issues I have had with the Knowledge Graph is its over-reliance on human-edited data. It's the reason Wikipedia can't be used as an accurate, up-to-date resource (at least in academia), and it has already begun to haunt entity entries.

I know we have spoken before about how the Knowledge Graph's expansion is one of the most complex areas of search at the moment, yet so few seem to be actively studying it. Great to see you continue to break that mould!

This is where things are a little underdeveloped at the moment. Currently you'll be shown the data of the user who is the most "authoritative" for that term. For example, a friend of mine, John Glover, is a pro cricketer at Glamorgan Cricket Club and has a fairly comprehensive Freebase profile. Yet when I, a close friend, search for him, I get a result for John Glover the actor.

I think in the future we'll see much more dynamic data based on our social graph. Essentially taking our social connections (and data within) into account when constructing knowledge panels. For more info on how this might work I recommend Justin Briggs' post on Building the Implicit Social Graph.

Not quite. The knowledge graph is quite a complicated idea of pulling in data from multiple sources, understanding and then displaying them within relevant SERPs. Authorship is similar in that it scrapes data from the page but it seems to be handled in a different way. See Bill Slawski's post on this:

I think relevance will begin to have a part to play in the future. Those who know you are much more likely to be shown your data than that of a footballer they have no affinity with. We'll have to wait and see on that one, though...

As I said above, there are a number of data sources that go into making an Entity. Take Moz's Gillian Muessig as an example. Gillian has no Wikipedia page and only a very basic Freebase profile, but she is still understood to be an "seomoz co-founder" because the RDF data that fuels her Freebase listing has pulled in third-party data from these Open Data sources.

Best practice would be to list yourself in these places (where applicable) using uniform data and to test with informational queries.
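To illustrate what "uniform data" across profiles might look like, here is a minimal sketch of machine-readable identity markup using schema.org's Person vocabulary in JSON-LD. The name, URL, job title and Twitter handle are taken from the bio above; this is an illustrative example of consistent entity data, not a guaranteed recipe for triggering a knowledge panel:

```json
{
  "@context": "http://schema.org",
  "@type": "Person",
  "name": "Andrew Isidoro",
  "url": "http://www.andrewisidoro.co.uk",
  "jobTitle": "SEO Manager",
  "worksFor": {
    "@type": "Organization",
    "name": "Gocompare.com"
  },
  "sameAs": [
    "https://twitter.com/andrew_isidoro"
  ]
}
```

The point is consistency: every profile (personal site, Freebase, Twitter, other Open Data sources) should state the same name, role and links, so that machines aggregating those sources resolve them to one entity rather than several conflicting ones.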

It isn't really a case of privacy. All of the data that it understands about the entity "Andrew Isidoro" shown above is freely available on the web. It has just been found, understood, and displayed in a new format.

I do, however, think that as more people, places and things get added to the Knowledge Graph, we'll begin to see large-scale personalisation of it: showing entities based on their affinity to your own entity's information.

Essentially you are moving away from just being words that make up a search string like "Andrew+Isidoro" and towards an understanding that those words relate to a real "thing" that can be conceptualised.

There are a number of Open Data sources that Google (and Satori) are pulling data from, but they also seem to use proprietary data, such as their search logs, to help determine the intent behind entity-related queries. There is a very good paper written by Google (albeit a little old now) which highlights how entities *could* be being formed.

I've just looked through Enrico's post and it's a great read (I wish I could understand the original as the translation is a bit rubbish). The Knowledge Graph is certainly an area that we as SEOs should be exploring more.

Great stuff Rand, another top-notch WBF. It's always good to see market research informing campaigns rather than the blind scatter-gun approach that we see so much of online. Also a huge +1 for getting my name right... Thanks for that :)

Great content can take a while to create and an effective content strategy takes time, but I've often found that the more time and effort you put into nailing the strategic elements, the greater the rewards once the plan is in full swing.

Would have loved to hear more on your thoughts and approach to pairing outreach methods with your content strategy, but hey... you can't conquer Rome in a day, eh? June 26, 2012

I was recently at the BrightonSEO conference where a few of the speakers talked about the semantic web and intuitive search. It's pretty clear that employing a semantic keyword plan is becoming crucial in an increasingly "intelligent" web.