
Category Archives: Google Webmaster Central

When you’re browsing the web, you’re not transferring any specific personal data to and from the websites you visit. The method of data transfer between your computer and the websites, known as “http”, is technically insecure, in that someone could intercept the data you’re being sent by the website and understand it. But the data is just the content of a public website, so this is hardly important, right?

Things change, however, when you fill in a form or otherwise send personal data across the internet. For this, the “http” method of data transfer is not secure, and so an encrypted version was developed, called “https”. Changing from one to the other is all seamless to web users, so you may never have even known or thought about this before. But if you ever see a web page starting with “https”, you’ll know that any data you send is “secure”.

Google, however, has joined a growing call for all web browsing and data transfer to be secured in this way. If you’re interested, there’s a whole presentation about it here. Proponents say that if the whole web worked on “https”, you could be sure that you’re communicating with who you think you are, that the data has not been tampered with, and that the “conversation” has been thoroughly encrypted anyway. Sites like gov.uk appear to be https by default already, and Google would like to encourage the whole web to be this way. The reason is not so much that it’s dangerous for someone to find out how one visitor has interacted with one website, but the consequences of huge amounts of eavesdropped data being collected, and the pictures that might paint.

So, realising that everyone needs a nudge, Google has dropped a bombshell with this statement: “Over the past few months we’ve been running tests taking into account whether sites use secure, encrypted connections as a signal in our search ranking algorithms. We’ve seen positive results, so we’re starting to use https as a ranking signal.”

Let’s just think about that. They’ve realised (and it’s hardly surprising) that secure sites are likely to be the type of “good” sites which should rank more highly in searches. And that’s exactly why they’re going to take that into account. “For now it’s only a very lightweight signal”, they say, but you can rest assured that it’ll become more significant in the future. Change to https and – even if it’s only slowly at first – your site will start to rank more highly in the Google search results.

So the question we should all be asking is: “Should I make my website https?”, followed closely by “…and how do I do that?” This is where it’s advantageous to have a good in-house IT department, or to work with independent web developers/hosts. There is a cost involved outside of their time, but it’s not major. If you’ve got someone to turn to, I would get their opinion on converting your site to https today.

For the rest of us who just use off-the-shelf hosting with “problem only” support, things are a lot more difficult. Unless you know a good website back-end developer who’s not already snowed under with work, you might have to wait until this becomes a standard offering from web hosts and you have a proper website rebuild. That could be a long time. But if you are planning a website rebuild any time soon, I’d advise you to make sure that a change to https is in the specification unless there’s a very good reason not to do so.
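For what it’s worth, once a security certificate is installed, the switch-over itself often comes down to redirecting all “http” traffic to its “https” equivalent. On Apache-hosted sites (a common setup, though your own host may differ) that can be a few lines in the .htaccess file – this is a sketch, not a definitive recipe:

```apache
RewriteEngine On
# If the request did not arrive over https...
RewriteCond %{HTTPS} off
# ...send a permanent (301) redirect to the https version of the same URL
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```

The permanent (301) redirect matters: it tells search engines the move is for good, so any ranking value attached to the old “http” addresses should follow the pages to their new “https” ones.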

We’re all familiar by now with the little author photos which appear in Google results. The initiative is supposed to help results from “trusted authors” stand out, and in the example below, where my own blog gets highlighted, you can get a fair idea of how effective it is:

There has been some talk in the last few days that Google is extending the concept to put company logos next to the relevant results, which would be of interest to a lot of us. Here’s what they have to say (and note the – rare – correct use of the most overused word of the last year, ‘iconic’):

Today, were launching support for the schema.org markup for organization logos, a way to connect your site with an iconic image. We want you to be able to specify which image we use as your logo in Google search results.

However, despite the implications you might draw from that, Google is not planning to start putting logos next to your company’s results in the same way as it has for authors. What the announcement above refers to is the rather odd panel Google calls the “Knowledge Graph” which appears top right in the results for certain searches, usually those with Wikipedia entries (below). And should you be the lucky recipient of one of these, you can already specify your logo by having a related Google+ page. The new announcement just provides an alternative way of pointing the search engine towards your official logo.

That said, I’m going to add this markup to my business website, because you never know when and where Google might start making use of the data. It’s a small addition which can’t do any harm.
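For reference, the markup in question uses the schema.org “Organization” vocabulary. A minimal sketch looks like the following – the domain and logo path are placeholders you’d swap for your own:

```html
<div itemscope itemtype="http://schema.org/Organization">
  <a itemprop="url" href="http://www.example.com/">Home</a>
  <img itemprop="logo" src="http://www.example.com/images/logo.png" alt="Example Ltd logo" />
</div>
```

The “itemprop” attributes simply label which link is your official site address and which image is your official logo, so the search engine doesn’t have to guess.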

A new function which you’ll see in your Google Webmaster Tools has made a lot of full-time webmasters very happy. Hiding under the “health” menu, the new “Index Status” feature shows you how many pages from your site have been included in Google’s index. If you take a look, you should see a graph which steadily goes upwards; as long as you’re adding pages to your website, all is well unless it flattens out or drops. Under the “Advanced” tab you’re provided with three other graphs, showing the cumulative number of pages crawled, the number of pages which are blocked by robots.txt, and also the number of pages that were not selected for inclusion in Google’s results. This is great information. More at the Official Google Webmaster Central Blog.

A good post on the Google Webmaster Central Blog covers PDFs in Google search results, something which is probably of critical importance to most industrial and scientific companies. It’s almost certain that you have catalogues, brochures or data sheets on your website as PDF files, and these seem to be appearing ever more strongly in Google’s results (so long as you don’t hide them from your website visitors by making them only available on request). We’ve discussed how to present PDF files before, here and here and here and here and here! However, this article comes straight from the horse’s mouth, so to speak. Do remember a common failing – which I see quite frequently – of PDF brochures being scans of original documents. As with any image, Google can’t read the text in these. If you have PDF documents which are scanned images, you need to get proper versions urgently. You can tell images quite easily: the text won’t be selectable.

I’m sure we’ve all, at one time or another, posted something online which was private or was just incorrect. While changing the page isn’t hard, what if you’re unlucky enough that a search engine has been round and hoovered up what you’ve written in the meantime? Don’t forget, Google and Bing effectively keep a public copy of the entire web (click the ‘cached’ link next to almost every Google result). Unfortunately, you’ll just have to wait until they come round again. If you have an “XML Sitemap” – and you should – then you can mark the page as high priority, which should encourage it to be re-crawled sooner. But you will have to wait.
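Marking a page as high priority is just a matter of raising its priority value in the sitemap file (and updating the “last modified” date). Bear in mind it’s a hint to the crawler, not a command. A sketch of a single entry, with a made-up address and date:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/corrected-page/</loc>
    <lastmod>2014-08-20</lastmod>
    <priority>1.0</priority>
  </url>
</urlset>
```

Priority runs from 0.0 to 1.0 and is only relative to your own pages – setting everything to 1.0 achieves nothing.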

Harder than this is getting a page removed from the index completely. Just deleting a page from your site is a hopeless approach; the search engines might take months of crawling and re-crawling your site before they decide that the page really no longer exists. You need to flag the page as having gone, either by redirecting its URL to a replacement page, or listing it as gone, which in Google you do through Webmaster Tools.
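In practice, on an Apache server the two options look something like this in the .htaccess file – the paths are placeholders, and your hosting setup may well differ:

```apache
# Option 1: permanently redirect the old URL to a replacement page
Redirect 301 /old-page.html /replacement-page.html

# Option 2: tell crawlers the page is gone for good (HTTP status 410)
Redirect gone /old-page.html
```

The “gone” status (410) is a stronger hint than a plain “not found” (404), because it says the page has been removed deliberately rather than mislaid.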

(You have got a Webmaster Tools account, haven’t you? Whoever created your website should have ensured you do, as part of the service, or they weren’t doing their job properly. But if you haven’t got one, sort it out today. It’s in week 1 of our Insider Programme, that’s how essential it is.)

Now Google has made the process of removing a URL from the results a little easier, and Easier URL removals for site owners on Google Webmaster Central gives the full story. You’ll still need – eventually – to properly indicate that the page no longer exists, or block the “Googlebot” from crawling the page, but you can fast-track its removal now.
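Blocking the “Googlebot” from a page is done in the robots.txt file at the root of your site – a sketch, with an example path:

```
User-agent: Googlebot
Disallow: /private-page.html
```

One caution: robots.txt stops the page being crawled, not necessarily being listed, which is exactly why Google pairs it with the removal request described above.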