
Google’s Matt Cutts confirmed this morning that Google has lifted J.C. Penney’s SEO penalty, said the Panda update will continue to roll out internationally, and offered numerous SEO- and search-related tips in a live webchat on the Google Webmaster Central YouTube Channel.

More than a thousand viewers tuned in, but for those who weren’t able to stay for the full 90 minutes, or who missed it entirely, here’s a recap of some of the more important topics.

J.C. Penney Penalty Lifted

Responding to several questions about J.C. Penney, Cutts confirmed our report earlier this week that the penalty was lifted after 90 days.

“We saw a valid reconsideration request” from JCP, Cutts said, and explained that, after reviewing the request, Google found that the company “did quite a bit of work to clean up what had been going on. You don’t want to be vindictive or punitive, so after three months the penalty was lifted.” He later added, “I think the penalty was tough and the appropriate length.”

In my article earlier this week, I mentioned that many of JCP’s old pages/URLs have returned to high spots in Google’s search results. But those URLs are actually blocked by Penney’s robots.txt file, and they redirect to the company’s home page.

Cutts said those are “probably old pages that haven’t been recrawled,” and said they’ll “naturally drop down” as the URLs are recrawled.
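To illustrate the setup being described (the path and domain here are hypothetical, not JCP’s actual configuration): a robots.txt Disallow keeps Googlebot from recrawling the blocked URLs, which is why previously indexed pages can linger in results even after the site starts redirecting them.

```
# Hypothetical robots.txt sketch — illustrative paths only
User-agent: *
Disallow: /old-category/
```

Note the tension this creates: because the Disallow prevents recrawling, Googlebot can’t actually fetch those URLs to discover the redirect to the home page, so any “natural” drop depends on how Google handles blocked-but-indexed URLs over time.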

Google’s Panda Update: International Rollout?

Cutts said the Panda update will eventually roll out internationally, but he’s not sure how soon it’ll happen. Right now, Panda has rolled out beyond the U.S., but still only impacts English-language queries worldwide.

“There were some characteristics that were more applicable to English-language sites,” Cutts said. The original question came from a viewer in Poland, and Cutts explained that “the link structure of websites in Poland is a lot different” from the link structure of sites in other countries.

When will Google be ready to expand Panda beyond English queries? “Probably not in the next couple weeks. Maybe in the next couple months,” Cutts said.

He also clarified that there have been no manual exceptions, and “I don’t expect us to have any manual exceptions to Panda.” That and much more are in this clip that Barry Schwartz has uploaded to the Rustybrick YouTube account.

Google Analytics / Webmaster Tools Integration

In responding to a question asking when Google Analytics and Webmaster Tools will be tied together more consistently, Cutts explained that “Analytics is [a] different team, in a different location, and they’re different tools. Analytics also came in via acquisition, from Urchin, so they’re in different ‘silos.’”

Google is aware of the many requests for better integration between Analytics and Webmaster Tools.

“The teams have been trying to make their data work better together,” he explained, “but these are different groups so there’s lots of overhead in trying to combine them into one spot — versus just focusing on adding useful features to each one.”

Random SEO Topics & Questions

Cutts also answered a number of other SEO- and search-related questions. I’ll try to wrap up a bunch of them below.

Blocked sites in Webmaster Tools? Cutts said Google is “probably not going to add ‘number of times blocked’ to Webmaster Tools because it’s not actionable.”

Rel=canonical abuse? Cutts said there’s been talk about adding a feature to Webmaster Tools about the rel=canonical tag because, in some cases, hackers use the tag to sabotage/hijack a website’s visibility and traffic. The tool might be an alert that “your canonical page has changed,” Cutts said. But he also said “the number of hackers using rel=canonical is very, very small right now. It’s a very soft way to try to get you removed from search results.”
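For context, this is what the tag in question looks like in a page’s head (the URL here is a made-up example):

```html
<!-- Hypothetical example of a rel=canonical tag in the <head> -->
<link rel="canonical" href="https://www.example.com/products/widget" />
```

The abuse scenario Cutts described works because an injected canonical pointing at a page the site owner doesn’t control effectively tells Google the hacked page is a duplicate of that other page, quietly pushing the real page out of the results.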

IP address as a ranking factor? “Yes, we do use your server’s IP address to try to determine if your site is relevant to a particular country,” Cutts said. “If you’re really relevant to France, it might help to have a French IP [address]. It’s not a stopper — it’s not that you can’t rank in France without a French IP, but it can help.”

Using rel=nofollow on internal links? Cutts echoed previous advice when he told viewers, “Don’t bother using [rel=nofollow] on internal links, even on [links to] the copyright and privacy policy. But feel free to use it on external links that you don’t necessarily trust.” Why? Because “where you link to can affect your reputation as a website,” Cutts said.
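Put concretely, the advice amounts to markup like this (hypothetical URLs, purely illustrative):

```html
<!-- Internal links: no nofollow needed, per Cutts -->
<a href="/privacy-policy">Privacy Policy</a>

<!-- External link you don't necessarily trust: nofollow is fine -->
<a href="http://example.org/untrusted-page" rel="nofollow">Some untrusted site</a>
```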

(Note: All above quotes are as close as possible to what was said given my note-taking ability. If Google makes the full video available, we’ll add a link to it so you can listen to anything you missed.)


About The Author

Matt McGee is the Editor-In-Chief of Search Engine Land. His news career includes time spent in TV, radio, and print journalism. After leaving traditional media in the mid-1990s, he began developing and marketing websites and continued to provide consulting services for more than 15 years. His SEO and social media clients ranged from mom-and-pop small businesses to one of the Top 5 online retailers. Matt is a longtime speaker at marketing events around the U.S., including keynote and panelist roles. He can be found on Twitter at @MattMcGee and/or on Google Plus. You can read Matt's disclosures on his personal blog. You can reach Matt via email using our Contact page.

Comments

Cutts said those are “probably old pages that haven’t been recrawled,” and said they’ll “naturally drop down” as the URLs are recrawled.

Umm, so Matt’s saying that Google is (re)crawling pages that have a disallow in the robots file? Don’t see how they “naturally” do anything given that the recrawl won’t happen – unless I’m missing something.

http://www.webdesign-solutions.blogspot.com/ Daniel Marriott

I like the sound of the +1, but I think it will be abused too much

http://www.michael-martinez.com/ Michael Martinez

@roseberry, “so Matt’s saying that Google is (re)crawling pages that have a disallow in the robots file? ”

Doubtful. But you can always test what happens for yourself by disallowing a section of your site for a few months and see what happens.

http://webmd.com roseberry

@michael – yeah, that was kind of sarcasm, given that it was implied the pages would “naturally” disappear once they were recrawled and Google found the 301. But of course there’s the disallow, so they won’t be “(re)crawled,” so something else would have to happen for these to be removed from the index. Again, that’s what I’m assuming. Though it looks like all these old pages have magically disappeared since yesterday, all at once. I assume it was just “natural.”