The SEO geek doing setup for the Local search geeks! Wish I was there. Not sure what I posted is newsworthy enough or solid enough to be mentioned, or if you have time. BUT the principle behind it, if this change goes through, is kind of a biggie to those who have followed the SAB fiasco over time. I'm going to ping Mike now to be sure he saw it.

In my latest post, I analyzed three large-scale lyrics websites to determine the impact of Google providing lyrics directly in the SERPs. I dug into a ton of data across all three sites and have explained the entire story in the post (from the initial algo hit in April 2014 to Panda to the lyrics invasion in the SERPs). You should check it out. :) cc: +Barry Schwartz +Pete Meyers

#MozCast update -- We've made some changes to the MozCast systems - they won't be noticeable to most, but here are the details for the hard-core data folks...

The original MozCast temperature was built around a fixed, 1,000 query set (which we chose as a sort of keyword "lab", with consistency being a key requirement). Put simply, we wanted the sample to be tightly controlled and consistent, day after day.

A while back, when the Feature Graph was launched, it was based on a 10,000 query set, with half of those queries being localized to 5 different cities. Essentially, MozCast is now 11 different 1K weather "stations".

While the Feature Graph runs on the 10K set, the temperature and "top-view" metrics still run on the 1K set. This happened for a few reasons - historical consistency being the chief one. Interestingly, the 10K set didn't reduce the noise that much (parsing the signal from the noise is the biggest challenge of tracking SERP flux by far).

In addition, some of the top-view metrics are very dependent on the sample size. For example, as your keyword set grows, your domain diversity decreases. Basically, the top domains keep occurring, whereas new long-tail domains pop up. So, the more queries you consider, the less diverse the "world" appears.
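The effect described above can be sketched in a few lines of Python. This is a purely illustrative toy (the domain lists and the diversity metric are invented for the example, not MozCast's actual formula): as more SERPs are added, the big domains repeat, so unique domains grow more slowly than total result slots and the diversity ratio drops.

```python
# Illustrative sketch with invented data: why domain diversity falls
# as the query sample grows. Top domains keep reappearing, so unique
# domains grow slower than total result slots.
def domain_diversity(serps):
    """Unique domains as a fraction of total result slots."""
    total = sum(len(serp) for serp in serps)
    unique = len({dom for serp in serps for dom in serp})
    return unique / total

small = [["a.com", "b.com"], ["c.com", "d.com"]]          # 4 slots, 4 domains
large = small + [["a.com", "e.com"], ["b.com", "a.com"]]  # 8 slots, 5 domains

print(domain_diversity(small))  # 1.0
print(domain_diversity(large))  # 0.625 -- diversity drops as the set grows
```

The point is just that the metric is sample-size dependent, which is why the switch from 1K to 10K queries shifts the absolute numbers.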

We've finally decided to bite the bullet and switch everything to the 10K set (as running two systems was getting silly, and the old system really is outdated). This means that the top-view metrics have changed and we've put a new history in place. Don't obsess about the absolute numbers for those - it's really about trends over time.

The historical temperatures have been preserved, but as of today, we're calculating temperatures over a new data set. Instead of a single station, MozCast now takes the 5 stations that are delocalized (generic US results, essentially), calculates the temperature for each one, and then selects the median of those 5 stations. This should help ensure that we're not picking up flux due to large-scale tests or data center differences.
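The median-of-stations step above can be sketched roughly like this (the function name and the temperature values are invented for illustration; this is not MozCast's actual code): compute a temperature per station, then report the median, which damps a single station that's running hot because of a test or data-center quirk.

```python
# Hypothetical sketch of the new aggregation step: one temperature per
# delocalized station, then the median across the 5 stations.
from statistics import median

def aggregate_temperature(station_temps):
    """Return the median temperature across the stations."""
    return median(station_temps)

# Example: 5 delocalized 1K-query stations; one is an outlier,
# e.g. a data center caught in a large-scale test.
temps = [68.2, 71.5, 70.1, 95.3, 69.8]
print(aggregate_temperature(temps))  # 70.1 -- the outlier is ignored
```

Using the median rather than the mean is what makes a single anomalous station unable to move the reported temperature.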

Hopefully, all of these changes provide better data and more stability going forward. If you have any specific questions, please hit up +Pete Meyers, either on G+ or Twitter (@dr_pete).

People keep asking me when Moz's Google Algorithm History will be updated, but the simple truth is that there haven't been any major, confirmed updates since late October. I've added the Penguin roll-out to UK/CA/AU and Pigeon "everflux", both of which happened in December.

The Penguin timeline is unclear - flux was high for weeks after Penguin 3.0, even causing reports of an unconfirmed Penguin 3.1.

These videos are from the second in our series of free presentations at the Mozplex called MozTalks. For those who couldn't attend the last one with Rand and Dr. Pete (or those who'd like to see them again!), we've got you covered. Enjoy, and we hope that you're able to make the next MozTalk!

I think the next Indiana Jones movie should be a remake of Raiders of the Lost Ark, but starring a toy dachshund in an Indiana Jones hat.

Think about it:

- Toy dachshund grabbing an idol in its mouth and running frantically through a room of shooting darts
- Toy dachshund running from a rolling boulder
- Toy dachshund riding a toppling statue to escape a nest of snakes
- Toy dachshund leaping into a truck, biting a Nazi, then driving the truck and running over Nazis with it
- And many more

As Non-resident Resident Marketing Scientist at Moz, my job is to make data cool. I've always believed that the best ideas and the best data are useless unless you can communicate them, and so it's my mission to explore strange new data and boldly blog where no one has blogged before.

Education

North Central College

Psychology, Computer Science, 1988 - 1992

University of Iowa

Cognitive Psychology, 1992 - 1997

Basic Information

Gender

Male

Other names

Dr. Peter J. Meyers

David is one of the top minds in local SEO. His Local Search Ranking Factors is still one of the best resources in the industry, and he's the first person I go to when I can't decipher what Google is up to on the local level.

If you're looking for a company that's endorsed by fake SEOs everywhere, then look no further! AuroIN LLC can get you 100s of reviews from fictional profiles in just days, probably for a fraction of the cost of legitimate, legal reviews that won't get you sued. Act now, and they'll throw in one endorsement from a famous political figure and one from an 80s cartoon character absolutely free. Not everyone has reviews from Winston Churchill AND Space Ghost!

I've had the pleasure to work with the Walker Sands team on a number of internet-marketing projects (both SEO and SEM) over the past year. They really get personally involved in projects and do a great job of looking at the entire PR and marketing spectrum to bring clients measurable results.