DS: You guys made this post with 22 questions, but it sounds like you’re saying even if you’ve done that, it wouldn’t have helped yet?

MC: It could help as we recompute data. Matt goes on to say that Panda 2.2 has been approved but hasn’t rolled out yet.

DS: Has the algorithm changed enough that some people have actually recovered from Panda?

MC: If you go back to Florida (the update, not the state), it launched, had a big impact, and then they pulled it back. They push stuff out and then find additional signals to help differentiate on that spectrum. He hears the pain from people in search, but he also hears complaints about things polluting the search results. They haven't made any official changes to pull things back (though they did a few tweaks), but they continue to look at ways to differentiate there. They're still looking for ways to find more of the low-quality sites.

DS: Has it changed enough that some people have recovered? Or is it too soon?

MC: The general rule is to push stuff out and then find additional signals to help differentiate on the spectrum. We haven't done any pushes that would directly pull things back, but we have recomputed data, which might have impacted some sites. There's one change that might affect sites and pull some things back.

Yes, this is a manually run algorithm, and with only two updates so far, no recoveries have been reported. Will 2.2 reverse that? I am not sure, but time will tell.

So in short, 2.2 is not out yet - I am not sure what that hiccup was. When it does come out, it might address more of the scraper sites out there, but there is no telling whether it will reverse some of the false positives.