In a previous article on RealEstateCompsToday.com, we noted the costs that inaccurate home value estimates impose on sellers and buyers. More Accurate Than Chase Home Value Estimator was the only site that offered both the speed and flexibility of the online guesstimator sites and the accuracy of a real estate agent’s home value comparison report. Another article tested seven of these tools, including Chase, Bank of America, and Zillow. Here are some of its findings; the methodology was simple enough for anyone to retest.

When you plug your home’s address into one of the many free home value estimators available online, it’s tempting to accept the location-based figure it returns at face value. It might be shockingly low or breathtakingly high. Either way, people start making life-altering decisions about whether to sell their home based on this one opinion.

Richard Batts, one of central Florida’s top real estate agents, says, “There is no way you can accurately give an estimate on the price of a home by using a free home value estimator — they are very, very misleading.”

According to US News & World Report, there are at least half a dozen reasons why online home value estimators so often fail to hit their mark.

Among these reasons are the uniqueness of a home compared to others in the neighborhood, a rapidly changing market and a general lack of data for your location.

They decided to put on their proverbial lab coats and do the ultimate experiment on the leading free home value estimators:

Zillow

Eppraisal

Bank of America

Chase

Fifth Third Bank

Realtor

RE/MAX

These tools do not require you to register for an account or provide any personal information.

The guinea pigs include five properties located throughout the US, all of which were last sold about two years ago:

A 1-bedroom condo in New York City

A 4-bedroom house in southern California currently owned by a celebrity

Home Value Estimate Experiment #1:

1-bedroom condo in New York City’s Financial District

Background: This unit was purchased in June 2014 for $999,000. It boasts one bedroom, 1.5 baths and just under 1,000 square feet.

Our home value estimators produced the following results:

Zillow’s Zestimate: $1,244,350

RE/MAX: $1,214,000

Eppraisal: $1,133,596

Fifth Third Bank: $1,082,000

Bank of America: $1,008,593 – $1,309,988

Realtor: No estimate

Chase: No estimate

Total discrepancy: $301,395. Average: $1,165,421.
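For readers who want to retest, here is how the totals above appear to be computed. This is our sketch, not the testers’ published method: a bank’s low–high range contributes both endpoints, and the “discrepancy” is simply the spread between the highest and lowest figures.

```python
# Experiment 1 figures from above; Bank of America's range contributes
# both its low and high endpoints (our assumption about the methodology).
estimates = [
    1_244_350,             # Zillow's Zestimate
    1_214_000,             # RE/MAX
    1_133_596,             # Eppraisal
    1_082_000,             # Fifth Third Bank
    1_008_593, 1_309_988,  # Bank of America (low, high)
]

total_discrepancy = max(estimates) - min(estimates)
average = round(sum(estimates) / len(estimates))

print(total_discrepancy)  # 301395
print(average)            # 1165421
```

Both numbers match the figures reported for this experiment, which suggests the range endpoints were counted as separate estimates.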

While there is a range in values, all estimates agree that the price has increased since the unit was last sold. Overall, not a terrible outcome, although it’s telling that within the first experiment two tools didn’t even try. Notably, our leading alternative home value estimator site wasn’t polled here at all; since Chase returned nothing, a site that wasn’t consulted is already just as accurate as Chase.

Home Value Estimate Experiment #2:

A 4-bedroom house in Calabasas, CA currently on the market. (Current owner: Kourtney Kardashian)

Background: Bought by the oldest Kardashian sister in 2014 for $2,975,000, this house has 4 bedrooms, 5 bathrooms and is a whopping 5,400 square feet.

RE/MAX: $3,291,600

Fifth Third Bank: $3,167,000

Zillow’s Zestimate: $3,008,782

Eppraisal: $3,005,929

Bank of America: $2,842,283 – $3,473,901

Realtor: $2,722,795

Chase: $2,432,480 – $2,855,520

Total discrepancy: $1,041,421. Average: $2,993,096.

This is a very big difference: the estimates span more than $1 million. In four of the estimates, Kardashian makes money on the sale; in two of the estimates, she loses money.

Home Value Estimate Experiment #3:

A 3-bedroom townhouse in Cleveland, OH

Background: This 3-bedroom, 4-bath townhome has over 2,000 square feet (and an elevator!). It was purchased in 2014 for $325,000.

Realtor: $343,094

Fifth Third Bank: $317,000

Zillow’s Zestimate: $299,512

RE/MAX: $295,000

Chase: $289,520 – $326,480

Eppraisal: $287,991

Bank of America: $245,000 – $384,125

Total discrepancy: $139,125. Average: $308,840.

Once again, these estimates show the potential for the owner to either make or lose money if they were to sell now. Bank of America’s range provides both the high and the low for this property.

Chase’s Home Value Estimator carried a disclaimer essentially disavowing any trust in the tool’s accuracy. The alternative to the Chase Home Value Estimator site has no such disclaimer.

Home Value Estimate Experiment #4:

A recently constructed 5-bedroom house in Vail, CO

Background: This 5-bedroom, 6-bath chalet is over 5,300 square feet and has unobstructed views of the mountains. It was built in 2014 and sold to its owner last year for $4,750,000.

Zillow’s Zestimate: $5,098,606

Fifth Third Bank: $4,626,000

Bank of America: $3,492,607 – $5,946,927

Eppraisal: $2,583,036

Realtor: $1,780,382

RE/MAX: No estimate

Chase: No estimate

Total discrepancy: $3,318,224. Average: $3,921,260.

The online estimators didn’t know what to make of this one. Perhaps because it was built so recently, or because there isn’t much sales data for the neighborhood, the estimators failed us badly here.

Home Value Estimate Experiment #5:

A 4-bedroom house in the heartland

Background: With 4 bedrooms, 3.5 baths, and over 3,100 square feet, this family home was sold last year for $230,000.

Zillow’s Zestimate: $274,058

Realtor: $273,980

Fifth Third Bank: $250,000

RE/MAX: $237,400

Eppraisal: $236,499

Chase: $204,680 – $271,320

Bank of America: $108,783 – $301,171

Total discrepancy: $192,388. Average: $239,766.

Once again, Bank of America provided the highest high and the lowest low in its range. However, five of the seven tools provided relatively close estimates.

What did we learn from all of these free home value estimators?

The site that conducted these experiments concluded: “We learned free home value estimators shouldn’t be the final say in your decision to sell your home, but they’re a good place to start. If the property in question isn’t new construction, and it’s located in an area with plenty of recent sales data, it is possible to get a fairly accurate idea of its value.

As Batts notes: “[Home value estimators] have no idea what the inside of the house looks like; have no idea what’s been replaced; have no idea if it has a new roof; they don’t have any of that information.”

We also learned that despite their shortcomings, consulting several of these tools can provide a decent average.”

A decent average is great as long as you aren’t selling or buying a home. Even the Chase estimator site agrees: its disclaimer literally says the estimate “should not be relied upon,” and it may as well say “for entertainment purposes only.” Luckily, home buyers, investors, and sellers all have an accurate online source. It lets you use an online system to purchase a comp report prepared by an agent in your area. Reports are typically delivered within a day or two, and prices start at about $5.

Considering the total discrepancy across these tests reached $4,992,553 (nearly $5 million of variance across just five homes), it’s important to get accurate estimates.

Building each PBN (Private Blog Network) site involves registering the domain, setting the name servers, and hosting the site.

First and foremost, the most important aspect of your Private Blog Network is randomness. Consider what pattern or footprint your PBN might have and avoid that commonality.

Patterns and commonality to avoid in building a Private Blog Network

Good PBNs Are Random, Start With Different Name Registrars

First off, you need private domain registration; if not private, then you’ll need people and addresses from all over. If you always use Godaddy, you’re going to have to try out others to avoid a pattern. Incidentally, if you always use Godaddy you’re also getting ripped off, as they charge for privacy while many others don’t. Some popular name registrars are 1and1.com, namesilo.com, namecheap.com, and cosmotown.com. Each of these can save you a considerable amount over Godaddy, since they offer free private registration, and using more than one breaks a pattern.

Each time you add a new site to your PBN, approach it from the beginning as if you’re playing a character in a story who has never made a website before. If you know you have a site on Host A and you like that host, you’re making decisions based on previous sites and are more likely to create a pattern. Forget Host A; how would you find a host for the first time? Google popular web hosts and pick a cheap new partner.

One thing that’s genuinely beneficial about building PBNs, and more helpful to you in the long run, is the forced exploration. After you’ve built ten sites on ten hosts using ten registrars and ten WordPress themes, you’ll be able to write three top-ten lists and rank the best of the 720 combinations that were available to you. It’s a lot of practice, and as you avoid patterns and repetition you’ll find yourself stepping out of your norm.

Vary Your Web Hosts

The speed of a web host normally matters, but not necessarily when you’re building a PBN. While you want your primary or money site to load in under 3 seconds, it’s perfectly fine if a PBN site loads in 7 seconds, and that opens the door to all manner of generic, no-name web hosts. Your primary goal with multiple web hosts is to get a different IP address for each site.

Seeking randomness across your sites quickly becomes complex, and this model has two big issues: organization and cost.

Organization Of PBN Resources

What site is down? Which domain registrar did I use? Am I using their nameservers or someone else’s? Where did I point the hosting? Sure, these questions aren’t that annoying to answer with a 10-site network, but try answering them when you’ve built and scaled up to 200 sites using 7 registrars, 20 name servers, and 150 different IPs. It becomes unmanageable: you find yourself searching for your sites more than you are building new ones. And maintenance is essential, as updates roll out to WordPress, plugins get updated, and hackers exploit new vulnerabilities. If you log into every site you own and spend 5 minutes on each, your 200-domain network will take nearly 17 hours, or two working days a week, and in those 5 minutes per site you likely didn’t fix any issues and took no breaks! It’s time to consider an apprentice, or spreadsheets that fully document every aspect of your network, or both.
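A quick back-of-envelope check of that maintenance math (our sketch of the arithmetic in the paragraph above):

```python
# 5 minutes per site across a 200-domain network.
sites = 200
minutes_per_site = 5

total_minutes = sites * minutes_per_site
total_hours = total_minutes / 60

print(total_minutes)          # 1000
print(round(total_hours, 1))  # 16.7
```

And that is pure login time, before you fix a single broken plugin.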

Uptime Monitoring

Somewhere around 100 domains, I figured out I needed to approach this like an enterprise would and set up actual uptime monitoring so I could see the state of the network easily. UptimeRobot allows you to set up 50 monitors on a free account.
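If you’d rather roll your own than rely on UptimeRobot, a minimal check might look like the sketch below. The `is_up` helper and the one-check-per-minute idea are our assumptions for illustration, not a description of how UptimeRobot works internally.

```python
import urllib.request

def is_up(url, timeout=10):
    """Return True if the site answers with an HTTP success status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (ValueError, OSError):  # bad URL, DNS failure, timeout, HTTP error
        return False

def uptime_percent(checks):
    """Percentage of successful checks in a list of True/False results."""
    return 100.0 * sum(checks) / len(checks)

# Example: one check per minute over a day would look like
#   history = [is_up("https://your-pbn-site.example") for _ in range(1440)]
#   print(uptime_percent(history))
print(uptime_percent([True] * 94 + [False] * 6))  # 94.0
```

Run something like this on a schedule and log the results, and you have a crude picture of network health without a third-party dashboard.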

In the real world, 94% uptime is horrible. Consider that in the last 30 days I recorded 104,765 minutes of downtime across this sample of sites. I had issues with a server being hit by a DoS attack from someone using 1,700 servers. Why? Anyone’s guess. Usually it’s a game to them, and they aren’t paying for those 1,700 servers; they’re other people’s hacked resources being used to grow the attacker’s network.

You may be interested in MainWP or InfiniteWP, and Godaddy provides Godaddy Pro. Be mindful that these tools only work when they work, and ask whether they give away a signature pattern. They can make management easier, but easier is dangerous.

Costs Balloon And Randomness Prevents Savings

As you scale up from 10 to 20 to 50 sites, you’re going to wake up one day and realize you’re spending hundreds of dollars a month on infrastructure, and all of your time will now be consumed with maintaining your network. Adding someone to help you increases costs and takes your time to train them to maintain the network effectively. Be careful who you bring in to help: friends are obvious choices, but when they get upset about something unrelated to the network they can leave you high and dry. Worse yet, they are the most likely to teach you a lesson by bailing on you for a couple of weeks. Trust the people who are in it for the money; pay them more than they can get at a retail job to build loyalty to your mission. They need not be technical people, but they need to understand that if a site is down, Google can’t index it and that backlink is now missing. They need to be able to follow a logical progression and understand the moving parts to help you maintain the sites.

The obvious answer to addressing costs is to bundle services and make sure you’re utilizing resources in the most effective manner, but bundling creates patterns. Cost savings aren’t worth giving away your sites.

Cloudflare Allows Consolidation And The Pattern Is Indistinguishable

Cloudflare offers the ability to hide among the masses. Who is Cloudflare? They stand in front of your server and take the brunt of the internet’s abuse. Upwork.com, Medium.com, Themeforest.net, and Chaturbate.com are among the names using Cloudflare’s services. Some estimates suggest that Cloudflare fronts about 8% of the entire internet. That’s huge! At one point they found themselves protecting the Israeli government’s network as well as the PLO’s.

Using Cloudflare is hiding in plain sight, and it’s free. I recommend it, but in a mixed capacity: keep some sites outside of their network to avoid any one bottleneck. It would seem odd if 100% of the sites linking to a domain were using Cloudflare. Remember, they are 8% of the internet; while that’s the largest single chunk, they aren’t the internet.

This article has focused mainly on the external and infrastructure concerns of building a PBN. This is really a third of the topic; in the coming weeks I’ll publish two more posts that address on-site content issues and site design considerations for a network of sites.

Featured snippets, a vehicle for voice search and the answers to our most pressing questions, have doubled on the SERPs — but not in the way we usually mean. This time, instead of appearing on twice the number of SERPs, two snippets are appearing on the same SERP. Hoo!

In all our years of obsessively stalking snippets, this is one of the first documented cases of them doing something a little different. And we are here for it.

While it’s still early days for the double-snippet SERP, we’re giving you everything we’ve got so far. And the bottom line is this: double the snippets mean double the opportunity.

Google’s case for double-snippet SERPs

When the feature was announced but not yet launched, details were a little sparse. We learned that double snippets are “to help people better locate information” and “may also eventually help in cases where you can get contradictory information when asking about the same thing but in different ways.”

Thankfully, we only had to wait a month before Google released them into the wild and gave us a little more insight into their purpose.

Calling them “multifaceted” featured snippets (a definition we’re not entirely sure we’re down with), Google explained that they’re currently serving “‘multi-intent’ queries, which are queries that have several potential intentions or purposes associated,” and will eventually expand to queries that need more than one piece of information to answer.

With that knowledge in our back pocket, let’s get to the good stuff.

The double snippet rollout is starting off small

Since the US-en market is Google’s favorite testing ground for new features and the largest locale being tracked in STAT, it made sense to focus our research there. We chose to analyze mobile SERPs over desktop because of Google’s (finally released) mobile-first indexing, and also because that’s where Google told us they were starting.

After waiting for enough two-snippet SERPs to show up so we could get our (proper) analysis on, we pulled our data at the end of March. Of the mobile keywords currently tracking in the US-en market in STAT, 122,501 had a featured snippet present, and of those, 1.06 percent had more than one.

With only 1,299 double-snippet SERPs to analyze, we admit that our sample size is smaller than our big data nerd selves would like. That said, it is indicative of how petite this release currently is.

Two snippets appear for noun-heavy queries

Our first order of business was to see what kind of keywords two snippets were appearing for. If we can zero in on what Google might deem “multi-intent,” then we can optimize accordingly.

By weighting our double-snippet keywords by tf-idf, we found that nouns such as “insurance,” “computer,” “job,” and “surgery” were the primary triggers — like in [general liability insurance policy] and [spinal stenosis surgery].
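For anyone wanting to replicate the weighting step, here is a simplified, pure-Python sketch of tf-idf scoring over a keyword list. The example keywords are illustrative stand-ins, not STAT’s actual dataset, and production pipelines typically use a library implementation rather than this hand-rolled version.

```python
import math
from collections import Counter

def tfidf_top_terms(keywords, top_n=5):
    """Score each term across a list of keyword phrases by tf-idf and
    return the highest-weighted terms (a simplified sketch of the method)."""
    docs = [k.lower().split() for k in keywords]
    n_docs = len(docs)
    # Document frequency: in how many keyword phrases does each term appear?
    df = Counter(term for doc in docs for term in set(doc))
    scores = Counter()
    for doc in docs:
        term_counts = Counter(doc)
        for term, count in term_counts.items():
            tf = count / len(doc)                # term frequency in this phrase
            idf = math.log(n_docs / df[term])    # rarity across all phrases
            scores[term] += tf * idf
    return [term for term, _ in scores.most_common(top_n)]

# Hypothetical double-snippet keywords (illustrative, not STAT data):
keywords = [
    "general liability insurance policy",
    "business insurance cost",
    "spinal stenosis surgery",
    "surgery recovery time",
]
print(tfidf_top_terms(keywords, top_n=3))
```

Terms that recur across phrases while staying reasonably specific, like the nouns above, float to the top of the ranking.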

It’s important to note that we don’t see this mirrored in single-snippet SERPs. When we refreshed our snippet research in November 2017, we saw that snippets appeared most often for “how,” followed closely by “does,” “to,” “what,” and “is.” These are all words that typically compose full sentence questions.

Essentially, without those interrogative words, Google is left to guess what the actual question is. Take our [general liability insurance policy] keyword as an example — does the searcher want to know what a general liability insurance policy is or how to get one?

Because of how vague the query is, it’s likely the searcher wants to know everything they can about the topic. And so, instead of having to pick, Google’s finally caught onto the wisdom of the Old El Paso taco girl — why not have both?

Better leapfrogging and double duty domains

Next, we wanted to know where you’d need to rank in order to win one (or both) of the snippets on this new SERP. This is what we typically call “source position.”

On a single-snippet SERP and ignoring any SERP features, Google pulls from the first organic rank 31 percent of the time. On double-snippet SERPs, the top snippet pulls from the first organic rank 24.84 percent of the time, and the bottom pulls from organic ranks 5–10 more often than solo snippets.

What this means is that you can leapfrog more competitors in a double-snippet situation than when just one is in play.

And when we dug into who’s answering all these questions, we discovered that 5.70 percent of our double-snippet SERPs had the same domain in both snippets. This begs the obvious question: is your content ready to do double duty?

Snippet headers provide clarity and keyword ideas

In what feels like the first new addition to the feature in a long time, there’s now a header on top of each snippet, which states the question it’s set out to answer. With reports of headers on solo snippets (and “People also search for” boxes attached to the bottom — will this madness never end?!), this may be a sneak peek at the new norm.

Instead of relying on guesses alone, we can turn to these headers for what a searcher is likely looking for — we’ll trust in Google’s excellent consumer research. Using our [general liability insurance policy] example once more, Google points us to “what is general liabilities insurance” and “what does a business insurance policy cover” as good interpretations.

Because these headers effectively turn ambiguous statements into clear questions, we weren’t surprised to see words like “how” and “what” appear in more than 80 percent of them. This trend falls in line with keywords that typically produce snippets, which we touched on earlier.

So, not only does a second snippet mean double the goodness that you usually get with just one, it also means more insight into intent and another keyword to track and optimize for.

Both snippets prefer paragraph formatting

Next, it was time to give formatting a look-see to determine whether the snippets appearing in twos behave any differently than their solo counterparts. To do that, we gathered every snippet on our double-snippet SERPs and compared them against our November 2017 data, back when pairs weren’t a thing.

While Google’s order of preference is the same for both — paragraphs, lists, and then tables — paragraph formatting was the clear favorite on our two-snippet SERPs.

It follows, then, that the most common pairing of snippets was paragraph-paragraph — this appeared on 85.68 percent of our SERPs. The least common, at 0.31 percent, was the table-table coupling.

We can give two reasons for this behavior. One, if a query can have multiple interpretations, it makes sense that a paragraph answer would provide the necessary space to explain each of them, and two, Google really doesn’t like tables.

We saw double-snippet testing in action

When looking at the total number of snippets we had on hand, we realised that the only way everything added up was if a few SERPs had more than two snippets. And lo! Eleven of our keywords returned anywhere from six to 12 snippets.

For a hot minute we were concerned that Google was planning a full-SERP snippet takeover, but when we searched those keywords a few days later, we discovered that we’d caught testing in action.

Here’s what we saw play out for the keyword [severe lower back pain]:

After testing six variations, Google decided to stick with the first two snippets. Whether this is a matter of top-of-the-SERP results getting the most engagement no matter what, or the phrasing of these questions resonating with searchers the most, is hard for us to tell.

The multiple snippets appearing for [full-time employment] left us scratching our heads a bit:

Our best hypothesis is that searchers in Florida, NYS, Minnesota, and Oregon have more questions about full-time employment than other places. But, since we’d performed a nation-wide search, Google seems to have thought better of including location-specific snippets.

Share your double-snippet SERP experiences

It goes without saying — but here we are saying it anyway — that we’ll be keeping an eye on the scope of this release and will report back on any new revelations.

In the meantime, we’re keen to know what you’re seeing. Have you had any double-snippet SERPs yet? Were they in a market outside the US? What keywords were surfacing them?

In my last post, I explained how using network visualization tools can help you massively improve your content marketing PR/Outreach strategy — understanding which news outlets have the largest syndication networks empowers your outreach team to prioritize high-syndication publications over lower syndication publications. The result? The content you are pitching enjoys significantly more widespread link pickups.

Today, I’m going to take you a little deeper — we’ll be looking at a few techniques for forming an even better understanding of the publisher syndication networks in your particular niche. I’ve broken this technique into two parts:

Technique One — Leveraging Buzzsumo influencer data and Twitter scraping to find the most influential journalists writing about any topic

Technique Two — Leveraging the Gdelt Dataset to reveal deep story syndication networks between publishers using in-context links.

Why do this at all?

If you are interested in generating high-value links at scale, these techniques provide an undeniable competitive advantage — they help you to deeply understand how writers and news publications connect and syndicate to each other.

In our opinion at Fractl, creating data-driven content stories with strong news hooks, finding writers and publications who would find the content compelling, and pitching them effectively is the single highest-ROI SEO activity possible. Done correctly, it is entirely possible to generate dozens, sometimes even hundreds or thousands, of high-authority links with one or a handful of content campaigns.

Let’s dive in.

Using Buzzsumo to understand journalist influencer networks on any topic

First, you want to figure out who your top influencers are for a topic. A very handy feature of Buzzsumo is its “Influencers” tool. You can locate it on the Influencers tab, then follow these steps:

Select only “Journalists.” This will limit the result to only the Twitter accounts of those known to be reporters and journalists of major publications. Bloggers and lower authority publishers will be excluded.

Search using a topical keyword. If the topic is straightforward, one or two searches should be fine. If it is more complex, create a few related queries and collate the Twitter accounts that appear in all of them. Alternatively, use Boolean “and/or” operators in your search to narrow your results. It is critical to be sure your search results return journalists that match your target criteria as closely as possible.

Ideally, you want at least 100 results. More is generally better, so long as you are sure the results represent your target criteria well.

Once you are happy with your search result, click export to grab a CSV.

The next step is to grab all of the people each of these known journalist influencers follows — the goal is to understand which of these 100 or so influencers impacts the other 100 the most. Additionally, we want to find people outside of this group that many of these 100 follow in common.

To do so, we leveraged Twint, a handy Twitter scraper available on Github to pull all of the people each of these journalist influencers follow. Using our scraped data, we built an edge list, which allowed us to visualize the result in Gephi.
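Once the follow data is scraped, turning it into a Gephi-ready edge list is straightforward. Here is a minimal sketch; the `sample` follow relationships are made up for illustration, and Twint’s actual output format may differ from the dict shape assumed here.

```python
import csv

def build_edge_list(following, path="edges.csv"):
    """Write a Gephi-ready edge list (Source,Target) from "who follows
    whom" data, where `following` maps each journalist's handle to the
    list of accounts they follow (e.g. as scraped with Twint)."""
    rows = [(src, tgt) for src, accounts in following.items() for tgt in accounts]
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Source", "Target"])
        writer.writerows(rows)
    return rows

# Hypothetical scraped sample (the follow relationships are invented):
sample = {
    "maiasz": ["davidkroll", "johannhari101"],
    "davidkroll": ["maiasz"],
}
edges = build_edge_list(sample)
print(len(edges))  # 3
```

Gephi can import the resulting CSV directly as a directed edge table.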

Here is an interactive version for you to explore, and here is a screenshot of what it looks like:

This graph shows us which nodes (influencers) have the most In-Degree links. In other words: it tells us who, of our media influencers, is most followed.

Using the “Betweenness Centrality” score given by Gephi, we get a rough understanding of which nodes (influencers) in the network act as hubs of information transfer. Those with the highest “Betweenness Centrality” can be thought of as the “connectors” of the network. These are the top 10 influencers:

Maia Szalavitz (@maiasz) Neuroscience Journalist, VICE and TIME

David Kroll (@davidkroll) Freelance healthcare writer, Forbes Heath

Jeanne Whalen (@jeannewhalen) Business Reporter, Washington Post

Travis Lupick (@tlupick), Journalist, Author

Johann Hari (@johannhari101) New York Times best-selling author

Radley Balko (@radleybalko) Opinion journalist, Washington Post

Sam Quinones (@samquinones7), Author

Eric Bolling (@ericbolling) New York Times best-selling author

Dana Milbank (@milbank) Columnist, Washington Post

Mike Riggs (@mikeriggs) Writer & Editor, Reason Mag

@maiasz, @davidkroll, and @johannhari101 are standouts. There’s considerable overlap between the winners in “In-Degree” and “Betweenness Centrality,” but the two measures are still quite different.
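For the curious, the “Betweenness Centrality” score Gephi reports can be reproduced with Brandes’ algorithm. Below is a compact sketch for unweighted graphs; the toy follower graph is ours for illustration, not data from this study.

```python
from collections import deque

def betweenness(graph):
    """Brandes' algorithm for betweenness centrality on an unweighted
    directed graph given as {node: [neighbors]}. For an undirected graph
    stored with edges in both directions, halve the results."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        stack, pred = [], {v: [] for v in graph}
        sigma = {v: 0 for v in graph}; sigma[s] = 1
        dist = {v: -1 for v in graph}; dist[s] = 0
        queue = deque([s])
        while queue:                      # BFS, counting shortest paths
            v = queue.popleft(); stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; pred[w].append(v)
        delta = {v: 0.0 for v in graph}
        while stack:                      # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Toy chain a-b-c-d: b and c are the "connectors" of this network.
g = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(betweenness(g))
```

Nodes that sit on many shortest paths between other nodes, like b and c in the chain above, score highest, which is exactly why high-betweenness influencers act as information hubs.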

What else can we learn?

The middle of the visualization holds many of the largest sized nodes. The nodes in this view are sized by “In-Degree.” The large, centrally located nodes are disproportionately followed by other members of the graph and enjoy popularity across the board (from many of the other influential nodes). These are journalists commonly followed by everyone else. Sifting through these centrally located nodes will surface many journalists who behave as influencers of the group initially pulled from BuzzSumo.

So, if you had a campaign about a niche topic, you could consider pitching to an influencer surfaced from this data — according to our visualization, an article shared in their network would have the most reach and potential ROI.

Using Gdelt to find the most influential websites on a topic with in-context link analysis

The first example was a great way to find the best journalists in a niche to pitch, but top journalists are often the most pitched overall. Oftentimes, it can be easier to get a pickup from lesser-known writers at major publications. For this reason, understanding which major publishers are most influential, and enjoy the widest syndication on a specific theme, topic, or beat, can be majorly helpful.

By using Gdelt’s massive and fully comprehensive database of digital news stories, along with Google BigQuery and Gephi, it is possible to dig even deeper to yield important strategic information that will help you prioritize your content pitching.

We pulled all of the articles in Gdelt’s database that are known to be about a specific theme within a given timeframe. In this case (as with the previous example) we looked at “behaviour health.” For each article we found in Gdelt’s database that matches our criteria, we also grabbed links found only within the context of the article.

Pull data from Gdelt. You can use this command: SELECT DocumentIdentifier, V2Themes, Extras, SourceCommonName, DATE FROM [gdelt-bq:gdeltv2.gkg] WHERE (V2Themes LIKE '%Your Theme%').

Select any theme you find, here — just replace the part between the percentages.

Extract the links found in each article and build an edge file. This can be done with a relatively simple Python script that pulls out all of the <PAGE_LINKS> from the results of the query, cleans the links to show only their root domain (not the full URL), and puts them into an edge file format.

Note: The edge file is made up of Source -> Target pairs. The Source is the article and the Targets are the links found within the article. The edge list will look like this:

Article 1, First link found in the article.

Article 1, Second link found in the article.

Article 2, First link found in the article.

Article 2, Second link found in the article.

Article 2, Third link found in the article.
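The extraction script described above can be sketched as follows. We assume the Extras field wraps semicolon-separated URLs in a <PAGE_LINKS>…</PAGE_LINKS> tag; verify that against your actual GDELT export before relying on this format.

```python
import re
from urllib.parse import urlparse

# Assumed format: semicolon-separated URLs inside a <PAGE_LINKS> tag.
PAGE_LINKS_RE = re.compile(r"<PAGE_LINKS>(.*?)</PAGE_LINKS>", re.DOTALL)

def root_domain(url):
    """Reduce a full URL to its bare domain, e.g. 'nytimes.com'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def edges_from_row(source_domain, extras):
    """Build (Source, Target) edge pairs for one article row."""
    match = PAGE_LINKS_RE.search(extras or "")
    if not match:
        return []
    links = [link.strip() for link in match.group(1).split(";") if link.strip()]
    return [(source_domain, root_domain(link)) for link in links]

# Hypothetical row (SourceCommonName, Extras):
row = ("example-news.com",
       "<PAGE_LINKS>https://www.cdc.gov/x;http://people.com/y</PAGE_LINKS>")
print(edges_from_row(*row))
# [('example-news.com', 'cdc.gov'), ('example-news.com', 'people.com')]
```

Running this over every row of the query result and writing the pairs to CSV yields the edge file Gephi needs.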

From here, the edge file can be used to build a network visualization where the nodes are publishers and the edges between them represent the in-context links found in our Gdelt data pull around whatever topic we desired.

This final visualization is a network representation of the publishers who have written stories about addiction, and where those stories link to.

What can we learn from this graph?

This tells us which nodes (publisher websites) have the most in-degree links. In other words: which sites are most linked to. We can see that the most linked-to for this topic are:

tmz.com

people.com

cdc.gov

cnn.com

go.com

nih.gov

ap.org

latimes.com

jamanetwork.com

nytimes.com

Which publisher is most influential?

Using the “Betweenness Centrality” score given by Gephi, we get a rough understanding of which nodes (publishers) in the network act as hubs of information transfer. The nodes with the highest “Betweenness Centrality” can be thought of as the “connectors” of the network. Getting pickups from these high-betweenness centrality nodes gives a much greater likelihood of syndication for that specific topic/theme.

Dailymail.co.uk

Nytimes.com

People.com

CNN.com

Latimes.com

washingtonpost.com

usatoday.com

cbslocal.com

huffingtonpost.com

sfgate.com

What else can we learn?

Similar to the first example, the higher the betweenness centrality numbers, number of In-degree links, and the more centrally located in the graph, the more “important” that node can generally be said to be. Using this as a guide, the most important pitching targets can be easily identified.

Understanding some of the edge clusters gives additional insight into other potential opportunities, including a few clusters specific to regional or state local news and a few foreign-language publication clusters.

Wrapping up

I’ve outlined two different techniques we use at Fractl to understand the influence networks around specific topical areas, both in terms of publications and the writers at those publications. The visualization techniques described are not obvious guides, but instead, are tools for combing through large amounts of data and finding hidden information. Use these techniques to unearth new opportunities and prioritize as you get ready to find the best places to pitch the content you’ve worked so hard to create.

Do you have any similar ideas or tactics to ensure you’re pitching the best writers and publishers with your content? Comment below!

With the new year in full swing and an already busy first quarter, our 2019 predictions for SEO in the new year are hopping onto the scene a little late — but fashionably so, we hope. From an explosion of SERP features to increased monetization to the key drivers of search this year, our SEO experts have consulted their crystal balls (read: access to mountains of data and in-depth analyses) and made their predictions. Read on for an exhaustive list of fourteen things to watch out for in search from our very own Dr. Pete, Britney Muller, Rob Bucci, Russ Jones, and Miriam Ellis!

1. Answers will drive search

People Also Ask boxes exploded in 2018, and featured snippets have expanded into both multifaceted and multi-snippet versions. Google wants to answer questions, it wants to answer them across as many devices as possible, and it will reward sites with succinct, well-structured answers. Focus on answers that naturally leave visitors wanting more and establish your brand and credibility. [Dr. Peter J. Meyers]

2. Voice search will continue to be utterly useless for optimization

Optimizing for voice search will still amount to no more than optimizing for featured snippets, and conversions from voice will remain a black box. [Russ Jones]

3. Mobile is table stakes

This is barely a prediction. If your 2019 plan is to finally figure out mobile, you’re already too late. Almost all Google features are designed with mobile-first in mind, and the mobile-first index has expanded rapidly in the past few months. Get your mobile house (not to be confused with your mobile home) in order as soon as you can. [Dr. Peter J. Meyers]

4. Further SERP feature intrusions in organic search

Expect Google to find more and more ways to replace organic results with solutions that keep users on Google’s own properties. This includes interactive SERP features that, slowly but surely, replace many website offerings in the same way that live scores, weather, and flights already have. [Russ Jones]

5. Video will dominate niches

Featured Videos, Video Carousels, and Suggested Clips (where Google targets specific content in a video) are taking over the how-to spaces. As Google tests search appliances with screens, including Home Hub, expect video to dominate instructional and DIY niches. [Dr. Peter J. Meyers]

6. SERPs will become more interactive

We’ve seen the start of interactive SERPs with People Also Ask boxes. Depending on which question you expand, two to three new questions will be generated below it that directly pertain to the expanded question. This real-time engagement keeps people on the SERP longer and helps Google better understand what a user is seeking. [Britney Muller]

7. Local SEO: Google will continue getting up in your business — literally

Google will continue asking your customers more and more intimate questions about your business. Does this business have gender-neutral bathrooms? Is this business accessible? What is the atmosphere like? How clean is it? What kind of lighting does it have? And so on. If Google can acquire accurate, real-world information about your business (your percentage of repeat customers via location data, prices via transaction history, etc.), it can rely less heavily on website signals and provide more accurate results to searchers. [Britney Muller]

8. Business proximity-to-searcher will remain a top local ranking factor

In Moz’s recent State of Local SEO report, the majority of respondents agreed that Google’s focus on the proximity of a searcher to local businesses frequently emphasizes distance over quality in the local SERPs. I predict that proximity will continue to weight the results heavily in 2019. On the one hand, hyper-localized results can be positive, as they allow a diversity of businesses to shine for a given search. On the other hand, with the exception of urgent situations, most people would prefer to see the best options rather than just the closest ones. [Miriam Ellis]

9. Local SEO: Google is going to increase monetization

Look for more of the local and maps space to be monetized by Google in unique ways, both through AdWords and potentially through new lead-gen models. This space will become more and more competitive. [Russ Jones]

10. Monetization tests for voice

Google and Amazon have been moving toward voice-supported displays in hopes of better monetizing voice. It will be interesting to see their efforts to get displays into homes and how they integrate display advertising. Bold prediction: Amazon will serve sleep-mode display ads similar to how the Kindle displays them today. [Britney Muller]

11. Marketers will place a greater focus on the SERPs

I expect we’ll see a greater focus on the analysis of SERPs as Google does more to give people answers without them having to leave the search results. We’re seeing more and more vertical search engines like Google Jobs, Google Flights, Google Hotels, and Google Shopping. We’re also seeing more in-depth content than ever making it onto the SERP in the form of featured snippets, People Also Ask boxes, and more. With these developments, marketers will increasingly want to report on their overall brand visibility within the SERPs, not just their website’s ranking. It’s going to be more important than ever to measure all the elements within a SERP, not just your own ranking. [Rob Bucci]

12. Targeting topics will be more productive than targeting queries

2019 is going to be another year in which the emphasis on individual search queries declines as people focus more on clusters of queries around topics. People Also Ask results have made the importance of topics much more obvious to the SEO industry. With PAAs, Google is clearly illustrating that it thinks about searcher satisfaction across an entire topic, not just a specific search query. With this in mind, we can expect SEOs to increasingly want their search queries clustered into topics so they can measure visibility and the competitive landscape across those clusters. [Rob Bucci]
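To make the idea of query-to-topic clustering concrete, here is a deliberately simple sketch that groups queries by shared-token (Jaccard) similarity. The queries and the 0.25 threshold are invented for illustration; production pipelines typically use embeddings or SERP-overlap signals instead.

```python
# Illustrative sketch: clustering related search queries into topic
# groups by token overlap. Queries and threshold are hypothetical.
def jaccard(a, b):
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def cluster_queries(queries, threshold=0.25):
    clusters = []
    for q in queries:
        for cluster in clusters:
            # Join the first cluster whose seed query is similar enough
            if jaccard(q, cluster[0]) >= threshold:
                cluster.append(q)
                break
        else:
            clusters.append([q])  # No match: start a new topic cluster
    return clusters

queries = [
    "how to fix a leaky faucet",
    "fix leaky faucet without plumber",
    "best running shoes 2019",
    "running shoes for flat feet",
    "leaky faucet repair cost",
]
for cluster in cluster_queries(queries):
    print(cluster)
```

Even this naive grouping separates the “leaky faucet” topic from the “running shoes” topic, which is the level at which you would then measure visibility and competition.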

13. Linked unstructured citations will receive increasing focus

I recently conducted a small study in which there was a 75% correlation between organic and local pack rank. Linked unstructured citations (the mention of partial or complete business information + a link on any type of relevant website) are a means of improving organic rankings which underpin local rankings. They can also serve as a non-Google dependent means of driving traffic and leads. Anything you’re not having to pay Google for will become increasingly precious. Structured citations on key local business listing platforms will remain table stakes, but competitive local businesses will need to focus on unstructured data to move the needle. [Miriam Ellis]
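For readers curious what a “correlation between organic and local pack rank” looks like mechanically, the sketch below computes a Spearman rank correlation on a hypothetical set of rank pairs (the data is invented, not the study’s; the no-ties formula is used for simplicity).

```python
# Hedged sketch: Spearman rank correlation between a business's organic
# rank and its local pack rank. Rank pairs below are illustrative only.
def spearman(xs, ys):
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    # Classic no-ties formula: 1 - 6 * sum(d^2) / (n * (n^2 - 1))
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n**2 - 1))

organic = [1, 2, 3, 4, 5, 6, 7, 8]       # organic rank per business
local_pack = [1, 3, 2, 4, 6, 5, 8, 7]    # local pack rank per business
print(round(spearman(organic, local_pack), 2))
```

A value near 1 means businesses that rank well organically also tend to rank well in the local pack — which is why improving the organic signals (like linked unstructured citations) can move local results too.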

We’ve heard from Mozzers, and now we want to hear from you. What have you seen so far in 2019 that’s got your SEO Spidey senses tingling? What trends are you capitalizing on and planning for? Let us know in the comments below (and brag to friends and colleagues when your prediction comes true in the next 6–10 months). 😉