SUMMARY: Lead scoring helps marketers rank and identify the best prospects. But assigning scores requires the sales team's input.

Here is how one marketer worked closely with the sales team to set up and refine a lead-scoring system. Her six strategies include:
o Defining a sales-ready lead
o Establishing scores for different activities
o Testing your methodology
o Refining a system through ongoing collaboration

Marketers who manage lead-scoring systems must remember that they’re not just measuring whether a prospect is engaged with campaigns and collateral. Lead scoring can determine whether engagement means a prospect is ready to hear from a sales rep.

“Lead scoring should be an indicator of sales readiness – not of how much of a fan they are of [our company],” says Emily W. Salus, Senior Marketing Manager, CollabNet, which provides a project development collaboration platform for the software industry.

That goal requires close collaboration between the sales and marketing teams to establish a threshold for when a lead is ready to be passed on to sales. Teamwork is also needed to assign scores for a prospect’s actions – a process that begins before a lead-scoring system is implemented, says Salus.

Salus describes how her team and the sales department worked together to set up a lead-scoring system. Here are six strategies she recommends for establishing a methodology, setting values for activities, and monitoring efforts.

Strategy #1. Define a sales-ready lead

Marketers and sales reps often have different takes on what makes a lead ready for follow-up. Before establishing scores for different actions, Salus’ team first asked the sales team about the specific combination of traits that make a prospect interesting to them.

The teams met once or twice a week over several weeks to discuss the scoring methodology. They focused on three major areas:
o Demographics (e.g., job title, company size, geography)
o Activity, including downloading collateral and attending webinars or in-person events
o Interaction with email, such as clickthrough rates and response to email offers

They also discussed rules for withholding leads from the sales team, such as having incomplete or bogus contact information or leads identifying themselves as students.

During the conversations, Salus’ team solicited feedback on individual factors within those three major categories that indicated high-value sales leads. They uncovered the combination of activities and demographic data that triggered a hand-off to the sales team.

For example, the sales team viewed webinar attendance as a high-value activity. They indicated that a prospect with the right demographic profile who had participated in a webinar was a lead they wanted to see immediately.

The marketing team also expected white-paper downloads to indicate a high-value lead. But the sales team was skeptical that a download alone indicated sales-ready status. “They said, ‘That’s not as valuable as someone who has actively engaged in a webinar,’” says Salus.

Instead, the sales team wanted to see prospects that had downloaded a white paper and also engaged in other activities.

Strategy #2. Use sales team feedback to establish point value of a sales-ready lead

Based on feedback from the sales department, Salus and her team began creating the point value that would drive their lead-scoring system.

First, they established the total point value that would mark the barrier between a lead that needed more nurturing and a lead that was ready to go to the sales team. In CollabNet’s case, that was 100 points.

Next, they used sales guidance on high-value prospect characteristics and activities to set a benchmark for reaching that barrier. For example, the sales team said a director-level prospect in the UK (one of the company’s top geographic regions) who attended a webinar was the type of lead they wanted to see. So, the marketing team knew that the combination of those activities had to equal or exceed the 100-point barrier.
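The hand-off logic described above can be sketched in a few lines. Note that the individual point values below are hypothetical, invented only for illustration; the article confirms only the 100-point barrier itself.

```python
# Sketch of CollabNet's 100-point hand-off barrier. Only the threshold
# comes from the article; the per-attribute point values are assumptions.

SALES_READY_THRESHOLD = 100

def is_sales_ready(points, threshold=SALES_READY_THRESHOLD):
    """A lead is passed to sales once its accumulated points meet the barrier."""
    return sum(points) >= threshold

# Hypothetical benchmark lead: director-level title + top geographic region
# + webinar attendance. The combination must equal or exceed 100 points.
benchmark = [30, 20, 50]  # title, geography, webinar (illustrative values)
print(is_sales_ready(benchmark))  # True
```

Any weights chosen for the real system would have to make that benchmark combination clear the barrier.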

Strategy #3. Create weighted scores for different actions

From that benchmark, the team established individual scores for each variable in a lead’s profile, weighting specific actions based on their value to the sales team.

Salus says marketers can approach scoring in two ways:
o Working backwards from their highest-value activity
o Working forward from their lowest-value activity

If you determine, for example, that attending a webinar is the highest-value action, worth 50 points, you could then assign a lower score to other options, such as 35 points for downloading a white paper.

Alternatively, a team can decide that visiting a Web page is the lowest-value action, worth one point. Then, you can assign higher point values to options that signal more interest, such as clicking on an email (10 points).
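The two approaches above can be sketched as small weight tables. The point values here mirror the examples in the text (50 for a webinar, 35 for a white paper, one point for a page visit, 10 for an email click); any other numbers would be assumptions.

```python
# Working backwards: anchor the highest-value action, scale others down.
backwards = {
    "webinar_attendance": 50,    # highest-value action (the anchor)
    "white_paper_download": 35,  # scored lower than the anchor
}

# Working forward: anchor the lowest-value action, scale others up.
forward = {
    "web_page_visit": 1,  # lowest-value action (the anchor)
    "email_click": 10,    # signals more interest, so it scores higher
}

def score_lead(actions, weights):
    """Total a lead's points from the actions it has taken."""
    return sum(weights.get(action, 0) for action in actions)

print(score_lead(["webinar_attendance", "white_paper_download"], backwards))  # 85
```

Either anchor works; what matters is that the relative weights reflect the sales team's view of each action's value.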

To manage the process, they created a spreadsheet that mapped all the possible actions a prospect could take. The spreadsheet included:

- Email interaction: Salus’ team didn’t count email opens toward their lead score. Instead, they looked for email clicks, which indicated that the prospect was responding to an offer.

- Voluntary requests for information: Filling out a “contact us” form or emailing the company with a request for important information, such as pricing data, signaled high engagement.

- Website visits: The team considered this activity more passive than responding to an email, and scored visits to different Web pages according to their content.

Visits to the homepage or contact-us page weren’t given a score. Instead, the team chose to score visits to pages with in-depth information, such as product information pages.

- Collateral downloads: They examined all the collateral options a prospect could download, such as analyst reports, white papers, case studies and product overviews. Then they analyzed whether all types of collateral were worth the same score, or if certain pieces, such as white papers, were more valuable.

- Event attendance: The team examined the range of in-person and online events that prospects could attend. Then they asked questions about the value of those different events.

The team determined, for instance, that viewing webinar replays should receive the same score as attending a live webinar; after all, overseas prospects might not have been able to attend the live event due to time-zone differences.
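A spreadsheet like the one described above is essentially a score table. The zero-point rules and the replay-equals-live rule below come straight from the team's decisions; the nonzero values are invented placeholders.

```python
# Sketch of the action-mapping spreadsheet as a score table.
# Zero-scored entries and the replay rule reflect decisions described in
# the article; the nonzero point values are assumptions for illustration.
ACTION_SCORES = {
    "email_open": 0,          # opens were not counted toward the score
    "email_click": 10,        # clicks show response to an offer (value assumed)
    "contact_us_form": 40,    # voluntary info request = high engagement (value assumed)
    "homepage_visit": 0,      # homepage and contact-us visits weren't scored
    "product_page_visit": 5,  # in-depth pages did score (value assumed)
    "webinar_live": 50,       # value assumed
}

# Webinar replays score the same as live attendance, since overseas
# prospects may simply have missed the live time slot.
ACTION_SCORES["webinar_replay"] = ACTION_SCORES["webinar_live"]

print(ACTION_SCORES["webinar_replay"])  # 50
```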

Strategy #4. Review proposed scores with sales team

The marketing team reviewed their proposed scores with the sales team. They explained how they had determined different point values, and asked the sales team to flag any disagreements with their choices. “We went back to sales and said, ‘This is what we’re thinking,’ and they said, ‘That’s kind of what we think, too.’”

Even when the teams largely agree, reviewing the proposed scores with the sales team can help you tweak or refine the scoring model. Salus and her team originally had planned to score analyst report downloads higher than case study downloads, but the sales team said they considered case studies to be more valuable than analyst reports. In their experience, prospects reading case studies were looking for comparisons to their own business model.

As a result, downloading a case study is worth 5 more points than downloading an analyst report in Salus’ system.

Strategy #5. Test scoring system against existing sales pipeline

Salus likens establishing a lead-scoring system to the scientific process: You come up with a hypothesis, and then you test it.

The team didn’t want to roll out their new system to their database of 500,000 records without ensuring they were on the right track with scoring values. So, they created a test to determine how existing leads and opportunities in the pipeline would have scored if the system had been in place.

The team conducted a random sampling of records in the company’s CRM system. They selected:
o Open sales opportunities
o Opportunities declared unresponsive or unqualified by the sales team
o Leads still in the nurturing pipeline

They examined each contact’s demographic characteristics and activity records. They assigned each contact a score based on the new lead-scoring criteria. “It was a lot of manual labor,” says Salus.

The effort was worth it. The test found:
o More than 85% of the open opportunities would have scored high enough to be passed along to sales.
o Almost none of the unresponsive leads would have been sent to sales in the first place.
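The back-test boils down to scoring each sampled record with the new criteria and checking how each group would have been routed. The record scores below are fabricated for illustration; only the 100-point threshold and the sampled groups come from the article.

```python
# Sketch of the back-test: score sampled CRM records against the new
# criteria and measure how each group would have been routed.
# The individual scores are invented; the threshold is from the article.

THRESHOLD = 100

def hit_rate(scores, threshold=THRESHOLD):
    """Fraction of records that would have been passed to sales."""
    return sum(s >= threshold for s in scores) / len(scores)

open_opportunities = [120, 105, 140, 95, 110]  # hypothetical: mostly above threshold
unresponsive_leads = [15, 40, 10, 25, 60]      # hypothetical: all below threshold

print(f"{hit_rate(open_opportunities):.0%}")  # 80%
print(f"{hit_rate(unresponsive_leads):.0%}")  # 0%
```

A high hit rate on open opportunities and a near-zero rate on unresponsive leads is the signal that the scoring values are on the right track.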

“That’s really satisfying,” says Salus. “It means we in marketing can keep things that are a waste of your time off your plate.”

Strategy #6. Refine the system through ongoing collaboration

Lead-scoring systems must be continually monitored and refined to remain effective. “This is completely a team effort between marketing and sales,” says Salus. “You have to be willing to not only set up the system, but also to analyze it.”

Salus and her team hold a meeting every other week with sales department execs to analyze metrics and assess potential changes to the lead-scoring system. They cover three major subject areas:
o Report on key metrics from the marketing funnel
o Report on key metrics from the sales funnel
o Discussion of lead-scoring system operations and management

The third discussion area typically constitutes the bulk of the regular meeting. The conversation allows marketing and sales to analyze which aspects of the lead-scoring system work well, which areas need improvement, and which tests to perform.

Salus documents the discussion from each meeting. Team members can review which issues were addressed, which decisions were made, and what projects to follow up on at subsequent meetings.

Useful links related to this article:

CollabNet uses Marketo’s marketing automation software to manage its lead-scoring and lead-nurturing program: http://www.marketo.com


The views and opinions expressed in the articles of this website are strictly those of the author and do not necessarily reflect in any way the views of MarketingSherpa, its affiliates, or its employees.