Recently I saw a preview of Eloqua’s spring release, and it got me thinking about the role lead scoring plays in determining campaign effectiveness. I hadn’t seen the product in a while and was impressed with the UI improvements the Eloqua team has produced. They have added new capabilities for delivering highly personalized direct mail, SMS/voice reminders, and on-demand fax and RSS delivery – interesting stuff that, while I’d need to talk to a client or two to be convinced of its specific usefulness, shows that Eloqua is delivering a broader range of lead nurturing and drip marketing capabilities. Lastly, the new campaign design UI will help shorten the time it takes to get first campaigns up and running.

With these changes, I see Eloqua – like many of the other firms I mentioned in my prior post – moving the B2B marketing conversation ahead in an important direction. The key to getting a campaign up and running quickly is not to make it easier to launch more campaigns but instead to focus marketing attention on the results – well qualified leads. And this is where I think the marketing rubber hits the sales road.

As an analyst who has written about lead management extensively – I am still amazed at how many marketers feel challenged to produce leads that sales appreciate. The bickering between marketers (who feel sales doesn’t follow up on the great leads they generate) and sales (who feel the quality of said leads is subject to debate) seems to continue unabated. The few marketers who end the arguing figure out early that quantifying lead quality is essential.

In my view, these marketers live by four best practices. They:

1) Sit with sales, talk about leads, and come to an agreement about what constitutes a Marketing Qualified Lead (MQL). What are the characteristics that make a lead worth working, from a sales perspective? And they are prepared to have this conversation every quarter.

2) Assign numeric scores to the different criteria – both explicit (size, industry, title, budget, etc.) and implicit (behavior, activity, interest, etc.) – that both sales and marketing believe distinguish hot leads from the rest. Specific criteria carry specific point values, like +5 for downloading a white paper and +15 for attending a webinar.

3) Use these criteria and weights to score raw leads (contacts, inquiries, replies, etc.), and set a numeric threshold that leads must attain before they earn the MQL status and get passed to sales. They also adjust scores downward as contacts go inactive or age.

4) Rescore contacts placed in nurturing, education, or development campaigns. They work to understand what the optimum scores are for each category of lead. These scores can vary by product line, geography, or other company-specific factors, so they don’t assume that one size fits all. Again, scores change with activity levels and age.
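Taken together, practices 2 through 4 amount to a simple, auditable scoring function. As a rough illustration only – not any vendor's actual implementation – here is one way the explicit/implicit weights, the MQL threshold, and the downward adjustment for aging contacts might be sketched. The +5 white paper and +15 webinar values come from the post; every other weight, the threshold, and the decay rate are hypothetical numbers a team would negotiate with sales each quarter:

```python
# Illustrative point values. The white-paper (+5) and webinar (+15)
# weights come from the post; the rest are placeholder assumptions.
EXPLICIT_POINTS = {          # explicit criteria: size, industry, title...
    "title_is_director_or_above": 10,
    "target_industry": 10,
    "company_size_over_500": 5,
}
IMPLICIT_POINTS = {          # implicit criteria: behavior, activity, interest...
    "downloaded_white_paper": 5,
    "attended_webinar": 15,
    "visited_pricing_page": 10,
}
MQL_THRESHOLD = 35           # assumed value; agreed jointly with sales
DECAY_PER_IDLE_MONTH = 5     # assumed downward adjustment for aging leads

def score_lead(attributes, activities, months_since_last_activity):
    """Score a raw lead and decide whether it earns MQL status."""
    score = sum(EXPLICIT_POINTS[a] for a in attributes if a in EXPLICIT_POINTS)
    score += sum(IMPLICIT_POINTS[a] for a in activities if a in IMPLICIT_POINTS)
    # Practice 3: adjust scores downward as contacts go inactive or age.
    score -= DECAY_PER_IDLE_MONTH * months_since_last_activity
    score = max(score, 0)
    return score, score >= MQL_THRESHOLD
```

For example, a director in a target industry who attended a webinar and visited the pricing page last month would score 40 under these assumed weights and pass the threshold; the same contact after nine idle months would decay back to zero and stay with marketing for nurturing.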

Is this all there is to demonstrating campaign effectiveness? No, but it’s a start. Using numerical, quantifiable scores to grade leads turns the art of marketing into a science, and marketers who use this approach tell me numeric scoring is one of the biggest factors in raising marketing’s value to the sales organization. But like most good science, it takes time and effort to perfect. So I commend Eloqua on their next generation of marketing software and their efforts dedicated to making marketing more accountable.

I would like to hear about your scoring approaches and what you have done to achieve a common definition of qualified leads with sales.

Comments

I agree with your assessment of Eloqua's new release and in particular the improvement in the UI. It seems that many marketers believe that automated lead management and lead scoring are something only for large companies with large budgets and plenty of resources. I've also heard sales people call it graduate-degree-level stuff that's out of reach for most organizations. I hope that Eloqua's work, among that of a handful of others, helps change the marketing dialogue around the accessibility and value of using marketing automation to bridge the divide between sales and marketing.

We've actually seen a shortcoming in assigning a single numeric lead score. Our approach is to score the lead along multiple dimensions. Three key ones that we look at: profile match, engagement level, and buying cycle position.

The first -- profile match -- is an assessment of how much you want to talk to that person based on who they are. The profile score includes factors such as job title, company size, industry, department, etc. A lot of companies' lead scoring programs are heavily biased toward profile-oriented attributes. The problem is, focusing too much on profile-based attributes isn't enough. The reality is that identifying high profile score leads is relatively easy -- do a search in Hoover's or D&B and get the C-level execs at your target companies. Start dialing. The problem? Obviously, the overwhelming majority of those people won't return calls or emails from Sales.

Introducing an engagement level score attempts to address this. It measures how interested the lead is in talking to your company -- an assessment of how likely the lead is to respond to Sales. It does no good to hand leads off to Sales that Sales cannot then engage, right? Engagement level is measured based on things like recency and frequency of the lead's visits to your web site, registration/attendance at events, etc.

The third dimension we use is the lead's position in the buying cycle. This can be a bit noisy. But a lead who is looking at pricing and detailed specifications on your site is implicitly telling you that he's farther along in the buying cycle than a lead who is simply downloading a high-level white paper or analyst article.

By decomposing the lead into multiple scores, an MQL becomes any lead that exceeds the thresholds set for each dimension independently.
For one thing, that gives you more levers to pull to tune the qualification process -- adjust the thresholds or adjust the scoring algorithm on individual dimensions based on the type of feedback from Sales.

And there's another major benefit. By scoring each lead along multiple dimensions, each lead falls into a different quadrant on a multidimensional scoring grid. That can really help you apply different nurturing strategies for different types of leads. For instance, a lead with a high profile score but a low engagement score is someone you might be willing to spend a bit more on in order to get them to engage -- make a higher value "offer" to get them interested in your company. On the other hand, a lead with a high engagement score but a low profile score... probably is not someone you want to spend a lot of energy/time/resources on, as it's more difficult to change someone's profile than it is to change their engagement level.

There's a white paper that goes into more detail about multidimensional lead scoring on the Bulldog Solutions web site: http://tinyurl.com/4gkywn

To your notes about Eloqua, it's a great tool for implementing multi-dimensional lead scoring like this! The tool captures a lot of information that is useful for scoring along all of these dimensions, and Program Builder is a great tool for actually implementing the scoring algorithm you develop.
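The multidimensional scheme described above can be made concrete with a small sketch. This is only an illustration of the idea: the dimension names follow the comment, but the threshold values and the grid labels are hypothetical, not Bulldog Solutions' actual model:

```python
# Assumed per-dimension thresholds -- each would be tuned independently
# based on feedback from Sales (the "levers" mentioned above).
THRESHOLDS = {"profile": 20, "engagement": 15, "buying_cycle": 10}

def is_mql(scores):
    """An MQL is any lead that meets the threshold on every dimension."""
    return all(scores[d] >= t for d, t in THRESHOLDS.items())

def nurture_strategy(scores):
    """Place the lead on a profile-vs-engagement grid to pick a strategy."""
    high_profile = scores["profile"] >= THRESHOLDS["profile"]
    high_engagement = scores["engagement"] >= THRESHOLDS["engagement"]
    if high_profile and not high_engagement:
        return "invest: high-value offer to get them to engage"
    if high_engagement and not high_profile:
        return "deprioritize: engaged but a poor fit"
    if high_profile and high_engagement:
        return "qualify: check buying-cycle position"
    return "drip: low-cost nurturing"
```

Tuning then means adjusting either the thresholds or the per-dimension scoring itself, and the grid naturally yields the differentiated nurturing strategies discussed above (spend more on high-profile/low-engagement leads, less on the reverse).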

Dear Tim, Excellent discussion, and I believe we are in agreement. I probably didn't make it clear enough in point 2) of the original post, but I am definitely suggesting that marketing folks use multiple criteria/dimensions for scoring leads. As you point out -- engagement or behavioral criteria are as important, if not more so, than the conventional profiling criteria that tend to focus on facts, not interest, needs, or motivations. Buying cycle is tricky, since decision makers and influencers can each be at different individual stages in the process. In B2B, it is always a challenge to identify the "key" decision makers and to understand where their heads are at. Thanks for clarifying and contributing to the discussion. And, yes, I did fix your tiny url.

The idea of lead scoring has been around - but has there been any analysis done to see if higher scoring leads lead to higher conversion or more revenue? Seems like there should be a correlation here. And if the correlation is weak - then perhaps the criteria for scoring need another look. Thoughts?

Devashish, to answer your question from Forrester's perspective: no, we have not done any research to analyze whether higher scoring leads lead to higher conversions or revenue. But I have heard many anecdotal stories and seen models that show how including both explicit (fact) and implicit (behavior) criteria in the lead scoring model DOES help companies sort out the most promising leads from the rest of the respondents.

Advocates, like myself, believe that scoring gives marketers a "stake in the ground" from which to measure their efforts. Without this, we are back to John Wanamaker's comment about wasting half of your promotional spend. Measuring is key, and scoring makes measurement explicit. I am sure the lead management technology providers can weigh in here with other, more specific examples. Bottom line: I'd be interested to see quantitative research on this, but the anecdotal information, in my mind, suggests the correlation is strong.

You also raise a good point: if the correlation is weak, your scoring may need adjusting. You might be looking at the wrong criteria or have the weights applied inappropriately. But if you don't measure, how do you know?

I'm missing something. If I get a lead from any source, I just call him or her. Two minutes later, you know if the lead is a lead or not. We use a web service for revealing the company names of the visitors to our website. Simple and practical, as these companies are already interested. The visitors who qualify as a lead are inserted into the CRM and then followed up by calling or emailing.

Compelling post, Laura. I work only with complex B2B sales, so here's what I've learned after doing lead scoring and qualification for clients for a little over a decade.

Lead scoring can be complex and often begins as a relatively uncomplicated grading system that is then gradually enhanced as the process gets up and running. Lead scoring is only recommended when there is a large number of inquiries to screen. The numerous variables to weigh in screening suggest that the process be as uncomplicated as possible at the outset, before attempting a scoring system. Lead scoring does afford visibility into the lead pipeline as well as the sales pipeline.

Communication is the key. It doesn’t matter what name you call a lead by as long as it is meaningful to the sales force. Salespeople don’t care if a lead is A, B, C or Q. All they really care about is whether the leads they are given are sales-ready. They will not adjust their behavior just because marketing classifies a lead as having a score of 200 instead of WARM. Most follow the path of least resistance to get to the destination of making quota. Communication, teamwork, and shared vision are, as a result, essential.

In fact, I would only recommend numerical lead scoring if your organization is managing hundreds of leads and inquiries in a lead management system. Lead scoring enables you to have more stringent lead definition requirements, and thus more detailed and perhaps more accurate reporting, but it can sometimes leave out the element of good judgment.

In cases where your experience tells you that a lead is sales-ready, or would best be in the hands of a salesperson, you should create an exception code or status. This would allow the lead to be passed on to sales without meeting the minimum lead score and would also alert the salesperson that the lead has special requirements.

Ultimately, when deciding whether to hand over these exceptions -- those leads that do not include the final decision-maker -- you should ask the following question: “Can marketing continue to nurture this opportunity until it is more sales-ready, or is this a situation best handled by a salesperson?” Again, this is why communication and cooperation between sales and marketing are so important. I hope this helps.

@Brian Carroll - thanks for your explanation. Our company is probably smaller, and thus our number of leads is lower. However, we do qualify our website visitors in order to mark them as leads. The first step is to discover the visitors by company name, then to identify their needs from their website visit(s), using this visit data combined with information retrieved from different sources on the Internet to qualify them as a lead or not. Then we start communicating with the company qualified as a lead, and follow up closely by analyzing their reactions on the website. As you write: "Communication is the key."

John and Brian -- thank you for the posts and different insights. As your posts show, lead scoring changes with the size of the firm and the maturity of the process. John, I would say you are performing lead scoring in an indirect manner. You use a combination of explicit data (the company name of the visitor, as revealed by your Web service) and implicit data (the fact that they were on your site, and where they went). In your case, you must have both to "qualify", i.e. hit a score threshold. As Brian points out, you don't have to create a complicated scheme for scoring -- it's all about communication and getting to a common language about what is "qualified" and what is not. I still advise a numeric scheme, not only because it indicates which leads are ready for sales, but also because it shows which would benefit from additional nurturing/education and which may simply be worth filing away. Keep the conversation going -- I appreciate hearing the different perspectives!

Great thread, and timely. BtoB magazine and the Sales Lead Management Association reported last week that 45.4% of the 273 corporate marketers they surveyed in May 2008 plan to increase spending on lead-generation programs in the second half of 2008. Marketingcharts.com reports, "The biggest obstacles for increasing spend on such programs, the majority of respondents (47.2%) reported, was that they do not have reports to show the ROI for what they are spending." Seems we need a sound policy on lead scoring, now more than ever.

Two other thoughts:

A -- Based on my experience with marketing smaller firms, I'd offer that it's key to have the same person do all the lead scoring.

B -- From Laura and John's exchange, I wonder if fundamentally you have different views about how successful an Eloqua can be in helping integrate marketing and sales? When I engage with the Eloquas and Neocases of the world, I hear this sort of case being made:

1. Marketing and sales are poorly, if at all, integrated at most companies.
2. Poor marketing-sales integration hobbles growth.
3. The right software can solve the marketing-sales integration problem.
-------------
Neocase/Eloqua/______ is the right software to solve the marketing-sales integration problem.

Both 1 and 2 are clearly true. But what about 3? Or is my reconstruction of their case a straw man?

Rebekah, thanks for the data points. Together, the data reflect the treadmill B2B marketers find themselves running on -- pushed to deliver higher numbers of leads, but without feedback on whether the leads they produce are valuable or which programs produced the best qualified leads. Just the typical back-and-forth bickering:

--"What did you do with those leads I sent you? Why didn't you follow up?"
--"What leads? Those were junk..."

To get off the treadmill -- and to stop hobbling growth, as you point out -- marketing needs to move the conversation from subjective to objective measures. Using even a very simple scheme can be helpful. Not because it's quantitative, but because it creates the opportunity for sales and marketing to get on the same page.

I disagreed mildly with Brian Carroll when he said you need to use numerical lead scoring only if you are handling a large number of leads. A number helps you determine if the lead is progressing in the buying cycle. A lead that scores 70 or 85 (on a 100 point scale) is a "good" lead, but you can also tell in which direction it's been moving. Whether you expose the number to sales or not is up to you (you can just send them "A"s or "HOT"s and keep working the "B"s, "C"s and "Warm"s...), but I think numbers reinforce that qualifying can be turned into a scientific approach.

Regarding your point 3 above, lead management software can HELP solve the marketing-sales integration problem by automating scoring, routing, and nurturing activity -- but if you aren't having the "what is the score/value of a qualified lead?" discussion with sales, the software may not help, may not get used, or may be used to automate a poor process. Between points 2 and 3 there has to be recognition of the problem, agreement by top sales and marketing execs that it MUST be fixed, and a program launched to make it happen. Feel free to agree or disagree.....

Laura -- no thanks needed -- I'm honored to have it posted! I think I agree with your point in the "I disagree mildly..." paragraph of your 6/13 comment.

My own guess is: <100 staff x small volume of unscored leads = high risk of low or no margins. Arguably, the smaller the firm, the more precious each minute and the more expensive it is to bark up the wrong tree. Also, I think there's a correlation between no lead scoring and a high degree of guessing about which biz dev efforts will bring the right leads.

Maybe a venture capitalist can weigh in here re lead scoring and company valuation. Or perhaps VCs don't care about a company's practices re lead scoring -- which would, in itself, be telling?

I work for a small company that is evaluating marketing automation, and lead scoring is part of our requirements. Do you have any best practices for evaluating lead scoring? What core functionality should be part of any good lead scoring methodology for a small business?