StackLead vs. Humans

How much time do you spend looking up new leads and signups before you contact them? Some entrepreneurs and sales teams might use sites like oDesk, Elance, and TaskUs to find an inexpensive virtual assistant for tasks like web research. We wanted to quantify how StackLead’s automated research compares to human virtual assistants, so we ran an experiment to find out.

In the end, the automated system not only was faster and covered more data than the human researchers, but was also 43% more accurate.

If you’re looking to automatically qualify your sales leads, you can register here. Read on for the complete results of our science experiment.

Experiment setup


We took 100 of our own signups’ email addresses and used a few different services to research the following information about each lead:

  • First and last name – What’s the person’s name so that I can write a personal email?
  • LinkedIn URL – What’s their professional background?
  • Role – What’s their buying authority for my product?
  • Company name – Where do they currently work?
  • Employee count – How large is the company and does it have headcount relevant to my product?
  • Company website – Where can I find this company online (even if the lead signed up with a Gmail address)?
  • Company industry – Do I sell into this vertical?
  • Twitter handle and follower count – Does this lead actively use social media?
  • Location – Do I sell into this region?

We chose these fields because they include a good mix of person and company details that help qualify a sales lead. We’ll break down lead scoring in future posts, but we actually prioritize our leads by looking for job titles that include “CEO” or “director” and then sorting by company size. And when contacting a lead, everyone loves that you took the extra effort to call them by name and mention something from their Twitter profile. This research data is also easily available via our Google spreadsheet integration.
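
To make that prioritization concrete, here's a minimal sketch in Python. The field names and the sample `leads` list are illustrative assumptions, not StackLead's actual schema or scoring logic.

```python
# Minimal sketch of the prioritization described above: keep leads whose
# job title mentions "CEO" or "director", then sort by company size.
# All field names here are illustrative, not StackLead's actual schema.

PRIORITY_TITLES = ("ceo", "director")

def prioritize(leads):
    """Return priority leads only, largest companies first."""
    def has_priority_title(lead):
        role = (lead.get("role") or "").lower()
        return any(title in role for title in PRIORITY_TITLES)

    priority = [lead for lead in leads if has_priority_title(lead)]
    return sorted(priority,
                  key=lambda lead: lead.get("employee_count") or 0,
                  reverse=True)

leads = [
    {"email": "jane@acme.com", "role": "CEO", "employee_count": 250},
    {"email": "sam@tinyco.io", "role": "Engineer", "employee_count": 40},
    {"email": "ali@bigcorp.com", "role": "Director of Sales", "employee_count": 1200},
]

for lead in prioritize(leads):
    print(lead["email"], "-", lead["role"])
# ali@bigcorp.com - Director of Sales
# jane@acme.com - CEO
```

The same idea extends to more titles or a weighted score, but a simple filter-and-sort is usually enough to decide who to email first.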

Since we were using real leads, the data was a good representation of web traffic for a B2B SaaS company. Roughly 30% of the leads had email addresses we marked as “personal” (e.g. Gmail or Hotmail). Also, we had already reached out to each signup, so we had a good idea which ones were legitimate, engaged leads. In total, about 2/3 of the emails had information available online and were legitimate leads.

We rated the performance of StackLead and the human freelancers based on 3 criteria (a short sketch of how the first two are tallied follows the list):

  • Coverage – Of the 100 leads researched, how much information was found?
  • Accuracy – Of the information found, how many had the wrong data?
  • Time – How long did it take to process the list? The sooner a lead is contacted, the more likely they are to engage and convert.
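
To make the first two criteria concrete, here's a minimal sketch of how a researched list could be scored. The `results` and `verified` structures and the field names are illustrative assumptions, not the actual spreadsheet format we used.

```python
# Coverage counts every lead/field cell; accuracy counts only the cells
# that were filled in (and that we spot-checked by hand).
FIELDS = ["name", "linkedin_url", "role", "company_name", "employee_count",
          "company_website", "company_industry", "twitter", "location"]

def coverage(results):
    """Fraction of all lead/field cells that were filled in."""
    total = len(results) * len(FIELDS)
    found = sum(1 for lead in results for field in FIELDS if lead.get(field))
    return found / total if total else 0.0

def accuracy(results, verified):
    """Of the filled-in cells we hand-checked, the fraction that were correct.

    `verified` maps (email, field) to the hand-checked correct value.
    """
    checked = correct = 0
    for lead in results:
        for field in FIELDS:
            key = (lead["email"], field)
            if lead.get(field) and key in verified:
                checked += 1
                correct += int(lead[field] == verified[key])
    return correct / checked if checked else 0.0
```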

Choosing a researcher

The quality of researchers on oDesk varies greatly, and finding a good one is a topic worth a whole other blog post, but we followed some best practices to get the best researcher we could:

  • We wrote a clear, detailed job post in the listing and invited some strong, experienced freelancers to apply.
  • From a pool of over 40 applications, we selected 3 researchers from different regions, each with a rating above 4.8/5.0 and over 100 previously completed jobs.
  • We gave the finalists clear and explicit instructions for how to search for information and a list of tools they could use.
  • We asked each to research the first 10 emails in the list and selected the applicant with the best performance on the sample set.

Results

Ultimately, StackLead won in all 3 categories versus the virtual assistant. As you might expect from an automated algorithm, StackLead was considerably faster than the outsourced assistant (in fact, faster than it took to write this breakdown!). To verify accuracy, we checked 3 fields (LinkedIn URL, company name, and employee count) by hand. Diving into our experiment criteria:

  • Coverage – StackLead had 64% coverage, while the researcher had 54% coverage. StackLead wins by 20%.
  • Accuracy – Across the 3 fields we manually verified, StackLead was 99% accurate and the researcher was 70% accurate. StackLead wins by 43%.
  • Time – The oDesk freelancer spent 8 hours researching, split over 2 days due to considerable back and forth to fix problems with the initial results. StackLead returned all of the results in 17 minutes. Considering only the time paid to the freelancer, StackLead wins by 2,724% (see the arithmetic note after this list).
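
A note on the arithmetic, since "wins by" can be read more than one way: these figures appear to be relative improvements, i.e. the difference divided by the researcher's number. The quick check below uses the rounded percentages quoted above (the published figures were presumably computed from un-rounded counts), so coverage and accuracy come out slightly lower, while the time comparison lines up exactly.

```python
# "Wins by" as a relative improvement: difference / researcher's figure.
coverage_win = (64 - 54) / 54 * 100    # ~18.5%, close to the ~20% quoted above
accuracy_win = (99 - 70) / 70 * 100    # ~41.4%, close to the ~43% quoted above

# Time: 8 hours = 480 minutes vs. 17 minutes. Smaller is better, so the
# win is measured against StackLead's 17 minutes.
time_win = (480 - 17) / 17 * 100       # ~2,724%, matching the figure above

print(round(coverage_win, 1), round(accuracy_win, 1), round(time_win))
# 18.5 41.4 2724
```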

The oDesk researcher’s poor accuracy was a little disappointing. Having so many inaccurate entries made it difficult to trust any of the results without manually checking first. We felt like we were on a terrible online dating site. Lots of profiles for potential partners, but no idea who’s fake and who’s real.

Although the freelancers did a good job communicating, we still spent a lot of time managing the work. Between defining the job, writing up instructions, choosing a researcher, and checking in on progress, we easily spent 2 hours. We wanted to spend $10 (or 10 cents per lead) on this experiment, but we had to give the researcher more time to fill in as many of the company datapoints as possible. Factoring in the additional prep work and communication would've made oDesk's showing on time even worse.

The virtual assistant’s poor coverage, particularly for company data, highlighted another issue: freelancers are pressed for time, and the results suffer as fields are added. Beyond the data in this test, StackLead returns company details like funding status and leadership, as well as domain information (Alexa Rank, software vendors, and WHOIS records), without needing any extra time. Machines excel at layering on this additional information instantly, while humans require the same amount of manual work for each additional field.

StackLead saves you money

In addition to covering fewer email addresses, returning less accurate results, and taking far longer, the human researchers were also quite a bit more expensive than our automated solution. We spent about $27 to research 100 leads in this experiment, or roughly 27 cents per lead. That's over three times the per-lead cost of StackLead's Startup Plan.

Are you spending time and money outsourcing lead research and qualification? Get started today with the faster, more accurate, and more affordable solution from StackLead.



Comments

  1. You put into bold type the claim that you are 43% more accurate than your human competitors. You each looked up nine pieces of information on 100 leads, but you only made the accuracy comparison on three of the nine pieces of information. How well did everyone do with the other six? Why were these three used for the comparison?

    You state that you “gave the finalists clear and explicit instructions for how to search for information and a list of tools they could use.” So, in the end, you’re actually 43% more accurate than researchers using your instructions and limitations. Why would you prevent the researchers from using whatever methods they thought best?

    Finally, using percentages of percentages to compare results can lead to confusion (e.g. no one would say that someone getting 30% correct is 300% better than someone getting only 10% correct). You did answer 19% more of the answers correctly, and you did get only 1% wrong while the humans got an error rate of 30%.

    • Thanks for the feedback, Mark. Here’s a response to your points:

      1) We focused on those fields for 3 reasons. First, some fields are derived (e.g. location from LinkedIn) or redundant (e.g. first/last name). Second, the oDesk researcher chose to skip the company industry field. Third, we wanted to manually verify the results, and this made the process more reasonable.
      2) The researchers could use whatever methods they wanted, but we gave them tips such as how to run a LinkedIn search and install Rapportive.
      3) Sorry this caused some confusion. We thought it was a reasonable way to present the numbers.

