
Measuring the Cost of Defective Lead Records (A Step-by-Step Guide)


How can you measure the cost of defective lead records? This step-by-step guide walks you through the process.

Losing sales due to poor-quality lead records

According to a recent Forrester report on the impact of bad data on demand generation, around 25% of a company’s customer and prospect records contain critical data errors. Inaccuracies in an organization’s customer, prospect, or lead database create serious difficulties for marketing and sales professionals, including the inability to correctly estimate deals and to find the right point of contact (POC) within an organization. As a result, organizations end up jeopardizing or losing a significant number of sales every year.

High influx of lead data

One of the biggest reasons behind poor lead data quality is the diversity of devices, channels, and platforms consumers use during their buying journey. The number of internet-connected devices a consumer uses while interacting with a brand is expected to soon increase from 4 to 13.

As a result, lead data comes in from various channels, such as emails, web forms, chatbots, social media platforms, web cookies, and more. With such a high influx of data, it becomes complicated to maintain the quality of the information stored for each lead. This is where your business can run into serious data quality problems, and strategies like data cleansing, data standardization, data matching, and data deduplication can help rectify many of these issues.

Prioritizing lead quality over lead quantity

Almost all companies focus on generating more traffic and attracting more leads to their website and social media platforms, but very few pay attention to the quality of the information stored for the leads they collect. This greatly impacts the lead-to-customer ratio, and by the end of each year, executives are left wondering what went wrong. This is what happens when you base your marketing team’s performance KPIs on arbitrary variables such as lead count rather than on more meaningful metrics, such as lead quality.

Measuring the quality of your lead database

Organizations often use a list of ten data quality metrics to understand the quality of their database (a short sketch of computing a couple of these follows the list). These metrics include:

  1. Accuracy: How well do data values depict reality/correctness?
  2. Lineage: How trustworthy is the originating source of data?
  3. Semantic: Are data values true to their meaning?
  4. Structure: Do data values exist in the correct pattern and/or format?
  5. Completeness: Is your data as comprehensive as you need it to be?
  6. Consistency: Do the same records at disparate sources have the same data values?
  7. Currency: Is your data acceptably up-to-date?
  8. Timeliness: How quickly is the requested data made available?
  9. Reasonableness: Do data values have the correct data type and size?
  10. Identifiability: Does every record represent a unique entity, i.e., is it not a duplicate?
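To make a couple of these metrics concrete, here is a minimal sketch in Python that estimates completeness and identifiability for a lead table. The file name (leads_export.csv) and the attribute names (email, company, phone) are assumptions for illustration, not prescriptions:

```python
import pandas as pd

# Load a lead export; the file name and column names below are illustrative assumptions.
leads = pd.read_csv("leads_export.csv")

# Completeness: share of non-missing values per critical attribute.
critical_columns = ["email", "company", "phone"]  # hypothetical attribute names
completeness = leads[critical_columns].notna().mean() * 100
print("Completeness (%) per attribute:")
print(completeness.round(1))

# Identifiability: share of records that duplicate another record on the key attributes.
duplicate_rate = leads.duplicated(subset=critical_columns, keep=False).mean() * 100
print(f"Records involved in duplicates: {duplicate_rate:.1f}%")
```

Metrics such as lineage, semantics, or reasonableness require business context and are harder to score automatically; the sketch only covers the mechanical checks.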

Measuring the cost of defective lead records

Although these ten data quality metrics have proven very useful for assessing the quality of data at an organization, companies usually want a quick overview of the current state of their data quality. This is helpful for:

  • Catching data quality errors in time and being alerted before the state falls below an acceptable threshold,
  • Calculating the cost of poor data quality in the lead database and getting management on board with employing proper data quality management strategies.

For this reason, Tom Redman proposed a method called the Friday Afternoon Measurement (FAM) that quickly and effectively answers the question: when should you worry about data quality?

FAM is a quick four-step method that calculates the cost of poor data quality in your lead database on a weekly basis. It also helps raise red flags before the situation gets out of hand and your team uses the wrong data in its marketing and sales activities.

According to this method, the cost of poor data quality can be measured as follows:

Step 1: Collect recent lead data

The method starts with collecting the most recently created or used records from your customer, prospect, or lead database. Collect about 100 records and select the top 10 or 15 data attributes representing the most critical information about these entities.
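As a rough sketch of this step, the snippet below pulls the 100 most recently created records and keeps a handful of critical attributes. The file name, the created_at timestamp column, and the attribute list are assumptions for illustration; adapt them to your own CRM export:

```python
import pandas as pd

# Load the lead database export; file and column names are illustrative assumptions.
leads = pd.read_csv("leads_export.csv", parse_dates=["created_at"])

# Take the 100 most recently created records.
recent = leads.sort_values("created_at", ascending=False).head(100)

# Keep only the 10-15 attributes carrying the most critical information.
critical_attributes = [
    "first_name", "last_name", "email", "phone",
    "company", "job_title", "country", "lead_source",
    "created_at", "last_contacted_at",
]  # hypothetical attribute names
sample = recent[critical_attributes]
sample.to_csv("fam_sample.csv", index=False)
```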

Step 2: Label records as defective or defect-free

Invite two or more people from your team who understand the data under consideration, and ask them to highlight any errors they encounter in the 100 selected records. These errors can be incomplete, inaccurate, invalid, or missing fields. It could also be that the same record has been entered into the database more than once. All such discrepancies must be highlighted.

Next, add a new column to the sheet and label each record as Defective or Perfect, depending on whether an error was encountered in that record. Finally, calculate the total number of lead records labeled Defective.
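The sketch below shows one way to add that label column. In FAM the judgment comes from reviewers who know the data; the two automated checks here (missing critical fields and duplicated email) are only a rough stand-in, and the column names are assumptions carried over from the earlier sketch:

```python
import pandas as pd

sample = pd.read_csv("fam_sample.csv")

# In FAM the labeling is done by people who know the data; the two checks below
# (missing critical fields, duplicated email) only approximate that judgment.
required = ["first_name", "last_name", "email", "company"]  # hypothetical attribute names
has_missing = sample[required].isna().any(axis=1)
is_duplicate = sample["email"].duplicated(keep=False)

sample["label"] = (has_missing | is_duplicate).map({True: "Defective", False: "Perfect"})
print(sample["label"].value_counts())
```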

Step 3: Measure data quality

Now, calculate the percentage of records that were perfect among the last 100 entries in your lead database. Let’s say that, out of the last 100 records, 38 had data quality problems while the remaining 62 were perfect. This 38% error rate raises a red flag and tells you that you have serious data quality issues.
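Continuing the sketch above, the error rate is simply the share of records labeled Defective:

```python
# Error rate: share of sampled records labeled Defective.
defective = int((sample["label"] == "Defective").sum())
error_rate = defective / len(sample) * 100
print(f"Defective records: {defective} of {len(sample)} ({error_rate:.0f}%)")
```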

Step 4: Consider the rule of ten (RoT) to calculate the cost of poor data quality

This final step helps you calculate the cost of poor data quality in your dataset. It uses the rule of ten, which states that it costs ten times as much to complete a unit of work when the data is defective as when it is perfect.

As an example, let’s say it costs $1 to complete a unit of work when the data is perfect; according to RoT, it will then cost $10 to complete the same work when the data is defective. The total cost therefore becomes:

Total cost = (62 × $1) + (38 × $1 × 10) = $62 + $380 = $442

This clearly shows that working your lead dataset just cost you roughly four times what it would have if the data had been defect-free.
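A minimal sketch of the same rule-of-ten arithmetic, assuming the $1 unit cost and the 62/38 split from the example above:

```python
# Rule of ten: a defective record costs ten times as much to process as a perfect one.
cost_per_perfect_record = 1.0   # assumed $1 per unit of work
perfect_count = 62
defective_count = 38

total_cost = (perfect_count * cost_per_perfect_record
              + defective_count * cost_per_perfect_record * 10)
baseline_cost = (perfect_count + defective_count) * cost_per_perfect_record
print(f"Total cost: ${total_cost:.0f} (vs. ${baseline_cost:.0f} if every record were perfect)")
# Total cost: $442 (vs. $100 if every record were perfect)
```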

Using a self-service data quality tool for your lead database

Although FAM is a quick method for assessing data quality and calculating the costs associated with poor data quality, it still takes about two hours of three to four team members’ time on a Friday afternoon – hence the name. This is where a self-service data quality tool can come in handy: one that quickly generates a data quality report and automatically labels defective and defect-free records.

Many vendors nowadays offer all-in-one data quality software that, in addition to data quality reports, provides extensive data quality capabilities and acts as data deduplication software – matching lead records to determine whether they represent the same individual and merging them into one.

Whether done manually or through automated tools and workflows, it has become imperative for every company to scan its lead database before labeling it safe for the marketing and sales teams to use in their activities – potentially saving the company several times the cost of working with defective records.
