
Evaluating Marketing List Performance: The Implications of Merge-Purge

Say you want to run a customer acquisition campaign. You've done this song and dance before: you gather lists from various sources (filtered, univariate, modeled, and so on), load them into your system, and at some point you have to work out how much the lists overlap and assign priorities. Marketers call this step merge-purge.

Merge-purge is an important step in campaign setup because you have to remove the duplicate names contained across your various vendor lists. Once the duplicates have been identified and scrubbed, you run your campaigns, and when responses start coming in you evaluate how each marketing list performed.
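To make the mechanics concrete, here is a minimal merge-purge sketch in Python. It assumes names can be keyed exactly (here by a hypothetical email field); real merge-purge systems use fuzzier matching on name and address, so treat this as an illustration of the priority logic only.

def merge_purge(lists_in_priority_order):
    """Keep each name only on the first (highest-priority) list it appears on."""
    seen = set()
    scrubbed = {}
    for list_name, names in lists_in_priority_order:
        kept = [n for n in names if n not in seen]  # drop anything a higher list already claimed
        seen.update(kept)
        scrubbed[list_name] = kept
    return scrubbed

# Hypothetical vendor lists, keyed by email.
lists = [
    ("List #1", ["a@x.com", "b@x.com", "c@x.com"]),
    ("List #8", ["b@x.com", "c@x.com", "d@x.com"]),
]
print(merge_purge(lists))
# {'List #1': ['a@x.com', 'b@x.com', 'c@x.com'], 'List #8': ['d@x.com']}

Notice that List #8 loses two of its three names before it ever gets mailed; that asymmetry is the heart of everything that follows.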

Evaluating the Results
When the results come in from each list you might think you know how each one performed, but it's important to remember that those results were generated from the scrubbed lists, not from the original lists as the vendors delivered them.

List #1, for example, generated a 2.3% response rate, while List #8 generated only a 1.8% response rate. "Wow," you might say to yourself, "List #8 didn't perform." In reality, though, 20% of the people in List #1 also appeared in List #8.

Names that appear on multiple lists tend to be the best ones; they are the names that list providers have little difficulty finding. The lists scrubbed first in merge-purge get to keep those names, and the lists scrubbed later have to give them up.

To accurately determine which lists performed best, you have to either compare the original lists with their duplicates intact or compare every list with all duplicates removed.

"List #1 generated a 2.3% response rate. List #8 generated a 1.8% response rate. Therefore List #1 is the better list." That is the conclusion the scrubbed numbers invite, and it is exactly the trap.

Because of the inevitable overlap in marketing lists, credit is not always given where it is due. What we see all the time are marketers who rank their lists, get to the merge-purge process, scrub out duplicate names, and then throw one very important question out the window: which list gets to keep the duplicate names?

How marketers choose to answer this question has a large impact on how final list performance is evaluated. More often than not, we see marketers order their lists and scrub from the top down, so that lists at the top keep the duplicate names and lists at the bottom lose them. In other cases, marketers arbitrarily decide that one list gets to keep the duplicates over another, purely out of vendor preference.
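To see how much the scrub order alone can matter, here is a toy simulation with entirely hypothetical counts (not the campaign numbers above): two equally good lists share 200 names, and the shared names respond at twice the rate of the rest.

overlap = {f"dup{i}" for i in range(200)}  # names on both lists
only_1 = {f"a{i}" for i in range(800)}     # names only on List #1
only_8 = {f"b{i}" for i in range(800)}     # names only on List #8

# Hypothetical responders: 4% of the shared names, 2% of the rest.
responders = (
    {f"dup{i}" for i in range(8)}
    | {f"a{i}" for i in range(16)}
    | {f"b{i}" for i in range(16)}
)

def rate(names):
    return len(names & responders) / len(names)

list1, list8 = only_1 | overlap, only_8 | overlap

# Scrub List #1 first: it keeps every duplicate, List #8 loses them all.
print(f"List #1: {rate(list1):.2%}  List #8: {rate(only_8):.2%}")  # 2.40%  2.00%
# Reverse the priority and the 'winner' flips.
print(f"List #1: {rate(only_1):.2%}  List #8: {rate(list8):.2%}")  # 2.00%  2.40%

The two lists are identical in quality by construction; only the order of the scrub decides which one looks better.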

If we track the response rates back to the original lists, without any bias from the scrub order, List #8 actually had a 2.5% response rate once the overlap was included, which is higher than List #1's 2.3%. Not only would List #8 have gotten responses from the people who appeared on both List #1 and List #8, it also found additional customers who weren't in List #1.

Evaluating vendors by allowing one list to take all of the credit for names found on multiple lists is not only bad math, it's bad science. Be honest with yourself: every time you evaluate your lists this way you weight the outcome in one vendor's favor, intentionally or accidentally. The approach is not objective and does not measure a list's actual performance. So you can't say that List #1 performed better than List #8; you can only say that you prefer Vendor #1.

Proper Measurement Is Worth It

To truly measure performance, we recommend that marketers track every original source of a name, from every list it appears on, so that each vendor list is accurately credited with the responses of the names it contained, even when those names were removed during the merge-purge process. Merge-purge is a great process for making sure you don't duplicate your mailing costs, but it should not distort the measured results of your lists.
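Here is a sketch of what that source tracking could look like, with hypothetical field names and deliberately tiny counts; the key idea is that attribution is recorded before merge-purge, and every source list is credited when a name responds.

from collections import defaultdict

# Every mailed name records all the original lists it appeared on,
# captured before merge-purge removed any duplicates.
sources = {
    "a@x.com": {"List #1"},
    "b@x.com": {"List #1", "List #8"},  # duplicate kept by List #1
    "d@x.com": {"List #8"},
}
original_sizes = {"List #1": 2, "List #8": 2}  # list sizes as delivered
responders = ["b@x.com", "d@x.com"]

credited = defaultdict(int)
for r in responders:
    for lst in sources[r]:  # credit every list the name came from
        credited[lst] += 1

for lst, size in original_sizes.items():
    print(f"{lst}: {credited[lst] / size:.1%} response rate")
# List #1: 50.0% response rate
# List #8: 100.0% response rate

The point of the toy numbers is that List #8 gets credit for b@x.com even though merge-purge removed that name from its mailing.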

Alternatively, marketers could normalize their lists by removing every name that appears on more than one list before evaluating response rates. This approach has its own drawback, though: it discards exactly the overlapping names that tend to respond best, so it will show more and more bias as additional lists are added into the process.
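For completeness, here is a sketch of that normalization approach on the same hypothetical lists: every name that appears on more than one list is dropped before rates are computed.

from collections import Counter

lists = {
    "List #1": {"a@x.com", "b@x.com", "c@x.com"},
    "List #8": {"b@x.com", "c@x.com", "d@x.com"},
}

# Count how many lists each name appears on.
appearances = Counter(n for names in lists.values() for n in names)

# Keep only the names unique to a single list.
unique_only = {
    lst: {n for n in names if appearances[n] == 1}
    for lst, names in lists.items()
}
print(unique_only)
# {'List #1': {'a@x.com'}, 'List #8': {'d@x.com'}}

Two thirds of each list is discarded here, and what gets discarded is precisely the overlap, which is why the bias grows as more lists enter the campaign.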

In the end, if you perform the merge-purge process with appropriate statistical rigor, so that it does not bias your outcomes, you will have a true picture of the value of your lists rather than a false sense of security that your old list is still the best, even when it isn't. Don't fall into the trap of proving what you want to be true by 'juking the stats.' By being honest with yourself and with your numbers, you will gain a more accurate picture of performance and reap better ROI in the long run.