Part 1 discussed the fundamental questions you must ask to avoid disappointment with business intelligence software employed in pursuit of sophisticated Customer Relationship Marketing ("CRM"):
- Is important functionality missing?
- Can software really build models?
- Will standard reports be enough?
Part 1 ended partway through a discussion of the fourth question: How do you avoid discovering useless things? We pick up with the second and third examples of useless discoveries:
Example #2: It is easy to select predictor variables that correlated with the desired customer behavior during the "time slice" of the analysis, but no longer do. For example:
During the construction of a model to predict upcoming purchase volume from a national hard-goods retailer, a strong positive relationship was found between purchase activity and ownership of the retailer's private-label credit card. Further analysis determined that, during the "time slice" of customer activity being used to build the model, the private-label card had just been introduced. "Early adopters" of the card generally were the retailer's most loyal customers. Hence, incentives for sign-up were modest, and card ownership was a surrogate for loyalty.
However, by the time the model was to be moved into production, the rate of card sign-up had slowed considerably. The retailer had responded by becoming more aggressive in the provision of incentives. Hence, the relationship between card ownership and customer loyalty had changed dramatically. The prudent decision was to eliminate card ownership from further consideration in the modeling process.
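The drift described in Example #2 can be checked for before a variable is locked into a model: compute the predictor-target correlation separately within each time slice and look for changes in sign or magnitude. The following sketch illustrates the idea; the slice labels, card-ownership flags, and purchase volumes are entirely hypothetical.

```python
# Sketch: verify that a candidate predictor's relationship with the
# target behavior holds across time slices. All data is illustrative.

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# (slice_label, card_owner_flag, purchase_volume) -- hypothetical records
records = [
    ("launch", 1, 90), ("launch", 1, 85), ("launch", 0, 30), ("launch", 0, 25),
    ("later",  1, 40), ("later",  1, 35), ("later",  0, 45), ("later",  0, 50),
]

# Group the flag/volume pairs by time slice.
by_slice = {}
for label, flag, volume in records:
    by_slice.setdefault(label, ([], []))
    by_slice[label][0].append(flag)
    by_slice[label][1].append(volume)

# Correlate within each slice; a sign flip is a red flag.
results = {}
for label, (flags, volumes) in by_slice.items():
    results[label] = pearson(flags, volumes)
    print(f"{label}: r = {results[label]:+.2f}")
```

Here the card-ownership flag correlates strongly with purchase volume in the launch slice but not in the later slice, which is exactly the pattern that should prompt the variable's removal.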
Example #3: In the calibration phase of predictive modeling, exposing the quantitative procedure to the "best" cross section of data is complicated.
Techniques that produce "black boxes" are particularly troublesome. A black box might fit the data used to build it, but as your business evolves and produces new combinations of variable values the black box may start to generate nonsense.
Suppose your new software presides over a newly built marketing database. While your analyst knows to be careful because of having access to only eighteen months of data, did anyone mention this to the software? How do you tell the software that some customer relationships are three, four, or even ten years old, and that only a portion of their history is reflected in the database? Will there be problems a year from now, when the maximum elapsed time since the most recent purchase has increased to thirty months?
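One defensive step is to flag tenure values that are censored by the database's observation window, so that downstream models know a value at the cap may really mean "that long or longer." The window start date, field names, and example dates below are all assumptions for illustration.

```python
from datetime import date

# Hypothetical: the marketing database only captures activity from
# this date forward (an eighteen-month window as of January 2005).
WINDOW_START = date(2003, 7, 1)

def months_between(a, b):
    """Whole calendar months from date a to date b."""
    return (b.year - a.year) * 12 + (b.month - a.month)

def tenure_months(first_purchase, as_of):
    """Observed tenure in months, plus a flag marking values that are
    censored (truncated) by the observation window."""
    observed_start = max(first_purchase, WINDOW_START)
    censored = first_purchase < WINDOW_START
    return months_between(observed_start, as_of), censored

# A ten-year customer shows the same observed tenure as an
# eighteen-month customer -- but the flag tells them apart.
print(tenure_months(date(2000, 1, 1), date(2005, 1, 1)))
print(tenure_months(date(2004, 1, 1), date(2005, 1, 1)))
```

Carrying the censoring flag alongside the tenure field lets the analyst (and, ideally, the software) treat capped values differently from genuinely short histories.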
Question #5: Who Is Verifying What and How?
As previously discussed, it is easy to be misled by poorly conceived and carelessly executed data analysis. But, even full awareness of the business, data and analytical issues does not protect you from simple mistakes. Because you will have to translate your thoughts into organized instructions that a computer can execute, there is plenty of room for error.
The art of results verification is a specialized branch of data analysis. Multi-million dollar mistakes await those who believe that computers print nothing but gospel. Just ask Dick Sabot, Chairman and Co-Founder of Eziba.com, an online retailer and catalog company. According to the January 24, 2005 issue of The New York Times:
[T]he company had sent out tens of thousands of catalogs in late September and early October and waited for the phones to ring. After a couple of "grim, quiet" days, Mr. Sabot said, company executives checked with the business that mailed the catalogs on Eziba's behalf. They hoped to find that the mailing had simply been delayed, but instead discovered that the catalogs had been sent to the wrong addresses. Because of a computer error, the catalogs had reached the members of Eziba's mailing list who showed the lowest likelihood to respond to the catalog...
The revenue shortfall created by that event put the company in such a tenuous financial position that it was forced to halt operations temporarily on Jan. 14 while it sought cash to pay off creditors. Bill Miller, Eziba's chief executive, resigned amid the problems.
The moral to this sad story is that, although certain software features may prove helpful to minimize the occurrence of error, training is the real answer.
Question #6: Who Is In Charge of Data Integrity?
A CRM package without data is an empty shell. When stocked with data that is inaccurate, incomplete or inconsistent, it is a time bomb ready to explode.
In checking data integrity, there is no substitute for a human expert. But beware! Often, people who can make sense of system and file structures have no clue how to analyze and interpret data. Assuring data integrity is a data analysis task rather than a programming/IT job.
Should you believe IT when they say the data is clean and that all you need to do is load the files? If they are not familiar with the tools and techniques needed to answer your business questions, how were they able to assess the integrity of the data? How much has been learned from an audit of the data? If many basic facts about the business await the running of the "standard" reports, then on what did IT base its clean bill of health?
Investigation of data integrity should go far beyond the standard analysis of "valid" values, key counts and referential integrity. Instead, it should interrogate the completeness, consistency and usability of the data for robust CRM applications. For example, the following should be examined:
- Trends and seasonality patterns. Does the data capture the trends and annual patterns of activity accurately, and do the patterns hold steady from year to year and from marketing promotion to promotion? Are there gaps in the data?
- Longitudinal stability of field value distributions. Has the same coding scheme been used over time?
- Core relationships. Examples include the analysis of promotional depth by past segmentation variables, examination of purchases by product type, and counts by source characteristics.
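The longitudinal-stability check in the second bullet can be sketched as a year-over-year comparison of a coded field's value distribution. A large shift in the distribution, or the sudden appearance or disappearance of codes, suggests the coding scheme changed. The field values, counts, and the 10% threshold below are hypothetical.

```python
# Sketch: flag years in which a coded field's value distribution
# shifts sharply, hinting at a change in coding scheme.
# Codes and counts are illustrative only.

counts_by_year = {
    2003: {"A": 500, "B": 300, "C": 200},
    2004: {"A": 480, "B": 310, "C": 210},
    2005: {"A": 10,  "B": 290, "X": 700},  # "A" vanishes, "X" appears
}

def shares(counts):
    """Convert raw counts to proportions."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

# Total variation distance between consecutive years' distributions.
drift_by_pair = {}
years = sorted(counts_by_year)
for prev, curr in zip(years, years[1:]):
    p, q = shares(counts_by_year[prev]), shares(counts_by_year[curr])
    drift = sum(abs(p.get(k, 0) - q.get(k, 0)) for k in set(p) | set(q)) / 2
    drift_by_pair[(prev, curr)] = drift
    flag = "  <-- investigate" if drift > 0.10 else ""
    print(f"{prev} -> {curr}: distribution drift = {drift:.2f}{flag}")
```

In this illustration the 2004-to-2005 transition is flagged, which is the kind of finding that should be resolved with a human expert before the field is trusted in any CRM application.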
When it comes to the construction of a marketing database with accurate, complete and consistent data (and, just as importantly, the development of a process to maintain that integrity), it is hard to imagine a cookie-cutter solution. Nor is it advisable to entrust the task to anyone but business-savvy, computer-literate, seasoned data analysts.
Question #7: How does the Interface Deal with Query Complexity?
GUIs (graphical user interfaces) can be wonderful. They allow you to construct queries by pointing and clicking instead of typing. And, for a two-finger typist, they are a relief.
The problem is that most GUIs make it easier for you to construct SQL or 4GL (4th-generation language) statements, but still require you to know the technical constructs of their systems. They offer step automation: less effort for the mindless steps. Nevertheless, although entering a query by dragging and dropping saves keystrokes, it does not eliminate the responsibility for learning the effect of what you are dragging and where you are dropping!
It is far preferable for the interface to achieve a conceptual shift: the elimination of entire groups of steps that, were it not for the need to spoon-feed the computer, would not even be part of the way you think. Such a shift is vastly more powerful: it automates, or even completely hides, a set of steps that together are conceptualized as a single task. This saves not only physical but also mental energy.
An automobile analogy is the automatic transmission. With an automatic transmission, you do not have to think about which gear is appropriate for your current driving conditions. Without having to think about when and how to shift, a human being can focus entirely on avoiding other automobiles and getting to the desired destination in a timely fashion.
Data analytics, which arguably is one of the more complex computer-assisted endeavors, is challenging enough in its own right. It is preferable for the software to allow you to "show" or "lead" it to a desired final result, and in a manner that is natural to you. Software that requires you to tell it how to apply its own operators, which are foreign to you, lengthens the learning curve. It also increases the likelihood of mistakes and rework, and reduces the available time for more creative activities.
Question #8: Was the Testing Ground the Same as Your Battle Ground?
Look at the past and present clientele of the business intelligence software vendor. Ask which businesses were used as test beds for prototyping and stress testing the software. If the industries or companies mentioned are not known for their analytical CRM expertise and the use of accountable marketing media, what makes you believe the software is appropriate for you?
These eight questions are applicable to most potential users of business intelligence software. However, the specific circumstances of your business will dictate just how important each question is and generate other specific questions. The important thing is not to be overtaken by the hype!