Your In-House Databases Provide an Internal Skip Tracing Process

  • Written by Rodney Bowers


The new business files have just arrived. You’re eager to get started and immediately begin the usual steps: run analytics to categorize the accounts, assign the categories to the appropriate teams, build the dialer campaigns, choose the appropriate letter plan and load the collection system. With a few clicks of the mouse, the data is released and the teams begin working the accounts.

Then the inevitable: wrong numbers materialize, phone numbers turn out to be disconnected, the person is no longer at the address (or never was), and the person answering the phone does not know the debtor or will not cooperate in helping you locate the debtor. Over time, mail is returned for incorrect addresses. The new business must move to the next layer of the process: skip tracing to locate the debtor.

The art of skip tracing has become a science, with many specialized solutions and approaches driven by new technologies, the proliferation of bulk databases and society’s frenzied use of social networks.

We are seeing skip tracing extend to the common denominators of search engines and social networks. Search engines such as Google, Yahoo and Bing are used regularly. Social networks such as YouTube, Facebook and MySpace are easy to access and use. Even professional sites such as LinkedIn can provide opportunities for locating debtors. All are free to access, but costs arise from the amount of time needed to get accurate results.

We continue to see wide use of bulk database services in our industry. The service providers are expanding their offerings to cover all the data an agency needs, rather than a single specialty service as before. They are also improving their analytics with matching logic and including waterfall processes within their service portfolios. They can return results in real time (an immediate response) or in batch (a file processed overnight). They are also making it easier to incorporate your data file structures and to interface with your collection systems.

But there is one area that every agency should investigate prior to any skip tracing effort: its own in-house databases. After all, every agency has received hundreds of thousands of large files containing millions of records from a wide population of debtors. It’s sitting there in your collection system or your data warehouse (if you have one; more on that later). Why not use technology to provide an internal skip tracing method before the traditional activities or the use of service providers?

What new approaches should we use? Apply data mining, business intelligence and regression analysis to your existing data. Data mining, a branch of computer science with roots in artificial intelligence, is the process of extracting specific records or patterns from your data. Data mining is then combined with analytical techniques to drive business intelligence for your company. Business intelligence takes the raw, mined data and provides historical, current and future views of business operations. Taking the concept further, you can apply statistical methods to the data, using regression analysis to sharpen the business intelligence results and increase the accuracy of the analysis. Regression analysis is a statistical technique for analyzing and modeling several variables, where the focus is on how a dependent variable changes in response to one or more independent variables.
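To make the regression idea concrete, here is a minimal sketch of simple linear regression in pure Python. The data is entirely hypothetical: it imagines that, for a sample of accounts, we track the months since the last verified address (the independent variable) against the right-party contact rate (the dependent variable), and fit a least-squares line to see how contact rates decay.

```python
# A minimal sketch of simple linear regression in pure Python.
# The data below is hypothetical: x is months since the last verified
# address, y is the fraction of dialer attempts reaching the right party.

def linear_regression(xs, ys):
    """Return slope and intercept of the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance of x and y divided by variance of x.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

months = [1, 2, 4, 6, 9, 12]                          # independent variable
contact_rate = [0.42, 0.38, 0.30, 0.24, 0.15, 0.08]   # dependent variable

slope, intercept = linear_regression(months, contact_rate)
print(f"contact_rate = {slope:.3f} * months + {intercept:.3f}")
```

On this made-up sample the fitted slope is negative, quantifying how quickly contact rates fall as address information ages; the same fit could be run in Excel or any of the tools mentioned below.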

These techniques are not new. Bayes’ Theorem appeared in the 1700s and provided a basis for detecting patterns within data. The computer industry has since produced many tools for mining data and predicting the patterns within it. Even Microsoft Excel has powerful regression analysis capability built in! From the open source world, you can use RapidMiner, KNIME or the R Project as tools for defining a data mining process.

A data mining process has four steps: clustering, classification, regression and association rule learning. Clustering is the step of discovering structures within the data that are in some way similar; for instance, street addresses may indicate an area of apartments versus a subdivision of homes. Classification is the step of taking those structures and applying the rules to new data as it arrives; you might check whether a person has moved from an apartment address to a home address. Regression is the step of finding the function that predicts the pattern with the least amount of error; here you would begin to create the algorithms that could accurately predict the whereabouts of a debtor based on the home-versus-apartment patterns. Association rule learning is the step where the function is applied to other variables to predict behavior: if the debtor has moved from a home to an apartment, how would buying patterns change, and could that data be used to obtain accurate contact information?
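As a toy illustration of the clustering and classification steps, the Python sketch below uses a hypothetical record layout and a deliberately simple hand-written rule: addresses containing an apartment marker are grouped apart from single-family addresses, and the same rule then classifies new records as they arrive. A real process would learn such rules from the data rather than hard-code them.

```python
# Toy clustering/classification sketch on a hypothetical record layout.
# The marker list and rule are illustrative assumptions, not a real model.

APARTMENT_MARKERS = ("apt", "unit", "#")

def address_type(street):
    """Classify a street address as 'apartment' or 'home' by simple markers."""
    lowered = street.lower()
    return "apartment" if any(m in lowered for m in APARTMENT_MARKERS) else "home"

def cluster_by_address_type(records):
    """Clustering step: group existing records by the address-type structure."""
    clusters = {"apartment": [], "home": []}
    for rec in records:
        clusters[address_type(rec["street"])].append(rec)
    return clusters

history = [
    {"debtor": "J. Smith", "street": "101 Oak St"},
    {"debtor": "J. Smith", "street": "4500 Elm Ave Apt 12B"},
    {"debtor": "R. Jones", "street": "77 Pine Ct"},
]

clusters = cluster_by_address_type(history)
# Classification step: apply the same rule to a newly arrived record.
print(address_type("900 Birch Rd Unit 3"))   # -> apartment
```

Comparing a debtor’s cluster memberships over time (here, J. Smith appearing in both groups) is the raw material the regression and association steps would then work on.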

All of this involves higher-order manipulation of the data. Now let’s get back to the basics.

Be aware that manipulating large volumes of data for higher-order analytics and business intelligence is very taxing on computer systems. For this reason, it is important to off-load the analysis to quiet periods of the day so that it does not adversely impact your collection system during collection activities. One technique is to move an exact copy of your data onto another node, so that the analytics team has its own area to work in and the heavy computational load does not affect the production systems. The best alternative is to create a data warehouse that contains all of your agency’s data, both old and new.

At its simplest, a data warehouse is nothing more than the off-loaded data from your production systems, but it adds functionality that standard production databases lack and is optimized for data mining and analytics. A data warehouse is made up of three layers: staging, integration and access. The staging layer can be used by the development team for analysis and for supporting the production databases. The integration layer combines the various independent databases and separates them from casual, daily users. The access layer is where the analytical users get the data out.

The key personnel in extracting and analyzing the data can be your existing database administrator (DBA) and business analyst (BA). These resources would use the tools embedded in the database, based on structured query language (SQL), to create the queries that feed the data mining, business intelligence and regression analysis methodologies.
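As a sketch of the kind of query a DBA or BA might write, the following uses an in-memory SQLite database and an invented two-client schema (the table names, column names and sample values are assumptions for illustration, not any real collection system’s layout). The join fills in a phone number missing from one client’s file using a match on a shared identifier found in another client’s file.

```python
# Sketch: cross-client skip tracing with SQL, on a hypothetical schema.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE client_a (debtor_id TEXT, name TEXT, phone TEXT);
    CREATE TABLE client_b (debtor_id TEXT, name TEXT, phone TEXT);
    INSERT INTO client_a VALUES ('D100', 'J. Smith', NULL);
    INSERT INTO client_b VALUES ('D100', 'J. Smith', '404-555-0142');
""")

# Join the two client files on the shared identifier; take client_b's
# phone wherever client_a's is missing.
cur.execute("""
    SELECT a.debtor_id, a.name, b.phone
    FROM client_a AS a
    JOIN client_b AS b ON a.debtor_id = b.debtor_id
    WHERE a.phone IS NULL AND b.phone IS NOT NULL
""")
matches = cur.fetchall()
for debtor_id, name, phone in matches:
    print(debtor_id, name, phone)   # -> D100 J. Smith 404-555-0142
conn.close()
```

In practice the match key and the fuzziness of the join (exact identifier versus name-and-address matching logic) are where most of the design effort goes.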

All agencies have millions of records in their databases. Would it not be logical to use modern techniques and tools to analyze all of the data to find missing information on debtors? Would it not be likely that a debtor in one client file may exist in another client file? That the debtor information that is missing in one database could be found in another, all of which are inside your agency?

If agencies were to profile their data, both current and historical, could the patterns of debtor movement and behavior be discovered? Could these patterns be used to predict where the debtor would surface next? Could the mining of your own existing data increase your chances for right party contacts?

By mining your own data, you will uncover key information that will help your agents, making them more productive and efficient and driving revenue to your bottom line.

Rodney Bowers is the principal consultant at MinervaWorks, a technology consultancy in Atlanta, GA.