Keep Or Clean Out?

When your modeling data are refreshed, should you keep multiple years of scores or only work with what’s most current?

Lately I’ve gotten the same question from a number of repeat clients who use our modeling scores (like Major Giving Likelihood, Target Gift Range, Principal Giving Solution, etc.).  It’s a good question in its own right, and it’s had me wondering how your offices have approached it.

They’re asking this…

If we have scores from our last round of modeling in our primary database, should we keep those?  Or should we clean them out?

Most often, clients tell me they hope to find meaningful patterns in the donors whose giving increased, or to identify some other actionable insight that will improve their strategies.  I appreciate the intent behind this question, because data by itself doesn’t raise money.  Doing things with it makes the difference, and this question points to an action-oriented idea.

However…

I almost always vote for “out with the old, in with the new.”

First, there’s a technical reason.  Target Analytics scores are rooted in statistical analysis, and year-to-year comparison can be tricky.  Variables and data used in the modeling process likely shift over time—either because the client supplies, redefines, or changes the data they share with us or because we add and update data from the public space to make the process more robust.  Without controlling the variables involved, determining why the scores changed—cause and effect—can be murky.

Second, reality sets in.  To-do lists are long in development offices, and I haven’t met a client who actually has time to tackle a project like this.  They’re excited about it and they want to do it, but there’s always a proposal to write, an introduction to find, a list to get to the mail house—and there should be.  If it’s a choice between forward progress and looking back at old scores, I don’t think there’s any question of where time is better spent.

So I typically recommend clients do the following:
  • Export the details they want archived from their primary database and securely save the file so the older information isn’t lost entirely
  • Determine whether they want all new modeling scores in their primary database, whether it makes sense to include only a segment of high-priority scores, or whether they prefer that modeling data live in ResearchPoint only (assuming RP is active in the office)
  • Scrub older data before importing new scores, or use “link and sync” between Raiser’s Edge and ResearchPoint to update details
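The steps above can be sketched in code.  Here is a minimal illustration in plain Python: archive the outgoing scores to a dated file, then keep only the new round in the working set.  All field names (`constituent_id`, `mg_likelihood`), file names, and data values are hypothetical stand-ins, not actual Raiser’s Edge or ResearchPoint exports—adapt them to your own export layout.

```python
# Sketch of the archive-then-replace workflow: save old scores before
# clearing them, then work only with the newest round. Field names and
# paths are illustrative placeholders, not a real RE/RP schema.
import csv
from datetime import date

def archive_and_replace(old_scores, new_scores, archive_path):
    """Write the outgoing scores to an archive CSV, then return only
    the new scores keyed by constituent ID (old values dropped)."""
    # Step 1: export and securely save the old details so nothing is lost.
    with open(archive_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["constituent_id", "mg_likelihood"])
        writer.writeheader()
        writer.writerows(old_scores)
    # Step 2: the refreshed working set contains only the new round.
    return {row["constituent_id"]: row["mg_likelihood"] for row in new_scores}

# Hypothetical sample data for illustration only.
old = [{"constituent_id": "C001", "mg_likelihood": "850"},
       {"constituent_id": "C002", "mg_likelihood": "420"}]
new = [{"constituent_id": "C001", "mg_likelihood": "910"},
       {"constituent_id": "C003", "mg_likelihood": "600"}]

current = archive_and_replace(old, new, f"scores_archive_{date.today()}.csv")
print(current)  # only the new round remains in the working set
```

The point of the sketch is the order of operations: archive first, scrub second, import third—so a query against the live database only ever sees one generation of scores.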
How does your team handle multiple years of scores?  Does anyone have a different viewpoint, experience, or recommendation?  We’ll keep our eye on the comments section for your insights!


3 Comments
Lora Cowan Jul '18
We have done both through the years with different types of scores or models.  And while our intentions when keeping the old to compare to the new were really good, we didn't follow through as we thought we would AND we would have been comparing apples to oranges, as mentioned above.  We also ran into trouble with queries and exports that required a rework of how we labeled our ratings - huge project!  We are preparing for more ratings and models to arrive, and my supervisor and I have discussed cleaning out the old soon - either now or very soon after the new is loaded.  I'm ready to have new ratings on a cleaner, easier-to-read slate, where they aren't mixed in with or confused with the old ratings by other users.
So, we did NOT clean out our old scores. I have been running queries and it's causing problems. Can we still go in and delete the old scores? 
I agree with the "out with the old" comment.  Development offices and staff typically have limited resources and time.  We need to live in the now, from my point of view, and see where our prospects/donors are at present day.  That being said, the history/background of a donor/prospect has great value, but for us that is something we hope to uncover in a discovery meeting.
