(Editor’s note: this is the first chapter in an ongoing project to identify ways CRM users can see greater return on their CRM investments by making greater and better use of features that often are ignored. Instead of pretending I know all these features, I’ve reached out to some of the best consultants and CRM resellers I know. We’re going to wrap these chapters up into a white paper, with some insight on how to avoid letting these features slip by in the first place. Here’s the first chapter, starring Mike Snyder of Sonoma Partners.)
We buy CRM in part to understand what’s going on in our businesses – and, to do so, we collect data about customer behavior, sales and marketing effectiveness, and the level of service we provide. That data is subjected to analysis that presents the facts of the business in numbers – cold, hard, indisputable numbers.
But when it comes to understanding the effectiveness of CRM usage within your business, do you use the same degree of analytical rigor?
The answer is probably no, says Mike Snyder, co-founder and principal of Sonoma Partners, one of the nation’s leading resellers of Microsoft Dynamics CRM. “We see a lot of development effort put into getting systems up and running to address initial problems, but very few customers have taken the time to quantify how the system is used” – even though most CRM applications include features that let you analyze usage of the application: who’s using it, how much time they spend in it, and how many reports they run during a given period.
If you’re not using it, don’t feel alone, says Snyder. At most companies, “There’s not a lot of analysis of what’s actually happening.”
Instead, he says, anecdotal evidence is used as the basis for judging success and failure – a dangerous and imprecise method that weighs the most ardent opinions most heavily. Even more hazardous is the use of this anecdotal evidence in building the next stages of CRM. “IT is often told or guesses at what CRM users want, but they don’t really know,” Snyder says. “IT doesn’t often get the chance to see CRM users in action – they never get to do a ‘ride-along’ before embarking on projects. Then, IT performs a customization, and does it right, only to be told later that the users never use it. That’s a waste of time and money that could have addressed real problems.”
Real problems – like the ones internal analytics in CRM could have identified.
How do you tap into this potential? First, learn the reporting capabilities your CRM application has built into it. Next, Snyder suggests, put someone in charge of monitoring usage statistics.
“Ideally, this person should have a ‘tweener’ role – one foot in IT, one foot on the business side,” he says. “This person might report to IT, but he’s not writing code. Instead, he’s ‘Switzerland’ – he can be neutral and help decision makers see what’s actually going on.”
This “monitor” should examine usage metrics monthly, or quarterly at the very least, Snyder advises. Decisions on how to improve CRM usage and performance can then be based on real user behaviors, not on best guesses.
There’s a secondary benefit to bringing these analytic tools to bear on usage issues: they allow you to spot areas in the business where human factors are impairing adoption. “These tools allow you to see who the users are, and you can understand how different groups or departments use CRM,” says Snyder. “If there’s a difference, why is that? Could it be the attitudes of the leaders of the groups? If one group believes in CRM and another doesn’t, the numbers will reveal that.”
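The kind of group-by-group comparison Snyder describes can be done with very little tooling once usage data is exported. As a minimal sketch – assuming a hypothetical audit-log export of (user, department, activity date) records, since the actual fields and extraction method vary by CRM product and version – here is how a “monitor” might count distinct active users per department per month:

```python
from collections import defaultdict
from datetime import date

# Hypothetical CRM audit-log records: (user, department, activity_date).
# Real systems expose similar data, but the schema and export mechanism
# differ from product to product; this shape is an assumption.
activity_log = [
    ("alice", "Sales",   date(2011, 3, 2)),
    ("alice", "Sales",   date(2011, 3, 9)),
    ("bob",   "Sales",   date(2011, 3, 4)),
    ("carol", "Support", date(2011, 3, 7)),
]

def monthly_active_users(log):
    """Count distinct active users per (department, year-month) bucket."""
    buckets = defaultdict(set)
    for user, dept, day in log:
        buckets[(dept, day.strftime("%Y-%m"))].add(user)
    return {key: len(users) for key, users in buckets.items()}

print(monthly_active_users(activity_log))
# {('Sales', '2011-03'): 2, ('Support', '2011-03'): 1}
```

If one department shows two active users a month while a comparable one shows twenty, that gap is exactly the kind of number Snyder says will reveal a leadership or attitude problem.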