How trustworthy are your Salesforce reports?
- AUTHOR Luke Duncan
- October 15, 2013
One of the most common salesforce.com questions I get is "Why does this report tell me one thing, and this other one tell me something different?" The answer is almost always "the two reports are looking at different things." Even when both reports are accurate for the data they pull, a report that produces a result the end user isn't expecting erodes trust in the system and in reporting as a whole. Without trust, the data is not actionable and therefore not useful.
One example, and a major reason we created Full Circle CRM, is the disconnect that arises when marketing users run reports in an entirely separate database from Salesforce (Marketo, Eloqua, etc.) and then compare the results in their sales database to what's in their marketing database. As most of you have experienced, those two databases are almost never completely in agreement. And even within Salesforce itself, there are many pitfalls that can lead to misinterpretation of report data.
Another common scenario is two reports designed to look at the same metric. Say you are measuring Opportunity conversion rates, but your monthly report filters on the Opportunity create date while your quarterly report uses the Opportunity close date. If you recognize the difference in the data being pulled, both reports are equally valid; they simply tell you slightly different things. But you cannot get meaningful analysis by comparing the two against each other without realizing that fundamental difference in the underlying data.
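To make the create-date-versus-close-date pitfall concrete, here is a minimal sketch in Python. The records, field names, and dates are hypothetical, not actual Salesforce data or API calls; the point is simply that the same "conversion rate for October" question yields different answers depending on which date field the filter uses.

```python
from datetime import date

# Hypothetical Opportunity records: a create date, a close date, and an outcome.
# These names and values are illustrative only, not a Salesforce schema.
opportunities = [
    {"created": date(2013, 9, 5),   "closed": date(2013, 10, 2),  "won": True},
    {"created": date(2013, 10, 1),  "closed": date(2013, 10, 20), "won": False},
    {"created": date(2013, 10, 12), "closed": date(2013, 11, 3),  "won": True},
    {"created": date(2013, 10, 25), "closed": date(2013, 12, 1),  "won": False},
]

def conversion_rate(opps, date_field, start, end):
    """Conversion rate over opportunities whose `date_field` falls in [start, end]."""
    in_range = [o for o in opps if start <= o[date_field] <= end]
    if not in_range:
        return 0.0
    return sum(o["won"] for o in in_range) / len(in_range)

october = (date(2013, 10, 1), date(2013, 10, 31))

# Same metric, same month -- but different filter fields give different answers.
by_created = conversion_rate(opportunities, "created", *october)  # 1 win of 3 created
by_closed = conversion_rate(opportunities, "closed", *october)    # 1 win of 2 closed
```

Both numbers are "the October conversion rate," and both are correct for the data each pulls; they just aren't comparable to each other.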
In a similar vein, summary report formulas are specific to a single report, and even though two reports may use the same label, the underlying formulas can differ. If the conversion rate in one report excludes "in progress" records and the other does not, you will see drastically different numbers.
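The "in progress" discrepancy above is just a difference in the denominator, which a short sketch makes obvious. The stage values and record set here are hypothetical, chosen only to show how two formulas with the same label can diverge.

```python
# Hypothetical records with a stage field; "In Progress" records are still open.
records = [
    {"stage": "Won"},
    {"stage": "Lost"},
    {"stage": "In Progress"},
    {"stage": "In Progress"},
    {"stage": "Won"},
]

def rate_including_open(recs):
    # Counts every record, open or closed, in the denominator.
    return sum(r["stage"] == "Won" for r in recs) / len(recs)

def rate_closed_only(recs):
    # Excludes "In Progress" records from the denominator entirely.
    closed = [r for r in recs if r["stage"] != "In Progress"]
    return sum(r["stage"] == "Won" for r in closed) / len(closed)

rate_all = rate_including_open(records)   # 2 wins / 5 records
rate_closed = rate_closed_only(records)   # 2 wins / 3 closed records
```

Two reports could label both of these "Conversion Rate," and a reader comparing them would have no way to know the formulas differ without opening each report.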
A subtler mistake is using fields in reports in ways that are misleading. A good example, which came up in a question on my last blog post, is using the statistics fields from Campaigns in a Campaign Member report. With custom report types you can often include fields that have a specific meaning when reporting on one object but are confusing when used to report on another. Campaign statistics illustrate this well: if you're reporting on individual Campaign Members, the performance figures for the entire Campaign they belong to won't make sense, especially if you are only looking at Campaign Members from a subset of the time the Campaign was active.
It can be powerful to let users customize reports and dynamically get to the answers they need, but it is also easy to end up with a collection of reports that people think should say the same thing yet are all very different. In larger organizations, where this can become a major issue, it's crucial to have a process in place to ensure that everyone consuming the reports understands what they mean. This is such a critical issue that some organizations have a single person who is responsible for creating and vetting every report.