Designing a Customer Happiness Report: Five decisions you have to make

Recently I've noticed a few companies publishing customer happiness reports. They're not totally new, but the transparency can be powerful. If you've never seen one, search for “customer happiness reports” and you'll find all sorts of them out there. Some companies even create monthly infographics from their customer service stats.

In essence, they share the current status of a business by sharing insights and details about some of their business metrics (like tickets created, closed, and the average length of time before first contact).

What you notice, if you read a few of them, is that they're very similar. And so, if you're new to creating a customer happiness report, you'll likely think there's some sort of standard they're all relying on.

But there's not.

More importantly, I think they can all be improved. If your startup doesn't have a customer happiness report yet, here's how you might design a more focused one.

I think you can do it by paying attention to five critical decisions.

The First Decision: Existing or New Customers?

The very first question I'd be asking myself when designing a report like this is pretty simple. Who is my audience?

Is it existing customers? Or is it new potential customers?

You can see almost immediately why those are different. Existing customers may have open tickets remaining. They're likely to want to know the rate at which those are being closed. Or they want to know whether their experience is typical or an anomaly.

New customers are a different audience. They might use the information to gauge a product's stability and ease of use. They may also use it to better understand the level of support that comes with it.

How you answer this first question will immediately impact how you design your report. And no, you can't say “both.” You design a communication (any communication) with a singular focus and message if you want it to be effective. Sure, others will see it, but that's an ancillary benefit.

The Second Decision: Raw or Percentages?

You'll see that many companies deliver the raw statistics. They even show you the incoming tickets per day.

A lot of people will tell you that the pure data is what's important. But honestly, I'm not so sure. I don't know if anyone can determine if 572 is high or low when it comes to monthly tickets, or if 47 on a single day is high or low. How do we determine these things?

I think the report is more effective when I see percentages. Maybe I'm the only one.

But telling me that 27 billing tickets were closed in a month (hypothetical data) isn't nearly as important as telling me that 80% of billing tickets were closed in the same month they were opened.
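As a rough sketch of the idea, here's one way to turn raw counts into same-month close rates. All the numbers and category names below are hypothetical, purely for illustration:

```python
# Hypothetical monthly ticket counts; categories and numbers are illustrative.
opened = {"billing": 34, "support": 120}   # tickets opened this month
closed = {"billing": 27, "support": 96}    # of those, closed the same month

def close_rates(opened, closed):
    """Convert raw counts into same-month close-rate percentages."""
    return {
        category: round(100 * closed.get(category, 0) / count)
        for category, count in opened.items()
        if count > 0  # skip empty categories to avoid dividing by zero
    }

print(close_rates(opened, closed))  # {'billing': 79, 'support': 80}
```

The percentage is the number a reader can actually judge; the raw counts only matter to the team that already knows its own volume.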

The Third Decision: Snapshot or Trends?

Many of the customer happiness reports I've seen give me a month's snapshot of data.

Here's the issue. I care less about the specifics and care much more about the trends.

If you saw 4,000 tickets created four months ago, and now you're only seeing 1,000 created a month, you're trending down. That's even more important if you can explain the downward shift (because you refactored some modules, for example).

It's the trends, not the snapshot, that help tell the story.
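The trend the paragraph above describes is just the month-over-month change across a series of counts. A minimal sketch, with hypothetical numbers:

```python
# Hypothetical monthly ticket-creation counts, oldest month first.
monthly_created = [4000, 3100, 1900, 1000]

def month_over_month_change(counts):
    """Percentage change between consecutive months (negative means trending down)."""
    return [
        round(100 * (curr - prev) / prev, 1)
        for prev, curr in zip(counts, counts[1:])
        if prev > 0  # skip months with no baseline to compare against
    ]

print(month_over_month_change(monthly_created))  # [-22.5, -38.7, -47.4]
```

A consistently negative series like that one is the story; any single month's count, on its own, isn't.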

The Fourth Decision: Internal or External Grouping?

One of the ways these customer happiness reports are alike is that they break down their incoming tickets into categories that match their business.

You'll see billing tickets, support tickets, tickets for specific products, or tickets from one of their internal segments.

Here's the thing: those groups are helpful to the company itself, but I question the power of those groupings for someone on the outside of the business.

Now, this may be an artifact of the support system used, or how it's configured, but I would imagine an externally-driven perspective on grouping would be different (and potentially more powerful).

Imagine I could see the difference between data issues, usability issues, training issues, and enhancement requests. That might give me much better insight, right?

Especially as I watch them change, month over month.
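One way to get that externally driven view is a simple remapping from internal tags to customer-facing groups. Everything below (the tag names, the mapping, the groups) is hypothetical, just to show the shape of the idea:

```python
from collections import Counter

# Hypothetical mapping from internal ticket tags to customer-facing groups.
EXTERNAL_GROUPS = {
    "billing": "data issue",
    "import-bug": "data issue",
    "ui-confusion": "usability issue",
    "docs-request": "training issue",
    "feature-idea": "enhancement request",
}

def regroup(ticket_tags):
    """Count tickets by externally meaningful group rather than internal tag."""
    return Counter(EXTERNAL_GROUPS.get(tag, "other") for tag in ticket_tags)

monthly_counts = regroup(["billing", "ui-confusion", "ui-confusion", "feature-idea"])
print(monthly_counts)
```

The internal tags never change, so the same month-over-month comparison still works; only the labels the reader sees are different.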

The Fifth Decision: Descriptive or Predictive?

At the end of the day, the biggest question is: so what? What's the point of all this data?

I'm not saying this flippantly. I'm being sincere. The core of the issue is that I hope you're sharing this information with me for a reason. One that I can embrace and take away with me. One that has an impact on how I think about you (and how I talk about you).

If you're going to spend time crafting this whole report, wouldn't it be awesome if it was more than just a descriptive report of where things had been in the past month?

This may be the biggest shift I would make in these reports. Because at the core of this whole report, I am hoping this data is changing behavior. Aren't you?

And the behavior I'm hoping for isn't just “we're going to try harder to get faster.” That's not what I want.

Instead, I want insight that has impact.

I want to notice a recent trend in usability tickets in part of the product line and then decide that we're going to spend a month eradicating them. Or as many as we can. In that way, the data I see today tells me where we'll spend time in the next month, months, or quarter. And it gives me a reasonable ability to predict a change that's coming.

Wrapping it up in a Bow

In the end, my goal for a happiness report is that someone walks away and can summarize it. Simply.

I do a lot of public speaking, and when I craft a talk I know there's a good chance people will forget most of what I've said. So I start with my main point. A single takeaway. And it's often a simple and memorable quote.

“If you protect clients from their dumb ideas, they'll gladly pay you to help their good ones.”

“How you think about how you think will affect how you think.”

These are some of the recent lines that have been at the core of my talks. They are easy to understand. Easy to digest. And easy to share.

I say this because I think the goal of a happiness report ought to be the same. Some key data should lead to a key insight that is encapsulated and driven into the core of the entire report. The theme, if you will.

When you do that, there's a good chance people will share, talk, and tweet about your insight. And that, after all, is the whole point, isn't it?

Not data for data's sake. But insight for impact.