Overcoming Inconsistency in Your Win / Loss Data

This blog was written by Zach Golden, Director of Client Management at Anova.

Stop me if you’ve experienced this before: you’re looking at data to help inform a business decision, but the data seems contradictory.

Perhaps it should be "stop me if you haven't experienced this," because it's likely you have. Collect enough data points about anything and it's likely you will find at least two that contradict one another.

Win / loss data is no different. It's common for companies with win / loss programs to pause and scratch their heads when they start to review findings:

Our solution’s ease of use is being cited as both a strength and a weakness?

Our sales performance is a factor when we win but also when we lose?

Our pricing is competitive but also a reason why we are losing?

At Anova, we see these seeming contradictions, what we typically call inconsistencies, every day because of our position as a win / loss partner to our clients. And while sometimes the data can be a head-scratcher, our job is to help our clients understand the subtle nuances that cause this inconsistency.

Here are some of the main reasons why you may be seeing inconsistencies in your own data:

High Level Categorizations

When going through feedback from your win / loss interviews or surveys, it's likely you have to bucket or tag the responses into categories that make sense: sales, product functionality, price, etc.

Relying on these broad categorizations is helpful, both to get the broad brush strokes of a theme (e.g., 80% of customers like our product functionality) and to understand where different areas of your business stand in the eyes of the market (e.g., 80% of customers like our product functionality, compared to only 20% who like our user experience… the market thinks our product can do everything but is overly complex).

However, because of how broad these categories are, it can be confusing when that same category is seen both positively and negatively (e.g., 80% of customers cited our product as a strength, but 75% said it was a weakness… is our product good or bad?!). Part of this is because the category itself has so many specifics to it. For instance, for one client of ours in the supply chain management space, their product is highly praised for things like supply planning and demand planning, but forecasting and the ability to generate custom reports are commonly cited as weaknesses.

Getting to deeper levels of specificity helps our client understand exactly what the market appreciates, and where they still have work to do.
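If your tagged feedback lives in a spreadsheet or a data frame, drilling down can be as simple as grouping on a sub-category tag instead of the broad one. Here is a minimal sketch in Python with pandas, using made-up data and hypothetical column names (category, sub_category, sentiment) purely for illustration:

```python
import pandas as pd

# Hypothetical interview feedback, tagged at both the broad category level
# and a more specific sub-category level, with a simple sentiment flag.
feedback = pd.DataFrame({
    "category":     ["product"] * 5,
    "sub_category": ["supply planning", "demand planning", "forecasting",
                     "custom reports", "supply planning"],
    "sentiment":    ["positive", "positive", "negative", "negative", "positive"],
})

# At the broad level, "product" shows up as both a strength and a weakness.
print(feedback.groupby("category")["sentiment"].value_counts(normalize=True))

# Drilling into sub-categories shows which capabilities drive each sentiment.
print(
    feedback.groupby(["sub_category", "sentiment"]).size().unstack(fill_value=0)
)
```

The broad view only tells you "product" is both praised and criticized; the sub-category view tells you which capabilities are driving each side.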

Segmentations

Sometimes the specificity of feedback is evident, but not who is saying it. That’s where doing some sleuthing becomes necessary. Segmenting the data allows teams to pinpoint if different populations or personas are providing inconsistent feedback about their experience interacting with your company.

For example, in one recent client program we saw large inconsistencies in ratings of the client's pricing. While discounting was a common practice, the client still believed it should be in the ballpark for the majority of its deals. When we applied a deal-size segmentation, we uncovered that the client was largely seen as competitive in large enterprise deals, but satisfaction with its pricing was much lower in SMB deals. The reason: many new entrants were targeting the SMB market and trying to undercut on price.

Our client used this greater granularity, knowing which segment of the market was providing which type of feedback, to roll out an updated pricing model for its starter package, commonly used in the SMB market, while maintaining its legacy pricing model in its enterprise business.
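As an illustration, here is a minimal sketch of what a deal-size segmentation might look like if your ratings sit in a data frame; the data, segment labels, and column names are hypothetical:

```python
import pandas as pd

# Hypothetical deal-level data: a pricing satisfaction rating (1-10) per deal,
# plus a deal-size segment to split on.
deals = pd.DataFrame({
    "segment":        ["Enterprise", "Enterprise", "Enterprise", "SMB", "SMB", "SMB"],
    "pricing_rating": [8, 7, 9, 4, 5, 3],
})

# The blended average hides the split...
print("Overall:", round(deals["pricing_rating"].mean(), 1))

# ...while segmenting by deal size shows where the dissatisfaction sits.
print(deals.groupby("segment")["pricing_rating"].mean())
```

The same idea works with any segmentation you care about: persona, region, industry, or competitor faced.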

Team Performance Level

Perhaps the most head-scratching comes from analyzing a sales team's performance metrics. Sales leaders want definite truths about where their team is strong and weak so they know where to prioritize investment and training.

However, looking at the team's average scores on metrics like product demo quality or differentiation doesn't always tell the whole story. It is common to see something rated highly in wins but poorly in losses. This is especially true given how much turnover so many organizations have experienced over the last few years, which has led to wide variances in salesperson tenure and product knowledge. Different team members have different skill levels.

Not only do technical skills, tenure, and knowledge all differ amongst your team members, but consistency in and of itself is a hard skill to master.
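To make the point concrete, here is a small, hypothetical sketch of why team-level averages can mislead: two reps with similar average demo-quality ratings can look very different once you examine the spread and split by outcome. The data and column names are invented for illustration:

```python
import pandas as pd

# Hypothetical per-interview ratings of product demo quality (1-10),
# with the salesperson and deal outcome attached.
ratings = pd.DataFrame({
    "rep":          ["A", "A", "A", "A", "B", "B", "B", "B"],
    "outcome":      ["win", "loss", "win", "loss", "win", "loss", "win", "loss"],
    "demo_quality": [9, 3, 8, 4, 7, 6, 7, 6],
})

# Similar averages, very different spreads: rep A swings between strong and
# weak demos, while rep B is steady.
print(ratings.groupby("rep")["demo_quality"].agg(["mean", "std"]).round(1))

# Splitting by outcome reveals the classic pattern: rated highly in wins,
# poorly in losses.
print(ratings.groupby(["rep", "outcome"])["demo_quality"].mean())
```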

This kind of inconsistency is seen all the time in sports: think of a football team that can beat anybody in the league but, when heavily favored, plays down to the level of an inferior opponent and loses games it shouldn't. Or the basketball player who scores 30 points one night and 3 the next. What is commonly said on sports talk shows the next day? The team or player needs to be more consistent.


Often there are data-driven reasons for inconsistency in your win / loss findings: maybe the categorizations you are using are too broad and you need to drill down to a deeper level of specificity. Maybe the different sentiments are coming from different populations of your customer base. Or maybe your team just hasn't mastered the skill of being consistent.

It’s hard to know which is the right answer, which is why relying on an expert win / loss consultant can help you make sense of your data. Click here to schedule a consultation with one of Anova’s experts.