
How to Design a Customer Survey That Really Drives Decisions (Not Just a Questionnaire)


Many customer surveys are created for the wrong reasons: ISO procedures, a request from headquarters, “we need to measure satisfaction,” or—worse—the hope that customers will tell us everything is fine.


The result is often predictable: questionnaires filled out absentmindedly, dashboards full of averages... and decisions still made "on gut feeling".


This guide grew out of the Hangler webinar of November 6, 2025 and starts from one firm premise:

A customer survey isn't a questionnaire. It's a system for making better decisions (both strategic and operational).

You won't find "50 questions to copy" here. You'll find a method for designing the survey so that it generates useful, comparable, and actionable signals.


The customer survey is not used to "measure": it is used to reduce risk

In the webinar, we say it bluntly: surveys are useful when you need to avoid making decisions based on past business habits. Classic examples:


  • a product remains in the catalogue "because it has always been a best seller"

  • pricing changes based on internal perception, not perceived value

  • communication stays the same "because it worked once"

  • the website or packaging doesn't change "because no one ever complained"


The problem isn't a lack of data. It's confusing internal insights with actual customer perceptions.


First point: what kind of decision do you have to make?

The Hangler way to get started is not “what tools do we use”, but:

What decisions do we want to stop making on gut feeling?

From here come two large families of surveys (which should not be confused):


A) Periodic survey (Customer Satisfaction)

When needed:


  • strategic decisions

  • trend over time (delta between surveys)

  • a "broad" snapshot of the customer base

  • segment and benchmark analysis


It is planned carefully in advance, with a more detailed questionnaire and more in-depth reporting.


B) Continuous survey (Voice of Customer)

When needed:


  • daily operations

  • "hot" feedback, collected right after the experience

  • rapid intervention on negative experiences

  • continuous improvement of touchpoints


It's streamlined (often within 10 questions) and helps you close the loop quickly.

👉 Maturity isn't about choosing one or the other. It's about knowing when to use one, the other, or a combination.

How often to do it (and why frequency is a strategic choice)

There is no universal "right" frequency. There is a frequency that is consistent with:


  • speed of change in your market

  • product/service life cycle

  • number of touchpoints

  • business ability to act on results


Practical advice that emerged during the webinar:


  • periodic survey: every 3–6 months (sometimes annually)

  • VoC: continuous


Two opposite risks:


  • survey fatigue: you ask too much, and both response rates and quality drop

  • blindness between waves: surveys spaced too far apart make you miss weak signals


How to distribute it: channels and tools (without turning this into a tutorial)

Distribution is not an operational detail: it determines who responds, and therefore what you will read in the data.


Periodic survey: “guided” distribution

Here you choose:


  • who to involve (segments, clusters, geographic areas)

  • the best channel for each group

  • the “depth” of feedback based on customer value


Typical channels (mixable):


  • email with a personalized link (ideal if you have a clean CRM)

  • telephone invitation for key customers

  • call center for intermediate segments

  • auto-fill link for the long tail


Hangler approach (very concrete): separate customers by value (e.g. gold/silver/bronze) and calibrate effort and cost accordingly (a code sketch follows the lists below):


  • top clients → interview or direct contact + survey

  • mid-range → call center / guided invitation

  • long tail → auto-fill


This way you reduce the risk of:

  • hearing only "friendly" customers

  • getting distorted responses from the sales network

  • unbalancing the sample without realizing it
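
To make the routing concrete, here is a minimal sketch in Python. The revenue thresholds, tier labels, and channel names are illustrative assumptions, not prescriptions from the webinar:

```python
# Illustrative sketch: route each customer to a survey channel by value tier.
# Thresholds, tier labels, and channel names are hypothetical placeholders.

def value_tier(annual_revenue: float) -> str:
    """Assign a value tier; adapt the thresholds to your own customer base."""
    if annual_revenue >= 100_000:
        return "gold"
    if annual_revenue >= 10_000:
        return "silver"
    return "bronze"

CHANNEL_BY_TIER = {
    "gold": "interview or direct contact + survey",  # deepest (and costliest) feedback
    "silver": "call center / guided invitation",     # intermediate segment
    "bronze": "self-completion link",                # long tail: auto-fill
}

customers = [
    {"id": "C001", "annual_revenue": 250_000},
    {"id": "C002", "annual_revenue": 18_000},
    {"id": "C003", "annual_revenue": 900},
]

for c in customers:
    tier = value_tier(c["annual_revenue"])
    print(f"{c['id']}: {tier} -> {CHANNEL_BY_TIER[tier]}")
```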


Voice of Customer: “situational” distribution

Here the channel follows the situation: you reach the customer in the moment of the experience. Typical channels:


  • QR code in store / service point

  • post-interaction email link

  • post-service SMS/WhatsApp (if consistent with the context)

  • link in personal area or app (if the company has one)


Important operational note from the webinar:


  • physical kiosks/totems → more questionnaires get left half-finished

  • mobile → higher completion (people can answer on the move)


The main indicators: what they are and what you really do with them

Indices are not "numbers to show on slides": they are a useful summary only when linked to drivers and segments.


CSAT (Customer Satisfaction)

It measures overall satisfaction; a quick calculation sketch follows the list. It is useful for:

  • comparison over time

  • segment comparison

  • comparison between product/service lines
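
CSAT is computed in more than one way; a common convention is the share of respondents choosing the top two points on a 1–5 scale ("top-2-box"). A minimal sketch, assuming that convention and made-up responses:

```python
# CSAT as the top-2-box share on a 1-5 scale (one common convention; adapt to yours).
responses = [5, 4, 3, 5, 2, 4, 5, 1, 4, 5]  # made-up satisfaction scores

satisfied = sum(1 for r in responses if r >= 4)
csat = satisfied / len(responses) * 100
print(f"CSAT: {csat:.0f}%")  # 70% with the sample data above
```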


NPS (Net Promoter Score)

It measures the propensity to recommend (and therefore relationship and trust); a quick calculation sketch follows the list. It is useful for:

  • identifying promoters / passives / detractors

  • segmenting retention and advocacy actions
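
The NPS formula itself is standard: on a 0–10 "would you recommend us?" scale, promoters score 9–10, detractors 0–6, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch with made-up scores:

```python
# NPS on the standard 0-10 scale: % promoters (9-10) minus % detractors (0-6).
scores = [10, 9, 8, 7, 6, 10, 3, 9, 8, 5]  # made-up recommendation scores

promoters = sum(1 for s in scores if s >= 9)
detractors = sum(1 for s in scores if s <= 6)
nps = (promoters - detractors) / len(scores) * 100
print(f"NPS: {nps:+.0f}")  # +10 with the sample data above
```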


OSAT (Overall Satisfaction) in VoC

A distinctive point emerges from the webinar:

In VoC it makes sense to look especially at the "top" of the scale (a 5 on a 1–5 scale). A 4 is not "ok": it means "there is room for improvement, and I want to know why".

This choice changes the role of open comments (see the sketch below):

  • the comment becomes a diagnostic tool

  • not a "nice-to-have free-text field"
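
A minimal sketch of that reading of the scale: report the top-box share only, and surface the open comment behind every 4 so it can be diagnosed. The field names and records are illustrative assumptions:

```python
# Top-box OSAT on a 1-5 scale, plus the comments behind every "4".
# Field names and records are hypothetical placeholders.
answers = [
    {"osat": 5, "comment": ""},
    {"osat": 4, "comment": "Fast service, but the invoice was unclear."},
    {"osat": 5, "comment": "All good."},
    {"osat": 4, "comment": "Had to ask twice for a status update."},
    {"osat": 2, "comment": "Delivery arrived late."},
]

top_box = sum(1 for a in answers if a["osat"] == 5) / len(answers) * 100
print(f"Top-box OSAT: {top_box:.0f}%")  # 40% with the sample data above

# A 4 is not "ok": pull out the "why" behind each one.
for a in answers:
    if a["osat"] == 4 and a["comment"]:
        print("Diagnose:", a["comment"])
```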


Where to find benchmarks (and how to avoid making mistakes)

Let's be clear: benchmarks are often misleading when misread.

There are three possible types of benchmark:


Internal benchmark (the best)

  1. comparison between locations, touchpoints, channels, periods

  2. comparison between segments and clusters

This is the most reliable type because you control both the method and the context.


Competitive benchmark (designed in the survey)

You don't find it "free" online: you build it by asking customers:

  1. which alternatives they know

  2. which ones they actually use

  3. where they see you as better or worse


External industry benchmark

It exists, but it should be taken with a pinch of salt:

  1. often changes methodology

  2. often aggregates different contexts

  3. it is often not comparable to your case


Hangler Rule:

If the benchmark is not comparable in method and target, it is more marketing than research.

Use it also as a benchmark against the competition (without turning it into a "fan poll")

A customer survey can become a competitive benchmarking tool if you include:

  • question batteries on awareness and use of competitors

  • perceived comparison on key dimensions


Example structure (conceptual, not a list of questions):


  1. competitors: awareness (who they know)

  2. competitors: usage (who they actually use)

  3. comparison: on which dimensions (price, quality, service, reliability, etc.)

  4. reasons: why they choose them / why they choose us


This gives you two strategic outputs (a tabulation sketch follows the list):

  • real map of the competitive scenario (not the one you “think”)

  • areas where you can reposition or improve
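
One way to turn those batteries into a "real map" is an awareness-to-usage funnel per competitor. A minimal sketch with invented brands and responses:

```python
# Awareness -> usage funnel per competitor (brands and responses are made up).
# Each respondent lists the brands they know and the ones they actually use.
respondents = [
    {"knows": {"Us", "Alpha", "Beta"}, "uses": {"Us"}},
    {"knows": {"Us", "Alpha"},         "uses": {"Alpha"}},
    {"knows": {"Alpha", "Beta"},       "uses": {"Beta"}},
    {"knows": {"Us", "Beta"},          "uses": {"Us", "Beta"}},
]

n = len(respondents)
for brand in ("Us", "Alpha", "Beta"):
    aware = sum(1 for r in respondents if brand in r["knows"])
    used = sum(1 for r in respondents if brand in r["uses"])
    conversion = used / aware * 100 if aware else 0.0
    print(f"{brand}: aware {aware}/{n}, used {used}/{n}, conversion {conversion:.0f}%")
```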


The questionnaire

The questionnaire is only part of the process. As we make clear in the webinar:

A survey is not just a questionnaire; it is a process: planning, collection, analysis, and action.

In this post we deliberately don't go into "how to write questions": if you're interested in the topic, read this article on how to write a questionnaire.


How to accompany it: introduction and invitation (the part that really increases the response rate)

This is an often underestimated lever. It's not enough to send a link: you have to "prepare" the response.


1) Explain why you're doing it (honestly)

Positioning examples:

  • "we need it to decide X"

  • "we want to understand what to improve before investing"

  • "it's not marketing: it's structured listening"


2) Explain what will happen to the data

People respond more if they understand:

  • that it is not a waste of time

  • that their answers won't end up in a drawer


3) Tone and timing consistent with the context

  • post-service → quick, while the experience is still "hot"

  • strategic survey → calmer, more contextualized, even with advance notice from marketing


Important note from the webinar:

  • in periodic surveys, the incentive is often not material: it is the sense that one's opinion "really counts"

  • but you have to make that credible in your introduction


What to do with the data (the real point): from averages to action

The most common mistake is to stop at:

  • the average CSAT

  • the average NPS


In the Hangler method, the data "must talk to each other" (a sketch follows the list):

  • convergences = strong signals

  • heterogeneity = need to segment

  • trend = delta between surveys

  • doubts = use qualitative research to explain the "why"
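
As a sketch of the "delta between surveys" idea: compare the same index across two waves, segment by segment, and let the deltas (not the overall average) drive the discussion. Segment names, values, and the alert threshold are made up:

```python
# Wave-over-wave CSAT per segment (all numbers invented for illustration).
wave_1 = {"gold": 82, "silver": 74, "bronze": 69}
wave_2 = {"gold": 84, "silver": 68, "bronze": 70}

ALERT = -3  # hypothetical threshold for a drop worth investigating

for segment in wave_1:
    delta = wave_2[segment] - wave_1[segment]
    flag = "  <- explain with qualitative follow-up" if delta <= ALERT else ""
    print(f"{segment:>7}: {wave_1[segment]} -> {wave_2[segment]} ({delta:+d}){flag}")
```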


And above all: if critical issues arise, you need a plan. Otherwise, you've only created expectations.


Close the loop: without action, the survey worsens the relationship

Collecting feedback and not acting is worse than not asking for it. This is even more true for VoC: if a customer reports a problem and no one responds, you've created a "double whammy":

  • negative experience confirmed

  • promise of listening betrayed


The model is simple: Plan → Do → Check → Act and then start again.


In short: the “useful” customer survey is designed like this

A customer survey that guides decisions:


  • it starts from a decision, not from a tool

  • it distinguishes between strategy (periodic) and operations (VoC)

  • it uses indices + drivers + segmentation

  • it integrates internal and competitive benchmarks

  • it is distributed consistently with the sample you want to listen to

  • it is introduced well (with context, credibility, and a clear reason to respond)

  • it always closes the loop with actions


If you're planning a survey, the right question isn't "What questions do we ask?" but "What decisions do we want to guide with real data?"

FAQ

What is a customer survey?

It is a feedback collection system structured to support strategic and operational decisions, not just to measure satisfaction.

What is the difference between customer satisfaction and voice of the customer?

Customer satisfaction is periodic and oriented towards trends and strategy; VoC is continuous, streamlined, and oriented towards rapid intervention.

How often should a customer survey be conducted?

It depends on the market and the objective. Typically 3–6 months for periodic surveys, continuous for VoC.

What are the main indicators?

CSAT and NPS in periodic surveys; OSAT and NPS in VoC, often integrated with drivers and open comments.

How to use the survey to compare yourself with competitors?

By including question batteries on awareness/use of competitors and perceived comparison on key dimensions, you build a competitive benchmark "from the customer".

