
Customer Satisfaction Score (CSAT): Calculation, Application, and Practical Guide

The CSAT explained: calculation, scales, industry benchmarks, and a practical guide for measuring customer satisfaction.

by SI Labs

The Customer Satisfaction Score (CSAT) is a metric for measuring customer satisfaction after a specific interaction, transaction, or experience. Unlike NPS, which measures general willingness to recommend, CSAT captures satisfaction at a concrete touchpoint: directly, immediately, and in a form teams can act on.

The history of systematic customer satisfaction measurement reaches back to the 1960s. Cardozo formulated the concept of the expectation-confirmation paradigm in 1965: satisfaction arises when the actual experience meets or exceeds expectations [1]. Oliver formalized this model in 1980 as the Expectancy-Disconfirmation Theory, which remains the theoretical foundation of satisfaction research to this day [2]. Standardized measurement at the national level began in 1989 with the Swedish Customer Satisfaction Barometer and achieved its breakthrough in 1994 with the American Customer Satisfaction Index (ACSI) by Claes Fornell [3].

If you search for “measuring customer satisfaction” or “CSAT,” you find encyclopedia entries with formulas but no guide that explains when a 5-point scale is better than a 7-point scale, why survey timing can distort the score by 15—20%, and how to translate CSAT data into operational improvements.

This guide closes those gaps.

From Cardozo to Fornell: Where the Method Comes From

The Expectation-Confirmation Paradigm

The entire CSAT methodology builds on a simple insight: satisfaction is not an absolute quantity but the result of a comparison. Customers compare their expectation with their actual experience. Three scenarios:

| Comparison | Result | Satisfaction Effect |
|---|---|---|
| Experience > Expectation | Positive disconfirmation | High satisfaction |
| Experience = Expectation | Confirmation | Moderate satisfaction |
| Experience < Expectation | Negative disconfirmation | Dissatisfaction |

This explains a phenomenon familiar to many service teams: an objectively good service can generate dissatisfaction if the expectation was even higher. And an objectively mediocre service can generate satisfaction if the expectation was low.

What this means for CSAT: The absolute number is less meaningful than the change over time and the comparison between touchpoints. A CSAT of 78% says little in isolation — a CSAT of 78% after a service incident that previously measured 62% says a great deal.

The ACSI and Standardization

Claes Fornell developed the American Customer Satisfaction Index (ACSI) at the University of Michigan in 1994 — a macroeconomic model that treats customer satisfaction as a driver of customer loyalty and ultimately firm value [3]. The ACSI uses a structural equation model with three input variables (perceived quality, perceived value, customer expectations) and two output variables (complaints, loyalty).

In Europe, the European Customer Satisfaction Index (ECSI) followed in 1999. In the DACH region, industry-specific indices exist, such as the Kundenmonitor Deutschland by ServiceBarometer AG.

The operational CSAT measurement — a single satisfaction question after an interaction — is the simplified practitioner version of this academic tradition.

The CSAT Question and Its Variants

The Standard Question

“How satisfied were you with [specific interaction]?”

Unlike NPS (which is forward-looking: “Would you recommend?”), CSAT is backward-looking: “How was this experience?” This immediacy is simultaneously the metric’s greatest strength and weakness.

Which Scale?

The three most common scales compared:

| Scale | Advantages | Disadvantages | Best For |
|---|---|---|---|
| 1-5 (stars or numbers) | Intuitive, fast, low cognitive load | Limited differentiation, central tendency | Mobile surveys, in-app feedback, high volumes |
| 1-7 (Likert) | More differentiation, statistically more robust | Higher cognitive load, "4" is ambiguous | Research-based surveys, detailed analyses |
| 1-10 | Maximum differentiation | Can be confused with NPS, cultural bias | Avoid: the risk of NPS confusion is too high |

Recommendation: 5-point scale for high-volume transactional CSAT measurements. 7-point scale for more detailed analyses where you want to examine the distribution more closely. Avoid the 10-point scale — customers confuse it with NPS, and the additional granularity rarely produces actionable differences.

Verbal vs. Numeric Anchors

| Type | Example | Advantage |
|---|---|---|
| Numeric | 1 2 3 4 5 | Fast, comparable across cultures |
| Verbal | Very dissatisfied / Dissatisfied / Neutral / Satisfied / Very satisfied | Less ambiguity, every point has meaning |
| Emoji | 😠 🙁 😐 🙂 😀 | Lowest barrier, ideal for mobile surveys |

Recommendation: Verbal anchors for B2B services (higher precision). Emoji scales for B2C contexts with high volume and low response rates.

Calculating CSAT

Formula:

CSAT = (Number of satisfied responses / Total number of responses) x 100

What counts as “satisfied”? On a 5-point scale: responses of 4 or 5. On a 7-point scale: responses of 5, 6, or 7. Only the top scale values count as “satisfied”; the neutral midpoint does not.

Example (5-point scale): 150 responses. Of these, 45 x “Very satisfied” (5) and 60 x “Satisfied” (4). CSAT = (45 + 60) / 150 x 100 = 70%.

Alternative: Average score. Some organizations calculate the arithmetic mean of all responses instead (e.g., 3.8 out of 5). This is mathematically valid but harder to communicate than a percentage. In board reporting, “78% satisfied” is more comprehensible than “3.9 out of 5.”
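
For teams automating this, here is a minimal Python sketch of both calculations. The function names are our own, and the distribution of responses below 4 in the example is assumed for illustration; only the 45 fives and 60 fours come from the worked example above.

```python
def csat(responses, satisfied_threshold=4):
    """Share of responses at or above the 'satisfied' threshold, in percent.

    On a 5-point scale, scores of 4 and 5 count as satisfied;
    for a 7-point scale, pass satisfied_threshold=5.
    """
    if not responses:
        raise ValueError("no responses to score")
    satisfied = sum(1 for r in responses if r >= satisfied_threshold)
    return satisfied / len(responses) * 100


def average_score(responses):
    """Arithmetic mean of all responses (the alternative reporting style)."""
    return sum(responses) / len(responses)


# The worked example from the text: 150 responses, 45 x "Very satisfied" (5)
# and 60 x "Satisfied" (4); the distribution below 4 is assumed for illustration.
responses = [5] * 45 + [4] * 60 + [3] * 30 + [2] * 10 + [1] * 5
print(f"CSAT: {csat(responses):.0f}%")             # CSAT: 70%
print(f"Average: {average_score(responses):.1f}")  # Average: 3.9
```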

CSAT Benchmarks by Industry

| Industry | Typical CSAT Range (DACH) | Context |
|---|---|---|
| Retail / E-Commerce | 75-85% | The Amazon effect sets expectations |
| Banking | 70-80% | Branch banks tend to score lower than direct banks |
| Insurance | 65-75% | Claims processing as the main driver |
| Telecommunications | 60-72% | Hotline wait times as the biggest pain point |
| SaaS / Software | 75-85% | Onboarding quality as the main factor |
| Healthcare | 70-80% | Doctor-patient communication dominates |

Important: These benchmarks are reference values. Industry associations and market research institutes publish more specific data.

When to Use CSAT

CSAT is most valuable when:

  • You want to measure quality at specific touchpoints (after a support call, after delivery, after onboarding)
  • You need quick, operational feedback that can lead to improvements within days
  • You want to compare different touchpoints (which touchpoint generates the lowest satisfaction?)
  • You want to conduct before/after measurements (how did satisfaction change after a process change?)

CSAT is NOT suitable when:

| Situation | Better Alternative | Why |
|---|---|---|
| You want to measure overall loyalty | NPS | CSAT measures the moment; NPS measures the relationship |
| You want to measure customer effort | CES | CSAT doesn’t capture whether a satisfactory solution cost too much effort |
| You want to prioritize features | Kano model | CSAT says “satisfied/dissatisfied”; Kano says “why” |
| You want to understand root causes | Ishikawa diagram, root cause analysis | CSAT measures the outcome, not the cause |

Step by Step: Setting Up a CSAT Program

Step 1: Identify Touchpoints

Don’t measure every touchpoint — select the most important ones. Criteria:

  • Moments of truth: Touchpoints where customers make a decision (purchase, renewal, complaint)
  • Known pain points: Touchpoints where you observe high complaint rates or drop-off rates
  • New processes: Touchpoints that were recently changed or introduced

Typical touchpoints for CSAT measurement:

  • After purchase / contract signing
  • After onboarding
  • After support contact (phone, chat, email)
  • After delivery or service provision
  • After complaint handling
  • After contract renewal / upgrade

Step 2: Design the Survey

Core question: “How satisfied were you with [specific touchpoint]?”

Follow-up question (open): “What could we have done better?”

Additional questions (maximum 2—3):

  • “Was your issue fully resolved?” (Yes/No)
  • “How would you rate [specific aspect]?” (1—5)

Design rules:

  • Keep the survey under 2 minutes
  • Timing: 1—24 hours after the interaction (leverage the recency effect, but don’t survey during the interaction itself)
  • Channel: Survey where the interaction took place (in-app after app usage, email after email support)
  • Throttling: Maximum one survey per customer per month
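
The timing and throttling rules can be enforced with a simple eligibility check before each send. A minimal sketch, assuming hypothetical timestamp fields (interaction_at, last_survey_at) rather than any real survey platform’s API:

```python
from datetime import datetime, timedelta

# Encodes the design rules above; the timestamp fields are hypothetical,
# not part of any real survey platform's API.
MIN_DELAY = timedelta(hours=1)    # no sooner than 1 hour after the interaction
MAX_DELAY = timedelta(hours=24)   # and no later than 24 hours
THROTTLE = timedelta(days=30)     # at most one survey per customer per month


def should_survey(interaction_at, last_survey_at=None, now=None):
    """Return True if a CSAT survey may be sent for this interaction."""
    now = now or datetime.now()
    in_window = MIN_DELAY <= now - interaction_at <= MAX_DELAY
    throttled = last_survey_at is not None and now - last_survey_at < THROTTLE
    return in_window and not throttled
```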

Step 3: Collect and Segment Data

Minimum sample: 30 responses per touchpoint for reliable results. Under 30: identify trends, but don’t draw firm conclusions.

Segment by:

  • Customer type (new vs. existing, private vs. business)
  • Channel (phone vs. chat vs. email)
  • Employee or team (for coaching, not ranking)
  • Time of day / day of week (service quality often varies by shift)
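
A minimal segmentation sketch with pandas, assuming illustrative column names (touchpoint, channel, score) and invented data; it also flags segments below the minimum sample of 30 as directional only:

```python
import pandas as pd

# Raw survey rows; the column names and data are invented for the sketch.
df = pd.DataFrame({
    "touchpoint": ["support", "support", "delivery", "delivery", "delivery"],
    "channel":    ["phone", "chat", "email", "email", "email"],
    "score":      [5, 3, 4, 2, 5],
})


def csat_pct(scores, threshold=4):
    """CSAT in percent: share of scores at or above the threshold."""
    return (scores >= threshold).mean() * 100


summary = (
    df.groupby(["touchpoint", "channel"])["score"]
      .agg(n="count", csat=csat_pct)
      .reset_index()
)
# Segments under the minimum sample of 30 are directional only.
summary["reliable"] = summary["n"] >= 30
print(summary)
```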

Step 4: Analyze and Prioritize

CSAT heat map: Create a matrix with touchpoints on the Y-axis and segments on the X-axis. Color-code cells by CSAT value (Red: <60%, Yellow: 60—75%, Green: >75%). The red cells are your priorities.
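
A minimal sketch of such a heat map in pandas; the CSAT values are invented for illustration, and the band function encodes the red/yellow/green thresholds named above:

```python
import pandas as pd

# Per-segment CSAT values in percent; the numbers are invented for the sketch.
summary = pd.DataFrame({
    "touchpoint": ["support", "support", "delivery", "delivery"],
    "segment":    ["new", "existing", "new", "existing"],
    "csat":       [55.0, 68.0, 81.0, 74.0],
})
matrix = summary.pivot(index="touchpoint", columns="segment", values="csat")


def band(value):
    """Map a CSAT value to the red/yellow/green bands from the text."""
    if pd.isna(value):
        return "-"
    return "red" if value < 60 else ("yellow" if value <= 75 else "green")


print(matrix.applymap(band))  # the red cells are your priorities
```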

Text analysis: Categorize open responses into thematic areas. The three most frequent themes at the lowest CSAT scores are your biggest levers.

Trend analysis: CSAT values as a single measurement are weak evidence — as a trend over weeks and months, they show whether improvement measures are working.

Step 5: Derive Operational Improvements

CSAT data that doesn’t lead to action is waste. Connect results to operational processes:

  • CSAT < 60% at a touchpoint: immediate root cause analysis, action plan within 2 weeks
  • CSAT 60—75%: quarterly review, targeted improvement initiatives
  • CSAT > 75%: continue monitoring, identify and transfer best practices

Practical Example: CSAT in the Onboarding Process of a DACH Bank

Context: A major DACH bank has digitized its account opening process. The new online onboarding replaces the previous branch-based process. The team wants to know whether customer satisfaction has risen or fallen with digitization.

Measurement design: CSAT survey (5-point scale) 24 hours after account opening completion, via email. One follow-up question: “What could we have done better?”

Results after 3 months (N = 340):

| Segment | CSAT | Most Frequent Feedback |
|---|---|---|
| Private customers <35 years | 82% | “Fast and straightforward” |
| Private customers >55 years | 58% | “Video identification was confusing,” “I wanted to go to the branch” |
| Business customers | 64% | “Required documents were unclear,” “Too many individual steps” |
| Overall average | 71% | |

Insight: The overall CSAT of 71% looks acceptable. Segmentation reveals a problem: customers over 55 and business customers are significantly less satisfied. Without segmentation, this would have remained invisible.

Actions taken:

  • Video identification: additional guidance (2-minute video) before the start
  • Older customers: option for phone-assisted guidance during the online process
  • Business customers: document checklist before starting the process, progress indicator with step counter

Result after 6 months: CSAT for customers over 55 rose from 58% to 72%, for business customers from 64% to 76%, and overall from 71% to 78%.

Note: This example is illustratively constructed to demonstrate the method in a service context. The structure is based on typical banking benchmarks.

CSAT vs. NPS vs. CES: When to Use Which Metric?

| Dimension | CSAT | NPS | CES |
|---|---|---|---|
| Question | “How satisfied were you?” | “How likely are you to recommend us?” | “How easy was it?” |
| Time reference | Past (this interaction) | Future (would you recommend?) | Past (how much effort?) |
| Granularity | Touchpoint-specific | Overall relationship | Process-specific |
| Best for | Operational management | Strategic management | Process optimization |
| Weakness | Recency bias, says little about loyalty | Not the best growth predictor [4] | Only measures effort, not satisfaction |
| Origin | Fornell / ACSI (1994) | Reichheld / Bain (2003) | Dixon / CEB (2010) |

The three-metric combination: Use CSAT for operational touchpoint quality, NPS for strategic overall loyalty, and CES for process efficiency from the customer’s perspective. Not all three at every touchpoint — but the appropriate metric for each context:

  • After support contact: CES (was it easy?) + CSAT (was the outcome satisfactory?)
  • Quarterly: NPS (overall loyalty)
  • After purchase / onboarding: CSAT (how was the experience?)

5 Common CSAT Mistakes

1. Collecting CSAT at the Wrong Time

Symptom: Customers are surveyed directly during the interaction — e.g., during the support call or immediately after clicking “complete purchase.”

Why it hurts: Two distortions. First: the courtesy effect — customers give higher scores when they know the employee is listening. Second: the experience isn’t complete yet — the customer doesn’t yet know whether the promised solution actually works.

Solution: Survey 1—24 hours after the interaction. Enough distance for a reflective response, close enough for an accurate memory.

2. Collecting CSAT Without an Open Follow-up Question

Symptom: “Our CSAT is 68%” — but nobody knows why.

Why it hurts: The number alone is not actionable. 68% could mean: wait time too long, staff unfriendly, solution unhelpful, process too complicated. Without qualitative data, every action is a guessing game.

Solution: Always include an open follow-up question: “What could we have done better?” or “What was the most important reason for your rating?”

3. Measuring Too Many Touchpoints

Symptom: CSAT is collected at 15+ touchpoints. Customers receive three surveys per week. The response rate drops to 5%.

Why it hurts: Survey fatigue. Customers who are annoyed don’t respond — or give low scores as a protest. This distorts data downward and reduces sample size.

Solution: Select 5—7 critical touchpoints. Throttle: maximum one survey per customer per month. Better to measure a few touchpoints well than many poorly.

4. Not Linking CSAT Results to Behavioral Metrics

Symptom: CSAT values are reported in isolation — without connection to churn, repurchase rate, or upselling.

Why it hurts: A CSAT of 80% sounds good — but if 30% of “satisfied” customers still leave, satisfaction has no loyalty effect. Anderson and Sullivan showed as early as 1993 that the relationship between satisfaction and repurchase is nonlinear — especially in the middle satisfaction range [5].

Solution: Correlate CSAT with behavioral metrics. Ask: “At what CSAT value does the churn rate drop significantly?” That threshold is your operational target.
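
A minimal sketch of this linkage, assuming you can join survey scores with a later churn flag (the data and column names here are invented): group churn by CSAT score and look for the value where the rate drops sharply. That score is the threshold the text describes.

```python
import pandas as pd

# One row per surveyed customer: CSAT score plus a later churn flag.
# The data and column names are invented for the sketch.
df = pd.DataFrame({
    "score":   [5, 5, 4, 4, 4, 3, 3, 2, 2, 1],
    "churned": [0, 0, 0, 1, 0, 1, 0, 1, 1, 1],
})
churn_by_score = df.groupby("score")["churned"].mean() * 100
print(churn_by_score)  # churn rate in percent per CSAT score
```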

5. Using CSAT for Employee Rankings

Symptom: CSAT scores are published per employee and used in performance reviews.

Why it hurts: Employees quickly learn to optimize the score rather than the service. Typical tactics: preferring easy cases, forwarding difficult customers, asking for a good rating before the survey. Additionally, the sample size per employee is often too small for reliable comparisons.

Solution: Report CSAT at the team level, not the individual level. For individual development: qualitative feedback conversations rather than quantitative rankings.

When CSAT Does NOT Work

1. Complex, long-term service relationships: When the value of a service only becomes visible over months or years (wealth management, management consulting, long-term IT implementations), CSAT measures immediate satisfaction — not long-term value. NPS or a loyalty metric is better suited here.

2. When expectations are unclear: When customers don’t know what to expect (radically new service, unfamiliar product category), satisfaction measurement is distorted — because the reference point is missing.

3. In monopoly situations: When customers have no alternative, high satisfaction may reflect resignation (“It is what it is”) rather than genuine quality assessment.

4. As the sole management metric: CSAT measures the moment, not the relationship. High satisfaction at the last interaction doesn’t automatically mean loyalty. Triangulate with NPS and behavioral metrics.

Frequently Asked Questions

What is a good CSAT score?

Industry-dependent. In the DACH region, a rule of thumb: >75% = good, >85% = excellent, <60% = urgent action needed. But the more important comparison is the trend over time and the comparison between your own touchpoints.

How often should CSAT be measured?

CSAT is measured transactionally — after every relevant interaction. The question is not “How often?” but “At which touchpoints?” Select 5—7 critical touchpoints and measure there continuously. Reporting: weekly for operations, monthly for strategy.

What is the difference between CSAT and NPS?

CSAT measures satisfaction with a specific interaction (looking back at the past). NPS measures willingness to recommend (looking into the future). CSAT is operational (how was this contact?); NPS is strategic (how is the overall relationship?). Both are complementary.

Which CSAT scale is best?

5-point scale for high volumes and mobile surveys (simple, fast). 7-point scale for detailed analyses (more differentiation). Avoid the 10-point scale (confusion with NPS). Verbal anchors (Very dissatisfied — Very satisfied) are more precise than numbers alone.

How can you improve CSAT?

In three steps: (1) Identify the touchpoint with the lowest CSAT. (2) Analyze the open responses — what are the two to three most frequent complaints? (3) Address the most common causes structurally, not with quick fixes. Measure again after 4—6 weeks.

A typical service measurement sequence: Use CSAT to measure satisfaction at individual touchpoints. Use NPS to measure overall loyalty. Use CES to identify processes that create too much effort. Use the Kano model to prioritize which improvements will have the greatest satisfaction impact.


Research Methodology

This article synthesizes findings from the foundational works on customer satisfaction research (Cardozo 1965, Oliver 1980), the ACSI framework (Fornell 1994), satisfaction-loyalty research (Anderson & Sullivan 1993), and DACH-specific industry benchmarks from ServiceBarometer AG and Kundenmonitor Deutschland.

Limitations: The CSAT benchmarks are reference values from different survey methods and periods. Academic literature on CSAT application in specific DACH industries is limited. The practical example is illustratively constructed, not a documented case study.

Disclosure

SI Labs provides consulting services in the field of service innovation. In service measurement projects, we use CSAT as one of several metrics. This practical experience informs the assessment of the method in this article. Readers should be aware of the potential for perspective bias.

References

[1] Cardozo, Richard N. “An Experimental Study of Customer Effort, Expectation, and Satisfaction.” Journal of Marketing Research 2, no. 3 (August 1965): 244-249. DOI: 10.1177/002224376500200303

[2] Oliver, Richard L. “A Cognitive Model of the Antecedents and Consequences of Satisfaction Decisions.” Journal of Marketing Research 17, no. 4 (November 1980): 460-469. DOI: 10.1177/002224378001700405

[3] Fornell, Claes, Michael D. Johnson, Eugene W. Anderson, Jaesung Cha, and Barbara Everitt Bryant. “The American Customer Satisfaction Index: Nature, Purpose, and Findings.” Journal of Marketing 60, no. 4 (October 1996): 7-18. DOI: 10.1177/002224299606000403

[4] Keiningham, Timothy L., Bruce Cooil, Tor Wallin Andreassen, and Lerzan Aksoy. “A Longitudinal Examination of Net Promoter and Firm Revenue Growth.” Journal of Marketing 71, no. 3 (July 2007): 39-51. DOI: 10.1509/jmkg.71.3.039

[5] Anderson, Eugene W., and Mary W. Sullivan. “The Antecedents and Consequences of Customer Satisfaction for Firms.” Marketing Science 12, no. 2 (1993): 125-143. DOI: 10.1287/mksc.12.2.125
