
Customer Effort Score (CES): Calculation, Research, and Practical Guide

The Customer Effort Score explained: calculation, the CEB study, applications, and a practical guide for services.

by SI Labs

The Customer Effort Score (CES) is a metric for measuring the perceived effort a customer must exert to resolve an issue. The central insight behind CES: it is not delight that retains customers but the reduction of effort. 96% of customers who experience high effort during a service interaction become disloyal — compared to only 9% of customers with low effort [1].

CES was developed in 2010 by Matthew Dixon, Karen Freeman, and Nick Toman at the Corporate Executive Board (CEB, now Gartner). Their study, published in the Harvard Business Review under the title Stop Trying to Delight Your Customers, overturned one of the most popular assumptions in service management: that excellent service means exceeding expectations [1].

This article explains the research behind CES, how to calculate it, where it outperforms NPS and CSAT, and how to connect it with service design to systematically eliminate effort sources.

The CEB Study: Why Delight Does Not Drive Loyalty

The Surprising Findings

Dixon, Freeman, and Toman surveyed over 75,000 customers about their service interactions and analyzed the effects on loyalty behavior (repurchase, increased spending, word-of-mouth). Three core findings [1]:

1. Delight has minimal loyalty impact. Customers whose expectations were exceeded were only marginally more loyal than customers whose expectations were exactly met. The investment in delighting customers does not pay off in loyalty.

2. Effort has massive loyalty impact. 96% of customers with high service effort showed disloyal behavior (switching, negative word-of-mouth, reduced spending). With low effort, only 9% did. The correlation was stronger than for any other tested variable.

3. Service interactions destroy loyalty more often than they build it. One in four customers experienced a service contact that reduced their loyalty. Only one in five experienced one that increased it.

What “Effort” Means

Effort in the CES context is not just time spent. Dixon et al. identified four dimensions of customer effort [2]:

| Dimension | Description | Example |
| --- | --- | --- |
| Repeat contacts | Having to contact multiple times for the same issue | "I had to call three times before someone understood my problem" |
| Channel switching | Being forced to switch between channels | "I started online and had to call after all" |
| Transfers | Being passed from one person to another | "I was transferred four times" |
| Repetition | Having to provide information repeatedly | "I had to give my customer number again at every transfer" |

These four dimensions explain why CSAT (satisfaction) and CES (effort) measure different things: a customer can be satisfied with the result (CSAT high) while perceiving the path there as grueling (CES low). Satisfaction with the result does not prevent the disloyalty that effort generates.

The Effortless Experience: The Expanded Framework

In 2013, Dixon, Toman, and DeLisi published the book The Effortless Experience, which expanded the HBR study into an operational framework [2]. Four pillars:

1. Channel Stickiness Over Channel Choice

Customers who have to switch channels (e.g., from self-service to phone) experience 10% higher effort than customers who remain in the same channel. The recommendation: design self-service so that customers want to stay — not through channel coercion but through channel quality.

2. Next-Issue Avoidance

The most common pattern in high-effort interactions: customers call about one problem, the problem is solved, but the next problem (causally related to the first) is not anticipated. Example: a customer files an insurance claim. The claim is recorded. But nobody explains which documents still need to be submitted — generating a second call.

Solution: Analyze contact reasons in clusters. Which issues frequently follow one another? Address these proactively in the first contact.

3. Experience Engineering

The perception of effort can be influenced — through the design of the interaction flow. Three techniques:

  • Advocate language: Formulations that signal “I’m on your side” rather than “That’s our policy”
  • Positive framing: “I can offer you an appointment on Tuesday” rather than “Monday is unfortunately not available”
  • Anchoring: “Most customers resolve this in 5 minutes” — sets a positive expectation

4. Building a Low-Effort Culture

Effort reduction is not a project but a mindset. Dixon et al. recommend embedding CES in team management: daily CES reviews, freedom for employees to break processes in the moment when they generate effort, and recognition for effort reduction rather than upselling.

The CES Question and Its Calculation

The Question (CES 2.0)

“To what extent do you agree with the following statement: [Company] made it easy for me to handle my issue.”

The original CES question (CES 1.0) was: “How much effort did you personally have to put forth?” This negative framing was replaced in 2013 by the positive formulation (CES 2.0) because it is more intuitive and delivers better data quality [2].

Which Scale?

| Variant | Scale | Rating |
| --- | --- | --- |
| CES 2.0 (standard) | 1–7 (Strongly disagree to Strongly agree) | 5+ = Low Effort |
| CES simplified | 1–5 (Very difficult to Very easy) | 4+ = Low Effort |
| CES emoticon | Sad / Neutral / Happy face | Happy = Low Effort |

Recommendation: CES 2.0 (7-point scale) for robust analyses. Simplified 5-point scale for in-app feedback and high volumes. Emoticon variant for quick pulse checks.

Calculation

Variant 1: Average score

CES = Sum of all responses / Number of responses

Example: 100 responses, sum = 520. CES = 5.2 (on a 7-point scale). Values above 5 are considered good, above 6 excellent.

Variant 2: Percentage low-effort

CES = (Number of responses of 5, 6, or 7) / (Total responses) × 100

This variant is easier to communicate: “72% of our customers find the service easy.”

Variant 3: Effort Net Score (analogous to NPS)

CES Net Score = % Easy (6-7) - % Difficult (1-3)

Advantage: directly comparable to NPS logic. Disadvantage: less widely adopted.
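The three variants can be sketched in a few lines. This is a minimal illustration, not a library: the input is assumed to be a list of integer answers (1 to 7) to the CES 2.0 question, and all function names are illustrative.

```python
# Sketch: the three CES calculation variants from raw 7-point responses.

def ces_average(responses):
    """Variant 1: mean score on the 7-point scale."""
    return sum(responses) / len(responses)

def ces_low_effort_share(responses):
    """Variant 2: percentage of low-effort answers (5, 6, or 7)."""
    low = sum(1 for r in responses if r >= 5)
    return low / len(responses) * 100

def ces_net_score(responses):
    """Variant 3: % easy (6-7) minus % difficult (1-3), analogous to NPS."""
    n = len(responses)
    easy = sum(1 for r in responses if r >= 6) / n * 100
    difficult = sum(1 for r in responses if r <= 3) / n * 100
    return easy - difficult

answers = [7, 6, 6, 5, 5, 4, 3, 7, 2, 6]
print(ces_average(answers))          # → 5.1
print(ces_low_effort_share(answers)) # → 70.0
print(ces_net_score(answers))        # → 30.0
```

Note that the same raw data yields three different headline numbers (5.1, 72%-style share, net score), which is why a report should always state which variant and scale it uses.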

When to Use CES

CES is most valuable when:

  • You want to measure support and service interactions (phone, chat, email, self-service)
  • You want to optimize process-intensive touchpoints (filing a claim, account opening, returns, complaints)
  • You want to reduce channel switching and repeat contacts
  • You want to understand why CSAT is high but churn is also high (customers are satisfied but the journey was too arduous)

CES is NOT suitable when:

| Situation | Better Alternative | Why |
| --- | --- | --- |
| You want to measure overall loyalty | NPS | CES measures the effort of one interaction, not the relationship |
| You want to measure satisfaction with the outcome | CSAT | CES measures the path, not the result |
| You want to prioritize features | Kano model | CES measures effort; Kano classifies satisfaction impact |
| You want to measure delight | CSAT or NPS | CES by definition captures only effort; delight has no CES dimension |

CES and Service Design: The Connection

CES identifies where effort arises. Service design explains why and provides the tools for resolution. The connection:

Service Blueprint as an Effort Analysis Tool

A service blueprint visualizes the service process from both the customer’s and the organization’s perspective. The “line of interaction” shows every contact point between customer and organization. When you overlay CES data onto the service blueprint, it becomes visible at which points in the process effort arises — and which internal process breakdowns (behind the “line of visibility”) are responsible.

Example: CES shows high effort in claims filing. The service blueprint reveals: the customer has to enter the same information three times (online form, phone call with claims handler, damage report) — because the three systems are not integrated. The root cause is not a customer problem but an internal integration problem.

Four Levers for Effort Reduction

| Lever | Description | Typical CES Impact |
| --- | --- | --- |
| Process simplification | Eliminate, consolidate, or automate steps | Highest impact, highest investment |
| Channel integration | Make channel switches seamless (transfer data, preserve context) | High impact, medium investment |
| Proactive communication | Inform customers before they need to ask (status updates, next-issue avoidance) | Medium impact, low investment |
| Self-service quality | Design self-service so that channel switching becomes unnecessary | Highest long-term impact |

Step by Step: Setting Up a CES Program

Step 1: Identify High-Effort Touchpoints

Not every touchpoint needs a CES. Focus on:

  • Service recovery touchpoints: Complaints, escalations, claims
  • Process-intensive touchpoints: Account opening, contract changes, returns
  • Support touchpoints: Hotline, chat, email support
  • Self-service touchpoints: FAQ, customer portal, app functions

Selection criterion: Touchpoints where customers must solve a problem or navigate a process — not touchpoints where customers experience or enjoy something (CSAT is better suited there).

Step 2: Design the Survey

Core question (CES 2.0): “[Company] made it easy for me to [handle my issue].” (1–7, Strongly disagree to Strongly agree)

Follow-up question (open): “What would have made the process easier for you?”

Important: The open question is even more critical with CES than with CSAT or NPS. CSAT asks “What could we have done better?” — that is vague. CES asks “What would have made it easier?” — that is operationally actionable.

Timing: Immediately after the interaction concludes (within 1–4 hours). Unlike CSAT (where 24 hours of distance makes sense), CES benefits from the immediate memory of the effort — the perception of effort fades faster than satisfaction.

Step 3: Collect and Segment Data

Minimum sample: 50 responses per touchpoint.

Segment by:

  • Contact channel (phone vs. chat vs. email vs. self-service)
  • Issue type (initial inquiry vs. follow-up, simple vs. complex)
  • Customer type (new vs. existing)
  • Resolution status (resolved vs. unresolved)

Critical: Segment by resolution status. Customers with unresolved issues systematically rate effort higher — not because the process was poor but because the outcome was missing. If you don’t separate effort from outcome, you optimize the wrong lever.
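The segmentation step can be sketched with a small grouping helper. The record shape (a dict with `score`, `resolved`, and `channel` fields) and all sample values are illustrative assumptions, not a prescribed schema.

```python
# Sketch: segmenting CES responses, e.g. by resolution status as
# recommended above, to separate effort from outcome.
from collections import defaultdict

def ces_by_segment(records, key):
    """Average CES per segment; `key` extracts the segment label from a record."""
    buckets = defaultdict(list)
    for rec in records:
        buckets[key(rec)].append(rec["score"])
    return {seg: round(sum(s) / len(s), 2) for seg, s in buckets.items()}

# Illustrative sample data (field names are assumptions)
responses = [
    {"score": 6, "resolved": True,  "channel": "chat"},
    {"score": 5, "resolved": True,  "channel": "phone"},
    {"score": 2, "resolved": False, "channel": "phone"},
    {"score": 3, "resolved": False, "channel": "email"},
    {"score": 7, "resolved": True,  "channel": "self-service"},
]

print(ces_by_segment(responses, key=lambda r: r["resolved"]))
print(ces_by_segment(responses, key=lambda r: r["channel"]))
```

In this toy sample, resolved cases average 6.0 and unresolved cases 2.5 — exactly the gap the "Critical" note warns about: without the split, a process problem and an outcome problem would be indistinguishable.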

Step 4: Analyze Effort Drivers

Pareto analysis of open responses: Categorize the free-text responses into the four effort dimensions (repeat contacts, channel switching, transfers, information repetition). The most frequent dimension is your biggest lever.

CES journey map: Overlay CES values onto the customer journey. Where are the effort spikes? Do they correlate with specific process steps? With specific channels?
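A first-pass Pareto analysis of the open answers can be automated with simple keyword tagging. This is a deliberately naive sketch: the keyword lists are illustrative assumptions, and a real program would use manually coded categories or text classification rather than substring matching.

```python
# Naive sketch: tag free-text answers with the four effort dimensions
# via keyword lists, then count frequencies (Pareto).
from collections import Counter

# Illustrative keyword lists (assumptions, not a validated codebook)
DIMENSION_KEYWORDS = {
    "repeat contacts": ["call again", "called three times", "contact again"],
    "channel switching": ["had to call", "started online", "switch"],
    "transfers": ["transferred", "passed on"],
    "repetition": ["again", "repeat", "same information"],
}

def tag_dimensions(text):
    """Return every effort dimension whose keywords appear in the comment."""
    text = text.lower()
    return [dim for dim, kws in DIMENSION_KEYWORDS.items()
            if any(kw in text for kw in kws)]

def pareto(comments):
    """Dimension frequencies, most common first."""
    counts = Counter()
    for c in comments:
        counts.update(tag_dimensions(c))
    return counts.most_common()

print(pareto([
    "I was transferred four times",
    "I had to give the same information at every step",
]))
```

The top entry of the resulting list is the "biggest lever" the step describes; the manual review of a sample of comments remains necessary to validate the tagging.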

Step 5: Create a Service Blueprint and Eliminate Effort Sources

For the top 3 high-effort touchpoints, create a service blueprint. Identify:

  • Which internal process breakdowns generate customer effort?
  • Where is system integration missing?
  • Where is the customer being used to correct internal process failures?

Practical Example: CES in Claims Processing at a DACH Insurer

Context: A major DACH insurer has an NPS of +18 and a CSAT of 72% — both acceptable values. Yet the cancellation rate among customers who have filed a claim rises to 28% (compared to 12% without a claim). The hypothesis: it is not dissatisfaction with the outcome driving cancellation but the effort of the process.

Measurement design: CES 2.0 (7-point scale) immediately after claims processing completion, plus an open question.

Results after 4 months (N = 280):

| Claim Type | CES (avg.) | % Low Effort (5+) | % High Effort (1–3) |
| --- | --- | --- | --- |
| Auto glass damage | 5.8 | 78% | 8% |
| Home insurance water damage | 3.2 | 22% | 54% |
| Liability personal injury | 2.8 | 15% | 62% |
| Overall average | 3.9 | 38% | 41% |

Open responses — top 3 effort drivers:

  1. Information repetition (38%): “I had to tell my story to three different people.”
  2. Unclear process (29%): “Nobody could tell me how long it would take or what happens next.”
  3. Channel switching (22%): “I filed online, then had to call, then had to send documents by mail.”

Service blueprint analysis: The blueprint revealed that for water damage claims, three different departments (claims intake, assessor coordination, settlement) worked sequentially — without a shared system. Each department re-requested the same basic information.

Actions taken:

  • Shared claims management system for all three departments (capture customer data once)
  • Automatic status updates via SMS every 48 hours (“Your claim is at step 3 of 5”)
  • Document upload via customer portal instead of postal mail
  • Next-issue avoidance: proactively communicate all required documents at the time of claim filing

Result after 8 months: Water damage CES from 3.2 to 4.9. Cancellation rate after claims from 28% to 17%. Customer satisfaction (CSAT) also rose — even though no measure targeted satisfaction, only effort reduction.

Note: This example is illustratively constructed to demonstrate the method in a service context. The structure is based on typical insurance benchmarks.

CES vs. NPS vs. CSAT: When to Use Which Metric?

| Dimension | CES | NPS | CSAT |
| --- | --- | --- | --- |
| Question | "Was it easy?" | "Would you recommend?" | "Were you satisfied?" |
| Measures | Effort (process quality) | Loyalty intent | Satisfaction (outcome quality) |
| Best for | Support, service recovery, process-intensive services | Overall loyalty, strategic management | Touchpoint quality, experience measurement |
| Strength | Strongest loyalty predictor for service interactions [1] | Widely accepted, easy to communicate | Specific, immediate, operational |
| Weakness | Only measures effort, not satisfaction or delight | Not a superior growth predictor [4] | Says little about loyalty |
| Origin | Dixon / CEB (2010) | Reichheld / Bain (2003) | Fornell / ACSI (1994) |

Recommendation: CES after support and service interactions. CSAT after purchase, onboarding, and experience touchpoints. NPS quarterly for strategic overall loyalty. The three metrics do not compete — they measure different things.

5 Common CES Mistakes

1. Using CES at Experience Touchpoints Instead of Process Touchpoints

Symptom: CES is collected after a restaurant visit, an event, or a consulting session.

Why it hurts: CES measures effort. At experience touchpoints, effort is not the relevant dimension — satisfaction or delight is what matters. A CES of 6.5 after a restaurant visit says: “It was easy.” But “easy” is not what restaurants aspire to.

Solution: Use CES at process touchpoints (support, complaints, returns). Use CSAT at experience touchpoints.

2. Viewing CES in Isolation Without CSAT

Symptom: “Our CES is 5.8 — the service is easy.” But the customer’s problem was not resolved.

Why it hurts: Low effort with an unresolved problem is not good news. CES without CSAT creates a distorted picture: the process was easy, but the result is missing.

Solution: Always collect CES and CSAT together. The combination reveals four scenarios: (1) easy + satisfied = ideal, (2) easy + dissatisfied = outcome problem, (3) difficult + satisfied = process problem, (4) difficult + dissatisfied = urgent.
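The four scenarios can be written down as a tiny classifier. The cutoffs (CES ≥ 5 on the 7-point scale, CSAT ≥ 4 on a 5-point scale) are illustrative assumptions; each company should derive its own thresholds from its data.

```python
# Sketch: the four CES x CSAT scenarios from the text as a classifier.
# Thresholds are illustrative assumptions, not standards.

def ces_csat_quadrant(ces, csat, ces_cut=5, csat_cut=4):
    easy = ces >= ces_cut          # CES 2.0 on a 1-7 scale
    satisfied = csat >= csat_cut   # CSAT on a 1-5 scale
    if easy and satisfied:
        return "ideal"
    if easy and not satisfied:
        return "outcome problem"
    if not easy and satisfied:
        return "process problem"
    return "urgent"

print(ces_csat_quadrant(6, 2))  # → outcome problem
```

Routing each response through such a quadrant check (rather than averaging the two scores) is what makes the combination actionable: each quadrant maps to a different owner and fix.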

3. Not Linking CES to Behavioral Metrics

Symptom: CES is reported, but nobody knows at what value the churn rate drops.

Why it hurts: Dixon et al. demonstrated the relationship between effort and disloyalty at the study level. But every company has its own threshold. Without correlation analysis with behavioral data, CES is a number without an operational target.

Solution: Correlate CES with churn data. Ask: “At what CES value does the cancellation rate drop significantly?” That value is your target.
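A simple way to find that threshold is to compute the churn rate per CES score band. A sketch, assuming the input is a list of `(ces_score, churned)` pairs; the band boundaries and sample data are illustrative.

```python
# Sketch: churn rate per CES score band, to locate the value below
# which cancellations jump. Input pairs are illustrative.

def churn_by_ces_band(records, bands=((1, 3), (4, 5), (6, 7))):
    result = {}
    for lo, hi in bands:
        in_band = [churned for score, churned in records if lo <= score <= hi]
        if in_band:
            result[f"{lo}-{hi}"] = round(sum(in_band) / len(in_band) * 100, 1)
    return result

data = [(2, True), (3, True), (3, False), (5, False),
        (5, True), (6, False), (7, False)]
print(churn_by_ces_band(data))  # churn % per band
```

In practice this needs enough volume per band to be stable and, ideally, a significance test between adjacent bands; the band where churn visibly drops marks the operational CES target.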

4. Not Translating CES Data into Process Improvements

Symptom: CES is reported monthly. Nobody changes a process.

Why it hurts: CES is by definition a process metric. If you don’t translate CES into process improvements, you collect data without consequence.

Solution: For every touchpoint with CES < 4.5 (7-point scale): create a service blueprint, identify effort drivers, develop an action plan within 4 weeks.

5. Using the Old CES Question (CES 1.0)

Symptom: “How much effort did you personally have to put forth?” instead of the positive CES 2.0 formulation.

Why it hurts: The negative framing creates a framing effect: customers actively think about effort and tend to rate more negatively. The positive formulation (“made it easy for me”) delivers better data quality [2].

Solution: Use CES 2.0: “[Company] made it easy for me to handle my issue.” 7-point scale, Strongly disagree — Strongly agree.

When CES Does NOT Work

1. Emotional or relationship-oriented touchpoints: CES measures functional effort. At touchpoints characterized by empathy, trust, or emotional connection (advisory conversations, bereavement support, financial planning), effort is not the relevant dimension.

2. One-time transactions with no repurchase option: CES predicts loyalty behavior. When repetition is not possible (home purchase, one-time legal consultation), the context in which CES shows its strength is absent.

3. As the only metric: CES measures the path, not the result. Always combine it with CSAT (outcome) and NPS (overall loyalty).

4. When effort is intentionally desired: In gamification, learning platforms, or fitness services, high effort can be part of the value proposition. There, CES measures the wrong thing.

Frequently Asked Questions

What is a good CES score?

On a 7-point scale: >5.0 = good, >5.5 = very good, >6.0 = excellent. Below 4.0 = urgent action needed. More important than the absolute value is the comparison between touchpoints and the trend over time.
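The thresholds above, written as a grading function. The label for the middle band (4.0 to 5.0) is my assumption — the article only names the bands above 5.0 and below 4.0.

```python
# Sketch of the interpretation thresholds (7-point scale).
# The "needs improvement" label for 4.0-5.0 is an assumption.

def grade_ces(score):
    if score > 6.0:
        return "excellent"
    if score > 5.5:
        return "very good"
    if score > 5.0:
        return "good"
    if score < 4.0:
        return "urgent action needed"
    return "needs improvement"
```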

What is the difference between CES and NPS?

CES measures the effort of a specific interaction (was it easy?). NPS measures general willingness to recommend (would you recommend?). CES is operational and process-oriented; NPS is strategic and relationship-oriented. CES is the better loyalty predictor for service interactions [1]; NPS is the broader loyalty indicator.

When is CES better than CSAT?

After support interactions, service recovery situations, and process-intensive touchpoints. There, the question “Was it easy?” is more informative than “Were you satisfied?” — because effort is the stronger loyalty driver [1].

How do you improve CES?

In three steps: (1) Categorize CES data by effort dimensions (repeat contacts, channel switching, transfers, information repetition). (2) Create a service blueprint for the highest-effort touchpoints. (3) Eliminate internal process breakdowns that cause customer effort.

A typical service measurement sequence: Use CES to identify where the service creates too much effort. Use service design to analyze the causes in the service blueprint. Use CSAT to check whether outcome quality is adequate. Use NPS to measure overall loyalty.


Research Methodology

This article synthesizes findings from the CEB original study (Dixon, Freeman & Toman, 2010), the expanded framework from The Effortless Experience (Dixon, Toman & DeLisi, 2013), service quality research (Parasuraman et al. 1988), and effort-loyalty research (Gartner/CEB, multiple waves).

Limitations: The CEB study is based on English-language markets. DACH-specific replication studies are limited. The practical example is illustratively constructed, not a documented case study. The 96% figure (high-effort customers become disloyal) comes from the original study and has not been replicated at exactly the same magnitude in other contexts — the direction of the effect, however, has been confirmed many times.

Disclosure

SI Labs provides consulting services in the fields of service innovation and service design. In service measurement projects, we use CES as one of several metrics. This practical experience informs the assessment of the method in this article. Readers should be aware of the potential for perspective bias.

References

[1] Dixon, Matthew, Karen Freeman, and Nick Toman. “Stop Trying to Delight Your Customers.” Harvard Business Review 88, no. 7/8 (July–August 2010): 116–122.

[2] Dixon, Matthew, Nick Toman, and Rick DeLisi. The Effortless Experience: Conquering the New Battleground for Customer Loyalty. Portfolio/Penguin, 2013.

[3] Parasuraman, A., Valarie A. Zeithaml, and Leonard L. Berry. “SERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality.” Journal of Retailing 64, no. 1 (1988): 12–40.

[4] Keiningham, Timothy L., Bruce Cooil, Tor Wallin Andreassen, and Lerzan Aksoy. “A Longitudinal Examination of Net Promoter and Firm Revenue Growth.” Journal of Marketing 71, no. 3 (July 2007): 39–51. doi:10.1509/jmkg.71.3.039.
