
Kano Model: Guide, Practical Example & Questionnaire Template

The Kano model step by step: practical guide with service example, method comparison, Kano questionnaire template & evaluation table for immediate use.

by SI Labs

The Kano model (also known as Kano analysis or Kano diagram) is a method for classifying product and service features by their effect on customer satisfaction. The model distinguishes five categories — must-be features, performance features, attractive (delighter) features, indifferent features, and reverse features — revealing that not all features affect satisfaction equally. It was developed in 1984 by Noriaki Kano at the Tokyo University of Science [1].

What distinguishes the Kano model from simple priority lists: it makes the asymmetry of customer satisfaction visible. A missing must-be feature creates massive dissatisfaction, but its presence creates no excitement. A delighter feature creates joy, but its absence creates no dissatisfaction. This asymmetry explains why some teams build excellent features yet still have unhappy customers — they invest in delight while the foundation crumbles.

Search for “Kano model” and you will find dozens of results with the same smartphone and car examples. None cites Kano’s original 1984 paper. None demonstrates the method in a service process. None explains that Kano categories have an expiration date — that what delights today becomes a baseline expectation in three years. And none systematically compares Kano with MoSCoW, RICE, or scoring models.

This guide closes those gaps.

From Noriaki Kano to modern product development: Where the method comes from

Noriaki Kano (born 1940), professor at the Tokyo University of Science, published the paper Attractive Quality and Must-Be Quality in 1984 together with Seraku, Takahashi, and Tsuji — the foundation of today’s Kano model [1]. Kano’s central insight: the relationship between feature fulfillment and customer satisfaction is not linear. Some features generate disproportionate satisfaction, some generate disproportionate dissatisfaction when missing — and some barely move the needle.

This theory of asymmetric quality perception was a direct response to the then-prevailing assumption that “more quality = more satisfaction.” Kano showed: that only holds for one category of features.

Charles Berger et al. formalized the methodology for English-speaking audiences in 1993, introducing the standard evaluation table (functional × dysfunctional cross-tabulation) and the Customer Satisfaction Coefficient [2]. Matzler and Hinterhuber (1998) integrated the Kano model into Quality Function Deployment (QFD), enabling its adoption in systematic product development [3].

The 5 categories in detail

| Category | Japanese | Effect on satisfaction | Example (insurance customer portal) |
|---|---|---|---|
| Must-be | 当たり前品質 | Absence → strong dissatisfaction. Presence → no particular satisfaction | Login works, policy data is correct, SSL encryption |
| Performance | 一元的品質 | The better fulfilled, the more satisfied. Linear relationship | Portal load time, response time to inquiries, navigation clarity |
| Attractive (Delighter) | 魅力的品質 | Presence → strong satisfaction. Absence → no dissatisfaction | Automatic claim status updates via push, personalized dashboard, document upload via smartphone camera |
| Indifferent | 無関心品質 | Neither satisfaction nor dissatisfaction, regardless of presence | Choice between 12 different color themes |
| Reverse | 逆品質 | Presence → dissatisfaction | Automatic sharing of personal data with third-party providers |

The crucial insight: Must-be features are invisible as long as they work. Customers don’t mention them in open surveys — because they’re taken for granted. Only when they’re missing does dissatisfaction become visible. That’s why feature prioritizations based on open customer surveys systematically overlook must-be features [1].

Temporal dynamics: Why delight becomes baseline

An aspect most descriptions only mention superficially: Kano categories are not stable. Kano himself described the systematic migration of quality attributes in Life Cycle and Creation of Attractive Quality (2001) [4]:

Delighter → Performance → Must-be

What delights today becomes expected tomorrow. Example: in 2015, a mobile app for claims processing was a delighter for insurers. By 2020, it had become a performance feature (customers compared which app was better). By 2025, it’s a must-be — failing to offer mobile claims processing creates dissatisfaction.

What this means for you: A Kano analysis has an expiration date. In dynamic markets (SaaS, telecom, financial services), repeat it every 12–18 months. In more stable industries (mechanical engineering, public administration), the cycle can be longer.

When is the Kano model the right tool?

The Kano model is most valuable when you want to understand how features affect satisfaction — not just whether customers want them.

Use the Kano model when:

  • You need to prioritize a feature list and want to know which features have the greatest satisfaction impact
  • You can collect customer research data (survey with 30+ responses per segment)
  • You want to understand why customers are dissatisfied despite many features being available (hint: must-be features are missing)
  • You want to distinguish between customer segments (what’s a delighter for segment A might be must-be for segment B)
  • You want to objectify stakeholder discussions: “Customer data shows that feature X is a must-be” is more convincing than “I believe feature X is important”

Use a different tool when:

| Situation | Better alternative | Why |
|---|---|---|
| You need a quick prioritization without customer surveys | MoSCoW | MoSCoW is based on stakeholder assessment and needs no questionnaire |
| You want to quantify impact vs. effort | RICE Scoring | RICE considers Reach, Impact, Confidence, and Effort; Kano only captures the customer side |
| You want to evaluate many criteria simultaneously (cost, feasibility, strategy) | Scoring model (weighted criteria) | Scoring models integrate any number of evaluation dimensions |
| You want to understand why customers have a problem | Ishikawa diagram | Ishikawa analyzes causes; Kano analyzes effects |
| You need an iterative improvement cycle for existing services | PDCA cycle | PDCA improves iteratively; Kano prioritizes at a point in time |

Comparison: Kano vs. MoSCoW vs. RICE vs. Scoring Model

Four prioritization methods in direct comparison:

| Dimension | Kano Model | MoSCoW | RICE Scoring | Scoring Model |
|---|---|---|---|---|
| Focus | Customer perception (satisfaction asymmetry) | Feature prioritization (Must/Should/Could/Won't) | Quantitative impact prioritization | Weighted criteria evaluation |
| Data source | Customer survey (functional/dysfunctional) | Stakeholder assessment | Reach, Impact, Confidence, Effort | Team evaluation against criteria |
| Complexity | Medium (survey design + evaluation) | Low (workshop format) | Medium (requires metrics) | Low to medium |
| Best for | Understanding WHY features matter | Sprint/release planning | Prioritization with limited resources | Strategic decisions |
| Weakness | Requires customer survey (time-consuming) | Subjective, no customer data | Confidence often estimated | Criteria weighting subjective |
| Origin | Kano (1984), Japan | Dai Clegg (1994), DSDM | Sean McBride / Intercom (2016) | Decision theory (various) |

Our recommendation: Use Kano when you have time for a customer survey and want to answer the strategic question “How do our features affect satisfaction?” Use MoSCoW or RICE when you need a quick operational prioritization without customer data. Best practice: Kano for strategic prioritization (annually), RICE for operational prioritization (per sprint/release).

Step by step: How to conduct a Kano analysis

Timeline: plan 2–3 weeks for the entire Kano analysis, broken down as 3–5 days for questionnaire design and pilot testing, 1–2 weeks for data collection, and 2–3 days for evaluation and visualization.

Step 1: Define features

List the features or service attributes you want to evaluate. Critical: Formulate each feature concretely enough that respondents know what’s meant.

| Poorly formulated | Well formulated |
|---|---|
| "Good customer service" | "Response to your inquiry within 4 hours" |
| "Modern design" | "Dark mode for the customer portal" |
| "Fast processing" | "Real-time claim status updates via push notification" |

Typical count: 15–25 features per survey. More than 30 creates survey fatigue and reduces response quality.

Step 2: Create the Kano questionnaire

For each feature, ask two questions — one functional and one dysfunctional:

Functional question: “How would you feel if the customer portal offered real-time claim status updates via push notification?”

Dysfunctional question: “How would you feel if the customer portal did NOT offer real-time claim status updates via push notification?”

Response options (identical for both questions):

  1. I would really like that
  2. I expect that
  3. I’m indifferent
  4. I can tolerate that
  5. I really dislike that
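Because the two-question structure is identical for every feature, the questionnaire items can be generated programmatically. A minimal Python sketch; the sentence frame mirrors the insurance-portal example above and is an illustrative assumption, not a fixed template:

```python
# Generate the functional/dysfunctional question pair for each feature.
# The "customer portal" phrasing follows the article's example; adapt
# the sentence frame to your own product.
RESPONSE_OPTIONS = [
    "I would really like that",
    "I expect that",
    "I'm indifferent",
    "I can tolerate that",
    "I really dislike that",
]

def kano_question_pair(feature: str) -> tuple[str, str]:
    """Return the functional and the dysfunctional question for one feature."""
    functional = f"How would you feel if the customer portal offered {feature}?"
    dysfunctional = f"How would you feel if the customer portal did NOT offer {feature}?"
    return functional, dysfunctional

for feature in ["real-time claim status updates via push notification",
                "document upload via smartphone camera"]:
    for question in kano_question_pair(feature):
        print(question)
        for number, option in enumerate(RESPONSE_OPTIONS, start=1):
            print(f"  {number}. {option}")
```

Generating both questions from one sentence frame also keeps their wording consistent; a dysfunctional question that subtly changes the feature description is a common source of contradictory (Q) answers.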

Common questionnaire design mistake: Teams use technical jargon instead of everyday language. “API integration” means nothing to the customer — “automatic data sync with your accounting software” does.

Note on survey tools: Standard survey tools (Google Forms, Typeform) don’t natively support the functional/dysfunctional question pair — you need to set up both questions manually as consecutive items. Specialized Kano tools (e.g., Survalyzer, Kano analysis in Qualtrics) automate the evaluation table. For 20+ features, a specialized tool is worth the investment.

Step 3: Collect data

Minimum sample size: 30 responses per customer segment. Below 30, categorizations are not statistically reliable.

Segmentation: If you have different customer groups (e.g., individual vs. business customers, new vs. existing customers), collect data separately. What’s a delighter for individual customers might be must-be for business customers.

Plan for response rates: For internal surveys (e.g., to your own enterprise customers), expect 30–50% response rates. For external B2B surveys, typical response rates are 10–20%. Tips for higher response rates: keep the survey under 10 minutes, send personalized invitations, and send a reminder after 5 days. Incentives (e.g., sharing a results summary) help with external surveys.

Step 4: Apply the evaluation table

The combination of functional and dysfunctional responses determines the category:

| Functional ↓ \ Dysfunctional → | Really like | Expect | Indifferent | Tolerate | Really dislike |
|---|---|---|---|---|---|
| Really like | Q | A | A | A | O |
| Expect | R | I | I | I | M |
| Indifferent | R | I | I | I | M |
| Tolerate | R | I | I | I | M |
| Really dislike | R | R | R | R | Q |

Legend: M = Must-be, O = One-dimensional (Performance), A = Attractive (Delighter), I = Indifferent, R = Reverse, Q = Questionable

Handling “Q” responses: Q classifications occur when functional and dysfunctional responses contradict each other (e.g., “Really like” + “Really like”). A Q rate above 10% indicates poorly formulated questions — not confused customers. Revise the questions and repeat the survey for the affected features.
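The evaluation table is straightforward to encode. A minimal Python sketch, assuming responses are coded 1–5 in the order of the answer scale ("really like" = 1 … "really dislike" = 5); the example responses are illustrative, not survey data:

```python
from collections import Counter

# Evaluation table per Berger et al.: rows = functional response (1-5),
# columns = dysfunctional response (1-5).
# M = Must-be, O = Performance, A = Attractive, I = Indifferent,
# R = Reverse, Q = Questionable.
TABLE = [
    ["Q", "A", "A", "A", "O"],  # functional: really like
    ["R", "I", "I", "I", "M"],  # functional: expect
    ["R", "I", "I", "I", "M"],  # functional: indifferent
    ["R", "I", "I", "I", "M"],  # functional: tolerate
    ["R", "R", "R", "R", "Q"],  # functional: really dislike
]

def classify(functional: int, dysfunctional: int) -> str:
    """Classify one respondent's answer pair (both coded 1-5)."""
    return TABLE[functional - 1][dysfunctional - 1]

def classify_feature(responses):
    """Tally categories across all respondents for one feature and
    report the Q rate (share of contradictory answer pairs)."""
    counts = Counter(classify(f, d) for f, d in responses)
    return counts, counts["Q"] / len(responses)

# Illustrative: four respondents for one feature
counts, q_rate = classify_feature([(1, 5), (1, 4), (2, 5), (1, 1)])
# -> one O, one A, one M, one Q; q_rate = 0.25 (above the 10% threshold)
```

Running the Q-rate check per feature, as in `classify_feature`, shows exactly which questions need rewording rather than flagging the survey as a whole.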

Step 5: Visualize and prioritize results

Discrete evaluation: Assign each feature its most frequent category. Simple, but loses nuance with heterogeneous distributions.

Continuous evaluation (recommended): Calculate the Customer Satisfaction Coefficient per Berger et al. [2]:

  • CS+ (Satisfaction coefficient) = (A + O) / (A + O + M + I) → How strongly does this feature increase satisfaction?
  • CS- (Dissatisfaction coefficient) = (O + M) / (A + O + M + I) × (-1) → How strongly does the absence of this feature decrease satisfaction?

Plot CS+ on the Y-axis against the absolute value of CS- on the X-axis in a scatter plot — that’s the Kano diagram. Features in the upper right (high CS+, high |CS-|) are performance features. Upper left (high CS+, low |CS-|) are delighters. Lower right (low CS+, high |CS-|) are must-be features.
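Both coefficients follow directly from the category tallies. A short Python sketch of the Berger et al. formulas; the counts are illustrative, not survey data:

```python
def cs_coefficients(counts: dict) -> tuple[float, float]:
    """Customer Satisfaction Coefficients per Berger et al.
    counts: category tallies for one feature (keys A, O, M, I).
    R and Q responses are excluded from the denominator, as in the
    original formulation."""
    a, o, m, i = (counts.get(key, 0) for key in "AOMI")
    total = a + o + m + i
    cs_plus = (a + o) / total    # how strongly presence increases satisfaction
    cs_minus = -(o + m) / total  # how strongly absence decreases satisfaction
    return cs_plus, cs_minus

# Illustrative: 60 classified responses for one feature
cs_plus, cs_minus = cs_coefficients({"A": 30, "O": 15, "M": 5, "I": 10})
# cs_plus = 0.75, cs_minus = -1/3: high CS+, low |CS-| -> delighter quadrant
```

Computing both values per feature gives you the (x, y) coordinates for the scatter plot directly: |CS-| on the X-axis, CS+ on the Y-axis.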

Example: Kano analysis for an insurance customer portal

Context: A mid-sized insurer plans to relaunch its customer portal. The product team has identified 18 potential features. Budget and time allow for 10 features in version 1. The question: which 10?

The team conducts a Kano survey with 87 individual customers and 43 business customers. Results (excerpt):

| Feature | Individual customers | Business customers | Decision |
|---|---|---|---|
| SSL encryption | Must-be | Must-be | V1 — mandatory |
| View policy data | Must-be | Must-be | V1 — mandatory |
| Claim status tracking | Performance | Must-be | V1 — mandatory |
| Document upload via camera | Delighter | Performance | V1 — high potential |
| Push notifications | Delighter | Indifferent | V1 — for individual customers |
| Dark mode | Indifferent | Indifferent | V2 — low priority |
| Chatbot support | Performance | Reverse | Segmented: yes for individual, no for business |
| API accounting integration | Indifferent | Delighter | V1 — for business customers |

Insight: Without segmentation, “push notifications” and “API integration” would likely have been classified as “Indifferent” in the overall average — because the opposing evaluations of the segments neutralize each other. Only separate analysis reveals that both features have high value for their respective segments.
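The neutralization effect is easy to reproduce with numbers. A small Python sketch using hypothetical tallies for "push notifications" (illustrative values, not the survey data from the example):

```python
from collections import Counter

# Hypothetical category tallies per segment (A = Attractive, I = Indifferent,
# O = Performance, M = Must-be, R = Reverse). Not real survey data.
individual = Counter({"A": 45, "I": 30, "O": 7, "M": 5})  # n = 87
business = Counter({"I": 32, "A": 4, "R": 4, "O": 3})     # n = 43

def discrete_category(counts: Counter) -> str:
    """Discrete evaluation: the most frequent category wins."""
    return counts.most_common(1)[0][0]

print(discrete_category(individual))             # A -> delighter for individuals
print(discrete_category(business))               # I -> indifferent for business
print(discrete_category(individual + business))  # I -> pooling hides the delighter
```

Pooled, the feature looks Indifferent (62 × I vs. 49 × A) even though it is clearly a delighter for the larger segment, which is exactly why segments must be evaluated separately.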

Next step: Must-be features (SSL, policy data) go into V1 without discussion. Then performance features (claim status, document upload). Then segment-specific delighters. Indifferent features are consciously moved to V2 — even if individual stakeholders demand them.

Note: This example is illustratively constructed to demonstrate the method in a service context. The categorizations are based on typical industry values.

Kano questionnaire: Template for immediate use

Use this template directly for your next Kano analysis:

Preparation

  • 15–25 concrete features formulated (everyday language, no jargon)
  • One functional and one dysfunctional question formulated per feature
  • Customer segments defined (who gets analyzed separately?)
  • Minimum sample size planned: 30+ per segment

Execution

  • Questionnaire created (online tool: Google Forms, Typeform, SurveyMonkey)
  • Pilot test with 3–5 people conducted (does everyone understand the questions?)
  • Data collected
  • Q rate checked: >10% → revise questions

Evaluation

  • Evaluation table applied (functional × dysfunctional)
  • Results separated by customer segment
  • CS coefficients calculated (CS+ and CS-)
  • Kano diagram created (CS+ vs. CS-)

Prioritization

  • Must-be features → mandatory (non-negotiable)
  • Performance features → investment (the better, the more satisfied)
  • Delighter features → differentiation (competitive advantage)
  • Indifferent features → cut or defer
  • Reverse features → avoid

Next step after prioritization: Transfer Kano categories into your backlog. Must-be features become must-have stories (non-negotiable). Performance features go into impact evaluation — combine them with RICE to factor in effort and reach. Delighter features become strategic differentiation initiatives with their own business case. Document indifferent and reverse features as deliberate “do not build” decisions — this protects against stakeholder discussions in 3 months.

4 common mistakes in Kano analysis

1. Formulating questions in technical language instead of customer language

Symptom: High Q rate (>10%), many “Indifferent” responses, customers abandon the questionnaire.

Why it hurts: Customers who don’t understand the question respond with “Indifferent” or give contradictory answers — both distort the categorization.

Solution: Formulate each feature from the customer’s perspective, in the language customers use. Test the questionnaire with 3–5 real customers before sending it to the full sample.

2. Not segmenting results

Symptom: Features are classified as “Indifferent” even though individual customer segments rate them as delighters or must-be.

Why it hurts: Opposing evaluations neutralize each other in the average. A feature that’s a delighter for individual customers and reverse for business customers appears as “Indifferent” — even though it’s highly relevant for both segments.

Solution: Before data collection, define which segments will be evaluated separately. Collect at least 30 responses per segment.

3. Treating Kano results as permanent

Symptom: The Kano analysis from three years ago is still being used as the basis for feature decisions.

Why it hurts: Kano categories systematically migrate: Delighter → Performance → Must-be [4]. Features that delighted in 2023 may be baseline expectations in 2026. Prioritizing with outdated data means investing in yesterday.

Solution: Repeat the Kano analysis regularly — every 12–18 months in dynamic markets, every 2–3 years in stable markets.

4. Using Kano as the sole prioritization criterion

Symptom: “That’s a delighter, so let’s build it” — with no regard for cost, feasibility, or strategic fit.

Why it hurts: Kano only answers “How does this feature affect satisfaction?” — not “What does it cost?”, “How many customers does it affect?”, or “Does it fit the strategy?”

Solution: Use Kano as one input to prioritization, not the prioritization itself. Combine Kano with RICE or a scoring model to consider customer perspective, effort, and strategy together.

When the Kano model does NOT work

1. No access to customers: Kano requires direct customer access. If you can’t survey your customers (e.g., anonymous users, regulatory restrictions, or a product that doesn’t exist yet), Kano provides no data. Alternatives: MoSCoW workshop with stakeholders, Jobs-to-be-Done interviews, or analysis of support tickets and reviews as a proxy.

2. Too few responses per segment: Below 30 responses per customer segment, categorizations are not statistically reliable. A Kano analysis with 15 responses is worse than no analysis — it creates false confidence.

3. Radically new products or services: Kano works best for incremental feature decisions in existing products. When testing a completely new service concept, customers often can’t answer the functional/dysfunctional questions because they can’t envision the feature. Design Thinking (prototyping + testing) is better suited here. The Morphological Box helps systematically explore the solution space before evaluating individual features.

4. Cultural differences in international markets: What’s a must-be feature in Germany might be a delighter in South Korea or the US. A single global Kano analysis ignores these cultural differences. Solution: conduct a separate survey per market.

5. Fast-moving categories: In markets with extreme innovation speed (e.g., generative AI, consumer tech), Kano categories can migrate within 6 months — faster than the survey and implementation cycle. Here, a continuous feedback loop (e.g., product analytics + NPS) is often more practical than a periodic Kano survey.

Variations and advanced techniques

A-Kano: The quantitative Kano model

The classic Kano model is qualitative — it says “This feature is a performance feature” but not “A 10% improvement in this performance feature increases satisfaction by X points.” The analytical extension (A-Kano) closes this gap by combining quantitative satisfaction values with Kano classification [5]. A-Kano is suited for teams that already have experience with the classic model and want to refine their prioritization further.

Continuous Kano: Trend data instead of snapshots

Instead of a one-time survey, the Kano analysis is repeated at regular intervals — e.g., quarterly or with each major release. This generates trend data showing how features migrate over time and when a delighter has become a must-be.

Data-driven Kano: Kano without a questionnaire

In contexts with large usage data (SaaS, e-commerce, digital services), Kano classification can be derived from behavioral data — e.g., from the correlation between feature usage and NPS/CSAT [6]. This method requires data science competence but eliminates the dependency on questionnaires.

Frequently asked questions

What is the Kano model?

The Kano model is a method for classifying product and service features by their effect on customer satisfaction. Developed in 1984 by Noriaki Kano, it distinguishes five categories: must-be features (expected; absence creates dissatisfaction), performance features (the better, the more satisfied), attractive/delighter features (unexpected; create joy), indifferent features (no impact), and reverse features (create dissatisfaction when present).

How do you conduct a Kano analysis?

In five steps: (1) Define features (15–25, concretely formulated). (2) Create questionnaire with one functional and one dysfunctional question per feature. (3) Collect data (at least 30 responses per segment). (4) Classify responses using the evaluation table. (5) Calculate CS coefficients and visualize in the Kano diagram.

What is the difference between Kano and QFD?

Kano classifies features by their satisfaction impact — it answers “How important is this feature to the customer?” QFD (Quality Function Deployment) translates customer requirements into technical specifications — it answers “How do we technically implement this feature?” Kano can serve as input for QFD: first prioritize with Kano, then technically specify the most important features with QFD [3].

How many responses do you need for a Kano analysis?

At least 30 per customer segment. Below 30, categorizations are not statistically reliable. For an overall evaluation without segmentation, 50–100 responses suffice. For three segments (e.g., individual, small business, enterprise), you need at least 90 responses — 30 per segment.

How often should you repeat a Kano analysis?

Every 12–18 months in dynamic markets (SaaS, telecom, financial services). Every 2–3 years in stable markets. The reason: Kano categories systematically migrate. What delights today becomes a baseline expectation in 2–3 years [4].

What advantages does the Kano model have over simple prioritization lists?

Three advantages: (1) It makes asymmetry visible — not all features affect satisfaction equally. (2) It’s based on customer data, not gut feeling or stakeholder volume. (3) It identifies must-be features that customers would never mention in open surveys, but whose absence creates massive dissatisfaction.

A typical sequence in service development: Use the Ishikawa diagram to identify root causes of service problems. Use the Kano model to prioritize which improvements have the greatest satisfaction impact. Use the Morphological Box to explore solution alternatives for the most important features. Use the PDCA cycle to implement improvements iteratively.

  • Ishikawa diagram: When you don’t want to prioritize but understand why a service feature causes problems — root cause analysis instead of satisfaction classification
  • Morphological Box: When you want to systematically explore the solution space before evaluating individual features — idea generation instead of prioritization
  • PDCA cycle: When you want to iteratively improve the most important features after Kano prioritization — Kano prioritizes, PDCA improves
  • Gemba Walk: When you want to observe how customers actually use your service before conducting the Kano survey
  • Jobs-to-be-Done: When you want to understand what “job” the customer is hiring your service for — a complementary approach to the “what” of Kano features

Research methodology

This article synthesizes findings from Kano’s original paper (1984), the methodological codification by Berger et al. (1993), Kano’s own lifecycle theory (2001), a recent service quality study (Kermanshachi & Nipa 2022, N>400), the systematic review of categorization methods (Slevitch 2025), and the analysis of 10 German-language expert contributions on the Kano model.

Limitations: Academic literature on Kano application in B2B service environments is limited — most studies come from product development or healthcare. The practical example (insurance customer portal) is illustratively constructed, not a documented case study.

Disclosure

SI Labs offers consulting services in the area of service innovation. In the concept phase of the Integrated Service Development Process (iSEP), we use the Kano model to prioritize service features before they go into development. This practical experience informs the framing of the method in this article. Readers should be aware of potential perspective bias.

References

[1] Kano, Noriaki, Nobuhiku Seraku, Fumio Takahashi, and Shinichi Tsuji. “Attractive quality and must-be quality.” Journal of the Japanese Society for Quality Control 14, no. 2 (1984): 39-48. DOI: 10.20684/quality.14.2_147 [Foundational work | Original paper | Citations: 6,000+ | Quality: 95/100]

[2] Berger, Charles, Robert Blauth, David Boger, et al. “Kano’s Methods for Understanding Customer-defined Quality.” Center for Quality Management Journal Fall (1993): 3-36. [Methodological codification | Practitioner Guide | Citations: 2,000+ | Quality: 82/100]

[3] Matzler, Kurt, and Hans H. Hinterhuber. “How to Make Product Development Projects More Successful by Integrating Kano’s Model of Customer Satisfaction into Quality Function Deployment.” Technovation 18, no. 1 (1998): 25-38. DOI: 10.1016/S0166-4972(97)00072-2 [Journal Article | Kano+QFD Integration | Citations: 1,500+ | Quality: 80/100]

[4] Kano, Noriaki. “Life Cycle and Creation of Attractive Quality.” 11th QMOD Conference, 2001. [Conference paper | Kano himself | Quality: 85/100]

[5] Xu, Qianli, Roger J. Jiao, Xi Yang, Martin Helander, Khalid Halimahtun, and Anders Opperud. “An analytical Kano model for customer need analysis.” Design Studies 30, no. 1 (2009): 87-110. DOI: 10.1016/j.destud.2008.07.001 [Journal Article | A-Kano Methodology | Citations: 400+ | Quality: 78/100]

[6] Kermanshachi, Sharareh, and Thahomina Jahan Nipa. “Service quality assessment and enhancement using Kano model.” PLoS ONE 17, no. 2 (2022): e0264423. DOI: 10.1371/journal.pone.0264423 [Empirical Study | N>400 | Service Quality | Quality: 72/100]

[7] Slevitch, Lisa. “Kano Model Categorization Methods: Typology and Systematic Critical Overview for Hospitality and Tourism Academics and Practitioners.” Journal of Hospitality & Tourism Research (2025). DOI: 10.1177/10963480241230957 [Systematic Review | Categorization methods | Quality: 80/100]

[8] Sauerwein, Elmar, Franz Bailom, Kurt Matzler, and Hans H. Hinterhuber. “The Kano Model: How to Delight Your Customers.” International Working Seminar on Production Economics, Vol. 1 (1996): 313-327. [Conference paper | Practitioner Introduction | Citations: 1,000+ | Quality: 75/100]
