Innovation Maturity Model: Definition, Types, and Practical Guide for Innovation Management
Maturity models for innovation and services: definition, established models, how to build one, and a practical guide.
Most maturity models are used incorrectly. They are deployed as evaluation instruments — “We are at level 2, we need to be at level 4” — instead of as development tools. The result: teams optimize for the level instead of the capability. They document processes to “achieve” the next level without any actual change in organizational competency. The maturity model becomes a checklist, and the checklist becomes an end in itself.
Yet the real strength of a maturity model lies elsewhere: it makes the development state of a capability diagnosable. Not to judge, but to understand where the organization stands — and which next development step has the greatest leverage. This article explains where maturity models come from, which models are relevant for innovation management and service innovation, how to build your own model — and where the method hits its limits.
What Is a Maturity Model?
A maturity model is a framework that describes the development state of an organizational capability on a scale of levels. Each level defines a state with characteristic features, processes, and outcomes. The basic idea: organizations don’t develop capabilities in leaps but in identifiable stages — and each stage has specific prerequisites and challenges.
The generic maturity scale: Most models use five levels, though the labels vary:
| Level | Label | Characteristic | Typical Behavior |
|---|---|---|---|
| 1 | Initial / Ad hoc | No defined processes; success depends on individuals | "We do it when someone has an idea" |
| 2 | Managed / Repeatable | Basic processes exist; success is repeatable | "We have a process, but it isn't always followed" |
| 3 | Defined | Standardized processes, documented organization-wide | "Everyone knows the process and follows it" |
| 4 | Quantitatively Managed | Processes are measured and quantitatively managed | "We measure our process performance and actively manage it" |
| 5 | Optimizing | Continuous improvement based on data | "We systematically and proactively improve our processes" |
Where Do Maturity Models Come From?
Watts Humphrey and the CMM
The history of maturity models begins in 1987 at the Software Engineering Institute (SEI) at Carnegie Mellon University. Watts Humphrey — often called the “father of software quality” — developed the Capability Maturity Model (CMM) for software development on behalf of the US Department of Defense.1
The problem was pragmatic: the Pentagon contracted software work to companies but had no instrument to assess their development capability. Projects were delayed, budgets exploded, and software was delivered with critical defects. Humphrey’s solution: a model that assessed the maturity of a company’s software development processes on a scale from 1 (chaotic) to 5 (optimizing).
The CMM quickly became the standard — not just in the defense industry but worldwide in the software sector. Companies began marketing their CMM level as a competitive advantage: “We are CMM Level 3” became a quality seal.
CMMI: The Integration
In 2002, the CMM was evolved into the Capability Maturity Model Integration (CMMI), which covered not just software development but also systems engineering, acquisition, and service development.2 CMMI became the most comprehensive and best-documented maturity model worldwide.
CMMI offers two representations:
- Staged representation: The classic 5-level scale (most people know only this). The organization as a whole is placed at a level.
- Continuous representation: Individual process areas are assessed independently. An organization can be at level 4 in requirements management and level 2 in risk management. This approach is more differentiated and often more useful in practice.
The Proliferation: Maturity Models for Everything
After the success of CMM/CMMI, the number of maturity models exploded. Becker, Knackstedt, and Pöppelbuß identified over 150 different maturity models in academic literature in a 2009 systematic analysis.3 Maturity models exist for:
- Digital transformation (MIT/Capgemini Digital Maturity Model)
- Innovation (Innovation Maturity Model, various variants)
- Service design (Design Management Institute, Service Design Maturity)
- Data management (DAMA Data Management Maturity)
- Sustainability (Corporate Sustainability Maturity Model)
- Artificial intelligence (AI Maturity Model, various variants)
- And over 140 more
This proliferation has led to legitimate criticism: when a maturity model exists for everything, the concept gets diluted.
Maturity Models for Innovation and Service Design
Innovation Maturity Model
For DACH companies that want to systematically develop their innovation capability, innovation maturity models are particularly relevant. A practical model covers multiple dimensions:
| Dimension | Level 1: Ad hoc | Level 3: Defined | Level 5: Optimizing |
|---|---|---|---|
| Strategy | Innovation is not a strategic topic | Innovation strategy exists but isn’t integrated into overall strategy | Innovation is integral part of corporate strategy with clear investment decisions |
| Process | No defined innovation processes | Standardized innovation process exists | Adaptive process that varies by innovation type (incremental vs. radical) |
| Culture | Innovation is accidental or individual initiative | Innovation culture is encouraged but not systematic | Experimentation culture is anchored organization-wide; failure is treated as learning |
| Competencies | No specific innovation competencies | Innovation team with specific skills | Innovation capability is broadly distributed as a core competency |
| Measurement | No innovation metrics | Input metrics (budget, headcount) | Outcome metrics (innovation rate, experimentation velocity, validated hypotheses) |
| Portfolio | Individual projects without connection | Project portfolio with simple prioritization | Strategically managed innovation pipeline with horizon model |
| External networking | No systematic collaboration | Occasional partnerships | Open innovation platform, systematic ecosystem development |
Service Design Maturity Model
For organizations building service design as a capability, a specific maturity model provides orientation:
| Level | Label | Characteristic |
|---|---|---|
| 1 | Project-based | Service design is used in individual projects but not systematically. Dependency on external consultants. |
| 2 | Repeatable | Internal service design competencies emerge. Standard methods (journey mapping, blueprinting) are regularly used. |
| 3 | Systematic | Service design is defined as a process. Dedicated team exists. Tools and templates are standardized. |
| 4 | Integrated | Service design is integrated into product development, IT, and strategy. Cross-functional collaboration is standard. |
| 5 | Strategic | Service design is a strategic competency. All new services go through the process. Results are systematically measured. |
Digital Maturity in the DACH Region
The MIT/Capgemini model for digital maturity distinguishes two dimensions: digital intensity (investments in technology) and transformation management intensity (leadership capability for change).4 The combination produces four quadrants:
| | Low Digital Intensity | High Digital Intensity |
|---|---|---|
| High Transformation Mgmt | Conservative (good leadership, little technology) | Digital Master (both strong) |
| Low Transformation Mgmt | Beginner (both weak) | Fashionista (much technology, little direction) |
DACH finding: Many DACH companies fall into the “Conservative” quadrant — good leadership structures but too slow in technology adoption. Or they are “Fashionistas” — they invest in technology (AI, cloud, IoT) without building the transformation management competency that makes the technology effective.
Building Your Own Maturity Model: Step by Step
When Your Own Model Makes Sense
Before developing your own maturity model, check: does an established model already exist for your domain? CMMI, the MIT/Capgemini model, and ISO 33001 (process assessment) are well-founded, validated, and comparable. Your own model is worthwhile when:
- No existing model covers your specific capability (e.g., service innovation capability)
- You want to use the model as an internal development tool (not as a benchmark)
- Existing models are too generic for your context
Step 1: Define the Capability
What exactly do you want to measure? “Innovation capability” is too broad. “Capability to systematically validate service hypotheses with customers” is specific enough. The more precisely the capability is defined, the more useful the model becomes.
Step 2: Identify Dimensions
Which aspects of the capability are relevant? Typical dimensions:
- Processes: Are there defined, repeatable workflows?
- Competencies: Do employees have the necessary skills?
- Culture: Does the organizational culture support the capability?
- Technology/Tools: Are the necessary tools available?
- Governance: Are there responsibilities and management mechanisms?
- Measurement: Is progress measured?
Step 3: Define Levels
For each dimension: What does Level 1 (ad hoc) look like? Level 3 (defined)? Level 5 (optimizing)? Important: the levels must be cumulative — each higher level presupposes the characteristics of lower levels.
Quality criteria for good level descriptions:
- Observable: The characteristics of each level must be recognizable in practice, not just on paper.
- Distinguishable: Level 2 must be clearly different from Level 3.
- Achievable: The jump from one level to the next must be realistic.
- Non-judgmental: Level 2 is not “bad.” It is the current development state — and the starting point for targeted improvement.
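Steps 2 and 3 can be kept honest with a small amount of tooling. A minimal sketch (the dimension names and level descriptions are taken from the tables above, but the in-code representation itself is an illustrative assumption, not a standard format): the model is a mapping from dimensions to level descriptions, and a check enforces that every dimension defines all required levels and that no two levels share a description, which would violate the "distinguishable" criterion.

```python
# Hypothetical in-code representation of a custom maturity model:
# each dimension maps level numbers to observable descriptions.
model = {
    "Process": {
        1: "No defined innovation processes",
        3: "Standardized innovation process exists",
        5: "Adaptive process varies by innovation type",
    },
    "Measurement": {
        1: "No innovation metrics",
        3: "Input metrics (budget, headcount)",
        5: "Outcome metrics (innovation rate, validated hypotheses)",
    },
}

def check_model(model, required_levels=(1, 3, 5)):
    """Flag dimensions that miss a level or reuse a description
    (violating the 'distinguishable' quality criterion)."""
    problems = []
    for dim, levels in model.items():
        missing = [lv for lv in required_levels if lv not in levels]
        if missing:
            problems.append(f"{dim}: missing levels {missing}")
        if len(set(levels.values())) < len(levels):
            problems.append(f"{dim}: duplicate level descriptions")
    return problems

print(check_model(model))  # an empty list means the model passes both checks
```

Describing only levels 1, 3, and 5 and interpolating the rest in discussion is a common simplification; a full model would spell out all five.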
Step 4: Conduct the Assessment
Methods for self-assessment:
- Questionnaire: 3-5 statements per dimension on a Likert scale. Fast, but susceptible to self-overestimation.
- Workshops: Cross-functional groups discuss each dimension. More time-intensive but deeper insights and higher validity.
- Interviews: Individual interviews with stakeholders from different areas. Most laborious but most differentiated.
- Evidence-based: Instead of self-assessment, artifacts are reviewed (documentation, process descriptions, metrics). Most objective, but not all dimensions (e.g., culture) can be assessed this way.
Recommendation for DACH companies: Start with a workshop-based assessment. Invite 8-12 people from different functions (strategy, innovation, IT, HR, business units). Plan 3-4 hours. The workshop’s value lies not just in the assessment but in the conversation: when the strategy department believes the organization is at Level 4 and the business units say Level 2, the discrepancy itself is the most important insight.
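The workshop ratings can be aggregated in a few lines. A minimal sketch (dimensions, participant counts, and scores are invented for illustration): alongside the mean per dimension, it tracks the spread between raters, because, as noted above, a large disagreement between functions is itself the most important insight.

```python
from statistics import mean

# Hypothetical workshop ratings on the 1-5 maturity scale,
# one score per participant per dimension.
ratings = {
    "Strategy": [4, 3, 2],
    "Process":  [3, 2, 2],
    "Culture":  [2, 1, 1],
}

summary = {}
for dimension, votes in ratings.items():
    spread = max(votes) - min(votes)
    summary[dimension] = {
        "mean": round(mean(votes), 1),
        "spread": spread,
        # A spread of 2+ levels flags the discrepancy worth discussing
        # in the workshop before averaging anything away.
        "discuss": spread >= 2,
    }

for dim, stats in summary.items():
    print(dim, stats)
```

The point of the sketch is the `discuss` flag, not the mean: averaging a 4 from strategy and a 2 from the business units into "3.0" would hide exactly the signal the workshop exists to surface.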
Step 5: Derive a Development Plan
From the assessment: Which dimension has the biggest gap? Which development step has the greatest leverage? The rule: don’t try to advance one level on all dimensions simultaneously. Focus on the one or two dimensions that have the greatest impact on the overall capability.
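The focus rule above can be made explicit. A minimal sketch (current and target levels are illustrative): rank dimensions by the size of the gap between current and target level and pick the top one or two. Gap size is only a proxy for leverage, so ties still need the qualitative judgment the article describes.

```python
# Hypothetical assessment result: current and target levels (1-5 scale).
assessment = {
    "Strategy":    {"current": 3, "target": 4},
    "Process":     {"current": 2, "target": 3},
    "Culture":     {"current": 1, "target": 2},
    "Measurement": {"current": 1, "target": 3},
}

# Rank by gap size, largest first. Python's sort is stable, so
# dimensions with equal gaps keep their original order; in practice
# that tie is broken by a leverage discussion, not by the code.
ranked = sorted(assessment.items(),
                key=lambda kv: kv[1]["target"] - kv[1]["current"],
                reverse=True)

# The article's rule: focus on one or two dimensions, not all at once.
focus = [name for name, _ in ranked[:2]]
print(focus)
```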
Practical Example: Innovation Maturity at a DACH Insurer
A major DACH insurer wanted to systematically assess and develop its innovation capability. The approach:
Assessment result (workshop with 12 executives):
| Dimension | Current Level | Target Level (18 Months) | Key Gap |
|---|---|---|---|
| Strategy | 3 (defined) | 4 (managed) | Innovation strategy exists but isn’t linked to investment decisions |
| Process | 2 (repeatable) | 3 (defined) | Processes exist in the innovation team but not enterprise-wide |
| Culture | 1 (ad hoc) | 2 (repeatable) | Innovation is driven by individuals; no systematic culture-building |
| Competencies | 2 (repeatable) | 3 (defined) | Service design competency only in the innovation team, not in business units |
| Measurement | 1 (ad hoc) | 3 (defined) | No innovation metrics; success is assessed anecdotally |
| Portfolio | 2 (repeatable) | 3 (defined) | Projects are managed individually; no systematic portfolio management |
Priorities (derived from the assessment):
- Measurement (Level 1 -> 3): Without metrics, no progress can be demonstrated. First action: define innovation metrics.
- Culture (Level 1 -> 2): The cultural foundation is missing. First action: launch a pilot project with visible leadership commitment.
- Process (Level 2 -> 3): Roll out the existing innovation process enterprise-wide.
Why not all dimensions simultaneously? Because the organization would overestimate its change capacity. Three focused development steps over 18 months are more realistic than six simultaneous ones.
Maturity Model Compared: BSC, OKR, and Maturity
Maturity models, the Balanced Scorecard, and OKR are complementary tools — not alternatives:
| Dimension | Maturity Model | BSC | OKR |
|---|---|---|---|
| Question | How developed is our capability? | Are we strategically on course? | What do we focus on this quarter? |
| Time horizon | Long-term (years) | Medium-term (year) | Short-term (quarter) |
| Function | Diagnostics + development planning | Strategy management | Focus + alignment |
| Output | Level assessment + development plan | Metrics + target values | Objectives + Key Results |
| Typical mistake | Using it as a checklist instead of a development tool | Using it as a KPI dashboard instead of a Strategy Map | Too many OKRs, output instead of outcome |
The optimal combination:
- Maturity model: Diagnoses the development state (annually or semi-annually).
- BSC: Translates development goals into strategic metrics.
- OKR: Sets the quarterly focus on the most important development steps.
Example: The maturity model shows that innovation culture is at Level 1. The BSC sets “build innovation culture” as a strategic objective with the metric “share of employees who conduct at least one experiment per quarter: from 2% to 15%.” OKR translates this into a quarterly focus: “Objective: Establish experimentation culture in three pilot teams. KR: Each pilot team has launched at least two safe-to-fail probes and documented the results.”
The Most Common Maturity Model Mistakes
Mistake 1: Maturity as Checklist
When the assessment is a list of requirements that are “met” or “not met,” the model becomes a compliance tool. The consequence: teams document processes to “pass” the next level without any actual improvement in capability. A team that creates process documentation to move from Level 2 to Level 3 has documentation — not a better capability.
Mistake 2: Level 5 as Universal Goal
Not every organization needs Level 5 in every dimension. A startup doesn’t need Level 5 processes — they would suffocate it. A mid-sized company might need Level 3 in innovation culture and Level 4 in process. The target level must match the strategic context.
Mistake 3: Assessment Without Consequences
An assessment that ends in a PowerPoint is worthless. Value emerges only when the assessment becomes a development plan — with concrete actions, responsible parties, and a timeline.
Mistake 4: Only Quantitative Assessment
Maturity models with pure Likert scales produce numbers that suggest precision that doesn’t exist. “Our innovation culture is 3.4” is an illusion of precision. Better: qualitative descriptions per level that are discussed by the team.
Mistake 5: Not Evolving the Model
A maturity model developed three years ago may no longer capture the relevant dimensions. Models need regular review: Are the dimensions still relevant? Are the level descriptions still current? Has the strategic context changed?
Frequently Asked Questions
What is a maturity model in simple terms?
A maturity model describes the development state of an organizational capability on a scale of typically five levels: from “ad hoc” (no defined processes) to “optimizing” (continuous, data-based improvement). It serves as a diagnostic and development tool — not a judgment.
What is CMMI?
CMMI (Capability Maturity Model Integration) is the most comprehensive and best-documented maturity model, developed by the Software Engineering Institute at Carnegie Mellon University. It assesses process maturity in areas such as software development, systems engineering, and service development on a scale from Level 1 (Initial) to Level 5 (Optimizing).
How are maturity models and the Balanced Scorecard related?
The maturity model diagnoses the development state of a capability (e.g., innovation capability at Level 2). The Balanced Scorecard translates the development goal into strategic metrics (e.g., “increase innovation rate from 8% to 20%”). The maturity model says: where you stand. The BSC says: where you want to go — and how you measure the path.
Do I need a maturity model for innovation?
If you want to systematically develop your organization’s innovation capability (rather than just managing individual projects), yes. A maturity model makes visible which dimensions (strategy, process, culture, competencies, measurement) need the most development — and prevents the common trap of improving processes while culture remains at Level 1.
How often should a maturity assessment be conducted?
Annually or semi-annually. More frequently than semi-annually rarely makes sense because organizational capabilities don’t change in weeks. Less frequently than annually risks the assessment becoming ritualized rather than driving development.
What is the difference between a maturity model and benchmarking?
A maturity model assesses the internal development state of a capability on a defined scale. Benchmarking compares performance against external reference values (competitors, industry leaders). Both complement each other: the maturity model shows where you stand. Benchmarking shows where others stand.
Methodology & Sources
This article draws on 12 academic and practitioner sources, including the foundational works by Humphrey/SEI (1987/1989), CMMI Institute (2002/2018), the meta-analysis by Becker et al. (2009), the MIT/Capgemini Digital Maturity Model (2014), and practice-oriented innovation maturity models.
SERP finding: The German-language top-10 results for “Reifegradmodell” are predominantly CMMI summaries and generic definitions. None offers a concrete innovation maturity assessment, explains the connection to BSC and OKR, names the most common application mistakes, or offers a DACH practical example. This article closes these four gaps.
Limitations: The effectiveness of maturity models as development tools is empirically little studied. Most validation studies relate to CMMI in software development — not to innovation management. The innovation and service design maturity levels presented here are based on practical experience and existing frameworks, not controlled studies.
Disclosure: SI Labs supports companies in developing service innovation capabilities. We use maturity models as diagnostic instruments to assess the development state and derive priorities — not as a standalone consulting product.
References
1. Humphrey, Watts S. Managing the Software Process. Addison-Wesley, 1989. Foundational work for the Capability Maturity Model (CMM), developed at the Software Engineering Institute at Carnegie Mellon University.
2. CMMI Institute. CMMI for Development, Version 2.0. ISACA, 2018. Integration of CMM for software, systems, acquisition, and services; staged and continuous representations.
3. Becker, Jörg, Ralf Knackstedt, and Jens Pöppelbuß. "Developing Maturity Models for IT Management." Business & Information Systems Engineering 1, no. 3 (2009): 213-222. Systematic analysis of over 150 maturity models; quality criteria for developing new models.
4. Westerman, George, Didier Bonnet, and Andrew McAfee. Leading Digital: Turning Technology into Business Transformation. Harvard Business Review Press, 2014. MIT/Capgemini Digital Maturity Model with four quadrants: Beginner, Conservative, Fashionista, Digital Master.