
DIME PROJECT

Implementing AI in Healthcare

SECTION 1 | IDENTIFY THE PROBLEM

Health AI Maturity Model

How ready is your system to support new AI tools? The Health AI Maturity Model helps you benchmark where your organization stands today and chart a path forward. It highlights seven domains, including leadership, workforce, governance, and technology, that determine the success of your implementation. Use this page to identify strengths, expose gaps, and build a shared path to readiness across your system.

See the Whole System

The decisions, people, and processes across your system are tightly connected, and AI adoption will expose those dependencies quickly. Use the dependency map to explore these connections before you move into the Health AI Maturity Model to see where your organization stands today and what it will take to advance. 

Review the Health AI Maturity Model

The Playbook team developed this Health AI Maturity Model as a strategic tool for health systems to assess and advance their readiness to implement AI in a safe, effective, and sustainable way.

INSTRUCTIONS

Health AI Maturity Framework

How to Use It:

  • Maturity levels measure your current state and help set realistic targets.
  • Benchmarks provide examples of measurable progress indicators.
  • Pro tips and red flags curate best practices and common pitfalls.

Health systems need the right people, policies, infrastructure, and metrics in place. This model helps organizations understand where they are today and what it will take to reach the next level.

PRO TIP

This model includes five maturity levels to show a complete path to health AI excellence. However, organizations should view Level 4 as “mature” and Level 5 as “aspirational”.

Not every use case or setting requires full optimization to deliver high value and impact.

  • Leadership, governance & compliance

    For: Strategic decision-makers, IT & data leads, clinical leaders

    Strong governance safeguards your patients, investments, and reputation. It provides the strategic, ethical, and regulatory backbone needed to responsibly scale AI.

    PRO TIPS | Considerations from The Playbook team

    Assign an executive sponsor for AI oversight aligned with enterprise strategy.

    Stand up a cross-functional AI governance body before procurement.

    Map regulatory requirements and engage early to avoid gray-zone missteps.

    Document decision-making processes and value alignment from the start to ensure transparency and accountability.
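
    One lightweight way to start the documentation habit described in these tips is a structured inventory record for every AI tool under governance review. The sketch below (Python) is illustrative only; the field names and example values are assumptions, not a mandated schema.

    # Illustrative AI inventory record; field names are examples, not a required schema.
    from dataclasses import dataclass, field

    @dataclass
    class AIInventoryRecord:
        tool_name: str
        intended_use: str               # e.g., triage support, documentation automation
        executive_sponsor: str          # accountable leader aligned with enterprise strategy
        governance_review_date: str     # last review by the cross-functional AI governance body
        regulatory_notes: str           # e.g., HIPAA applicability, FDA status
        known_limitations: list[str] = field(default_factory=list)

    record = AIInventoryRecord(
        tool_name="Sepsis early-warning model (hypothetical example)",
        intended_use="Flag possible sepsis for nurse review; not a diagnosis",
        executive_sponsor="Chief Medical Information Officer",
        governance_review_date="2025-01-15",
        regulatory_notes="Operates on PHI; reviewed against HIPAA minimum-necessary policy",
        known_limitations=["Not validated for pediatric patients"],
    )
    print(record.tool_name, record.governance_review_date)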

    Avoid these red flags:

    Weak governance exposes your system to legal, financial, and reputational risks.

    Failing to account for developmental stages (e.g., pediatric vs. adult) risks inappropriate care recommendations and potential harm.

    Regulatory misalignment (e.g., with HIPAA, FDA, state privacy laws) can delay or derail implementations.

    Lack of clear protocols for multi-party consent (e.g., for minors or those with guardians) creates ethical and legal vulnerabilities.

    Maturity levels, in the order the descriptions below follow: 1 Initial, 2 Developing, 3 Defined, 4 Managed, 5 Optimized.

    Leadership alignment

    AI interest is isolated, lacking formal executive sponsorship, a clear vision, or a dedicated budget. AI efforts are ad-hoc and uncoordinated with organizational or clinical strategic priorities.

    Benchmark: No documented AI strategy or dedicated budget line item can be found.

    Executive interest in AI is present, with some champions emerging. Draft AI strategy exists but lacks formal approval, dedicated funding, or clear alignment with overall organizational goals. Clinical leadership is consulted sporadically.

    Benchmark: A draft AI vision statement or slide deck exists.

    Formal AI strategy approved by executive leadership, clearly linked to organizational and clinical priorities, with a dedicated budget allocated. Key clinical and operational leaders are actively involved in AI steering.

    Benchmark: A board-approved AI strategic plan document exists.

    AI strategy is fully integrated into the organization’s operational and strategic planning. Leadership actively champions AI, ensures resource allocation, and monitors enterprise-wide AI performance and clinical impact.

    Benchmark: AI program KPIs are reviewed in quarterly leadership meetings.

    AI leadership is visionary and adaptive, fostering a culture of responsible innovation. Strategy dynamically adjusts to emerging AI advancements, clinical needs, and enterprise performance, with transparent public reporting on AI initiatives and outcomes.

    Benchmark: Public reports or press releases detail the organization’s AI strategy and outcomes.

    Ethical oversight

    No formal ethical review processes or guidelines for AI. Ethical considerations are ad-hoc, if discussed at all, with minimal clinician input.

    Benchmark: Project charters for AI initiatives lack a section on ethical review.

    Initial discussions on AI ethics are occurring. Informal or developing review processes for AI initiatives, with limited scope and inconsistent application. Basic awareness of ethical principles related to AI in healthcare.

    Benchmark: Email records show an existing committee (e.g., IRB) was informally consulted on a pilot.

    A dedicated ethics committee or review board (IRB subcommittee) established with a defined charter for AI, including clinical representation. Formal ethical guidelines and principles for AI development and deployment are documented and communicated.

    Benchmark: A documented charter for the AI ethics review committee is available.

    Regular ethics audits and risk assessments of AI systems are conducted. Processes for addressing ethical concerns and incidents are well-defined and operational. Ethical considerations are integrated into the AI lifecycle.

    Benchmark: Audit reports for high-risk AI systems are completed and stored annually.

    Proactive and continuous ethical oversight is embedded in all AI initiatives. Public reporting on ethical frameworks and performance. Organization actively contributes to broader discussions on AI ethics in healthcare.

    Benchmark: The organization has published or presented its AI ethics framework externally.

    Policy & compliance

    No specific AI-related policies or formal compliance focus. Awareness of regulatory implications (e.g., HIPAA, FDA) for AI is low.

    Benchmark: No AI-specific policies are found in the organization’s policy library.

    Draft AI policies are being developed. Minimal alignment with legal and regulatory requirements. Compliance checks are informal and inconsistent.

    Benchmark: A draft “AI Data Handling” policy is in review with a single department.

    Comprehensive AI policies are finalized, approved, and communicated, covering data privacy, security, and intended use, with explicit reference to healthcare regulations. A compliance framework for AI is established.

    Benchmark: An approved, organization-wide AI policy is published and accessible to all staff.

    AI policies are regularly reviewed, updated, and integrated with enterprise risk management and legal governance. Proactive compliance monitoring and reporting mechanisms are in place for all AI systems.

    Benchmark: A compliance dashboard monitors AI systems against internal policies and external regulations.

    AI policy and compliance are fully integrated into legal/risk governance. The organization is agile in adapting to new regulations and actively shapes best practices for AI compliance in healthcare.

    Benchmark: Legal team actively participates in national task forces on AI policy in healthcare.

    Curated Resources:

    FDA: Digital Health Policy Navigator, AHIMA: AI in Health Information Governance

  • Data infrastructure & analytics

    For: Strategic decision-makers, IT & data leads

    Every AI tool depends on one thing: good data. Your organization’s ability to source, manage, govern, and utilize data will determine how far your AI initiatives can go.

    PRO TIPS | Considerations from The Playbook team

    Audit the availability, quality, and granularity of data for your target use case, noting limitations (e.g., pediatric data is often more limited than adult).

    Define a data governance model for AI that includes bias mitigation, source traceability, and update schedules.

    Validate assumptions about data timeliness, completeness, and relevance before committing to a project.

    Account for developmental variations in “normal” parameters that change with age and other clinical contexts.
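
    To make the audit and validation tips above concrete, here is a minimal data-readiness sketch in Python using pandas. The file path and column names (age_years, result_time) are hypothetical placeholders; adapt the checks to your own extract.

    # Minimal data-readiness check: completeness, timeliness, and subgroup coverage.
    # Column names and the CSV path are hypothetical.
    import pandas as pd

    def audit_extract(df: pd.DataFrame, max_staleness_days: int = 30) -> dict:
        """Summarize whether an extract looks complete and current enough for an AI use case."""
        completeness = 1 - df.isna().mean()              # share of non-missing values per column
        staleness = (pd.Timestamp.now() - pd.to_datetime(df["result_time"])).dt.days
        pediatric_share = (df["age_years"] < 18).mean()  # coverage of developmental subgroups
        return {
            "completeness_by_column": completeness.round(3).to_dict(),
            "pct_records_stale": float((staleness > max_staleness_days).mean()),
            "pediatric_share": float(pediatric_share),
        }

    report = audit_extract(pd.read_csv("lab_extract.csv"))
    flagged = [col for col, v in report["completeness_by_column"].items() if v < 0.90]
    print(report, "columns below 90% complete:", flagged)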

    Avoid these red flags:

    Incomplete, low-quality, or biased data can invalidate entire AI projects and lead to harmful outcomes.

    Models trained on one population will perform poorly and potentially unfairly when applied to others.

    Lack of data lineage tracking leads to unsafe, unverifiable “black box” recommendations.

    Ignoring data limitations for rare conditions can lead to flawed or overconfident model performance.

    Maturity levels, in the order the descriptions below follow: 1 Initial, 2 Developing, 3 Defined, 4 Managed, 5 Optimized.

    Data quality & integration

    Data is largely siloed, inconsistent, and of poor or unknown quality. Minimal data integration capabilities. No standardized data models or governance for AI data.

    Benchmark: <20% of key data sources for AI use cases are integrated.

    Some data integration projects are underway (e.g., basic data warehousing). Initial efforts at data standardization and quality improvement for specific AI use cases. Data governance for AI is nascent.

    Benchmark: A pilot project integrates 2-3 data sources for a single AI use case.

    Key data sources (clinical, operational) are largely centralized and standardized (e.g., >80% relevant data). Enterprise data governance policies are implemented, ensuring data quality and appropriate access for AI.

    Benchmark: >60% of prioritized data sources are integrated into a data warehouse/lake.

    Real-time or near-real-time data integration is achieved for critical AI applications. Comprehensive data quality monitoring and robust data governance are operational across the enterprise. Data lineage is tracked.

    Benchmark: >80% of critical data is integrated with automated data quality monitoring and reporting.

    Adaptive, fully interoperable data systems with automated data quality management. Data is treated as a strategic asset, readily accessible and AI-ready across the organization, supporting agile AI development and deployment.

    Benchmark: Automated data cleansing and enrichment pipelines are operational for critical AI data flows.

    Analytics & evaluation

    No dedicated AI analytics capabilities or performance tracking. Reporting is manual, ad-hoc, and limited to basic operational metrics.

    Benchmark: No standardized metrics for AI model evaluation exist.

    Basic reporting capabilities for AI projects exist, but tracking is largely manual and project-specific. Limited use of dashboards or standardized AI performance metrics.

    Benchmark: Basic dashboards track usage counts for 1-2 AI pilot projects.

    Routine dashboards and standardized reporting cycles for AI initiatives are in place, tracking key performance indicators (KPIs), usage, and basic outcomes. Analytics inform AI model refinement.

    Benchmark: Standardized performance dashboards are implemented for all deployed AI solutions.

    Automated analytics and feedback mechanisms monitor AI model performance, drift, and impact in real-time. Results are used for continuous improvement and to inform clinical/operational decision-making.

    Benchmark: Automated alerts for AI model performance degradation or significant data drift are in place.

    Predictive analytics are deeply embedded, and AI-driven insights proactively inform strategic decisions, care pathways, and operational efficiencies. Continuous learning systems optimize AI performance.

    Benchmark: Continuous A/B testing frameworks are used to optimize AI model performance.
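
    As an illustration of the drift monitoring and alerting described in the higher maturity levels above, the sketch below computes a population stability index (PSI) for a single model input, comparing training data with recent production data. The 0.2 alert threshold is a common rule of thumb, and the feature and data here are invented placeholders.

    # Illustrative drift check: population stability index (PSI) for one feature.
    import numpy as np

    def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
        """Compare a feature's distribution at training time vs. in production."""
        edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
        edges[0], edges[-1] = -np.inf, np.inf             # catch values outside the training range
        e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
        a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
        e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
        return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

    # Stand-in arrays; in practice these would come from your feature store or inference logs.
    train_age = np.random.default_rng(0).normal(55, 15, 10_000)
    prod_age = np.random.default_rng(1).normal(62, 15, 2_000)
    if psi(train_age, prod_age) > 0.2:
        print("ALERT: input drift detected for 'age'; route to model monitoring review.")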

    Curated resources:

    NIST: AI Risk Management Framework

  • Technology, infrastructure & integration

    For: IT & data leads, strategic leaders, procurement leads

    The right technology foundation avoids implementation surprises and ensures your AI tools can scale safely and securely. A clear-eyed technical assessment is non-negotiable.

    PRO TIPS | Considerations from The Playbook team

    Start integration planning before selecting a tool—not after.

    Assess where the AI will live (cloud, on-premise, edge) and what that implies for integration, security, and performance.

    Identify integration requirements with existing clinical systems (e.g., EHR, PACS, scheduling) upfront.

    Plan for security reviews, API compatibility, latency testing, and real-world downtime scenarios.
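
    One inexpensive way to act on the API compatibility and latency tips above is a read-only probe against a test environment. The sketch below issues a FHIR Observation search and times the round trip; the base URL, patient ID, and sandbox setup are placeholders, not a recommendation of any specific server.

    # Read-only FHIR probe: checks connectivity, payload shape, and round-trip latency.
    import time
    import requests

    FHIR_BASE = "https://fhir.example-sandbox.org/r4"   # hypothetical test endpoint
    params = {"patient": "example-patient-id", "code": "4548-4", "_count": 5}  # LOINC 4548-4 = HbA1c

    start = time.perf_counter()
    resp = requests.get(f"{FHIR_BASE}/Observation", params=params, timeout=10)
    latency_ms = (time.perf_counter() - start) * 1000

    resp.raise_for_status()
    bundle = resp.json()
    print(f"HTTP {resp.status_code}, {bundle.get('total', 0)} observations, {latency_ms:.0f} ms round trip")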

    Avoid these red flags:

    Missed infrastructure dependencies (e.g., compute power, storage) delay or derail go-lives.

    Incompatible environments between the AI tool and your existing systems force costly rework or unsafe workarounds.

    Underestimating security needs or failing to plan for real-world downtime scenarios can compromise patient data and system stability.

    Maturity levels, in the order the descriptions below follow: 1 Initial, 2 Developing, 3 Defined, 4 Managed, 5 Optimized.

    Interoperability & integration

    AI tools are standalone with no integration into core clinical systems (e.g., EHR). Data exchange is manual or nonexistent.

    Benchmark: AI tools operate as standalone systems requiring manual data input/output.

    Pilots or departmental AI solutions are partially integrated, often using point-to-point connections. API exploration for data access has begun.

    Benchmark: At least one AI pilot has a one-way, read-only connection to the EHR.

    Key AI applications are integrated with core clinical systems (e.g., EHR) using standardized APIs (e.g., FHIR). Bidirectional data flow supports some clinical workflows.

    Benchmark: >50% of AI use cases use standardized APIs for bidirectional data exchange with the EHR.

    Enterprise-wide, scalable AI deployments are deeply integrated into clinical and operational workflows. Interoperability standards are consistently applied. Robust API management is in place.

    Benchmark: An enterprise service bus (ESB) or integration platform facilitates scalable AI integration.

    A modular, “plug-and-play” AI architecture allows for rapid and seamless integration of new AI tools and services. Full interoperability across the health system and with external partners where appropriate.

    Benchmark: New AI tools can be securely deployed and integrated within weeks, not months.

    Scalability & security

    No dedicated cloud infrastructure or scalable solutions for AI. Basic security measures are in place for general IT, not specific to AI vulnerabilities.

    Benchmark: AI models are run on local servers or individual workstations.

    Early exploration of cloud services for specific AI projects. Basic security protocols applied to AI pilots, but AI-specific vulnerabilities are not systematically addressed.

    Benchmark: A single AI project is being piloted on a cloud service (IaaS/PaaS).

    A secure and scalable infrastructure (on-premise, cloud, or hybrid) is established to support current AI initiatives. AI-specific security policies and controls are implemented.

    Benchmark: The organization has a documented, AI-specific security policy.

    Robust, dynamically scalable infrastructure supports enterprise-wide AI deployments. Advanced security measures, including threat detection and data encryption for AI, are operational with regular audits.

    Benchmark: Regular penetration testing and vulnerability assessments for AI systems are conducted.

    Highly resilient, agile, and cost-optimized infrastructure supports rapid scaling and deployment of diverse AI models. Cutting-edge security practices are embedded, with proactive threat modeling for AI systems.

    Benchmark: The organization uses advanced AI security measures like adversarial attack detection.

    Curated resources:

    Deep Dive: AI at the edge

  • Workforce development & change management

    For: Clinical leaders, operational leadership, strategic decision-makers

    Successful AI initiatives are ultimately human initiatives. Building the curiosity, trust, and readiness of your workforce is more critical than the AI technology itself.

    PRO TIPS | Considerations from The Playbook team

    Appoint respected clinical champions with cross-departmental influence to drive engagement and signal institutional alignment.

    Survey frontline teams early to surface shared pain points and gauge readiness for change.

    Build change tolerance gradually: launch small pilots, showcase wins, and scale with momentum.

    Avoid these red flags:

    Without visible champions and clear communication, staff may perceive AI as a top-down imposition.

    Change fatigue, especially from recent tech rollouts, can cause resistance and burnout if not managed proactively.

    Failing to provide adequate training leads to misuse of tools, erosion of trust, and potential for error.

    Maturity levels, in the order the descriptions below follow: 1 Initial, 2 Developing, 3 Defined, 4 Managed, 5 Optimized.

    AI literacy & training

    No formal AI training or general awareness programs. Staff have minimal understanding of AI concepts or the organization’s AI plans.

    Benchmark: < 10% of clinical/IT staff can articulate basic AI concepts.

    Basic AI awareness training is offered to some roles or departments. Training is ad-hoc and not tailored to specific job functions or clinical responsibilities.

    Benchmark: A one-time “Intro to AI” seminar was offered to the innovation department.

    Role-based AI literacy and skills training programs are developed and implemented for relevant clinical and operational staff. Training completion is tracked.

    Benchmark: A role-based AI curriculum exists and >50% of targeted staff have completed it.

    Comprehensive and ongoing AI training is integrated into professional development pathways and aligned with job roles and responsibilities. AI competencies are assessed.

    Benchmark: AI training is part of the standard new employee onboarding process for relevant roles.

    AI fluency is widespread across the organization. Continuous learning is a cultural norm, with incentives for AI skill development and innovation. Organization seen as a leader in AI workforce preparedness.

    Benchmark: The organization hosts internal AI workshops and staff contribute to knowledge sharing.

    Change management & culture

    No formal change management plan for AI adoption. Resistance to new technologies may be high, with limited staff engagement in AI initiatives.

    Benchmark: Staff surveys indicate high resistance or skepticism towards new technology.

    AI champions are identified informally. Basic communication about upcoming AI projects occurs, but no structured approach to manage cultural or workflow impacts.

    Benchmark: A basic communication plan (e.g., newsletter updates) exists for an AI pilot.

    A formal change management program for AI is defined, including stakeholder analysis, communication plans, and strategies to address resistance. Leadership visibly supports the change.

    Benchmark: A documented change management strategy is required for all major AI initiatives.

    Change management strategies are consistently applied org-wide for AI initiatives, with active leadership engagement and support. Feedback mechanisms are used to adapt change strategies.

    Benchmark: >75% of impacted staff report understanding the purpose of new AI tools.

    An embedded culture of innovation and continuous improvement embraces AI. Change management is proactive and agile. Staff are empowered to co-create and adapt to AI-driven transformations, with rewards for innovation.

    Benchmark: Documented examples exist of staff-led AI innovations being adopted by the organization.

  • Clinical integration & implementation

    For: Clinical leaders, IT & data leaders, operational leads

    If AI doesn’t fit into the workflows clinicians already use, it won’t be adopted. Success depends on seamless alignment with clinical tasks, not just technical performance.

    PRO TIPS | Considerations from The Playbook team

    Target known pain points; start with high-friction areas where AI can clearly reduce burden or improve decisions.

    Define AI’s role explicitly—is it for triage? Automation? A second opinion?—and match it to the clinical moment.

    Embed AI outputs into existing clinical workflows (e.g., via EHR interfaces or alert systems clinicians already rely on).

    Consider if workflows accommodate caregiver-facing care (e.g., for pediatrics, geriatrics, or any patient requiring third-party involvement).

    Avoid these red flags:

    Poor integration slows care and drives clinician rejection, even if the tool is technically accurate.

    Vague or shifting definitions of an AI’s role expose the system to safety, quality, and legal risks.

    Deploying models without robust, context-specific clinical validation can lead to unsafe outputs and misdiagnoses.

    Maturity levels, in the order the descriptions below follow: 1 Initial, 2 Developing, 3 Defined, 4 Managed, 5 Optimized.

    Workflow alignment

    AI tools are not integrated into clinical workflows; any use is isolated, often in research or sandbox environments. No consideration of impact on provider workload.

    Benchmark: No workflow impact assessment has been conducted for any AI tool.

    Early AI pilots are tested by clinicians, but workflow integration is minimal or clunky. Focus is on technical feasibility rather than seamless clinical use.

    Benchmark: A pilot requires clinicians to log into a separate system to use the AI tool.

    AI use cases are strategically selected based on clinical priorities and are thoughtfully integrated into core clinical workflows with clinician input to ensure usability and efficiency.

    Benchmark: Usability testing (e.g., SUS scores) is conducted pre-launch for AI tools.

    AI tools are scaled across multiple service lines and clinical settings, with workflows optimized for provider adoption and patient benefit. Impact on provider burden is actively managed.

    Benchmark: Time-motion studies show a neutral or positive impact on provider efficiency.

    Real-time, precision-guided clinical decision support tools are deeply embedded and dynamically adapt within workflows. AI seamlessly augments clinical intelligence and operational efficiency across the care continuum.

    Benchmark: AI-driven alerts are seamlessly integrated into the EHR and part of standard care pathways.
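
    For teams unfamiliar with the System Usability Scale (SUS) referenced in the Level 3 benchmark above, the short sketch below shows the standard scoring arithmetic for one respondent's ten answers on a 1-5 scale; the sample responses are invented.

    # Standard SUS scoring: odd items contribute (score - 1), even items (5 - score); total x 2.5.
    responses = [4, 2, 5, 1, 4, 2, 5, 1, 4, 2]   # invented answers to items 1..10

    adjusted = [(r - 1) if i % 2 == 0 else (5 - r) for i, r in enumerate(responses)]
    sus_score = sum(adjusted) * 2.5               # yields a 0-100 usability score
    print(f"SUS = {sus_score}")                   # 85.0 for this made-up respondent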

    Feedback & iteration

    No formal mechanisms for collecting clinician feedback on AI tools. Iteration is rare or non-existent.

    Benchmark: No system exists for clinicians to report issues or suggestions for AI tools.

    Informal feedback from clinicians involved in pilots is occasionally collected. Little structured process for evaluation or incorporating feedback into AI tool refinement.

    Benchmark: Feedback is collected via ad-hoc emails or conversations.

    Structured processes are in place for collecting, evaluating, and prioritizing clinician feedback on AI tools. Iterative improvements are made based on this feedback.

    Benchmark: A dedicated feedback button or channel exists within the AI tool interface.

    Clinician feedback is systematically collected and drives continuous improvement cycles for AI tools. There is a clear process for co-design and rapid iteration with clinical teams.

    Benchmark: At least one major AI tool update based on user feedback has been deployed in the last year.

    AI tools are co-designed with clinicians, and dynamic updates based on real-world performance and ongoing feedback are standard practice. A culture of collaborative improvement is established.

    Benchmark: Clinicians are active partners in co-design sprints for new AI features and iterations.

    Curated resources:

    AHRQ: Workflow Toolkit

  • Financial sustainability & ROI

    For: Strategic leaders, financial decision-makers, procurement leads

    AI must earn its keep. From pilots to production, demonstrating value and financial viability is a core requirement—not an afterthought.

    PRO TIPS | Considerations from The Playbook team

    Build a total cost of ownership (TCO) model upfront—including vendor fees, IT integration, training, support, and data management.

    Define what success looks like in dollars saved, minutes reduced, or clinical outcomes improved.

    Align pilots with reimbursable services or clear cost-saving metrics where possible.

    Plan for post-pilot sustainability: who pays for, maintains, trains, and scales the tool?
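
    To make the TCO and value framing above concrete, here is a small worked sketch; every figure is a hypothetical placeholder that would come from your own vendor quotes, IT estimates, and baseline measurements.

    # Simple first-year TCO vs. value sketch; all figures are hypothetical.
    tco = {
        "vendor_license": 150_000,
        "it_integration": 60_000,
        "training_and_change_mgmt": 25_000,
        "support_and_data_mgmt": 40_000,
    }

    # Value expressed as clinician minutes saved, converted to dollars.
    notes_per_year = 200_000
    minutes_saved_per_note = 1.5
    loaded_cost_per_clinician_minute = 2.00      # dollars; hypothetical

    value = notes_per_year * minutes_saved_per_note * loaded_cost_per_clinician_minute
    total_cost = sum(tco.values())
    roi = (value - total_cost) / total_cost
    print(f"TCO ${total_cost:,.0f}, value ${value:,.0f}, first-year ROI {roi:.0%}")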

    Avoid these red flags:

    Misjudging costs or overestimating ROI can erode executive trust and jeopardize future funding.

    Pilots without a sustainability plan often stall before scaling, becoming “pilot purgatory.”

    Focusing only on direct financial return can cause you to miss major value drivers like reduced clinician burnout or improved patient experience.

    Maturity levels, in the order the descriptions below follow: 1 Initial, 2 Developing, 3 Defined, 4 Managed, 5 Optimized.

    ROI tracking

    No ROI measurement or formal value assessment for AI initiatives. Costs are not systematically tracked against benefits.

    Benchmark: AI project costs are not consistently tracked.

    Informal, pilot-level ROI considerations. Basic cost tracking for specific AI projects, but benefit realization is not consistently measured or attributed.

    Benchmark: A basic cost-benefit analysis is conducted for a single pilot.

    Standardized ROI frameworks and dashboards are used to track financial, clinical, and operational impact per AI initiative. Regular reviews of AI value are conducted.

    Benchmark: A dashboard tracks AI project costs vs. projected benefits for key initiatives.

    Ongoing, systematic ROI tracking is integrated into financial management. AI performance outcomes directly influence future investment decisions and resource allocation.

    Benchmark: ROI reports for all major AI investments are presented to leadership semi-annually.

    Sophisticated ROI models link AI investments directly to enterprise strategic goals, including improved margins, revenue growth, and enhanced patient outcomes. Value realization is continuously optimized.

    Benchmark: A value realization framework quantifies AI-driven financial improvements.

    Budgeting & reinvestment

    No dedicated budget for AI. Funding, if any, is opportunistic and project-specific. No concept of reinvesting AI-generated gains.

    Benchmark: AI projects are funded, if at all, from departmental savings.

    One-time budgets for AI pilots or initial exploratory projects. Financial planning for AI is short-term and reactive.

    Benchmark: AI pilots are funded via one-time innovation grants.

    AI initiatives are included in annual operational and capital budgeting cycles. Mechanisms for identifying and tracking AI-driven savings or revenue are emerging.

    Benchmark: Business cases for AI are a required part of the annual budgeting process.

    Reinvestment of value (cost savings, efficiencies) generated by AI initiatives into further AI development or other strategic priorities is a standard practice, guided by ROI data.

    Benchmark: A documented process for reinvesting AI-generated savings is in place and has been used.

    Strategic, long-term financial planning fully incorporates AI as a driver of value and innovation. A virtuous cycle of investment, value generation, and reinvestment is established for AI.

    Benchmark: A multi-year strategic AI investment plan is aligned with long-term financial forecasts.

    Curated resources:

    McKinsey: Health AI Value Capture

  • Stakeholder & community engagement

    For: Strategic decision-makers, operational leaders, clinical leaders

    Effective AI strategy requires the voices of patients, caregivers, and care teams from the start. To build trust and ensure tools are equitable and useful, engagement must be structured, inclusive, and ongoing.

    PRO TIPS | Considerations from The Playbook team

    Involve patients and frontline users in co-design from the start—not just validation at the end.

    Use structured engagement methods (e.g., ethnographic interviews, red-teaming exercises) to reveal gaps and ensure strategies accommodate diverse patient needs.

    Continuously close the loop: share how feedback was used, show the impact, and keep engagement active beyond the pilot.

    Address concerns about data privacy and consent clearly and proactively.

    Avoid these red flags:

    Excluding patient and frontline voices risks building the wrong tool, leading to rejection, ethical failure, and loss of trust.

    Passive engagement efforts (e.g., surveys alone) may miss deeper usability and equity issues that only direct conversation can uncover.

    Failing to address data privacy and consent transparently will immediately erode patient trust.

    Maturity levels, in the order the descriptions below follow: 1 Initial, 2 Developing, 3 Defined, 4 Managed, 5 Optimized.

    Internal engagement (Healthcare professionals)

    AI interest is isolated; no formal executive sponsorship, vision, or dedicated budget. AI efforts are ad-hoc and uncoordinated with organizational or clinical strategic priorities.

    Benchmark: Staff are informed of AI tools only after implementation.

    Executive interest in AI is present with some champions emerging. Draft AI strategy exists but lacks formal approval, dedicated funding, or clear alignment with overall organizational goals. Clinical leadership is consulted sporadically.

    Benchmark: A few select clinicians are invited to provide feedback on a pilot.

    Formal AI strategy approved by executive leadership, clearly linked to organizational and clinical priorities, with dedicated budget allocated. Key clinical and operational leaders are actively involved in AI steering.

    Benchmark: Formal working groups with clinical staff are established for key AI projects.

    AI strategy is fully integrated into the organization’s operational and strategic planning. Leadership actively champions AI, ensures resource allocation, and monitors enterprise-wide AI performance and clinical impact.

    Benchmark: An “AI Champion” network is established and active across departments.

    AI leadership is visionary and adaptive, fostering a culture of responsible innovation. Strategy dynamically adjusts to emerging AI advancements, clinical needs, and enterprise performance, with transparent public reporting on AI initiatives and outcomes.

    Benchmark: Staff are empowered to lead AI ideation through internal innovation programs.

    External engagement (Patient/community)

    No patient, caregiver, or community input into AI initiatives.

    Benchmark: All AI communications are internal-only.

    Basic outreach (e.g., surveys, informational sessions) to patients or community groups about AI.

    Benchmark: General information about AI is available on the public website.

    Patient and community representatives are included in advisory roles for AI pilots or planning.

    Benchmark: A patient advisory council is consulted on specific AI initiatives.

    Patients, caregivers, and community members are systematically involved in the co-design and testing of AI tools.

    Benchmark: Patients are included in reviewing AI tools for fairness and usability.

    Patients, caregivers, and community representatives are integral partners in AI governance and co-lead AI initiatives.

    Benchmark: Patients and community members are active partners on AI governance bodies.

    Curated Resources:

    The Patient AI Rights Initiative, Health Care AI Code of Conduct – NAM, Patients and AI Deep Dive

Next steps

By using this Health AI Maturity Model, healthcare leaders and implementation teams can build a comprehensive picture of their organization's current AI readiness across strategic, technical, operational, cultural, and ethical dimensions. That picture helps identify strengths, weaknesses, and areas for improvement, enabling a realistic and effective plan for successful AI adoption and integration.

Health AI Readiness Assessment

Your next step is to assess your organization's current readiness with the Health AI Readiness Assessment. Use it to pinpoint where your organization stands within the maturity model and to identify the specific steps and resources needed to progress.

Join our next project

Help streamline the path to regulatory and commercial success to optimize health outcomes for the greatest number of patients

Join the Integrated Evidence Plans project
