Healthcare is building an AI economy without an accountability layer
AI is rapidly being integrated into the core infrastructure of care delivery, moving beyond its status as a discrete category of healthcare innovation. EHR vendors are embedding it directly into their products, AI companies are moving closer to clinical workflows, and policymakers are advancing care models that assume technology-enabled coordination, navigation, and decision support will be part of the operating environment.
Earlier this week, CMS announced that more than 150 organizations have been provisionally accepted into the ACCESS Model, a program designed to expand access through coordinated, technology-enabled care that follows patients across settings. That model depends on digital tools, including AI, functioning reliably across providers, populations, and care environments.
These shifts are reorganizing the market, and control is consolidating around three layers:
- The systems that hold data
- The applications that generate insight or automate tasks
- The pathways through which those tools reach patients and clinicians
This moment differs from previous technological shifts in healthcare because innovation is not confined to a single layer of the stack, but spans all three at once.
Systems under strain
The systems that determine whether any of this works in practice – specific teams within health systems, developers, payers, policymakers, and standards bodies – are simply not keeping pace with this multi-layered transformation.
Health systems are being asked to make high-stakes decisions about adopting AI without consistent methods for evaluating performance across populations, integrating these tools into clinical and operational workflows, or monitoring them post-deployment. Developers are encountering a fragmented landscape in which expectations for evidence, validation, and ongoing accountability vary widely from one buyer to the next. Policymakers are moving with urgency to expand access and modernize care delivery, but without a clear, real-world view of how these technologies behave once they leave controlled environments.
The result is that we are building an AI economy in healthcare without an accountability layer.
This gap is already shaping behavior in ways that should be familiar to anyone operating in the system. Health systems are highly motivated to adopt AI, but are slowing down because they lack confidence in how to assess risk and value in a repeatable way. Contracting cycles are lengthening as each organization develops its own approach to validation and oversight. Tools that appear to perform well in one setting behave differently in another, but there is no common mechanism to understand or compare that variation. After deployment, many systems rely on limited or ad hoc monitoring, which makes it difficult to detect drift, identify unintended consequences, or intervene early when performance degrades.
Access without a corresponding system of accountability introduces variability into care delivery at exactly the moment we are trying to reduce it. If AI-enabled tools are going to support navigation, triage, and clinical decision-making across populations, then the central question is whether we have a shared, operational way to determine what works, where it works, for whom it works, and what to do when it does not.
That infrastructure does not currently exist in a coherent, scalable form.
Announcing our partnership
The Digital Medicine Society (DiMe) has helped establish shared expectations for digital health through efforts like the DiMe Seal and DATAcc, creating a common foundation for clinical evidence, usability, privacy, security, and performance.
But definition alone is not enough. These expectations must be tested and made operational in real-world care delivery. Without that translation, even well-designed frameworks remain disconnected from the decisions health systems and developers make every day.
Qualified Health operates at this point of translation, embedded within health systems where AI tools are evaluated under real conditions, integrated into workflows, and monitored over time. This is where performance becomes visible beyond vendor claims and pilot environments, and where governance processes are either validated or exposed as insufficient.
It is also where the most consequential decisions in the market are made. Clinical leaders, operators, and procurement teams determine which tools are approved, scaled, or shut down based on whether they deliver value. Today, those decisions are made without shared methods, comparable data, or repeatable processes, reinforcing fragmentation across the system.
Our impact-driven alliance
DiMe and Qualified Health are partnering to connect definition with execution, creating a feedback loop between what is defined as good and what is observed in practice. This reduces friction for health systems through more consistent evaluation and faster time-to-value, gives developers clearer pathways to demonstrate performance and scale, and provides policymakers with a grounded view of how AI is functioning in practice.
Together, we are developing DiMe’s AI Governance initiative. The objective is not to introduce another high-level framework, but to produce a set of operational resources that can be used across the lifecycle of AI in healthcare. This includes tools for risk assessment and triage, approaches to local validation, standardized methods for defining and measuring performance, and mechanisms for continuous monitoring and escalation when issues arise.
The emphasis throughout is on usability and scalability. Governance that cannot be integrated into day-to-day workflows will not be sustained, and governance that cannot be applied consistently across organizations will not support a functioning market.
AI is becoming embedded in the core of care delivery as policymakers advance models that rely on coordinated, technology-enabled systems. The supporting infrastructure must be developed quickly enough to ensure that these tools deliver consistent, equitable, and high-quality care.
If that infrastructure takes shape, AI has the potential to become a reliable component of healthcare delivery, improving access, reducing the burden on clinicians, and supporting better patient outcomes at costs that patients, the system, and society can sustain. If it does not, the system risks repeating a familiar pattern in which promising technologies are deployed unevenly, generate mixed results, and erode trust over time.
Jennifer Goldsack is the CEO of the Digital Medicine Society (DiMe), a global nonprofit advancing the safe, effective, and equitable use of digital technologies to redefine healthcare and improve lives.
Justin Norden is the CEO of Qualified Health, a company working with leading health systems to evaluate, deploy, and govern AI in real-world clinical environments.

