
Beyond Deployment: How Vanderbilt University Medical Center closed the AI governance gap

A Deep Dive Interview with Susannah Rose, PhD


Artificial intelligence is permeating every corner of healthcare, from clinical decision support to back-office operations, yet launching an AI tool is not the finish line. It’s the starting point. According to Susannah Rose, PhD, a leader in AI governance and research at Vanderbilt University Medical Center (VUMC), the significant work of ensuring AI is safe, effective, and trustworthy begins after it goes live.

Health systems in the US are rapidly adopting AI tools, but most lack the infrastructure, resources, and expertise to adequately govern or actively monitor them post-launch. This gap between implementation and oversight creates significant risks: performance degradation, unintended uses, and a failure to deliver on AI’s promise. Rose argues for a paradigm shift toward continuous, active monitoring, offering a blueprint for building the human-centered governance structures necessary for responsible AI in healthcare once it is deployed.

Beyond technical oversight, responsible governance encompasses the ethical dimensions of how AI is introduced into patient care. At Vanderbilt, Rose and her team are asking questions like: what does informed consent mean when AI is part of the decision-making process?

Rethinking patient consent in the age of AI

The conversation around AI ethics often defaults to a simple-sounding solution: get patient consent. However, Rose argues this approach is both impractical and insufficient. “It’s also pretty easy for people to say that all forms of AI should require informed consent,” she notes, but “what we really should do is outline an approach, to be honest with you, that is realistic and aligned with patients’ best interests.”

With dozens of algorithms potentially touching a single patient’s care in the ICU, a case-by-case consent model would create an impossible administrative burden. Instead, Rose and her colleagues developed a nuanced, ethically supported framework that determines the need for patient notification or consent based on several factors:

  • AI Model Autonomy: The more a model makes decisions independently, the more it deviates from patient expectations that their clinician is the one making decisions, and the greater the need for consent.
  • Deviation from Standard Practice: AI tools that fundamentally change how care is delivered require more proactive communication with patients.
  • Patient-Facing vs. Clinician-Facing: Tools that directly interact with patients require clear and direct language. Rose warns against obfuscation, noting how terms like “virtual assistant” can create confusion or mistrust. “We really think that actual words need to be used with patients,” she states.
  • Clinical Risk: The higher the overall risk of a procedure and the potential risk from an AI error, the stronger the case for formal consent.
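These factors lend themselves to a simple decision rubric. The sketch below is illustrative only, not VUMC’s actual framework: the factor names, scores, and thresholds are assumptions meant to show how a governance team might encode a risk-stratified consent policy.

```python
from dataclasses import dataclass

@dataclass
class AIToolProfile:
    """Hypothetical governance profile for an AI tool under review."""
    autonomy: int                  # 0 = advisory only, 2 = acts without clinician sign-off
    deviates_from_standard: bool   # materially changes how care is delivered
    patient_facing: bool           # interacts directly with patients
    clinical_risk: int             # 0 = low, 2 = high consequence of an AI error

def consent_requirement(tool: AIToolProfile) -> str:
    """Map a tool's risk profile to a notification/consent tier.

    Thresholds here are illustrative assumptions, not a published standard.
    """
    score = tool.autonomy + tool.clinical_risk
    if tool.deviates_from_standard:
        score += 1
    if tool.patient_facing:
        score += 1

    if score >= 4:
        return "formal informed consent"
    if score >= 2:
        return "proactive patient notification"
    return "general transparency (e.g., public AI-use disclosure)"

# Example: a highly autonomous, patient-facing, high-risk tool
print(consent_requirement(AIToolProfile(2, True, True, 2)))  # formal informed consent
```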

This framework moves beyond a one-size-fits-all mandate to a practical, risk-stratified approach. It also acknowledges a key insight from Rose’s research: far from being fearful of AI, many patients are excited. “A lot of our patients are thrilled that AI is being used,” she says. “They think this is new. This is great. It’s going to help me. It’s going to help my doctor.”

VUMC closes the governance gap

At VUMC, the AI governance process is notable for its sheer breadth. “Anything that has AI in it, or ML, or however one wants to define it… goes through our governance process,” Rose explains. This includes not just clinical tools but also operations, research, and even productivity software, giving her team a uniquely holistic view of how AI is being used across the enterprise.

This comprehensive intake is the first step, but the most critical component is what comes next: monitoring. As models learn and adapt, and as clinical environments change, their performance can drift. Rose is adamant on this point: Hospitals should not use AI tools “…without some sort of evaluation and monitoring plan.”

She shares a hypothetical example of how an unmonitored tool could be dangerously misused. A model designed to predict renal function in very sick ICU patients might be visible throughout the electronic health record; clinicians could then apply it to patients it was never intended for, such as outpatients. A misleadingly reassuring score could suggest a patient is healthier than they are, delaying appropriate care.
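To make that failure mode concrete, here is a minimal sketch of the kind of guardrail an active monitoring program might put in front of such a model. The care-setting check, drift threshold, and out-of-scope rate are assumptions for illustration; this is not a description of VUMC’s actual system.

```python
from collections import deque

INTENDED_SETTINGS = {"ICU"}       # assumed intended population for the renal model
AUROC_ALERT_THRESHOLD = 0.70      # illustrative performance-drift threshold

recent_out_of_scope = deque(maxlen=1000)  # rolling log of out-of-scope display attempts

def should_display_prediction(care_setting: str) -> bool:
    """Suppress the score (and log the attempt) when the patient is outside
    the population the model was validated on, e.g., an outpatient."""
    in_scope = care_setting in INTENDED_SETTINGS
    recent_out_of_scope.append(not in_scope)
    return in_scope

def check_for_drift(rolling_auroc: float) -> list[str]:
    """Return alerts for the monitoring team to triage."""
    alerts = []
    if rolling_auroc < AUROC_ALERT_THRESHOLD:
        alerts.append(f"Performance drift: rolling AUROC {rolling_auroc:.2f}")
    if sum(recent_out_of_scope) / max(len(recent_out_of_scope), 1) > 0.05:
        alerts.append("Frequent out-of-scope use: model surfaced outside the ICU")
    return alerts

# Example: an outpatient encounter should not see the ICU-calibrated score
print(should_display_prediction("outpatient clinic"))   # False
print(check_for_drift(rolling_auroc=0.66))
```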

A look under the hood

To solve this, VUMC has created a system that is designed for the active, ongoing monitoring of AI models. Rose emphasizes that this is not just a technical dashboard but a system built on human-centered design. 

The greatest challenge isn’t just detecting a problem; it’s creating the accountability to act on it. “If you’re monitoring something, and the dashboard’s working great, but nobody’s doing anything with it, this is a real challenge.”

Therefore, such systems need to be meaningfully connected to human intervention. VUMC’s solution involves creating clear workflows and accountability structures, including the ability to investigate the root cause of an anomaly, which could be anything from a flaw in the model to a person misusing it. This approach acknowledges that a tool’s failure is often a system failure. The institution is already looking ahead, developing a separate system called the Vanderbilt Chatbot Accuracy and Reliability Evaluation System (V-CARES) that evaluates the outputs of medical large language models, which are increasingly being used in healthcare sectors across the country.
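As one way to picture what “meaningfully connected to human intervention” might look like, the sketch below routes each monitoring alert to a named, accountable owner and escalates anything left unresolved. The roles, fields, and escalation rule are assumptions for illustration, not a description of VUMC’s internal workflow or of V-CARES.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class MonitoringAlert:
    model_name: str
    description: str
    severity: str                      # "low" or "high"
    raised_at: datetime
    owner: str                         # accountable clinical or operational owner
    root_cause: str | None = None      # e.g., "model drift", "unintended use"
    resolved_at: datetime | None = None

def escalate_if_stale(alert: MonitoringAlert, now: datetime) -> str | None:
    """Escalate unresolved high-severity alerts so a dashboard finding
    cannot sit unactioned (the failure mode Rose describes)."""
    if alert.resolved_at is None and alert.severity == "high":
        if now - alert.raised_at > timedelta(hours=24):
            return f"Escalate '{alert.description}' on {alert.model_name} beyond {alert.owner}"
    return None

alert = MonitoringAlert(
    model_name="icu-renal-risk",
    description="Frequent out-of-scope use in outpatient encounters",
    severity="high",
    raised_at=datetime(2025, 1, 1, 8, 0),
    owner="nephrology model steward",
)
print(escalate_if_stale(alert, now=datetime(2025, 1, 2, 9, 0)))
```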

Collaboration is key for success

The final piece of the puzzle is collaboration. Rose points out that in most healthcare organizations, AI ownership is fragmented across departments like imaging, cardiology, and IT. “Often, nobody actually owns it within organizations,” she says, making system-wide governance nearly impossible. VUMC may be one of the only institutions with a truly comprehensive, centralized view into all uses of AI.

To move the entire industry forward, this siloed approach must end. “I think that it is important for healthcare systems to work more collaboratively together to solve some of the biggest challenges of using AI successfully,” Rose urges. By sharing information, best practices, and lessons learned from AI implementation, health systems can collectively build the robust, responsible AI ecosystem that no single institution can create alone.

 


A resource to support your AI adoption 

Susannah Rose’s work at Vanderbilt provides a clear-eyed look at the future of AI in healthcare—a future that depends less on the novelty of algorithms and more on the rigor of human- and machine-led governance. For health systems to succeed, they need frameworks, workable tools, and resources that acknowledge this reality. DiMe’s playbook helps systems move beyond algorithms to emphasize the people, workflows, and strategies that drive success.

What The Playbook makes possible:

  • Empowering clinicians with training on AI tools and their appropriate uses.
  • Centering patients as strategic partners in developing transparent AI communication strategies.
  • Anticipating risks like unintended use cases and performance drift, and proactively building monitoring and response plans.
  • Building governance that ensures transparency, accountability, and equity across the entire AI lifecycle.
  • Scaling AI responsibly with sustainable processes, not just pilots.

Whether you’re a health system leader, innovation officer, clinician, or vendor, The Playbook provides the shared language, tools, and frameworks to bridge the vision of AI with the reality of responsible implementation.

