
Patients, AI, and the imperative for health system engagement
A Deep Dive Interview with Grace Cordovano, PhD, BCPA
In the evolving landscape of healthcare AI, one truth has been hiding in plain sight: patients and their families are already using AI extensively, creatively, and independently. Grace Cordovano, DiMe’s Patient-in-Residence and a board-certified patient advocate, lifts the veil on the #PatientsUseAI movement. From appointment prep to end-of-life planning, patients are turning to AI, particularly large language models (LLMs) like ChatGPT, not as a novelty, but as a lifeline.
The takeaway is clear: patients aren’t waiting for permission. They’re building their own AI workflows, stitching together tools and platforms to better navigate a fragmented healthcare system. And yet, the broader healthcare ecosystem of clinicians, health systems, and payers remains largely unprepared for this groundswell of patient-led AI engagement. This disconnect introduces profound risks to safety, trust, equity, and ultimately, outcomes.
The patient-led AI reality
“When we talk about patients using AI, it’s not just about using an LLM. Patients and their carepartners are creating new workflows that we’ve never had access to, and they’re here at our fingertips… This is patient engagement, by the way—so this is the holy grail that everyone tries to unlock. It’s here, probably at scale in many cases, and the health systems and the doctors… don’t have a workflow or culture adjustment to accept it.”
Grace reveals that patients are using AI to navigate every phase of the healthcare journey by:
- Checking symptoms, researching conditions, and drafting questions, from the simple to the thought-provoking, to bring to the clinic and point of care.
- Using LLMs in real time, while in waiting rooms, infusion chairs, or hospital beds, to make sense of test results, diagnoses, or treatment options.
- Relying on AI to assist with the broad spectrum of administrative burdens woven throughout their care journey, including prior authorizations, insurance appeals, disability forms, and care coordination.
- Turning to LLMs for clarity, emotional support, and even help mediating difficult family conversations.
- Building a real-time, longitudinal co-pilot to wade into the depths of their own and their loved ones’ health conditions, generating personalized intelligence every step of the way.
Many patients string together multiple tools like ChatGPT, Gemini, NotebookLM, and image generators to build custom agentic flows that offer more utility, immediacy, and personalization. Thanks to the efforts of the Office of the National Coordinator (ONC)/Assistant Secretary for Technology Policy (ASTP), patients have more seamless access to their medical records than ever before. They are curating and correcting their own medical records, cross-referencing treatments, and leveraging the power of LLMs to fill the gaps that siloed systems leave behind.
This is widespread, organic adoption happening almost entirely outside the formal healthcare infrastructure.
The cost of silence
Yet, health systems and clinicians are largely unengaged, if not unaware. When patients bring AI-generated insights to their providers, they sometimes face what Grace calls “immediate shutdown”—eye-rolling, scoffing, or outright dismissal. This not only fractures trust but may push patients further away from the health system, deepening reliance on unvetted tools.
Meanwhile, patients may lack the guidance they need to use AI responsibly. They may enter sensitive personal data into public chatbots, unaware of privacy risks. They may misinterpret hallucinated outputs. They may over-trust the AI and under-trust their care team.
The result? A widening chasm between empowered, AI-using patients and health systems still operating under legacy assumptions. If unaddressed, this chasm threatens to erode trust, amplify misinformation, and leave vulnerable patients behind.
“The healthcare ecosystem has to take charge of this conversation. We have to accept the fact that patients and families are already using AI. How do we best support patients and patient communities in a responsible manner?” said Cordovano.
What health systems can do now
“There’s a great opportunity to bring our physicians and care teams together with patients, families, and advocates that are supported and powered by responsible AI,” Grace notes as she lays out a clear set of imperatives for bridging this gap:
- Acknowledge and engage: Stop treating patient AI use as a fringe behavior. It’s here. Health systems must develop proactive strategies to engage patients as informed, AI-augmented participants in their own care.
- Educate and protect: Equip patients with guidance on safe prompting, data privacy, and recognizing AI limitations. Share “starter prompts.” Be honest about risks and tradeoffs.
- Co-design with patients: Involve patients and caregivers early and often in AI development. Compensate patients for their time and contributions. Loop their feedback into real change. Test tools across relevant populations (rural, geriatric, pediatric, multilingual, and more) to ensure diverse voices are heard and all gaps in care are understood.
- Support clinicians: Train clinicians to respond with curiosity, not skepticism, when patients bring AI-generated information. Integrate AI into workflows to reduce, not increase, cognitive burden. Appoint clinical champions. Build in accountability.
- Create reimbursement pathways: Establish billing codes and reimbursement mechanisms that allow clinicians to spend time discussing AI use with patients, reviewing AI-generated outputs, and providing guidance on responsible AI integration into care plans. Without financial incentives aligned with these new workflows, even the most well-intentioned system changes will struggle to take root in practice.
- Govern responsibly: Create AI governance bodies that include the patient stakeholder voice. Define roles, KPIs, and escalation pathways. Monitor for bias drift and performance degradation. Be ready for incidents—and learn from them.
- Prioritize human connection: Technology must serve empathy, not replace it. Ensure that every AI touchpoint, from clinical decision support to patient-facing chatbots, preserves dignity, trust, privacy, and understanding.
Patients are not asking for permission to use AI. They are asking to be partners in their care, to have their gaps in care addressed, and to have their voices heard. If health systems continue to look the other way, they risk undermining the very goals AI promises to fulfill: better care, greater access, and improved outcomes.
A path forward
Meaningful change in healthcare requires economic alignment. Clinicians operate within systems where time is precious and reimbursement drives behavior. Dedicated time and compensation for AI guidance discussions, output review, and patient education are essential infrastructure for the AI-enabled future of medicine.
Healthcare systems need frameworks and resources that acknowledge a fundamental truth: patients are already AI users who need guidance, not gatekeeping. DiMe’s upcoming playbook, Implementing AI in Healthcare, helps systems move beyond algorithms to emphasize the people, workflows, and strategies that drive success.
What The Playbook makes possible:
- Empowering clinicians with training, tools, and workflows that actually work
- Centering patients and caregivers as strategic partners across the AI lifecycle
- Anticipating risks like hallucinations, privacy violations, or misaligned incentives, and proactively mitigating them
- Building governance that ensures transparency, accountability, and equity
- Scaling AI responsibly with sustainable processes, not just pilots
Whether you’re a health system leader, innovation officer, clinician, or vendor, The Playbook provides the shared language, tools, and frameworks to bridge vision with reality.