
Safety first: The nuances of health technology that can hurt your patients


Jennifer Goldsack and Dena Mendelsohn

When Diana Diller downloaded the pregnancy-tracking app Ovia, she used it for much the same reason as the app’s other 10 million users: to track her pregnancy and the health of her baby. She did not intend to share that information with her employer, but that’s exactly what happened.

Professional codes of ethics, laws, and regulations prevent clinicians and researchers from sharing information about patients with unauthorized individuals and entities. And for good reason: Patients have a right to control who can access their health information.

But shaky data rights in the United States mean that when clinicians recommend some health technologies to their patients, they could be unwittingly putting their patients at risk.

Consider the current COVID crisis: From connected temperature sensors to pulse oximeters, clinicians across the world have recommended various digital health products to patients. But how many times have you seen the risks associated with such technologies discussed? We believe that, from unwanted surveillance to data being sold to third parties, the risk of harm to individuals is real and as yet poorly understood by clinicians.

Make no mistake: wearables, health apps, and in-home sensors offer great promise for affordable, accessible, equitable, high-quality care. But in the modern era, data rights have become a safety issue that extends beyond the body. It’s time to make data rights central to our definition of “patient safety.”

The digital health data you instruct patients to collect may threaten both their health and their financial safety

In this post, we illustrate the types of risks that we hope clinicians (in both practice and research) will consider before deciding to deploy a particular digital health technology with individuals in their care. We offer suggestions to help clinical experts scrambling to keep pace with the rapid development of digital health technologies, and we highlight policy gaps that leave all individuals vulnerable to harm.

Specifically, we propose that patient safety in the digital era be redefined to include the risk of harm to individuals through digital health technologies and the data they generate.

Our intent is not to advance the belief that digital products should be avoided in clinical care and research. Rather, decision-making around them should account for the risk of harm from the data they generate. Given the new risks they pose, when it comes to our health, whether physical, mental, or financial, the mantra must always be safety first.

Clinicians and researchers may be vaguely aware of some highly publicized, embarrassing situations created by digital health technologies, such as when Fitbit inadvertently displayed users’ sexual activity. But they should know that the technologies they recommend for patients have the power to do more than embarrass: the data these technologies generate can shape how individuals experience the world around them, including whether they can access important financial tools, secure stable housing, and move about freely.

For example, data from digital health technologies can be folded into “health scores” that influence access to life, disability, and long-term care insurance. Data aggregators collect and combine data in the shadows of our everyday lives to paint a picture of who we are as individuals. It can be difficult or impossible for individuals to correct inaccuracies, let alone remove accurate information that they don’t want shared.

Certainly, not all data manipulation done by data aggregators is harmful. However, data packaged into a “health score” can affect individuals’ access to insurance while the algorithms behind those scores remain hidden and may rely on inaccurate information. For decades, we’ve allowed insurers to determine access to insurance based on genuine medical information. Now, assumptions drawn from unconsented data by opaque algorithms are altering the traditional underwriting process, making it hard for some individuals to purchase insurance at a price they can afford and leaving others unable to obtain a policy at all.
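To make the mechanism concrete, the sketch below (in Python) shows how an aggregator might collapse app and wearable signals into a single opaque score. It is purely illustrative: the field names, weights, and thresholds are invented for this example, and real scoring models are proprietary, which is exactly the problem.

    # Purely illustrative: a toy "health score" built from app and wearable
    # data that was never consented for underwriting. Field names, weights,
    # and thresholds are invented; real aggregator models are proprietary
    # and invisible to the people being scored.
    def toy_health_score(profile: dict) -> float:
        """Collapse aggregated signals into a single opaque score (0-100)."""
        score = 70.0
        # A pregnancy-tracking app leaks a pregnancy flag to a data broker.
        if profile.get("pregnancy_app_user"):
            score -= 5
        # Low step counts from a wearable are treated as a health risk,
        # even if the device simply sat in a drawer.
        if profile.get("avg_daily_steps", 10_000) < 4_000:
            score -= 10
        # Location data suggesting frequent clinic visits lowers the score further.
        score -= 3 * profile.get("clinic_visits_per_month", 0)
        return max(0.0, min(100.0, score))

    applicant = {
        "pregnancy_app_user": True,
        "avg_daily_steps": 3_200,
        "clinic_visits_per_month": 2,
    }
    print(toy_health_score(applicant))  # 49.0: a number the applicant never sees

The arithmetic here is trivial; the asymmetry is not. The inputs may be wrong or stripped of context, and the person being scored typically has no way to inspect, correct, or contest the calculation.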

Aggregated data can impact lending and housing decisions. Studies from American University’s Center for Digital Democracy and the National Bureau of Economic Research reported that consumer health data can be combined by data aggregators and used to profile and discriminate against people in employment, education, insurance, social services, criminal justice, and finance. Clinicians and researchers may not be aware of how digital data can result in discrimination in these areas. This type of data manipulation serves as a proxy for circumventing the U.S. Equal Credit Opportunity Act, so it is not done transparently. But it happens, and clinicians, who owe a duty of care to patients and participants, must be aware of it.

Geolocation information can be used for surveillance. Wearables that measure physical activity rely on technology that can identify an individual’s precise movements. While this data is rich information for legitimate clinical use, it can also be used for surveillance. This is particularly problematic for people of color, who are disproportionately the subjects of undisclosed surveillance, data collection, and monitoring.

Another concern is the fallout when inaccurate conclusions drawn from such data become public. Earlier this summer, for example, a case of mistaken identity jeopardized the safety of an innocent man accused of racist and threatening behavior. Data can also be analyzed with such precision that the New York Times was able to parse the location data of millions of cellphones and piece together a digital diary of individuals. As great a story as this was for the Times, it could not reveal the source of the data because the data transfer was not authorized, which likely made it a data rights violation for those who were tracked.

Inaccurate data from digital health technology can also impact a patient’s access to treatment. In one example, health insurers used data from CPAP machines to shift the cost of care onto patients. Patients may not even know that their data will be shared with parties other than their doctor, and they may have no way to contest faulty reporting.

Clinicians and researchers should be ever mindful of these safety risks and should take steps to mitigate them by conducting due diligence on a technology’s data collection, sharing, and destruction practices. They should also press technology companies to commit, in writing, to strict data limitations set in collaboration with end users, including patients and clinicians.

In the absence of a comprehensive federal data rights law, clinicians and researchers cannot assume that adequate guardrails exist for data derived from new health technologies.

Over time, we hope that medical and clinical curricula will teach trainees to integrate data rights safety risks into their standard risk-benefit analyses. We are already seeing early examples: in January, Rocky Vista University’s College of Osteopathic Medicine launched the first four-year program in digital health, and courses for physicians and other medical professionals are being offered at the medical schools of Brown University and Thomas Jefferson University.

We are encouraged to see increasing support for labeling digital health technologies with data rights information. Once information about data rights is standardized and accessible, it is likely to become easier for clinicians and researchers to conduct risk-benefit analyses of digital health technologies.

At the end of the day, it’s on our lawmakers to enact legislation that sets a data rights framework serving as a baseline for all technology. Far from restricting innovation, we firmly believe that data rights set in law will free technologists to focus on creating the best technology that can win in the marketplace. It will also reduce the burden on clinicians and researchers like you, who could then serve healthcare and biomedical research by selecting the best technology for the intended purpose without having to account for safety risks created by uncertain data rights.

Keen to learn more? We presented “Redefining Patient Safety in the Digital Era” during the Biohacking Village at DEF CON 2020.

Acknowledgements: We drew heavily from examples in a pair of journal articles that we highly recommend: Unregulated Health Research Using Mobile Health Devices and Diversity and Inclusion in Unregulated mHealth Research.

