The Promise and Peril of Instant Health Information Access for Consumers
Apps and AI Let Patients Access Their Diagnostic Data in a Blink, but the Lack of Physician Input Fuels Fear
Today, accessing medical information, which once required scheduling an appointment with a health provider, can be done through an app with a single tap. Health portals provide instant access to diagnostic test results, from complete blood count panels to kidney function markers to hormone levels.
While we should welcome ready access to health data, viewing it without the interpretation of a seasoned clinical expert can fuel fear. This is especially true when worrisome data prompts you to reflexively rush off to “Dr. Google” – a source that knows neither empathy nor accuracy.
On behalf of myself, my family, and the patients whom I know, I’d urge those engaged in managing and improving health tech to recognize what now essentially amounts to a gap in care. You must integrate responsible, clear, and reassuring interpretive information into your apps and platforms. For patients, who are the reason for the health system’s very existence, such guidance is much more than a “nice-to-have” feature; supplying it is an ethical imperative.
Patients should not be left to parse complex lab data alone, nor should they be making decisions based on fragmented information, especially when a single abnormal result might suggest a more serious situation than actually exists. While AI engines can process and summarize vast amounts of information in a blink, they lack the clinical judgment and humanity of a physician who knows a patient’s medical history.
Further, AI-based models cannot currently account for every factor influencing personal health metrics, such as lifestyle, age, genetic predispositions, or recent health events. Even the language used by AI can inadvertently amplify fear; a model might describe results as “abnormal” or “high-risk” when the variation is clinically insignificant. And because AI still lacks a provider’s empathy and human touch, patients may rightly feel uneasy receiving sensitive information without the guidance of a health professional.
AI and Large Language Models: Help and Hazard
A recent JAMA study surprisingly suggests that patients often prefer ChatGPT tools to physician conversations. While physicians understandably doubted the study’s conclusions, its data, and the public conversation it sparked, revealed that patients seek and welcome technology that can answer their pressing clinical and emotional questions in real time.
Large language models (LLMs) can provide general context, describing what test results typically mean and suggesting possible follow-up actions, like consulting a doctor if certain thresholds are exceeded. This technology can offer helpful insights, especially for people facing cancer and other frightening health challenges that require prompt access to professional input.
LLM cancer mentor apps such as Dave AI are revolutionary tools for cancer patients, caregivers, and physicians. They are like a Waze for navigating the complex cancer diagnostic and care journey: patients no longer have to wait for their next appointment to get answers to their questions. Health tech innovators recognize that diagnoses leave patients anxious between doctor visits, and that well-trained LLMs may often be the best way to fill that gap.