Philosophy of medicine meets AI hallucination and AI drift: moving toward a more gentle medicine
Abstract
The contemporary world is profoundly shaped by technological progress. Among the advancements of our era is the proliferation of artificial intelligence (AI), which has permeated every facet of human knowledge, including medicine. One domain of AI development is the application of large language models (LLMs) in health-care settings. While these applications hold immense promise, they are not without challenges. Two notable phenomena, AI hallucination and AI drift, present significant obstacles. AI hallucination refers to the generation of erroneous information by AI systems, while AI drift denotes the production of inconsistent responses to a single query. The emergence of these challenges underscores the crucial role of the philosophy of medicine. By reminding practitioners of the inherent uncertainty that underpins medical interventions, the philosophy of medicine fosters a more receptive stance toward these technological advancements. Furthermore, by acknowledging the inherent fallibility of these technologies, it reinforces the importance of gentle medicine and humility in clinical practice. Physicians must not shy away from embracing AI tools because of these imperfections; acknowledging uncertainty fosters a more accepting attitude toward such tools, and by continually highlighting their limitations, the philosophy of medicine cultivates a deeper sense of humility among practitioners. It is imperative that experts in the philosophy of medicine engage in thoughtful deliberation to ensure that these powerful technologies are harnessed responsibly and ethically, preventing the reins of medical decision-making from falling into the hands of those without the requisite expertise and ethical grounding.