AI Chatbot Prescribes Toxic Bromide: Digital Deception Endangers Health

This may be the first documented case in which an AI language model contributed to physical harm through medical misinformation, after it suggested a toxic regulated chemical (sodium bromide) as a dietary substitute. The incident reveals how AI can dangerously merge historical medical data (bromide's use as a sedative in the 19th and early 20th centuries) with modern health queries, without proper context or safety guardrails.
📰 What Happened
A 60-year-old man was hospitalized with bromism after following ChatGPT's dangerous advice to substitute sodium bromide for table salt. The case, documented in the Annals of Internal Medicine, began when the man asked the chatbot for salt alternatives and it suggested sodium bromide, a toxic compound used in pesticides and pool treatments. The patient developed bromide-induced psychosis before recovering under medical care. The case highlights the dangers of unregulated AI medical advice.
📖 Prophetic Significance
This incident demonstrates how AI systems are becoming false oracles for health and lifestyle guidance, paralleling prophetic warnings about deception in the last days. The economic stakes are significant: OpenAI, reportedly valued near $90 billion, is gaining unprecedented influence over human decision-making through ChatGPT. The chatbot's ability to retrieve historical medical knowledge (such as the 1800s use of bromides) and misapply it as current guidance illustrates how technological systems can become vectors of mass deception, potentially shaping access to essential goods and services through misinformation.