For decades, physicians have relied on informal “curbside consults” – quick advice sought from specialists during breaks or rounds. Today, many doctors are turning to a new source for those rapid insights: artificial intelligence (AI). Tools like ChatGPT and specialized platforms such as OpenEvidence are becoming increasingly common, offering immediate, comprehensive answers that often outpace a hallway conversation with a colleague.
The shift is driven by practicality. AI is available around the clock and responds in seconds, filling gaps in overloaded systems. It isn’t flawless, but its utility is clear: doctors recognize the value of a tool that delivers useful input nearly every time, even when its output demands careful review.
The debate isn’t about perfection, but about whether better is good enough. Critics hold AI in healthcare to impossibly high standards, exceeding those applied to human doctors. This reluctance stems from a disproportionate focus on rare errors rather than on overall improvement. Just as driverless cars are statistically safer than human-driven ones, AI tools can enhance medical care even with occasional imperfections.
The broader context is essential: healthcare in the US is deeply flawed. Despite medical advancements, the system is plagued by chaos, bureaucracy, and unsustainable costs. AI offers a path toward transformation, but only if we shift from fearing isolated failures to weighing overall benefits.
AI doesn’t need to be perfect to improve care; it simply needs to be better than the status quo. The future of medicine isn’t about eliminating human error; it’s about augmenting human capability with tools that offer speed, scalability, and a relentless pursuit of better outcomes.
