Or Degany MD, Itamar Ben Shitrit MD MPH
Artificial intelligence (AI) and machine learning have moved to the forefront of scientific discourse and clinical medicine, offering improved accuracy and efficiency while raising concerns about transparency, accountability, and unintended consequences. Recent developments, particularly large-scale and generative models, have fueled these debates. However, efforts to mimic aspects of human intelligence long predate ChatGPT. These efforts range from early rule-based systems to Weizenbaum's ELIZA program, which humorously simulated a Rogerian psychotherapist in its Doctor script [1]. For clinicians, the real test is not whether predictions become marginally more accurate on average, but whether they improve the identification of high-risk patients and meaningfully change management.