
Will AI improve patient care or lead to more medical mistakes?

Aug 10, 2023 | Medical Malpractice

Recent advances in artificial intelligence (AI) software have already started shaking up numerous industries. Creative professionals worry about protecting their income, while engineers can use AI to explore problems that humans may have failed to identify. Many professionals stand to benefit from AI in their fields, and just as many are worried about the implications of its increased use.

Medical professionals could easily find themselves in either group. After all, medicine is traditionally a very labor-intensive field that requires direct communication with patients and extensive continuing education so that professionals can keep up with new developments. Is AI likely to help or hurt those seeking medical attention in the United States?

How AI can help

AI can learn through programming and exposure to data. In theory, medical specialists could train AI to review imaging test results and check patients for signs of tumors and other maladies. Additionally, AI could help with everything from sequencing the genetic codes of different cancers to tailoring medications to individual patients to make them safer. AI is also, theoretically, capable of reviewing and analyzing vast quantities of information much more quickly than a human professional could. More importantly, an AI system is not subject to errors caused by provider fatigue or inattention.

How AI could make things worse

Many medical professionals are already severely overworked. Although AI could take some of the more demanding, repetitive work off their to-do lists, it could also create a kind of dependency that leads to medical errors. Professionals who rely too heavily on AI to perform medical analysis may not review the underlying information themselves and may simply assume the software arrived at the correct diagnosis. Yet a physician's failure to oversee and confirm an AI-generated treatment plan could amount to medical negligence. AI systems have already caused attorneys to be disciplined because their legal analysis cited cases that did not exist. Until the reliability and reproducibility of AI-generated information is assured, providers must verify it rather than rely on it. For example, radiologists may not look as closely at images as they normally would and may take for granted that the AI correctly interpreted the study. If the AI misses or misinterprets an important finding for any reason, patient injury can result.

There is also a longer-term risk: if providers spend less time engaging in critical medical thinking and leave it to AI, they may lose their clinical skills. The ubiquity of calculators and GPS systems has diminished our mental math and sense of direction. Might the same happen if AI performs more of the clinical thinking for physicians, putting them in the role of care delivery rather than decision-making? These are real concerns.

Additionally, any software has its limits, even if it is capable of learning new information. Blind spots in AI programming might lead to diagnostic errors and other failures that would not occur with a human reviewing the same information. The increased use of AI in medical settings could lead to faster diagnoses and more effective treatment protocols, but there are likely to be numerous mistakes along the way, some of which may result in patients becoming ill or even dying. Sufficient reliability and reproducibility for AI-generated care must be assured before widespread rollout.

If doctors cut corners and become overly reliant on AI technology, they may open themselves up to medical malpractice claims. AI technology will improve some aspects of medical care, but it requires vigilant oversight to ensure its accuracy and reliability; otherwise, it will compromise the standard of care. Ultimately, providers are responsible for their patients' outcomes. AI can assist providers, but it should not replace them.