Doctor-Patient Feedback and Interpretation

Of the many challenges that AI algorithms pose in medical training, feedback and interpretability are two of the most prominent. The interpretability of an AI algorithm is the ability of a human to understand how the algorithm connects the features it extracts to the predictions it makes.


A study at Mount Sinai Hospital built a deep learning algorithm using data from 700,000 patients. The algorithm was highly accurate and was able to diagnose conditions that even experts struggle with, such as schizophrenia. However, there was no way for humans to know how the system reached a diagnosis (Paranjape, Schinkel, Nannan Panday, Car, & Nanayakkara, 2019). This is an issue for two reasons: it is very hard for patients to trust a system that cannot provide an explanation, and if an incorrect output puts a patient in danger, it is not clear whether the doctor, the hospital, or the company that developed the algorithm is liable. Many new devices that use artificial intelligence are being created, but they have their limitations. Although these devices will make the jobs of medical professionals easier, they may also make interactions with patients uncomfortable and erode patients' trust in their doctors. These automated devices cannot explain the reasoning behind their decisions or empathize with patients the way doctors can, making it hard for patients to understand their own course of treatment. This is known as the "black box" problem: AI algorithms cannot express the relationship between the data they have observed and the outcomes they produce from it.
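To make the "black box" idea concrete, the sketch below contrasts an opaque model with a directly interpretable one. It is a minimal illustration on synthetic data, not the Mount Sinai system; the feature names and model choices are assumptions made only for the example.

```python
# A minimal sketch (not the Mount Sinai system) contrasting a "black box" model
# with a directly interpretable one on synthetic data. All feature names and
# models here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

feature_names = ["age", "blood_pressure", "glucose", "bmi"]  # hypothetical
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)

# Black box: the network outputs a risk score, but its learned weights do not
# map onto a human-readable explanation of the diagnosis.
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                          random_state=0).fit(X, y)
print("black-box risk score:", black_box.predict_proba(X[:1])[0, 1])

# Interpretable baseline: each coefficient states how a feature pushes the
# prediction up or down, which a clinician can read and question.
glass_box = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(feature_names, glass_box.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```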


The goal of interpretability is not to understand exactly how an AI system works, but to have enough information to understand it as well as possible, and full transparency is not always necessary. A wrong diagnosis in radiology can have severe consequences for a patient, yet reading images is prone to interpretation errors. Interpretability is a fast-evolving field at the center of AI research, with great potential for the future development of safe AI technologies (Kelly, Karthikesalingam, Suleyman, Corrado, & King, 2019, p. 1). Before AI can be implemented across tasks within radiology, task-specific interpretability solutions are required, and if "black box" algorithms are used in medicine, they need to be used with a great deal of judgment and responsibility. AI developers should be aware of the unintended consequences their algorithms can lead to and make sure they are created with all patients in consideration. Involving doctors and surgeons in this process can increase its effectiveness significantly. If the interpretability of algorithms can be improved, human-algorithm interaction will be smoother, and the future adoption of AI that accounts for data protection, fairness and transparency of algorithms, and safety will be supported by a large number of physicians.
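One common family of task-specific interpretability solutions is post-hoc explanation of an already-trained model. The sketch below shows permutation feature importance, which estimates how much a model relies on each input. It is a minimal illustration on synthetic data; the feature names are hypothetical and the technique is a general example, not one prescribed by the studies cited above.

```python
# A minimal sketch of one post-hoc interpretability technique: permutation
# feature importance. The model, feature names, and synthetic data are
# illustrative assumptions, not a real clinical dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["age", "blood_pressure", "glucose", "bmi", "heart_rate"]
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque model.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops; a larger
# drop suggests the model's predictions rely more heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

An explanation like this does not reveal the model's internal mechanics, but it gives clinicians and patients a checkable summary of which inputs drove a prediction, which is often the practical standard of interpretability the paragraph above describes.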


References

Kelly, C., Karthikesalingam, A., Suleyman, M., Corrado, G., & King, D. (2019, October 29). Key challenges for delivering clinical impact with artificial intelligence. Retrieved October 27, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6821018/

Paranjape, K., Schinkel, M., Nannan Panday, R., Car, J., & Nanayakkara, P. (2019, December 3). Introducing Artificial Intelligence Training in Medical Education. Retrieved October 27, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6918207/
