My Venture into the World of Artificial Intelligence in Radiology
Updated: Jan 15, 2020
About a month ago I started a position as the Scientific Editor at the Radiological Society of North America. I read and work on two journals: Radiology: Imaging Cancer and Radiology: Artificial Intelligence. My job is to read scientific manuscripts and help to improve reading flow and scientific/technical communication within articles so they are more accessible to our readers.
My PhD and postdoc experience revolved around cellular and molecular biology, and I have always had an interest in cancer, so the articles in Imaging Cancer are more accessible to me. Artificial Intelligence, however, is a newer field to me, and I have a lot to learn!
This week was the 105th RSNA Annual Meeting and Scientific Assembly. This year, the RSNA meeting held a brand-new technical exhibit: the AI Showcase. Here, over 150 AI companies showcased their AI applications to radiologists, scientists, IT personnel, and other health care providers. Within the AI Showcase, we had an AI Theater. Positioned in the middle of the hall, it acted as a sort of "hub" where vendors and AI experts gave presentations about the latest in AI technologies. I was stationed near the AI Theater to help direct members to where they needed to go, and along the way I was also able to listen to some of the AI talks!
Over the last couple of days I learned about a few topics that will be major discussion points for AI in the years to come. I wanted to highlight them here by giving an overview of what I learned!
This blog is intended for individuals who are new to the field of AI!
Should we call it “Artificial Intelligence” or “Machine Learning”?
“Artificial intelligence” is definitely the sexy term to use; there is no doubt about that. Some individuals may not like that phrase and prefer “machine learning”. The figure below shows how these terms relate within the field, with machine learning being a subset of artificial intelligence.
How can we implement AI algorithms in day-to-day use in hospitals?
One of the topics that I heard about at the meeting is the challenge of actually implementing AI algorithms within a hospital environment. Radiologists and physicians may be excited to readily accept these applications in their practice; however, they are not the ones making the decision to purchase the software. The individuals making the purchase, such as the CIO, are not the ones interacting with patients (at least, that is how I understand it). The purchase and implementation of AI algorithms needs to be profitable, or a hospital system won't buy it.
It is easy to see how AI will be profitable: radiologists can be aided by these AI algorithms to reach faster diagnoses, so we can help more patients in a given time. Additionally, AI applications may be able to mitigate potential misdiagnoses and support higher confidence in the diagnosis of certain disease states.
However, the profitability issue will arise when AI is initially implemented. The early days of using AI will require a bit of a learning curve, which may end up costing more in the beginning. This will be a point of discussion in the years to come as AI algorithms make their way into hospital systems.
How will hospitals handle implementing (potentially) hundreds of AI applications in their practice?
When I walked through the AI Showcase, there were hundreds of different AI applications being presented. Here are just a few examples:
-Lesion detection in the breast
-Tuberculosis and granuloma detection in the lungs from chest x-rays
-Hemorrhage detection from cranial CT images
-Tumor detection in the brain
-Annotation tools for x-ray, CT, and MR images
-Cerebral aneurysm detection
-Colorectal cancer detection
To say the least, there are AI algorithms developed for every part of the body. Having multiple algorithms from multiple companies can make these applications difficult to implement. Say, for example, we want to implement 50 different applications: that would be 50 different programs to handle, from potentially 50 different companies. That is a lot of work. Larger companies are working with smaller companies and integrating their algorithms into their own interfaces, so that hospitals can purchase one interface with multiple different applications.
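To make the "one interface, many applications" idea concrete, here is a minimal sketch of such an integration layer as a simple registry that routes an imaging study to every vendor algorithm registered for it. All vendor and application names below are invented for illustration; this is not any real product's API.

```python
# Hypothetical sketch: one hospital-facing interface routing a study to
# every vendor application registered for that study type. Names are
# made up for illustration.
from typing import Callable, Dict, List, Tuple

# Registry: study type -> list of (application name, analysis function)
_registry: Dict[str, List[Tuple[str, Callable[[str], str]]]] = {}

def register(study_type: str, app_name: str, analyze: Callable[[str], str]) -> None:
    """Plug one vendor application into the shared interface."""
    _registry.setdefault(study_type, []).append((app_name, analyze))

def run_all(study_type: str, study_id: str) -> List[str]:
    """Run every registered application on one study. The hospital deals
    with this single entry point instead of 50 separate programs."""
    return [f"{app_name}: {analyze(study_id)}"
            for app_name, analyze in _registry.get(study_type, [])]

# Two hypothetical vendor applications sharing the same interface
register("head_ct", "HemorrhageDetect", lambda sid: f"no hemorrhage found in {sid}")
register("head_ct", "AneurysmScan", lambda sid: f"1 aneurysm candidate in {sid}")

print(run_all("head_ct", "study-001"))
```

The point of the design is that adding a 51st application only requires one more `register` call, not a new program for the hospital to manage.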
Lastly, how do deep learning algorithms come to their 'conclusions'?
This issue is known as the Black Box problem, and it raises the question of how we can understand and interpret how these algorithms detect patterns in the radiographic images we input. Depending on the application, there can be multiple hidden layers that are not accessible to human users, which makes it difficult to determine how certain conclusions are reached. The field of black box interpretation is growing and will help radiologists understand how their applications came to their conclusions. This will also build confidence in the diagnoses these programs output.
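One common family of interpretation methods is occlusion sensitivity: mask one region of the image at a time and see how much the model's output changes; regions whose occlusion changes the score the most are the ones the model relied on. Below is a toy sketch of the idea. The "model" here is a stand-in function (it just averages the top-left quadrant of the image), not a real network, so the heatmap should light up exactly that quadrant.

```python
import numpy as np

def occlusion_map(image: np.ndarray, model, patch: int = 2) -> np.ndarray:
    """Slide an occluding patch over the image and record how much the
    model's score drops at each position. Large drops mark regions the
    model relied on for its prediction."""
    base = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # mask one patch
            heat[i // patch, j // patch] = base - model(occluded)
    return heat

# Toy "model": score = mean intensity of the top-left 4x4 quadrant,
# so only occlusions inside that quadrant should change the score.
def toy_model(img: np.ndarray) -> float:
    return float(img[:4, :4].mean())

img = np.ones((8, 8))
heat = occlusion_map(img, toy_model, patch=2)
# heat[:2, :2] is nonzero (the quadrant the toy model uses); the rest is 0
```

Real interpretability tools for deep networks use the same principle, along with gradient-based methods, to produce heatmaps a radiologist can check against the anatomy.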
This blog only scratches the surface of the field of AI in Radiology! I am very much looking forward to learning more about this field and posting new blogs in the future! Leave a comment for discussion!
For more science fun, follow me at @adeline_bio on Twitter.
Thanks for the read!