A comment in Nature Medicine calls for the appropriate application of artificial intelligence (AI) in healthcare and warns of the risks that arise when machine learning algorithms are misused.
Writing in the journal Nature Medicine, an international team of researchers advises that great care should be taken not to misuse or overuse machine learning (ML) in healthcare research.
“I fully believe in the power of ML, but it should be a relevant addition. Sometimes machine learning algorithms do not perform better than traditional statistical methods, resulting in papers that lack clinical or scientific value,” says first author, neurosurgeon Dr. Victor Volovici from Erasmus University MC, The Netherlands.
Real-world examples have shown that misuse of algorithms in healthcare can perpetuate human biases or inadvertently cause harm when machines are trained on biased data sets.
“Many believe that machine learning will revolutionize healthcare because machines make choices more objectively than humans. But without proper oversight, ML models may do more harm than good,” said senior author Associate Professor Nan Liu from the Centre for Quantitative Medicine and the Health Services and Systems Research Programme at Duke-NUS Medical School, Singapore.
Together with a group of scientists from the United Kingdom and Singapore, the researchers highlight that although guidelines have been formulated to regulate the use of ML in clinical research, these guidelines apply only once a decision has been made to use ML and do not ask whether, or when, its use is appropriate in the first place.
For example, companies have successfully trained machine learning algorithms to recognize faces and objects on the road using billions of photos and videos. But when such algorithms are used in healthcare settings, they are often trained on data sets numbering only in the tens, hundreds, or thousands. “This underscores the relative poverty of big data in health care and the importance of working towards sample sizes achieved in other industries, as well as the importance of concerted international efforts to share health data,” the researchers wrote.
Another problem is that most machine learning and deep learning algorithms (which do not receive explicit instructions regarding the outcome) are still considered a “black box”. For example, at the start of the COVID-19 pandemic, scientists published an algorithm that could supposedly predict coronavirus infection from lung images. It later turned out that the algorithm drew its conclusions from the imprint of the letter “R” (for “right lung”) in the images, which was located in a slightly different place from scan to scan.
“We have to get rid of the idea that machine learning can detect patterns in the data that we can’t understand,” Dr. Volovici said of the incident. “ML can detect patterns that we can’t see directly, but then you have to be able to explain how you came to that conclusion. To do that, the algorithm has to be able to show you the steps it took, and that requires innovation.”
The researchers advise that machine learning algorithms should be evaluated against traditional statistical methods (where applicable) before they are used in clinical research. Where ML is deemed appropriate, it should complement the clinical decision-making process, not replace it.
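In practice, the kind of head-to-head evaluation the researchers recommend can be as simple as cross-validating an ML model alongside a traditional statistical baseline on the same data. The sketch below illustrates the idea using scikit-learn; the dataset, the choice of logistic regression as the baseline, and the random forest as the ML model are all illustrative assumptions, not part of the published comment.

```python
# Illustrative sketch: benchmark an ML model against a traditional
# statistical baseline (logistic regression) before adopting it.
# Dataset and model choices are arbitrary, for demonstration only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

models = {
    "logistic regression (baseline)": make_pipeline(
        StandardScaler(), LogisticRegression(max_iter=1000)
    ),
    "random forest (ML)": RandomForestClassifier(random_state=0),
}

for name, model in models.items():
    # 5-fold cross-validated AUC on identical folds for both models.
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f} (+/- {auc.std():.3f})")
```

If the ML model does not clearly outperform the baseline, the simpler and more interpretable statistical model is usually the better choice for clinical research — which is precisely the point the authors make.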
The researchers argue that “machine learning researchers should recognize the limitations of their algorithms and models in order to prevent their overuse and abuse, which can cultivate mistrust and cause harm to the patient.”
The team is organizing an international effort to provide guidance on the use of ML and traditional statistics, as well as to create a large database of anonymized clinical data that can harness the power of ML algorithms.