Healthcare researchers should be wary of misusing AI

Given a data set (green points) and a true effect (black line), a statistical model aims to estimate the true effect. The red line represents a close estimate, while the blue line represents an ML model that relies too heavily on outliers. Such a model may appear to perform excellently on this specific data set, but it fails to perform well on a different (external) data set. Credit: Nature Medicine (2022). DOI: 10.1038/s41591-022-01961-6
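To make the figure's point concrete, here is a minimal sketch (in Python with scikit-learn, not taken from the commentary) that fits both a simple linear model and an overly flexible polynomial model to a small data set containing outliers, then scores them on the data they were fit to and on a separate "external" set. The polynomial degree, sample sizes and injected outliers are illustrative assumptions only.

```python
# Minimal sketch (illustrative only): an overly flexible model that chases
# outliers can look excellent on the data it was fit to, yet fail on new data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

def true_effect(x):
    return 2.0 * x + 1.0  # the "black line": the effect we want to estimate

# Small fitting set with a few outliers, plus a larger external validation set
x_fit = rng.uniform(0, 10, 30)
y_fit = true_effect(x_fit) + rng.normal(0, 1, 30)
y_fit[:3] += 15  # a few outliers
x_ext = rng.uniform(0, 10, 200)
y_ext = true_effect(x_ext) + rng.normal(0, 1, 200)

simple = make_pipeline(PolynomialFeatures(1), LinearRegression())    # "red line"
flexible = make_pipeline(PolynomialFeatures(9), LinearRegression())  # "blue line"

for name, model in [("simple", simple), ("flexible", flexible)]:
    model.fit(x_fit.reshape(-1, 1), y_fit)
    fit_err = mean_squared_error(y_fit, model.predict(x_fit.reshape(-1, 1)))
    ext_err = mean_squared_error(y_ext, model.predict(x_ext.reshape(-1, 1)))
    print(f"{name}: MSE on fitted data {fit_err:.2f}, MSE on external data {ext_err:.2f}")
```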

An international team of researchers, writing in the journal Nature Medicine, advises that great care should be taken not to misuse or overuse machine learning (ML) in healthcare research.

“I fully believe in the power of ML but it should be a relevant addition,” said trainee neurosurgeon and statistics editor Dr. Victor Volovici, first author of the commentary, from Erasmus University MC, The Netherlands. “Sometimes machine learning algorithms do not perform better than traditional statistical methods, resulting in the publication of papers lacking clinical or scientific value.”

Real-world examples have shown that misuse of algorithms in healthcare can perpetuate human biases or inadvertently cause harm when machines are trained on biased data sets.

Professor Nan Liu, senior author of the commentary, from the Centre for Quantitative Medicine and the Health Services and Systems Research Programme at Duke-NUS Medical School, Singapore, said: “Many believe that machine learning will revolutionize healthcare because machines make more objective choices than humans. But without proper oversight, ML models may do more harm than good.”

“If, through ML, we detect patterns that we wouldn’t otherwise see – as in radiographs and pathology – we should be able to explain how the algorithms got there, to allow for checks and balances.”

Together with a group of scientists from the United Kingdom and Singapore, the researchers highlight that although guidelines have been formulated to regulate the use of ML in clinical research, these guidelines apply only once the decision to use ML has already been made; they do not ask whether or when its use is appropriate in the first place.

For example, companies have successfully trained machine learning algorithms to recognize faces and street objects using billions of pictures and videos. But when such algorithms are applied in healthcare settings, they are often trained on data sets of only tens, hundreds, or thousands of patients. “This underscores the relative poverty of big data in healthcare and the importance of working toward the sample sizes achieved in other industries, as well as the importance of concerted international efforts to share big health data,” the researchers write.

Another problem is that most machine learning and deep learning algorithms (which are not given explicit instructions on how to arrive at an outcome) are still considered a “black box.” For example, at the start of the COVID-19 pandemic, scientists published an algorithm that could supposedly predict coronavirus infection from lung images. It later turned out that the algorithm drew its conclusions from the imprint of the letter “R” (for “right lung”) in the images, which sat in a slightly different place across the scans.
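As a rough, purely synthetic illustration of that failure mode (not a reconstruction of the published COVID-19 study), the sketch below shows how a classifier can reach near-perfect accuracy by keying on an incidental “marker” feature that happens to coincide with the positive label, and then fall back to chance level on data without that marker. The data and model are invented for the example.

```python
# Illustrative sketch of "shortcut learning" on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 400, 64                       # 400 tiny 8x8 "images", flattened to 64 pixels
X = rng.normal(size=(n, d))          # pure noise: no real signal
y = rng.integers(0, 2, n)

# In the training data, a bright "marker" pixel coincides with label 1,
# much like a letter stamp that sits in one position on positive scans.
X_marked = X.copy()
X_marked[y == 1, 0] += 5.0

clf = LogisticRegression(max_iter=1000).fit(X_marked, y)
print("accuracy with marker:   ", clf.score(X_marked, y))  # near-perfect
print("accuracy without marker:", clf.score(X, y))          # roughly chance
```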

“We have to get rid of the idea that ML can detect patterns in the data that we can’t understand,” Dr. Volovici said of the incident. “ML can very well detect patterns that we can’t see directly, but then you have to be able to explain how you came to that conclusion. To do that, the algorithm has to be able to show you the steps it took, and that requires innovation.”

The researchers advise that machine learning algorithms should be evaluated against traditional statistical methods (where applicable) before they are used in clinical research. And when their use is considered appropriate, they should complement the clinical decision-making process rather than replace it. “Machine learning researchers must recognize the limitations of their algorithms and models in order to prevent their overuse and misuse, which could otherwise spread mistrust and cause harm to patients,” the researchers wrote.
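As an illustration of that recommendation (not the authors’ evaluation protocol), one might benchmark a candidate ML model against a conventional statistical baseline on the same prediction task before preferring it. In the sketch below, the data set, the two models and the metric are stand-ins chosen for the example.

```python
# Illustrative sketch: compare an ML model with a traditional statistical
# baseline (logistic regression) using cross-validated discrimination (AUC).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # stand-in for a clinical data set

baseline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
ml_model = RandomForestClassifier(n_estimators=200, random_state=0)

# If the ML model does not clearly outperform the simpler, more transparent
# baseline, its added complexity and opacity are hard to justify clinically.
for name, model in [("logistic regression", baseline), ("random forest", ml_model)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC {auc.mean():.3f} (std {auc.std():.3f})")
```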

The team is organizing an international effort to provide guidance on the use of ML and traditional statistics, as well as to create a large database of anonymized clinical data that can harness the power of ML algorithms.


More information:
Victor Volovici et al, Steps to avoid overuse and misuse of machine learning in clinical research, Nature Medicine (2022). DOI: 10.1038/s41591-022-01961-6

Provided by Duke-NUS Medical School


Citation: Health care researchers should be wary of misusing AI (2022, September 13), retrieved September 16, 2022 from https://medicalxpress.com/news/2022-09-health-wary-misusing-ai.html

This document is subject to copyright. Notwithstanding any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.

