News
AI is expected to become a powerful tool for improving the speed and accuracy of medical diagnosis and for improving patient outcomes. From diagnosing disease to personalizing treatment to predicting surgical complications, artificial intelligence may become as integral to future patient care as imaging and laboratory tests are today. However, as researchers at the University of Washington have found, artificial intelligence models, like humans, tend to look for shortcuts. In the case of AI-assisted disease detection, such shortcuts could lead to misdiagnosis if the models are deployed in clinical settings.

In a new paper published in Nature Machine Intelligence on May 31, researchers from the University of Washington examined several models recently put forward as potential tools for accurately detecting disease from chest radiography (also known as chest X-rays). The research team found that, rather than learning genuine medical pathology, these models rely on shortcut learning, drawing spurious associations between medically irrelevant factors and disease status. A physician would generally expect a diagnosis from an X-ray to be based on specific patterns in the image that reflect the disease process. But rather than relying on such patterns, a system using shortcut learning might, for example, judge that a patient is elderly and infer that they are more likely to have the disease simply because it is more common among older patients.
The research team noted that shortcut learning is less robust than genuine medical pathology and often means that a model will not generalize well beyond its original setting.
Combined with the typical opacity of AI decision-making, this lack of robustness could turn a potentially life-saving tool into a liability.
The lack of transparency is one of the factors that led the team to focus on explainable AI techniques for medicine and science. Most AI systems are regarded as "black boxes": a model is trained on massive datasets and makes predictions without anyone knowing exactly how it arrives at a given result. With explainable AI, researchers and practitioners can understand in detail how various inputs and their weights contribute to a model's output.
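As a rough illustration of that idea (this is not code from the paper): for a simple linear model, each input's contribution to the output can be read off directly as its weight multiplied by its value. The feature names and weights in the Python sketch below are invented purely for illustration.

```python
import numpy as np

# Toy "explainable" model: a linear classifier over three hypothetical
# image-derived features. For a linear model, the attribution of each
# input to the score is simply weight * input value.
feature_names = ["lung_opacity", "text_marker_present", "patient_age"]
weights = np.array([0.8, 1.5, 0.02])   # invented weights, for illustration only
bias = -1.0

x = np.array([0.3, 1.0, 67.0])         # one hypothetical input case

contributions = weights * x            # per-feature contribution to the score
score = contributions.sum() + bias

for name, value in zip(feature_names, contributions):
    print(f"{name:>20}: {value:+.2f}")
print(f"{'total score':>20}: {score:+.2f}")
# A large contribution from 'text_marker_present' would flag a shortcut:
# the model keys on a dataset artifact rather than on pathology.
```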
The research team used these same techniques to evaluate the trustworthiness of recently touted models that appear to accurately identify disease from chest X-rays. Although many published papers have heralded these results, the researchers suspected that something else might be happening inside the black box to drive the models' predictions. The team reasoned that, owing to the lack of training data available for such a new disease, these models were likely to suffer from "worst-case confounding." This situation increases the likelihood that a model will rely on shortcuts rather than learning the underlying pathology of the disease from the training data. Worst-case confounding allows an AI system to simply learn to recognize which dataset an image came from, rather than learning any true disease pathology; it occurs when all positive cases come from one dataset and all negative cases come from another. Although researchers have proposed techniques to mitigate such associations when they are less severe, those techniques do not help when the association between disease status and data source is essentially perfect.
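The sketch below is a minimal synthetic illustration of worst-case confounding, assuming invented numeric features rather than real X-ray data. Because every positive case comes from "source A" and every negative case from "source B", a classifier can score almost perfectly on an internal test simply by detecting a source-specific artifact, and then fails when that pairing is reversed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cases(positive, source_a, n=500):
    """Synthetic cases with two features: a weak, noisy pathology signal and a
    strong source-specific artifact (think of a text marker burned into the film)."""
    pathology = rng.normal(1.0 if positive else 0.0, 2.0, n)   # weak disease signal
    artifact = rng.normal(1.0 if source_a else 0.0, 0.1, n)    # strong source marker
    return np.column_stack([pathology, artifact]), np.full(n, int(positive))

# Worst-case confounding: every positive comes from source A and every negative
# from source B, so disease label and data source coincide exactly.
Xp, yp = make_cases(positive=True, source_a=True)
Xn, yn = make_cases(positive=False, source_a=False)
clf = LogisticRegression().fit(np.vstack([Xp, Xn]), np.concatenate([yp, yn]))

# Internal test (same confounded pairing): accuracy looks excellent.
Xp2, yp2 = make_cases(True, True)
Xn2, yn2 = make_cases(False, False)
internal = clf.score(np.vstack([Xp2, Xn2]), np.concatenate([yp2, yn2]))

# External test (pairing reversed: positives now come from source B):
# accuracy collapses because the model keyed on the artifact, not the pathology.
Xp3, yp3 = make_cases(True, False)
Xn3, yn3 = make_cases(False, True)
external = clf.score(np.vstack([Xp3, Xn3]), np.concatenate([yp3, yn3]))

print(f"internal accuracy: {internal:.2f}")   # typically close to 1.00
print(f"external accuracy: {external:.2f}")   # typically near 0 -- the shortcut backfires
```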

The research team trained multiple deep convolutional neural networks on X-ray images, replicating the approach used in the published papers. They then tested each model's performance on a held-out set of images from the initial dataset that differed from the training data, and on a second, external dataset meant to represent new hospital systems.
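A minimal PyTorch-style sketch of this kind of train-and-evaluate protocol follows. The tiny network and the randomly generated stand-in "X-rays" are placeholders and do not reproduce the models, datasets, or training procedure used in the study.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def fake_xray_dataset(n, seed):
    """Stand-in data: random tensors shaped like small grayscale X-rays.
    In practice these would be chest radiographs from different sources."""
    g = torch.Generator().manual_seed(seed)
    images = torch.randn(n, 1, 64, 64, generator=g)
    labels = torch.randint(0, 2, (n,), generator=g)
    return TensorDataset(images, labels)

train_set = fake_xray_dataset(256, seed=0)     # training portion of the initial dataset
internal_set = fake_xray_dataset(64, seed=1)   # held-out images from the same dataset
external_set = fake_xray_dataset(64, seed=2)   # images from a second, external source

# A small convolutional classifier (stand-in for the deep CNNs in the study).
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                          # brief training loop
    for x, y in DataLoader(train_set, batch_size=32, shuffle=True):
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

@torch.no_grad()
def accuracy(dataset):
    correct = total = 0
    for x, y in DataLoader(dataset, batch_size=32):
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

# The difference between these two numbers is what the researchers call the
# "generalization gap" (with this random placeholder data both hover near chance).
print("internal accuracy:", accuracy(internal_set))
print("external accuracy:", accuracy(external_set))
```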
While the models maintained high performance when tested on held-out images from the initial dataset, their accuracy dropped by half on the second, external set. The researchers refer to this as a "generalization gap" and cite it as strong evidence that confounding factors were responsible for the models' success on the initial dataset. The team then applied explainable AI techniques, including generative adversarial networks and saliency maps, to identify which image features were most important in determining the models' predictions. The results upend the conventional wisdom that confounding poses little problem when datasets come from similar sources. They also reveal the extent to which high-performance medical AI systems can exploit undesirable shortcuts rather than the desired signal, and they suggest that explainable AI will be an essential tool for ensuring these models can be used safely and effectively to support medical decision-making and achieve better outcomes for patients.
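To make the saliency-map idea concrete, here is a self-contained sketch of a vanilla gradient saliency map, one common formulation and not necessarily the exact variant the authors used. The tiny model and the random input stand in for a trained classifier and a real chest X-ray.

```python
import torch
import torch.nn as nn

# Vanilla gradient saliency: measure how strongly each input pixel influences
# the model's score for its predicted class. Saliency that concentrates on
# image corners, burned-in text markers, or positioning cues rather than the
# lung fields is a red flag that the model relies on shortcuts.
model = nn.Sequential(                                  # placeholder classifier
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

image = torch.randn(1, 1, 64, 64, requires_grad=True)  # stand-in chest X-ray
scores = model(image)
predicted_class = scores.argmax(dim=1).item()
scores[0, predicted_class].backward()                   # d(score) / d(pixels)

saliency = image.grad.abs().squeeze()                    # (64, 64) importance map
row, col = divmod(saliency.argmax().item(), saliency.shape[1])
print(f"most influential pixel: ({row}, {col})")
```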
Although the team's findings are concerning, the models it analyzed are unlikely to have been widely deployed in clinical settings. Complete information about where and how these models have been deployed is not available, but it is reasonable to assume that their clinical use is rare or nonexistent; in most cases, healthcare providers diagnose the disease with a laboratory test (PCR) rather than relying on chest X-rays.
Researchers hoping to apply artificial intelligence to disease detection will need to refine their methods before these models can be used to guide treatment decisions for patients.
The analysis underscores the importance of applying explainable AI techniques to rigorously audit medical AI systems. If you look at a handful of X-rays, an AI system may appear to perform well; the problems only become clear once you look at many images. Until there are methods to audit these systems efficiently at larger sample sizes, a more systematic application of explainable AI can help researchers avoid some of these pitfalls.