
Saturday, January 9, 2021

ARTIFICIAL INTELLIGENCE IN THE DIAGNOSIS OF LIVER TUMORS

 








Focal liver lesion detection 

Deep learning algorithms combined with multiple imaging modalities have been widely used in the detection of focal liver lesions (Table 2). The combination of deep learning methods using CNNs with CT for liver disease diagnosis has gained wide attention[35]. Compared with visual assessment, this strategy may capture more detailed lesion features and support a more accurate diagnosis. Vivanti et al reported that deep learning models based on longitudinal liver CT studies could detect new liver tumors automatically with a true positive rate of 86%, whereas the stand-alone detection rate was only 72%; the method achieved a precision of 87%, a 39% improvement over a traditional SVM model[36]. Other studies[37-39] have also used CT-based CNNs to detect liver tumors automatically, but these methods may not reliably detect new tumors because small new tumors are insufficiently represented in the training data. Ben-Cohen et al developed a CNN model that predicts the primary origin of liver metastases among four primary cancers (melanoma, colorectal cancer, pancreatic cancer, and breast cancer) from CT images[40]. In this automatic multiclass categorization of liver metastatic lesions, the automated system achieved 56% accuracy for the primary site; when the prediction was framed as a top-2 or top-3 classification task, accuracy rose to 83% and 99%, respectively. Such automated systems may provide useful decision support for physicians and lead to more efficient treatment.

CNN models that detect liver lesions in ultrasound images have also been developed. Liu et al showed that a CNN based on liver ultrasound images can effectively extract the liver capsule and accurately diagnose liver cirrhosis, with a diagnostic AUC of 0.968. Compared with two low-level feature extraction methods, histogram of oriented gradients (HOG) and local binary pattern (LBP), whose mean accuracy rates were 83.6% and 81.4%, respectively, the deep learning method achieved a better classification accuracy of 86.9%[41]. A deep learning system using a CNN was also reported to outperform conventional machine learning systems for fatty liver disease detection and risk stratification, with a detection and risk stratification accuracy of 100%[42]. Hassan et al used a sparse autoencoder to learn a representation of the liver ultrasound image and a softmax layer to detect and distinguish different focal liver diseases; the deep learning method achieved an overall accuracy of 97.2%, compared with 96.5%, 93.6%, and 95.2% for multi-SVM, k-nearest neighbor (KNN), and naive Bayes, respectively[43].

An ANN based on 18F-FDG PET/CT scans together with demographic and laboratory data showed high sensitivity and specificity for detecting liver malignancy and correlated strongly with MR imaging findings, which served as the reference standard[44]. The AUCs of the lesion-dependent and lesion-independent networks were 0.905 (standard error, 0.0370) and 0.896 (standard error, 0.0386), respectively. This automated neural network could help identify focal FDG uptake in the liver that is not visually apparent, possibly positive for liver malignancy, and serve as a clinical adjunct in interpreting PET images of the liver.
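The top-2 and top-3 results reported by Ben-Cohen et al above are instances of top-k accuracy: a prediction counts as correct if the true class is among the model's k highest-scoring classes. A minimal sketch in plain Python (the class scores and labels below are illustrative, not data from the study):

```python
def top_k_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k
    highest-scoring predicted classes."""
    hits = 0
    for class_scores, true_label in zip(scores, labels):
        # Sort class names by descending score and keep the top k.
        top_k = sorted(class_scores, key=class_scores.get, reverse=True)[:k]
        if true_label in top_k:
            hits += 1
    return hits / len(labels)

# Hypothetical softmax outputs over the four primary cancers in the study.
scores = [
    {"melanoma": 0.1, "colorectal": 0.6, "pancreatic": 0.2, "breast": 0.1},
    {"melanoma": 0.4, "colorectal": 0.3, "pancreatic": 0.2, "breast": 0.1},
    {"melanoma": 0.2, "colorectal": 0.3, "pancreatic": 0.1, "breast": 0.4},
]
labels = ["colorectal", "colorectal", "breast"]

print(top_k_accuracy(scores, labels, 1))  # top-1: 2 of 3 correct
print(top_k_accuracy(scores, labels, 2))  # top-2: all 3 correct
```

Top-k accuracy is always at least as high as top-1 accuracy, which is why the study's 56% top-1 figure rises to 83% and 99% for top-2 and top-3.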


CHALLENGES AND FUTURE PERSPECTIVES 


There is considerable controversy about how long it will take to implement fully automated clinical tasks with deep learning methods[59]; estimates range from a few years to decades. Automated deep learning solutions aim to address the most common clinical problems, those that demand long-term accumulation of expertise or are too complicated for human readers, such as lung screening CT and mammography. Next, researchers need to develop more advanced deep learning algorithms for more complex medical imaging problems, such as ultrasound or PET. At present, a common shortcoming of AI tools is that they cannot handle multiple tasks: there is currently no comprehensive AI system capable of detecting multiple abnormalities throughout the human body.

A great amount of medical data is organized electronically and amassed in a systematic style, which facilitates access and retrieval by researchers. However, the lack of curation of training data is a major drawback in training any AI model. Selecting the relevant patient cohort for a specific AI task, or performing segmentation within images, is essential and helpful. Some AI-based segmentation algorithms[60] are not sufficient to curate data on their own, as they still need human experts to verify their accuracy. Unsupervised learning, which includes generative adversarial networks[61] and variational autoencoders[62], may achieve automated data curation by learning discriminatory features without explicit labeling. Studies have explored unsupervised learning in brain MRI[63] and mammography[64], and more field applications of this state-of-the-art approach are needed.

It is important to note that AI differs from human intelligence in numerous ways. Although various forms of AI have exceeded human performance, they lack higher-level background knowledge and fail to establish associations the way the human brain does.

In addition, AI is typically trained for one task only. The AI field of medical imaging is still in its infancy, especially in ultrasound. It is almost impossible for AI to replace radiologists in the coming decades, but radiologists who use AI will inevitably replace those who do not. As AI technology advances, radiologists will achieve greater accuracy with higher efficiency. We also call for creating interconnected networks of de-identified patient data from around the world and training AI at scale across different patient demographics, geographic areas, diseases, and so on. Only in this way can we create AI that is socially responsible and benefits more people.





Wednesday, December 23, 2020

Ultrasound outperforms x-ray for neonatal pneumothorax

By Theresa Pablos, AuntMinnie staff writer

December 23, 2020 -- Lung ultrasound (LUS) scans outperformed chest x-rays for diagnosing neonatal pneumothorax in a new review that included more than 500 newborns. Ultrasound achieved better sensitivity and specificity and took less time to perform, according to the December 16 study in Ultrasound in Medicine & Biology.

Pneumothorax is a common but life-threatening illness seen in neonatal intensive care units. While CT is the gold standard for diagnosing pneumothorax in adults, chest x-ray is the preferred modality for newborns in order to reduce exposure to ionizing radiation.

Still, chest x-ray has its limitations for neonates. Newborns are especially at risk for latent effects from repeated exposure to ionizing radiation, and it can be difficult to detect pneumothorax with chest x-ray.

Instead, lung ultrasound may be the better first-line imaging modality for diagnosing pneumothorax in infants. Not only does ultrasound not expose infants to ionizing radiation -- it also appears to be more accurate.

"LUS is a new choice for the diagnosis and treatment of neonatal [pneumothorax]," wrote the authors, led by Qiang Fei, PhD, a professor at Zhejiang University School of Medicine in Hangzhou, China. "Compared with [chest x-ray], ultrasound combines the advantages of bedside diagnosis, avoidance of irradiation, cost-effectiveness, high accuracy, and reliability."

For the review, Fei and colleagues reviewed both Chinese- and English-language databases to find prospective studies investigating the diagnostic performance of chest x-ray and lung ultrasound for neonatal pneumothorax. Eight studies with a total of 529 infants met their inclusion criteria.

Lung ultrasound performed better than chest x-ray in the review. Ultrasound netted a sensitivity of 98% and a specificity of 99%, compared with 82% and 96% for chest radiography. Furthermore, ultrasound achieved an area under the curve of 0.997 and was faster to perform in five of the eight studies.

The authors also calculated the diagnostic odds ratio (DOR) for both modalities, a measure of the overall effectiveness of a diagnostic test in which higher values indicate better performance. Lung ultrasound achieved a DOR of 920, while chest radiography had a DOR of 45.

Chest x-ray vs. lung ultrasound for neonatal pneumothorax
                              Chest x-ray   Lung ultrasound
Sensitivity                   82%           98%
Specificity                   96%           99%
Diagnostic odds ratio (DOR)   45            920
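The metrics in the table all derive from the four cells of a 2x2 contingency table. A short Python sketch of the formulas follows; the counts are hypothetical, chosen only to illustrate the arithmetic (the review's pooled figures come from a meta-analysis across eight studies, so a single 2x2 table will not reproduce them exactly):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and diagnostic odds ratio (DOR)
    from the four cells of a 2x2 contingency table."""
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    # DOR = (TP * TN) / (FP * FN): the odds of a positive test in the
    # diseased group divided by the odds in the non-diseased group.
    dor = (tp * tn) / (fp * fn)
    return sensitivity, specificity, dor

# Hypothetical counts for an ultrasound-like test:
# 49 true positives, 1 false positive, 1 false negative, 99 true negatives.
sens, spec, dor = diagnostic_metrics(tp=49, fp=1, fn=1, tn=99)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} DOR={dor:.0f}")
```

Because the DOR multiplies the odds from both rows of the table, small gains in sensitivity and specificity near 100% produce very large DOR differences, which is why ultrasound's 98%/99% translates into a DOR an order of magnitude above chest x-ray's.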

"[Chest x-ray] is associated with a certain rate of misdiagnosis and is less sensitive than LUS for the diagnosis of mild-to-moderate [pneumothorax], especially in premature infants," Fei and colleagues wrote.

They theorized that chest x-ray may have limited usefulness in this population because the lesions are small and can be deep in the lungs. Meanwhile, ultrasound may be better suited to imaging the thin chest walls and narrow thorax of newborns.

The authors didn't have enough studies to sufficiently evaluate the accuracy of lung ultrasound features for diagnosing pneumothorax. However, the disappearance of lung sliding and B-lines and the presence of A-lines looked promising as diagnostic markers of the illness. In addition, the presence or absence of lung points -- where the visceral and parietal pleural surfaces meet -- looked useful for helping to determine illness severity.

Based on the findings, the researchers recommended lung ultrasound as a first-line modality for diagnosing pneumothorax in this population.

"[Chest x-ray] could be carried out as the second-line procedure if there are doubts about the findings during LUS examination, such as examination of neonates with large-area atelectasis," they wrote.