
Tuesday, January 12, 2021

AI [ML and DL] in Thyroid Imaging

Thyroid nodules are a common clinical problem, occurring in 19%-68% of the healthy population [1-3]. Ultrasonography (US) is an essential diagnostic tool used to assess the risk of malignancy and to inform decision-making regarding the use of fine-needle aspiration (FNA) and post-FNA management in patients with thyroid nodules [1-3]. However, accurate recognition and consistent interpretation of US features are challenging for less-experienced operators, resulting in moderate to substantial interobserver and intraobserver variability [4-8]. In addition to experienced radiologists, many other clinicians, including endocrinologists, surgeons, nuclear medicine physicians, cytopathologists, family practice physicians, and other non-imaging specialists, perform thyroid US at primary care centers; as a result, unnecessary FNA and/or diagnostic surgery are commonly performed, placing a significant burden on the healthcare system and causing considerable anxiety to patients [1-3]. In addition, examining thyroid nodules on US is relatively labor-intensive because of their high prevalence in practice. Artificial intelligence (AI)-based computer-aided diagnosis (CAD) systems, built on machine learning (ML) and deep learning (DL) techniques, have been introduced for thyroid cancer diagnosis to overcome the limitations of US diagnosis by clinicians. Many studies have reported the potential roles of these systems in thyroid cancer diagnosis and have demonstrated diagnostic performance comparable to, or even higher than, that of experienced radiologists [8-13]. However, at this point, concerns remain about the use of AI tools in clinical practice, since most studies were designed as proof-of-concept or technical feasibility research without thorough external validation of real-world clinical performance [14-16]. Most studies have been based on algorithms developed by individual researchers, and only a few have investigated the use of commercially available systems. In this review, we discuss the clinical background, development, and validation studies of AI-based CAD systems in thyroid cancer diagnosis, and describe future directions for developing these systems for the personalized and optimized management of thyroid nodules.

Monday, January 11, 2021

US predicts ulcerative colitis treatment response

By Theresa Pablos, AuntMinnie staff writer


January 11, 2021 -- Ultrasound scans may help physicians predict which patients with severe ulcerative colitis won't respond to steroid treatment, according to a January 5 study published in Ultrasound in Medicine & Biology. As a result, these patients could gain quicker access to alternative colon-saving therapies.


Physicians typically administer corticosteroids to patients admitted to the hospital with severe ulcerative colitis, but about one-third of these patients don't respond to the treatment. The small pilot study showed that bowel thickness measurements on ultrasound scans may indicate which patients are more likely to need salvage therapy.

"The key finding was that the simple measurement of bowel wall thickness of affected colonic segments at admission provided a clear guide to subsequent failure of steroids," wrote the authors, led by Dr. Rebecca Smith, a gastroenterologist at Alfred Hospital in Melbourne, Australia.

Smith and colleagues conducted their study with 10 patients who were hospitalized for an ulcerative colitis flare that required treatment with intravenous corticosteroids four times per day. The patients ranged in age from 21 to 39, and 90% were male.

Two independent gastroenterologists conducted gastrointestinal ultrasound scans on the patients at three different time points:

  1. Within 24 hours of admittance and starting steroids
  2. On the third day of steroid treatment
  3. On the seventh day after admittance if the patient was still in the hospital

The gastroenterologists used ultrasound scans to calculate bowel wall thickness measurements for the patients at each of the three time points as well as two weeks and three months after hospital discharge.

Difference in bowel wall thickness (BWT) after corticosteroid treatment

Time point   Measurement                                    Responded to steroids   Did not respond to steroids
Admission    Median BWT for all colonic segments            4.6 mm                  6.2 mm
Admission    Median BWT for most affected colonic segment   4.7 mm                  7.4 mm
Day 3        Median BWT for all colonic segments            4 mm                    6.3 mm

A total of six patients required salvage therapy with infliximab -- five who started therapy on day three and one who commenced therapy on day seven. Ultimately, three patients required a colectomy within 30 days of admission.

The four patients who responded to the steroid treatment had lower initial colonic bowel wall thickness measurements on ultrasound than the patients who required salvage therapy. Within 24 hours of admission, every patient who had a bowel wall thickness measurement of 6 mm or higher in any colonic segment required salvage therapy -- a finding that was statistically significant.
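The reported admission cut-off amounts to a one-line decision rule. The toy Python sketch below applies the 6 mm threshold to hypothetical per-segment measurements; the cut-off comes from the study, but the example numbers are made up for illustration and are not the trial's data.

import python  # (placeholder comment removed below; see sketch)

STEROID_FAILURE_CUTOFF_MM = 6.0

def flags_steroid_failure(segment_bwt_mm):
    """Return True if any colonic segment is at or above the 6 mm cut-off."""
    return any(bwt >= STEROID_FAILURE_CUTOFF_MM for bwt in segment_bwt_mm)

# Hypothetical patients (illustrative values only)
print(flags_steroid_failure([4.2, 4.6, 5.1]))  # False: below cut-off in all segments
print(flags_steroid_failure([4.8, 6.3, 5.5]))  # True: one segment >= 6 mm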

Importantly, the ultrasound-derived measurements proved significant, while endoscopic scores and appearances showed no value in discriminating patients who did and did not respond to steroids.

The two groups had even greater differences in bowel wall thickness measurements later in their hospital stay. On the third day of steroid treatment, patients who responded to treatment showed a statistically significant improvement in bowel wall thickness, whereas the patients who required salvage therapy showed no change.

Although the study was small, the findings exemplified the promise of ultrasound-derived measurements for treatment planning for patients with severe ulcerative colitis. The authors also pointed out that gastrointestinal ultrasound has its own benefits outside of the scope of the study, such as stratifying patients by risk level and potentially preventing the need for a colonoscopy.

"This pilot study has indicated the potential utility of gastrointestinal ultrasound in patients admitted to hospital with severe colitis by providing early and accurate prognostic information regarding the likelihood of response to corticosteroids," they concluded. "Combining this with its noninvasive nature and the information it provides on disease distribution, gastrointestinal ultrasound may be an important tool in optimizing and personalizing management of patients with severe ulcerative colitis."


What is radiomic analysis?

 Radiomics

Dr Henry Knipe and Dr Muhammad Idris et al. From Radiopaedia

Radiomics (as applied to radiology) is a field of medical study that aims to extract a large number of quantitative features from medical images using data characterization algorithms. The data is assessed for improved decision support. It has the potential to uncover disease characteristics that are difficult to identify by human vision alone.

Process

The process of creating a database of correlative quantitative features, which can be used to analyze subsequent (unknown) cases, includes the following steps [3]:

Initial image processing

Images are processed using a variety of reconstruction and enhancement algorithms (e.g., contrast enhancement, edge enhancement). This influences the quality and usability of the images, which in turn determines how easily and accurately an abnormality can be detected and characterized.

Image segmentation

Areas of interest (2D images) or volumes of interest (3D images) are identified or created. Segmentation can be performed manually, semi-automatically, or fully automatically using artificial intelligence.

For large data sets, an automated process is needed because manual techniques are usually very time-consuming and tend to be less accurate, less reproducible and less consistent compared with automated artificial intelligence techniques.
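Where a fully automated pipeline is not available, a simple intensity-based method can serve as a starting point. The sketch below is a minimal illustration only: it applies Otsu thresholding and connected-component analysis to a synthetic image; real segmentation pipelines typically rely on trained models (e.g., U-Nets) and expert review.

import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

rng = np.random.default_rng(1)
image = rng.normal(100, 20, size=(128, 128))   # synthetic background
image[40:80, 40:80] += 60                      # brighter stand-in "lesion" region

threshold = threshold_otsu(image)               # data-driven intensity cut-off
mask = image > threshold                        # binary region of interest

# Keep the largest connected component as the candidate lesion
labels = label(mask)
largest = max(regionprops(labels), key=lambda r: r.area)
print(f"Otsu threshold: {threshold:.1f}, candidate lesion area: {largest.area} px")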

Feature extraction and qualification

Features include volume, shape, surface, density, intensity, texture, location, and relations with the surrounding tissues.

Semantic features are those that are commonly used in the radiology lexicon to describe regions of interest.

Agnostic features are those that attempt to capture lesion heterogeneity through quantitative mathematical descriptors.

Examples of semantic features

· shape
· location
· vascularity
· spiculation
· necrosis
· attachments

Equivalent examples of agnostic features

· histogram (skewness, kurtosis)
· Haralick textures
· Laws textures
· wavelets
· Laplacian transforms
· Minkowski functionals
· fractal dimensions
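As a concrete illustration of the agnostic features listed above, the following Python sketch computes first-order histogram statistics (skewness, kurtosis) and two Haralick-style gray-level co-occurrence texture features from a segmented region of interest. The image and mask are synthetic placeholders and the feature set is deliberately minimal; dedicated packages such as PyRadiomics implement standardized, much larger feature sets.

import numpy as np
from scipy.stats import skew, kurtosis
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)  # stand-in for a CT/US slice
mask = np.zeros_like(image, dtype=bool)
mask[16:48, 16:48] = True                                     # stand-in for a segmented lesion

roi_values = image[mask].astype(float)

# First-order (histogram) features
features = {
    "mean_intensity": roi_values.mean(),
    "skewness": skew(roi_values),
    "kurtosis": kurtosis(roi_values),
}

# Second-order (Haralick-style) texture features from a gray-level co-occurrence matrix,
# computed on a rectangular crop around the segmented region
roi_patch = image[16:48, 16:48]
glcm = graycomatrix(roi_patch, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
features["glcm_contrast"] = graycoprops(glcm, "contrast")[0, 0]
features["glcm_homogeneity"] = graycoprops(glcm, "homogeneity")[0, 0]

print(features)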

Uses

Radiomics can be applied to most imaging modalities including radiographs, ultrasound, CT, MRI and PET studies. It can be used to increase the precision in the diagnosis, assessment of prognosis, and prediction of therapy response, particularly in combination with clinical, biochemical, and genetic data. The technique has been used in oncological studies, but potentially can be applied to any disease.
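As a schematic of how radiomic features are often combined with clinical data for diagnosis or prognosis, the scikit-learn sketch below fits a cross-validated logistic regression on concatenated radiomic and clinical features. The feature matrix and labels are randomly generated placeholders, and the model choice is an illustrative assumption rather than any particular published pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
radiomic_features = rng.normal(size=(200, 20))   # e.g., histogram + texture features
clinical_features = rng.normal(size=(200, 3))    # e.g., age, biomarker levels
X = np.hstack([radiomic_features, clinical_features])
y = rng.integers(0, 2, size=200)                 # e.g., responder vs. non-responder

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {auc.mean():.2f}")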

A typical example of radiomics is using texture analysis to correlate molecular and histological features of diffuse high-grade gliomas [2].

Which radiomic feature extraction methods are most discriminatory varies with the imaging modality and the pathology studied, and their determination is therefore currently (c. 2019) a focus of research in the field of radiomics.

Current challenges include the development of a common nomenclature, image data sharing, large computing power and storage requirements, and validating models across different imaging platforms and patient populations.


LIVER CYSTS, HEMANGIOMAS, AND LIVER TUMORS VIA AI AND DEEP LEARNING ALGORITHMS

Saturday, January 9, 2021

ARTIFICIAL INTELLIGENCE FOR DIAGNOSING LIVER TUMORS

...


Focal liver lesion detection 

Deep learning algorithms combined with multiple imaging modalities have been widely used for the detection of focal liver lesions (Table 2). The combination of deep learning methods using CNNs with CT for liver disease diagnosis has gained wide attention[35]. Compared with visual assessment, this strategy may capture more detailed lesion features and yield more accurate diagnoses. According to Vivanti et al, using deep learning models based on longitudinal liver CT studies, new liver tumors could be detected automatically with a true positive rate of 86%, whereas the stand-alone detection rate was only 72%; the method achieved a precision of 87% and an improvement of 39% over the traditional SVM model[36]. Some studies[37-39] have also used CT-based CNNs to detect liver tumors automatically, but these machine learning methods may not reliably detect new tumors because small new tumors are insufficiently represented in the training data. Ben-Cohen et al developed a CNN model to predict the primary origin of liver metastases among four sites (melanoma, colorectal cancer, pancreatic cancer, and breast cancer) from CT images[40]. In the task of automatic multiclass categorization of liver metastatic lesions, the automated system achieved 56% accuracy for the primary site; when the prediction was framed as a top-2 or top-3 classification task, the accuracy rose to 0.83 and 0.99, respectively. These automated systems may provide favorable decision support for physicians and enable more efficient treatment.

CNN models that use ultrasound images to detect liver lesions have also been developed. According to Liu et al, a CNN model based on liver ultrasound images can effectively extract the liver capsule and accurately diagnose liver cirrhosis, with a diagnostic AUC reaching 0.968. Compared with two low-level feature extraction methods, histogram of oriented gradients (HOG) and local binary patterns (LBP), whose mean accuracy rates were 83.6% and 81.4%, respectively, the deep learning method achieved a better classification accuracy of 86.9%[41]. It has also been reported that a deep learning system using a CNN showed superior performance for fatty liver disease detection and risk stratification compared with conventional machine learning systems, with a detection and risk stratification accuracy of 100%[42]. Hassan et al used a sparse autoencoder to learn representations of liver ultrasound images and a softmax layer to detect and distinguish different focal liver diseases. They found that the deep learning method achieved an overall accuracy of 97.2%, compared with accuracy rates of 96.5%, 93.6%, and 95.2% for multi-SVM, KNN (k-nearest neighbors), and naive Bayes, respectively[43].

An ANN based on 18F-FDG PET/CT scans together with demographic and laboratory data showed high sensitivity and specificity for detecting liver malignancy and correlated highly significantly with MR imaging findings, which served as the reference standard[44]. The AUCs of the lesion-dependent and lesion-independent networks were 0.905 (standard error, 0.0370) and 0.896 (standard error, 0.0386), respectively. The automated neural network could help identify non-visually apparent focal FDG uptake in the liver, possibly positive for liver malignancy, and serve as a clinical adjunct to aid interpretation of PET images of the liver.
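For orientation, the PyTorch sketch below shows the general shape of a small CNN classifier of the kind referenced in these studies: a convolutional network mapping a grayscale liver image patch to lesion-class scores. The architecture, input size, and four-class output are illustrative assumptions and do not reproduce any of the published networks.

import torch
import torch.nn as nn

class LiverLesionCNN(nn.Module):
    """Toy CNN: grayscale patch -> lesion-class logits (illustrative only)."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                 # (N, 64, 1, 1)
        return self.classifier(x.flatten(1)) # (N, num_classes)

model = LiverLesionCNN(num_classes=4)
dummy_patch = torch.randn(8, 1, 128, 128)    # batch of 8 synthetic grayscale patches
logits = model(dummy_patch)
print(logits.shape)                           # torch.Size([8, 4])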


CHALLENGES AND FUTURE PERSPECTIVES 


There is considerable controversy about how long it will take to implement fully automated clinical tasks with deep learning methods[59]; estimates range from a few years to decades. Automated solutions based on deep learning aim to solve the most common clinical problems that demand long-term accumulation of expertise or are too complicated for human readers, for example, lung screening CT and mammography. Next, researchers need to develop more advanced deep learning algorithms to solve more complex medical imaging problems, such as those in ultrasound or PET. At present, a common shortcoming of AI tools is that they cannot handle multiple tasks: there is currently no comprehensive AI system capable of detecting multiple abnormalities throughout the human body.

A great amount of medical data is electronically organized and systematically amassed, which facilitates access and retrieval by researchers. However, the lack of curation of training data is a major drawback in learning any AI model. Selecting a relevant patient cohort for a specific AI task, or performing segmentation within images, is essential and helpful. Some AI-based segmentation algorithms[60] are not sufficient on their own to curate data, as human experts are still needed to verify their accuracy. Unsupervised learning, which includes generative adversarial networks[61] and variational autoencoders[62], may achieve automated data curation by learning discriminatory features without explicit labeling. Studies have explored the possibilities of applying unsupervised learning to brain MRI[63] and mammography[64], and more applications of this state-of-the-art approach are needed. It is also important to note that AI differs from human intelligence in numerous ways: although various forms of AI have exceeded human performance, they lack higher-level background knowledge and fail to establish associations the way the human brain does.
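As a rough illustration of unsupervised representation learning for data curation, the PyTorch sketch below uses a plain autoencoder (simpler than the GANs or VAEs cited above): images are compressed to a low-dimensional code without labels, and per-image reconstruction error can then be used to flag outliers or cluster cases. All shapes, sizes, and names are illustrative assumptions.

import torch
import torch.nn as nn

class ImageAutoencoder(nn.Module):
    """Toy autoencoder: 64x64 image -> latent code -> reconstruction."""
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU(), nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 64 * 64),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        code = self.encoder(x)
        return self.decoder(code).view_as(x)

model = ImageAutoencoder()
batch = torch.rand(16, 1, 64, 64)                # unlabeled synthetic image batch
reconstruction = model(batch)
per_image_error = ((reconstruction - batch) ** 2).flatten(1).mean(dim=1)
print(per_image_error.shape)                      # one curation/outlier score per image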

In addition, AI is typically trained for one task only. The AI field of medical imaging is still in its infancy, especially in ultrasound. It is almost impossible for AI to replace radiologists in the coming decades, but radiologists who use AI will inevitably replace radiologists who do not. With the advancement of AI technology, radiologists will achieve increased accuracy with higher efficiency. We also call for creating interconnected networks of patient data from around the world and training AI at scale across different patient demographics, geographic areas, diseases, and so on. Only in this way can we create AI that is socially responsible and benefits more people.