Small bowel cleanliness in capsule endoscopy: a case-control study using validated artificial intelligence algorithm

Small bowel capsule endoscopy (SBCE) may need to be performed immediately after colonoscopy, without additional bowel preparation, if active small bowel disease is suspected. However, it is unclear whether small bowel cleanliness is adequately maintained when SBCE is performed immediately after colonoscopy. We compared the small bowel cleanliness scores of a study group (SBCE immediately after colonoscopy) and a control group (SBCE alone) using a validated artificial intelligence (AI) algorithm (cut-off score > 3.25 for adequate cleanliness). Cases of SBCE in which polyethylene glycol was used were included retrospectively. Of the 85 enrolled cases, 50 (58.8%) were in the study group. The mean time from the last dose of purgative to SBCE was 6.86 ± 0.94 h in the study group and 3.00 ± 0.18 h in the control group. Seventy-five cases (88.2%) showed adequate small bowel cleanliness, with no difference between the two groups. The mean small bowel cleanliness score was 3.970 ± 0.603 in the study group and 3.937 ± 0.428 in the control group. Within the study group, better colon preparation was associated with a higher small bowel cleanliness score (p = 0.015). Small bowel cleanliness was therefore adequately maintained when SBCE was performed immediately after colonoscopy. Neither the timing nor the volume of purgative administration was associated with small bowel cleanliness.
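For illustration, here is a minimal Python sketch of how frame-level cleanliness scores might be aggregated and thresholded at the reported cut-off of 3.25; the scoring function, the 1-5 scale, and all names are assumptions for illustration, not the authors' validated algorithm.

```python
import numpy as np

ADEQUATE_CUTOFF = 3.25  # validated cut-off reported in the study

def study_cleanliness(frame_scores: np.ndarray) -> tuple[float, bool]:
    """Aggregate per-frame cleanliness scores (hypothetical model output,
    e.g. on a 1-5 scale) into a study-level score and an adequacy label."""
    mean_score = float(np.mean(frame_scores))
    return mean_score, mean_score > ADEQUATE_CUTOFF

# Example: scores for a short capsule run (simulated values).
scores = np.array([4.1, 3.8, 3.5, 4.4, 3.9])
mean_score, adequate = study_cleanliness(scores)
print(f"mean score = {mean_score:.3f}, adequate = {adequate}")
```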

Motion artefact reduction in coronary CT angiography images with a deep learning method

Background:

The aim of this study was to investigate the ability of a pixel-to-pixel generative adversarial network (GAN) to remove motion artefacts in coronary CT angiography (CCTA) images.


Methods:

Ninety-seven patients who underwent single-cardiac-cycle multiphase CCTA were retrospectively included, and raw CCTA images and SnapShot Freeze (SSF) CCTA images were acquired. The right coronary artery (RCA) was investigated because its motion artefacts are the most prominent among all the coronary arteries. The data were divided into a training dataset of 40 patients, a validation dataset of 30 patients, and a test dataset of 27 patients. A pixel-to-pixel GAN was trained to generate improved CCTA images from the raw CCTA data, using the SSF CCTA images as targets. The GAN's ability to remove motion artefacts was evaluated by the structural similarity index (SSIM), the Dice similarity coefficient (DSC), and a circularity index. In addition, image quality was visually assessed by two radiologists.
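As a sketch of the three quantitative metrics named above, the following Python uses scikit-image's SSIM and the standard definitions of the Dice coefficient and circularity (4πA/P²); the mask extraction and the exact computations used in the study may differ.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim
from skimage.measure import label, regionprops

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary vessel masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def circularity(mask: np.ndarray) -> float:
    """Circularity index 4*pi*area / perimeter**2 of the largest region
    (1.0 for a perfect circle; lower when motion blurs the vessel)."""
    props = max(regionprops(label(mask)), key=lambda r: r.area)
    return 4.0 * np.pi * props.area / props.perimeter ** 2

def image_ssim(generated: np.ndarray, target: np.ndarray) -> float:
    """Structural similarity between a GAN output and the SSF target."""
    return ssim(generated, target, data_range=target.max() - target.min())
```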


Results:

Circularity was significantly higher for the GAN-generated images than for the raw images of the RCA (0.82 ± 0.07 vs. 0.74 ± 0.11, p < 0.001), with no significant difference between the GAN-generated and SSF images (0.82 ± 0.07 vs. 0.82 ± 0.06, p = 0.96). The GAN-generated images also achieved an SSIM of 0.87 ± 0.06, significantly better than that of the raw images (0.83 ± 0.08, p < 0.001). The DSC results showed that the overlap between the GAN-generated and SSF images was significantly higher than the overlap between the GAN-generated and raw images (0.84 ± 0.08 vs. 0.78 ± 0.11, p < 0.001). The motion artefact scores of the GAN-generated CCTA images of the proximal RCA (pRCA) and middle RCA (mRCA) were significantly higher than those of the raw CCTA images (4 [5-4] vs. 3 [4-3], p = 0.022; 5 [5-4] vs. 3 [3-2], p < 0.001).


Conclusions:

A GAN can significantly reduce the motion artefacts in CCTA images of the middle segment of the RCA and has the potential to serve as a new method for removing motion artefacts from CCTA images.


Keywords:

Artificial intelligence; Coronary CT angiography; Deep learning; Motion artefacts.

Deep learning in drug discovery: a futuristic modality to materialize the large datasets for cheminformatics

Artificial intelligence (AI) development imitates the workings of the human brain to tackle modern problems. Traditional approaches such as high-throughput screening (HTS) and combinatorial chemistry are lengthy and expensive for the pharmaceutical industry, as they can handle only smaller datasets. Deep learning (DL) is a sophisticated AI method that builds a thorough representation of a particular system from data. The pharmaceutical industry is now adopting DL techniques to enhance the research and development process. Multi-oriented algorithms play a crucial role in QSAR analysis, de novo drug design, ADME evaluation, physicochemical analysis, and preclinical development, as well as in improving the precision of clinical trial data. In this study, we investigated the performance of several algorithms, including deep neural networks (DNN), convolutional neural networks (CNN), and multi-task learning (MTL), with the aim of generating high-quality, interpretable, large, and diverse databases for drug design and development. Studies have demonstrated that CNNs, recurrent neural networks, and deep belief networks are compatible, accurate, and effective for the molecular description of pharmacodynamic properties. During the Covid-19 pandemic, existing pharmacological compounds were also repurposed using DL models; before a Covid-19 vaccine was available, remdesivir and oseltamivir were widely employed to treat severe SARS-CoV-2 infections. In conclusion, the results indicate the potential benefits of employing DL strategies in the drug discovery process. Communicated by Ramaswamy H. Sarma.
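To make the DL workflow concrete, here is a minimal PyTorch sketch of a property-prediction network over SMILES strings; the character vocabulary, the bag-of-characters featurization, and the network size are illustrative assumptions, far simpler than the CNN/DNN/MTL models discussed in the review.

```python
import torch
import torch.nn as nn

# Hypothetical character vocabulary for SMILES strings (assumption).
VOCAB = {ch: i for i, ch in enumerate("#()+-123456789=CFHNOPS[]cnos")}
MAX_LEN = 64

def featurize(smiles: str) -> torch.Tensor:
    """Bag-of-characters featurization of a SMILES string (a deliberately
    simple stand-in for fingerprints or graph convolutions)."""
    x = torch.zeros(len(VOCAB))
    for ch in smiles[:MAX_LEN]:
        if ch in VOCAB:
            x[VOCAB[ch]] += 1.0
    return x

# A small DNN for a single property-prediction task (e.g. solubility).
model = nn.Sequential(
    nn.Linear(len(VOCAB), 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)

# Example forward pass on aspirin's SMILES (illustrative only).
x = featurize("CC(=O)Oc1ccccc1C(=O)O").unsqueeze(0)
print(model(x).shape)  # torch.Size([1, 1])
```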


Keywords:

Drug discovery; SARS-CoV-2; SMILES format; artificial intelligence; database; deep learning; deep learning algorithms; drug design.

Reproducibility of artificial intelligence models in computed tomography of the head: a quantitative analysis

When developing artificial intelligence (AI) software for applications in radiology, the underlying research must be transferable to other real-world problems. To verify to what degree this is true, we reviewed research on AI algorithms for computed tomography of the head. A systematic review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). We identified 83 articles and analyzed them in terms of transparency of data and code, pre-processing, type of algorithm, architecture, hyperparameters, performance measures, and balancing of the dataset in relation to epidemiology. We also classified all articles by their main functionality (classification, detection, segmentation, prediction, triage, image reconstruction, image registration, and fusion of imaging modalities). We found that only a minority of authors provided open-source code (10.15%, n = 7), making the replication of results difficult. Convolutional neural networks were predominantly used (32.61%, n = 15), whereas hyperparameters were less frequently reported (32.61%, n = 15). Datasets were mostly from single-center sources (84.05%, n = 58), increasing the models' susceptibility to bias and, in turn, their error rate. The prevalence of brain lesions in the training (0.49 ± 0.30) and testing (0.45 ± 0.29) datasets differed from real-world epidemiology (0.21 ± 0.28), which may lead to overestimated performance. This review highlights the need for open-source code, external validation, and consideration of disease prevalence.
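The prevalence mismatch matters because predictive values depend directly on prevalence. A minimal sketch, assuming a hypothetical model with known sensitivity and specificity, shows via Bayes' theorem how the positive predictive value drops when moving from the training prevalence (0.49) to the real-world prevalence (0.21); the 90%/90% figures are illustrative, not from the review.

```python
def ppv(sens: float, spec: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' theorem."""
    tp = sens * prevalence                   # expected true positives
    fp = (1.0 - spec) * (1.0 - prevalence)   # expected false positives
    return tp / (tp + fp)

# Illustrative model with 90% sensitivity and 90% specificity.
sens, spec = 0.90, 0.90
print(f"PPV at training prevalence 0.49: {ppv(sens, spec, 0.49):.2f}")   # ~0.90
print(f"PPV at real-world prevalence 0.21: {ppv(sens, spec, 0.21):.2f}")  # ~0.71
```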


Keywords:

Artificial intelligence; Epidemiology; Head CT; Machine learning; Reproducibility.

Incorporating patient preferences and burden-of-disease in evaluating ALS drug candidate AMX0035: a Bayesian decision analysis perspective

Qingyang Xu et al. Amyotroph Lateral Scler Frontotemporal Degener. 2022 Oct 26:1-8. doi: 10.1080/21678421.2022.2136994. Online ahead of print.

Abstract

Objective:

To provide the US FDA and the amyotrophic lateral sclerosis (ALS) community with a systematic, transparent, and quantitative framework to evaluate the efficacy of the ALS therapeutic candidate AMX0035 in its phase 2 trial, which showed statistically significant effects (p-value of 3%) in slowing the rate of ALS progression in a relatively small sample of 137 patients.


Methods:

We apply Bayesian decision analysis (BDA) to determine the optimal type I error rate (p-value threshold) under which the clinical evidence for AMX0035 supports FDA approval. Using rigorous estimates of ALS disease burden, our BDA framework strikes the optimal balance between the FDA's need to limit adverse effects (type I error) and patients' need for expedited access to a potentially effective therapy (type II error). We apply BDA to evaluate long-term patient survival based on clinical evidence from AMX0035 and riluzole.


Results:

The BDA-optimal type I error for approving AMX0035 is higher than the 3% p-value reported in the phase 2 trial if the probability of the therapy being effective is at least 30%. Assuming a 50% probability of efficacy and a signal-to-noise ratio of the treatment effect between 25% and 50% (benchmark: 33%), the optimal type I error rate ranges from 2.6% to 26.3% (benchmark: 15.4%). The BDA-optimal type I error rate is robust to perturbations in most assumptions, except for a probability of efficacy below 5%.


Conclusion:

BDA provides a useful framework for incorporating the subjective perspectives of ALS patients and objective burden-of-disease metrics to evaluate the therapeutic effects of AMX0035 in its phase 2 trial.
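As a sketch of the core BDA idea, the following Python minimizes an expected loss over the significance level, trading off the two error types; the loss weights, the signal-to-noise parameterization, and the simple one-sided power model are illustrative assumptions, not the paper's calibrated burden-of-disease estimates.

```python
import numpy as np
from scipy.stats import norm

def expected_loss(alpha, p_eff, snr, loss_fp, loss_fn):
    """Expected societal loss of a one-sided test at significance alpha.
    Type I: approving an ineffective drug (probability alpha under H0).
    Type II: rejecting an effective drug (probability 1 - power under H1)."""
    z = norm.ppf(1.0 - alpha)   # critical value of the test
    power = norm.cdf(snr - z)   # power when the standardized effect equals snr
    return (1.0 - p_eff) * alpha * loss_fp + p_eff * (1.0 - power) * loss_fn

# Hypothetical burden-of-disease weights: for a fatal disease like ALS,
# denying an effective therapy (loss_fn) is weighted more heavily than
# approving an ineffective one (loss_fp). All values are illustrative.
alphas = np.linspace(1e-4, 0.5, 2000)
losses = [expected_loss(a, p_eff=0.5, snr=2.0, loss_fp=1.0, loss_fn=3.0)
          for a in alphas]
print(f"BDA-optimal type I error: {alphas[int(np.argmin(losses))]:.3f}")
```

Because the false-negative weight exceeds the false-positive weight, the minimizer lands well above the conventional 5% level, which is the qualitative point the paper makes for ALS.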


Keywords:

Amyotrophic lateral sclerosis; Bayesian decision analysis; clinical trial development; patient value; pharmaceutical regulation; survival analysis.
