iScience. 2022 Dec 5;26(1):105692. doi: 10.1016/j.isci.2022.105692. eCollection 2023 Jan 20.
Abstract
Research on AI-assisted breast diagnosis has primarily been based on static images, and it is unclear whether a single static image represents the best image for diagnosis. To explore a method of capturing complementary responsible frames from breast ultrasound screening using artificial intelligence, we used the feature entropy breast network (FEBrNet) to select responsible frames from breast ultrasound screenings and compared the diagnostic performance of AI models based on FEBrNet-recommended frames, physician-selected frames, frames selected at 5-frame intervals, and all frames of the video, as well as the performance of ultrasound and mammography specialists. The AUROC of the AI model based on FEBrNet-recommended frames outperformed the other frame-set-based AI models, as well as ultrasound and mammography physicians, indicating that FEBrNet can reach the level of medical specialists in frame selection. The FEBrNet model can extract responsible frames from video for breast nodule diagnosis, with performance equivalent to responsible frames selected by doctors.
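For context on the comparison metric, the frame-set evaluation above reduces to computing AUROC over per-video malignancy scores. Below is a minimal sketch with toy labels and scores; the variable names and values are illustrative assumptions, not data from the study, and scikit-learn's roc_auc_score is assumed available.

```python
# Minimal sketch: comparing AUROC across diagnosis scores produced by
# different frame-selection strategies. All labels and scores are toy values.
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([0, 1, 1, 0, 1, 0])  # 0 = benign, 1 = malignant
scores = {
    "R_Frames":   np.array([0.10, 0.92, 0.85, 0.20, 0.77, 0.30]),
    "Phy_Images": np.array([0.15, 0.88, 0.70, 0.35, 0.66, 0.45]),
    "Fix_Frames": np.array([0.25, 0.80, 0.60, 0.40, 0.55, 0.50]),
}

for name, s in scores.items():
    print(f"{name}: AUROC = {roc_auc_score(labels, s):.3f}")
```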
Keywords:
Artificial intelligence; Cancer; Computer-aided diagnosis method.
© 2022 The Author(s).
Conflict of interest statement
The authors of this manuscript declare no relationships with any companies whose products or services may be related to the subject matter of the article.
Figures

Graphical abstract

Figure 1
Flow chart and statistical results. Note: R Frames: responsible frames; Phy Images: physician frame selection; Fix Frames: fixed-interval frame selection; All Frames: all frames of the video; Ultrasound: diagnosis by senior ultrasound doctors; Mammography: diagnosis by senior mammography doctors; p: R Frames vs. others; AUC: area under the curve; NA: not applicable.

Figure 2
3-fold cross-validation results on the training set. Note: (A) R_Frames: responsible frames; (B) Phy_Images: physician frame selection; (C) Fix_Frames: fixed-interval frame selection; (D) All_Frames: all frames of the video.

Figure 3
Comparison of the effectiveness of FEBrNet’s responsible frames and others in the independent testing set. Note: AUC: area under the curve; 95% CI: 95% confidence interval; (A) R_Frames: responsible frames; (B) Phy_Images: physician frame selection; (C) Fix_Frames: fixed-interval frame selection; (D) All_Frames: all frames of the video; (E) Ultrasound: diagnosis by senior ultrasound specialists; (F) Mammography: diagnosis by senior mammography specialists.

Figure 4
A case study of responsible frames selected by FEBrNet. Note: (A) and (B) The top 3 frames sorted by FScore are relatively close in time sequence, visually identical, and contribute comparable characteristics; (C) and (D) the top 3 frames chosen by the Entropy Reduce method show more diverse image characteristics and are scattered across the 2D feature plot.

Figure 5
Ultrasound video preprocessing

Figure 6
FEBrNet model structure. Note: The backbone is MobileNet_224 pre-trained on a filtered ultrasound image dataset. FEBrNet reuses the backbone's feature extractor and the weights of its pre-trained fully connected layer to generate a feature matrix, from which it identifies responsible frames and makes diagnostic predictions.
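As a rough illustration of this structure, the sketch below reuses a pre-trained backbone as a per-frame feature extractor plus a fully connected head. MobileNetV2 with ImageNet weights stands in for the paper's ultrasound-pre-trained MobileNet_224, and the head shape (1280 to 2) is an assumption for illustration only.

```python
# Sketch: per-frame feature extraction with a pre-trained backbone, then a
# fully connected head for diagnosis. MobileNetV2/ImageNet is a stand-in for
# the paper's ultrasound-pre-trained "MobileNet_224"; head shape is assumed.
import torch
import torchvision.models as models

backbone = models.mobilenet_v2(weights="IMAGENET1K_V1").eval()
feature_extractor = backbone.features              # convolutional trunk
fc = torch.nn.Linear(1280, 2)                      # benign/malignant head (assumed)

@torch.no_grad()
def frame_features(frames):                        # frames: (N, 3, 224, 224)
    fmap = feature_extractor(frames)               # (N, 1280, 7, 7)
    return fmap.mean(dim=(2, 3))                   # global average pool -> (N, 1280)

frames = torch.rand(30, 3, 224, 224)               # one decoded video (toy input)
feats = frame_features(frames)                     # per-frame feature matrix
video_feat = feats.max(dim=0).values               # max-pool over frames (cf. Figure 7)
logits = fc(video_feat)                            # diagnostic prediction (2 logits)
```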

Figure 7
An example of selecting responsible frames. Step 1: max-pool the three frame feature matrices to yield the video feature matrix. Step 2: choose frame 1 as the first responsible frame, because it minimizes the difference between FScore_video and FScore_frame_i. Step 3: add another frame to the responsible frame collection, step 2 having already chosen one; since the FScore difference between the video and the responsible frame collection {frame 1, frame 3} is the smallest, frame 3 is picked as the second responsible frame. Example Figure 1: Schematic diagram of the automatic stop of responsible frame selection. Note: RF_1 denotes the first responsible frame, and so on. As the model selects responsible frames, the malignant prediction value rises gradually from its initial value up to RF_9 (0.93) and begins to decline from RF_10, so selection stops at RF_9.
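A minimal sketch of this greedy selection follows, under one reading of the caption: each step adds the unchosen frame whose inclusion (via max-pooling) brings the pooled score closest to the whole-video FScore, and selection stops once the malignant prediction of the growing collection starts to decline. The function and the toy score are illustrative assumptions, not the paper's implementation.

```python
# Sketch of greedy responsible-frame selection (illustrative reading of Figure 7).
import numpy as np

def select_responsible_frames(feats, score):
    """feats: (N, D) per-frame feature matrix; score: pooled feature -> malignancy."""
    target = score(feats.max(axis=0))          # FScore of the full video
    chosen, pooled, last = [], None, -np.inf
    for _ in range(len(feats)):
        best_i, best_gap, best_vec = None, np.inf, None
        for i in range(len(feats)):
            if i in chosen:
                continue
            cand = feats[i] if pooled is None else np.maximum(pooled, feats[i])
            gap = abs(target - score(cand))    # distance to the video FScore
            if gap < best_gap:
                best_i, best_gap, best_vec = i, gap, cand
        pred = score(best_vec)
        if pred < last:                        # prediction declines: stop (cf. RF_9)
            break
        chosen.append(best_i)
        pooled, last = best_vec, pred
    return chosen

feats = np.random.rand(40, 1280)                            # toy per-frame features
toy_score = lambda v: 1.0 / (1.0 + np.exp(0.5 - v.mean()))  # stand-in FScore
print(select_responsible_frames(feats, toy_score))
```

The stop rule mirrors the Example Figure 1 schematic: a frame whose addition would lower the malignant prediction is discarded and selection halts at the previous frame.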