Ophthalmol CHN (眼科) ›› 2023, Vol. 32 ›› Issue (4): 305-309. doi: 10.13281/j.cnki.issn.1004-4469.2023.04.007

• Original Article •

Evaluation of orthokeratology fitting status using a deep learning algorithm

Song Hongxin1, Cao Jingwen2, Niu Kai2, He Zhiqiang2

  1. Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University; Beijing Key Laboratory of Ophthalmology & Visual Sciences, Beijing 100730, China; 2. Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China
  • Received: 2023-04-07  Online: 2023-07-25  Published: 2023-07-25
  • Corresponding author: Song Hongxin, Email: songhongxin2012@ccmu.edu.cn
  • Supported by:
    Capital Health Research and Development Special Fund (2022-1G-4083)

Evaluation of orthokeratology fitting status using a deep learning algorithm

Song Hongxin1, Cao Jingwen2, Niu Kai2, He Zhiqiang2   

  1. Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University; Beijing Key Laboratory of Ophthalmology & Visual Sciences, Beijing 100730, China; 2. Key Laboratory of Universal Wireless Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
  • Received: 2023-04-07  Online: 2023-07-25  Published: 2023-07-25
  • Contact: Song Hongxin, Email: songhongxin2012@ccmu.edu.cn
  • Supported by:
     Capital Health Research and Development Special Fund (2022-1G-4083)

摘要 (Abstract):  Objective To develop an artificial intelligence algorithm based on fluorescein staining for automatic evaluation of the fitting status of orthokeratology lenses. Design Diagnostic test. Participants Orthokeratology fitting videos of 360 patients (360 eyes) fitted with orthokeratology lenses from April to May 2022. Methods A deep learning algorithm based on an attention mechanism was used to analyze fluorescein-stained orthokeratology fitting videos. The algorithm used key frames of the stained video to capture static morphological information of the lens, and considered the video as a whole to obtain dynamic information such as lens mobility. The algorithm adopted a two-stage structure: the first stage classified tight-fitting samples, and based on this result the second stage further classified proper-fitting and loose-fitting samples. The results were compared with the consensus reached after discussion among 5 optometrists, which served as the reference standard. Main Outcome Measures Sensitivity, classification accuracy, consistency of judgment. Results In the validation set, the algorithm achieved a classification accuracy of 82%, a sensitivity of 80% and a specificity of 85% for tight-fitting samples in the first stage. In the second stage, the classification accuracy for proper-fitting and loose-fitting samples was 88%, with a sensitivity of 85% and a specificity of 93%. The final accuracy for each category exceeded 80%: 80% for tight fitting, 83% for proper fitting, and 81% for loose fitting. Conclusions A deep learning algorithm based on an attention mechanism can provide an objective and automatic evaluation of the fitting status of orthokeratology lenses. (Ophthalmol CHN, 2023, 32: 305-309)
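The abstract does not include implementation details. As a rough illustration of the two-stage decision structure described above, the following is a minimal sketch, assuming per-video feature vectors have already been extracted and using stub scikit-learn classifiers; the class labels, feature dimension, and `predict_fit` helper are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch only (not the paper's implementation): the two-stage
# decision flow. Stage 1 flags tight-fitting videos; only the remaining
# videos are passed to stage 2, which separates proper from loose fitting.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder per-video feature vectors (e.g. pooled frame embeddings).
X = rng.normal(size=(360, 64))
labels = rng.integers(0, 3, size=360)          # 0=proper, 1=tight, 2=loose

# Stage 1: tight vs. not tight, trained on all videos.
stage1 = LogisticRegression(max_iter=1000).fit(X, labels == 1)

# Stage 2: proper vs. loose, trained only on the non-tight videos.
mask = labels != 1
stage2 = LogisticRegression(max_iter=1000).fit(X[mask], labels[mask] == 2)

def predict_fit(x: np.ndarray) -> str:
    """Two-stage prediction for a single per-video feature vector."""
    if stage1.predict(x[None])[0]:
        return "tight"
    return "loose" if stage2.predict(x[None])[0] else "proper"

print(predict_fit(X[0]))
```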

关键词 (Key words):  orthokeratology, fluorescein staining, fitting status, deep learning

Abstract:  Objective To develop an automatic and objective quantification algorithm based on fluorescein patterns to evaluate the fitting status of orthokeratology. Design Diagnostic test. Participants Ortho-k lens fitting videos with fluorescein patterns from 360 subjects (360 eyes), collected from April to May 2022 at Beijing Tongren Hospital. Methods A deep learning algorithm based on an attention mechanism was used to analyze the fluorescein pattern videos. The algorithm used key frames of the fluorescein pattern video to capture static morphological information of the ortho-k lens, while the video as a whole was considered comprehensively to obtain dynamic information such as ortho-k lens mobility. The algorithm adopted a two-stage structure: the first stage classified tight-fitting samples, and based on this result the second stage further classified proper-fitting and loose-fitting samples. The results were compared with the consensus evaluation of 5 experienced optometrists, which served as the reference standard. Main Outcome Measures Sensitivity, classification accuracy, consistency with the optometrists' results. Results In the validation set, the proposed algorithm achieved a classification accuracy of 82%, a sensitivity of 80%, and a specificity of 85% in the first-stage task of classifying tight-fitting samples. In the second stage, the model classified the remaining two types of samples with an accuracy of 88%, a sensitivity of 85% and a specificity of 93%. The final accuracy for each category exceeded 80% (80% for tight fitting, 83% for proper fitting, and 81% for loose fitting), which was highly consistent with the judgments given by the optometrists. Compared with human evaluation, the algorithm's results showed a high degree of agreement and better repeatability. Conclusions Using a deep learning algorithm based on an attention mechanism, we developed an automatic method for analyzing fluorescein pattern videos of orthokeratology, which can make objective judgments about the fitting status of orthokeratology lenses. (Ophthalmol CHN, 2023, 32: 305-309)
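To make the attention-based video analysis concrete, the sketch below shows one common way to combine per-frame (key-frame) features with attention-weighted temporal pooling in PyTorch. The module name, feature dimension, frame count, and toy per-frame encoder are assumptions for illustration only, not details taken from the paper.

```python
# Hypothetical sketch (not the paper's model): per-frame encoding of key
# frames plus attention-weighted temporal pooling, mixing static morphology
# (per frame) with dynamic information (across frames).
import torch
import torch.nn as nn

class AttentionVideoClassifier(nn.Module):
    def __init__(self, feat_dim: int = 128, num_classes: int = 2):
        super().__init__()
        # Toy per-frame encoder; a real system would use a pretrained CNN.
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim), nn.ReLU(),
        )
        self.attn_score = nn.Linear(feat_dim, 1)   # one attention score per frame
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, frames, 3, H, W); frames are encoded independently.
        b, t, c, h, w = video.shape
        feats = self.frame_encoder(video.reshape(b * t, c, h, w)).view(b, t, -1)
        weights = torch.softmax(self.attn_score(feats), dim=1)   # (b, t, 1)
        pooled = (weights * feats).sum(dim=1)                    # (b, feat_dim)
        return self.head(pooled)                                 # (b, num_classes)

# Usage: 16 key frames of 112x112 RGB from one fitting video.
model = AttentionVideoClassifier()
logits = model(torch.randn(1, 16, 3, 112, 112))
print(logits.shape)   # torch.Size([1, 2])
```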

Key words:  orthokeratology, fluorescein patterns, fitting status, deep learning