Comparison of subjective grading and objective assessment in meibography between two computational programs and evaluation of inter- and intraobserver reproducibility.

Manuel Alejandro Garza Leon, Laura Gonzalez, Nallely Ramos Betancourt, Everardo Hernandez-Quintela

Research output: Contribution to journal › Meeting Abstract › peer-review

Abstract

Purpose: Objective assessment of meibography is an important step forward in evaluating the meibomian glands, and new imaging equipment appears frequently; however, the analysis of the images has been little studied. Our objective was to compare the measurement of the area of meibomian gland loss between two computer programs, to evaluate intra- and interobserver reproducibility, and to compare subjective clinical evaluation with both computer programs.

Methods: Prospective, longitudinal, observational study. Meibographies taken with the Antares® meibograph (CSO, Florence, Italy) were selected at random. Images were analyzed with two programs (Phoenix and ImageJ) five times, with one week between measurements, by an expert observer for each program. Intraobserver repeatability was evaluated with the intraclass correlation coefficient (ICC) and the intrasubject standard deviation (SD). The two programs were compared by comparing their mean measurements. Interobserver reproducibility was assessed by comparing the first measurement obtained with each program by the two experts; finally, subjective staging by an expert was compared with the measurements of the computer programs.

Results: Fifty-four images were evaluated. The ICC was 0.989 for Phoenix and 0.988 for ImageJ, with intrasubject SDs of 2.54 and 2.94, respectively. There was a significant difference (p < 0.0001) between the measurements of the two programs (Phoenix 24.48 ± 13.97%, ImageJ 29.05 ± 15.17%). The interobserver comparison showed no statistically significant difference for either program (Phoenix: 24.48 ± 13.97% for the first observer and 24.93 ± 12.70% for the second; ImageJ: 27.91 ± 14.82% for the first observer and 29.05 ± 15.17% for the second).

Conclusions: The comparison of the two programs showed a significant difference. Interobserver differences with ImageJ and with Phoenix were not statistically significant. Intraobserver repeatability was high.
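The intraobserver analysis described in the Methods rests on two quantities: the intraclass correlation coefficient and the intrasubject (within-subject) standard deviation over repeated measurements. A minimal sketch of how these can be computed on a subjects × sessions matrix is shown below; this is not the authors' code, and the choice of a two-way consistency ICC, ICC(3,1), is an assumption — the abstract does not specify which ICC model was used.

```python
import numpy as np

def icc_3_1(data):
    """Two-way mixed, single-measurement, consistency ICC(3,1).

    data: (n_subjects, k_sessions) array of repeated measurements,
    e.g. % meibomian-gland-loss area measured k times per image.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    subj_means = data.mean(axis=1)
    sess_means = data.mean(axis=0)

    # Classical two-way ANOVA decomposition of the sums of squares.
    ss_total = ((data - grand) ** 2).sum()
    ss_subj = k * ((subj_means - grand) ** 2).sum()
    ss_sess = n * ((sess_means - grand) ** 2).sum()
    ss_err = ss_total - ss_subj - ss_sess

    ms_subj = ss_subj / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

def within_subject_sd(data):
    """Intrasubject SD (Sw): square root of the within-subject mean square."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    ss_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum()
    return np.sqrt(ss_within / (n * (k - 1)))

# Illustration on simulated data (hypothetical values, not the study's):
# 54 images, 5 sessions, measurement noise of a few percentage points.
rng = np.random.default_rng(0)
truth = rng.uniform(5.0, 60.0, size=54)          # "true" % gland loss per image
reps = truth[:, None] + rng.normal(0.0, 2.5, size=(54, 5))
print(f"ICC = {icc_3_1(reps):.3f}, Sw = {within_subject_sd(reps):.2f}")
```

With between-image spread much larger than session-to-session noise, the ICC approaches 1 and Sw tracks the repeated-measurement noise, matching the pattern of high repeatability reported in the Results.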
This is an abstract that was submitted for the 2018 ARVO Annual Meeting, held in Honolulu, Hawaii, April 29 - May 3, 2018.
Original language: English
Journal: Investigative Ophthalmology and Visual Science
Volume: 59
Issue number: 9
Publication status: Published - Jul 2018