Comparison of meibomian gland loss area measurements between two computer programs and intra–inter-observer agreement

Manuel Garza-Leon*, Alejandra Gonzalez-Dibildox, Nallely Ramos-Betancourt, Everardo Hernandez-Quintela

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)

Abstract

Background: Meibography is a diagnostic test that allows in vivo evaluation of the meibomian glands (MG). It is currently unknown whether the two available computer programs are equivalent for evaluating the glandular loss area.

Methods: This was a prospective, longitudinal, observational study. Meibography photographs of healthy patients from the ocular surface clinic at the Destellos de Luz foundation were randomly selected. The upper eyelid images were taken with the Antares® meibography system (CSO, Florence, Italy) and were classified in five sessions, with one week between measurements, by an expert observer for each program, Phoenix (MAGL) and ImageJ (LAGB). The meibomian gland loss area was calculated semiautomatically with Phoenix and manually with ImageJ. Intra-observer agreement was assessed through the intra-class correlation coefficient and the mean of within-subject standard deviations. MG loss measured with the two computer programs was compared using a nonparametric test.

Results: Fifty-four images from x patients (n, 67.3% female) were analyzed. The limits of agreement between the two programs ranged from −18.55 to 9.14%. The mean MG loss area with ImageJ was 27.91 ± 14.82% (95% CI 23.87 to 31.96) for observer 1 and 29.05 ± 15.17% (95% CI 24.91 to 33.19) for observer 2. The mean MG loss area with Phoenix was 24.48 ± 13.97% (95% CI 20.67 to 28.29) for observer 1 and 24.93 ± 12.70% (95% CI 21.46 to 28.40) for observer 2.

Conclusions: Measurements of meibomian gland loss differed significantly between the two programs. Intra-observer and inter-observer repeatability were good, with no clinically or statistically significant differences.
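The abstract does not give the exact formulas behind the agreement statistics, so the following is only a minimal sketch of the standard computations such an analysis typically uses: Bland-Altman 95% limits of agreement between two methods and a two-way random-effects ICC(2,1). It is written in Python with NumPy; all variable names and the simulated percent-loss values are hypothetical illustrations, not data or code from the study.

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bias and 95% limits of agreement between two measurement methods
    (e.g. percent MG loss area from two programs)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is an (n subjects x k raters/sessions) array."""
    x = np.asarray(ratings, float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater (or per-session) means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-subjects MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-raters MS
    sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                         # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Illustrative use with simulated percent-loss values (not study data).
rng = np.random.default_rng(0)
truth = rng.uniform(5, 50, 54)                    # 54 images, as in the study
imagej = truth + rng.normal(0, 3, 54)
phoenix = truth - 3 + rng.normal(0, 3, 54)        # hypothetical systematic offset
print(bland_altman_limits(imagej, phoenix))       # bias, lower LoA, upper LoA
print(icc_2_1(np.column_stack([imagej, phoenix])))
```

A systematic offset between the two programs, as simulated above, widens the limits of agreement and lowers ICC(2,1) (absolute agreement) even when the measurements are highly correlated, which is why both statistics are reported.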

Original language: English
Pages (from-to): 1261-1267
Number of pages: 7
Journal: International Ophthalmology
Volume: 40
Issue number: 5
Early online date: 23 Jan 2020
DOIs
Publication status: Published - 1 May 2020

Bibliographical note

Publisher Copyright:
© 2020, Springer Nature B.V.

All Science Journal Classification (ASJC) codes

  • Ophthalmology
