| Article ID | Journal | Published Year | Pages | File Type |
| --- | --- | --- | --- | --- |
| 6195831 | American Journal of Ophthalmology | 2014 | 6 Pages | |
**Purpose:** To determine the interobserver and intraobserver reliability of 4 clinical grading systems for corneal staining.

**Design:** Retrospective, observational study.

**Methods:** One hundred twenty-two photographs of corneal erosions from various ocular surface diseases were graded by 11 ophthalmologists. Each image was graded with 4 grading systems: the Oxford scheme, the National Eye Institute-recommended system, the area-density combination index, and the Sjögren's International Collaborative Clinical Alliance ocular staining score. Grading was repeated after 1 week to evaluate repeatability. Interobserver and intraobserver reliability were evaluated using intraclass correlation coefficients (ICCs). To determine the degree of agreement as a function of corneal staining severity, the relationship between the variance and the score under each grading system was evaluated with linear regression.

**Results:** Interobserver reliability for the 4 grading systems was excellent, with ICCs ranging from 0.981 to 0.991. Intraobserver repeatability of the 4 grading systems also was excellent, with ICCs ranging from 0.939 to 0.998. The National Eye Institute-recommended system showed the best reliability and repeatability. There was no definite correlation between variance and score in the Oxford scheme (Y = 0.006X + 0.284; R² = 0.002) or the Sjögren's International Collaborative Clinical Alliance ocular staining score grading system (Y = −0.068X + 0.595; R² = 0.109). However, there was a significant correlation between variance and score in the National Eye Institute-recommended system (Y = 0.210X + 0.965; R² = 0.144) and in the area-density combination index (Y = 0.187X + 0.279; R² = 0.178); the variance increased with the corneal staining score.

**Conclusions:** The 4 grading systems may be useful for evaluating corneal staining independent of disease condition and grader.