
Welcome to question of the day #22

How can reliability indices help me differentiate between a meaningful visual field defect and an artefact?

It can be very difficult to work out whether missed points on automated computerised visual field testing are due to eye and/or pathway disease or because the patient couldn't perform the test; that is, they were unreliable. We need to decide whether the results are meaningful (disease related) or artefactual (not disease related).

Part of the protocol I use is to scrutinise the reliability indices (also referred to as performance indicators) that automated computerised testing provides. These tell me whether or not the patient could do the test.

These are fixation losses, false positive errors and false negative errors. They can also say something about the reliability of the person giving instructions to the patient: inadequate instructions will lead to poor reliability indices. At the end of the test, the reliability indices should be looked at first, because poor performance invalidates the other findings.

Fixation losses: the manual of a commonly used instrument states: 'When the fixation monitoring test parameter is set to the blind spot (Heijl-Krakau) mode, proper fixation is checked by projecting 5% of stimuli at the presumed location of the physiological blind spot. Only if the patient indicates seeing the blind spot check stimulus will the instrument record a fixation loss. A high fixation loss score indicates that the patient did not fixate well during the test, or that the blind spot was located incorrectly.' This means the software is set up to present some stimuli onto the optic nerve head. As there are no photoreceptors there, the patient should not see those stimuli. If they press the response button, the stimulus intended for the optic nerve head must instead have landed on functioning retina, as they could not have seen it otherwise; the blind spot is therefore not where the instrument expects it to be, and the patient is not fixating the central target with their fovea. In other words, they aren't looking where they have been instructed to look. Fixation losses exceeding 20% suggest the other results are unreliable. A fixation loss score of 30% means that if fixation was checked 10 times during the test, the patient was found not to be fixating the central target three times.
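To make the arithmetic concrete, here is a minimal sketch in Python (not the instrument's own software; the counts are invented purely for illustration) of how a fixation loss percentage falls out of the Heijl-Krakau blind spot checks:

```python
# Hypothetical illustration of the Heijl-Krakau fixation loss calculation.
# The counts are invented; the real instrument records these automatically.

blind_spot_checks = 10    # stimuli projected at the presumed blind spot (~5% of presentations)
responses_to_checks = 3   # times the patient pressed the button during those checks

fixation_loss_rate = responses_to_checks / blind_spot_checks
print(f"Fixation losses: {responses_to_checks}/{blind_spot_checks} = {fixation_loss_rate:.0%}")
# Output: Fixation losses: 3/10 = 30%  (above the 20% level at which the result is suspect)
```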

False positive errors: the software is set up so as to occasionally pretend to present a stimulus; the instrument goes through the process of a presentation but no stimulus is actually shown. Some patients press the response button even though nothing was presented, or they respond faster than is humanly possible; these responses are recorded as false positive errors. False positive errors exceeding 15% suggest all the other results are unreliable. The patient may be anxious about missing targets and will need to be re-instructed and reassured that it is normal for some stimuli to be missed.

False negative errors: locations in the field are tested with a stimulus and later retested with a brighter stimulus than the first. A false negative error is recorded when the patient fails to respond to the brighter stimulus at a location where a dimmer one was previously seen. This happens when the patient is fatigued, inattentive, deliberately not pressing in order to feign a visual field defect (often with illicit financial gain in mind), or has genuine significant visual field loss. Glaucoma itself can cause this type of short term fluctuation. There is consensus that false negative errors exceeding 30% suggest all the other results are unreliable.
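Putting the three cut-offs quoted above together, a minimal Python sketch of this kind of first-pass reliability check might look like the following; the function name and structure are hypothetical, and the 20%, 15% and 30% thresholds are simply those stated in the text.

```python
# A sketch of a first-pass reliability check, using the cut-offs quoted in the text.
# This is illustrative only, not part of any perimeter's software.

def flag_unreliable(fixation_losses: float,
                    false_positives: float,
                    false_negatives: float) -> list[str]:
    """Return a list of reliability warnings; an empty list means the indices look acceptable."""
    warnings = []
    if fixation_losses > 0.20:
        warnings.append("Fixation losses > 20%: patient may not have fixated the central target")
    if false_positives > 0.15:
        warnings.append("False positives > 15%: pressing without a stimulus; re-instruct and reassure")
    if false_negatives > 0.30:
        warnings.append("False negatives > 30%: fatigue, inattention, feigning, or genuine field loss")
    return warnings

# Example: 10% fixation losses, 20% false positives, 5% false negatives
print(flag_unreliable(0.10, 0.20, 0.05))
```

An empty list means the indices look acceptable and the rest of the result is worth interpreting; any warning means the field should be treated with caution.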

I used to find that I could never remember the difference between false positive errors and false negative errors. Now I simply think: false (p)ositive, 'p' is for 'presser'.

Breaking it down makes it all the more palatable.