Oct 02, 2015
By Maher M. El-Masri, RN, PhD

Terminology 101: Screening tests and sensitivity

Sensitivity: A calculation that determines the proportion of patients who have the disease who are correctly classified as such by the screening test

Source: Webb, P., & Bain, C. (2011). Essential Epidemiology: An Introduction for Students and Health Professionals. (2nd ed.). Cambridge, UK: Cambridge University Press

Sensitivity is an important measure used to judge the performance of a screening test against a gold-standard test. It refers to the ability of the test to correctly identify people who have the disease in question. A test’s sensitivity score answers this question: what proportion of patients with the disease correctly receive a positive test result? Cases in which the test result is positive and the patient has the disease are referred to as true positives (TP). Cases in which the test produces a negative result for a person with the disease are referred to as false negatives (FN). A highly sensitive screening test produces very few FN classifications.

While clinicians would ideally use highly sensitive screening tests exclusively, this might not always be feasible. The minimum value deemed to be acceptable for a test’s sensitivity will vary depending on the nature of the health condition under screening. To illustrate this point, let us consider the example of blood donation. We routinely screen blood donors for blood-borne pathogens to protect potential blood recipients from receiving contaminated blood. It is extremely important that the blood screening test identify as many donors with infected blood as possible (100% sensitivity would be ideal) to minimize the risk of mistakenly producing a negative result for individuals with a blood-borne disease. This is because the consequences associated with discarding an uncontaminated blood unit that has been mistakenly classified as positive (false positive, or FP) are far less severe than the consequences of giving a patient a contaminated blood unit that has been mistakenly classified by the test as negative (FN).

How is sensitivity calculated? Using the above-mentioned blood donation scenario, let us say that we have employed a gold-standard test to determine that the blood from 100 potential donors is contaminated with blood-borne pathogens. Next, we try using a more cost-effective alternative screening test and find that it correctly identifies the blood of 97 of these donors as positive (TP) and misclassifies the blood from the remaining three donors as negative (FN). The sensitivity score of the test is calculated by dividing the TP count by the sum of the TP and FN counts (TP/[TP+FN]). The score is 97 per cent, indicating a sound sensitivity. If the test had misclassified the blood from 30 of the donors as negative, however, the sensitivity score would have been 70/(70+30) or 70 per cent. Such a score would be troubling, given the seriousness of the consequences of administering contaminated blood.
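For readers who like to see the arithmetic spelled out, the calculation above can be sketched as a short Python snippet. The function name and the donor counts are simply the hypothetical figures from the example, not part of any standard library:

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Proportion of diseased patients correctly flagged positive:
    sensitivity = TP / (TP + FN)."""
    return true_positives / (true_positives + false_negatives)

# 100 donors known (via the gold-standard test) to have contaminated blood.
# The screening test flags 97 as positive (TP) and misses 3 (FN):
print(sensitivity(97, 3))   # 0.97, i.e. 97 per cent

# A weaker test that misses 30 of the 100 contaminated units:
print(sensitivity(70, 30))  # 0.70, i.e. 70 per cent
```

Note that the denominator is all patients who truly have the disease (TP + FN); false positives do not enter into sensitivity at all, which is why a test can be highly sensitive yet still produce many FP results.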

On the other hand, a sensitivity score of 75 per cent for a hearing loss screening test among school children may be deemed reasonable given that the consequences associated with missing a hearing loss diagnosis are less serious.

NurseONE.ca resources on this topic

ProQuest ebrary

  • Dunleavey, R. (2009). Cervical Cancer: A Guide for Nurses.
  • Dunning, T. (2009). Care of People with Diabetes: A Manual of Nursing Practice (3rd ed.).
  • Institute of Medicine. (2007). Cancer-related Genetic Testing and Counseling: Workshop Proceedings.
  • Institute of Medicine. (2008). Implementing Colorectal Cancer Screening: Workshop Summary.

Maher M. El-Masri, RN, PhD, is a full professor and research chair in the faculty of nursing, University of Windsor, in Windsor, Ont.
