To give the right answer to an important question.

Photo by Willian Justen de Vasconcellos

Thinking straight about statistics, and health statistics in particular, is not an easy task.

A large proportion of women worldwide participate in mammography screening. For a patient, it's hard to keep a cool head in the face of a positive result. A key health statistic that every medical practitioner needs to know how to estimate is: What is the probability that the woman who has just tested positive has breast cancer?

Let us start with definitions before we can look at some calculations ...

Concepts in screening tests

Diagnostic tests are used to detect or rule out medical conditions. Most diseases have a gold standard diagnostic test, which is used to establish a diagnosis. This concept has some limitations, but let us assume that such a gold standard does exist and is well defined for breast cancer.

Assessment of test performance is usually presented in a two by two table. Columns summarise the disease status (gold standard) and the rows summarise the screening test results. Each mammography screening test can now be classified into one of four categories:

                 Patients with disease          Patients without disease
Positive test    True Positive (TP)             False Positive (FP)             Total positive tests (TP + FP)
Negative test    False Negative (FN)            True Negative (TN)              Total negative tests (FN + TN)
                 Total patients with disease    Total patients without disease
                 (TP + FN)                      (FP + TN)
Correct test results

True Positive - A diseased person who is correctly identified as having a disease.
True Negative - A healthy person who is correctly identified as healthy.

Incorrect test results

False Positive - A healthy person who is incorrectly identified as having the disease.
False Negative - A diseased person who is incorrectly identified as being healthy.

Ideally, a test would correctly identify all patients with the disease and, similarly, correctly identify all patients who are disease free. In practice, this is impossible.

Conditional probabilities

Conditional probabilities are essential in the interpretation of diagnostic tests because the test results influence our understanding of whether the patient has a disease. The conditional probabilities that we need to understand are sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV).

Sensitivity - The proportion of people with the disease who will have a positive result. In other words, the ability to correctly identify people who have the disease.

$ \small \mathsf{\text{Sensitivity} = P(\text{Positive test | Breast cancer}) = \frac{\text{TP}}{\text{TP + FN}}}$

A test with 85% sensitivity will identify 85% of patients who have the disease, but will miss 15% of patients who have the disease.

Specificity - The proportion of people without the disease who will have a negative result. In other words, when a test does a good job of ruling out people who don’t have the disease.

$\small \mathsf{\text{Specificity} = P(\text{Negative test | No breast cancer}) = \frac{\text{TN}}{\text{FP + TN}}}$

A test with 90% specificity will correctly return a negative result for 90% of people who don’t have the disease, but will return a positive result — a false-positive — for 10% of the people who don’t have the disease and should have tested negative.
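To make these two definitions concrete, here is a minimal Python sketch. The counts are illustrative (my own choice, corresponding to screening 10,000 women with 1% prevalence, 85% sensitivity, and 90% specificity), not data from a real trial:

```python
# Illustrative 2x2 counts: 10,000 women screened, 1% prevalence.
TP, FN = 85, 15        # women with breast cancer: positive / negative tests
FP, TN = 990, 8910     # women without breast cancer: positive / negative tests

sensitivity = TP / (TP + FN)   # P(positive test | breast cancer)
specificity = TN / (FP + TN)   # P(negative test | no breast cancer)

print(f"Sensitivity: {sensitivity:.0%}")  # Sensitivity: 85%
print(f"Specificity: {specificity:.0%}")  # Specificity: 90%
```

Note that both quantities condition on the disease status (the columns of the table), not on the test result.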

I hope this image from Wikipedia can bring some additional clarity.

Sensitivity and Specificity

It’s essential to understand that sensitivity and specificity exist in a state of balance. For a test to be accurate, both sensitivity and specificity should be high. However, increased sensitivity usually comes at the expense of reduced specificity.

Sensitivity versus Specificity

Positive predictive value (PPV) - The proportion of people with a positive test result who actually have the disease. This is actually the question we are interested in trying to answer.

$ \small \mathsf{\text{PPV} = P(\text{Breast cancer | Positive test}) = \frac{\text{TP}}{\text{TP + FP}}}$

One important thing to understand is that conditional probabilities are not reciprocal. This means that sensitivity does not equal PPV.

$ \small \mathsf{ P(\text{Positive test | Breast cancer}) ≠ P(\text{Breast cancer | Positive test}) }$

Negative predictive value (NPV) - The proportion of people with a negative test result who do not have the disease.

$ \small \mathsf{\text{NPV} = P(\text{No breast cancer | Negative test}) = \frac{\text{TN}}{\text{FN + TN}}}$
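Using the same kind of illustrative 2x2 counts (10,000 women, 1% prevalence, 85% sensitivity, 90% specificity), the predictive values, and the fact that they differ from sensitivity, can be sketched as:

```python
# Illustrative 2x2 counts: 10,000 women screened, 1% prevalence.
TP, FN = 85, 15        # women with breast cancer
FP, TN = 990, 8910     # women without breast cancer

ppv = TP / (TP + FP)   # P(breast cancer | positive test)
npv = TN / (FN + TN)   # P(no breast cancer | negative test)
sensitivity = TP / (TP + FN)

print(f"PPV: {ppv:.1%}")    # PPV: 7.9%
print(f"NPV: {npv:.1%}")    # NPV: 99.8%
# Conditional probabilities are not reciprocal:
print(sensitivity == ppv)   # False
```

The predictive values condition on the test result (the rows of the table), which is why they depend so strongly on prevalence.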

Back to our patient ...

Let us assume that we have the following information:

  • The probability that a woman has breast cancer is 1% (prevalence)
  • If a woman has breast cancer, the probability that she tests positive is 85% (Sensitivity)
  • If a woman does not have cancer then there is a 90% chance the test will be negative (Specificity)

A tree diagram describing the outcomes of a mammography test.
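The same tree can be built with natural frequencies. A small sketch, assuming a round cohort of 10,000 women (the cohort size is my choice for illustration; the three probabilities are the ones listed above):

```python
# Natural-frequency tree: 10,000 women, prevalence 1%,
# sensitivity 85%, specificity 90%.
women = 10_000

with_cancer = round(women * 0.01)                     # 100 have breast cancer
without_cancer = women - with_cancer                  # 9,900 are healthy

true_positive = round(with_cancer * 0.85)             # 85 test positive
false_negative = with_cancer - true_positive          # 15 test negative
false_positive = round(without_cancer * (1 - 0.90))   # 990 test positive
true_negative = without_cancer - false_positive       # 8,910 test negative

print(f"TP={true_positive}, FN={false_negative}, "
      f"FP={false_positive}, TN={true_negative}")
# TP=85, FN=15, FP=990, TN=8910
```

These four counts are the leaves at the bottom of the tree diagram.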

The right answer

The question we are trying to answer is: What is the probability that the woman who has just tested positive has breast cancer?

To make the calculation, we can use the frequencies at the bottom of the tree diagram above.

$ \small \mathsf{P(\text{Breast cancer | Positive test}) = \frac{\text{Positive tests for women with breast cancer}}{\text{Total number of positive tests}} = \frac{\text{True Positive}}{\text{True Positive + False Positive}} = \frac{85}{85 + 990} \approx 7.9\%} $

The probability of having breast cancer given a positive test is approximately 8%.
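The same result follows from Bayes' theorem applied directly to the three probabilities, without counting women. A minimal sketch:

```python
# Bayes' theorem with the article's numbers.
prevalence = 0.01    # P(breast cancer)
sensitivity = 0.85   # P(positive test | breast cancer)
specificity = 0.90   # P(negative test | no breast cancer)

# Total probability of a positive test (true positives + false positives).
p_positive = (sensitivity * prevalence
              + (1 - specificity) * (1 - prevalence))

ppv = sensitivity * prevalence / p_positive

print(f"P(breast cancer | positive test) = {ppv:.1%}")
# P(breast cancer | positive test) = 7.9%
```

The denominator is exactly the two positive branches of the tree diagram, expressed as probabilities instead of counts.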

Can this be right?

An 85% sensitivity of mammography for the detection of breast cancer sounds pretty good. It gives a perception of precision. So why is the probability that a woman who tests positive actually has breast cancer only 8%, even though the test is 85% sensitive? The answer lies in the low prevalence of the disease: only 1% of the women screened have breast cancer, so the false positives coming from the 99% who are healthy (990 women) vastly outnumber the true positives (85 women).
The goal of mammography is the early detection of breast cancer. The earlier breast cancer is detected, the better the survival.

To be clear: I am by no means against mammography screening. However, we need to balance the "earlier diagnosis and treatment, and better survival" aspects of screening against the high rate of false alarms and the impact of overdiagnosis. Here are some points that need to be considered.

Points to consider

  • I always thought that cancer takes many years to decades to form before it becomes invasive and spreads to other parts of the body. A recent study1 has shown that a tumour can spread in some patients at a very early phase of its development. This calls into question our belief that early diagnosis of cancer will always improve outcomes.

  • A false-positive mammogram can result in anxiety, distress, and increased perceptions of breast cancer risk. Furthermore, women who had a false positive result are more likely to put off their next scheduled mammogram.2 The risk of having a false positive result after one mammogram ranges from 7-12%, depending on a woman's age.3 It is also known that younger women are more likely to have a false positive result.4
    Positive diagnoses need to be treated with caution, and drastic action should not be taken on the results of a screening diagnosis alone.

  • Overdiagnosis is the diagnosis of "disease" that will never cause any symptoms or problems during a patient's ordinarily expected lifetime. Overdiagnosis is one of the most common and unavoidable consequences of screening and early detection of any disease.5 Many of us live with slow-growing cancers that will never cause symptoms and don't need to be treated. A systematic review and meta-analysis of autopsy studies6 suggests that screening programs should be cautious about introducing more sensitive tests that may increase the detection of these lesions. It can be harmful if it leads to psychological stress and unnecessary treatments.

  • The media often mention risk when reporting on research, but this can sometimes be misleading. Absolute risk is a person's chance of developing a certain disease over a certain period of time. If we followed 100 000 women aged 30-34 for one year, about 30 would develop breast cancer. On average, one in eight (about 12.5%) women will develop breast cancer in their lifetime. That does not mean that every woman has a 12.5% risk of developing breast cancer. Breast cancer risk may be higher or lower, depending on a number of factors, both genetic and environmental. For example, we know that the absolute risk of breast cancer is much higher for women who have inherited mutations in the genes known as BRCA1 or BRCA2.7

  • In statistics, we often use five-year survival as a measure of screening effectiveness. If we include many patients with overdiagnosed cancers that would never have become harmful, the five-year survival is obviously inflated, and the observed figure becomes misleading. If not well understood, this phenomenon encourages more screening and consequently even more overdiagnosis. This creates a self-affirming positive cycle8, which in this case is not necessarily a positive thing.

    For instance, eighteen years ago, the 5-year relative survival for women with breast cancer aged 30-89 years was 84.7%. Today the figure is up to 89.1%, according to Cancer i siffror 2018, published by The Swedish Cancer Society (Cancerfonden). The interesting question here would be: How much of the observed increase in survival can be attributed to overdiagnosis? Unfortunately, I do not have an answer.

  • Cost is not something I usually like to talk about, because I do get into a dilemma with my moral principles. The total cost of screening in the U.S. alone is around $8 billion per year.9 In a cost-effectiveness study10, researchers concluded that not offering screening to women at low risk could:
    • improve the cost-effectiveness of the screening program,
    • reduce the cost of the program,
    • reduce overdiagnosis,

without compromising the quality-adjusted life-years gained or the reduction in breast cancer deaths. This takes me to the last point of this discussion.

  • The future of health care. I am very much enjoying every page of Eric Topol's newest book Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Machine learning will transform (and is already transforming) medical research. One way forward is through collaboration between medical practitioners and AI researchers. When it comes to mammography screening, I strongly believe in the development of personalised risk assessments that could be used to identify higher-risk patients.


Final word

"If you're called back to check an abnormal finding on your mammogram, try not to panic. False positive results are common. Most women who are called back don't have breast cancer."


Leyla Nunez

