Sensitivity and Specificity Formulas:
Sensitivity (true positive rate) measures the proportion of actual positives correctly identified. Specificity (true negative rate) measures the proportion of actual negatives correctly identified. These metrics are fundamental in evaluating diagnostic tests.
The calculator uses these formulas:

Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)

Where:
TP = true positives, FN = false negatives, TN = true negatives, and FP = false positives (the four cells of the 2×2 contingency table).
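As a rough illustration only, the calculation reduces to two ratios. The Python function names below are hypothetical and not part of the calculator itself.

def sensitivity(tp, fn):
    # Proportion of actual positives the test correctly flags: TP / (TP + FN).
    return tp / (tp + fn)

def specificity(tn, fp):
    # Proportion of actual negatives the test correctly clears: TN / (TN + FP).
    return tn / (tn + fp)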
Explanation: Sensitivity indicates how good the test is at detecting true cases, while specificity indicates how good it is at excluding non-cases.
Details: These metrics are crucial for understanding diagnostic test performance. High-sensitivity tests are good for ruling out disease (SnOut: a sensitive test, when negative, rules out), while high-specificity tests are good for ruling in disease (SpIn: a specific test, when positive, rules in).
Tips: Enter the counts from your 2×2 contingency table. All values must be non-negative integers. Results are shown as percentages.
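A sketch of those input checks, reusing the hypothetical functions from the earlier example; the calculator's own validation may differ in detail.

def evaluate(tp, fn, tn, fp):
    cells = (tp, fn, tn, fp)
    # All four cells must be non-negative integers.
    if any(not isinstance(c, int) or c < 0 for c in cells):
        raise ValueError("Counts must be non-negative integers")
    # Report both metrics as percentages.
    return sensitivity(tp, fn) * 100, specificity(tn, fp) * 100

print(evaluate(tp=90, fn=10, tn=80, fp=20))  # (90.0, 80.0)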
Q1: What's the difference between sensitivity and PPV?
A: Sensitivity measures how well the test identifies true cases, while positive predictive value (PPV) tells you the probability that a positive test result is truly positive.
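To make the contrast concrete, PPV is computed from the other margin of the 2×2 table; a minimal sketch under the same assumptions as the earlier examples.

def ppv(tp, fp):
    # Probability that a positive result is a true positive: TP / (TP + FP).
    return tp / (tp + fp)

print(ppv(tp=90, fp=20))  # ~0.82, even though sensitivity in this example is 0.90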
Q2: Can sensitivity and specificity be high at the same time?
A: Ideally yes, but often there's a trade-off between them. Changing the test's threshold can increase one at the expense of the other.
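One way to see the trade-off is to sweep the decision threshold over continuous test scores; the scores and thresholds below are invented purely for illustration.

# Illustrative scores for diseased (positive) and healthy (negative) subjects.
positives = [0.9, 0.8, 0.7, 0.4]
negatives = [0.6, 0.3, 0.2, 0.1]

for threshold in (0.2, 0.5, 0.8):
    tp = sum(s >= threshold for s in positives)
    fn = len(positives) - tp
    tn = sum(s < threshold for s in negatives)
    fp = len(negatives) - tn
    # Raising the threshold lowers sensitivity and raises specificity.
    print(threshold, tp / (tp + fn), tn / (tn + fp))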
Q3: What is a good sensitivity value?
A: Generally >90% is excellent and 80-90% is good, but it depends on context. Screening tests often prioritize high sensitivity.
Q4: What is a good specificity value?
A: Like sensitivity, >90% is excellent. Confirmatory tests often prioritize high specificity.
Q5: How does prevalence affect these metrics?
A: Sensitivity and specificity are prevalence-independent, but predictive values (PPV/NPV) are affected by prevalence.
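A rough numeric illustration of that dependence, using the standard Bayes-style expression for PPV; the prevalence values are arbitrary examples.

def ppv_from_rates(sens, spec, prevalence):
    # P(disease | positive test) via Bayes' rule.
    p_positive_and_diseased = sens * prevalence
    p_positive_and_healthy = (1 - spec) * (1 - prevalence)
    return p_positive_and_diseased / (p_positive_and_diseased + p_positive_and_healthy)

# The same 90%-sensitive, 90%-specific test at two different prevalences.
print(ppv_from_rates(0.9, 0.9, 0.50))  # ~0.90
print(ppv_from_rates(0.9, 0.9, 0.01))  # ~0.08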