The second sentence would be more defensible if the PCR assays had quantified false positive rates that were acceptably small relative to the anticipated infection prevalence.
Unfortunately, the false positive rates are not adequately quantified; they are thought to lie in a range whose lower bound is at least an order of magnitude greater than the anticipated infection rate. That makes relying on them, absent a clinical diagnosis of relevant symptoms, precarious at best and, at worst, very dubious.
Quite right.
This is from a study published in Brazil in August:
'The accuracy of the PCR test for coronavirus diagnosis can change according to the prevalence of the disease.
We can simulate 3 situations:
- With a prevalence of 50%, common among health professionals with respiratory symptoms, we found a post-test probability of 96%.
- With a prevalence of 20%, the post-test probability was 84%.
- With a prevalence of 5%, there is a 55% post-test probability.
We can interpret that when the test is applied in conditions of low prevalence of the disease, it allows a precise diagnosis in 55% of the cases.
Hypothetically, when carrying out a second consecutive test in the same patient, considering a prevalence of 96% (post-test probability of the first test with an initial prevalence of 50%), there is a post-test probability of approximately 100% (diagnostic accuracy).'
What they are saying is that, depending on prevalence, the post-test probability (the chance that a positive result reflects a true infection) ranges from 55% at low prevalence (e.g. when mass-testing the population at random) to 96% at high prevalence (e.g. when testing health workers or people with symptoms).
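The quoted figures can be reproduced with a straightforward Bayesian calculation. A minimal sketch (the study's exact sensitivity and specificity aren't given above; ~90% sensitivity and ~96% specificity are illustrative assumptions that land within a point or two of the quoted 96%/84%/55%):

```python
def post_test_probability(prevalence, sensitivity, specificity):
    # Bayes' theorem: P(infected | positive result)
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Illustrative assay characteristics (assumed, not stated in the study)
SENS, SPEC = 0.90, 0.96

for prevalence in (0.50, 0.20, 0.05):
    ppv = post_test_probability(prevalence, SENS, SPEC)
    print(f"prevalence {prevalence:.0%} -> post-test probability {ppv:.0%}")
```

The low-prevalence case shows why the false positive rate dominates: at 5% prevalence there are almost as many false positives (4% of the uninfected 95%) as true positives (90% of the infected 5%).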
This is in line with a similar point that you have raised before.
Also, interestingly, it seems that near-100% accuracy is achievable even in low-prevalence conditions with repeat testing - the NHS serological survey involves 5 consecutive weekly tests.
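The repeat-testing effect is just the same Bayesian update applied sequentially: each positive result's post-test probability becomes the prior for the next test. A sketch using the same illustrative 90%/96% assumptions, and crucially assuming the tests' errors are independent (a correlated error, e.g. lab contamination, would not wash out this way):

```python
def post_test_probability(prevalence, sensitivity, specificity):
    # Bayes' theorem: P(infected | positive result)
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp)

# Start from a low 5% prevalence and chain three consecutive
# positive results, feeding each posterior in as the next prior.
p = 0.05
for test_number in range(1, 4):
    p = post_test_probability(p, 0.90, 0.96)
    print(f"after positive test {test_number}: {p:.1%}")
```

With these assumed inputs the probability climbs from roughly 54% after one positive to well above 99% after three, which matches the quoted study's point that a second test on a ~96% prior gives "approximately 100%".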