I put together a little spreadsheet when Covid was first entering our awareness, in early 2020. Link to spreadsheet model (it may open in a Google app; if so, there will be an "open" button somewhere that allows it to be opened in Excel, which may be better).
There was an oft-repeated claim that the tests, though slow to come out, were highly accurate; the figure quoted was often 99%, which is indeed very high. However, having encountered Bayesian revision earlier in my career, I wanted to see what this really means for accuracy in the field. Thus the spreadsheet. I pulled it up again recently, and though the early Covid testing issues are behind us, I thought it might still be interesting to some people, so I updated it as a learning tool--people are more interested in testing accuracy than they ever were before.
There are three inputs. The first two are what often passes for the "accuracy" of the test: the sensitivity (its probability of correctly detecting the disease in those known to have it) and the specificity (its probability of correctly reporting the absence of disease in those known not to have it). The third input is the prevalence of the disease in the tested population, as a percentage.
What we're interested in is the likelihood of true and false positive test results, and of true and false negative results: a more complete assessment of accuracy.