Thursday, June 09, 2005

A Computer Scientist Reads the Newspaper

A dinner conversation with a colleague recently took a convoluted route from a challenge to "Name 10 famous (living) economists" through "A Random Walk Down Wall Street" and thence to "A Mathematician Reads the Newspaper".

I read Innumeracy a long time ago and enjoyed it, and realized that I don't remember reading AMRTN. I shall definitely seek out a copy. I was reminded of my frustration with most newspaper reporting of mathematical and scientific issues, especially those to do with risk, probability and chance, since these most often appear in the news sections. Recently, there has been discussion in the UK about introducing an identity card with biometric information. One issue is what form this information would take -- fingerprints, facial scans, or iris recognition. A much repeated figure is that the best of these -- iris recognition -- achieves only 96% accuracy.

But is this 96% accuracy at identification (i.e., looking you up in a database based on an iris scan) or verification (checking that you are the same person as the owner of your card)? Is this 96% over all people on a single trial, or over multiple trials (say, three attempts)? Under what conditions are these trials conducted -- ideal or realistic? Are there 4% of people for whom this procedure will never work, or can each trial be treated as independent with a 4% probability of failure, so that with enough repetitions the probability of failure becomes virtually zero?
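The last distinction matters enormously. A minimal sketch, under the (unverified) assumption that "96% accuracy" means each attempt fails independently with probability 0.04:

```python
# If each scan attempt fails independently with probability 0.04
# (an assumption -- the news reports don't say this), repetition
# drives the overall failure probability down geometrically.
# If instead 4% of people can *never* be recognized, repetition
# helps them not at all.
p_fail = 0.04
for trials in (1, 2, 3, 5):
    print(f"{trials} attempt(s): failure probability = {p_fail ** trials:.2e}")
```

Three independent attempts already push the failure probability below one in ten thousand; a fixed 4% of permanently unrecognizable people stays at 4% forever. The headline figure cannot distinguish these two worlds.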

Without this information, it's impossible to gauge what the figure means. One can only guess at the implications, and assume that 96% accuracy is quantifiably better than 50% accuracy [for that matter, is 50% accuracy equivalent to tossing a coin?]. Even the concept of distinguishing between false positives and false negatives seems too subtle for most newspaper discussions, yet this is exactly the level of information on which policy makers decide what to implement.
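To see why a single "accuracy" number hides this distinction, here is a toy illustration with entirely made-up numbers: two very different error rates can collapse into the same 96% headline figure.

```python
# Hypothetical verification trial: 1000 genuine cardholders and
# 1000 impostors. The counts below are invented for illustration.
genuine, impostors = 1000, 1000
false_negatives = 30   # genuine holders wrongly rejected (hypothetical)
false_positives = 50   # impostors wrongly accepted (hypothetical)

correct = (genuine - false_negatives) + (impostors - false_positives)
accuracy = correct / (genuine + impostors)
fnr = false_negatives / genuine     # how often real cardholders are turned away
fpr = false_positives / impostors   # how often impostors slip through
print(f"accuracy = {accuracy:.1%}, FNR = {fnr:.1%}, FPR = {fpr:.1%}")
```

The overall accuracy here is 96%, yet impostors are accepted at a 5% rate -- and for a security application, it is the false positive rate, not the blended accuracy, that should drive policy.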

I found one article which discussed this study in much greater detail, but even after reading it I was still no wiser about exactly how one measures "success rate".

This affects me as a computer scientist, since I often work in the realm of randomized algorithms for problems where exact algorithms provably cannot exist without using dramatically more resources. In some areas, even within the discipline, one encounters a certain resistance to randomization, and the attitude that a deterministic algorithm is always preferable to a randomized one. If people had a more intuitive understanding of chance and probability ("there is a greater chance of a gamma ray striking the processor and causing the wrong answer to be produced than the algorithm making a mistake") then our lives would be better. Probably. Wanna bet on that...?
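A classic example of the kind of tradeoff I mean (my choice of illustration, not something discussed above) is Freivalds' algorithm: it checks whether A·B = C using only matrix-vector products, and each independent trial errs with probability at most 1/2, so k trials err with probability at most 2^-k -- astronomically small for modest k.

```python
import random

def matvec(M, v):
    # Plain matrix-vector product: (M v)_i = sum_j M[i][j] * v[j].
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def freivalds(A, B, C, k=30):
    """Test whether A*B == C, erring with probability <= 2**-k."""
    n = len(A)
    for _ in range(k):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compare A*(B*r) with C*r: three O(n^2) products instead of
        # one O(n^3) matrix multiplication.
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False   # a witness was found: certainly A*B != C
    return True            # A*B == C with probability >= 1 - 2**-k

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]   # the true product A*B
print(freivalds(A, B, C))                      # True (never errs on equality... wait, it can, but only by 2^-30)
print(freivalds(A, B, [[19, 22], [43, 51]]))   # False, except with probability <= 2^-30
```

With k = 30 the error probability is below one in a billion -- which is exactly the point of the gamma-ray quip: the "unreliable" randomized verifier is, in practice, more trustworthy than the hardware it runs on.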