Facial recognition software

Volume 14, Issue 30; 10 Aug 2011

Some innocent person is going to be arrested, tried, and convicted because facial recognition software compares a photo with a grainy bit of surveillance footage and asserts a match.

Danny’s post this morning reminded me of something I meant to post last time facial recognition software was in the news.

Since you can't claim to have predicted something that you know for a fact has already happened (well, you can, but you have to expect some amount of disbelief in your predictive powers), I'm going to write this now, before I know it has happened. (That said, I won't be surprised if someone adds a comment pointing to proof that it has.)

Prediction: some day, some poor soul is going to get snatched up by the police, falsely accused, put on trial, and convicted of something they didn't do. This is going to happen largely because some company's facial recognition software compares a reference photo with a grainy, blurry bit of surveillance footage and claims a match.

This isn't going to happen because widespread surveillance is a bad thing (though I think it is) or because facial recognition software is a bad thing (though I'm highly suspicious of the companies selling it, because their motivation is profit, not truth). It's going to happen because most people put unjustified faith in computers. I have first-hand, albeit anecdotal, experience to support this assertion:

My father taught high school. He kept a grade book. In that book (a paper artifact), he recorded each student's performance on each assignment. At the end of the grading period, he computed the average performance of each student and assigned a grade.

Invariably, there'd be six or seven students in each class that argued about the grade they'd been assigned: “please, Mr. Walsh, couldn't you make that 89.2 an A?” or “I'll do better next term, can't you make that 74 a C+ just this once?”

A time-consuming and annoying occurrence, to be sure.

Then, in about my sophomore year, he switched to using a spreadsheet instead of a grade book. I set it up to do the averages automatically and even worked out how to make Multiplan print an appropriate letter grade.

In theory, there was nothing different about this process except that the end-of-term grade book was a computer printout. Students with 89's and 74's were just as motivated for grade inflation and my father was no less capable of assigning a different mark on their report card.

In practice, all argument stopped. Students accepted their grades for no better reason than “the computer said so”.

Governments (national and local) are motivated to demonstrate that they're protecting the populace. Law enforcement is motivated to demonstrate convictions. Software vendors are motivated to demonstrate successes. And most people believe computers.

That is a scary combination.


The computer bit is relevant. More relevant, though, is people's (jurors', particularly) lack of understanding of Bayes' theorem.

Prosecution: there's only a 1:10,000 chance that this facial recognition software will match against a randomly selected person. Defence (if they're adept enough to explain this to the numerically challenged): in a population of 60 million (UK), the software would match about 6,000 innocent people, so there's only a 1:6000 chance my client is guilty.
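[A back-of-the-envelope sketch of the base-rate arithmetic in the paragraph above. The numbers are the commenter's hypothetical figures, not real data about any actual facial recognition system.]

```python
# Hypothetical figures from the comment: a 1-in-10,000 false match
# rate, applied to a population of 60 million (UK).
false_match_rate = 1 / 10_000
population = 60_000_000

# Expected number of innocent people the software would match.
expected_false_matches = population * false_match_rate  # 6000.0

# If the defendant was selected only because the software flagged
# them, they are one of roughly 6,000 matches, so the chance that
# this particular match is the guilty party is about 1 in 6,000 --
# not the 9,999-in-10,000 figure the prosecution's framing implies.
p_guilty_given_match = 1 / expected_false_matches

print(expected_false_matches)
print(p_guilty_given_match)
```

This is the classic base-rate fallacy: the 1:10,000 figure is the probability of a match given innocence, not the probability of innocence given a match.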

Using the facial recognition software to target raids on likely people's houses, and then finding glass dust in their boots matching a window broken in the riots and five PlayStations with the same batch number as those looted from a shop, is one thing. Using it simply to prosecute random individuals is another.

—Posted by Ed Davies on 10 Aug 2011 @ 03:46 UTC #

This is a very safe prediction. It's already happened many times with fingerprints.

—Posted by Al Lang on 11 Aug 2011 @ 11:37 UTC #

Ed Davies: The defense argument would be sound only if the defendant had in fact been picked at random.

—Posted by John Cowan on 25 Aug 2011 @ 09:48 UTC #