The Surveillance Solution, 5/26
a dangerous way to address crazed shooters
Red flags, police calls and electronic hints and giveaways were always conspicuous in hindsight. A decade ago it was plausible to argue, as some did, that algorithms would be too slow to yield relevant patterns and would cough up too many false positives. However, throwing away a decade is hardly a way to make progress on these challenges.
The real stumbling block is privacy risk. Privacy risk, let us notice, resides in who can see the data, not whether it exists, and in when and how it might be permissible to tie a potentially significant pattern to a named individual.
…A plausible solution would be wrapping the whole puzzle in a specialized legal process: The algorithms would be allowed to do their job; a judge’s permission would be required before a named person could be linked to an observed pattern so government officials could take steps. The opportunity exists whether we choose to take advantage of it or not, but history suggests that sooner or later we will take advantage of it.
Could incidents like the massacre in Buffalo be stopped by using surveillance technology to identify potential shooters? I have doubts. [This was written before the Texas school shooting, but the same math applies.]
When I taught statistics, I explained Bayes’ Theorem using the example of Saudi nationals and terrorists. Of the 19 men involved in the 9/11 attacks, 15 were Saudi nationals. So if a man was involved in the attack, there was a high probability that he was a Saudi national. But the inverse probability, that a given Saudi national was a terrorist, was vanishingly low.
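The asymmetry can be made concrete with a short calculation. The hijacker figures are the commonly cited ones (15 of the 19 were Saudi nationals); the population figure of roughly ten million adult male Saudi nationals is an illustrative round number, not a census statistic:

```python
# Inverse probabilities are not symmetric.
hijackers = 19
saudi_hijackers = 15
saudi_male_population = 10_000_000  # rough, illustrative figure

# P(Saudi | hijacker): high
p_saudi_given_hijacker = saudi_hijackers / hijackers
print(f"P(Saudi | hijacker) = {p_saudi_given_hijacker:.2f}")

# P(hijacker | Saudi): tiny, because the population is enormous
p_hijacker_given_saudi = saudi_hijackers / saudi_male_population
print(f"P(hijacker | Saudi) = {p_hijacker_given_saudi:.7f}")
```

The first probability comes out near 0.79; the second is on the order of one in a million. Conditioning direction is everything.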
Similarly, in a large proportion of terror killings these days, the killer reads and/or posts extremist views. But I suspect that a great many people read and/or post extremist views, and only a small fraction of them are killers. How useful is a “watch list” of tens of thousands of people?
To put this another way, it is “obvious in hindsight” that a given man was dangerous. But there may be tens of thousands of such people walking around today, and you only know at the last moment that they are about to go on a rampage.
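Bayes’ Theorem quantifies how bad this gets. All the numbers below are invented for illustration: a pool of 100,000 people who post extremist views, 10 of whom will actually attempt an attack, screened by an algorithm that is generously assumed to catch 99% of true attackers while wrongly flagging only 1% of everyone else:

```python
# Illustrative base-rate calculation with made-up numbers.
population = 100_000   # people who post extremist views (assumed)
attackers = 10         # of whom this many will attempt violence (assumed)
base_rate = attackers / population

sensitivity = 0.99          # P(flagged | attacker), optimistic assumption
false_positive_rate = 0.01  # P(flagged | non-attacker), optimistic assumption

# Total probability of being flagged:
# P(flag) = P(flag|attacker)P(attacker) + P(flag|not)P(not)
p_flagged = sensitivity * base_rate + false_positive_rate * (1 - base_rate)

# Bayes' Theorem: P(attacker | flagged)
p_attacker_given_flag = sensitivity * base_rate / p_flagged
flagged = p_flagged * population

print(f"P(attacker | flagged) = {p_attacker_given_flag:.4f}")
print(f"Expected people flagged: {flagged:.0f}")
```

Even with these optimistic accuracy assumptions, the algorithm flags roughly a thousand people, of whom fewer than 1% are actual attackers. That is the arithmetic behind “thousands tracked for every one who attempts murder.”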
For surveillance to work, you have to be willing to see thousands of people tracked for every one who actually attempts murder. And you will have to intervene every time the surveillance algorithm reveals a potential for the person to become violent. Think of “stop and frisk” amped up. Be prepared for disproportionate numbers of minority group members to be selected for questioning or temporary preventive detention or whatever it is you plan to do—including entrapment(?)—when an algorithm discovers a “ticking time bomb.”
I imagine that we are headed there, regardless. Concerning the potential for abuse of surveillance powers, I would not be satisfied with Jenkins’ approach of requiring a judge to grant approval. I would want a full-time, full-fledged audit of surveillance policies and practices. The audit agency should have a strong culture of non-partisanship and protection for civil liberties.
I am persuaded by David Brin’s The Transparent Society that we will not be able to get the government to refrain from engaging in surveillance. So the best we can hope for is that it takes place under an institutional framework that limits government power and preserves the right of dissent.