Red flags, police calls and electronic hints and giveaways were always conspicuous in hindsight. A decade ago it was plausible to argue, as some did, that algorithms would be too slow to yield relevant patterns and would cough up too many false positives. However, throwing away a decade is hardly a way to make progress on these challenges.
The real stumbling block is privacy risk. Privacy risk, let us notice, resides in who can see the data, not whether it exists, and in when and how it might be permissible to tie a potentially significant pattern to a named individual.
…A plausible solution would be wrapping the whole puzzle in a specialized legal process: The algorithms would be allowed to do their job; a judge’s permission would be required before a named person could be linked to an observed pattern so government officials could take steps. The opportunity exists whether we choose to take advantage of it or not, but history suggests that sooner or later we will take advantage of it.
Could incidents like the massacre in Buffalo be stopped by using surveillance technology to identify potential shooters? I have doubts. [This was written before the Texas school shooting, but the same math applies.]
When I taught statistics, I explained Bayes’ Theorem using the example of Saudi nationals and terrorists. As I recall, 15 of the 19 men involved in the 9/11 attacks were Saudi nationals. So if a man was involved in the attack, there was a high probability that he was a Saudi national. But the inverse probability, that if a man was a Saudi national he was a terrorist, was very low.
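To make the asymmetry concrete, here is a minimal sketch in Python. The hijacker counts are the ones cited above; the Saudi population figure is a made-up order of magnitude, used only to show scale, not a real statistic.

```python
# Forward vs. inverse probability, with rough illustrative numbers.

hijackers = 19
saudi_hijackers = 15            # roughly 15 of the 19 were Saudi nationals
saudi_adult_men = 5_000_000     # assumption: order-of-magnitude guess, illustration only

# "Forward" probability: given that a man was a hijacker, was he Saudi?
p_saudi_given_hijacker = saudi_hijackers / hijackers        # ~0.79

# "Inverse" probability: given that a man was Saudi, was he a hijacker?
p_hijacker_given_saudi = saudi_hijackers / saudi_adult_men  # ~0.000003

print(f"P(Saudi | hijacker) = {p_saudi_given_hijacker:.2f}")
print(f"P(hijacker | Saudi) = {p_hijacker_given_saudi:.7f}")
```

The exact population number does not matter; the forward probability is large and the inverse probability is tiny no matter how you round it.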
Similarly, in a large proportion of terror killings these days, the killer reads and/or posts extremist views. But I suspect that there are a lot of people who read and/or post extremist views, and only a small fraction of them are killers. How useful is a “watch list” of tens of thousands of people?
To put this another way, it is “obvious in hindsight” that a given man was dangerous. But there may be tens of thousands of such people walking around today, and you only know at the last moment that they are about to go on a rampage.
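As a rough illustration of why such a list is hard to act on, here is a back-of-the-envelope sketch; both numbers are assumptions chosen only for scale.

```python
# Hypothetical watch-list math: many flagged, very few actual attackers.

watch_list_size = 50_000     # assumption: people flagged for extremist posts
attackers_per_year = 5       # assumption: how many of them actually attack

hit_rate = attackers_per_year / watch_list_size
tracked_per_attacker = watch_list_size / attackers_per_year

print(f"Hit rate: {hit_rate:.4%}")                             # ~0.01%
print(f"People tracked per actual attacker: {tracked_per_attacker:,.0f}")  # ~10,000
```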
For surveillance to work, you have to be willing to see thousands of people tracked for every one who actually attempts murder. And you will have to intervene every time the surveillance algorithm reveals a potential for the person to become violent. Think of “stop and frisk” amped up. Be prepared for disproportionate numbers of minority group members to be selected for questioning or temporary preventive detention or whatever it is you plan to do—including entrapment(?)—when an algorithm discovers a “ticking time bomb.”
I imagine that we are headed there, regardless. Concerning the potential for abuse of surveillance powers, I would not be satisfied with Jenkins’ approach of requiring a judge to grant approval. I would want a full-time, full-fledged audit of surveillance policies and practices. The audit agency should have a strong culture of non-partisanship and protection for civil liberties.
I am persuaded by David Brin’s The Transparent Society that we will not be able to get the government to refrain from engaging in surveillance. So the best we can hope for is that it takes place under an institutional framework that limits government power and preserves the right of dissent.
The cases for a surveillance state and for gun control have both gotten weaker in the last several years. Every institution that gets a turn in the limelight shows a staggering level of incompetence or corruption (FBI, WHO, FDA, CDC, IRS, even the military in Afghanistan). These are the people we give surveillance powers to? On the gun control side, “defund the police” and the resulting rise in crime have grown the ranks of gun owners. Everyone with a soul is tortured by these recent incidents, but I don't see any movement toward the obvious "mainstream" solutions.
Someone who wants to kill kids is a broken person. Someone living in a tent on the sidewalk is a broken person. How do we fix broken people? That is the underlying question.
An easier way to explain this:
Let's say that, knowing nothing else, 1 in 100,000,000 people will become a school shooter.
Now track everyone's social media, telephone calls, and texts, and construct an algorithm that predicts the likelihood of becoming a school shooter. Your best algorithm can identify that someone is one thousand times more likely than the general population to become a school shooter. That's a pretty good model you have there.
Except that for every individual your algorithm selects, only 1 in 100,000 will become a school shooter.
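Spelled out in a few lines of Python, using only the illustrative numbers above:

```python
# The base-rate arithmetic from the example, made explicit.

base_rate = 1 / 100_000_000   # prior: chance a random person becomes a school shooter
lift = 1_000                  # the algorithm flags people 1,000x more likely than average

posterior = base_rate * lift  # chance a *flagged* person becomes a school shooter
print(f"P(shooter | flagged) = {posterior:.6%}")                    # 0.001%, i.e. 1 in 100,000
print(f"False positives per true positive: {round(1 / posterior) - 1:,}")  # 99,999
```

Even a model with a thousandfold lift leaves you with roughly 100,000 flagged people for every one who will actually do it.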