The loud knocking jarred him from his online research. Odd to hear a knock at the door at this hour, he thought. It was too forceful to be a friend. Then again, anyone he knew would have called ahead. As he approached the door, a second, more insistent knock came, and he heard, “Police. Open up. We know you’re home.”
Sitting alone in an interrogation room is scary. But he’d seen enough spy movies to know it was going to work out fine. It had to be a mistake, and as soon as it was sorted out, he’d be on his way home with a great story to tell, a strong police harassment lawsuit, or both.
“Mr. Jones,” the officer said as Jones looked up at him, “our anti-terrorism algorithm is very concerned about the online research you’ve been doing. It is especially worrisome in the context of your social media posts, your friends, your location data, your purchasing data, your financial situation, and the unfortunate way your girlfriend left you.”
Jones was stunned silent. “We know you’re very angry,” the officer continued, “and we know you have recently purchased a new AR-10 and 300 rounds of .308 ammunition. What would you need that kind of weapon for, Mr. Jones? And why so much ammo?” This is all a big mistake, Jones thought. If they’ve been watching me this closely, they must know that I’m about to leave on my annual hunting vacation with my friends.
Jones was about to speak, but the officer raised his hand to stop him. “Before you say anything, Mr. Jones, I’m going to read you your rights. You’re under arrest, for your own good and the good of America. Your behavior in the past few months has activated our Digital Red Flag notification system. You are suspected of the potential to do harm to yourself or to others. You have the right to remain silent. Anything you say can be used against you in court.” Wait. What? How could this happen?
Digital Big Brother
Was our hypothetical Mr. Jones going to the range to practice with a few hundred rounds, then off to a couple of weeks of deer hunting, or was he up to something else? How did our hypothetical red flag algorithm go about figuring that out?
A hypothetical Digital Red Flag algorithm would score propensities (likelihoods).
Mr. Jones has no criminal record, is over 21, has a good job, and pays his taxes. He has a political point of view (and expresses it freely on social media), and he has lots of like-minded friends. So the algorithm is X% confident that he is X% likely to do XYZ. He broke up with his girlfriend, and from his social media posts (and hers), it seems she left him for someone he really hates. So the algorithm is X% confident that he is X% likely to do ABC.
There are infinitely many ways to design a red flag algorithm. But generally speaking, it would analyze the available data to estimate the propensity of an outcome. Once its confidence in that estimate exceeded a threshold, it would send up a red flag.
The .308 Winchester ammunition purchased by Mr. Jones is one of the most trusted calibers for deer hunters, and it is exceptionally effective out to 350–400 yards. The AR-10 is an inexpensive semiautomatic rifle chambered in .308 (a.k.a. 7.62×51mm NATO). It would not be anyone’s first choice for deer hunting, but if you had a limited budget, it is powerful enough to kill any land mammal in the USA. The AR-10 would not be anyone’s first choice as an assault weapon either, but it can put a full magazine of high-powered rounds downrange as fast as you can pull the trigger. In practice, the AR-10 is an all-purpose semiautomatic rifle that can put a bullet through almost anything up to a quarter mile away.
Unfortunately for Mr. Jones, out of respect for his vegan boss, he never posted about hunting on social media. The algorithm had no idea Jones was a hunter. It just scored the information it had and sent up a red flag. Algorithms are only as good as the data they are analyzing, and AI bias and other factors (such as missing information) contribute to a high number of false positives.
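The scoring logic described above can be sketched in a few lines of code. To be clear, everything in this sketch is invented for illustration: the feature names, the weights, and the threshold are hypothetical, and a real system would be vastly more complex. The point it demonstrates is the one in this section: a weighted score of the signals the system *has*, with no way to account for the exculpatory signal it *doesn’t* have.

```python
# Minimal sketch of a hypothetical "Digital Red Flag" scoring algorithm.
# All feature names, weights, and thresholds are invented for illustration.

def propensity_score(features, weights):
    """Weighted sum of risk signals (each 0.0-1.0), clamped to 0..1."""
    raw = sum(weights.get(name, 0.0) * value for name, value in features.items())
    return min(max(raw, 0.0), 1.0)

# Hypothetical signals the system might track.
WEIGHTS = {
    "recent_weapon_purchase": 0.35,
    "large_ammo_purchase":    0.25,
    "angry_social_posts":     0.20,
    "recent_breakup":         0.10,
    "criminal_record":        0.10,
}

RED_FLAG_THRESHOLD = 0.7  # flag only when the score exceeds this value

jones = {
    "recent_weapon_purchase": 1.0,   # new AR-10
    "large_ammo_purchase":    1.0,   # 300 rounds of .308
    "angry_social_posts":     0.8,
    "recent_breakup":         1.0,
    "criminal_record":        0.0,   # no record
    # Note what is MISSING: Jones never posted about hunting, so there is
    # no "annual hunting trip" feature to lower the score. The algorithm
    # can only score the data it has.
}

score = propensity_score(jones, WEIGHTS)
flagged = score > RED_FLAG_THRESHOLD   # Jones gets flagged
```

Garbage in, garbage out: with the hunting context absent from the data, the hypothetical score clears the threshold and Mr. Jones becomes a false positive.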
Where do Mr. Jones’s civil rights end and society’s rights begin?
Is This the America We Want to Live In?
In this example, the government created a “single view of the customer” by gathering as much data on Mr. Jones as it could. It’s one thing to use these tools to craft a profile to target a population of people who fall into a predictive class called “175% overindexed on designer shoe purchases.” It is something entirely different to watch everyone, all the time, and try to train an algorithm to figure out who is dangerous and who is not.
Red Flag Laws
In theory, Red Flag laws make sense. Someone is acting odd; they don’t fit in; they are always angry, acting irrationally, talking about shooting someone or hurting someone. You call the police. The police investigate and decide this is a person worth watching. They investigate a bit more and go before a judge and get a writ that empowers them to take away the person’s firearms until a court-appointed professional makes a judgment call that the firearms should be returned.
It’s a messy business at best. There’s kind of a “Salem witch hunt” aspect to Red Flag laws, and unless the laws are carefully applied, they are a slippery slope toward a totalitarian state.
That said, we should have a way for citizens to report aberrant behavior, especially in a time when anyone over the age of 18 can purchase high-powered, military-grade weapons and pump 41 rounds into a crowd of people in under 30 seconds (this is how a shooter killed 9 people and injured 27 more in Dayton, Ohio, last week).
In the aftermath of last week’s domestic terrorist attacks, the President said, “First, we must do a better job of identifying and acting on early warning signs. I am directing the Department of Justice to work in partnership with local, state and federal agencies as well as social media companies to develop tools that can detect mass shooters before they strike.”
In our data-driven world, algorithms would be useful tools for Red Flag law investigations. Should we use them?
You may not think that AI bias matters or even understand what it is. You may not understand what predictive analytics are. You may not understand how digital advertising works. You may not understand how much data you have agreed to share. But you must understand that we are at a crossroads. The government’s ability to target you is significantly better than the advertising community’s ability to target you. Take a minute to truly understand this issue. Then, get involved in the conversation and make sure your voice is heard. The very fabric of our democracy is at stake.
Take the Survey
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.