‘Like Minority Report but in Real Life’: Post-Parkland, Schools Turn to Controversial Artificial Intelligence Surveillance to Thwart Potential Shootings
Updated on Jan. 14
In the future, authorities arrest people for crimes before they commit them. Shopping mall advertisements call out consumers by name. Retinal scanners allow the government to track citizens’ every move.
The scenes come from the 2002 movie Minority Report, a science-fiction thriller that depicts a dystopian future of mass surveillance. Although the world envisioned by the movie is still 35 years away, companies are already invoking it to promote their products to America’s schools.
“Artificial intelligence to help law enforcement stop crime before it starts or escalates, like Minority Report but in real life, is becoming a reality,” a communications specialist for Athena Security said in a recent email. The company was promoting its new surveillance cameras, which use artificial intelligence to identify guns before a shot is fired. Once a weapon is spotted, law enforcement is notified and an intercom system tells the gunman that police are on the way. Archbishop Wood High School, a private Catholic institution in Pennsylvania, was the first campus to implement the company’s technology.
“Our first feature is gun detection,” said Lisa Falzone, Athena’s co-founder and CEO. Down the road, she said, the technology will be able to detect other activity, including fights. “We just did guns because people were dying and we wanted to do something about it.”
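The loop the company describes (detect a weapon, notify police, then warn over the intercom) can be sketched in a few lines of code. The sketch below is purely illustrative: the class, threshold, and function names are hypothetical and do not reflect Athena’s actual software.

```python
# Illustrative sketch only -- not Athena's actual code or API.
# Models the general detect -> notify -> deter loop the company describes.
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    label: str         # e.g. "gun"
    confidence: float  # model score between 0 and 1

ALERT_THRESHOLD = 0.9  # hypothetical confidence cutoff

def notify_law_enforcement(d: Detection) -> None:
    print(f"[DISPATCH] Possible weapon on camera {d.camera_id} (score {d.confidence:.2f})")

def play_intercom_warning(d: Detection) -> None:
    print("[INTERCOM] Police have been notified and are on the way.")

def handle_detection(d: Detection) -> None:
    # Only escalate high-confidence detections of the target class.
    if d.label == "gun" and d.confidence >= ALERT_THRESHOLD:
        notify_law_enforcement(d)
        play_intercom_warning(d)

# Example: a single simulated detection event.
handle_detection(Detection(camera_id="entrance-3", label="gun", confidence=0.95))
```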
After two mass school shootings unfolded on American campuses last year, school officials and lawmakers have expended significant energy — and money — on physical security measures in an effort to protect children from violence. Athena is just one in a crowd of companies looking to tap into the $2.7 billion school security market. As schools like Archbishop Wood buy into surveillance through artificial intelligence, they’ve come under scrutiny from civil rights advocates who are concerned about pervasive government surveillance, potential bias, and the effects false positives could have on children who face accusations.
The issue re-emerged most recently when Florida’s Broward County Public Schools, which suffered a mass school shooting in Parkland last February, announced earlier this month that it would install artificial-intelligence surveillance on some of its campuses.
Such technology, including facial recognition, has become increasingly commonplace in law enforcement, in workplaces, and on smartphones. A recent government study found that the software is “rapidly advancing,” but there’s a dearth of research on the effects of AI-powered surveillance in schools and its ability to keep children safe. A national student survey in 2018 found that while school-based police and outdoor cameras made students feel safer, cameras inside the building made them feel vulnerable.
“As schools are investing in these types of resources, they may be neglecting or taking funds away from some of the things that we know actually do work to increase school safety,” said Deborah Temkin, director of education research at the nonpartisan Child Trends. “Things like building student and teacher relationships, promoting engagement in schools, and really focusing on school climate.”
In Broward, where a gunman opened fire and killed 17 people last February, the district is moving forward with a plan to purchase 145 analytic cameras from the security company Avigilon. The equipment, which has a $621,000 price tag paid primarily through a federal grant, represents a fraction of the roughly 10,000 surveillance cameras currently deployed across the district. The high-tech Avigilon cameras, a district spokeswoman said, will be installed along the perimeter of “high schools with the highest security incidents.” The cameras, the spokeswoman said, are capable of recognizing movement and characteristics of people and vehicles, and of flagging unusual behavior for officials.
The devices employ “appearance search” technology, which, according to Avigilon, allows officials to scan video footage based on someone’s identifiable features, such as clothing. Another aspect of the technology, the district spokeswoman said, will notify officials if someone crosses into an area where they aren’t supposed to be, such as when someone hops over a fence.
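To give a rough sense of how a perimeter rule like that can work, the sketch below flags a tracked person who moves from outside a restricted area to inside it between two frames. The rectangular zone and coordinates are made up for illustration; this is not Avigilon’s implementation, only a minimal example of the concept.

```python
# Illustrative sketch only -- not Avigilon's actual product logic.
# Alert when a tracked position crosses from outside a restricted zone to inside it.

# Hypothetical restricted zone, axis-aligned for simplicity: (x_min, y_min, x_max, y_max)
RESTRICTED_ZONE = (0.0, 0.0, 50.0, 30.0)

def inside(zone, point):
    x_min, y_min, x_max, y_max = zone
    x, y = point
    return x_min <= x <= x_max and y_min <= y <= y_max

def check_crossing(zone, prev_point, curr_point):
    """Return True when a track crosses from outside the zone to inside it."""
    return not inside(zone, prev_point) and inside(zone, curr_point)

# Example: a track hopping the fence line into the zone between two frames.
if check_crossing(RESTRICTED_ZONE, prev_point=(60.0, 10.0), curr_point=(45.0, 10.0)):
    print("ALERT: perimeter crossing detected")
```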
Littleton Public Schools in Colorado, which has spent millions of dollars on physical security since it suffered a high school shooting in 2013, uses Avigilon’s cameras. Guy Grace, the district’s director of security and emergency preparedness, called Avigilon the “Mercedes-Benz” of video surveillance. The district currently uses the cameras to monitor unusual behaviors, such as someone speeding into a school parking lot.
In the coming months, Grace said, he aims to implement facial recognition technology in the district, which could notify the security team when someone who isn’t supposed to be on campus enters the building.
“Let’s say there’s a kid on a threat assessment or something like that, a kid that’s not supposed to be there, blending in,” Grace said. “That could happen, and that is a fear of mine.” For him, cost “doesn’t matter. If it’s a couple thousand dollars, we’re going to do it.”
To tout its product’s success, Avigilon points to a middle school in Billings, Montana. After school officials there implemented AI-enabled cameras, officials observed fewer incidents of vandalism and bullying, according to a company handout.
A spokeswoman in Broward County told the South Florida Sun Sentinel the devices in the Florida schools won’t include facial recognition, an area of artificial-intelligence surveillance that’s stirred up considerable pushback from privacy advocates. Under Florida law, schools are prohibited from collecting students’ biometric information through means such as fingerprints and facial geometry scans.
No less a technology champion than Microsoft President Brad Smith has called for government regulation of facial recognition technology, noting that it raises questions about “fundamental human rights protections like privacy and freedom of expression.”
In one test by the American Civil Liberties Union, Amazon’s facial recognition software falsely identified 28 members of Congress as having a criminal history. The false positives disproportionately identified members of the Congressional Black Caucus.
The ACLU findings join a list of reports that have found that facial recognition technology often struggles to accurately identify young people, women, and people of color. About half of American adults are included in a law enforcement face recognition network, and during a House oversight committee hearing in 2017, the Federal Bureau of Investigation acknowledged that its software misidentifies people 15 percent of the time. A recent report by the National Institute of Standards and Technology found that the capabilities of facial recognition technology have improved rapidly in recent years, but quality varies widely.
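Those error rates matter because genuine threats are rare, so even a fairly accurate matcher will produce mostly false alarms. The back-of-envelope arithmetic below uses hypothetical numbers; the school size, match rate, and threat rate are illustrative and not drawn from the studies cited above.

```python
# Back-of-envelope arithmetic with hypothetical numbers, showing why an
# accurate matcher still generates mostly false alarms when real threats are rare.

students_scanned_per_day = 1_000   # hypothetical school
false_match_rate = 0.01            # hypothetical 1%, far below the 15% error rate the FBI cited
true_threats_per_day = 0.001       # genuine watchlist hits are extremely rare

false_alarms = students_scanned_per_day * false_match_rate   # about 10 per day
true_alerts = true_threats_per_day                           # about 0.001 per day

print(f"Expected false alarms per day: {false_alarms:.1f}")
print(f"Expected true alerts per day:  {true_alerts:.3f}")
# Under these assumptions, nearly every alert a security officer sees is a false positive.
```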
The concerns have not stopped security companies from increasingly marketing the technology to educators. One company, RealNetworks, even announced last year an initiative to give school districts facial recognition software for free.
Stefanie Coyle, education counsel at the New York Civil Liberties Union, said she’s worried that inaccurate facial recognition technology could prompt negative outcomes for students. Her organization called on New York lawmakers to ban the use of facial recognition in schools after the district in Lockport spent more than $1 million in state money on the technology. Noting potential bias, she said she’s concerned that the technology could disproportionately affect at-risk students and that it could negatively affect students’ perceptions of school.
“Kids are supposed to come to school every day and be welcomed and feel supported,” Coyle said. Their faces “should not end up in some sort of database just for coming to school. They should feel comfortable. It really treats every kid as a potential suspect.”
But Falzone, whose company doesn’t currently offer facial recognition, said public concern over the proliferation of facial recognition technology could be misguided. Although “there are a couple of instances where you can use it for bad,” she said, the technology offers benefits such as the ability to recognize people who aren’t supposed to be on campus. Grace of Littleton offered a similar take. “You’re always going to have false positives in everything,” he said, but in his view those downsides don’t outweigh the benefits.
“You’ve just got to learn to work through the processes and understanding that you’re going to have” false positives, he said. “Technology and people are not perfect, but if we can reduce the risk, that’s what security technology is doing.”
In one high-profile use of artificial intelligence, Maryland police used a state facial recognition tool to identify the suspected gunman in the mass shooting at the Capital Gazette newspaper. That same tool was criticized by civil rights groups, however, after documents revealed that officers used it to monitor protesters following the 2015 death of Freddie Gray while in Baltimore police custody.
Despite the heightened concern around school shootings in the past year, such tragedies are statistically rare and, according to federal data, schools have become safer in recent years. But Falzone said that parents are afraid and that security technology calms anxieties.
“As a school, you’d better be putting the latest and greatest security in to get parents to send their kids to school there,” she said. “It’s pretty big as far as marketing your school to parents.”
But using security to calm fears, Temkin of Child Trends said, could make the problem worse.
“It reinforces fear in a lot of ways,” she said. “Schools engaging in this for the sake of parents are actually doing them a disservice by not reinforcing the fact that this is a very rare event, and most likely they have nothing to worry about.”