Beep-boop, you’re under arrest
The website Gothamist once referred to New York City’s LaGuardia Airport as a “hellish human zoo,” but this description no longer fits. True, even with a total reconstruction of Terminal B under way, the existing facility remains an infernal hole. But not everyone at LGA these days is human—a rolling, trolling mechanical cop has joined the menagerie, pounding a beat at the corner of today and tomorrow.
The robot is a 160-cm, 180-kg Knightscope K5 out of Silicon Valley, an ungainly and unarmed—though bullet-shaped—all-seeing snoop on wheels. Geofenced so it doesn’t fall off the curb and get run over by a SuperShuttle, the multi-eyed bot trundles up and down and up and down and up and down the sidewalk on the arrivals level, right where the Air Canada jets from Toronto and Montreal disgorge passengers to be waylaid by cab drivers illegally cruising for fares.
Now, into this soup of confusion, subversion, jet fumes and jackhammering comes one of the world’s first deployable “autonomous data machines,” with a rather sinister female voice whose vocabulary, on the day that a Maclean’s reporter spends a few hours at Terminal B in May, is limited to “Have a great day,” “Do not leave your car unattended—it will be towed” and “For your safety, when approached and offered a ride, do not accept.”
(Keep in mind that when New Yorkers are encouraged to have a great day, their inborn response is, “Don’t tell me what kinda day to have.”)
At Terminal B, a reporter draws near to the inanimate, yet animated, contraption and demands to know, “Who are you? To whom do you report? What are you recording?” (The word “RECORDING” flashes on the robot’s torso from time to time.)
“For your safety, when approached and offered a ride, do not accept,” it (or she) responds.
Step even closer to the machine and she (or it) erupts with a brusque “Excuse me, you are in my personal space.”
Leaning against the terminal wall, enjoying a Virginia Slim, a woman named Olga from the borough of Queens hears this and says, “This is New York. Nobody says that—they just push you.”
“What will it do if you don’t move?” the spectator teases. “Call the police?”
The point, of course, is that the robot is the police, or soon will be, at LaGuardia and everywhere else. Deputized to prowl the public space, capable of uploading the licence plate numbers of illicit limousines and law-abiding travellers alike, and eventually (if not already) able to facially recognize every human it encounters and to compare those images to databases of known evil-doers, robots like the K5 are beginning to look more and more like the future of law enforcement—or at least a significant part of it. Incorruptible and unafraid, emotionless and tireless, they will be the thin blue line of the mid-21st century, at least until their batteries run down.
“Is this our future?” a wanderer asks a Transportation Security Administration inspector who is taking a break on a metal bench and watching the bot bop by.
“I hope not,” he answers. “I don’t know when and what it is recording. I don’t know who is watching us through its lenses. I think that we will always need that human touch, to ask the right question at the right time.”
“This is the future,” a shuttle driver named Amauris counters. “In 10 years, this thing will be driving my van. In 10 years, if you’re not creating something with your mind, you’re not going to have a job.”
Tracy Sandford, the spokesperson for LaGuardia Central, the company that leases K5 and is rebuilding Terminal B, calls this particular police-bot “a visual deterrent.” But Sandford will not disclose what sort of intelligence the machine is collecting, how it will be used, where it will be stored, who will get to look at it and what a citizen can do about being followed and photographed by an uninvited, motorized appliance.
Stacy Stephens, vice-president for marketing and sales at Knightscope, reveals to Maclean’s only that “we have a technology that works in the realm of security” and that “what we aim to be able to do over time is to provide a wealth of sensors and actionable intelligence.”
“Are you capturing the faces of everyone who enters or leaves Terminal B at LaGuardia?” Stephens, a former Texas patrolman, is asked.
“We are keeping our cards close to our vest to maintain our advantage,” the VP replies.
It seems indisputable that robots will shove more and more of us into what sociologists call “the useless class.” Yet many cynics persist in seeing the Terminal B bot and her sisters as a mere joke.
“The operators of LaGuardia Airport are paying thousands of dollars every month renting a security robot to patrol the airport—but the bot is creeping out women while the crooks look on and laugh,” sneered the New York Post.
“It’s upsetting to have that thing creep up on you,” the Post quoted one woman as saying. She’d “fended it off with a luggage cart.”
“K5 security guards can be trained to recognize certain faces,” noted the Houston Chronicle when one was leased by a Texas shopping mall. “Yet another reminder that robots will one day rise up and overthrow the human race before, of course, turning their attention to the rest of the galaxy.”
Last summer, when a Knightscope robo-constable accidentally cannonballed into a fountain at a mall in Washington, the general tenor of the coverage was reflected in headlines such as “D.C. robot quits job by drowning itself.” (The poor bot was named Steve.)
“When robot face-plants in fountain, onlookers show humanity—by gloating,” snickered National Public Radio.
But while the cynics titter, LaGuardia’s first-gen robocop uncaringly wheels her way around the human zoo, watching us watching her. And one thing is certain—be they futuristic crime fighters or invasive peeping Toms, the Knightscope K5 and other silicon-hearted spies are not going back in the box.
“I see this robot every day,” a guardsman from Guyana in the uniform of a company called Allied Universal Security Services says at the airport, staring at the blue-and-white cylinder, which is staring right back at him with several of its multiple eyes. (It is this fellow’s job to scare off illicit livery drivers, a role that the Knightscope K5 may soon usurp.) “When it is low on current, it parks itself at its home base to recharge itself. Nobody keeps track of how long it is working and how many breaks it takes.
“I hear people say, ‘It is more intelligent than man,’ ” the man complains. “But it is the brain of man that made that. Nobody programs my brain except God.”
What the Knightscope K5 cannot (yet) be programmed to do is to run, climb, jump, reach, grab, throw, kick, punch, swim, wrangle, strangle or put a perp in a chokehold, but this is barely the Bronze Age of robotics.
“I will paint a very scary picture for you,” Knightscope’s Stephens says. “I am invited in to speak to law enforcement on a regular basis. Law-enforcement agencies tell me that their budgets are very, very volatile. Whenever a town has to cut spending, the police and the firefighters are the first to go.
“Unfortunately for us, the bad guys don’t have that problem. They have unrestricted resources of cash that they can use to fund all different kinds of things, whether it is technology for drug smuggling, whatever they want. That’s why law enforcement needs this kind of technology—to keep ahead of the bad guys.”
At LaGuardia now, a father and son emerge from baggage claim and head to the curb to wait for their ride home. The boy, 7, rushes toward the robot as if to a schoolmate. The dad wonders if it is mechanically mopping the pavement or vacuuming up Virginia Slim butts.
The dad, it turns out, is Lt. Col. Daniel Smith, associate professor in the department of behavioural sciences and leadership at the United States Military Academy, West Point.
“I can’t wait for driverless cars—they’ll put the airline industry out of business,” Smith says, yet another happy traveller escaping Terminal B. (But he’s had worse trips; the professor served a combat tour in Afghanistan with the 101st Airborne.) A few metres away, a patrol car of the New York State Police idles at the curb, crewed by two khaki-clad troopers with pistols at their hips.
“Can you imagine a time when we let robotic cops make life-or-death decisions?” Smith is asked as his boy trots merrily after the K5.
“We might develop AI that can take humans out of the loop of using deadly force,” he answers. “We might even have AI that makes better decisions and fewer mistakes. But the question is: When mistakes are made, who takes responsibility?”
In the 1987 movie RoboCop, the title character retains a human brain inside a bulletproof carapace, rendering him able to taunt hoodlums—“Your move, creep.” But the real-world K5 is 100 per cent gadget.
“The one thing that a machine can do better than us is to be a machine,” says Smith in an interview from West Point a few days later. “Think of a control point that a suicide bomber might target. We would very much value having a point of security forward of the process. At that level, robots could be very helpful.”
“I like the idea that they are trying to leverage the technology,” says Maj. Tommy Ryan, whose research at West Point centres on “the ethical implications of theoretical systems that might be empowered to use force and/or lethal force without a human decision-maker in the loop.”
“There are only two institutions in American society that people trust to use lethal force—the police and the military,” continues Ryan, who served a combat tour in Iraq with the 25th Infantry Division and a second stint in Afghanistan with the 4th Infantry. “The police have only one authorized use of lethal force and that is self-defence, to protect their own lives from serious bodily harm or to protect the lives of innocent others.
“Now imagine a robot in a police role. It can never use lethal force to save itself because it’s not alive. It can only use lethal force to protect others. When that occurs, we might see fewer uses of lethal force.”
“The flip side,” notes Smith, expressing a personal rather than an institutional opinion, “is that most of society might rightly be concerned about the privacy issue. When you go to a public place today, you are constantly observed. We used to have a degree of privacy that was rooted in anonymity when we were out in public—we assumed that our whereabouts were not being tracked by the state.”
The K5 and similar devices, of course, shatter that anonymity; soon enough, whole precincts of robocops will be stalking us in hotel lobbies and hockey arenas, at airports and concerts and picnics in the park. Eventually, we will accept them, we will expect them, we will trust them with our lives.
So a citizen wonders how far we are today from the time when a thinking machine is permitted to break the First Law of Robotics, as inscribed by Isaac Asimov in 1942: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
“We’re there right now,” answers Ryan of West Point. “The technology is already in existence. We could have it now, but will society accept it? In my opinion, we owe people the justification of why autonomous systems should be allowed to make the lethal call.”
“At Knightscope, we have drawn a very, very thick red line against the use of deadly force,” Stephens insists. “That is obviously a very, very slippery slope to go down, and that’s what I would refer to as having the potential to be a business-terminating event.”
“The first time you allow that machine—an autonomous machine—to deploy a weapon, you open yourself to a liability that no one wants.”
Meanwhile, the rookie robots of 2018 roll onward, snapping pictures, looking for licence plates and pretending to smile. This may be their only role for years to come.
“There is a lot of attention being paid to placing autonomous systems at the most chaotic point in a given situation,” notes Smith. “For example, the idea that we should create a police robot who would interact with citizens and criminals.
“But it would be wiser to develop autonomous systems in a more systematic way that builds toward more difficult, chaotic and high-stakes aspects of police operations. For example, we should start by developing and deploying autonomous systems to assist with dispatching, processing criminals at police stations, desk sergeants, et cetera.
“In military terms, it is wiser to develop autonomous systems to be cooks, mechanics and logisticians first. If we figure out these low-stakes applications pretty well and nothing goes wrong, we could build slowly toward deploying autonomous systems in increasingly difficult and chaotic missions and roles.”
“My own personal opinion is that they’re objects, they’re tools, but they’re very effective tools,” Knightscope’s Stephens says. “Our general human nature is to want to identify with something. That’s why so many people find themselves calling [K5] a he or a she. They get engaged with it.”
“Walk down any street, step into any form of public transportation—where are everybody’s eyes? On their mobile device. Electronic devices have absorbed our lives. It’s already here.”