All hail the AI overlord: Smart cities and the AI Internet of Things

Cities generate lots of data. The exact amount depends on a city’s size, sophistication, and ambitions, but it’s certainly more than mere humans can absorb and use. The Smart Cities movement, which seeks data-driven technological solutions to everyday urban challenges, is increasingly turning to artificial intelligence to deliver “services” to residents—everything from locating gunshots and finding tumors to dispatching work crews to pick up trash.

New York is one of about 90 cities worldwide using a system called ShotSpotter, a network of microphones that recognizes and locates gunshots in near real time. In Moscow, all chest X-rays taken in hospitals are run through an AI system that recognizes and diagnoses tumors. And Taiwan is building a system that will be able to predict air quality, allowing city managers to warn residents of health dangers and act to blunt what the data tells them will be the worst of the impacts.

What constitutes a “Smart City” isn’t well-defined. In the broadest sense, a Smart City is one that uses electronic means to deliver services to its residents. But dig down even a little, and making good on even that simple promise can be exquisitely difficult. For example, Smart City technology might strive to eliminate the need to call your alderman to complain that the streets aren’t getting plowed. Instead, a network of sensors—yes, an Internet of Things—would know when snow is falling, how much has fallen, where the snowplows are, when they were last on your street, and when they’ll be there next. All of that would be delivered in a browser or app to anyone who cares to either dial in or build their own information utility from that freely available data.
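What might one record in such a plow feed look like? Here’s a minimal sketch with hypothetical field names; every real city feed defines its own schema and data dictionary.

```python
# One hypothetical record in a city plow feed. Field names are invented;
# real open-data feeds publish their own schemas and data dictionaries.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlowStatus:
    plow_id: str            # fleet identifier for the vehicle
    latitude: float         # current GPS position of the plow
    longitude: float
    street_segment: str     # segment the plow most recently serviced
    last_plowed: datetime   # when that segment was last cleared
    reported_at: datetime   # timestamp of the sensor report itself
```

A resident-facing app could answer “when was my street plowed?” simply by filtering these records on the street segment.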

Of course, you’ll need a communications infrastructure to let all those sensors talk to each other and to a central database, as well as application programming interfaces and data dictionaries so the snowplow data can be accessed by other services—such as the fire department, which could use that information to better position its ambulances in bad weather.
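As a sketch of how a second agency might consume that same feed, consider the fire department’s use case. The endpoint URL and JSON field names below are hypothetical; they stand in for whatever the city’s API and data dictionary actually specify.

```python
# Hypothetical consumer of the plow feed. The endpoint URL and field
# names are invented; timestamps are assumed to be ISO 8601 with offsets.
import requests
from datetime import datetime, timedelta, timezone

PLOW_API = "https://data.example-city.gov/api/plow-status"  # hypothetical

def unplowed_segments(max_age_hours=4):
    """Return street segments with no plow pass in the last few hours."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=max_age_hours)
    records = requests.get(PLOW_API, timeout=10).json()
    return [
        r["street_segment"]
        for r in records
        if datetime.fromisoformat(r["last_plowed"]) < cutoff
    ]
```

A fire department could cross-reference that list with its station map to pre-position ambulances near poorly cleared streets.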

Oh, and because this is the government, doing it on a tight budget, securely, and with maximum uptime are all design goals, too.

And that’s just one application. Consider all the functions a municipal government provides, and it becomes readily apparent why no city is completely “smart” and how artificial intelligence and machine learning could readily be applied to the Smart Cities movement. Thus, the latest catchphrase of “Smart City” technology hawkers is AIoT: Artificial Intelligence incorporated into the Internet of Things.

Inevitable tensions

Inevitable tensions, however, have sprung up between AI/ML and the Smart Cities movement. One of the hallmarks of Smart Cities is that the data collected to make a smart city possible is kept maximally open and available. Chicago, for instance, publishes its government data, as do New York, Barcelona, Moscow, and the island nation of Taiwan. But AI and ML algorithms are opaque by nature—not necessarily something a councilman or community organizer can readily understand. Political processes in every jurisdiction reflect local customs, needs, and desires, any of which may include levels of scrutiny for, among other values, fairness in the provision of services.
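The openness half of that tension is concrete: these portals are queryable by anyone, not just browsable. Chicago’s portal, for instance, runs on Socrata, whose SODA API returns any published dataset as plain JSON. A minimal sketch (the dataset ID below is a placeholder; look up a real one on the portal itself):

```python
# Fetch a few rows from a dataset on Chicago's Socrata-backed portal.
# "xxxx-xxxx" is a placeholder; every dataset has its own ID.
import requests

url = "https://data.cityofchicago.org/resource/xxxx-xxxx.json"
rows = requests.get(url, params={"$limit": 5}, timeout=10).json()
for row in rows:
    print(row)
```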

An AI that learns to send police to certain neighborhoods faster than others—or is suspected of doing so—is not an AI that would survive a political process. The politicians who put such a system in place would be unlikely to survive, either. The Smart Cities ideal—at least, under non-authoritarian regimes—is to use all that data to provide services better and more efficiently, not to engender urban dystopia. Expect AI systems in Smart Cities to make recommendations to actual people.

Spotting shots and tumors

Take the ShotSpotter system used by New York City and Washington, DC. Studies have shown that only one out of every eight incidents of gunfire is reported to authorities. ShotSpotter uses networks of strategically placed microphones to listen for gunshots. When the mics pick up a noise matching the acoustic signature of a gunshot, the ShotSpotter system triangulates a location and sends the recording and associated information to a human reviewer, who decides whether the sound was, in fact, a gunshot.
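ShotSpotter’s actual algorithm is proprietary, but the underlying geometry is classic time-difference-of-arrival (TDOA) multilateration: each extra microphone that hears the same impulse constrains where the source can be. Here is a minimal sketch under that assumption, using generic numpy/scipy tools; the sensor layout and timings are invented for illustration.

```python
# Textbook TDOA multilateration: estimate a sound source's position from
# the relative arrival times of one impulse at several known sensors.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s at ~20 °C; real systems correct for weather

def locate(sensors, arrival_times):
    """Estimate the source (x, y) from arrival times at known sensors."""
    sensors = np.asarray(sensors, dtype=float)
    t = np.asarray(arrival_times, dtype=float)

    def residuals(p):
        dist = np.linalg.norm(sensors - p, axis=1)
        # predicted vs. observed time differences, sensor 0 as reference
        return (dist - dist[0]) / SPEED_OF_SOUND - (t - t[0])

    # start the solver from the centroid of the sensor array
    return least_squares(residuals, sensors.mean(axis=0)).x

# Four microphones at known positions (meters) and the relative times (s)
# each heard the same impulse; these numbers put the source near (120, 260).
mics = [(0, 0), (500, 0), (0, 500), (500, 500)]
times = [0.835, 1.342, 0.782, 1.310]
print(locate(mics, times))
```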

Within 60 seconds of hearing something, ShotSpotter’s sensors can report the latitude, longitude, and altitude of the shot, how many shots there were, and the direction and speed of the bullet’s travel. In New York’s implementation, information about confirmed shots fired is merged with the city’s own address database (because not all locations have a street address), surveillance video, crime and shooting histories for the location, and the names and pictures of anyone with open warrants at that address, as well as any gun permits issued in the area. Police responding to the call have all of that data on their computers or tablets by the time they arrive at the scene.
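In code, that merge step might look something like the sketch below. None of this reflects the NYPD’s actual systems; every interface here is a hypothetical stand-in for the databases the article describes.

```python
# Hypothetical enrichment step: join a confirmed-shot incident against
# city datasets keyed by location. All interfaces here are invented.

def enrich_incident(incident, address_db, warrant_db, permit_db):
    """Bundle city records with a confirmed incident for responders."""
    address = address_db.lookup(incident["lat"], incident["lon"])  # hypothetical
    return {
        **incident,
        "address": address,
        "open_warrants": warrant_db.at_address(address),   # hypothetical
        "gun_permits": permit_db.issued_near(address),     # hypothetical
        # surveillance video and crime history would be attached similarly
    }
```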

In all, 60 square miles of New York City—about 20 percent of the city—are covered by ShotSpotter. To guard against vandalism, the exact coverage areas and sensor locations are proprietary—even the NYPD doesn’t know them. The city says it’s responding to four to five times as many gunshots as it was before ShotSpotter was implemented and that it has been able to match recovered guns and bullets to other open cases.

The solution is not perfect. ShotSpotter’s architecture means it can only cover areas of about three square miles or larger, so violent pockets smaller than that require different, more traditional solutions. Also, ShotSpotter, not the city, owns the system, and the contract is up in 2021. Still, the police department sounds very pleased with the value it’s getting for the money.
