City HAL 9000: Do cities overpromise AI’s benefits?


San Jose is using Artificial Intelligence to “recognize tents and cars with people living inside.” Sunnyvale is using it for “live translation” of public meetings. Portland’s Bureau of Emergency Communications is using it to “answer all non-emergency calls.”

Denver is using it to “speed up the approval and delivery of … coupons” that allow “residents to recycle televisions, monitors and other electronics at a discount.” The Regional Transportation Commission of Southern Nevada is using it to “understand traffic patterns and safety issues at select intersections in the Las Vegas Valley.”

IBM's Jeff Crume calls AI a technology that matches or exceeds humanity's "ability to discover," "ability to infer" and "ability to reason." It certainly is making its way into municipal government.

Well before its unofficial "debut" with the 2022 public release of ChatGPT, municipal governments were curious about AI. During a spring 2021 conference, Stefaan Verhulst, of the Governance Laboratory, extolled AI's "massive opportunity to do good at the local level." Later that year, AI Magazine enumerated the "top companies" – including StreetLight Data, Waycare and Mr. Fill – selling their "AI-powered technology" as "smart city solutions."

The ChatGPT blockbuster had an immeasurable impact, and 12 months later, San Jose's technology bureaucrats launched the GovAI Coalition, currently comprising more than "600 public servants from over 250 local, county and state governments that represent over 150 million Americans across the nation," according to the city's website. The mission? Promotion of "responsible and purposeful artificial intelligence … in the public sector."

William D. Eggers, of Deloitte’s Center for Government Insights, is giddy over AI. He concedes that “trust in government is at or near all-time lows,” but believes “a public-sector renaissance” may be on the horizon – delivering “a dramatic multiplication in service delivery, operational efficiency and mission attainment.” We’ve heard this all before.

Around the turn of the century, the arrival of the Internet induced some true believers to tout “e-government.” The World Wide Web, email, etc., it was claimed, would engineer a wholesale “reinvention” of the public sector, with “improved access for citizens, increased efficiency, lower costs and greater effectiveness.”

Posterity hasn’t been kind to such enthusiasm. Boosted labor productivity – i.e., serving taxpayers with fewer employees – is not in evidence for government at any level. (Don’t bother asking about public schools.) And the magic of the Internet doesn’t appear to have made much of a dent in waste, fraud and abuse. While a precise accounting is impossible, given the matter’s inherent subjectivity, it’s clear that the gains municipalities made by going online have been significantly offset by new problems.

Cyber-crime, for example, has taken a heavy toll on taxpayers. In 2019, the United States Conference of Mayors “unanimously resolved to no longer pay any ransom to hackers, following a series of cyber shakedowns that have extorted millions from city governments.”

Yet the breaches continue and seem to be worsening. Last summer, "Dallas agreed to pay $8.5 million in expenses related to a ransomware attack." Last month, Wichita got hit, "giving hackers access to an unspecified number of people's personal information, including names, Social Security numbers, driver's licenses and other state IDs, and payment card information." According to a sobering assessment issued by the Center for Internet Security in January: "The biggest weakness in many state and local government organizations' cybersecurity programs … is simply that they're still being created."

Fundamentally transforming city government is difficult enough in low-tech ways. With technology as complex as AI, the challenge poses immense opportunities for frustration. Whether bureaucrats and elected officials have the requisite expertise and sophistication remains a very open question.

Competence is one concern and constitutionality is another. Stockton, Calif.’s City Council has signed off on the use of “cameras equipped with artificial intelligence to detect overgrown lawns, vehicles parking on unapproved surfaces, peeling paint, boarded windows, graffiti and other code violations.” And San Jose’s homeless-spotting tech has garnered plenty of ire.

One advocate thundered that it was “beyond disturbing that the city is using AI to target the most vulnerable members of our community, as if they were potholes and graffiti.” Legal opposition over privacy concerns is inevitable.

A final red flag: AI’s potential to supercharge governments’ interest in tasks they have no business performing in the first place. Los Angeles County offers a creepy, The Minority Report-style example.

In April, CNBC detailed a pilot program by the Homelessness Prevention Unit that “uses predictive artificial intelligence to identify individuals and families at risk of becoming homeless, offering aid to help them get stabilized and remain housed.” (Most of the aid – “between $4,000 and $8,000” – flowed through the $1.9 trillion “stimulus” package enacted in the early days of the Biden administration.) Is predicting misfortune that might strike individuals a legitimate function of government?

With the technology constantly in flux and state/federal regulations far from certain, prudent city and county officials should adopt a go-slow approach regarding AI. Skip the grandiosity, and stick with proven, practical tools that make government more approachable, accountable and transparent.

Earlier this year, Bakersfield, Calif., launched “a customer service platform called Archie, which includes an accessible web chat assistant on the city’s website and a text message feature.” Test “him” out, and you’ll discover that Archie can be quite useful with information requests about animal control, business licenses, etc.

No, a chatbot isn’t “10x government,” and it won’t attract much praise from Verhulst’s “laboratory.” But in the long run, modest tools like it stand a better chance to aid the folks who pay local government’s bills than wildly ambitious attempts to “channel AI for the greater good.”

D. Dowd Muska is a researcher and writer who studies public policy from the limited-government perspective. A veteran of several think tanks, he writes a column and publishes other content at No Dowd About It.

Nothing contained in this blog is to be construed as necessarily reflecting the views of the Pacific Research Institute or as an attempt to thwart or aid the passage of any legislation.
