How communities can become smarter about artificial intelligence

With governments calling for more AI oversight, the technology sector needs to educate the public on the protective features already built into these systems.

The growing use of artificial intelligence in smart cities is changing the rules for everyone: government agencies, elected officials, constituents and AI vendors. Research on the bias embedded in predictive algorithms has led several U.S. cities, from Portland to Boston, to ban all use of AI-enabled facial recognition by city-funded agencies.

Global installations of video cameras are projected to reach 1 billion by 2021. In the U.S., there will soon be one camera for every four people, leading many citizens to pepper their representatives with questions about the privacy ramifications of proposed AI-enabled projects.

Today, nearly a hundred governments, nonprofits, universities and NGOs are calling for “trustworthy AI.” Even Pope Francis has weighed in, recently encouraging the ethical use of AI and partnering with IBM and Microsoft to codify “human-centered” ways of designing AI.

Having participated in outreach sessions with city planners, senior citizens, and students of all ages, I’ve learned that AI-related questions are no longer confined to software experts and platform specialists. Concerns about privacy, consent and transparency reach every demographic. In this environment, it’s essential for industry insiders to educate the public on existing AI safeguards. Only then can the world’s 7.8 billion residents realize the promise of AI to optimize services, resources and safety.

How AI Tools Benefit Cities

AI platforms deliver on their promises in two ways. First, by training machine learning algorithms on large datasets, AI tools can automate a surprising variety of complex tasks that once required human judgment, making predictive recommendations or taking specific actions. Second, these systems analyze a staggering amount of data in real time, seeing patterns and flagging anomalies faster than any human observer could organize them.
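
To make that second capability concrete, here is a minimal sketch of real-time anomaly flagging, assuming a hypothetical feed of per-minute vehicle counts and an illustrative rolling z-score rule; production platforms use far more sophisticated models:

```python
from collections import deque
from statistics import mean, stdev

# Minimal sketch: flag anomalies in a live stream of sensor readings
# (e.g., vehicle counts per minute) with a rolling z-score.
# The feed, window and threshold are illustrative assumptions,
# not any vendor's API.

WINDOW = 60        # minutes of history to compare against
THRESHOLD = 3.0    # flag readings more than 3 standard deviations out

def monitor(readings):
    """Yield (value, is_anomaly) for each reading in the stream."""
    history = deque(maxlen=WINDOW)
    for value in readings:
        if len(history) >= 10:  # wait for a minimal baseline
            mu, sigma = mean(history), stdev(history)
            is_anomaly = sigma > 0 and abs(value - mu) / sigma > THRESHOLD
        else:
            is_anomaly = False
        history.append(value)
        yield value, is_anomaly

# Example: a sudden spike in vehicle counts gets flagged immediately.
counts = [50, 52, 48, 51, 49, 50, 53, 47, 52, 50, 51, 240]
for value, flagged in monitor(counts):
    if flagged:
        print(f"Anomaly: {value} vehicles/min")
```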

Today, more than two-thirds of U.S. cities have launched a “smart city” project, leveraging data-driven digital technologies to manage core functions. Analysts found that AI is a critical feature in 30% of those applications. Notably, recent years have seen a corresponding jump in the number of “chief data officers” and “chief innovation officers” hired by city and state IT departments to approve and regulate these projects.

Urban transportation and public works departments have been among the earliest beneficiaries of cloud-enabled AI. Spending on roads and highways is consistently one of the top five budget categories for U.S. cities. Consider how AI-enabled video sensors, compared with older systems, can save money while optimizing traffic flow and reducing congestion throughout urban corridors and roadways.

Previously, inductive loops embedded in asphalt counted cars as they rolled toward an intersection: the metal mass of each passing vehicle disturbs the loop’s electromagnetic field, signaling the controller to change the traffic lights. With AI, street signals can be synchronized across miles of traffic. Computer-aided dispatch, also called E-911, can redirect traffic flow for emergency first responders. And algorithms trained for license plate recognition (LPR) can notify highway patrol officers when states or counties report stolen vehicles or Amber Alerts.
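
As a rough illustration of that LPR workflow, here is a minimal sketch; the hotlist entries, the 0.90 confidence cutoff and the check_plate helper are hypothetical stand-ins for the state and county feeds a real deployment would integrate:

```python
import re

# Minimal sketch of LPR hotlist matching. The plates and feeds below are
# hypothetical; real systems integrate state and county alert feeds, and
# every hit is routed to an officer rather than acted on automatically.

HOTLIST = {
    "7ABC123": "reported stolen (county feed)",
    "4XYZ890": "Amber Alert (state feed)",
}

def normalize(plate: str) -> str:
    """Uppercase and strip punctuation so 'ab-c 123' matches 'ABC123'."""
    return re.sub(r"[^A-Z0-9]", "", plate.upper())

def check_plate(raw_plate: str, confidence: float):
    """Return an alert only for high-confidence reads that hit the hotlist."""
    if confidence < 0.90:  # discard low-confidence OCR reads
        return None
    reason = HOTLIST.get(normalize(raw_plate))
    if reason is None:
        return None
    return {"plate": normalize(raw_plate), "reason": reason,
            "action": "notify officer for manual verification"}

print(check_plate("7abc-123", confidence=0.96))
```

Note that a hotlist hit only notifies an officer for manual verification; as described below, any legal consequence stays with a human.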

In the U.S., AI-generated alerts that carry legal consequences are coupled with human oversight and intervention. Overall, AI data-crunching platforms help cities stretch their budgets: researchers have predicted that governments could achieve $4.95 billion in annual savings by 2022, with departments such as public works maintenance seeing budget savings of 30%.

Key Protections When Using AI

Technology platforms are designed and deployed through a partnership among AI vendors, city officials and their constituents. Everyone should be familiar with the safeguards already built into these systems.

  • Privacy: Municipal agencies are legally bound by privacy settings that limit how long video can be archived. Some projects, such as pedestrian-safety monitoring on school campuses, never save their video at all; the footage is deleted once metrics are extracted from the platform’s analysis. Every AI vendor can write code that blurs faces, private-property landmarks and addresses, anonymizing the collected data in real time while still tracking the larger patterns that drive actionable decisions (see the blurring sketch after this list).
  • Accountability: Government video-retention policies include the use of audit logs. Any official who seeks access to data archives must stipulate their reasons and verify their credentials under federal guidelines developed by the FBI’s Criminal Justice Information Services (CJIS) division.
  • Verification: As healthcare specialists know, AI’s deep learning neural networks can identify objects and visual irregularities with great nuance and precision. But to understand context, algorithms still need more training on diverse datasets, analyzed by powerful computing hardware. For now, human intervention must remain part of the decision-making loop: department managers should require software engineers and partners to pull random data samples and verify the system’s findings.
  • Transparency: City officials can choose the “confidence” level an algorithm must reach before its prediction or assessment triggers action, a threshold vendors can easily build into the product. Studies of AI’s embedded bias reveal instances where uncatalogued data is treated as legitimate. Instead, software teams can program algorithms to rank predictions by level of uncertainty, enabling operators to take no action, override the prediction or log an error (a minimal routing sketch follows this list). This is how engineers open up AI’s so-called black box to human inspection.
  • Consent: Given the importance of the European Union market, the EU’s 2016 General Data Protection Regulation (GDPR) has become a de facto privacy standard for many U.S.-based AI vendors. City managers should confirm their platforms contain intuitive, one-click options for users to opt in or out of sharing their data. Vendors should also provide detailed contingency plans and security measures for handling data loss or cyberthreats.
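
The real-time blurring mentioned in the Privacy bullet can be prototyped in a few lines. Below is a minimal sketch using OpenCV’s stock Haar cascade face detector; the detector choice and blur kernel are illustrative assumptions, and production systems rely on stronger detectors:

```python
import cv2

# Minimal sketch of real-time face anonymization, as referenced in the
# Privacy bullet. The stock Haar cascade and blur kernel are illustrative
# assumptions; production systems use stronger detectors.

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def anonymize(frame):
    """Blur every detected face in a BGR frame before it is stored."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

# Usage: capture a frame, anonymize it, then run analytics only on the
# blurred copy so raw faces are never archived.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    safe_frame = anonymize(frame)
cap.release()
```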
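
And the uncertainty ranking from the Transparency bullet amounts to simple routing logic. This sketch assumes hypothetical thresholds that city officials, not the vendor, would set:

```python
# Minimal sketch of the Transparency bullet: route each prediction by its
# confidence score. The thresholds are illustrative and would be set by
# city officials in configuration, not hard-coded by the vendor.

ACT_THRESHOLD = 0.90     # above this, the system may act automatically
REVIEW_THRESHOLD = 0.60  # between the two, a human operator decides

def route(prediction: str, confidence: float) -> str:
    if confidence >= ACT_THRESHOLD:
        return f"act: {prediction}"
    if confidence >= REVIEW_THRESHOLD:
        return f"operator review: {prediction} ({confidence:.0%})"
    return f"log only, no action: {prediction} ({confidence:.0%})"

for pred, conf in [("stalled vehicle", 0.97),
                   ("pedestrian in roadway", 0.72),
                   ("unrecognized object", 0.35)]:
    print(route(pred, conf))
```

Keeping those thresholds in operator-visible configuration, rather than buried inside the model, is what opens the black box to inspection.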

Many of these safeguards depend on industry experts following best practices. But the successful adoption of AI will require an engaged partnership among technologists, officials and the public. Vendors need to build more privacy protections into their products, and we should be educating the public on the robust guardrails that already exist. Together, we can steer the growth of AI toward greater accuracy and accountability, enabling everyone to reap the benefits.
