When a major lab says it wants to “scale AI for everyone,” most people latch onto the numbers: funding, users, model capability curves. Those numbers explain attention. They don’t explain what happens inside a city. A city isn’t an app store. It’s a machine that runs every day, under stress, in public. The question isn’t whether the model is smart. It’s whether city-level supply is stable and the institutional interfaces are aligned.

Compute is like electricity. What matters isn’t voltage. It’s whether you dare to wire it into the production line and trust it not to trip during peak load. Cities have natural peaks: holidays, extreme weather, incidents, major events. Each one pushes hotlines, service windows, and emergency coordination toward the edge. A brilliant model can’t outsmart supply-side wobble. What citizens experience is latency, failures, and “please call again.” They won’t care which model you used. They’ll remember the day the system didn’t hold.

Data is like water. The pain isn’t whether there’s a well. It’s whether the source is clean, the pipes are connected, and the pressure stays stable. City data often exists, yet remains unusable. Unusable doesn’t mean missing fields. It means missing credibility, provenance, and context. Where did this record come from, who changed it, when did it take effect, can it be used as an official basis, will it survive an audit—those are the real gates. The stronger the model, the easier it is to turn noisy inputs into plausible answers, and to dissolve accountability into fog.

Services are like roads. Roads don’t matter because the asphalt is new. They matter because vehicles can reliably reach the destination under rules. City services aren’t hard because answering is hard. They’re hard because doing is hard. That’s why once AI enters real government workflows, the first friction isn’t language. It’s interfaces. Pure inquiry can get by on generated text. The moment you touch record changes, applications, approvals, tickets, or dispatch, you are back at institutional granularity: where the permission boundary sits, which actions require human review, which can execute automatically, how actions are logged, explained, and reversed. Models generate. Cities need execution with responsibility.
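That boundary can be written down. A sketch, assuming hypothetical action names and risk tiers (none of this is drawn from a real government system): every model-proposed action is routed through a gate that executes it, pauses it for human review, or rejects it outright, and every outcome lands in a log.

```python
from typing import Callable

# Hypothetical risk tiers: which actions may run automatically,
# which must pause for a human decision.
AUTO_ALLOWED = {"status_query", "appointment_booking"}
REVIEW_REQUIRED = {"record_change", "permit_approval", "dispatch"}

audit_log: list[dict] = []  # every action leaves a replayable entry


def execute(action: str, payload: dict,
            approver: Callable[[str, dict], bool]) -> str:
    """Route a model-proposed action through the institutional gate."""
    if action in AUTO_ALLOWED:
        outcome = "executed"
    elif action in REVIEW_REQUIRED:
        outcome = "executed" if approver(action, payload) else "rejected"
    else:
        outcome = "rejected"  # unknown actions never run
    audit_log.append({"action": action, "payload": payload, "outcome": outcome})
    return outcome
```

The model proposes; the gate disposes. Note that the rejection path is logged just like the execution path, because “explained and reversed” requires a record of what was refused, not only of what ran.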

Governance is like traffic rules. Rules aren’t there to look serious. They exist so dense systems don’t collapse. Cross-department collaboration isn’t blocked by unwillingness. It’s blocked by risk that cannot be carried. When AI crosses boundaries, risk crosses with it. That makes permissions and audit hard constraints, not “compliance extras.” Without unified identity, least privilege, traceable calls, and an evidence chain, an AI capability will never move from a pilot in one bureau to a citywide norm. It stays in demos and small circles because no one will underwrite the uncontrolled spillover risk.
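“Evidence chain” can be more than a phrase. A minimal sketch of tamper-evident logging, with invented actor and call names: each audit entry hashes its predecessor, so the record of who accessed what can be replayed and verified later, and a retroactive edit anywhere breaks verification.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash before the first entry


def append_evidence(chain: list[dict], actor: str, call: str, decision: str) -> dict:
    """Append a tamper-evident entry: each record commits to its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = {"actor": actor, "call": call, "decision": decision, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)
    return body


def verify(chain: list[dict]) -> bool:
    """Replay the chain; any edited entry or broken link fails verification."""
    prev = GENESIS
    for entry in chain:
        body = {k: entry[k] for k in ("actor", "call", "decision", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

This is the difference between a log and an evidence chain: the former records what happened, the latter makes it expensive to quietly rewrite what happened.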

That’s why I treat city-scale AI as infrastructure work, not a feature upgrade. It forces three chains to be rebuilt. The first is data usability and credibility, separating what can serve as official basis from what is merely displayable, and making citations, provenance, versions, and timeliness basic capabilities. The second is cross-department permissions and audit, not as policy but as replayable records of each access, call, and decision path. The third is delivery: turning pilots into norms requires engineering discipline and operations craft—the invisible work that makes systems stable, failures predictable, upgrades controlled.

So when a lab says its product is getting faster, steadier, safer at scale, a city should read a deeper signal: scale turns infrastructure into competitive advantage. Whoever can connect compute supply, data governance, service execution, and audit responsibility into a stable pipe will make AI feel native to the city. Models will iterate. Narratives will rotate. The pipe, once built, becomes durable productivity.

Reference: OpenAI “Scaling AI for Everyone” https://openai.com/zh-Hans-CN/index/scaling-ai-for-everyone/