## Notes from 15 April 2026
[[2026-04-14|← Previous note]] ┃ [[2026-04-16|Next note →]]
I found this interesting text from _[Datapolis](https://fermonge.substack.com/p/governments-may-not-be-interested)_ by [[Fernando Fernandez-Monge]], published on February 24, 2026. It explores the idea that governments often focus on how they can use AI (the supply side) while ignoring how citizens will use AI to interact with them (the demand side). Fernandez-Monge argues that even if a government chooses not to adopt AI internally, it will still have to manage an "[[AI and Civil Service|AI-shaped administrative reality]]" (amazing concept!) created by its citizens.
The text suggests that technology doesn't just create efficiency; it also uncovers unmet needs, which can actually increase administrators' workload. When citizens start using AI agents to navigate bureaucracy, it creates a "demand-side shock" that public agencies cannot ignore. The essay breaks this shift into three levels:
- **Level 1: Form-filling agents**
- The AI acts as a proxy to navigate web forms and portals. The user provides a high-level intent, like "renew my license," and the agent handles the clicks, fields, and document uploads.
- **Level 2: Discovery agents**
- The AI helps citizens find services or benefits they didn't know existed. It maps out permits, subsidies, or regulatory obligations across different agencies to solve the "information bottleneck."
- **Level 3: Life-event agents**
- A persistent AI profile that anticipates needs based on life changes. For example, it might automatically verify pension entitlements or alert a user to a regulatory deadline based on their specific circumstances.
The piece concludes by noting that these "agentic" interactions will likely arrive from the outside in. This forces governments to make practical decisions: whether to block automated bots, how to address the equity gap for citizens without AI access, and how to stress-test digital infrastructure against a surge of automated traffic.