## Notes from 20 March 2026
[[2026-03-19|← Previous note]] ┃ [[2026-03-21|Next note →]]
I read two texts that circle the same question from different angles: what happens when AI makes it easier to produce, but the surrounding system is still optimized for throughput rather than judgment.
One is Tim Requarth’s [essay on science](https://www.persuasion.community/p/the-real-reason-science-is-broken), arguing that AI productivity tools can accelerate individual careers while worsening the collective enterprise because they amplify a broken reward system: the result is more measurable output, but potentially less depth, diversity, and signal.
The other is an [HBR piece](https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it) by Aruna Ranganathan and Xingqi Maggie Ye, based on fieldwork in a tech company, showing how AI doesn’t automatically “save time”; it often **intensifies work** by expanding task scope, blurring boundaries between work and non-work, and increasing multitasking, often voluntarily, before any manager explicitly demands it.
Putting them side by side helped me separate **two levels of the same phenomenon**. Requarth is mostly about incentives and institutional design: if promotion and status track what’s countable, AI will help people generate more of what’s countable, and the system may become more crowded and conservative rather than more exploratory.
Ranganathan and Ye, meanwhile, explain the micro-mechanics that can make that macro dynamic stick: when starting is frictionless and outputs come quickly, people absorb adjacent work, expectations for speed creep upward, and “extra capacity” gets reinvested into more tasks rather than reclaimed as slack.
Together, they read like a warning against treating AI as a magic accelerant: without norms, evaluation criteria, and constraints that protect attention and reward real outcomes, the likely result is not less work or better work by default, but faster motion inside the same incentive structure.
---
Interesting initiatives to take a look at:
- [Journal for AI Generated Papers (JAIGP)](https://jaigp.org/about): an open platform for publishing, reviewing, and iterating on AI-generated research with transparent methods and community feedback. Built in collaboration with [[César Hidalgo]] (Center for Collective Learning, [[University of Toulouse]]).
- [Autonomous Policy Evaluation (APE)](https://ape.socialcatalystlab.org/about): a project of the Social Catalyst Lab at the [[University of Zurich]], led by [[David Yanagizawa-Drott]], that investigates whether AI can help scale empirical policy research. It is designed as an autonomous pipeline that drafts papers, runs replications, revises analyses, and publishes outputs (including code and data), aiming to increase the speed and volume of credible policy evaluation.