How I Run Three AI Models in Parallel Without Losing My Mind
**By Katie Parrott | Every**
Core Argument
Katie Parrott describes a fundamental shift in how knowledge workers operate: rather than choosing between being a maker (requiring deep focus) or a manager (coordinating others), AI has created a hybrid role she calls “model manager”—someone who strategically allocates attention across multiple AI processes running simultaneously.
Key Framework: The Allocation Economy
Parrott references her own concept of the “allocation economy,” where value derives from deploying attention strategically rather than diving deeply into single tasks. She notes: “The old paradigm assumed you were either building or coordinating—never both at once. AI breaks that assumption.”
This challenges Paul Graham’s 2009 “Maker’s Schedule, Manager’s Schedule” framework, which treated these roles as incompatible. AI enables a different working pattern entirely.
Real-World Workflow
On productive days, Parrott:
- Sends ChatGPT a transcript to extract quotes and structure
- Uses voice dictation (Monologue) for revisions while waiting
- Has Claude analyze published pieces for style patterns
- Cycles between tasks as models process
The result: “My actual writing time has dropped 40 percent, but my weekly output has tripled.”
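The cycling pattern above — launching several model tasks at once, then doing other work while they process — can be sketched with Python's asyncio. This is a minimal illustration, not Parrott's actual tooling: the function names, delays, and return values are all hypothetical stand-ins for real API calls.

```python
import asyncio

async def extract_quotes(transcript: str) -> str:
    # Hypothetical stand-in for a ChatGPT call that pulls quotes
    # and structure out of a transcript.
    await asyncio.sleep(0.01)  # simulated model latency
    return f"quotes from {len(transcript)}-char transcript"

async def analyze_style(article: str) -> str:
    # Hypothetical stand-in for a Claude call that analyzes
    # style patterns in a published piece.
    await asyncio.sleep(0.01)
    return f"style notes for '{article}'"

async def main() -> list[str]:
    # Kick off both model tasks immediately; they run concurrently.
    tasks = [
        asyncio.create_task(extract_quotes("interview transcript text")),
        asyncio.create_task(analyze_style("last week's essay")),
    ]
    # ...in Parrott's workflow, voice-dictated revisions happen here,
    # while the models are still processing...
    return await asyncio.gather(*tasks)  # collect results in order

results = asyncio.run(main())
```

The key design point is that the human's attention is the scarce resource: the `create_task` calls return immediately, so the wait time between dispatch and `gather` is free for other work.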
Managing Context Switching Costs
Despite gains, context switching creates significant overhead. Parrott’s four practical boundaries:
- Compartmentalize rigorously — Separate chats per project preserve context
- Leave breadcrumbs — One-line notes about next steps prevent lost threads
- Limit active tasks — Never exceed three simultaneous processes
- Capture immediately — Export finished work to permanent storage before continuing
The Paradox
This workflow enables solo operation at small-team scale but raises questions about sustainability and whether increased output justifies the cognitive load.