# How Two Engineers Ship Like a Team of 15 With AI Agents
## Overview
Engineers Kieran Klaassen and Nityesh Agarwal demonstrate how strategic AI agent workflows enable a small team to achieve outsized productivity. In one week, they shipped six features, five bug fixes, and three infrastructure updates by designing systematic processes where each task compounds the next.
## Key Workflow: A Prompt That Writes Prompts
The engineers developed a meta-approach using Anthropic’s Prompt Improver to create a custom Claude Code command that transforms rough feature ideas into detailed GitHub issues. Each issue includes:
- Clear problem explanation
- Proposed solution
- Technical implementation details
- Step-by-step execution plan
- Relevant existing code and best practices
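The article doesn't share the team's actual command, but in Claude Code a custom slash command is just a markdown prompt file in `.claude/commands/`, with `$ARGUMENTS` standing in for whatever text follows the command. A sketch of what an issue-writing command like theirs might look like (the filename and wording here are illustrative, not Kieran and Nityesh's real prompt):

```markdown
<!-- .claude/commands/write-issue.md — hypothetical example, not the team's actual command -->
Take the rough feature idea in $ARGUMENTS and draft a GitHub issue containing:

1. A clear explanation of the problem
2. A proposed solution
3. Technical implementation details
4. A step-by-step execution plan
5. Pointers to relevant existing code and applicable best practices

Search the codebase for related files before writing the technical sections.
Present the draft for review before creating the issue.
```

With this file in place, a rough idea typed as `/write-issue let users archive old drafts` would expand into a full, structured issue — the "prompt that writes prompts" pattern the engineers describe.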
“Unlike Cursor, which is made to code,” Kieran explains, Claude Code reduces friction for thinking through problems before jumping into execution.
## Critical Mental Model: Fix Problems Early
Drawing from Andy Grove’s High Output Management, Nityesh emphasizes catching issues during planning phases when stakes remain low. “There are chances that Claude’s plan wasn’t the direction you wanted to go, and you want to catch that before you ask Claude to go and implement.”
This prevents costly rework after code generation begins.
## AI Coding Assistants Ranked
**Top tier:**
- Claude Code (clear winner for research, workflows, and comprehensive capabilities)
- Amp (praised for ergonomic design)
- Friday (opinionated workflows that function well)
**Lower tier:**
- Windsurf (rapidly declining without Claude 4 access)
- GitHub Copilot (ranked last, though newer agentic features untested)
## Related Tools
The team uses Monologue, Every’s voice-to-text tool, to eliminate typing friction throughout workflows.