Brief #35
The discipline is bifurcating: practitioners are discovering that context architecture (how you structure information flow) matters more than model capability, while vendor narratives still emphasize autonomy and scale. The gap between 'what works' and 'what's marketed' is widening, creating an opening for architectural maturity.
Spec-First Development Prevents Context Drift at Scale
Clear problem specification upfront (spec-first approach) keeps agents aligned across multi-turn interactions. Without it, agents lose coherence as context accumulates, requiring exponentially more iterations despite better models.
Human Layer's team learned that naive back-and-forth prompting fails at scale. Spec-first development maintains alignment by establishing clear problem definition before execution.
Even Opus 4.5 requires 10-100 attempts when the problem definition is unclear. Better models don't reduce iteration count without corresponding improvements in clarity.
Models optimize for surface alignment with stated instructions rather than reconstructing unstated user intent. Explicit intent specification is required to prevent confident but misaligned outputs.
A test-coverage mandate with full-repo context produced immediate value. Clear mandate ('full coverage') + scoped context = effective token use. Without that clarity, tokens are wasted.
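The spec-first idea can be made concrete as a small structure that is rendered ahead of every agent turn, so alignment comes from the spec rather than from accumulated conversation state. This is a minimal sketch; the `TaskSpec` class and its field names are hypothetical, not an API from the source.

```python
from dataclasses import dataclass, field

@dataclass
class TaskSpec:
    # Explicit problem definition, written down before any agent turn.
    problem: str
    constraints: list[str] = field(default_factory=list)
    success_criteria: list[str] = field(default_factory=list)

    def render(self) -> str:
        """Render the spec as a preamble prepended to each turn, so the
        agent's target does not drift as context accumulates."""
        lines = [f"PROBLEM: {self.problem}", "CONSTRAINTS:"]
        lines += [f"- {c}" for c in self.constraints]
        lines.append("SUCCESS CRITERIA:")
        lines += [f"- {s}" for s in self.success_criteria]
        return "\n".join(lines)

spec = TaskSpec(
    problem="Achieve full test coverage for the payments module",
    constraints=["Do not modify production code", "Use pytest"],
    success_criteria=["100% line coverage on payments/", "all tests pass"],
)
print(spec.render())
```

The point of the structure is that 'full coverage' appears as an explicit success criterion the model can be checked against, rather than an unstated intent it must reconstruct.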
Session Branching Compounds Intelligence Through Prompt Refinement
Forking conversation state at validated checkpoints, refining prompts at the root, and replaying across similar tasks preserves learnings that would otherwise reset. Intelligence compounds when failures inform the next iteration's context.
Session-tree branching made updating package docs efficient: branch from a validated state, refine the prompt at the root, and replay across tasks. Each task's failures teach the prompt for the next.
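The branch-refine-replay loop can be sketched as a tree of session nodes, where forking copies the validated conversation state and each replay carries the lesson from the previous failure. The `SessionNode` type and its methods are illustrative assumptions, not a real tool's API.

```python
from dataclasses import dataclass, field

@dataclass
class SessionNode:
    prompt: str                              # prompt applied at this node
    messages: list = field(default_factory=list)   # validated transcript
    children: list = field(default_factory=list)

    def fork(self, refined_prompt: str) -> "SessionNode":
        """Branch from a validated checkpoint: the child inherits the
        transcript so far but starts from a refined prompt."""
        child = SessionNode(prompt=refined_prompt, messages=list(self.messages))
        self.children.append(child)
        return child

root = SessionNode(prompt="Update package docs to the new API")
root.messages.append(("assistant", "validated: module A docs updated"))

# A failure on the first module teaches a refinement, which is then
# replayed across the remaining, similar tasks instead of starting over.
lesson = root.prompt + " (keep code examples runnable; do not touch changelogs)"
for module in ["B", "C", "D"]:
    root.fork(f"{lesson} -- target: module {module}")
```

Because each branch inherits the validated checkpoint, failed attempts refine the root prompt without resetting what has already been learned.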
Transparent Multi-Agent Orchestration Outperforms Opaque Delegation
Multi-agent systems with visible execution state, shared file-based communication, and mid-execution course-correction beat fire-and-forget sub-agent architectures. Visibility into agent reasoning is more valuable than abstraction.
A supervisor Claude with visible tmux panes and a shared file system (.claude/teams) for inter-agent communication was 'magical' precisely because the user could observe and course-correct mid-execution.
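The transparency property comes from state living in plain files rather than inside an opaque sub-agent. A minimal sketch of that shared-directory pattern, assuming a hypothetical layout loosely modeled on the `.claude/teams` idea (the file names and functions here are invented for illustration):

```python
import json
import pathlib
import tempfile

# Shared directory where the supervisor writes task files and workers
# write their status back, so a human can inspect or edit any file
# mid-execution to course-correct.
team_dir = pathlib.Path(tempfile.mkdtemp()) / "teams"
team_dir.mkdir(parents=True)

def assign(agent: str, task: str) -> pathlib.Path:
    """Supervisor side: drop a task file for a named worker agent."""
    path = team_dir / f"{agent}.task.json"
    path.write_text(json.dumps({"task": task, "status": "pending"}))
    return path

def worker_step(path: pathlib.Path) -> None:
    """Worker side: pick up the task and record its status in place."""
    state = json.loads(path.read_text())
    state["status"] = "done"   # a real worker would execute the task here
    path.write_text(json.dumps(state))

task_file = assign("tester", "run the integration suite")
# Nothing is hidden: before the worker runs, the supervisor (or a human
# watching the pane) can read or rewrite the task file directly.
worker_step(task_file)
print(json.loads(task_file.read_text())["status"])  # done
```

The design choice is the inverse of fire-and-forget delegation: every intermediate state is a readable, editable file, which is what makes mid-execution correction possible.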
Domain Literacy Bottlenecks AI Effectiveness More Than Prompting Skill
Understanding domain constraints (what can be parallelized, architectural tradeoffs, testing requirements) determines AI output quality more than prompt engineering. Non-experts produce poor results not because they prompt badly, but because they can't recognize what context is missing.
Mollick observes that non-programmers need mental models of coding constraints to be effective with AI coding tools. The bottleneck isn't prompt syntax; it's conceptual understanding of the domain.
Information Architecture Beats Raw Context Volume
Structured, API-based context (DOM selectors, file paths) outperforms unstructured context (vision, raw text dumps) for speed and cost. How you organize information for the model matters more than how much you provide.
Playwright MCP (API-based, structured context) outperformed a Chrome extension (vision-based, unstructured) on speed and cost. The architectural choice of how information flows to Claude determined efficiency.
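A toy illustration of the two context shapes makes the cost difference obvious: a structured, selector-based summary hands the model only actionable handles, while the raw serialized page forces it to re-derive them from thousands of characters. The data here is fabricated for illustration and does not reproduce the source's measurements.

```python
# Unstructured context: the full serialized page, as a vision- or
# text-dump approach would present it (mostly irrelevant markup).
raw_page = (
    "<html><body>"
    + "<div class='ad'>buy now</div>" * 200
    + "<button id='submit'>Submit</button></body></html>"
)

# Structured context: only the actionable elements, keyed by selector,
# as an API-based approach like Playwright MCP would surface them.
structured_context = [
    {"selector": "#submit", "role": "button", "text": "Submit"},
]

# The structured form is orders of magnitude smaller for the same action.
print(len(raw_page))                  # thousands of characters
print(len(str(structured_context)))  # under a hundred characters
```

Fewer input tokens per action is where the speed and cost advantage of structured context comes from.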