Brief #86
Standardization is creating unexpected security and distribution surfaces in AI systems. MCP adoption shows that clarity about context boundaries isn't just an engineering optimization; it is now both a security requirement and a competitive distribution channel.
Training Data Recency Creates Tool Extinction Events
Tool relevance in LLM outputs depends on content freshness in training data, not product quality. Redis dropped from a 93% to a 29% recommendation rate between Claude versions, not because the product got worse, but because it stopped generating fresh training signal.
Research showing Prisma/Redis 'extinction events' (93%→29%) caused by training data recency, not functionality. Tools that ship frequently and generate documentation stay visible; tools that go quiet disappear from recommendations.
Practitioners observing they must explicitly specify libraries/frameworks to override Claude's default choices. The need for this override confirms that LLM preferences reflect training data patterns, not objective suitability.
Standardized Protocols Create Discoverable Attack Surface
MCP's success as a standard makes AI infrastructure predictably discoverable. Teams that treat MCP endpoints casually expose their entire context surface (data sources, tool capabilities, workflow patterns) to reconnaissance and exploitation.
Bitsight discovered publicly exposed MCP endpoints revealing complete AI system architecture. Unlike proprietary integrations that obscure endpoints, MCP's standardization makes infrastructure predictable and discoverable via DNS/docs.
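Why standardization helps a scanner: MCP's JSON-RPC methods are the same on every server, so a `tools/list` call against an unauthenticated endpoint enumerates its capabilities with zero prior knowledge of the target. The sketch below (assuming a hypothetical leaked response; the tool names and data are invented) shows how much a single reply reveals:

```python
import json

# The standardized JSON-RPC request every MCP client sends; an attacker
# sends exactly the same bytes to any discovered endpoint.
TOOLS_LIST_REQUEST = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

def summarize_exposure(tools_list_response: dict) -> list[str]:
    """What a tools/list reply leaks: the name, description, and input
    schema of every tool the AI system can invoke."""
    leaked = []
    for tool in tools_list_response.get("result", {}).get("tools", []):
        params = list(tool.get("inputSchema", {}).get("properties", {}))
        leaked.append(
            f"{tool['name']}: {tool.get('description', '')} "
            f"(args: {', '.join(params)})"
        )
    return leaked

# Hypothetical reply from an openly exposed endpoint:
sample = {
    "jsonrpc": "2.0", "id": 1,
    "result": {"tools": [{
        "name": "query_customer_db",
        "description": "Run SQL against the customer database",
        "inputSchema": {"type": "object",
                        "properties": {"sql": {"type": "string"}}},
    }]},
}
for line in summarize_exposure(sample):
    print(line)
```

A proprietary integration would hide both the transport and the method names; here the only secret is the hostname.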
Infrastructure Constraints Beat Prompt-Based Control
Reliable agent behavior comes from execution boundaries (containers, resource limits, timeouts) rather than sophisticated prompt engineering. Sandboxing at infrastructure level is more trustworthy than trying to control LLM decisions through context.
Author built effective coding agent with minimal context engineering but strong containerization. Key insight: 'agent works great even if you put little thought into context management' because execution boundaries (container mounts, systemd-run limits) prevent damage.
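The pattern above can be sketched as a wrapper that encloses every agent-issued command in OS-level limits. This is a minimal illustration, not the author's implementation: it uses `systemd-run` properties (`MemoryMax`, `CPUQuota`, `RuntimeMaxSec`) so the ceiling holds regardless of what the LLM decides to run, with a caller-side timeout as a second layer:

```python
import subprocess

def sandboxed_command(cmd: list[str], *, mem_max: str = "2G",
                      cpu_quota: str = "50%",
                      timeout_s: int = 120) -> list[str]:
    """Wrap a command in systemd-run resource limits. The constraints
    are enforced by the kernel/cgroups, not by the prompt."""
    return [
        "systemd-run", "--user", "--scope", "--quiet",
        "-p", f"MemoryMax={mem_max}",
        "-p", f"CPUQuota={cpu_quota}",
        "-p", f"RuntimeMaxSec={timeout_s}",
        *cmd,
    ]

def run_agent_step(cmd: list[str]) -> subprocess.CompletedProcess:
    # Belt and braces: a subprocess timeout in addition to RuntimeMaxSec.
    return subprocess.run(sandboxed_command(cmd), capture_output=True,
                          text=True, timeout=180)
```

The design choice mirrors the brief's point: the prompt can stay simple because a runaway `rm`, fork bomb, or memory blowup is bounded at the infrastructure layer either way.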
AI Adoption Without Boundaries Creates Work Multiplication
People using AI are working more hours, not fewer. Without explicit problem boundaries and task definitions, AI tools create scope creep, coordination overhead, and attention fragmentation that exceed productivity gains.
Research finding that AI adoption increases work intensity: knowledge gaps filled by AI → task expansion; easy prompting → boundary erosion; parallel AI threads → attention fragmentation. Lack of clarity about problem scope causes work multiplication.
Two-Stage Agent Filtering Outperforms Single Model
Fast exploration with low-latency models (Kimi) feeding high-capability validation (Claude) beats using one model for everything. Context handoff between specialized agents compounds effectiveness better than trying to optimize one agent.
DHH using Kimi K2.5 for fast iteration, escalating to Claude Code for validation/'second opinion.' The workflow explicitly separates speed (exploration) from confidence (refinement), with developer orchestrating context handoff.
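The orchestration itself is simple enough to sketch. In this hypothetical skeleton, `explore` would wrap the fast model (e.g. Kimi) and `validate` the strong one (e.g. Claude); both are injected as plain callables here, with stubs standing in for real API calls:

```python
from typing import Callable, Optional

def two_stage(task: str,
              explore: Callable[[str], list[str]],
              validate: Callable[[str, str], bool]) -> Optional[str]:
    """Fan out cheap candidates with the fast model, then spend the
    expensive model only on validating survivors."""
    for candidate in explore(task):
        if validate(task, candidate):
            return candidate  # first candidate the strong model accepts
    return None  # nothing survived; escalate to the human in the loop

# Usage with stub models (real calls would hit the respective APIs):
fast = lambda task: [f"draft-{i}" for i in range(3)]
strong = lambda task, c: c.endswith("2")
print(two_stage("fix flaky test", fast, strong))  # prints draft-2
```

The developer's role in the source workflow is the `validate` gate plus the final `None` branch: speed and confidence are separated by construction rather than tuned inside one model's context.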