prompt engineering
583 articles · 15 co-occurring · 10 contradictions · 12 briefs
Over the past couple of years, building applications with large language models (LLMs) has shifted focus from prompt engineering to context engineering. In early LLM applications, users spent time carefully crafting the wording of individual prompts.
[direct] "The prompt was reasonable, sometimes even handcrafted by a 'prompt expert'. The answer was still wrong, outdated, or dangerously confident." — Article argues that prompt engineering alone cannot overcome context-layer failures; challenges prompt-centric approaches.
[STRONG] "Gartner declared that 'context engineering is in, and prompt engineering is out,' urging AI leaders to prioritize context-aware architectures over clever prompting." — Article explicitly positions context engineering as superseding prompt engineering as the primary approach for enterprise AI in 2026, citing Gartner's position.
Article frames multi-agent as evolution beyond 'single-prompt', implying that decomposition + delegation is superior to monolithic prompts
[strong] "Your prompt is the sticky note you handed them this morning. Important, sure. But the stack of folders on their desk? The CRM they have open? That's what determines whether they actually help the customer or fumble around asking questions the customer already answered." — Article challenges the primacy of prompt engineering, arguing that context (available information) has greater practical impact than initial instructions. This challenges traditional prompt engineering assumptions.
Article explicitly positions prompt engineering as secondary to context engineering. The thesis: no amount of prompt clarity solves context window architecture problems.
Article explicitly positions context engineering as a replacement/evolution of prompt engineering, suggesting prompt engineering is insufficient for production systems
Author argues context engineering is primary and prerequisite; prompt engineering alone on incomplete context yields poor results
Article explicitly positions context engineering as the evolution beyond prompt engineering: 'Prompt work still matters, but it is only one layer.' This reframes the problem.
Article explicitly positions context engineering as movement AWAY from prompt engineering (wording manipulation) toward systematic token management. Marks clear evolution/rejection of prior paradigm.
Article explicitly positions prompt engineering as solving the wrong problem ('None of that addresses the real issue'). Claims context engineering will supersede it as the focus of platform engineering.
"Context engineering combines prompt engineering, retrieval-augmented generation (RAG), and multi-agent techniques into one system, instead of using them separately." — Article explicitly names prompt engineering as one component of a unified context-engineering system.
"LLMs do best when given focused prompts: implement one function, fix one bug, add one feature at a time" — Article explicitly demonstrates that effective LLM coding requires carefully scoped, focused prompts.
Article explicitly distinguishes context engineering from prompt engineering as distinct disciplines, arguing CE is the broader/more critical field for production systems.
Article explicitly positions context engineering as distinct from and broader than prompt writing, suggesting prompt engineering is insufficient
"The work of the 'prompt engineer' hasn't become obsolete, but it must evolve into a context [...]" — Article explicitly frames prompt engineering as evolving into a broader context-design discipline.
"Some principals are better at maximizing agentic outcomes than others. Principal characteristics predict performance of agent, suggesting new source of inequality." — Research demonstrates that prompting and delegation skill vary across users ("principals"), predicting agent performance and suggesting a new source of inequality.
beautifully explains Prompt Caching and why it matters to cut costs" — Article explicitly demonstrates prompt caching as a practical cost-reduction technique for AI engineers
"Context Engineering, a formal discipline that transcends simple prompt design to encompass the systematic optimization of information payloads for LLMs" — Survey explicitly positions Context Engineering as a formal discipline that subsumes prompt design.
"best-performing files included: Relevant executable commands in an early section, Code examples over explanations, Set clear boundaries, Specified the stack" — Analysis of 2,500+ agent .md files provides empirical guidance on what makes agent instruction files effective.
"We will evolve from models to systems when it comes to deploying AI for real world impact" — Article explicitly advocates systems-based thinking as the evolution needed for effective AI deployment.
"Keep prompts clear and minimal to avoid contradictory instructions, distracting information and reduce hallucinations." — Article provides direct guidance on prompt clarity and minimalism as a design principle.
"scans the last 30 days on Reddit, X, and the web for any topic and returns prompt patterns + new releases + workflows that work right now" — The tool directly extracts and surfaces prompt patterns from recent community activity.
[DIRECT] "Output Styles modify the system prompt to adapt the agent's behavior" — Article demonstrates practical use of system prompt modification to change Claude Code's interaction style and behavior.
"Add "cache_control": {"type": "ephemeral"} and get up to 90% off cached reads and 85% faster responses." — Article demonstrates practical implementation of prompt caching with specific API syntax and cost figures.
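The claim above can be sketched as a request body. This is a minimal illustration of the cache_control syntax quoted in the article, not a full client: the model id, token limit, and style-guide content are placeholder assumptions, and the dict mirrors the Anthropic Messages API shape without calling it.

```python
# Sketch: a Messages-API-style request body that marks a large, stable
# system block as cacheable. Repeated calls that share this exact prefix
# can then reuse the cached read instead of reprocessing it.
LONG_STYLE_GUIDE = "Follow the house style guide. " * 200  # stand-in for a big, rarely-changing prefix

def build_cached_request(user_message: str) -> dict:
    return {
        "model": "claude-sonnet-example",  # placeholder model id (assumption)
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": LONG_STYLE_GUIDE,
                # The marker from the article: cache this prefix ephemerally.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_message}],
    }

req = build_cached_request("Summarize ticket #4521.")
```

The key design point is that only the stable prefix carries the cache marker; the per-request user turn stays outside it, so the cache hit survives across different questions.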
"You are an expert support agent that is summarizing customer emails for internal review." — Demonstrates a concrete system prompt with a persona definition for specifying LLM behavior.
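A minimal sketch of how such a persona prompt is typically paired with a user turn. The system-prompt text is quoted from the article; the helper function and the example email are illustrative assumptions.

```python
# Persona-defining system prompt (quoted from the article) placed ahead of
# the user's actual request, in the common chat-message list format.
SYSTEM_PROMPT = (
    "You are an expert support agent that is summarizing customer emails "
    "for internal review."
)

def make_messages(email_body: str) -> list[dict]:
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Summarize this email:\n\n{email_body}"},
    ]

msgs = make_messages("Hi, my order #88 arrived damaged and I'd like a refund.")
```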
"context engineering acknowledges that an LLM by itself knows nothing relevant about a task. Its effectiveness depends on the quality and completeness of the context it receives" — Article explicitly frames context quality as the decisive factor in LLM effectiveness.
"Told CLAUDE to write gerkin scenarios for it. Told it to build a java app based on those scenarios" — Direct demonstration of using Claude to generate code from specifications (Gherkin scenarios).
"deep research is the most underused tool in AI right now... not because it doesn't work, because people can't prompt it" — Article directly identifies prompt quality as the core limitation preventing adoption of deep-research tools.
"Prompt Engineering: 'How to Talk to AI'" — Article explicitly names prompt engineering as the 2023-2024 phase, establishing its place in the AI development timeline.
"Changed one word in your GPT-4 prompt and accuracy dropped 15%?" — Demonstrates empirical evidence that single-word changes in prompts cause measurable accuracy shifts, illustrating prompt sensitivity.
"CLAUDE.md file, skills, and PRD writer I built over 100+ iterations" — Article demonstrates practical prompt engineering by open-sourcing a fully iterated CLAUDE.md system file, showing its structure in detail.
[direct] "the key to writing AI content that sounds human isn't in the prompt" — Article directly challenges the notion that prompt crafting is the primary lever for human-like AI output, positioning prompt wording as secondary.
"A sufficiently detailed spec is code" — Post title asserts that sufficiently detailed specs are code; the author counters that such specs devolve into pseudocode.
"old prompting tricks like chain-of-thought often hurt their performance" — Article directly contradicts the effectiveness of chain-of-thought prompting for reasoning models, citing empirical evidence.
"You didn't write good enough instructions, didn't set up the right memory tool, or didn't parallelize correctly" — Karpathy explicitly identifies instruction quality as a primary failure mode in agentic workflows.
"Claude Code has 37 hidden reactive messages that nudge the agent mid-conversation" — Article demonstrates a specific implementation of system prompts as reactive guidance messages that steer agent behavior.
"The advantage now belongs to whoever can specify the problem precisely." — Article argues that precise problem specification is the new competitive advantage, extending the concept beyond traditional prompt engineering.
"Out of the box it talked too much for what we want, so we tuned it to act more like Codex: a model that goes off, reads, thinks, and comes back with work done." — Demonstrates practical prompt/behavior tuning.
"Research → Plan → Annotate (iteratively) → Todo List → Implement → Feedback & Iterate" — Article explicitly advocates a structured planning workflow that separates planning from code execution, demonstrating the value of planning before implementation.
"how you phrase a prompt and what context you provide can drastically change your model's behavior" — Article explicitly discusses how prompt phrasing directly impacts model behavior, emphasizing the importance of both wording and context.
"Don't issue instructions directly; set the context first. Tell Claude it is not merely a chatbot, but your 'content strategist' and 'system architect'." — Article demonstrates a core prompt engineering principle: setting context and role definitions to shape AI behavior and response patterns.
"carefully engineered rubrics matter a lot" — The article explicitly demonstrates through A/B testing that rubric engineering directly impacts model output quality and user preference (77% preference difference).
"When you enter a command, that prompt gets appended to the current conversation/context and the main agent begins to perform the task." — Article explicitly describes how custom commands work as prompt templates injected into the agent's context.
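The mechanism described above can be sketched in a few lines. This is an illustrative assumption of how such a command layer might work, not the actual implementation: the command names and templates are invented, and the conversation is modeled as a plain message list.

```python
# Sketch: a custom command expands to a stored prompt template, which is
# appended to the running conversation as an ordinary user turn before
# the agent acts on it.
COMMANDS = {
    "/review": "Review the staged diff for bugs, style issues, and missing tests.",
    "/explain": "Explain the selected code to a new team member.",
}

def expand_command(conversation: list[dict], command: str) -> list[dict]:
    prompt = COMMANDS[command]
    # The expanded template is just more context; the agent cannot tell a
    # command apart from a hand-typed message.
    return conversation + [{"role": "user", "content": prompt}]

history = [{"role": "user", "content": "Here is my feature branch."}]
history = expand_command(history, "/review")
```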
"Write a design document. You're going to use this to organize your own thinking, also as an input to the LLM that your prompts refer to." — Article explicitly describes a prompt engineering technique: externalizing design thinking into a document that prompts can reference.
"Context Engineering is a System, Not a prompt String. It's dynamic, task-specific, and format-aware." — Article explicitly positions context engineering as an evolution/extension of prompt engineering.
"But Context Engineering is more than just writing prompts. It's about designing everything around the prompt" — Article explicitly positions context engineering as an evolution beyond prompt engineering.
"Stop memorizing LeetCode. Start learning how to prompt, how to architect, and how to debug AI-generated code." — Directly states that prompting is an essential skill for AI-native developers, positioning it alongside architecture and debugging.
"Here's why it works: The Task step makes the model write, 'Get the car to the car wash.' Once it generates that text, every token that follows is conditioned on it." — Article demonstrates STAR-style prompting, where an explicitly stated task conditions all subsequent generation.
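The conditioning idea above can be sketched as a template. This is a hedged illustration, not the article's exact template: the section wording is an assumption, but it preserves the key ordering, forcing the model to state the Task before any Action or Result tokens are generated.

```python
# Sketch: a STAR-style template. Because the model must emit the Task line
# first, every later token (Action, Result) is conditioned on that explicit
# statement of the goal.
STAR_TEMPLATE = """\
Situation: {situation}

Task: state the goal in one sentence before doing anything else.

Action: list the steps you will take.

Result: carry out the steps and report the outcome."""

prompt = STAR_TEMPLATE.format(situation="The car is covered in mud.")
```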
"Prompt engineering is quietly dying." — Article explicitly argues that prompt engineering as a discipline is becoming obsolete, presenting it as fundamentally flawed.
"LLM reads the code + performance report. LLM diagnoses weaknesses. LLM proposes improvements." — ProFiT demonstrates prompt engineering in action: the LLM receives code and performance data as context and iterates on improvements.
"Before solving, explicitly construct a model of the problem. List: Relevant Entities, State Variables (what changes?), Actions (what is allowed?), Constraints (what is forbidden?)" — Article introduces a model-the-problem-first prompting technique.
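The scaffold above is easy to package as a reusable prefix. The four headings follow the quote; the helper function and example problem are illustrative assumptions.

```python
# Sketch: prepend the article's "model the problem first" scaffold to any
# task, so the model enumerates entities, state, actions, and constraints
# before attempting a solution.
MODEL_FIRST_PREFIX = """\
Before solving, explicitly construct a model of the problem. List:
- Relevant Entities
- State Variables (what changes?)
- Actions (what is allowed?)
- Constraints (what is forbidden?)

Then, and only then, solve the following:
"""

def model_first_prompt(problem: str) -> str:
    return MODEL_FIRST_PREFIX + problem

p = model_first_prompt("Schedule 5 talks into 3 rooms without speaker clashes.")
```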
"Two prompts can look nearly identical on the surface while implying very different outcomes. Humans pick up that difference instantly. Models often don't." — Directly discusses how prompt formulation carries implicit intent that models frequently miss.
Instead of manually searching through your codebase for broken prompts, the system can look at the execution history of a skill, including past runs, failures, feedback, and tool errors, and suggest a revised prompt.
"Working with GPT-5.3 is a test of patience and a good part of the job is making sure I anticipate all possible dumb ways it could interpret my prompt, and write exact words to drift it away from that." — Demonstrates defensive prompt writing: anticipating misinterpretations and wording the prompt to steer the model away from them.
"In 3 lines: who the page is for, what action I want to trigger, which sections are mandatory." — Article demonstrates concrete prompt-structure best practices for an AI design tool, showing how to frame requests compactly.
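The three-line brief above can be sketched as a tiny helper. The field labels and example values are assumptions for illustration, not the article's exact wording.

```python
# Sketch: the three-line page brief from the article, with one line each
# for audience, desired action, and mandatory sections.
def page_brief(audience: str, action: str, sections: list[str]) -> str:
    return (
        f"Audience: {audience}\n"
        f"Desired action: {action}\n"
        f"Mandatory sections: {', '.join(sections)}"
    )

brief = page_brief(
    "first-time visitors comparing pricing plans",
    "start a free trial",
    ["hero", "pricing table", "FAQ"],
)
```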
"A reference trajectory from a successful run is available. Use read_file to review the approach that led to a successful solution. Use it as guidance for which steps to take, but adapt as needed based on the current state." — Demonstrates a prompt that injects a prior successful trajectory as guidance for an agent.