
human-AI collaboration

115 articles · 15 co-occurring · 7 contradictions · 7 briefs


@GaryMarcus: wow. GenAI is definitely not living up to the hype, for most people.

[STRONG] "Workers avoiding AI entirely have figured out the tool doesn't work well enough for their tasks, or haven't been given the training or incentive to make it work. Neither group is irrational." — Workers are making rational decisions to opt out of AI collaboration because tools don't meet their needs or lack proper enablement. This demonstrates that effective human-AI collaboration requires more than tool deployment—it requires proper training, task fit, and incentive alignment.

@bakkermichiel: 🚨📄 New preprint! We find the "boiling the frog" equivalent of AI use. In a ...

[STRONG] "after just 10 min of AI assistance people perform worse and give up more often than those who never used AI" — Empirical RCT evidence showing that AI assistance, contrary to common assumptions, can degrade human task performance and increase task abandonment within short timeframes.

@petergyang: Ok all the Claude Code hype aside, I still use Google Docs/Sheets for knowled...

[INFERRED] "Local md files for specs and roadmaps don't solve the collaboration problem—unless the answer is storing them in a GitHub repo and having teammates submit PRs to change a spec?" — Author argues Claude Code does not adequately address real team collaboration needs for knowledge work; current solutions either remain siloed (local files) or require developer workflow overhead (GitHub PRs).

@alxfazio: writing is thinking. before llms, a lot of thoughts never made it into words....

[STRONG] "outsourcing the skeptical part to the chatbot and then mistaking fluent output for solid judgment" — Article warns against inappropriate delegation of critical judgment to LLMs in collaborative workflows. Challenges the assumption that human-AI partnership automatically produces sound decision-making.

Struggling engineers identify with the craft. Thriving engineers identify more...

[STRONG] "For that engineer, code was his identity... Now, no one walks up to him because the AI can answer it." — Article challenges the narrative that AI collaboration enhances engineer value; instead it shows how AI-driven answers can diminish an expert's visibility and social contribution.

@no_earthquake: let's see the bracelet

[DIRECT] "it's all claude. he does all my work. if anyone has questions i just pipe it to claude" — Article presents problematic collaboration pattern where human becomes passive conduit, contradicting healthy collaborative design where humans retain agency and decision-making.

@alexhillman: this is an extremely good take.

[STRONG] "I really don't get all these @openclaw "mission control" and dashboard projects. It defeats the purpose." — Article challenges dashboard-based human-AI interaction patterns, arguing they undermine agent value. Provides counterargument to control-centric collaboration.

Weekly activity: 2026-W15: 537 · 2026-W14: 2

"Nothing runs until you say so." — Explicit statement that automated suggestions require human approval before execution, exemplifying human-in-the-loop AI control patterns.
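The pattern above ("nothing runs until you say so") can be sketched as a minimal approval gate: the agent proposes an action, and nothing executes until a human (or a human-authored policy) approves it. All names here are illustrative, not taken from any quoted tool.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ProposedAction:
    """An action the agent wants to take, held until a human approves it."""
    description: str
    execute: Callable[[], str]

def run_with_approval(action: ProposedAction,
                      approve: Callable[[str], bool]) -> Optional[str]:
    # Nothing runs until the human says so.
    if approve(action.description):
        return action.execute()
    return None  # rejected: the action is never executed

# Example: a policy that approves only read-only actions.
result = run_with_approval(
    ProposedAction("list files", lambda: "README.md"),
    approve=lambda desc: desc.startswith("list"),
)
```

In an interactive tool, `approve` would prompt the user instead of applying a policy; the structural point is that execution is gated behind the approval call.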

LangGraph example_of

"LangGraph agents seamlessly collaborate with humans by writing drafts for review and awaiting approval before acting" — LangGraph demonstrates human-in-the-loop collaboration through built-in review and approval steps.

"two humans driving two claudes on two workstations iterating on a single RPI design discussion" — Direct demonstration of humans and AI agents (Claudes) working together in real time on a shared task.

"It's a live collaborative document editor where humans and AI agents work together in the same doc." — Proof demonstrates real-time co-authoring between humans and AI agents in a single document interface.

"It's amazing that machines can do the grunt work of extracting claims and organizing highlights, without costing me the chance to learn from my notes." — Real-world example of AI handling administrative work without displacing human learning.

"the agent is just there to help blow through the codebase, optionally challenge your madness, and do the typing for you" — Article demonstrates a specific collaborative workflow where the human drives decisions and the agent executes.

"AIs are good assistants for issues like this; but this is going to be a very slow slog, and requires a human with significant insight into the system and software engineering issues to direct." — Article argues that AI assistance helps here, but cannot replace expert human direction on hard systems problems.

"I supervise these trials, but let Codex run free to conduct the trials and make trade off decisions." — Demonstrates a concrete human-AI partnership where the human provides oversight while the AI agent independently conducts trials and makes trade-off decisions.

"you will be able to use prompts that inquire about or update drafts. You can ask Claude for ideas of what you can do; some ideas for prompts to try are things like: Assign the tag "test" to the current draft."

"The mise en place for any task is almost always a mix of research and "what do we already know or have." The newsletter now takes about 20 mins of actual human writing." — Illustrates a complementary division of labor: AI handles research and preparation while the human does the final writing.

"its been very helpful internally, and we think this will help bridge the ai - human collaboration going forward" — Agent Trace is explicitly framed as a bridge for AI-human collaboration.

"Review every AI-generated function as if you are a Lead Engineer. If you can't spot the potential security flaw or inefficiency, you aren't ready to use it." — Article advocates mandatory human expert review of AI-generated code.

"Codex and I iterated on a solution that identifies specs that could be improved" — Direct demonstration of back-and-forth iteration between human developer and Codex AI to refine solution quality.


"i think my role flipped from "writing and fixing code" to "managing AI tools"" — Reveals a novel dimension of human-AI collaboration: as models improve, the developer role shifts from code production to AI management.

Cowork brings Claude right to your desktop as an assistant that can actually manage your files and handle tasks for you. For instance, you can ask it to clean up your messy desktop or organize all your files.

[HIGH] "I reviewed every line of code manually and constantly nudged the agents in the right direction." — Concrete example of human-in-the-loop validation where human review and iterative direction ensure quality.

"I just post an issue on the tracker and @nicopreme sends a PR a few hours later" — Demonstrates a practical human-AI workflow where the developer posts issues and the AI agent autonomously creates pull requests.

"The client doesn't hesitate to shoot down AI-generated concepts they don't like. No wasted time worrying about hurt feelings." — Illustrates a key advantage of human-AI collaboration: AI removes social friction from critique.

"I still call shots, but it helps me make my decisions informed. Good example of how I work with my Claude code assistant to see if new ideas and open source projects make sense for us" — Author explicitly retains decision authority while using AI for informed input.

"AI can make work faster, but a fear is that relying on it may make it harder to learn new skills on the job." — Article reveals that the effectiveness of human-AI teamwork depends critically on how reliance on AI affects on-the-job skill development.

"This layer resists automation because it depends on framing, taste, and deep conceptual synthesis rather than procedural construction." — Article emphasizes that research work requires human judgment, taste, and synthesis that resist automation.

"AI can handle days, weeks, or even months of work, but it still needs humans involved" — Directly articulates the complementary nature of AI and human involvement; argues against autonomous-only models.

"the risk is higher if you're moving humans a little further out of the loop" — Explicitly addresses the risk implications of autonomous agents operating with reduced human oversight.

"now that coding is 80% automated, the limiting factor is my ability to design, comprehend, and safely change systems" — Reveals a novel insight: as automation handles code generation, human roles shift toward design, comprehension, and safe system change.

[DIRECT] "agents are generally only as effective as the context they're provided, the tools they have access to, the human's ability to keep them on track or review their work, and incorporate that work." — Frames human skill (context, tooling, steering, review) as the binding constraint on agent effectiveness.

"MCP also introduced human-in-the-loop capabilities for humans to provide additional data and approve execution." — Article provides concrete evidence that MCP implements human approval gates in automated workflows.

"Developers can also insert human checkpoints into a workflow, allowing for manual review or approval before moving forward." — LangGraph provides explicit checkpoint insertion for human-in-the-loop validation.
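The checkpoint-insertion idea can be shown without any particular framework: a workflow runs its steps in order but pauses at named checkpoints, handing state to a human reviewer before continuing. This is a library-free schematic of the pattern, not LangGraph's actual API; all names are illustrative.

```python
def run_workflow(steps, state, checkpoints, review):
    """Run (name, step) pairs in order, pausing at checkpoint names.

    `review(name, state)` lets a human inspect and optionally edit the
    state before the checkpointed step runs; raising inside it aborts.
    """
    for name, step in steps:
        if name in checkpoints:
            state = review(name, state)  # human sign-off before this step
        state = step(state)
    return state

steps = [
    ("draft", lambda s: s + ["draft written"]),
    ("send",  lambda s: s + ["sent"]),
]
# Insert a human checkpoint before "send" so the draft gets approved first.
log = run_workflow(steps, [], {"send"},
                   review=lambda name, s: s + [f"approved:{name}"])
```

In a real system the review callback would block on user input and the paused state would be persisted, so the workflow can resume later; the control flow is the same.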

"your product design can leverage a "staging pattern" and ask users to review and edit the generated Cover Letter for factual accuracy and tone, rather than directly sending an AI-generated cover letter." — Describes a staging pattern in which users review and edit AI output before it is sent.
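The staging pattern reduces to generate, human edit, then send, where only the edited draft (never the raw model output) leaves the product. A minimal sketch, with all function names illustrative:

```python
def staged_send(generate, human_edit, send, request):
    """Staging pattern: the user reviews/edits the AI draft before sending."""
    draft = generate(request)     # AI produces a draft
    final = human_edit(draft)     # user fixes facts and tone
    return send(final)            # only the reviewed version goes out

sent = []
out = staged_send(
    generate=lambda req: f"Dear {req['company']}, I am an expert in everything.",
    human_edit=lambda d: d.replace("an expert in everything", "a backend engineer"),
    send=lambda text: (sent.append(text), text)[-1],
    request={"company": "Acme"},
)
```

The design choice is that `send` takes `final`, not `draft`: the raw generation is unreachable from the send path, so the review step cannot be skipped.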

"When 1 and 2 are thoroughly documented → enterprises can rebuild how they work → demand for judgment surges → more people are needed to make contextualized decisions" — Article makes the explicit argument that documenting expert and tribal knowledge increases demand for human judgment and contextual decision-making.

"this is why we would continue needing tight integration loops with experts for the time being" — Article directly argues that human expert integration is essential for catching LLM mistakes in specialized domains.

"multiple AI agents" — Article discusses multi-agent systems as core to orchestration.

"but it works best when you treat it like a pair programmer, not a code generator" — Provides a specific insight on optimal usage: collaborative steering rather than pure generation, with pair programming as the mental model.

"combines human-in-the-loop capabilities using CrewAI, CopilotKit, and Serper" — Article explicitly demonstrates the human-in-the-loop pattern by integrating a CopilotKit frontend UI with CrewAI agents for interactive workflows.

"the human iterates on the prompt (.md) [and] the AI agent iterates on the training code (.py)" — Exemplifies complementary roles: the human controls high-level research direction via prompts while the AI autonomously iterates on the implementation.

"The response for now? Junior and mid-level engineers can no longer push AI-assisted code without a senior signing off" — Article advocates human oversight as the necessary response to AI tool risk.

"I reverted back to a known good state and then asked the AI what we should do. The AI suggested building a little script that inspected the code looking for illicit reads of the game map." — Shows human-directed debugging in which the human frames the problem and the AI proposes tooling.

"And it's not just developers talking about it. It's product managers, writers, researchers, consultants, you name it." — Article extends the collaboration concept by demonstrating how Claude Code enables non-engineering roles as well.

"This is not a tool that will make human judgment irrelevant. AI will automate a great deal, but not everything. The world is too interconnected, too complex, and too dependent on human creativity and domain expertise."

"Now I have the AI monitor while I play the game. When I see a behavior that I don't like I draw the rectangle around the offending area of the screen. Then I describe to the AI the symptom I am seeing." — Shows a tight human-AI debugging loop: the human spots and localizes the problem, and the AI investigates.

"with the help and supervision of a great team. The bottleneck has shifted to being how fast we can help and supervise the outcome." — Article identifies human supervision as the critical constraint in AI-accelerated work.

"ALWAYS read/self review the code before opening the PR, the onus is on the AI wielder to make sure the code is up to par with what they would do themselves before inflicting their teammates" — Article places responsibility for AI-generated code quality squarely on the human wielding the tool.

"The job of the human becomes to give AI the right context, systems, and feedback loops to do its best work and to apply human taste along the way" — Article articulates a paradigm shift in how humans add value: context, systems, feedback loops, and taste.

"Senior engineers benefit most from AI, and designers at his company take on broader roles, blending design with product building." — Article provides evidence that human expertise (senior engineers, designers) compounds AI leverage rather than being displaced by it.

"It doesn't remove my creative work, it lets me do AND SHIP more creative work." — Article directly argues that AI augmentation preserves human creativity while increasing output velocity, a key benefit of augmentation over replacement.

"Orchestrating Human-AI Teams" — The article directly addresses the research challenge of coordinating human and AI agents in team settings, providing a framework perspective on human-AI collaboration.

"if something is unclear during planning, Claude asks you instead of guessing" — Demonstrates a key human-in-the-loop pattern: the agent clarifies ambiguities with the human rather than proceeding on unfounded assumptions.

"For now, humans are still needed to fix and manage AI-generated code." — Article directly asserts that human expertise remains essential for maintenance and correction of AI code.

"harness engineering in brownfield and blackfield will remain a human problem with an AI assistant bolted on." — Argues that without behavioral contract access, agents cannot autonomously solve legacy-system problems.

"I had it do a first pass which helps me structure my thought, but I think I threw out basically all of the copy for it." — Shows the human maintaining control and critical judgment, using AI output as scaffolding rather than final copy.

query this concept
$ db.articles("human-ai-collaboration")
$ db.cooccurrence("human-ai-collaboration")
$ db.contradictions("human-ai-collaboration")