prompt compression
1 article · 6 co-occurring · 0 contradictions · 0 briefs
Caveman-speak is a compression technique, but inverted: instead of compressing input context, it compresses output to reduce downstream token consumption. Demonstrates compression is bidirectional.
@shao__meng: Caveman, a Skill that teaches AI Agents how to talk, dramatically cuts LLM token usage (output: 75%, input: 45%) gith... extends
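The idea can be sketched with a toy output-side compressor. This is a minimal illustration, not the actual Caveman Skill: the style instruction, the stop-word filter, and the whitespace-token saving estimate are all assumptions standing in for real LLM tokenization.

```python
# Illustrative sketch of output-side ("caveman-speak") compression:
# rather than shrinking the input context, instruct the model to answer
# tersely, so every downstream turn consumes fewer tokens.
# All names and word lists here are hypothetical, not from the Skill.

CAVEMAN_STYLE = (
    "Answer in caveman-speak: drop articles, auxiliaries, and filler. "
    "Keep nouns, verbs, numbers, and code identifiers."
)

def compress_output(text: str) -> str:
    """Naive caveman filter: drop common function words (toy stand-in)."""
    STOP = {"the", "a", "an", "is", "are", "was", "were",
            "to", "that", "this", "it", "very", "really", "just"}
    return " ".join(
        w for w in text.split() if w.lower().strip(".,") not in STOP
    )

def token_saving(before: str, after: str) -> float:
    """Approximate saving, using whitespace tokens as a proxy for LLM tokens."""
    return 1 - len(after.split()) / len(before.split())

verbose = ("The function is really just a wrapper that calls "
           "the API and returns the result.")
terse = compress_output(verbose)
print(terse)                                    # function wrapper calls API and returns result.
print(f"~{token_saving(verbose, terse):.0%}")   # ~53%
```

Even this crude filter shows the inversion: the saving compounds per response, whereas input-context compression pays off only once per prompt.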
query this concept
$ db.articles("prompt-compression")
$ db.cooccurrence("prompt-compression")
$ db.contradictions("prompt-compression")