Quick Start
Up and running in three steps.
STEP 1 · INSTALL
curl -fsSL https://compresr.ai/api/install | sh
STEP 2 · LAUNCH
context-gateway
STEP 3 · ENJOY
Use your agent as usual — Context Gateway handles the rest.
How It Works
One proxy. Three superpowers. Zero config changes.
🤖 Your Agent (Claude Code, Cursor, etc.)
↓ full payload
Context Gateway
- History Compression: proactive background summarization
- Tool Output Compression: compresses bulky tool responses inline
- Tool Discovery: picks only relevant tools each turn
↓ lean & mean
🧠 LLM API (Anthropic, OpenAI, Gemini)
Under the Hood
Three engines working behind the scenes, so your agent doesn't skip a beat.
History Compression
Old conversation turns get summarized in the background. When context gets tight, compaction is instant — not a 30-second pause.
- Proactive background summarization
- Zero-wait compaction when context fills up
- Preserves key facts across long sessions
Turn 1-10: compressed · Turn 11-20: compressed · Turn 21-25: ready · Turn 26: live
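The proactive summarization described above can be sketched in a few lines. This is a minimal illustration, not the gateway's actual implementation: the class name, the chunking parameters, and the `summarize` placeholder (which stands in for a real LLM call) are all assumptions.

```python
from concurrent.futures import ThreadPoolExecutor
import threading

def summarize(turns):
    # Placeholder for a real LLM summarization call.
    return f"[summary of {len(turns)} turns]"

class HistoryCompressor:
    """Sketch: old turns are summarized in a background thread as the
    conversation grows, so compaction never blocks the live request."""

    def __init__(self, chunk_size=10, max_live_turns=15):
        self.chunk_size = chunk_size        # turns folded into one summary
        self.max_live_turns = max_live_turns
        self.turns = []                     # raw conversation turns
        self.summaries = []                 # precomputed chunk summaries
        self._summarized_upto = 0           # turns already scheduled
        self._pool = ThreadPoolExecutor(max_workers=1)
        self._pending = []
        self._lock = threading.Lock()

    def add_turn(self, turn):
        """Record a turn; kick off summarization of old chunks eagerly,
        long before the context window actually fills up."""
        with self._lock:
            self.turns.append(turn)
            live = len(self.turns) - self._summarized_upto
            if live > self.max_live_turns:
                chunk = self.turns[self._summarized_upto:
                                   self._summarized_upto + self.chunk_size]
                self._summarized_upto += self.chunk_size
                self._pending.append(self._pool.submit(self._run, chunk))

    def _run(self, chunk):
        self.summaries.append(summarize(chunk))

    def compact(self):
        """Zero-wait compaction: summaries are normally already computed,
        so this is just a list concatenation."""
        for f in self._pending:             # worst case: in-flight chunks
            f.result()
        return self.summaries + self.turns[self._summarized_upto:]
```

After 26 turns, `compact()` returns two precomputed summaries (turns 1-10 and 11-20) plus the recent turns verbatim, with no summarization pause at compaction time.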
Tool Output Compression
Agent reads a 2,000-line file? We compress it before it devours your context budget. Compression up to 20x — all the signal, none of the bloat.
- Inline compression of bulky tool responses
- Up to 95% token reduction (20x compression)
- Critical details preserved, redundancy stripped
auth_module.py: 10,000 tokens → compressed: 500 tokens
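The gating logic is simple to picture: small outputs pass through untouched, oversized ones get distilled before they reach the model. A minimal sketch, with the caveat that the real gateway compresses with an LLM; here a head-and-tail truncation (and a rough 4-characters-per-token estimate) stands in, and `budget_tokens` is an assumed parameter name.

```python
def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token.
    return len(text) // 4

def compress_tool_output(output, budget_tokens=500):
    """Sketch of inline tool-output compression. The real gateway distills
    bulky responses with an LLM; this placeholder keeps the head and tail
    and drops the middle once the output exceeds the budget."""
    if estimate_tokens(output) <= budget_tokens:
        return output                      # small outputs pass through as-is
    keep = budget_tokens * 4 // 2          # characters kept from each end
    omitted = len(output) - 2 * keep
    return (output[:keep]
            + f"\n... [{omitted} characters compressed] ...\n"
            + output[-keep:])
```

A 10,000-token file read comes back at roughly the 500-token budget; a short `ls` result is returned unchanged.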
Tool Discovery
Your agent sends all 24 tools every turn. We pick the 3 that matter. Less noise, better decisions.
- Context-aware tool selection per turn
- Eliminates irrelevant tool definitions
- Sharper decisions, faster responses
run_terminal, read_file, write_file, list_directory, search_files, replace_in_file, create_directory, delete_file, execute_code, git_commit, git_push, install_package, web_search, fetch_url, run_tests, lint_code, format_code, compile_project, deploy_service, database_query, screenshot, send_email, schedule_task, debug_attach
3 of 24 sent. That's the whole trick.
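Per-turn selection can be illustrated with a toy ranker. The gateway's actual selection logic is not described here, so this sketch substitutes a simple keyword-overlap score between the user's message and each tool's name and description; `select_tools` and its parameters are hypothetical.

```python
def select_tools(user_message, tools, k=3):
    """Sketch of context-aware tool discovery: rank tools by keyword
    overlap with the current message and send only the top k definitions.
    (Toy scoring; the real gateway's ranking is an assumption here.)"""
    words = set(user_message.lower().split())

    def score(tool):
        name_words = set(tool["name"].split("_"))
        desc_words = set(tool["description"].lower().split())
        # Name matches weigh more than description matches.
        return 2 * len(words & name_words) + len(words & desc_words)

    return sorted(tools, key=score, reverse=True)[:k]
```

Given "read the config file and run tests" and the 24-tool catalog above, a ranker like this would surface `read_file` and `run_tests` while leaving `send_email` or `deploy_service` out of the prompt entirely.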