
See Your AI

Know exactly what your AI agent context costs — per request, per model. Zero setup.

Every time an AI agent processes a prompt, it loads your context files into the context window — CLAUDE.md, linked docs, MCP tool definitions. You pay for these tokens on every single request, before the agent reads a line of your code.

This is your context tax. And most developers have no idea what it costs.
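To get a feel for the size of that tax, here is a rough back-of-envelope sketch. It assumes the common heuristic of ~4 characters per token for English text, and writes a stand-in context file rather than reading a real CLAUDE.md:

```shell
# Hypothetical context file, standing in for a real CLAUDE.md
printf 'Always run the test suite before committing.\n' > /tmp/CLAUDE.md

# Estimate tokens using the ~4 characters per token heuristic
chars=$(wc -c < /tmp/CLAUDE.md)
tokens=$((chars / 4))
echo "CLAUDE.md: ~${tokens} tokens loaded on every request"
```

A real CLAUDE.md with linked docs and MCP tool definitions easily reaches tens of thousands of tokens, which is exactly the footprint the tool measures.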

npx seeyourai

One command. Zero config. Instant answers.

Context File Graph

Maps every file auto-loaded from your CLAUDE.md and AGENTS.md, recursively following links to show the full token footprint.

MCP Server Overhead

Scans all configured MCP servers across Claude Code, Cursor, Windsurf, and more — showing token cost per server.

Cost Per Request

Shows the exact cost across 7 models (Opus, Sonnet, GPT-5, Gemini, and more), with pricing kept current via LiteLLM.

Session Projections

Extrapolates per-request cost to real sessions (10, 30, 100 requests) so you see the actual dollar impact.
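The projection itself is simple multiplication. As an illustration with assumed numbers (a 20,000-token context billed at $3 per million input tokens, not figures from the tool):

```shell
# Illustrative numbers only: 20k-token context, $3 per million input tokens
context_tokens=20000
price_per_mtok=3

for requests in 10 30 100; do
  # cost = tokens / 1M * price per 1M tokens * number of requests
  cost=$(awk -v t="$context_tokens" -v p="$price_per_mtok" -v r="$requests" \
    'BEGIN { printf "%.2f", t / 1000000 * p * r }')
  echo "${requests} requests: \$${cost} in context tokens alone"
done
```

With those assumptions, a 100-request session spends $6.00 on context before any of your actual code or prompts is read.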

# Run instantly with npx
npx seeyourai
# Or install globally
npm install -g seeyourai

The short alias sya is also available: npx sya

Works on any project with CLAUDE.md, AGENTS.md, or MCP servers configured. No accounts, no API keys, no setup.

seeyourai

Scan your project and show context costs. Just run it — zero config required.

seeyourai context --json

Machine-readable JSON output for CI pipelines. Track context cost over time.
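A typical CI use is a budget gate: parse the JSON and fail the build when context cost grows past a threshold. The sketch below uses a stand-in JSON string and a hypothetical total_tokens field; inspect the real output of seeyourai context --json for the actual schema, and in a real pipeline you would likely parse it with jq:

```shell
# Stand-in for: report=$(seeyourai context --json)
# The field name "total_tokens" is a hypothetical example, not a documented schema.
report='{"total_tokens": 18500}'

total=$(echo "$report" | sed -E 's/.*"total_tokens": *([0-9]+).*/\1/')
budget=25000

if [ "$total" -gt "$budget" ]; then
  echo "Context budget exceeded: ${total} > ${budget} tokens" >&2
  exit 1
fi
echo "Context within budget: ${total} tokens"
```

Run on every pull request, this turns context bloat into a visible, reviewable failure instead of a silent cost increase.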

seeyourai mcp

Deep-dive into your MCP tool surface with per-server token breakdowns.

seeyourai eval run

Define and run quality assertions for your agent context files.

Source                      What seeyourai Finds
CLAUDE.md / AGENTS.md       Structure issues, broken links, missing sections
Linked files (recursive)    Token count for every auto-loaded file
.claude/ memory dir         Claude Code memory files and their size
MCP servers (12+ tools)     Claude Code, Cursor, Windsurf, Codex, VS Code, Kiro, Amazon Q, Goose
Pricing (bundled)           7 models with up-to-date per-token costs