Claude Mastery
2026-04-07
⌨️ CLI POWER MOVE
🔥 New · 🔧 Try It
claude-profile: Run Multiple Subscriptions Side by Side

If you juggle a work subscription and a personal Max plan — or run agents under a service account while coding interactively — you've hit the wall: Claude Code locks to one set of credentials at a time. Switching means re-authenticating, losing your session, and breaking any running agents.

claude-profile solves this with a single env var that Claude Code already supports: CLAUDE_CONFIG_DIR. When that variable is set, Claude Code SHA-256-hashes the directory path and uses the hash to build a unique keychain service name. Each profile therefore gets its own isolated credential store, settings, and session history.
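
The mechanism is simple enough to sketch by hand. The derivation below is illustrative only, not the tool's actual code; the commented-out launch line at the end is the part that uses documented Claude Code behavior.

```bash
# Illustrative only: derive a per-profile identifier by hashing the config
# dir path, mirroring the keychain-isolation idea described above.
profile_dir="$HOME/.claude-profiles/work"
suffix=$(printf '%s' "$profile_dir" | sha256sum | cut -c1-8)
echo "keychain service suffix: $suffix"

# Documented behavior: a command launched with CLAUDE_CONFIG_DIR set reads
# and writes only that profile's credentials and settings.
# CLAUDE_CONFIG_DIR="$profile_dir" claude
```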

```bash
# Install (Linux amd64)
curl -fsSL https://github.com/diranged/claude-profile/releases/latest/download/install.sh | bash

# Create profiles
claude-profile create work --color blue
claude-profile create personal --color green

# Launch with a profile
claude-profile run work
claude-profile run personal   # separate terminal, separate credentials

# Or use shell aliases for quick switching
eval "$(claude-profile aliases)"
claude-work       # launches with work profile
claude-personal   # launches with personal profile
```

Each profile maintains completely independent state: OAuth tokens, SSO sessions, API keys, settings. The --color flag gives you a visual indicator so you don't accidentally push personal code with your work subscription.

This uses documented, official behavior — no patches, no hacks. Written in Go, available for Linux, macOS, and Windows (amd64/arm64).

Why it matters: Profile isolation is the missing piece for anyone running production agents alongside interactive sessions. Your agent's credential rotation never touches your development session. Your work subscription never sees your personal project's token usage.

🏗️ AGENT ARCHITECTURE
🌿 Evergreen
The LLM Wiki Pattern: Compile Knowledge That Compounds Across Sessions

Every Claude Code session starts cold. Your agent reads the same files, rediscovers the same patterns, burns the same tokens — session after session. CLAUDE.md helps, but it's a flat file you maintain by hand. Auto memory saves corrections, but doesn't synthesize understanding.

The LLM Wiki pattern, originally proposed by Andrej Karpathy, takes a different approach: instead of feeding raw files to the LLM every session, you compile them into a synthesized wiki. The LLM reads the wiki instead of the sources. Think of it as a build step for knowledge.

llm-wiki-compiler (107 stars) implements this as a Claude Code plugin:

```bash
# Install the plugin
claude plugin install llm-wiki-compiler

# Initialize — defines your topic structure
/wiki-init

# Compile sources into wiki articles
/wiki-compile

# Query the wiki during sessions
/wiki-query "how does authentication work"
```

The compiler takes your source directories (docs, READMEs, architecture notes, even code comments) and synthesizes them into topic-focused articles with coverage indicators and cross-references. On a real project with 383 markdown files (13.1 MB), it produced 13 wiki articles (161 KB) — an 84% token reduction at session startup (47K tokens down to 7.7K).

The key design choices:

  • Incremental compilation — only recompiles when sources change
  • Coverage indicators — each article shows which sources informed it and how reliably
  • Three integration modes: staging (wiki alongside raw files), recommended (wiki-first, raw on demand), and primary (wiki replaces raw reads), controlled via SessionStart hooks
  • Concept articles — automatically generated when patterns span 3+ topics
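
The incremental behavior is essentially a build-system dependency check. Here is a minimal sketch of that idea with made-up file names; the real plugin uses an LLM for the synthesis step, which this stub obviously does not.

```bash
# Minimal sketch of incremental compilation: rebuild a wiki article only
# when a source file is newer than it. File names are illustrative.
mkdir -p sources wiki
echo "raw auth notes" > sources/auth.md
article=wiki/authentication.md

if [ ! -f "$article" ] || [ -n "$(find sources -name 'auth*.md' -newer "$article")" ]; then
  # The real plugin synthesizes prose with an LLM; we just record the rebuild.
  echo "recompiled from: $(ls sources/auth*.md)" > "$article"
fi
cat "$article"
```

Run it twice and the second pass is a no-op, which is the whole point: unchanged sources cost nothing at the next session start.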

This is particularly powerful for autonomous agents. An agent running on a cron doesn't have you there to point it at the right files. With a compiled wiki, the agent starts every session with synthesized understanding instead of raw material.

🧭 OPERATOR THINKING
🌿 Evergreen
The Skill Metadata Budget: Why 33% of Your Skills Are Invisible

If you've installed more than ~40 skills in Claude Code, some of them are silently invisible. Claude can't discover them, can't invoke them, and gives no warning that they've been dropped.

Here's why: Claude Code loads skill metadata into an available_skills section of the system context. This section has an undocumented ~16,000 character budget. Skills load sequentially. Once the budget fills, remaining skills are truncated with a quiet message: *"Showing X of Y skills due to token limits."*

Community research measured this precisely with 63 installed skills:

| Metric | Value |
| --- | --- |
| Skills visible | 42 (66%) |
| Skills hidden | 21 (33%) |
| Avg description length | 263 chars |
| Overhead per skill | ~109 chars (XML tags, formatting) |
| Total budget | ~15,500-16,000 chars |

The key insight: this is cumulative, not per-skill. A skill with a 50-char description and one with a 500-char description both count against the same pool. The fix is straightforward — compress your skill descriptions:

| Description Length | Skills That Fit |
| --- | --- |
| 263 chars (default avg) | ~42 |
| 130 chars (target) | ~67 |
| 100 chars (aggressive) | ~75 |
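
You can sanity-check those capacities yourself. The arithmetic below assumes the community-measured ~109-char per-skill overhead and a 16,000-char pool:

```bash
# Rough capacity check: skills that fit = budget / (description + overhead).
budget=16000
overhead=109   # XML tags and formatting per skill (community measurement)
for desc in 263 130 100; do
  echo "${desc}-char descriptions: ~$(( budget / (desc + overhead) )) skills fit"
done
```

Integer division lands within a skill or two of the table above; the exact cutoff depends on where the pool really ends inside the 15,500-16,000 range.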

Before: "Provide evidence proportional to stakes for all claims, verify assumptions against codebase state, and cross-reference with existing tests" (142 chars)

After: "Verify claims proportional to stakes, check codebase state" (58 chars)

Important distinction: This is separate from MCP Tool Search's defer_loading feature, which lazily loads MCP tool definitions. Skills don't benefit from deferred loading — they're always loaded upfront.

If you're a skill-heavy user (plugins, custom skills, third-party packages), audit your descriptions. Run /skills to see what's loaded, count them, and if you're over 40, start compressing.
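
If you want more than a count, a rough audit can sum description lengths directly. The skill directory layout and `description:` frontmatter key below are assumptions about a typical install, not documented structure; adjust for your setup.

```bash
# Hypothetical audit: total description characters across installed skills.
# Skill location and frontmatter format are assumed, not documented.
audit_skills() {
  local dir="$1" total=0 count=0 desc f
  for f in "$dir"/*/SKILL.md; do
    [ -f "$f" ] || continue
    desc=$(grep -m1 '^description:' "$f" | cut -d: -f2-)
    total=$(( total + ${#desc} ))
    count=$(( count + 1 ))
  done
  echo "$count skills, $total description chars (budget ~16000)"
}
audit_skills "$HOME/.claude/skills"
```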

🌐 ECOSYSTEM INTEL
🌿 Evergreen
Repowise: Codebase Intelligence That Goes Beyond File Reading

Claude Code can read files, grep for patterns, and navigate code. But it doesn't understand *why* code was built a certain way, who owns what, which files change together, or where the architectural rot is. That's the gap Repowise fills.

Repowise (648 stars, AGPL-3.0) builds four intelligence layers from your codebase and exposes them as eight MCP tools:

1. Graph Intelligence — Tree-sitter parsing + NetworkX builds dependency graphs showing files, classes, functions, and call relationships. Enables get_dependency_path() to trace connections and get_dead_code() to find unreachable code with confidence scores.

2. Git Intelligence — Analyzes 500 commits to identify hotspots (high-churn files), ownership percentages, co-change pairs, and significant commit messages. The get_risk() tool surfaces files that change frequently and have many dependents — your highest-risk refactoring targets.
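
The churn half of that signal is something you can approximate with plain git. The pipeline below is a crude stand-in for what Repowise computes, not its actual implementation, and it ignores the dependents side of the risk score:

```bash
# Crude hotspot approximation: files touched most often in recent commits.
# Repowise's real analysis adds ownership and dependency weighting on top.
hotspots() {
  git -C "$1" log -n 500 --name-only --pretty=format: \
    | grep -v '^$' | sort | uniq -c | sort -rn | head -10
}
# Run it against the current directory if it is inside a repo:
git rev-parse --is-inside-work-tree >/dev/null 2>&1 && hotspots . || true
```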

3. Documentation Intelligence — Auto-generates an LLM-powered wiki for every module with freshness scoring and semantic search. Each article shows when it was last updated and how stale it might be.

4. Decision Intelligence — Captures architectural decisions from git history and code markers, linking them to governed code components. The get_why() tool searches these decisions so Claude can understand the rationale behind existing code before suggesting changes.

```bash
pip install repowise
cd your-project
repowise init    # ~25 min for 3,000-file codebase
repowise serve   # starts MCP server + local dashboard
```

After indexing, Repowise auto-generates a CLAUDE.md with an architecture summary. Incremental updates take <30 seconds per commit. Supports Python, TypeScript, JavaScript, Go, Rust, Java, C/C++, Ruby, and Kotlin, plus OpenAPI, Protobuf, and GraphQL schemas.

The local dashboard gives you a web UI for browsing intelligence, docs, hotspots, and ownership — useful for onboarding and architectural reviews even outside of Claude Code.

🔬 PRACTICE LAB
🌿 Evergreen
Index Your Codebase with Repowise's Four Intelligence Layers

What you'll do: Install Repowise, index a real project, and query all four intelligence layers through Claude Code's MCP integration.

Steps:

  1. Install Repowise (requires Python 3.10+):

     ```bash
     pip install repowise
     ```

  2. Pick a project to index. Choose something with real git history — at least 50 commits. A personal project or a well-known open source repo you've cloned works well.

  3. Initialize and index:

     ```bash
     cd your-project
     repowise init
     ```

     For a small project (<500 files), this takes 2-5 minutes. For larger codebases, expect 15-25 minutes.

  4. Start the MCP server:

     ```bash
     repowise serve
     ```

     Note the port it starts on (default: 8765).

  5. Connect Claude Code to Repowise. Add to your project's .claude/settings.json:

     ```json
     {
       "mcpServers": {
         "repowise": {
           "command": "repowise",
           "args": ["serve", "--stdio"]
         }
       }
     }
     ```

  6. Test each intelligence layer in a Claude Code session:

     • "Use repowise to give me an architecture overview of this project"
     • "What are the highest-risk files in this codebase?" (uses git intelligence)
     • "Trace the dependency path from [file A] to [file B]" (uses graph intelligence)
     • "Why was [module X] built this way?" (uses decision intelligence)
     • "Find dead code in this project" (uses graph intelligence)

  7. Check the auto-generated CLAUDE.md. Repowise creates one during init — compare it to what you'd write by hand.

Expected outcome: You should see structured intelligence responses for each query — not just file contents, but ownership data, risk scores, architectural decisions, and dependency paths. The dead code detection should flag at least a few unreachable functions with confidence scores.

Verify: Run repowise status to confirm all four layers are indexed. In Claude Code, run a query that requires git intelligence (like get_risk) — if it returns ownership percentages and churn data, Repowise is fully connected.