```mermaid
flowchart LR
    subgraph Input["Code Source"]
        Source1[GitHub Repository]
        Source2[Local Directory]
    end
    subgraph Pipeline["Salt Docs Pipeline"]
        Crawl[Crawl & Analyze]
        Identify[LLM Identify Abstractions]
        Generate[Generate Markdown Docs]
    end
    subgraph Output["Local Wiki"]
        Docs[Markdown Files]
        MCP[MCP Server]
    end
    subgraph Assistants["AI Assistants"]
        Cursor[Cursor]
        Claude[Claude]
        Continue[Continue]
    end
    Source1 --> Crawl
    Source2 --> Crawl
    Crawl --> Identify --> Generate --> Docs --> MCP
    MCP --> Cursor
    MCP --> Claude
    MCP --> Continue
```
Install from PyPI:

```bash
pip install salt-docs
```

Or install from source:

```bash
git clone https://github.com/usesalt/salt-docs.git
cd salt-docs
pip install -e .
```

Run the setup wizard to configure your API keys and preferences:

```bash
salt-docs init
```

Generate documentation for a GitHub repository:

```bash
salt-docs run https://github.com/username/repo
```

Or for a local directory:

```bash
salt-docs run /path/to/your/codebase
```

With custom options:

```bash
salt-docs run https://github.com/username/repo --output /custom/path --language spanish --max-abstractions 10
```

Salt Docs stores configuration in a per-user config file and uses your system's keyring for secure API key storage.
The config file lives at:

- macOS/Linux: `~/.config/saltdocs/config.json` (or `$XDG_CONFIG_HOME/saltdocs/config.json`)
- Windows: `%APPDATA%\saltdocs\config.json`
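API keys are not written to this file; they go to the operating system's keyring. As a purely hypothetical illustration of what that means in practice (the `saltdocs` service name and key name below are guesses, not documented identifiers, and this assumes the Python `keyring` package is importable in your environment):

```bash
# Hypothetical: inspect a stored key via the system keyring
python -c "import keyring; print(keyring.get_password('saltdocs', 'GEMINI_API_KEY'))"
```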
The config file supports these keys:

- `llm_provider` - LLM provider to use (`gemini`, `openai`, `anthropic`, `openrouter`, `ollama`); default: `gemini`
- `llm_model` - Model name to use (e.g., `gemini-2.5-flash`, `gpt-4o-mini`, `claude-3-5-sonnet-20241022`); default: `gemini-2.5-flash`
- `output_dir` - Default output directory
- `language` - Default language for generated docs
- `max_abstractions` - Default number of abstractions to identify
- `max_file_size` - Maximum file size in bytes
- `use_cache` - Enable/disable LLM response caching
- `include_patterns` - Default file patterns to include
- `exclude_patterns` - Default file patterns to exclude
- `ollama_base_url` - Custom Ollama base URL (optional; default: `http://localhost:11434`)
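For orientation, here is a rough sketch of what `config.json` might contain with mostly default values. This assumes the keys map one-to-one to JSON fields; the `output_dir` and pattern values are illustrative, not defaults:

```json
{
  "llm_provider": "gemini",
  "llm_model": "gemini-2.5-flash",
  "output_dir": "/home/you/salt-docs-output",
  "language": "english",
  "max_abstractions": 10,
  "max_file_size": 100000,
  "use_cache": true,
  "include_patterns": ["*.py", "*.js"],
  "exclude_patterns": ["tests/*"],
  "ollama_base_url": "http://localhost:11434"
}
```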
View your current configuration:

```bash
salt-docs config show
```

Update API keys and tokens:

```bash
# Update API key for any provider (interactive)
salt-docs config update-api-key gemini
salt-docs config update-api-key openai
salt-docs config update-api-key anthropic
salt-docs config update-api-key openrouter

# Legacy command (still works, redirects to update-api-key)
salt-docs config update-gemini-key

# Update GitHub token (interactive)
salt-docs config update-github-token

# Update GitHub token directly
salt-docs config update-github-token "your-token-here"
```

Change individual settings:

```bash
# Change LLM provider
salt-docs config set llm-provider openai

# Change LLM model
salt-docs config set llm-model gpt-4o-mini

# Change default language
salt-docs config set language spanish

# Change max abstractions
salt-docs config set max_abstractions 15

# Disable caching
salt-docs config set use_cache false

# Update output directory
salt-docs config set output_dir /custom/path
```

Salt Docs includes an MCP (Model Context Protocol) server that exposes your generated documentation to AI assistants in IDEs like Cursor, Continue.dev, and Claude Desktop.
The MCP server provides these tools:
- `list_docs` - List all available documentation files
- `get_docs` - Fetch the full content of a documentation file (by resource name or absolute path)
- `search_docs` - Full-text search across documentation (paths, names, and resource names)
- `index_directories` - Index directories for fast searching
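These are ordinary MCP tools, so any MCP client invokes them with a JSON-RPC `tools/call` message. As a sketch, a `search_docs` invocation might look like this on the wire (the `query` argument name is an assumption; the authoritative schema comes from the server's `tools/list` response):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "search_docs",
    "arguments": { "query": "authentication flow" }
  }
}
```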
To connect Cursor:

- Open or create your MCP configuration file:
  - macOS/Linux: `~/.cursor/mcp.json`
  - Windows: `%APPDATA%\Cursor\mcp.json`
- Add the salt-docs server configuration:

  ```json
  {
    "mcpServers": {
      "salt-docs": {
        "command": "salt-docs",
        "args": ["mcp"]
      }
    }
  }
  ```

- Restart Cursor to load the MCP server.
- The AI assistant in Cursor can now access your documentation with prompts like:
  - "What documentation do we have?"
  - "Get me the documentation for the SALT project"
  - "Read the README documentation"
To connect Claude Desktop:

- Open or create your Claude configuration file:
  - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
  - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
  - Linux: `~/.config/Claude/claude_desktop_config.json`
- Add the salt-docs server configuration:

  ```json
  {
    "mcpServers": {
      "salt-docs": {
        "command": "salt-docs",
        "args": ["mcp"]
      }
    }
  }
  ```

- Restart Claude Desktop to load the MCP server.
Troubleshooting:

- Command not found: Make sure `salt-docs` is in your PATH. You can verify by running `salt-docs --version` in your terminal.
- Server not starting: Ensure you've run `salt-docs init` and have generated at least one documentation project.
- No docs found: The MCP server discovers docs from your configured `output_dir`. Run `salt-docs config show` to check your output directory.
You can test the MCP server directly:
```bash
salt-docs mcp
```

This will start the server in stdio mode (for MCP clients). To test locally, you can use the test scripts in the `tests/` directory.
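If you want to poke the stdio transport by hand, a pipeline like the one below should elicit a response. This is an untested sketch that assumes the server implements the standard MCP initialize handshake over newline-delimited JSON-RPC:

```bash
# Minimal MCP handshake over stdio, then ask the server to list its tools
printf '%s\n' \
  '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.0"}}}' \
  '{"jsonrpc":"2.0","method":"notifications/initialized"}' \
  '{"jsonrpc":"2.0","id":2,"method":"tools/list"}' \
  | salt-docs mcp
```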
Salt Docs supports multiple LLM providers, allowing you to choose the best option for your needs:
- **Google Gemini** (default)
  - Recommended models: `gemini-2.5-pro`, `gemini-2.5-flash`, `gemini-1.5-pro`, `gemini-1.5-flash`
  - API key required: Yes (`GEMINI_API_KEY`)
- **OpenAI**
  - Recommended models: `gpt-4o-mini`, `gpt-4.1-mini`, `gpt-5-mini`, `gpt-5-nano`
  - API key required: Yes (`OPENAI_API_KEY`)
  - Supports o1 models with reasoning capabilities
- **Anthropic Claude**
  - Recommended models: `claude-3-5-sonnet`, `claude-3-5-haiku`, `claude-3-7-sonnet` (with extended thinking), `claude-3-opus`
  - API key required: Yes (`ANTHROPIC_API_KEY`)
- **OpenRouter**
  - Recommended models: `google/gemini-2.5-flash:free`, `meta-llama/llama-3.1-8b-instruct:free`, `openai/gpt-4o-mini`, `anthropic/claude-3.5-sonnet`
  - API key required: Yes (`OPENROUTER_API_KEY`)
  - Access multiple models through a single API
- **Ollama** (local)
  - Recommended models: `llama3.2`, `llama3.1`, `mistral`, `codellama`, `phi3`
  - API key required: No (runs locally)
  - Default URL: `http://localhost:11434`
  - Perfect for privacy-sensitive projects or offline usage
You can switch between providers at any time:
```bash
# Switch to OpenAI
salt-docs config set llm-provider openai
salt-docs config set llm-model gpt-4o-mini
salt-docs config update-api-key openai

# Switch to Ollama (local)
salt-docs config set llm-provider ollama
salt-docs config set llm-model llama3.2
# No API key needed for Ollama!
```
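When switching to Ollama, the model has to exist locally before the first run. A quick sketch, assuming the standard Ollama CLI is installed and its service is running at the default URL:

```bash
# Pull the model once, then run salt-docs as usual
ollama pull llama3.2
salt-docs run /path/to/your/codebase
```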
The `run` command accepts the following options (a combined example follows this list):

- `run` - GitHub repo URL, the currently open directory, or a local directory path
- `--repo` or `--dir` - GitHub repo URL or local directory path (deprecated)
- `-n, --name` - Project name (derived from repo/directory if omitted)
- `-t, --token` - GitHub personal access token
- `-o, --output` - Output directory (overrides config default)
- `-i, --include` - File patterns to include (e.g., `"*.py"`, `"*.js"`)
- `-e, --exclude` - File patterns to exclude (e.g., `"tests/*"`, `"docs/*"`)
- `-s, --max-size` - Maximum file size in bytes (default: 100KB)
- `--language` - Language for generated docs (default: "english")
- `--no-cache` - Disable LLM response caching
- `--max-abstractions` - Maximum number of abstractions to identify (default: 10)