# Configuration System
## Configuration File
The main configuration file is stored at a platform-specific location:

| Platform | Path |
|---|---|
| Linux | `~/.config/kimi/config.toml` |
| macOS | `~/Library/Application Support/kimi/config.toml` |
| Windows | `%APPDATA%\kimi\config.toml` |

Alternatively, set the `KIMI_CONFIG_FILE` environment variable to point to a custom path.
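For reference, a minimal sketch of how this platform-specific resolution can be implemented. This is an illustration, not the actual `get_config_file()` from `config.py`; only the paths and the `KIMI_CONFIG_FILE` variable come from the table above.

```python
import os
import sys
from pathlib import Path


def resolve_config_file() -> Path:
    """Illustrative config-path resolution; kimi-code's actual logic may differ."""
    if override := os.environ.get("KIMI_CONFIG_FILE"):
        return Path(override)
    if sys.platform == "darwin":
        base = Path.home() / "Library" / "Application Support"
    elif sys.platform == "win32":
        base = Path(os.environ.get("APPDATA", str(Path.home())))
    else:  # Linux and other POSIX
        base = Path(os.environ.get("XDG_CONFIG_HOME", str(Path.home() / ".config")))
    return base / "kimi" / "config.toml"
```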
## Configuration Structure
Source: `src/kimi_cli/config.py`

### Full Config Model
```python
class Config(BaseModel):
    model_config = ConfigDict(frozen=True)

    models: dict[str, LLMModel] = {}
    providers: dict[str, LLMProvider] = {}
    default_model: str | None = None
    loop_control: LoopControl = LoopControl()
    services: Services = Services()
    mcp: MCPConfig = MCPConfig()
```

### LLM Provider
```python
class LLMProvider(BaseModel):
    type: ProviderType                    # "kimi", "openai_legacy", "anthropic", etc.
    base_url: str | None = None           # API endpoint
    api_key: str | None = None            # API key (can use "env:" prefix)
    env: str | None = None                # Environment variable holding the API key
    custom_headers: dict[str, str] = {}
    oauth: OAuthConfig | None = None
```

### LLM Model
```python
class LLMModel(BaseModel):
    provider: str                         # Reference to a provider name
    model: str                            # Model identifier
    max_context_size: int = 128_000       # Token limit
    capabilities: list[ModelCapability] | None = None
```

**Model Capabilities:**

- `image_in` - Accepts image input
- `video_in` - Accepts video input
- `thinking` - Supports extended thinking
- `always_thinking` - Always uses thinking mode
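Assuming `ModelCapability` values compare equal to their string names (e.g. a `str`-based enum), a capability check might look like this hypothetical helper:

```python
def supports_thinking(model: LLMModel) -> bool:
    """Hypothetical helper: does the model declare any thinking capability?"""
    caps = model.capabilities or []
    return "thinking" in caps or "always_thinking" in caps
```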
### Loop Control
```python
class LoopControl(BaseModel):
    max_steps_per_turn: int = 100
    max_retries_per_step: int = 3
    max_ralph_iterations: int = 20
    reserved_context_size: int = 50_000
```

### Services
```python
class Services(BaseModel):
    moonshot_search: MoonshotSearchConfig | None = None
    moonshot_fetch: MoonshotFetchConfig | None = None
```

### MCP Config
```python
class MCPConfig(BaseModel):
    tool_call_timeout_ms: int = 60_000
```

## Example Configuration
default_model = "kimi"
[models.kimi]
provider = "kimi"
model = "moonshot-v1-128k"
max_context_size = 128000
capabilities = ["image_in", "thinking"]
[models.gpt4]
provider = "openai"
model = "gpt-4-turbo"
max_context_size = 128000
[providers.kimi]
type = "kimi"
base_url = "https://api.moonshot.cn/v1"
api_key = "env:KIMI_API_KEY"
[providers.openai]
type = "openai_legacy"
base_url = "https://api.openai.com/v1"
env = "OPENAI_API_KEY"
[loop_control]
max_steps_per_turn = 100
max_retries_per_step = 3
reserved_context_size = 50000
[mcp]
tool_call_timeout_ms = 60000Environment Variables
### LLM Provider Overrides
Environment variables can override config file settings.
**Kimi Provider** (`llm.py:56-94`):

| Variable | Purpose |
|---|---|
| `KIMI_BASE_URL` | Override API base URL |
| `KIMI_API_KEY` | Override API key |
| `KIMI_MODEL_NAME` | Override model name |
| `KIMI_MODEL_MAX_CONTEXT_SIZE` | Override context size |
| `KIMI_MODEL_CAPABILITIES` | Comma-separated capabilities |
| `KIMI_MODEL_TEMPERATURE` | Generation temperature |
| `KIMI_MODEL_TOP_P` | Generation top-p |
| `KIMI_MODEL_MAX_TOKENS` | Max generation tokens |
| `KIMI_MODEL_CACHE_KEY` | Prompt caching key |
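A minimal sketch of how the comma-separated `KIMI_MODEL_CAPABILITIES` override might be parsed (an illustration; the actual parsing in `llm.py` may differ):

```python
import os


def capabilities_from_env() -> list[str] | None:
    """Parse KIMI_MODEL_CAPABILITIES, e.g. "image_in,thinking" -> ["image_in", "thinking"]."""
    raw = os.environ.get("KIMI_MODEL_CAPABILITIES")
    if not raw:
        return None
    return [item.strip() for item in raw.split(",") if item.strip()]
```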
**OpenAI Provider:**

| Variable | Purpose |
|---|---|
| `OPENAI_BASE_URL` | Override API base URL |
| `OPENAI_API_KEY` | Override API key |
### Application Settings

| Variable | Purpose |
|---|---|
| `KIMI_CONFIG_FILE` | Custom config file path |
| `KIMI_LOG_LEVEL` | Logging level (DEBUG, INFO, etc.) |
| `KIMI_LOG_FILE` | Custom log file path |
## Configuration Loading

Source: `config.py` - `load_config()`
```python
import json
import tomllib
from pathlib import Path


def load_config(path: Path | None = None) -> Config:
    """Load configuration from file, creating a default config if missing."""
    if path is None:
        path = get_config_file()
    if not path.exists():
        # Create default config
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(DEFAULT_CONFIG)
    content = path.read_text()
    return load_config_from_string(content)


def load_config_from_string(content: str) -> Config:
    """Parse a TOML or JSON config string, preferring TOML."""
    try:
        data = tomllib.loads(content)
    except tomllib.TOMLDecodeError:
        data = json.loads(content)
    return Config.model_validate(data)
```
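The loader can be exercised directly against an inline TOML string; a usage sketch with placeholder model and provider names:

```python
TOML = """
default_model = "kimi"

[models.kimi]
provider = "kimi"
model = "moonshot-v1-128k"

[providers.kimi]
type = "kimi"
env = "KIMI_API_KEY"
"""

config = load_config_from_string(TOML)
assert config.default_model == "kimi"
assert config.models["kimi"].max_context_size == 128_000  # default applied
```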
### Environment Variable Augmentation

Source: `llm.py` - `augment_provider_with_env_vars()`
```python
import os


def augment_provider_with_env_vars(
    provider: LLMProvider,
    model: LLMModel,
) -> tuple[LLMProvider, LLMModel, dict[str, str]]:
    """Apply environment variable overrides on top of the configured values."""
    env_overrides = {}
    if provider.type == "kimi":
        if base_url := os.environ.get("KIMI_BASE_URL"):
            provider = provider.model_copy(update={"base_url": base_url})
            env_overrides["base_url"] = base_url
        if api_key := os.environ.get("KIMI_API_KEY"):
            provider = provider.model_copy(update={"api_key": api_key})
            env_overrides["api_key"] = "***"  # redacted in the override record
        # ... more overrides
    return provider, model, env_overrides
```
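A brief usage sketch with illustrative values (note that `model_copy` returns an updated copy rather than mutating in place; whether `ProviderType` accepts a bare string here is an assumption):

```python
os.environ["KIMI_BASE_URL"] = "https://example.invalid/v1"  # illustrative only

provider = LLMProvider(type="kimi")
model = LLMModel(provider="kimi", model="moonshot-v1-128k")

provider, model, overrides = augment_provider_with_env_vars(provider, model)
print(overrides)  # {'base_url': 'https://example.invalid/v1'}
```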
## Agent Configuration

Agents are configured via YAML files.
### Default Agent Location

`src/kimi_cli/agents/default/agent.yaml`

### Agent Spec Structure
```yaml
name: kimi
system_prompt: |
  You are Kimi, an AI assistant...
  Current time: ${KIMI_NOW}
  Working directory: ${KIMI_WORK_DIR}
tools:
  - kimi_cli.tools.shell:Shell
  - kimi_cli.tools.file:ReadFile
  - kimi_cli.tools.file:WriteFile
  - kimi_cli.tools.file:StrReplaceFile
  - kimi_cli.tools.file:Glob
  - kimi_cli.tools.file:Grep
  - kimi_cli.tools.web:SearchWeb
  - kimi_cli.tools.web:FetchURL
  - kimi_cli.tools.multiagent:Task
  - kimi_cli.tools.multiagent:CreateSubagent
  - kimi_cli.tools.dmail:SendDMail
  - kimi_cli.tools.think:Think
  - kimi_cli.tools.todo:SetTodoList
subagents:
  coder:
    path: ./coder.yaml
    description: "Specialized for coding tasks"
```
### System Prompt Variables

Jinja2 template variables available in system prompts:
| Variable | Source | Description |
|---|---|---|
| `${KIMI_NOW}` | Runtime | Current timestamp |
| `${KIMI_WORK_DIR}` | Session | Working directory path |
| `${KIMI_WORK_DIR_LS}` | Runtime | Directory listing |
| `${KIMI_AGENTS_MD}` | Runtime | Loaded agents documentation |
| `${KIMI_SKILLS}` | Runtime | Available skills list |
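Since the variables use `${...}` delimiters rather than Jinja2's default `{{ ... }}`, rendering can be sketched with a customized `jinja2.Environment` (an assumption for illustration; the actual rendering code is not shown here):

```python
from datetime import datetime, timezone

from jinja2 import Environment

env = Environment(variable_start_string="${", variable_end_string="}")
template = env.from_string("Current time: ${KIMI_NOW}\nWorking directory: ${KIMI_WORK_DIR}")
print(template.render(
    KIMI_NOW=datetime.now(timezone.utc).isoformat(),
    KIMI_WORK_DIR="/tmp/project",
))
```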
## Session Storage
Sessions are stored at:

```
~/.kimi/sessions/
├── {session_uuid}/
│   ├── context.jsonl   # Message history
│   └── wire.jsonl      # Wire protocol log
└── .meta               # Session index
```
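Since `context.jsonl` is a JSON Lines file, the message history can be inspected with a few lines of Python (a reading sketch; the record schema is not documented here, so no field access is assumed):

```python
import json
from pathlib import Path

session_dir = Path.home() / ".kimi" / "sessions" / "<session_uuid>"  # placeholder id
messages = [
    json.loads(line)
    for line in (session_dir / "context.jsonl").read_text().splitlines()
    if line.strip()
]
print(f"{len(messages)} records in message history")
```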
## Logging
Logging is configured via `app.py` - `enable_logging()`:
```python
from pathlib import Path

import loguru


def enable_logging(
    level: str = "INFO",
    log_file: Path | None = None,
):
    """Configure loguru logging."""
    loguru.logger.remove()  # drop the default stderr handler
    if log_file:
        loguru.logger.add(
            log_file,
            rotation="10 MB",
            retention="7 days",
            level=level,
        )
```

Default log location: `~/.kimi/logs/kimi.log`
## MCP Server Configuration
MCP servers can be configured via:

- CLI argument: `--mcp-config <file>`
- Project file: `.agents/mcp.json`
### MCP Config Format
```json
{
  "servers": [
    {
      "name": "my-server",
      "command": "npx",
      "args": ["-y", "@my/mcp-server"],
      "env": {
        "API_KEY": "secret"
      }
    }
  ]
}
```

### MCPServerConfig Model
```python
class MCPServerConfig(BaseModel):
    name: str
    command: str
    args: list[str] = []
    env: dict[str, str] = {}
    oauth: OAuthConfig | None = None
```
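A sketch of validating the JSON format above into `MCPServerConfig` instances (illustrative; whether kimi-code wraps the list in its own container model is not shown here):

```python
import json
from pathlib import Path

raw = json.loads(Path(".agents/mcp.json").read_text())
servers = [MCPServerConfig.model_validate(entry) for entry in raw["servers"]]
for server in servers:
    print(server.name, server.command, server.args)
```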
## Configuration Hierarchy

Priority order (highest to lowest):

1. CLI arguments (`--model`, `--thinking`, etc.)
2. Environment variables (`KIMI_*`, `OPENAI_*`)
3. Config file (`config.toml`)
4. Default values (in code)
```
CLI Args
    │
    ▼
Environment Variables
    │
    ▼
Config File (~/.config/kimi/config.toml)
    │
    ▼
Default Values (config.py)
```
## Related Documentation
- Overview - High-level architecture
- Entry Points - CLI commands
- Core Logic - Agent loop details