# Model Management
Commands for listing, inspecting, adding, and removing LLM model configurations.
## evonic model list

List all configured LLM models.

```sh
evonic model list
```

Output:

```
ID                                   Name             Provider   Status
--------------------------------------------------------------------------
cf4cbe3b-1e2f-4ce7-811d-bb0a24ac09aa Gemma4-local     llama.cpp  enabled
e1e18b95-dbe0-4b94-bd1a-39c40ab40268 Grok-4.1-Fast    openrouter enabled
603b799f-c203-44ad-871a-0bb7394f0aa3 Kimi-K2-Thinking openrouter enabled
```
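When scripting against this output, the IDs in the first column are the handles the other commands take. A minimal sketch that extracts them with `awk`, assuming the two-line header (title row plus dashed separator) shown above; `list_output` stands in for piping `evonic model list` directly:

```sh
# Print only the ID column, skipping the header row and dashed separator.
# In practice, replace the variable with:  evonic model list | awk 'NR > 2 { print $1 }'
list_output='ID Name Provider Status
--------------------------------------------------------------------------
cf4cbe3b-1e2f-4ce7-811d-bb0a24ac09aa Gemma4-local llama.cpp enabled
603b799f-c203-44ad-871a-0bb7394f0aa3 Kimi-K2-Thinking openrouter enabled'

printf '%s\n' "$list_output" | awk 'NR > 2 { print $1 }'
```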
## evonic model get <model_id>

Show detailed information about a specific model.
```sh
evonic model get 603b799f-c203-44ad-871a-0bb7394f0aa3
```

Output:

```
ID: 603b799f-c203-44ad-871a-0bb7394f0aa3
Name: Kimi-K2-Thinking
Type: remote
Provider: openrouter
Model Name: moonshotai/kimi-k2-thinking
Base URL: https://openrouter.ai/api/v1
API Key: ***afb76a
Max Tokens: 32768
Timeout: 60
Temperature: None
Thinking: yes
Enabled: yes
Default: no
```
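Since each line of this output is `Key: value`, a single field can be pulled out with `sed`. A sketch, with `get_output` standing in for the real `evonic model get` call:

```sh
# Extract one field from the "Key: value" output of `evonic model get`.
# In practice:  evonic model get <model_id> | sed -n 's/^Model Name: //p'
get_output='Name: Kimi-K2-Thinking
Provider: openrouter
Model Name: moonshotai/kimi-k2-thinking
Enabled: yes'

printf '%s\n' "$get_output" | sed -n 's/^Model Name: //p'
```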
## evonic model add <model_id>

Add a new LLM model configuration.
```sh
evonic model add <model_id> --name <name> --provider <provider> [--api-key <key>] [--base-url <url>]
```

| Flag | Required | Description |
|---|---|---|
| `--name` | Yes | Display name for the model |
| `--provider` | Yes | Provider (e.g. `openai`, `anthropic`, `groq`, `openrouter`, `llama.cpp`) |
| `--api-key` | No | API key for the provider |
| `--base-url` | No | Base URL for the API endpoint |
Example:
```sh
# Add OpenAI model
evonic model add gpt4o --name "GPT-4o" --provider openai --api-key "sk-..." --base-url "https://api.openai.com/v1"

# Add local llama.cpp model
evonic model add local_llama --name "Local Llama 3" --provider llama.cpp --base-url "http://localhost:8080/v1"
```

Output:
```
Model added: GPT-4o (gpt4o)
```
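Registering several models at once reduces flag repetition to a loop over a small record list. A dry-run sketch: `EVONIC` defaults to `echo evonic` so it only prints the commands it would run; set `EVONIC=evonic` to execute them for real. The model entries are illustrative, not known-good configurations:

```sh
#!/bin/sh
# Bulk-register models from "id|name|provider|base-url" records.
# Defaults to a dry run that prints each command; export EVONIC=evonic
# to actually invoke the CLI.
EVONIC="${EVONIC:-echo evonic}"

while IFS='|' read -r id name provider url; do
  $EVONIC model add "$id" --name "$name" --provider "$provider" --base-url "$url"
done <<'EOF'
local_llama|Local Llama 3|llama.cpp|http://localhost:8080/v1
qwen_local|Local Qwen|llama.cpp|http://localhost:8081/v1
EOF
```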
## evonic model rm <model_id>

Remove a model configuration. Requires interactive confirmation.
```sh
evonic model rm gpt4o
```

Output:

```
Model to remove:
  ID: gpt4o
  Name: GPT-4o
  Provider: openai
  Status: enabled
Are you sure? [y/N]: y
Model removed: gpt4o
```
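In scripts, the y/N prompt blocks execution; the usual workaround is piping the answer in on stdin. This is a sketch that assumes `evonic model rm` reads its confirmation from standard input — verify on your version, as some CLIs require a TTY or a dedicated `--yes`-style flag instead, and this reference does not document one. The `confirm_rm` function below is a stand-in prompt so the technique can be tried anywhere:

```sh
# Non-interactive removal (assumption: evonic reads the y/N answer from stdin):
#   printf 'y\n' | evonic model rm gpt4o

# The same technique against a stand-in prompt, runnable without evonic:
confirm_rm() {
  printf 'Are you sure? [y/N]: '
  read -r answer
  [ "$answer" = "y" ] && echo "Model removed: $1"
}

printf 'y\n' | confirm_rm gpt4o
```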