# Installation

## Prerequisites

- Python 3.8+
- An LLM endpoint — any OpenAI-compatible API (llama.cpp, Ollama, vLLM, OpenRouter, etc.); see the quick connectivity check after this list
- Git — for cloning the repository
- Docker — required for isolated `runpy` and `bash` tool execution (see Docker Setup)
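If you already have an endpoint running, a quick way to confirm it speaks the OpenAI-compatible protocol is to hit the standard chat completions route. The base URL below is Ollama's default; the model name is only an example — substitute your own host, port, and model:

```bash
# Probe an OpenAI-compatible chat completions endpoint
# (URL is Ollama's default; model name is an example)
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.2", "messages": [{"role": "user", "content": "ping"}]}'
```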
## Clone and Install

```bash
git clone <your-repo-url> evonic-ai-platform
cd evonic-ai-platform
pip install -r requirements.txt
```

## Python Dependencies
| Package | Purpose |
|---|---|
| `flask>=3.0` | Web framework |
| `requests>=2.31` | HTTP client for LLM API |
| `python-dotenv>=1.0.0` | Environment variable loading |
| `anthropic>=0.40.0` | Anthropic API (optional, for improver module) |
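Taken together, the table corresponds to a `requirements.txt` along these lines (the exact pins in the repository may differ):

```
flask>=3.0
requests>=2.31
python-dotenv>=1.0.0
anthropic>=0.40.0  # optional, for the improver module
```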
## Optional Dependencies

For the Telegram channel integration (agent platform):

```bash
pip install python-telegram-bot
```

## Docker Setup
The agent tools `runpy` and `bash` execute code inside an isolated Docker container by default (via `DockerBackend`). This sandbox provides filesystem isolation, resource limits, and network restrictions — ensuring agent code runs safely without affecting the host system.
Prerequisites: Docker must be installed and the daemon running.
Build the sandbox image:

```bash
docker build -t evonic-sandbox:latest docker/tools/
```

The image is built from `docker/tools/Dockerfile` and includes Python 3.11, system utilities (curl, git, ripgrep, sqlite3, etc.), and a non-root `devuser` matching the host UID/GID. The host workspace is mounted at `/workspace`, and the `runpy_helpers` package is automatically available inside the container.
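To confirm the build succeeded, list the image:

```bash
docker images evonic-sandbox
```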
Configuration (in `.env`):

```bash
# Docker image name (default)
SANDBOX_IMAGE=evonic-sandbox:latest

# Resource limits
SANDBOX_MEMORY_LIMIT=512m
SANDBOX_CPU_LIMIT=1
SANDBOX_NETWORK=none  # or 'bridge'
SANDBOX_MAX_CONTAINERS=10

# Idle timeout in seconds (containers are destroyed after this)
SANDBOX_IDLE_TIMEOUT=1800
```

Note: If Docker is unavailable, set `sandbox_enabled=0` on the agent to fall back to local subprocess execution (less isolated).
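For reference, these settings map onto standard Docker flags. Running a container by hand with equivalent restrictions would look roughly like this — a sketch of the effect, not the exact invocation `DockerBackend` constructs:

```bash
# Illustrative only: approximates the sandbox restrictions configured above
docker run --rm \
  --memory 512m \
  --cpus 1 \
  --network none \
  -v "$PWD:/workspace" \
  -w /workspace \
  evonic-sandbox:latest python3 -c "print('hello from the sandbox')"
```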
## Install a Local Model Runner (Optional)

### Ollama (Recommended for Beginners)
```bash
# macOS
brew install ollama

# Linux
curl -fsSL https://ollama.com/install.sh | sh

# Windows
# Download from https://ollama.com/
```
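Once installed, start the server and pull a model. On many installs the server already runs as a background service, in which case the first line can be skipped; the model name is just an example:

```bash
# Start the server if it isn't already running as a service
ollama serve &

# Pull a model (name is an example)
ollama pull llama3.2
```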
### llama.cpp (For Edge/CPU-Only)

```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build --config Release -j $(nproc)
```
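The build produces `llama-server`, an HTTP server with an OpenAI-compatible API; a typical launch looks like this (the GGUF path is a placeholder for whatever model file you downloaded):

```bash
# Serve a local GGUF model over an OpenAI-compatible API (path is a placeholder)
./build/bin/llama-server -m models/your-model.gguf --port 8081
```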
### vLLM (For High-Throughput Production)

```bash
pip install vllm
```
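vLLM also ships an OpenAI-compatible server; one common way to launch it (the model name is an example):

```bash
# Start vLLM's OpenAI-compatible API server (model name is an example)
python -m vllm.entrypoints.openai.api_server \
  --model mistralai/Mistral-7B-Instruct-v0.2 \
  --port 8000
```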
Section titled “Verify Installation”python3 -c "import flask; import requests; print('OK')"Using the CLI
Section titled “Using the CLI”The evonic CLI provides commands for managing the platform. Check available commands with:
evonic --helpThe CLI covers server management, agents, skills, skillsets, models, plugins, and schedules. See each section for detailed CLI usage:
- Plugin Management — Install, list, configure plugins
- Skills — Install, enable, and manage skills
- Creating Agents — Create and manage agents
- Local Models — Manage LLM model configurations
- Skillsets — Apply agent templates
- Scheduler — Create scheduled jobs
## Starting the Server

Start the Evonic Flask server:

```bash
evonic start [--port PORT] [--host HOST] [--debug] [-f]
```

| Flag | Required | Description |
|---|---|---|
| `--port` | No | Port number (default: from config or 8080) |
| `--host` | No | Host to bind (default: `0.0.0.0`) |
| `--debug` | No | Enable debug mode |
| `-f, --foreground` | No | Run server in foreground (blocking mode) |
Examples:

```bash
# Start on default port
evonic start

# Start on custom port
evonic start --port 9000

# Start in foreground with debug mode
evonic start -f --debug
```

Output:

```
Server started (PID: 12345)
Host: 0.0.0.0
Port: 8080
URL: http://localhost:8080
```
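With the server up, a quick request confirms it is answering. The root path here is an assumption — any route you know the server exposes will do:

```bash
# Confirm the Flask server responds (root path is an assumption)
curl -i http://localhost:8080/
```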
## Stopping the Server

```bash
evonic stop
```

## Checking Status
```bash
evonic status
```

Output (running):

```
Server is running (PID: 12345)
Port: 8080
URL: http://localhost:8080
```

## Updating the Server
Section titled “Updating the Server”Check for and apply updates from the Git remote. Requires the update supervisor to be set up first.
evonic update [--check] [--tag TAG] [--rollback] [--force]| Flag | Description |
|---|---|
| `--check` | Fetch tags and report what is available — no update is applied |
| `--tag TAG` | Update to a specific tag instead of the latest |
| `--rollback` | Roll back to the previous stable release |
| `--force` | Skip SSH signature verification (development only) |
Examples:

```bash
# Check what version is available
evonic update --check

# Trigger an immediate update check on the running supervisor
evonic update

# Update to a specific tag
evonic update --tag v1.3.0

# Roll back to the previous release
evonic update --rollback
```

When the update supervisor is running in the background, `evonic update` signals it via `SIGUSR1` to trigger an immediate check. If the supervisor is not running, the update is performed inline in the current process.
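Equivalently, you can send that signal to a running supervisor by hand. The process pattern matched below is hypothetical; adjust it to however the supervisor process appears on your system:

```bash
# Send SIGUSR1 to the update supervisor to trigger an immediate check
# (the 'evonic' process pattern is a guess; adjust to your setup)
kill -USR1 "$(pgrep -f evonic)"
```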
See also: Update System guide