
Installation

Prerequisites:

  • Python 3.8+
  • An LLM endpoint — any OpenAI-compatible API (llama.cpp, Ollama, vLLM, OpenRouter, etc.)
  • Git — for cloning the repository
  • Docker — required for isolated runpy and bash tool execution (see Docker Setup)

Clone the repository and install the Python dependencies:
git clone <your-repo-url> evonic-ai-platform
cd evonic-ai-platform
pip install -r requirements.txt

Package                Purpose
flask>=3.0             Web framework
requests>=2.31         HTTP client for LLM API
python-dotenv>=1.0.0   Environment variable loading
anthropic>=0.40.0      Anthropic API (optional, for improver module)

For the Telegram channel integration (agent platform):

pip install python-telegram-bot

Docker Setup

The agent tools runpy and bash execute code inside an isolated Docker container by default (via DockerBackend). This sandbox provides filesystem isolation, resource limits, and network restrictions, so agent code runs safely without affecting the host system.

Prerequisites: Docker must be installed and the daemon running.

Build the sandbox image:

docker build -t evonic-sandbox:latest docker/tools/

The image is built from docker/tools/Dockerfile and includes Python 3.11, system utilities (curl, git, ripgrep, sqlite3, etc.), and a non-root devuser matching the host UID/GID. The host workspace is mounted at /workspace and the runpy_helpers package is automatically available inside the container.
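
As a quick check that the image built correctly, you can run it by hand with the workspace mounted, roughly as the sandbox does; this is only a sketch, and the exact invocation DockerBackend uses may differ:

docker run --rm -v "$(pwd):/workspace" -w /workspace evonic-sandbox:latest \
  python3 -c "import runpy_helpers; print('sandbox image OK')"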

Configuration (in .env):

# Docker image name (default)
SANDBOX_IMAGE=evonic-sandbox:latest
# Resource limits
SANDBOX_MEMORY_LIMIT=512m
SANDBOX_CPU_LIMIT=1
SANDBOX_NETWORK=none # or 'bridge'
SANDBOX_MAX_CONTAINERS=10
# Idle timeout in seconds (containers are destroyed after this)
SANDBOX_IDLE_TIMEOUT=1800
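
These settings map roughly onto standard docker run flags; the sketch below shows the manual equivalent under that assumption (the exact flags DockerBackend passes may differ):

docker run --rm --memory 512m --cpus 1 --network none \
  -v "$(pwd):/workspace" evonic-sandbox:latest \
  python3 -c "print('resource-limited sandbox OK')"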

Note: If Docker is unavailable, set sandbox_enabled=0 on the agent to fall back to local subprocess execution (less isolated).
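
A quick way to check whether the Docker daemon is reachable before falling back:

docker info > /dev/null 2>&1 && echo "Docker available" || echo "Docker unavailable"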

If you do not already have an OpenAI-compatible LLM endpoint, install one of the supported backends.

Ollama:
# macOS
brew install ollama
# Linux
curl -fsSL https://ollama.com/install.sh | sh
# Windows
# Download from https://ollama.com/
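
Once Ollama is installed and running, pull a model and confirm the OpenAI-compatible endpoint responds. The model name is only an example, and the default port 11434 is assumed:

ollama pull llama3.1
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.1", "messages": [{"role": "user", "content": "Hello"}]}'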

llama.cpp:
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build --config Release -j $(nproc)
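
After building, llama.cpp can serve an OpenAI-compatible API through its bundled server (llama-server in recent releases; older builds name the binary server). The model path and port below are placeholders:

./build/bin/llama-server -m /path/to/model.gguf --host 0.0.0.0 --port 8081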

vLLM:
pip install vllm
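
vLLM exposes an OpenAI-compatible server as well. Recent versions provide the vllm serve entry point (older releases use python -m vllm.entrypoints.openai.api_server); the model name here is only an example:

vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000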

Verify that the core Python dependencies are importable:
python3 -c "import flask; import requests; print('OK')"
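
The same pattern covers the remaining packages; the Telegram and Anthropic checks only matter if you installed those optional pieces:

python3 -c "import dotenv; print('python-dotenv OK')"
python3 -c "import telegram; print('python-telegram-bot OK')"
python3 -c "import anthropic; print('anthropic OK')"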

The evonic CLI provides commands for managing the platform. Check available commands with:

evonic --help

The CLI covers server management, agents, skills, skillsets, models, plugins, and schedules. See the corresponding documentation sections for detailed CLI usage.

Start the Evonic Flask server:

evonic start [--port PORT] [--host HOST] [--debug] [-f]

Flag              Required  Description
--port            No        Port number (default: from config or 8080)
--host            No        Host to bind (default: 0.0.0.0)
--debug           No        Enable debug mode
-f, --foreground  No        Run server in foreground (blocking mode)

Examples:

# Start on default port
evonic start
# Start on custom port
evonic start --port 9000
# Start in foreground with debug mode
evonic start -f --debug

Output:

Server started (PID: 12345)
Host: 0.0.0.0
Port: 8080
URL: http://localhost:8080
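
To confirm the server is actually listening, any HTTP response from the URL shown above (even a 404) is enough to show Flask is up:

curl -I http://localhost:8080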

Stop the server:
evonic stop

Check the server status:
evonic status

Output (running):

Server is running (PID: 12345)
Port: 8080
URL: http://localhost:8080

The evonic update command checks for and applies updates from the Git remote. It requires the update supervisor to be set up first.

evonic update [--check] [--tag TAG] [--rollback] [--force]

Flag        Description
--check     Fetch tags and report what is available; no update is applied
--tag TAG   Update to a specific tag instead of the latest
--rollback  Roll back to the previous stable release
--force     Skip SSH signature verification (development only)

Examples:

# Check what version is available
evonic update --check
# Trigger an immediate update check on the running supervisor
evonic update
# Update to a specific tag
evonic update --tag v1.3.0
# Roll back to the previous release
evonic update --rollback

When the update supervisor is running in the background, evonic update signals it via SIGUSR1 to trigger an immediate check. If the supervisor is not running, the update is performed inline in the current process.
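
If you need to trigger the check by hand, the same signal can be sent directly; the PID placeholder below is whatever process ID the supervisor is running under:

kill -USR1 <supervisor-pid>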

See also: Update System guide