
Getting Started with LYDOS

LYDOS runs as a local FastAPI server on port 8888 with 109 agents, 245 Q-engines, and 162 MCP tools. This guide walks you from a fresh clone to a verified running system.

WARNING
LYDOS is currently in internal alpha. The repository is private (lydianai/AILYDIAN-AGENT-ORCHESTRATOR). There is no public pip package or npm SDK yet — all setup is done by cloning the repo directly.

Prerequisites

  • Python 3.12+: The server and all core engines require Python 3.12 or later (tested on 3.13.7).
  • Git: Needed to clone the private repository. An SSH key must be configured for github.com.
  • PRIMARY_API_KEY: Key for the primary LLM provider (llama-3.3-70b-versatile). Obtain it from your provider dashboard.
  • BILINGUAL_API_KEY (optional): Enables the Bilingual Provider fallback (long-context multilingual models). Optional but recommended.
  • Node.js 18+ (optional): Only required to run the Next.js frontend at localhost:3001.
  • Container runtime + Compose (optional): For the distributed alpha cluster with relational database, cache layer, and message queue.
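Before starting, you can probe for these tools in one pass. This is just a convenience sketch; the tool list mirrors the prerequisites above, so adjust it to what you actually need:

```shell
# Print the version of each prerequisite tool, or flag it as missing.
# python3 and git are required; node and docker are optional (see above).
for tool in python3 git node docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $("$tool" --version 2>&1 | head -n 1)"
  else
    echo "$tool: NOT FOUND"
  fi
done
```

Anything reported as NOT FOUND that you need for your setup path should be installed before continuing.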

Step-by-step setup

1. Clone the repository

The repo is private. Make sure your SSH key is added to GitHub before running the clone.
terminalBASH
# SSH clone (recommended)
git clone [email protected]:lydianai/AILYDIAN-AGENT-ORCHESTRATOR.git
cd AILYDIAN-AGENT-ORCHESTRATOR

# Or HTTPS (requires personal access token)
git clone https://github.com/lydianai/AILYDIAN-AGENT-ORCHESTRATOR.git
cd AILYDIAN-AGENT-ORCHESTRATOR

2. Create and activate the virtual environment

The project uses a dedicated venv at ~/.ailydian-venv/. This path is hardcoded in the session-start hook and the LYD CLI, so keep it consistent.
terminalBASH
# Create venv at the expected path
python3 -m venv ~/.ailydian-venv

# Activate
source ~/.ailydian-venv/bin/activate

# Install all dependencies (80 packages)
pip install -r requirements.txt
NOTE
requirements.txt pins every dependency with a compatible range. Core packages include fastapi>=0.129, httpx, pydantic>=2, aiosqlite, and the LLM client SDK.

3. Configure environment variables

Copy the example env file and add your API keys. The server reads .env at startup via python-dotenv.
.envBASH
# Copy the example (never commit the real .env)
cp .env.example .env

# Minimum required
PRIMARY_API_KEY=your_primary_provider_key_here

# Recommended — enables Bilingual Provider fallback LLM
BILINGUAL_API_KEY=your_bilingual_key_here

# Optional — enables Analysis Provider as third-priority backup
ANALYSIS_API_KEY=your_analysis_key_here

# Optional — JWT secret for Q63 multi-user mode
LYDOS_JWT_SECRET=change_me_in_production

# Optional — task API key for distributed worker auth
LYDOS_TASK_API_KEY=change_me_in_production
TIP
Without ANALYSIS_API_KEY, the analysis_api module starts in degraded state (score 70/100). The system still operates normally — Primary Provider and Bilingual Provider remain fully active.
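To see at a glance which provider keys your .env actually sets, without printing the secrets themselves, a quick grep sketch (assumes you run it from the project root where .env lives):

```shell
# Report SET/MISSING for each provider key in .env, without echoing values.
# A key with an empty value (e.g. "ANALYSIS_API_KEY=") counts as MISSING.
for var in PRIMARY_API_KEY BILINGUAL_API_KEY ANALYSIS_API_KEY; do
  if grep -q "^${var}=..*" .env 2>/dev/null; then
    echo "$var: SET"
  else
    echo "$var: MISSING"
  fi
done
```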

4. Start the server

The server is a single FastAPI process on port 8888. It loads all 29 core modules, registers Q-series engine routers, and initialises 109 agents at startup.
terminalBASH
# Make sure venv is active
source ~/.ailydian-venv/bin/activate

# Start the server from the project root (adjust the path to your clone location)
cd ~/Masaüstü/AILYDIAN-AGENT-OS
python3 server.py

# Expected startup output:
# INFO:     LYDOS Agent OS v12.0.0 starting on http://0.0.0.0:8888
# INFO:     Loaded 29 modules
# INFO:     109 agents registered
# INFO:     MCP server: 162 tools available
# INFO:     Application startup complete.
NOTE
The server also auto-starts on session open via the hooks/session-start.sh hook when running inside an AI IDE. For persistent background service, install the systemd unit with scripts/install_service.sh.
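If you script the startup (for example in CI), it helps to wait until the port answers before moving on to the health check. A minimal polling sketch; BASE and ATTEMPTS are illustrative variables for this snippet, not LYDOS settings:

```shell
# Poll the health endpoint once per second until it answers or we give up.
BASE="${BASE:-http://localhost:8888}"
ATTEMPTS="${ATTEMPTS:-30}"
i=1
while [ "$i" -le "$ATTEMPTS" ]; do
  if curl -sf "$BASE/api/health" >/dev/null 2>&1; then
    echo "server is up (after $i attempt(s))"
    break
  fi
  i=$((i + 1))
  sleep 1
done
```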

5. Verify the health check

The health endpoint runs a 29-module check and returns a composite score out of 100.
terminalBASH
# Quick health check with curl
curl -s http://localhost:8888/api/health | python3 -m json.tool

# Or use the lyd CLI (shorter)
./lyd status

# Expected response (abridged):
# {
#   "status": "excellent",
#   "score": 94,
#   "modules_healthy": 28,
#   "modules_degraded": 1,
#   "modules_failed": 0,
#   "total_agents": 109,
#   "active_agents": 109
# }
A score of 90 or above means the system is fully operational. The one expected degraded module is analysis_api (score 70) when no Analysis Provider key is configured. This does not affect normal operation.
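When wiring the health check into a script or CI gate, you can turn that score into an exit code. A sketch using a sample payload; in practice, pipe the curl output from above into it instead of the echo:

```shell
# Exit 0 when the composite score is >= 90, non-zero otherwise.
# The echoed sample stands in for: curl -s http://localhost:8888/api/health
echo '{"status": "excellent", "score": 94}' | python3 -c "
import sys, json
d = json.load(sys.stdin)
print(f\"score={d['score']} status={d['status']}\")
sys.exit(0 if d['score'] >= 90 else 1)
"
```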

6. Run your first agent via REST

With the server running, send an agent task directly with curl. No SDK required.
terminalBASH
# List available agents
curl -s http://localhost:8888/api/agents | python3 -m json.tool | head -40

# Run the HARiKA project analyser
curl -s -X POST http://localhost:8888/api/harika/analyze \
  -H "Content-Type: application/json" \
  -d '{"project_path": ".", "analysis_type": "quick"}' \
  | python3 -m json.tool

# Send a chat message via primary LLM provider
curl -s -X POST http://localhost:8888/api/llm/chat \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is LYDOS?"}' \
  | python3 -m json.tool

# Multi-provider LLM chat with automatic failover
curl -s -X POST http://localhost:8888/api/llm/chat \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Summarise the Q26 workflow engine in 2 sentences"}' \
  | python3 -m json.tool
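When scripting against these endpoints, you usually want a single field rather than the whole pretty-printed document. A sketch on a sample payload; the `response` key here is an assumption about the reply shape, not a confirmed API field, so inspect the actual JSON first:

```shell
# Extract one field from a JSON reply. The echoed sample stands in for a
# real curl call; 'response' is a hypothetical key for illustration.
echo '{"response": "LYDOS is an agent OS.", "model": "llama-3.3-70b-versatile"}' |
  python3 -c "
import sys, json
d = json.load(sys.stdin)
print(d.get('response', d))
"
```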

7. Activate the LYD CLI (optional)

The lyd binary at the project root provides a convenience wrapper around the server API and the goal / memory systems.
terminalBASH
cd ~/Masaüstü/AILYDIAN-AGENT-OS

# Make the CLI executable (first time only)
chmod +x lyd

# Check system status
./lyd status

# View the goal hierarchy (Hedef Motoru, i.e. the Goal Engine)
./lyd hedef

# Today's task list
./lyd tasks

# Memory overview
./lyd memory

# Capture a 5-second voice command via speech-to-text (STT)
./lyd listen 5

# Consolidate session memories to long-term storage
./lyd consolidate
TIP
Add an alias to your shell profile for convenience:
alias lyd="~/Masaüstü/AILYDIAN-AGENT-OS/lyd"

8. Start the frontend (optional)

The Next.js frontend runs on port 3001. It consumes the REST API and WebSocket endpoints from the backend.
terminalBASH
cd web
npm install
npm run dev
# → http://localhost:3001

# Production build
npm run build
npm start

Environment variables

| Variable | Required | Default | Description |
|---|---|---|---|
| PRIMARY_API_KEY | Yes | (none) | Primary Provider API key, the default LLM (llama-3.3-70b-versatile, speech recognition) |
| BILINGUAL_API_KEY | No | (none) | Bilingual Provider key, the fallback LLM with long context and multilingual support |
| ANALYSIS_API_KEY | No | (none) | Analysis Provider key, the third-priority backup LLM. Without it, analysis_api runs degraded. |
| LYDOS_JWT_SECRET | No | alpha-jwt-secret-change-in-prod | JWT signing secret for Q63 multi-user authentication. Change before any networked deploy. |
| LYDOS_TASK_API_KEY | No | alpha-task-key-change-in-prod | Pre-shared key for distributed worker authentication in Q195 horizontal scaling. |
| DATABASE_URL | No | Embedded DB (auto-created) | Relational database URL for production. Omit to use the local embedded database in WAL mode. |
| REDIS_URL | No | (none) | Cache layer URL for Q194 persistent state. |
| NATS_URL | No | (none) | Message queue URL for the distributed-mode task queue (docker-compose.alpha.yml). |
| LOG_LEVEL | No | INFO | Server log level: DEBUG, INFO, WARNING, or ERROR. |
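The server loads .env itself via python-dotenv, but for ad-hoc shell scripts you may want the same variables exported into your shell. A common sketch, assuming .env contains only simple KEY=value lines with no quoting or spaces that need escaping:

```shell
# Export every variable defined in .env into the current shell session.
set -a          # auto-export all assignments that follow
. ./.env
set +a
echo "LOG_LEVEL=${LOG_LEVEL:-INFO}"   # falls back to the documented default
```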

Verifying the full system

After the server is running, these four commands confirm every major subsystem is operational:

smoke-test.shBASH
#!/usr/bin/env bash
BASE="http://localhost:8888"

echo "=== 1. Health ==="
curl -s "$BASE/api/health" | python3 -c "
import sys, json
d = json.load(sys.stdin)
print(f\"Score: {d['score']}/100  Status: {d['status']}\")
print(f\"Modules: {d['modules_healthy']}/{d.get('modules_total', 29)} healthy\")
"

echo "=== 2. Agents ==="
curl -s $BASE/api/agents | python3 -c "
import sys, json
d = json.load(sys.stdin)
agents = d if isinstance(d, list) else d.get('agents', [])
print(f'Available agents: {len(agents)}')
"

echo "=== 3. LLM chat ==="
curl -s -X POST "$BASE/api/llm/chat" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Reply with OK only"}' \
  | python3 -c "
import sys, json
d = json.load(sys.stdin)
print('LLM response:', str(d)[:80])
"

echo "=== 4. MCP tools ==="
curl -s "$BASE/api/q49/mcp/tools" | python3 -c "
import sys, json
d = json.load(sys.stdin)
tools = d if isinstance(d, list) else d.get('tools', [])
print(f'MCP tools: {len(tools)}')
"

Distributed alpha cluster (optional)

For testing the distributed runtime — relational database, cache layer, and message queue — use the dedicated alpha compose file. This sets up a control-plane node on port 8891 and a scalable worker pool.

terminalBASH
# Bring up the alpha cluster (single-node, no TLS)
docker compose -f docker-compose.alpha.yml up -d

# Scale workers to 3 replicas
docker compose -f docker-compose.alpha.yml up -d --scale lydos-worker=3

# Check cluster health
curl -s http://localhost:8891/api/health | python3 -m json.tool

# Tear down (keep volumes)
docker compose -f docker-compose.alpha.yml down

# Tear down and destroy volumes
docker compose -f docker-compose.alpha.yml down -v
NOTE
Port mapping for the alpha cluster: 5435 relational database, 6384 cache layer, 4223 message queue client, 8223 message queue monitoring, 8891 control-plane API.
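Before bringing the cluster up, you can check whether any of those host ports are already taken. A bash-only sketch using the /dev/tcp redirection feature (built into bash, no external tool needed):

```shell
# Probe each alpha-cluster host port; IN USE means something is listening.
for port in 5435 6384 4223 8223 8891; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port: IN USE"
  else
    echo "port $port: free"
  fi
done
```

A port reported IN USE will cause the corresponding container to fail its bind; stop the conflicting service or adjust the compose port mapping first.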

Running the test suite

terminalBASH
source ~/.ailydian-venv/bin/activate
cd ~/Masaüstü/AILYDIAN-AGENT-OS

# Run the core test suite (fast, no external network calls)
python3 -m pytest tests/test_core.py tests/test_agent_manager.py -v
# → 34 passed in ~6s

# Run the full test suite
python3 -m pytest tests/ -v

# Run with coverage report
python3 -m pytest tests/ --cov=core --cov-report=term-missing

# Run a specific engine's tests
python3 -m pytest tests/test_q26_workflow.py -v
NOTE
Three test files currently have collection errors due to import path issues: test_q1_gateway.py, test_q32_bug_bounty.py, and test_q57_swarm.py. All other test files pass. These will be fixed in the alpha stabilisation sprint.

Next steps