ECOSYSTEM

Local-first multi-agent AI coordination for teams who can't send data to the cloud.

A self-hosted ecosystem of four small services: Consciousness Server for shared memory, Cortex for local AI agents, machines-server for infrastructure awareness, and a key server for ed25519 auth. All dual-licensed AGPL-3.0-only plus commercial.

AGPL-3.0-only on every repository. Commercial licence available.

How the pieces fit.

Agents talk HTTP to the Consciousness Server (CS). CS orchestrates Redis (working state), ChromaDB (semantic search via Ollama embeddings), and three smaller services for machine awareness, auth, and execution.
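The flow above can be sketched as a minimal client. Everything here is an illustrative assumption — the base URL, the endpoint paths, and the field names are placeholders, not the actual Consciousness Server API:

```python
import json

# Hypothetical base URL -- a placeholder, not the real CS address.
CS_BASE = "http://localhost:8000"

def memory_write(agent_id: str, key: str, value: str) -> dict:
    """Build a working-state write that CS would hand to Redis (illustrative)."""
    return {"agent": agent_id, "key": key, "value": value}

def semantic_query(agent_id: str, text: str, top_k: int = 5) -> dict:
    """Build a semantic-search query that CS would embed via Ollama and
    run against ChromaDB (illustrative field names)."""
    return {"agent": agent_id, "query": text, "top_k": top_k}

# An agent would POST these as JSON, e.g. to CS_BASE + "/memory" and
# CS_BASE + "/search" (hypothetical paths).
write_req = memory_write("cortex-worker-1", "task/123/status", "in-progress")
search_req = semantic_query("cortex-cli", "open tasks on the GPU node")
print(json.dumps(write_req))
```

Because every agent goes through the same HTTP surface, any process — a Cortex worker, a tmux session, a cron job — can participate in the shared memory without linking against anything.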

What people use this for.

Multi-agent dev lab

Ships

One developer, several AI processes (Cortex CLI, Cortex worker, Claude Code in tmux). All log to the same Consciousness Server. Shared memory between sessions; coordination via chat + broadcast.

Cortex · Consciousness Server · semantic-search · machines-server
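The chat-plus-broadcast coordination in this setup could look like the following sketch. The message schema and fan-out logic are assumptions for illustration, not the server's actual wire format:

```python
from dataclasses import dataclass, field, asdict
import time

@dataclass
class Broadcast:
    """Hypothetical coordination message -- field names are illustrative."""
    sender: str
    channel: str
    body: str
    ts: float = field(default_factory=time.time)

def deliver(msg: Broadcast, subscribers: list[str]) -> dict[str, dict]:
    """Fan a broadcast out to every logged-in session except the sender."""
    payload = asdict(msg)
    return {s: payload for s in subscribers if s != msg.sender}

# Three AI processes sharing one server, as in the lab above.
sessions = ["cortex-cli", "cortex-worker", "claude-code"]
out = deliver(Broadcast("cortex-cli", "tasks", "claiming task 42"), sessions)
print(sorted(out))  # the two receiving sessions
```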

Regulated-industry RAG

Tested

Law office or research lab ingests its archive locally with Document Processor; agents query it with Cortex. Audit log via Key Server. Documents never leave the host.

Document Processor · Consciousness Server · Cortex · Key Server
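A local ingest-and-query loop of the kind this card describes might look like the sketch below. The fixed-width chunker and keyword scoring are deliberately naive stand-ins for Document Processor and the ChromaDB-backed embedding search — the point being that the whole loop runs in one process on one host:

```python
def chunk(text: str, size: int = 40) -> list[str]:
    """Naive fixed-width chunker standing in for Document Processor."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def query(chunks: list[str], term: str, top_k: int = 2) -> list[str]:
    """Keyword scoring standing in for embedding search; no document
    ever leaves this process, which is the point of the deployment."""
    scored = sorted(chunks, key=lambda c: c.lower().count(term.lower()),
                    reverse=True)
    return scored[:top_k]

archive = ("Retention policy: client files are kept seven years. "
           "Destruction of client files requires partner sign-off.")
hits = query(chunk(archive), "client")
```

An audit trail would wrap each `query` call with a signed Key Server entry; that part is omitted here.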

Heterogeneous fleet

Ships

Workstation with GPU runs Cortex with a 26B model; CPU host runs the same Cortex with a 4B model; both share state via Consciousness Server. Tasks route to whichever node has headroom.

Cortex × N · Consciousness Server · machines-server
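"Routes to whichever node has headroom" can be made concrete with a toy policy. The capacity metric and threshold below are illustrative assumptions, not Cortex's actual scheduler:

```python
def route(task_cost: float, nodes: dict[str, float]) -> str:
    """Send the task to the node with the most spare capacity that can
    absorb its cost; fail loudly if nobody has headroom. Illustrative only."""
    candidates = {name: free for name, free in nodes.items()
                  if free >= task_cost}
    if not candidates:
        raise RuntimeError("no node has headroom")
    return max(candidates, key=candidates.get)

# Fraction of capacity free, as machines-server might report it (assumed).
headroom = {"gpu-workstation": 0.7, "cpu-host": 0.3}
print(route(0.2, headroom))  # -> gpu-workstation
```

Because both nodes run the same Cortex against the same Consciousness Server, the router only has to pick a destination — the task state is already shared.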

Regulated industries

Law offices, medical research, clinical labs, public-sector operators, finance. Data sovereignty is a regulatory requirement, not a preference.

Engineering & specialised manufacturing

R&D teams with unpatented IP, design firms, drilling/foundation/precision-engineering shops. Internal documents stay internal.

Security-conscious homelabs and self-hosters

Engineers running their own GPU + storage who want a working multi-agent setup without standing up Vault, Postgres, ChromaDB, and a memory framework themselves.

Open-source contributors

Developers working on local-first AI tooling. AGPL-3.0-only protects every fork from being absorbed into a closed product.

PLATFORM

Machines aren't just servers.

Most agent platforms model agents but ignore the hardware they run on. BuildOnAI does the opposite. Every machine in your fleet — workstation with GPU, app host, edge node, eventually a 3D printer or an inverter — is a first-class entity with hardware profile, available models, and live telemetry. Today it routes agents; tomorrow it orchestrates the shop floor.

Read more →

Which licence do you need?

AGPL-3.0-only — free

Free

Right choice if you're publishing your own modifications, running it for yourself, doing open-source research, or building something you intend to release as AGPL. The catch: AGPL extends to network services — if you offer a modified BuildOnAI as a service, you must publish your modifications.

Commercial licence

Paid

Right choice if you want to embed BuildOnAI in a closed-source product, run it as a SaaS without open-sourcing your modifications, or your organisation can't accept AGPL's network-service obligation. Pricing is bespoke (project size, support level, deployment model). Email [email protected] with a short description of the use case for a quote.

Not designed for weapon systems, mass surveillance, or autonomous critical infrastructure without a human in the loop. This is written into our commercial licence, not just marketing copy. If your use case crosses these lines, BuildOnAI is not the right tool, and the licence is enforced as a contractual matter.