Multi-agent dev lab
One developer, several AI processes (Cortex CLI, Cortex worker, Claude Code in tmux). All log to the same Consciousness Server. Shared memory between sessions; coordination via chat + broadcast.
A self-hosted ecosystem of four small services: Consciousness Server for shared memory, Cortex for local AI agents, machines-server for infrastructure awareness, and a key server for ed25519 auth. All dual-licensed AGPL-3.0-only plus commercial.
AGPL-3.0-only on every repository. Commercial licence available.
Shared memory, semantic search, agent and skill registry, machine awareness, ed25519 auth. Six HTTP services in one Docker Compose stack.
Local AI agent with tool calling, powered by Ollama. CLI, Web UI, worker mode. Multi-model orchestration. Every fork stays open.
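Tool calling, at its core, is dispatch: the model emits a structured call, the agent maps it to a local function and feeds the result back. A minimal sketch of that dispatch step — the tool names and call shape here are illustrative assumptions, not Cortex's actual tool registry:

```javascript
// Illustrative tool table; a real agent would register richer handlers.
const tools = {
  read_file: (args) => `contents of ${args.path}`,
  shell: (args) => `ran: ${args.cmd}`,
};

// Dispatch a tool call already parsed from the model's JSON output.
function dispatch(call) {
  const fn = tools[call.name];
  if (!fn) throw new Error(`unknown tool: ${call.name}`);
  return fn(call.arguments);
}

const result = dispatch({ name: "read_file", arguments: { path: "README.md" } });
console.log(result); // "contents of README.md"
```

The result string would be appended to the conversation and sent back to the model for the next turn.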
Desktop app (Tauri + Rust + Svelte) for PDF / DOCX / TXT with image-and-context extraction. For documents that must not leave the host.
Roughly 300 lines of Node.js, zero runtime dependencies. SSH keys and API tokens with an IP allow-list and audit log.
Agents talk HTTP to the Consciousness Server. CS orchestrates Redis (working state), ChromaDB (semantic search via Ollama embeddings), and three smaller services for machine awareness, auth, and execution.
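From an agent's point of view, writing to shared memory is one HTTP POST. A sketch of what that might look like — the endpoint path and field names below are assumptions for illustration, not the Consciousness Server's documented API:

```javascript
// Build a memory entry; field names are hypothetical.
function buildMemoryEntry(agent, content, tags) {
  return {
    agent,                       // which process wrote this (e.g. "cortex-cli")
    content,                     // free text; CS would embed it via Ollama for ChromaDB
    tags,                        // used for filtering semantic search
    ts: new Date().toISOString(),
  };
}

const entry = buildMemoryEntry("claude-code", "Refactored auth module", ["dev", "auth"]);

// An agent would then POST it, e.g. (host and path are placeholders):
// await fetch("http://consciousness.local/memories", {
//   method: "POST",
//   headers: { "content-type": "application/json" },
//   body: JSON.stringify(entry),
// });
```

Because every agent writes through the same endpoint, a later semantic query from any session can surface this entry.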
Law office or research lab ingests its archive locally with Document Processor; agents query it with Cortex. Audit log via Key Server. Documents never leave the host.
Workstation with GPU runs Cortex with a 26B model; CPU host runs the same Cortex with a 4B model; both share state via Consciousness Server. Tasks route to whichever node has headroom.
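The routing decision in the mixed-fleet setup above reduces to picking the node with headroom. A minimal sketch under assumed telemetry fields — the machine records and scoring rule are illustrative, not what machines-server actually exposes:

```javascript
// Hypothetical snapshot of two Cortex nodes (field names are assumptions).
const nodes = [
  { name: "gpu-workstation", model: "local-26b", freeVramGb: 6, queued: 3 },
  { name: "cpu-host", model: "local-4b", freeVramGb: 0, queued: 0 },
];

// Prefer the node with the fewest queued tasks; break ties by free VRAM.
function pickNode(candidates) {
  return [...candidates].sort(
    (a, b) => a.queued - b.queued || b.freeVramGb - a.freeVramGb
  )[0];
}

const target = pickNode(nodes);
console.log(target.name); // "cpu-host"
```

Here the busy GPU box loses to the idle CPU host; a small task lands on the 4B model rather than queuing behind the 26B one.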
Law offices, medical research, clinical labs, public-sector operators, finance. Data sovereignty is a regulatory requirement, not a preference.
R&D teams with unpatented IP, design firms, drilling/foundation/precision-engineering shops. Internal documents stay internal.
Engineers running their own GPU + storage who want a working multi-agent setup without standing up Vault, Postgres, ChromaDB, and a memory framework themselves.
Developers working on local-first AI tooling. AGPL-3.0-only protects every fork from being absorbed into a closed product.
Most agent platforms model agents but ignore the hardware they run on. BuildOnAI does the opposite. Every machine in your fleet — workstation with GPU, app host, edge node, eventually a 3D printer or an inverter — is a first-class entity with hardware profile, available models, and live telemetry. Today it routes agents; tomorrow it orchestrates the shop floor.
Right choice if you're publishing your own modifications, running it for yourself, doing open-source research, or building something you intend to release as AGPL. The catch: AGPL extends to network services — if you offer a modified BuildOnAI as a service, you must publish your modifications.
Right choice if you want to embed BuildOnAI in a closed-source product, run it as a SaaS without open-sourcing your modifications, or your organisation can't accept AGPL's network-service obligation. Pricing is bespoke (project size, support level, deployment model). Email [email protected] with a short description of the use case for a quote.
Not designed for weapon systems, mass surveillance, or autonomous critical infrastructure without a human in the loop. This is written into our commercial licence — not just marketing. If your use case conflicts with these lines, BuildOnAI is not the right tool, and the licence is enforced as a contractual matter.