PRE-LAUNCH BRIEF
Governed AI
for the perimeter.
We are two engineers building governed AI infrastructure for environments where data cannot leave the network. This brief describes what we have built so far, what we are building toward, and what we ask of our first design partners.
Mission
Most "AI for the enterprise" deployments hand customer prompts to someone else's GPUs. That trade is unacceptable in regulated and high-assurance environments (clinical, defense, financial-control, critical-infrastructure), where the data is the asset.
We are building an AI substrate that runs inside the perimeter: a 4-stage layered airlock between the user and the model, a policy engine that decides what crosses, and an append-only audit log that the AO can read. The customer's data and the customer's hardware never separate.
We are not the first to claim this. We are trying to be the first to show it working end-to-end with a public design before we sell it.
What we have built
Everything in the list below is verifiable: open the demo and you can replicate every number in your own browser session.
The 4-stage airlock
The airlock is the part you can touch in the demo. It is a typed, deterministic-first pipeline. The constrained-rewrite stage is opt-in, depth-bounded, and never the default path.
user prompt
│
▼
┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐
│ STAGE 1 │ ─▶ │ STAGE 2 │ ─▶ │ STAGE 3 │ ─▶ │ STAGE 4 │
│ regex + │ │ schema │ │ policy │ │ rewrite │
│ recog │ │ shape │ │ (OPA) │ │ LLM-fb │
│ │ │ │ │ │ │ depth = 1 │
│ refuse │ │ refuse │ │ route │ │ opt-in │
└─────┬─────┘ └─────┬─────┘ └─────┬─────┘ └─────┬─────┘
│ │ │ │
└────────────────┴────────────────┴────────────────┘
│
▼
┌───────────────────┐
│ inference target │
│ (customer GPUs) │
└───────────────────┘
│
▼
┌───────────────────┐
│ review queue │
│ audit log → │
└───────────────────┘
- 1 Deterministic regex + entity recognizer. Real PII (SSN, credit card, secrets, internal hostnames) triggers a hard reject before the model is invoked. You can verify this in the demo by pasting a synthetic SSN.
- 2 Strict JSON-schema validation. The envelope is enforced with additionalProperties: false. Anything out-of-shape is refused at the gateway.
- 3 Policy engine (OPA Rego). Per-class redaction, approval-queue routing, provenance-aware promotion gates. The policy bundle is versioned and signed.
- 4 Constrained LLM rewrite. Fallback only. Depth-bounded. Opt-in per request class. Never the default path.
What we are building toward
Everything in this section is a TARGET. None of it is shipped. We will tell you the truth about where we are in any briefing.
We have no certifications today. We are not in production at any agency, hospital, bank, or utility. We are designing the deployment topology to be cert-friendly — air-gap install, signed artifacts, append-only audit, customer-managed keys — so that when a pilot customer asks for a cert path, we are not starting from zero.
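One way "append-only audit" becomes checkable rather than aspirational: each log entry commits to the hash of the previous entry, so any retroactive edit breaks the chain. This sketch uses illustrative field names, not the shipped log format:

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    # Canonical JSON so the hash is stable across writers.
    blob = json.dumps(entry, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(blob.encode()).hexdigest()

def append(log: list, decision: str, request_id: str) -> None:
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"seq": len(log), "request_id": request_id,
             "decision": decision, "prev": prev}
    entry["hash"] = entry_hash(entry)
    log.append(entry)

def verify(log: list) -> bool:
    """Re-walk the chain; any mutated or reordered entry fails."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev"] != prev or entry["hash"] != entry_hash(body):
            return False
        prev = entry["hash"]
    return True

log = []
append(log, "refuse:stage1", "r-001")
append(log, "route:review-queue", "r-002")
assert verify(log)
log[0]["decision"] = "allow"   # tampering...
assert not verify(log)          # ...is detectable
```

An auditor who holds only the latest hash can detect any rewrite of history, which is the property an AO-readable log actually needs.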
Design-partner program
We are looking for two to four design partners. The best fit is technical leaders at organizations that already block public LLMs and have an unmet need for sanctioned AI: healthcare, finance, critical infrastructure, defense-adjacent.
What you get
- Direct line to both founders, no account management layer
- Pilot deployment on your hardware, your network, your terms
- Co-design of the policy bundle for your environment
- Free for the duration of M1–M6 (~12 months)
- Source-available reference implementation
- Right of first refusal on commercial terms
What we ask
- One named technical owner on your side
- One real workflow we can co-design against
- Bi-weekly 30-min sync, 12 weeks minimum
- Permission to publish anonymized lessons learned
- Honest feedback — especially the "this will not work" feedback
- An intro to one peer if it works
Open design
We publish our design decisions before we ship the code. Read these before booking a call:
- 01 Architecture — dual-plane + airlock topology, network policy, isolation rules
- 02 Stack — LLM gateway, inference servers, policy engine, RAG library — with rationale
- 09 Policy engine — OPA Rego DSL, rule classes, state machine, signing
- 10 Non-functional requirements — SLOs, retention tiers, key custody, DR
- 11 Threat model — STRIDE coverage, attacker classes, mitigations
- 12 Airlock contract — envelope schema, signed decisions, validation pseudocode
- 13 Redaction corpus — methodology, ≥100 entries with adversarial test cases
The design docs are currently in a private repo. We will share read access in any pilot conversation. Ask for access →
Contact
One email. One inbox. Both founders read it.
If you are not yet ready for a call, the demo is the cheapest way to see whether the airlock UX matches what you would buy. It runs in your browser. No signup. No tracking beyond the per-device demo key you can throw away.
References & footnotes
- [1] Demo runtime. The live demo (/demo.html) runs on Cloudflare Workers AI using the Llama 3.1 8B model. Synthetic incidents only. Real-PII patterns are hard-rejected server-side before the model is invoked. The on-prem deploy uses customer GPUs.
- [2] Design documents. 14 markdown files: 00-decisions, 01-architecture, 02-stack, 03-pitch, 04-financial-model, 05-governed-agents, 06-mvp-roadmap, 07-cost-infra, 08-pre-coding-readiness, 09-policy-engine, 10-nfr, 11-threat-model, 12-airlock-contract, 13-redaction-corpus, 14-ops-dashboard. Read access shared in any pilot conversation.
- [3] Reference military-design corpus. 8 markdown files at refs/military-design/: vendor patterns, AIOps copy, tactical UX, visual design. Cited URLs throughout.
- [4] Status of certifications. None held. None claimed. Cert path is a TARGET per §4.00 M7.
- [5] Founders. Two engineers. We will tell you our names and locations on the first call. We do not claim a country of origin on this page because the product runs on your hardware, in your jurisdiction, by design.