SECURITY

Trust is engineered, not promised

Zero secrets in the binary. OS-encrypted credential storage. Private mesh networking. Defence-in-depth prompt injection controls. Built for IT review.

SECRETS & ENCRYPTION

Where do secrets live? Nowhere you can reach.

Zero Secrets in the Binary

The installer ships with no API keys, tokens, or credentials. Secrets are fetched at runtime from a config server and stored in OS-level encrypted storage.

3-Tier Config Loading

Primary: remote config server. Fallback: OS-encrypted local cache. Emergency: non-sensitive defaults only. Offline operation uses the encrypted cache — secrets never appear in plaintext.
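The fallback order above can be sketched as a simple chain. This is an illustrative sketch only — the source names and config shape are assumptions, not the actual implementation:

```typescript
// Hypothetical sketch of the three-tier loading order described above.
type Config = Record<string, string>;

interface ConfigSource {
  name: string;
  load(): Promise<Config | null>;
}

async function loadConfig(
  sources: ConfigSource[]
): Promise<{ tier: string; config: Config }> {
  for (const source of sources) {
    try {
      const config = await source.load();
      // A tier that returns a config wins; later tiers are never consulted.
      if (config) return { tier: source.name, config };
    } catch {
      // Tier unavailable (e.g. offline): fall through to the next one.
    }
  }
  throw new Error("no config tier available");
}
```

Offline, the remote tier throws and the loader falls through to the encrypted local cache without ever touching plaintext secrets.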

OS-Level Encryption at Rest

Cached secrets use Windows DPAPI, bound to the OS user account. The encrypted store cannot be decrypted on another machine or by another user — even with direct disk access.

Network Isolation via Tailscale

Config fetching, fleet sync, messaging relay, and SSH tunnelling run over a Tailscale private mesh. No public endpoints. Messaging server binds to 127.0.0.1. Zero internet-facing attack surface.

AUTHENTICATION & ACCESS CONTROL

Who can access the system? Only verified users.

License Key Authentication

Every config request includes the license key as a Bearer token, validated server-side. Invalid or revoked keys rejected immediately. The config server restricts access to registered license holders only.
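A minimal sketch of that server-side check, assuming a simple in-memory key registry (the real registry, revocation store, and endpoint are not shown here):

```typescript
// Illustrative license-key gate; the registry would be backed by the license database.
const registeredKeys = new Set<string>();

function authorize(authHeader: string | undefined): boolean {
  // Only the Bearer scheme is accepted; anything else is rejected outright.
  if (!authHeader?.startsWith("Bearer ")) return false;
  const key = authHeader.slice("Bearer ".length).trim();
  // Revoking a key means removing it from the registry: the next request fails.
  return registeredKeys.has(key);
}
```

Because the check runs server-side on every request, revocation takes effect on the very next fetch.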

HMAC-SHA256 Message Verification

Inbound messaging webhooks (WhatsApp, Slack via N8N) are HMAC-SHA256 signed and verified before processing. Telegram and Discord connect via native authenticated SDKs. No unsigned messages enter the pipeline.
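Verification looks roughly like the following sketch using Node's `crypto` module; the header format and secret handling are assumptions, not the actual webhook contract:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative webhook signature check: recompute the HMAC over the raw body
// and compare against the signature the sender attached.
function verifySignature(rawBody: string, signatureHex: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  // Length check first: timingSafeEqual throws on mismatched lengths.
  if (received.length !== expected.length) return false;
  // Constant-time compare to avoid leaking the signature via timing.
  return timingSafeEqual(received, expected);
}
```

Any payload that fails this check is dropped before it reaches the pipeline.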

Sender Verification with Pairing Codes

Unknown senders cannot issue commands. Each new sender completes a 6-digit one-time pairing challenge to link their account. Unverified messages are dropped.
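The pairing flow can be sketched as below. The storage, expiry window, and single-attempt policy here are illustrative assumptions:

```typescript
import { randomInt } from "node:crypto";

// Hypothetical pairing state; real storage and TTLs may differ.
const pending = new Map<string, { code: string; expires: number }>();
const verified = new Set<string>();

function issuePairingCode(senderId: string, ttlMs = 5 * 60_000): string {
  // 6-digit one-time code, zero-padded.
  const code = String(randomInt(0, 1_000_000)).padStart(6, "0");
  pending.set(senderId, { code, expires: Date.now() + ttlMs });
  return code;
}

function redeemPairingCode(senderId: string, code: string): boolean {
  const entry = pending.get(senderId);
  // One-time: any attempt, right or wrong, consumes the code.
  pending.delete(senderId);
  if (!entry || Date.now() > entry.expires || entry.code !== code) return false;
  verified.add(senderId);
  return true;
}
```

Messages from senders absent from the verified set never reach the command pipeline.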

Rate Limiting at Every Layer

Config server: 10 req/user/hr. Messaging: 10 msg/sender/min. Payload: 4,000 char max. Prevents credential stuffing, abuse, and denial-of-service.
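A sliding-window limiter of the kind described is a few lines; this sketch is illustrative, not the production middleware:

```typescript
// Minimal per-key sliding-window rate limiter.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();
  private readonly limit: number;
  private readonly windowMs: number;

  constructor(limit: number, windowMs: number) {
    this.limit = limit;
    this.windowMs = windowMs;
  }

  allow(key: string, now = Date.now()): boolean {
    // Keep only timestamps still inside the window.
    const recent = (this.hits.get(key) ?? []).filter(t => now - t < this.windowMs);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false; // over the limit: reject
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

The messaging tier above would be `new SlidingWindowLimiter(10, 60_000)` per sender; the config server, 10 per hour per user.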

PROMPT INJECTION DEFENCE

What if someone tries to manipulate the AI?

Defence in depth — no single layer is relied upon. Every boundary validates, sanitises, and constrains.

Input Sanitisation

Control characters stripped. Strict length limits enforced. Malformed or oversized payloads rejected at the boundary before reaching any AI model.
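As a sketch, the boundary check strips control characters and rejects oversized input rather than truncating it silently. The exact character classes and limits here are assumptions; the 4,000-character ceiling mirrors the payload limit stated earlier:

```typescript
// Illustrative boundary sanitiser.
const MAX_LENGTH = 4_000;

function sanitise(input: string): string | null {
  // Oversized payloads are rejected outright, never truncated.
  if (input.length > MAX_LENGTH) return null;
  // Strip C0/C1 control characters, keeping tab, newline, and carriage return.
  const cleaned = input.replace(
    /[\u0000-\u0008\u000B\u000C\u000E-\u001F\u007F-\u009F]/g,
    ""
  );
  // Nothing meaningful left after cleaning counts as malformed.
  return cleaned.trim().length > 0 ? cleaned : null;
}
```

A `null` result means the payload never reaches a model.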

Sender Authentication Gate

Only verified, paired senders can submit prompts via messaging. Anonymous users cannot interact with the AI. External injection surface closed by default.

HMAC-Verified Command Chain

Every message in the pipeline is HMAC-signed. Commands cannot be injected mid-chain without the signing secret, which never appears in the binary or in plaintext.

Scoped Tool Execution

AI tool calls route through a typed IPC protocol with a static allowlist. Only pre-approved operations are available — no arbitrary system commands.
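The allowlist pattern looks like this sketch; the tool names and handlers are hypothetical, not the product's actual tool set:

```typescript
// Illustrative static allowlist: the only operations the AI can ever invoke.
type ToolHandler = (args: Record<string, unknown>) => unknown;

const TOOL_ALLOWLIST: ReadonlyMap<string, ToolHandler> = new Map<string, ToolHandler>([
  ["read_note", (args) => `note:${args.id}`],
  ["set_reminder", (args) => `reminder:${args.when}`],
]);

function dispatchToolCall(name: string, args: Record<string, unknown>): unknown {
  const handler = TOOL_ALLOWLIST.get(name);
  // No fallback path: an unlisted name fails, it never reaches a shell or eval.
  if (!handler) throw new Error(`tool not allowlisted: ${name}`);
  return handler(args);
}
```

Because the map is static, an injected prompt cannot enlarge the set of reachable operations at runtime.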

Session Isolation

Rolling 50-message context window per session. Auto-expiry: 60 min voice, 30 min text. Expired context is purged — no stale injection vectors persist.
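The rolling-window-plus-expiry behaviour can be sketched as follows; the session shape is an assumption, while the 50-message cap and the 30/60-minute TTLs come from the description above:

```typescript
// Illustrative session context handling.
interface Session {
  messages: string[];
  lastActivity: number;
  ttlMs: number; // e.g. 60 min for voice sessions, 30 min for text
}

const MAX_CONTEXT = 50;

function addMessage(session: Session, msg: string, now = Date.now()): void {
  // Expired context is purged before any reuse: no stale injection vectors.
  if (now - session.lastActivity > session.ttlMs) {
    session.messages = [];
  }
  session.messages.push(msg);
  // Rolling window: only the most recent 50 messages survive.
  if (session.messages.length > MAX_CONTEXT) {
    session.messages = session.messages.slice(-MAX_CONTEXT);
  }
  session.lastActivity = now;
}
```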

Process Sandboxing

Renderer runs with contextIsolation enabled and nodeIntegration disabled. All sensitive operations go through a typed context bridge with explicit permission boundaries.

DATA SOVEREIGNTY & COMPLIANCE

Your data. Your machines. Your rules.

DESKTOP-NATIVE

Data stays on your machines. AI calls use your API keys with providers you choose. No intermediary cloud.

DATA SOVEREIGNTY

No vendor lock-in. No data mining. No model training on your content. GDPR-compliant with deletion on request.

FULL AUDIT TRAIL

Every action, command, and data access logged with timestamps and user IDs. Exportable to your SIEM or compliance system.

OPEN ARCHITECTURE

Every config file, log, and data flow is inspectable. Full architecture walkthrough included in client onboarding.

COMMON QUESTIONS FROM IT TEAMS

Questions your IT team will ask

Does any of our data leave the machine?

Only when you choose it. AI model calls go to your configured provider using your own API keys. Inter-machine sync runs exclusively over your private Tailscale mesh. No cloud backend touches your data.

What happens if the config server is compromised?

The config server serves secrets only to license-key authenticated, registered users. Rate-limited to 10 req/hr per user. Revoking a license key on the server invalidates all cached copies on next fetch.

Can the AI execute arbitrary commands on our machines?

No. Tool execution routes through a scoped IPC protocol. The AI can only invoke pre-approved operations. The renderer has no access to Node.js, the filesystem, or system processes.

How do you handle employee offboarding?

Revoke their license key on the config server. The DPAPI-encrypted cache can no longer be refreshed once the server rejects the key, and is invalidated on the next fetch. Tailscale device removal cuts network access immediately.

Is there an audit trail for compliance?

Every action, tool invocation, and data access is logged with timestamps and user IDs. Logs are stored locally and can be forwarded to your SIEM or compliance system.

What about GDPR?

Data is stored locally — you are the data controller. Messaging pairings in Convex use JWT-validated queries with per-user isolation. Deletion: clear the local memory directory and revoke the Convex record.

Want the full security walkthrough?

We walk through the complete architecture during onboarding. Book a demo and bring your IT team.

BOOK A DEMO