SECURITY

Trust is engineered, not promised

Built for organisations where security is a requirement, not a feature. Every layer of Yma Agent is designed to satisfy IT teams and compliance officers, and to stand up to security audits.

SECRETS & ENCRYPTION

Where do secrets live? Nowhere you can reach.

Zero Secrets in the Binary

The packaged installer contains zero API keys, tokens, or credentials. All secrets are fetched at runtime from a dedicated config server and stored in OS-level encrypted storage. There is nothing to extract from the binary.

3-Tier Config Loading

Secrets flow through three tiers: remote config server (primary), OS-encrypted local cache (fallback), and non-sensitive defaults (emergency). If the server is unreachable, the app works offline from the encrypted cache — but secrets are never exposed in plaintext.
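The fallback order can be sketched as a small loader. This is illustrative only — the function names and tier interfaces are assumptions, not Yma Agent's actual API, and the sketch is synchronous for brevity where a real config fetch would be asynchronous and authenticated:

```typescript
type Secrets = Record<string, string>;

// Tier order: remote server, then OS-encrypted cache, then non-sensitive defaults.
// All names here are illustrative sketches, not the real loader API.
function loadConfig(
  fetchRemote: () => Secrets,               // tier 1: throws if the server is unreachable
  readEncryptedCache: () => Secrets | null, // tier 2: decrypted via the OS keystore
  defaults: Secrets,                        // tier 3: contains no real secrets
): { source: "remote" | "cache" | "defaults"; secrets: Secrets } {
  try {
    return { source: "remote", secrets: fetchRemote() };
  } catch {
    const cached = readEncryptedCache();
    if (cached) return { source: "cache", secrets: cached };
    return { source: "defaults", secrets: defaults };
  }
}
```

The key property is that the emergency tier holds no real secrets, so even total fallback never degrades into plaintext credential exposure.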

OS-Level Encryption at Rest

Cached secrets are encrypted using Windows DPAPI (Data Protection API), tied directly to the OS user account. The encrypted cache cannot be decrypted on a different machine or by a different user — even with direct disk access.

Network Isolation via Tailscale

All inter-machine communication — config fetching, fleet sync, messaging relay, SSH tunneling — happens over a Tailscale private mesh VPN. No public endpoints. No exposed ports. The messaging server binds to 127.0.0.1 only. Zero internet-facing attack surface.

AUTHENTICATION & ACCESS CONTROL

Who can access the system? Only verified users.

JWT Authentication on Every Request

Users authenticate via Clerk. Every request to the config server includes a short-lived JWT validated server-side. Expired or invalid tokens are rejected immediately. An optional user allowlist restricts which accounts can access secrets.
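In practice Clerk tokens are verified with Clerk's own tooling; the hand-rolled HS256 sketch below only illustrates the three checks the text describes — signature, expiry, and an optional allowlist — and should not be read as the production verifier:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Simplified HS256 JWT check: signature, expiry, then user allowlist.
// A real deployment would use a vetted JWT library; this is a sketch of the checks.
function verifyJwt(token: string, secret: string, allowlist: Set<string>): boolean {
  const [h, p, sig] = token.split(".");
  if (!h || !p || !sig) return false;
  const expected = createHmac("sha256", secret).update(`${h}.${p}`).digest("base64url");
  const a = Buffer.from(sig), b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return false; // invalid signature
  const claims = JSON.parse(Buffer.from(p, "base64url").toString());
  if (typeof claims.exp !== "number" || claims.exp * 1000 < Date.now()) return false; // expired
  return allowlist.has(claims.sub); // optional per-account allowlist
}
```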

HMAC-SHA256 Message Verification

Every inbound webhook from messaging platforms (Telegram, WhatsApp, Slack) is signed with HMAC-SHA256 and verified before processing. This prevents message spoofing, replay attacks, and unauthorized command injection.
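A minimal verification sketch follows. Header names and signature formats vary per platform, and the timestamp parameter is an assumption here — including it in the signed payload is what makes the replay-attack claim hold, since a replayed message carries a stale timestamp:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify an inbound webhook body against its HMAC-SHA256 signature.
// Signing `${timestamp}.${rawBody}` binds the signature to a point in time,
// so old captures cannot be replayed past the skew window.
function verifySignature(
  rawBody: string,
  signatureHex: string,
  timestampMs: number,
  secret: string,
  maxSkewMs = 300_000, // reject anything older than 5 minutes
): boolean {
  if (Math.abs(Date.now() - timestampMs) > maxSkewMs) return false; // replay guard
  const expected = createHmac("sha256", secret).update(`${timestampMs}.${rawBody}`).digest("hex");
  const a = Buffer.from(signatureHex, "hex");
  const b = Buffer.from(expected, "hex");
  return a.length === b.length && timingSafeEqual(a, b); // constant-time compare
}
```

The constant-time comparison matters: a naive string `===` can leak how many leading bytes matched, which an attacker can exploit to forge signatures byte by byte.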

Sender Verification with Pairing Codes

Unknown senders cannot issue commands. Each new sender must complete a 6-digit one-time pairing code challenge to link their messaging account to an authenticated user. Unverified messages are ignored.
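The challenge flow can be sketched as follows — a one-time code with a short TTL, burned on any redemption attempt. The data shapes and the five-minute TTL are illustrative assumptions:

```typescript
import { randomInt } from "node:crypto";

interface Challenge { code: string; expiresAt: number; }

const pending = new Map<string, Challenge>(); // senderId -> active challenge

function issueChallenge(senderId: string, ttlMs = 5 * 60_000): string {
  const code = randomInt(0, 1_000_000).toString().padStart(6, "0"); // crypto-strong 6 digits
  pending.set(senderId, { code, expiresAt: Date.now() + ttlMs });
  return code; // delivered out-of-band to the messaging account being paired
}

function redeemChallenge(senderId: string, attempt: string): boolean {
  const c = pending.get(senderId);
  pending.delete(senderId); // one-time: any attempt, right or wrong, burns the code
  return !!c && Date.now() <= c.expiresAt && c.code === attempt;
}
```

Burning the code on a failed attempt means brute-forcing the 6-digit space requires a fresh challenge per guess, which the rate limits below make impractical.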

Rate Limiting at Every Layer

Config server: 10 requests per user per hour. Messaging: 10 messages per sender per minute. Content: 4,000 character maximum per message. These limits prevent abuse, credential stuffing, and denial-of-service.
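Limits like these are commonly enforced with a per-key counter. The fixed-window limiter below is a sketch wired to the numbers above; the actual enforcement mechanism may differ:

```typescript
// Fixed-window rate limiter: at most `limit` hits per key per window.
class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();
  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now = Date.now()): boolean {
    const h = this.hits.get(key);
    if (!h || now - h.windowStart >= this.windowMs) {
      this.hits.set(key, { count: 1, windowStart: now }); // new window
      return true;
    }
    h.count += 1;
    return h.count <= this.limit;
  }
}

const perSender = new RateLimiter(10, 60_000);    // 10 messages / sender / minute
const perUser = new RateLimiter(10, 3_600_000);   // 10 config requests / user / hour
```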

PROMPT INJECTION DEFENCE

What if someone tries to manipulate the AI?

Prompt injection is one of the most discussed attack vectors in AI systems. We take a defence-in-depth approach — no single layer is relied upon. Every boundary validates, sanitises, and constrains.

Input Sanitisation

All inbound messages are stripped of control characters and checked against strict length limits before reaching any AI model. Malformed or oversized payloads are rejected at the boundary.
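A sanitiser of this shape might look like the following sketch, reusing the 4,000-character cap from the rate-limiting section; the exact character classes stripped are an assumption:

```typescript
const MAX_LEN = 4000; // matches the per-message content cap above

// Strip ASCII control characters (keeping tab, newline, carriage return)
// and reject anything over the length cap rather than truncating it.
function sanitize(input: string): string | null {
  const cleaned = input.replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F\u007F]/g, "");
  return cleaned.length <= MAX_LEN ? cleaned : null; // null = reject at the boundary
}
```

Rejecting oversized input outright, instead of truncating, avoids a class of attacks where a payload is crafted so that truncation itself changes its meaning.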

Sender Authentication Gate

Only verified, paired senders can submit prompts via messaging channels. Anonymous or unverified users cannot interact with the AI — the attack surface for external prompt injection is closed by default.

HMAC-Verified Command Chain

Every message in the pipeline is HMAC-signed. An attacker cannot inject commands mid-chain without the signing secret, which is never exposed in the binary or transmitted in plaintext.

Scoped Tool Execution

AI tool calls are routed through a typed IPC protocol with a static allowlist. The AI cannot invoke arbitrary system commands — only pre-approved, explicitly allowlisted operations are available.
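The allowlist pattern is simple to illustrate: dispatch only through a closed table of handlers, so an unknown tool name fails before anything executes. The tool names and handlers below are hypothetical, not Yma Agent's real tool set:

```typescript
// Static allowlist: the AI can only name operations registered here at build time.
// Tool names and handlers are illustrative assumptions.
const toolAllowlist: Record<string, (args: unknown) => string> = {
  "status.get": () => "ok",
  "note.append": (args) => `appended: ${JSON.stringify(args)}`,
};

function invokeTool(name: string, args: unknown): string {
  const handler = toolAllowlist[name];
  if (!handler) throw new Error(`tool not allowlisted: ${name}`); // fail closed
  return handler(args);
}
```

Because the table is static, adding a capability is a code change that goes through review — the model cannot talk its way into a tool that was never registered.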

Session Isolation

Each conversation session maintains its own context with a rolling window cap (50 messages). Sessions auto-expire after inactivity (60 min voice, 30 min text). Expired session context is purged — no stale injection vectors persist.
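A sketch of that session policy, using the caps quoted above (50-message rolling window, 30-minute text expiry); the class shape is an assumption:

```typescript
// Per-session rolling context window with inactivity expiry.
class Session {
  private messages: string[] = [];
  private lastActive = Date.now();
  constructor(private maxMessages = 50, private ttlMs = 30 * 60_000) {} // text: 30 min

  add(msg: string, now = Date.now()): void {
    if (now - this.lastActive > this.ttlMs) this.messages = []; // purge stale context
    this.lastActive = now;
    this.messages.push(msg);
    if (this.messages.length > this.maxMessages) this.messages.shift(); // rolling cap
  }

  context(): readonly string[] { return this.messages; }
}
```

The rolling cap doubles as an injection mitigation: anything an attacker manages to plant in context ages out of the window within 50 messages instead of persisting indefinitely.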

Process Sandboxing

The renderer process runs with contextIsolation enabled and nodeIntegration disabled. The AI interface cannot access Node.js APIs, the filesystem, or system processes directly. All sensitive operations go through a typed context bridge with explicit permission boundaries.
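In Electron terms, the settings described here correspond roughly to the configuration sketch below. The preload path, the `agent` API, and the `tool:invoke` channel are hypothetical, and in a real app the window setup and the preload script live in separate files:

```typescript
import * as path from "node:path";
import { BrowserWindow, contextBridge, ipcRenderer } from "electron";

// Main process: lock down the renderer.
new BrowserWindow({
  webPreferences: {
    contextIsolation: true,                      // page JS cannot reach preload internals
    nodeIntegration: false,                      // no Node.js APIs in the renderer
    sandbox: true,
    preload: path.join(__dirname, "preload.js"), // hypothetical path to the bridge
  },
});

// Preload script (separate file in practice): expose only an explicit, typed surface.
contextBridge.exposeInMainWorld("agent", {
  // Channel name and API shape are illustrative assumptions.
  invokeTool: (name: string, args: unknown) => ipcRenderer.invoke("tool:invoke", name, args),
});
```

With this split, compromised page content can only call the functions deliberately exposed on the bridge — it never holds a reference to `ipcRenderer`, the filesystem, or any Node primitive.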

DATA SOVEREIGNTY & COMPLIANCE

Your data. Your machines. Your rules.

DESKTOP-NATIVE

Your data lives on your machines. AI processing uses your own API keys with the providers you choose. No intermediary cloud.

DATA SOVEREIGNTY

You own your data. No vendor lock-in. No data mining. No model training on your content. Full GDPR compliance with data deletion on request.

FULL AUDIT TRAIL

Every action, command, and data access is logged in structured session files. Complete traceability for compliance reviews and security audits.

OPEN ARCHITECTURE

Inspect every config file, every log, every data flow. Nothing is hidden. We walk through the full architecture in every client onboarding.

COMMON QUESTIONS FROM IT TEAMS

We've heard these before

Does any of our data leave the machine?

Only when you explicitly choose it. AI model calls go to your configured provider (OpenAI, etc.) using your own API keys. Inter-machine sync happens exclusively over your private Tailscale VPN. We have no cloud backend that touches your data.

What happens if the config server is compromised?

The config server only serves secrets to JWT-authenticated users on an explicit allowlist. Even in a breach scenario, requests are rate-limited (10 per user per hour), and rotating credentials on the server invalidates every cached copy at its next fetch.

Can the AI execute arbitrary commands on our machines?

No. Tool execution is routed through a typed IPC protocol with a static allowlist. The AI can only invoke pre-approved operations. The renderer process has no access to Node.js, the filesystem, or system processes directly.

How do you handle employee offboarding?

Revoke the user's Clerk account and remove them from the config server allowlist. Their cached secrets (DPAPI-encrypted) become useless once the config server stops issuing new tokens. Tailscale device removal cuts network access.

Is there an audit trail for compliance?

Yes. Every action, command, tool invocation, and data access is logged in structured session files with timestamps and user IDs. These logs are stored locally on the machine and can be forwarded to your SIEM or compliance system.

What about GDPR?

Data is stored locally on your machines — you are the data controller. Per-user isolation in Convex (for messaging pairings) uses JWT-validated queries. Data deletion requests can be fulfilled by clearing the local memory directory and deleting the Convex record.

Want the full security walkthrough?

We walk through the complete architecture in every client onboarding. Book a demo and bring your IT team.

BOOK A DEMO