Security
Maestro v1 is a self-hosted product. The security posture follows from that: the box is yours, the data is yours, the keys are yours. There’s no Maestro-managed cloud holding any of it. This page documents what protects what, what the operator is responsible for, and what’s planned for the eventual managed cloud.
Threat model in plain language
For a v1 self-host install, the realistic adversaries are:
| Adversary | Realistic? | Defense |
|---|---|---|
| Random internet scanner probing your domain | Yes — happens to every public host | Cloudflare Access blocks unauthenticated requests at the edge before they reach your box |
| Someone who guesses your domain and tries to log in | Yes | Cloudflare Access requires a valid email-PIN or SSO; brute-force is rate-limited by Cloudflare |
| Someone with read access to a leaked Postgres dump | Plausible (cloud snapshot leaks, lost laptop) | Secrets in the DB are AES-256-GCM encrypted; the master key is in .env, not in the dump |
| Someone who steals the box itself | Plausible | Secrets are encrypted but the master key is on the same box. Full-disk encryption (BitLocker on Windows, LUKS on Linux) closes this gap. |
| Malicious skill code | Not realistic in v1 | Skills are first-party Python in your repo. You control the catalog. Third-party skills from a marketplace are a v2+ concern. |
| Compromised Anthropic, Apollo, Gmail | Out of scope | If your upstream APIs are compromised, no architecture protects you. Use 2FA on those accounts. |
What’s not in this list: nation-state attackers, sophisticated APT campaigns, side-channel attacks on AES-256-GCM. If those are in your threat model, Maestro v1 self-host is not the right product for you yet.
What’s protected, layer by layer
Network ingress
Cloudflare Tunnel runs on your box and connects outbound to Cloudflare’s edge. No inbound ports on your residential connection. There is no 0.0.0.0:443 listener — there’s nothing for an internet scanner to find.
Cloudflare Access sits in front of the tunnel. Every request to app.yourdomain.com requires a valid Access JWT (issued after email-PIN or SSO login). Maestro itself does not authenticate requests — Access is the perimeter. If Access were ever bypassed (there's no known way, but in theory), the API would happily serve any caller. This is intentional: layered defense isn't worth the complexity for a single-tenant install where Access is reliable.
Local network exposure is minimized:
- The API container binds to `127.0.0.1:3001` only — never `0.0.0.0`. Nothing on your LAN can reach it directly.
- Postgres is on the internal Docker network only. Even on the host, you can't reach it without an explicit `docker exec`.
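For reference, a compose fragment that produces this exposure pattern might look like the following (illustrative only — service names and images are assumptions, not Maestro's actual compose file):

```yaml
services:
  api:
    ports:
      - "127.0.0.1:3001:3001"   # loopback only — a bare "3001:3001" would bind 0.0.0.0
  postgres:
    image: postgres:16
    # no "ports:" mapping — reachable only from containers on the compose network
```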
Secrets at rest
Every secret you add via Maestro’s Secrets UI — Apollo API keys, Gmail OAuth tokens, Anthropic keys — is encrypted with AES-256-GCM before it lands in the database.
| Table | Columns |
|---|---|
| `secrets` | id, workspace_id, name, kind, description, created_at |
| `secret_versions` | id, secret_id, version, ciphertext, nonce, created_at |
Each version of a secret has its own random 12-byte nonce. The 16-byte GCM authentication tag is appended to the ciphertext, so any tampering is detected at decryption time.
The encryption key never lives in the database. It’s read from MAESTRO_SECRET_KEY (base64-encoded 32 bytes) in your environment at process start.
This split — values in DB, key in env — means a compromised database backup is not a compromised secrets vault. The attacker would need both the DB dump and the master key.
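The scheme above can be sketched with Node's built-in crypto module (a minimal sketch assuming Node 18+; the function names are illustrative, not Maestro's actual code):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Assumes MAESTRO_SECRET_KEY is a base64-encoded 32-byte key, as described above.
// A random fallback is used here only so the sketch runs standalone.
const key = Buffer.from(
  process.env.MAESTRO_SECRET_KEY ?? randomBytes(32).toString("base64"),
  "base64",
);

function encryptSecret(plaintext: string): { ciphertext: Buffer; nonce: Buffer } {
  const nonce = randomBytes(12); // fresh random 12-byte nonce per secret version
  const cipher = createCipheriv("aes-256-gcm", key, nonce);
  const body = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // The 16-byte GCM auth tag is appended to the ciphertext, matching the schema
  return { ciphertext: Buffer.concat([body, cipher.getAuthTag()]), nonce };
}

function decryptSecret(ciphertext: Buffer, nonce: Buffer): string {
  const tag = ciphertext.subarray(ciphertext.length - 16);
  const body = ciphertext.subarray(0, ciphertext.length - 16);
  const decipher = createDecipheriv("aes-256-gcm", key, nonce);
  decipher.setAuthTag(tag); // any tampering makes final() throw
  return Buffer.concat([decipher.update(body), decipher.final()]).toString("utf8");
}
```

Because GCM is authenticated, a flipped bit anywhere in the ciphertext or tag causes decryption to throw rather than return corrupted plaintext.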
Secrets in transit
Inside the Docker Compose network, Maestro talks to Postgres over the internal Docker bridge — not over the public network, not over the host network. Plain TCP is acceptable here because the traffic never leaves the box.
Outbound calls (Anthropic, Gmail, Apollo, Tavily) all go over HTTPS. The skill SDK uses httpx with default cert verification.
Browser to Cloudflare: TLS terminated at Cloudflare’s edge with their certificate. Cloudflare to your box: TLS over the tunnel; the tunnel is Cloudflare’s authenticated transport.
Skill code execution
Skills are Python packages under `skills/catalog/`, shipped with the Maestro repo. You can read every line; nothing is loaded from a remote registry at runtime. Adding a new skill means dropping in a directory and restarting the runtime.
The runtime container has access to:
- Read/write the Maestro Postgres database (via DATABASE_URL).
- The encrypted secrets vault (skill-by-skill, only what each skill’s manifest declares).
- The internet (for outbound API calls).
It does not have access to:
- The host filesystem outside the container.
- Other containers’ file systems.
- The TLS private key for your domain (it lives at Cloudflare).
OAuth tokens
Gmail OAuth bundles are stored as a single secret with kind = "oauth2" and a JSON payload:
{
"access_token": "ya29.a0...",
"refresh_token": "1//0...",
"expires_at": 1777995923000,
"scopes": ["gmail.readonly", "gmail.send", "gmail.labels", "gmail.modify"],
"account_email": "you@example.com"
}
The whole bundle is encrypted as one ciphertext. Refresh-on-401 happens transparently inside the OAuth client; the rotated bundle is written back as a new secret_versions row.
Refresh tokens never appear in logs. The skill detail UI shows “Connected as user@example.com” with no token material.
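The refresh-on-401 flow can be sketched like this (hypothetical names and signatures — the bundle shape matches the JSON above, but this is not Maestro's internal API):

```typescript
// Illustrative sketch of transparent refresh-on-401.
interface OAuthBundle {
  access_token: string;
  refresh_token: string;
  expires_at: number; // epoch millis
}

async function withFreshToken<T>(
  bundle: OAuthBundle,
  refresh: (b: OAuthBundle) => Promise<OAuthBundle>, // exchange refresh_token upstream
  persist: (b: OAuthBundle) => Promise<void>,        // encrypt + write a new secret_versions row
  call: (accessToken: string) => Promise<T>,         // the actual Gmail request
): Promise<T> {
  try {
    return await call(bundle.access_token);
  } catch (err) {
    if ((err as { status?: number }).status !== 401) throw err;
    const rotated = await refresh(bundle); // one transparent refresh
    await persist(rotated);                // rotated bundle becomes the new version
    return call(rotated.access_token);     // retry once with the fresh token
  }
}
```

The single retry matters: if the refreshed token also fails, the error propagates instead of looping.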
Error responses
In production (NODE_ENV=production), the API sanitizes error responses:
- The top-level handler returns a generic `{"error": "Internal error"}` for unhandled exceptions.
- The send-draft endpoint returns a generic Gmail-failure message; the upstream Gmail error body is logged server-side only.
- Secret decrypt failures return a generic message that doesn’t vary based on crypto state (no oracle for an attacker probing).
Stack traces and detailed exception messages stay in container logs (docker logs maestro-api).
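The sanitization rule reduces to a small helper like this (hypothetical — Maestro's real handler is framework-specific and not shown here):

```typescript
// Full detail goes to the server log; production callers only see a generic body.
function sanitizeError(
  err: Error,
  production: boolean,
): { status: number; body: { error: string } } {
  console.error(err.stack ?? err.message); // detail stays in `docker logs maestro-api`
  return production
    ? { status: 500, body: { error: "Internal error" } } // no oracle for an attacker
    : { status: 500, body: { error: err.message } };     // dev builds may expose detail
}
```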
What the operator is responsible for
A few things Maestro can’t protect for you:
MAESTRO_SECRET_KEY hygiene
This 32-byte master key is the lynchpin. Lose it and every encrypted secret in the database is unrecoverable. Treat it like a long-term identity:
- Generate a unique key per environment. Dev’s key should differ from production’s. A dev DB backup should never decrypt against the prod key.
- Back it up separately from your DB backups. A password manager or a sealed envelope works. Keeping it in the same physical and logical storage as the DB backups defeats the encryption.
- Set `.env` permissions to 0600 (Linux) or appropriate ACLs (Windows). Don't leave it world-readable.
- Don't commit `.env` to git. It's in `.gitignore`; verify with `git check-ignore .env`.
- Don't log it. Maestro never logs the key, but if you're debugging your own configuration, don't paste `.env` contents into a public issue tracker.
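Minting a well-formed key is a one-liner; this sketch assumes any cryptographically random 32 bytes, base64-encoded, is an acceptable value:

```typescript
import { randomBytes } from "node:crypto";

// Generate a fresh MAESTRO_SECRET_KEY (one unique key per environment).
const key = randomBytes(32).toString("base64");
console.log(`MAESTRO_SECRET_KEY=${key}`);

// Sanity check before deploying: decoding must yield exactly 32 bytes.
if (Buffer.from(key, "base64").length !== 32) {
  throw new Error("MAESTRO_SECRET_KEY must decode to 32 bytes");
}
```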
.env file in general
Same rules as above. Other things in .env: database password, Anthropic key, Postgres password. All sensitive.
Cloudflare Access policy
Access is the auth perimeter. Misconfiguring it either lets the wrong people in or locks the right people out. Specifically:
- Don't leave the policy on `Allow → Everyone`. The default is restrictive; verify before assuming.
- Use a real identity provider for SSO if your team is more than 3 people. Email-PIN is fine for the closed beta; Google/Microsoft SSO scales better for production.
- Audit the access logs periodically in the Cloudflare dashboard to see who’s been logging in.
Backup hygiene
Postgres dumps contain encrypted secrets, contact data, run history, and activities. Treat them as sensitive even though the secrets themselves are encrypted:
- Encrypt backups at rest. Even though secrets in the dump are encrypted, contact data and email content are in plaintext.
- Limit retention. Old backups are old liabilities.
- Test restore quarterly. A backup you can’t restore is no backup.
Operating system hygiene
The Maestro container is only as secure as the host OS:
- Keep Windows 11 Pro updated so security patches land promptly.
- Run with a non-admin user for daily use. The Docker daemon needs admin rights but interactive logon doesn’t.
- Enable BitLocker on the system drive. Closes the “stolen laptop” gap noted in the threat model.
- Audit Cloudflared. It runs as a Windows service. Verify it’s the official binary from cloudflare.com (or your package manager).
What’s planned for v2
When Maestro Cloud ships, the security model changes substantively:
- First-party authentication — `users` and `sessions` tables in the schema. No more relying on Cloudflare Access exclusively.
- Multi-tenant data isolation — schema-level or row-level segregation between customer workspaces.
- Audit logging — explicit log of who did what when. Required for SOC 2.
- Rate limiting — per-route, per-user, per-org.
- Penetration testing — a real third-party audit before Cloud GA.
- SOC 2 Type 2 — when customers ask for it.
- Hardware-backed key management — AWS KMS or equivalent for the master key, replacing the env-var pattern.
- Secret rotation tooling — re-encrypt every version under a new master key without downtime.
- DKIM signing for outbound mail — proper email auth for the marketing site’s transactional sends.
None of these are in v1 because v1 is a single-tenant self-host product. They become relevant when Maestro hosts data for someone other than the operator.
Reporting a security issue
If you find a vulnerability — bypass of Cloudflare Access, secret leak, SQL injection, anything that puts an operator’s data at risk — email nick@letmaestro.com with [security] in the subject. Don’t post it as a public GitHub issue first.
For a closed-beta install (single founder, three design partners), responses come within 24h. Once Maestro Cloud ships, this becomes a real responsible-disclosure process.
Related
- Secrets — deep-dive on how the encrypted vault works.
- Deploy the Maestro app — operational walkthrough.
- Skills overview — how skill code interacts with secrets.