
Security

Sanctum uses multiple layers of security:

  1. Network isolation — VM has no internet access (host-only networking)
  2. Firewall — pf rules on LAN interface block unauthorized port access
  3. SSH hardening — Key-only auth, no root login, AllowUsers restriction
  4. Encrypted secrets — SOPS+age on VM, macOS Keychain on Mac
  5. Automatic rotation — 8 secrets rotated monthly
  6. Service binding — Services bound to specific interfaces (not 0.0.0.0)
  7. PII anonymization — Personal data scrubbed from all external LLM requests
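Layer 6 (service binding) can be sketched in Python. The listener below is hypothetical and exists only to show the distinction: a socket bound to a specific address is unreachable from other networks, unlike one bound to 0.0.0.0.

```python
import socket

# Illustrative only: bind a listener to loopback instead of all interfaces.
# A socket bound to 0.0.0.0 accepts connections from any network; one bound
# to 127.0.0.1 (or a specific interface address such as 10.10.10.1) does not.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
sock.listen(1)
bound_host, bound_port = sock.getsockname()
print(f"listening on {bound_host}:{bound_port}")
sock.close()
```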

Rules in /etc/pf.anchors/sanctum block external access to internal ports on the LAN interface (en1):

Blocked ports include: gateway (18789), dashboard dev (3000, 3004), XTTS (8020), MLX (8899), Firewalla bridge (18094), and others.
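Given the interface and ports listed above, the anchor rules likely look something like the following. This is an illustrative sketch, not the actual contents of /etc/pf.anchors/sanctum:

```
# Hypothetical reconstruction of /etc/pf.anchors/sanctum
# Drop inbound TCP on the LAN interface to internal service ports
block in quick on en1 proto tcp from any to any port { 18789, 3000, 3004, 8020, 8899, 18094 }
```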

Rotation runs monthly via the com.sanctum.rotate-secrets LaunchAgent (1st of the month, 3:30am).

It rotates 8 secrets:

  1. Home Assistant main token
  2. Home Assistant Windu token
  3. Firewalla bridge token
  4. Gateway token
  5. Network control token
  6. Backup encryption key
  7. OpenRouter API key
  8. Deepgram API key

It also verifies the Cloudflare tunnel token.

# Manual rotation
bash ~/Backups/rotate-secrets.sh
# Dry run
bash ~/Backups/rotate-secrets.sh --dry-run
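The LaunchAgent schedule above could be expressed with a plist along these lines. This is a sketch assuming a standard StartCalendarInterval setup; the home-directory path is a placeholder and the real file may differ:

```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.sanctum.rotate-secrets</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/bash</string>
    <string>/Users/USERNAME/Backups/rotate-secrets.sh</string>
  </array>
  <key>StartCalendarInterval</key>
  <dict>
    <key>Day</key><integer>1</integer>
    <key>Hour</key><integer>3</integer>
    <key>Minute</key><integer>30</integer>
  </dict>
</dict>
</plist>
```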

Both Mac and VM have hardened SSH configs:

  • Key-only authentication (no passwords)
  • No root login
  • AllowUsers restriction
  • Post-quantum key exchange (on VM)

The Ubuntu VM runs on host-only networking (bridge100, 10.10.10.0/24). It can only reach the Mac at 10.10.10.1 — no direct internet access. All external communication goes through the Mac.

Location         Purpose
macOS Keychain   Runtime token access
1Password        Backup copies of all secrets
VM SOPS (age)    Encrypted secrets for VM services

Secrets are never stored in config files, git repos, or environment variables on disk.

All requests routed to external LLM providers (OpenRouter) pass through a Presidio-based anonymization layer before leaving the network. This ensures personal data never reaches third-party inference APIs.

The guardrail injector proxy (port 4000) sits in front of LiteLLM (port 4001). When a request targets an OpenRouter model — either directly or via fallback routing — the proxy:

  1. Extracts text from user and assistant messages (system prompts are left untouched)
  2. Sends the text to a local Presidio analyzer container to detect PII entities
  3. Sends detected entities to a local Presidio anonymizer container to replace them with placeholders
  4. Forwards the scrubbed request to the external provider
  5. De-anonymizes the response before returning it to the caller
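The scrub/restore round trip (steps 2-5) can be sketched in a few lines of Python. A single regex stands in for the Presidio analyzer here; the real pipeline calls the analyzer (port 5002) and anonymizer (port 5001) containers:

```python
import re

# Minimal sketch of the scrub/restore round trip. One phone-number regex
# stands in for the Presidio analyzer; only the placeholder bookkeeping
# is the point of this example.
PHONE = re.compile(r"\b\d{3}-\d{3}-\d{4}\b")

def scrub(text):
    """Replace detected entities with placeholders, keeping a reverse map."""
    mapping = {}
    def repl(match):
        placeholder = f"<PHONE_NUMBER_{len(mapping)}>"
        mapping[placeholder] = match.group(0)
        return placeholder
    return PHONE.sub(repl, text), mapping

def restore(text, mapping):
    """De-anonymize a provider response using the saved map (step 5)."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

scrubbed, mapping = scrub("Call me at 514-555-1234.")
print(scrubbed)                    # Call me at <PHONE_NUMBER_0>.
print(restore(scrubbed, mapping))  # Call me at 514-555-1234.
```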
Entity               Example                Replacement
Person names         John Smith             <PERSON>
Email addresses      john@example.com       <EMAIL_ADDRESS>
Phone numbers        514-555-1234           <PHONE_NUMBER>
Credit cards         4111-1111-1111-1111    <CREDIT_CARD>
SSN / bank numbers   123-45-6789            <US_SSN>

Only entities above 0.7 confidence score are anonymized. IP addresses, locations, and code identifiers are intentionally excluded to avoid breaking technical context.
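That filtering step can be sketched as follows. The record shape here is illustrative, not the exact analyzer output format, and the excluded-type set is an assumption based on the categories named above:

```python
# Sketch of the post-detection filter: keep only entities scoring above
# 0.7 and drop entity types that are intentionally excluded.
CONFIDENCE_THRESHOLD = 0.7
EXCLUDED_TYPES = {"IP_ADDRESS", "LOCATION"}  # assumed type names

def filter_entities(entities):
    return [
        e for e in entities
        if e["score"] > CONFIDENCE_THRESHOLD and e["entity_type"] not in EXCLUDED_TYPES
    ]

detected = [
    {"entity_type": "PERSON", "score": 0.85},
    {"entity_type": "PHONE_NUMBER", "score": 0.40},  # below threshold: left as-is
    {"entity_type": "IP_ADDRESS", "score": 0.95},    # excluded type: left as-is
]
print(filter_entities(detected))  # only the PERSON entity survives the filter
```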

Not all traffic passes through the scrubbing layer:

  • Anthropic (Claude) — direct API, no PII scrubbing (trusted provider, privacy policy reviewed)
  • Local models (LM Studio, Council-27B MLX) — never leave the network
  • Gemini — routed via Google AI Studio API (separate trust decision)
  • System prompts — contain instructions, not personal data
Client → Guardrail Injector (port 4000) → LiteLLM (port 4001) → Provider
               |                                                   |
               +-- Presidio Analyzer (Docker, port 5002)           |
               +-- Presidio Anonymizer (Docker, port 5001)         |
               |                                                   |
               +-- PII scrubbed before exit ◄──────────────────────+
               +-- PII restored on response

The analyzer and anonymizer run as local Docker containers, bound to localhost only:

# Check status
docker ps --filter name=presidio
# Restart if needed
docker restart presidio-analyzer presidio-anonymizer