In Development · Early Access Open

Stop PII from ever reaching your LLM

PrivacyLayer sits between your applications and the network. PII is intercepted before any packet leaves your machine. The AI still gets a useful query. You never get a breach.

without privacylayer
John Smith · SSN 234-56-7890 · BCBS policy #99A2
↓ sent directly to LLM
⚠ PII in model context · GDPR risk · data retained by provider
with privacylayer
John Smith · SSN 234-56-7890 · BCBS policy #99A2
↓ intercepted at kernel / extension
Michael Torres · SSN 891-23-4567 · BCBS policy #44C7
↓ obfuscated query sent to LLM
✓ model responds · PII never leaves your machine
↓ response reconstructed
✓ clean answer, real names restored for your app
// the problem

Most AI deployments are one prompt away from a breach

Every time someone pastes a customer record into ChatGPT, that data hits a third-party server. You can't policy your way out of it — you need a technical control.

⚠️ Without PrivacyLayer

  • PII lands in LLM provider training data or logs
  • GDPR breach risk on every customer-data prompt
  • HIPAA exposure for any healthcare workflow
  • No visibility into what sensitive data was shared
  • Data retention policies of AI providers are out of your control
  • AUP violations if employees use consumer AI tools for work

🛡 With PrivacyLayer

  • PII is intercepted before any network call leaves your machine
  • Obfuscated data is semantically equivalent — responses are still useful
  • Works with every LLM, every app, every browser tab
  • Full audit log of what was obfuscated and when
  • Context is reconstructed cleanly after the model responds
  • Deploy system-wide (kernel) or per-tab (extension)

Obfuscation, not redaction.
Context preserved.

Redaction destroys the context the AI needs. PrivacyLayer replaces PII with realistic, coherent substitutes — so the model gets a query it can actually answer.
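The difference is easy to see in a tiny sketch. This is illustrative only — the `redact` and `obfuscate` helpers below are hypothetical, not the PrivacyLayer API:

```python
import re

PROMPT = "What is John Smith's balance on BCBS policy #99A2?"

def redact(text: str) -> str:
    # Blunt redaction: strips the entities, and the context with them.
    text = re.sub(r"John Smith", "[REDACTED]", text)
    return re.sub(r"#99A2", "[REDACTED]", text)

def obfuscate(text: str) -> str:
    # Realistic substitution: same structure, different (fake) values.
    text = text.replace("John Smith", "Michael Torres")
    return text.replace("#99A2", "#44C7")

print(redact(PROMPT))     # the model no longer knows who or what is being asked about
print(obfuscate(PROMPT))  # the model still gets a coherent, answerable query
```

The redacted query gives the model nothing to reason about; the obfuscated one is structurally identical to the original, so the answer comes back useful.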

1

Intercept at the transport layer

PrivacyLayer sits between your apps and the network — kernel module for system-wide coverage, browser extension for per-tab. Every LLM API call is intercepted before the packet leaves your machine.

2

Detect and classify PII

A local, on-device detection engine identifies PII entities: names, emails, phone numbers, SSNs, account numbers, addresses, medical identifiers, and custom entity types you define.
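To make the detection step concrete, here is a deliberately simplified sketch. Real detection engines combine patterns, checksums, and NER models; this version is regex-only, and the pattern set and `detect` function are hypothetical, not the PrivacyLayer engine:

```python
import re

# Simplified pattern table: each entity type maps to a recognizer.
PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def detect(text: str) -> list[tuple[str, str, int, int]]:
    """Return (entity_type, surface_form, start, end) for each hit,
    ordered by position so later stages can substitute in place."""
    hits = []
    for label, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((label, m.group(), m.start(), m.end()))
    return sorted(hits, key=lambda h: h[2])

hits = detect("Reach John at john@corp.com, SSN 234-56-7890.")
```

Custom entity types would slot into the same table: one more label, one more recognizer.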

3

Obfuscate with context-aware substitutes

Each PII entity is replaced with a realistic, contextually consistent substitute. Names stay culturally plausible, numbers stay structurally valid. The model sees a coherent query.
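The key property of this step is consistency: the same original entity must always map to the same substitute within a session, or the query stops being coherent. A minimal sketch of that idea (the substitute pool and `obfuscate` function are hypothetical, not the PrivacyLayer implementation):

```python
# Hypothetical substitute pool, keyed by entity type.
SUBSTITUTES = {"PERSON": ["Michael Torres", "Dana Whitfield"]}

def obfuscate(text, entities, mapping=None):
    """entities: list of (entity_type, surface_form) pairs.
    Returns the obfuscated text plus the original->substitute mapping,
    which is kept locally so the response can be reconstructed later."""
    mapping = dict(mapping or {})
    for etype, surface in entities:
        if surface not in mapping:
            pool = SUBSTITUTES[etype]
            # Stable assignment: the same surface form always gets
            # the same substitute for the rest of the session.
            mapping[surface] = pool[len(mapping) % len(pool)]
        text = text.replace(surface, mapping[surface])
    return text, mapping

out, m = obfuscate("Email John Smith about John Smith's claim.",
                   [("PERSON", "John Smith")])
```

Both mentions of "John Smith" become the same substitute, so the model sees one consistent person, not two.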

4

Reconstruct the response

When the model responds, PrivacyLayer maps obfuscated entities back to their originals. Your app receives a clean, accurate answer — with real data where relevant.
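Reconstruction is the obfuscation mapping run in reverse. A minimal sketch, assuming a locally held original-to-substitute mapping (illustrative only, not the PrivacyLayer implementation):

```python
def reconstruct(response: str, mapping: dict[str, str]) -> str:
    """Map substitutes in the model's response back to the originals.
    `mapping` is original -> substitute, as built at obfuscation time.
    Longer substitutes are replaced first to avoid partial overlaps."""
    reverse = {sub: orig for orig, sub in mapping.items()}
    for sub in sorted(reverse, key=len, reverse=True):
        response = response.replace(sub, reverse[sub])
    return response

mapping = {"John Smith": "Michael Torres", "99A2": "44C7"}
restored = reconstruct("Michael Torres's policy 44C7 is active.", mapping)
```

Because the mapping never leaves the machine, only the local reconstruction step ever sees the real values.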

// deployment

Two modes. One decision.

Choose based on how you use AI — both modes provide full protection. You can run both simultaneously.

🖥
Kernel Module
System-wide · OS-level protection

Runs at the OS level, intercepting all outbound LLM API traffic regardless of which app makes the call. No app can bypass it. Ideal for orgs where employees use a mix of AI tools — web apps, desktop clients, CLIs, IDE extensions.

  • Covers every app and browser on the machine
  • No per-app configuration required
  • Works even if users switch to new AI tools
  • Centrally managed via policy file
🧩
Browser Extension
Per-tab · Web app contexts

Intercepts LLM API calls made by web apps in real time. Ideal for teams using ChatGPT, Claude.ai, Perplexity, or your own internal AI apps — per-tab or per-site control without OS-level access.

  • Chrome and Firefox support
  • Per-site allow/block configuration
  • No OS permissions required
  • Works in BYOD and contractor environments

Built for regulated environments

PrivacyLayer is a technical control, not a policy. It enforces data protection at the transport layer — before sensitive data can leave your machine.

GDPR
Data minimization & purpose limitation
HIPAA
PHI de-identification
SOC 2
Access control & logging
CCPA
Consumer data rights
ISO 27001
Information security

PrivacyLayer is a technical control, not a legal certification. Review your specific obligations with qualified legal counsel.

// faq

Common questions

What types of PII does PrivacyLayer detect?

PrivacyLayer detects a broad set of PII entity types out of the box: full names, email addresses, phone numbers, social security numbers, passport numbers, driver's license numbers, dates of birth, physical addresses, credit card numbers, bank account numbers, IP addresses, medical record identifiers, and more. You can also define custom entity types for domain-specific identifiers.

Where does PII detection run?

Entirely locally. The PII detection and obfuscation engine runs on-device — the exact data you're trying to protect never leaves your machine in its original form. Nothing passes through Daedalus Dynamics servers. This is not a proxy service.

Does obfuscation degrade response quality?

Response quality is preserved. Because PrivacyLayer uses contextually realistic obfuscation rather than blunt redaction, the model receives a coherent, meaningful query. If you ask about "John Smith's account balance," the model sees "Michael Torres's account balance" — structurally identical, semantically equivalent, and completely de-identified.

Does PrivacyLayer work with self-hosted LLMs?

Yes, though if you're running a fully self-hosted LLM with no external network calls, you have inherently stronger data isolation. PrivacyLayer adds an additional layer of protection and audit logging even in self-hosted scenarios — useful for internal compliance and zero-trust architectures.

What is the latency overhead?

PrivacyLayer is designed for minimal latency impact. The detection and obfuscation pipeline runs in microseconds for typical prompts. Because LLM API calls have latency measured in hundreds of milliseconds to seconds, the overhead added by PrivacyLayer is imperceptible to end users.

Is PrivacyLayer available today?

PrivacyLayer is in active development. We are currently accepting early access applications from select organizations that want to help shape the product. Email hello@daedalusdynamics.com with a brief description of your use case and we'll be in touch.
Early Access Now Open

Get early access to PrivacyLayer

We're working with a small group of early adopters to shape the product. Tell us your use case — we'll follow up within 48 hours.

Request Early Access · Talk to the team