Research · January 5, 2026

Event Horizon: Designing Safety Guardrails for Foundation Models

Our approach to AI safety goes beyond content filtering. Introducing Event Horizon, a verification layer that ensures every output meets compliance and safety standards.

Safety as a boundary, not a filter

Most AI safety systems work like content filters: they scan the model’s output for problematic patterns and block or modify responses that match. This approach is reactive, brittle, and easy to circumvent. It’s the equivalent of putting a censorship layer on top of an unconstrained system.

Event Horizon takes a different approach. Named after the boundary of a black hole, the point beyond which nothing can escape, it functions as a structural boundary between model outputs and users. Nothing passes through Event Horizon without meeting defined standards for accuracy, safety, and compliance.

How Event Horizon works

Event Horizon operates as a multi-layered verification boundary:

Source verification

Before any output reaches the user, Event Horizon verifies that claims are grounded in authoritative source material. Outputs that cannot be substantiated are flagged, modified, or blocked.
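
The post doesn’t specify the grounding mechanism, so what follows is a minimal sketch under stated assumptions: claims have already been extracted from the model’s output, the authoritative material is a small in-memory corpus, and a token-overlap score stands in for a real retrieval-plus-entailment check. Every name and threshold here (verify_output, ClaimCheck, 0.8/0.4) is illustrative, not Event Horizon’s actual interface.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Verdict(Enum):
    PASS = "pass"    # grounded in an authoritative source
    FLAG = "flag"    # partially supported; released only with labeling
    BLOCK = "block"  # unsubstantiated; withheld

@dataclass
class ClaimCheck:
    claim: str
    verdict: Verdict
    source: Optional[str]  # corpus document that grounds the claim, if any

def verify_output(claims: list[str], corpus: dict[str, str]) -> list[ClaimCheck]:
    """Score each extracted claim against an authoritative corpus.

    The token-overlap heuristic below is a placeholder: a real grounding
    check would use retrieval plus an entailment model. The thresholds
    are illustrative, not Event Horizon's actual values.
    """
    results = []
    for claim in claims:
        claim_words = {w.lower().strip(".,") for w in claim.split() if len(w) > 3}
        best_source, best_score = None, 0.0
        for doc_id, text in corpus.items():
            doc_words = {w.lower().strip(".,") for w in text.split()}
            score = len(claim_words & doc_words) / max(len(claim_words), 1)
            if score > best_score:
                best_source, best_score = doc_id, score
        if best_score >= 0.8:
            results.append(ClaimCheck(claim, Verdict.PASS, best_source))
        elif best_score >= 0.4:
            results.append(ClaimCheck(claim, Verdict.FLAG, best_source))
        else:
            results.append(ClaimCheck(claim, Verdict.BLOCK, None))
    return results
```

In this sketch, a flagged claim maps to the “clear labeling” outcome described later in the post, while a blocked claim never leaves the boundary at all.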

Compliance checking

For outputs in regulated domains, Event Horizon evaluates compliance with relevant frameworks: GDPR data handling requirements, EU AI Act transparency obligations, sector-specific regulations. This goes beyond static rule checks into contextual compliance evaluation.
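
As a hedged illustration of what contextual (rather than purely static) checking could look like, the sketch below passes the request context into each rule, so a finding depends on facts about the request, not just patterns in the text. The two rules are deliberate caricatures of GDPR and EU AI Act obligations; the real frameworks are far richer, and all names here are hypothetical.

```python
import re
from typing import Callable, Optional

# A compliance rule inspects an output plus its request context and
# returns None when compliant, or a human-readable finding when not.
ComplianceRule = Callable[[str, dict], Optional[str]]

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def gdpr_personal_data_rule(output: str, context: dict) -> Optional[str]:
    """Caricature of a GDPR check: personal data (here, just email
    addresses) may only appear if the context records a lawful basis."""
    if EMAIL.search(output) and not context.get("lawful_basis"):
        return "personal data emitted without a recorded lawful basis"
    return None

def ai_act_disclosure_rule(output: str, context: dict) -> Optional[str]:
    """Caricature of an EU AI Act transparency check: high-risk outputs
    must carry an AI-generation disclosure."""
    if context.get("risk_tier") == "high" and not context.get("disclosure_attached"):
        return "high-risk output lacks an AI-generation disclosure"
    return None

RULEBOOK: dict[str, list[ComplianceRule]] = {
    "GDPR": [gdpr_personal_data_rule],
    "EU_AI_ACT": [ai_act_disclosure_rule],
}

def check_compliance(output: str, context: dict) -> list[str]:
    """Run every rule for each framework the context declares applicable."""
    findings = []
    for framework in context.get("frameworks", []):
        for rule in RULEBOOK.get(framework, []):
            if (finding := rule(output, context)) is not None:
                findings.append(f"{framework}: {finding}")
    return findings
```

The same output can pass in one context and fail in another, which is the point of contextual evaluation: the verdict is a function of the output and the circumstances of the request together.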

Safety boundaries

Event Horizon enforces hard boundaries that no model output can cross. These include factual accuracy thresholds, domain-specific safety constraints (e.g., medical contraindications), and ethical guidelines aligned with European values.
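
One plausible reading of “hard boundaries” is a fixed set of predicates that must all hold before release, with no runtime switch to disable any of them. The sketch below assumes that reading; the specific boundaries, thresholds, and names are invented for illustration.

```python
from typing import Callable

# A hard boundary is a predicate over (output, context) that must hold
# for release. The tuple is immutable and fixed at build time: the
# sketch deliberately exposes no API for disabling a boundary at runtime.
Boundary = Callable[[str, dict], bool]

def accuracy_floor(output: str, context: dict) -> bool:
    """Illustrative factual-accuracy threshold: the grounding score
    attached by the verification layer must clear a fixed floor."""
    return context.get("grounding_score", 0.0) >= 0.9

def no_contraindicated_drugs(output: str, context: dict) -> bool:
    """Illustrative medical constraint: no drug on the patient's
    contraindication list may appear in the output."""
    return not any(
        drug.lower() in output.lower()
        for drug in context.get("contraindications", [])
    )

HARD_BOUNDARIES: tuple[Boundary, ...] = (accuracy_floor, no_contraindicated_drugs)

def within_horizon(output: str, context: dict) -> bool:
    """An output is released only if every hard boundary holds."""
    return all(boundary(output, context) for boundary in HARD_BOUNDARIES)
```

The design point is that within_horizon is conjunctive: adding boundaries can only make the gate stricter, and there is no code path that bypasses it.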

Audit logging

Every decision Event Horizon makes is logged with cryptographic integrity protection, so records cannot be silently altered after the fact. Auditors can review not just what the AI said, but what was evaluated and why.
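
The post doesn’t say which integrity mechanism is used; a hash chain is one standard way to make a log tamper-evident, so this sketch assumes that. Each record commits to its predecessor’s hash, so altering any past entry invalidates every hash after it. A deployed system would presumably add signing and append-only storage; those details, and all names here, are illustrative.

```python
import hashlib
import json
import time

def append_audit_record(log: list[dict], decision: dict) -> dict:
    """Append a tamper-evident audit record to an in-memory log.

    Each record commits to the previous record's hash, forming a
    simple hash chain.
    """
    prev_hash = log[-1]["record_hash"] if log else "0" * 64
    body = {
        "timestamp": time.time(),
        "decision": decision,  # what was evaluated, the verdict, and why
        "prev_hash": prev_hash,
    }
    serialized = json.dumps(body, sort_keys=True).encode()
    body["record_hash"] = hashlib.sha256(serialized).hexdigest()
    log.append(body)
    return body

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash and link to confirm no record was altered."""
    prev_hash = "0" * 64
    for record in log:
        body = {k: v for k, v in record.items() if k != "record_hash"}
        if body["prev_hash"] != prev_hash:
            return False
        serialized = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(serialized).hexdigest() != record["record_hash"]:
            return False
        prev_hash = record["record_hash"]
    return True
```

With this structure, an auditor who trusts only the latest hash can detect any retroactive edit by re-running verify_chain over the full log.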

Why this matters

The EU AI Act introduces specific requirements for high-risk AI systems around transparency, human oversight, and robustness. Meeting these requirements with after-the-fact filtering is difficult. Meeting them with an architecturally embedded verification layer is natural.

Event Horizon allows organizations to deploy AI with confidence that:

  • No unverified claim reaches an end user without clear labeling
  • Compliance with relevant regulatory frameworks is continuously checked
  • Safety boundaries are structurally enforced, not policy-dependent
  • Every interaction is audit-ready by default

The boundary that protects

In astrophysics, the event horizon is a boundary of no return: beyond it, escape is impossible. Event Horizon serves a similar purpose in AI: it marks the boundary beyond which unsafe, unverified, or non-compliant outputs cannot pass.

As we develop and refine Event Horizon, we’ll share more about specific techniques and evaluation results. Our goal is to set a new standard for what AI safety means in practice: not as a marketing claim, but as a structural property of the system.