Policy · February 10, 2026

The Case for Data Sovereignty in AI

The CLOUD Act, FISA Section 702, and the hidden risks of building critical systems on infrastructure you don’t control. Why data sovereignty matters.

The invisible dependency

Today, most organizations using AI depend on infrastructure controlled by a handful of providers. The models, the compute, the data pipelines: all flow through systems subject to foreign jurisdictions. For many applications, this is acceptable. For critical infrastructure, government operations, and regulated industries, it is not.

The issue isn’t capability. Current models are remarkably powerful. The issue is legal exposure and control.

The CLOUD Act and FISA Section 702

The CLOUD Act (2018) grants US law enforcement the authority to compel companies subject to US jurisdiction to disclose data in their possession, custody, or control, regardless of where it is stored, including in non-US data centers. FISA Section 702 authorizes US intelligence agencies to collect the communications of non-US persons that pass through US service providers, without individualized warrants.

For organizations outside the US, this means:

  • Patient data processed through a US AI provider can be legally accessed by US authorities without local oversight
  • Trade secrets embedded in AI prompts or fine-tuning data may be subject to compelled disclosure
  • Classified government communications processed by US AI services are potentially exposed
  • Intellectual property used to train custom models may not remain confidential

The standard response, “we use local data centers,” does not resolve this. The CLOUD Act applies to the company, not to the server location.

Beyond compliance: strategic autonomy

Data sovereignty isn’t just about legal risk; it’s about strategic autonomy. The ability to operate AI systems independently of providers that may change terms, restrict access, or become subject to export controls is increasingly a matter of economic security.

Consider a scenario where geopolitical tensions lead to restricted AI model access for certain industries. If your manufacturing processes, healthcare diagnostics, or financial risk models depend entirely on a single foreign AI provider, you have a single point of failure that no compliance framework can mitigate.
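
To make the failure mode concrete, here is a minimal sketch in Python, with entirely hypothetical provider classes, of the thin abstraction layer that keeps model access swappable. Without such a seam, every call site binds directly to one provider’s API, and the single point of failure is hard-coded.

    class CompletionProvider:
        """Minimal interface any text-completion backend must satisfy."""
        def complete(self, prompt: str) -> str:
            raise NotImplementedError

    class ExternalProvider(CompletionProvider):
        """Stand-in for a third-party hosted model (hypothetical)."""
        def complete(self, prompt: str) -> str:
            # Simulate access being restricted overnight for your industry.
            raise ConnectionError("access restricted for this industry")

    class SovereignProvider(CompletionProvider):
        """Stand-in for a model run on infrastructure you control (hypothetical)."""
        def complete(self, prompt: str) -> str:
            return f"[local model] response to: {prompt}"

    def complete_with_failover(prompt: str, providers: list[CompletionProvider]) -> str:
        """Try backends in order of preference; each one is replaceable."""
        for provider in providers:
            try:
                return provider.complete(prompt)
            except ConnectionError:
                continue  # this backend is unavailable or restricted; try the next
        raise RuntimeError("no provider available")

    print(complete_with_failover("Summarize the maintenance log.",
                                 [ExternalProvider(), SovereignProvider()]))

The point of the sketch is the seam, not the failover logic: if the sovereign backend does not exist, there is nothing to fall back to.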

What true data sovereignty requires

True AI sovereignty requires more than hosting a third-party model on your own servers. It requires:

  • Independent AI capabilities: not wrappers around someone else’s technology, but systems built under your jurisdiction
  • Controlled infrastructure: compute and data pipelines owned and operated under your legal framework
  • Regulatory-native design: compliance built into the system, not bolted on afterwards (a minimal sketch follows this list)
  • Transparent governance: clarity on system behavior and decision-making processes
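
As an illustration of the third point, here is a minimal sketch, again in Python and again with hypothetical names (Payload, enforce_egress_policy, the RESTRICTED set), of compliance enforced at the moment of egress rather than audited after the fact. A regulatory-native system runs a check like this on every outbound request.

    from dataclasses import dataclass

    # Hypothetical classification labels; real schemes (GDPR special
    # categories, national classification levels) are richer than this.
    RESTRICTED = {"patient_record", "classified", "trade_secret"}

    @dataclass
    class Payload:
        classification: str
        content: str

    class EgressPolicyError(Exception):
        """Raised when data may not leave controlled infrastructure."""

    def enforce_egress_policy(payload: Payload, destination: str) -> Payload:
        """Gate every outbound request: restricted data never leaves
        infrastructure operating under your own legal framework."""
        if payload.classification in RESTRICTED and destination != "domestic":
            raise EgressPolicyError(
                f"{payload.classification!r} data may not be sent to "
                f"{destination!r} infrastructure"
            )
        return payload

    enforce_egress_policy(Payload("public", "press release draft"), "foreign")  # allowed
    try:
        enforce_egress_policy(Payload("patient_record", "…"), "foreign")
    except EgressPolicyError as err:
        print("blocked:", err)

The design choice that matters is where the check lives: inside the request path, under your own jurisdiction’s rules, rather than in a periodic audit of someone else’s logs.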

Building the alternative

At Starlex, we’re building exactly this: foundation models designed for organizations that require full control over their AI infrastructure and data.

This isn’t about being against any particular provider. It’s about ensuring that organizations have genuine choice, including options built with their regulatory requirements, security needs, and sovereignty concerns in mind.

The question isn’t whether you should control your AI infrastructure. It’s whether you act now, or discover too late that you needed to.