
EU AI Act 2026: What It Means for Your Privacy and the Tools That Comply

The EU AI Act is the first major AI law with real bite. Its bans on the worst AI practices have applied since February 2, 2025, and most remaining obligations, including the high-risk rules, became enforceable on August 2, 2026. Fines reach €35 million or 7% of global annual revenue, whichever is higher. The EU also issued €2.3 billion in GDPR fines in 2025, up 38% year over year. That matters. Read the full text of Article 5 (prohibited AI practices) or the European Commission's AI regulatory framework overview.

What matters in practice: which AI systems the Act bans, what high-risk systems must do, and which tools keep your stack private without putting you on the wrong side of EU law.

For people in the EU: You now have hard legal protections against real-time facial recognition in public, facial scraping at web scale, and emotion recognition at work. These are enforceable bans, not policy talk.

  • Max fine: €35M or 7% of global annual revenue, whichever is higher
  • GDPR fines in 2025: €2.3B, up 38% year-on-year
  • February 2025: prohibitions in force, unacceptable-risk AI banned
  • August 2026: full enforcement, including critical infrastructure AI

Prohibited AI Systems (In Effect From February 2025)

The EU bans these uses outright:

  • Clearview AI-style facial databases: Mass scraping of facial images from the web or CCTV to build recognition databases. Clearview AI was fined €20M by Italy and €20M by France under GDPR. The AI Act now bans the practice directly.
  • Real-time biometric surveillance in public: Remote biometric ID in publicly accessible spaces is banned, except for narrow law-enforcement cases with judicial authorization.
  • Social scoring: Systems that score people in ways that affect rights or social participation are banned.
  • Emotion recognition in workplaces and schools: AI that infers emotion from faces, voice, or biometrics is banned in these settings, with narrow exceptions for medical and safety uses.
  • Subliminal manipulation: AI that pushes behavior below conscious awareness to override free choice is banned.
  • Exploitation of vulnerabilities: AI that targets people based on age, disability, or social vulnerability in ways that can cause harm is banned.

High-Risk AI: Transparency and Accountability Requirements

High-risk systems include tools used in employment, credit scoring, biometric categorization, critical infrastructure, law enforcement, migration, and justice. They must meet these rules:

  • Risk assessment and documentation
  • Human oversight; no fully automated high-stakes decisions
  • Explanation rights for affected people
  • Conformity assessment before deployment
  • Registration in an EU-wide AI database
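The human-oversight requirement above can be sketched as a decision flow in which the model only proposes an outcome and a named human reviewer finalizes it. This is a toy Python illustration, not a compliance recipe; the `CreditDecision` type and all field names are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CreditDecision:
    subject_id: str
    model_score: float                     # e.g. predicted default risk in [0, 1]
    model_suggestion: bool                 # what the system recommends
    final_approval: Optional[bool] = None  # set only by a human reviewer
    reviewer: Optional[str] = None         # who signed off, for the audit trail

def propose(subject_id: str, score: float, cutoff: float = 0.4) -> CreditDecision:
    """The model proposes; the decision stays open until a human signs off."""
    return CreditDecision(subject_id, score, model_suggestion=score < cutoff)

def sign_off(d: CreditDecision, reviewer: str, approve: bool) -> CreditDecision:
    """Record the human decision and who made it."""
    d.final_approval = approve
    d.reviewer = reviewer
    return d
```

The point of the structure: nothing downstream should ever read `model_suggestion` as a final outcome, only `final_approval`, which no automated path can set.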

What This Means for AI Surveillance in Practice

AI surveillance practices: EU AI Act and GDPR status

| Practice | EU AI Act status | GDPR status |
|---|---|---|
| Real-time facial recognition in public | Prohibited (narrow exceptions) | Also prohibited (special category data) |
| Internet facial image scraping for recognition | Explicitly prohibited | Prohibited (no lawful basis) |
| Emotion detection at work | Prohibited (medical/safety exceptions) | Requires explicit consent |
| AI-driven credit scoring | High-risk, regulated | Right to explanation required |
| Behavioural targeting via AI | Limited | Consent-based only |
| Cloud AI storing EU user queries | GPAI transparency required | Data minimisation and purpose limitation |

Building a Privacy-Compliant Tech Stack

If you want the upside of AI without feeding surveillance systems, build around local control and low retention.

AI and Search

  • Local LLMs via Ollama: Data stays on your machine. No cloud processing. No provider storage. See How to Run AI Privately.
  • Mistral Le Chat: French company under EU law, no ad model, short retention. One of the better cloud options if you need hosted AI.
  • SearXNG (self-hosted): Privacy-respecting metasearch. You avoid tying queries to one search provider. Listed on cunicula.com.
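To make the "data stays on your machine" point concrete, here is a minimal Python sketch that talks to Ollama's default local HTTP API (`http://localhost:11434/api/generate`). The model name is whatever you have pulled locally; nothing leaves localhost.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

def ask_local(model: str, prompt: str) -> str:
    """Send the prompt to localhost only; no cloud provider ever sees it."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]
```

Running it requires a local `ollama serve` instance and a pulled model (e.g. `ollama pull llama3`); the query and response never cross your network boundary.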

Communications

  • Signal: open-source end-to-end encrypted messaging with minimal metadata.
  • Proton Mail: Swiss jurisdiction, end-to-end encrypted email.
  • Tuta: German provider, encrypted mailbox and calendar.

DNS and Network

  • Mullvad DNS: Swedish jurisdiction, no query logging, tracker and ad blocking.
  • Quad9: Swiss non-profit DNS with threat blocking.
  • Mullvad VPN: Swedish jurisdiction, audited no-logs policy, accepts XMR and cash.
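As one concrete way to point a Linux machine at an encrypted resolver, here is an assumed systemd-resolved config fragment using Quad9 (IPs 9.9.9.9 and 149.112.112.112, TLS name dns.quad9.net). Paths and syntax vary by distro, and Mullvad's resolver can be substituted the same way.

```shell
# /etc/systemd/resolved.conf — switch system DNS to Quad9 over TLS
# (sketch for systemd-resolved setups; adjust for your distro)
# [Resolve]
# DNS=9.9.9.9#dns.quad9.net 149.112.112.112#dns.quad9.net
# DNSOverTLS=yes
# DNSSEC=yes

# Apply and verify:
sudo systemctl restart systemd-resolved
resolvectl status | grep -E 'DNS Servers|DNSOverTLS'
```

The `#hostname` suffix on each `DNS=` entry tells systemd-resolved which TLS certificate name to validate against.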

Cloud Storage

  • Proton Drive: Swiss jurisdiction, end-to-end encrypted at rest and in transit. No AI scanning of content.
  • Filen: German company, zero-knowledge encryption, open-source client.

Filing Complaints Under the AI Act

The Act gives individuals several ways to push back:

  • File complaints with national AI authorities
  • File complaints with data protection authorities when GDPR also applies
  • Seek judicial redress if a violation caused harm
  • Use civil society groups for collective complaints

The enforcement system is still taking shape. The EU AI Office and national bodies are still building process. Early cases will likely target obvious banned practices first, much like early GDPR enforcement did.


Not legal advice. For compliance questions, consult a qualified EU data protection attorney.

Follow the Money

AI regulation created a new compliance market. Big Tech spent $100M+ lobbying against the Act while auditors and law firms lined up to bill for it.

EU AI Act compliance industry: who profits from regulation

  • Compliance winners: SGS and Bureau Veritas (AI auditors), law firms (compliance counsel), ENISA-funded tools. Gartner estimates $36B in global AI governance spend by 2027.
  • Big Tech opposition: US Big Tech spent $100M+ on EU lobbying in 2022–2024, trying to weaken the Act before it bites.
  • Both outcomes profit: if regulation fails, Big Tech keeps the data; if it passes, the compliance industry collects the fees. Either way, citizens pay.

Frequently Asked Questions

When does the EU AI Act fully take effect?

The EU AI Act entered into force on August 1, 2024. It rolls out in stages: bans on unacceptable-risk AI applied from February 2, 2025; general-purpose AI rules from August 2, 2025; key high-risk rules from August 2, 2026; and the remaining high-risk obligations from August 2027. The main prohibited categories now face full enforcement, with fines up to €35 million or 7% of global annual revenue.

What AI systems are banned under the EU AI Act?

The Act bans several uses outright. That includes real-time remote biometric ID in public spaces, with narrow law-enforcement exceptions; social scoring by governments; systems that exploit age or disability; systems that manipulate behavior below conscious awareness; emotion recognition at work or school, outside narrow medical and safety exceptions; and mass scraping of facial images from the internet to build recognition databases.

Does the EU AI Act apply to non-EU companies?

Yes. Like GDPR, the AI Act reaches beyond the EU. It applies to providers placing AI systems on the EU market, deployers using AI in the EU, and providers outside the EU when the system output is used in the EU. US firms serving EU users can fall under the Act and face penalties for non-compliance.

How does the EU AI Act protect individuals from AI surveillance?

It blocks several surveillance uses directly. The Act bans real-time biometric surveillance in public spaces in most cases, bans scraping facial images to build recognition databases, and bans emotion recognition at work and in schools except for narrow medical and safety uses. It also adds complaint rights, redress routes, and transparency duties for some AI systems.

What is a privacy-compliant AI tool stack under EU regulations?

Start with local-first AI on your own hardware, such as Ollama or LM Studio. If you need cloud AI, pick providers under EU law, such as Mistral. For search, use self-hosted SearXNG or Brave Search. For communication, use end-to-end encrypted tools like Signal, Proton Mail, or Tuta. Avoid platforms that profile behavior for ads, scoring, or surveillance.