About Cunicula
Mission, editorial standards, and how we score trust in privacy tools.
Mission
Cunicula is a privacy-first directory and editorial project built for people who do not want every payment, message, login, or device tied to a permanent identity graph. We focus on tools that reduce surveillance, minimize data collection, and preserve optionality.
We do not promote surveillance tech as privacy tech. We are especially skeptical of products that depend on invasive analytics, mandatory identity checks, behavioral tracking, opaque ownership, or jurisdictions closely tied to intelligence-sharing blocs. That includes a hard bias against listing VPNs and adjacent services that carry Five Eyes jurisdictional risk when materially safer options exist.
Editorial standards
The Cunicula editorial team uses a documented vetting framework before we recommend a service or write a favorable guide around it. Our baseline questions are simple: who owns it, what data does it collect, what jurisdiction governs it, how does it make money, has it been audited, is it open source where it matters, and does its real-world behavior match its marketing?
Our methodology is laid out in How to Vet Any Privacy Tool Before You Trust It. That article explains the checklist we use when reviewing VPNs, wallets, carriers, messengers, hosting providers, and other services in the directory.
We may publish under a pseudonymous editorial voice, but the standards are consistent: documented claims, adversarial review of privacy marketing, and preference for tools that minimize trust rather than merely asking for it.
What we exclude
We intentionally avoid promoting products that normalize identity surveillance, backdoor-friendly design, aggressive telemetry, or hostile ownership structures. A slick UI or sponsorship budget is not enough. If a service creates unnecessary exposure for users, we would rather leave it unlisted than help whitewash the risk.
How trust scoring works
Trust scores are a directional signal, not a promise. They summarize the factors we believe most affect user risk: jurisdiction, ownership transparency, payment anonymity, data retention, open-source posture, audit history, operational track record, and whether the product can be used without creating a durable identity link.
A higher score means a service appears to demand less trust and expose less personal data under normal use. A lower score does not automatically mean fraud; it usually means more surveillance exposure, more legal or ownership risk, or weaker transparency. We expect readers to use the score as a starting point and then read the underlying notes before acting.
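To make the aggregation concrete, here is a minimal sketch of how factors like these could be combined into a single score. The factor names are taken from the list above; the weights, the 0-10 rating scale, and the `trust_score` function are hypothetical illustrations, not Cunicula's actual formula.

```python
# Illustrative only: factor names mirror the article's list; the weights
# and 0-10 scale are hypothetical, not Cunicula's real methodology.
FACTORS = {
    "jurisdiction": 0.15,
    "ownership_transparency": 0.15,
    "payment_anonymity": 0.10,
    "data_retention": 0.15,
    "open_source_posture": 0.10,
    "audit_history": 0.10,
    "operational_track_record": 0.10,
    "identity_link_avoidance": 0.15,
}

def trust_score(ratings: dict[str, float]) -> float:
    """Combine per-factor ratings (0-10) into one weighted score.

    A higher score means the service appears to demand less trust;
    a lower score flags more exposure, not necessarily fraud.
    """
    # Weights must sum to 1 so the score stays on the same 0-10 scale.
    assert abs(sum(FACTORS.values()) - 1.0) < 1e-9
    return round(sum(FACTORS[f] * ratings.get(f, 0.0) for f in FACTORS), 1)

example = {factor: 7.0 for factor in FACTORS}
print(trust_score(example))  # 7.0 when every factor rates 7.0
```

The point of the sketch is the shape, not the numbers: a single score compresses many judgment calls, which is exactly why the underlying notes matter more than the headline figure.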
Why this matters
Most privacy failures do not come from one dramatic hack. They come from small defaults that normalize logging, account binding, wallet tracing, device fingerprinting, and mandatory KYC until users have no room left to maneuver. Our job is to map the safer paths, explain the tradeoffs honestly, and help readers choose tools that preserve autonomy instead of quietly eroding it.