Upload a speech (text, PDF, or link). Our transparent pipeline scores it against democratic risk indicators and returns a human-readable report you can cite.
Research basis: Delgado-Mohatar & Alelú-Paz, When Algorithms Guard Democracy — integrating Levitsky & Ziblatt’s four dimensions with LLM analysis.
Scores are derived from transparent prompts and a public rubric; aggregation keeps the maximum score per indicator, so a single extreme utterance is never averaged away.
Provide a transcript, PDF, or a URL to a formal address by a public official or candidate.
We apply an auditable rubric derived from four core dimensions: rules, legitimacy, violence, liberties. Indicators and prompts are public.
You receive a concise PDF/JSON report with indicator maxima, excerpts, and caveats you can cite.
The approach prioritizes early warning by tracking the maximum value per indicator, because even a single extreme statement can normalize anti-democratic behavior.
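The maximum-per-indicator aggregation described above can be sketched as follows. This is an illustrative sketch, not the production pipeline; the indicator names and the 0-4 scale are assumptions for illustration only.

```python
# Sketch: keep the maximum score per indicator across all utterances,
# so one extreme statement is never diluted by averaging.
# Indicator names and the 0-4 scale are illustrative assumptions.

INDICATORS = [
    "rejection_of_democratic_rules",
    "denial_of_opponent_legitimacy",
    "toleration_of_violence",
    "readiness_to_curtail_civil_liberties",
]

def aggregate_max(utterance_scores):
    """utterance_scores: list of dicts mapping indicator name -> score (0-4)."""
    report = {indicator: 0 for indicator in INDICATORS}
    for scores in utterance_scores:
        for indicator, value in scores.items():
            report[indicator] = max(report[indicator], value)
    return report

# A single extreme utterance dominates several mild ones:
scores = [
    {"toleration_of_violence": 0, "denial_of_opponent_legitimacy": 1},
    {"toleration_of_violence": 4},  # one extreme statement
    {"toleration_of_violence": 1},
]
print(aggregate_max(scores)["toleration_of_violence"])  # 4
```

With a mean instead of a max, the extreme utterance above would score below 2 and could be missed; the maximum preserves it as an early-warning signal.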
The system alerts; it does not censor. It’s a public instrument for vigilance and accountability.
Prompts, indicators, and evaluation criteria are published so anyone can reproduce results.
The corpus and models are regularly updated to reflect new rhetoric and languages.
We highlight dataset limits, translation bias, and the risks of over-generalization across cultures and eras.
Sign in with Google to receive your results securely. We process only publicly relevant political addresses.
We map language to four diagnostic dimensions: rejection of democratic rules, denial of opponents’ legitimacy, tolerance of violence, and readiness to restrict civil liberties. Results emphasize maximum indicator scores to capture extreme utterances.
Yes. Prompts, indicators, and evaluation criteria are published so others can replicate and critique the findings.
No. This is a preventive, public-interest monitoring service. It alerts, it does not censor.
Analyses depend on transcript quality, translation, and historical/cultural context. Numerical operationalization is a simplification and should be interpreted cautiously.