Human Rights Foundation · AI Program · 2024–2026

From Surveillance
to Sovereignty

How HRF went from documenting AI-enabled tyranny to equipping the world's human rights defenders with the most powerful tools ever created for freedom.


In 2024, HRF watched with alarm as authoritarian regimes weaponized artificial intelligence — surveillance systems, deepfakes, social scoring — at a scale Orwell could only dream of. Then something shifted. The same technology that could enslave billions could also empower the individuals fighting for their freedom. This is the story of that realization — and what happened next.

A Visual Chronology

The Road to Sovereign AI

2024 — Prehistory
Surveillance · Policy · Research · Alarm

The Orwellian Reckoning:
AI as a Tool of Tyranny

For more than a decade before this story begins, HRF had watched authoritarian regimes learn to wield digital tools as instruments of control. Social media became surveillance infrastructure. Smartphones became tracking devices. Encrypted communications became targets.

By the early 2020s, a new and more powerful tool had arrived: AI. China was leading the charge — harnessing machine learning for population surveillance, facial recognition at scale, predictive policing, and social credit systems. This was not theoretical. It was mass control infrastructure being exported to allied dictatorships worldwide. By 2024, the question for HRF was no longer whether AI was dangerous to civil liberties. It was: what do we do about it?

Authoritarian AI surveillance
January 2025
Program Launch · Grantmaking · Research

The Pivot: AI for Individual Rights Program

But then a strange thing happened. The technology that had seemed to exist entirely in the service of authoritarian control began to show a different face — one turned toward individuals. AI was becoming a tool that anyone could wield: to write, to build, to organize, to create. This wasn't obvious at first. But HRF was perhaps uniquely positioned to notice it early. In nearly two decades of operating, HRF had watched activists embrace technology after technology — the internet, encryption, Bitcoin — each one initially feared or dismissed, each one ultimately a force multiplier for the people fighting for freedom. The pattern was familiar.

And so, in a decisive pivot, HRF launched the AI for Individual Rights Program — betting that the same force enabling authoritarian control could, in the right hands, become the most powerful tool individual human rights defenders had ever had. The initial focus: funding research into exactly how authoritarian regimes were abusing AI, and into what tools could make activists stronger. The threat and the opportunity were the same technology.

And crucially, HRF understood something the tech industry had not fully grasped: human rights defenders and civil society organizations operate with skeleton crews and minimal budgets. The capabilities a Fortune 500 company takes for granted — polished websites, video production, annual reports, marketing campaigns, app development — are simply out of reach for most NGOs. If AI could change that equation, it could change everything.

February 2025
Vibe Coding · Andrej Karpathy · Cultural Shift

A New Word Enters the World:
"Vibe Coding"

The same month HRF was laying its groundwork, Andrej Karpathy coined a term that would reshape how humanity thinks about building software: vibe coding. The idea was radical — describe what you want in plain language, and let AI build it. Non-technical people could now create. The implications for human rights defenders were hard to overstate. These groups had always been outgunned on the creative and technical front: a dissident organization in Venezuela or a civil society group in Belarus couldn't afford the agencies, developers, and designers that a well-funded institution could. Vibe coding promised to erase that gap entirely.

Spring – Summer 2025
Replit · No-Code · Empowerment

Non-Technical People Start to Create

Across the following months, a wave of tools — Replit, Lovable, and others — began unlocking creative and technical capabilities for people who had never written a line of code. Think about what this meant concretely for a small human rights organization: a team of three people in Nairobi or Minsk, running on donated funds, could suddenly produce a professional website, a polished annual report, a compelling video, a functional app — things that previously required hiring outside firms at costs they couldn't afford, or a creative team they'd never have the budget to staff. The playing field wasn't just tilting. It was being flipped.

Activists using AI
Late 2025
Sovereign AI · Infrastructure · Policy

A New North Star: Sovereign AI

As 2025 progressed, HRF began coalescing around a powerful idea: it wasn't enough to give activists access to corporate AI tools. True freedom required sovereignty — the ability to run your own inference, own your own data, and operate beyond the reach of platform policies or government pressure. HRF began funding infrastructure for activists to run AI in their own way.

November 2025 — San Francisco
AI Summit for Individual Rights · Policy · Technology · Dissidents

AI Summit for Individual Rights:
Dissidents Meet Technologists

For the first time, HRF brought together the dissident, policy, and technology communities in one room. The AI Summit for Individual Rights in San Francisco asked hard questions: What does sovereign AI look like for activists? How do we build infrastructure that can't be seized, blocked, or surveilled? The conversations were historic — and they produced a concrete action item: a hackathon.

AI Summit San Francisco
November 2025
Grantmaking · Open Source · Education

First Grant Round:
8 Projects Funded

Announced alongside the AI Summit for Individual Rights, HRF deployed the first round of capital from the AI for Individual Rights Fund — supporting 8 grantees across two tracks. The first: open-source tools to give activists sovereignty over their AI (privacy-preserving inference, censorship-resistant access via Nostr, local-first coding agents). The second: education and capacity-building — developer courses, co-working cohorts, and consultation services to help rights defenders build and ship their own tools. Grantees included Maple AI, Routstr, OpenCode, PlebDevs, CANVAS/GENE, and Citizen Power Initiatives for China.

January 2026
Claude Desktop · Anthropic · Agents

Claude Desktop Launches — AI Coding Goes Mainstream

Anthropic's Claude Desktop made local AI agents accessible to non-technical users at scale for the first time: describe a task in plain language and watch the computer carry it out. Suddenly, people who had never written a line of code were shipping real software. The timing was not lost on HRF: this was precisely the tool that would power its first AI Hack for Freedom, held the same month in Austin.

January 2026 — Austin, Texas
Hackathon · Claude Desktop · Builders

AI Hack for Freedom:
A Weekend That Changed Everything

Six weeks after the Summit, HRF convened the first AI Hack for Freedom in Austin, Texas — timed precisely as Claude Desktop launched and the tools became genuinely powerful. In a single weekend, activist captains guided developers in building real, working technology. The activists couldn't write the code themselves yet — but they could articulate what they needed, direct the work, and push for what mattered. The developers held the keyboard; the defenders set the mission. It was a first step — and a vital proof of concept that the collaboration was possible.

AI Hack for Freedom hackathon
January – February 2026
Open Claw · Agentic AI · Anyone Can Build

Open Claw Goes Viral — Anyone Can Have an Agent

In January and February 2026, Open Claw spread across the internet and changed public perception of AI overnight. For the first time, millions of ordinary people saw what an AI agent actually looks like in action — not a chatbot answering questions, but a system that autonomously browses, decides, and executes on your behalf. The realization hit broadly and fast: this wasn't a tool for engineers or tech companies. Anyone could have a personal agent. For HRF's work with human rights defenders, the timing was electric — it was exactly the shift the program had been built to harness.

March – April 2026
Gemma 3 · Nemotron · Google · Nvidia · Local Inference

Big Tech Bets on Open Source — Local Models Become Viable

In a striking signal of where the industry was heading, both Google and Nvidia released powerful open-weight models: Gemma 3 and Nemotron. These weren't hobbyist experiments — they were frontier-class models from two of the world's largest AI companies, released for anyone to run locally. The message was clear: even the biggest players were leaning into open source. For HRF, this was validation of the entire sovereign AI thesis — powerful intelligence that runs on your own hardware, beyond any government's reach, was no longer a distant dream. It was shipping.

Spring 2026
Agent Camp · Training · Superpowers

Agent Camp: 10 Defenders. Transformed.

HRF launched Agent Camp — intensive cohort training that gives the world's most prominent human rights activists a personal AI agent and the skills to use it. In the first cohorts, 10 activists went from zero AI capability to steering complex coding projects — by speaking to a computer in plain language. The step-change was real: in roughly six months, they went from observers to co-creators. These aren't hypothetical gains. Defenders who previously had to spend months fundraising to hire a developer, or go without a tool entirely, could now describe what they needed and have it built in an afternoon. PowerPoint decks, campaign websites, research reports, translation workflows, secure communications tools — the creative and operational output of a well-resourced team, available to anyone with a laptop and a good idea.

Early May 2026 — Nashville, Tennessee
AI Hack for Freedom · Activists as Developers · Vibe Coding

AI Hack for Freedom II:
From Bystanders to Builders

Four months after Austin, HRF held the second AI Hack for Freedom in Nashville, Tennessee — and something had fundamentally shifted. This time, the human rights defenders weren't guiding developers from the sidelines. They were the developers. Armed with AI tools and the skills from Agent Camp, activists wrote, built, and shipped their own digital tools over a single weekend. It stands as one of the most profound demonstrations of what this program has achieved: the journey from bystander to creator, compressed into months. And it's worth pausing on what this means before we even get to the full local inference future. Stage one — cloud-based AI tools, vibe coding, personal agents — is already transformative. A human rights defender who two years ago couldn't produce a website, a video, or an app without paying firms they couldn't afford can now build all of it themselves, in a weekend, in a room full of peers. The final frontier of private local inference will matter enormously. But what's already true right now cannot be overstated.

May 12, 2026
Grantmaking · Open Source · Education

Second Grant Round:
10 More Projects Funded

HRF announced a second wave of 10 grants from the AI for Individual Rights Fund — doubling down on both tracks. On the open-source tooling side: projects giving activists private, local, permissionless AI access — including tools using Bitcoin's Lightning Network for pay-per-task AI in East Africa, a "VPN for AI inference" that anonymizes queries to closed models, compression techniques to run powerful LLMs on ordinary phones and laptops, and Enclave — a secure case management tool for political prisoners, conceived by Nicaraguan defender Berta Valle and first built at the AI Hack for Freedom hackathon. On the education side: new curricula targeting Gen Z and women in authoritarian countries, teaching vibe coding and how to deploy personal AI agents that expand their power rather than expose them. Supporting research grantees track CCP surveillance tactics and maintain the China Dissent Monitor — an AI tool that has logged over 14,000 cases of protest activity before censors can erase them.

May 2026
Local Compute · Hardware · Sovereign AI

Building the Freedom Cluster:
Local & Private Inference

HRF began assembling its own compute cluster — hosted inference hardware that activists can use without relying on commercial API providers. Simultaneously, the Local Intelligence program started deploying hardware directly to NGOs and defenders. The goal: AI that runs entirely under your control, on your terms, unreachable by any corporation or government. Edge AI, local compute, open weights — the architecture of freedom.

HRF local AI cluster hardware
June 1–3, 2026 — Oslo, Norway
Oslo Freedom Forum · Vibe Coding Lounge · Hands-On

Oslo Freedom Forum:
The AI Program Goes Public

The 2026 Oslo Freedom Forum is the coming-out party for HRF's AI program. In the Vibe Coding Lounge, attendees from every corner of the human rights world will sit down at a computer, describe their movement's needs, and watch AI build for them in real time. For many, it will be the first time they've ever used an agent. The demand for training — not corporate apps, but your own agent, running on your terms — is expected to be torrential.

"In essentially six months, they went from not being able to contribute at all to steering the coding on their own projects — by speaking to a computer. This is just completely amazing stuff." — Alex Gladstein, Chief Strategy Officer, Human Rights Foundation
Key Milestones

What HRF Has Built

01
Program
AI for Individual Rights Program
Launched January 2025. The first HRF program dedicated entirely to harnessing AI for human rights defenders and researching authoritarian AI abuse.
02
Event
AI Summit for Individual Rights
November 2025. First-ever gathering uniting dissidents, policy experts, and AI technologists to develop the sovereign AI agenda.
03
Grantmaking
18 Projects Funded
Two rounds of grants (Nov 2025 + May 2026) deploying capital into open-source privacy tools, local inference, censorship-resistant AI access, and educational programs for activists worldwide.
04
Event
AI Hack for Freedom, Austin
January 2026. Activist captains guided developers to build real tools together in a single weekend — proving that co-creation across the technical divide is possible.
05
Training
Agent Camp Launches
Spring 2026. 10 world-class human rights defenders given their own AI agents and the training to deploy them effectively in their work.
06
Infrastructure
Local Intelligence Program
HRF-funded grants and hardware for NGOs to run local AI inference — no tokens, no corporate dependencies, full data privacy.
07
Vision
Sovereign AI Roadmap
HRF's north star: every activist running open-weight models on their own hardware — a phone or laptop they own — with full privacy and autonomy.
Looking Ahead

The Road to Full Sovereignty

As of May 2026, HRF is halfway there. The mission is clear — give activists AI superpowers in an open-source way they control — but the final frontier remains: local, private inference on commodity hardware.

But "halfway there" undersells what has already been achieved. Stage one of this revolution — cloud-based AI agents, vibe coding, personal assistants — has already transformed what's possible for human rights defenders right now. Small civil society organizations that couldn't afford to hire a video editor, a web developer, or a marketing agency are producing all of that work themselves, in hours instead of months. The creative and operational gap between a well-funded institution and a three-person NGO in a repressive country is closing fast. That is not a footnote. That is the headline.

Open-weight models are improving faster than Moore's Law. The same laptop that struggled to run a useful model in 2024 can run something genuinely powerful in 2026. By the end of this year — early 2027 at the latest — running a Claude-level model privately on a phone or modest laptop looks not just possible, but likely.
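
To see why running a frontier-class model on a laptop is plausible, here is a back-of-envelope sketch of memory footprint under weight quantization. The parameter count, overhead factor, and quantization levels below are illustrative assumptions, not vendor specifications or benchmarks:

```python
def model_memory_gb(params_billion: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Rough RAM/VRAM needed to run an LLM locally.

    bytes = parameters * (bits per weight / 8), multiplied by a ~20%
    allowance for KV cache and activations. Ballpark only.
    """
    return params_billion * 1e9 * (bits_per_weight / 8) * overhead / 1e9

# A hypothetical 27-billion-parameter open-weight model:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{model_memory_gb(27, bits):.0f} GB")
# → 16-bit: ~65 GB, 8-bit: ~32 GB, 4-bit: ~16 GB
```

The arithmetic is the point: at full 16-bit precision such a model needs workstation-class memory, but 4-bit quantization brings it within reach of an ordinary 16–32 GB laptop — which is why quantized open weights are central to the local-inference thesis.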

When that day comes, a dissident in Tehran, a journalist in Minsk, or an activist in Caracas will have access to the most powerful intelligence tool in human history — completely beyond any government's reach.

Research & Grantmaking — 90%
Activist Training Programs — 75%
Hosted Inference Infrastructure — 55%
Fully Local & Private AI — 30%
Sovereign AI infrastructure

The Future of Freedom
Runs on Sovereign AI

HRF is onboarding the world's dissidents onto open, private, self-custodial AI. Join the program, apply for a grant, or bring an activist to Oslo.

AI for Individual Rights → hrf.org