Joining Google Trust & Safety
Over the last several years, Data & AI Governance has undergone a quiet but profound transformation. What once required evangelizing, piloting, and culturally rewiring entire enterprises has now matured into a broadly accepted discipline. Fortune 500 companies have either fully adopted or are deep in the process of adopting distributed governance strategies… federated models that allow teams to self-regulate within shared, technically enforced frameworks. These architectures create room for nuance, accommodate wide-ranging use cases, and establish norms that scale.
That momentum is real. And it’s why I felt the timing was right to pivot.
1. AI must evolve from efficiency to common-good application
We’ve extracted the early efficiency gains from AI (faster workflows, cheaper operations, more intelligent automation), and there is now an AI race between the United States and China. AI expansion will continue whether or not anyone presses the gas pedal. So, naturally, I’m interested in AI crossing into domains that touch human identity, collective psychology, political stability, and societal continuity. The next era won't be defined by acceleration alone…
This requires moving past surface-level optimizations and beginning to solve root-level problems. Problems that determine whether society remains coherent, whether people maintain trust in institutions, and whether AI becomes a tool of flourishing… or fragmentation.
Trust & Safety is where those questions are being asked directly, without euphemism.
2. We are at a technological and economic junction that forces us to rethink scale
Scale is no longer about compute.
It’s about human capacity.
Trust & Safety is evolving because AI itself is evolving. AI already functions as a distributed intelligence system (expanding an end user’s knowledge, sharpening their skills, and meeting them at whatever depth of curiosity they bring). As AI scales, Trust & Safety will need to mirror this distribution. Instead of centralized oversight structures, we’ll need systems where knowledge, context, and protective capabilities are broadly distributed, aligned to geographically or anthropologically established norms and expectations.
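To make that concrete, here is a minimal sketch of what regionally aligned enforcement could look like. It is purely illustrative: the regions, categories, thresholds, and the `RegionalPolicy` structure are my own assumptions for the sake of the idea, not a description of any real system.

```python
from dataclasses import dataclass

@dataclass
class RegionalPolicy:
    """Hypothetical per-region policy: thresholds tuned to local norms."""
    region: str
    # Minimum classifier confidence before content is flagged in this
    # region; the values below are invented for illustration only.
    flag_thresholds: dict[str, float]

POLICIES = {
    "region_a": RegionalPolicy("region_a", {"harassment": 0.90}),
    "region_b": RegionalPolicy("region_b", {"harassment": 0.75}),
}

def should_flag(region: str, category: str, score: float) -> bool:
    # Apply the region's own threshold instead of one centralized cutoff.
    policy = POLICIES[region]
    return score >= policy.flag_thresholds.get(category, 1.0)
```

The point isn’t the code; it’s that the protective logic lives alongside the local norms rather than in one global rulebook.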
At the population level, distributed AI lifts the cognitive baseline. It expands the average person’s access to expertise, context, and problem-solving ability, which has a dual use. Instead of a workforce bound by procedural scripts or rigid workflows, people gain the autonomy and judgment to resolve nuanced issues that once required escalation. Likewise, the “average worker” becomes less average: not through innate genius, but through access to intelligence amplifiers that raise the entire population’s capability. When millions of people operate with deeper context and higher agency, Trust & Safety becomes more stable, more adaptive, and more human-aligned.
But there is also the second layer: the continuity of exceptional talent. Every era relies on a small number of nonlinear thinkers whose contributions dramatically redirect technological and societal trajectories. If the world depends on a handful of “Elon Musks,” we inherit a fragile future. What happens when these singular individuals disappear? How do we safeguard their intuition, their conceptual frameworks, their problem-solving heuristics? Distributed AI offers a path here, too. As systems learn from the best we have, we create the possibility of replicating competence rather than losing it. The goal is not to replace gifted individuals, but to ensure that their abilities echo through the ecosystem — captured, distilled, and re-applied by the many.
When these two layers move together (1. uplifting the population while 2. preserving and amplifying the nonlinear few), labor becomes fundamentally more capable, and the benefit to trust and safety is self-evident.
Not only does the average person gain more capacity, but the outlier’s impact becomes scalable rather than singular. This dual motion creates a civilization less dependent on extraordinary individuals, yet more capable of producing them. And in that world, Trust & Safety becomes a distributed system of shared competence, continuous learning, and collective resilience… an ecosystem where knowledge flows outward instead of upward.
3. I’m driven by problems that impact everyone
Data & AI Governance taught me how to build frameworks that scale. Trust & Safety is where those frameworks meet real lives.
At my core, I want to solve problems that shape the opportunity landscape for millions of people. Problems where the goal is not only efficiency or compliance, but human flourishing… creating the maximum room for freedom without sacrificing collective safety.
That requires principled architecture and foresight, not reactive policy.
4. Trust & Safety operates at a scale few people ever see
A mature Trust & Safety organization processes over a billion content reviews per year.
For the past decade, basic AI models have annotated content, triaged patterns, and orchestrated responses like an air-traffic control system. Now we’re moving toward AI agents coordinating actions across screens and systems, work that once required thousands of manual reviewers. The old world (high-volume offshore labor, oversensitive classifiers, widespread overflagging) created real harm. People were deplatformed unfairly. The social fabric of the West was strained.
Foreign adversaries took advantage of that strain, and that cannot continue.
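For a sense of scale: a billion reviews a year is roughly 2.7 million a day, about 32 every second. And to make the overflagging failure mode concrete, here is a deliberately simplified triage sketch. The thresholds and labels are hypothetical assumptions, not production values; the idea is that a single cutoff overflags, while an uncertainty band routed to human reviewers trades volume for judgment.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    HUMAN_REVIEW = "human_review"
    REMOVE = "remove"

# Hypothetical thresholds, invented for illustration. A single cutoff
# (e.g., remove everything above 0.5) is what produces widespread
# overflagging; the middle band below sends uncertain cases to people.
AUTO_REMOVE_AT = 0.95
AUTO_ALLOW_BELOW = 0.40

def triage(classifier_score: float) -> Action:
    """Route a piece of content based on model confidence."""
    if classifier_score >= AUTO_REMOVE_AT:
        return Action.REMOVE        # high confidence: act automatically
    if classifier_score < AUTO_ALLOW_BELOW:
        return Action.ALLOW         # low confidence: leave it alone
    return Action.HUMAN_REVIEW      # uncertain: a person decides
```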
5. Treating life—and AI—as a continuous experiment in pursuit of the good
We need systems that are not only powerful, but appropriate, measured, and contextually wise.
One of my guiding convictions is that life is an ongoing experiment (1 Thessalonians 5:21). We test constantly, we learn constantly (we discard what does harm), and we revisit what we hold as fact to ensure it remains true; knowledge is not static in a civilization that pursues exploration.
In such a civilization… “efficiency,” “accuracy,” and “revenue” are not sufficient metrics for AI.
We must aim for good.
Not good in the abstract.
Not good in a quarterly report.
But good in context (appropriate to the problem), aligned with human values, proportional to risk, and grounded in wisdom rather than expedience.
Trust & Safety is one of the few domains where teams actively practice this discipline every day. Some teams realize it better than others.
Why Trust & Safety? Because this is where the next decade’s work begins.
Distributed Data & AI Governance has laid a foundation for the largest companies in the world to steward digital systems without excessive government bureaucracy. The world is discovering that AI governance cannot be an afterthought; it must be architected at the same level as the technology itself. A handful of people are already exploring the technological options, and the experiments are underway.
Trust & Safety is where that architecture is being built: by engineers, philosophers, product thinkers, researchers, policy experts, and systems designers.
This is where tomorrow’s most consequential problems surface early.
This is where the stakes are highest for everyone.
This is where alignment isn’t theoretical.
This is where wisdom matters.