Speaking at Missional AI Global 2026

For anyone who missed the presentation, here’s a link.

Artificial intelligence models now translate languages, synthesize research, generate software, and reason through complex problems at speeds that reshape industries. The central constraint has shifted. As AI systems expand across institutions, supply chains, and digital infrastructure, the pressing question becomes whether the origins, movements, and rules of these systems (including the content they alter or create) remain traceable to humans.

AI Provenance

The emerging discipline of AI provenance addresses the first half of that challenge. Provenance provides the historical and relational record of an AI artifact as it moves through time and across systems. Governance addresses the second half: it defines the technical and architectural structures that constrain how AI behaves once it is deployed, including the types of content it can handle, alter, or generate. Together, they form the skeleton of a healthy AI ecosystem. Without provenance, organizations lose the ability to trace where intelligence originates and how it propagates. Without governance, organizations lose the ability to shape how intelligence operates once it is embedded in critical systems, and with it the chronology of their data. The next era of AI development therefore revolves around rethinking what makes intelligence traceable, accountable, and structurally coherent across the entire computing stack, while acknowledging the inherent risks of dual use if these two topics are not handled with humility.

AI provenance begins with a chronology of ownership. Modern models rarely emerge from a single moment of creation. A foundation model may begin as a research project, evolve through internal experimentation, receive additional training datasets, undergo multiple fine-tuning cycles, and eventually enter production environments across several organizations. Each step introduces a new actor who holds control over the artifact. A chronological ownership record captures this progression in time. It records which institution provided the data, what team trained the base model, which teams modified it, which infrastructure providers hosted it, and which organizations ultimately deploy it. This timeline establishes a chain of custody that mirrors the recordkeeping practices used in financial systems, legal evidence, and scientific research. Accountability emerges from the ability to reconstruct the full lineage of a model or dataset. When a system produces an outcome that affects customers, institutions, or public decision making, investigators must be able to reconstruct the sequence of decisions that shaped the system.
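The chain-of-custody idea above can be sketched in code. This is a minimal, illustrative schema (the field names and hashing scheme are my assumptions, not a published standard): each custody event references a hash of the previous one, so the full lineage of a model can be reconstructed and checked for tampering.

```python
# Sketch of a chronological ownership record for an AI artifact.
# Field names and the hash-chain design are illustrative assumptions,
# not a standard schema.
import hashlib
import json
from dataclasses import dataclass


@dataclass
class CustodyEvent:
    actor: str      # institution or team taking control of the artifact
    action: str     # e.g. "trained", "fine-tuned", "deployed"
    timestamp: str  # ISO-8601 date of the hand-off
    prev_hash: str  # digest of the previous event, forming a chain

    def digest(self) -> str:
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


def append_event(chain: list, actor: str, action: str, timestamp: str) -> list:
    """Add a new custody event linked to the current head of the chain."""
    prev = chain[-1].digest() if chain else "genesis"
    chain.append(CustodyEvent(actor, action, timestamp, prev))
    return chain


def verify_chain(chain: list) -> bool:
    """Reconstruct the lineage: each event must reference its predecessor."""
    for prev, curr in zip(chain, chain[1:]):
        if curr.prev_hash != prev.digest():
            return False
    return True
```

Because each event's hash covers its predecessor's hash, altering any step in the history invalidates every step after it, which is exactly the property an investigator needs when reconstructing the sequence of decisions that shaped a system.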

Ownership alone does not capture how intelligence moves through modern digital ecosystems. AI artifacts rarely remain confined to the environment in which they were created. Outputs generated by one model become inputs to another system. Predictions exported through APIs appear inside dashboards, automated workflows, and downstream decision engines. Generated datasets feed the training pipelines of subsequent models. Each of these interactions produces a lineage of use across systems. Use is therefore the most important metric. This lineage describes how artifacts propagate through distributed technical environments and how their influence spreads across organizational boundaries. Engineers must gain the ability to trace how a specific output influenced downstream models, which decisions incorporated that output, and how the artifact contributed to the training of future systems. Distributed lineage tracking therefore transforms AI ecosystems into observable networks. Organizations gain visibility into how intelligence spreads across platforms, services, and computational environments.
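One way to picture lineage-of-use tracking is as a directed graph whose edges point from an artifact to the systems that consumed its output. The sketch below is a toy model under that assumption (the class and method names are hypothetical): a traversal of the graph answers the question the paragraph poses, namely which downstream systems a given output has influenced.

```python
# Sketch of distributed lineage tracking as a directed graph.
# Edges point from an artifact to the systems that consumed its output.
# Class and node names are illustrative.
from collections import defaultdict, deque


class LineageGraph:
    def __init__(self):
        self.edges = defaultdict(set)

    def record_use(self, source: str, consumer: str) -> None:
        """Record that an output of `source` became an input to `consumer`."""
        self.edges[source].add(consumer)

    def downstream(self, artifact: str) -> set:
        """Every system the artifact's influence has propagated to (BFS)."""
        seen, queue = set(), deque([artifact])
        while queue:
            node = queue.popleft()
            for nxt in self.edges[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen
```

With edges recorded at each hand-off (an API export, a generated dataset entering a training pipeline), the ecosystem becomes the observable network the paragraph describes: influence can be traced across organizational boundaries rather than inferred after the fact.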

AI Governance

AI governance must ultimately live inside the computing stack itself. Policies written in documents or frameworks cannot meaningfully constrain systems that operate at machine speed. Effective governance must exist inside the technical architecture that executes intelligence. Each layer of the stack therefore becomes a control surface. Protocols define how intelligence communicates. Hardware defines the boundaries of computation. Kernels coordinate resource access and isolation. Operating systems orchestrate reasoning and knowledge retrieval. Together these layers determine whether an AI ecosystem behaves as a controlled infrastructure or an unpredictable network of models. Governance at this level resembles the architecture of civil infrastructure. Traffic rules matter, but highways, intersections, and physical road barriers ultimately determine how vehicles move. AI governance functions the same way. Structural design at the protocol, hardware, kernel, and operating system layers determines the behavioral limits of intelligent systems long before policy documents or regulatory guidance ever enter the picture. The stack becomes the enforcement layer.

Protocol Designs (TCAI vs TCP/IP)

Right now, most of what we’re doing to restore trust in digital systems lives at the surface layer. We’re using things like AI moderation, cryptographic signing at the hardware level, content credentials like C2PA, and watermarking. These are all important. They represent real progress. But they are still reactive. They are attempting to reintroduce trust into systems that were never designed to preserve it at scale. What’s starting to emerge now is a much deeper shift. We are seeing the formation of trust infrastructure. This includes things like Model Context Protocol, metadata chains, hardware-bound identity, and shared global standards. At the center of this is the idea of a structured contract, something like PIC, where every meaningful AI action carries intent, impact, provenance, evidence, and the action itself. In other words, not just what happened, but why it happened, what it was based on, and the level of confidence behind it. This represents a fundamentally different model of computing, one that is traceable, verifiable, and context-aware by design.
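The structured-contract idea can be made concrete with a small sketch. The schema below is my own illustrative rendering of a PIC-style contract (the class name, fields, and threshold gate are assumptions, not a published specification): every meaningful action carries its intent, impact, provenance, evidence, and confidence alongside the action itself, and downstream systems can gate execution on the contract rather than on the bare output.

```python
# Hypothetical sketch of a PIC-style structured contract. Field names
# and the gating logic are illustrative assumptions, not a spec.
from dataclasses import dataclass


@dataclass(frozen=True)
class ActionContract:
    action: str        # what happened
    intent: str        # why it happened
    impact: str        # what it is expected to change
    provenance: str    # which model, agent, or dataset produced it
    evidence: str      # what it was based on
    confidence: float  # 0.0-1.0, the level of confidence behind it

    def is_actionable(self, threshold: float = 0.8) -> bool:
        """A downstream system can refuse low-confidence or unattributed
        actions before they take effect."""
        return self.confidence >= threshold and bool(self.provenance)
```

The point of the sketch is the shift in what a message *is*: not just a result, but a self-describing record that travels with its own justification and can be rejected on structural grounds.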

To understand why this matters, it helps to understand what protocols actually do. Protocols establish the language through which intelligent systems communicate. Every interaction between agents, models, APIs, and infrastructure passes through protocol rules that define message structure, authentication methods, verification procedures, and routing logic. These rules determine whether a system can verify where information came from, whether it has been altered, and whether it should be trusted before taking action. In emerging AI ecosystems, protocol frameworks are beginning to embed identity and traceability directly into the communication layer. Each message can carry cryptographic signatures that identify the model, dataset, or agent responsible for generating it. Downstream systems can verify both the origin and integrity of that information before acting on it. This establishes a verifiable chain of information flow across systems, where knowledge retains its origin, transformation history, and contextual grounding as it moves through networks.
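A minimal sketch of that origin-and-integrity check, assuming a shared-secret HMAC in place of the asymmetric signatures a real protocol would use (the function names and message shape are illustrative): the receiver verifies both who sent the message and that it was not altered before acting on it.

```python
# Minimal sketch of origin and integrity checking at the protocol layer.
# Uses a shared-secret HMAC for simplicity; a real deployment would use
# asymmetric signatures. Names and message shape are illustrative.
import hashlib
import hmac
import json


def sign_message(key: bytes, sender_id: str, payload: dict) -> dict:
    """Attach a signature binding the sender identity to the payload."""
    body = json.dumps({"sender": sender_id, "payload": payload},
                      sort_keys=True)
    tag = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return {"sender": sender_id, "payload": payload, "sig": tag}


def verify_message(key: bytes, message: dict) -> bool:
    """Check origin and integrity before the receiver acts on the message."""
    body = json.dumps({"sender": message["sender"],
                       "payload": message["payload"]}, sort_keys=True)
    expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["sig"])
```

Any modification to the sender field or the payload in transit breaks verification, which is the property that lets a downstream system trust a message's origin and transformation history before incorporating it.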

Protocol design governs how knowledge moves, how decisions are made, and how systems coordinate across technological or geographic boundaries. When communication standards incorporate provenance metadata, authentication, and verification rules, distributed AI systems gain the ability to operate with shared context and verifiable truth. Protocol governance becomes the connective tissue of the ecosystem, ensuring that information traveling between systems retains its identity, context, and traceable history from origin to outcome.

At the same time, something bigger is unfolding in the background. We are entering a new kind of “standards war”. Different groups, including major technology companies, open-source communities, governments, institutions, and concentrated networks of capital, are all working to define what this next protocol layer looks like. Because whoever defines the protocol defines the system for a very long time. And whoever defines the system ultimately shapes how truth is established, how identity is verified, and how decisions are made at scale.

The Problem of Dual Use

The same structural power that allows these systems to protect the integrity of information can also enable unprecedented forms of control. When governance becomes embedded directly into protocols, chips, operating systems, and storage media, the infrastructure acquires the ability to determine what kinds of computation may occur and what information may exist. The distinction between protecting systems and controlling behavior becomes dangerously thin. The very tools that make AI ecosystems stable can also be wielded to restrict participation in digital life itself.

The combined effect across all layers of the stack produces a powerful dynamic. Provenance identifies the origin of every artifact. Protocols control which actors may communicate. Hardware constrains which computations may execute. Kernels coordinate which processes receive resources. Operating systems determine which knowledge sources influence reasoning. Storage filters decide which artifacts become part of long-term memory. Each layer individually improves system stability and security. Together they create a comprehensive architecture capable of governing the entire lifecycle of information.

This dual-use reality defines one of the central design challenges of the AI era. Engineers must build systems that protect the integrity of intelligence without transforming that protection into mechanisms of exclusion. The architecture must support accountability without eliminating openness. Provenance must reveal the origin of artifacts while preserving the possibility of participation across diverse actors. Storage systems must defend knowledge archives without allowing institutional bias to shape what becomes part of the permanent record. This is the balance between resilience and freedom.

The future therefore depends on structural humility within the design of the stack itself. Engineers and institutions building these systems must recognize that every protective mechanism carries the potential for coercive application. Protocols, chips, kernels, and storage media will shape the boundaries of digital life for decades to come. The challenge lies in designing infrastructure that enables flourishing and preserves the integrity of knowledge without concentrating power over who is allowed to create, exchange, and preserve that knowledge.
