It’s all happening — “The Parallel Web”

Introduction

Over the past five years I’ve spent a lot of time focused on Data Governance at the B2B platform layer, and more recently on Trust & Safety Applied AI.

Since stepping away from Google, I’ve found my curiosity shifting to a more fundamental space… internet-scale problems.

The internet’s practical odyssey dates back to Maxwell’s 1865 formulation of electromagnetic field theory. This foundation enabled electrical engineering, which in turn made possible Shannon’s 1948 formulation of information theory, proving that information could be quantified and transmitted reliably over noisy channels. Turing’s work on computability/cryptography and von Neumann’s stored-program architecture established the principles of general-purpose computation upon which modern systems were built. This lineage continued into early internet architecture through figures such as Licklider, Clark, Saltzer, and ultimately Cerf and Kahn, whose work on TCP/IP established a globally interoperable network.

Up to this point, the system evolved in a relatively sequential manner, with each layer building directly on the last to form a coherent, expanding stack. However, today, multiple epochs of the internet are coexisting and evolving simultaneously rather than unfolding in sequence. A more technical framing might be “Concurrent Internet Architecture”. What appears to be a single, unified network, in reality, has become a cluster of focal nodes, interacting cultural paradigms, legal primitives, and emerging stacks… shaped continuously by human-driven incentives. These layers are increasingly and collectively determining how information, identity, trust, and agency operate in the physical world.

I coin the term “The Parallel Web” to describe v1–v7 of the World Wide Web: it is a map of forces already in motion.

Think of this as a journey or a road trip (not a completed picture), with new on-ramps and off-ramps around 2028, potentially dramatic intersections near 2032, and an even greater shift approaching 2040. However, the future remains non-deterministic.


Web v1.0

The earliest phase of the modern web, commonly referred to as Web 1.0, emerged from the foundational work of ARPANET and the development of TCP/IP by Robert Kahn and Vint Cerf, formalized in their 1974 paper “A Protocol for Packet Network Intercommunication.” This work established the architectural basis for a globally interoperable network. Tim Berners-Lee’s 1989 proposal, “Information Management: A Proposal,” further advanced this foundation by introducing the World Wide Web as a system for linking documents across distributed environments. The resulting implementation of HTTP and HTML enabled a globally accessible system of static documents organized through hyperlinks. The architecture was intentionally simple and decentralized, designed to facilitate the distribution of knowledge rather than the coordination of behavior. The primary actor in this system was the publisher, and the user existed largely as a passive reader navigating what Berners-Lee effectively envisioned as a universal information space.
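To make the publishing model concrete, here is a minimal sketch (in modern TypeScript, far removed from the original CERN tooling) of a Web 1.0-style server: every request returns the same static HTML document, and the hyperlink is the only navigational primitive. The port and page content are invented for illustration.

```typescript
// Minimal sketch of the Web 1.0 publishing model: a server that returns
// static HTML, where the hyperlink is the only navigational primitive.
// (Illustrative only; uses Node's built-in http module.)
import { createServer } from "node:http";

const page = `<!DOCTYPE html>
<html>
  <head><title>A Web 1.0 Document</title></head>
  <body>
    <h1>Information, not interaction</h1>
    <p>The reader navigates by following
       <a href="http://example.org/next.html">hyperlinks</a>.</p>
  </body>
</html>`;

createServer((_req, res) => {
  // Every request receives the same document: the publisher writes, the reader reads.
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(page);
}).listen(8080);
```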

Open source governance also emerged alongside early internet infrastructure as a model built on the assumption of a shared commons with broadly aligned incentives. Advocates such as Richard Stallman, through the GNU Project and Free Software Foundation (1980s), and later Eric Raymond in The Cathedral and the Bazaar (1997), articulated a philosophy in which transparency, collaboration, and peer review would produce more resilient and widely accessible systems. Linus Torvalds’ development of Linux (1991) demonstrated this model at scale, while institutions such as the Open Source Initiative formalized its principles in the late 1990s. However, openness is structurally neutral. It does not distinguish between contributors strengthening shared infrastructure and actors systematically mapping its limits for asymmetric advantage. As sovereign nations have learned, open-source systems can be leveraged as instruments by those willing to absorb short-term costs in order to erode structural moats they could not otherwise breach. The same properties that enabled the democratization of software and infrastructure now carry a persistent tension: transparency and access increase both resilience and exposure simultaneously.

Web v2.0

The transition to Web 2.0 introduced a fundamental shift from static information distribution to continuous, real-time interaction, but its roots trace back to the browser wars of the late 1990s, where Microsoft’s Internet Explorer helped bring the web into the mainstream. By bundling Internet Explorer with Windows, Microsoft effectively placed a gateway to the internet on hundreds of millions of machines, standardizing access and accelerating adoption at a global scale. While often retrospectively framed by Tim O’Reilly’s 2005 articulation, this phase was defined by the convergence of browser innovation, distributed systems engineering, and platform-scale coordination. Advances such as asynchronous JavaScript and XML (AJAX) enabled dynamic interfaces, allowing users to both consume and produce content in real time. This made applications feel less like documents and more like living systems that could update, respond, and adapt to user input. The web evolved into a participatory system, where identity, communication, and content creation were increasingly mediated by platforms such as Microsoft, Google, Amazon, Apple, and Meta. This marked the beginning of persistent digital presence, where users were no longer anonymous visitors but continuous participants within platform-defined environments.
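A brief sketch of the AJAX pattern that made this shift possible, written against the modern fetch API rather than the original XMLHttpRequest; the /api/feed endpoint and response shape are hypothetical:

```typescript
// Sketch of the AJAX pattern: update part of a page in place, without a
// full reload. The "/api/feed" endpoint and element id are hypothetical.
async function refreshFeed(): Promise<void> {
  const res = await fetch("/api/feed"); // asynchronous request in the background
  const items: { author: string; text: string }[] = await res.json();

  const feed = document.getElementById("feed")!;
  feed.innerHTML = items
    .map((i) => `<p><strong>${i.author}</strong>: ${i.text}</p>`)
    .join("");
}

// Poll periodically so the page behaves like a "living system".
setInterval(refreshFeed, 5_000);
```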

What distinguishes this era is not just participation, but the emergence of global reliability as an expectation. Users began to assume that services would be fast, available, and consistent regardless of geography or scale. Google played a decisive role in transforming the web into a utility-like infrastructure. Search created a universal indexing layer that made the web navigable at scale, while backend systems such as GFS, MapReduce, Bigtable, and Spanner enabled massive, distributed computation. These systems abstracted away the complexity of physical infrastructure, allowing developers to build on top of globally distributed resources without needing to manage them directly. Products like Gmail, Maps, and YouTube normalized real-time access and near-infinite storage, while Chrome standardized the execution environment for modern applications. Android extended this layer to billions of devices, ensuring that access to the web was persistent, mobile, and globally consistent, effectively collapsing the boundary between the internet and everyday life.
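As one illustration of the programming model behind systems like MapReduce, here is a minimal in-memory word count in TypeScript. This shows only the map → shuffle → reduce contract; Google’s actual implementation distributes these phases across thousands of machines with fault tolerance and a storage layer:

```typescript
// In-memory sketch of the MapReduce programming model (canonical word count).
// Illustrates only the map -> shuffle -> reduce contract, not the distributed
// execution, fault tolerance, or storage layers of Google's systems.
type Pair = [word: string, count: number];

const map = (doc: string): Pair[] =>
  doc.toLowerCase().split(/\W+/).filter(Boolean).map((w): Pair => [w, 1]);

function shuffle(pairs: Pair[]): Map<string, number[]> {
  const groups = new Map<string, number[]>();
  for (const [word, count] of pairs) {
    groups.set(word, [...(groups.get(word) ?? []), count]);
  }
  return groups;
}

const reduce = (counts: number[]): number => counts.reduce((a, b) => a + b, 0);

const docs = ["the web is a utility", "the web scales"];
for (const [word, counts] of shuffle(docs.flatMap(map))) {
  console.log(word, reduce(counts)); // e.g. "the 2", "web 2", "is 1", ...
}
```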

In parallel, Meta operationalized the social layer of the web, turning identity and interaction into structured, persistent systems. Rather than simply connecting users, it encoded relationships into a continuously evolving graph that could be analyzed, ranked, and optimized. Facebook, Instagram, and WhatsApp did not merely host user-generated content; they mapped real-world relationships into digital graphs and optimized their flow through algorithmic ranking. This created a continuous feedback loop between user behavior and content visibility, where attention became the primary currency. Over time, this system learned to predict and shape engagement, influencing not just what users saw but how they interacted with information itself. Meta’s infrastructure scaled social interaction globally, making communication instantaneous and ubiquitous, while its advertising systems converted engagement into economic value, reinforcing the web’s transition into a behaviorally driven ecosystem. This introduced a new form of soft control, where influence was exerted through ranking systems rather than explicit rules.
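A hedged sketch of what engagement-driven ranking looks like in miniature; the features and weights below are invented, whereas production systems learn them continuously from behavioral data:

```typescript
// Illustrative sketch of engagement-driven feed ranking. The features and
// weights are invented; real systems learn them from behavioral data.
interface Post {
  id: string;
  affinity: number;        // strength of viewer-author relationship (0..1)
  predictedClicks: number; // a model's engagement estimate
  ageHours: number;
}

function score(p: Post): number {
  // Attention as currency: relationship x predicted engagement x recency decay.
  return p.affinity * p.predictedClicks * Math.exp(-p.ageHours / 24);
}

function rankFeed(posts: Post[]): Post[] {
  // Visibility is determined by score, not by rule: "soft control" via ranking.
  return [...posts].sort((a, b) => score(b) - score(a));
}
```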

Together, these systems transformed the internet into something closer to a global utility: always-on, geographically abstracted, and deeply embedded in daily life. The web was no longer a destination; it became an ambient layer that underpinned communication, commerce, and coordination. Performance improvements in caching, content delivery networks, and edge routing reduced latency to the point where physical distance became largely invisible to the user experience. Amazon anchored this utility in the physical world by operationalizing e-commerce at scale, turning the web into a transactional backbone for global supply chains. Through its logistics network and the creation of AWS, Amazon extended Web 2.0 beyond interaction into execution, where digital intent could be directly translated into real-world delivery and computational infrastructure. Yet this utility came with centralized control. Platforms became intermediaries of identity, discovery, and monetization, governing what is seen, shared, and sustained. The actor in this phase became a hybrid of user and platform participant, operating within environments that enabled creation while constraining reach and influence, with open-source and decentralized efforts continuing to operate as partial counterweights to consolidation.

Web v3.0

Web 3.0 represents a divergence rather than a singular evolution, rooted in two distinct intellectual lineages that have competed with each other. The first traces to Tim Berners-Lee’s vision of the Semantic Web, articulated in his 2001 Scientific American article, which proposed a web of machine-readable meaning built on structured metadata, ontologies, and standards such as RDF and OWL. In this model, the web evolves from a collection of documents into an interoperable system of ontologically structured knowledge, where machines can interpret, reason over, and coordinate information across domains. The second lineage emerges from cryptographic and distributed systems research, formalized in Satoshi Nakamoto’s 2008 Bitcoin whitepaper and expanded through Ethereum in 2013. This branch reframes the web around ledgers, persistence, ownership, coordination, and trust minimization, introducing consensus mechanisms, tokens, wallets, and programmable contracts as foundational primitives. Where the Semantic Web pursued ontological, graph-based modalities, the crypto paradigm pursued structured value and authority without reliance on centralized intermediaries. Both lineages aim to reduce dependence on platforms, but they diverge fundamentally in orientation: one organizes meaning, the other enforces trust.
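The Semantic Web’s core primitive is the subject-predicate-object triple. The following TypeScript rendering is only illustrative; real deployments use RDF serializations such as Turtle or JSON-LD and OWL reasoners rather than ad-hoc structures like this:

```typescript
// The Semantic Web's core primitive: subject-predicate-object triples.
// Real systems use RDF serializations (Turtle, JSON-LD) and OWL reasoners;
// this ad-hoc structure only illustrates machine-readable meaning.
type Triple = { subject: string; predicate: string; object: string };

const graph: Triple[] = [
  { subject: "ex:TimBL", predicate: "rdf:type", object: "foaf:Person" },
  { subject: "ex:TimBL", predicate: "ex:proposed", object: "ex:WorldWideWeb" },
  { subject: "ex:WorldWideWeb", predicate: "rdf:type", object: "ex:InformationSystem" },
];

// A machine can now reason over structure rather than parse prose:
const types = graph.filter((t) => t.predicate === "rdf:type");
console.log(types.map((t) => `${t.subject} is a ${t.object}`));
```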

Deep learning, as advanced by researchers such as Yoshua Bengio, Yann LeCun, and Geoffrey Hinton throughout the 2000s and 2010s, marked a decisive shift toward statistically grounded systems that learn representations directly from data. Building on earlier connectionist work, this paradigm prioritized scale, optimization, and probabilistic inference over explicit symbolic structure. The result was a class of models capable of approximating language, vision, and reasoning through distributional patterns rather than ontological commitments. At present, this statistical approach has become the dominant interface layer of the internet, shaping how users interact with information and how systems interpret intent. However, this dominance should not be mistaken for resolution. Ontology-driven systems and cryptographic architectures continue to evolve in parallel, each addressing dimensions of structure and trust that statistical models do not natively resolve. The field remains open, even if current momentum suggests otherwise.

A critical inflection point in public awareness of concentrated power came with Edward Snowden’s 2013 disclosures, which revealed the extent of state-level surveillance and the digital policing capabilities of major technology platforms. These revelations made the reach of digital systems visible to technical and policy communities, but their implications remained unevenly distributed in public consciousness. Over the following years, particularly around 2016–2018, awareness diffused more broadly through debates on data privacy, platform manipulation, and election interference. These dynamics became materially legible, and the subject of heated geopolitical debate, in 2020 with the COVID-19 pandemic. The widespread deployment of digital coordination mechanisms (ranging from mobility restrictions to health status systems) demonstrated, at scale, how existing infrastructure could constrain human freedom. In this sense, Snowden exposed informational risks and the expanding underbelly of U.S. intelligence apparatuses, while the pandemic revealed the operational potential, not yet actualized, for constraining human agency at scale.

Web v4.0

Web 4.0 marks the transition from interaction to execution, grounded in the evolution of intelligent systems. Three inflection points anchor this shift. First, Pattie Maes’ 1994 work established the core idea of software agents acting on behalf of users to reduce cognitive load. Second, Jacob Devlin’s BERT (2018) operationalized deep contextual language understanding, a trajectory later accelerated by the public release of large language models from OpenAI, followed by Google and Anthropic (2021–2023), building on foundations established during Web 2.0 and Web 3.0. These systems enabled machines to interpret intent with far greater fidelity than rule-based approaches. Third, more recent efforts by Toran Bruce Richards and others extend these capabilities into networked, decentralized agent ecosystems, where agents can discover, coordinate, and transact across shared protocols. The underlying trajectory of Web 4.0 provides a pathway for AI, IoT, and robotic interfaces, whose integration will become the hallmark of its completion.

As these systems mature, the stakes of the emerging “standards wars” increase, reflecting a growing competition over shared protocols and coordination frameworks; examples include the Model Context Protocol (MCP) and new JSON-RPC schemas. These standards aim to enable agents to move from recommendation to execution while preserving definitional boundaries and technical accountability. This dynamic repositions humans as definers of intent and overseers of outcomes rather than direct operators; it also sets the stage for debate over the role of robotics and labor management across many domains and workforce segments. The progression marks a broader shift in how humans engage with AI: from systems that assist cognition to systems that extend agency, translating abstract goals into concrete actions across distributed environments. As these capabilities and frameworks mature, the boundary between instruction and execution narrows, focusing attention on crucial questions of control, verification, and alignment.
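For a sense of what these standards look like on the wire, here is a sketch of the JSON-RPC 2.0 message shape that protocols like MCP build on. The tool name and arguments are hypothetical, not drawn from any published schema:

```typescript
// Sketch of the JSON-RPC 2.0 message shape underlying protocols like MCP.
// The tool name and arguments are illustrative, not from a published schema.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

const toolCall: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call", // MCP exposes tool invocation via methods like this
  params: {
    name: "book_flight", // hypothetical tool
    arguments: { from: "SFO", to: "JFK", date: "2026-03-01" },
  },
};

// The response carries either a result or a structured error -- the narrow
// boundary where "recommendation" becomes "execution".
type JsonRpcResponse =
  | { jsonrpc: "2.0"; id: number; result: unknown }
  | { jsonrpc: "2.0"; id: number; error: { code: number; message: string } };
```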

As these systems become more autonomous, AI agents will not simply consume data but will be required to generate, validate, and exchange ontological grounding within their respective domains, aligning with Palantir’s emphasis on operational ontologies as the backbone of decision-making systems. Within Palantir’s platforms, Foundry and Gotham, ontology is not an abstract layer but a structured representation of real-world entities, relationships, and actions, tightly coupled to workflows, permissions, and institutional context. As this paradigm extends into agentic systems, these ontological frameworks must anchor to political, legal, and cultural topologies to remain coherent, ensuring that AI actions are legible and accountable within the user’s environment. In this model, agents exchange not only data but the structures that give data meaning and the identity primitives that represent actors, forming a semantic market layer where definitions and causal relationships are continuously validated against real-world outcomes. Truth becomes operational, traceable through provenance, lineage, and system-level consensus, requiring platforms capable of reconciling competing ontologies while preserving coherence across domains.
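A hedged sketch of what an “operational ontology” record might look like: an entity bound to typed relationships, permissions, and provenance. The field names are invented and do not reflect Palantir’s actual Foundry or Gotham schemas:

```typescript
// Hedged sketch of an "operational ontology" record: an entity bound to
// relationships, permissions, and provenance. Field names are invented and
// do not reflect Palantir's actual Foundry/Gotham schemas.
interface OntologyEntity {
  id: string;
  type: string;                                  // e.g. "Shipment", "Facility"
  properties: Record<string, unknown>;
  links: { relation: string; target: string }[]; // typed relationships
  provenance: {
    source: string;          // where the record originated
    lineage: string[];       // transformations applied en route
    lastValidated: string;   // when it was last reconciled against outcomes
  };
  permissions: { readRoles: string[]; writeRoles: string[] };
}

const shipment: OntologyEntity = {
  id: "shipment-042",
  type: "Shipment",
  properties: { status: "in-transit", weightKg: 1200 },
  links: [{ relation: "destinedFor", target: "facility-007" }],
  provenance: {
    source: "erp-export",
    lineage: ["erp-export", "dedupe-v2", "geocode-v1"],
    lastValidated: "2025-11-30T12:00:00Z",
  },
  permissions: { readRoles: ["logistics"], writeRoles: ["ops-admin"] },
};
```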

Web v5.0

There is a second layer forming beneath the geopolitical AI landscape, one that reframes the entire discussion. If the current moment is about how nations govern AI, the next phase is about how individuals exist within it. Web 5.0 marks a re-centering of the internet around identity and ownership, not as abstract ideals but as enforceable system primitives. This shift is already visible in the emergence of decentralized identifiers and verifiable credentials through the World Wide Web Consortium, which establish a portable, user-controlled identity layer across domains without reliance on centralized intermediaries. Parallel industry efforts exploring self-sovereign identity architectures extend this into implementation, where identity is no longer issued by platforms but composed and managed by individuals. This creates an inflection point where data transitions from passive exhaust into a first-class economic asset, with provenance, consent, and usage constraints encoded alongside it, aligning with broader efforts such as C2PA to standardize content authenticity and traceability.
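The W3C data models make this concrete. Below are minimal shapes of a DID document and a verifiable credential, following the published specifications in outline; the identifiers and key material are fake placeholders:

```typescript
// Minimal shapes from the W3C DID / Verifiable Credentials data models.
// Identifiers and key material below are fake placeholders.
const didDocument = {
  "@context": "https://www.w3.org/ns/did/v1",
  id: "did:example:alice123",
  verificationMethod: [
    {
      id: "did:example:alice123#key-1",
      type: "Ed25519VerificationKey2020",
      controller: "did:example:alice123",
      publicKeyMultibase: "z6Mk...placeholder",
    },
  ],
  authentication: ["did:example:alice123#key-1"],
};

const credential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential"],
  issuer: "did:example:university",
  issuanceDate: "2025-06-01T00:00:00Z",
  credentialSubject: {
    id: "did:example:alice123", // the holder, not a platform account
    degree: "BSc Computer Science",
  },
  // proof: { ... } // a cryptographic signature binds the issuer to the claims
};
```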

This reconfiguration transforms both the structure and economics of data. In earlier phases of the web, data was extracted and aggregated into platform-controlled systems, often without explicit ownership or participation from its originators. In a Web 5.0 system, data becomes packageable, permissionable, and transactable, enabling new market structures such as data underwriting layers where datasets are evaluated based on lineage, reliability, and risk. Identity evolves into both an access layer and an economic interface through which individuals negotiate participation in digital systems, increasingly mediated by AI agents acting on their behalf. The actor is no longer a passive user but a sovereign individual augmented by machine intelligence, operating within systems that recognize identity as a programmable surface for coordination, attribution, and value exchange.
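As a sketch of data becoming a first-class asset, consider a dataset wrapped with provenance, consent, and machine-readable usage constraints, plus a toy underwriting function. All field names and the pricing logic are invented for illustration:

```typescript
// Hedged sketch of "data as a first-class asset": a dataset packaged with
// provenance, consent, and machine-readable usage constraints. All field
// names and pricing logic are invented for illustration.
interface DataPackage {
  datasetId: string;
  owner: string; // a DID, not a platform account
  provenance: { collectedBy: string; method: string; collectedAt: string };
  consent: {
    grantedTo: string[]; // DIDs permitted to use the data
    purposes: ("research" | "training" | "advertising")[];
    expires: string;
  };
  riskProfile: { lineageVerified: boolean; reliabilityScore: number }; // underwriting inputs
}

// A data underwriting layer might price access from lineage and reliability:
function accessPremium(pkg: DataPackage): number {
  const base = 100;
  const riskMultiplier = pkg.riskProfile.lineageVerified ? 1 : 2.5;
  return base * riskMultiplier * (2 - pkg.riskProfile.reliabilityScore);
}
```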

In parallel, the physical architecture of the internet is beginning to invert. Vint Cerf’s ongoing work on Delay/Disruption Tolerant Networking and the Interplanetary Internet anticipated a shift toward distributed, asynchronous communication layers that extend beyond terrestrial constraints. This vision is now being partially realized through large-scale satellite constellations such as Starlink and StarCloud, which serve as a foundational layer for global connectivity and future interplanetary data systems. The implication is an orbital routing fabric operating as a globally coordinated network layer, while ground-based infrastructure evolves into localized edge environments optimized for execution and context. When combined, these layers suggest a future internet composed of sovereign identity, programmable data, agent-driven ontology, and orbital-scale routing, where power accrues not to the fastest system, but to the one that can maintain a coherent and durable whole.
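A minimal sketch of the store-and-forward idea at the heart of Delay-Tolerant Networking: bundles are held in custody until a contact window (for example, a satellite pass) opens, rather than assuming a continuous end-to-end path. The structures below are invented; the real architecture is specified in RFC 4838 and the Bundle Protocol:

```typescript
// Hedged sketch of the Delay-Tolerant Networking idea: no end-to-end path is
// assumed; "bundles" are stored and forwarded when a contact window opens.
// Names and structures are invented (see RFC 4838 for the real architecture).
interface Bundle { id: string; destination: string; payload: Uint8Array; expiresAt: number }

class DtnNode {
  private custody: Bundle[] = []; // bundles held until they can be forwarded

  accept(bundle: Bundle): void {
    this.custody.push(bundle); // store, even with no route currently available
  }

  // Called when a link (e.g. a satellite pass) becomes available.
  onContact(reachable: Set<string>, send: (b: Bundle) => void): void {
    const now = Date.now();
    this.custody = this.custody.filter((b) => {
      if (b.expiresAt < now) return false;                 // drop expired bundles
      if (reachable.has(b.destination)) { send(b); return false; } // forward now
      return true;                                          // keep waiting
    });
  }
}
```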

Web v6.0

Web 6.0 aligns with earlier visions of autonomous systems, but focuses on expanding AI agency and information provenance. Vint Cerf and Robert Kahn’s work on digital libraries and later conceptualizations of “knowbots,” or knowledge robots, anticipated a future in which software entities could traverse networks, retrieve information, negotiate with other systems, and execute tasks independently. While the term “knowbot” predates the modern web, its underlying concept has become increasingly relevant in the context of multi-agent AI systems. Today’s agentic AI architectures extend these ideas by enabling AI models to interact with one another, form cooperative or competitive relationships, and operate across both digital and physical domains. These systems are characterized by persistence, total autonomy, and the ability to coordinate workflows without continuous human intervention. The primary actor becomes the agent itself, and the web evolves into an ecosystem of autonomous entities interacting within their assigned functions. The protocols established in Web 1.0 and 2.0, the competing trust models introduced in Web 3.0, and the execution frameworks and new standards developed in Web 4.0 and 5.0 collectively determine how agents operate and what they’re permitted to do.

At the completion of Web 6.0, two major shifts occur. The first is demographically driven: population decline makes widespread GPU/TPU access more economically viable at global scale. The second is that trust and safety become distributed properties of the system rather than functions of a single platform or authority; this is a fundamental outcome of the expanded threat surface, the scale, and the autonomous nature of agentic systems, where no single control point can reliably govern behavior across interacting entities. However, this distribution is not guaranteed and instead resolves along two competing trajectories. In one path, data sovereignty is enforced through tightly controlled identity systems, centralized policy layers, and jurisdiction-bound infrastructure (a consequence of earlier failures within Web 5.0), resulting in a model where trust remains concentrated and behavior is constrained. In the alternative path, aligned with the original open spirit of the internet, trust is distributed across protocols, with verification, identity, and provenance made portable and interoperable across systems. These two outcomes imply fundamentally different implementations of trust and safety.

  • In the distributed model, verification must be portable across agents, domains, and jurisdictions, with identity, provenance, and policy enforced at the substrate/protocol layer and continuously evaluated through cryptographic attestations (see the sketch after this list).

  • In the sovereignty-dominant model, safety is enforced through access control, identity gating, and centralized oversight, reducing flexibility but increasing direct enforceability (overt moderation).
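As referenced above, a hedged sketch of portable verification in the distributed model: any party holding an issuer’s public key can check an attestation with no central platform in the loop. It uses the standard Web Crypto API (ECDSA P-256); the attestation format and claim contents are invented:

```typescript
// Hedged sketch of "portable verification": any party holding the issuer's
// public key can check an attestation, with no central platform in the loop.
// Uses the standard Web Crypto API; the attestation format is invented.
const enc = new TextEncoder();

async function verifyAttestation(
  attestation: { claim: string; signature: ArrayBuffer },
  issuerPublicKey: CryptoKey,
): Promise<boolean> {
  return crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    issuerPublicKey,
    attestation.signature,
    enc.encode(attestation.claim),
  );
}

// Demo: an issuer signs a claim; an unrelated verifier checks it portably.
const { publicKey, privateKey } = await crypto.subtle.generateKey(
  { name: "ECDSA", namedCurve: "P-256" }, true, ["sign", "verify"],
);
const claim = JSON.stringify({ subject: "did:example:agent-9", policy: "may-transact" });
const signature = await crypto.subtle.sign(
  { name: "ECDSA", hash: "SHA-256" }, privateKey, enc.encode(claim),
);
console.log(await verifyAttestation({ claim, signature }, publicKey));
// -> true: trust enforced at the protocol layer, not by a platform
```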

In this sense, Web 6.0 does not redefine the web so much as reveal the consequences of upstream error before the “big leap”.

Web v7.0

Web 7.0 can be understood as a true discontinuity, a shift that extends beyond architecture into the physical substrate of computation itself. It implies a reconfiguration of energy systems, compute models, and trust primitives, shaped by advances in orbital infrastructure alongside quantum and post-quantum technologies. The intellectual foundations of this transition trace back to late 20th-century work by Paul Benioff, Richard Feynman, and David Deutsch, who formalized quantum computation as a response to the limits of classical simulation. Breakthroughs such as Shor’s algorithm in 1994 demonstrated that quantum systems could outperform classical machines on specific classes of problems, particularly in cryptography, establishing long-term implications for digital security. Today, progress is accelerating across quantum hardware and system architecture, with competing approaches exploring different trade-offs in scalability, stability, and operational complexity. Parallel advances in optical interconnects, cryogenic control, and qubit scaling signal a movement toward practical, networked quantum systems rather than isolated experimental devices.
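To ground the cryptographic stakes, here is the standard textbook reduction behind Shor’s algorithm (general material, not specific to this essay): factoring N reduces to finding the period r of a^x mod N, which a quantum computer can do efficiently while classical machines, as far as we know, cannot.

```latex
% Shor's reduction of factoring to period finding (standard textbook form).
% To factor N, choose a coprime to N and find the period r of f(x) = a^x mod N;
% the quantum Fourier transform finds r efficiently.
\[
  a^{r} \equiv 1 \pmod{N}, \qquad r \text{ even} \;\Rightarrow\;
  \gcd\!\left(a^{r/2} \pm 1,\; N\right) \text{ yields nontrivial factors of } N.
\]
% Canonical example with N = 15, a = 7: the powers of 7 mod 15 are
% 7, 4, 13, 1, so the period is r = 4, and
\[
  \gcd(7^{2} - 1,\, 15) = \gcd(48, 15) = 3, \qquad
  \gcd(7^{2} + 1,\, 15) = \gcd(50, 15) = 5.
\]
```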

What distinguishes this transition is that it introduces a shift at the level of physics itself rather than simply extending prior architectural layers. Classical internet infrastructure remains bound to Maxwellian constraints, encoding information in electromagnetic states and processing it through silicon-based systems governed by deterministic or probabilistic logic. Quantum computing operates beyond the Maxwellian model by leveraging superposition and entanglement to encode and manipulate information in fundamentally different ways. This shift does not merely increase performance; it alters the assumptions underlying encryption, communication, and verification. Aspects of this transition are likely to be shaped within national laboratories and private research labs, then expand through industrial ecosystems before broader dissemination. As a result, Web 7.0 represents the unlocking, testing, and gradual operationalizing of new paradigms via breakthrough physics.

Conclusion

These epochs of the World Wide Web are unfolding not in a linear fashion but in parallel, operating like separate clusters with some shared nodes, while simultaneously revealing how little of that transformation is visible to the average user. What is experienced as a single, unified internet is, in reality, a layered and voluntarily evolving system… whose underlying architectures, protocols, and trust assumptions are largely abstracted away from public view. Yet this same population is not merely observing the system; many are actively participating in its construction. Every interaction, dataset, model output, and governance choice contributes to the shaping of the network, producing a form of continuous civic participation that operates without explicit recognition.

In this sense, the modern internet can be understood as an ongoing, large-scale experiment in coordination in which billions of participants, engineers, architects, and end-users, all co-author the system’s behavior while remaining only partially aware of its totality. On the current trajectory, this convergence of agentic capability, protocol-defined trust, and distributed coordination suggests the emergence of fully autonomous, system-level operations within the next decade, plausibly by the early 2030s. Such a transition would not represent a discrete technological breakthrough, but rather the cumulative activation of capabilities already in motion, reaching a threshold where systems can operate with minimal human intervention across critical domains.

While nobody claims to know exactly how Web 7.0 will function, there are many components that could be involved. Photonic quantum computing, in particular, has emerged as a promising path due to its ability to operate at room temperature and leverage mature optical technologies, while advances in quantum error correction and fault-tolerant memory are addressing longstanding challenges related to decoherence and noise. Recent experimental milestones, including demonstrations of quantum advantage and photonic quantum supremacy, alongside emerging efforts to build quantum networking infrastructure and interconnect heterogeneous systems, suggest that the field is transitioning from theoretical exploration to early-stage system integration.

References (Selected Foundational Works)

Maxwell, J. C. (1865). A dynamical theory of the electromagnetic field. Philosophical Transactions of the Royal Society of London, 155, 459–512.

Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42(2), 230–265.

Von Neumann, J. (1945). First draft of a report on the EDVAC. University of Pennsylvania.

Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27(3), 379–423, 623–656.

Licklider, J. C. R. (1960). Man-computer symbiosis. IRE Transactions on Human Factors in Electronics, HFE-1(1), 4–11.

Cerf, V., & Kahn, R. (1974). A Protocol for Packet Network Intercommunication. IEEE Transactions on Communications, 22(5), 637–648.

Benioff, P. (1980). The computer as a physical system: A microscopic quantum mechanical Hamiltonian model of computers as represented by Turing machines. Journal of Statistical Physics, 22(5), 563–591.

Feynman, R. (1982). Simulating physics with computers. International Journal of Theoretical Physics.

Saltzer, J. H., Reed, D. P., & Clark, D. D. (1984). End-to-End Arguments in System Design. ACM Transactions on Computer Systems, 2(4), 277–288.

Stallman, R. (1985). The GNU Manifesto. Free Software Foundation.

Deutsch, D. (1985). Quantum theory, the Church–Turing principle and the universal quantum computer. Proceedings of the Royal Society of London A, 400(1818), 97–117.

Clark, D. D. (1988). The Design Philosophy of the DARPA Internet Protocols. ACM SIGCOMM Computer Communication Review, 18(4), 106–114.

Cerf, V., & Kahn, R. (1988). Digital Library Project. Corporation for National Research Initiatives.

Berners-Lee, T. (1989). Information Management: A Proposal. CERN.

Torvalds, L. (1991). Linux Kernel Announcement. comp.os.minix newsgroup.

Maes, P. (1994). Agents That Reduce Work and Information Overload. Communications of the ACM.

Shor, P. (1994). Algorithms for quantum computation: Discrete logarithms and factoring. Proceedings of the 35th Annual Symposium on Foundations of Computer Science, 124–134. IEEE.

Raymond, E. S. (1997, expanded 1999). The Cathedral and the Bazaar. O'Reilly Media.

Perens, B. (1998). The Open Source Definition. Open Source Initiative.

Berners-Lee, T., Hendler, J., & Lassila, O. (2001). The Semantic Web. Scientific American.

Weber, S. (2004). The Success of Open Source. Harvard University Press.

O'Reilly, T. (2005). What Is Web 2.0. O'Reilly Media.

Benkler, Y. (2006). The Wealth of Networks. Yale University Press.

Cerf, V., Burleigh, S., Hooke, A., Torgerson, L., Durst, R., Scott, K., Fall, K., & Weiss, H. (2007). Delay-Tolerant Networking Architecture. RFC 4838. IETF.

Kelty, C. M. (2008). Two Bits: The Cultural Significance of Free Software. Duke University Press.

Nakamoto, S. (2008). Bitcoin: A Peer-to-Peer Electronic Cash System.

Leiner, B. M., Cerf, V. G., Clark, D. D., et al. (2009). A brief history of the internet. ACM SIGCOMM Computer Communication Review, 39(5), 22–31.

Buterin, V. (2013). Ethereum: A Next-Generation Smart Contract and Decentralized Application Platform.

Shi, W., Cao, J., Zhang, Q., Li, Y., & Xu, L. (2016). Edge Computing: Vision and Challenges. IEEE Internet of Things Journal.

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.

OECD. (2019). Enhancing Access to and Sharing of Data: Reconciling Risks and Benefits for Data Re-Use. OECD Publishing.

European Commission. (2020). A European Strategy for Data. European Commission.

Reed, D., Sporny, M., Longley, D., Allen, C., Grant, R., & Sabadello, M. (2020). Decentralized Identifiers (DIDs) v1.0 Core Architecture.

Sporny, M., Longley, D., Sabadello, M., Reed, D., Steele, O., & Allen, C. (2022). Decentralized Identifiers (DIDs) v1.0. W3C Recommendation.

Dorsey, J. (2022). The Next Generation of Decentralized Web [Conference Presentation]. Bitcoin 2022, Miami.

C2PA. (2023). C2PA Technical Specification: Provenance and Authenticity for Digital Content. Coalition for Content Provenance and Authenticity.
