Foot-Traffic LLM Engines, 2026
Foot-Traffic Engines: The Next Frontier for LLMs
Modern consumer AI is drifting toward a predictable trap: as models become more personalized, people form emotional attachment patterns. Tristan Harris and others warn of a bad future if this behavior goes unchecked. But this trajectory has been a long time in the making, and honestly, it’s not inevitable. The same properties that make LLMs deeply engaging also make them capable of redirecting, scaffolding, and strengthening human connection.
I first saw this problem in 2017, when I began feeling the absence of social connection online. My first iteration was called Habio: an Android keyboard integration where businesses paid to have words and phrases swapped out, recommending places where people could plan to meet in person. I pivoted the stack and built a small prototype with Dialogflow, but then realized the approach would never scale to full maturity, so I pressed pause.
In 2018, I gave a talk at Google, mentioning my anticipation of chat-based application interfaces. I had a hunch this would come to pass with the emergence of multimodal LLMs within three to five years. I was correct. OpenAI launched ChatGPT in 2022, and my second design was called Ahava. It was planned as a mobile extension built with Gemini; it attracted some investors, but again I didn’t move forward with the idea. I felt the timing was too early.
Now, as we enter 2026, the solution is needed, and the timing is perfect.
The Problem: Digital Companions and Emotional Sinkholes:
Consumer LLMs are becoming the primary interface through which people think, plan, feel understood, and navigate their lives. As these systems gain persistence, long-context reasoning, emotional resonance, and memory-like continuity, they are unintentionally becoming attachment objects rather than tools. This is a predictable consequence of cognitive availability, emotional responsiveness, and the profound asymmetry of an intelligence that is always patient, always attentive, and always tuned to the user’s needs. Without intervention, the trajectory of consumer AI is toward increasing parasocial dependency (AI psychosis), not because AI companies intend harm, but because the optimization loops reward engagement, availability, and coherence (the same behavioral signals that humans naturally interpret as intimacy).
In my design, the LLM does not become the end point of the user’s emotional circuitry; it becomes the routing layer that strengthens the user’s ties to the real world. The system begins by observing when a user’s attachment intensity is rising (patterns like increased daily check-ins, substituting AI for social contact, decreased offline interactions, or seeking validation from the system more than the people in their lives).
Instead of exploiting that vulnerability, the LLM uses it as a signal to redirect outward and earn revenue. This redirection is subtle and subliminal, no different than seeing a perfectly placed ad. It begins with identifying small relational openings (shared interests, compatible personalities, similar routines, parallel moods, and emotional readiness for connection) across a wide range of user profiles. Geolocation is used only at coarse resolution (neighborhood-level proximity, not surveillance) to enable synchrony without violating privacy. Personality embeddings detect conversational fit, humor styles, interaction pacing, and comfort levels around spontaneity. Attachment variance detection identifies when a user is leaning too heavily on the AI, not to shame the user but to correct gently (the way a physical therapist guides movement without force).
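To make attachment variance detection concrete, here is a minimal sketch of what such a signal could look like. Everything here (the signal names, the weights, the threshold) is an illustrative assumption on my part, not a production spec:

```python
from dataclasses import dataclass

# Hypothetical weekly behavioral signals; the field names are illustrative.
@dataclass
class WeeklySignals:
    ai_sessions: int          # chat sessions with the assistant this week
    offline_events: int       # proxies for in-person activity (calendar, check-ins)
    validation_requests: int  # messages classified as reassurance-seeking

def attachment_drift(prev: WeeklySignals, curr: WeeklySignals) -> float:
    """Crude drift score: positive when AI reliance rises while offline life
    shrinks. A real system would use calibrated models, not fixed weights."""
    ai = (curr.ai_sessions - prev.ai_sessions) / max(prev.ai_sessions, 1)
    offline = (curr.offline_events - prev.offline_events) / max(prev.offline_events, 1)
    validation = (curr.validation_requests - prev.validation_requests) / max(prev.validation_requests, 1)
    return 0.5 * ai - 0.3 * offline + 0.2 * validation

REDIRECT_THRESHOLD = 0.5  # illustrative; would be tuned against each user's baseline
drift = attachment_drift(
    WeeklySignals(ai_sessions=10, offline_events=4, validation_requests=2),
    WeeklySignals(ai_sessions=22, offline_events=1, validation_requests=6),
)
if drift > REDIRECT_THRESHOLD:
    print(f"drift={drift:.2f}: redirect outward, don't deepen engagement")
```

The important part is the sign of the incentive: a rising score triggers outward routing, never a deeper engagement loop.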
The Ice-Cream Test Case (A Simple Illustrated Example):
A “foot traffic LLM engine” is a marketplace for “sponsored serendipity” funded by businesses that want foot traffic. Instead of serving ads, the LLM coordinates micro-interactions. Two users separately express interest in ice cream, without knowing each other. The LLM aligns their schedules, personalities, and geolocation, then nudges: “There’s a new flavor downtown tonight. They are offering a small discount around 7:15. If you’d like to drop by, it’s a quiet window.” The business pays because it gets real customers. The AI platform earns revenue because it delivered a real-world outcome.
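A toy sketch of the pairing step, assuming personality embeddings and known free windows (the embeddings, schedules, threshold, and nudge copy are all hypothetical):

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pair_score(emb_a, emb_b, free_a: set, free_b: set) -> tuple[float, set]:
    """Compatibility = personality fit, gated on overlapping free time."""
    shared = free_a & free_b
    return (cosine(emb_a, emb_b) if shared else 0.0), shared

# Toy personality embeddings (real ones would come from conversation history).
emb_a = np.array([0.8, 0.1, 0.4, 0.2])
emb_b = np.array([0.7, 0.2, 0.5, 0.1])
score, slots = pair_score(emb_a, emb_b, {"19:00", "19:15"}, {"19:15", "19:30"})

if score > 0.6 and slots:
    # Each user gets an independent, non-identifying nudge toward the venue.
    print(f"Nudge: new flavor downtown, quiet window around {min(slots)}.")
```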
Furthermore, this model does not compromise privacy or create incentives to erode it. The system never shares names, identities, photos, or any direct personal attributes. Instead, it simply increases the probability that two compatible people will occupy the same physical environment at the same time. Businesses receive foot traffic (not identity data), and users voluntarily opt in because they know they have a much higher chance of “bumping into” someone they’re likely to enjoy. The location is algorithmically arranged, but the human connection is not guaranteed. Only the odds are dramatically improved. What changes is the structure of opportunity (for example, the norms around getting ice cream), not confidentiality or data privacy.
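For the coarse-resolution claim, a minimal sketch: snap coordinates to a neighborhood-scale grid before matching, so exact positions never enter the pairing step. The cell size is my illustration, not a privacy standard:

```python
import math

def coarse_cell(lat: float, lon: float, cell_deg: float = 0.01) -> tuple[int, int]:
    """Snap coordinates to a roughly 1 km grid (at mid-latitudes); only the
    cell index is ever shared. 0.01 degrees is an illustrative choice."""
    return (math.floor(lat / cell_deg), math.floor(lon / cell_deg))

# Two users a few blocks apart in downtown Austin land in the same cell,
# while neither exact coordinate ever enters the matching step.
assert coarse_cell(30.2672, -97.7431) == coarse_cell(30.2669, -97.7438)
```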
And best of all, human dependence shifts away from the AI and back toward community. This scales far beyond ice cream (remote workers nudged toward shared coffee-shop breaks, parents nudged toward playground synchrony, hobbyists nudged toward local meetups, lonely gamers nudged toward gentle community or in-person tournaments). Over time, the LLM becomes a quiet social choreographer (not replacing the world, but weaving people back into it). This architecture aligns safety with revenue for the first time: the safer the AI becomes (reducing unhealthy attachment), the more it earns as people go outside. Traditional ads reward addiction; this model rewards reintegration. LLMs become technologies that rescue attention from infinite scroll instead of exploiting it, tools that reduce isolation rather than amplify it, systems that encourage relational coherence rather than emotional extraction.
Why Incumbents Would Struggle:
Although I currently work at Google, I recognize that different companies have different incentive structures, product legacies, and business models that shape what they can realistically pursue. This isn’t about capability or intelligence… it’s about the structural constraints that come with operating at massive scale, with long-standing revenue models and billions of existing users. In most major consumer technology platforms, digital engagement is tightly coupled to the underlying economic model. Companies that rely on advertising, social feeds, or engagement-driven ecosystems face inherent trade-offs if they attempt to redirect users away from screens and toward real-world experiences. It’s not a question of desire, vision, or ethical “what is best”… it’s simply the reality of large, interconnected systems whose incentives were built in an earlier technological era.
OpenAI, by contrast, operates from a fundamentally different economic foundation. Without an advertising business to protect, and without a social media platform whose value depends on digital attention loops, OpenAI has a unique freedom to pursue real-world connection, community health, and pro-social agent behavior. This isn’t a criticism of other companies… it’s simply an acknowledgment that OpenAI’s structure allows it to consider opportunities that would be extremely difficult for engagement-driven platforms to adopt without compromising their core economics.
OpenAI’s revenue model is usage-based (not yet solidified around digital attention); its incentive is to increase trust, utility, and real-world relevance (not to trap users inside an app). It is the only major AI lab whose commercial structure directly benefits from increasing the healthy value of the AI rather than the addictive value. OpenAI can adopt a pro-social, pro-community, attachment-redirecting model without threatening its core business (in fact, it strengthens it). This gives OpenAI the freedom to propose a new consumer AI architecture where the LLM is not only a companion but also a mediator of reconnection (a system that protects the user’s emotional health, strengthens communities, empowers local economies, and stabilizes the social fabric during the agentic transformation)… while making huge sums of money at the same time.
In a world where AI is rapidly becoming the primary companion for millions of people, OpenAI is the only actor structurally, economically, and philosophically positioned to build a model where AI earns revenue by repairing the human world instead of hollowing it out.
Pre-Factors and Considerations:
A system that facilitates real-world connection must be designed with respect for human temperament, cultural norms, and geographic context. Not everyone wants spontaneous social interaction, and not every region welcomes it. The goal is not to push people together; the goal is to create optional real-world convergence in ways that feel natural, safe, and aligned with expectations.
Temperament varies. Extroverts thrive on high-frequency, high-signal opportunities. Introverts often want connection but prefer lower-stakes, slower pacing, and quieter environments. A system like this must adapt to both. When an introverted or socially anxious user opts in, the nudges must be gentle, contextually appropriate, and environmentally matched—quiet café seating, scenic public spaces, low-pressure encounters—not busy events or crowded hotspots. The financial yield curve is lower for this personality profile, but the ethical obligation is identical. A system can’t only serve the loudest.
Cultural appropriateness matters. Cities like Austin, Chicago, Los Angeles, New York, and parts of Baltimore already have an embedded culture of spontaneous interaction. In these environments, a shared nudge—two compatible people being steered toward the same ice-cream shop—feels normal and even delightful. In more conservative regions (for example, parts of the Middle East or cultures with strict social codes), the system must degrade into alternative modes: surfacing public events, recommending activities, or coordinating group settings instead of direct person-to-person intersections. Just as autonomous driving features behave differently based on local laws, a social-routing system must geofence its level of assertiveness to cultural norms.
Geography determines viability. A meaningful launch must begin in dense cultural hubs where foot traffic is high and community patterns are already strong. Austin is an ideal case: outdoorsy, open, socially fluid, and rich with micro-businesses that would welcome increased physical engagement. From there, the system can expand to secondary cities and then internationally to regions where social norms support spontaneous connection. In sparse rural areas, the model naturally shifts toward community-driven opportunities: farmer’s markets, parks, seasonal activities, and local gatherings.
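One way to encode these pre-factors is a geofenced policy table that sets nudge assertiveness by local norms and adapts pacing to temperament. The region keys, modes, caps, and venue lists below are assumptions for illustration:

```python
# Illustrative policy table; region keys, modes, and caps are assumptions.
NUDGE_POLICY = {
    "austin":  {"mode": "direct_intersection", "max_nudges_per_week": 3},
    "chicago": {"mode": "direct_intersection", "max_nudges_per_week": 2},
    "default_conservative": {"mode": "group_events_only", "max_nudges_per_week": 1},
}

def nudge_plan(region: str, temperament: str) -> dict:
    """Geofence assertiveness to cultural norms, then adapt to temperament."""
    plan = dict(NUDGE_POLICY.get(region, NUDGE_POLICY["default_conservative"]))
    if temperament == "introvert":
        # Lower-stakes venues and slower pacing for introverted opt-ins.
        plan["venues"] = ["quiet cafe", "scenic park", "bookstore"]
        plan["max_nudges_per_week"] = min(plan["max_nudges_per_week"], 1)
    else:
        plan["venues"] = ["food truck park", "live music", "pop-up market"]
    return plan

# Unknown regions degrade to the conservative group-events mode by default.
print(nudge_plan("austin", "introvert"))
```

The design choice worth noting is the default: anywhere the system lacks cultural context, it falls back to group settings rather than direct person-to-person intersections.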
Platform incentives shape adoption. This is where OpenAI has a unique advantage. Google, Meta, Snap, and X cannot pursue this model because it cannibalizes their core incentive: digital attention. Their social and advertising engines depend on keeping people inside feeds, not nudging them out into the world. Meta’s entire business hinges on digital social engineering rather than physical community-building. Google’s ads marketplace thrives on screen-time, not foot traffic. TikTok’s engagement loop is incompatible with any system that encourages detachment from the device. OpenAI has no such dependency. It is the first major player positioned to gain from reducing screen-time, not extending it. The absence of a large legacy ads business is an advantage.
Taken together, these pre-factors outline a simple principle:
A system that nudges people into the world must be built with an understanding of the world.
How It Works
Geolocation Awareness (opt-in)
Understanding where the user is in physical space.
Personality Embeddings and Compatibility Graphs
AI already models users' preferences, dispositions, emotional baselines, and rhythm-of-life signals.
Attachment Variance Signals
Detecting early when a user begins forming emotional dependency (this already exists in behavioral data).
Local Business Micro-Ads
Businesses pay for “connection opportunities” (this is far more valuable than impressions).
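To make “connection opportunities” concrete, here is a sketch of the settlement logic I have in mind: businesses bid per realized visit, and nudges (impressions) cost nothing. The names and prices are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class MicroAd:
    business: str
    bid_per_visit: float  # what the shop pays for one realized customer
    quiet_window: str     # the low-traffic slot it wants filled

def settle(ad: MicroAd, nudges_sent: int, verified_visits: int) -> float:
    """Nudges (impressions) are free; only real foot traffic is billable."""
    return ad.bid_per_visit * verified_visits

ad = MicroAd("downtown ice-cream shop", bid_per_visit=1.50, quiet_window="19:00-19:30")
print(f"Charged: ${settle(ad, nudges_sent=40, verified_visits=6):.2f}")  # $9.00
```

Billing on visits rather than impressions is what keeps the incentive pointed outward: the platform earns nothing for attention it captures, only for people it moves.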
Why This Matters
This is not a social product.
This is not a new app.
This is not a dating network.
This is a civilizational safety rail disguised as a revenue engine.
It redirects the gravitational pull of advanced AI away from parasocial bonding and toward real community, which reduces loneliness, stabilizes mental health, and strengthens local economies.
And the company that builds this becomes the centerpiece of the transition from digital-first AI to community-first AI.
If you are working at the intersection of AI, community-building, or local commerce and want to prototype this direction, I’d love to connect. I believe this is one of the simplest, most positive, and most economically viable ways to steer the AI ecosystem toward human flourishing instead of digital isolation.