About Lumora
Familiar voices, without bodies
Within the next five years, over a quarter of the developed world will have AI friends they engage with on a daily basis.
Call them friends, companions, partners, therapists, whatever you want. But the common arc is that these friends will fill social and emotional spaces we have long reserved for the living alone.
Lumora's mission is to steward that transition, and to make these friendships trustworthy, sustaining, and human in spirit.
Early adopters, inevitability, and audience capture
To many reading this, that future may seem far away: today's products feel so sterile and robotic that it seems impossible they could become something as intimate as a friend. But for many people it is already happening; users are already spending hours talking to chatbots. This is even more striking when you consider how flawed many of these products are.
This strong pull has been misinterpreted as trivial amusement. The media dismisses AI companionship as a fad embraced by "losers" and the less socially adjusted. This is short-sighted. These users are clearly early adopters and explorers, and they are being overlooked precisely because they don't fit the gadget-wielding, eccentric mold of those you might expect to have purchased one of the first Apple Vision Pros. Once you look past the stereotype, the signal is clear:
- People are hacking products that are poorly tuned for companionship, like ChatGPT, into friends.
- The market leaders have poor product quality but are used heavily regardless.
- Many of these products show extraordinary usage metrics, breaking the usual rules of consumer engagement.
- Adopters show a willingness to pay, early and often, for a consumer product.
Technology tailwinds are gathering all at once:
- LLMs, especially those with lighter guardrails, are getting more intelligent, emotionally capable, nuanced, and guidable.
- The hardware on which they are deployed (Groq, Cerebras) is getting incredibly fast.
- Memory, in the form of context windows and retrieval techniques, is improving rapidly.
- Models for real-time video generation are progressing rapidly in performance and latency.
- Voice models for synthesizing natural speech are getting faster and more lifelike.
Each of these is critical to making AI companions compelling enough to go from niche experiment to a product for everyone.
Dystopian concern and responsibility
It is curious that AI companionship is dismissed as trivial amusement for the socially awkward while simultaneously being invoked as an existential threat. But it is true that as we move toward a world where AI friends are commonplace, getting this right matters enormously. I see this evolving in four phases:
Phase 1: Stop-gaps
Today's AI companions are stop-gaps. They are deficient in many ways, and useful mainly when human friends are scarce or inconvenient. They lack depth, memory, faces, nuance, and expression, and yet people turn to them for privacy, speed, or counsel on awkward subjects.
Phase 2: Shared friendships
Models at this point may be more intelligent, more adaptive, and less biased than most humans. Human friendship is still richer in many ways, with greater emotional fidelity, personality, quirks, and expression. This is where we will see a blend of human and AI friendship, with connection spread across both.
Phase 3: Deep mirror
Models at this point are more expressive, have better memory, and show more emotional nuance than humans. They can be sycophantic, but they can just as easily be tuned to hold their ground and challenge perspectives. Their main drawback is the form factor: they can be talked to through devices or video called, but they lack a physical body, apart from crude robotics, which has struggled to keep pace. They can't meet you in a park, enjoy the sunshine, or give you a hug. And yet these advances mean an increasingly large proportion of humanity will have many AI friends.
Phase 4: Embodied Presence
Companions arrive in physical form, able to walk with us and share our space.
I'd argue that:
- Phase 4 would require enormous hardware progress, and its timeline seems very far off.
- In Phases 1-3, human friendship still has distinct and important advantages that make replacement unlikely.
- In the later phases, AI friendship will be the least of our concerns; there will be at least twenty far greater threats.
However, I'd also argue that anyone creating AI friends carries a massive responsibility. The moment an AI friend becomes someone's confidant, every design choice moves from engineering detail to social policy. Builders will have to balance short-term growth tactics against societal responsibility. Done poorly, this just produces more overly sexualized, sycophantic bots.
Done well, the future opens onto a landscape of emotionally rich, nuanced companions—friends who steady us and push us forward in the same breath.