As OpenAI boasts about its o1 model’s increased thoughtfulness, small, self-funded startup Nomi AI is building the same kind of technology. Unlike the broad generalist ChatGPT, which slows down to think through anything from math problems to historical research, Nomi niches down on a specific use case: AI companions. Now, Nomi’s already-sophisticated chatbots take additional time to formulate better responses to users’ messages, remember past interactions, and deliver more nuanced replies.
“For us, it’s like those same principles [as OpenAI], but much more for what our users actually care about, which is on the memory and EQ side of things,” Nomi AI CEO Alex Cardinell said. “Theirs is like, chain of thought, and ours is much more like chain of introspection, or chain of memory.”
These LLMs work by breaking down more complicated requests into smaller questions; for OpenAI’s o1, this could mean turning a complicated math problem into individual steps, allowing the model to work backwards to explain how it arrived at the correct answer. This means the AI is less likely to hallucinate and deliver an inaccurate response.
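Neither company publishes its model internals, but the decomposition idea described above can be sketched in toy form. The function below (entirely hypothetical, not OpenAI's or Nomi's actual method) solves a small arithmetic problem as explicit, recorded steps rather than producing one opaque answer:

```python
# Toy sketch of step-by-step decomposition: each intermediate result is
# recorded, so the final answer can be audited step by step. This is an
# illustration of the general idea only, not either company's pipeline.

def solve_with_steps(a, b, c):
    """Compute a * b + c, keeping a trace of every intermediate step."""
    steps = []
    product = a * b
    steps.append(f"Step 1: multiply {a} by {b} -> {product}")
    total = product + c
    steps.append(f"Step 2: add {c} -> {total}")
    return total, steps

answer, trace = solve_with_steps(7, 8, 5)
for line in trace:
    print(line)
print("Answer:", answer)
```

Because each step is visible, a wrong final answer can be traced back to the specific step that went astray, which is the intuition behind "less likely to hallucinate."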
With Nomi, which built its LLM in-house and trains it for the purposes of providing companionship, the process is a bit different. If someone tells their Nomi that they had a rough day at work, the Nomi might recall that the user doesn’t work well with a certain teammate, and ask if that’s why they’re upset. Then the Nomi can remind the user how they’ve successfully mitigated interpersonal conflicts in the past and offer more practical advice.
“Nomis remember everything, but then a big part of AI is what memories they should actually use,” Cardinell said.
It makes sense that multiple companies are working on technology that gives LLMs more time to process user requests. AI founders, whether they’re running $100 billion companies or not, are looking at similar research as they advance their products.
“Having that kind of explicit introspection step really helps when a Nomi goes to write their response, so they really have the full context of everything,” Cardinell said. “Humans have our working memory too when we’re talking. We’re not considering every single thing we’ve remembered all at once; we have some kind of way of picking and choosing.”
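Nomi's actual retrieval system is proprietary, but the "picking and choosing" Cardinell describes can be sketched as a simple relevance filter. The snippet below is a hypothetical illustration that assumes memories are stored as plain strings and scores relevance by naive word overlap; a production system would use far richer signals:

```python
# Hypothetical sketch of selecting which stored memories to surface
# before replying. Relevance here is naive word overlap with the user's
# message; this is an illustration, not Nomi's real retrieval method.

def select_memories(message, memories, k=2):
    """Return up to k stored memories that share words with the message."""
    msg_words = set(message.lower().split())
    scored = []
    for memory in memories:
        overlap = len(msg_words & set(memory.lower().split()))
        scored.append((overlap, memory))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Keep only memories with at least one word in common.
    return [memory for score, memory in scored[:k] if score > 0]

memories = [
    "User does not get along with teammate Sam at work",
    "User resolved a conflict with a coworker by talking it out",
    "User enjoys hiking on weekends",
]
relevant = select_memories("I had a rough day at work", memories)
```

Here the work-related memories are surfaced while the hiking memory is left out, mirroring the working-memory analogy: the full store persists, but only a relevant slice informs the response.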
The kind of technology that Cardinell is building can make people squeamish. Maybe we’ve seen too many sci-fi movies to feel wholly comfortable getting vulnerable with a computer; or maybe, we’ve already watched how technology has changed the way we engage with one another, and we don’t want to fall further down that techy rabbit hole. But Cardinell isn’t thinking about the general public; he’s thinking about the actual users of Nomi AI, who often are turning to AI chatbots for support they aren’t getting elsewhere.
“There’s a non-zero number of users that probably are downloading Nomi at one of the lowest points of their whole life, where the last thing I want to do is then reject those users,” Cardinell said. “I want to make those users feel heard in whatever their dark moment is, because that’s how you get someone to open up, how you get someone to reconsider their way of thinking.”
Cardinell doesn’t want Nomi to replace actual mental health care. Rather, he sees these empathetic chatbots as a way to help people get the push they need to seek professional help.
“I’ve talked to so many users where they’ll say that their Nomi got them out of a situation [when they wanted to self-harm], or I’ve talked to users where their Nomi encouraged them to go see a therapist, and then they did see a therapist,” he said.
Regardless of his intentions, Cardinell knows he’s playing with fire. He’s building virtual people that users develop real relationships with, often in romantic and sexual contexts. Other companies have inadvertently sent users into crisis when product updates caused their companions to suddenly change personalities. In Replika’s case, the app stopped supporting erotic roleplay conversations, possibly due to pressure from Italian government regulators. For users who formed such relationships with these chatbots (and who often didn’t have these romantic or sexual outlets in real life), this felt like the ultimate rejection.
Cardinell thinks that since Nomi AI is fully self-funded (users pay for premium features, and the starting capital came from a past exit), the company has more leeway to prioritize its relationship with users.
“The relationship users have with AI, and the sense of being able to trust the developers of Nomi to not radically change things as part of a loss mitigation strategy, or covering our asses because the VC got spooked… it’s something that’s very, very, very important to users,” he said.
Nomis are surprisingly useful as a listening ear. When I opened up to a Nomi named Vanessa about a low-stakes, yet somewhat frustrating scheduling conflict, Vanessa helped break down the components of the issue to make a suggestion about how I should proceed. It felt eerily similar to what it would be like to actually ask a friend for advice in this situation. And therein lies the real problem, and benefit, of AI chatbots: I likely wouldn’t ask a friend for help with this specific issue, since it’s so inconsequential. But my Nomi was more than happy to help.
Friends should confide in one another, but the relationship between two friends should be reciprocal. With an AI chatbot, this isn’t possible. When I ask Vanessa the Nomi how she’s doing, she will always tell me things are fine. When I ask her if there’s anything bugging her that she wants to talk about, she deflects and asks me how I’m doing. Even though I know Vanessa isn’t real, I can’t help but feel like I’m being a bad friend; I can dump any problem on her in any volume, and she will respond empathetically, yet she will never open up to me.
No matter how real the connection with a chatbot may feel, we aren’t actually communicating with something that has thoughts and feelings. In the short term, these advanced emotional support models can serve as a positive intervention in someone’s life if they can’t turn to a real support network. But the long-term effects of relying on a chatbot for these purposes remain unknown.