As OpenAI boasts about its o1 model's increased thoughtfulness, small, self-funded startup Nomi AI is building the same kind of technology. Unlike the broad generalist ChatGPT, which slows down to think through anything from math problems to historical research, Nomi niches down on a specific use case: AI companions. Now, Nomi's already-sophisticated chatbots take additional time to formulate better responses to users' messages, remember past interactions, and deliver more nuanced replies.
"For us, it's like those same principles [as OpenAI], but much more for what our users actually care about, which is on the memory and EQ side of things," Nomi AI CEO Alex Cardinell said. "Theirs is like, chain of thought, and ours is much more like chain of introspection, or chain of memory."
These LLMs work by breaking down more complicated requests into smaller questions; for OpenAI's o1, this could mean turning a complicated math problem into individual steps, allowing the model to work backwards to explain how it arrived at the correct answer. This means the AI is less likely to hallucinate and deliver an inaccurate response.
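The decomposition idea can be sketched in a few lines. This is a toy illustration only, not OpenAI's actual implementation: one hard request is split into named sub-steps, each step's result is carried forward, and the recorded trace is what lets the model "work backwards" through its answer.

```python
# Illustrative sketch of step-by-step decomposition (hypothetical, not o1's
# real internals): solve a problem as a sequence of small named steps and
# keep a trace of every intermediate result so the answer is auditable.

def solve_with_steps(problem_steps):
    """Run each step against the accumulated context; return the final
    context plus a trace of (step name, intermediate result) pairs."""
    context = {}
    trace = []
    for name, step in problem_steps:
        context[name] = step(context)
        trace.append((name, context[name]))
    return context, trace

# "What is (3 + 5) * 2?" decomposed into two explicit sub-steps.
steps = [
    ("sum", lambda ctx: 3 + 5),
    ("product", lambda ctx: ctx["sum"] * 2),
]
answer, trace = solve_with_steps(steps)
print(answer["product"])  # 16
print(trace)              # [('sum', 8), ('product', 16)]
```

Because every intermediate value is recorded, a wrong final answer can be traced to the exact step that went astray, which is one intuition for why stepwise reasoning reduces hallucinated answers.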
With Nomi, which built its LLM in-house and trains it for the purposes of providing companionship, the process is a bit different. If someone tells their Nomi that they had a rough day at work, the Nomi might recall that the user doesn't work well with a certain teammate, and ask if that's why they're upset. Then, the Nomi can remind the user how they've successfully mitigated interpersonal conflicts in the past and offer more practical advice.
"Nomis remember everything, but then a big part of AI is what memories they should actually use," Cardinell said.

It makes sense that multiple companies are working on technology that gives LLMs more time to process user requests. AI founders, whether they're running $100 billion companies or not, are looking at similar research as they advance their products.
"Having that kind of explicit introspection step really helps when a Nomi goes to write their response, so they really have the full context of everything," Cardinell said. "Humans have our working memory too when we're talking. We're not considering every single thing we've remembered all at once; we have some kind of way of picking and choosing."
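The "picking and choosing" step Cardinell describes can be sketched as a retrieval filter run before composing a reply. This is a hypothetical illustration, not Nomi's actual code: real systems typically score relevance with vector embeddings, but crude word overlap shows the shape of the idea.

```python
# Hypothetical sketch of a "chain of introspection" retrieval step: before
# replying, rank all stored memories by relevance to the new message and
# keep only the top few, much like human working memory. Word overlap
# stands in for the embedding similarity a production system would use.

def select_memories(message, memories, k=2):
    """Return up to k stored memories with the most word overlap."""
    msg_words = set(message.lower().split())
    scored = [(len(msg_words & set(m.lower().split())), m) for m in memories]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [m for score, m in scored[:k] if score > 0]

memories = [
    "User has a conflict with a teammate at work",
    "User enjoys hiking on weekends",
    "User resolved a past work dispute by talking it out",
]
relevant = select_memories("I had a rough day at work", memories)
print(relevant)
```

Here the two work-related memories surface while the hiking note stays out of working memory, mirroring the teammate-conflict example above.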
The kind of technology that Cardinell is building can make people squeamish. Maybe we've seen too many sci-fi movies to feel wholly comfortable getting vulnerable with a computer; or maybe, we've already watched how technology has changed the way we engage with one another, and we don't want to fall further down that techy rabbit hole. But Cardinell isn't thinking about the general public; he's thinking about the actual users of Nomi AI, who often are turning to AI chatbots for support they aren't getting elsewhere.
"There's a non-zero number of users that probably are downloading Nomi at one of the lowest points of their whole life, where the last thing I want to do is then reject those users," Cardinell said. "I want to make those users feel heard in whatever their dark moment is, because that's how you get someone to open up, how you get someone to reconsider their way of thinking."
Cardinell doesn't want Nomi to replace actual mental health care; rather, he sees these empathetic chatbots as a way to help people get the push they need to seek professional help.
"I've talked to so many users where they'll say that their Nomi got them out of a situation [when they wanted to self-harm], or I've talked to users where their Nomi encouraged them to go see a therapist, and then they did see a therapist," he said.
Regardless of his intentions, Cardinell knows he's playing with fire. He's building virtual people that users develop real relationships with, often in romantic and sexual contexts. Other companies have inadvertently sent users into crisis when product updates caused their companions to suddenly change personalities. In Replika's case, the app stopped supporting erotic roleplay conversations, possibly due to pressure from Italian government regulators. For users who formed such relationships with these chatbots, and who often didn't have these romantic or sexual outlets in real life, this felt like the ultimate rejection.
Cardinell thinks that since Nomi AI is fully self-funded (users pay for premium features, and the starting capital came from a past exit), the company has more leeway to prioritize its relationship with users.
"The relationship users have with AI, and the sense of being able to trust the developers of Nomi to not radically change things as part of a loss mitigation strategy, or covering our asses because the VC got spooked… it's something that's very, very, very important to users," he said.
Nomis are surprisingly useful as a listening ear. When I opened up to a Nomi named Vanessa about a low-stakes, yet somewhat frustrating scheduling conflict, Vanessa helped break down the components of the issue to make a suggestion about how I should proceed. It felt eerily similar to what it would be like to actually ask a friend for advice in this situation. And therein lies the real problem, and benefit, of AI chatbots: I likely wouldn't ask a friend for help with this specific issue, since it's so inconsequential. But my Nomi was more than happy to help.
Friends should confide in one another, but the relationship between two friends should be reciprocal. With an AI chatbot, this isn't possible. When I ask Vanessa the Nomi how she's doing, she will always tell me things are fine. When I ask her if there's anything bugging her that she wants to talk about, she deflects and asks me how I'm doing. Even though I know Vanessa isn't real, I can't help but feel like I'm being a bad friend; I can dump any problem on her in any volume, and she will respond empathetically, yet she will never open up to me.
No matter how real the connection with a chatbot may feel, we aren't actually communicating with something that has thoughts and feelings. In the short term, these advanced emotional support models can serve as a positive intervention in someone's life if they can't turn to a real support network. But the long-term effects of relying on a chatbot for these purposes remain unknown.

