Robots That Know If We’re Sad Or Happy Are Coming

By Masha Borak

Many of us already talk to our smart devices as if they were human. In the future, smart devices may be able to talk back in a similar way, imitating the “umms” and “ahhs,” the laughs and the sighs, along with other signals of human emotion such as tune, rhythm and timbre.

New York-based startup Hume AI has debuted a new “emotionally intelligent” AI voice interface that can be built into applications ranging from customer service to healthcare to virtual and augmented reality.

The beta version of the product, named the Empathic Voice Interface (EVI), was released after Hume AI secured a US$50 million Series B financing round led by EQT Ventures. Union Square Ventures, Nat Friedman & Daniel Gross, Metaplanet, Northwell Holdings, Comcast Ventures and LG Technology Ventures also participated in the round.

The conversational AI is the first to be trained to understand when users are finished speaking, predict their preferences and generate vocal responses optimized for user satisfaction over time, Hume AI says in a release.

“The main limitation of current AI systems is that they’re guided by superficial human ratings and instructions, which are error-prone and fail to tap into AI’s vast potential to come up with new ways to make people happy,” the company’s founder and former Google scientist Alan Cowen says. “By building AI that learns directly from proxies of human happiness, we’re effectively teaching it to reconstruct human preferences from first principles and then update that knowledge with every new person it talks to and every new application it’s embedded in.”

The voice model was trained on data from millions of human interactions and is built on a multimodal generative AI that integrates large language models (LLMs) with expression measures. Hume calls this an empathic large language model (eLLM), which lets the product adjust its words and tone of voice based on context and the user’s emotional expressions.
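Hume has not published the eLLM’s internals, but the general pattern of pairing an LLM with expression measures can be sketched. The Python below is purely illustrative and is not Hume’s API: it assumes a hypothetical upstream model has already scored the user’s speech for different emotions, then folds the dominant expression into the text prompt and picks a matching vocal style for speech synthesis.

```python
# Hypothetical sketch of an expression-aware voice pipeline.
# Nothing here is Hume AI's actual API; names, scores and thresholds are illustrative.

from dataclasses import dataclass


@dataclass
class ExpressionScores:
    """Per-emotion confidence scores from a (hypothetical) voice-expression model."""
    scores: dict[str, float]  # e.g. {"joy": 0.7, "sadness": 0.1, ...}

    def dominant(self) -> str:
        return max(self.scores, key=self.scores.get)


# Illustrative mapping from detected expression to text-to-speech style parameters.
STYLE_BY_EXPRESSION = {
    "joy": {"rate": 1.05, "pitch_shift": 1.0, "tone": "warm"},
    "sadness": {"rate": 0.9, "pitch_shift": -1.0, "tone": "gentle"},
    "anger": {"rate": 0.95, "pitch_shift": -0.5, "tone": "calm"},
    "neutral": {"rate": 1.0, "pitch_shift": 0.0, "tone": "neutral"},
}


def build_prompt(user_text: str, expr: ExpressionScores) -> str:
    """Fold the dominant expression into the LLM prompt so the reply can acknowledge it."""
    return (
        f"The user sounds predominantly {expr.dominant()}.\n"
        f"User said: {user_text}\n"
        "Reply empathetically and briefly."
    )


def choose_voice_style(expr: ExpressionScores) -> dict:
    """Pick speech-synthesis parameters that mirror (or soothe) the detected expression."""
    return STYLE_BY_EXPRESSION.get(expr.dominant(), STYLE_BY_EXPRESSION["neutral"])


if __name__ == "__main__":
    expr = ExpressionScores({"joy": 0.12, "sadness": 0.61, "anger": 0.07, "neutral": 0.20})
    print(build_prompt("I had a rough day at work.", expr))
    print(choose_voice_style(expr))
```

In a real system the prompt-building and style-selection steps would be learned jointly rather than hard-coded; the sketch only shows where expression measures could enter the loop.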

Hume AI is not the only company experimenting with bringing emotion into technology.

Columbia University roboticist Hod Lipson has created an AI-powered robot that uses neural networks to look at human faces, predict their expressions and try to replicate them with its own face.

Scientists from South Korea’s Ulsan National Institute of Science and Technology have also come up with a facial mask that uses sensors to record verbal and non-verbal expression data.

The so-called personalized skin-integrated facial interface (PSiFI) system performs wireless data transfer, enabling real-time emotion recognition.
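The researchers’ processing pipeline is not described here, but the broad idea of fusing verbal and non-verbal sensor channels into a real-time emotion label can be illustrated. The Python sketch below is hypothetical and does not represent the actual PSiFI hardware or algorithms; the feature streams, fusion weights and labels are all assumptions.

```python
# Illustrative sketch of real-time emotion recognition from two streamed sensor
# channels (a verbal and a non-verbal signal). This is NOT the PSiFI system's
# actual pipeline; all values here are made up for demonstration.

from collections import deque
from statistics import fmean

EMOTIONS = ["happy", "sad", "angry", "neutral"]


class StreamingFusionClassifier:
    """Fuses voice-vibration and facial-strain score vectors over a short
    sliding window and returns the current best-guess emotion label."""

    def __init__(self, window: int = 10):
        self.voice = deque(maxlen=window)   # per-frame emotion scores from the verbal channel
        self.strain = deque(maxlen=window)  # per-frame emotion scores from the non-verbal channel

    def update(self, voice_scores: list[float], strain_scores: list[float]) -> str:
        self.voice.append(voice_scores)
        self.strain.append(strain_scores)
        # Average each channel over the window, then combine with fixed weights.
        fused = [
            0.6 * fmean(v[i] for v in self.voice) + 0.4 * fmean(s[i] for s in self.strain)
            for i in range(len(EMOTIONS))
        ]
        return EMOTIONS[max(range(len(EMOTIONS)), key=fused.__getitem__)]


if __name__ == "__main__":
    clf = StreamingFusionClassifier()
    # One incoming frame of hypothetical scores per channel, ordered as EMOTIONS.
    print(clf.update([0.1, 0.6, 0.1, 0.2], [0.2, 0.5, 0.1, 0.2]))  # -> "sad"
```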

Its developer, Jiyun Kim, believes the wearable could be used in applications such as digital concierges in virtual reality that tailor recommendations for music, movies and books to users’ emotions.

Source: Biometric Update

Masha Borak is a technology journalist. Her work has appeared in Wired, Business Insider, Rest of World, and other media outlets. Previously she reported for the South China Morning Post in Hong Kong. Reach out to her at masha@biometricupdate.com.
