“AI” is Being Used to Profile People From Their Head Vibrations – But Is There Enough Evidence To Support It?

By James Wright, Alan Turing Institute

Digital video surveillance systems don’t just identify who someone is. They can also work out how someone is feeling and what kind of personality they have. They can even tell how someone might behave in the future. And the key to unlocking this information about a person is the movement of their head.

That is the claim made by the company behind the VibraImage artificial intelligence (AI) system. (The term “AI” is used here in a broad sense to refer to digital systems that use algorithms and tools such as automated biometrics and computer vision). You may never have heard of it, but digital tools based on VibraImage are being used across a broad range of applications in Russia, China, Japan and South Korea.

But as I show in my recent research, published in Science, Technology and Society, there is very little reliable, empirical evidence that VibraImage and systems like it are actually effective at what they claim to do.

These applications include, among other things, identifying “suspect” individuals in crowds of people and grading the mental and emotional states of employees. Users of VibraImage include police forces, the nuclear industry and airport security. The technology has already been deployed at two Olympic Games, a FIFA World Cup and a G7 Summit.

In Japan, clients of such systems include one of the world’s leading facial recognition providers (NEC), one of the largest security services companies (ALSOK), as well as Fujitsu and Toshiba. In South Korea, among other uses, it is being developed as a contactless lie detection system for police interrogations. In China, it has already been officially certified for police use to identify suspicious individuals at airports, border crossings and elsewhere.

Across east Asia and beyond, algorithmic security, surveillance, predictive policing and smart city infrastructure are becoming mainstream. VibraImage forms one part of this emerging infrastructure. Like other algorithmic emotion detection systems being developed and deployed globally, it promises to take video surveillance to a new level. As I explain in my paper, it claims to do this by generating information about subjects’ characters and inner lives that they don’t even know about themselves.

VibraImage has been developed by Russian biometrist Viktor Minkin through his company ELSYS Corp since 2001. Other emotion detection systems try to calculate people’s emotional states by analysing their facial expressions, an approach that has come under growing criticism in recent years. By contrast, VibraImage analyses video footage of the involuntary micro-movements, or “vibrations”, of a person’s head, which are caused by muscles and the circulatory system. Could VibraImage provide a more accurate approach?

Surveillance systems can profile individuals in huge crowds. Csaba Peterdi/Shutterstock

Minkin puts forward two theories apparently supporting the idea that these movements are tied to emotional states. The first is the existence of a “vestibulo-emotional reflex” based on the idea that the body’s system responsible for balance and spatial orientation is related to psychological and emotional states. The second is a “thermodynamic model of emotions”, which draws a direct link between specific emotional-mental states and the amount of energy expended by muscles. What’s more, Minkin claims this energy can be measured through tiny vibrations of the head.
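To make the measurement claim concrete, here is a minimal, purely illustrative sketch, in Python with OpenCV, of how a head “vibration” signal might be estimated from video by differencing consecutive frames. This is an assumption about the general approach, not ELSYS’s actual algorithm, which is proprietary; the file name subject.mp4 and all function names are hypothetical. Note that even if such a signal can be measured, nothing in the computation itself links it to emotions – that link is precisely what Minkin’s theories are supposed to supply.

```python
# Illustrative sketch only: a naive frame-differencing estimate of head
# micro-movement. This is NOT ELSYS's proprietary VibraImage algorithm;
# all names here are hypothetical.
import cv2
import numpy as np

def vibration_signal(video_path, max_frames=300):
    """Return a per-frame 'vibration' score: the mean absolute pixel
    change between consecutive grayscale frames."""
    cap = cv2.VideoCapture(video_path)
    scores, prev = [], None
    while len(scores) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            # Inter-frame difference: a crude proxy for micro-movement.
            scores.append(np.mean(np.abs(gray - prev)))
        prev = gray
    cap.release()
    return np.array(scores)

signal = vibration_signal("subject.mp4")  # hypothetical input file
# Dominant 'vibration' frequency via FFT, in cycles per frame
# (the DC component is removed by subtracting the mean).
freqs = np.fft.rfftfreq(len(signal))
dominant = freqs[np.argmax(np.abs(np.fft.rfft(signal - signal.mean())))]
print(f"mean amplitude: {signal.mean():.3f}, "
      f"dominant frequency: {dominant:.3f} cycles/frame")
```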

According to these theories, involuntary movements of the face and head are therefore emotion, intention and personality made visible. In addition to spotting suspect individuals, supporters of VibraImage believe this data can be used to determine personality type, identify adolescents more likely to commit crimes, or categorise types of intelligence based on nationality and ethnicity. They even suggest it could be used to create a 1984-style test of loyalty to the values of a company or nation, based on how someone’s head vibrations change in response to statements.

But the many claims made about its effectiveness seem unprovable. Very few scientific articles on VibraImage have been published in academic journals with rigorous peer review processes – and many are written by those with an interest in the success of the technology. This research often relies on experiments that already assume VibraImage is effective. How exactly certain head movements are linked to specific emotional-mental states is never explained. One study from Kagawa University in Japan found almost no correlation between the results of a VibraImage assessment and those of existing psychological tests.

In a statement responding to the claims in this article, Minkin says that VibraImage is not an AI technology, but “is based on understandable physics and cybernetics and physiology principles and transparent equations for emotions calculations”. He adds that it may use AI processing in behaviour detection or emotion recognition where there is a “technical necessity for it”.

He also argues that people might assume the technology is “fake” as “contactless and simple technology of psychophysiological detection looks so fantastic”, and because it is associated with Russia. Minkin has also published a technical response to my paper.

‘Suspect AI’

One of the main reasons it is so difficult to prove whether VibraImage works is its underlying premise that the system reveals more about subjects than they know about themselves. If subjects cannot report their own “true” inner states, there is no independent ground truth against which the system’s outputs can be checked. And there is no compelling evidence that the premise holds in the first place.

I propose the term “suspect AI” to describe the growing number of systems that algorithmically classify individuals as suspects, yet are, I argue, themselves deeply suspect. They are opaque, unproven, and developed and implemented without democratic input or oversight. They are also largely unregulated, with the potential to cause serious harm.

VibraImage is not the only such system out there. Other AI systems designed to detect suspicious or deceptive individuals have been trialled. For example, Avatar has been tested on the US-Mexico border, and iBorderCtrl at the EU’s borders. Both are designed to detect deception among migrants. In China, VibraImage-based systems and similar products are being used for a growing range of applications in law enforcement, security and healthcare.

The broader algorithmic emotion recognition industry was worth up to US$12 billion in 2018. It is expected to reach US$37.1 billion by 2026. Amid growing global concern about the need to create rules around the ethical development of AI, we need to look far more closely at such opaque algorithmic systems of surveillance and control.

The European Commission’s recently announced draft AI regulations categorise emotion recognition systems as “high-risk” and subject to a higher level of governance control. This is an important start. Other countries should now follow this lead to ensure that possible harms from these high-risk systems are minimised.

James Wright, Research Associate, Alan Turing Institute

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Top image: Does technology know more about us than we know about ourselves? Trismegist san/Shutterstock
