|AI uses face recognition and vision tech to understand different facial expressions as a means to identify our emotions / Photo Credit: ImageFlow via Shutterstock|
Artificial intelligence has given rise to a wide range of applications, and emotion-detecting technology is a prominent example. Such systems aim to recognize human emotions and even predict them. Paired with computer vision and face recognition, emotion-detecting tech promises to interpret different facial expressions as a window into what people are feeling.
However, experts are increasingly worried about emotion-detecting tech as its popularity grows. A recent report by the AI Now Institute found that the industry is undergoing a period of significant growth and may already be worth as much as $20 billion. "It's being used everywhere, from how do you hire the perfect employee to assessing patient pain, to tracking which students seem to be paying attention in class," the institute's co-founder, professor Kate Crawford, said.
"At the same time as these technologies are being rolled out, large numbers of studies are showing that there is...no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks,” she added.
AI Now therefore argued that emotion-detecting tech should be restricted by law, since it is "built on markedly shaky foundations." Even Emteq, a Brighton-based firm working to integrate emotion-detecting tech into virtual-reality headsets, acknowledged that knowing what a person feels is not a simple matter: recognizing facial expressions alone is not enough to determine a subject's emotions.
Speaking to the BBC, Charles Nduka, a plastic, reconstructive, and cosmetic surgeon, said that the context in which an emotional expression is made must be understood first. "For example, a person could be frowning their brow not because they are angry but because they are concentrating or the Sun is shining brightly and they are trying to shield their eyes. Context is key, and this is what you can't get just from looking at computer-vision mapping of the face," he said.