Google says its new family of AI models has a curious feature: the ability to “recognize” emotions. PaliGemma 2, unveiled Thursday, can analyze images, allowing the AI to generate captions and answer questions about the people it “sees” in photos, the company said.
“PaliGemma 2 generates detailed, contextually relevant captions for images, going beyond simple object identification to describe actions, emotions and the overall narrative of a scene,” Google writes in its official blog post. Emotion recognition does not work out of the box; PaliGemma 2 has to be fine-tuned for it. Even so, experts who spoke with TechCrunch were alarmed by the prospect of a publicly accessible emotion detector.
“This worries me a lot. I find it problematic to think that we can ‘read’ people’s emotions. It’s like asking a Magic 8 Ball for advice on big issues,” Sandra Wachter, a professor of data ethics and AI at the Oxford Internet Institute, told TechCrunch.
Tech companies have spent years trying to build AI that can detect emotions, for uses ranging from sales training to accident prevention.
Most emotion detectors build on the early work of American psychologist Paul Ekman, who proposed that people share six basic emotions: anger, surprise, disgust, happiness, fear and sadness. Subsequent research, however, has cast doubt on Ekman’s hypothesis, showing that there are major differences in how people from different backgrounds and cultures express their feelings.
“Defining emotions in some universal, general way is impossible, because people experience emotions in complex ways. We think that by looking at people we can determine how they feel, and various people have tried this over the years, including intelligence agencies and advertising companies. I’m sure in some cases it is absolutely possible to find some common features, but that’s not what this is about, and it’s not something that will ever have a universal answer,” Mike Cook, an AI researcher at Queen Mary University of London, told TechCrunch.
Experts believe that emotion detection systems are generally unreliable and biased by the assumptions of their developers. In a 2020 MIT study, researchers showed that facial analysis models can develop unintended preferences for certain facial expressions, such as smiling. More recent research shows that emotion analysis models attribute more negative emotions to the faces of Black people than to the faces of white people.
Google says it conducted “extensive testing” to assess demographic bias in PaliGemma 2 and found “significantly lower levels of toxicity and profanity” compared to industry benchmarks. However, the company did not provide a full list of benchmarks used or indicate what types of tests were performed.
The only benchmark Google has disclosed is FairFace, a set of tens of thousands of portraits of people. The company claims that PaliGemma 2 performed well in the FairFace assessment. But some experts have criticized the benchmark as a biased measure itself, noting that FairFace represents only a handful of racial groups.
“Interpreting emotions is a rather subjective matter that goes beyond the use of visual media and is closely related to personal and cultural context. AI aside, research shows that we can’t recognize emotions from facial features alone,” says Heidy Khlaaf, chief AI scientist at the AI Now Institute, a nonprofit that studies the social implications of artificial intelligence.
In the EU, the use of emotion recognition systems has raised serious questions among regulators. The AI Act, the EU’s main piece of AI legislation, bans schools and employers from using emotion detectors. However, it does not prohibit their use by law enforcement agencies.
Experts’ main concern about open AI models like PaliGemma 2, which is distributed through various sources including the AI development platform Hugging Face, is that they can be abused or misused, potentially causing real harm.
“If this so-called ‘emotional identification’ is built on pseudoscientific presumptions and biases, then there is a significant risk that this capability could be used to further and falsely discriminate against marginalized groups by law enforcement, employers, border patrol agencies, and so on,” says Khlaaf.
Google says it tests its AI models for “representational harms” (stereotypes about social groups, such as racial, ethnic, gender or religious groups, that lead to negative consequences for those groups and their members). “We have robustly assessed PaliGemma 2 models from an ethical and safety perspective, including child safety and content safety,” the company added.
Wachter was not convinced by the company’s response: “Responsible innovation means thinking about the consequences from the first day you step into your lab, and continuing to do so throughout the entire product life cycle. I can imagine myriad potential problems that such models could create, ultimately leading to a dystopian future in which your emotions determine whether you get a job, a loan, or a place at college.”