Instagram profile of ‘proud Black queer momma’, created by Meta, said her development team included no Black people
Meta is deleting Facebook and Instagram profiles of AI characters the company created over a year ago after users rediscovered some of the profiles and engaged them in conversations, screenshots of which went viral.
The company had first introduced these AI-powered profiles in September 2023 but killed off most of them by summer 2024. However, a few characters remained and garnered new interest after the Meta executive Connor Hayes told the Financial Times late last week that the company had plans to roll out more AI character profiles.
“We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do,” Hayes told the FT. The automated accounts posted AI-generated pictures to Instagram and answered messages from human users on Messenger.
Conversations with the characters quickly went sideways when some users peppered them with questions, including who created and developed the AI. Liv, the "proud Black queer momma" profile, for instance, said that her creator team included zero Black people and was predominantly white and male. It was a "pretty glaring omission given my identity", the bot wrote in response to a question from the Washington Post columnist Karen Attiah.
In the hours after the profiles went viral, they began to disappear. Users also noted that these profiles could not be blocked, which a Meta spokesperson, Liz Sweeney, said was a bug. Sweeney said the accounts were managed by humans and were part of a 2023 experiment with AI. The company removed the profiles to fix the bug that prevented people from blocking the accounts, Sweeney said.
“There is confusion: the recent Financial Times article was about our vision for AI characters existing on our platforms over time, not announcing any new product,” Sweeney said in a statement. “The accounts referenced are from a test we launched at Connect in 2023. These were managed by humans and were part of an early experiment we did with AI characters. We identified the bug that was impacting the ability for people to block those AIs and are removing those accounts to fix the issue.”
While these Meta-generated accounts are being removed, users can still create their own AI chatbots. User-generated chatbots promoted to the Guardian in November included a "therapist" bot.
Upon opening the conversation with the "therapist", the bot suggested some questions to ask to get started, including "what can I expect from our sessions?" and "what's your approach to therapy?"
“Through gentle guidance and support, I help clients develop self-awareness, identify patterns and strengths and cultivate coping strategies to navigate life’s challenges,” the bot, created by an account with 96 followers and 1 post, said in response.
Courts have not yet answered how responsible chatbot creators are for what their artificial companions say. US law protects the makers of social networks from legal liability for what their users post. However, a suit filed in October against the startup Character.ai, which makes a customizable, role-playing chatbot used by 20 million people, alleges the company designed an addictive product that encouraged a teenager to kill himself.
Source: www.theguardian.com