Did Snapchat join AI?
April 15, 2023
Snapchat has introduced new safety measures for its AI chatbot after unsafe and inappropriate responses from the bot were reported in the Washington Post. The company has launched an age filter that provides age-appropriate responses to users, and it has announced plans to share information with parents through its Family Center.
Snap has also explained that the “My AI” bot is not a “real friend” and that it uses conversation history to improve its responses. In addition, the company has added OpenAI moderation technology to its existing set of tools for detecting misuse of My AI.
Snapchat’s AI uses a form of ChatGPT from OpenAI
The new age filter built into Snapchat’s AI chatbot lets the AI know users’ birth dates and give answers appropriate to their age. Snap says the chatbot will “remember your age over and over again” as it converses with users.
This measure seeks to improve safety and protect teenagers from inappropriate interactions. In addition, in the coming weeks, Snap plans to give parents and guardians more information about their children’s interactions with the chatbot in its Family Center, which launched in August 2022. A new feature will show whether teens are communicating with the AI and how frequently.
Age filter and the FTC
Both the guardian and the teen must opt in to the Family Center to use these parental control features. This gives parents a clearer view of their children’s online activity and lets them take preventive measures.
Limiting misuse of the service with OpenAI moderation technology
OpenAI’s moderation technology has been incorporated into Snapchat’s suite of existing tools to curb misuse of My AI. If a user misuses the service, the company will temporarily restrict that user’s access to the AI bot.
This is done to protect the safety and privacy of Snapchat users, especially regarding exposure to harmful content. With the rapid spread of tools powered by artificial intelligence, many people remain concerned about their safety and privacy.
Last week, an ethics group called the Center for AI and Digital Policy wrote to the FTC urging the agency to halt the release of OpenAI’s GPT-4 technology, accusing the new technology of being “biased, misleading, and a danger to public privacy and safety.”
Technology experts continue to urge organizations to put sufficient safeguards around chatbot tools to keep them from going rogue and to protect users from exposure to harmful content.
Finally, the politics
Last month, United States Senator Michael Bennet (D-Colorado) also wrote a letter to OpenAI, Meta, Google, Microsoft, and Snap expressing concern about generative AI tools used by young people. Such AI models are notably susceptible to harmful input and can be manipulated into producing inappropriate responses. While tech companies may want to roll out these tools quickly, they will need to make sure there are enough safety guardrails in place to prevent their misuse.