Snapchat is rolling out new safeguards for its AI chatbot, including an age filter and insights for parents.

The move follows a Washington Post investigation that found the GPT-powered chatbot, offered to Snapchat+ subscribers, responding in unsafe and inappropriate ways.

The new tools are intended to keep the AI’s responses in check, Snap said, after it found that people were attempting to “trick the chatbot into providing responses that do not conform to our guidelines.” According to the company, the new age filter gives the chatbot the user’s birthdate so that it responds in an age-appropriate way.
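Snap has not described how the filter is implemented. A minimal sketch of the general idea, assuming the birthdate is simply folded into the model’s system prompt (the function name and prompt wording below are illustrative, not Snap’s code):

```python
from datetime import date

def age_aware_system_prompt(birthdate: date, base_prompt: str) -> str:
    """Compute the user's age from their birthdate and prepend it to the
    system prompt so the model can tailor its replies (illustrative only)."""
    today = date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return (
        f"The user is {age} years old. Keep every response appropriate "
        f"for that age.\n{base_prompt}"
    )

# Example usage with a hypothetical birthdate and base prompt
print(age_aware_system_prompt(date(2010, 5, 1), "You are My AI, a friendly assistant."))
```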

In the coming weeks, Snap also plans to give parents and guardians more insight into how their teens are interacting with the chatbot through its Family Center, which launched in August. The new feature will show parents or guardians how, and how often, their teens are chatting with the bot. Both the adult and the teen must opt in to Family Center to use these parental controls.

Snap clarified in a blog post that the My AI chatbot is not a “real friend” and that it uses the context of previous conversations to enhance its responses.

According to the company, the bot used “non-conforming” language in only 0.01% of its responses. Snap classifies as “non-conforming” any response that references violence, sexually explicit terms, illicit drug use, child sexual abuse, bullying, hate speech, derogatory or biased statements, racism, misogyny, or the marginalisation of underrepresented groups.

The company said the bot often produced inappropriate responses simply by repeating what users had said. It also said that users who abuse the service will have their access to My AI temporarily blocked.

“We will continue to use these learnings to improve My AI. This data will also help us deploy a new system to limit misuse of My AI. We are adding OpenAI’s moderation technology to our existing toolset, which will allow us to assess the severity of potentially harmful content and temporarily restrict Snapchatters’ access to My AI if they misuse the service,” Snap said.
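Snap has not shared details of the integration, but OpenAI’s public moderation endpoint returns per-category scores that can be thresholded to grade severity. Below is a minimal sketch of that approach using the openai Python SDK; the threshold, the `restrict_user` helper, and the block duration are assumptions, not Snap’s implementation:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SEVERITY_THRESHOLD = 0.8  # assumed cutoff, not a value Snap has published

def restrict_user(user_id: str, hours: int) -> None:
    """Hypothetical helper: temporarily revoke a user's access to the bot."""
    print(f"Blocking {user_id} from My AI for {hours} hours")

def check_message(user_id: str, text: str) -> bool:
    """Return True if the message is allowed, False if access was restricted."""
    result = client.moderations.create(input=text).results[0]
    # category_scores holds a 0-1 score for each policy category
    severity = max(result.category_scores.model_dump().values())
    if result.flagged and severity >= SEVERITY_THRESHOLD:
        restrict_user(user_id, hours=24)
        return False
    return True
```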

Snap remains optimistic about generative AI tools. Alongside the chatbot, the company introduced an AI-powered background generator for Snapchat+ subscribers a few weeks ago.

 
