Chatbots against harassment: AI to filter offenses from your project

By Kayalla Barreto


August 2, 2022
6 min. read

Have you ever heard of “harassment”? It can happen to anyone and is characterized by abusive conduct expressed through gestures, words, behavior, or actions. Whatever form it takes, it can cause irreversible damage to a person.

Harassment has gained momentum in the digital environment. Free of physical barriers, and often shielded by anonymity, the internet has become a common stage for verbal abuse. Ignoring this fact and treating offenses as normal only helps this behavior take root in our society.

This is why there is so much talk about ways to fight virtual harassment. In this article, in addition to contextualizing the problem of harassment against women and female-voiced virtual assistants, we will explain how Weni built its own Swearing Identification Intelligence.

Keep reading, it is worth it!

Virtual harassment against women

If you are a woman, unfortunately, you are very likely to know what this is all about.

Estimates from the United Nations (UN) indicate that 95% of internet harassment is directed at women.

To illustrate the problem, we bring you some data from Plan International’s survey, “Online Freedom? How girls and young women deal with harassment on social networks,” conducted in 2020.

It was observed that 77% of the girls and young women interviewed (out of a total of 500) had already suffered from virtual harassment in Brazil. 

The survey’s alarming numbers explain why society has awakened to the fact that harassment occurring in digital media is no less serious than harassment that happens in physical environments.

Maia: a project that helps girls in abusive relationships

Maia (Minha Amiga Inteligência Artificial, or “My Artificial Intelligence Friend”) is a chatbot developed as part of the #NamoroLegal project, conceived by Valéria Scarance, a prosecutor and coordinator of the MPSP’s Gender Center.

Based on a booklet that helps identify abusive relationships, Maia was designed to advise teenagers at the start of their romantic lives and to spot abusive behavior coming from their partners.

When this happens, girls receive guidance on what to do to avoid further suffering. Launched in June 2019, the program has since been discontinued, but it still fills us with pride.

Thanks to the Weni Platform, the entire content of the booklet was transformed into dialogues. Moreover, with the help of our AI, it was possible to program Maia to interpret the girls’ questions and direct them in the best possible way.

The problem of chatbot harassment

That’s right: female virtual assistants are also harassed. Curses and other verbal attacks from real users against virtual assistants are frequent.

Research indicates that most chatbots are built with female personas, probably because women are perceived as helpers who solve problems, while men are seen as authority figures who give answers.

In the case of chatbots, harassment disrupts the service expected from the virtual assistant and reflects what we experience outside the internet. The increase in cases has led companies to revisit their standard responses and reprogram their intelligences to respond firmly when offenses are identified.

Can you see how important it is to have initiatives in place to filter out offenses and name-calling on the internet? Now let’s talk about Weni’s Swearing Identification Intelligence, which was created to humanize and improve the user experience.

With it, you can handle swearing and verbal abuse in your service channels and respond to harassment, whether or not it is directly linked to femininity.

Weni’s Swearing Identification Intelligence

Our intelligence uses the BERT algorithm, which can identify words beyond the scope of its training data through their meaning and possible variations.

BERT was chosen because it can generalize to a wider variety of scenarios without needing as many training sentences, enabling a complete design without countless variations of the same words.
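To make this concrete, here is a minimal sketch of how a BERT-style encoder recognizes an insult it never saw in training: semantically similar sentences land close together in embedding space. The model name and sentences below are illustrative assumptions, not Weni’s actual setup.

```python
# A toy demonstration (not Weni's actual model or data): a BERT-style
# sentence encoder maps semantically similar messages to nearby vectors,
# so an insult phrased with words absent from training still scores
# close to a known training sentence.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative encoder choice

trained_example = "what an unbearable chatbot"   # sentence seen in training
unseen_variant = "you are an insufferable bot"   # wording never seen

embeddings = model.encode([trained_example, unseen_variant])
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"cosine similarity: {similarity:.2f}")    # high value => same meaning
```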

Intents

Briefly, intents are like “topics” that we train with several sentences, making the bot able to identify what lies behind the sentence it interprets.

The initial intent structure of our AI was divided into about 100 sentences for the “swearing” intent and another 100 sentences for the “bias” intent. 

The bias intent is used to pick up subjects outside the scope of the intelligence. For example, if a person says “I feel horrible”, the phrase does not indicate a curse but a feeling, so it is directed to the bias intent.

  • Bias: must exist in every intelligence. Its use is limited to scenarios where the bot should not understand the user’s sentence because it is totally out of scope, or where a certain sentence fragment should not be understood or should have its confidence reduced.
  • Swearing: covers the scenario where the user curses, using phrases with swear words or insults, for example:
      • “You are horrible”;
      • “What an unbearable chatbot”;
      • “I want you to (swear)”.

The intelligence is built to distinguish between the two intents. Words like “horrible” have multiple meanings: if the user says “I feel horrible today”, they are expressing a feeling, not swearing, so the intelligence identifies the bias intent. On the other hand, when the message is “you’re horrible”, the identified intent will be swearing.
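As an illustration of the distinction above, the sketch below assigns a message to whichever intent’s training sentences it most resembles in embedding space. This is a simplified, hypothetical stand-in for Weni’s classifier; the training sentences, model, and nearest-centroid shortcut are all assumptions for demonstration.

```python
# A simplified, hypothetical stand-in for the intent classifier described
# above: each intent is represented by the average embedding (centroid) of
# its training sentences, and a new message goes to the closest centroid.
# Weni's production intelligence uses a fine-tuned BERT model; the training
# sentences and the nearest-centroid shortcut here are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

training_data = {
    "swearing": ["you are horrible", "what an unbearable chatbot"],
    "bias": ["I feel horrible today", "I want to check my order"],
}

# Pre-compute one centroid embedding per intent.
centroids = {
    intent: model.encode(sentences).mean(axis=0)
    for intent, sentences in training_data.items()
}

def classify(message: str) -> str:
    """Return the intent whose centroid is closest to the message."""
    embedding = model.encode(message)
    scores = {i: util.cos_sim(embedding, c).item() for i, c in centroids.items()}
    return max(scores, key=scores.get)

print(classify("I feel horrible"))  # expected: bias (a feeling, not an insult)
print(classify("you're horrible"))  # expected: swearing
```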

Testing

Testing is a crucial step in measuring the quality of an intelligence. Currently, we try to maintain a rule of 10 test sentences for each intent. This number was chosen according to the standard needed for an intelligence to be considered “strong” on the Weni Platform.

Tip: don’t use test sentences that are too similar to one another; this can give a false impression of quality.

There are two measures of assertiveness, precision and recall:

  • Precision: of the test sentences the bot assigned to a given intent, the proportion that actually belonged there. If sentences from other intents are sent to it, precision decreases.
  • Recall: of the test sentences that truly belong to an intent, the proportion the bot correctly identified. The ideal is to train the intelligence until it reaches 100% recall.
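For reference, both metrics can be computed directly from labeled test sentences. The intent labels and predictions below are made-up examples; this mirrors the standard definitions, not Weni’s exact evaluation code.

```python
# Made-up test results for two intents, used to show how the standard
# precision and recall metrics behave; this mirrors textbook definitions,
# not Weni's exact evaluation code.
from sklearn.metrics import precision_score, recall_score

expected  = ["swearing", "swearing", "bias", "bias", "bias"]  # ground truth
predicted = ["swearing", "bias",     "bias", "bias", "bias"]  # bot's output

# Precision for "swearing": of the sentences sent to that intent,
# how many truly belonged there? Here 1 of 1, so 100%.
p = precision_score(expected, predicted, pos_label="swearing", average="binary")

# Recall for "swearing": of the sentences that belong to the intent,
# how many did the bot catch? Here 1 of 2, so 50%.
r = recall_score(expected, predicted, pos_label="swearing", average="binary")

print(f"precision={p:.0%} recall={r:.0%}")  # precision=100% recall=50%
```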

This is how Weni strives to improve the answers chatbots give. With our AI, it is possible to replace “I didn’t understand, can you repeat that?” with “Hey, those words are inappropriate and should not be used with me or anyone else!”.
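Wired into a bot’s reply logic, that swap could look like the minimal sketch below, reusing the hypothetical classify() helper from the intent example above.

```python
# A minimal sketch of the response swap: when the classifier flags the
# "swearing" intent, the bot replies firmly instead of falling back to
# "I didn't understand". classify() is the hypothetical helper sketched
# in the intent example above.
def respond(message: str) -> str:
    if classify(message) == "swearing":
        return ("Hey, those words are inappropriate. "
                "They should not be used with me or anyone else!")
    return "Sorry, I didn't understand. Can you rephrase that?"

print(respond("what an unbearable chatbot"))
```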

Conclusion

Offending someone online is a cybercrime and a demonstration of power that does not necessarily depend on physical contact, so it must be combated.

This movement to raise awareness and educate is vital if we are to improve social interactions. The idea is to spread the understanding that, whether the target is real or virtual, no one deserves to hear certain atrocities.

For the digital environment, there are only benefits to adding AI to virtual assistants.

Chatbots with Weni’s Swearing Identification Intelligence are able to respond to any offense, whether directed at the chatbots themselves or to help others suffering from abuse.

To extend the reach of this solution and help more people, Weni’s anti-harassment AI is open and can be used in your project. To learn more about it, please visit our website.

A Public Relations professional with a passion for a good dose of well-applied communication. I work as a content producer with a focus on branding, adopting copywriting and SEO strategies. I am responsible for making people fall in love with brands.
