Our toxicity filter uses machine learning to identify potentially harmful or toxic messages. If a participant types a toxic chat message, they are notified when they attempt to send it to the group, and the message is blocked.
A simple way to identify toxic comments is to check messages against a list of words, such as profanity. We did not want to identify toxic messages by their words alone; we also wanted to consider the context. We used machine learning to accomplish that goal.
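To make the contrast concrete, here is a minimal sketch in Python comparing the two approaches. The word list, toy training data, threshold, and scikit-learn classifier are all illustrative assumptions, not our production model or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Naive approach: flag any message containing a listed word, ignoring context.
PROFANITY = {"idiot", "stupid"}  # illustrative list

def flagged_by_word_list(message: str) -> bool:
    return any(word in PROFANITY for word in message.lower().split())

# ML approach: score the whole message so the surrounding words matter.
# Toy training set; a real filter would be trained on a large labeled corpus.
train_messages = [
    "you are an idiot",                          # toxic
    "sorry, that was a stupid typo on my part",  # listed word, but benign
    "great point, thank you",                    # benign
    "nobody here wants you around",              # toxic with no listed word
]
train_labels = [1, 0, 0, 1]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_messages, train_labels)

def flagged_by_model(message: str, threshold: float = 0.5) -> bool:
    # predict_proba returns [[p(benign), p(toxic)]] for the single message.
    return classifier.predict_proba([message])[0][1] >= threshold
```

On the toy data above, the word-list check flags the benign apology and misses the last toxic message, while a trained classifier can, in principle, use context to get both right.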
Hosts can toggle the chat toxicity filter on and off. The filter is turned on by default, so participants cannot send toxic messages. When a participant attempts to send a toxic message, they receive a private warning and the message is not sent. Hosts are not notified of blocked messages.
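A minimal sketch of that send path might look like the following; the `Session` and `Participant` classes, method names, and warning text are hypothetical stand-ins, not the actual product code.

```python
from dataclasses import dataclass, field

def is_toxic(message: str) -> bool:
    # Stand-in for the machine-learning classifier sketched earlier.
    return "idiot" in message.lower()

@dataclass
class Participant:
    name: str
    warnings: list = field(default_factory=list)  # private notices, seen only by this user

    def notify_privately(self, text: str) -> None:
        self.warnings.append(text)

@dataclass
class Session:
    toxicity_filter_enabled: bool = True  # the filter is on by default
    chat_log: list = field(default_factory=list)

    def send_message(self, sender: Participant, message: str) -> None:
        if self.toxicity_filter_enabled and is_toxic(message):
            # Warn only the sender and drop the message;
            # the host receives no notification.
            sender.notify_privately("Your message may be harmful and was not sent.")
            return
        self.chat_log.append((sender.name, message))
```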
Important to know:
- A host's chat toxicity filter preference will be set as the default for all of their other sessions
- Hosts will be notified if a co-host turns the chat toxicity filter on or off during their session