Fairwords Watchlist
  • Warble: Warble offers a service for U.S. employers to get better information faster about toxic workplace behavior. Employees can report anonymously on over 70 types of toxic behavior; HR and management can then easily collaborate to address issues, document outcomes, and get aggregated reporting across the entire organization.
  • Vanilla: Vanilla is a tool that scans your social media for objectionable posts and lets you manage them. Think of it as spring cleaning for your social history.
  • Diversity Sphere: A simple, powerful tool to detect non-inclusive and gender-biased language. Companies use Diversity Sphere to scan their websites, blogs, job descriptions, and internal communications.
  • Tune by Google: Tune is an experimental Chrome extension from Google's Jigsaw that lets people customize how much toxicity they want to see in comments. Set the “volume” of conversations on a number of popular platforms, including YouTube, Facebook, Twitter, Reddit, and Disqus.
  • SafeSocial: SafeSocial's free Chrome browser extension uses AI and machine learning to flag questionable posts as you type, helping create a less toxic environment on social media (LinkedIn, Facebook, Twitter). A premium add-on is coming soon. Learn more at https://safesocial.io/
  • Bodyguard: Bodyguard protects people and businesses against toxic online content. It works in real time and preventively, understands context and Internet language, is multilingual, and is highly customizable. Available as a free app for individuals and an API-based solution for businesses.
  • Inclusive Language Tool: This tool checks content for non-inclusive language, explains the context of why it might be offensive, and then gives suggestions to fix it. The English language has evolved over centuries, and today's readers and customers may find dated language offensive.
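As a rough illustration of what this kind of checker automates, here is a minimal sketch in Python. The term list, suggested replacements, and function name are illustrative assumptions for the sketch, not the actual rules or API of any tool on this list (real products typically use ML models rather than a fixed word list).

```python
import re

# Illustrative term-to-suggestion mapping -- an assumption for this sketch,
# not the actual rule set of any tool in the watchlist.
SUGGESTIONS = {
    "blacklist": "blocklist",
    "whitelist": "allowlist",
    "manpower": "workforce",
    "chairman": "chairperson",
}

def check_inclusive_language(text):
    """Return (matched term, suggestion, position) tuples for flagged terms."""
    findings = []
    for term, suggestion in SUGGESTIONS.items():
        # \b word boundaries avoid flagging substrings of longer words.
        for match in re.finditer(r"\b%s\b" % re.escape(term), text, re.IGNORECASE):
            findings.append((match.group(0), suggestion, match.start()))
    return sorted(findings, key=lambda f: f[2])

for term, suggestion, pos in check_inclusive_language(
    "Add the vendor to the whitelist; the blacklist stays."
):
    print(f"{pos}: '{term}' -> consider '{suggestion}'")
```

A production checker would add context explanations per term and handle inflected forms, but the flag-and-suggest loop is the core idea.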
  • Dost for Slack: Dost (pronounced like "toast") means friend. Dost for Slack is an AI assistant that helps people create safe and inclusive messages in Slack. Dost detects microaggressions and toxicity in messages, educates the sender, and nudges them to take corrective action.
  • Profanity & Toxicity Detection: Profanity & Toxicity Detection for User-Generated Content is a set of dedicated semantic models for toxic and aggressive content. It was built on various types of user-generated content (comments, forums, tweets, Facebook posts, etc.).