Fairwords Watchlist
- Warble offers a service that helps U.S. employers get better information, faster, about toxic workplace behavior. Employees can report anonymously on over 70 types of toxic behavior, and HR and management can then easily collaborate to address issues, document outcomes, and get aggregated reporting across the entire organization.
- Vanilla is a tool that scans your social media for objectionable posts and allows you to manage them. Think of it as spring cleaning for your social history.
- Diversity Sphere is a simple, powerful tool to detect non-inclusive and gender-biased language. Companies use it to scan their websites, blogs, job descriptions, and internal comms.
- Tune is an experimental Chrome extension from Google's Jigsaw that lets people customize how much toxicity they want to see in comments. Set the “volume” of conversations on a number of popular platforms, including YouTube, Facebook, Twitter, Reddit, and Disqus.
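Tune's “volume” idea can be sketched as a simple threshold filter. The function name, the sample comments, and the hard-coded scores below are all illustrative assumptions, not Tune's actual implementation; in practice, toxicity scores for comments come from a model such as Jigsaw's Perspective API.

```python
# Toy sketch of "volume"-style comment filtering (illustrative only).
# Each comment is paired with a toxicity score in [0, 1].

def filter_comments(comments, volume):
    """Keep comments whose toxicity score is at or below the chosen volume.

    volume=0.0 hides all but the mildest comments; volume=1.0 shows everything.
    """
    return [text for text, score in comments if score <= volume]

# Hypothetical comments with made-up scores for demonstration.
comments = [
    ("Great point, thanks for sharing!", 0.02),
    ("This is a dumb take.", 0.55),
    ("You are an idiot.", 0.92),
]

print(filter_comments(comments, volume=0.6))
```

Turning the volume down simply lowers the threshold, so harsher comments drop out of view first while the underlying thread is untouched.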
- Safesocial's free Chrome browser extension uses AI and machine learning to flag questionable posts as you type. We help create a less toxic environment on social media (LinkedIn, Facebook, Twitter). Premium add-on coming soon. Learn more at https://safesocial.io/
- Bodyguard protects people and businesses against online toxic content: real-time and preventive, understands context and Internet language, multilingual, and highly customizable. Available as a free app for individuals and an API-based solution for businesses.
- This tool checks content for non-inclusive language, explains why it might be offensive in context, and then gives suggestions to fix it. The English language has evolved over centuries, and today's readers and customers may find dated language offensive.
- Dost (pronounced like “toast”) means “friend.” Dost for Slack is an AI assistant that helps people create safe and inclusive messages in Slack. Dost detects micro-aggressions and toxicity in messages, educates the sender, and nudges them to take corrective action.
- Profanity & Toxicity Detection for User-Generated Content is a set of dedicated semantic models for toxic and aggressive content, built on various types of user-generated content (comments, forums, tweets, Facebook posts, etc.).