Social bias analysis of webpages (built by researchers, for researchers).
Analyze a webpage for social bias using AI models from state-of-the-art bias detection papers (implemented by the researchers who wrote them).
You can find the papers behind the models used in this extension in our docs (https://ethical-spectacle-research.gitbook.io/fair-ly).
The tasks this browser extension is meant to demonstrate are:
- Binary text classification (sentence -> biased/unbiased).
- Multi-label aspect classification (sentence -> gender bias, racial bias, etc.).
- Multi-label token classification (sentence -> word-level labels for generalization, unfairness, and stereotypes); a minimal code sketch follows this list.
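If you want to run the same tasks outside the extension, the models can be loaded with standard Hugging Face pipelines. The sketch below is a minimal, assumed setup: the model IDs are placeholders (the actual checkpoints are listed in our docs), and the aspect classifier is assumed to be configured as a multi-label model.

```python
from transformers import pipeline

# Placeholder model IDs -- swap in the checkpoints listed in the Fair-ly docs.
binary_clf = pipeline("text-classification", model="your-org/binary-bias-classifier")
aspect_clf = pipeline("text-classification", model="your-org/bias-aspect-classifier", top_k=None)
token_clf = pipeline("token-classification", model="your-org/gus-token-classifier",
                     aggregation_strategy="simple")

sentence = "Tall people are always so clumsy."

print(binary_clf(sentence))   # e.g. [{'label': 'BIASED', 'score': 0.97}]
print(aspect_clf(sentence))   # one score per aspect label (gender, racial, ...)
print(token_clf(sentence))    # word-level spans tagged generalization/unfairness/stereotype
```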
Our Python package (the-fairly-project) also provides a ready-made model inference pipeline for easy use in code ;).
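As a rough illustration of that flow (one analyzer object that runs the binary, aspect, and token-level models over a piece of text), here's a hypothetical usage sketch; the class and argument names are illustrative, not the package's confirmed API -- see the docs for the real interface.

```python
# Hypothetical usage of the-fairly-project -- names below are illustrative only;
# check the docs (https://ethical-spectacle-research.gitbook.io/fair-ly) for the real API.
from fairly import TextAnalyzer  # assumed import path

analyzer = TextAnalyzer(
    bias="binary",   # sentence -> biased/unbiased
    classes=True,    # multi-label aspect classification (gender, racial, etc.)
    ner="gus",       # token-level generalization/unfairness/stereotype tags
)

result = analyzer.analyze("Tall people are always so clumsy.")
print(result)
```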
A note about The Fair-ly Project:
The Fair-ly Project is an open-source suite of resources for bias detection, started by authors of recent bias detection papers to make these analysis tools more accessible to devs and users of all kinds. The project includes streamlined documentation, a Python package covering many models/tasks, and this extension. If you'd like to contribute, check out our GitHub: https://github.com/Ethical-Spectacle/fair-ly.