Description from extension meta
Automatically sanitize sensitive information in prompts before submitting to LLM interfaces
Description from store
Sitr automatically sanitizes sensitive, censored, or custom-defined keywords in AI prompts before they are submitted to large language model (LLM) interfaces. This ensures safer interactions by filtering out undesired or inappropriate terms while preserving the core intent of your input. Whether you're working in a professional environment, need to comply with content policies, or simply want to protect personal or confidential data, Sitr provides seamless protection without disrupting your workflow. Lightweight, efficient, and privacy-conscious, it runs silently in the background to enhance prompt hygiene and support responsible AI usage.
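The listing doesn't document how the sanitization works internally. A minimal sketch of the general approach it describes — replacing custom-defined keywords and common sensitive patterns with placeholders before a prompt is submitted — might look like the following TypeScript. The keyword list, the built-in patterns, and the placeholder format are illustrative assumptions, not Sitr's actual implementation.

```typescript
// Hypothetical sketch of prompt sanitization; not Sitr's actual code.
// Built-in patterns for common sensitive data (emails, API-key-like tokens)
// are assumptions chosen for illustration.
const BUILT_IN_PATTERNS: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]"],
  [/\b[A-Za-z0-9_-]{32,}\b/g, "[TOKEN]"],
];

// Replace user-defined keywords and built-in patterns with placeholders.
function sanitizePrompt(prompt: string, customKeywords: string[]): string {
  let out = prompt;
  for (const keyword of customKeywords) {
    // Escape regex metacharacters so the keyword matches literally,
    // then redact it case-insensitively.
    const escaped = keyword.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    out = out.replace(new RegExp(escaped, "gi"), "[REDACTED]");
  }
  for (const [pattern, placeholder] of BUILT_IN_PATTERNS) {
    out = out.replace(pattern, placeholder);
  }
  return out;
}

// Example: sanitize a prompt before it reaches the LLM interface.
const raw = "Contact jane.doe@acme.com about Project Falcon.";
console.log(sanitizePrompt(raw, ["Project Falcon"]));
// -> "Contact [EMAIL] about [REDACTED]."
```

A real extension would run a function like this in a content script that intercepts the prompt field's submit event, but that wiring is likewise an assumption here.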
Latest reviews
- (2025-08-07) hassan Almeftah: Can’t believe this didn’t exist before. Instantly made me feel safer using AI tools. Subtle, smart, and just works.
Statistics
Installs: 13
Rating: 5.0 (5 votes)
Last update: 2025-08-09
Version: 1.1.3
Listing languages: en