AI security
taaft.com/ai-securityThere are 6 AI tools and 3 GPTs for AI security.
Get alerts
Number of tools
6
Most popular
ModelRed
▼ Most saved
Free mode
100% free
Freemium
Free Trial
Top featured
-
Notis is the AI intern one message away from your entire tool stack. Dictate ideas, delegate the busywork, and watch it update everything from your CRM to your socials — right from WhatsApp, iMessage, Telegram, or emails. In this week’s release: • Slack: Full support for Slack as a Notis channel. Call @notis from any channel or DM and chat with Notis directly via DMs. • Notis for Teams (PRO+): Invite your team to Notis and manage their licenses. Easily share automations templates with your team mates. • Custom MCP: Add your own custom MCPs to Notis; also available in Advanced Voice Mode. • Multiple triggers for automations: add multiple integration triggers to a single automation (e.g., when a new calendar event is created and when a new calendar event is updated). • Channel page: New https://app.notis.ai/channels channels page in the portal to connect Notis to all your favourite messaging apps. Bug Fixes & QOL improvements • You can now use reactions (e.g 👍) to reply to Notis on Telegram, Slack and iMessage. • Integration triggers duplication bug (only worked first fire!) • Automatically pausing automations that fail to deliver messages to the chosen channel. • Rate limiting each automations to 50/day to avoid burning through your credits by mistake. • You can now delete your Notis account from the user settings. • Usage tab date selector fixed. Will spend next week focusing on updating the documentation and creating a bunch of videos to explain how I’ve designed all of this to work together to transform your business. Wishing you all a wonderful productive week! Flo
-
Stewan Silva🛠️ 2 toolsJan 19, 2026@StatuzWell, I use Statuz myself, so maybe I'm biased. The only way to find out is to try it yourself. Let us know here.
Specialized tools 6
-
Bulletproof your AI models with adaptive red teaming.We built ModelRed because most teams don't test AI products for security vulnerabilities until something breaks. ModelRed continuously tests LLMs for prompt injections, data leaks, and exploits. Works with any provider and integrates into CI/CD. Happy to answer questions about AI security!
-
GPT security specialist with tailored test scenarios. -
Expert in AI prompt security testing. -
Adversarial AI expert aiding in AI red teaming -
Secure generative AI without compromising data
-
Secure your AI models from risks and attacks.
