Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪
Prompture is an API-first library for requesting structured JSON (or any other structured format) from LLMs, validating the output against a schema, and running comparative tests between models.
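The core idea behind this kind of tool can be sketched in a few lines: parse the model's raw text response as JSON, then check each field against a declared schema before the rest of the pipeline trusts it. The snippet below is a minimal stdlib-only illustration of that pattern, not Prompture's actual API; the `SCHEMA` dict, the `validate` helper, and the sample response are all hypothetical.

```python
import json

# Hypothetical schema: maps each required field to its expected Python type.
SCHEMA = {"name": str, "score": float, "tags": list}

def validate(payload: str, schema: dict) -> dict:
    """Parse an LLM's raw text response and check it against a schema.

    Raises on malformed JSON, missing fields, or wrong field types,
    so downstream code only ever sees well-formed data.
    """
    data = json.loads(payload)  # raises json.JSONDecodeError on bad JSON
    for field, expected_type in schema.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):
            raise TypeError(f"{field}: expected {expected_type.__name__}")
    return data

# Example: a well-formed model response passes validation.
raw = '{"name": "gpt-x", "score": 0.92, "tags": ["safety"]}'
print(validate(raw, SCHEMA))
```

Real libraries typically layer richer schema languages (e.g. JSON Schema or Pydantic models) and automatic retries on top of this basic parse-then-validate loop.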
EvalWise is a developer-friendly platform for LLM evaluation and red teaming that helps test AI models for safety, compliance, and performance issues.
🐙 Team Agents unifies 82 AI specialists to solve challenges, with intelligent chat, a requirements analyst, and document upload. A futuristic, modular platform.