Garak - LLM Vulnerability Scanner
Basic Information
- Company/Brand: NVIDIA (Community-maintained)
- Founder: Leon Derczynski
- Country/Region: USA
- Official Website: https://garak.ai/
- GitHub: https://github.com/NVIDIA/garak
- Type: Open-source LLM vulnerability scanner
- Founded: 2023
- License: Apache 2.0
Product Description
Garak (Generative AI Red-teaming and Assessment Kit) is an open-source LLM vulnerability scanner developed by NVIDIA that checks whether an LLM can be made to fail in undesirable ways. It probes models for weaknesses such as hallucination, data leakage, prompt injection, misinformation, toxic content generation, and jailbreaking. Garak combines static, dynamic, and adaptive probing to explore LLM vulnerabilities, drawing on approximately 100 attack vectors and sending up to 20,000 prompts per run.
Core Features
- 100+ Attack Modules: Comprehensive library of attacks ranging from prompt injection to data extraction
- Probes: Generate specific inputs to trigger potential vulnerabilities
- Detectors: Analyze LLM outputs to determine if vulnerabilities have been successfully exploited
- Generators: LLM interfaces (OpenAI, HuggingFace, Ollama, custom REST API)
- Static + Dynamic + Adaptive Probing: Multi-strategy probing approach
- AVID Integration: findings can be shared via the AI Vulnerability Database (AVID), a community vulnerability-sharing platform
- Automated Scanning: Batch automated vulnerability scanning and reporting
- Score Calibration: results are calibrated against a bag of reference models, with calibration data periodically refreshed (most recently in Q2 2025)
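The probe/detector/generator split is visible directly in the CLI. A minimal sketch of exploring the module library, assuming garak is installed (the model and probe names here are illustrative):

```shell
# Enumerate the available modules
garak --list_probes      # attack inputs, e.g. the dan.* and promptinject.* families
garak --list_detectors   # output analyzers that flag successful exploits
garak --list_generators  # model interfaces: openai, huggingface, ollama, rest, ...

# Run one probe family against a local HuggingFace model;
# garak selects matching detectors for each probe automatically
garak --model_type huggingface --model_name gpt2 --probes dan
```

Each run pairs the selected probes with their recommended detectors and produces a per-probe pass/fail summary.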
Scanning Categories
- Hallucination detection
- Data leakage
- Prompt injection
- Misinformation generation
- Toxic content generation
- Jailbreaking attacks
- Bias and discrimination
- And more...
Business Model
- Completely Free and Open-source: Apache 2.0 license
- NVIDIA Support: Continuous development investment by NVIDIA
- Community Contributions: Ongoing updates to probes and detectors by the community
Deployment Methods
- pip installation (Python package)
- Support for HuggingFace, OpenAI, Ollama, and other model interfaces
- AWS Bedrock support
- Command-line tool
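The deployment steps above can be sketched as a quickstart, assuming a PyPI install; the model name is a placeholder, and an OpenAI run reads the `OPENAI_API_KEY` environment variable:

```shell
# Install from PyPI into the current Python environment
python -m pip install -U garak

# Scan an OpenAI-hosted model with the prompt-injection probe family
export OPENAI_API_KEY="sk-..."   # placeholder key
garak --model_type openai --model_name gpt-4o-mini --probes promptinject

# Swap the generator to target other backends, e.g. a local Ollama model
garak --model_type ollama --model_name llama3 --probes promptinject
```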
Target Users
- AI security researchers
- LLM red team testers
- AI compliance auditors
- Model development and fine-tuning teams
- Enterprise AI security teams
Competitive Advantages
- Developed and maintained by NVIDIA
- 100+ attack vectors, among the broadest coverage of any open-source LLM scanner
- Multi-strategy approach combining static, dynamic, and adaptive probing
- Simple installation as a standard Python package (pip)
- Support for major model providers plus custom REST endpoints
- AVID community vulnerability sharing
- Continuously updated probes and calibration data
Comparison with Competitors
| Dimension | Garak | Promptfoo | PyRIT |
|---|---|---|---|
| Maintainer | NVIDIA | Promptfoo, Inc. (independent) | Microsoft |
| Attack Vectors | 100+ | 50+ vulnerability types | Modular |
| Positioning | Vulnerability scanner | Evaluation + Red teaming | Red team automation |
| Multi-round Attacks | Limited | Supported | Strong |
| Compliance Mapping | AVID | OWASP/NIST/MITRE | Basic |
| Reporting | Automated | Web UI | Programmatic |
Relationship with OpenClaw Ecosystem
Garak is a core tool for AI security assessment within the OpenClaw ecosystem. When deploying new AI agents or updating model configurations in OpenClaw, Garak can be used for automated vulnerability scanning to ensure that agents do not exhibit undesirable behaviors under various attack scenarios. The coverage of 100+ attack vectors provides a comprehensive testing benchmark for OpenClaw's security assessments. It is recommended to integrate Garak into OpenClaw's CI/CD pipeline for automated security regression testing.
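One way to wire garak into a CI/CD gate is to parse its JSONL report after a scan and fail the build on regressions. The sketch below is hypothetical: it assumes a report format where `"eval"` entries carry `probe`, `passed`, and `total` fields, and the 0.95 threshold is illustrative; check your garak version's actual report schema before relying on these names.

```python
import json

# Hypothetical CI gate: fail the build when any probe's pass rate drops
# below a threshold. Assumes JSONL report lines whose "eval" entries carry
# probe/passed/total fields (verify against your garak version's output).
def gate(report_lines, min_pass_rate=0.95):
    failures = []
    for line in report_lines:
        entry = json.loads(line)
        if entry.get("entry_type") == "eval" and entry.get("total"):
            rate = entry["passed"] / entry["total"]
            if rate < min_pass_rate:
                failures.append((entry["probe"], round(rate, 2)))
    return failures

# Illustrative report fragment (not real garak output)
sample = [
    '{"entry_type": "eval", "probe": "promptinject", "passed": 98, "total": 100}',
    '{"entry_type": "eval", "probe": "dan", "passed": 70, "total": 100}',
]
print(gate(sample))  # only the "dan" entry falls below the 0.95 threshold
```

In a pipeline, the same function would read the `.report.jsonl` file produced by a scheduled garak run and exit nonzero when `gate()` returns any failures, turning vulnerability scans into automated regression checks.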