Summary of "Don't Make This Mistake: Painful Learnings of Applying AI in Security"
Summary of “Don’t Make This Mistake: Painful Learnings of Applying AI in Security“
Key Technological Concepts and Analysis
Problem Context in Security & AI
- Security efforts often slow down business and development because fixing vulnerabilities is time-consuming and resource-intensive.
- The vulnerability remediation backlog is huge; fixing a single vulnerability can take 5-7 hours, which translates to thousands of hours for companies with many vulnerabilities.
- Developers generally dislike vulnerability scanning and remediation tasks since these activities do not contribute positively to their professional growth or business impact.
Goal of Using AI in Security
- Help developers fix security vulnerabilities efficiently without slowing down feature development.
- Minimize mean time to remediate (MTTR) vulnerabilities.
- Enable fixing vulnerabilities at scale.
Initial Attempts with AI (ChatGPT and Others)
- Simple prompts to fix vulnerabilities (e.g., XSS in JavaScript) sometimes worked but often failed due to lack of context or incorrect assumptions.
- AI-generated fixes had mixed results:
- About 29% were good fixes that resolved issues.
- 90% sometimes introduced new vulnerabilities or only partially fixed problems.
- 52% were outright bad, introducing broken code or irrelevant changes.
- Many fixes were generic templates requiring manual implementation.
- AI hallucination increased with more input data, counterintuitively causing less reliable fixes.
- AI often produced persuasive but incomplete or incorrect fixes (e.g., improper escaping for header manipulation, partial encryption fixes).
- Blindly applying AI fixes at scale is risky due to false positives and incomplete context.
Challenges Identified
- AI lacks full context of the application and codebase, leading to partial or incorrect fixes.
- Fixes often affect only one part of the code, ignoring dependencies and integration points.
- AI-generated fixes sometimes break application logic or do not fully resolve the vulnerability.
- Managing false positives from static analysis tools (SAST) complicates automated remediation.
- Large-scale automated remediation requires precise, context-aware, and validated fixes.
Improved Approach – Hybrid AI
- Combine AI with traditional static analysis and code parsing techniques.
- Use Abstract Syntax Trees (AST) to parse and understand code structure precisely.
- Provide AI with very focused, custom prompts targeting specific vulnerabilities and code patterns.
- Validate AI output programmatically and integrate fixes into templates or algorithms to maintain consistency.
- This approach reduces hallucinations and improves repeatability and scalability of fixes.
- Still requires human supervision and validation before merging fixes.
Research & Tools
- Tested AI fixes on known vulnerable apps (WebGoat, Juice Shop) and commercial SaaS tools.
- Examples of AI tools tested include ChatGPT (versions 3, 3.5, 4), GitHub Advanced Security’s automatic remediation.
- GitHub claims about 90% of findings can be fixed automatically, but fixes still require review.
- Discussion on emerging MCP (Model Control Plane) workflows that validate vulnerabilities through live testing before generating fixes and pull requests.
- Emphasis on the need to prove vulnerabilities exist before fixing them automatically.
Practical Advice and Insights
- AI tools are like eager junior developers: they try to help but require oversight.
- Companies forbidding AI use may fall behind competitively.
- Vibe coding (AI-assisted coding from product specs) is gaining traction in B2B software companies.
- AI can save development time but never fully replaces human judgment.
- Security teams must take responsibility to verify and fix vulnerabilities thoroughly.
- Zero trust and continuous verification remain essential principles.
Guides, Tutorials, and Recommendations
How to Use AI in Security Remediation
- Use AI for targeted, specific fixes rather than broad, unfocused prompts.
- Combine AI suggestions with static analysis and AST parsing for reliable fixes.
- Implement multi-stage validation including testing and rescanning.
- Avoid blindly applying AI fixes; always review and test before merging.
Dealing with AI Hallucinations
- Limit input context to avoid overwhelming the AI.
- Separate system prompts (trusted instructions) from user prompts to prevent prompt injection.
- Use stepwise prompting or retrieval-augmented generation (RAG) methods for complex fixes.
Scaling Fixes
- Develop fix templates for common vulnerability patterns.
- Automate integration of AI-generated code into these templates.
- Focus on repeatability and consistency.
Human Factors
- Encourage developers to embrace AI tools but maintain critical oversight.
- Educate teams on the limitations and risks of AI-generated fixes.
- Promote a culture of verification and continuous improvement.
Main Speakers and Sources
- Primary Speaker: Atan (co-founder of Mob, former IBM application security engineer)
- Other Contributors:
- Kevin (referenced but details incomplete)
Category
Technology
Share this summary
Featured Products
Artificial Intelligence for Cybersecurity: Develop AI approaches to solve cybersecurity problems in your organization
Digital Static Electricity Tester - Precise Measuring Instrument Electrostatic Analyzer Self Calibration Resistivity Tester
Audacity - Sound and Music Editing and Recording Software - Download Version [Download]
ChatGPT Uses & Prompts: a QuickStudy Laminated Reference Guide
24 Deadly Sins of Software Security: Programming Flaws and How to Fix Them