shannon
Fully autonomous AI hacker that finds actual exploits in your web apps. Shannon has achieved a 96.15% success rate on the hint-free, source-aware XBOW Benchmark.
https://github.com/KeygraphHQ/shannon
Top AI Security YouTube Videos — November 2025 - https://youtube.com/playlist?list=PLFO56KBxdGBeXiLJ8JHxGliXzXZNq-f-x
This playlist collects more than 30 new AI security talks from SAINTCON, Black Hat, BSides, NorthSec, Hack In The Box, LABScon, DevCon, DjangoCon, and Everything Open across November 2025. Sessions explore AI-driven cyber attacks, agentic workflows and MCP abuse, jailbreak tactics and guardrail failures, LLM-enabled malware and offensive tooling, AI disinformation operations, secure adoption of LLMs in FinTech and cloud, and practical lessons from real-world research and incident response.
https://medium.com/ai-security-hub/top-ai-security-youtube-videos-november-2025-5f09db69ca42
Assessing Risks and Impacts of AI (ARIA).pdf
7.7 MB
Assessing Risks and Impacts of AI (ARIA) - NIST
Source: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.700-2.pdf
📌 Components of MLSecOps
🔹 Model Hardening
Strengthen models with adversarial training and reduce vulnerability to attacks.
🔹 Dataset Integrity & Validation
Detect poisoned data, validate distributions, and identify anomalies in input.
🔹 Data Security & Governance
Protect training data, enforce access control, and manage sensitive information securely.
🔹 MLOps Integration
Ensure continuous security testing, CI/CD protection, and safe ML deployments.
🔹 Supply Chain Security
Secure model files, dependencies, and detect malicious or tampered libraries.
🔹 Audit, Compliance & Logging
Track model changes, maintain audit trails, and meet regulatory requirements.
🔹 Model Explainability & Transparency
Understand model decisions, detect bias, and ensure responsible model behavior.
🔹 Secure Deployment & Serving
Enforce authentication, protect inference endpoints, and run encrypted model serving.
🔹 Model Monitoring & Drift Detection
Detect drift, anomalies, degradation, and emerging risks in real time (a minimal detection sketch follows this list).
🔹 Threat Detection & Attack Prevention
Identify extraction attempts, inversion attacks, prompt injection, and API abuse.
(By Shalini Goyal)
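To make the monitoring component above concrete, here is a minimal sketch of statistical drift detection, assuming a two-sample Kolmogorov-Smirnov test on a single feature; the library choice, window sizes, and alert threshold are illustrative assumptions, not something prescribed by the original list.

```python
# Compare a live production window of one feature against the training-time
# baseline; alert when the distributions diverge. Sketch only - real MLSecOps
# monitoring would track many features, labels, and model outputs.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag drift when the live window is unlikely to come from the baseline."""
    _statistic, p_value = ks_2samp(baseline, live)
    return p_value < alpha

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)      # feature values seen during training
live_same = rng.normal(0.0, 1.0, 500)      # production window, same distribution
live_shifted = rng.normal(0.8, 1.3, 500)   # production window after a shift

print(drift_alert(baseline, live_same))     # expected: False (no alert)
print(drift_alert(baseline, live_shifted))  # expected: True (drift flagged)
```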
Neurogrid CTF: The Ultimate AI Security Showdown - Agent of 0ca / BoxPwnr Write-up
On November 20-24, 2025, I participated with BoxPwnr in Neurogrid CTF, the first AI-only CTF competition hosted by Hack The Box, with $50k in AI credits for the top 3 teams. This wasn't a typical CTF where humans solve challenges; instead, AI agents competed autonomously in a hyper-realistic cyber arena. My autonomous agent secured 5th place, solving 38/45 flags (84.4% completion) across 36 challenges without any manual intervention.
https://0ca.github.io/ctf/ai/security/2025/11/28/neurogrid-ctf-writeup.html
Model Context Protocol (MCP) Security
https://github.com/cosai-oasis/ws4-secure-design-agentic-systems/blob/mcp/model-context-protocol-security.md
Google Antigravity just deleted the contents of my whole drive.
https://www.reddit.com/r/google_antigravity/comments/1p82or6/google_antigravity_just_deleted_the_contents_of/
HeliosBank LLM CTF Series - LLM DFIR CTF
Each incident simulates a real-world AI-driven compromise inside HeliosBank's internal systems.
https://eliwoodward.github.io/LLM_CTF/
AI agents find $4.6M in blockchain smart contract exploits
https://red.anthropic.com/2025/smart-contracts/
AI Security Newsletter - November 2025
A digest of AI security research, insights, reports, upcoming events, tools, videos, and resources, all in one place.
https://www.linkedin.com/posts/adgnji_aisecurity-adversarialai-redteamai-activity-7401545671746740225-L9Xt
🧰 raptor - Raptor turns Claude Code into a general-purpose AI offensive/defensive security agent. By using Claude.md and creating rules, sub-agents, and skills, we configure the agent for adversarial thinking and perform research or attack/defense operations. ⭐️ 124 https://github.com/gadievron/raptor
Adversarial AI Digest - November 2025
https://medium.com/ai-security-hub/adversarial-ai-digest-november-2025-a7c7776c2f2a
NVIDIA AI Blueprint: Vulnerability Analysis for Container Security
Rapidly identify and mitigate container security vulnerabilities with generative AI.
https://github.com/NVIDIA-AI-Blueprints/vulnerability-analysis
I hacked the System Instructions for Nano Banana
Gemini 2.5 Flash Image (Nano Banana) spills its system prompt.
https://medium.com/@JimTheAIWhisperer/i-hacked-the-system-instructions-for-nano-banana-bd53703eff36
PyTorch Users at Risk: Unveiling 3 Zero-Day PickleScan Vulnerabilities
Three critical zero-day vulnerabilities (CVSS 9.3) found by JFrog in PickleScan allow bypassing the PyTorch ML model scanner, letting malicious models hide and execute code.
https://jfrog.com/blog/unveiling-3-zero-day-vulnerabilities-in-picklescan/
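For background on why a pickle scanner exists at all, here is a minimal, well-known sketch of the malicious-pickle primitive such scanners try to catch; it is illustrative only, not one of the JFrog bypasses.

```python
# A pickle payload can name an arbitrary callable via __reduce__, so loading
# an untrusted model file can execute attacker code during deserialization.
import os
import pickle

class Payload:
    def __reduce__(self):
        # Unpickling calls os.system("echo pwned") instead of rebuilding
        # an object - the classic malicious-model primitive.
        return (os.system, ("echo pwned",))

blob = pickle.dumps(Payload())  # what a booby-trapped .pkl/.pt file contains
pickle.loads(blob)              # prints "pwned": code ran at load time
```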
Security research in the age of AI tools
In this article, I show how my work as a security researcher has changed and how new AI tools have impacted my workflow, productivity, and overall approach to security research. I present two vulnerabilities discovered by other researchers, show how I investigated them to create security checks for our Invicti DAST product, and describe how AI tools helped me in the process.
https://invicti.com/blog/security-labs/security-research-in-the-age-of-ai-tools