AISecHub
AI Security Digest – Week 1, 2026 1️⃣ Vulnerability that allows unauthenticated threat actors to gain full control over n8n instances – CVE-2026-21858 – CVSS 10.0 https://www.cyera.com/research-labs/ni8mare-unauthenticated-remote-code-execution-in-n8n-cve…
Medium
AI Security Digest — Week 1, 2026
Critical unauth RCE in n8n, Claude Code trust boundary + dependency hijack, NIST AI-agent security RFI, 4 Copilot issues (prompt leaks…
🔥1
The community hub for crowd-sourced system prompt leak verification. CL4R1T4S!
https://leakhub.ai
https://github.com/elder-plinius/CL4R1T4S
https://leakhub.ai
https://github.com/elder-plinius/CL4R1T4S
❤2
Vulnerable MCP Servers Lab - https://github.com/appsecco/vulnerable-mcp-servers-lab
This repository contains intentionally vulnerable implementations of Model Context Protocol (MCP) servers (both local and remote). Each server lives in its own folder and includes a dedicated README.md with full details on what it does, how to run it, and how to demonstrate/attack the vulnerability.
This repository contains intentionally vulnerable implementations of Model Context Protocol (MCP) servers (both local and remote). Each server lives in its own folder and includes a dedicated README.md with full details on what it does, how to run it, and how to demonstrate/attack the vulnerability.
GitHub
GitHub - appsecco/vulnerable-mcp-servers-lab: A collection of servers which are deliberately vulnerable to learn Pentesting MCP…
A collection of servers which are deliberately vulnerable to learn Pentesting MCP Servers. - appsecco/vulnerable-mcp-servers-lab
🔥3
Weaponizing Apple’s AI for Offensive Operations - Part 2
https://hxr1.ghost.io/weaponizing-apples-ai-for-offensive-operations-part-2/ | https://www.youtube.com/watch?v=UooCY59nQSQ
https://hxr1.ghost.io/weaponizing-apples-ai-for-offensive-operations-part-2/ | https://www.youtube.com/watch?v=UooCY59nQSQ
hxr1
Weaponizing Apple’s AI for Offensive Operations - Part 2
In Part 1 of this series, we focused on CoreML and Vision, showing how .mlmodel files could be weaponized to carry encrypted payloads inside weight arrays, and how Vision OCR could be abused as a covert key oracle to unlock them. The key insight was that…
Back to the Future: Hacking and Securing Connection-based OAuth Architectures in Agentic AI and Integration Platforms
https://www.youtube.com/watch?v=__NtTfL0oPw
https://www.youtube.com/watch?v=__NtTfL0oPw
YouTube
Back to the Future: Hacking and Securing Connection-based OAuth Architectures
Back to the Future: Hacking and Securing Connection-based OAuth Architectures in Agentic AI and Integration Platforms
Access delegation is indispensable for Agentic AI and Integration Platforms, where orchestration engines (e.g., Microsoft Power Automate…
Access delegation is indispensable for Agentic AI and Integration Platforms, where orchestration engines (e.g., Microsoft Power Automate…
ATLANTIS: AI-driven Threat Localization, Analysis, and Triage Intelligence System - Team Atlanta
https://www.youtube.com/watch?v=DVaaEDuDvcc
https://www.youtube.com/watch?v=DVaaEDuDvcc
YouTube
POC2025 | ATLANTIS: AI-driven Threat Localization, Analysis, and Triage Intelligence System
📌 Title
ATLANTIS: AI-driven Threat Localization, Analysis, and Triage Intelligence System
📌 Speaker
Woosun Song
(@Georgia Tech and Team Atlanta)
#POC #PowerOfCommunity #POC2025
ATLANTIS: AI-driven Threat Localization, Analysis, and Triage Intelligence System
📌 Speaker
Woosun Song
(@Georgia Tech and Team Atlanta)
#POC #PowerOfCommunity #POC2025
From Buffer Overflows to Breaking AI: Two Decades of ZDI Vulnerability Research https://www.youtube.com/watch?v=_eem7AVAMpI&t=4s
YouTube
POC2025 | From Buffer Overflows to Breaking AI: Two Decades of ZDI Vulnerability Research
📌 Title
From Buffer Overflows to Breaking AI: Two Decades of ZDI Vulnerability Research
📌 Speaker
Brian Gorenc
Trend Micro’s Zero Day Initiative
#POC #PowerOfCommunity #POC2025
From Buffer Overflows to Breaking AI: Two Decades of ZDI Vulnerability Research
📌 Speaker
Brian Gorenc
Trend Micro’s Zero Day Initiative
#POC #PowerOfCommunity #POC2025
IBM AI ('Bob') Downloads and Executes Malware
https://www.promptarmor.com/resources/ibm-ai-(-bob-)-downloads-and-executes-malware
https://www.promptarmor.com/resources/ibm-ai-(-bob-)-downloads-and-executes-malware
Promptarmor
IBM AI ('Bob') Downloads and Executes Malware
IBM's AI coding agent 'Bob' has been found vulnerable to downloading and executing malware without human approval through command validation bypasses exploited using indirect prompt injection.
ZombieAgent: New ChatGPT Vulnerabilities Let Data Theft Continue (and Spread)
https://www.radware.com/blog/threat-intelligence/zombieagent/
https://www.radware.com/blog/threat-intelligence/zombieagent/
Where AI Systems Leak Data: A Lifecycle Review of Real Exposure Paths
https://www.praetorian.com/blog/where-ai-systems-leak-data-a-lifecycle-review-of-real-exposure-paths/
https://www.praetorian.com/blog/where-ai-systems-leak-data-a-lifecycle-review-of-real-exposure-paths/
Praetorian
Where AI Systems Leak Data: A Lifecycle Review of Real Exposure Paths
AI data exposure rarely looks like a breach. No alerts are triggered, no obvious failure occurs, and most of the time nothing appears to be wrong at all. Instead, sensitive information moves through retrieval, reasoning, and storage layers that were never…
AISecHub Medium Publication - Top #9 - 904 followers - https://medium.com/ai-security-hub
How to submit a story to AISecHub publication? - https://medium.com/ai-security-hub/submission-guideline-5f5406d4b362
🥳💅🙏
How to submit a story to AISecHub publication? - https://medium.com/ai-security-hub/submission-guideline-5f5406d4b362
🥳💅🙏
🔥2
OWASP Agentic AI Top 10: Threats in the Wild
https://labs.lares.com/owasp-agentic-top-10/
https://labs.lares.com/owasp-agentic-top-10/
Lares Labs
OWASP Agentic AI Top 10: Threats in the Wild
Agentic AI Applications go beyond simple question-and-answer interactions. They autonomously pursue complex goals, reasoning, planning, and executing multi-step tasks with minimal human intervention. Unlike LLMs/chatbots that wait for explicit instructions…
What AI Agents Can Teach Us About NHI Governance - https://blog.gitguardian.com/what-ai-agents-can-teach-us-about-nhi-governance/
GitGuardian Blog - Take Control of Your Secrets Security
What AI Agents Can Teach Us About NHI Governance
Agentic AI is a stress test for non-human identity governance. Discover how and why identity, trust, and access control must evolve to keep automation safe.
The First Question Security Should Ask on AI Projects
https://cloudsecurityalliance.org/blog/2026/01/09/the-first-question-security-should-ask-on-ai-projects
https://cloudsecurityalliance.org/blog/2026/01/09/the-first-question-security-should-ask-on-ai-projects
cloudsecurityalliance.org
First Question Security Should Ask on AI Projects | CSA
Explains core questions when adopting AI: why, outcomes, and how, focusing on security, data, and risk.
AI Tool Poisoning: How Hidden Instructions Threaten AI Agents
https://www.crowdstrike.com/en-us/blog/ai-tool-poisoning/
https://www.crowdstrike.com/en-us/blog/ai-tool-poisoning/
CrowdStrike.com
AI Tool Poisoning: How Hidden Instructions Threaten AI Agents
Learn what AI tool poisoning is, how attackers manipulate AI agent tools, and how organizations can defend against this emerging threat.
👍1
ZombieAgent: New ChatGPT Vulnerabilities Let Data Theft Continue (and Spread)
https://www.radware.com/blog/threat-intelligence/zombieagent/
https://www.radware.com/blog/threat-intelligence/zombieagent/