AISecHub – Telegram
AISecHub
1.49K subscribers
556 photos
36 videos
254 files
1.44K links
https://linktr.ee/aisechub managed by AISecHub. Sponsored by: innovguard.com
Download Telegram
AI Security Tools - November 2025

Open-source AI security repositories published or significantly updated projects in November 2025.

https://medium.com/ai-security-hub/ai-security-tools-november-2025-82ead4a6fb62
1🔥1👏1
cybersecurity-forecast-2026-en.pdf
2 MB
Artificial Intelligence, Cybercrime, and Nation States - Google Forecast 2026

🤖 AI Threats

1️⃣ Adversaries Fully Embrace AI: We anticipate threat actors will move decisively from using AI as an exception to using it as the norm. They will leverage AI to enhance the speed, scope, and effectiveness of operations, streamlining and scaling attacks across the entire lifecycle.

2️⃣ Prompt Injection Risks: A critical and growing threat is prompt injection, an attack that manipulates AI to bypass its security protocols and follow an attacker's hidden command. Expect a significant rise in targeted attacks on enterprise AI systems.

3️⃣ AI-Enabled Social Engineering: Threat actors will accelerate the use of highly manipulative AI-enabled social engineering. This includes vishing with AI-driven voice cloning to create hyperrealistic impersonations of executives or IT staff, making attacks harder to detect and defend against.

🕵️‍♂️ AI Advantages

1️⃣ AI Agent Paradigm Shift: Widespread adoption of AI agents will create new security challenges, requiring organizations to develop new methodologies and tools to effectively map their new AI ecosystems. A key part of this will be the evolution of identity and access management (IAM) to treat AI agents as distinct digital actors with their own managed identities.

2️⃣ Supercharged Security Analysts: AI adoption will transform security analysts’ roles, shifting them from drowning in alerts to directing AI agents in an “Agentic SOC.” This will allow analysts to focus on strategic validation and high-level analysis, as AI handles data correlation, incident summaries, and threat intelligence drafting.

Authors: Adam Greenberg, Sandra Joyce, Charles Carmakal, Jon R. Ramsey

Source: https://cloud.google.com/blog/topics/threat-intelligence/cybersecurity-forecast-2026
🔥3👍2🥱2
AI / LLM Red Team Field Manual & Consultant’s Handbook

https://github.com/Shiva108/ai-llm-red-team-handbook
🔥4👍3🤝2
AI Security Tools - November 2025

🧰 awesome-claude-skills - Curated Claude Skills collection with a Security & Systems section wiring Claude into web fuzzing, MCP hardening, and security automation workflows. ⭐️5.5k https://github.com/ComposioHQ/awesome-claude-skills

🧰 IoT HackBot - IoT security toolkit combining Python CLI tools and Claude Code skills for automated discovery, firmware analysis, and exploitation-focused testing of IoT devices. ⭐️339 https://github.com/BrownFineSecurity/iothackbot

🧰 PatchEval - Benchmark for evaluating LLMs and agents on patching real-world vulnerabilities using Dockerized CVE testbeds and automated patch validation. ⭐️138 https://github.com/bytedance/PatchEval

🧰 VulnRisk - Open-source vulnerability-risk assessment platform providing transparent, context-aware scoring beyond CVSS — ideal for local development and testing. ⭐️84 https://github.com/GurkhaShieldForce/VulnRisk_Public

🧰 Wazuh-MCP-Server - Exposes Wazuh SIEM and EDR telemetry via Model Context Protocol so LLM agents can run threat-hunting and response playbooks against real data. ⭐️83 https://github.com/gensecaihq/Wazuh-MCP-Server

🧰 mcp-checkpoint - Continuously secures and monitors Model Context Protocol operations through static and dynamic scans, revealing hidden risks in agent-tool communications. ⭐️81 https://github.com/aira-security/mcp-checkpoint

🧰 ai-reverse-engineering - AI-assisted reverse engineering tool letting an MCP-driven chat interface orchestrate Ghidra to analyze binaries for security research. ⭐️42 https://github.com/biniamf/ai-reverse-engineering

🧰 whisper_leak - Research toolkit showing how encrypted, streaming LLM conversations leak prompt information via packet sizes and timing; includes capture, training, and benchmark pipeline. ⭐️42 https://github.com/yo-yo-yo-jbo/whisper_leak

🧰 AI / LLM Red Team Field Manual & Consultant’s Handbook - Red-team playbook and consultant’s guide with attack prompts, RoE/SOW templates, OWASP/MITRE mappings, and testing workflows. ⭐️26 https://github.com/Shiva108/ai-llm-red-team-handbook

🧰 LLMGoat - Deliberately vulnerable LLM lab for practicing and understanding OWASP Top 10 LLM vulnerabilities. ⭐️36 https://github.com/SECFORCE/LLMGoat

🧰 Reversecore_MCP - Security-first MCP server empowering AI agents to orchestrate Ghidra, Radare2, and YARA for automated reverse engineering. ⭐️25 https://github.com/sjkim1127/Reversecore_MCP

🧰 system-prompt-benchmark - Testing harness that runs LLM system prompts against 287 prompt-injection, jailbreak, and data-leak attacks using an Ollama-based judge. ⭐️3 https://github.com/KazKozDev/system-prompt-benchmark

🧰 ctrl-alt-deceit - Extends MLEBench with sabotage tasks and monitoring tools to evaluate LLM agents that tamper with code, benchmarks, and usage logs. ⭐️3 https://github.com/TeunvdWeij/ctrl-alt-deceit

🧰 SOC-CERT AI Helper - Chrome extension using Gemini Nano and KEV-backed CVE enrichment to detect and prioritize web threats in-browser. ⭐️1 https://github.com/joupify/soc-cert-guardian-extension

🧰 aifirst-insecure-agent-labs - Chatbot agent exploit lab for practicing prompt injection, system-prompt extraction, and guardrail bypass with NeMo/regex guardrails. ⭐️1 https://github.com/trailofbits/aifirst-insecure-agent-labs

🧰 llm-security-framework - Security framework for AI-assisted development with tiered checklists, threat models, and docs to harden small AI projects quickly. ⭐️0 https://github.com/annablume/llm-security-framework
🔥21👏1
Top AI Security YouTube Videos — November 2025 - https://youtube.com/playlist?list=PLFO56KBxdGBeXiLJ8JHxGliXzXZNq-f-x

This playlist collects more than 30 new AI security talks from SAINTCON, Black Hat, BSides, NorthSec, Hack In The Box, LABScon, DevCon, DjangoCon and Everything Open across November 2025. Sessions explore AI driven cyber attacks, agentic workflows and MCP abuse, jailbreak tactics and guardrail failures, LLM enabled malware and offensive tooling, AI disinformation operations, secure adoption of LLMs in FinTech and cloud, and practical lessons from real world research and incident response.

https://medium.com/ai-security-hub/top-ai-security-youtube-videos-november-2025-5f09db69ca42
🔥2👍1👏1
Assessing Risks and Impacts of AI (ARIA).pdf
7.7 MB
Assessing Risks and Impacts of AI (ARIA) - NIST

Source: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.700-2.pdf
🔥2👍1
This media is not supported in your browser
VIEW IN TELEGRAM
📌 Components of MLSecOps

🔹 Model Hardening
Strengthen models with adversarial training and reduce vulnerability to attacks.

🔹 Dataset Integrity & Validation
Detect poisoned data, validate distributions, and identify anomalies in input.

🔹 Data Security & Governance
Protect training data, enforce access control, and manage sensitive information securely.

🔹 MLOps Integration
Ensure continuous security testing, CI/CD protection, and safe ML deployments.

🔹 Supply Chain Security
Secure model files, dependencies, and detect malicious or tampered libraries.

🔹 Audit, Compliance & Logging
Track model changes, maintain audit trails, and meet regulatory requirements.

🔹 Model Explainability & Transparency
Understand model decisions, detect bias, and ensure responsible model behavior.

🔹 Secure Deployment & Serving
Enforce authentication, protect inference endpoints, and run encrypted model serving.

🔹 Model Monitoring & Drift Detection
Detect drift, anomalies, degradation, and emerging risks in real time.

🔹 Threat Detection & Attack Prevention
Identify extraction attempts, inversion attacks, prompt injection, and API abuse.

(By Shalini Goyal)
3👍1👏1
👍3🔥3🍾1
Neurogrid CTF: The Ultimate AI Security Showdown - Agent of 0ca / BoxPwnr Write-up


On November 20-24, 2024, I participated with BoxPwnr in Neurogrid CTF - the first AI-only CTF competition hosted by Hack The Box with $50k in AI credits for the top 3 teams. This wasn’t a typical CTF where humans solve challenges; instead, AI agents competed autonomously in a hyper-realistic cyber arena. My autonomous agent secured 5th place, solving 38/45 flags(84.4% completion) across 36 challenges without any manual intervention.

https://0ca.github.io/ctf/ai/security/2025/11/28/neurogrid-ctf-writeup.html
🔥42😎2
🤓3🔥2👍1
HeliosBank LLM CTF Series - LLM DFIR CTF

Each incident simulates a real-world AI-driven compromise inside HeliosBank’s internal systems

https://eliwoodward.github.io/LLM_CTF/
🔥3🤔3👍1
AI agents find $4.6M in blockchain smart contract exploits

https://red.anthropic.com/2025/smart-contracts/
😱2👎1🔥1
AISecHub pinned «AI Security Newsletter - November 2025 https://www.linkedin.com/posts/adgnji_aisecurity-adversarialai-redteamai-activity-7401545671746740225-L9Xt?»
🧰 raptor - Raptor turns Claude Code into a general-purpose AI offensive/defensive security agent. By using Claude.md and creating rules, sub-agents, and skills, we configure the agent for adversarial thinking, and perform research or attack/defense operations. ⭐️ 124 https://github.com/gadievron/raptor.
🔥2