Daily Security – Telegram
Forwarded from Officer’s Articles
In Kazakhstan, the largest crypto exchange serving the drug trade has been shut down

The service was considered «respected» in the underground and cooperated with the 20 largest darknet marketplaces, whose combined audience exceeded 5 million users. More than 200 drug-trafficking shops from Kazakhstan, Russia, Ukraine and Moldova routed their funds through it.

The total turnover of the «RAKS exchange» exceeded $224 million.

https://sozmedia.kz/94819/
Forwarded from AISecHub
State of MCP Server Security 2025: 5,200 Servers, Credential Risks - https://astrix.security/learn/blog/state-of-mcp-server-security-2025/

We analyzed over 5,200 unique, open-source MCP server implementations to understand how they manage credentials and what this means for the security of the growing AI agent ecosystem.

- 88% of MCP servers need credentials to function
- 53% rely on static API keys and Personal Access Tokens (PATs)
- Only 8.5% use modern OAuth authentication
- 79% store API keys in basic environment variables (a minimal sketch of this pattern follows the list)
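
As a rough illustration of the dominant pattern the report flags, here is a minimal Python sketch of a server reading a long-lived static key from an environment variable; the variable name is hypothetical and this is not code from any surveyed server:

```python
import os
import sys

# The pattern the survey flags: a long-lived static API key pulled straight
# from the process environment. Anything that can read this variable (a leaked
# .env file, a CI log, another process) holds the credential indefinitely.
# "EXAMPLE_SERVICE_API_KEY" is a hypothetical name used only for illustration.
api_key = os.environ.get("EXAMPLE_SERVICE_API_KEY")
if not api_key:
    sys.exit("Missing EXAMPLE_SERVICE_API_KEY; refusing to start.")

# Even without moving to OAuth, a server can shrink the blast radius by
# sanity-checking the credential and never echoing it into logs or errors.
headers = {"Authorization": f"Bearer {api_key}"}  # passed to downstream HTTP calls
```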

#MCP #ModelContextProtocol #AIAgents #AgentSecurity #CredentialSecurity #SecretsManagement #OAuth #APIKeys #PATs #SecretRotation #LeastPrivilege #AstrixSecurity
Forwarded from Atoms Research
🚨 Balancer potentially exploited

$70.9M moved to a fresh wallet. Tokens moved:
- 6.85K $OSETH
- 6.59K $WETH
- 4.26K $wSTETH

Source

Forwarded from Atoms Research
🔴 Balancer Exploit Update

The project team commented on the incident:

We’re aware of a potential exploit impacting Balancer v2 pools.

Our engineering and security teams are investigating with high priority.

We’ll share verified updates and next steps as soon as we have more information.


Forwarded from Vladimir S. | Officer's Channel (Vladimir S. | officercia)
A very detailed Balancer hack post-mortem (unofficial): https://x.com/officer_secret/status/1985961846805843984?s=46

#security
Forwarded from Netlas.io
📌 LLM Vulnerabilities: how AI apps break — and how to harden them

This piece maps the most common ways LLM-powered systems fail in the real world and turns them into a practical hardening plan. From prompt and indirect injection to over-privileged tools, leaky RAG pipelines, data poisoning, jailbreaks, and supply-chain traps — plus the guardrails that actually help in production.

Key takeaways:
1️⃣ Prompt & indirect injection: attackers hide instructions in web pages, files, or retrieved notes; the model obeys them and exfiltrates secrets or performs unwanted actions.
2️⃣ Jailbreaks & policy evasion: harmless-looking reformulations bypass safety layers; outputs become unsafe or operationally risky.
3️⃣ RAG data leaks: sloppy retrieval exposes internal docs, customer data, and system prompts; cross-tenant bleed is a real risk.
4️⃣ Over-privileged tools/agents: broad filesystem, network, or payment permissions turn one prompt into a breach (a minimal allowlist sketch follows the list).
5️⃣ Poisoning & supply chain: tainted datasets, third-party prompts, and unpinned models/extensions undermine trust.
6️⃣ Output trust & hallucinations: fabricated facts sneak into workflows, tickets, or code — and humans often rubber-stamp them.
7️⃣ Telemetry gaps: without red-team sims and runtime monitoring, you won’t see injection attempts until damage is done.
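
As a rough sketch of what takeaways 1️⃣ and 4️⃣ mean in code, the snippet below treats retrieved documents as untrusted data and gates agent tool calls through an allowlist; the tool names, argument checks, and the deliberately naive pattern filter are assumptions for illustration, not the article's implementation:

```python
import re

# Hypothetical tool registry: only these tools, with these argument checks,
# may ever be executed on behalf of the model.
ALLOWED_TOOLS = {
    "search_docs": lambda args: isinstance(args.get("query"), str) and len(args["query"]) < 200,
    "read_ticket": lambda args: re.fullmatch(r"TICKET-\d{1,6}", str(args.get("id", ""))) is not None,
}

# Naive indicators of instruction-like text hidden in retrieved content.
# A production system would use a dedicated injection classifier; this is a sketch.
INJECTION_PATTERNS = re.compile(
    r"(ignore (all|previous) instructions|reveal (the )?system prompt|send .* to http)",
    re.IGNORECASE,
)

def wrap_untrusted(doc: str) -> str:
    """Mark retrieved text as data, never as instructions, before it reaches the prompt."""
    if INJECTION_PATTERNS.search(doc):
        doc = "[content withheld: possible prompt-injection payload]"
    return f"<untrusted_document>\n{doc}\n</untrusted_document>"

def execute_tool_call(name: str, args: dict) -> None:
    """Refuse anything outside the allowlist or failing argument validation."""
    validator = ALLOWED_TOOLS.get(name)
    if validator is None or not validator(args):
        raise PermissionError(f"Tool call rejected: {name}({args})")
    print(f"Would dispatch {name} with {args}")  # placeholder for the real tool runner

execute_tool_call("read_ticket", {"id": "TICKET-42"})   # allowed
# execute_tool_call("delete_repo", {"name": "prod"})    # raises PermissionError
```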

👉 Read here: https://netlas.io/blog/llm_vulnerabilities/
be careful)
Forwarded from Cointelegraph
🚨 ALERT: A fake Hyperliquid app has appeared on the Google Play Store, according to ZachXBT.

Forwarded from AISecHub
12 LLM CTFs & Challenges - https://taleliyahu.medium.com/llm-ctfs-challenges-03dd55a9b7e4

Hands-on CTFs and labs for LLM security. Train on prompt injection, jailbreaks, guardrail bypass, tool and agent abuse, data leaks, model inversion, and MCP issues.
Worth a read
Forwarded from AISecHub
AI-Powered CAPTCHA Solver

This project is a Python-based command-line tool that uses large multimodal models (LMMs) like OpenAI's GPT-4o and Google's Gemini to automatically solve various types of CAPTCHAs. It leverages Selenium for web browser automation to interact with web pages and solve CAPTCHAs in real-time.

https://github.com/aydinnyunus/ai-captcha-bypass
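
The general pattern is simple enough to sketch in a few lines of Python. The target URL, selectors, and prompt below are assumptions for illustration, not code from the repository, and the OpenAI client reads its key from the environment:

```python
import base64

from openai import OpenAI                      # pip install openai
from selenium import webdriver                 # pip install selenium
from selenium.webdriver.common.by import By

# Hypothetical target page and selectors, used only to show the flow.
driver = webdriver.Chrome()
driver.get("https://example.com/login")

# Screenshot just the CAPTCHA element and base64-encode it for the model.
captcha_png = driver.find_element(By.CSS_SELECTOR, "img.captcha").screenshot_as_png
captcha_b64 = base64.b64encode(captcha_png).decode()

client = OpenAI()  # expects OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Return only the characters shown in this CAPTCHA image."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{captcha_b64}"}},
        ],
    }],
)
solution = response.choices[0].message.content.strip()

driver.find_element(By.CSS_SELECTOR, "input[name='captcha']").send_keys(solution)
driver.quit()
```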
Forwarded from Security Harvester
Analysis of 8 Foundational Cache Poisoning Attacks (HackerOne, GitHub, Shopify) - Part 1
https://herish.me/blog/cache-poisoning-case-studies-part-1-foundational-attacks/

1. The first part of a three-section deep dive analyzing early real-world cache poisoning bugs across HackerOne, GitHub, Shopify, and private programs.
2. Although it once appeared niche, cache poisoning has evolved into a high-impact attack vector affecting CDNs, cloud platforms, server frameworks, and multi-tenant SaaS providers.
3. These early reports demonstrate not only how straightforward misconfigurations can lead to devastating effects, but also how attackers learned to weaponize headers, request behaviors, and cache key inconsistencies to breach platforms with millions of users (a minimal unkeyed-header probe is sketched below).
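
For readers new to the topic, here is a minimal sketch of the classic unkeyed-header probe these case studies build on; the target URL and header are placeholders, and it should only be run against systems you are authorized to test:

```python
import secrets

import requests  # pip install requests

# Placeholder target; only probe systems you are authorized to test.
TARGET = "https://staging.example.com/"
CANARY = f"canary-{secrets.token_hex(4)}.attacker.example"

# A unique cache buster keeps the experiment isolated to a cache entry nobody else uses.
buster = {"cb": secrets.token_hex(8)}

# 1. Send a candidate unkeyed header and check whether it is reflected.
poisoned = requests.get(TARGET, params=buster, headers={"X-Forwarded-Host": CANARY})

# 2. Re-request the same cache key WITHOUT the header. If the canary still
#    appears, the cache stored the header-influenced response: poisoning.
replay = requests.get(TARGET, params=buster)

if CANARY in poisoned.text and CANARY in replay.text:
    print("Header-influenced response was cached -> cache poisoning candidate")
elif CANARY in poisoned.text:
    print("Header reflected but not cached (or the header is part of the cache key)")
else:
    print("No reflection observed for X-Forwarded-Host")
```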

@secharvester