Hacking Articles
House of Pentester
Java Security Risks Explained

JNDI Injection
Scenario: Fake delivery → RCE via LDAP.
Risk: logback.xml loads malicious classes.
Fix: Disable reloadByURL; use Java ≥8u191.
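
A minimal hardening sketch for the fix above, assuming untrusted input could ever reach a JNDI lookup. The trustURLCodebase system properties are the standard JDK switches (false by default since Java 8u191); safeLookup is a hypothetical helper, not code from the original thread.

    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class JndiHardening {
        // Reject URL-style names so user input can never point the lookup
        // at an attacker-controlled ldap:// or rmi:// server.
        static Object safeLookup(String name) throws NamingException {
            if (name.contains(":") || name.contains("/")) {
                throw new NamingException("Rejected suspicious JNDI name: " + name);
            }
            return new InitialContext().lookup("java:comp/env/" + name);
        }

        public static void main(String[] args) {
            // Defence in depth: keep remote class loading disabled even on older JDKs.
            System.setProperty("com.sun.jndi.ldap.object.trustURLCodebase", "false");
            System.setProperty("com.sun.jndi.rmi.object.trustURLCodebase", "false");
            try {
                safeLookup("ldap://attacker.example/Exploit"); // blocked by the filter
            } catch (NamingException e) {
                System.out.println(e.getMessage());
            }
        }
    }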

Deserialization
Scenario: Tampered package → RCE.
Risk: ObjectInputStream executes gadget chains.
Fix: Use ValidatingObjectInputStream; whitelist classes.
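
A minimal sketch of the ValidatingObjectInputStream whitelist, assuming Apache Commons IO is on the classpath; the Order class is a hypothetical stand-in for whatever type the application actually expects on the wire.

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    import org.apache.commons.io.serialization.ValidatingObjectInputStream;

    public class SafeDeserialization {
        // Simple DTO we legitimately expect to receive.
        static class Order implements Serializable {
            String item = "book";
        }

        public static void main(String[] args) throws Exception {
            // Serialize a legitimate object (stand-in for untrusted input).
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(new Order());
            }

            // Whitelist only expected classes; a gadget-chain class in the
            // stream is rejected before it is ever constructed.
            try (ValidatingObjectInputStream in =
                     new ValidatingObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
                in.accept(Order.class);
                Order order = (Order) in.readObject();
                System.out.println("Deserialized item: " + order.item);
            }
        }
    }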

XXE
Scenario: Malicious XML → file read.
Risk: DocumentBuilder parses external entities.
Fix: Disable DTDs: setFeature("http://apache.org/xml/features/disallow-doctype-decl", true).
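
A minimal DocumentBuilderFactory hardening sketch along the lines of the fix above; the feature URIs are the standard Xerces/SAX identifiers recommended in the OWASP XXE guidance, and the sample XML is purely illustrative.

    import java.io.StringReader;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.xml.sax.InputSource;

    public class HardenedXmlParser {
        public static void main(String[] args) throws Exception {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            // Most effective single control: forbid DOCTYPE declarations entirely.
            dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
            // Belt and braces for parsers that still allow DTDs:
            dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
            dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
            dbf.setXIncludeAware(false);
            dbf.setExpandEntityReferences(false);

            DocumentBuilder builder = dbf.newDocumentBuilder();
            Document doc = builder.parse(
                new InputSource(new StringReader("<order><item>book</item></order>")));
            System.out.println("Root element: " + doc.getDocumentElement().getNodeName());
        }
    }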

Auth Bypass
Scenario: Path manipulation → admin access.
Risk: startsWith()/endsWith() filters bypassed.
Fix: Normalize paths; strict validation.
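
A minimal sketch of the normalize-then-compare fix; the /admin prefix and both helper methods are hypothetical, chosen only to show how a raw startsWith() check is bypassed while a component-wise check on the normalized path is not.

    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class PathFilterDemo {
        // Naive guard: only paths literally starting with "/admin" require auth,
        // so "/public/../admin/panel" slips past but still resolves to the
        // admin resource once the server normalizes it.
        static boolean naiveRequiresAuth(String requestPath) {
            return requestPath.startsWith("/admin");
        }

        // Normalize first, then compare whole path components.
        static boolean strictRequiresAuth(String requestPath) {
            Path p = Paths.get(requestPath).normalize();
            return p.startsWith(Paths.get("/admin"));
        }

        public static void main(String[] args) {
            String tricky = "/public/../admin/panel";
            System.out.println("naive:  " + naiveRequiresAuth(tricky));  // false -> auth skipped
            System.out.println("strict: " + strictRequiresAuth(tricky)); // true  -> auth enforced
        }
    }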

Key Defenses
Patch: Update Java/JNDI.
Log: Monitor Runtime.exec().
Least Privilege: Restrict RMI/JMX.
🚀 AI Penetration Training (Online) – Register Now! 🚀

🔗 Register here: https://forms.gle/bowpX9TGEs41GDG99
💬 WhatsApp: https://wa.me/message/HIOPPNENLOX6F1

📧 Email: info@ignitetechnologies.in

Limited slots available! Hurry and secure your spot in this exclusive training program offered by Ignite Technologies.

🧠 LLM Architecture
🔐 LLM Security Principles
🗄️ Data Security in AI Systems
🛡️ Model Security
🏗️ Infrastructure Security
📜 OWASP Top 10 for LLMs
⚙️ LLM Installation and Deployment
📡 Model Context Protocol (MCP)
🚀 Publishing Your Model Using Ollama
🔍 Introduction to Retrieval-Augmented Generation (RAG)
🌐 Making Your AI Application Public
📊 Types of Enumeration Using AI
🎯 Prompt Injection Attacks
🐞 Exploiting LLM APIs: Real-World Bug Scenarios
🔑 Password Leakage via AI Models
🎭 Indirect Prompt Injection Techniques
⚠️ Misconfigurations in LLM Deployments
👑 Exploitation of LLM APIs with Excessive Privileges
📝 Content Manipulation in LLM Outputs
📤 Data Extraction Attacks on LLMs
🔒 Securing AI Systems
🧾 System Prompts and Their Security Implications
🤖 Automated Penetration Testing with AI
2FA Bugs
Azure Mindmap
Azure Service
Cloud Security Framework
ADCS ESC16 – Security Extension Disabled on CA (Globally)

Twitter: https://x.com/hackinarticles

The ESC16 misconfiguration in AD CS disables the certificate security extension on the CA globally, forcing weak UPN-based certificate mapping; attackers can abuse this together with UPN manipulation, permissive templates, and shadow credentials to impersonate privileged accounts and escalate privileges.

📘 Overview of the ESC16 Attack
📋 Prerequisites
🧪 Lab Setup
🎯 Enumeration & Exploitation

🧠 Post Exploitation
🔁 Lateral Movement & Privilege Escalation Using Evil-WinRM

🛡️ Mitigation
🚀 Join Ignite Technologies' Red Team Operation Course Online! 🚀

🔗 Register here: https://forms.gle/bowpX9TGEs41GDG99
💬 WhatsApp: https://wa.me/message/HIOPPNENLOX6F1

📧 Email: info@ignitetechnologies.in

Enroll now in our exclusive "Red Teaming" Training Program and explore the following modules:

Introduction to Red Team
📩 Initial Access & Delivery
⚙️ Weaponization
🌐 Command and Control (C2)
🔼 Escalate Privileges
🔐 Credential Dumping
🖧 Active Directory Exploitation
🔀 Lateral Movement
🔄 Persistence
📤 Data Exfiltration
🛡️ Defense Evasion
📝 Reporting

Join us for a comprehensive learning experience! 🔒💻🔍
Linux Grep Cheat Sheet
DNS Record Types
Twitter OSINT
Public Key Cryptography
CLI Tools for Linux Admin
Top 25 SSRF
Top 25 LFI
Top 25 XSS