Published on November 17, 2025. EST READ TIME: 2 minutes

Researchers from Tenable and partner academic institutions have uncovered a set of seven critical vulnerabilities in ChatGPT models, including GPT‑4o and GPT‑5, that allow attackers to bypass safety mechanisms via prompt-injection techniques. The flaws include indirect prompt injection through trusted websites, one-click or zero-click attacks that exploit the model’s browsing and summarisation functions, and memory persistence methods that enable the model to exfiltrate personal data without user input. The researchers warn that exposing AI chatbots to external tools and systems increases their attack surface, making them vulnerable to commands hidden in seemingly benign content.
While some of the issues have been addressed by the provider, others remain, suggesting the broader challenge of fully securing large language models.
Source: thehackernews.com

North Korea's Lazarus Group Rakes in $3 Million: Unveiling Cybercrime's Financial Motivations
Read More 2 min read

Security Vulnerability: Windows Hello Fingerprint Authentication Bypassed on Popular Laptops
Read More 2 min read

Indian Startup Hack-for-Hire: Navigating the Complexities of Ethical Hacking
Read More 2 min read

North Korean Hackers Pose as Job Recruiters in Cyber Espionage Campaign
Read More 2 min read

Analysis Reveals: Bad Bots Constitute a Staggering 73% of Internet Traffic
Read More 2 min read
Menu