Published on October 14, 2024 · Estimated read time: 2 minutes
In a notable cybersecurity effort, OpenAI disrupted more than 20 malicious campaigns worldwide in 2024 that exploited its AI models for disinformation and cybercrime. These operations included developing malware, attempting to influence elections in the U.S., Rwanda, India, and the EU, and manipulating social media platforms. Among the disrupted campaigns were threat groups such as “SweetSpecter” and “Cyber Av3ngers,” which used AI to enhance their hacking techniques, including spear-phishing and reconnaissance. Another operation, STOIC, based in Israel, used AI to generate fake profiles and social media content aimed at influencing political discourse. OpenAI’s intervention highlights the evolving misuse of AI technologies and the company’s ongoing efforts to combat cyber threats globally.