Ars Technica · Apr 2
Researchers demonstrate two novel Rowhammer attacks (GDDRHammer and GeForceHammer) that exploit Nvidia GPU memory to gain full root control of host machines, turning a decade-old DRAM vulnerability in
Google Security Blog · Apr 2
Google's GenAI Security Team details its continuous defense strategy against indirect prompt injection (IPI) attacks, where malicious instructions injected into data or tools compromise LLM behavior w
Fortune · Apr 2
Mercor, a $10B startup supplying training data to OpenAI, Anthropic, and Meta, confirmed a data breach via supply chain attack on LiteLLM, an open-source AI library. Extortion gang Lapsus$ claims 4TB
The Verge AI · Apr 2
Granola, an AI meeting transcription app, exposes users' notes to anyone with a link by default despite claiming "private by default," and uses notes for AI training unless explicitly disabled. The ga
Dev.to · Apr 2
Anthropic's Claude source code leaked on March 31, 2026—a significant security incident for one of the most deployed AI systems in production. The breach exposes potential vulnerabilities in model arc
EE Times · Apr 3
Hardware counterfeits are flooding the AI chip market as demand surges, putting the integrity of AI systems at risk. Experts say a hardware root of trust architecture could verify chip authenticity an
AI News · Apr 2
Experian's 2026 fraud report reveals agentic AI's double edge: while financial institutions deploy autonomous AI agents for transactions, fraudsters weaponize identical systems to execute high-volume
AI News · Apr 2
Five foundational security practices for AI systems: enforce role-based access control and encryption, defend against prompt injection and model poisoning, implement monitoring and anomaly detection,
Healthcare-in-europe · Apr 2
AI radiology systems face three distinct attack vectors: poisoned training data that corrupts model outputs, phishing schemes targeting clinicians, and prompt injection exploits that manipulate AI dia
Bloomberg Tech · Apr 1
Anthropic is scrambling to contain fallout from an accidental leak of Claude Code's internal source code, the company's flagship revenue-driving AI assistant. The exposure raises critical questions ab
Reuters · Apr 2
Singapore authorities charged another individual in an AI chip fraud scheme, extending an investigation into illegal trafficking or misrepresentation of high-demand semiconductor components.
Tom's Guide · Apr 1
Hidden commands embedded in websites and PDFs can exploit blind spots in ChatGPT, Gemini, and other AI assistants to hijack sessions and steal user data—Tom's Guide explains the attack surface and pra
Simon Willison · Mar 31
Axios (101M weekly downloads) was compromised via leaked npm token, with malicious versions 1.14.1 and 0.30.4 injecting a fake 'plain-crypto-js' dependency that steals credentials and deploys remote a
The Conversation · Apr 1
Iranian drones struck two AWS data centers in the UAE on March 1, 2026, marking the first deliberate state-level physical attack on commercial data centers during wartime. Iran signaled it views data
Bloomberg Tech · Apr 1
Anthropic attributes the accidental leak of Claude's coding agent source code to process errors tied to rapid product releases, revealing potential quality control gaps in a fast-moving AI startup.
Techzine · Apr 1
Exabeam extended its security monitoring platform to track AI agent activity across ChatGPT, Copilot, and Gemini, addressing growing concerns about unauthorized or risky LLM use in enterprise environm
Bloomberg Tech · Apr 1
Anthropic accidentally leaked internal source code for Claude's coding assistant, undercutting the company's safety-first messaging and raising questions about operational security at a leading AI dev
The Register · Apr 1
Claude Code source analysis exposes extensive data retention and system control capabilities that exceed typical user expectations—the agent collects detailed system information, retains user data, an
The Hacker News · Apr 1
Anthropic confirmed Claude Code's internal source code leaked via npm due to a packaging error, though no customer data or credentials were exposed. The company attributed it to human error rather tha
The Register · Mar 31
Anthropic's official npm package for Claude Code accidentally shipped with source map files exposing the tool's source code, a build pipeline mistake that raises questions about the company's release