Tag: LLMs
Despite concerns over the environmental impacts of AI models, it’s surprisingly hard to find precise, reliable data on the CO2 […]
Reddit CEO pledges site will remain “written by humans and voted on by humans”
Reddit is in an “arms race” to protect its devoted online communities from a surge in artificial intelligence-generated content, with […]
Key fair use ruling clarifies when books can be used for AI training
In landmark ruling, judge likens AI training to schoolchildren learning to write. Artificial intelligence companies don’t need […]
Toy-maker Mattel accused of planning “reckless” AI social experiment on kids
OpenAI and Mattel defend partnership In Mattel’s press release, the toy maker behind brands like Barbie and Hot Wheels remained […]
AI chatbots tell users what they want to hear, and that’s problematic
After the model has been trained, companies can set system prompts, or guidelines, for how the model should behave to […]
“Godfather” of AI calls out latest models for lying to users
One of the “godfathers” of artificial intelligence has attacked a multibillion-dollar race to develop the cutting-edge technology, saying the latest […]
xAI says an “unauthorized” prompt change caused Grok to focus on “white genocide”
When analyzing social media posts made by others, Grok is given the somewhat contradictory instructions to “provide truthful and based […]
OpenAI helps spammers plaster 80,000 sites with messages that bypassed filters
“AkiraBot’s use of LLM-generated spam message content demonstrates the emerging challenges that AI poses to defending websites against spam attacks,” […]
Gemini hackers can deliver more potent attacks with a helping hand from… Gemini
MORE FUN(-TUNING) IN THE NEW WORLD Hacking LLMs has always been more art than science. A new attack on Gemini […]
New hack uses prompt injection to corrupt Gemini’s long-term memory
INVOCATION DELAYED, INVOCATION GRANTED There’s yet another way to inject malicious prompts into chatbots. […]