Chroma 1.0 is a real-time speech-to-speech dialogue model that takes audio as input and returns audio as […]
Wikipedia volunteers spent years cataloging AI tells. Now there’s a plugin to avoid them.
To work around those rules, the Humanizer skill tells Claude to replace inflated language with plain facts and offers this […]
How AutoGluon Enables Modern AutoML Pipelines for Production-Grade Tabular Models with Ensembling and Distillation
In this tutorial, we build a production-grade tabular machine learning pipeline using AutoGluon, taking a real-world mixed-type dataset from raw […]
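The article's full pipeline isn't reproduced here, but the core AutoGluon workflow looks roughly like the sketch below. The file names, label column, and time budget are placeholders; AutoGluon's TabularPredictor handles the mixed-type preprocessing, ensembling, and optional distillation the tutorial covers.

```python
from autogluon.tabular import TabularDataset, TabularPredictor

# Hypothetical CSV files with mixed numeric/categorical columns and a binary "label" target.
train = TabularDataset("train.csv")
test = TabularDataset("test.csv")

# "best_quality" turns on bagged/stacked ensembling; time_limit caps total training time.
predictor = TabularPredictor(label="label", eval_metric="roc_auc").fit(
    train,
    presets="best_quality",
    time_limit=600,
)

# Compare every trained model on held-out data, then predict.
print(predictor.leaderboard(test))
predictions = predictor.predict(test.drop(columns=["label"]))

# Optional: distill the ensemble into smaller single models for cheaper serving.
predictor.distill(time_limit=120)
```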
Liquid AI Releases LFM2.5-1.2B-Thinking: A 1.2B Parameter Reasoning Model That Fits Under 1 GB On-Device
Liquid AI has released LFM2.5-1.2B-Thinking, a 1.2 billion parameter reasoning model that runs fully on device and fits in about […]
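The exact on-device format isn't stated in the excerpt, but a back-of-envelope calculation shows why a 1.2B-parameter model fits under 1 GB only with low-bit weights. The precisions below are illustrative, not Liquid AI's published numbers.

```python
# Rough weight-only memory footprint for 1.2B parameters at different precisions.
# Ignores activations, KV cache, and runtime overhead.
params = 1.2e9
for name, bytes_per_param in [("FP16", 2.0), ("INT8", 1.0), ("INT4", 0.5)]:
    print(f"{name}: {params * bytes_per_param / 1024**3:.2f} GiB")
# FP16 ~2.24 GiB, INT8 ~1.12 GiB, INT4 ~0.56 GiB: only sub-8-bit weights
# leave a 1.2B model comfortably under 1 GB on device.
```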
Microsoft Research Releases OptiMind: A 20B Parameter Model That Turns Natural Language into Solver-Ready Optimization Models
Microsoft Research has released OptiMind, an AI-based system that converts natural language descriptions of complex decision problems into mathematical […]
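The excerpt doesn't show OptiMind's output format, but "solver-ready" generally means a formulation a standard optimizer can consume directly. As a purely illustrative example (PuLP and every number below are assumptions, not taken from the article), a natural-language production-planning request might be turned into a small linear program like this:

```python
# Toy "solver-ready" formulation: a production-planning LP written with PuLP.
from pulp import LpMaximize, LpProblem, LpVariable, lpSum

prob = LpProblem("toy_production_plan", LpMaximize)
x = LpVariable("units_product_a", lowBound=0)
y = LpVariable("units_product_b", lowBound=0)

# Objective: maximize profit at hypothetical margins of 3 and 5 per unit.
prob += lpSum([3 * x, 5 * y])

# Constraints: hypothetical machine-hour and labor-hour capacities.
prob += 2 * x + 4 * y <= 100, "machine_hours"
prob += 3 * x + 1 * y <= 90, "labor_hours"

prob.solve()
print(x.value(), y.value(), prob.objective.value())
```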
A Coding Guide to Understanding How Retries Trigger Failure Cascades in RPC and Event-Driven Architectures
In this tutorial, we build a hands-on comparison between a synchronous RPC-based system and an asynchronous event-driven architecture to understand […]
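The tutorial's own code isn't reproduced in the excerpt; the toy simulation below (all parameters are hypothetical) illustrates the underlying failure mode: when a dependency starts failing, naive immediate retries multiply the offered load, which is what turns a partial outage into a cascade.

```python
import random

def offered_load(logical_calls=1000, failure_rate=0.8, max_retries=3):
    """Count actual requests sent when every failed call is retried immediately."""
    total = 0
    for _ in range(logical_calls):
        for attempt in range(max_retries + 1):
            total += 1
            if random.random() > failure_rate:
                break  # call succeeded, stop retrying
    return total

random.seed(0)
for retries in (0, 1, 3, 5):
    print(f"max_retries={retries}: {offered_load(max_retries=retries)} requests for 1000 calls")
# With an 80% failure rate, 3 retries roughly triples the traffic hitting the
# already-degraded service; exponential backoff with jitter and retry budgets
# are the usual mitigations.
```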
OpenAI to test ads in ChatGPT as it burns through billions
Financial pressures and a changing tune: OpenAI’s advertising experiment reflects the enormous financial pressures facing the company. OpenAI does not […]
TSMC says AI demand is “endless” after record Q4 earnings
TSMC posted net income of NT$505.7 billion (about $16 billion) for the quarter, up 35 percent year over year and […]
Google AI Releases TranslateGemma: A New Family of Open Translation Models Built on Gemma 3 with Support for 55 Languages
Google AI has released TranslateGemma, a suite of open machine translation models built on Gemma 3 and targeted at 55 […]
NVIDIA AI Open-Sourced KVzap: A SOTA KV Cache Pruning Method That Delivers Near-Lossless 2x-4x Compression
As context lengths move into tens and hundreds of thousands of tokens, the key-value cache in transformer decoders becomes […]
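A worked example makes the scale concrete. The dimensions below describe a generic Llama-style decoder (32 layers, 8 KV heads, head dim 128, FP16), not any specific model from the article:

```python
def kv_cache_gib(seq_len, layers=32, kv_heads=8, head_dim=128, bytes_per_value=2, batch=1):
    """Keys plus values: 2 tensors per layer of shape [batch, kv_heads, seq_len, head_dim]."""
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value * batch / 1024**3

for n in (8_000, 32_000, 128_000):
    print(f"{n:>7} tokens -> {kv_cache_gib(n):.1f} GiB of KV cache")
# ~1 GiB at 8k tokens grows to ~16 GiB at 128k for this configuration, which is
# why a near-lossless 2x-4x pruning method matters for long-context serving.
```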
