The Shift in Agentic AI System Needs: LLMs are widely admired for their human-like capabilities and conversational skills. However, with […]
AREAL: Accelerating Large Reasoning Model Training with Fully Asynchronous Reinforcement Learning
Introduction: The Need for Efficient RL in LRMs. Reinforcement Learning (RL) is increasingly used to enhance LLMs, especially for reasoning […]
From Fine-Tuning to Prompt Engineering: Theory and Practice for Efficient Transformer Adaptation
The Challenge of Fine-Tuning Large Transformer Models: Self-attention enables transformer models to capture long-range dependencies in text, which is crucial […]
EPFL Researchers Introduce MEMOIR: A Scalable Framework for Lifelong Model Editing in LLMs
The Challenge of Updating LLM Knowledge: LLMs have shown outstanding performance across a wide range of tasks through extensive pre-training on vast datasets. […]
StepFun Introduces Step-Audio-AQAA: A Fully End-to-End Audio Language Model for Natural Voice Interaction
Rethinking Audio-Based Human-Computer Interaction: Machines that can respond to human speech with equally expressive and natural audio have become a […]
EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments
Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. […]
OThink-R1: A Dual-Mode Reasoning Framework to Cut Redundant Computation in LLMs
The Inefficiency of Static Chain-of-Thought Reasoning in LRMs: Recent LRMs achieve top performance by using detailed CoT reasoning to solve […]
Internal Coherence Maximization (ICM): A Label-Free, Unsupervised Training Framework for LLMs
Post-training methods for pre-trained language models (LMs) depend on human supervision through demonstrations or preference feedback to specify desired behaviors. […]
MemOS: A Memory-Centric Operating System for Evolving and Adaptive Large Language Models
LLMs are increasingly seen as key to achieving Artificial General Intelligence (AGI), but they face major limitations in how they […]
Sakana AI Introduces Text-to-LoRA (T2L): A Hypernetwork that Generates Task-Specific LLM Adapters (LoRAs) based on a Text Description of the Task
Transformer models have significantly influenced how AI systems approach tasks in natural language understanding, translation, and reasoning. These large-scale models, […]