MiniMax-Text-01 and MiniMax-VL-01 Released: Scalable Models with Lightning Attention, 456B Parameters, 4M Token Contexts, and State-of-the-Art Accuracy

Large Language Models (LLMs) and Vision-Language Models (VLMs) are transforming natural language understanding, multimodal integration, and complex reasoning tasks. Yet, one […]

OpenBMB Just Released MiniCPM-o 2.6: A New 8B-Parameter, Any-to-Any Multimodal Model that can Understand Vision, Speech, and Language and Run on Edge Devices

Artificial intelligence has made significant strides in recent years, but challenges remain in balancing computational efficiency and versatility. State-of-the-art multimodal […]

UC Berkeley Researchers Released Sky-T1-32B-Preview: An Open-Source Reasoning LLM Trained for Under $450 that Surpasses OpenAI-o1 on Benchmarks like Math500, AIME, and LiveBench

The rapid advancements in artificial intelligence have opened new possibilities, but the associated costs often limit who can benefit from […]