LFM2-8B-A1B is an efficient on-device Mixture-of-Experts (MoE) model from Liquid AI’s LFM2 family, built for fast, high-quality inference on edge hardware. It uses 8.3B total parameters with only ~1.5B active per token, delivering strong performance while keeping compute and memory usage low—making it ideal for phones, tablets, and laptops.
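The "8.3B total, ~1.5B active" split comes from sparse expert routing: for each token, a small router picks only a few expert sub-networks to run, so most weights sit idle. The sketch below is a minimal toy illustration of top-k MoE routing, not LFM2-8B-A1B's actual implementation; the expert count, top-k value, and hidden size are made-up numbers chosen for clarity.

```python
# Toy sketch of Mixture-of-Experts (MoE) top-k routing.
# Illustrative only -- NOT LFM2-8B-A1B's real architecture.
# NUM_EXPERTS, TOP_K, and D_MODEL are hypothetical values.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # total experts (assumption for illustration)
TOP_K = 2         # experts activated per token (assumption)
D_MODEL = 16      # toy hidden size

# Each expert is a small feed-forward weight matrix.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]
router_w = rng.standard_normal((D_MODEL, NUM_EXPERTS))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a single token vector through its top-k experts."""
    logits = x @ router_w
    top = np.argsort(logits)[-TOP_K:]     # indices of the chosen experts
    gates = np.exp(logits[top])
    gates /= gates.sum()                  # softmax over the chosen experts
    # Only TOP_K of NUM_EXPERTS expert matrices are multiplied per token,
    # which is why "active" parameters are far fewer than total parameters.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.standard_normal(D_MODEL)
out = moe_forward(token)
active_fraction = TOP_K / NUM_EXPERTS
print(f"active experts per token: {TOP_K}/{NUM_EXPERTS} ({active_fraction:.0%})")
```

Because compute scales with the experts actually executed rather than the full parameter count, a model can keep a large total capacity while paying a small per-token inference cost.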
With CoreAI, you can start chatting with LiquidAI: LFM2-8B-A1B instantly, with no separate subscription needed. CoreAI bundles access to LFM2-8B-A1B along with 300+ other AI models from Liquid and other providers, including OpenAI, Anthropic, Google, and Meta.
Chat with LiquidAI: LFM2-8B-A1B and 300+ other AI models — all in one app.