LFM2-24B-A2B is the largest model in the LFM2 family of hybrid-architecture models designed for efficient on-device deployment. Built as a 24B-parameter Mixture-of-Experts (MoE) model with only 2B parameters active per token, it delivers high-quality generation while keeping inference costs low. The model fits within 32 GB of RAM, making it practical to run on consumer laptops and desktops without sacrificing capability.
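The numbers above can be sanity-checked with quick arithmetic. The sketch below is a rough estimate, assuming 8-bit quantized weights; the actual footprint also depends on the KV cache and runtime overhead, and all figures here are illustrative, not measured.

```python
# Back-of-envelope memory estimate for a 24B-total / 2B-active MoE model.
# Assumes 8-bit (1 byte/param) quantized weights; real deployments vary.

def weight_memory_gb(total_params: float, bytes_per_param: float) -> float:
    """Approximate memory needed just to hold the model weights."""
    return total_params * bytes_per_param / 1024**3

TOTAL_PARAMS = 24e9   # all experts must be resident in memory
ACTIVE_PARAMS = 2e9   # but only ~2B participate in each token's forward pass

# At 8-bit quantization, the full 24B weights take roughly 22 GB,
# which leaves headroom inside a 32 GB RAM budget:
full_gb = weight_memory_gb(TOTAL_PARAMS, 1.0)

# Per-token compute scales with the *active* parameters, not the total:
active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS

print(f"8-bit weights: {full_gb:.1f} GB, active per token: {active_fraction:.0%}")
```

This is the core MoE trade-off the description points at: memory is paid for all 24B parameters, but per-token compute is closer to that of a 2B dense model.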
With CoreAI, you can start chatting with LiquidAI: LFM2-24B-A2B instantly, with no separate subscription needed. CoreAI bundles access to LiquidAI: LFM2-24B-A2B along with 300+ other AI models from Liquid and other providers, including OpenAI, Anthropic, Google, and Meta.
Chat with LiquidAI: LFM2-24B-A2B and 300+ other AI models — all in one app.