An LLM built specifically to run locally on laptops
We release LFM2-24B-A2B, a large language model designed to run locally on laptops and desktops.
– LFM2 architecture optimized for on-device inference
– Mixture-of-Experts design with around 2B active parameters per token, further boosting efficiency
– 24B total parameters, fitting comfortably in 32GB of memory (with 4-bit quantization) while leaving room for the OS and applications
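The 32GB claim follows from simple arithmetic: at 4 bits per parameter, the weights take roughly half a byte each. A minimal sketch of that estimate, where the 20% runtime overhead factor for KV cache and buffers is an illustrative assumption, not an official figure:

```python
def quantized_model_gb(total_params_billions: float,
                       bits_per_param: float,
                       overhead: float = 1.0) -> float:
    """Approximate resident memory in GB for a quantized model.

    overhead multiplies the weight footprint to account for KV cache
    and runtime buffers (assumed value, not from the release notes).
    """
    weight_bytes = total_params_billions * 1e9 * (bits_per_param / 8)
    return weight_bytes * overhead / 1e9

# 24B parameters at 4-bit: weights alone are ~12 GB
weights_only = quantized_model_gb(24, 4)
# With an assumed ~20% runtime overhead: ~14.4 GB resident
with_runtime = quantized_model_gb(24, 4, overhead=1.2)
print(f"weights ~{weights_only:.1f} GB, resident ~{with_runtime:.1f} GB")
```

Either way, the model stays well under 32GB, leaving headroom for the OS and other applications.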

Download and docs:
https://huggingface.co/LiquidAI/LFM2-24B-A2B
https://docs.liquid.ai/lfm/models/lfm2-24b-a2b
