$351 Mini PC Runs 35B AI Model 😳 20+ Tokens/s on MINISFORUM UM790 Pro
Can a $351 mini PC really run a 30B+ AI model locally? 👀

We tested the MINISFORUM UM790 Pro with Qwen 35B and Gemma models — and the results are surprisingly solid.

💻 Setup:
• AMD Ryzen™ 9 7940HS
• Radeon 780M iGPU
• 48GB DDR5 RAM (dual-channel)
• 512GB SSD
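A quick back-of-the-envelope check on why this hardware can host a 30B+ model: at 4-bit quantization a 35B-parameter model needs roughly half a byte per parameter plus runtime overhead. The numbers below are illustrative assumptions, not measurements from the test.

```python
# Rough sketch: does a 4-bit quantized 35B-parameter model fit in 48GB RAM?
# bytes_per_param and overhead are assumptions (Q4 quantization,
# ~20% extra for KV cache and runtime), not measured values.
params = 35e9
bytes_per_param = 0.5          # ~4 bits per weight when Q4-quantized
overhead = 1.2                 # KV cache + runtime overhead (assumption)
needed_gb = params * bytes_per_param * overhead / 1e9
print(f"~{needed_gb:.1f} GB needed")  # ~21.0 GB
print(needed_gb < 48)                 # True: fits in 48GB DDR5
```

So even with a generous UMA frame buffer carved out for the iGPU, a quantized 35B model leaves headroom in 48GB of system memory.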

⚙️ What we did:
• Increased the UMA frame buffer in BIOS (allocates more system RAM to the iGPU)
• Installed AMD Adrenalin Edition drivers & dependencies
• Ran llama.cpp with local LLMs
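The last step boils down to a single llama.cpp invocation. A minimal sketch, assuming a locally downloaded quantized GGUF file (the model filename below is a placeholder, and exact flags depend on your build):

```shell
# Illustrative llama.cpp run -- adjust model path for your setup
./llama-cli \
  -m ./models/qwen-35b-q4_k_m.gguf \  # quantized GGUF model (example name)
  -ngl 99 \                           # offload all layers to the 780M iGPU
  -c 4096 \                           # context window size
  -p "Explain UMA frame buffers in one sentence."
```

With the enlarged UMA frame buffer, `-ngl 99` lets the iGPU take every layer it can hold.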

🚀 Result:
Smooth local inference at 20+ tokens/sec (measured in our test; throughput will vary by setup)

No cloud. No API fees. Just local AI. 🔒

🔗 Learn more: https://s.minisforum.com/MiniPC

#MINISFORUM #UM790Pro #LocalAI #MiniPC #LLM #AI #Tech
