llama.cpp (llm/llama.cpp) Updated: 2 days, 14 hours ago
LLM inference in C/C++
The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware, locally and in the cloud.
Version: 8400 License: MIT
GitHub
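Since this page describes the MacPorts `llm/llama.cpp` port, a typical install-and-run sequence might look like the sketch below. The model path and prompt are placeholders, and the `llama-cli` flags shown come from the upstream llama.cpp tool; this assumes the port installs that binary under its usual name.

```shell
# Install the port (assumes MacPorts is already set up)
sudo port install llama.cpp

# Run a short prompt against a local GGUF model.
# The model path is a placeholder; point it at a model you have downloaded.
llama-cli -m ~/models/your-model.gguf -p "Hello" -n 32
```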
Statistics for selected duration: 2026-Feb-18 to 2026-Mar-20
| Statistic | Count |
|---|---|
| Total Installations | 11 |
| Requested Installations | 11 |
| Variants | Count |
|---|---|
Monthly Statistics
Data may be cached for up to 24 hours.
Percentage of installations per version per month