llama.cpp (llm/llama.cpp) Updated: 2 months, 1 week ago

LLM inference in C/C++

The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud.

Version: 4534 · License: MIT · GitHub
Displaying statistics for 1,154 users who made submissions during the selected period.

Statistics for selected duration: 2025-Mar-08 to 2025-Apr-07


Total Installations: 5
Requested Installations: 5


Charts (not shown): macOS Versions, Port Versions, Xcode Versions, CLT Versions



Variants table (Variants, Count): no entries shown


Monthly Statistics (chart not shown)
