llama.cpp (llm/llama.cpp) Updated: 2 months, 1 week ago

LLM inference in C/C++

The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud.

Version: 4534
License: MIT
Maintainers i0ntempest
Categories llm
Homepage https://github.com/ggerganov/llama.cpp
Platforms darwin
Variants
  • blas (Uses BLAS, improves performance)
  • debug (Enable debug binaries)
  • native (Force local build and optimize for CPU)
  • openmp (Enable parallelism support using OpenMP)
  • universal (Build for multiple architectures)
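The variants above are chosen at install time with MacPorts' usual `+variant` syntax. A sketch of a typical invocation, assuming MacPorts is already installed (the variant names are taken from the list above):

```shell
# Install llama.cpp with the BLAS and OpenMP variants enabled
sudo port install llama.cpp +blas +openmp

# Show which variants the installed port was built with
port installed llama.cpp
```

Variants can be combined freely; note that `+native` forces a local build optimized for the host CPU, so it bypasses prebuilt binary archives and takes correspondingly longer.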

"llama.cpp" depends on

lib (2)
build (3)

Ports that depend on "llama.cpp"

No ports



Installations (30 days)

5

Requested Installations (30 days)

5

Livecheck results

llama.cpp seems to have been updated (port version: 4534, new version: 5022)

livecheck ran: 21 hours ago
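The livecheck result above can be reproduced locally: MacPorts compares the port's recorded version against the latest upstream release. A sketch of the standard commands, assuming a working MacPorts installation:

```shell
# Check whether upstream has a newer release than the port (here: 4534 vs. 5022)
port livecheck llama.cpp

# If the port itself has since been updated, sync the ports tree and upgrade
sudo port selfupdate
sudo port upgrade llama.cpp
```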