llama.cpp (llm/llama.cpp) Updated: 1 week ago
LLM inference in C/C++. The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware, locally and in the cloud.
Version: 6800 License: MIT
| Maintainers | i0ntempest |
| Categories | llm |
| Homepage | https://github.com/ggerganov/llama.cpp |
| Platforms | darwin |
| Variants | |
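As a quick sketch of using this port: the commands below install the package with MacPorts and run a short prompt with the llama-cli tool that recent llama.cpp releases ship. The model path is a placeholder, and the exact binaries installed by this port may vary with the port version.

    # Install the port (assumes MacPorts is already set up)
    sudo port install llama.cpp

    # Run a short generation against a local GGUF model (placeholder path)
    llama-cli -m ~/models/model.gguf -p "Hello" -n 64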
"llama.cpp" depends on
lib (2)
build (3)
Ports that depend on "llama.cpp": no ports
Installations (30 days): 7
Requested Installations (30 days): 7
Livecheck results
llama.cpp seems to have been updated (port version: 6800, new version: 6838)
Livecheck ran: 1 day, 22 hours ago