llama.cpp for Slackware 15.0
=== Building ===
Select the appropriate BACKEND for your computer:
BACKEND=cpu
Use CPU only, no additional packages required.
BACKEND=vulkan
Use GPU, supported by both AMD and NVIDIA, requires an updated Vulkan SDK.
BACKEND=cuda
Use GPU, NVIDIA only. CUDA must be installed before building the package.
Run the following commands:
# wget -rnH "ftp://firebird1967.ru/llama.cpp"
# env BACKEND=cpu sh llama.cpp/llama.cpp.SlackBuild
# upgradepkg --install-new /tmp/llama.cpp-*.tgz
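A GPU build uses the same commands with a different BACKEND value, for
example for Vulkan (assuming the SDK mentioned above is installed):
# env BACKEND=vulkan sh llama.cpp/llama.cpp.SlackBuild
# upgradepkg --install-new /tmp/llama.cpp-*.tgz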
=== How to use (Simplest option) ===
$ llama-server -hf ggml-org/gemma-3-1b-it-GGUF
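Besides its built-in web page, llama-server exposes an OpenAI-compatible
HTTP API, by default on 127.0.0.1:8080 (adjust if you pass --host/--port).
A quick test from another terminal:
$ curl http://127.0.0.1:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"messages": [{"role": "user", "content": "Hello"}]}'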
=== How to use (Web UI) ===
This directory contains rc.llamacpp, which launches the web interface.
By default it uses Qwen3, but you can change it to your own model. On the
first launch it will download the 18 GB model.
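As a purely hypothetical sketch of how to swap in your own model (it
assumes rc.llamacpp passes the model to llama-server with the same -hf
option shown above; check the actual file), edit the script:
# vi llama.cpp/rc.llamacpp
  ... llama-server -hf ggml-org/gemma-3-1b-it-GGUF ...  # hypothetical: replace the -hf argument with your model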
# useradd -m -d /var/lib/llamacpp -s /bin/bash llamacpp
# cp llama.cpp/rc.llamacpp /etc/rc.d/rc.llamacpp
# chmod +x /etc/rc.d/rc.llamacpp
# /etc/rc.d/rc.llamacpp start
$ firefox localhost:4401
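To have the web interface come up at boot, the usual Slackware approach is
a stanza in /etc/rc.d/rc.local (this assumes only the start argument used
above):
if [ -x /etc/rc.d/rc.llamacpp ]; then
  /etc/rc.d/rc.llamacpp start
fi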