[Buildroot] [PATCH v3] package/llama-cpp: new package

Joseph Kogut joseph.kogut at gmail.com
Tue Oct 28 16:53:48 UTC 2025


Hi Thomas,

On Tue, Oct 28, 2025 at 1:36 AM Thomas Perale <thomas.perale at mind.be> wrote:
>
> In reply to:
> > Add a package for llama.cpp, a C/C++ LLM inference library, used in
> > popular projects like Ollama, RamaLama, and more.
> >
> > Signed-off-by: Joseph Kogut <joseph.kogut at gmail.com>
>
> Hi Joseph,
>
> > ---
> > This patch adds a package for llama.cpp, an LLM inference library,
> > supporting many popular models, including LLaMa, Gemma, Deepseek, Qwen,
> > and many more.
> >
> > This library includes tools that can be used for standalone inference,
> > like llama-cli and llama-server, as well as benchmarking, in
> > llama-bench. The library has a variety of software and hardware
> > accelerated backends, but this patch focuses on OpenBLAS and Vulkan
> > support to start.
> >
> > The patch disables building for z13, as ggml requires z14 or higher.
> > Older GCC versions are also problematic, so GCC >= 9 is required.
> >
> > As mentioned in the virglrenderer series, this package was used to test
> > virglrenderer with venus in a nested qemu guest using virtio-gpu.
> >
> > https://lists.buildroot.org/pipermail/buildroot/2025-June/781395.html
> >
> > For v3 of this series, enabling builds with additional toolchains using
> > uClibc and musl surfaced a few new issues, such as missing support for
> > threads and wchar, and the need to link against libatomic. I've addressed
> > those as well.
> >
> > $ utils/test-pkg -c llama-cpp.config -p llama-cpp -C12 -a
> >                              arm-aarch64 [ 1/35]: OK
> >                    bootlin-aarch64-glibc [ 2/35]: OK
> >                bootlin-arcle-hs38-uclibc [ 3/35]: OK
> >                     bootlin-armv5-uclibc [ 4/35]: OK
> >                      bootlin-armv7-glibc [ 5/35]: OK
> >                       bootlin-armv7-musl [ 6/35]: OK
> >                    bootlin-armv7m-uclibc [ 7/35]: SKIPPED
> >                 bootlin-m68k-5208-uclibc [ 8/35]: SKIPPED
> >                bootlin-m68k-68040-uclibc [ 9/35]: OK
> >              bootlin-microblazeel-uclibc [10/35]: OK
> >                    bootlin-mipsel-uclibc [11/35]: OK
> >                 bootlin-mipsel32r6-glibc [12/35]: OK
> >                  bootlin-openrisc-uclibc [13/35]: OK
> >            bootlin-powerpc-e500mc-uclibc [14/35]: OK
> >         bootlin-powerpc64le-power8-glibc [15/35]: OK
> >                    bootlin-riscv32-glibc [16/35]: OK
> >                    bootlin-riscv64-glibc [17/35]: OK
> >                     bootlin-riscv64-musl [18/35]: OK
> >                  bootlin-s390x-z13-glibc [19/35]: SKIPPED
> >                       bootlin-sh4-uclibc [20/35]: OK
> >                     bootlin-sparc-uclibc [21/35]: OK
> >                    bootlin-sparc64-glibc [22/35]: OK
> >                     bootlin-x86-64-glibc [23/35]: OK
> >                      bootlin-x86-64-musl [24/35]: OK
> >                    bootlin-x86-64-uclibc [25/35]: OK
> >                    bootlin-x86-i686-musl [26/35]: OK
> >                    bootlin-xtensa-uclibc [27/35]: OK
> >                             br-arm-basic [28/35]: SKIPPED
> >                     br-arm-full-nothread [29/35]: SKIPPED
> >                       br-arm-full-static [30/35]: SKIPPED
> >                    br-i386-pentium4-full [31/35]: OK
> >                       br-mips64-n64-full [32/35]: SKIPPED
> >                  br-mips64r6-el-hf-glibc [33/35]: OK
> >                br-powerpc-603e-basic-cpp [34/35]: SKIPPED
> >                br-powerpc64-power7-glibc [35/35]: OK
> >
> > 35 builds, 8 skipped, 0 build failed, 0 legal-info failed, 0 show-info failed
> > ---
> > Changes in v3:
> > - Bump version to b6854
> > - Use standard _ARCH_SUPPORTS pattern in Config.in [Julien]
> > - Remove dependency on !BR2_riscv, as this is fixed
> > - Replace dependency on !BR2_s390x w/ !BR2_s390x_z13
> > - Enable builds with uclibc, selecting libexecinfo when needed [Julien]
> > - Depend on BR2_INSTALL_LIBSTDCPP [Julien]
> > - Add homepage to config help string [Julien]
> > - Move comment to end of Config.in and remove condition on glibc/musl
> >   [Julien]
> > - Fix symbol for libcurl [Julien]
> > - Handle disabling build features which are not selected [Julien]
> > - Pass ldflags for libexecinfo [Julien]
> > - Enable building statically using config from project readme
> > - Link with libatomic when available, fixes bootlin-sparc-uclibc
> > - Add dependency on toolchain threads, skips br-arm-full-nothread
> >   failure
> > - Add dependency on wchar, skips br-powerpc-603e-basic-cpp failure
> > - Vulkan support depends on !BR2_ARM_CPU_ARMV5, skips
> >   bootlin-armv5-uclibc failure
> > - Link to v2: https://lore.kernel.org/r/20251022-llama-cpp-v2-1-c41a43382093@gmail.com
> >
> > Changes in v2:
> > - Bump version to b6818
> > - Link to v1: https://lore.kernel.org/r/20250619-llama-cpp-v1-1-0d4fe6710102@gmail.com
> > ---
> >  DEVELOPERS                       |  1 +
> >  package/Config.in                |  1 +
> >  package/llama-cpp/Config.in      | 47 +++++++++++++++++++++++++++
> >  package/llama-cpp/llama-cpp.hash |  4 +++
> >  package/llama-cpp/llama-cpp.mk   | 68 ++++++++++++++++++++++++++++++++++++++++
> >  5 files changed, 121 insertions(+)
> >
> > diff --git a/DEVELOPERS b/DEVELOPERS
> > index 66199a5b72..44bac1d63f 100644
> > --- a/DEVELOPERS
> > +++ b/DEVELOPERS
> > @@ -1753,6 +1753,7 @@ F:      package/at-spi2-core/
> >  F:   package/earlyoom/
> >  F:   package/gconf/
> >  F:   package/libnss/
> > +F:   package/llama-cpp/
> >  F:   package/llvm-project/clang/
> >  F:   package/llvm-project/lld/
> >  F:   package/llvm-project/llvm/
> > diff --git a/package/Config.in b/package/Config.in
> > index 161d61728b..851bc35bc1 100644
> > --- a/package/Config.in
> > +++ b/package/Config.in
> > @@ -2290,6 +2290,7 @@ comment "linux-pam plugins"
> >       source "package/libpam-tacplus/Config.in"
> >  endif
> >       source "package/liquid-dsp/Config.in"
> > +     source "package/llama-cpp/Config.in"
> >       source "package/llvm-project/llvm/Config.in"
> >       source "package/lttng-libust/Config.in"
> >       source "package/matio/Config.in"
> > diff --git a/package/llama-cpp/Config.in b/package/llama-cpp/Config.in
> > new file mode 100644
> > index 0000000000..d29dbcd311
> > --- /dev/null
> > +++ b/package/llama-cpp/Config.in
> > @@ -0,0 +1,47 @@
> > +config BR2_PACKAGE_LLAMA_CPP_ARCH_SUPPORTS
> > +     bool
> > +     default y
> > +     depends on !BR2_s390x_z13 # ggml requires z14 or higher
> > +
> > +config BR2_PACKAGE_LLAMA_CPP
> > +     bool "llama.cpp"
> > +     depends on BR2_INSTALL_LIBSTDCPP
> > +     depends on BR2_PACKAGE_LLAMA_CPP_ARCH_SUPPORTS
> > +     depends on BR2_TOOLCHAIN_HAS_THREADS
> > +     depends on BR2_TOOLCHAIN_GCC_AT_LEAST_9
> > +     depends on !BR2_TOOLCHAIN_USES_UCLIBC \
> > +             || (BR2_TOOLCHAIN_USES_UCLIBC && !BR2_STATIC_LIBS)
> > +     depends on BR2_USE_WCHAR
> > +     select BR2_PACKAGE_LIBEXECINFO if BR2_TOOLCHAIN_USES_UCLIBC
> > +     help
> > +       LLM inference in C/C++
> > +
> > +       https://github.com/ggml-org/llama.cpp
> > +
> > +if BR2_PACKAGE_LLAMA_CPP
> > +
> > +config BR2_PACKAGE_LLAMA_CPP_TOOLS
> > +     bool "Enable tools"
> > +     help
> > +       Build CLI tools like llama-cli, llama-bench, etc.
> > +
> > +config BR2_PACKAGE_LLAMA_CPP_SERVER
> > +     bool "Enable server"
> > +     help
> > +       Build OpenAI API-compatible web server, llama-server.
> > +
> > +config BR2_PACKAGE_LLAMA_CPP_VULKAN
> > +     bool "Vulkan support"
> > +     depends on !BR2_ARM_CPU_ARMV5
> > +     depends on !BR2_STATIC_LIBS # vulkan-loader
> > +     select BR2_PACKAGE_VULKAN_LOADER
> > +     help
> > +       Enable Vulkan backend for GPU acceleration.
> > +endif
> > +
> > +comment "llama-cpp needs a toolchain w/ C++, wchar, threads, and gcc >= 9"
> > +     depends on !BR2_INSTALL_LIBSTDCPP \
> > +             || !BR2_TOOLCHAIN_GCC_AT_LEAST_9
> > +
> > +comment "llama-cpp needs a uclibc toolchain w/ dynamic library"
> > +     depends on BR2_TOOLCHAIN_USES_UCLIBC && BR2_STATIC_LIBS
> > diff --git a/package/llama-cpp/llama-cpp.hash b/package/llama-cpp/llama-cpp.hash
> > new file mode 100644
> > index 0000000000..d5c171cdf6
> > --- /dev/null
> > +++ b/package/llama-cpp/llama-cpp.hash
> > @@ -0,0 +1,4 @@
> > +# Locally calculated
> > +sha256  ec824d51e500d9e81a400d85f7f437cc8d1e1a96c09ee3ca8206688ded8a4187  b6854.tar.gz
> > +# License
> > +sha256  e562a2ddfaf8280537795ac5ecd34e3012b6582a147ef69ba6a6a5c08c84757d  LICENSE
> > diff --git a/package/llama-cpp/llama-cpp.mk b/package/llama-cpp/llama-cpp.mk
> > new file mode 100644
> > index 0000000000..0b1682b138
> > --- /dev/null
> > +++ b/package/llama-cpp/llama-cpp.mk
> > @@ -0,0 +1,68 @@
> > +################################################################################
> > +#
> > +# llama.cpp
> > +#
> > +################################################################################
> > +
> > +LLAMA_CPP_VERSION = 6854
> > +LLAMA_CPP_SOURCE = b$(LLAMA_CPP_VERSION).tar.gz
> > +LLAMA_CPP_SITE = https://github.com/ggml-org/llama.cpp/archive/refs/tags
> > +LLAMA_CPP_LICENSE = MIT
> > +LLAMA_CPP_LICENSE_FILES = LICENSE
> > +LLAMA_CPP_INSTALL_STAGING = YES
> > +LLAMA_CPP_CONF_OPTS = \
> > +     -DLLAMA_BUILD_TESTS=OFF \
> > +     -DLLAMA_BUILD_EXAMPLES=OFF \
> > +     -DLLAMA_FATAL_WARNINGS=OFF
> > +
>
> I see that the llama-cpp project has a 'security' page on GitHub with
> associated CVEs [1].
>
> It looks like it uses the 'ggml:llama.cpp' vendor:product CPE tuple [2], so I
> would add the following:
>
> ```
> LLAMA_CPP_CPE_ID_VENDOR = ggml
> LLAMA_CPP_CPE_ID_PRODUCT = llama.cpp
> ```
>
> Also, the CVE [2] records the package version with the 'b' prefix, so I would
> change the following so the version matches the CPE correctly:
>
> LLAMA_CPP_VERSION = b6854
> LLAMA_CPP_SOURCE = $(LLAMA_CPP_VERSION).tar.gz
>
> [1] https://github.com/ggml-org/llama.cpp/security
> [2] https://nvd.nist.gov/vuln/detail/cve-2024-41130
>

Good catch, much appreciated. I'll add this to the patch.
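
For the next iteration I'm planning on roughly the following at the top of
llama-cpp.mk (an untested sketch that just applies your suggestions; the
download URL stays the same since the upstream tag tarballs already carry the
'b' prefix):

```
# Use the b-prefixed tag as the version so it matches the CPE/CVE entries
LLAMA_CPP_VERSION = b6854
LLAMA_CPP_SOURCE = $(LLAMA_CPP_VERSION).tar.gz
LLAMA_CPP_SITE = https://github.com/ggml-org/llama.cpp/archive/refs/tags
LLAMA_CPP_LICENSE = MIT
LLAMA_CPP_LICENSE_FILES = LICENSE
# vendor:product tuple used by NVD for llama.cpp CVEs
LLAMA_CPP_CPE_ID_VENDOR = ggml
LLAMA_CPP_CPE_ID_PRODUCT = llama.cpp
```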

> Thomas
>
> > +ifeq ($(BR2_PACKAGE_LIBEXECINFO),y)
> > +LLAMA_CPP_DEPENDENCIES += libexecinfo
> > +LLAMA_CPP_LDFLAGS += -lexecinfo
> > +endif
> > +
> > +ifeq ($(BR2_TOOLCHAIN_HAS_LIBATOMIC),y)
> > +LLAMA_CPP_LDFLAGS += -latomic
> > +endif
> > +
> > +LLAMA_CPP_CONF_OPTS += \
> > +     -DCMAKE_EXE_LINKER_FLAGS="$(LLAMA_CPP_LDFLAGS)"
> > +
> > +ifeq ($(BR2_STATIC_LIBS),y)
> > +LLAMA_CPP_CONF_OPTS += -DBUILD_SHARED_LIBS=OFF \
> > +                    -DCMAKE_POSITION_INDEPENDENT_CODE=ON
> > +endif
> > +
> > +ifeq ($(BR2_PACKAGE_LIBCURL),y)
> > +LLAMA_CPP_CONF_OPTS += -DLLAMA_CURL=ON
> > +LLAMA_CPP_DEPENDENCIES += libcurl
> > +else
> > +LLAMA_CPP_CONF_OPTS += -DLLAMA_CURL=OFF
> > +endif
> > +
> > +ifeq ($(BR2_PACKAGE_LLAMA_CPP_TOOLS),y)
> > +LLAMA_CPP_CONF_OPTS += -DLLAMA_BUILD_TOOLS=ON
> > +else
> > +LLAMA_CPP_CONF_OPTS += -DLLAMA_BUILD_TOOLS=OFF
> > +endif
> > +
> > +ifeq ($(BR2_PACKAGE_LLAMA_CPP_SERVER),y)
> > +LLAMA_CPP_CONF_OPTS += -DLLAMA_BUILD_SERVER=ON
> > +else
> > +LLAMA_CPP_CONF_OPTS += -DLLAMA_BUILD_SERVER=OFF
> > +endif
> > +
> > +ifeq ($(BR2_PACKAGE_LLAMA_CPP_VULKAN),y)
> > +LLAMA_CPP_DEPENDENCIES += vulkan-loader
> > +LLAMA_CPP_CONF_OPTS += -DGGML_VULKAN=ON
> > +else
> > +LLAMA_CPP_CONF_OPTS += -DGGML_VULKAN=OFF
> > +endif
> > +
> > +ifeq ($(BR2_PACKAGE_OPENBLAS),y)
> > +LLAMA_CPP_DEPENDENCIES += openblas
> > +LLAMA_CPP_CONF_OPTS += -DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS
> > +else
> > +LLAMA_CPP_CONF_OPTS += -DGGML_BLAS=OFF
> > +endif
> > +
> > +$(eval $(cmake-package))
> >
> > ---
> > base-commit: c555b6565f2747047603dc8022f81b7ea14b4890
> > change-id: 20251024-llama-cpp-v2-5f37be7f121a
> >
> > Best regards,
> > --
> > Joseph Kogut <joseph.kogut at gmail.com>
> >
> > _______________________________________________
> > buildroot mailing list
> > buildroot at buildroot.org
> > https://lists.buildroot.org/mailman/listinfo/buildroot

