This guide covers installing and building llmedge for local development and integrating the library into an Android project.
## Prerequisites
- JDK 17+ (for Gradle Kotlin DSL)
- Android SDK & NDK (if building Android native parts)
- CMake and a C++ toolchain supporting your target (clang/gcc)
- Python 3.8+ for some auxiliary scripts (optional)
- (Optional) Conda/venv for any Python tooling
## Project layout
- `llmedge/` — main Android library module with JNI/C++ sources
- `llmedge-examples/app/` — Android sample app showing use-cases
- `llama.cpp/` — vendored third-party inference code used by the native layer
For simple usage, you can download prebuilt AARs from the Releases page and include them in your Android project.
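If you go the prebuilt route, a minimal Gradle Kotlin DSL sketch for consuming a downloaded AAR is shown below; the `libs/` location and the `llmedge-release.aar` filename are illustrative, so adjust them to match the artifact you actually downloaded.

```kotlin
// app/build.gradle.kts — sketch for consuming a prebuilt AAR dropped into app/libs/.
// The filename "llmedge-release.aar" is illustrative; use the name of the downloaded artifact.
dependencies {
    implementation(files("libs/llmedge-release.aar"))
}
```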
For development and building from source, follow these steps:

## Clone

```bash
git clone https://github.com/Aatricks/llmedge.git
cd llmedge
git submodule update --init --recursive
```
## Build (Android Studio)
- Open the `llmedge` root in Android Studio (or the `llmedge-examples` project).
- Let Gradle sync. Android Studio will download the required SDK/NDK and Gradle toolchains per `local.properties` and `gradle.properties`.
- Build the `:llmedge` library and the example app (Build -> Make Project).
## Command-line build
- Build the AAR of `llmedge` (from the project root):

  ```bash
  ./gradlew :llmedge:assembleDebug
  ```

- Build the example app:

  ```bash
  ./gradlew :llmedge-examples:app:assembleDebug
  ```
## Native build notes
- The native code resides in `llmedge/src/main/cpp` and uses JNI bindings and `llama.cpp`-style readers.
- If you run into issues with the C++ toolchain, verify `ANDROID_NDK_HOME` and `ANDROID_SDK_ROOT` in `local.properties` or your environment variables (a sketch for pinning the NDK version follows this list).
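If Gradle keeps resolving an NDK other than the one you intend, pinning the version in the module's build script is a common fix. The sketch below uses the standard Android Gradle Plugin `ndkVersion` property; the version string is only an example.

```kotlin
// llmedge/build.gradle.kts — sketch of pinning the NDK used for the native build.
// The version string is an example; match it to the NDK release installed on your machine.
android {
    ndkVersion = "26.1.10909125"
}
```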
## GGUF/Model files
`GGUFReader` supports loading GGUF model files. Place model files on device storage or in the app's files directory and point the APIs at those paths, as sketched below.
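As a concrete illustration of "point the APIs at those paths", the following sketch resolves a GGUF file from the app's private files directory; the `model.gguf` filename is hypothetical, and the returned absolute path is what you would pass to the library's model-loading calls.

```kotlin
import android.content.Context
import java.io.File

// Resolve a GGUF model placed in the app's private files directory.
// "model.gguf" is an illustrative filename; ship, copy, or download your own model file.
fun ggufModelPath(context: Context): String {
    val modelFile = File(context.filesDir, "model.gguf")
    require(modelFile.exists()) { "Model not found at ${modelFile.absolutePath}" }
    return modelFile.absolutePath
}
```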
## Hugging Face integration
- The `io.aatricks.llmedge.huggingface` package can download models from the Hugging Face Hub.
- Use `SmolLM.loadFromHuggingFace()` for automatic download and loading (see the sketch after this list).
- Optionally provide an HF token for private repositories via the `token` parameter.
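A rough usage sketch follows. Only `SmolLM.loadFromHuggingFace()` and the `token` parameter come from the notes above; the import path, the `modelId` parameter name, the example repository id, the suspending call, and the return type are all assumptions made for illustration.

```kotlin
import io.aatricks.llmedge.SmolLM // assumed location of SmolLM, inferred from the package name above

// Hypothetical sketch: apart from loadFromHuggingFace() and `token`, the parameter names,
// repository id, and suspend/return conventions are assumptions, not the library's documented API.
suspend fun loadSmolLM(hfToken: String? = null): SmolLM {
    return SmolLM.loadFromHuggingFace(
        modelId = "HuggingFaceTB/SmolLM2-360M-Instruct-GGUF", // example repo id (assumed)
        token = hfToken                                       // optional HF token for private repos
    )
}
```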
## Troubleshooting
- Gradle sync failed: delete `.gradle` and try again.
- Native build fails: ensure the correct NDK version and API level.
- Missing model: confirm the model path and file permissions on the device.
If you prefer a containerized CI build, refer to `llama.cpp/ci` for example scripts that set up the required toolchains.