This is just an attempt to reimplement llama2.c in Zig, and also a toy project for me to explore Zig.
This repo is more of a direct implementation, keeping things simple and easy to understand (at least for myself).
If you are looking for stable and fast implementations, please consider checking out cgbur/llama2.zig and clebert/llama2.zig!
- zig: 0.11.0
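You can check which compiler you have with:

```bash
$ zig version
0.11.0
```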
```bash
# XXX: Currently the build has to look up `ztracy` even though it is a
# development-only dependency, so you have to fetch the submodule once:
# $ git submodule update --init --recursive
$ zig build -Doptimize=ReleaseFast
```

Almost all arguments in llama2.c are supported except the ones related to chat mode:
```bash
# For stories15M, remember to download the model and tokenizer first:
# $ wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin -P models
# $ wget https://github.com/karpathy/llama2.c/raw/master/tokenizer.bin -P models
$ ./zig-out/bin/run models/stories15M.bin \
    -z models/tokenizer.bin -t 0.8 -n 256 -i "One day, Lily met a Shoggoth"
```

(If you want to compare the output with llama2.c, remember to specify an identical seed.)
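For instance, assuming the seed flag matches llama2.c's `-s <seed>` (an assumption based on the argument compatibility noted above), a reproducible comparison could look like this:

```bash
# Hypothetical reproducible run; assumes `-s` sets the RNG seed as in llama2.c.
$ ./zig-out/bin/run models/stories15M.bin \
    -z models/tokenizer.bin -t 0.8 -n 256 -s 42 -i "One day, Lily met a Shoggoth"
# Running llama2.c's ./run with the same arguments and the same seed should
# then produce output you can diff against.
```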
Running the tests currently requires PyTorch, which is used to load the original checkpoint and check whether the weights are mapped correctly.
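A CPU-only PyTorch build is enough for this; the exact install command depends on your platform (see https://pytorch.org), but one common option is:

```bash
# One possible way to install PyTorch; check https://pytorch.org for the
# command that matches your platform and Python setup.
$ pip install torch
```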
```bash
# Remember to download the model `stories15M.pt` (PyTorch model) first:
# $ wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.pt -P models
$ zig test tests.zig
```

If you want to profile the code, please fetch the submodules:
```bash
$ git submodule update --init --recursive
```

Then build the code with Tracy enabled:
```bash
$ zig build -Doptimize=ReleaseFast -Duse_tracy=true
```

For further details, please check out docs/INSTALL.md.
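With Tracy instrumentation enabled, the usual workflow (standard Tracy usage, not specific to this repo) is to start the Tracy profiler UI first, then run the instrumented binary so it can connect and stream a capture:

```bash
# Start the Tracy profiler UI, then run the instrumented binary; it connects
# to the listening profiler and streams trace data (standard Tracy workflow).
$ ./zig-out/bin/run models/stories15M.bin -z models/tokenizer.bin -n 256
```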