This is a GGUF-formatted checkpoint of rnj-1-instruct, suitable for use with llama.cpp, Ollama, and other GGUF-compatible runtimes. The weights are quantized with the Q4_K_M scheme, which brings the model file down to about 4.8 GB.
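If you prefer to fetch the checkpoint yourself rather than letting the runtime pull it, the repository can be downloaded directly from the Hub. A minimal sketch using huggingface-cli (the target directory is just an example, adjust as needed):

huggingface-cli download EssentialAI/rnj-1-instruct-GGUF --local-dir ./rnj-1-instruct-GGUF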
For llama.cpp, install version 7328 or newer and run either of these commands:
llama-cli -hf EssentialAI/rnj-1-instruct-GGUF
llama-server -hf EssentialAI/rnj-1-instruct-GGUF -c 0 # and open browser to localhost:8080
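Once llama-server is running, it also exposes an OpenAI-compatible API on the same port, so the model can be queried from scripts as well as from the browser UI. A minimal sketch with curl (the model field is only a label here, since the server is already serving this checkpoint):

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "rnj-1-instruct",
    "messages": [
      {"role": "user", "content": "Give me a one-sentence summary of GGUF."}
    ]
  }'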
For Ollama, install version v0.13.3 or newer and run:
ollama run rnj-1
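Ollama also serves a local HTTP API (on port 11434 by default), so the same model can be called programmatically once it has been pulled. A minimal sketch, assuming the model is available under the name rnj-1 as above:

curl http://localhost:11434/api/generate -d '{
  "model": "rnj-1",
  "prompt": "Give me a one-sentence summary of GGUF.",
  "stream": false
}'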