# Zen Coder 24B (MLX)

An optimized MLX 4-bit quantization of Zen Coder 24B for Apple Silicon.
## Specs
| Property | Value |
|---|---|
| Parameters | 24B |
| Format | MLX 4-bit quantized |
| License | Apache 2.0 |
| Developer | Hanzo AI |
## Usage

```shell
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the quantized model from the Hugging Face Hub.
model, tokenizer = load("zenlm/zen-coder-24b-mlx")

prompt = "Write a Python function to sort a list"

# Wrap the prompt in the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
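You can also try the model without writing any Python via mlx-lm's bundled command-line generator. A minimal sketch, assuming a standard `pip install mlx-lm` on an Apple Silicon machine (exact flags may vary by mlx-lm version):

```shell
# Generate a completion directly from the command line.
python -m mlx_lm.generate \
  --model zenlm/zen-coder-24b-mlx \
  --prompt "Write a Python function to sort a list" \
  --max-tokens 512
```

The first invocation downloads the quantized weights from the Hub; subsequent runs load them from the local cache.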