nm-testing/tinyllama-one-shot-w4a16-group-packed • Text Generation • 0.3B • Updated • 2
nm-testing/llama1.1b_0.5_sparse_bitmask • Text Generation • 0.8B • Updated
nm-testing/llama7b-one-shot-2_4-w4a16-packed • Text Generation • 1B • Updated • 1
nm-testing/tinyllama-one-shot-w4a16-group128-packed • Text Generation • 0.3B • Updated
nm-testing/tinyllama-one-shot-w4a16-channel-packed • Text Generation • 0.3B • Updated
nm-testing/tinyllama-one-shot-w4a16-channel-compressed • Text Generation • 1B • Updated • 1
nm-testing/tinyllama-one-shot-dynamic-test • Text Generation • 1B • Updated
nm-testing/tinyllama-one-shot-static-quant-test-compressed • Text Generation • 1B • Updated
nm-testing/asym-w8w8-int8-static-per-tensor-tiny-llama • 1B • Updated • 2.2k
nm-testing/tinyllama-oneshot-w8a8-channel-dynamic-token-v2-asym • 1B • Updated • 1
nm-testing/OLMoE-1B-7B-0924-Instruct-FP8 • 7B • Updated • 37
nm-testing/DeepSeek-Coder-V2-Lite-Instruct-W8A8 • 16B • Updated • 7
nm-testing/TinyLlama-1.1B-Chat-v1.0-actorder-weight • Text Generation • 0.3B • Updated • 1
nm-testing/TinyLlama-1.1B-Chat-v1.0-actorder-group • Text Generation • 0.3B • Updated • 785
nm-testing/tinyllama-w8a16-dense • 1B • Updated • 255
nm-testing/tinyllama-w8a8-compressed • 1B • Updated • 822
nm-testing/tinyllama-w4a16-compressed • 0.3B • Updated • 672
nm-testing/tinyllama-fp8-dynamic-compressed • 1B • Updated • 402
nm-testing/SmolLM-1.7B-Instruct-quantized.w4a16 • Text Generation • 0.4B • Updated • 3
nm-testing/SmolLM-360M-Instruct-quantized.w4a16 • 0.1B • Updated
nm-testing/SmolLM-135M-Instruct-quantized.w4a16 • Text Generation • 71.6M • Updated
nm-testing/Mixtral-8x7B-Instruct-v0.1-W4A16-channel-quantized • 6B • Updated • 848
nm-testing/Meta-Llama-3-8B-Instruct-fp8-compressed
nm-testing/Phi-3-mini-128k-instruct-FP8 • 4B • Updated • 1.05k
nm-testing/Mixtral-8x7B-Instruct-v0.1-FP8-quantized • 47B • Updated
nm-testing/Mixtral-8x7B-Instruct-v0.1-W8A16-quantized • 12B • Updated • 861
nm-testing/Mixtral-8x7B-Instruct-v0.1-W4A16-quantized • 6B • Updated • 865
nm-testing/tinyllama-oneshot-w8a8-dynamic-token-v2-asym • Text Generation • 1B • Updated • 8
nm-testing/Qwen2-1.5B-Instruct-FP8W8 • Text Generation • 2B • Updated • 6
nm-testing/Meta-Llama-3-8B-Instruct-W4A16-ACTORDER-compressed-tensors-test • Text Generation • 2B • Updated
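
The checkpoints above are quantized and compressed test artifacts (w4a16, w8a8, FP8, sparse bitmask, compressed-tensors). As a rough illustration only, not part of the listing, such a checkpoint can usually be loaded through vLLM, which reads the quantization config stored in the repository. A minimal sketch, assuming vLLM is installed, the repo is publicly downloadable, and its compression format is supported by the installed vLLM version; the model ID is taken from the listing, everything else is illustrative:

    # Sketch: running one of the listed quantized checkpoints with vLLM.
    from vllm import LLM, SamplingParams

    # The quantization scheme (here weight-only INT4, i.e. w4a16) is picked up
    # from the config shipped inside the repository.
    llm = LLM(model="nm-testing/tinyllama-w4a16-compressed")
    params = SamplingParams(max_tokens=64, temperature=0.0)

    outputs = llm.generate(["Summarize w4a16 quantization in one sentence."], params)
    print(outputs[0].outputs[0].text)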