Active filters: 4-bit
TheBloke/Thespis-Mistral-7B-Alpha-v0.7-AWQ • Text Generation • 7B • Updated • 13 • 1
TheBloke/openbuddy-mixtral-7bx8-v16.3-32k-AWQ • Text Generation • 47B • Updated • 5 • 3
TheBloke/Thespis-Mistral-7B-Alpha-v0.7-GPTQ • Text Generation • 7B • Updated • 25 • 3
TheBloke/openbuddy-mixtral-7bx8-v16.3-32k-GPTQ • Text Generation • 47B • Updated • 14 • 2
msaavedra1234/TinyLlama-Alpaca-unsloth • Text Generation • 1B • Updated • 6
Text Generation • 7B • Updated • 5 • 2
Text Generation • 7B • Updated • 6 • 1
Text Generation • 13B • Updated • 4 • 2
TheBloke/OpenCAI-13B-GPTQ • Text Generation • 13B • Updated • 6 • 4
msaavedra1234/tinyllama_alpaca • Text Generation • 1B • Updated • 12
Text Generation • 7B • Updated • 13
dmntrd/zephyr-7b-beta-rocio-2 • Text Generation • 7B • Updated • 4
TheBloke/Unholy-v2-13B-GPTQ • Text Generation • 13B • Updated • 7 • 21
TheBloke/Unholy-v2-13B-AWQ • Text Generation • 13B • Updated • 18 • 7
Text Generation • 47B • Updated
DS-Archive/limarp-deepseek-67b-qlora
yujiepan/Llama-2-7b-hf-awq-w4g128 • Text Generation • 7B • Updated • 2
yujiepan/Llama-2-13b-hf-awq-w4g128 • Text Generation • 13B • Updated • 2
TheBloke/Panda-7B-v0.1-GPTQ • Text Generation • 7B • Updated • 8 • 1
TheBloke/Panda-7B-v0.1-AWQ • Text Generation • 7B • Updated • 6 • 1
chekable/mistral-abstract-finetune-quantize • Text Generation • 7B • Updated
Text Generation • 1B • Updated • 14 • 1
DS-Archive/limarp-zloss-mixtral-8x7b-qlora • Updated • 4 • 2
TheBloke/FlatDolphinMaid-8x7B-AWQ • Text Generation • 47B • Updated • 3 • 3
TheBloke/FlatDolphinMaid-8x7B-GPTQ • Text Generation • 47B • Updated • 5 • 6
lewtun/zephyr-7b-sft-qlora • Updated
mlx-community/CodeLlama-7b-Python-hf-4bit-mlx • Text Generation • Updated • 10
mlx-community/Mistral-7B-Instruct-v0.1-4bit-mlx • Text Generation • Updated • 16 • 1
TheBloke/WordWoven-13B-AWQ • Text Generation • 13B • Updated • 5 • 2
TheBloke/WordWoven-13B-GPTQ • Text Generation • 13B • Updated • 10 • 3
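The entries above are 4-bit quantized checkpoints in AWQ, GPTQ, and MLX formats. As a minimal sketch only, assuming the transformers, autoawq, and accelerate packages are installed and a CUDA GPU is available, one of the AWQ checkpoints listed here could be loaded roughly like this (the model choice is illustrative, not a recommendation):

    # Minimal sketch: load a 4-bit AWQ checkpoint from the list above.
    # Assumes transformers, autoawq, and accelerate are installed and a GPU is present.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "TheBloke/Thespis-Mistral-7B-Alpha-v0.7-AWQ"  # one of the listed 4-bit models

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",  # let accelerate place the quantized weights on the GPU
    )

    # Quick generation check
    inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The GPTQ entries load the same way when the optimum/auto-gptq backend is installed, while the mlx-community checkpoints target Apple's MLX runtime rather than transformers.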