---
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- text-generation-inference
- transformers
- unsloth
- mixtral
- mixture-of-experts
- qlora
- code-generation
- python-coder
- code-alpaca
license: apache-2.0
language:
- en
---

# Puxis97/Mixtral-8x7B-Python-Coder-CodeAlpaca 🐍
|
This model is a **Mixtral 8x7B Instruct** model fine-tuned using **QLoRA** on the **CodeAlpaca 20K** dataset to specialize in **Python code instruction following and generation**.
|
- **Developed by:** Puxis97
- **License:** apache-2.0
- **Fine-tuned from model:** mistralai/Mixtral-8x7B-Instruct-v0.1
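### Usage

A minimal inference sketch, assuming the repo id from the title above and the standard Mixtral Instruct chat template; 4-bit loading and the generation settings are illustrative choices, not a published recipe:

```python
# Minimal inference sketch (repo id assumed from the model card title;
# 4-bit loading keeps VRAM usage manageable on a single GPU).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Puxis97/Mixtral-8x7B-Python-Coder-CodeAlpaca"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# The tokenizer's chat template wraps the message in Mixtral's
# [INST] ... [/INST] instruction format.
messages = [{"role": "user", "content": "Write a Python function that returns the n-th Fibonacci number."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, temperature=0.2, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```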
|
### Training Details
|
This model was fine-tuned efficiently using **Unsloth's QLoRA optimizations** together with the Hugging Face TRL library, yielding an instruction-following code generation model that can run on consumer GPUs (a training sketch follows the table below).
|
| Setting | Value |
| :--- | :--- |
| **Base Model** | `mistralai/Mixtral-8x7B-Instruct-v0.1` |
| **Dataset** | `HuggingFaceH4/CodeAlpaca_20K` |
| **Method** | QLoRA (4-bit quantization) |
| **Task** | Code Instruction Following / Python Coding |
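For reference, a training sketch in the same spirit: the exact hyperparameters and dataset formatting used for this checkpoint are not published, so everything below (LoRA rank, target modules, batch size, the dataset column names) is an illustrative assumption:

```python
# Illustrative QLoRA fine-tuning sketch with Unsloth + TRL; hyperparameters
# are assumptions, not the exact recipe used for this checkpoint.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

max_seq_length = 2048

# Load the base model in 4-bit (QLoRA: frozen quantized weights + LoRA adapters).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mistralai/Mixtral-8x7B-Instruct-v0.1",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach trainable LoRA adapters; rank and target modules are assumed values.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

dataset = load_dataset("HuggingFaceH4/CodeAlpaca_20K", split="train")

def to_text(example):
    # Wrap each pair in Mixtral's instruction format; column names assumed.
    return {"text": f"[INST] {example['prompt']} [/INST] {example['completion']}"}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```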
|
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)