---
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- generated_from_trainer
library_name: peft
model-index:
- name: work/10283/sarella/ls6/exlong-internal/_work/exp/conditionnestack2e-no-name-ft/lora-codellama-7b-123
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
adapter: lora
base_model: codellama/CodeLlama-7b-Instruct-hf
base_model_config: codellama/CodeLlama-7b-Instruct-hf
bf16: true
dataset_prepared_path: null
datasets:
- path: /work/10283/sarella/ls6/exlong-internal/_work/setup/conditionnestack2e-no-name-ft/train/train/train-conditionnestack2e-no-name-ft.jsonl
  type:
    field_input: input
    field_instruction: instruction
    field_output: output
    field_system: system
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: You are a helpful programming assistant and an expert Java programmer.
      You are helping a user writing exceptional-behavior tests for their Java code.
debug: null
deepspeed: null
early_stopping_patience: null
eval_sample_packing: false
eval_steps: 20
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
is_llama_derived_model: true
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 4
model_type: LlamaForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: /work/10283/sarella/ls6/exlong-internal/_work/exp/conditionnestack2e-no-name-ft/lora-codellama-7b-123
pad_to_sequence_len: true
resume_from_checkpoint: null
sample_packing: true
save_steps: null
seed: 123
sequence_len: 4096
special_tokens:
  bos_token: <s>
  eos_token: </s>
  unk_token: <unk>
strict: false
tf32: false
tokenizer_type: CodeLlamaTokenizer
train_on_inputs: false
val_set_size: 0.01
wandb_entity: null
wandb_log_model: null
wandb_project: null
wandb_run_id: null
wandb_watch: null
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>

# work/10283/sarella/ls6/exlong-internal/_work/exp/conditionnestack2e-no-name-ft/lora-codellama-7b-123

This model is a LoRA fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on the `conditionnestack2e-no-name-ft` training data listed in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 0.4931

## Model description

This is a LoRA (PEFT) adapter for CodeLlama-7b-Instruct. Per the system prompt in the training config above, it is trained to act as a programming assistant that helps write exceptional-behavior tests for Java code.

## Intended uses & limitations

More information needed
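
No usage example is documented in this card. The snippet below is a minimal sketch (not taken from the original training or evaluation code) of loading this LoRA adapter on top of the base model with Transformers and PEFT. The adapter repository id is a placeholder, since the card only records a local output path, and the prompt assembly only approximates axolotl's instruction formatting.

```python
# Minimal sketch, not from the original card: load the base model and apply this
# LoRA adapter with PEFT. "your-org/lora-codellama-7b-123" is a placeholder;
# replace it with the actual adapter repo id or the local output directory.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "codellama/CodeLlama-7b-Instruct-hf"
adapter_id = "your-org/lora-codellama-7b-123"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# System prompt copied from the training config; the exact prompt template used
# during training is determined by axolotl's instruction formatting, so this
# simple concatenation is an approximation.
system = (
    "You are a helpful programming assistant and an expert Java programmer. "
    "You are helping a user writing exceptional-behavior tests for their Java code."
)
instruction = "..."  # e.g. the method under test and the expected exception

inputs = tokenizer(system + "\n" + instruction, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```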

## Training and evaluation data

Per the axolotl config above, training used the instruction-style JSONL dataset `train-conditionnestack2e-no-name-ft.jsonl` (fields: `system`, `instruction`, `input`, `output`), with 1% of it held out as the evaluation set (`val_set_size: 0.01`).
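
The records themselves are not included in this card; the sketch below shows the shape implied by the `datasets.type` field mapping in the config, with hypothetical placeholder values.

```python
# Hypothetical shape of one line of the training JSONL, inferred from the
# field_system / field_instruction / field_input / field_output mapping above.
# The actual record contents are not shown in this card.
example_record = {
    "system": "You are a helpful programming assistant and an expert Java programmer. ...",
    "instruction": "<context about the method under test and the expected exception>",
    "input": "",
    "output": "<the target exceptional-behavior test>",
}
```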

## Training procedure
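
The adapter was trained with axolotl 0.4.0 using the config shown at the top of this card. The exact launch command is not recorded here; a config like this is typically run with axolotl's CLI, for example `accelerate launch -m axolotl.cli.train config.yaml` (assumed from standard axolotl 0.4.x usage, not from this card).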

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
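
The total train batch size of 32 is the per-device batch size multiplied by the gradient accumulation steps (4 × 8 = 32), assuming a single device; the generic "Adam" optimizer reported above corresponds to the `adamw_bnb_8bit` (8-bit AdamW) setting in the config.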

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8379        | 0.01  | 1    | 1.0354          |
| 0.3779        | 0.16  | 20   | 0.4820          |
| 0.3361        | 0.31  | 40   | 0.4560          |
| 0.3153        | 0.47  | 60   | 0.4467          |
| 0.2735        | 0.63  | 80   | 0.4457          |
| 0.2437        | 0.78  | 100  | 0.4400          |
| 0.2941        | 0.94  | 120  | 0.4416          |
| 0.2153        | 1.08  | 140  | 0.4466          |
| 0.2583        | 1.23  | 160  | 0.4499          |
| 0.2026        | 1.39  | 180  | 0.4540          |
| 0.185         | 1.55  | 200  | 0.4541          |
| 0.2296        | 1.7   | 220  | 0.4604          |
| 0.2059        | 1.86  | 240  | 0.4591          |
| 0.1998        | 2.02  | 260  | 0.4626          |
| 0.1879        | 2.15  | 280  | 0.4828          |
| 0.1861        | 2.31  | 300  | 0.4944          |
| 0.1561        | 2.47  | 320  | 0.4947          |
| 0.1888        | 2.62  | 340  | 0.4939          |
| 0.1665        | 2.78  | 360  | 0.4945          |
| 0.1627        | 2.94  | 380  | 0.4931          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.39.0.dev0
- PyTorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.0
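
These are the versions recorded by the Trainer. To approximate the environment, matching releases can be installed with pip (for example `pip install peft==0.10.0 datasets==2.18.0 tokenizers==0.15.0 torch==2.1.2`); Transformers 4.39.0.dev0 was a development snapshot, so the nearest stable release (4.39.x) or a source install of that revision would be needed.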