jeiku committed (verified) · Commit 2d15bdb · 1 Parent(s): 0153cff

Model save

Files changed (1): README.md (+165, −0)

README.md ADDED
---
library_name: transformers
license: other
base_model: jeiku/completion4B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: instructered4B
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
base_model: jeiku/completion4B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

hub_model_id: jeiku/instructered4B
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true

datasets:
  - path: FourOhFour/Instruct_Phase
    type: sharegpt
    conversation: chatml

chat_template: chatml

shuffle_merged_datasets: true
val_set_size: 0.0025
output_dir: ./outputs/out

adapter:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:

sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

wandb_project: EXP4B
wandb_entity:
wandb_watch:
wandb_name: EXP4B
wandb_log_model:

gradient_accumulation_steps: 12
micro_batch_size: 3
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00001
weight_decay: 0.05

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_ratio: 0.1
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 2

debug:
deepspeed: deepspeed_configs/zero3_bf16.json
fsdp:
fsdp_config:

special_tokens:
  pad_token: <|finetune_right_pad_id|>

```

</details><br>

# instructered4B

This model is a fine-tuned version of [jeiku/completion4B](https://huggingface.co/jeiku/completion4B) on the FourOhFour/Instruct_Phase dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3713
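
A minimal inference sketch, assuming the saved tokenizer carries the ChatML chat template set in the config above and that `accelerate` is installed for `device_map="auto"`; the prompt text is purely illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jeiku/instructered4B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a ChatML-formatted prompt via the tokenizer's chat template.
messages = [
    {"role": "user", "content": "Summarize what instruction tuning does in one sentence."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# eval_max_new_tokens in the training config was 128; reuse it here as a default.
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```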

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

Training used the FourOhFour/Instruct_Phase dataset in ShareGPT format, rendered with the ChatML conversation template (see the axolotl config above). Per `val_set_size: 0.0025`, 0.25% of the shuffled data was held out as the evaluation set.
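
For reference, `type: sharegpt` with `conversation: chatml` renders each conversation into ChatML turns before tokenization and packing. A rough sketch of that layout, with purely illustrative roles and messages (not drawn from the dataset):

```python
# ChatML wraps every turn in <|im_start|>{role} ... <|im_end|> markers.
def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML string (illustrative)."""
    rendered = ""
    for message in messages:
        rendered += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    return rendered

example = [
    {"role": "user", "content": "What is instruction tuning?"},                 # hypothetical turn
    {"role": "assistant", "content": "Fine-tuning on prompt/response pairs."},  # hypothetical turn
]
print(to_chatml(example))
```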

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (the derived batch-size and warmup figures are worked through in a short sketch after this list):
- learning_rate: 1e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 12
- total_train_batch_size: 72
- total_eval_batch_size: 6
- optimizer: 8-bit AdamW (bitsandbytes, `adamw_bnb_8bit`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 68
- num_epochs: 2

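The derived figures above follow from the config values; a small sketch of the arithmetic (the warmup estimate leans on the step/epoch numbers in the results table below, so treat it as approximate):

```python
# Effective batch sizes reported by the trainer.
micro_batch_size = 3             # per-device train batch size
eval_batch_size = 3              # per-device eval batch size
gradient_accumulation_steps = 12
num_devices = 2

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
total_eval_batch_size = eval_batch_size * num_devices
print(total_train_batch_size, total_eval_batch_size)  # 72 6

# Warmup steps come from warmup_ratio (0.1) times the total optimizer steps.
# From the results table, step 602 falls at epoch ~1.7579, so a full 2-epoch run
# is roughly 602 / 1.7579 * 2 ≈ 685 steps, and 0.1 * 685 rounds to ~68.
warmup_ratio = 0.1
estimated_total_steps = 602 / 1.7579 * 2
print(round(warmup_ratio * estimated_total_steps))    # ≈ 68
```
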
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.336 | 0.0029 | 1 | 1.7114 |
| 0.9631 | 0.2516 | 86 | 1.4098 |
| 0.9347 | 0.5032 | 172 | 1.3828 |
| 0.9142 | 0.7548 | 258 | 1.3693 |
| 0.7967 | 1.0037 | 344 | 1.3659 |
| 0.7912 | 1.2551 | 430 | 1.3728 |
| 0.7957 | 1.5065 | 516 | 1.3730 |
| 0.7951 | 1.7579 | 602 | 1.3713 |


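To visualize the plateau, a quick sketch that plots the validation-loss points from the table above (matplotlib is assumed to be installed; it is not part of the training stack listed below):

```python
import matplotlib.pyplot as plt

# (step, validation loss) pairs copied from the table above.
eval_points = [
    (1, 1.7114), (86, 1.4098), (172, 1.3828), (258, 1.3693),
    (344, 1.3659), (430, 1.3728), (516, 1.3730), (602, 1.3713),
]
steps, losses = zip(*eval_points)

plt.plot(steps, losses, marker="o")
plt.xlabel("Step")
plt.ylabel("Validation loss")
plt.title("instructered4B validation loss")
plt.savefig("val_loss.png")
```
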
### Framework versions

- Transformers 4.46.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.20.0