Paper: Model Stock: All we need is just a few fine-tuned models (arXiv:2403.19522)
This is a merge of pre-trained language models created using mergekit.
Thanks to TheDrummer for his hard work on the Cydonia series of models.
v1.0
This model was merged using the Model Stock merge method, with TheDrummer/Cydonia-24B-v3 as the base.
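For context, Model Stock merges each layer by averaging the fine-tuned checkpoints and then interpolating that average back toward the base (pre-trained) weights, with the interpolation ratio set by the angle between the fine-tuned weights. A rough sketch of the rule, paraphrased from the paper (notation approximate, not part of this card):

```latex
% Rough sketch of the Model Stock merging rule (per layer), paraphrasing the paper:
%   w_0    = base (pre-trained) weights
%   w_avg  = average of the N fine-tuned weights
%   \theta = typical angle between fine-tuned weights, measured around w_0
%   t      = interpolation ratio derived from that angle
w_{\mathrm{merged}} = t\, w_{\mathrm{avg}} + (1 - t)\, w_0,
\qquad
t = \frac{N \cos\theta}{1 + (N - 1)\cos\theta}
```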
The following models were included in the merge:
- aixonlab/Eurydice-24b-v3.5
- PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
- LatitudeGames/Harbinger-24B
- sarvamai/sarvam-m
The following YAML configuration was used to produce this model:
```yaml
base_model: TheDrummer/Cydonia-24B-v3 # Cydonia v3
merge_method: model_stock
dtype: bfloat16
models:
  - model: aixonlab/Eurydice-24b-v3.5 # storytelling / RP
  - model: TheDrummer/Cydonia-24B-v3 # sprinkle in some extra Cydonia v3
  - model: PocketDoc/Dans-PersonalityEngine-V1.3.0-24b # Prompt Adherence
  - model: LatitudeGames/Harbinger-24B # Adventure
  - model: sarvamai/sarvam-m # intelligence
```
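If you want to reproduce the merge locally, the configuration above can be fed straight to mergekit. The snippet below is a minimal sketch using mergekit's Python API (MergeConfiguration, MergeOptions, run_merge); the file name config.yaml and the output directory are placeholders, not part of this card.

```python
# Minimal sketch: run the Model Stock merge with mergekit's Python API.
# Assumes the YAML above is saved as "config.yaml" (placeholder path)
# and that mergekit is installed (pip install mergekit).
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./merged-model",                    # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is present
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

Equivalently, the mergekit-yaml command-line entry point accepts the same configuration file and output directory.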