Labira/LabiraPJOK_6_100_Group

This model is a fine-tuned version of Labira/LabiraPJOK_5_100_Group on an unknown dataset. It achieves the following results on the training and evaluation sets at the final epoch (a minimal loading sketch follows the results below):

  • Train Loss: 0.0384
  • Validation Loss: 0.0017
  • Epoch: 99
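
The checkpoint can be loaded with the same TensorFlow classes used during training. The task-specific head is not documented on this card, so the sketch below assumes the generic TFAutoModel class; substitute the appropriate TFAutoModelFor* class once the intended task is known.

```python
from transformers import AutoTokenizer, TFAutoModel

# Minimal loading sketch (assumes TensorFlow weights, per the training framework listed below).
# TFAutoModel loads the base encoder only; swap in a task-specific TFAutoModelFor* class
# once the intended downstream task is known.
model_id = "Labira/LabiraPJOK_6_100_Group"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModel.from_pretrained(model_id)

# Example forward pass on a placeholder sentence.
inputs = tokenizer("Contoh kalimat.", return_tensors="tf")
outputs = model(**inputs)
```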

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a Keras sketch reconstructing the optimizer follows the list):

  • optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 400, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
  • training_precision: float32
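
For readability, the logged optimizer configuration above can be reconstructed in plain Keras roughly as follows. This is a sketch based only on the logged config (Adam with a linear PolynomialDecay from 2e-05 to 0 over 400 steps), not the exact training script.

```python
import tensorflow as tf

# Learning-rate schedule from the logged config: linear decay (power=1.0)
# from 2e-05 down to 0.0 over 400 steps, without cycling.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=400,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam optimizer with the logged hyperparameters (no weight decay, no gradient clipping).
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```

If the 400 decay steps correspond to the total number of optimizer steps, this would imply roughly 4 steps per epoch over the 100 epochs; the actual batch size and dataset size are not documented here.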

Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.2140 | 2.5795 | 0 |
| 1.5796 | 1.6377 | 1 |
| 1.2599 | 1.1880 | 2 |
| 1.0232 | 0.9323 | 3 |
| 0.7286 | 0.7767 | 4 |
| 0.6644 | 0.6911 | 5 |
| 0.5911 | 0.6071 | 6 |
| 0.4856 | 0.5126 | 7 |
| 0.4193 | 0.4042 | 8 |
| 0.3771 | 0.3472 | 9 |
| 0.2351 | 0.3399 | 10 |
| 0.3774 | 0.2887 | 11 |
| 0.3694 | 0.2545 | 12 |
| 0.2110 | 0.2572 | 13 |
| 0.2303 | 0.2229 | 14 |
| 0.1635 | 0.1686 | 15 |
| 0.1560 | 0.1591 | 16 |
| 0.2134 | 0.1437 | 17 |
| 0.1097 | 0.1447 | 18 |
| 0.1041 | 0.1354 | 19 |
| 0.1667 | 0.1058 | 20 |
| 0.0982 | 0.0677 | 21 |
| 0.0717 | 0.0470 | 22 |
| 0.0966 | 0.0506 | 23 |
| 0.1553 | 0.0578 | 24 |
| 0.1038 | 0.0798 | 25 |
| 0.1154 | 0.0802 | 26 |
| 0.0830 | 0.0559 | 27 |
| 0.0554 | 0.0403 | 28 |
| 0.0856 | 0.0337 | 29 |
| 0.0638 | 0.0296 | 30 |
| 0.0671 | 0.0228 | 31 |
| 0.0666 | 0.0162 | 32 |
| 0.0787 | 0.0114 | 33 |
| 0.0704 | 0.0115 | 34 |
| 0.0476 | 0.0117 | 35 |
| 0.0451 | 0.0099 | 36 |
| 0.0600 | 0.0077 | 37 |
| 0.0932 | 0.0056 | 38 |
| 0.0483 | 0.0045 | 39 |
| 0.0841 | 0.0052 | 40 |
| 0.0584 | 0.0062 | 41 |
| 0.0403 | 0.0080 | 42 |
| 0.1138 | 0.0052 | 43 |
| 0.0494 | 0.0043 | 44 |
| 0.0592 | 0.0040 | 45 |
| 0.0639 | 0.0036 | 46 |
| 0.0481 | 0.0036 | 47 |
| 0.0485 | 0.0041 | 48 |
| 0.0590 | 0.0044 | 49 |
| 0.0271 | 0.0040 | 50 |
| 0.0426 | 0.0036 | 51 |
| 0.0463 | 0.0035 | 52 |
| 0.0468 | 0.0035 | 53 |
| 0.1085 | 0.0035 | 54 |
| 0.0487 | 0.0035 | 55 |
| 0.0271 | 0.0035 | 56 |
| 0.0278 | 0.0034 | 57 |
| 0.0291 | 0.0031 | 58 |
| 0.0496 | 0.0028 | 59 |
| 0.0642 | 0.0030 | 60 |
| 0.0467 | 0.0029 | 61 |
| 0.0449 | 0.0030 | 62 |
| 0.0509 | 0.0030 | 63 |
| 0.0622 | 0.0028 | 64 |
| 0.0709 | 0.0037 | 65 |
| 0.0566 | 0.0047 | 66 |
| 0.0701 | 0.0048 | 67 |
| 0.0510 | 0.0040 | 68 |
| 0.0404 | 0.0032 | 69 |
| 0.0189 | 0.0027 | 70 |
| 0.0369 | 0.0025 | 71 |
| 0.0595 | 0.0021 | 72 |
| 0.0736 | 0.0022 | 73 |
| 0.0554 | 0.0025 | 74 |
| 0.0432 | 0.0026 | 75 |
| 0.0180 | 0.0027 | 76 |
| 0.0415 | 0.0027 | 77 |
| 0.0391 | 0.0026 | 78 |
| 0.0276 | 0.0026 | 79 |
| 0.0426 | 0.0025 | 80 |
| 0.0757 | 0.0025 | 81 |
| 0.0331 | 0.0024 | 82 |
| 0.0595 | 0.0025 | 83 |
| 0.0371 | 0.0023 | 84 |
| 0.0419 | 0.0022 | 85 |
| 0.0509 | 0.0022 | 86 |
| 0.0459 | 0.0021 | 87 |
| 0.0574 | 0.0020 | 88 |
| 0.0233 | 0.0022 | 89 |
| 0.0195 | 0.0020 | 90 |
| 0.0515 | 0.0019 | 91 |
| 0.0348 | 0.0019 | 92 |
| 0.0790 | 0.0018 | 93 |
| 0.0533 | 0.0018 | 94 |
| 0.0408 | 0.0018 | 95 |
| 0.0609 | 0.0017 | 96 |
| 0.0442 | 0.0017 | 97 |
| 0.0427 | 0.0017 | 98 |
| 0.0384 | 0.0017 | 99 |

Framework versions

  • Transformers 4.45.2
  • TensorFlow 2.17.0
  • Datasets 2.20.0
  • Tokenizers 0.20.1