# efficientnet-b0

This model is a fine-tuned version of google/efficientnet-b0 on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.0460
- Accuracy: 0.9868
- Precision: 0.9944
- Recall: 0.9768
- F1: 0.9855
- Tp: 1600
- Tn: 1901
- Fp: 9
- Fn: 38
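The headline metrics follow directly from the confusion-matrix counts (Tp/Tn/Fp/Fn) above. As a quick sanity check, a minimal plain-Python sketch, with the counts copied from the evaluation set:

```python
# Confusion-matrix counts reported on the evaluation set above.
tp, tn, fp, fn = 1600, 1901, 9, 38

accuracy = (tp + tn) / (tp + tn + fp + fn)          # fraction of correct predictions
precision = tp / (tp + fp)                          # predicted positives that are real
recall = tp / (tp + fn)                             # real positives that were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(f"{accuracy:.4f} {precision:.4f} {recall:.4f} {f1:.4f}")
# -> 0.9868 0.9944 0.9768 0.9855
```

These reproduce the reported Loss-adjacent metrics exactly, which suggests the card's numbers were computed on a single positive class over 3,548 evaluation examples.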

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0003
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 55
- num_epochs: 5
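For reference, the `linear` scheduler with 55 warmup steps corresponds to a simple ramp-then-decay rule on the learning rate. A minimal sketch, assuming 550 total optimizer steps (110 steps per epoch × 5 epochs, as the step column in the results below suggests):

```python
def linear_schedule_lr(step, base_lr=3e-4, warmup_steps=55, total_steps=550):
    """Learning rate under linear warmup followed by linear decay to zero."""
    if step < warmup_steps:
        # Warmup phase: ramp linearly from 0 up to base_lr.
        return base_lr * step / warmup_steps
    # Decay phase: ramp linearly from base_lr back down to 0.
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(linear_schedule_lr(0), linear_schedule_lr(55), linear_schedule_lr(550))
# -> 0.0 0.0003 0.0
```

So the peak learning rate of 3e-4 is reached at step 55 (roughly half an epoch in) and falls to zero by the final step.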

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Tp | Tn | Fp | Fn |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.1420 | 0.0991 | 11 | 0.0896 | 0.9777 | 0.9809 | 0.9707 | 0.9758 | 1590 | 1879 | 31 | 48 |
| 0.1258 | 0.1982 | 22 | 0.0769 | 0.9845 | 0.9944 | 0.9719 | 0.9830 | 1592 | 1901 | 9 | 46 |
| 0.1397 | 0.2973 | 33 | 0.0904 | 0.9797 | 0.9821 | 0.9737 | 0.9779 | 1595 | 1881 | 29 | 43 |
| 0.1355 | 0.3964 | 44 | 0.0796 | 0.9803 | 0.9775 | 0.9799 | 0.9787 | 1605 | 1873 | 37 | 33 |
| 0.1395 | 0.4955 | 55 | 0.0982 | 0.9707 | 0.9565 | 0.9811 | 0.9687 | 1607 | 1837 | 73 | 31 |
| 0.1446 | 0.5946 | 66 | 0.0911 | 0.9775 | 0.9779 | 0.9731 | 0.9755 | 1594 | 1874 | 36 | 44 |
| 0.1362 | 0.6937 | 77 | 0.1183 | 0.9653 | 0.9501 | 0.9762 | 0.9630 | 1599 | 1826 | 84 | 39 |
| 0.1351 | 0.7928 | 88 | 0.1260 | 0.9704 | 0.9582 | 0.9786 | 0.9683 | 1603 | 1840 | 70 | 35 |
| 0.1650 | 0.8919 | 99 | 0.2204 | 0.9448 | 0.9083 | 0.9792 | 0.9424 | 1604 | 1748 | 162 | 34 |
| 0.1695 | 0.9910 | 110 | 0.0850 | 0.9794 | 0.9827 | 0.9725 | 0.9776 | 1593 | 1882 | 28 | 45 |
| 0.1571 | 1.0901 | 121 | 0.1192 | 0.9642 | 0.9967 | 0.9255 | 0.9598 | 1516 | 1905 | 5 | 122 |
| 0.1219 | 1.1892 | 132 | 0.0881 | 0.9738 | 0.9911 | 0.9518 | 0.9710 | 1559 | 1896 | 14 | 79 |
| 0.1456 | 1.2883 | 143 | 0.0895 | 0.9760 | 0.9911 | 0.9567 | 0.9736 | 1567 | 1896 | 14 | 71 |
| 0.1357 | 1.3874 | 154 | 0.1397 | 0.9572 | 0.9360 | 0.9737 | 0.9545 | 1595 | 1801 | 109 | 43 |
| 0.1516 | 1.4865 | 165 | 0.0801 | 0.9760 | 0.9755 | 0.9725 | 0.9740 | 1593 | 1870 | 40 | 45 |
| 0.1309 | 1.5856 | 176 | 0.0961 | 0.9707 | 0.9593 | 0.9780 | 0.9686 | 1602 | 1842 | 68 | 36 |
| 0.1430 | 1.6847 | 187 | 0.0573 | 0.9862 | 0.9975 | 0.9725 | 0.9849 | 1593 | 1906 | 4 | 45 |
| 0.1649 | 1.7838 | 198 | 0.0741 | 0.9808 | 0.9846 | 0.9737 | 0.9791 | 1595 | 1885 | 25 | 43 |
| 0.1575 | 1.8829 | 209 | 0.0638 | 0.9828 | 0.9962 | 0.9664 | 0.9811 | 1583 | 1904 | 6 | 55 |
| 0.1463 | 1.9820 | 220 | 0.0575 | 0.9853 | 0.9994 | 0.9689 | 0.9839 | 1587 | 1909 | 1 | 51 |
| 0.1375 | 2.0811 | 231 | 0.0587 | 0.9848 | 0.995 | 0.9719 | 0.9833 | 1592 | 1902 | 8 | 46 |
| 0.1264 | 2.1802 | 242 | 0.0637 | 0.9842 | 0.9913 | 0.9744 | 0.9828 | 1596 | 1896 | 14 | 42 |
| 0.1238 | 2.2793 | 253 | 0.0579 | 0.9851 | 0.9932 | 0.9744 | 0.9837 | 1596 | 1899 | 11 | 42 |
| 0.1521 | 2.3784 | 264 | 0.0558 | 0.9868 | 0.9963 | 0.9750 | 0.9855 | 1597 | 1904 | 6 | 41 |
| 0.1453 | 2.4775 | 275 | 0.0728 | 0.9853 | 0.9877 | 0.9805 | 0.9841 | 1606 | 1890 | 20 | 32 |
| 0.1527 | 2.5766 | 286 | 0.0702 | 0.9893 | 0.9975 | 0.9792 | 0.9883 | 1604 | 1906 | 4 | 34 |
| 0.1387 | 2.6757 | 297 | 0.0544 | 0.9884 | 0.9975 | 0.9774 | 0.9874 | 1601 | 1906 | 4 | 37 |
| 0.1397 | 2.7748 | 308 | 0.1035 | 0.9665 | 0.9502 | 0.9786 | 0.9642 | 1603 | 1826 | 84 | 35 |
| 0.1193 | 2.8739 | 319 | 0.0624 | 0.9851 | 0.9865 | 0.9811 | 0.9838 | 1607 | 1888 | 22 | 31 |
| 0.1358 | 2.9730 | 330 | 0.0782 | 0.9794 | 0.9815 | 0.9737 | 0.9776 | 1595 | 1880 | 30 | 43 |
| 0.1298 | 3.0721 | 341 | 0.0548 | 0.9873 | 0.9920 | 0.9805 | 0.9862 | 1606 | 1897 | 13 | 32 |
| 0.1428 | 3.1712 | 352 | 0.0909 | 0.9760 | 0.9698 | 0.9786 | 0.9742 | 1603 | 1860 | 50 | 35 |
| 0.1350 | 3.2703 | 363 | 0.0829 | 0.9777 | 0.9716 | 0.9805 | 0.9760 | 1606 | 1863 | 47 | 32 |
| 0.1231 | 3.3694 | 374 | 0.0606 | 0.9839 | 0.9829 | 0.9823 | 0.9826 | 1609 | 1882 | 28 | 29 |
| 0.1355 | 3.4685 | 385 | 0.0676 | 0.9814 | 0.9834 | 0.9762 | 0.9798 | 1599 | 1883 | 27 | 39 |
| 0.1236 | 3.5676 | 396 | 0.0571 | 0.9845 | 0.9919 | 0.9744 | 0.9831 | 1596 | 1897 | 13 | 42 |
| 0.1331 | 3.6667 | 407 | 0.0565 | 0.9851 | 0.9919 | 0.9756 | 0.9837 | 1598 | 1897 | 13 | 40 |
| 0.1495 | 3.7658 | 418 | 0.0656 | 0.9825 | 0.9864 | 0.9756 | 0.9810 | 1598 | 1888 | 22 | 40 |
| 0.1236 | 3.8649 | 429 | 0.0532 | 0.9870 | 0.9956 | 0.9762 | 0.9858 | 1599 | 1903 | 7 | 39 |
| 0.1385 | 3.9640 | 440 | 0.0583 | 0.9842 | 0.9883 | 0.9774 | 0.9828 | 1601 | 1891 | 19 | 37 |
| 0.1266 | 4.0631 | 451 | 0.0523 | 0.9859 | 0.9938 | 0.9756 | 0.9846 | 1598 | 1900 | 10 | 40 |
| 0.1266 | 4.1622 | 462 | 0.0950 | 0.9698 | 0.9587 | 0.9768 | 0.9676 | 1600 | 1841 | 69 | 38 |
| 0.1549 | 4.2613 | 473 | 0.0660 | 0.9797 | 0.9751 | 0.9811 | 0.9781 | 1607 | 1869 | 41 | 31 |
| 0.1208 | 4.3604 | 484 | 0.0493 | 0.9876 | 0.9969 | 0.9762 | 0.9864 | 1599 | 1905 | 5 | 39 |
| 0.1195 | 4.4595 | 495 | 0.0789 | 0.9763 | 0.9709 | 0.9780 | 0.9745 | 1602 | 1862 | 48 | 36 |
| 0.1163 | 4.5586 | 506 | 0.0504 | 0.9873 | 0.9975 | 0.9750 | 0.9861 | 1597 | 1906 | 4 | 41 |
| 0.1489 | 4.6577 | 517 | 0.0565 | 0.9834 | 0.9852 | 0.9786 | 0.9819 | 1603 | 1886 | 24 | 35 |
| 0.1176 | 4.7568 | 528 | 0.0534 | 0.9848 | 0.9901 | 0.9768 | 0.9834 | 1600 | 1894 | 16 | 38 |
| 0.1236 | 4.8559 | 539 | 0.0514 | 0.9859 | 0.9907 | 0.9786 | 0.9846 | 1603 | 1895 | 15 | 35 |
| 0.1268 | 4.9550 | 550 | 0.0460 | 0.9868 | 0.9944 | 0.9768 | 0.9855 | 1600 | 1901 | 9 | 38 |

### Framework versions

- Transformers 5.2.0
- Pytorch 2.9.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.2