Research
February 19, 2026
Paper accepted in Neurocomputing (Elsevier, SCIE Q1), 2026
A CVISLab paper has been accepted in Neurocomputing (Elsevier): Towards enhancing learning on imbalanced data: A novel adaptive weighting strategy
Congratulations to our lab members on two papers accepted in Neurocomputing (Elsevier, SCIE Q1) in 2026: (1) "Towards enhancing learning on imbalanced data: A novel adaptive weighting strategy" and (2) "A Lightweight Multi-Scale Attention Model for Small Object Detection in UAV Imagery"...
Representative models such as Convolutional Neural Networks (CNNs) and Transformers have played a key role in the recent progress of deep learning and have shown high effectiveness in image recognition tasks. However, data imbalance remains a considerable challenge, as models tend to focus excessively on easy samples while overlooking hard or infrequent ones. To address this issue, the Focal Loss function was introduced to down-weight easy samples and strengthen the learning of harder ones through predefined parameters. Nevertheless, keeping these parameters fixed throughout training may not be optimal, as their suitable values can vary with the training stage and the characteristics of the dataset. To overcome this limitation, this paper proposes an adaptive weighting strategy in which the parameters are dynamically adjusted across different training stages. Experimental results show that the proposed method not only improves the detection of hard samples but also enhances the overall performance of the model on imbalanced datasets. These results suggest that dynamically adjusting loss parameters is an effective approach for addressing data imbalance in deep learning models.
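For context, Focal Loss (Lin et al., 2017) reshapes cross-entropy with a focusing parameter gamma so that well-classified samples contribute less to the loss. The sketch below is a minimal illustration of that idea paired with a hypothetical stage-dependent schedule for gamma; the paper's actual adaptive weighting rule is not described in this announcement, so the `scheduled_gamma` function, its linear form, and its value range are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    # Binary focal loss, FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t)
    # (Lin et al., 2017). Easy samples (p_t near 1) are down-weighted.
    # `targets` is a float tensor of 0/1 labels with the same shape as `logits`.
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)              # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)  # class-balance weight
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()

def scheduled_gamma(epoch, num_epochs, gamma_start=1.0, gamma_end=3.0):
    # Hypothetical schedule (NOT the paper's method): grow gamma linearly so
    # hard samples receive progressively more weight as training matures.
    t = epoch / max(num_epochs - 1, 1)
    return gamma_start + t * (gamma_end - gamma_start)

# Illustrative use inside a training loop:
# for epoch in range(num_epochs):
#     gamma = scheduled_gamma(epoch, num_epochs)
#     loss = focal_loss(model(x), y, gamma=gamma)
```

Any stage-dependent signal (for example, a running average of p_t) could replace the epoch index in such a schedule; the point made in the abstract is simply that the loss parameters need not stay fixed throughout training.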