Contents of Volume 33 (2023)
2/2023
- [1] Yumoto M., Hagiwara M. (Japan)
Selective classification considering time series characteristics for spiking neural networks, pp. 49-66
Full text
Abstract: In this paper, we propose new methods for estimating the relative reliability of predictions and rejection methods for selective classification with spiking neural networks (SNNs). We also optimize and improve the efficiency of the RC curve, which represents the relationship between risk and coverage in selective classification. Efficiency here means greater coverage for a given risk and lower risk for a given coverage on the RC curve. We use the model's internal representation when time series data are input to the SNN, rank the resulting predictions, and reject them at an arbitrary rate. We propose multiple methods based on the characteristics of the datasets and the architecture of the models. These methods, such as a simple method with discrete coverage and a method with continuous and flexible coverage, yielded results that exceed the performance of the existing method, softmax response.
https://doi.org/10.14311/NNW.2023.33.004
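A minimal sketch of the risk-coverage (RC) trade-off the abstract refers to, using the generic softmax-response baseline it mentions rather than the authors' SNN-specific reliability estimators; the scores and labels below are invented.

```python
# Risk-coverage (RC) curve for selective classification: reject the least
# confident predictions first and measure the error rate on what remains.
import numpy as np

def rc_curve(confidences, correct):
    """Return (coverage, risk) pairs obtained by accepting the top-k most
    confident predictions for k = 1..n."""
    order = np.argsort(-np.asarray(confidences))   # most confident first
    correct = np.asarray(correct, dtype=float)[order]
    n = len(correct)
    coverages, risks = [], []
    for k in range(1, n + 1):
        accepted = correct[:k]
        coverages.append(k / n)                    # fraction of inputs not rejected
        risks.append(1.0 - accepted.mean())        # error rate on accepted inputs
    return np.array(coverages), np.array(risks)

# Toy usage: confidences could be max-softmax outputs of any classifier.
conf = [0.95, 0.80, 0.60, 0.55, 0.90]
corr = [1, 1, 0, 0, 1]
cov, risk = rc_curve(conf, corr)
print(list(zip(cov.round(2), risk.round(2))))
```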
- [2] Akkur E., Türk F., Erogul O. (Turkey)
Breast cancer classification using a novel hybrid feature selection approach, pp. 67-83
Full text
Abstract: Many women around the world die due to breast cancer. If breast cancer is treated in the early phase, mortality rates may be significantly reduced. Quite a number of approaches have been proposed to help in the early detection of breast cancer. A novel hybrid feature selection model is suggested in this study. This hybrid model aims to build an efficient feature selection method and successfully classify breast lesions. A combination of relief and binary Harris hawk optimization (BHHO) is used for feature selection. Then, k-nearest neighbor (k-NN), support vector machine (SVM), logistic regression (LR) and naive Bayes (NB) methods are used for the classification task. The suggested hybrid model is tested on three different breast cancer datasets: the Wisconsin diagnostic breast cancer dataset (WDBC), the Wisconsin breast cancer dataset (WBCD) and the mammographic breast cancer dataset (MBCD). According to the experimental results, the relief and BHHO hybrid model improves the performance of all classification algorithms on all three datasets. For WDBC, the relief-BHHO-SVM model shows the highest classification rates, with an accuracy of 98.77%, precision of 97.17%, recall of 99.52%, F1-score of 98.33%, specificity of 99.72% and balanced accuracy of 99.62%. For WBCD, the relief-BHHO-SVM model achieves an accuracy of 99.28%, precision of 98.76%, recall of 99.17%, F1-score of 98.96%, specificity of 99.56% and balanced accuracy of 99.36%. For MBCD, the relief-BHHO-SVM model performs best with an accuracy of 97.44%, precision of 97.41%, recall of 98.26%, F1-score of 97.84%, specificity of 97.47% and balanced accuracy of 97.86%. Furthermore, the relief-BHHO-SVM model achieves better results than other known approaches. Compared with recent studies on breast cancer classification, the suggested hybrid method achieves quite good results.
https://doi.org/10.14311/NNW.2023.33.005
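A hedged sketch of the general filter-then-classify pipeline described above, evaluated on WDBC (available as scikit-learn's breast cancer dataset). The relief-BHHO selector is not reproduced; a simple univariate filter (SelectKBest) stands in for it purely for illustration.

```python
# Feature selection followed by SVM classification, cross-validated on WDBC.
from sklearn.datasets import load_breast_cancer            # WDBC
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
pipe = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),   # stand-in for the relief-BHHO selector
    SVC(kernel="rbf"),
)
print("CV accuracy: %.3f" % cross_val_score(pipe, X, y, cv=5).mean())
```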
- [3] Sohaib M., Tehseen S. (Pakistan)
Forgery detection of low quality deepfake videos, pp. 85-99
Full text
Abstract: The rapid growth of online media across social media platforms and the internet has brought many benefits, but it has negative effects as well. Deep learning has many positive applications, such as medicine, animation and cybersecurity. Over the past few years, however, it has also been used for negative purposes such as defamation, blackmailing and creating privacy concerns for the general public. Deepfake is the common term for facial forgery of a person in media such as images or videos. Advances in forgery creation have challenged researchers to develop advanced detection systems capable of detecting facial forgeries. The proposed forgery detection system is based on a CNN-LSTM model in which faces are first extracted from the frames using MTCNN, spatial features are then extracted using a pretrained Xception network, and an LSTM is used for temporal feature extraction. Finally, classification is performed to predict whether the video is real or fake. The system is able to detect low quality videos. It has shown good accuracy in detecting real and fake videos on the Google deepfake AI dataset.
https://doi.org/10.14311/NNW.2023.33.006
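A minimal Keras sketch of the CNN-LSTM idea in the abstract: per-frame spatial features from a pretrained Xception backbone, followed by an LSTM over the frame sequence. Face cropping with MTCNN is assumed to happen upstream; the clip length and all hyperparameters are assumptions, not the authors' settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

FRAMES, H, W = 20, 299, 299                        # assumed clip length / crop size

# Frozen Xception backbone used as a per-frame feature extractor.
backbone = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", pooling="avg", input_shape=(H, W, 3))
backbone.trainable = False

clip = layers.Input(shape=(FRAMES, H, W, 3))       # a sequence of face crops
x = layers.Rescaling(1.0 / 127.5, offset=-1)(clip) # scale pixels to [-1, 1]
feats = layers.TimeDistributed(backbone)(x)        # (FRAMES, 2048) spatial features
x = layers.LSTM(128)(feats)                        # temporal aggregation
out = layers.Dense(1, activation="sigmoid")(x)     # real (0) vs. fake (1)

model = Model(clip, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```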
- [4] Zhang X., Zhao N., Lv Q., Ma Z., Qin Q., Gan W., Bai J., Gan L. (China)
Garbage classification based on a cascade neural network, pp. 101-112
Full text
Abstract: Most existing methods for garbage classification use transfer learning to achieve acceptable performance, and they focus on rather small settings: the dataset is small or the number of categories is few. Moreover, they are hard to deploy on small devices, such as a smart phone or a Raspberry Pi, because of their huge number of parameters, and they have insufficient generalization capability. For these reasons, a promising cascade approach is proposed. It performs better than transfer learning when classifying garbage at a large scale, and it requires fewer parameters and less training time, so it is more suitable for practical applications such as deployment on a small device. Several commonly used convolutional neural network backbones are investigated in this study. Two different tasks are also considered: one where the target domain is the same as the source domain, and one where it differs. Results indicate that, with ResNet101 as the backbone, our algorithm outperforms other existing approaches. The innovation is that, to the best of our knowledge, this study is the first to combine a pre-trained convolutional neural network as a feature extractor with an extreme learning machine to classify garbage. Furthermore, the training time is significantly shorter and the number of trainable parameters significantly smaller.
https://doi.org/10.14311/NNW.2023.33.007
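A sketch of the cascade idea: a frozen, pretrained CNN (e.g. ResNet101) produces feature vectors, and an extreme learning machine (ELM) with a random hidden layer and a closed-form output layer does the classification. The feature matrix below is random data standing in for real backbone features, and the dimensions are assumptions.

```python
# Basic ELM classifier on (stand-in) CNN features.
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, y_onehot, hidden=512, reg=1e-3):
    """Random input weights + ridge-regression output weights (basic ELM)."""
    d = X.shape[1]
    W = rng.normal(size=(d, hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                                   # hidden activations
    beta = np.linalg.solve(H.T @ H + reg * np.eye(hidden),   # closed-form output fit
                           H.T @ y_onehot)
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Stand-in for backbone features of garbage images: 1000 samples, 2048-dim, 6 classes.
X = rng.normal(size=(1000, 2048))
y = rng.integers(0, 6, size=1000)
W, b, beta = train_elm(X, np.eye(6)[y])
print("train accuracy:", (predict_elm(X, W, b, beta) == y).mean())
```

Because the output weights are fitted in closed form, only the small hidden layer is "trained", which is why this kind of cascade needs far less training time than fine-tuning the whole backbone.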
1/2023
- [1] Guo H., Tao X., Li X. (China)
Water quality image classification for aquaculture using deep transfer learning, pp. 1-18
Full text
Abstract: With the development of high-density, intensive aquaculture production and the increasing frequency of water quality changes in aquaculture water bodies, the number of pollution sources in aquaculture ponds is also increasing. As the water quality of aquaculture ponds is a crucial factor affecting the production and quality of pond aquaculture products, water quality assessment and management are more important than in the past. Water quality analysis is a crucial way to evaluate the water quality of fish farming water bodies. Traditional water quality analysis is usually carried out by practitioners through experience and visual observation, which introduces an observational bias caused by subjectivity. A deep transfer learning-based water quality monitoring system is easier to deploy and can avoid unnecessary duplication of effort, saving costs for the aquaculture industry. This paper uses a transfer learning model to analyze water color images automatically. In total, 5203 water quality images are collected to create a water quality image dataset containing five classes based on water color. Based on this dataset, a deep transfer learning-based classification model is proposed to identify water quality images. The experimental results show that the deep learning model based on transfer learning achieves 99% accuracy and has excellent performance.
https://doi.org/10.14311/NNW.2023.33.001
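A minimal transfer-learning sketch for the five-class water-color task described above: a frozen ImageNet backbone with a new classification head. The backbone choice (MobileNetV2), input size and hyperparameters are assumptions for illustration; the paper's actual model is not specified here.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

base = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", pooling="avg", input_shape=(224, 224, 3))
base.trainable = False                               # transfer learning: freeze backbone

inp = layers.Input(shape=(224, 224, 3))
x = layers.Rescaling(1.0 / 127.5, offset=-1)(inp)    # scale pixels to [-1, 1]
x = base(x, training=False)
out = layers.Dense(5, activation="softmax")(x)       # five water-color classes

model = Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```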
- [2] Hlubuček A. (CZ)
Integration of railway infrastructure topological description elements from the microL2 to the macroN0,L0 level of detail, pp. 19-34
Full text
Abstract: The paper presents an integration method intended to be applied to the structure of a railway infrastructure topological description system expressed at the level of detail designated as microL2, in order to transform it into the structure expressed at the level of detail designated as macroN0,L0. The microL2 level is the level of detail at which individual tracks in the structural sense and turnout branches are recognized, while the macroN0,L0 level is the level of individual operational points and line sections. The proposed integration algorithm takes into account both the parameter values of the individual elements appearing at the reference level of detail microL2 and their topological interconnectedness. Based on these aspects, these elements are integrated into the elements of the derived level of detail macroN0,L0, which can be described by the transformed parameter values. The relations between the respective elements are also transformed accordingly. The terminology and principles of the UIC RailTopoModel are used in describing the transformation algorithm.
https://doi.org/10.14311/NNW.2023.33.002
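A toy illustration, not the paper's algorithm, of the kind of integration described above: micro-level elements are grouped under the macro-level element they belong to, their parameter values (here, lengths) are aggregated, and relations between micro elements in different groups become relations between the corresponding macro elements. All element names and values are invented.

```python
from collections import defaultdict

# micro element -> (owning macro element, length in metres)   [invented data]
micro = {
    "track_1": ("station_A", 350.0),
    "track_2": ("station_A", 410.0),
    "track_3": ("line_AB", 5200.0),
    "track_4": ("station_B", 280.0),
}
# topological links between micro elements                    [invented data]
micro_links = [("track_1", "track_2"), ("track_2", "track_3"), ("track_3", "track_4")]

macro_length = defaultdict(float)
for elem, (owner, length) in micro.items():
    macro_length[owner] += length                # aggregate parameter values

macro_links = {
    (micro[a][0], micro[b][0])
    for a, b in micro_links
    if micro[a][0] != micro[b][0]                # keep only cross-group relations
}

print(dict(macro_length))                        # derived macro-level parameters
print(macro_links)                               # derived macro-level relations
```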
- [3] Xu Z.Z., Zhang W.J. (China)
3D CNN hand pose estimation with end-to-end hierarchical model and physical constraints from depth images, pp. 35-42
Full text
Abstract: Previous studies mainly treat the depth image as a flat image, so depth data tends to be mapped to gray values during convolution and feature extraction. To address this issue, an approach to 3D CNN hand pose estimation with an end-to-end hierarchical model and physical constraints is proposed. After reconstructing the 3D spatial structure of the hand from the depth image, the 3D model is converted into a voxel grid for further hand pose estimation by a 3D CNN. The 3D CNN method improves on this by embedding an end-to-end hierarchical model and a constraints algorithm into the network, resulting in fast training convergence and avoiding unrealistic hand poses. According to the experimental results, it reaches a mean accuracy of 87.98% and a mean absolute error (MAE) of 8.82 mm over all 21 joints within 24 ms of inference time, consistently outperforming several well-known gesture recognition algorithms.
https://doi.org/10.14311/NNW.2023.33.003
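A sketch of the voxelization step mentioned in the abstract: back-project a depth image into 3D points and bin them into an occupancy grid that a 3D CNN can consume. The camera intrinsics, grid size and bounds below are made-up values for illustration, not those used in the paper.

```python
import numpy as np

def depth_to_voxels(depth, fx=475.0, fy=475.0, cx=160.0, cy=120.0,
                    grid=32, bounds=((-150, 150), (-150, 150), (200, 500))):
    """depth: (H, W) array in millimetres; returns a (grid, grid, grid) occupancy grid."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    valid = z > 0
    x = (u - cx) * z / fx                        # back-projection to camera space
    y = (v - cy) * z / fy
    pts = np.stack([x[valid], y[valid], z[valid]], axis=1)

    vox = np.zeros((grid, grid, grid), dtype=np.float32)
    for dim, (lo, hi) in enumerate(bounds):      # normalise each axis to [0, grid)
        pts[:, dim] = (pts[:, dim] - lo) / (hi - lo) * grid
    idx = np.clip(pts.astype(int), 0, grid - 1)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0   # mark occupied cells
    return vox

# Toy usage with a synthetic, constant depth map (values in mm).
fake_depth = np.full((240, 320), 350.0)
print(depth_to_voxels(fake_depth).sum())
```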