Contents of Volume 33 (2023)

1/2023

  • [1] Guo H., Tao X., Li X. (China)
    Water quality image classification for aquaculture using deep transfer learning, pp. 1-18

      Full text         https://doi.org/10.14311/NNW.2023.33.001

    Abstract: With the development of high-density, intensive aquaculture production and the increasing frequency of water quality changes in aquaculture water bodies, the number of pollution sources in aquaculture ponds is also increasing. As the water quality of aquaculture ponds is a crucial factor affecting the production and quality of pond aquaculture products, water quality assessment and management are more important than in the past. Water quality analysis is a crucial way to evaluate the water quality of fish farming water bodies. Traditionally, water quality has been assessed by practitioners through experience and visual observation, which introduces an observational bias caused by subjectivity. A deep transfer learning-based water quality monitoring system is easier to deploy and can avoid unnecessary duplication of effort, saving costs for the aquaculture industry. This paper uses a transfer learning model to analyze water color images automatically. A total of 5203 water quality images were collected to create a water quality image dataset containing five classes based on water color. Based on this dataset, a deep transfer learning-based classification model is proposed to identify water quality images. The experimental results show that the deep learning model based on transfer learning achieves 99% accuracy and excellent performance.

  • [2] Hlubuček A. (CZ)
    Integration of railway infrastructure topological description elements from the microL2 to the macroN0,L0 level of detail, pp. 19-34

      Full text         https://doi.org/10.14311/NNW.2023.33.002

    Abstract: The paper presents a method of integration that is applied to the structure of the railway infrastructure topological description system expressed at the level of detail designated as microL2 in order to transform it into the structure expressed at the level of detail designated as macroN0,L0. The microL2 level is the level of detail at which individual tracks in the structural sense and turnout branches are recognized, while the macroN0,L0 level is the level of individual operational points and line sections. The proposed integration algorithm takes into account both the parameter values of the individual elements appearing at the reference level of detail microL2 and their topological interconnectedness. Based on these aspects, these elements are integrated into the elements of the derived level of detail macroN0,L0, which can be described by the transformed parameter values. The relations between the respective elements are also transformed accordingly. The description of the transformation algorithm uses the terminology and principles of the UIC RailTopoModel.

  • [3] Xu Z.Z., Zhang W.J. (China)
    3D CNN hand pose estimation with end-to-end hierarchical model and physical constraints from depth images, pp. 35-42

      Full text         https://doi.org/10.14311/NNW.2023.33.003

    Abstract: Previous studies have mainly treated the depth image as a flat image, so that depth data tends to be mapped to gray values during convolution and feature extraction. To address this issue, an approach to 3D CNN hand pose estimation with an end-to-end hierarchical model and physical constraints is proposed. After reconstruction of the 3D spatial structure of the hand from the depth image, the 3D model is converted into a voxel grid for further hand pose estimation by a 3D CNN. The 3D CNN method improves on prior work by embedding the end-to-end hierarchical model and the constraints algorithm into the networks, enabling training at a fast convergence rate and avoiding unrealistic hand poses. According to the experimental results, the method reaches a mean accuracy of 87.98% and a mean absolute error (MAE) of 8.82 mm over all 21 joints within 24 ms of inference time, consistently outperforming several well-known gesture recognition algorithms.