Contents of Volume 29 (2019)

1/2019

  • [1] Vaitová M., Štemberk P., Rosseel T.M. (CZ)
    Fuzzy logic model of irradiated aggregates, pp. 1-18

      Full text     DOI: http://dx.doi.org/10.14311/NNW.2019.29.001

    Abstract: The worldwide need for nuclear power plant (NPP) lifetime extension to meet future national energy requirements while reducing greenhouse gases raises the question of the condition of concrete structures exposed to ionizing radiation. Although research into the effects of radiation has a long history, the deterioration of concrete due to irradiation is not yet completely understood; the main assumed degradation mode is radiation-induced volumetric expansion of the aggregates. Experimental data on irradiated concrete have been collected over decades under different conditions; however, the data exhibit considerable scatter. Fuzzy logic modeling offers an effective tool that can interconnect various data sets obtained by different teams of experts under different conditions. The main goal of this work is to utilize the available data on irradiated concrete components, such as minerals and aggregates, that expand upon irradiation. Furthermore, the radiation-induced volumetric expansion of an aggregate gives an estimate of the change in its mechanical properties after years of reactor operation. The mechanical properties of irradiated aggregate can then be used for modeling irradiated concrete in the actual NPP structure, based on the composition of the concrete, the average temperature on the surface of the biological shield structure, and the neutron dose received by the biological shield.

  • [2] Snor J., Kukal J., Van Tran Q. (CZ)
    SOM in Hilbert space, pp. 19-31

      Full text     DOI: http://dx.doi.org/10.14311/NNW.2019.29.002

    Abstract: Self-organization can be performed in a Euclidean space, as usually defined, or in any metric space, which is a generalization of the former. Both approaches have advantages and disadvantages. A novel method of batch SOM learning is designed to exploit the properties of a Hilbert space. The method can operate with finite- or infinite-dimensional patterns from a vector space using only their scalar products. The paper focuses on the formulation of the objective function and an algorithm for its local minimization in a discrete space of partitions. The general methodology is demonstrated on pattern sets from a space of functions.
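
    A minimal sketch (not taken from the paper) of the scalar-product idea: in a batch SOM, the assignment step needs only the squared distances between patterns and prototypes, and if the prototypes are kept as weighted combinations of the patterns, those distances can be computed from a Gram matrix alone. The coefficient matrix A below is an assumption made for illustration.

      # Assumption: prototypes w_k = sum_i A[k, i] * x_i, so only the Gram
      # matrix G[i, j] = <x_i, x_j> of the patterns is ever needed.
      import numpy as np

      def som_distances_from_gram(G, A):
          """Squared distances ||x_j - w_k||^2 from scalar products only.

          G : (n, n) Gram matrix of the patterns
          A : (m, n) prototype coefficients (rows typically sum to 1)
          Returns an (m, n) matrix of squared distances.
          """
          xx = np.diag(G)                          # <x_j, x_j>
          wx = A @ G                               # <w_k, x_j>
          ww = np.einsum("ki,ij,kj->k", A, G, A)   # <w_k, w_k>
          return xx[None, :] - 2.0 * wx + ww[:, None]

    In such a formulation, each pattern is assigned to its nearest prototype (weighted by the SOM neighborhood function) and A is re-estimated from the assignments, so explicit coordinates of the patterns are never required; this is what makes infinite-dimensional (e.g. functional) patterns tractable.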

  • [3] Fu X.Y., Luo H., Zhang G.Y., Zhong S.S. (China)
    A lazy support vector regression model for prediction problems with small sample size, pp. 33-44

      Full text     DOI: http://dx.doi.org/10.14311/NNW.2019.29.003

    Abstract: Prediction problems with small sample sizes widely exist in engineering applications. Because lazy prediction algorithms can utilize information about the predicted individual, they can often achieve a better predictive effect. Traditional lazy prediction algorithms generally use the sample information directly, so their predictive effect still has room for improvement. In this paper, we combine support vector regression (SVR) with a lazy prediction algorithm and propose a lazy support vector regression (LSVR) model. The insensitive loss function in LSVR depends on the distance between an individual in the training sample set and the predicted individual: the smaller the distance, the smaller the lossless interval of that training individual, which means that it has a greater impact on the predicted individual. To solve the LSVR model, a generalized Lagrangian function is introduced to obtain the dual of the primal problem, and the solution to the primal problem is obtained by solving the dual problem. Finally, three numerical experiments are conducted to validate the predictive effect of LSVR. The experimental results show that the predictive effect of LSVR is better than those of ε-SVR, a neural network (NN) and a random forest (RF), and also better than that of the k-nearest neighbor (k-NN) algorithm when the sample size is not too small and the distance between the predicted individual and the individuals in the training sample set is not too large. Therefore, LSVR has both the good generalization ability of traditional SVR and the good local accuracy of a lazy prediction algorithm.
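
    A minimal sketch (illustrative only; the abstract states only that the insensitive interval depends on the distance) of a per-sample, distance-dependent ε for the ε-insensitive loss. The particular scaling eps0 * d_i / max(d) is an assumption made for the example.

      import numpy as np

      def distance_dependent_epsilon(X_train, x_query, eps0=0.1):
          """Per-sample tube widths: closer training samples get a smaller eps."""
          d = np.linalg.norm(X_train - x_query, axis=1)
          return eps0 * d / (d.max() + 1e-12)        # assumed scaling rule

      def eps_insensitive_loss(residuals, eps):
          """Loss is zero inside the per-sample tube, linear outside it."""
          return np.maximum(np.abs(residuals) - eps, 0.0)

    Plugging such per-sample widths into the SVR training problem makes training samples near the query constrain the regressor more tightly, which matches the lazy-learning intent described in the abstract.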

  • [4] Yildirim O., Baloglu U.B. (Turkey, UK)
    RegP: A new pooling algorithm for deep convolutional neural networks, pp. 45-60

      Full text     DOI: http://dx.doi.org/10.14311/NNW.2019.29.004

    Abstract: In this paper, we propose a new pooling method for deep convolutional neural networks. Previously introduced pooling methods either rely on very simple assumptions or depend on stochastic events. Unlike those methods, RegP pooling examines the input data intensively. The main idea of this approach is to find the most distinguishing parts within regions of the input by investigating neighboring regions and to construct the pooled representation from them. RegP pooling improves the efficiency of the learning process, which is clearly visible in the experimental results. Furthermore, the proposed pooling method outperformed other widely used hand-crafted pooling methods on several benchmark datasets.
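
    A hypothetical illustration (not the published RegP algorithm) of the general idea of keeping the "most distinguishing" activation in each pooling window, here scored by deviation from the window mean; both the scoring rule and the non-overlapping 2x2 windows are assumptions made for the sketch.

      import numpy as np

      def distinctive_pool2d(x, k=2):
          """Pool an (H, W) feature map with non-overlapping k x k windows,
          keeping the value that deviates most from its window mean."""
          H, W = x.shape
          out = np.empty((H // k, W // k))
          for i in range(H // k):
              for j in range(W // k):
                  win = x[i * k:(i + 1) * k, j * k:(j + 1) * k]
                  scores = np.abs(win - win.mean())   # "distinctiveness" score
                  out[i, j] = win.flat[np.argmax(scores)]
          return out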