Contents of Volume 29 (2019)

2/2019

  • [1] Kratochvíl R., Jánešová M. (CZ)
    Use of clustering for creating economic-mathematical model of a web portal, pp. 61-70

      Full text     DOI: http://dx.doi.org/10.14311/NNW.2019.29.005

    Abstract: This article describes a mathematical economic model of a communication web portal. To create the model, we use cluster analysis, one of the areas of artificial intelligence. Based on real data obtained from the operation of the communication web portal and the subsequent identification of individual data clusters, a model is created that mathematically describes the dependence of economic variables (income from sales of services and selling price of services) on other variables (time of sales of services and field of offered services). Using this analysis, the model groups all data into parameterized sets with given properties. The main purpose of creating the model is a suitable classification of the data; consequently, it is possible to streamline the sale of the services and maximize the profits of web portals offering this type of service.
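
    The clustering step described in this abstract can be illustrated with a minimal k-means sketch. The paper does not state which clustering algorithm or data layout is used; the records and the (hour of sale, selling price) encoding below are hypothetical.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each point to the nearest centroid,
    then move each centroid to the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # index of the centroid with the smallest squared distance to p
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        for i, c in enumerate(clusters):
            if c:  # keep the old centroid if the cluster went empty
                centroids[i] = tuple(sum(xs) / len(c) for xs in zip(*c))
    return centroids, clusters

# Hypothetical portal records: (hour of sale, selling price)
sales = [(9, 10.0), (10, 11.0), (9, 9.5), (20, 30.0), (21, 31.0), (20, 29.5)]
centroids, clusters = kmeans(sales, k=2)
```

    Each resulting cluster is a parameterized set (here: cheap morning sales vs. expensive evening sales) whose centroid summarizes the economic variables.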

  • [2] Ling Y., Chai C., Hou W., Hei D., Qing S., Jia W. (China)
    A new method for nuclear accident source term inversion based on GA-BPNN algorithm, pp. 71-82

      Full text     DOI: http://dx.doi.org/10.14311/NNW.2019.29.006

    Abstract: Rapid and accurate prediction and evaluation of accident consequences can provide a scientific basis for decision-making on nuclear emergency measures. Source term estimation under reactor accident conditions is an important part of nuclear accident consequence evaluation. In order to accurately estimate the radioactive source terms released from nuclear power plants to the environment, an inversion model of accident source terms based on the BP neural network algorithm (BPNN) was constructed. To overcome the tendency of BPNN to fall into local minima during training, a genetic algorithm (GA) was used to optimize the weights and thresholds of the BPNN. Referring to the release rates of the radioactive source term from the Fukushima nuclear accident, the release rates of 131I and 137Cs diffused into the environment under a stable atmosphere were taken as the two target outputs of the GA-BPNN, and the meteorological data for one hour at fixed monitoring points were taken as the inputs. The simulation results showed that, for the release rates of 131I and 137Cs, the mean relative errors of the training and testing sample sets were both below 2%, which indicates that the GA-BPNN model not only remedies the shortcoming of BPNN, but also increases the speed and accuracy of source term inversion.
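
    The GA-BPNN idea, a genetic algorithm searching over network weights rather than relying on gradient descent alone, can be sketched as follows. This is a toy illustration, not the authors' model: the 1-2-1 network size, GA parameters and the sine-shaped target function are all assumptions.

```python
import math
import random

rng = random.Random(1)

def net(w, x):
    """Tiny 1-2-1 network; w holds its 7 parameters."""
    h1 = math.tanh(w[0] * x + w[1])
    h2 = math.tanh(w[2] * x + w[3])
    return w[4] * h1 + w[5] * h2 + w[6]

# Toy regression target standing in for the release-rate mapping
data = [(x / 10, math.sin(x / 10)) for x in range(-30, 31)]

def mse(w):
    return sum((net(w, x) - y) ** 2 for x, y in data) / len(data)

def evolve(pop_size=40, gens=60):
    """GA over weight vectors: elitist selection, one-point crossover,
    Gaussian mutation -- the search that replaces/augments backprop."""
    pop = [[rng.uniform(-2, 2) for _ in range(7)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=mse)               # fitness = low MSE
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(7)
            child = a[:cut] + b[cut:]   # one-point crossover
            if rng.random() < 0.3:      # occasional mutation
                child[rng.randrange(7)] += rng.gauss(0, 0.3)
            children.append(child)
        pop = elite + children
    return min(pop, key=mse)

best = evolve()
```

    In the paper's setup, the GA would only provide good initial weights and thresholds that a subsequent BP training phase refines further.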

  • [3] Cheng Y., Ye Z., Wang M., Zhang Q. (China)
    Document classification based on convolutional neural network and hierarchical attention network, pp. 83-98

      Full text     DOI: http://dx.doi.org/10.14311/NNW.2019.29.007

    Abstract: Numerous studies have demonstrated that neural network models can achieve satisfactory performance in various natural language processing (NLP) tasks. In recent years, document classification is one of the NLP tasks that has gained considerable attention from researchers. For NLP tasks, convolutional neural networks (CNN), recurrent neural networks (RNN) and attention mechanisms can be used. In this work, it is assumed that a document can be divided into two levels, the word level and the sentence level. In this paper, an effective and novel model called C-HAN (Convolutional Neural Network-based and Hierarchical Attention Network with RNN as basic units-based model) is proposed for document classification by combining the advantages of CNN, RNN and the attention model. The CNN is used to extract the abstract relations between different words, which are then fed into an attention-based bidirectional long short-term memory network (Bi-LSTM) to obtain a high-level abstract representation of each sentence. The representation of a document, which consists of sentences, is obtained by using another attention-based Bi-LSTM. Lastly, the classification ability of the proposed C-HAN model is evaluated on two datasets. The experimental results demonstrate that the C-HAN model outperforms previous deep learning methods and achieves state-of-the-art performance.
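
    The attention pooling used at both levels of such hierarchical models can be sketched in plain Python. The hidden states and context vector below are made up for illustration; in C-HAN they would be learned jointly with the Bi-LSTM.

```python
import math

def softmax(scores):
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(hidden_states, context):
    """Score each hidden state against a context vector, normalize the
    scores with softmax, and return the weighted sum -- the attention
    step applied at both the word level and the sentence level."""
    scores = [sum(h_i * c_i for h_i, c_i in zip(h, context))
              for h in hidden_states]
    weights = softmax(scores)
    pooled = [sum(w * h[d] for w, h in zip(weights, hidden_states))
              for d in range(len(hidden_states[0]))]
    return pooled, weights

# Hypothetical Bi-LSTM outputs for a 3-word sentence (dimension 2)
H = [[1.0, 0.0], [0.0, 1.0], [4.0, 4.0]]
ctx = [1.0, 1.0]
sent_vec, weights = attention_pool(H, ctx)
```

    The third word scores far higher against the context vector, so it dominates the pooled sentence representation, which is exactly how attention lets informative words (or sentences) carry more weight.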

  • [4] Ata A., Khan M.A., Abbas S., Ahmad G., Fatima A. (Pakistan)
    Modelling smart road traffic congestion control system using machine learning techniques, pp. 99-110

      Full text     DOI: http://dx.doi.org/10.14311/NNW.2019.29.008

    Abstract: The dramatic growth of population in cities requires traffic systems to be designed efficiently and sustainably, taking full advantage of modern-day technology. Dynamic traffic flow is a significant issue which brings about blockages of traffic movement. To tackle this issue, this paper aims to provide a mechanism to predict traffic congestion with the help of Artificial Neural Networks (ANN), which shall control or minimize the blockage and result in the smoothening of road traffic. The proposed Modelling Smart Road Traffic Congestion Control using Artificial Back Propagation Neural Networks (MSR2C-ABPNN) for road traffic increases transparency, availability and efficiency in services offered to the citizens. In this paper, the prediction of congestion is operationalized by using the backpropagation algorithm to train the neural network. The proposed system aims to provide a solution that will increase the comfort level of travellers, allowing them to make intelligent and better transportation decisions, and the neural network is a plausible approach to assess traffic situations. The proposed MSR2C-ABPNN with time series gives attractive results concerning MSE as compared to the fitting approach.
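
    The core of such a system, a small network trained by backpropagation to predict the next value of a traffic time series from recent values, can be sketched as follows. The synthetic sine-shaped series, network size and learning rate are assumptions for illustration, not the paper's configuration.

```python
import math
import random

rng = random.Random(0)
# Toy periodic series standing in for hourly traffic volume
series = [math.sin(t / 4) + 1.5 for t in range(100)]
# Predict s[t] from the two previous values (one-step-ahead)
samples = [((series[t - 2], series[t - 1]), series[t])
           for t in range(2, len(series))]

H = 4                                    # hidden layer size
W1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [rng.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sig(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(H)]
    return h, sum(W2[j] * h[j] for j in range(H)) + b2

lr = 0.05
for _ in range(500):                     # plain stochastic backpropagation
    for x, y in samples:
        h, out = forward(x)
        err = out - y                    # dLoss/dout for 0.5 * err**2
        for j in range(H):
            g = err * W2[j] * h[j] * (1 - h[j])   # chain rule via sigmoid
            W2[j] -= lr * err * h[j]
            W1[j][0] -= lr * g * x[0]
            W1[j][1] -= lr * g * x[1]
            b1[j] -= lr * g
        b2 -= lr * err

mse = sum((forward(x)[1] - y) ** 2 for x, y in samples) / len(samples)
```

    After training, the network's MSE is far below the variance of the series, which is the kind of time-series fit the abstract reports against the fitting approach.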


1/2019

  • [1] Vaitová M., Štemberk P., Rosseel T.M. (CZ)
    Fuzzy logic model of irradiated aggregates, pp. 1-18

      Full text     DOI: http://dx.doi.org/10.14311/NNW.2019.29.001

    Abstract: The worldwide need for nuclear power plant (NPP) lifetime extension to meet future national energy requirements while reducing greenhouse gases raises the question of the condition of concrete structures exposed to ionizing radiation. Although research into the effects of radiation has a long history, the phenomenon of deterioration of concrete due to irradiation is not yet completely understood; the main assumed degradation mode is radiation-induced volumetric expansion of aggregates. There are experimental data on irradiated concrete obtained over decades under different conditions; however, the collection of data exhibits considerable scatter. Fuzzy logic modeling offers an effective tool that can interconnect various data sets obtained by different teams of experts under different conditions. The main goal of this work is to utilize available data on irradiated concrete components, such as minerals and aggregates, that expand upon irradiation. Furthermore, the radiation-induced volumetric expansion of aggregate gives an estimate of the change in its mechanical properties after years of reactor operation. The mechanical properties of irradiated aggregate can then be used for modeling irradiated concrete in the actual NPP structure based on the composition of the concrete, the average temperature on the surface of the biological shield structure, and the neutron dose received by the biological shield.
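
    A fuzzy model of this kind can be illustrated with a minimal sketch: triangular membership functions over neutron dose and a weighted-average (Sugeno-style) defuzzification. The dose ranges and expansion values below are invented for illustration and are not the paper's calibrated rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def expansion_estimate(dose):
    """Hypothetical rule base: LOW/MEDIUM/HIGH neutron dose mapped to
    representative volumetric expansions (percent), combined by a
    membership-weighted average."""
    rules = [
        (tri(dose, -1.0, 0.0, 1.0), 0.1),   # LOW dose    -> ~0.1 % expansion
        (tri(dose,  0.0, 1.0, 2.0), 1.0),   # MEDIUM dose -> ~1.0 %
        (tri(dose,  1.0, 2.0, 3.0), 3.0),   # HIGH dose   -> ~3.0 %
    ]
    num = sum(mu * out for mu, out in rules)
    den = sum(mu for mu, _ in rules)
    return num / den if den else 0.0
```

    Because a given dose partially activates several overlapping rules, the model interpolates smoothly between scattered data points, which is what makes fuzzy logic suitable for interconnecting data sets measured under different conditions.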

  • [2] Snor J., Kukal J., Van Tran Q. (CZ)
    SOM in Hilbert space, pp. 19-31

      Full text     DOI: http://dx.doi.org/10.14311/NNW.2019.29.002

    Abstract: Self-organization can be performed in a Euclidean space, as usually defined, or in any metric space, which is a generalization of the former. Both approaches have advantages and disadvantages. A novel method of batch SOM learning is designed to benefit from the properties of the Hilbert space. This method is able to operate with finite- or infinite-dimensional patterns from a vector space using only their scalar product. The paper focuses on the formulation of the objective function and an algorithm for its local minimization in a discrete space of partitions. The general methodology is demonstrated on pattern sets from a space of functions.
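
    The key property exploited here is that squared distances in a Hilbert space can be evaluated from scalar products alone, so a prototype kept as a convex combination of patterns never needs explicit coordinates. A minimal sketch (finite-dimensional vectors for concreteness; the same identity works for functions with an inner product):

```python
def dot(u, v):
    """Scalar product -- the only operation the method needs."""
    return sum(a * b for a, b in zip(u, v))

def dist2_to_prototype(x, alphas, patterns):
    """Squared distance from pattern x to the implicit prototype
    w = sum_j alphas[j] * patterns[j], using only scalar products:
    ||x - w||^2 = <x,x> - 2 * sum_j a_j <x,x_j> + sum_ij a_i a_j <x_i,x_j>."""
    xx = dot(x, x)
    xw = sum(a * dot(x, p) for a, p in zip(alphas, patterns))
    ww = sum(ai * aj * dot(pi, pj)
             for ai, pi in zip(alphas, patterns)
             for aj, pj in zip(alphas, patterns))
    return xx - 2 * xw + ww

patterns = [[1.0, 0.0], [0.0, 1.0]]
alphas = [0.5, 0.5]            # prototype = midpoint of the two patterns
d2 = dist2_to_prototype(patterns[0], alphas, patterns)
```

    Batch SOM learning then only needs these distances to assign patterns to units and to update the coefficient vectors `alphas`, never the prototypes themselves.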

  • [3] Fu X.Y., Luo H., Zhang G.Y., Zhong S.S. (China)
    A lazy support vector regression model for prediction problems with small sample size, pp. 33-44

      Full text     DOI: http://dx.doi.org/10.14311/NNW.2019.29.003

    Abstract: Prediction problems with a small sample size widely exist in engineering applications. Because lazy prediction algorithms can utilize information about the predicted individual, they can often achieve a better predictive effect. Traditional lazy prediction algorithms generally use sample information directly, and therefore their predictive effect still has room for improvement. In this paper, we combine support vector regression (SVR) with a lazy prediction algorithm and propose a lazy support vector regression (LSVR) model. The insensitive loss function in LSVR depends on the distance between an individual in the training sample set and the predicted individual: the smaller the distance, the smaller the lossless interval of that individual, which means that it has a greater impact on the predicted individual. To solve the LSVR model, a generalized Lagrangian function is introduced to obtain the dual problem of the primal problem, and the solution to the primal problem is obtained by solving the dual problem. Finally, three numerical experiments are conducted to validate the predictive effect of LSVR. The experimental results show that the predictive effect of LSVR is better than those of ε-SVR, a neural network (NN) and a random forest (RF), and it is also better than that of the k-nearest neighbor (k-NN) algorithm when the sample size is not too small and the distance between the predicted individual and the individuals in the training sample set is not too large. Therefore, LSVR not only has the good generalization ability of traditional SVR, but also the good local accuracy of a lazy prediction algorithm.
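
    The distance-dependent insensitive loss can be sketched as follows; the particular shrinking rule `lazy_eps` is a hypothetical choice for illustration, not the formula from the paper.

```python
import math

def lazy_eps(dist, eps_max=0.5, scale=1.0):
    """Hypothetical distance-dependent tube width: a training point close
    to the predicted individual gets a narrow lossless interval (strong
    influence), a distant one gets nearly the full eps_max tolerance."""
    return eps_max * (1.0 - math.exp(-dist / scale))

def eps_insensitive(residual, eps):
    """Standard epsilon-insensitive loss used by SVR: deviations inside
    the tube of half-width eps cost nothing."""
    return max(0.0, abs(residual) - eps)

# A point at distance 0 from the query gets zero tolerance, so the same
# residual that is free for a distant point is penalized for a near one.
near_loss = eps_insensitive(0.3, lazy_eps(0.0))
far_loss = eps_insensitive(0.3, lazy_eps(5.0))
```

    Plugging a per-point epsilon of this kind into the SVR optimization problem is what makes the model "lazy": the loss, and hence the fitted regressor, is rebuilt around each new query point.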

  • [4] Yildirim O., Baloglu U.B. (Turkey, UK)
    RegP: A new pooling algorithm for deep convolutional neural networks, pp. 45-60

      Full text     DOI: http://dx.doi.org/10.14311/NNW.2019.29.004

    Abstract: In this paper, we propose a new pooling method for deep convolutional neural networks. Previously introduced pooling methods either have very simple assumptions or they depend on stochastic events. Different from those methods, RegP pooling intensely investigates the input data. The main idea of this approach is finding the most distinguishing parts in regions of the input by investigating neighborhood regions to construct the pooled representation. RegP pooling improves the efficiency of the learning process, which is clearly visible in the experimental results. Further, the proposed pooling method outperformed other widely used hand-crafted pooling methods on several benchmark datasets.
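
    RegP's exact neighbourhood-inspection rule is given in the paper itself; for contrast, the simplest of the hand-crafted baselines it is compared against, 2x2 max pooling with stride 2, looks like this:

```python
def max_pool2x2(feature_map):
    """Standard 2x2 max pooling with stride 2: each output cell keeps
    only the largest activation of its region, ignoring neighbouring
    regions entirely (the simplification RegP aims to improve on)."""
    rows, cols = len(feature_map), len(feature_map[0])
    return [[max(feature_map[r][c], feature_map[r][c + 1],
                 feature_map[r + 1][c], feature_map[r + 1][c + 1])
             for c in range(0, cols - 1, 2)]
            for r in range(0, rows - 1, 2)]

fm = [[1, 2, 5, 1],
      [3, 4, 2, 0],
      [0, 1, 1, 7],
      [2, 2, 3, 4]]
pooled = max_pool2x2(fm)
```

    Each 2x2 region is decided in isolation; RegP instead selects the most distinguishing parts of a region by also investigating its neighbourhood before constructing the pooled representation.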