Contents of Volume 29 (2019)

1/2019 2/2019 3/2019


  • [1] Sharif S.M.A., Mahboob M. (Bangladesh, US)
    Deep HOG: A hybrid model to classify Bangla isolated alpha-numerical symbols, pp. 111-133

    Abstract: Bangla is known to be the second most widely used script in the South Asian region. Despite its wide usage, a complete study covering all available Bangla handwritten image classes is still due. This work proposes a hybrid model to classify all available handwritten image classes and to unify the existing benchmark datasets. The feasibility of different handcrafted features in the hybrid model is also demonstrated. Moreover, the proposed hybrid model obtains a maximum accuracy of 89.91% in the validation phase with a total of 259 Bangla alpha-numerical image classes. With the same number of image classes, the proposed hybrid model shows a testing accuracy of 89.28% on 15,175 testing samples. The comparison results demonstrate that the proposed hybrid-HOG model can outperform the existing state-of-the-art classification models in Bangla handwritten alpha-numerical image classification. The code will be available on

  • [2] Min Xia, Chong Zhang, Yin Wang, Jia Liu, Chunzheng Li (China)
    Memory based decision making: A spiking neural circuit model, pp. 135-149

    Abstract: Conscious decision making is one of the important functions of human behavior. Episodic memory is the source of knowledge for conscious decision making, yet the mechanism by which episodic memory affects conscious decision making remains unclear. To investigate the brain mechanism of conscious decision making, we studied a biologically based network model of spiking neurons for the competition between automatic response and conscious decision making. The proposed model integrates an episodic memory module and a brain decision-making module, and uses the episodic memory output as the top-down input to decision making. During decision making, the network realizes the competition between decision patterns through mutual inhibition and finally reaches a conscious decision. The simulations show that the proposed model can implement multimodal coherent decision making under sequential memory control, and that it can effectively explain the transmission mechanism of conscious decision information.

  • [3] Fister D., Mun J.C., Jagric V., Jagric T. (Slovenia, US)
    Deep learning for stock market trading: a superior trading strategy?, pp. 151-171

    Abstract: Deep-learning initiatives have vastly changed the analysis of data. Complex networks became accessible to anyone in any research area. In this paper we propose a deep-learning long short-term memory (LSTM) network for automated stock trading. A mechanical trading system is used to evaluate its performance. The proposed solution is compared to traditional trading strategies, i.e., passive and rule-based trading strategies, as well as to machine learning classifiers. We find that the deep-learning LSTM network outperformed the other trading strategies for the German blue-chip stock BMW during the 2010–2018 period.

  • [4] Rybicková A., Mocková D., Teichmann D. (CZ)
    Genetic algorithm for the continuous location-routing problem, pp. 173-187

    Abstract: This paper focuses on the continuous location-routing problem, which comprises locating multiple depots within a given region and determining the routes of the vehicles assigned to these depots. The objective is to design the delivery system of depots and routes so that the total cost is minimal. The standard location-routing problem considers a finite number of possible depot locations; the continuous variant allows placement at infinitely many locations within a given region, which makes the problem much more complex. We present a genetic algorithm that tackles both the location and the routing subproblems simultaneously.


  • [1] Kratochvíl R., Jánešová M. (CZ)
    Use of clustering for creating economic-mathematical model of a web portal, pp. 61-70

    Abstract: This article describes a mathematical economic model of a communication web portal. To create the model, we use cluster analysis, one of the areas of artificial intelligence. Based on real data obtained from the operation of the communication web portal and the subsequent identification of the individual data clusters, a model is created that mathematically describes the dependence of economic variables (income from sales of services and selling price of services) on other variables (time of sales of services and field of offered services). Using this analysis, the model clusters all data into parameterized sets with given properties. The main purpose of creating the model is the suitable classification of data; consequently, it is possible to streamline the sale of services and maximize the profits of web portals offering this type of service.

  • [2] Ling Y., Chai C., Hou W., Hei D., Qing S., Jia W. (China)
    A new method for nuclear accident source term inversion based on GA-BPNN algorithm, pp. 71-82

    Abstract: Rapid and accurate prediction and evaluation of accident consequences can provide a scientific basis for decision-making on nuclear emergency measures. Accident source term estimation under reactor accident conditions is an important part of nuclear accident consequence evaluation. In order to accurately estimate the radioactive source terms released from nuclear power plants to the environment, an inversion model of accident source terms based on the BP neural network algorithm (BPNN) was constructed. To overcome the tendency of BPNN to fall into local minima during training, a genetic algorithm (GA) was used to optimize the weights and thresholds of the BPNN. Referring to the release rates of the radioactive source term from the Fukushima nuclear accident, the release rates of 131I and 137Cs diffused into the environment in a stable atmosphere were taken as the two target outputs of the GA-BPNN, and the meteorological data for one hour at fixed monitoring points were taken as the inputs. The simulation results showed that, for the release rates of 131I and 137Cs, the mean relative errors of the training and testing sample sets were both below 2%, which indicates that the GA-BPNN model not only remedies the shortcoming of BPNN, but also increases the speed and accuracy of source term inversion.
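    The core GA-BPNN idea, using a genetic algorithm to optimize weights that gradient-based BP training tends to get stuck on, can be sketched in a few lines. Everything below is an illustrative stand-in, not the authors' model: a toy linear "network" with two weights and a bare-bones GA (tournament selection, blend crossover, annealed Gaussian mutation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the inversion task: fit y = 2x + 1 (hypothetical data).
X = np.linspace(0, 1, 20)
y = 2 * X + 1

def mse(w):
    # Two weights of a linear model play the role of BPNN weights here.
    return np.mean((w[0] * X + w[1] - y) ** 2)

def ga_minimize(fitness, dim=2, pop=30, gens=100, sigma=0.3):
    """Bare-bones GA: tournament selection + blend crossover + mutation."""
    P = rng.normal(0, 1, (pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(p) for p in P])
        # Tournament selection: the fitter of two random individuals survives.
        idx = rng.integers(0, pop, (pop, 2))
        winners = np.where(scores[idx[:, 0]] < scores[idx[:, 1]],
                           idx[:, 0], idx[:, 1])
        parents = P[winners]
        # Blend crossover with a random mate, plus Gaussian mutation.
        mates = parents[rng.permutation(pop)]
        P = (parents + mates) / 2 + rng.normal(0, sigma, (pop, dim))
        sigma *= 0.97  # anneal mutation strength
    return P[np.argmin([fitness(p) for p in P])]

best = ga_minimize(mse)  # converges near the true weights (2, 1)
```

    In the paper's setting, the GA-optimized weights would then seed ordinary BP training rather than replace it.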

  • [3] Cheng Y., Ye Z., Wang M., Zhang Q. (China)
    Document classification based on convolutional neural network and hierarchical attention network, pp. 83-98

    Abstract: Numerous studies have demonstrated that neural network models can achieve satisfactory performance in various natural language processing (NLP) tasks. In recent years, document classification is one of the NLP tasks that has gained considerable attention from researchers. For NLP tasks, convolutional neural networks (CNN), recurrent neural networks (RNN) and attention mechanisms can be used. In this work, it is assumed that a document can be divided into two levels: the word level and the sentence level. An effective and novel model called C-HAN (a Convolutional neural network-based Hierarchical Attention Network with RNNs as basic units) is proposed for document classification by combining the advantages of the CNN, the RNN and the attention model. The CNN is used to extract the abstract relations between different words, which are then fed into an attention-based bidirectional long short-term memory network (Bi-LSTM) to obtain a high-level abstract representation of sentences. The representation of a document, which consists of sentences, is obtained by using another attention-based Bi-LSTM. Lastly, the classification ability of the proposed C-HAN model is evaluated on two datasets. The experimental results demonstrate that C-HAN outperforms previous deep learning methods and achieves state-of-the-art performance.
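    The attention pooling applied at both levels of such hierarchical models reduces a sequence of hidden states to one vector via softmax weights. The sketch below is a generic dot-product version with illustrative shapes and a hypothetical context vector `u`, not C-HAN's exact parametrization.

```python
import numpy as np

def attention_pool(H, u):
    """H: (T, d) hidden states (e.g., from a Bi-LSTM); u: (d,) context vector.
    Returns a softmax-weighted sum of the hidden states."""
    scores = H @ u                   # relevance score of each timestep
    a = np.exp(scores - scores.max())
    a /= a.sum()                     # softmax attention weights
    return a @ H                     # (d,) pooled representation

H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
u = np.array([1.0, 1.0])
s = attention_pool(H, u)  # the third state scores highest and dominates
```

    In C-HAN this pooling runs once over word states to build each sentence vector, and once over sentence states to build the document vector.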

  • [4] Ata A., Khan M.A., Abbas S., Ahmad G., Fatima A. (Pakistan)
    Modelling smart road traffic congestion control system using machine learning techniques, pp. 99-110

    Abstract: The dramatic growth of the population in cities requires traffic systems to be designed efficiently and sustainably by taking full advantage of modern-day technology. Dynamic traffic flow is a significant issue which brings about blockage of traffic movement. To tackle this issue, this paper provides a mechanism to predict traffic congestion with the help of Artificial Neural Networks (ANN), which shall control or minimize the blockage and result in the smoothing of road traffic. The proposed Modelling of Smart Road Traffic Congestion Control using Artificial Back Propagation Neural Networks (MSR2C-ABPNN) increases transparency, availability and efficiency in the services offered to citizens. In this paper, the prediction of congestion is operationalized by using the backpropagation algorithm to train the neural network. The proposed system aims to provide a solution that will increase the comfort level of travellers by enabling intelligent and better transportation decisions, and the neural network is a plausible approach to assessing traffic situations. The proposed MSR2C-ABPNN with time series gives attractive results with respect to MSE as compared to the fitting approach.


  • [1] Vaitová M., Štemberk P., Rosseel T.M. (CZ)
    Fuzzy logic model of irradiated aggregates, pp. 1-18

    Abstract: The worldwide need for nuclear power plant (NPP) lifetime extension to meet future national energy requirements while reducing greenhouse gases raises the question of the condition of concrete structures exposed to ionizing radiation. Although research into the effects of radiation has a long history, the phenomenon of concrete deterioration due to irradiation is not yet completely understood; the main assumed degradation mode is radiation-induced volumetric expansion of aggregates. Experimental data on irradiated concrete have been obtained over decades under different conditions; however, the collected data exhibit considerable scatter. Fuzzy logic modeling offers an effective tool that can interconnect various data sets obtained by different teams of experts under different conditions. The main goal of this work is to utilize the available data on irradiated concrete components, such as minerals and aggregates, that expand upon irradiation. Furthermore, the radiation-induced volumetric expansion of aggregate gives an estimate of the change in its mechanical properties after years of reactor operation. The mechanical properties of irradiated aggregate can then be used for modeling irradiated concrete in an actual NPP structure based on the composition of the concrete, the average temperature on the surface of the biological shield structure, and the neutron dose received by the biological shield.

  • [2] Snor J., Kukal J., Van Tran Q. (CZ)
    SOM in Hilbert space, pp. 19-31

    Abstract: Self-organization can be performed in a Euclidean space, as usually defined, or in any metric space, which is a generalization of the former. Both approaches have advantages and disadvantages. A novel method of batch SOM learning is designed to exploit the properties of the Hilbert space. This method is able to operate with finite- or infinite-dimensional patterns from a vector space using only their scalar product. The paper focuses on the formulation of the objective function and an algorithm for its local minimization in a discrete space of partitions. The general methodology is demonstrated on pattern sets from a space of functions.
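    Operating with only scalar products rests on the standard Hilbert-space identity ||x − w||² = ⟨x, x⟩ − 2⟨x, w⟩ + ⟨w, w⟩, so distances to SOM prototypes never require explicit coordinates. A minimal finite-dimensional check (function names are illustrative, not from the paper):

```python
import numpy as np

def sq_dist_from_inner(x, w):
    """||x - w||^2 expressed purely through scalar products, the identity
    that lets batch SOM run in a general Hilbert space."""
    return x @ x - 2 * (x @ w) + w @ w

x = np.array([1.0, 2.0])
w = np.array([0.0, 1.0])
# Agrees with the direct Euclidean computation:
assert np.isclose(sq_dist_from_inner(x, w), np.sum((x - w) ** 2))
```

    For infinite-dimensional patterns (e.g., functions), the dot products above would be replaced by the inner product of the function space, such as an integral.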

  • [3] Fu X.Y., Luo H., Zhang G.Y., Zhong S.S. (China)
    A lazy support vector regression model for prediction problems with small sample size, pp. 33-44

    Abstract: Prediction problems with a small sample size are widespread in engineering applications. Because lazy prediction algorithms can utilize information about the predicted individual, they can often achieve a better predictive effect. Traditional lazy prediction algorithms generally use sample information directly, so their predictive effect still has room for improvement. In this paper, we combine support vector regression (SVR) with a lazy prediction algorithm and propose a lazy support vector regression (LSVR) model. The insensitive loss function in LSVR depends on the distance between an individual in the training sample set and the predicted individual: the smaller the distance, the smaller the lossless interval of that individual, meaning that it has a greater impact on the predicted individual. To solve the LSVR model, a generalized Lagrangian function is introduced to obtain the dual of the primal problem, and the solution to the primal problem is obtained by solving the dual. Finally, three numerical experiments are conducted to validate the predictive effect of LSVR. The experimental results show that the predictive effect of LSVR is better than those of ε-SVR, a neural network (NN) and a random forest (RF), and it is also better than that of the k-nearest neighbor (k-NN) algorithm when the sample size is not too small and the distance between the predicted individual and the individuals in the training sample set is not too large. Therefore, LSVR combines the good generalization ability of traditional SVR with the good local accuracy of a lazy prediction algorithm.
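    The distance-dependent insensitive interval can be illustrated in a few lines. The linear scaling rule below is a hypothetical stand-in for the exact dependence defined in the paper; it only shows the qualitative behavior (nearer training samples get a tighter lossless interval).

```python
import numpy as np

def lazy_epsilon(x_train, x_query, eps_max=0.5):
    """Per-sample insensitive-loss width for an LSVR-style model:
    shrinks with the distance from each training point to the query,
    so nearby samples constrain the fit more tightly (scaling illustrative)."""
    d = np.linalg.norm(x_train - x_query, axis=1)
    return eps_max * d / (d.max() + 1e-12)

X = np.array([[0.0], [1.0], [2.0]])
eps = lazy_epsilon(X, np.array([0.0]))
# Nearest sample -> smallest epsilon; farthest -> eps_max.
```

    In the full model these per-sample widths replace the single global ε of standard ε-SVR inside the dual optimization.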

  • [4] Yildirim O., Baloglu U.B. (Turkey, UK)
    RegP: A new pooling algorithm for deep convolutional neural networks, pp. 45-60

    Abstract: In this paper, we propose a new pooling method for deep convolutional neural networks. Previously introduced pooling methods either rely on very simple assumptions or depend on stochastic events. Unlike those methods, RegP pooling examines the input data closely: the main idea is to find the most distinguishing parts within regions of the input by investigating neighbouring regions to construct the pooled representation. RegP pooling improves the efficiency of the learning process, which is clearly visible in the experimental results. Furthermore, the proposed pooling method outperformed other widely used hand-crafted pooling methods on several benchmark datasets.