The main purpose of using the autoencoder is to obtain new data by eliminating the noise in the data. In the encoder phase, Eq. (5) is applied to reduce the dimension of the data from the input layer and feed them to the hidden layer. In the decoder phase, the reduced data in the hidden layer are decoded by Eq. (6) to obtain data closer to the input data. After these two phases, the backpropagation algorithm given in Eq. (7) is used in order to make the new values closer to the data in the input layer. The purpose of this process, which is performed via the hidden layer, is to reveal the important data in the dataset (Baştürk et al., 2017; Bengio, 2009; Chen, Gou, Wang, Li, & Jiao, 2018; Kaynar, Yüksek, et al., 2017; Kaynar, Aydın, & Görmez, 2017; Le et al., 2011).

Fig. 2. A general autoencoder model.
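A minimal NumPy sketch of the encode/decode/update cycle described above, assuming a sigmoid activation; the layer sizes, learning rate, and exact gradient form are illustrative stand-ins for the paper's Eqs. (5)-(7), which are not reproduced in this excerpt.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hidden = 64, 16                        # illustrative layer sizes
W1 = rng.normal(0, 0.1, (n_hidden, n_in)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_in, n_hidden)); b2 = np.zeros(n_in)
lr = 0.1                                       # illustrative learning rate

def train_step(x):
    """One reconstruction step: encode (cf. Eq. 5), decode (cf. Eq. 6),
    then a backpropagation update (cf. Eq. 7)."""
    global W1, b1, W2, b2
    h = sigmoid(W1 @ x + b1)           # encoder: compress input to hidden code
    x_hat = sigmoid(W2 @ h + b2)       # decoder: reconstruct the input
    err = x_hat - x                    # reconstruction error
    # Backpropagate through the sigmoid nonlinearities
    d_out = err * x_hat * (1 - x_hat)
    d_hid = (W2.T @ d_out) * h * (1 - h)
    W2 -= lr * np.outer(d_out, h); b2 -= lr * d_out
    W1 -= lr * np.outer(d_hid, x); b1 -= lr * d_hid
    return float(np.mean(err ** 2))
```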
In Eq. (8), γ is the regularization parameter and L(w) is the weight adjustment parameter. As seen in Eq. (8), overfitting is avoided by multiplying the error term and the weighting factor in the backpropagation algorithm. Here, the parameters after the sigma are not constant, but are determined by trial and error in order to obtain the best result (Baştürk et al., 2017).
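Eq. (8) itself is not reproduced in this excerpt; the snippet below sketches the usual additive form of a regularized reconstruction loss, where gamma is the regularization strength and the squared-weight penalty stands in for L(w). Both the additive form and the default gamma are assumptions.

```python
import numpy as np

def regularized_loss(x, x_hat, weights, gamma=1e-3):
    """Reconstruction error plus a gamma-weighted penalty L(w).
    The squared-weight form of L(w) and the default gamma are
    assumptions; the paper tunes these values by trial and error."""
    mse = np.mean((x - x_hat) ** 2)              # error term
    l_w = sum(np.sum(W ** 2) for W in weights)   # weight penalty L(w)
    return mse + gamma * l_w
```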
Since complex networks and their classification require a network structure with more hidden layers, multiple autoencoders are connected in succession to obtain the stacked autoencoder model shown in Fig. 3.
As shown in Fig. 3, in the stacked autoencoder structure, the success rate is reduced due to the reduction of the input data dimension. The softmax layer is applied to overcome this problem. The softmax layer is a probability-based linear classifier used in cases where there are two or more classes. This layer increases the classification performance by using the attributes received from the stacked autoencoder structure. The L2WeightRegularization and SparsityRegularization parameters are used in this model to reduce overfitting during training. By using these parameters, it is possible to eliminate overfitting and to obtain better classification outcomes. The SparsityProportion parameter is used to control the sparseness of the data in the hidden neurons.
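The names L2WeightRegularization, SparsityRegularization, and SparsityProportion match the options of MATLAB's trainAutoencoder function, so the model was presumably built with that toolbox. The Python sketch below shows what the two pieces typically compute: a softmax output over class scores, and a KL-divergence sparsity penalty whose target activation plays the role of SparsityProportion. The KL form and the rho default are assumptions, not values from the paper.

```python
import numpy as np

def softmax(z):
    """Probability-based linear classifier output over two or more classes."""
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

def sparsity_penalty(mean_activation, rho=0.05):
    """KL-divergence sparsity term; rho is the target mean activation
    per hidden neuron (the role SparsityProportion plays). The KL form
    and the 0.05 default are assumptions."""
    h = np.clip(mean_activation, 1e-8, 1 - 1e-8)
    return np.sum(rho * np.log(rho / h)
                  + (1 - rho) * np.log((1 - rho) / (1 - h)))
```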
In the stacked autoencoder method, non-linear maps (activation functions) are used when the dimension reduction process is performed, as in neural networks. Compared to the PCA model, this negatively affects the time performance, even though it allows more effective attributes to come to the foreground in datasets with complex relationships (Vincent, Larochelle, Lajoie, Bengio, & Manzagol, 2010).
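To make the contrast with PCA concrete, both maps below reduce a data matrix to 16 dimensions, but only the encoder applies a nonlinearity. The matrix X, the component count, and the single-layer encoder are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.random((200, 64))     # hypothetical dataset: 200 samples, 64 features

# PCA: a purely linear projection into the reduced space (fast to compute)
codes_pca = PCA(n_components=16).fit_transform(X)

# Autoencoder encoder: an affine map followed by a nonlinearity (slower to
# train, but able to capture complex, non-linear relationships in the data)
W1 = rng.normal(0, 0.1, (16, 64))
b1 = np.zeros(16)
codes_ae = 1.0 / (1.0 + np.exp(-(X @ W1.T + b1)))
```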
Experiments were carried out, as shown in Fig. 4, using the three components of the deep neural network: the dataset used in the study, the autoencoder layers, and the softmax layer.
As shown in Fig. 4, a classifier was created by adding two hidden autoencoders and a softmax layer at the end of the second autoencoder. The input dataset was classified with the stacked autoencoder and softmax in the following steps (a sketch of all three steps appears at the end of this section).

Step 1. The input dataset was given to the first autoencoder and the values were trained according to Eq. (7).

Step 2. The hidden layer of the first autoencoder is given as input to the second autoencoder, as in Fig. 4. The second autoencoder is trained as in Step 1. (As a result of the experimental studies carried out, it was found that the best case was provided with two autoencoders.)

Step 3. The output values of the stacked autoencoder are given to the softmax layer and classified as in Fig. 4.

The stacked autoencoder with softmax can provide better classification performance by detecting more complex relations in the dataset. In addition, with each of the autoencoders included, it re-
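A compact end-to-end sketch of Steps 1-3, under stated assumptions: X, y, the layer sizes, epochs, and learning rate are all illustrative, and scikit-learn's multinomial logistic regression stands in for the paper's softmax layer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, epochs=50, lr=0.1, seed=0):
    """Train one autoencoder on X (rows = samples) and return the encoder
    parameters together with the hidden-layer codes for the whole set."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.1, (n_hidden, n_in)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_in, n_hidden)); b2 = np.zeros(n_in)
    for _ in range(epochs):
        for x in X:
            h = sigmoid(W1 @ x + b1)                # encode
            x_hat = sigmoid(W2 @ h + b2)            # decode
            d_out = (x_hat - x) * x_hat * (1 - x_hat)
            d_hid = (W2.T @ d_out) * h * (1 - h)
            W2 -= lr * np.outer(d_out, h); b2 -= lr * d_out
            W1 -= lr * np.outer(d_hid, x); b1 -= lr * d_hid
    return (W1, b1), sigmoid(X @ W1.T + b1)

rng = np.random.default_rng(1)
X = rng.random((200, 64))                  # hypothetical inputs
y = rng.integers(0, 3, 200)                # hypothetical class labels

# Step 1: train the first autoencoder on the raw input dataset.
_, H1 = train_autoencoder(X, n_hidden=32)
# Step 2: train the second autoencoder on the first hidden layer's codes.
_, H2 = train_autoencoder(H1, n_hidden=16)
# Step 3: classify the stacked features; multinomial logistic regression
# stands in here for the softmax layer.
clf = LogisticRegression(max_iter=1000).fit(H2, y)
print(clf.score(H2, y))
```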