Applied Soft Computing, cilt.122, 2022 (SCI-Expanded)
© 2022 Elsevier B.V.

In this paper, two novel Multilayer Extreme Learning Machine (ML-ELM) networks, called Improved Multilayer Extreme Learning Machines (IML-ELM), are presented. The proposed network architectures use neuron activations both during and after training. In the first network (IML-ELM1), every layer's connection weights are assigned randomly as orthonormal matrices. The second network (IML-ELM2) assigns random orthonormal connection weights only in the first layer; each subsequent layer's connection weights are taken from the previous layer's output weight matrix. This assignment strategy reduces the computation time of IML-ELM2 even further. The networks' modeling performance is investigated on seven benchmark dynamic systems (BDS), and the proposed IML-ELM1 and IML-ELM2 are shown to model these systems better than ML-ELM. For some of the systems studied, they improve modeling performance by more than 70% on both the training and test data sets relative to ML-ELM. For instance, with 100 nodes on BDS 7, ML-ELM, IML-ELM1 and IML-ELM2 yield average testing root mean square errors of 0.627977, 0.104272 (an 83% improvement) and 0.092683 (85%), respectively. In addition, the developed networks are experimentally shown to reduce average training time, with improvements exceeding 60% in some cases. These results demonstrate that the proposed improved multilayer extreme learning machines are efficient tools for system modeling applications.
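The two weight-assignment strategies described in the abstract can be loosely sketched as follows. This is a minimal illustration only: the stacked ELM-autoencoder propagation convention, the `tanh` activation, the equal layer widths, and all function names are assumptions made here for clarity, not the paper's exact formulation.

```python
import numpy as np

def random_orthonormal(rows, cols, rng):
    # Orthonormalize a Gaussian matrix via QR; one common way to obtain
    # random orthonormal connection weights.
    a = rng.standard_normal((max(rows, cols), min(rows, cols)))
    q, _ = np.linalg.qr(a)          # columns of q are orthonormal
    return q if rows >= cols else q.T

def elm_ae_output_weights(h_in, w):
    # ELM output weights solved in closed form by least squares
    # (Moore-Penrose pseudoinverse), with the layer input as the target.
    h = np.tanh(h_in @ w)           # hidden activations
    return np.linalg.pinv(h) @ h_in

rng = np.random.default_rng(0)
x = rng.standard_normal((50, 8))    # toy data: 50 samples, 8 features

# Layer 1 (both IML-ELM variants): random orthonormal connection weights.
w1 = random_orthonormal(8, 8, rng)
beta1 = elm_ae_output_weights(x, w1)
h1 = np.tanh(x @ beta1.T)

# Layer 2, IML-ELM1-style: draw fresh random orthonormal weights.
w2_v1 = random_orthonormal(8, 8, rng)

# Layer 2, IML-ELM2-style: reuse the previous layer's output weight
# matrix as the connection weights, skipping a fresh random draw and
# orthonormalization (the source of the extra time savings).
w2_v2 = beta1
h2 = np.tanh(h1 @ w2_v2)
```

The closed-form least-squares solve for the output weights is what keeps training fast in all ELM variants; the IML-ELM2 reuse step removes even the random-weight generation cost for the deeper layers.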