Valery Nikolaevich Belovodsky, Ph.D., Associate Professor, DonNTU, Department of Computer Modeling and Design.
283001, 118 b Artyoma str., Donetsk.
Research interests: modeling of technical systems, nonlinear dynamics, fractals and mathematical design, neural networks.
UDC: 519.6
DOI: 10.24412/2413-7383-2024-3-4-19
Language: Russian
Abstract:
The evolution of neural networks and the ever-expanding range of their applications make it necessary to study their theoretical foundations at an early stage of student training. It is natural to begin this process with the typical problems of function approximation theory covered in a course on computational methods. Such problems are discussed in this article. Using the example of a two-layer neural network, the capabilities of neural networks for the interpolation and approximation of functions of one variable are studied. Computational experiments are performed with a specially developed program in which the error of the neural network is minimized by a subroutine implementing the Levenberg–Marquardt method. The error of network parameter identification is reduced by varying the initial parameter values using the Sobol sequence. The results obtained are compared with classical algebraic approaches, the advantages and disadvantages of the neural approach are noted, and generalizations are drawn.
Keywords: interpolation, approximation, neural network, minimization, Sobol sequence.
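The scheme described in the abstract can be illustrated with a minimal sketch, not the authors' program: a two-layer network y(x) = Σ c_j·tanh(w_j·x + b_j) is fitted to samples of a target function by a hand-rolled Levenberg–Marquardt loop, with a multistart over initial parameter values. The network size, the target function sin(πx), and the plain random multistart (standing in for the Sobol sequence used in the paper) are illustrative assumptions.

```python
import numpy as np

def net(params, x, H):
    # two-layer network: H hidden tanh neurons, linear output layer
    w, b, c = params[:H], params[H:2*H], params[2*H:]
    return np.tanh(np.outer(x, w) + b) @ c

def residuals(params, x, y, H):
    return net(params, x, H) - y

def jacobian(params, x, H):
    # forward-difference Jacobian of the residuals w.r.t. the parameters
    eps = 1e-6
    base = net(params, x, H)
    J = np.empty((len(x), len(params)))
    for k in range(len(params)):
        p = params.copy(); p[k] += eps
        J[:, k] = (net(p, x, H) - base) / eps
    return J

def levenberg_marquardt(p, x, y, H, iters=200, lam=1e-2):
    # damped Gauss-Newton: solve (J'J + lam*I) step = -J'r each iteration
    for _ in range(iters):
        r = residuals(p, x, y, H)
        J = jacobian(p, x, H)
        A = J.T @ J + lam * np.eye(len(p))
        step = np.linalg.solve(A, -J.T @ r)
        if np.sum(residuals(p + step, x, y, H)**2) < np.sum(r**2):
            p, lam = p + step, max(lam * 0.7, 1e-10)  # accept, relax damping
        else:
            lam *= 2.0                                # reject, raise damping
    return p

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = np.sin(np.pi * x)            # illustrative target function
H = 5                            # hidden neurons (assumption)
best_err = np.inf
for _ in range(8):               # multistart over initial parameter values;
    p0 = rng.uniform(-1, 1, 3*H) # a Sobol point set would replace this draw
    p = levenberg_marquardt(p0, x, y, H)
    best_err = min(best_err, np.max(np.abs(net(p, x, H) - y)))
print(f"max approximation error: {best_err:.2e}")
```

In the paper the multistart points come from the Sobol sequence, whose low-discrepancy spacing covers the initial-value box more evenly than the random draw used here; the rest of the loop (damped normal equations, accept/reject step with adaptive damping) is the standard Levenberg–Marquardt scheme.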
List of literature:
1. Ermolenko, T. V. A study of the efficiency of forecasting models for a system of analysis and monitoring of energy consumption at coal industry enterprises / T. V. Ermolenko, V. N. Kotenko, V. V. Vinnik // Problems of Artificial Intelligence. 2022. № 4 (27). P. 25-34.
2. Ancyferov, S. S. Methodology for the development of intelligent systems / S. S. Ancyferov, A. S. Sigov, K. N. Fazilova // Problems of Artificial Intelligence. 2022. № 2 (25). P. 42-47.
3. Ermolenko, T. V. Classification of heartbeat anomalies using deep learning / T. V. Ermolenko, D. V. Rolik // Problems of Artificial Intelligence. 2022. № 1 (24). P. 40-53.
4. Berezin, I. S. Methods of Calculation. Vol. 1 / I. S. Berezin, N. P. Zhidkov. - Moscow: Publishing House of Physical and Mathematical Literature, 1962. - 464 p.
5. Fihtengol'c, G. M. A Course of Differential and Integral Calculus. Vol. III / G. M. Fihtengol'c. - Moscow: Nauka, 1966. - 656 p.
6. Bernshtejn, S. N. Proof of Weierstrass's theorem based on the theory of probability / S. N. Bernshtejn // Collected Works. Vol. 1. - Moscow: Publishing House of the Academy of Sciences of the USSR, 1952. - P. 105-106.
7. Kolmogorov, A. N. Selected Works. Mathematics and Mechanics / A. N. Kolmogorov. - Moscow: Nauka, 1985. - P. 179-182.
8. Lorentz, G. Metric entropy, widths, and superpositions of functions // American Mathematical Monthly. - 1962. - Vol. 69. - P. 469-485.
9. Hecht-Nielsen, R. Kolmogorov's Mapping Neural Network Existence Theorem / R. Hecht-Nielsen // Proceedings of the First IEEE International Conference on Neural Networks. 1987. Vol. III. P. 11-13. - https://cs.uwaterloo.ca/~y328yu/classics/Hecht-Nielsen.pdf (accessed 03/09/2024).
10. Gorban', A. N. Generalized approximation theorem and computational capabilities of neural networks / A. N. Gorban' // Siberian Journal of Computational Mathematics. 1998. Vol. 1, № 1. P. 12-24.
11. Cybenko, G. Approximation by Superpositions of a Sigmoidal Function / G. Cybenko // Mathematics of Control, Signals, and Systems. 1989. Vol. 2. P. 303-314.
12. Kruglov, V. V. Artificial Neural Networks. Theory and Practice / V. V. Kruglov, V. V. Borisov. - Moscow: Hotline-Telecom, 2002. - 382 p.
13. Yasnickij, L. Artificial intelligence: a popular introduction for teachers and schoolchildren / L. Yasnickij // Journal "Informatics". № 23 (600), 1-15.12.2009. - https://inf.1sept.ru/view_article.php?ID=200902304 (accessed 03/09/2024).
14. Mitchell, T. M. Machine Learning / T. M. Mitchell. - McGraw-Hill Science/Engineering/Math, 1997. - 432 p.
15. Hajkin, S. Neural Networks: A Complete Course / S. Hajkin. - Moscow: Williams Publishing House, 2006. - 1104 p.
16. Galushkin, A. I. Neural Networks: Fundamentals of the Theory / A. I. Galushkin. - Moscow: Hotline-Telecom, 2012. - 496 p.
17. Nikolenko, S. Deep Learning / S. Nikolenko, A. Kadurin, E. Arhangel'skaya. - St. Petersburg: Piter, 2018. - 480 p.
18. Galkin, V. A. Some aspects of approximation and interpolation of functions by artificial neural networks / V. A. Galkin, T. V. Gavrilenko, A. D. Smorodinov // Vestnik KRAUNC. Phys.-Mat. Sciences. 2022. Vol. 38, № 1. P. 54-73. DOI: 10.26117/2079-6641-2022-38-1-54-73.
19. Sobol', I. M. Selection of Optimal Parameters in Problems with Many Criteria / I. M. Sobol', R. B. Statnikov. - Moscow: Nauka, 1981. - 110 p.
20. Ketkov, Yu. L. Matlab 7: Programming, Numerical Methods / Yu. L. Ketkov, A. Yu. Ketkov, M. M. Shul'c. - St. Petersburg: BHV-Peterburg, 2005. - 752 p.
21. Levenberg–Marquardt algorithm. - https://ru.wikipedia.org/wiki/Algoritm_Levenberga_—_Markvardta (accessed 03/09/2024).
22. Ivahnenko, A. G. Inductive Method of Self-Organization of Models of Complex Systems / A. G. Ivahnenko. - Kiev: Naukova Dumka, 1981. - 296 p.
Release: 3(34)'2024
Chapter: MATHEMATICAL MODELING, NUMERICAL METHODS AND SOFTWARE SYSTEMS
How to cite:
Belovodskiy V. N. ON INTERPOLATION AND APPROXIMATION OF FUNCTIONS USING NEURAL NETWORKS [Text]
/ V. N. Belovodskiy
// Problems of Artificial Intelligence. - 2024. - № 3 (34). - P. 4-19. - http://paijournal.guiaidn.ru/ru/2024/3(34)-1.html