Article code | Journal code | Publication year | Original article | Full-text version |
---|---|---|---|---|
403989 | 677379 | 2016 | 9-page PDF (English) | Free download |
• A learning theory based on the variational principle of least cognitive action.
• Supervised On-line Learning evolving as a dissipative dynamic system.
• Stochastic or Batch Gradient Descent is obtained by varying the dissipation level.
• Experimental evaluation on standard and custom benchmarks.
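The highlights above can be made concrete with a schematic form of the functional. The exact terms, weights, and signs used in the paper are not reproduced here, so the following is an assumption-laden sketch in the spirit of classical mechanics, with `w(t)` the network weights, a kinetic term penalizing fast weight changes, and a potential term `V` playing the role of the loss:

```latex
% Hedged sketch: a schematic "cognitive action" with kinetic and
% potential counterparts; the paper's exact functional may differ.
A[w] = \int_0^T e^{\theta t}\left( \frac{m}{2}\,\|\dot w(t)\|^2 - V\big(w(t), t\big) \right) dt
```

Requiring stationarity of this functional (the Euler–Lagrange condition) yields, under these assumptions, the damped dynamics `m\ddot w + m\theta\dot w + \nabla_w V = 0`: the exponential weight `e^{\theta t}` acts as a dissipation factor, and in the strongly dissipative limit the second-order term becomes negligible, leaving a gradient-flow-like equation `\dot w \propto -\nabla_w V`.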
This paper analyzes the practical issues of, and reports some results on, a theory in which learning is modeled as a continuous temporal process driven by laws describing the interactions of intelligent agents with their own environment. The classic regularization framework is paired with the idea of temporal manifolds by introducing the principle of least cognitive action, which is inspired by the related principle of mechanics. Introducing counterparts of the kinetic and potential energy leads to an interpretation of learning as a dissipative process. As an example, we apply the theory to supervised learning in neural networks and show that the corresponding Euler–Lagrange differential equations can be connected to the classic gradient descent algorithm on the supervised pairs. We present preliminary experiments to confirm the soundness of the theory.
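As a rough numerical illustration of the abstract's claim that the Euler–Lagrange dynamics connect to gradient descent, the sketch below (an assumption for illustration, not the paper's algorithm or potential) integrates a damped second-order system `m·w'' + γ·w' + ∇U(w) = 0` on a toy quadratic potential and compares its resting point with the minimizer found by plain gradient descent:

```python
import numpy as np

def dissipative_trajectory(w0, grad, m=1.0, gamma=5.0, dt=0.01, steps=5000):
    """Semi-implicit Euler for m*w'' + gamma*w' + grad U(w) = 0.

    A dissipative dynamic: the damping term bleeds off kinetic
    energy, so the state settles into a minimum of the potential.
    """
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        v += dt * (-(gamma * v + grad(w)) / m)  # velocity update
        w += dt * v                             # position update
    return w

def gradient_descent(w0, grad, lr=0.01, steps=5000):
    """Reference first-order method for comparison."""
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# Toy potential U(w) = 0.5 * ||w||^2, so grad U(w) = w; minimizer at 0.
grad = lambda w: w
w_dyn = gradient = dissipative_trajectory([1.0, -2.0], grad)
w_gd = gradient_descent([1.0, -2.0], grad)
print(w_dyn, w_gd)  # both settle near the origin
```

In the strongly damped regime the second-order term becomes negligible and the dynamics track the gradient flow, which is the continuous-time counterpart of gradient descent; this is the limit the highlights refer to when varying the dissipation level.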
Journal: Neural Networks - Volume 81, September 2016, Pages 72–80