Article code: 689077 · Journal code: 889589 · Publication year: 2014 · Full text: 11-page PDF (English)
English Title
Reducing the computational effort of optimal process controllers for continuous state spaces by using incremental learning and post-decision state formulations
Related Subjects
Engineering and Basic Sciences · Chemical Engineering · Process Chemistry and Technology
Abstract


• We have realized multiple optimal control approaches for continuous state spaces.
• Stochastic influences in the state space are taken into account.
• We compare the control approaches by their efficiency and computational effort.
• The evaluation sample problem allows us to apply all suggested approaches.
• We compare batch and incremental learning for Artificial Neural Networks.

Multistage optimization problems that are represented by Markov Decision Processes (MDPs) can be solved by the approach of Dynamic Programming (DP). However, in process control problems involving continuous state spaces, the classical DP formulation leads to computational intractability known as the ‘curse of dimensionality’. This issue can be overcome by the approach of Approximate Dynamic Programming (ADP) using simulation-based sampling in combination with value function approximators replacing the traditional value tables. In this paper, we investigate different approaches of ADP in the context of a deep cup drawing process, which is simulated by a finite element model. In applying ADP to the problem, Artificial Neural Networks (ANNs) are created as global parametric function approximators to represent the value functions as well as the state transitions. For each time step of the finite time horizon, time-indexed function approximations are built. We compare a classical DP approach to a backward ADP approach with batch learning of the ANNs and a forward ADP approach with incremental learning of the ANNs. In the batch learning mode, the ANNs are trained from temporary value tables constructed by exhaustive search backwards in time. In the incremental learning mode, on the other hand, the ANNs are initialized and then improved continually using data obtained by stochastic sampling of the simulation moving forward in time. For both learning modes, we obtain value function approximations with good performance. The deep cup drawing process under consideration is of medium model complexity and therefore allows us to apply all three methods and to perform a comparison with respect to the achieved efficiency and the associated computational effort as well as the decision behavior of the controllers.
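The abstract contrasts backward batch learning with forward incremental learning of time-indexed value-function approximators. The following is a rough sketch of the forward, incremental variant only, on an invented one-dimensional toy problem: the transition, the quadratic stage cost, the quadratic feature used in place of the paper's ANNs, and the Gaussian exploration kick are all illustrative assumptions, not the paper's deep-drawing model or method.

```python
import random

T = 5                                       # finite time horizon (assumed)
ACTIONS = [a / 10 for a in range(-20, 21)]  # discretized control grid (assumed)

def step(x, a, rng):
    """Hypothetical stochastic transition with a quadratic stage cost."""
    x_next = x + a + rng.gauss(0.0, 0.05)
    cost = x * x + 0.1 * a * a
    return x_next, cost

# One weight per time step: V_t(x) ~= w[t] * x^2, a toy stand-in for the
# paper's time-indexed ANN value approximators. w[T] = 0: no terminal cost.
w = [0.0] * (T + 1)

def v(t, x):
    return w[t] * x * x

def greedy_backup(t, x, rng, n_samples=5):
    """One-step lookahead: sampled expectation of stage cost plus cost-to-go."""
    best_val, best_a = float("inf"), 0.0
    for a in ACTIONS:
        est = 0.0
        for _ in range(n_samples):
            xn, c = step(x, a, rng)
            est += c + v(t + 1, xn)
        est /= n_samples
        if est < best_val:
            best_val, best_a = est, a
    return best_val, best_a

rng = random.Random(0)
LR = 0.05
for sweep in range(300):
    x = rng.uniform(-1.0, 1.0)              # fresh start state each sweep
    for t in range(T):                      # move forward in time
        target, a = greedy_backup(t, x, rng)
        # incremental (stochastic-gradient) update of the V_t weight
        w[t] += LR * (target - v(t, x)) * x * x
        # follow the greedy action with a small exploration kick, and keep
        # the sampled states in a bounded operating window
        x, _ = step(x, a + rng.gauss(0.0, 0.2), rng)
        x = max(-2.0, min(2.0, x))
```

After training, the greedy one-step lookahead at each time step acts as the controller; the same lookahead also generates the regression targets, which is the sense in which value estimation and sampling proceed together forward in time. The paper's post-decision state formulation, which avoids the inner expectation over noise during action selection, is not shown here.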

Publisher
Database: Elsevier - ScienceDirect
Journal: Journal of Process Control - Volume 24, Issue 3, March 2014, Pages 133–143