Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
5127476 | 1489056 | 2017 | 8-page PDF | Free download |
- Variable neighborhood search is applied to the dynamic job shop scheduling problem.
- Reinforcement learning is used to enhance the performance of the scheduling method.
- The method can update optimal strategies in the form of Q-factors.
In this paper, reinforcement learning (RL) with a Q-factor algorithm is used to enhance the performance of a scheduling method for the dynamic job shop scheduling (DJSS) problem, which considers random job arrivals and machine breakdowns. The parameters of the optimization process at each rescheduling point are selected by a continually improving policy obtained through RL. The scheduling method itself is based on variable neighborhood search (VNS), which is introduced to address the DJSS problem. A new approach is also introduced to calculate reward values in the learning process based on the quality of the selected parameters. The proposed method is compared with general variable neighborhood search and several dispatching rules that have been widely used in the literature for the DJSS problem. Results show the high performance of the proposed method in a simulated environment.
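The abstract describes a Q-factor (i.e. Q-learning) agent that selects VNS parameters at each rescheduling point and receives a reward reflecting the quality of the resulting schedule. The sketch below illustrates that general idea only; the candidate parameter settings, the single-state formulation, the reward proxy, and all hyperparameters are assumptions for illustration, not the paper's actual design.

```python
import random

# Hypothetical VNS parameter settings the agent can choose between.
ACTIONS = [
    {"max_iters": 50, "shake_strength": 1},
    {"max_iters": 100, "shake_strength": 2},
    {"max_iters": 200, "shake_strength": 3},
]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

# Q-factors: Q[(state, action_index)] -> estimated value of choosing that
# parameter setting in that state.
Q = {}

def choose_action(state, rng):
    """Epsilon-greedy selection of a VNS parameter setting."""
    if rng.random() < EPSILON:
        return rng.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """Standard Q-learning update of the Q-factor."""
    best_next = max(Q.get((next_state, a), 0.0) for a in range(len(ACTIONS)))
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# Toy loop standing in for successive rescheduling points. The reward is a
# noisy proxy that favors larger max_iters (purely illustrative; in the paper
# the reward is based on the quality of the schedule the VNS run produces).
rng = random.Random(0)
state = "reschedule"  # single abstract state for this sketch
for _ in range(500):
    a = choose_action(state, rng)
    reward = ACTIONS[a]["max_iters"] / 200.0 + rng.uniform(-0.1, 0.1)
    update(state, a, reward, state)

best = max(range(len(ACTIONS)), key=lambda a: Q.get((state, a), 0.0))
print("preferred parameter setting:", ACTIONS[best])
```

Because the reward proxy consistently favors the third setting, the learned Q-factors come to prefer it; in the actual DJSS setting the state would encode shop-floor conditions at the rescheduling point rather than a single constant label.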
Journal: Computers & Industrial Engineering - Volume 110, August 2017, Pages 75-82