|Article code||Journal code||Publication year||English article||Persian translation||Full-text version|
|4951514||1364360||2018||13-page PDF||Not available||Download|
• A new fair-share scheduler is proposed for performance-asymmetric multicore systems.
• Scaled virtual runtime (SVR) is introduced to capture the asymmetry among cores.
• A task migration policy is proposed to balance SVRs among cores.
• The approach bounds the SVR differences between tasks in a cluster by a constant.
• Our approach incurs only negligible run-time and energy overhead.
Performance-asymmetric multicore processors have been increasingly adopted in embedded systems due to their architectural benefits in improved performance and power savings. While fair-share scheduling is a crucial kernel service for such applications, it is still at an early stage with respect to performance-asymmetric multicore architectures. In this article, we first propose a new fair-share scheduler by adopting the notion of scaled CPU time, which reflects the performance asymmetry between different types of cores. Using the scaled CPU time, we revise the virtual runtime of the completely fair scheduler (CFS) of the Linux kernel and extend it into the scaled virtual runtime (SVR). In addition, we propose an SVR balancing algorithm that bounds the maximum SVR difference of tasks running on cores of the same type. The SVR balancing algorithm periodically partitions the tasks in the system into task groups and allocates them to the cores in such a way that tasks with smaller SVR receive larger SVR increments and thus proceed more quickly. We formally prove the fairness property of the proposed algorithm. To demonstrate the effectiveness of the proposed approach, we implemented it in Linaro's scheduling framework on ARM's Versatile Express TC2 board and performed a series of experiments using the PARSEC benchmarks. The experiments show that the maximum SVR difference is only 4.09 ms in our approach, whereas it diverges indefinitely with time in the original Linaro's scheduling framework. In addition, our approach incurs a run-time overhead of only 0.4% with an increased energy consumption of only 0.69%.
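The core idea of scaled virtual runtime can be illustrated with a minimal sketch. The exact formula is not given in this abstract; the sketch below assumes the natural construction suggested by the text: a CFS-style vruntime increment multiplied by a per-core performance scale factor, so that equal SVR progress reflects equal shares of scaled CPU time across big and LITTLE cores. The names `svr_increment`, `core_scale`, and `BASE_WEIGHT` are illustrative, not from the paper.

```python
# Hypothetical SVR accounting for a performance-asymmetric (big.LITTLE) system.
# Assumption: a task's SVR advances like CFS vruntime, additionally scaled by
# the relative performance of the core it runs on.

BASE_WEIGHT = 1024  # CFS weight of a nice-0 task

def svr_increment(delta_exec_ns: int, core_scale: float, task_weight: int) -> float:
    """SVR gained by a task that ran delta_exec_ns of wall-clock CPU time
    on a core with relative performance core_scale (e.g. big=2.0, LITTLE=1.0),
    weighted inversely by the task's CFS weight as in ordinary vruntime."""
    return delta_exec_ns * core_scale * BASE_WEIGHT / task_weight

# Under this model, 1 ms on a big core (scale 2.0) accrues twice the SVR of
# 1 ms on a LITTLE core (scale 1.0), so a fair scheduler that equalizes SVR
# gives LITTLE-core tasks proportionally more wall-clock time.
big_svr = svr_increment(1_000_000, 2.0, BASE_WEIGHT)
little_svr = svr_increment(1_000_000, 1.0, BASE_WEIGHT)
assert big_svr == 2 * little_svr
```

A balancing step in this spirit would then migrate tasks so that those with smaller accumulated SVR land on faster cores, letting them catch up, which matches the paper's description of smaller-SVR tasks receiving larger SVR increments.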
Journal: Journal of Parallel and Distributed Computing - Volume 111, January 2018, Pages 174-186