Article ID Journal Published Year Pages File Type
493936 Sustainable Computing: Informatics and Systems 2014 14 Pages PDF
Abstract

•Reduction in energy consumption of the system between 30% and 53%, depending on whether the existing level of overheads must be maintained.
•Development of a selection of Reinforcement Learning approaches, along with a detailed comparison of their performance.
•Analysis of trace logs from an existing High Throughput Computing system, highlighting the need for an adaptive approach.

Volunteer computing systems provide an easy mechanism for users who wish to perform large amounts of High Throughput Computing work. However, if the volunteer computing system is deployed over a shared set of computers where interactive users can seize back control of the computers, this can lead to wasted computational effort and hence wasted energy. Determining on which resource to deploy a particular piece of work, or whether to defer the work until a later time, is a difficult problem: the decision depends both on the expected free time available on the computers within the volunteer computing system and on the expected runtime of the work, neither of which is easy to determine a priori. We develop here a Reinforcement Learning approach to solving this problem and demonstrate that it can reduce energy consumption by between 30% and 53%, depending on whether an increase in the incurred overheads can be tolerated.
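The core decision the abstract describes, choosing a machine for a task or deferring the task entirely, can be sketched as a small Reinforcement Learning loop. The environment, eviction probabilities, reward values, and machine names below are illustrative assumptions for a single-state (bandit-style) Q-learning agent, not the authors' actual model or data.

```python
import random

# Illustrative sketch (assumptions, not the paper's model): an agent learns
# whether to place a task on one of two volunteer machines, or to defer it.
random.seed(0)

ACTIONS = ["machine_a", "machine_b", "defer"]

# Toy environment: each machine has a probability that an interactive user
# seizes it back mid-task, wasting the computational effort and energy.
EVICTION_PROB = {"machine_a": 0.6, "machine_b": 0.1}

def reward(action):
    """Negative reward models wasted energy; a small penalty models delay."""
    if action == "defer":
        return -0.1                  # cost of delaying the work
    if random.random() < EVICTION_PROB[action]:
        return -1.0                  # task evicted: energy wasted
    return 1.0                       # task completed successfully

def train(episodes=2000, alpha=0.1, epsilon=0.1):
    q = {a: 0.0 for a in ACTIONS}    # single-state Q-table
    for _ in range(episodes):
        if random.random() < epsilon:          # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:                                  # exploit current estimate
            a = max(q, key=q.get)
        q[a] += alpha * (reward(a) - q[a])     # incremental Q update
    return q

q = train()
best = max(q, key=q.get)
print(best)  # the agent learns to prefer the rarely-evicted machine
```

Under these assumed eviction rates, the expected reward is 0.8 for machine_b versus -0.2 for machine_a, so the learned policy avoids the machine likely to be reclaimed; the paper's full approach additionally has to estimate such quantities from trace logs rather than knowing them up front.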

Related Topics
Physical Sciences and Engineering Computer Science Computer Science (General)
Authors