Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
11002397 | Future Generation Computer Systems | 2019 | 33 | |
Abstract
A workflow is a set of tasks together with the dependencies among them; workflows are divided into scientific and business categories. To avoid the problems of centralized execution, workflows are broken into segments, a process known as fragmentation. Fragmenting a workflow requires careful attention to the dependencies among tasks and to runtime conditions. The scheduler and the fragmentor must cooperate so that the fragmentor generates appropriate fragments with optimized communication cost, delay time, response time, and throughput. To this end, the present study proposes a framework for the fragmentation and scheduling of tasks in scientific workflows, carried out in two phases. In the fragmentation phase, fragments are generated according to the number of virtual machines available at runtime. In the scheduling phase, virtual machines are selected with the aim of reducing bandwidth usage. Experiments are performed with three configurations covering both the fragmentation and scheduling phases. Response time, throughput, and cost (bandwidth and RAM) were improved compared to baseline studies on the Sipht, Inspiral, Epigenomics, Montage, and CyberShake workflows as datasets.
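The abstract outlines a two-phase idea: group workflow tasks into as many fragments as there are available virtual machines, then assign fragments to VMs so that bandwidth usage is reduced. The sketch below is only a minimal illustration of that idea under stated assumptions, not the authors' algorithm: the task names, data sizes (MB), VM bandwidths, the union-find merge of heavy dependencies, and the greedy fragment-to-VM pairing are all hypothetical choices made for demonstration.

```python
# Minimal sketch (assumed, not the paper's implementation): fragment a workflow
# DAG into at most num_vms fragments, then pair fragments with VMs so that
# fragments with heavy cross-boundary traffic get the fastest links.
from collections import defaultdict


def fragment_workflow(tasks, deps, num_vms):
    """Group tasks into at most num_vms fragments, merging along heavy dependencies."""
    parent = {t: t for t in tasks}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]
            t = parent[t]
        return t

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Merge tasks along the heaviest data dependencies first, which keeps the
    # largest transfers inside a fragment and cuts communication cost.
    for (src, dst), _mb in sorted(deps.items(), key=lambda kv: -kv[1]):
        if len({find(t) for t in tasks}) <= num_vms:
            break
        union(src, dst)

    fragments = defaultdict(list)
    for t in tasks:
        fragments[find(t)].append(t)
    return list(fragments.values())


def schedule(fragments, deps, vm_bandwidth_mbps):
    """Assign each fragment to one VM (assumes len(fragments) <= number of VMs)."""

    def external_mb(frag):
        # Data that must cross this fragment's boundary at runtime.
        return sum(mb for (s, d), mb in deps.items() if (s in frag) != (d in frag))

    # Greedy pairing: the most communication-heavy fragment gets the fastest VM.
    vms = sorted(vm_bandwidth_mbps, key=vm_bandwidth_mbps.get, reverse=True)
    frags = sorted(fragments, key=external_mb, reverse=True)
    return {vm: frag for frag, vm in zip(frags, vms)}


if __name__ == "__main__":
    # Hypothetical five-task workflow with edge weights as transferred MB.
    tasks = ["t1", "t2", "t3", "t4", "t5"]
    deps = {("t1", "t2"): 120, ("t1", "t3"): 40, ("t2", "t4"): 80, ("t3", "t5"): 10}
    frags = fragment_workflow(tasks, deps, num_vms=2)
    print(schedule(frags, deps, {"vm1": 1000, "vm2": 800}))
```

Running the example groups the heavily communicating tasks (t1, t2, t3, t4) into one fragment and leaves t5 in another, then maps the two fragments onto the two VMs; the real framework evaluated in the paper additionally accounts for delay time, response time, throughput, and RAM cost.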
Related Topics
Physical Sciences and Engineering
Computer Science
Computational Theory and Mathematics
Authors
Zahra Momenzadeh, Faramarz Safi-Esfahani