Article ID: 424665
Journal: Future Generation Computer Systems
Published Year: 2013
Pages: 13
File Type: PDF
Abstract

As big-data processing and analysis come to dominate the usage of Cloud systems, the need for Cloud-hosted data scheduling and optimization services increases. One key component of such a service is the ability to estimate available bandwidth and achievable throughput, since all scheduling and optimization decisions are built on top of this information. The biggest challenge in providing these estimates is dynamically deciding what proportion of the actual dataset, when transferred, gives an accurate estimate of the bandwidth and throughput achievable by transferring the whole dataset. That proportion of the data is called the sampling size (or the probe size). Although small fixed sample sizes worked well for high-latency, low-bandwidth networks in the past, high-bandwidth networks require much larger and more dynamic sample sizes, since an accurate estimate now also depends on how fast the transfer protocol can saturate the fat network link. In this study, we present a model to decide the optimal sampling size based on the data size and the estimated capacity of the network. Our results show that, in the majority of cases, the predicted sampling size is very close to the targeted best sampling size for a given file transfer.
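To make the idea concrete, the sketch below shows one way a sampling-size decision based on dataset size and estimated network capacity could look in code. The ramp-up heuristic, parameter names, and default values are illustrative assumptions made here for exposition, not the model proposed in the paper.

```python
# Minimal sketch of choosing a sampling (probe) size from the dataset size
# and an estimate of the network capacity. The heuristic (probe long enough
# for the transfer protocol to saturate the link, capped at a small fraction
# of the dataset) is an assumption for illustration only.

def probe_size_bytes(dataset_size: int,
                     link_capacity_bps: float,
                     ramp_up_seconds: float = 2.0,
                     max_fraction: float = 0.05,
                     min_bytes: int = 1 << 20) -> int:
    """Return a sampling (probe) size in bytes.

    dataset_size      -- total size of the dataset to transfer, in bytes
    link_capacity_bps -- estimated network capacity, in bits per second
    ramp_up_seconds   -- assumed time the protocol needs to saturate the link
    max_fraction      -- cap the probe at this fraction of the dataset
    min_bytes         -- never probe with less than this (1 MiB by default)
    """
    # Bytes the link could carry while the protocol ramps up to full speed.
    ramp_up_bytes = int(link_capacity_bps / 8 * ramp_up_seconds)
    # The probe must cover the ramp-up period, but stay a small share of the
    # whole dataset so the sampling transfer itself remains cheap.
    probe = max(min_bytes, ramp_up_bytes)
    return min(probe, int(dataset_size * max_fraction))


if __name__ == "__main__":
    # Example: a 100 GB dataset over an estimated 10 Gbps link.
    size = probe_size_bytes(dataset_size=100 * 10**9,
                            link_capacity_bps=10 * 10**9)
    print(f"probe size: {size / 10**6:.1f} MB")
```

With these illustrative defaults, a faster estimated link or a larger dataset yields a larger probe, matching the abstract's observation that high-bandwidth networks need bigger, more dynamic sample sizes.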

► We present a model to calculate the smallest amount of sampling data to transfer in order to estimate the file transfer throughput.
► An online sampling strategy is combined with the parallel stream optimization model.
► The proposed model is used in a Cloud-hosted data scheduler.
► The optimal throughput estimated from the sampling transfers is accurate compared to the actual optimized file transfer throughput.

Related Topics
Physical Sciences and Engineering › Computer Science › Computational Theory and Mathematics
Authors