Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
462976 | 696937 | 2014 | 14-page PDF | Free download |
Modern computers with hard-disk storage and networks with dynamic spectrum access illustrate systems having a resource R that allows fragmented allocations. We model R as a sequence of M > 1 units for which requests are made by items in a FIFO queue; each request is for a desired amount of R, the item size, and the residence time during which it is needed. So long as there are enough currently available units of R, an item at the head of the queue can be divided into fragments accommodated by the gaps in R formed by these units. Under the key assumption that allocations given to items cannot be changed prior to their departures, fragmentation in the form of alternating gaps and allocated resource builds up randomly as items come and go. The improvements in resource utilization created by fragmentation are acquired at a processing cost, so how fragmentation evolves is an important performance issue.

We define a probability model specifying distributions for item sizes and residence times, and then analyze the system operating at capacity. If M is much larger than the maximum item size, then as the fragmentation process approaches equilibrium, nearly all of the allocated items are completely fragmented, i.e., the occupied units are mutually disjoint. In a suite of four theorems, we show how this result specializes for certain classes of item-size distributions. However, as a counterpoint to these rather intimidating results, we present the findings of extensive experiments which show that the delays in reaching the inefficient states of nearly complete fragmentation can be surprisingly long, and hence that even moderately fragmented states are usually of no concern.
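The model described above lends itself to a simple discrete-time simulation. The following is a minimal sketch, not the authors' analysis: it assumes uniform item sizes and residence times (the paper studies general distributions), one arrival per time step, and a head-of-queue item that is split across the lowest-indexed free units. All names and parameters (`M`, `max_item`, `steps`) are illustrative.

```python
import random
from collections import deque

def simulate(M=64, max_item=8, steps=200, seed=1):
    """Toy sketch of the fragmented-allocation model: R is M units,
    FIFO items request (size, residence time), and an allocation is
    never moved until the item departs."""
    rng = random.Random(seed)
    free = [True] * M                  # availability of each unit of R
    departures = []                    # (departure time, unit indices)
    queue = deque()
    frag_history = []                  # number of occupied runs per step
    for t in range(steps):
        # release the units of items whose residence time has expired
        for _, units in (x for x in departures if x[0] == t):
            for u in units:
                free[u] = True
        departures = [x for x in departures if x[0] != t]
        # one new item joins the FIFO queue each step (toy assumption)
        queue.append((rng.randint(1, max_item), rng.randint(1, 20)))
        # serve the head of the queue while enough free units exist;
        # the item is fragmented across the first available gaps
        while queue and sum(free) >= queue[0][0]:
            size, res = queue.popleft()
            units = [i for i, f in enumerate(free) if f][:size]
            for u in units:
                free[u] = False
            departures.append((t + res, units))
        # count maximal runs of occupied units (allocated fragments)
        runs = sum(1 for i in range(M)
                   if not free[i] and (i == 0 or free[i - 1]))
        frag_history.append(runs)
    return frag_history

history = simulate()
print(history[-1])   # fragmentation level after the final step
```

Tracking `frag_history` over long runs is one way to observe the slow drift toward the highly fragmented equilibrium states that the paper's theorems describe, and the long delays its experiments report.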
Journal: Performance Evaluation - Volume 79, September 2014, Pages 273–286