Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
489914 | Procedia Computer Science | 2015 | 6 Pages | |
Abstract
The MapReduce framework provides a scalable, fault-tolerant model for large-scale, data-intensive computing. In this paper, we propose an algorithm to improve the I/O performance of the Hadoop Distributed File System (HDFS). The results show that the proposed algorithm achieves better I/O performance with comparatively less synchronization overhead.
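For context on where synchronization cost arises in HDFS writes: clients typically block on calls such as hsync() on the output stream, which forces buffered data onto disk at the DataNodes. The sketch below is a minimal illustration using the standard Hadoop FileSystem API; it is not the paper's proposed algorithm, and the file path and sync interval are hypothetical choices for demonstration.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.nio.charset.StandardCharsets;

public class HdfsWriteSketch {
    public static void main(String[] args) throws Exception {
        // Picks up cluster settings from core-site.xml / hdfs-site.xml on the classpath.
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf);
             // Hypothetical output path, for illustration only.
             FSDataOutputStream out = fs.create(new Path("/tmp/io-demo.txt"))) {
            byte[] record = "sample record\n".getBytes(StandardCharsets.UTF_8);
            for (int i = 0; i < 10_000; i++) {
                out.write(record);
                // hsync() blocks until the data is persisted at the DataNodes.
                // Syncing on every record maximizes durability but is exactly the
                // synchronization cost write-heavy workloads pay; batching syncs
                // (here every 1,000 records, an arbitrary interval) trades some
                // durability for throughput.
                if (i % 1_000 == 0) {
                    out.hsync();
                }
            }
        }
    }
}
```

Reducing how often writers must block on such sync barriers is the general lever behind the "less synchronization" claim in the abstract.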
Related Topics
Physical Sciences and Engineering
Computer Science
Computer Science (General)