Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
10345419 | Computer Methods and Programs in Biomedicine | 2013 | 10 Pages |
Abstract
From microarrays and next-generation sequencing to clinical records, the amount of biomedical data is growing at an exponential rate. Handling and analyzing these large amounts of data demands that computing power and methodologies keep pace. The goal of this paper is to illustrate how high performance computing methods in SAS can be easily implemented, without the need for extensive computer programming knowledge or access to supercomputing clusters, to help address the challenges posed by large biomedical datasets. We illustrate the utility of database connectivity, pipeline parallelism, multi-core parallel processing, and distributed processing across multiple machines. Simulation results are presented for parallel and distributed processing. Finally, a discussion of the costs and benefits of such methods compared to traditional HPC supercomputing clusters is given.
Related Topics
Physical Sciences and Engineering
Computer Science
Computer Science (General)
Authors
Justin R. Brown, Valentin Dinu