Article code: 4947006
Journal code: 1439560
Year of publication: 2017
English article: 10-page PDF, free full-text download
English title of the ISI article
Node-level parallelization for deep neural networks with conditional independent graph
Related subjects
Engineering and Basic Sciences › Computer Engineering › Artificial Intelligence
English abstract
Deep neural networks require high-performance computing and highly effective implementations to keep running times within a reasonable range. We propose a novel node-level parallelization of the forward and backward propagations, called conditional independent parallelization, to improve the level of concurrency. The propagations exploit a conditional independent graph (CIG), built in O(N) time, which consists of conditionally independent sets of nodes. Each set in the CIG is visited sequentially, while the nodes within a set are computed concurrently. In addition, we analyze the properties of the CIG, prove the correctness of the propagations based on it, and study the theoretical speedup ratios of the parallelization. Moreover, this parallelism can be applied to arbitrary neural network structures without affecting convergence, since it only requires a conditional independent graph. It can also be integrated into frameworks with batch-level and data-level parallelism to further improve concurrency, and since modern GPUs support concurrent kernels, the parallelization can be implemented on a GPU directly. To verify the parallelization experimentally, we implement an autoencoder, a dependency parser, and an image recognizer with the parallelization and test them on a 4-core Intel Core i7-4790K CPU with 32 GB of memory. The results demonstrate maximum speedups of 3.965× for the autoencoder, 3.106× for the parser, and 2.966× for the recognizer.
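The abstract does not spell out how the conditional independent sets are built or scheduled. A minimal sketch of one plausible approach, assuming the network is given as a DAG of compute nodes: a Kahn-style topological leveling groups nodes whose predecessors all lie in earlier sets (so they are mutually independent given earlier results), and a forward pass then visits the sets sequentially while running each set's nodes concurrently. The function names `build_cig` and `forward` are illustrative, not from the paper.

```python
# Sketch only: groups DAG nodes into conditionally independent sets
# (levels) and evaluates each level's nodes concurrently.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def build_cig(edges, num_nodes):
    """Kahn-style leveling in O(N + E) time: each returned set
    contains nodes whose predecessors all lie in earlier sets,
    so the nodes of one set can be computed concurrently."""
    indegree = [0] * num_nodes
    successors = defaultdict(list)
    for u, v in edges:          # edge u -> v: v depends on u
        successors[u].append(v)
        indegree[v] += 1
    frontier = [n for n in range(num_nodes) if indegree[n] == 0]
    levels = []
    while frontier:
        levels.append(frontier)
        nxt = []
        for u in frontier:
            for v in successors[u]:
                indegree[v] -= 1
                if indegree[v] == 0:   # all predecessors resolved
                    nxt.append(v)
        frontier = nxt
    return levels

def forward(levels, compute):
    """Visit the independent sets sequentially; within a set,
    evaluate all nodes in parallel (here via a thread pool)."""
    with ThreadPoolExecutor() as pool:
        for level in levels:
            list(pool.map(compute, level))  # barrier between sets
```

The sequential loop over sets plus the implicit barrier after each `pool.map` is what preserves the data dependencies, mirroring the abstract's "each set is visited sequentially, while the nodes in the set are calculated concurrently."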
Publisher
Database: Elsevier - ScienceDirect
Journal: Neurocomputing - Volume 267, 6 December 2017, Pages 261-270