Article code | Journal code | Publication year | English article | Full text |
---|---|---|---|---|
1140676 | 956737 | 2011 | 8-page PDF | Free download |
English Article Title (ISI)
Learning over sets with Recurrent Neural Networks: An empirical categorization of aggregation functions
Related Subjects
Engineering and Basic Sciences
Other Engineering Disciplines
Control and Systems Engineering

English Abstract
Numerous applications benefit from parts-based representations resulting in sets of feature vectors. To apply standard machine learning methods, these sets of varying cardinality need to be aggregated into a single fixed-length vector. We have evaluated three common Recurrent Neural Network (RNN) architectures, Elman, Williams & Zipser, and Long Short-Term Memory networks, on approximating eight aggregation functions of varying complexity. The goal is to establish baseline results showing whether existing RNNs can be applied to learn order-invariant aggregation functions. The results indicate that the aggregation functions can be categorized according to whether they entail (a) selection of a subset of elements and/or (b) non-linear operations on the elements. We have found that RNNs can learn to approximate aggregation functions requiring either (a) or (b), as well as those requiring only linear sub-functions, with very high accuracy. However, the combination of (a) and (b) cannot be learned adequately by these RNN architectures, regardless of size and architecture.
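To illustrate the setting the abstract describes, the sketch below (not from the paper; all names are illustrative) shows why a linear aggregation such as the element-wise sum is easy for an Elman-style recurrence: with identity weights and a linear activation, the hidden-state update reduces to h_t = h_{t-1} + x_t, whose final state is the sum of the set, independent of element order.

```python
import numpy as np

def elman_aggregate(xs, W_h, W_x, activation=lambda h: h):
    """Run a minimal Elman-style recurrence over a set of feature
    vectors and return the final hidden state as the aggregate."""
    h = np.zeros(W_h.shape[0])
    for x in xs:
        # Standard Elman update: new state from old state and input.
        h = activation(W_h @ h + W_x @ x)
    return h

d = 3
# With identity weights and linear activation, the recurrence
# computes the order-invariant sum aggregation exactly.
W_h = np.eye(d)
W_x = np.eye(d)
xs = [np.array([1.0, 2.0, 3.0]),
      np.array([0.5, -1.0, 2.0]),
      np.array([2.0, 0.0, 1.0])]

print(elman_aggregate(xs, W_h, W_x))  # equals sum of the three vectors
```

Aggregations that select a subset of elements (e.g. the maximum) or apply non-linear operations per element have no such closed-form weight setting, which is where the paper's empirical comparison of architectures comes in.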
Publisher
Database: Elsevier - ScienceDirect
Journal: Mathematics and Computers in Simulation - Volume 82, Issue 3, November 2011, Pages 442-449
Authors
W. Heidl, C. Eitzinger, M. Gyimesi, F. Breitenecker