Article ID | Journal ID | Year | Article | Full text |
---|---|---|---|---|
404364 | 677415 | 2011 | 7-page PDF | Free download |
Approximation capabilities of two types of computational models are explored: dictionary-based models (i.e., linear combinations of n-tuples of basis functions computable by units belonging to a set called a “dictionary”) and linear ones (i.e., linear combinations of n fixed basis functions). The two models are compared in terms of approximation rates, i.e., speeds of decrease of approximation errors for a growing number n of basis functions. Proofs of upper bounds on approximation rates by dictionary-based models are inspected, to show that for individual functions they do not imply estimates for dictionary-based models that do not hold also for some linear models. Instead, the possibility of getting faster approximation rates by dictionary-based models is demonstrated for worst-case errors in approximation of suitable sets of functions. For such sets, even geometric upper bounds hold.
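The two model classes in the abstract can be illustrated numerically. The sketch below is not from the paper; the target function, the Gaussian dictionary, and all parameter values are assumptions chosen for illustration. A linear model fits the best coefficients for n fixed dictionary units, while a dictionary-based model additionally searches over all n-tuples drawn from the dictionary (here by brute force). Since the fixed n-tuple is one of the candidates, the dictionary-based error can never exceed the linear one.

```python
import itertools
import numpy as np

# Hypothetical illustration (not from the paper): compare a linear model
# (n fixed basis functions) with a dictionary-based model (best n-tuple
# chosen from a larger dictionary) on a discretized target function.

x = np.linspace(0.0, 1.0, 200)
target = np.sin(6 * np.pi * x) * np.exp(-x)  # arbitrary target function

# Assumed dictionary: Gaussian bumps with evenly spaced centers.
centers = np.linspace(0.0, 1.0, 12)
dictionary = [np.exp(-((x - c) ** 2) / (2 * 0.05 ** 2)) for c in centers]

def ls_error(units):
    """L2 error of the best linear combination of the given units."""
    A = np.column_stack(units)
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return float(np.linalg.norm(A @ coef - target))

n = 4
# Linear model: the first n units are fixed in advance.
linear_err = ls_error(dictionary[:n])
# Dictionary-based model: search over every n-tuple from the dictionary.
dict_err = min(ls_error([dictionary[i] for i in combo])
               for combo in itertools.combinations(range(len(dictionary)), n))

# The fixed n-tuple is one of the searched candidates, so the
# dictionary-based model can never do worse than the linear one.
assert dict_err <= linear_err + 1e-12
```

The brute-force search over n-tuples is exponential in the dictionary size; it stands in here only to make the definition concrete, whereas the paper's comparison concerns approximation rates, not algorithms.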
► Linear models: linear combinations of n fixed computational units.
► Dictionary-based models: linear combinations of all n-tuples from a set of units.
► Worst-case errors in approximation of sets of functions.
► Models compared in terms of rates of decrease of worst-case errors for growing n.
► Faster rates for dictionary-based approximation of certain sets of functions.
Journal: Neural Networks - Volume 24, Issue 8, October 2011, Pages 881–887