Article ID: 404364
Journal: Neural Networks
Published Year: 2011
Pages: 7 Pages
File Type: PDF
Abstract

Approximation capabilities of two types of computational models are explored: dictionary-based models (i.e., linear combinations of n-tuples of basis functions computable by units belonging to a set called a “dictionary”) and linear ones (i.e., linear combinations of n fixed basis functions). The two models are compared in terms of approximation rates, i.e., speeds of decrease of approximation errors for a growing number n of basis functions. Proofs of upper bounds on approximation rates by dictionary-based models are inspected, to show that for individual functions they do not imply estimates for dictionary-based models that do not also hold for some linear models. Instead, the possibility of faster approximation rates by dictionary-based models is demonstrated for worst-case errors in approximation of suitable sets of functions. For such sets, even geometric upper bounds hold.
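The distinction above can be illustrated with a small numerical sketch (an assumption for illustration, not from the paper): a linear model uses the first n atoms of a fixed basis, while a dictionary-based model may choose any n atoms from a larger dictionary. Here the dictionary consists of Fourier sine atoms, the target function equals a single high-frequency atom, and atoms are selected greedily (a simple orthogonal-matching-pursuit-style sketch); the fixed low-frequency basis then approximates poorly while the dictionary model recovers the target.

```python
import numpy as np

# Hypothetical setup: sine atoms on [0, 2*pi]; the target is a single
# high-frequency atom, so a fixed low-frequency basis approximates it
# poorly while greedy selection from the dictionary recovers it exactly.
x = np.linspace(0.0, 2.0 * np.pi, 512)
dictionary = [np.sin(k * x) for k in range(1, 21)]  # 20 candidate units
target = np.sin(7.0 * x)                            # equals one dictionary atom

def least_squares_error(atoms, y):
    """L2 error of the best linear combination of the given atoms."""
    A = np.column_stack(atoms)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.linalg.norm(A @ coef - y))

n = 3

# Linear model: the first n atoms are fixed in advance.
linear_err = least_squares_error(dictionary[:n], target)

# Dictionary-based model: greedily pick the n atoms (out of 20) that
# most reduce the residual error.
chosen = []
for _ in range(n):
    best = min((a for a in dictionary if not any(a is c for c in chosen)),
               key=lambda a: least_squares_error(chosen + [a], target))
    chosen.append(best)
dict_err = least_squares_error(chosen, target)

print(linear_err, dict_err)  # the dictionary model should do no worse
```

Since the dictionary contains the fixed basis as a subset, the best n-term dictionary approximation is never worse than the linear one; the paper's point is that for suitable *sets* of functions this gap persists in the worst case.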

► Linear models: linear combinations of n fixed computational units.
► Dictionary-based models: linear combinations of all n-tuples from a set of units.
► Worst-case errors in approximation of sets of functions.
► Models compared in terms of rates of decrease of worst-case errors for growing n.
► Faster rates for dictionary-based approximation of certain sets of functions.

Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence
Authors