Article ID | Journal ID | Year | English Article | Full Text
---|---|---|---|---
531812 | 869876 | 2016 | 12-page PDF | Free download
• A new design framework for AMs is proposed, built on Network Potential Fields.
• Visual saliency is introduced into the AMs by shaping the EBF kernels.
• The AMs, with sparse and dynamic synapses, are guaranteed to be asymptotically stable.
In this paper, we present a framework for constructing a general class of recurrent neural networks (RNNs) as associative memories (AMs) for pattern storage and retrieval. Unlike traditional AM models, which treat all elements of a pattern equally, the proposed framework introduces the visual saliency of a target pattern into the AM design by encoding saliency values into the ellipsoidal basis function (EBF) kernel that computes the weighted distance between the input and target patterns. Network potential fields (NPFs) are then constructed as linear combinations of EBF and radial basis function (RBF) kernels for auto-associative memories (AAMs) and hetero-associative memories (HAMs), respectively. Sparse and dynamic synapses for both the proposed AAMs and HAMs are determined efficiently from the gradients of the NPFs, without the continuity assumptions on the RNN's activation function that traditional AMs usually require. With the proposed method, the target patterns become fixed-point attractors of the network, and the AMs are proven, via Lyapunov analysis, to converge to one of these attractors. Comparative experiments on retrieval and association of 2D color images demonstrate that the resulting AAMs and HAMs are robust to a variety of input noise and corruption.
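To make the abstract's construction concrete, the following is a minimal sketch (not the paper's implementation) of the core idea: an EBF kernel whose ellipsoidal axes are set by per-element saliency weights, an NPF formed as a negative linear combination of such kernels so that each stored pattern sits at a local minimum, and retrieval by descending the NPF gradient toward a fixed-point attractor. All function names, the kernel width `beta`, and the plain gradient-descent dynamics are illustrative assumptions.

```python
import numpy as np

def ebf_kernel(x, t, s, beta=1.0):
    # EBF kernel: Gaussian of a saliency-weighted (ellipsoidal) distance,
    # so salient elements of the target pattern t count more (assumed form).
    return np.exp(-beta * np.sum(s * (x - t) ** 2))

def npf(x, targets, saliencies, beta=1.0):
    # Network potential field: negative linear combination of EBF kernels.
    # Each stored pattern is a local minimum, i.e. a fixed-point attractor.
    return -sum(ebf_kernel(x, t, s, beta)
                for t, s in zip(targets, saliencies))

def npf_grad(x, targets, saliencies, beta=1.0):
    # Analytic gradient of the NPF with respect to the network state x.
    g = np.zeros_like(x)
    for t, s in zip(targets, saliencies):
        g += 2.0 * beta * ebf_kernel(x, t, s, beta) * s * (x - t)
    return g

def retrieve(x0, targets, saliencies, beta=1.0, lr=0.5, steps=200):
    # Retrieval as gradient descent on the NPF: from a noisy input the
    # state flows to the nearest stored pattern (simplified dynamics).
    x = x0.copy()
    for _ in range(steps):
        x -= lr * npf_grad(x, targets, saliencies, beta)
    return x
```

In this toy form, uniform saliency reduces the EBF kernel to an ordinary RBF kernel, matching the abstract's use of RBF kernels for the hetero-associative case.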
Journal: Pattern Recognition - Volume 60, December 2016, Pages 669–680