Article Code | Journal Code | Publication Year | English Article | Full-Text Version
---|---|---|---|---
406378 | 678081 | 2015 | 13-page PDF | Free download
• We propose a general folded-concave penalization formulation for the tensor completion problem.
• We derive an efficient LLA-ALM algorithm for finding a good local solution of the resulting nonconvex optimization problem.
• A two-step LLA strategy is used to speed up the proposed LLA-ALM algorithm.
• A series of numerical experiments has been carried out to demonstrate the superiority of the new formulation over the traditional nuclear-norm-based formulation.
Existing studies of matrix and tensor completion problems commonly adopt the nuclear norm penalization framework because the resulting convex optimization problem can be solved efficiently. Folded-concave penalization methods, by contrast, have achieved remarkable success in sparse learning problems owing to their attractive practical and theoretical properties. To bring the same benefits to tensor completion, we propose a new tensor completion model that uses a folded-concave penalty to estimate missing values in tensor data. Two typical folded-concave penalties, the minimax concave penalty (MCP) and the smoothly clipped absolute deviation (SCAD) penalty, are employed in the new model. To solve the resulting nonconvex optimization problem, we develop a local linear approximation augmented Lagrange multiplier (LLA-ALM) algorithm, which incorporates a two-step LLA strategy to search efficiently for a local optimum of the proposed model. Finally, we provide numerical experiments with phase transitions, synthetic data sets, and real image and video data sets to exhibit the superiority of the proposed model over the nuclear norm penalization method in terms of accuracy and robustness.
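As a minimal sketch of the two penalties named in the abstract, the scalar MCP and SCAD functions and their derivatives can be written as below. In an LLA step, the derivative evaluated at the current estimate supplies the weights of the linearized (weighted) penalty; the parameter names `lam` and `gamma` are illustrative, and this is not the authors' implementation.

```python
import numpy as np

def mcp(t, lam, gamma):
    """Minimax concave penalty (MCP), gamma > 1: quadratic blend up to
    |t| = gamma*lam, then constant at gamma*lam^2/2."""
    a = np.abs(t)
    return np.where(a <= gamma * lam,
                    lam * a - a**2 / (2 * gamma),
                    gamma * lam**2 / 2)

def mcp_deriv(t, lam, gamma):
    """Derivative of MCP with respect to |t|; used as LLA weights."""
    a = np.abs(t)
    return np.maximum(lam - a / gamma, 0.0)

def scad(t, lam, gamma):
    """Smoothly clipped absolute deviation (SCAD) penalty, gamma > 2:
    L1 near zero, quadratic transition, then constant at lam^2*(gamma+1)/2."""
    a = np.abs(t)
    return np.where(a <= lam,
                    lam * a,
                    np.where(a <= gamma * lam,
                             (2 * gamma * lam * a - a**2 - lam**2)
                             / (2 * (gamma - 1)),
                             lam**2 * (gamma + 1) / 2))

def scad_deriv(t, lam, gamma):
    """Derivative of SCAD with respect to |t|; used as LLA weights."""
    a = np.abs(t)
    return np.where(a <= lam,
                    lam,
                    np.maximum(gamma * lam - a, 0.0) / (gamma - 1))
```

Both derivatives vanish for large |t|, which is what distinguishes folded-concave penalties from the nuclear norm: large singular values are left (nearly) unpenalized, reducing estimation bias, while small ones are shrunk as under an L1/nuclear-norm penalty.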
Journal: Neurocomputing - Volume 152, 25 March 2015, Pages 261–273