|Article code||Journal code||Year of publication||English article|
|537852||870923||2016||6-page PDF|
• Pairwise comparison (PC), rather than conventional numerical rating of image quality, is explored for IQA.
• A new optimization objective for learning image quality ranks is established.
• Image quality ratings are additionally used as weights in the rank model, making it rating-sensitive.
Which image features are crucial for image quality assessment (IQA), and how these features affect the human visual system (HVS), is still largely beyond human knowledge. Hence, machine learning (ML) is employed to build IQA models by simulating HVS behavior in IQA processes. Support vector machine/regression (SVM/SVR), a major member of the ML family, has recently been applied to IQA with success. As to image quality rating, human opinion is not always reliable. In fact, subjects cannot precisely rate small differences in image quality in subjective testing, resulting in unreliable Mean Opinion Scores (MOSs). However, they can easily identify the better/worse one of two given images, even when their qualities do not differ much. In this sense, human opinion on pairwise comparison (PC) of image quality is more reliable than image quality rating. Thus, PC has been exploited in developing IQA metrics. In this paper, a rank learning optimization framework is first developed to model IQA. In particular, PCs of image quality, instead of numerical ratings, are incorporated into the optimization framework. Then, a novel no-reference (NR)-IQA metric is proposed to infer image quality in terms of image quality ranks. By importing rank learning theory and PC into IQA, a fundamental and meaningful departure from the existing IQA framework can be expected. The experimental results confirm that the proposed Pairwise Rank Learning based Image Quality Metric (PRLIQM) achieves performance comparable to state-of-the-art NR-IQA approaches.
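To make the idea of learning from pairwise comparisons concrete, the following is a minimal sketch (not the paper's actual PRLIQM formulation) of RankSVM-style pairwise rank learning: given feature vectors for images and a set of "image i was judged better than image j" pairs, a linear scoring function is fit with a hinge loss on score differences. All feature and parameter names here are hypothetical.

```python
import numpy as np

def pairwise_rank_learn(X, pairs, lr=0.1, reg=1e-3, epochs=200, seed=0):
    """Learn a linear quality-scoring function w . x from pairwise comparisons.

    X     : (n, d) array of image feature vectors (hypothetical features).
    pairs : list of (i, j) index pairs meaning "image i was judged better
            than image j" in a subjective pairwise test.

    Uses a RankSVM-style hinge loss on score differences,
        loss = max(0, 1 - w . (x_i - x_j)) + (reg/2) * ||w||^2,
    minimized by stochastic subgradient descent.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        # visit the comparison pairs in random order each epoch
        for k in rng.permutation(len(pairs)):
            i, j = pairs[k]
            diff = X[i] - X[j]
            grad = reg * w          # regularization subgradient
            if w @ diff < 1.0:      # margin violated: push scores apart
                grad -= diff
            w -= lr * grad
    return w
```

A learned `w` then scores any image as `w @ x`, and images are ranked by score; no numerical quality ratings are needed, only the (more reliable) pairwise judgments.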
Journal: Displays - Volume 44, September 2016, Pages 21–26