Article code: 443949
Journal code: 692824
Publication year: 2012
Full text: 13-page PDF, free download
English title of the ISI article
MIND: Modality independent neighbourhood descriptor for multi-modal deformable registration
Related subjects
Engineering and Basic Sciences › Computer Engineering › Computer Graphics and Computer-Aided Design
English abstract

Deformable registration of images obtained from different modalities remains a challenging task in medical image analysis. This paper addresses this important problem and proposes a modality independent neighbourhood descriptor (MIND) for both linear and deformable multi-modal registration. Based on the similarity of small image patches within one image, it aims to extract the distinctive structure in a local neighbourhood, which is preserved across modalities. The descriptor is based on the concept of image self-similarity, which has been introduced for non-local means filtering for image denoising. It is able to distinguish between different types of features such as corners, edges and homogeneously textured regions. MIND is robust to the most considerable differences between modalities: non-functional intensity relations, image noise and non-uniform bias fields. The multi-dimensional descriptor can be efficiently computed in a dense fashion across the whole image and provides point-wise local similarity across modalities based on the absolute or squared difference between descriptors, making it applicable for a wide range of transformation models and optimisation algorithms. We use the sum of squared differences of the MIND representations of the images as a similarity metric within a symmetric non-parametric Gauss–Newton registration framework. In principle, MIND would be applicable to the registration of arbitrary modalities. In this work, we apply and validate it for the registration of clinical 3D thoracic CT scans between inhale and exhale as well as the alignment of 3D CT and MRI scans. Experimental results show the advantages of MIND over state-of-the-art techniques such as conditional mutual information and entropy images, with respect to clinically annotated landmark locations.


► Image descriptor for multi-modal registration.
► Discriminative for different features independent of modality.
► Robust against noise, bias-fields and large misalignments.
► Accurate deformable registration of inhale/exhale CT and CT/MRI lung volumes.
► Reduced anatomical landmark error compared to state-of-the-art methods.
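
As a rough illustration of the self-similarity idea summarised in the abstract above, the following Python sketch computes a MIND-style descriptor densely over a 2-D image and derives a point-wise multi-modal similarity from the squared differences between descriptors. The 4-neighbourhood search region, box-filter patch distance, and variance estimate used here are simplifying assumptions for illustration, not the authors' exact implementation.

# Minimal sketch of a MIND-like self-similarity descriptor (illustrative only).
# Assumes a 2-D single-channel image; the search region, patch size and
# variance estimate are simplified choices, not the paper's exact ones.
import numpy as np
from scipy.ndimage import uniform_filter, shift

def mind_descriptor(img, radius=1):
    """Dense descriptor: one channel per offset r in a small search region,
    roughly exp(-D_p(x, x+r) / V(x)), normalised per pixel."""
    offsets = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # 4-neighbourhood search region
    patch_dists = []
    for dy, dx in offsets:
        # squared difference between the image and a translated copy of itself
        diff2 = (img - shift(img, (dy, dx), order=1, mode='nearest')) ** 2
        # patch distance D_p: aggregate squared differences over a local patch
        patch_dists.append(uniform_filter(diff2, size=2 * radius + 1))
    patch_dists = np.stack(patch_dists, axis=0)
    # local variance estimate V(x): mean patch distance over the neighbourhood
    variance = patch_dists.mean(axis=0) + 1e-6
    desc = np.exp(-patch_dists / variance)
    return desc / desc.max(axis=0, keepdims=True)  # normalise so the maximum is 1

def mind_ssd(img_fixed, img_moving, radius=1):
    """Point-wise similarity across modalities: squared differences of MIND
    representations, averaged over descriptor channels."""
    d1 = mind_descriptor(img_fixed, radius)
    d2 = mind_descriptor(img_moving, radius)
    return np.mean((d1 - d2) ** 2, axis=0)

Because the similarity is an ordinary sum of squared differences in descriptor space, it can in principle be plugged into standard mono-modal registration machinery such as the Gauss-Newton scheme mentioned in the abstract.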

Publisher
Database: Elsevier - ScienceDirect
Journal: Medical Image Analysis - Volume 16, Issue 7, October 2012, Pages 1423–1435
Authors