Article code: 377535 | Journal code: 658788 | Publication year: 2016 | English article: 11-page PDF | Full-text version: free download
English title of the ISI article
Survival analysis for high-dimensional, heterogeneous medical data: Exploring feature extraction as an alternative to feature selection
Persian translation of the title
تجزیه و تحلیل بقا برای داده های پزشکی ناهمگن با ابعاد بالا: بررسی استخراج ویژگی به عنوان جایگزینی برای انتخاب ویژگی
Related topics
Engineering and Basic Sciences; Computer Engineering; Artificial Intelligence
English abstract


• We propose random survival forests for feature extraction for survival analysis.
• We formulate two constraints on the neighborhood graph specific to survival analysis (a forest-based neighborhood sketch follows this list).
• We implement a comparative analysis of 16 feature extraction/selection methods.
• For small sample sizes, models with built-in feature selection are preferred.
• For large sample sizes, feature extraction methods performed comparably.
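The second and third highlights concern deriving local neighborhood relations from right-censored data. Below is a minimal, illustrative sketch (not the authors' implementation) of one way a random survival forest can induce such neighborhoods via tree proximities: samples that frequently fall into the same leaf are treated as neighbors. It assumes the scikit-survival package and that RandomSurvivalForest exposes scikit-learn's apply() for per-tree leaf indices; the WHAS500 dataset, the k-nearest-neighbor rule, and k = 10 are arbitrary stand-ins, and the paper's two survival-specific graph constraints are not reproduced here.

```python
# Illustrative sketch: forest-proximity neighborhoods from right-censored data.
import numpy as np
from sksurv.datasets import load_whas500
from sksurv.preprocessing import OneHotEncoder
from sksurv.ensemble import RandomSurvivalForest

X, y = load_whas500()                        # 500 patients, mixed categorical/numeric features
X = OneHotEncoder().fit_transform(X)         # dummy-code the categorical columns

rsf = RandomSurvivalForest(n_estimators=100, min_samples_leaf=15, random_state=0)
rsf.fit(X, y)

# Leaf index of every sample in every tree, shape (n_samples, n_trees).
# Assumes apply() is available, as on scikit-learn forests.
leaves = rsf.apply(X)

# Forest proximity: fraction of trees in which two samples share a leaf.
prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=-1)

# Turn proximities into a k-nearest-neighbor graph (indices of the k closest samples).
k = 10
dist = 1.0 - prox
np.fill_diagonal(dist, np.inf)               # exclude self-neighbors
neighbors = np.argsort(dist, axis=1)[:, :k]
print(neighbors.shape)                       # (500, 10)
```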

Background: In clinical research, the primary interest is often the time until occurrence of an adverse event, i.e., survival analysis. Its application to electronic health records is challenging for two main reasons: (1) patient records are comprised of high-dimensional feature vectors, and (2) feature vectors are a mix of categorical and real-valued features, which implies varying statistical properties among features. To learn from high-dimensional data, researchers can choose from a wide range of methods in the fields of feature selection and feature extraction. Whereas feature selection is well studied, little work has focused on utilizing feature extraction techniques for survival analysis.

Results: We investigate how well feature extraction methods can deal with features having varying statistical properties. In particular, we consider multiview spectral embedding algorithms, which have been developed specifically for these situations. We propose to use random survival forests to accurately determine local neighborhood relations from right-censored survival data. We evaluated 10 combinations of feature extraction methods and 6 survival models, with and without intrinsic feature selection, in the context of survival analysis on 3 clinical datasets. Our results demonstrate that for small sample sizes – less than 500 patients – models with built-in feature selection (Cox model with ℓ1 penalty, random survival forest, and gradient boosted models) outperform feature extraction methods by a median margin of 6.3% in concordance index (inter-quartile range: [−1.2%; 14.6%]).

Conclusions: If the number of samples is insufficient, feature extraction methods are unable to reliably identify the underlying manifold, which makes them of limited use in these situations. For large sample sizes – in our experiments, 2500 samples or more – feature extraction methods perform as well as feature selection methods.
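As a companion to the Results paragraph, the sketch below illustrates the general shape of such a benchmark: the three models with built-in feature selection named in the abstract (Cox model with ℓ1 penalty, random survival forest, gradient boosted model), each scored by the concordance index on a held-out split. It assumes the scikit-survival package; the WHAS500 dataset and all hyperparameters are illustrative stand-ins and do not correspond to the paper's clinical datasets or evaluation protocol.

```python
# Illustrative sketch: models with built-in feature selection scored by concordance index.
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sksurv.datasets import load_whas500
from sksurv.preprocessing import OneHotEncoder
from sksurv.linear_model import CoxnetSurvivalAnalysis
from sksurv.ensemble import RandomSurvivalForest, GradientBoostingSurvivalAnalysis
from sksurv.metrics import concordance_index_censored

X, y = load_whas500()                        # y: structured array (event indicator, observed time)
X = OneHotEncoder().fit_transform(X)         # dummy-code the categorical columns
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
event_field, time_field = y.dtype.names      # scikit-survival convention: event first, time second

models = {
    "Cox model with l1 penalty": make_pipeline(
        StandardScaler(), CoxnetSurvivalAnalysis(l1_ratio=1.0, alpha_min_ratio=0.01)),
    "Random survival forest": RandomSurvivalForest(n_estimators=200, random_state=0),
    "Gradient boosted model": GradientBoostingSurvivalAnalysis(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    risk = model.predict(X_test)             # higher value = higher predicted risk
    cindex = concordance_index_censored(y_test[event_field], y_test[time_field], risk)[0]
    print(f"{name}: concordance index = {cindex:.3f}")
```

Feature extraction baselines (for example, an embedding fitted on the training split and fed into a Cox model) would slot into the same loop, which is why the concordance index serves as a common yardstick across both families of methods.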

Publisher
Database: Elsevier - ScienceDirect
Journal: Artificial Intelligence in Medicine - Volume 72, September 2016, Pages 1–11
Authors