Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
515548 | Information Processing & Management | 2013 | 21 | |
Preference elicitation is a fundamental and challenging problem in the design of recommender systems. In the present work we propose a content-based technique to automatically generate a semantic representation of the user’s musical preferences directly from audio. Starting from an explicit set of music tracks provided by the user as evidence of his/her preferences, we infer high-level semantic descriptors for each track, thereby obtaining a user model. To demonstrate the benefits of our proposal, we present two applications of the technique. In the first, we consider three approaches to music recommendation: two based on a semantic music similarity measure and one based on a semantic probabilistic model. In the second, we address the visualization of the user’s musical preferences by creating a humanoid cartoon-like character – the Musical Avatar – automatically inferred from the semantic representation. We conducted a preliminary evaluation of the proposed technique in the context of these applications with 12 subjects. The results are promising: the recommendations were positively evaluated and close to those produced by state-of-the-art metadata-based systems, and the subjects judged the generated visualizations to capture their core preferences. Finally, we highlight the advantages of the proposed semantic user model for enhancing the user interfaces of information filtering systems.
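To make the pipeline described above concrete, the sketch below illustrates one plausible reading of it: each preferred track is reduced to a vector of semantic descriptor activations inferred from audio (e.g., genre or mood probabilities), those vectors are aggregated into a user model, and candidate tracks are ranked by semantic similarity to that model. This is an illustrative assumption, not the authors' implementation; the function names, the mean aggregation, and the cosine similarity measure are placeholders chosen for clarity.

```python
import numpy as np

# Minimal sketch (assumed, not the paper's code): semantic descriptor vectors
# per track -> aggregated user model -> similarity-ranked recommendations.

def build_user_model(track_descriptors: np.ndarray) -> np.ndarray:
    """Aggregate per-track descriptor vectors (tracks x descriptors) into a
    single user-profile vector; a plain mean is used here for illustration."""
    return track_descriptors.mean(axis=0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two semantic descriptor vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def recommend(user_model: np.ndarray, candidates: np.ndarray, top_k: int = 5) -> np.ndarray:
    """Return indices of the top_k candidate tracks most similar to the user model."""
    scores = np.array([cosine_similarity(user_model, c) for c in candidates])
    return np.argsort(scores)[::-1][:top_k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    preferred = rng.random((10, 8))   # 10 preference examples, 8 semantic descriptors
    library = rng.random((100, 8))    # candidate tracks to rank
    profile = build_user_model(preferred)
    print(recommend(profile, library))
```

Under these assumptions, the same user-model vector could also drive the visualization step, e.g., by mapping dominant descriptors to visual attributes of the avatar.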
► We propose a preference elicitation technique based on explicit preference examples.
► We study audio-based approaches to music recommendation and preference visualization.
► Approaches based on semantics inferred from audio surpass low-level timbral methods.
► Such approaches are close to metadata-based systems, making them suitable for music discovery.
► The proposed visualization captures the core musical preferences of the participants.