Article ID: 465889
Journal: Pervasive and Mobile Computing
Published Year: 2016
Pages: 18 Pages
File Type: PDF
Abstract

Human activity recognition is a core component of context-aware, ubiquitous computing systems. Traditionally, this task is accomplished by analysing signals from wearable motion sensors. While successful for low-level activities (e.g. walking or standing), high-level activities (e.g. watching movies or attending lectures) are difficult to distinguish from motion data alone. Furthermore, instrumenting complex body sensor networks at population scale is impractical. In this work, we take an alternative approach and leverage rich, dynamic, crowd-generated self-report data from social media platforms as the basis for in-situ activity recognition. By treating the user as the “sensor”, we make use of implicit signals emitted through natural use of mobile smartphones, in the form of textual content, semantic location, and time. Tackling both the task of recognizing a main activity (multi-class classification) and that of recognizing all applicable activity categories (multi-label tagging) from a single instance, we obtain mean accuracies of more than 75%. We conduct a thorough analysis and interpretation of our model, illustrating a promising first step towards comprehensive, high-level activity recognition using instrumentation-free, crowdsourced, social media data.
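The abstract does not specify the model used; as a minimal sketch of the two task framings it describes (multi-class main-activity recognition and multi-label activity tagging from text, semantic location, and time), the following scikit-learn example uses hypothetical toy data and assumed feature names chosen purely for illustration.

```python
# Hedged sketch: not the paper's actual pipeline. It only illustrates the two
# tasks named in the abstract (multi-class vs. multi-label) over the three
# signal types (text, semantic location, time). Toy data and column names
# ("text", "location", "hour") are assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MultiLabelBinarizer, OneHotEncoder

# Toy self-report instances: post text, semantic location, and hour of day.
data = pd.DataFrame({
    "text": ["great lecture on machine learning",
             "movie night with friends",
             "morning run in the park"],
    "location": ["university", "cinema", "park"],
    "hour": [14, 20, 7],
})
main_activity = ["attending lecture", "watching movie", "exercising"]
all_tags = [["attending lecture"],
            ["watching movie", "socializing"],
            ["exercising"]]

# Combine the three signal types into one feature space.
features = ColumnTransformer([
    ("text", TfidfVectorizer(), "text"),                            # textual content
    ("loc", OneHotEncoder(handle_unknown="ignore"), ["location"]),  # semantic location
    ("time", OneHotEncoder(handle_unknown="ignore"), ["hour"]),     # time of day
])

# Task 1: recognize the single main activity (multi-class classification).
multiclass = Pipeline([("features", features),
                       ("clf", LogisticRegression(max_iter=1000))])
multiclass.fit(data, main_activity)

# Task 2: recognize all applicable activity categories (multi-label tagging).
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(all_tags)
multilabel = Pipeline([("features", features),
                       ("clf", OneVsRestClassifier(LogisticRegression(max_iter=1000)))])
multilabel.fit(data, Y)

print(multiclass.predict(data))
print(mlb.inverse_transform(multilabel.predict(data)))
```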

Related Topics
Physical Sciences and Engineering › Computer Science › Computer Networks and Communications