Exploring earable technology to detect chewing moments in sedentary common daily activities

Date
2020-08-20
Authors
Lotfi, Roya
Abstract
The feasibility of collecting diverse data from the sensors built into wearable devices has enticed many researchers to use them for analyzing human activities and behaviors. In particular, audio, video, and motion data have been utilized for automatic dietary monitoring. In this research, we investigate the feasibility of detecting chewing activity from audio and inertial sensor data obtained from an ear-worn device, eSense. We process the data from each sensor separately and determine the accuracy of each sensing modality for chewing detection, using MFCCs and the spectral centroid as features and Logistic Regression, Decision Tree, and Random Forest as classifiers. We also measure the performance of chewing detection when fusing the features extracted from the audio and inertial sensor data. We evaluate the chewing detection algorithm in a lab-based pilot study with 5 participants, comprising 130 minutes of audio and inertial measurement unit (IMU) data. The results indicate that the in-ear IMU outperforms audio in detecting chewing, achieving 95% accuracy, and that fusing the two modalities improves accuracy to 97%.
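The pipeline outlined in the abstract can be sketched roughly as follows. This is a minimal illustration assuming librosa and scikit-learn, with made-up parameters (1-second windows, a 16 kHz audio rate, mean-pooled frame features, simple per-axis IMU statistics) since the paper's exact settings are not given here; the helper names audio_features, imu_features, and fused_features are hypothetical.

```python
# Minimal sketch of the abstract's pipeline: MFCC + spectral centroid
# audio features, simple IMU statistics, feature-level fusion, and a
# Random Forest classifier. Windowing and feature pooling choices are
# assumptions, not the paper's reported configuration.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def audio_features(window, sr):
    """MFCCs + spectral centroid for one audio window, mean-pooled over frames."""
    mfcc = librosa.feature.mfcc(y=window, sr=sr, n_mfcc=13)        # (13, frames)
    centroid = librosa.feature.spectral_centroid(y=window, sr=sr)  # (1, frames)
    return np.concatenate([mfcc.mean(axis=1), centroid.mean(axis=1)])

def imu_features(window):
    """Per-axis mean and std for a (samples, 6) accelerometer+gyroscope window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def fused_features(audio_win, sr, imu_win):
    # Feature-level fusion: concatenate the two modality vectors.
    return np.concatenate([audio_features(audio_win, sr), imu_features(imu_win)])

# Demo on synthetic windows (labels: 1 = chewing, 0 = not chewing).
sr = 16000
rng = np.random.default_rng(0)
X = np.stack([
    fused_features(rng.standard_normal(sr).astype(np.float32), sr,
                   rng.standard_normal((100, 6)))
    for _ in range(40)
])
y = rng.integers(0, 2, size=40)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("train accuracy:", clf.score(X, y))
```

Feature-level fusion here simply concatenates the per-window vectors from the two modalities before classification, which is one common way to combine earable audio and IMU streams; the same feature matrix could equally be fed to the Logistic Regression or Decision Tree classifiers named in the abstract.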
Keywords
Chewing detection, Earable, Machine learning, Signal processing, MFCC
Citation
Lotfi, R., Tzanetakis, G., Eskicioglu, R., & Irani, P. (2020, May). A comparison between audio and IMU data to detect chewing events based on an earable device. In Proceedings of the 11th Augmented Human International Conference (pp. 1-8).