<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Journal Article</dc:product_type><dc:title>Auritus: An Open-Source Optimization Toolkit for Training and Development of Human Movement Models and Filters Using Earables</dc:title><dc:creator>Saha, Swapnil Sayan; Sandha, Sandeep Singh; Pei, Siyou; Jain, Vivek; Wang, Ziqi; Li, Yuchen; Sarker, Ankur; Srivastava, Mani</dc:creator><dc:corporate_author/><dc:editor/><dc:description>Smart ear-worn devices (called earables) are being equipped with various onboard sensors and algorithms, transforming earphones from simple audio transducers into multi-modal interfaces that make rich inferences about human motion and vital signals. However, developing sensory applications using earables is currently quite cumbersome, with several barriers in the way. First, time-series data from earable sensors incorporate information about physical phenomena in complex settings, requiring machine-learning (ML) models learned from large-scale labeled data. This is challenging in the context of earables because large-scale open-source datasets are missing. Second, the small size and compute constraints of earable devices make on-device integration of many existing algorithms for tasks such as human activity and head-pose estimation difficult. To address these challenges, we introduce Auritus, an extendable and open-source optimization toolkit designed to enhance and replicate earable applications. Auritus serves two primary functions. First, Auritus handles data collection, pre-processing, and labeling tasks for creating customized earable datasets using graphical tools.
The system includes an open-source dataset with 2.43 million inertial samples related to head and full-body movements, comprising 34 head poses and 9 activities from 45 volunteers. Second, Auritus provides a tightly integrated hardware-in-the-loop (HIL) optimizer and TinyML interface to develop lightweight, real-time ML models for activity detection and filters for head-pose tracking. To validate the utility of Auritus, we showcase three sample applications, namely fall detection, spatial audio rendering, and augmented reality (AR) interfacing. Auritus recognizes activities with 91% leave-one-out test accuracy (98% test accuracy) using real-time models as small as 6-13 kB. Our models are 98-740x smaller and 3-6% more accurate than the state-of-the-art. We also estimate head pose with absolute errors as low as 5 degrees using 20 kB filters, achieving up to 1.6x precision improvement over existing techniques. We make the entire system open-source so that researchers and developers can contribute to any layer of the system or rapidly prototype their applications using our dataset and algorithms.</dc:description><dc:publisher/><dc:date>2022-07-04</dc:date><dc:nsf_par_id>10385253</dc:nsf_par_id><dc:journal_name>Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies</dc:journal_name><dc:journal_volume>6</dc:journal_volume><dc:journal_issue>2</dc:journal_issue><dc:page_range_or_elocation>1 to 34</dc:page_range_or_elocation><dc:issn>2474-9567</dc:issn><dc:isbn/><dc:doi>https://doi.org/10.1145/3534586</dc:doi><dcq:identifierAwardId>1640813; 1823221; 1822935</dcq:identifierAwardId><dc:subject/><dc:version_number/><dc:location/><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>