Although the average healthy adult transitions from sit to stand over 60 times per day, most research on powered prosthesis control has focused only on walking. In this paper, we present a data-driven controller that enables sitting, standing, and walking with minimal tuning. Our controller comprises two high-level modes, sit/stand and walking, and we develop heuristic biomechanical rules to control transitions between them. We use a phase variable based on the user's thigh angle to parameterize both walking and sit/stand motions, and use variable impedance control during ground contact and position control during swing. We extend previous work on data-driven optimization of continuous impedance parameter functions to design the sit/stand control mode using able-bodied data. Experiments with a powered knee-ankle prosthesis used by a participant with above-knee amputation demonstrate promising clinical outcomes, as well as trade-offs between our minimal-tuning approach and accommodation of user preferences. Specifically, our controller enabled the participant to complete the sit/stand task 20% faster and reduced average asymmetry by half compared to his everyday passive prosthesis. The controller also facilitated a timed up and go test involving sitting, standing, walking, and turning, with only a mild (10%) decrease in speed compared to the everyday prosthesis. Our sit/stand/walk controller enables multiple activities of daily life with minimal tuning and mode switching.
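The sketch below illustrates the kind of phase-based variable impedance law this abstract describes: a phase variable computed from the thigh angle indexes stiffness, damping, and equilibrium-angle schedules that produce the joint torque during ground contact. The linear phase mapping, schedule shapes, and all numerical values are illustrative assumptions, not the authors' optimized design.

```python
import numpy as np

def thigh_phase(thigh_angle_deg, theta_max=40.0, theta_min=-20.0):
    """Map the user's thigh angle (deg) onto a phase variable in [0, 1].

    A simple linear mapping is assumed here; the paper's phase variable
    may be defined differently (e.g., from a thigh-angle phase portrait).
    """
    return float(np.clip((theta_max - thigh_angle_deg) / (theta_max - theta_min), 0.0, 1.0))

def impedance_torque(phase, theta, theta_dot, k_sched, b_sched, eq_sched):
    """Variable impedance law: tau = -k(phi)*(theta - theta_eq(phi)) - b(phi)*theta_dot."""
    k = np.interp(phase, *k_sched)          # stiffness schedule (Nm/deg), assumed piecewise linear
    b = np.interp(phase, *b_sched)          # damping schedule (Nm*s/deg)
    theta_eq = np.interp(phase, *eq_sched)  # equilibrium-angle schedule (deg)
    return -k * (theta - theta_eq) - b * theta_dot

# Purely illustrative knee schedules over phase (not the paper's optimized parameters).
knots = [0.0, 0.5, 1.0]
knee_k = (knots, [3.0, 2.0, 1.0])
knee_b = (knots, [0.05, 0.08, 0.05])
knee_eq = (knots, [5.0, 15.0, 60.0])

phi = thigh_phase(thigh_angle_deg=10.0)
tau = impedance_torque(phi, theta=12.0, theta_dot=-30.0,
                       k_sched=knee_k, b_sched=knee_b, eq_sched=knee_eq)
print(f"phase {phi:.2f} -> knee torque {tau:.1f} Nm")
```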
An empirical evaluation of activities and classifiers for user identification on smartphones
Abstract: In the past few years, smart mobile devices have become ubiquitous. Most of these devices have embedded sensors such as GPS, accelerometer, gyroscope, etc. There is a growing trend to use these sensors for user identification and activity recognition. Most prior work, however, reports results for only a small number of classifiers, data sets, or activities. We present a comprehensive evaluation of ten representative classifiers for user identification on two publicly available data sets (thus our work is reproducible). Our results include data obtained from dynamic activities, such as walking and running; static postures, such as sitting and standing; and an aggregate of activities that combine dynamic activities, static postures, and postural transitions, such as sit-to-stand or stand-to-sit. Our identification results on aggregate data include both labeled and unlabeled activities. Our results show that the k-Nearest Neighbors algorithm consistently outperforms other classifiers. We also show that by extracting appropriate features and using appropriate classifiers, static and aggregate activities can be used for user identification. We posit that this work will serve as a resource and a benchmark for the selection and evaluation of classification algorithms for activity-based identification on smartphones.
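As a rough illustration of the pipeline this abstract describes, the sketch below extracts simple statistical features from windowed accelerometer data and identifies users with scikit-learn's k-Nearest Neighbors. The window length, feature set, and synthetic data are assumptions for illustration only, not the paper's exact setup.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def window_features(acc, window=128, step=64):
    """Extract simple per-window statistics from an (N, 3) accelerometer stream.

    Mean, standard deviation, and magnitude statistics are common choices;
    the paper's exact feature set may differ.
    """
    feats = []
    for start in range(0, len(acc) - window + 1, step):
        w = acc[start:start + window]
        mag = np.linalg.norm(w, axis=1)
        feats.append(np.concatenate([w.mean(0), w.std(0), [mag.mean(), mag.std()]]))
    return np.array(feats)

# Hypothetical data: accelerometer streams from several users (stand-ins for real signals).
rng = np.random.default_rng(0)
X, y = [], []
for user_id in range(5):
    acc = rng.normal(loc=user_id * 0.1, scale=1.0, size=(5000, 3))
    f = window_features(acc)
    X.append(f)
    y.extend([user_id] * len(f))
X, y = np.vstack(X), np.array(y)

# Train a k-NN identifier and report held-out identification accuracy.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("identification accuracy:", clf.score(X_te, y_te))
```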
- Award ID(s):
- 1527795
- PAR ID:
- 10036391
- Date Published:
- Journal Name:
- Biometrics Theory, Applications and Systems (BTAS), 2016 IEEE 8th International Conference on
- Page Range / eLocation ID:
- 1 to 8
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Human activity recognition (HAR) and, more broadly, recognition of activities of daily life using wearable devices have the potential to transform a number of applications, including mobile healthcare, smart homes, and fitness monitoring. Recent approaches for HAR use multiple sensors at various locations on the body to achieve higher accuracy for complex activities. While multiple sensors increase accuracy, they are also susceptible to reliability issues when one or more sensors are unable to provide data to the application due to sensor malfunction, user error, or energy limitations. Training multiple activity classifiers that each use a subset of sensors is not desirable, since it may lead to reduced accuracy for applications. To handle these limitations, we propose SensorGAN, a novel generative approach that recovers the missing data of sensors using data available from other sensors. The recovered data are then used to seamlessly classify activities. Experiments using three publicly available activity datasets show that with data missing from one sensor, the proposed approach achieves accuracy that is within 10% of the accuracy with no missing data. Moreover, implementation on a wearable device prototype shows that the proposed approach takes about 1.5 ms to recover data in the w-HAR dataset, which results in an energy consumption of 606 μJ. This low energy consumption makes SensorGAN suitable for recovering data in tinyML applications on energy-constrained devices.
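A simplified sketch of the recover-then-classify pattern described above: a model trained to predict a missing sensor from an available one fills in the gap before classification. A plain regressor stands in for the paper's generative (GAN-based) model, and the sensors, features, and labels are synthetic stand-ins.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Hypothetical setup: two body-worn sensors with 16 features each and correlated streams.
n = 2000
sensor_a = rng.normal(size=(n, 16))
sensor_b = 0.8 * sensor_a + 0.2 * rng.normal(size=(n, 16))   # stand-in for a correlated second sensor
labels = (sensor_a[:, 0] + sensor_b[:, 0] > 0).astype(int)    # stand-in activity labels

# 1) Train a recovery model that predicts the missing sensor from the available one.
#    (The paper uses a generative adversarial model; an MLP regressor is used here
#    only to illustrate the data flow.)
recover_b = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
recover_b.fit(sensor_a[:1500], sensor_b[:1500])

# 2) Train the activity classifier on complete data from both sensors.
clf = RandomForestClassifier(random_state=0).fit(
    np.hstack([sensor_a[:1500], sensor_b[:1500]]), labels[:1500])

# 3) At test time sensor B is missing: recover it, then classify seamlessly.
b_hat = recover_b.predict(sensor_a[1500:])
acc = clf.score(np.hstack([sensor_a[1500:], b_hat]), labels[1500:])
print("accuracy with one sensor recovered:", acc)
```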
-
A human lower-limb biomechanics and wearable sensors dataset during cyclic and non-cyclic activities
Abstract: Tasks of daily living are often sporadic, highly variable, and asymmetric. Analyzing these real-world non-cyclic activities is integral for expanding the applicability of exoskeletons, prostheses, wearable sensing, and activity classification to real life, and could provide new insights into human biomechanics. Yet, currently available biomechanics datasets focus on either highly consistent, continuous, and symmetric activities, such as walking and running, or only a single specific non-cyclic task. To capture a more holistic picture of lower-limb movements in everyday life, we collected data from 12 participants performing 20 non-cyclic activities (e.g. sit-to-stand, jumping, squatting, lunging, cutting) as well as 11 cyclic activities (e.g. walking, running) while kinematics (motion capture and IMUs), kinetics (force plates), and electromyography (EMG) were collected. This dataset provides normative biomechanics for a highly diverse range of activities and common tasks from a consistent set of participants and sensors.
-
The use of audio and video modalities for Human Activity Recognition (HAR) is common, given the richness of the data and the availability of pre-trained ML models built from a large corpus of labeled training data. However, audio and video sensors also lead to significant consumer privacy concerns. Researchers have thus explored alternative modalities that are less privacy-invasive, such as mmWave Doppler radars, IMUs, and motion sensors. However, the key limitation of these approaches is that most of them do not readily generalize across environments and require significant in-situ training data. Recent work has proposed cross-modality transfer learning approaches to alleviate the lack of trained labeled data with some success. In this paper, we generalize this concept to create a novel system called VAX (Video/Audio to 'X'), where training labels acquired from existing Video/Audio ML models are used to train ML models for a wide range of 'X' privacy-sensitive sensors. Notably, in VAX, once the ML models for the privacy-sensitive sensors are trained, with little to no user involvement, the Audio/Video sensors can be removed altogether to better protect the user's privacy. We built and deployed VAX in ten participants' homes while they performed 17 common activities of daily living. Our evaluation results show that after training, VAX can use its onboard camera and microphone to detect approximately 15 out of 17 activities with an average accuracy of 90%. For these activities that can be detected using a camera and a microphone, VAX trains a per-home model for the privacy-preserving sensors. These models (average accuracy = 84%) require no in-situ user input. In addition, when VAX is augmented with just one labeled instance for the activities not detected by the VAX A/V pipeline (~2 out of 17), it can detect all 17 activities with an average accuracy of 84%. Our results show that VAX is significantly better than a baseline supervised-learning approach of using one labeled instance per activity in each home (average accuracy of 79%), since VAX reduces the user burden of providing activity labels by 8x (~2 labels vs. 17 labels).
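The cross-modality labeling idea can be sketched as follows: predictions from a pre-trained audio/video model serve as pseudo-labels for training a model on the privacy-sensitive 'X' sensor, after which the A/V sensors can be dropped. The models, features, and data below are synthetic stand-ins, not VAX's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Hypothetical synchronized recordings: audio/video features and privacy-preserving
# "X" sensor features (e.g., radar or IMU) for the same time windows.
n = 3000
av_feats = rng.normal(size=(n, 32))
x_feats = rng.normal(size=(n, 12)) + av_feats[:, :12] * 0.5   # loosely correlated "X" sensor
true_labels = (av_feats[:, 0] > 0).astype(int)                 # stand-in activity labels

# Stand-in for an existing pre-trained A/V activity model.
av_model = LogisticRegression().fit(av_feats[:500], true_labels[:500])

# 1) During the training phase, the A/V model's predictions serve as pseudo-labels
#    for the privacy-preserving sensor.
pseudo_labels = av_model.predict(av_feats[500:2500])
x_model = GradientBoostingClassifier(random_state=0).fit(x_feats[500:2500], pseudo_labels)

# 2) After training, the A/V sensors can be removed; the X-sensor model runs alone.
print("X-sensor model accuracy:", x_model.score(x_feats[2500:], true_labels[2500:]))
```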
-
The ubiquity of smart and wearable devices with integrated acoustic sensors in modern life presents tremendous opportunities for recognizing human activities in our living spaces through ML-driven applications. However, their adoption is often hindered by the requirement of large amounts of labeled data during the model training phase. Integrating contextual metadata can alleviate this, since such metadata is often less dynamic (e.g., both dishwashing and cooking happen in the kitchen context) and can often be annotated less tediously (e.g., a sensor permanently placed in the kitchen). However, most models lack good provisions for integrating such metadata; often, the additional metadata is leveraged through multi-task learning, with sub-optimal outcomes. On the other hand, reliably recognizing distinct in-home activities with similar acoustic patterns (e.g. chopping, hammering, knife sharpening) poses another set of challenges. To mitigate these challenges, we first show in a preliminary study that room acoustic properties such as reverberation, room materials, and background noise leave a discernible fingerprint in audio samples that can identify the room context, and we propose AcouDL as a unified framework that exploits room-context information to improve activity recognition performance. Our self-supervised approach first learns context features from a large amount of unlabeled data using a contrastive learning mechanism, and then incorporates these features into the activity classification pipeline via a novel attention mechanism. Extensive evaluation of AcouDL on three datasets containing a wide range of activities shows that this efficient feature-fusion mechanism incorporates metadata that improves recognition of activities under challenging classification scenarios, with a 0.7-3.5% macro F1 score improvement over the baselines.
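Below is a generic contrastive (InfoNCE-style) objective of the kind used to learn features from unlabeled audio by pulling two views of the same clip together and pushing other clips apart. It is not necessarily the exact loss or architecture used by AcouDL, and the embeddings are synthetic.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE) loss between two views of the same audio windows.

    Row i of z1 and row i of z2 form a positive pair (two augmentations of the
    same clip); all other rows act as negatives. This is a generic contrastive
    objective, not necessarily the exact formulation used by AcouDL.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                 # (N, N) cosine-similarity logits
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # positive pairs lie on the diagonal

# Toy check with synthetic embeddings: matched pairs vs. mismatched (shuffled) pairs.
rng = np.random.default_rng(3)
base = rng.normal(size=(8, 16))
aligned = info_nce(base, base + 0.05 * rng.normal(size=(8, 16)))
shuffled = info_nce(base, base[rng.permutation(len(base))])
print(f"aligned-pair loss: {aligned:.3f}, shuffled-pair loss: {shuffled:.3f}")
```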