Title: IoT-Enabled Smart Mobility Devices for Aging and Rehabilitation
Many elderly individuals have physical limitations that require the use of a walker to maintain stability while walking. Many of these individuals also have age-related visual impairments that make it difficult to avoid obstacles in unfamiliar environments. To help such users navigate their environment more quickly, safely, and easily, we propose a smart walker augmented with an array of ultrasonic sensors and a camera. Data from the ultrasonic sensors and the camera are processed by echolocation-based obstacle detection algorithms and deep-neural-network-based object detection algorithms, respectively. The system alerts the user to obstacles and guides her on a safe path through audio and haptic signals.
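As a rough illustration of the sense-and-alert loop described above, the hypothetical Python sketch below stubs out the hardware: read_ultrasonic, detect_objects, and alert are placeholder names, not the paper's actual interfaces, and the sensor reads are simulated so the sketch runs anywhere.

```python
# A minimal sketch of the walker's sense-and-alert loop under assumed
# interfaces; all helpers are placeholders, not the paper's code.
import random
import time

NUM_SONARS = 3           # assumed layout: left, center, right
ALERT_DISTANCE_M = 1.0   # assumed proximity threshold in meters

def read_ultrasonic(i):
    """Placeholder: echo-location distance (m) reported by sensor i."""
    return random.uniform(0.2, 3.0)

def detect_objects(frame):
    """Placeholder: a DNN object detector would return labeled boxes."""
    return [("chair", 0.9)] if random.random() < 0.1 else []

def alert(direction, label=None):
    """Placeholder for the audio/haptic feedback channel."""
    print(f"obstacle {direction}" + (f": {label}" if label else ""))

def control_loop(steps=50):
    directions = ["left", "ahead", "right"]
    for _ in range(steps):
        distances = [read_ultrasonic(i) for i in range(NUM_SONARS)]
        labels = detect_objects(frame=None)   # camera frame omitted here
        for dist, name in zip(distances, directions):
            if dist < ALERT_DISTANCE_M:
                alert(name, labels[0][0] if labels else None)
        time.sleep(0.1)                       # ~10 Hz sensing rate

if __name__ == "__main__":
    control_loop()
```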
Award ID(s):
1852002
NSF-PAR ID:
10225521
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE International Conference on Communications (ICC) 2020
Page Range / eLocation ID:
1 to 6
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Breathing biomarkers, such as breathing rate, fractional inspiratory time, and inhalation-exhalation ratio, are vital for monitoring the user's health and well-being. Accurate estimation of such biomarkers requires detecting the breathing phases, i.e., inhalation and exhalation. However, traditional breathing phase monitoring relies on uncomfortable equipment such as chest bands. Smartphone acoustic sensors have shown promising results for passive breathing monitoring during sleep or guided breathing, but detecting breathing phases from acoustic data is challenging for several reasons. One major obstacle is the difficulty of annotating breathing sounds, owing to inaudible segments in regular breathing and background noise. This paper assesses the potential of smartphone acoustic sensors for passive, unguided breathing phase monitoring in a natural environment. We address the annotation challenge with a novel variant of teacher-student training that transfers knowledge from an inertial sensor to an acoustic sensor, fusing signal processing with deep learning to eliminate the need for manual annotation of breathing sounds. We train and evaluate our model on breathing data collected from 131 subjects, including both healthy individuals and respiratory patients. Experimental results show that our model detects breathing phases with 77.33% accuracy from acoustic data. We further present an example use case: estimating biomarkers from the detected breathing phases and then using those biomarkers to identify pulmonary patients. From the detected phases, we estimate fractional inspiratory time with 92.08% accuracy, the inhalation-exhalation ratio with 86.76% accuracy, and the breathing rate with 91.74% accuracy. Moreover, we can distinguish respiratory patients from healthy individuals with up to 76% accuracy. This paper is the first to show the feasibility of detecting regular breathing phases toward passively monitoring respiratory health and well-being using acoustic data captured by a smartphone.
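    The teacher-student transfer can be sketched in a few lines of PyTorch. This is a minimal, hypothetical illustration (the PhaseNet name, architecture, and feature dimensions are assumptions, not the paper's model): a teacher trained on inertial features produces inhale/exhale pseudo-labels for time-aligned windows, and the acoustic student is trained against those labels.

```python
# Teacher-student sketch under assumed architecture and dimensions.
import torch
import torch.nn as nn

class PhaseNet(nn.Module):
    """Tiny classifier: feature window -> {inhale, exhale} logits."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, x):
        return self.net(x)

teacher = PhaseNet(in_dim=6)    # would be pre-trained on inertial data
student = PhaseNet(in_dim=40)   # e.g., acoustic spectrogram features
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Time-aligned, unlabeled windows from both sensors (dummy data here).
inertial = torch.randn(256, 6)
acoustic = torch.randn(256, 40)

with torch.no_grad():                    # teacher's pseudo-labels
    pseudo = teacher(inertial).argmax(dim=1)

for _ in range(10):                      # train the acoustic student
    opt.zero_grad()
    loss_fn(student(acoustic), pseudo).backward()
    opt.step()
```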
  2. Billard, A.; Asfour, T.; Khatib, O. (Eds.)
    Underwater navigation presents several challenges: unstructured unknown environments, the lack of reliable localization systems (e.g., GPS), and poor visibility. Furthermore, good-quality obstacle detection sensors for underwater robots are scarce and costly, and many sensors, such as RGB-D cameras and LiDAR, only work in air. To enable reliable mapless underwater navigation despite these challenges, we propose a low-cost end-to-end navigation system, based on a monocular camera and a fixed single-beam echo-sounder, that efficiently navigates an underwater robot to waypoints while avoiding nearby obstacles. Our method is based on Proximal Policy Optimization (PPO): the policy takes as input the current relative goal information, estimated depth images, echo-sounder readings, and previously executed actions, and outputs 3D robot actions on a normalized scale. End-to-end training was done in simulation, where we adopted domain randomization (varying underwater conditions and visibility) to learn a policy robust to noise and changes in visibility conditions. Experiments in simulation and the real world demonstrate that the proposed method successfully and resiliently navigates a low-cost underwater robot in unknown underwater environments. The implementation is publicly available at https://github.com/dartmouthrobotics/deeprl-uw-robot-navigation.
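    The observation-to-action mapping can be illustrated with a small PyTorch module. All tensor shapes, layer sizes, and the UWNavPolicy name here are assumptions for illustration, not the released architecture (see the linked repository); PPO training itself is omitted.

```python
# Illustrative policy network: (depth image, goal, echo, prev action) -> 3D action.
import torch
import torch.nn as nn

class UWNavPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.depth_enc = nn.Sequential(          # encodes estimated depth image
            nn.Conv2d(1, 8, 5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(), nn.Flatten())
        with torch.no_grad():                    # infer flattened feature size
            n = self.depth_enc(torch.zeros(1, 1, 64, 64)).shape[1]
        # goal: (distance, heading); echo: 1 range; previous action: 3D
        self.head = nn.Sequential(
            nn.Linear(n + 2 + 1 + 3, 64), nn.ReLU(),
            nn.Linear(64, 3), nn.Tanh())         # 3D action, normalized scale

    def forward(self, depth, goal, echo, prev_action):
        x = torch.cat([self.depth_enc(depth), goal, echo, prev_action], dim=1)
        return self.head(x)

policy = UWNavPolicy()
action = policy(torch.zeros(1, 1, 64, 64), torch.zeros(1, 2),
                torch.zeros(1, 1), torch.zeros(1, 3))
print(action.shape)  # torch.Size([1, 3])
```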
  3. The objective of this study is to validate reduced graphene oxide (RGO)-based volatile organic compound (VOC) sensors, assembled by simple and low-cost manufacturing, for the detection of disease-related VOCs in human breath using machine learning (ML) algorithms. RGO films were functionalized with four different metalloporphyrins to assemble cross-sensitive chemiresistive sensors with distinct sensing properties. This work demonstrates how the choice of ML algorithm affects the discrimination capabilities of RGO-based VOC sensors. In addition, an ML-based disease classifier was derived to discriminate healthy from unhealthy individuals based on breath sample data. The results show that our ML models could predict the presence of disease-related VOCs of interest with a minimum accuracy of 91.7% and a minimum F1-score of 83.3%, and could discriminate chronic kidney disease breath with high accuracy (91.7%).
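    The classification step can be sketched schematically; the snippet below uses synthetic data standing in for real sensor responses, and the four-feature layout and random-forest choice are assumptions, not necessarily the algorithms compared in the study.

```python
# Schematic breath classifier on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))      # responses of 4 cross-sensitive sensors
y = rng.integers(0, 2, size=120)   # 0 = healthy breath, 1 = disease-related

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # ~chance on random data
```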
  4. We investigate algorithmic control of a large swarm of mobile particles (such as robots, sensors, or building material) that move in a 2D workspace using a global input signal (such as gravity or a magnetic field). Upon activation of the field, each particle moves maximally in the same direction until forward progress is blocked by a stationary obstacle or another stationary particle. In an open workspace, this system model is of limited use because it has only two controllable degrees of freedom: all particles receive the same inputs and move uniformly. We show that adding a maze of obstacles to the environment can make the system drastically more complex, but also more useful. We provide results for a wide range of questions, which can be subdivided into external algorithmic problems, in which particle configurations serve as input for computations performed elsewhere, and internal logic problems, in which the particle configurations themselves carry out computations. For external algorithms, we give both negative and positive results. If we are given a set of stationary obstacles, we prove that it is NP-hard to decide whether a given initial configuration of unit-sized particles can be transformed into a desired target configuration. Moreover, we show that finding a control sequence of minimum length is PSPACE-complete. We also work on the inverse problem, providing constructive algorithms to design workspaces that efficiently implement arbitrary permutations between different configurations. For internal logic, we investigate how arbitrary computations can be implemented. We demonstrate how to encode dual-rail logic to build a universal logic gate that concurrently evaluates AND, NAND, NOR, and OR operations. Using many of these gates and appropriate interconnects, we can evaluate any logical expression. However, we establish that simulating the full range of complex interactions present in arbitrary digital circuits encounters a fundamental difficulty: a FAN-OUT gate cannot be generated. We resolve this missing component with the help of 2 × 1 particles, which can create FAN-OUT gates that produce multiple copies of the inputs. Using these gates, we provide rules for replicating arbitrary digital circuits.
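    The particle motion model is easy to simulate. The toy Python sketch below (the grid encoding and function name are my own, not the paper's) applies one global command, sliding every particle in the same direction until each is blocked by a wall or another particle.

```python
# Toy simulation of the global-signal motion model.
def tilt(grid, dx, dy):
    """grid: list of strings; '#' = obstacle, 'o' = particle, '.' = free."""
    cells = [list(row) for row in grid]
    h, w = len(cells), len(cells[0])
    moved = True
    while moved:                         # repeat until nothing can move
        moved = False
        for y in range(h):
            for x in range(w):
                nx, ny = x + dx, y + dy
                if (cells[y][x] == "o" and 0 <= nx < w and 0 <= ny < h
                        and cells[ny][nx] == "."):
                    cells[y][x], cells[ny][nx] = ".", "o"
                    moved = True
    return ["".join(row) for row in cells]

maze = ["#####",
        "#o..#",
        "#.#o#",
        "#...#",
        "#####"]
for row in tilt(maze, 1, 0):             # global command: move right
    print(row)
```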
  5. To create safer and less congested traffic operating environments, researchers at the University of Tennessee at Chattanooga (UTC) and the Georgia Tech Research Institute (GTRI) have fostered a vision of cooperative sensing and cooperative mobility. This vision is realized in a mobile application that combines visual data extracted from cameras on roadway infrastructure with a user's coordinates from a GPS-enabled device to create a visual representation of the driving or walking environment surrounding the application user. By merging computer vision, object detection, and mono-vision image depth estimation, the application gathers absolute Global Positioning System (GPS) coordinates from a user's mobile device, combines them with relative coordinates determined by the infrastructure cameras, and thereby determines the positions of vehicles and pedestrians without knowing their absolute GPS coordinates. The joined data is then used by an iOS mobile application to display a map showing the locations of other entities, such as vehicles, pedestrians, and obstacles, creating a real-time visual representation of the surrounding area before it enters the user's field of view. Furthermore, a routing feature displays recommended routes derived from rerouting algorithms applied to a traffic scenario analyzed in a simulated environment. By showing where nearby entities are concentrated and recommending alternative routes, the application helps users make more informed and aware traffic decisions, supporting a higher level of overall safety on our roadways. This vision would not be possible without the high-speed gigabit network infrastructure installed in Chattanooga, Tennessee, and UTC's wireless testbed, which was used to test many functions of this application. The network is required to reduce the latency of the massive amount of data generated by the infrastructure and vehicles that use the testbed; returning results from this data in real time is a critical component.
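    The coordinate fusion at the heart of the app can be sketched as a small-offset conversion: an entity detected by an infrastructure camera is placed on the map by adding its camera-relative offset (meters east/north) to an absolute GPS fix. The helper below is hypothetical; names and numbers are illustrative, not from the application's code.

```python
# Hypothetical sketch of relative-to-absolute coordinate fusion.
import math

EARTH_RADIUS_M = 6_371_000.0

def offset_to_gps(lat, lon, east_m, north_m):
    """Small-offset approximation; adequate at city-block scale."""
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    dlon = math.degrees(east_m / (EARTH_RADIUS_M * math.cos(math.radians(lat))))
    return lat + dlat, lon + dlon

# A fix in Chattanooga plus a pedestrian 12 m east and 30 m north of
# the camera-aligned origin (example values only).
print(offset_to_gps(35.0456, -85.3097, 12.0, 30.0))
```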