Title: Good-Eye: A Combined Computer-Vision and Physiological-Sensor based Device for Full-Proof Prediction and Detection of Fall of Adults
It is imperative to detect falls in the elderly as accurately as possible in order to mitigate the serious injuries such accidents cause. To this end, we propose the Good-Eye System, an Internet of Things (IoT)-enabled edge-level device that detects orientation changes through a camera and monitors physiological signal parameters. If an observed change exceeds the set threshold, the user is notified of a predicted or detected fall using LED lights. The Good-Eye System pairs a remote wall-mounted camera, which monitors the subject continuously while the person is in the room, with a camera attached to a wearable to increase the accuracy of the model. The observed accuracy of the Good-Eye System as a whole is approximately 95%.
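As a rough illustration of the threshold logic described above, the following Python sketch fuses a camera-derived orientation change with a physiological reading and drives an LED-style notification. The threshold values, the Reading fields, and the notify_led interface are hypothetical assumptions, not taken from the paper:

```python
# Illustrative sketch of threshold-based fall prediction/detection fusion.
# Thresholds, sensor fields, and the LED interface are hypothetical.
from dataclasses import dataclass

ORIENTATION_THRESHOLD_DEG = 45.0   # assumed camera-observed tilt threshold
HEART_RATE_THRESHOLD_BPM = 120.0   # assumed physiological-signal threshold

@dataclass
class Reading:
    orientation_change_deg: float  # change in body orientation from camera
    heart_rate_bpm: float          # from the wearable's physiological sensor

def classify(reading: Reading) -> str:
    """Return 'fall_detected', 'fall_predicted', or 'normal'."""
    camera_alarm = reading.orientation_change_deg > ORIENTATION_THRESHOLD_DEG
    physio_alarm = reading.heart_rate_bpm > HEART_RATE_THRESHOLD_BPM
    if camera_alarm and physio_alarm:
        return "fall_detected"     # both modalities exceed their thresholds
    if camera_alarm or physio_alarm:
        return "fall_predicted"    # only one modality exceeds its threshold
    return "normal"

def notify_led(state: str) -> None:
    # Stand-in for driving the device's LED indicators.
    print(f"LED -> {state}")

notify_led(classify(Reading(orientation_change_deg=60.0, heart_rate_bpm=130.0)))
```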
Award ID(s):
1924112
PAR ID:
10158167
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
Proceedings of the 2nd IFIP International Internet of Things (IoT) Conference (IFIP-IoT)
Page Range / eLocation ID:
273-288
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Cameras are increasingly being deployed in cities, enterprises, and roads worldwide to enable many applications in public safety, intelligent transportation, retail, healthcare, and manufacturing. Often, after the initial deployment of the cameras, the environmental conditions and the scenes around these cameras change, and our experiments show that these changes can adversely impact the accuracy of insights from video analytics. This is because the camera parameter settings, though optimal at deployment time, are no longer the best settings for good-quality video capture as the environmental conditions and scenes around a camera change during operation, and capturing poor-quality video degrades the accuracy of analytics. To mitigate this loss in accuracy, we propose APT, a novel reinforcement-learning based system that dynamically and remotely (over 5G networks) tunes the camera parameters to ensure high-quality video capture, thereby restoring the accuracy of insights when environmental conditions or scene content change. APT uses reinforcement learning with no-reference perceptual quality estimation as the reward function. We conducted extensive real-world experiments in which we deployed two cameras side by side overlooking an enterprise parking lot (one camera keeps the manufacturer-suggested default settings, while the other is dynamically tuned by APT during operation). Our experiments demonstrate that, thanks to APT's dynamic tuning, the analytics insights are consistently better at all times of the day: the accuracy of an object-detection video analytics application improved on average by ∼42%. Since our reward function is independent of any analytics task, APT can be readily used for different video analytics tasks.
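A minimal sketch of the control loop this abstract describes, with an epsilon-greedy agent tuning camera parameters against a no-reference quality reward; the parameter set, quality estimator, and camera interface below are placeholders, not APT's actual implementation:

```python
# Sketch of an RL tuning loop: adjust camera parameters, reward the agent
# with a no-reference perceptual quality score. All interfaces are dummies.
import random

PARAMS = {"brightness": 50, "contrast": 50, "sharpness": 50}   # assumed knobs
ACTIONS = [(p, d) for p in PARAMS for d in (-1, +1)]

def capture_frame(settings):
    # Stand-in for grabbing a frame from the camera at the given settings.
    return settings

def no_reference_quality(frame) -> float:
    # Stand-in for a learned no-reference perceptual quality estimator;
    # returns a dummy score here purely for illustration.
    return random.random()

def tune(settings, steps=100, epsilon=0.2, lr=0.1):
    q = {a: 0.0 for a in ACTIONS}            # per-action value estimates
    for _ in range(steps):
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(q, key=q.get))         # epsilon-greedy action choice
        param, delta = a
        settings[param] += delta             # apply the parameter tweak
        reward = no_reference_quality(capture_frame(settings))
        q[a] += lr * (reward - q[a])         # incremental value update
    return settings

print(tune(dict(PARAMS)))
```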
  2. Eye motion tracking plays a vital role in many applications such as human-computer interaction (HCI), virtual reality, and disease detection. Camera-based eye tracking, albeit accurate and easy to use, may raise privacy concerns and can be unreliable in poor lighting conditions. In this paper, we present RadEye, a radar system capable of detecting fine-grained human eye motions from a distance. RadEye is realized through an integrated hardware and software design. It customizes a sub-6 GHz FMCW radar to detect millimeter-level eye movement while extending its detection range by operating at low frequency. It further employs a deep neural network (DNN) to refine the detection accuracy through camera-guided supervisory training. We have built a prototype of RadEye. Extensive experimental results show that it achieves 90% accuracy when detecting human eye rotation directions (up, down, left, and right) in various scenarios.
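The camera-guided supervisory training mentioned above could, in outline, look like the following sketch, where a camera-based tracker labels synchronized radar snippets for DNN training; the data shapes and the labeling function are illustrative assumptions:

```python
# Sketch of camera-guided supervision: a camera-based eye tracker provides
# direction labels for synchronized radar snippets at training time only.
import numpy as np

DIRECTIONS = ["up", "down", "left", "right"]

def camera_eye_direction(video_clip) -> str:
    # Stand-in for a camera-based eye tracker, used only during training;
    # here it returns a random label purely for illustration.
    return DIRECTIONS[np.random.randint(len(DIRECTIONS))]

def build_training_set(radar_clips, video_clips):
    """Pair each radar snippet with a camera-derived direction label."""
    features, labels = [], []
    for radar, video in zip(radar_clips, video_clips):
        features.append(np.asarray(radar).ravel())    # naive featurization
        labels.append(DIRECTIONS.index(camera_eye_direction(video)))
    return np.stack(features), np.array(labels)

# Demo with synthetic range-Doppler snippets of shape (16, 32). At inference
# time the camera is no longer needed: the trained DNN maps radar features
# directly to one of the four eye-rotation directions.
X, y = build_training_set(np.random.rand(8, 16, 32), [None] * 8)
print(X.shape, y.shape)   # (8, 512) (8,)
```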
  3. We present a first-of-its-kind ultra-compact intelligent camera system, dubbed i-FlatCam, comprising a lensless camera with a computational (Comp.) chip. It highlights (1) a predict-then-focus eye tracking pipeline for boosted efficiency without compromising accuracy, (2) a unified compression scheme for single-chip processing and improved frames per second (FPS), and (3) a dedicated intra-channel reuse design for depth-wise convolutional layers (DW-CONV) to increase utilization. i-FlatCam demonstrates the first eye tracking pipeline with a lensless camera and achieves 3.16 degrees of accuracy, 253 FPS, 91.49 µJ/frame, and a 6.7 mm × 8.9 mm × 1.2 mm camera form factor, paving the way for next-generation Augmented Reality (AR) and Virtual Reality (VR) devices.
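A simplified sketch of a predict-then-focus pipeline of the kind i-FlatCam describes: a cheap prediction of the eye's location restricts the heavy gaze model to a small crop of each frame. The crop size and the gaze_model interface are assumptions for illustration:

```python
# Sketch of predict-then-focus: predict the region of interest from the
# previous eye center, then run the gaze model only on that crop.
import numpy as np

def predict_roi(prev_center, frame_shape, crop=64):
    """Predict the region of interest around the previous eye center."""
    x, y = prev_center
    h, w = frame_shape
    left = int(min(max(x - crop // 2, 0), w - crop))
    top = int(min(max(y - crop // 2, 0), h - crop))
    return left, top, crop

def track(frames, gaze_model, init_center):
    """Run the heavy gaze model only on the predicted crop of each frame."""
    center = init_center
    for frame in frames:
        left, top, crop = predict_roi(center, frame.shape[:2])
        patch = frame[top:top + crop, left:left + crop]  # focus on the ROI
        gaze, (dx, dy) = gaze_model(patch)   # assumed model interface
        center = (left + dx, top + dy)       # update the next prediction
        yield gaze

# Demo with a dummy model that always reports the eye at the crop center.
dummy = lambda patch: ((0.0, 0.0), (32, 32))
frames = np.zeros((5, 240, 320), dtype=np.uint8)
print(list(track(frames, dummy, init_center=(160, 120))))
```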
  4. This paper presents a novel technique to achieve autofocusing for a three-dimensional (3D) profilometry system with dual projectors. The proposed system uses a camera fitted with an electronically focus-tunable lens (ETL) that allows dynamic adjustment of the camera's focal plane so that the camera can focus on the object. The camera captures fringe patterns projected by each projector to establish corresponding points between the two projectors, and the two pre-calibrated projectors form triangulation for 3D reconstruction. We pre-calibrate the relationship between depth and the drive current used for each focal plane, perform a 3D shape measurement at an unknown focus level, and calculate the desired current value based on the initial 3D result. We developed a prototype system that can automatically focus on an object positioned between 450 mm and 850 mm.
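The depth-to-current autofocus step lends itself to a short sketch: fit the pre-calibrated (depth, ETL current) pairs, then compute the drive current for the depth estimated by the initial measurement. The calibration numbers below are invented for illustration:

```python
# Sketch of the depth-to-lens-current lookup used for autofocusing.
# The calibration table values are made up; a real system would measure them.
import numpy as np

# Hypothetical calibration: focal-plane depth (mm) vs. ETL drive current (mA).
depths_mm = np.array([450, 550, 650, 750, 850])
currents_ma = np.array([120.0, 100.0, 82.0, 66.0, 52.0])

# Fit a low-order polynomial to the calibration pairs.
coeffs = np.polyfit(depths_mm, currents_ma, deg=2)

def current_for_depth(depth_mm: float) -> float:
    """Return the ETL drive current that places the focal plane at depth_mm."""
    return float(np.polyval(coeffs, depth_mm))

# After an initial (possibly defocused) 3D measurement estimates the object
# depth, set the lens current accordingly and re-measure in focus.
print(current_for_depth(600.0))
```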
  5. In modern industrial manufacturing processes, robotic manipulators are routinely used in assembly, packaging, and material-handling operations. During production, changing end-of-arm tooling is frequently necessary for process flexibility and reuse of robotic resources. In conventional operation, a tool changer is sometimes employed to load and unload end-effectors; however, the robot must be manually taught by operators, via a teach pendant, to locate the tool changers. During tool-change teaching, the operator takes considerable effort and time to align the master and tool sides of the coupler by adjusting the motion speed of the robotic arm and observing the alignment from different viewpoints. In this paper, a custom robotic system, the NeXus, was programmed to locate and change tools automatically via an RGB-D camera. The NeXus was configured as a multi-robot system for multiple tasks, including assembly, bonding, and 3D printing of sensor arrays, solar cells, and microrobot prototypes. Thus, different tools are employed by an industrial robotic arm to position grippers, printers, and other types of end-effectors in the workspace. To improve the precision and cycle time of the robotic tool change, we mounted an eye-in-hand RGB-D camera and employed visual servoing to automate the tool-change process. We then measured the teaching time for tool location using this system and compared the cycle time with those of six human operators in manual mode. We concluded that the tool location time in automated mode was, on average, more than two times lower than that of expert human operators.
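A position-based visual-servoing loop of the kind used for the automated tool change might be sketched as follows; the pose estimator and the camera/robot interfaces are placeholders, not the NeXus API:

```python
# Sketch of position-based visual servoing for tool-change alignment:
# estimate the remaining offset to the tool changer from the eye-in-hand
# RGB-D view, then command a proportional correction until aligned.
import numpy as np

def estimate_changer_pose(rgbd_frame) -> np.ndarray:
    # Stand-in for detecting the tool changer in the RGB-D frame and
    # returning the remaining (dx, dy, dz) offset in millimetres.
    return np.zeros(3)   # dummy value: pretend we are already aligned

def servo_to_changer(camera, robot, gain=0.5, tol_mm=0.5, max_iters=200):
    """Iteratively reduce the pose error until the coupler halves align."""
    for _ in range(max_iters):
        error = estimate_changer_pose(camera.grab())
        if np.linalg.norm(error) < tol_mm:
            return True                      # aligned: engage the coupler
        robot.move_relative(gain * error)    # proportional correction step
    return False
```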