Abstract: New digital technologies can help create equitable educational outcomes. We used bibliometric methods, a powerful tool for analyzing large bibliographic datasets, with open-source software to map the computer-supported collaborative learning literature. Applying a diversity, equity, and inclusion lens, we considered strengths and weaknesses of this method and analyzed the resulting literature map. We offer recommendations to researchers using similar approaches and re-envision the transformational potential of bibliometric analysis.
Enhancing Digital Twins with Human Movement Data: A Comparative Study of Lidar-Based Tracking Methods
Digital twins, used to represent dynamic environments, require accurate tracking of human movement to enhance their real-world application. This paper contributes to the field by systematically evaluating and comparing pre-existing tracking methods to identify strengths, weaknesses, and practical applications within digital twin frameworks. The purpose of this study is to assess the efficacy of existing human movement tracking techniques for digital twins in real-world environments, with the goal of improving spatial analysis and interaction within these virtual models. We compare three approaches using indoor-mounted lidar sensors: (1) a frame-by-frame deep learning model based on convolutional neural networks (CNNs), (2) custom algorithms developed using OpenCV, and (3) the off-the-shelf lidar perception software package Percept version 1.6.3. Of these, the deep learning method performed best (F1 = 0.88), followed by Percept (F1 = 0.61), and finally the custom algorithms using OpenCV (F1 = 0.58). Each method had particular strengths and weaknesses, with OpenCV-based approaches that use frame comparison vulnerable to signal instability that manifests as “flickering” in the dataset. Subsequent analysis of the spatial distribution of error revealed that both the custom algorithms and Percept took longer to acquire an identification, resulting in increased error near doorways. The Percept software excelled in scenarios involving stationary individuals. These findings highlight the importance of selecting appropriate tracking methods for specific use cases. Future work will focus on model optimization, alternative data logging techniques, and innovative approaches to mitigating computational challenges, paving the way for more sophisticated and accessible spatial analysis tools. Integrating complementary sensor types and strategies, such as radar, audio levels, indoor positioning systems (IPSs), and Wi-Fi data, could further improve detection accuracy and validation while maintaining privacy.
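To make the frame-comparison idea concrete, the following is a minimal sketch, not the paper's implementation, of differencing successive lidar frames with OpenCV. It assumes each lidar scan has already been rasterized into an 8-bit 2D occupancy image; the morphological opening step shows one common way to suppress the single-frame “flickering” noise described in the abstract. All function and parameter names are illustrative.

```python
# Minimal sketch of frame-comparison movement detection, assuming each lidar
# scan is rasterized to an 8-bit grayscale occupancy image. Names and
# thresholds are illustrative assumptions, not the paper's codebase.
import cv2
import numpy as np

def detect_movers(prev_frame: np.ndarray, curr_frame: np.ndarray,
                  diff_thresh: int = 25, min_area: float = 50.0):
    """Return bounding boxes of regions that changed between two frames."""
    # Absolute difference highlights occupancy changes (i.e., movement).
    diff = cv2.absdiff(prev_frame, curr_frame)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # Morphological opening suppresses isolated "flicker" pixels that would
    # otherwise produce spurious detections.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```

Detections produced this way can then be matched against annotated ground truth to compute precision, recall, and F1, the metric reported above.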
- Award ID(s): 2149229
- PAR ID: 10543681
- Publisher / Repository: MDPI
- Date Published:
- Journal Name: Remote Sensing
- Volume: 16
- Issue: 18
- ISSN: 2072-4292
- Page Range / eLocation ID: 3453
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
In this paper, we propose a method for tracking a microrobot’s three-dimensional position using microscope machine vision. The microrobot, the Solid Articulated Four Axis Microrobot (sAFAM), is being developed to enable the assembly and manipulation of micro- and nanoscale objects. In the future, arrays of sAFAMs working together can be integrated into a wafer-scale nanofactory. Prior to use, microrobots in this microfactory need calibration, which can be achieved using the proposed measurement technique. Our approach enables faster and more accurate mapping of microrobot translations and rotations, and automation allows datasets orders of magnitude larger to be created. Camera feeds from a custom microscopy system are fed into a data processing pipeline that tracks the microrobot in real time. This machine vision method was implemented with the help of OpenCV and Python and can be used to track the movement of other micrometer-sized features (an illustrative tracking sketch appears after this list). Additionally, a script was created to enable automated repeatability tests for each of the six trajectories traversable by the robot. A more precise microrobot workable area was also determined thanks to the significantly larger datasets enabled by the combined automation and machine vision approaches. Keywords: microrobotics, machine vision, nano/microscale manufacturing.
-
Abstract. Purpose: Specialized robotic and surgical tools are increasing the complexity of operating rooms (ORs), requiring elaborate preparation especially when techniques or devices are to be used for the first time. Spatial planning can improve efficiency and identify procedural obstacles ahead of time, but real ORs offer little availability to optimize space utilization. Methods for creating reconstructions of physical setups, i.e., digital twins, are needed to enable immersive spatial planning of such complex environments in virtual reality. Methods: We present a neural rendering-based method to create immersive digital twins of complex medical environments and devices from casual video capture that enables spatial planning of surgical scenarios. To evaluate our approach we recreate two operating rooms and ten objects through neural reconstruction, then conduct a user study with 21 graduate students carrying out planning tasks in the resulting virtual environment. We analyze task load, presence, perceived utility, and exploration and interaction behavior compared to low-visual-complexity versions of the same environments. Results: Results show significantly increased perceived utility and presence using the neural reconstruction-based environments, combined with higher perceived workload and exploratory behavior. There is no significant difference in interactivity. Conclusion: We explore the feasibility of using modern reconstruction techniques to create digital twins of complex medical environments and objects. Without requiring expert knowledge or specialized hardware, users can create, explore, and interact with objects in virtual environments. Results indicate benefits such as high perceived utility while remaining technically approachable, which suggests promise of this approach for spatial planning and beyond.
-
Blind and visually impaired individuals often face challenges in wayfinding in unfamiliar environments. Thus, an accessible indoor positioning and navigation system that safely and accurately positions and guides such individuals would be welcome. In indoor positioning, both Bluetooth Low Energy (BLE) beacons and Google Tango have individual strengths, but each also has weaknesses that can affect the overall usability of a system that relies solely on either component. We propose a hybrid positioning and navigation system that combines BLE beacons and Google Tango in order to tap into their strengths while minimizing their individual weaknesses. In this paper, we discuss the approach and implementation of a BLE- and Tango-based hybrid system (a simplified fusion sketch appears after this list). The results of pilot tests on the individual components and a human subject test on the full BLE and hybrid systems are also presented. In addition, we have explored the use of vibrotactile devices to provide additional information to users about their surroundings.
-
This paper presents SVIn2, a novel tightly-coupled keyframe-based Simultaneous Localization and Mapping (SLAM) system, which fuses scanning profiling sonar, visual, inertial, and water-pressure information in a non-linear optimization framework for small- and large-scale challenging underwater environments. The developed real-time system features robust initialization, loop-closing, and relocalization capabilities, which make the system reliable in the presence of haze, blurriness, low light, and lighting variations typically observed in underwater scenarios. Over the last decade, Visual-Inertial Odometry and SLAM systems have shown excellent performance for mobile robots in indoor and outdoor environments, but they often fail underwater due to the inherent difficulties of such environments. Our approach combats the weaknesses of previous approaches by utilizing additional sensors and exploiting their complementary characteristics. In particular, we use (1) acoustic range information for improved reconstruction and localization, thanks to its reliable distance measurements, and (2) depth information from a water-pressure sensor for robust initialization, refining the scale, and helping to limit drift in the tightly-coupled integration (a simple pressure-to-depth conversion is sketched after this list). The developed software, made open source, has been successfully used to test and validate the proposed system on both benchmark datasets and numerous real-world underwater scenarios, including datasets collected with a custom-made underwater sensor suite and the autonomous underwater vehicle Aqua2. SVIn2 demonstrated outstanding accuracy and robustness on those datasets and enabled other robotic tasks, for example, planning for underwater robots in the presence of obstacles.
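For the microrobot tracking record above, a minimal illustration of OpenCV-based centroid tracking on microscope frames might look like the following. The thresholding strategy, parameter values, and function names are assumptions for illustration, not the authors' actual pipeline.

```python
# Illustrative sketch of tracking a bright microrobot feature across
# microscope video frames with OpenCV. Threshold and names are assumptions.
import cv2

def track_feature(video_path: str, thresh: int = 200):
    """Yield the (x, y) pixel centroid of the largest bright blob per frame."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        # Assume the largest bright region is the tracked robot feature.
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m["m00"] > 0:
            yield (m["m10"] / m["m00"], m["m01"] / m["m00"])
    cap.release()
```

Per-frame centroids obtained this way can be logged over repeated trajectories to run the kind of automated repeatability tests that record describes.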
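For the BLE/Tango hybrid record above, one simple way to combine a drift-prone odometry stream with coarse beacon fixes is a weighted correction step, sketched below. The interfaces and fusion weight are hypothetical and are not taken from that system.

```python
# A minimal sketch of fusing coarse BLE-beacon position fixes with relative
# motion from visual-inertial odometry (e.g., Tango), assuming both are
# expressed in the same 2D floor-plan coordinates. Hypothetical interfaces.
from dataclasses import dataclass

@dataclass
class HybridLocalizer:
    x: float = 0.0
    y: float = 0.0
    ble_weight: float = 0.2  # trust placed in the (noisier) BLE fix

    def predict(self, dx: float, dy: float) -> None:
        """Apply the relative motion reported by the odometry source."""
        self.x += dx
        self.y += dy

    def correct(self, ble_x: float, ble_y: float) -> None:
        """Nudge the estimate toward the BLE fix to limit odometry drift."""
        w = self.ble_weight
        self.x = (1.0 - w) * self.x + w * ble_x
        self.y = (1.0 - w) * self.y + w * ble_y
```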
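For the SVIn2 record above, the pressure-to-depth relationship it exploits for scale refinement and drift limiting is the hydrostatic equation. A small sketch, assuming fresh water and sea-level atmospheric pressure, is given below; the function name is illustrative.

```python
# Hydrostatic depth from absolute pressure: depth = (P - P_atm) / (rho * g).
# Constants assume fresh water at sea level; adjust density for salt water.
def depth_from_pressure(pressure_pa: float,
                        atmospheric_pa: float = 101_325.0,
                        water_density: float = 1000.0,   # kg/m^3
                        gravity: float = 9.81) -> float:  # m/s^2
    """Return depth in meters from an absolute pressure reading in pascals."""
    return (pressure_pa - atmospheric_pa) / (water_density * gravity)

# Example: ~2 bar absolute pressure corresponds to roughly 10 m of depth.
print(depth_from_pressure(2.0e5))  # ≈ 10.06
```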