- Award ID(s):
- 1948547
- PAR ID:
- 10289674
- Date Published:
- Journal Name:
- Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services (ACM MobiSys)
- Page Range / eLocation ID:
- 215 to 227
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
In the past, researchers designed, deployed, and evaluated Wi-Fi based localization techniques in order to locate users and devices without adding extra or costly infrastructure. However, as infrastructure deployments change, one must reexamine the role of Wi-Fi localization. Today, cameras are increasingly being deployed, and therefore this work examines how contextual and vision data obtained from cameras can be integrated with Wi-Fi localization techniques. We present an approach called CALM that works on commodity APs and cameras. Our approach contains several contributions: a camera line fitting technique to restrict the search space of candidate locations, single AP and camera localization via a deprojection scheme inspired by 3D cameras, simple and robust AP weighting that analyzes the context of users via the camera, and a new virtual camera methodology to scale analysis. We motivate our scheme by analyzing real camera and AP topologies from a major vendor. Our evaluation over 9 rooms and 102,300 wireless readings shows CALM can obtain decimeter-level accuracy, improving performance over previous Wi-Fi techniques like FTM by 2.7× and SpotFi by 2.3×.
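The deprojection idea borrowed from 3D cameras can be illustrated with the standard pinhole-camera model: a pixel plus an estimated range maps to a 3D point in the camera frame. This is a generic sketch, not CALM's actual scheme, and the intrinsics below are hypothetical values:

```python
import numpy as np

def deproject(u, v, depth, fx, fy, cx, cy):
    """Map pixel (u, v) at the given depth (metres) to a 3D point
    in the camera frame using the pinhole model.
    fx, fy: focal lengths in pixels; cx, cy: principal point."""
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.array([x, y, depth])

# Example: a pixel at the principal point 3 m away lies on the optical axis.
p = deproject(320.0, 240.0, 3.0, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

Combining one such camera ray with a single AP's range estimate is what lets a lone AP-camera pair pin down a location.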
-
Abstract Imaging underwater environments is of great importance to marine sciences, sustainability, climatology, defense, robotics, geology, space exploration, and food security. Despite advances in underwater imaging, most of the ocean and marine organisms remain unobserved and undiscovered. Existing methods for underwater imaging are unsuitable for scalable, long-term, in situ observations because they require tethering for power and communication. Here we describe underwater backscatter imaging, a method for scalable, real-time wireless imaging of underwater environments using fully-submerged battery-free cameras. The cameras power up from harvested acoustic energy, capture color images using ultra-low-power active illumination and a monochrome image sensor, and communicate wirelessly at net-zero-power via acoustic backscatter. We demonstrate wireless battery-free imaging of animals, plants, pollutants, and localization tags in enclosed and open-water environments. The method’s self-sustaining nature makes it desirable for massive, continuous, and long-term ocean deployments with many applications including marine life discovery, submarine surveillance, and underwater climate change monitoring.
-
ABSTRACT In Smart City and Vehicle-to-Everything (V2X) systems, acquiring pedestrians' accurate locations is crucial to traffic and pedestrian safety. Current systems adopt cameras and wireless sensors to estimate people's locations via sensor fusion. Standard fusion algorithms, however, become inapplicable when multi-modal data is not associated. For example, pedestrians are out of the camera field of view, or data from the camera modality is missing. To address this challenge and produce more accurate location estimations for pedestrians, we propose a localization solution based on a Generative Adversarial Network (GAN) architecture. During training, it learns the underlying linkage between pedestrians' camera-phone data correspondences. During inference, it generates refined position estimations based only on pedestrians' phone data, which consists of GPS, IMU, and FTM. Results show that our GAN produces 3D coordinates with 1 to 2 meters of localization error across 5 different outdoor scenes. We further show that the proposed model supports self-learning. The generated coordinates can be associated with pedestrians' bounding box coordinates to obtain additional camera-phone data correspondences. This allows automatic data collection during inference. Results show that after fine-tuning the GAN model on the expanded …
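At inference time, a trained GAN generator is simply a learned mapping from a phone-feature vector to refined coordinates. A minimal sketch of that forward pass follows; the feature layout, layer sizes, and random stand-in weights are all hypothetical, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical phone feature vector: 3 GPS + 6 IMU + 1 FTM range = 10 dims.
phone_features = rng.normal(size=(1, 10))

# Stand-in for trained generator weights: one hidden layer, 3-D output.
W1, b1 = rng.normal(size=(10, 32)) * 0.1, np.zeros(32)
W2, b2 = rng.normal(size=(32, 3)) * 0.1, np.zeros(3)

def generator(x):
    """Map phone features to a refined (x, y, z) position estimate."""
    h = np.maximum(0.0, x @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2                  # 3D coordinate output

xyz = generator(phone_features)
```

The self-learning loop described above would pair such generated coordinates with camera bounding boxes to mint new training correspondences for fine-tuning.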
-
This demonstration presents the Location-Specific Public Broadcast system, in which localization and wireless broadcasts are combined to deliver a scalable, privacy-preserving, and generic solution to location-based services. Other interactive location-based systems either preload information onto user devices, an approach that is bulky, difficult to update, and must be custom-made for each venue, or fetch information from the cloud based on location, which sacrifices user privacy. In our system, a wireless access point continuously broadcasts information tagged with locations of interest, and the mobile devices, performing passive localization, select and display the information pertinent to themselves. The location-specific information is stored only on the WiFi AP, and the phone app is ultra-lightweight, containing only the location calculation and information filtering functionalities, so it can be used in any space. We envision our solution being adopted in public places, such as museums and aquariums, for location-specific information delivery, such as enhancing the interactive experience for visitors.
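The privacy-preserving filtering step can be sketched simply: the phone computes its own position passively and filters the broadcast payload locally, sending nothing back. The payload format and radius below are hypothetical illustrations, not the demo's actual protocol:

```python
import math

# Hypothetical broadcast payload: messages tagged with (x, y) in metres.
broadcast = [
    ((2.0, 1.0), "Exhibit A: coral reef tank"),
    ((10.0, 4.0), "Exhibit B: shark enclosure"),
]

def nearby_info(user_xy, payload, radius=3.0):
    """Keep only entries within `radius` metres of the locally computed
    user position; the selection never leaves the device."""
    ux, uy = user_xy
    return [msg for (x, y), msg in payload
            if math.hypot(x - ux, y - uy) <= radius]

msgs = nearby_info((2.5, 0.5), broadcast)
```

Because filtering happens on-device against a venue-wide broadcast, the AP never learns which visitor is standing where.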
-
Abstract In this paper, a new framework is proposed for monitoring the dynamic performance of bridges using three different camera placements and a few visual data processing techniques at low cost and high efficiency. A deep learning method, validated by an optical flow approach, is included in the framework for motion tracking. To verify the framework, videos of two shaking-table tests taken by stationary cameras were processed first. Then, the vibrations of six pedestrian bridges were measured using structure-mounted, remote, and drone-mounted cameras, respectively. Two techniques, displacement subtraction and frequency subtraction, are applied to remove systematic motions of the cameras and to capture the natural frequencies of the tested structures. Measurements on these bridges were compared with data from wireless accelerometers and structural analysis. Influences of critical parameters for camera setting and data processing, such as video frame rate, data window size, and data sampling rate, were also studied carefully. The research results show that the vibrations and frequencies of structures on the shaking tables and existing bridges can be captured accurately with the proposed framework, and that these camera placements and data processing techniques can be successfully used for monitoring the dynamic performance of bridges.
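The displacement-subtraction idea can be illustrated on synthetic tracks: subtracting a static background point's apparent motion from a bridge point's track cancels the shared camera sway, after which an FFT peak recovers the structural frequency. This is a toy example with made-up signals, not the paper's pipeline:

```python
import numpy as np

fps = 60.0                      # hypothetical video frame rate (Hz)
t = np.arange(0, 10, 1 / fps)   # 10 s of tracked displacement

# Synthetic tracks: the bridge point sees structural vibration (2 Hz)
# plus camera sway (0.3 Hz); a static background point sees only sway.
camera_sway = 0.5 * np.sin(2 * np.pi * 0.3 * t)
bridge_track = np.sin(2 * np.pi * 2.0 * t) + camera_sway
background_track = camera_sway

# Displacement subtraction removes the shared camera motion.
structure = bridge_track - background_track

# Dominant frequency from the FFT magnitude peak (skip the DC bin).
spectrum = np.abs(np.fft.rfft(structure))
freqs = np.fft.rfftfreq(structure.size, d=1 / fps)
f_natural = freqs[1:][np.argmax(spectrum[1:])]
```

Frequency subtraction works analogously in the spectral domain, discarding peaks that also appear in the camera's own motion spectrum.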