
Title: Wi-Mesh: A WiFi Vision-Based Approach for 3D Human Mesh Construction
Award ID(s):
2146354
NSF-PAR ID:
10415998
Author(s) / Creator(s):
Date Published:
Journal Name:
20th ACM Conference on Embedded Network Sensor Systems
Page Range / eLocation ID:
362 to 376
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Connectivity is at the heart of the future Internet-of-Things (IoT) infrastructure, which must control and communicate with remote sensors and actuators acting as beacons, data-collection points, and forwarding nodes. Existing sensor network solutions cannot resolve the bottleneck near the sink node, and the tree-based Internet architecture suffers from a single point of failure. To address these deficiencies in multi-hop mesh networking and cross-domain network design, we propose a mesh-inside-a-mesh IoT network architecture. Our "edge router" joins the two mesh networks and seamlessly forwards multi-standard packets between them. The proposed IoT testbed interoperates with existing standards (Wi-Fi, 6LoWPAN) and network segments, providing both high-throughput Internet access and resilient sensor coverage throughout the community.
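The edge-router idea above can be sketched as a dispatcher that bridges a Wi-Fi mesh segment and a 6LoWPAN sensor mesh, re-tagging packets that cross domains. All names here (`Packet`, `EdgeRouter`, the segment tags) are illustrative assumptions, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    standard: str   # "wifi" or "6lowpan" (hypothetical tags)
    dest: str       # destination node id
    payload: bytes

class EdgeRouter:
    """Bridges two mesh segments by re-tagging packets that cross over."""

    def __init__(self, wifi_nodes, lowpan_nodes):
        self.segments = {"wifi": set(wifi_nodes), "6lowpan": set(lowpan_nodes)}

    def route(self, pkt: Packet) -> Packet:
        # Find which segment the destination node lives in.
        for standard, nodes in self.segments.items():
            if pkt.dest in nodes:
                if standard != pkt.standard:
                    # Cross-domain hop: translate to the destination's standard.
                    return Packet(standard, pkt.dest, pkt.payload)
                return pkt  # same-segment forwarding, no translation needed
        raise KeyError(f"unknown destination {pkt.dest!r}")

router = EdgeRouter(wifi_nodes={"gw1"}, lowpan_nodes={"sensor7"})
out = router.route(Packet("wifi", "sensor7", b"\x01"))
print(out.standard)  # -> 6lowpan
```

In this sketch the router only swaps the protocol tag; a real bridge would also re-frame headers and compress addresses per 6LoWPAN.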
  2. Recovering multi-person 3D poses and shapes with absolute scale from a single RGB image is challenging due to the inherent depth and scale ambiguity of a single view. Current work on 3D pose and shape estimation tends to focus on estimating 3D joint locations relative to the root joint, usually defined as the joint closest to the shape centroid (for humans, the pelvis). In this paper, we build upon an existing multi-person 3D mesh predictor network, ROMP, to create Absolute-ROMP. By adding absolute root-joint localization in the camera coordinate frame, we can estimate multi-person 3D poses and shapes with absolute scale from a single RGB image. Such a single-shot approach lets the system learn and reason about inter-person depth relationships, improving multi-person 3D estimation. In addition to this end-to-end network, we train a hybrid CNN-transformer network, called TransFocal, to predict the focal length of the image's camera. Absolute-ROMP estimates the 3D mesh coordinates of all persons in the image and their root-joint locations normalized by the focal length. We then use TransFocal to obtain the focal length and recover absolute depth for all joints in the camera coordinate frame. We evaluate Absolute-ROMP on root-joint localization and root-relative 3D pose estimation on publicly available multi-person 3D pose datasets, and TransFocal on a dataset created from the Pano360 dataset. Both run in real time and are applicable to in-the-wild images and videos.
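The two-network pipeline above combines a focal-length-normalized root depth (from the mesh predictor) with a predicted focal length (from the focal-length network) to recover absolute positions. A minimal sketch, assuming a pinhole camera and the normalization convention Z = d_norm * f (an illustrative assumption, not the paper's published formulation):

```python
def absolute_root_position(u, v, d_norm, f, cx, cy):
    """Back-project a root joint at pixel (u, v) into camera coordinates.

    d_norm : root depth normalized by focal length (Z / f), assumed convention
    f      : focal length in pixels (e.g., predicted by a focal-length network)
    cx, cy : principal point, often the image center
    """
    z = d_norm * f        # de-normalize to recover absolute depth
    x = (u - cx) * z / f  # pinhole model: X = (u - cx) * Z / f
    y = (v - cy) * z / f
    return x, y, z

# Example: root joint at the principal point, d_norm = 0.005, f = 1000 px
x, y, z = absolute_root_position(960, 540, 0.005, 1000.0, 960, 540)
print(x, y, z)  # -> 0.0 0.0 5.0
```

An error in the predicted focal length scales the recovered depth linearly, which is why a dedicated focal-length estimator matters for absolute-scale results.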