


Search for: All records

Creators/Authors contains: "Sabharwal, Ashutosh"


  1. Abstract

    In type 2 diabetes (T2D), the dawn phenomenon is an overnight glucose rise recognized to contribute to overall glycemia and is a potential target for therapeutic intervention. Existing continuous glucose monitoring (CGM)-based approaches do not account for sensor error, which can mask the true extent of the dawn phenomenon. To address this challenge, we developed a probabilistic framework that incorporates sensor error to assign a probability to the occurrence of the dawn phenomenon, whereas current approaches label glucose fluctuations as dawn phenomenon with a binary yes/no. We compared the proposed probabilistic model with a standard binary model on CGM data from 173 participants (71% female, 87% Hispanic/Latino, 54 ± 12 years, with either a diagnosis of T2D for six months or an elevated risk of T2D) stratified by HbA1c levels into normal but at risk for T2D, pre-T2D, or non-insulin-treated T2D. The probabilistic model revealed a higher dawn phenomenon frequency in T2D [49% (95% CI 37–63%)] compared to pre-T2D [36% (95% CI 31–48%), p = 0.01] and at-risk participants [34% (95% CI 27–39%), p < 0.0001]. While these trends were also found using the binary approach, the probabilistic model identified significantly greater dawn phenomenon frequency than the traditional binary model across all three HbA1c sub-groups (p < 0.0001), indicating its potential to detect the dawn phenomenon earlier across diabetes risk categories.
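    The contrast described above can be illustrated with a minimal sketch, assuming a Gaussian per-reading CGM error and a fixed overnight-rise threshold (neither the paper's error model nor its threshold is given in the abstract): the binary rule thresholds the observed rise, while the probabilistic rule returns the probability that the true rise exceeds the threshold.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Hypothetical values: the paper's exact sensor-error model and rise
    # threshold are not stated in the abstract.
    RISE_THRESHOLD_MG_DL = 20.0   # assumed dawn-phenomenon rise threshold
    SENSOR_SD_MG_DL = 10.0        # assumed per-reading CGM error (standard deviation)

    def binary_dawn_phenomenon(nadir, prebreakfast, threshold=RISE_THRESHOLD_MG_DL):
        """Traditional yes/no label: observed overnight rise vs. a fixed threshold."""
        return (prebreakfast - nadir) >= threshold

    def dawn_phenomenon_probability(nadir, prebreakfast,
                                    sensor_sd=SENSOR_SD_MG_DL,
                                    threshold=RISE_THRESHOLD_MG_DL):
        """Probability that the true overnight rise exceeds the threshold,
        assuming independent Gaussian sensor error on both readings."""
        observed_rise = prebreakfast - nadir
        rise_sd = np.sqrt(2.0) * sensor_sd   # error of the difference of two readings
        return 1.0 - norm.cdf(threshold, loc=observed_rise, scale=rise_sd)

    # An observed 18 mg/dL rise is "no" under the binary rule, yet still has a
    # sizeable probability of being a true dawn phenomenon once sensor error is modeled.
    print(binary_dawn_phenomenon(95, 113))                   # False
    print(round(dawn_phenomenon_probability(95, 113), 2))    # ~0.44
    ```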

  2. Free, publicly-accessible full text available October 29, 2024
  3. Free, publicly-accessible full text available September 1, 2024
  4. Laser speckle contrast imaging is widely used in clinical studies to monitor blood flow distribution. Speckle contrast tomography, similar to diffuse optical tomography, extends speckle contrast imaging to provide deep tissue blood flow information. However, current speckle contrast tomography techniques suffer from poor spatial resolution and involve computation- and memory-intensive reconstruction algorithms. In this work, we present SpeckleCam, a camera-based system to reconstruct high-resolution 3D blood flow distribution deep inside the skin. Our approach replaces the traditional forward model, which uses diffuse approximations, with a convolutional forward model based on Monte Carlo simulations, enabling an improved deep tissue blood flow reconstruction algorithm. We show that our proposed approach can recover complex structures up to 6 mm deep inside a tissue-like scattering medium in the reflection geometry. We also conduct human experiments to demonstrate that our approach can detect reduced flow in major blood vessels during vascular occlusion.
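    A minimal sketch of the convolutional-forward-model idea described above, under assumptions: the per-depth kernels here are stand-in Gaussians rather than the Monte-Carlo-derived sensitivity kernels the paper uses, and the reconstruction is plain projected gradient descent rather than SpeckleCam's algorithm.

    ```python
    import numpy as np

    def make_kernels(shape, n_depths):
        """Placeholder depth-dependent blur kernels (deeper layers blur more); the
        real system would load Monte-Carlo-derived sensitivity kernels here."""
        ny, nx = shape
        yy, xx = np.meshgrid(np.arange(ny) - ny // 2, np.arange(nx) - nx // 2, indexing="ij")
        kernels = []
        for d in range(n_depths):
            sigma = 1.0 + 2.0 * d
            k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
            kernels.append(np.fft.fft2(np.fft.ifftshift(k / k.sum())))
        return kernels

    def forward(flow_volume, kernels):
        """Convolutional forward model: surface image = sum over depths of kernel_d * flow_d."""
        img = np.zeros(flow_volume.shape[1:])
        for d, K in enumerate(kernels):
            img += np.real(np.fft.ifft2(K * np.fft.fft2(flow_volume[d])))
        return img

    def reconstruct(measurement, kernels, n_depths, n_iters=200, step=0.25):
        """Projected gradient descent on ||forward(x) - y||^2 with x >= 0."""
        x = np.zeros((n_depths,) + measurement.shape)
        for _ in range(n_iters):
            residual = forward(x, kernels) - measurement
            for d, K in enumerate(kernels):
                grad_d = np.real(np.fft.ifft2(np.conj(K) * np.fft.fft2(residual)))
                x[d] = np.maximum(x[d] - step * grad_d, 0.0)   # non-negative flow
        return x

    # Toy usage: bury a small flow inclusion at the second depth layer and recover it.
    kernels = make_kernels((64, 64), n_depths=4)
    truth = np.zeros((4, 64, 64))
    truth[1, 30:34, 30:34] = 1.0
    x_hat = reconstruct(forward(truth, kernels), kernels, n_depths=4)
    print([round(float(x_hat[d].sum()), 2) for d in range(4)])   # depth-wise recovered flow
    ```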

  5. Social ambiance describes the context in which social interactions happen and can be measured from speech audio by counting the number of concurrent speakers. This measurement has enabled various mental health tracking and human-centric IoT applications. While an on-device Social Ambiance Measure (SAM) is highly desirable to ensure user privacy and thus facilitate wide adoption of the aforementioned applications, the computational complexity of state-of-the-art deep neural network (DNN)-powered SAM solutions stands at odds with the often constrained resources on mobile devices. Furthermore, only limited labeled data is available or practical for SAM in clinical settings, due to privacy constraints and the required human effort, further challenging the achievable accuracy of on-device SAM solutions. To this end, we propose a dedicated neural architecture search framework for Energy-efficient and Real-time SAM (ERSAM). Specifically, our ERSAM framework can automatically search for DNNs that push forward the achievable accuracy vs. hardware efficiency frontier of mobile SAM solutions. For example, ERSAM-delivered DNNs consume only 40 mW × 12 h of energy and 0.05 seconds of processing latency for a 5-second audio segment on a Pixel 3 phone, while achieving an error rate of only 14.3% on a social ambiance dataset generated from LibriSpeech. We expect that our ERSAM framework will pave the way for ubiquitous on-device SAM solutions, which are in growing demand.
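    A minimal sketch of the accuracy-vs-hardware-efficiency trade-off that such a hardware-aware search optimizes; the candidate models, cost numbers, budgets, and scoring rule below are illustrative assumptions, not ERSAM's actual search space or objective.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        error_rate: float     # e.g., speaker-count error on a validation split
        latency_s: float      # measured on the target phone for a 5 s audio segment
        energy_mwh: float     # measured energy per deployment period

    def search_score(c: Candidate, latency_budget=0.05, energy_budget=480.0,
                     latency_weight=1.0, energy_weight=1.0):
        """Lower is better: validation error plus penalties for exceeding hardware budgets."""
        latency_penalty = max(0.0, c.latency_s / latency_budget - 1.0)
        energy_penalty = max(0.0, c.energy_mwh / energy_budget - 1.0)
        return c.error_rate + latency_weight * latency_penalty + energy_weight * energy_penalty

    # Hypothetical candidates; the budgets mirror the abstract's figures
    # (0.05 s latency, 40 mW x 12 h = 480 mWh of energy).
    candidates = [
        Candidate("tiny-cnn", error_rate=0.19, latency_s=0.02, energy_mwh=300.0),
        Candidate("mid-crnn", error_rate=0.14, latency_s=0.05, energy_mwh=480.0),
        Candidate("large-transformer", error_rate=0.11, latency_s=0.40, energy_mwh=2400.0),
    ]
    best = min(candidates, key=search_score)
    print(best.name)   # the mid-size model meets the budgets with the lowest penalized error
    ```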
  6. Abstract

    One of the key ideas for reducing downlink channel acquisition overhead in FDD massive MIMO systems is to exploit a combination of two assumptions: (i) the dimension of channel models in the propagation domain may be much smaller than next-generation base-station array sizes (e.g., 64 or more antennas), and (ii) uplink and downlink channels may share the same low-dimensional propagation domain. Our channel measurements demonstrate that these two assumptions may not always hold, thereby impacting the predicted performance of methods that rely on them. In this paper, we analyze the error in modeling the downlink channel using uplink measurements, caused by the mismatch from the above two assumptions. We investigate how the modeling error varies with base-station array size and provide both numerical and experimental results. We observe that the modeling error increases with the number of base-station antennas, and that channels with larger angular spreads have larger modeling error. Using our modeling error analysis, we then investigate the resulting beamforming rate loss. We observe that the rate loss increases with the number of base-station antennas, and that channels with larger angular spreads suffer higher rate loss.
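    The kind of mismatch analyzed above can be sketched numerically; the synthetic multipath channel, angle perturbation, and maximum-ratio beamformer below are illustrative assumptions rather than the paper's measured channels or analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def steering(n_ant, angles_deg, spacing=0.5):
        """ULA steering vectors for the given departure angles (one column per path)."""
        n = np.arange(n_ant)[:, None]
        phases = 2j * np.pi * spacing * n * np.sin(np.deg2rad(angles_deg))[None, :]
        return np.exp(phases)

    def mrt_rate(h_true, h_beam, snr=10.0):
        """Achievable rate with maximum-ratio beamforming matched to h_beam."""
        w = h_beam / np.linalg.norm(h_beam)
        return np.log2(1.0 + snr * np.abs(h_true.conj() @ w) ** 2)

    n_ant = 64
    angles = np.array([10.0, 25.0, 40.0])                  # assumed propagation paths
    g_dl = (rng.normal(size=3) + 1j * rng.normal(size=3)) / np.sqrt(2)
    h_dl = steering(n_ant, angles) @ g_dl                  # "true" downlink channel

    # Model mismatch: the uplink-derived model misses a path and misestimates an angle.
    h_model = steering(n_ant, np.array([10.0, 27.0])) @ g_dl[:2]

    err = np.linalg.norm(h_dl - h_model) / np.linalg.norm(h_dl)
    loss = mrt_rate(h_dl, h_dl) - mrt_rate(h_dl, h_model)
    print(f"normalized modeling error: {err:.2f}, beamforming rate loss: {loss:.2f} bits/s/Hz")
    ```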

  7. Optical imaging technologies hold powerful potential in healthcare. 
  8. Camera-based heart rate measurement is becoming an attractive option as a non-contact modality for continuous remote health and engagement monitoring. However, reliable heart rate extraction from camera-based measurement is challenging in realistic scenarios, especially when the subject is moving. In this work, we develop a motion-robust algorithm, labeled RobustPPG, for extracting photoplethysmography (PPG) signals from face video and estimating the heart rate. Our key innovation is to explicitly model and generate the motion distortions caused by the movements of the person's face. We use inverse rendering to obtain the 3D shape and albedo of the face and the environment lighting from video frames and then render the human face for each frame. The rendered face is similar to the original face but does not contain the heart rate signal; facial movements alone cause the pixel intensity variation in the generated video frames. Finally, we use the generated motion distortion to filter the motion-induced measurements. We demonstrate that our approach performs better than state-of-the-art methods in extracting a clean blood volume signal, with over 2 dB signal quality improvement and a 30% improvement in RMSE of estimated heart rate in intense motion scenarios.
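    A minimal sketch of the final filtering step described above, assuming the inverse-rendering stage has already produced a motion-only reference trace; the regression-based removal and the toy signals below are illustrative assumptions, not the paper's exact filter.

    ```python
    import numpy as np

    def remove_motion_component(measured, rendered):
        """Regress the measured trace onto the motion-only reference and subtract the fit."""
        measured = measured - measured.mean()
        rendered = rendered - rendered.mean()
        X = np.stack([rendered, np.ones_like(rendered)], axis=1)
        coeffs, *_ = np.linalg.lstsq(X, measured, rcond=None)
        return measured - X @ coeffs          # residual ~ pulse signal + noise

    # Toy example: a 1 Hz "pulse" buried under a larger 0.3 Hz "motion" component.
    # `rendered` stands in for the trace computed on the relit, rendered face,
    # which carries motion-induced variation but no pulse.
    t = np.linspace(0, 10, 300)
    pulse = 0.2 * np.sin(2 * np.pi * 1.0 * t)
    motion = 2.0 * np.sin(2 * np.pi * 0.3 * t)
    measured = pulse + motion + 0.05 * np.random.default_rng(0).normal(size=t.size)
    rendered = motion
    cleaned = remove_motion_component(measured, rendered)
    print(np.corrcoef(cleaned, pulse)[0, 1] > 0.9)   # motion component largely removed
    ```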

  9. Massive multiple-input multiple-output (mMIMO) technology uses a very large number of antennas at base stations to significantly increase the efficiency of wireless spectrum use. Thus, mMIMO is considered an essential part of 5G and beyond. However, developing a scalable and reliable mMIMO system is an extremely challenging task, significantly hampering the research community's ability to study next-generation networks. This "research bottleneck" motivated us to develop a deployable experimental mMIMO platform to enable research across many areas. We also envision that this platform could unleash novel collaborations between communications, computing, and machine learning researchers to completely rethink next-generation networks.