This work focuses on the problem of fusing a hyperspectral image (HSI) and a multispectral image (MSI) to produce a super-resolution image that has both high spatial and high spectral resolution. Existing algorithms are mostly based on joint low-rank factorization of the matricized HSI and MSI. This framework is effective to some extent, but several challenges remain. First, it is unclear whether the super-resolution image is identifiable in theory under this framework, although identifiability usually plays an essential role in such estimation problems. Second, most algorithms assume that the degradation operators from the super-resolution image to the HSI and MSI are known or can be easily estimated, which is hardly true in practice. In this work, we propose a novel coupled tensor decomposition method that effectively circumvents these issues. The proposed approach guarantees the identifiability of the super-resolution image under realistic conditions. The method works even without knowing the spatial degradation operator, which can be hard to estimate accurately in practice. Simulations using AVIRIS Cuprite data demonstrate the effectiveness of the proposed approach.
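The coupled model behind this approach can be illustrated with a small numerical sketch. All sizes, the CP rank, and the block-averaging degradation operators below are hypothetical choices for illustration only; the point is that both observed images inherit the same CP factors as the super-resolution image (SRI), with the HSI degraded spatially and the MSI degraded spectrally:

```python
import numpy as np

# Hypothetical sizes: SRI is I x J x K (space x space x spectrum), CP rank R
I = J = 8
K, R = 6, 3
rng = np.random.default_rng(0)
A, B, C = (rng.random((n, R)) for n in (I, J, K))

def cp_to_tensor(A, B, C):
    # Reconstruct a third-order tensor from its CP factors:
    # T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
    return np.einsum('ir,jr,kr->ijk', A, B, C)

SRI = cp_to_tensor(A, B, C)

# Spatial degradation P (2x2 block averaging) produces the HSI;
# spectral degradation Pm (3-band aggregation) produces the MSI.
P = np.kron(np.eye(I // 2), np.full((1, 2), 0.5))     # (I/2) x I
Pm = np.kron(np.eye(K // 3), np.full((1, 3), 1 / 3))  # (K/3) x K

HSI = cp_to_tensor(P @ A, P @ B, C)   # low spatial, full spectral
MSI = cp_to_tensor(A, B, Pm @ C)      # full spatial, low spectral
```

Because both observations share the factors `A`, `B`, `C`, a coupled decomposition of the HSI and MSI can recover them jointly and reassemble the SRI, which is the intuition behind the identifiability result.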
An Open Problem: Energy Data Super-Resolution
In this notes paper, we present an open problem to the BuildSys community: energy data super-resolution, i.e., the task of estimating the power consumption of a home at a higher temporal resolution given its low-resolution power consumption. Super-resolution is especially useful when smart meters collect data at a very low sampling rate owing to issues such as bandwidth constraints, pricing, and old hardware. The problem is motivated by the success of image super-resolution in the computer vision community. In this paper, we formally introduce the problem and present baseline methods and the algorithms we used to "solve" it. We evaluate the performance of the algorithms on a real-world dataset and discuss the results. We also discuss what makes this problem hard and why a trivial baseline is hard to beat.
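One plausible form of a trivial baseline for this task (the paper does not specify its exact construction, so the function below is a hypothetical sketch) simply repeats each low-resolution reading across its window. Since a low-rate reading is typically an average power over its interval, repetition preserves total energy:

```python
import numpy as np

def upsample_power(low_res, factor):
    """Naive super-resolution baseline: treat each low-resolution reading
    as the mean power over its window and repeat it `factor` times, which
    preserves the total energy of the signal."""
    return np.repeat(np.asarray(low_res, dtype=float), factor)

# Hypothetical example: 15-minute average readings (kW) upsampled to
# 5-minute resolution.
low = [1.2, 0.3, 2.4]
high = upsample_power(low, 3)
```

Any learned super-resolution method has to beat this energy-preserving flat interpolation, which is part of why the baseline is hard to improve on.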
- Award ID(s): 1646501
- PAR ID: 10211261
- Date Published:
- Journal Name: NILM'20: Proceedings of the 5th International Workshop on Non-Intrusive Load Monitoring
- Page Range / eLocation ID: 99 to 102
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- In recent years, streamed 360° videos have gained popularity within Virtual Reality (VR) and Augmented Reality (AR) applications. However, they are of much higher resolution than 2D videos, causing greater bandwidth consumption when streamed. This increased bandwidth utilization puts tremendous strain on the network capacity of the cloud providers streaming these videos. In this paper, we introduce L3BOU, a novel three-tier distributed software framework that reduces cloud-edge bandwidth in the backhaul network and lowers average end-to-end latency for 360° video streaming applications. The L3BOU framework achieves low bandwidth and low latency by leveraging edge-based, optimized upscaling techniques. L3BOU accomplishes this by utilizing down-scaled MPEG-DASH-encoded 360° video data, known as Ultra Low Resolution (ULR) data, to which the L3BOU edge applies distributed super-resolution (SR) techniques, providing a high-quality video to the client. L3BOU is able to reduce the cloud-edge backhaul bandwidth by up to a factor of 24, and the optimized super-resolution multi-processing of ULR data provides a 10-fold latency decrease in super-resolution upscaling at the edge.
- Next-generation wireless communication systems are expected to combine millimeter-wave communication with massive multi-user multiple-input multiple-output (MIMO) technology. All-digital base-station implementations for such systems need to process high-dimensional data at extremely high rates, which results in excessively high power consumption. In this paper, we propose two-stage spatial equalizers that first reduce the problem dimension by means of a hardware-friendly, low-resolution linear transform, followed by spatial equalization on a lower-dimensional signal. We consider adaptive and non-adaptive dimensionality-reduction strategies and demonstrate that the proposed two-stage spatial equalizers are able to approach the performance of conventional linear spatial equalizers that directly operate on high-dimensional data, while offering the potential to reduce the power consumption of spatial equalization.
- Background: To address the limitations of large-scale, high-quality microscopy image acquisition, PSSR (Point-Scanning Super-Resolution) was introduced to enhance easily acquired low-quality microscopy data to a higher quality using deep learning-based methods. However, while PSSR was released as open source, it was difficult for users to implement into their workflows due to an outdated codebase, limiting its usage by prospective users. Additionally, while the data enhancements provided by PSSR were significant, there was still potential for further improvement. Methods: To overcome this, we introduce PSSR2, a redesigned implementation of PSSR workflows and methods built to put state-of-the-art technology into the hands of the general microscopy and biology research community. PSSR2 enables user-friendly implementation of super-resolution workflows for simultaneous super-resolution and denoising of undersampled microscopy data, especially through its integrated Command Line Interface and Napari plugin. PSSR2 improves and expands upon previously established PSSR algorithms, mainly through improvements in the semi-synthetic data generation ("crappification") and training processes. Results: In benchmarking PSSR2 on a test dataset of paired high- and low-resolution electron microscopy images, PSSR2 super-resolves high-resolution images from low-resolution images to a significantly higher accuracy than PSSR. The super-resolved images are also more visually representative of real-world high-resolution images. Discussion: The improvements in PSSR2, in providing higher-quality images, should improve the performance of downstream analyses. We note that for accurate super-resolution, PSSR2 models should only be applied to super-resolve data sufficiently similar to training data and should be validated against real-world ground-truth data.
- In Vehicular Edge Computing (VEC) systems, the computing resources of connected Electric Vehicles (EVs) are used to fulfill the low-latency computation requirements of vehicles. However, local execution of heavy workloads may drain a considerable amount of energy in EVs. One promising way to improve energy efficiency is to share and coordinate computing resources among connected EVs. However, the uncertainty in the future location of vehicles makes it hard to decide which vehicles participate in resource sharing and how long they share their resources so that all participants benefit. In this paper, we propose VECMAN, a framework for energy-aware resource management in VEC systems composed of two algorithms: (i) a resource selector algorithm that determines the participating vehicles and the duration of the resource sharing period; and (ii) an energy manager algorithm that manages the computing resources of the participating vehicles with the aim of minimizing computational energy consumption. We evaluate the proposed algorithms and show that they considerably reduce the vehicles' computational energy consumption compared to state-of-the-art baselines. Specifically, our algorithms achieve between 7% and 18% energy savings compared to a baseline that executes workloads locally, and an average of 13% energy savings compared to a baseline that offloads vehicles' workloads to RSUs.
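The two-stage spatial equalization idea described in the related work above can be sketched numerically. Everything in this example is a hypothetical stand-in (antenna count, user count, reduced dimension, and the choice of a random ±1 matrix as the low-resolution transform are assumptions, not the paper's actual design): a coarse first-stage linear transform shrinks the observation before a zero-forcing equalizer operates in the reduced dimension.

```python
import numpy as np

rng = np.random.default_rng(1)
B_ant, U = 64, 8  # base-station antennas, users (hypothetical sizes)

# Rayleigh-fading channel matrix (B_ant x U)
H = (rng.standard_normal((B_ant, U))
     + 1j * rng.standard_normal((B_ant, U))) / np.sqrt(2)

# Stage 1: hardware-friendly dimensionality reduction. A random +/-1
# (1-bit) matrix stands in for a low-resolution linear transform.
D = 24  # reduced dimension (hypothetical)
Phi = np.sign(rng.standard_normal((D, B_ant)))

# Stage 2: zero-forcing equalizer designed for the reduced channel.
H_red = Phi @ H            # D x U effective channel
W = np.linalg.pinv(H_red)  # U x D equalizer

s = (rng.integers(0, 2, U) * 2 - 1).astype(complex)  # BPSK symbols
y = H @ s                  # noiseless B_ant-dimensional receive vector
s_hat = W @ (Phi @ y)      # equalize in the reduced dimension
```

In this noiseless setting the reduced-dimension zero-forcer recovers the transmitted symbols exactly, since the D x U effective channel retains full column rank; the interesting question the paper studies is how little performance is lost under noise relative to equalizing the full-dimensional observation.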