

Title: End-to-End Image Classification and Compression with Variational Autoencoders
The past decade has witnessed the rising dominance of deep learning and artificial intelligence in a wide range of applications. In particular, the ocean of wireless smartphones and IoT devices continues to fuel the tremendous growth of edge/cloud-based machine learning (ML) systems, including image/speech recognition and classification. To overcome the infrastructural barrier of limited network bandwidth in cloud ML, existing solutions have mainly relied on traditional compression codecs such as JPEG that were historically engineered for human end-users instead of ML algorithms. Traditional codecs do not necessarily preserve features important to ML algorithms under limited bandwidth, leading to potentially inferior performance. This work investigates application-driven optimization of programmable commercial codec settings for networked learning tasks such as image classification. Building on variational autoencoders (VAEs), we develop an end-to-end networked learning framework by jointly optimizing the codec and classifier without reconstructing images for a given data rate (bandwidth). Compared with the standard JPEG codec, the proposed VAE joint compression and classification framework improves classification accuracy by over 10% and 4%, respectively, on the CIFAR-10 and ImageNet-1k data sets at a data rate of 0.8 bpp. Our proposed VAE-based models show 65%–99% reductions in encoder size, 1.5×–13.1× improvements in inference speed, and 25%–99% savings in power compared to baseline models. We further show that a simple decoder can reconstruct images with sufficient quality without compromising classification accuracy.
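The joint codec-classifier objective the abstract describes can be sketched as a classification loss plus a weighted rate term on the latent code. The following is a minimal illustrative sketch, not the paper's implementation: the KL-as-rate proxy and the weight `lam` are assumptions.

```python
import math

def kl_gaussian(mu, logvar):
    """KL(q(z|x) || N(0, I)) for a diagonal-Gaussian VAE posterior.
    Used here as a differentiable proxy for the latent transmission rate."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, logvar))

def cross_entropy(logits, label):
    """Softmax cross-entropy of a classifier operating on the latent code."""
    mx = max(logits)
    log_z = mx + math.log(sum(math.exp(x - mx) for x in logits))
    return log_z - logits[label]

def joint_objective(mu, logvar, logits, label, lam=0.1):
    """Rate-accuracy trade-off: classification loss plus a weighted rate
    term; lam is a hypothetical knob tuned to a target bandwidth (bpp)."""
    return cross_entropy(logits, label) + lam * kl_gaussian(mu, logvar)
```

Minimizing such an objective end-to-end trains the encoder and classifier together without any image-reconstruction term, matching the "without reconstructing images" framing above.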
Award ID(s):
2002937 2029027
NSF-PAR ID:
10347113
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
IEEE Internet of Things Journal
ISSN:
2372-2541
Page Range / eLocation ID:
1 to 1
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Learning-based image/video codecs typically utilize the well-known autoencoder structure, where the encoder transforms input data to a low-dimensional latent representation. Efficient latent encoding can reduce bandwidth needs during compression for transmission and storage. In this paper, we examine the effect of assigning high-level coarse grouping labels to each latent vector. Designing coding profiles for each latent group can achieve high compression encoding. We show that such grouping can be learned via end-to-end optimization of the codec and the deep learning (DL) model to optimize rate-accuracy for a given data set. For cloud-based inference, the source encoder can select a coding profile based on its learned grouping and encode the data features accordingly. Our test results on image classification show that significant performance improvement can be achieved with learned grouping over its non-grouping counterpart.
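The per-group coding-profile idea above can be sketched in miniature. Representing a profile as a single quantization step size is an illustrative simplification, not the abstract's actual coding scheme:

```python
def encode_latent(z, group_id, profiles):
    # Each coarse group selects its own coding profile; here a profile
    # is reduced to a quantization step size (illustrative only).
    step = profiles[group_id]
    symbols = [round(v / step) for v in z]
    return symbols, step

def decode_latent(symbols, step):
    # Inverse mapping applied at the cloud side before inference.
    return [s * step for s in symbols]
```

A group the model learns to be accuracy-critical would get a finer step (lower distortion, more bits); an unimportant group a coarser one, realizing the rate-accuracy trade-off per group.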
  2. We propose a novel algorithm for quantizing continuous latent representations in trained models. Our approach applies to deep probabilistic models, such as variational autoencoders (VAEs), and enables both data and model compression. Unlike current end-to-end neural compression methods that cater the model to a fixed quantization scheme, our algorithm separates model design and training from quantization. Consequently, our algorithm enables “plug-and-play” compression at variable rate-distortion trade-off, using a single trained model. Our algorithm can be seen as a novel extension of arithmetic coding to the continuous domain, and uses adaptive quantization accuracy based on estimates of posterior uncertainty. Our experimental results demonstrate the importance of taking into account posterior uncertainties, and show that image compression with the proposed algorithm outperforms JPEG over a wide range of bit rates using only a single standard VAE. Further experiments on Bayesian neural word embeddings demonstrate the versatility of the proposed method. 
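The role of posterior uncertainty can be illustrated with a toy quantizer whose step size scales with each latent dimension's posterior standard deviation. The proportional rule below is an assumption for illustration; the abstract's method extends arithmetic coding to the continuous domain rather than using a fixed grid:

```python
def adaptive_quantize(mu, sigma, base_step=0.1):
    # Dimensions the posterior is uncertain about tolerate coarser
    # quantization at little distortion cost, so the step grows with sigma.
    out = []
    for m, s in zip(mu, sigma):
        step = base_step * s
        out.append(round(m / step) * step)
    return out
```

For two dimensions with equal means but different uncertainties, the high-uncertainty dimension is snapped to a much coarser grid, spending fewer bits where precision matters least.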
  4. The incorporation of high‐performance optoelectronic devices into photonic neuromorphic processors can substantially accelerate computationally intensive matrix multiplication operations in machine learning (ML) algorithms. However, the conventional designs of individual devices and systems are largely disconnected, and system optimization is limited to manual exploration of a small design space. Here, a device‐system end‐to‐end design methodology is reported to optimize a free‐space optical general matrix multiplication (GEMM) hardware accelerator by engineering a spatially reconfigurable array made from chalcogenide phase change materials. With a highly parallelized integrated hardware emulator with experimental information, the design of the unit device to directly optimize GEMM calculation accuracy is achieved by exploring a large parameter space through reinforcement learning algorithms, including a deep Q‐learning neural network, Bayesian optimization, and their cascaded approach. The algorithm‐generated physical quantities show a clear correlation between system performance metrics and device specifications. Furthermore, physics‐aware training approaches are employed to deploy the optimized hardware to the tasks of image classification, materials discovery, and closed‐loop design of optical ML accelerators. The demonstrated framework offers insights into the end‐to‐end co‐design of optoelectronic devices and systems with reduced human supervision and domain knowledge barriers.
  5. In this paper, we propose a convolutional neural network (CNN) based, scenario-dependent and sensor (mobile device) adaptable hierarchical classification framework. Our proposed framework is designed to automatically categorize face data captured under various challenging conditions before the FR algorithms (pre-processing, feature extraction and matching) are used. First, a unique multi-sensor database (using Samsung S4 Zoom, Nokia 1020, iPhone 5S and Samsung S5 phones) is collected, containing face images captured indoors and outdoors, with yaw angles from -90° to +90°, and at two different distances, i.e., 1 and 10 meters. To cope with pose variations, face detection and pose estimation algorithms are used to classify the facial images into a frontal or a non-frontal class. Next, our proposed framework performs tri-level hierarchical classification as follows: at Level 1, face images are classified based on phone type; at Level 2, face images are further classified into indoor and outdoor images; and finally, at Level 3, face images are classified into close-distance (1 m) and far, low-quality (10 m) categories. Experimental results show that classification accuracy is scenario dependent, ranging from 95% to more than 98% for Level 2 and from 90% to more than 99% for Level 3 classification. A set of experiments indicates that grouping the data before face matching results in a significantly improved rank-1 identification rate compared to the original (all vs. all) biometric system.
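The tri-level grouping above can be sketched as a chain of coarse classifiers whose combined output selects a sub-gallery before matching. The classifier functions below are hypothetical stand-ins operating on pre-computed attributes, not the paper's CNN models:

```python
def hierarchical_route(sample, levels):
    """Apply a sequence of (name, classifier) pairs in order; the
    resulting path identifies which sub-gallery to match against."""
    return [(name, clf(sample)) for name, clf in levels]

# Hypothetical coarse classifiers standing in for the Level 1-3 CNNs.
levels = [
    ("phone",    lambda s: s["phone"]),
    ("scene",    lambda s: "indoor" if s["indoor"] else "outdoor"),
    ("distance", lambda s: "close" if s["meters"] <= 1 else "far"),
]
```

Routing a probe through all three levels restricts matching to one of the phone/scene/distance cells, which is the grouping that yields the reported rank-1 improvement over all-vs-all matching.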