MoNet: Moments Embedding Network

Bilinear pooling has recently been proposed as a feature encoding layer, which can be used after the convolutional layers of a deep network to improve performance in multiple vision tasks. Unlike conventional global average pooling or fully connected layers, bilinear pooling gathers 2nd-order information in a translation-invariant fashion. However, a serious drawback of this family of pooling layers is their dimensionality explosion. Approximate pooling methods with compact properties have been explored to resolve this weakness. Additionally, recent results have shown that significant performance gains can be achieved by adding 1st-order information and applying matrix normalization to regularize unstable higher-order information. However, combining compact pooling with matrix normalization and other-order information has not been explored until now. In this paper, we unify bilinear pooling and the global Gaussian embedding layers through the empirical moment matrix. In addition, we propose a novel sub-matrix square-root layer, which can be used to normalize the output of the convolution layer directly and mitigate the dimensionality problem with off-the-shelf compact pooling methods. Our experiments on three widely used fine-grained classification datasets illustrate that our proposed architecture, MoNet, can achieve similar or better performance than the state-of-the-art G2DeNet. Furthermore, when combined with compact pooling techniques, MoNet obtains comparable performance with encoded features with 96% fewer dimensions.
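As a concrete illustration of the moment-matrix view described in the abstract, here is a minimal NumPy sketch (illustrative only, not the paper's implementation): appending a constant 1 to each local descriptor lets a single outer-product average carry both 1st- and 2nd-order statistics, and flattening the result exposes the quadratic dimensionality explosion that compact pooling targets.

```python
import numpy as np

def moment_pooling(features):
    """Empirical moment-matrix pooling over spatial locations.

    features: (N, d) array of local convolutional descriptors,
    one row per spatial location. Appending a constant 1 to each
    descriptor makes the outer-product average carry both 1st- and
    2nd-order moments in one (d+1) x (d+1) matrix, which is the
    sense in which bilinear pooling and Gaussian embedding unify.
    """
    n, d = features.shape
    aug = np.hstack([features, np.ones((n, 1))])  # (N, d+1)
    m = aug.T @ aug / n                           # (d+1, d+1) moment matrix
    vec = m.reshape(-1)                           # flatten: (d+1)^2 dims
    # standard element-wise signed square root + l2 normalization
    vec = np.sign(vec) * np.sqrt(np.abs(vec))
    return vec / (np.linalg.norm(vec) + 1e-12)
```

For a typical d = 512 convolutional feature, the flattened output already has (512 + 1)^2 ≈ 263k dimensions, which is why compact approximations matter.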
- Award ID(s): 1638234
- NSF-PAR ID: 10065771
- Journal Name: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
- Page Range / eLocation ID: 3175-3183
- Sponsoring Org: National Science Foundation
More Like this
- Deformable Convolutional Networks (DCN) have been proposed as a powerful tool to boost the representation power of Convolutional Neural Networks (CNN) in computer vision tasks via adaptive sampling of the input feature map. Much like vision transformers, DCNs utilize a more flexible inductive bias than standard CNNs and have also been shown to improve performance of particular models. For example, drop-in DCN layers were shown to increase the AP score of Mask RCNN by 10.6 points while introducing only 1% additional parameters and FLOPs, improving the state-of-the-art model at the time of publication. However, despite evidence that more DCN layers placed earlier in the network can further improve performance, we have not seen this trend continue with further scaling of deformations in CNNs, unlike for vision transformers. Benchmarking experiments show that a realistically sized DCN layer (64H×64W, 64 in/out channels) incurs a 4× slowdown on a GPU platform, discouraging the more ubiquitous use of deformations in CNNs. These slowdowns are caused by the irregular input-dependent access patterns of the bilinear interpolation operator, which has a disproportionately low arithmetic intensity (AI) compared to the rest of the DCN. To address the disproportionate slowdown of DCNs and enable their expanded use in CNNs, we propose DefT, a series of workload-aware optimizations for DCN kernels. DefT identifies performance bottlenecks in DCNs and fuses specific operators that are observed to limit DCN AI. Our approach also uses statistical information of DCN workloads to adapt the workload tiling to the DCN layer dimensions, minimizing costly out-of-boundary input accesses. Experimental results show that DefT mitigates up to half of the DCN slowdown over the current state-of-the-art PyTorch implementation. This translates to a layerwise speedup of up to 134% and a reduction of normalized training time of 46% on a fully DCN-enabled ResNet model.
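The bilinear interpolation operator blamed for the slowdown can be sketched as follows (an illustrative NumPy version, not the DefT or PyTorch kernel): each deformable sample is a data-dependent gather of four neighbors plus a handful of multiply-adds, hence the low arithmetic intensity.

```python
import numpy as np

def bilinear_sample(fmap, y, x):
    """Sample a feature map at a fractional (y, x) location.

    fmap: (H, W) array. This is the irregular, input-dependent
    gather that gives deformable convolution its low arithmetic
    intensity: four reads and a few multiply-adds per sample.
    Out-of-boundary neighbors are treated as zero.
    """
    h, w = fmap.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    wy, wx = y - y0, x - x0
    out = 0.0
    for dy, py in ((0, 1.0 - wy), (1, wy)):
        for dx, px in ((0, 1.0 - wx), (1, wx)):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < h and 0 <= xx < w:
                out += py * px * fmap[yy, xx]
    return out
```

Because the four addresses depend on learned, per-input offsets, the reads cannot be coalesced the way a dense convolution's can, which is what the workload-aware tiling in DefT targets.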
- Modern deep neural networks (DNNs) often require high memory consumption and large computational loads. In order to deploy DNN algorithms efficiently on edge or mobile devices, a series of DNN compression algorithms have been explored, including factorization methods. Factorization methods approximate the weight matrix of a DNN layer with the multiplication of two or multiple low-rank matrices. However, it is hard to measure the ranks of DNN layers during the training process. Previous works mainly induce low rank through implicit approximations or via a costly singular value decomposition (SVD) process on every training step. The former approach usually induces a high accuracy loss while the latter has low efficiency. In this work, we propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD on every step. SVD training first decomposes each layer into the form of its full-rank SVD, then performs training directly on the decomposed weights. We add orthogonality regularization to the singular vectors, which ensures a valid SVD form and avoids gradient vanishing/exploding. Low rank is encouraged by applying sparsity-inducing regularizers on the singular values of each layer. Singular value pruning is applied at the end to explicitly reach a low-rank model. We empirically show that SVD training can significantly reduce the rank of DNN layers and achieve a greater reduction in computational load under the same accuracy, compared to not only previous factorization methods but also state-of-the-art filter pruning methods.
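A rough sketch of the two ingredients described above — orthogonality regularization on the factored weights and sparsity plus pruning on the singular values — might look like this in NumPy (names and penalty forms are illustrative, not the paper's code):

```python
import numpy as np

def svd_train_regularizers(u, s, v):
    """Regularization terms for training a weight kept factored
    as W = U @ diag(s) @ V.T.

    Orthogonality penalties keep U and V close to valid
    singular-vector bases (and help against gradient
    vanishing/exploding); an l1 penalty on s pushes singular
    values toward zero so they can be pruned afterward.
    """
    ortho = (np.linalg.norm(u.T @ u - np.eye(u.shape[1])) ** 2
             + np.linalg.norm(v.T @ v - np.eye(v.shape[1])) ** 2)
    sparsity = np.abs(s).sum()
    return ortho, sparsity

def prune_singular_values(u, s, v, keep):
    """Explicitly reach a low-rank factorization by keeping the
    `keep` largest singular values."""
    idx = np.argsort(s)[::-1][:keep]
    return u[:, idx], s[idx], v[:, idx]
```

In training, the two penalty terms would be scaled and added to the task loss; the pruning step runs once at the end rather than on every step.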
- The ever-increasing number of layers, millions of parameters, and large data volume make deep learning workloads resource-intensive and power-hungry. In this paper, we develop a convolutional neural network (CNN) acceleration framework, named MLCNN, which explores algorithm-hardware co-design to achieve cross-layer cooperative optimization and acceleration. MLCNN dramatically reduces computation and on-/off-chip communication, improving CNN performance. To achieve this, MLCNN reorders the position of nonlinear activation layers and pooling layers, which we prove results in a negligible accuracy loss; then the convolutional layer and pooling layer are co-optimized by means of redundant multiplication elimination, local addition reuse, and global addition reuse. To the best of our knowledge, MLCNN is the first of its kind that incorporates cooperative optimization across convolutional, activation, and pooling layers. We further customize the MLCNN accelerator to take full advantage of cross-layer CNN optimization to reduce both computation and on-/off-chip communication. Our analysis shows that MLCNN can significantly reduce (up to 98%) multiplications and additions. We have implemented a prototype of MLCNN and evaluated its performance on several widely used CNN models using both an accelerator-level cycle/energy model and an RTL implementation. Experimental results show that MLCNN achieves 3.2× speedup and 2.9× energy efficiency compared with dense CNNs. MLCNN's optimization methods are orthogonal to other CNN acceleration techniques, such as quantization and pruning. Combined with quantization, our quantized MLCNN gains a 12.8× speedup and 11.3× energy efficiency compared with DCNN.
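For the common ReLU + max-pooling pair, the reordering described above is in fact exactly lossless, since taking a maximum commutes with any monotonic non-decreasing activation; a small illustrative NumPy check (not the MLCNN code):

```python
import numpy as np

def maxpool2(x):
    """2x2 max pooling with stride 2 on an (H, W) array
    (trailing odd rows/columns are dropped)."""
    h, w = x.shape
    return (x[:h - h % 2, :w - w % 2]
            .reshape(h // 2, 2, w // 2, 2)
            .max(axis=(1, 3)))

def relu(x):
    return np.maximum(x, 0)
```

Because `relu(maxpool2(x))` equals `maxpool2(relu(x))`, pooling can run first, so the activation (and any preceding per-element work) touches 4× fewer values in the 2×2 case.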
- Self-supervised training methods for transformers have demonstrated remarkable performance across various domains. Previous transformer-based models, such as masked autoencoders (MAE), typically utilize a single normalization layer for both the [CLS] symbol and the tokens. We propose in this paper a simple modification that employs separate normalization layers for the tokens and the [CLS] symbol to better capture their distinct characteristics and enhance downstream task performance. Our method aims to alleviate the potential negative effects of using the same normalization statistics for both token types, which may not be optimally aligned with their individual roles. We empirically show that by utilizing a separate normalization layer, the [CLS] embeddings can better encode the global contextual information and are distributed more uniformly in the anisotropic space. When replacing the conventional normalization layer with the two separate layers, we observe an average 2.7% performance improvement across the image, natural language, and graph domains.
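The proposed modification can be sketched in a few lines of NumPy (illustrative only; the real layers sit inside a transformer backbone such as MAE): the [CLS] slot and the remaining tokens are normalized with their own affine parameters and statistics.

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-6):
    """Plain layer normalization over the last (feature) axis."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def separate_norm(seq, cls_params, tok_params):
    """Normalize the [CLS] embedding and the remaining tokens with
    separate affine parameters, as the abstract proposes.

    seq: (L, D) array with seq[0] the [CLS] embedding. Parameter
    handling here is illustrative; actual layer placement follows
    the backbone being modified.
    """
    cls = layer_norm(seq[:1], *cls_params)
    toks = layer_norm(seq[1:], *tok_params)
    return np.concatenate([cls, toks], axis=0)
```

Keeping two parameter sets lets the [CLS] path learn scale/shift statistics suited to its global-summary role instead of sharing those of the patch or word tokens.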