%A Li, X.
%A Sun, W.
%A Wu, T.
%E Vedaldi, A.
%D 2020
%M OSTI ID: 10279276
%T Attentive Normalization
%R https://doi.org/10.1007/978-3-030-58520-4_5
%X In state-of-the-art deep neural networks, both feature normalization and feature attention have become ubiquitous. They are usually studied as separate modules, however. In this paper, we propose a light-weight integration of the two schemas and present Attentive Normalization (AN). Instead of learning a single affine transformation, AN learns a mixture of affine transformations and uses their weighted sum as the final affine transformation applied to re-calibrate features in an instance-specific way. The weights are learned by leveraging channel-wise feature attention. In experiments, we test the proposed AN with four representative neural architectures on the ImageNet-1000 classification benchmark and the MS-COCO 2017 object detection and instance segmentation benchmark. AN obtains consistent performance improvements for the different architectures in both benchmarks, with absolute top-1 accuracy gains on ImageNet-1000 between 0.5\% and 2.7\%, and absolute gains of up to 1.8\% bounding-box AP and 2.2\% mask AP on MS-COCO, respectively. We observe that the proposed AN provides a strong alternative to the widely used Squeeze-and-Excitation (SE) module. The source code is publicly available at \href{https://github.com/iVMCL/AOGNet-v2}{the ImageNet Classification Repo} and \href{https://github.com/iVMCL/AttentiveNorm\_Detection}{the MS-COCO Detection and Segmentation Repo}.
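The mechanism described in the abstract (a mixture of K learned affine transformations, combined per instance by channel-attention weights and applied after feature standardization) can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration, not the authors' released code: the module name AttentiveNorm2d, the default of 5 mixture components, the hidden width of the attention head, the sigmoid gating, and the use of the raw input for the attention pooling are all illustrative choices; consult the linked repos for the actual implementation.

import torch
import torch.nn as nn

class AttentiveNorm2d(nn.BatchNorm2d):
    """Illustrative sketch of Attentive Normalization (AN).

    Standardizes features as in BatchNorm (affine=False), then applies an
    instance-specific affine transform formed as an attention-weighted
    sum of K learned (gamma_k, beta_k) pairs.
    """
    def __init__(self, num_features, num_mixtures=5, hidden_dim=None, **kwargs):
        super().__init__(num_features, affine=False, **kwargs)
        hidden_dim = hidden_dim or max(num_features // 16, num_mixtures)
        # K affine components: gamma, beta in R^{K x C}
        self.gamma = nn.Parameter(torch.ones(num_mixtures, num_features))
        self.beta = nn.Parameter(torch.zeros(num_mixtures, num_features))
        # Channel-wise attention head: global average pool -> small MLP -> K weights
        # (sigmoid gating is an assumption; other gatings are possible)
        self.attn = nn.Sequential(
            nn.Linear(num_features, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, num_mixtures),
            nn.Sigmoid(),
        )

    def forward(self, x):
        out = super().forward(x)            # standardized features, N x C x H x W
        w = self.attn(x.mean(dim=(2, 3)))   # instance-specific mixture weights, N x K
        gamma = w @ self.gamma              # weighted-sum affine scale, N x C
        beta = w @ self.beta                # weighted-sum affine shift, N x C
        return gamma[:, :, None, None] * out + beta[:, :, None, None]

# Usage: AN is a drop-in replacement for a BatchNorm2d layer.
an = AttentiveNorm2d(64, num_mixtures=5)
y = an(torch.randn(8, 64, 32, 32))
print(y.shape)  # torch.Size([8, 64, 32, 32])

Note the contrast with Squeeze-and-Excitation mentioned in the abstract: SE rescales the features themselves with channel gates, whereas AN uses the attention weights to select among affine re-calibration parameters attached to the normalization layer.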