Abstract: We extend the free convolution of Brown measures of $R$-diagonal elements introduced by Kösters and Tikhomirov [28] to fractional powers. We then show how this fractional free convolution arises naturally when studying the roots of random polynomials with independent coefficients under repeated differentiation. When the proportion of derivatives to the degree approaches one, we establish central limit theorem-type behavior and discuss stable distributions.
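For context on "fractional powers," here is a minimal scalar-valued illustration (it does not capture the paper's Brown-measure construction for $R$-diagonal elements): in scalar free probability, the fractional free convolution power $\mu^{\boxplus s}$ is conventionally defined for $s \ge 1$ by linear scaling of the $R$-transform.

```latex
% Scalar illustration only (not the Brown-measure construction of the paper):
\begin{align*}
  G_\mu(z) &= \int_{\mathbb{R}} \frac{d\mu(t)}{z - t},
  & R_\mu(z) &= G_\mu^{-1}(z) - \frac{1}{z},
  & R_{\mu^{\boxplus s}}(z) &= s\, R_\mu(z), \quad s \ge 1.
\end{align*}
```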
Tree convolution for probability distributions with unbounded support
We develop the complex-analytic viewpoint on the tree convolutions studied by the second author and Weihua Liu in Jekel and Liu (2020), which generalize the free, boolean, monotone, and orthogonal convolutions. In particular, for each rooted subtree $T$ of the $N$-regular tree (with vertices labeled by alternating strings), we define the convolution $\boxplus_T(\mu_1, \dots, \mu_N)$ for arbitrary probability measures $\mu_1, \dots, \mu_N$ on $\mathbb{R}$ using a certain fixed-point equation for the Cauchy transforms. The convolution operations respect the operad structure of the tree operad from Jekel and Liu (2020). We prove a general limit theorem for iterated $T$-free convolution similar to Bercovici and Pata's results in the free case (Bercovici and Pata 1999), and we deduce limit theorems for measures in the domain of attraction of each of the classical stable laws.
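For orientation, here is a sketch of the simplest instance of such a fixed-point equation: the two-measure free case (the paper's $T$-free equation generalizes this, and its exact form may differ). With $F_\mu(z) = 1/G_\mu(z)$, the free convolution is characterized by subordination functions solving a fixed-point system.

```latex
% Illustration only: the two-measure free case.
\begin{align*}
  G_\mu(z) &= \int_{\mathbb{R}} \frac{d\mu(t)}{z - t}, &
  F_\mu(z) &= \frac{1}{G_\mu(z)}, \\
  F_{\mu_1 \boxplus \mu_2}(z) &= F_{\mu_1}(\omega_1(z)) = F_{\mu_2}(\omega_2(z)), &
  \omega_1(z) + \omega_2(z) &= z + F_{\mu_1 \boxplus \mu_2}(z),
\end{align*}
% where \omega_1(z) can be computed as the attracting fixed point of the map
\begin{equation*}
  w \;\mapsto\; w + F_{\mu_2}\!\bigl(z + F_{\mu_1}(w) - w\bigr) - F_{\mu_1}(w).
\end{equation*}
```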
- Award ID(s): 2002826
- PAR ID: 10432681
- Date Published:
- Journal Name: Alea
- Volume: 18
- Issue: 2
- ISSN: 1980-0436
- Page Range / eLocation ID: 1585–1623
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
We introduce a framework to study discrete-variable (DV) quantum systems based on qudits. It relies on notions of a mean state (MS), a minimal stabilizer-projection state (MSPS), and a new convolution. Some interesting consequences are: The MS is the closest MSPS to a given state with respect to the relative entropy; the MS is extremal with respect to the von Neumann entropy, demonstrating a “maximal entropy principle in DV systems.” We obtain a series of inequalities for quantum entropies and for Fisher information based on convolution, giving a “second law of thermodynamics for quantum convolutions.” We show that the convolution of two stabilizer states is a stabilizer state. We establish a central limit theorem, based on iterating the convolution of a zero-mean quantum state, and show this converges to its MS. The rate of convergence is characterized by the “magic gap,” which we define in terms of the support of the characteristic function of the state. We elaborate on two examples: the DV beam splitter and the DV amplifier.
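As a loose classical analogue of the convergence statement (a hypothetical sketch, not the paper's quantum convolution: the group is Z_d, the uniform distribution plays the role of the MS, and the "gap" is read off the classical characteristic function):

```python
import numpy as np

# Classical analogue on the cyclic group Z_d (illustration only): iterated
# convolution of a distribution converges to the uniform distribution, at a
# rate governed by a spectral gap of its characteristic function (its DFT).
d = 5
p = np.array([0.5, 0.2, 0.1, 0.1, 0.1])        # a distribution on Z_d
gap = 1 - np.sort(np.abs(np.fft.fft(p)))[-2]   # 1 - largest nontrivial |p-hat(k)|

steps = 30
q = np.zeros(d)
q[0] = 1.0                                      # point mass: identity for *
for _ in range(steps):
    q = np.fft.ifft(np.fft.fft(q) * np.fft.fft(p)).real   # q <- q * p on Z_d

# distance to uniform decays on the order of (1 - gap)**steps
print(np.abs(q - 1 / d).sum(), (1 - gap) ** steps)
```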
Applications of neural networks like MLPs and ResNets in temporal data mining have led to improvements on the problem of time series classification. Recently, a new class of networks called Temporal Convolution Networks (TCNs) has been proposed for various time series tasks. Instead of time-invariant convolutions, they use temporally causal convolutions, which makes them more constrained than ResNets but surprisingly good at generalization. This raises an important question: how does a network with causal convolutions solve these tasks compared to a network with acausal convolutions? As a first attempt at answering this question, we analyze different architectures through the lens of representational subspace similarity. We demonstrate that the evolution of input representations across the layers of TCNs is markedly different from that in ResNets and MLPs. We find that acausal networks are prone to forming groups of similar layers, whereas TCNs learn representations that are much more diverse throughout the network. Next, we study the convergence properties of internal layers across different architecture families and discover that the behaviour of layers inside acausal networks is more homogeneous than in TCNs. Our extensive empirical studies offer new insights into the internal mechanisms of convolution networks in the domain of time series analysis and may assist practitioners in gaining a deeper understanding of each network.
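A minimal sketch of the distinction at the heart of the question (illustrative NumPy only; function names are ours, and real TCNs/ResNets add dilations, channels, and nonlinearities): a causal convolution pads only on the left, so the output at time t never sees inputs after t, while an acausal "same" convolution is centered and does.

```python
import numpy as np

def causal_conv1d(x, w):
    """Causal convolution: output t depends only on x[<= t]."""
    k = len(w)
    xp = np.pad(x, (k - 1, 0))              # pad on the left only
    return np.array([xp[t:t + k] @ w[::-1] for t in range(len(x))])

def acausal_conv1d(x, w):
    """Acausal 'same' convolution: centered, so output t also sees the future."""
    k = len(w)
    xp = np.pad(x, ((k - 1) // 2, k // 2))  # centered padding
    return np.array([xp[t:t + k] @ w[::-1] for t in range(len(x))])

x = np.arange(6, dtype=float)
w = np.array([0.25, 0.5, 0.25])
print(causal_conv1d(x, w))    # y[0] uses only x[0]
print(acausal_conv1d(x, w))   # y[0] already uses x[1]
```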
Convolution is a fundamental operation with diverse applications in signal processing, computer vision, and machine learning. This article reviews three distinct convolutions: linear convolution (also referred to as aperiodic convolution), positive-wrapped convolution (PWC) (also known as circular convolution), and negative-wrapped convolution (NWC). Additionally, we propose an alternative approach to computing linear convolution without zero padding by leveraging the PWC and NWC. We compare two fast Fourier transform (FFT)-based methods to compute linear convolution: the traditional zero-padded PWC method and a new method based on the PWC and NWC. Through a detailed analysis of the flowgraphs (FGs), we demonstrate the equivalence of these methods while highlighting their unique characteristics. We show that computing the NWC using the weighted PWC method is equivalent to a part of the linear convolution computation with zero padding. Furthermore, it is possible to extract the PWC and NWC from structures to compute linear convolution with zero padding, where the last butterfly stage can be eliminated. This article aims to establish a clear connection among PWC, NWC, and linear convolution, illustrating new perspectives on computing different convolutions.
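A sketch of this decomposition under our reading of the abstract (NumPy; function names are ours): the NWC is computed as a weighted PWC, and together the PWC and NWC recover the linear convolution of two length-N sequences without zero padding.

```python
import numpy as np

def pwc(x, h):
    """Positive-wrapped (circular) convolution via the FFT."""
    return np.fft.ifft(np.fft.fft(x) * np.fft.fft(h))

def nwc(x, h):
    """Negative-wrapped convolution as a weighted PWC: pre-multiply both
    inputs by powers of a 2N-th root of unity, then post-divide."""
    w = np.exp(-1j * np.pi * np.arange(len(x)) / len(x))
    return pwc(x * w, h * w) / w

N = 8
x, h = np.random.rand(N), np.random.rand(N)
lin = np.convolve(x, h)              # reference linear convolution, length 2N-1
lo = (pwc(x, h) + nwc(x, h)) / 2     # equals lin[0:N]
hi = (pwc(x, h) - nwc(x, h)) / 2     # equals lin[N:2N-1] (last entry ~ 0)
assert np.allclose(lo.real, lin[:N]) and np.allclose(hi.real[:-1], lin[N:])
```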
Flow-based generative models have recently become one of the most efficient approaches to modeling data generation. Indeed, they are constructed from a sequence of invertible and tractable transformations. Glow first introduced a simple type of generative flow using an invertible 1×1 convolution. However, the 1×1 convolution suffers from limited flexibility compared to standard convolutions. In this paper, we propose a novel invertible n×n convolution approach that overcomes the limitations of the invertible 1×1 convolution. In addition, our proposed network is not only tractable and invertible but also uses fewer parameters than standard convolutions. Experiments on the CIFAR-10, ImageNet, and Celeb-HQ datasets have shown that our invertible n×n convolution helps to improve the performance of generative models significantly.
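For reference, here is the 1×1 baseline that the proposed n×n construction generalizes (a minimal NumPy sketch of a Glow-style invertible 1×1 convolution as we understand it; shapes and names are illustrative): a 1×1 convolution over C channels is a per-pixel multiply by a C×C matrix M, so inversion and the flow's log-determinant reduce to linear algebra on M.

```python
import numpy as np

# Minimal sketch of an invertible 1x1 convolution (the Glow baseline);
# names and shapes here are illustrative, not the paper's implementation.
rng = np.random.default_rng(0)
H, W_, C = 4, 4, 3
x = rng.normal(size=(H, W_, C))      # one feature map, channels last
M = rng.normal(size=(C, C))          # random matrix, almost surely invertible

y = x @ M.T                                           # forward: per-pixel x -> Mx
logdet = H * W_ * np.log(np.abs(np.linalg.det(M)))    # log|det Jacobian| of the flow
x_rec = y @ np.linalg.inv(M).T                        # exact inverse pass
assert np.allclose(x, x_rec)
print("log-det contribution:", logdet)
```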