
This content will become publicly available on January 1, 2023

Title: Persistence landscapes of affine fractals
Abstract: We develop a method for calculating the persistence landscapes of affine fractals using the parameters of the corresponding transformations. Given an iterated function system of affine transformations that satisfies a certain compatibility condition, we prove that there exists an affine transformation acting on the space of persistence landscapes, which intertwines the action of the iterated function system. This latter affine transformation is a strict contraction and its unique fixed point is the persistence landscape of the affine fractal. We present several examples of the theory as well as confirm the main results through simulations.
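The abstract's central mechanism is the Banach fixed-point theorem: a strict contraction on a complete space has a unique fixed point, reached by iterating the map from any starting point. The sketch below illustrates that idea with a generic affine contraction on a finite-dimensional space; the matrix `A` and shift `b` are illustrative stand-ins, not the landscape operator constructed in the paper.

```python
import numpy as np

def fixed_point(A, b, x0, tol=1e-12, max_iter=10_000):
    """Iterate x -> A @ x + b until successive iterates agree to tolerance."""
    x = x0
    for _ in range(max_iter):
        x_next = A @ x + b
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# A strict contraction: the spectral norm of A is 1/2 < 1, so iteration
# converges to the unique solution of x = A x + b, i.e. x = (I - A)^{-1} b.
A = 0.5 * np.eye(3)
b = np.array([1.0, 2.0, 3.0])
x_star = fixed_point(A, b, np.zeros(3))
```

In the paper, the same argument runs on the (infinite-dimensional) space of persistence landscapes; the compatibility condition is what guarantees the induced map is affine and contractive there.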
Award ID(s): 1830254, 1934884
Publication Date:
Journal Name: Demonstratio Mathematica
Page Range or eLocation-ID: 163 to 192
Sponsoring Org: National Science Foundation
More Like This
  1. Deep learning models are vulnerable to adversarial examples. Most current adversarial attacks add pixel-wise perturbations restricted to some L^p-norm, and defense models are likewise evaluated on adversarial examples confined to L^p-norm balls. However, we wish to explore whether adversarial examples exist beyond L^p-norm balls and what their implications are for attacks and defenses. In this paper, we focus on adversarial images generated by transformations. We start with color transformation and propose two gradient-based attacks. Since the L^p-norm is inappropriate for measuring image quality in the transformation space, we instead use the similarity between transformations and the Structural Similarity Index. Next, we explore a larger transformation space consisting of combinations of color and affine transformations. We evaluate our transformation attacks on three data sets --- CIFAR10, SVHN, and ImageNet --- and their corresponding models. Finally, we perform retraining defenses to evaluate the strength of our attacks. The results show that transformation attacks are powerful: they find high-quality adversarial images that have higher transferability and misclassification rates than C&W's L^p attacks, especially at high confidence levels, and they are significantly harder to defend against by retraining than C&W's L^p attacks. More importantly, exploring different attack spaces makes it more challenging to train a universally robust model.
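The point that transformation-space perturbations escape L^p balls can be seen with a minimal sketch (not the paper's attack): a channel-wise affine color transformation moves many pixels far outside a typical attack budget while preserving the image's structure. The image, scales, and shifts below are illustrative stand-ins.

```python
import numpy as np

def color_transform(img, scale, shift):
    """Apply a per-channel affine map a * img + b and clip back to [0, 1]."""
    out = img * scale.reshape(1, 1, -1) + shift.reshape(1, 1, -1)
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))          # a stand-in CIFAR10-sized image
scale = np.array([1.1, 0.9, 1.0])      # mild per-channel re-scaling
shift = np.array([0.05, -0.05, 0.0])   # mild per-channel brightness shift
adv = color_transform(img, scale, shift)

# The pixel-wise L^inf perturbation easily exceeds a common attack
# budget such as 8/255, even though the transformation itself is small.
linf = np.abs(adv - img).max()
```

This is why the abstract measures distortion by similarity between transformations (plus SSIM) rather than by pixel-space L^p distance.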
  2. Abstract: We investigate the Hölder geometry of curves generated by iterated function systems (IFS) in a complete metric space. A theorem of Hata from 1985 asserts that every connected attractor of an IFS is locally connected and path-connected. We give a quantitative strengthening of Hata's theorem. First we prove that every connected attractor of an IFS is (1/s)-Hölder path-connected, where s is the similarity dimension of the IFS. Then we show that every connected attractor of an IFS is parameterized by a (1/α)-Hölder curve for all α > s. At the endpoint, α = s, a theorem of Remes from 1998 already established that connected self-similar sets in Euclidean space that satisfy the open set condition are parameterized by (1/s)-Hölder curves. In a secondary result, we show how to promote Remes' theorem to self-similar sets in complete metric spaces, but in this setting require the attractor to have positive s-dimensional Hausdorff measure in lieu of the open set condition. To close the paper, we determine sharp Hölder exponents of parameterizations in the class of connected self-affine Bedford-McMullen carpets and build parameterizations of self-affine sponges. An interesting phenomenon emerges in the self-affine setting. While the optimal parameter s for a self-similar curve in ℝ^n is always at most the ambient dimension n, the optimal parameter s for a self-affine curve in ℝ^n may be strictly greater than n.
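A connected IFS attractor of the kind both abstracts discuss can be sampled with the standard "chaos game": repeatedly apply a randomly chosen map of the system. The three maps below generate the Sierpinski triangle (similarity dimension s = log 3 / log 2); they are a textbook example chosen for illustration, not one drawn from either paper.

```python
import numpy as np

def chaos_game(maps, n_points=10_000, seed=0):
    """Sample points on the attractor of an IFS of affine maps (A, b)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    pts = np.empty((n_points, 2))
    for i in range(n_points):
        A, b = maps[rng.integers(len(maps))]
        x = A @ x + b            # apply one randomly chosen affine map
        pts[i] = x
    return pts

# Three similarities with ratio 1/2 whose attractor is the Sierpinski
# triangle with vertices (0,0), (1,0), (0.5,1).
half = 0.5 * np.eye(2)
maps = [(half, np.array([0.0, 0.0])),
        (half, np.array([0.5, 0.0])),
        (half, np.array([0.25, 0.5]))]
pts = chaos_game(maps)
```

Since each map here sends the unit square into itself, every sampled point stays in [0, 1] x [0, 1].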
  3. Vedaldi, A. (Ed.)
    In state-of-the-art deep neural networks, both feature normalization and feature attention have become ubiquitous. They are usually studied as separate modules, however. In this paper, we propose a light-weight integration between the two schemas and present Attentive Normalization (AN). Instead of learning a single affine transformation, AN learns a mixture of affine transformations and utilizes their weighted sum as the final affine transformation applied to re-calibrate features in an instance-specific way. The weights are learned by leveraging channel-wise feature attention. In experiments, we test the proposed AN using four representative neural architectures on the ImageNet-1000 classification benchmark and the MS-COCO 2017 object detection and instance segmentation benchmark. AN obtains consistent performance improvements for different neural architectures in both benchmarks, with an absolute increase in ImageNet-1000 top-1 accuracy between 0.5\% and 2.7\%, and absolute increases of up to 1.8\% and 2.2\% for bounding box and mask AP in MS-COCO, respectively. We observe that the proposed AN provides a strong alternative to the widely used Squeeze-and-Excitation (SE) module. The source codes are publicly available at \href{}{the ImageNet Classification Repo} and \href{\_Detection}{the MS-COCO Detection and Segmentation Repo}.
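The core idea in the AN abstract, blending K learned affine transformations with instance-specific attention weights instead of applying a single one, can be sketched as follows. The shapes, names, and the softmax over per-instance logits are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def attentive_affine(x, gammas, betas, logits):
    """Blend K affine re-calibrations with attention weights.

    x:      (N, C) normalized features
    gammas: (K, C) learned per-transform scales
    betas:  (K, C) learned per-transform shifts
    logits: (N, K) instance-specific attention logits
    """
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)   # softmax attention weights
    gamma = w @ gammas                  # (N, C) blended scale
    beta = w @ betas                    # (N, C) blended shift
    return gamma * x + beta             # instance-specific re-calibration

rng = np.random.default_rng(0)
N, C, K = 4, 8, 3
x = rng.standard_normal((N, C))
out = attentive_affine(x,
                       rng.standard_normal((K, C)),
                       rng.standard_normal((K, C)),
                       rng.standard_normal((N, K)))
```

With K = 1 this collapses to the usual single affine re-calibration (gamma * x + beta) of standard feature normalization.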
  4. Crystallization is fundamental to materials science and is central to a variety of applications, ranging from the fabrication of silicon wafers for microelectronics to the determination of protein structures. The basic picture is that a crystal nucleates from a homogeneous fluid by a spontaneous fluctuation that kicks the system over a single free-energy barrier. However, it is becoming apparent that nucleation is often more complicated than this simple picture and, instead, can proceed via multiple transformations of metastable structures along the pathway to the thermodynamic minimum. In this article, we observe, characterize, and model crystallization pathways using DNA-coated colloids. We use optical microscopy to investigate the crystallization of a binary colloidal mixture with single-particle resolution. We observe classical one-step pathways and nonclassical two-step pathways that proceed via a solid–solid transformation of a crystal intermediate. We also use enhanced sampling to compute the free-energy landscapes corresponding to our experiments and show that both one- and two-step pathways are driven by thermodynamics alone. Specifically, the two-step solid–solid transition is governed by a competition between two different crystal phases with free energies that depend on the crystal size. These results extend our understanding of available pathways to crystallization by showing that size-dependent thermodynamic forces can produce pathways with multiple crystal phases that interconvert without free-energy barriers and could provide approaches to controlling the self-assembly of materials made from colloids.
  5. Abstract: False theta functions form a family of functions with intriguing modular properties and connections to mock modular forms. In this paper, we take the first step towards investigating modular transformations of higher rank false theta functions, following the example of higher depth mock modular forms. In particular, we prove that under quite general conditions, a rank two false theta function is determined in terms of iterated, holomorphic, Eichler-type integrals. This provides a new method for examining their modular properties and we apply it in a variety of situations where rank two false theta functions arise. We first consider generic parafermion characters of vertex algebras of type $A_2$ and $B_2$. This requires a fairly non-trivial analysis of Fourier coefficients of meromorphic Jacobi forms of negative index, which is of independent interest. Then we discuss modularity of rank two false theta functions coming from superconformal Schur indices. Lastly, we analyze $\hat{Z}$-invariants of Gukov, Pei, Putrov, and Vafa for certain plumbing $\mathtt{H}$-graphs. Along the way, our method clarifies previous results on depth two quantum modularity.