

Title: Persistence landscapes of affine fractals
Abstract: We develop a method for calculating the persistence landscapes of affine fractals using the parameters of the corresponding transformations. Given an iterated function system of affine transformations that satisfies a certain compatibility condition, we prove that there exists an affine transformation acting on the space of persistence landscapes, which intertwines the action of the iterated function system. This latter affine transformation is a strict contraction and its unique fixed point is the persistence landscape of the affine fractal. We present several examples of the theory as well as confirm the main results through simulations.
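A minimal sketch of the setting described in the abstract, assuming the middle-thirds Cantor set as the affine fractal and a degree-0 Vietoris-Rips filtration; it is not the paper's algorithm, which works directly with the parameters of the transformations, but it shows the objects involved: an IFS sample of the attractor and its persistence landscape functions.

import itertools
import numpy as np

def ifs_orbit(maps, depth, seed=0.0):
    # All images of `seed` under words of length `depth` in the IFS.
    pts = []
    for word in itertools.product(maps, repeat=depth):
        x = seed
        for f in word:
            x = f(x)
        pts.append(x)
    return np.unique(np.array(pts))

def h0_landscape(points, ks, grid):
    # Degree-0 persistence landscapes lambda_k on `grid` for a sample in R:
    # every finite H0 bar is (0, gap between consecutive sorted points).
    deaths = np.diff(np.sort(points))
    tents = np.maximum(0.0, np.minimum(grid[None, :], deaths[:, None] - grid[None, :]))
    tents.sort(axis=0)                       # k-th largest tent value per grid point
    return {k: tents[-k, :] if k <= len(deaths) else np.zeros_like(grid) for k in ks}

# Middle-thirds Cantor set IFS (chosen only for illustration): x/3 and x/3 + 2/3.
cantor_ifs = [lambda x: x / 3.0, lambda x: x / 3.0 + 2.0 / 3.0]
sample = ifs_orbit(cantor_ifs, depth=8)
grid = np.linspace(0.0, 0.5, 501)
landscape = h0_landscape(sample, ks=[1, 2, 3], grid=grid)
print({k: float(v.max()) for k, v in landscape.items()})   # peak of each lambda_k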
Award ID(s):
1830254 1934884
NSF-PAR ID:
10329468
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Demonstratio Mathematica
Volume:
55
Issue:
1
ISSN:
2391-4661
Page Range / eLocation ID:
163 to 192
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Chechik, M.; Katoen, J.-P.; Leucker, M. (Eds.)
    Efficient verification algorithms for neural networks often depend on various abstract domains such as intervals, zonotopes, and linear star sets. The choice of the abstract domain presents an expressiveness vs. scalability trade-off: simpler domains are less precise but yield faster algorithms. This paper investigates the octatope abstract domain in the context of neural net verification. Octatopes are affine transformations of n-dimensional octagons—sets of unit-two-variable-per-inequality (UTVPI) constraints. Octatopes generalize the idea of zonotopes, which can be viewed as affine transformations of a box. On the other hand, octatopes can be considered a restriction of linear star sets, which are affine transformations of arbitrary H-polytopes. This distinction places octatopes firmly between zonotopes and star sets in expressive power, but what about the efficiency of decision procedures? An important analysis problem for neural networks is the exact range computation problem, which asks to compute the exact set of possible outputs given a set of possible inputs. For this, three computational procedures are needed: 1) optimization of a linear cost function; 2) affine mapping; and 3) over-approximating the intersection with a half-space. While zonotopes admit efficient solutions for these operations, star sets solve them via linear programming. We show that these operations are faster for octatopes than for the more expressive linear star sets. For octatopes, we reduce these problems to min-cost flow problems, which can be solved in strongly polynomial time using the Out-of-Kilter algorithm. Evaluating exact range computation on several ACAS Xu neural network benchmarks, we find that octatopes show promise as a practical abstract domain for neural network verification.
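A minimal Python sketch (not the paper's octatope procedure, which reduces the queries to min-cost flow) contrasting the two extremes the abstract compares: maximizing a linear cost over a zonotope has a closed form, while the same query over a linear star set, an affine image of an H-polytope, requires a linear program. The unit-box example at the end is arbitrary.

import numpy as np
from scipy.optimize import linprog

def zonotope_max(cost, center, generators):
    # max cost.x over {center + G a : a in [-1, 1]^m} has a closed form.
    return cost @ center + np.abs(cost @ generators).sum()

def star_set_max(cost, center, basis, C, d):
    # max cost.x over {center + V a : C a <= d} needs a linear program.
    res = linprog(-(cost @ basis), A_ub=C, b_ub=d, bounds=(None, None))
    return cost @ center - res.fun

# Sanity check on the unit box [-1, 1]^2 written both ways (illustrative only).
cost = np.array([1.0, 2.0])
box_C = np.vstack([np.eye(2), -np.eye(2)])
box_d = np.ones(4)
print(zonotope_max(cost, np.zeros(2), np.eye(2)),
      star_set_max(cost, np.zeros(2), np.eye(2), box_C, box_d))   # both 3.0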
  2. Deep learning models are vulnerable to adversarial examples. Most current adversarial attacks add pixel-wise perturbations restricted to some L^p-norm, and defense models are likewise evaluated on adversarial examples restricted to L^p-norm balls. However, adversarial examples exist beyond L^p-norm balls, and we wish to explore them and their implications for attacks and defenses. In this paper, we focus on adversarial images generated by transformations. We start with color transformations and propose two gradient-based attacks. Since the L^p-norm is inappropriate for measuring image quality in the transformation space, we instead use the similarity between transformations and the Structural Similarity Index. Next, we explore a larger transformation space consisting of combinations of color and affine transformations. We evaluate our transformation attacks on three data sets --- CIFAR10, SVHN, and ImageNet --- and their corresponding models. Finally, we perform retraining defenses to evaluate the strength of our attacks. The results show that transformation attacks are powerful: they find high-quality adversarial images with higher transferability and misclassification rates than C&W's L^p attacks, especially at high confidence levels, and they are significantly harder to defend against by retraining. More importantly, exploring different attack spaces makes it more challenging to train a universally robust model.
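A toy PyTorch sketch of the general idea, not the authors' two attacks: parametrize a per-channel affine color transform and adjust its parameters by gradient ascent on the classification loss, instead of perturbing pixels inside an L^p ball. The model handle, step count, and step size are placeholders.

import torch
import torch.nn.functional as F

def color_transform_attack(model, image, label, steps=50, lr=0.05):
    # Hypothetical illustration: attack the color transform's parameters,
    # not the pixels.  `model` maps a (1, 3, H, W) image to class logits.
    scale = torch.ones(1, 3, 1, 1, requires_grad=True)    # per-channel scale
    shift = torch.zeros(1, 3, 1, 1, requires_grad=True)   # per-channel shift
    opt = torch.optim.Adam([scale, shift], lr=lr)
    for _ in range(steps):
        adv = (image * scale + shift).clamp(0.0, 1.0)
        loss = -F.cross_entropy(model(adv), label)        # ascend the loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (image * scale + shift).clamp(0.0, 1.0).detach()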
  3.
    Abstract: We investigate the Hölder geometry of curves generated by iterated function systems (IFS) in a complete metric space. A theorem of Hata from 1985 asserts that every connected attractor of an IFS is locally connected and path-connected. We give a quantitative strengthening of Hata’s theorem. First we prove that every connected attractor of an IFS is (1/s)-Hölder path-connected, where s is the similarity dimension of the IFS. Then we show that every connected attractor of an IFS is parameterized by a (1/α)-Hölder curve for all α > s. At the endpoint, α = s, a theorem of Remes from 1998 already established that connected self-similar sets in Euclidean space that satisfy the open set condition are parameterized by (1/s)-Hölder curves. In a secondary result, we show how to promote Remes’ theorem to self-similar sets in complete metric spaces, but in this setting require the attractor to have positive s-dimensional Hausdorff measure in lieu of the open set condition. To close the paper, we determine sharp Hölder exponents of parameterizations in the class of connected self-affine Bedford-McMullen carpets and build parameterizations of self-affine sponges. An interesting phenomenon emerges in the self-affine setting. While the optimal parameter s for a self-similar curve in ℝ^n is always at most the ambient dimension n, the optimal parameter s for a self-affine curve in ℝ^n may be strictly greater than n.
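For readers who want the exponent above concretely: the similarity dimension s of an IFS of similarities with contraction ratios r_i is the unique root of the Moran equation Σ_i r_i^s = 1, and 1/s is the Hölder exponent in the theorem. A small sketch, using the Koch-curve ratios as an arbitrary example:

from scipy.optimize import brentq

def similarity_dimension(ratios):
    # Unique s > 0 with sum(r**s for r in ratios) == 1 (Moran equation).
    return brentq(lambda s: sum(r ** s for r in ratios) - 1.0, 1e-9, 100.0)

ratios = [1.0 / 3.0] * 4        # e.g. four maps contracting by 1/3 (Koch curve)
s = similarity_dimension(ratios)
print(s, 1.0 / s)               # s = log 4 / log 3 ~ 1.26, Holder exponent 1/s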
  4. Vedaldi, A. (Ed.)
    In state-of-the-art deep neural networks, both feature normalization and feature attention have become ubiquitous. They are usually studied as separate modules, however. In this paper, we propose a lightweight integration of the two schemes and present Attentive Normalization (AN). Instead of learning a single affine transformation, AN learns a mixture of affine transformations and uses their weighted sum as the final affine transformation applied to re-calibrate features in an instance-specific way. The weights are learned by leveraging channel-wise feature attention. In experiments, we test the proposed AN using four representative neural architectures on the ImageNet-1000 classification benchmark and the MS-COCO 2017 object detection and instance segmentation benchmark. AN obtains consistent performance improvements for different neural architectures in both benchmarks, with absolute increases in top-1 accuracy on ImageNet-1000 between 0.5% and 2.7%, and absolute increases of up to 1.8% and 2.2% for bounding-box and mask AP on MS-COCO, respectively. We observe that the proposed AN provides a strong alternative to the widely used Squeeze-and-Excitation (SE) module. The source code is publicly available in the ImageNet classification repo (https://github.com/iVMCL/AOGNet-v2) and the MS-COCO detection and segmentation repo (https://github.com/iVMCL/AttentiveNorm_Detection).
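A toy PyTorch sketch of the mechanism described above (the dimensions and the attention head are guesses, not the released implementation linked in the abstract): normalize the features, then re-calibrate them with an instance-specific weighted sum of k learned affine transformations whose mixture weights come from channel-wise attention.

import torch
import torch.nn as nn

class ToyAttentiveNorm(nn.Module):
    # Toy stand-in for AN: BatchNorm without its own affine step, followed by
    # an instance-specific mixture of k learned affine transformations.
    def __init__(self, channels, k=4):
        super().__init__()
        self.norm = nn.BatchNorm2d(channels, affine=False)
        self.gammas = nn.Parameter(torch.ones(k, channels))     # k scales
        self.betas = nn.Parameter(torch.zeros(k, channels))     # k shifts
        self.attn = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels, k), nn.Softmax(dim=1))
    def forward(self, x):
        w = self.attn(x)                                  # (N, k) mixture weights
        gamma = (w @ self.gammas)[:, :, None, None]       # (N, C, 1, 1)
        beta = (w @ self.betas)[:, :, None, None]
        return self.norm(x) * gamma + beta

print(ToyAttentiveNorm(16)(torch.randn(2, 16, 8, 8)).shape)   # torch.Size([2, 16, 8, 8])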
  5. Abstract

    In example-based inverse linear blend skinning (LBS), a collection of poses (e.g. animation frames) is given, and the goal is to find skinning weights and transformation matrices that closely reproduce the input. These poses may come from physical simulation, direct mesh editing, motion capture or another deformation rig. We provide a re-formulation of inverse skinning as a problem in high-dimensional Euclidean space. The transformation matrices applied to a vertex across all poses can be thought of as a point in high dimensions. We cast the inverse LBS problem as one of finding a tight-fitting simplex around these points (a well-studied problem in hyperspectral imaging). Although we do not observe transformation matrices directly, the 3D position of a vertex across all of its poses defines an affine subspace, or flat. We solve a ‘closest flat’ optimization problem to find points on these flats, and then compute a minimum-volume enclosing simplex whose vertices are the transformation matrices and whose barycentric coordinates are the skinning weights. We are able to create LBS rigs with state-of-the-art reconstruction error and state-of-the-art compression ratios for mesh animation sequences. Our solution does not consider weight sparsity or the rigidity of recovered transformations. We include observations and insights into the closest flat problem; its ideal solution and the optimal LBS reconstruction error remain open problems.
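A minimal numpy sketch of the forward model being inverted (shapes and names are illustrative, not the authors' code): linear blend skinning reproduces each pose of a vertex as a weighted sum of per-handle affine transforms applied to its rest position, and stacking a vertex's positions over all P poses gives the 3P-dimensional point the abstract refers to.

import numpy as np

def lbs_forward(rest, weights, transforms):
    # rest: (V, 3) rest-pose vertices, weights: (V, H) skinning weights,
    # transforms: (P, H, 3, 4) per-pose, per-handle affine transforms -> (P, V, 3).
    rest_h = np.concatenate([rest, np.ones((rest.shape[0], 1))], axis=1)   # homogeneous
    per_handle = np.einsum('phij,vj->phvi', transforms, rest_h)
    return np.einsum('vh,phvi->pvi', weights, per_handle)

rng = np.random.default_rng(0)
rest = rng.normal(size=(100, 3))
weights = rng.dirichlet(np.ones(4), size=100)        # convex weights over 4 handles
transforms = rng.normal(size=(8, 4, 3, 4))           # 8 poses, 4 handles
poses = lbs_forward(rest, weights, transforms)
points = poses.transpose(1, 0, 2).reshape(100, -1)   # each vertex as a point in R^(3P)
print(poses.shape, points.shape)                      # (8, 100, 3) (100, 24)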

     