The transition from laminar to turbulent flow over a smooth surface is a particularly important route to chaos in fluid dynamics. It often occurs via the sporadic inception of spatially localized patches (spots) of turbulence that grow and merge downstream to form the fully turbulent boundary layer. A long-standing question has been whether these incipient spots already possess properties of high-Reynolds-number, developed turbulence. In this study, the question is posed for the geometric scaling properties of the interface separating turbulence within the spots from the outer flow. For high-Reynolds-number turbulence, such interfaces are known to display fractal scaling laws with a dimension of D = 7/3.
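The relation behind this dimension can be illustrated with a short sketch on synthetic data (the spot volumes, areas, and noise level below are invented for illustration): for a fractal interface of dimension D, surface area scales with a characteristic length L as A ~ L^D while volume scales as V ~ L^3, so the volume–area exponent is D/3.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic spots: volumes spanning ~5 decades, areas following the fractal
# volume-area law A ~ V^(D/3) with D = 7/3, plus multiplicative scatter.
D_true = 7.0 / 3.0
V = 10.0 ** rng.uniform(0.0, 5.0, size=2000)
A = V ** (D_true / 3.0) * np.exp(0.05 * rng.normal(size=V.size))

# The exponent is the least-squares slope in log-log coordinates, and the
# interface dimension is three times that slope.
slope, _ = np.polyfit(np.log(V), np.log(A), 1)
D_est = 3.0 * slope
print(f"estimated dimension: {D_est:.3f}")  # close to 7/3 ~ 2.333
```

The wide range of volumes is what makes the slope estimate robust; over a narrow range, the same fit would be dominated by scatter.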
Over recent decades, a variety of indices, such as the fractal dimension, Hurst exponent, or Betti numbers, have been used to characterize structural or topological properties of art via a single parameter, which could then help to classify artworks. A single fractal dimension, in particular, has been commonly interpreted as characteristic of the entire image, such as an abstract painting, whether binary, gray-scale, or in color, and whether self-similar or not. There is now ample evidence, however, that fractal exponents obtained using standard box-counting are strongly dependent on the details of the method adopted and on fitting straight lines to entire scaling plots, which are typically nonlinear. Here, we propose a more discriminating approach with the aim of obtaining robust scaling plots and extracting the relevant information encoded in them without any fitting routines. To this end, we carefully average over all possible grid locations at each scale, rendering the scaling plots independent of any particular choice of grid and, crucially, of the orientation of the image. We then calculate the derivatives of the scaling plots, so that an image is described by a continuous function, its fractal contour, rather than by a single scaling exponent valid over a limited range of scales. We test this method on synthetic examples, ordered and random, then on images of algorithmically defined fractals, and finally examine selected abstract paintings and prints by acknowledged masters of modern art.
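A minimal sketch of grid-averaged box counting and the derivative-based "fractal contour" described above, assuming a simplified average over a few grid offsets rather than all possible grid locations (the test image and scales are illustrative):

```python
import numpy as np

def box_counts(img, scale, n_offsets=4):
    """Average box count at one scale over shifted grid placements.

    img: 2-D boolean array (True = occupied pixel).
    scale: box side length in pixels.
    Averaging over a handful of grid offsets is a simplified stand-in for
    the full average over all possible grid locations.
    """
    counts = []
    step = max(1, scale // n_offsets)
    for dy in range(0, scale, step):
        for dx in range(0, scale, step):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            h, w = shifted.shape
            h, w = h - h % scale, w - w % scale
            blocks = shifted[:h, :w].reshape(h // scale, scale, w // scale, scale)
            counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    return np.mean(counts)

# Demo on a filled square (a trivially two-dimensional set): the local
# slopes of the scaling plot, -d log N / d log s, should hover near 2.
img = np.zeros((512, 512), dtype=bool)
img[128:384, 128:384] = True
scales = np.array([2, 4, 8, 16, 32])
N = np.array([box_counts(img, int(s)) for s in scales])

# "Fractal contour": the derivative of the scaling plot, estimated here by
# finite differences, one local dimension value per pair of adjacent scales.
local_dim = -np.diff(np.log(N)) / np.diff(np.log(scales))
print(local_dim)
```

For a genuinely fractal image, `local_dim` would trace out a scale-dependent curve instead of sitting near a constant, which is the information a single fitted exponent discards.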
- Award ID(s): 2116422
- PAR ID: 10543309
- Publisher / Repository: AIP
- Date Published:
- Journal Name: Chaos: An Interdisciplinary Journal of Nonlinear Science
- Volume: 34
- Issue: 6
- ISSN: 1054-1500
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
The 1/3 excess exponent above 2 (the dimension of a smooth surface) follows from Kolmogorov scaling of velocity fluctuations. The data used in this study are from a direct numerical simulation, and the spot boundaries (interfaces) are determined using an unsupervised machine-learning method that can identify such interfaces without setting arbitrary thresholds. Wide separation between small and large scales during transition is provided by the large range of spot volumes, enabling accurate measurement of the volume–area fractal scaling exponent. Measurements show a dimension close to 7/3 over almost five decades of spot volume, i.e., trends fully consistent with high-Reynolds-number turbulence. Additional observations pertaining to the dependence on height above the surface are also presented. The results provide evidence that turbulent spots exhibit high-Reynolds-number fractal-scaling properties already during the early, transitional, and nonisotropic stages of the flow evolution. -
Convolutional Neural Networks (CNNs) have emerged as highly successful tools for image generation, recovery, and restoration. This success is often attributed to large amounts of training data. However, recent experimental findings challenge this view and instead suggest that a major contributing factor is that convolutional networks impose strong prior assumptions about natural images. A surprising experiment that highlights this architectural bias towards natural images is that one can remove noise and corruptions from a natural image without using any training data, by simply fitting (via gradient descent) a randomly initialized, over-parameterized convolutional generator to the single corrupted image. While this over-parameterized network can fit the corrupted image perfectly, surprisingly, after a few iterations of gradient descent one obtains the uncorrupted image. This intriguing phenomenon enables state-of-the-art CNN-based denoising and regularization of linear inverse problems such as compressive sensing. In this paper, we take a step towards demystifying this experimental phenomenon by attributing the effect to particular architectural choices of convolutional networks, namely convolutions with fixed interpolating filters. We then formally characterize the dynamics of fitting a two-layer convolutional generator to a noisy signal and prove that early-stopped gradient descent denoises/regularizes. This result relies on showing that convolutional generators fit the structured part of an image significantly faster than the corrupted portion.
-
Convolutional Neural Networks (CNNs) have emerged as highly successful tools for image generation, recovery, and restoration. A major contributing factor to this success is that convolutional networks impose strong prior assumptions about natural images. A surprising experiment that highlights this architectural bias towards natural images is that one can remove noise and corruptions from a natural image without using any training data, by simply fitting (via gradient descent) a randomly initialized, over-parameterized convolutional generator to the corrupted image. While this over-parameterized network can fit the corrupted image perfectly, surprisingly after a few iterations of gradient descent it generates an almost uncorrupted image. This intriguing phenomenon enables state-of-the-art CNN-based denoising and regularization of other inverse problems. In this paper, we attribute this effect to a particular architectural choice of convolutional networks, namely convolutions with fixed interpolating filters. We then formally characterize the dynamics of fitting a two-layer convolutional generator to a noisy signal and prove that early-stopped gradient descent denoises/regularizes. Our proof relies on showing that convolutional generators fit the structured part of an image significantly faster than the corrupted portion.
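The mechanism described in the abstract above, that structured components are fitted faster than noise, so early stopping denoises, can be caricatured in the frequency domain. This is not the paper's two-layer convolutional generator; the diagonal weights `w` and their 1/(1+k) decay are invented stand-ins for the low-pass bias of fixed interpolating filters:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
t = np.arange(n) / n

# Low-frequency signal corrupted by white noise.
clean = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
y = clean + 0.4 * rng.normal(size=n)

# Caricature of a generator built from fixed low-pass filters: the k-th
# Fourier coefficient is produced through a fixed weight w_k that decays
# with frequency k (the 1/(1+k) law is an illustrative assumption).
freqs = np.arange(n // 2 + 1)
w = 1.0 / (1.0 + freqs)

# Gradient descent on ||x - y||^2 with x_hat = w * c_hat decouples per
# frequency: after T steps, x_hat_k = (1 - (1 - lr * w_k^2)^T) * y_hat_k.
# Large-w (low-frequency, "structured") components are fitted first, so
# stopping at a finite T keeps the signal and suppresses most of the noise.
y_hat = np.fft.rfft(y)
lr, T = 0.5, 200
gain = 1.0 - (1.0 - lr * w**2) ** T
x = np.fft.irfft(gain * y_hat, n)

err_early = np.linalg.norm(x - clean) / np.linalg.norm(clean)
err_noisy = np.linalg.norm(y - clean) / np.linalg.norm(clean)
print(err_early, err_noisy)  # early-stopped error beats the noisy input
```

Running gradient descent to convergence (T → ∞) drives every `gain` entry to 1 and reproduces the noisy input exactly, which is the over-fitting the papers' early-stopping analysis avoids.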
-
Background and Objective: Higuchi’s method of determining fractal dimension (HFD) occupies a valuable place in the study of a wide variety of physical signals. In comparison to other methods, it provides more rapid, accurate estimations for the entire range of possible fractal dimensions. However, a major difficulty in using the method is the correct choice of the tuning parameter (kmax) needed to compute the most accurate results. In the past, researchers have used various ad hoc methods to determine the appropriate kmax for their particular data. We provide a more objective method of determining, a priori, the best value of the tuning parameter for a data set of a given length. Methods: We generate numerous realizations of fractional Brownian motion and perform Monte Carlo simulations of the distribution of the calculated HFD. Results: Experimental results show that HFD depends not only on kmax but also on the length of the time series, which enables derivation of an expression for the appropriate kmax for an input time series of unknown fractal dimension. Conclusion: The Higuchi method should not be used indiscriminately without reference to the type of data whose fractal dimension is examined. Monte Carlo simulations with different fractional Brownian motions increase the confidence of the evaluation results.
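For reference, Higuchi's curve-length construction itself can be sketched as follows (kmax = 8 and the test series are arbitrary choices here; the paper's contribution, the a priori rule for choosing kmax, is not reproduced):

```python
import numpy as np

def higuchi_fd(x, kmax):
    """Higuchi fractal dimension of a 1-D time series.

    For each delay k, curve lengths L_m(k) are computed on the k decimated
    sub-series x[m], x[m+k], ... and averaged over m; the HFD is the slope
    of log <L(k)> versus log (1/k).
    """
    x = np.asarray(x, dtype=float)
    N = x.size
    L = np.empty(kmax)
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)              # indices of the sub-series
            if idx.size < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()  # total variation along it
            norm = (N - 1) / ((idx.size - 1) * k)  # Higuchi's normalization
            lengths.append(dist * norm / k)
        L[k - 1] = np.mean(lengths)
    k_vals = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(L), 1)
    return slope

# Sanity checks: a straight line has dimension 1; white noise approaches 2.
rng = np.random.default_rng(0)
d_line = higuchi_fd(np.linspace(0.0, 1.0, 2000), kmax=8)
d_noise = higuchi_fd(rng.normal(size=2000), kmax=8)
print(d_line, d_noise)
```

The paper's point is visible even in this sketch: the returned estimate shifts with both `kmax` and the series length `N`, so those two sanity checks only hold for suitably matched choices.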
-
Many modern approaches to image reconstruction are based on learning a regularizer that implicitly encodes a prior over the space of images. For the large-scale images common in imaging domains like remote sensing, medical imaging, astronomy, and others, learning the entire image prior requires an often-impractical amount of training data. This work describes a deep image patch-based regularization approach that can be incorporated into a variety of modern algorithms. Learning the regularizer amounts to learning a prior over image patches, greatly reducing the dimension of the space to be learned and hence the sample complexity. Demonstrations in a remote sensing application illustrate that learning patch-based regularizers produces high-quality reconstructions and even permits learning from a single ground-truth image.
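A minimal sketch of patch-based prior learning from a single ground-truth image, assuming a simple PCA subspace as the patch prior (the paper's learned deep regularizer is more sophisticated; the synthetic texture, patch size, and rank below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 8    # patch side length
r = 12   # number of principal components kept as the "prior"

def patches(img, p):
    """All overlapping p x p patches of img, flattened into rows."""
    H, W = img.shape
    out = np.empty(((H - p + 1) * (W - p + 1), p * p))
    i = 0
    for y in range(H - p + 1):
        for x in range(W - p + 1):
            out[i] = img[y:y + p, x:x + p].ravel()
            i += 1
    return out

# "Training": learn a low-dimensional PCA patch subspace from a single
# clean image, here a synthetic smooth texture.
u, v = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
clean = np.sin(6 * u) * np.cos(4 * v)
P = patches(clean, p)
mean = P.mean(axis=0)
_, _, Vt = np.linalg.svd(P - mean, full_matrices=False)
basis = Vt[:r]

# "Reconstruction": regularize a noisy image by projecting its patches onto
# the learned subspace and averaging the overlapping reconstructions.
noisy = clean + 0.2 * rng.normal(size=clean.shape)
Q = patches(noisy, p)
Q_hat = mean + (Q - mean) @ basis.T @ basis

recon = np.zeros_like(noisy)
weight = np.zeros_like(noisy)
i = 0
for y in range(64 - p + 1):
    for x in range(64 - p + 1):
        recon[y:y + p, x:x + p] += Q_hat[i].reshape(p, p)
        weight[y:y + p, x:x + p] += 1.0
        i += 1
recon /= weight

err_noisy_img = np.linalg.norm(noisy - clean)
err_recon = np.linalg.norm(recon - clean)
print(err_noisy_img, err_recon)  # projection onto the patch prior denoises
```

The sample-complexity argument shows up in the sizes: one 64×64 image supplies over three thousand 8×8 patches, while learning a prior directly over whole images would have only a single training example.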