

Title: Photoinduced peeling of molecular crystals
Block-like microcrystals composed of cis-dimethyl 2-(3-(anthracen-9-yl)allylidene)malonate are grown from aqueous surfactant solutions. A pulse of 405 nm light converts a fraction of molecules to the trans isomer, creating an amorphous mixed layer that peels off the parent crystal. This photoinduced delamination can be repeated multiple times on the same block.
Award ID(s):
1810514
NSF-PAR ID:
10177114
Journal Name:
Chemical Communications
Volume:
55
Issue:
26
ISSN:
1359-7345
Page Range / eLocation ID:
3709 to 3712
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Krylov subspace methods are a ubiquitous tool for computing near-optimal rank k approximations of large matrices. While "large block" Krylov methods with block size at least k give the best known theoretical guarantees, block size one (a single vector) or a small constant is often preferred in practice. Despite their popularity, we lack theoretical bounds on the performance of such "small block" Krylov methods for low-rank approximation. We address this gap between theory and practice by proving that small block Krylov methods essentially match all known low-rank approximation guarantees for large block methods. Via a black-box reduction we show, for example, that the standard single vector Krylov method run for t iterations obtains the same spectral norm and Frobenius norm error bounds as a Krylov method with block size ℓ ≥ k run for O(t/ℓ) iterations, up to a logarithmic dependence on the smallest gap between sequential singular values. That is, for a given number of matrix-vector products, single vector methods are essentially as effective as any choice of large block size. By combining our result with tail-bounds on eigenvalue gaps in random matrices, we prove that the dependence on the smallest singular value gap can be eliminated if the input matrix is perturbed by a small random matrix. Further, we show that single vector methods match the more complex algorithm of [Bakshi et al. '22], which combines the results of multiple block sizes to achieve an improved algorithm for Schatten p-norm low-rank approximation.
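The single-vector (block size one) method discussed above can be sketched in a few lines of NumPy. This is an illustrative sketch under assumed details (Gaussian start vector, Rayleigh-Ritz extraction over the Krylov space of AᵀA), not the paper's exact algorithm; the function name `single_vector_krylov` is hypothetical.

```python
import numpy as np

def single_vector_krylov(A, k, t, seed=0):
    """Rank-k approximation of A from a single-vector Krylov space.

    Builds the span of [g, (A^T A)g, ..., (A^T A)^(t-1) g] using t
    matrix-vector products against A^T A, then extracts the best rank-k
    approximation of A restricted to that subspace (Rayleigh-Ritz).
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    v = rng.standard_normal(n)          # random Gaussian start vector
    K = np.empty((n, t))
    for i in range(t):
        v /= np.linalg.norm(v)          # normalize for numerical stability
        K[:, i] = v
        v = A.T @ (A @ v)               # next Krylov vector
    Q, _ = np.linalg.qr(K)              # orthonormal basis of the Krylov space
    B = A @ Q                           # A restricted to the subspace
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (U[:, :k] * s[:k]) @ (Vt[:k] @ Q.T)
```

For a given matvec budget, the interest of the result above is that this single-vector loop is competitive with running a block method of any block size ℓ ≥ k for proportionally fewer iterations.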
  2. Boolean functions play an important role in many different areas of computer science. The _local sensitivity_ of a Boolean function $f:\{0,1\}^n\to \{0,1\}$ on an input $x\in\{0,1\}^n$ is the number of coordinates whose flip changes the value of $f(x)$, i.e., the number of $i$'s such that $f(x)\not=f(x+e_i)$, where $e_i$ is the $i$-th unit vector. The _sensitivity_ of a Boolean function is its maximum local sensitivity. In other words, the sensitivity measures the robustness of a Boolean function with respect to a perturbation of its input. Another notion that measures the robustness is block sensitivity. The _local block sensitivity_ of a Boolean function $f:\{0,1\}^n\to \{0,1\}$ on an input $x\in\{0,1\}^n$ is the maximum number of disjoint subsets $I$ of $\{1,\dots,n\}$ such that flipping the coordinates indexed by $I$ changes the value of $f(x)$, and the _block sensitivity_ of $f$ is its maximum local block sensitivity. Since the local block sensitivity is at least the local sensitivity for any input $x$, the block sensitivity of $f$ is at least the sensitivity of $f$. The next example demonstrates that the block sensitivity of a Boolean function is not linearly bounded by its sensitivity. Fix an integer $k\ge 2$ and define a Boolean function $f:\{0,1\}^{2k^2}\to\{0,1\}$ as follows: the coordinates of $x\in\{0,1\}^{2k^2}$ are split into $k$ blocks of size $2k$ each and $f(x)=1$ if and only if at least one of the blocks contains exactly two entries equal to one and these entries are consecutive. While the sensitivity of the function $f$ is $2k$, its block sensitivity is $k^2$. The Sensitivity Conjecture, made by Nisan and Szegedy in 1992, asserts that the block sensitivity of a Boolean function is polynomially bounded by its sensitivity. The example above shows that the degree of such a polynomial must be at least two. The Sensitivity Conjecture was recently proven by Huang in [Annals of Mathematics 190 (2019), 949-955](https://doi.org/10.4007/annals.2019.190.3.6).
He proved the following combinatorial statement that implies the conjecture (with the degree of the polynomial equal to four): any subset of more than half of the vertices of the $n$-dimensional cube $\{0,1\}^n$ induces a subgraph that contains a vertex with degree at least $\sqrt{n}$. The present article extends this result as follows: every Cayley graph with the vertex set $\{0,1\}^n$ and any generating set of size $d$ (the vertex set is viewed as a vector space over the binary field) satisfies that any subset of more than half of its vertices induces a subgraph that contains a vertex of degree at least $\sqrt{d}$. In particular, when the generating set consists of the $n$ unit vectors, the Cayley graph is the $n$-dimensional hypercube. 
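The quadratic gap in the example above is easy to verify numerically for a small $k$. The brute-force sketch below (hypothetical helper names, for intuition only) evaluates the local sensitivity at a witness 1-input and counts the $k^2$ disjoint sensitive blocks at the all-zeros input.

```python
def f(x, k):
    """Example function on n = 2*k*k bits: f(x) = 1 iff some block of 2k
    consecutive coordinates contains exactly two ones, and they are adjacent."""
    for b in range(k):
        block = x[2 * k * b: 2 * k * (b + 1)]
        ones = [i for i, bit in enumerate(block) if bit]
        if len(ones) == 2 and ones[1] == ones[0] + 1:
            return 1
    return 0

def local_sensitivity(x, k):
    """Number of single-bit flips of x that change f."""
    base = f(x, k)
    count = 0
    for i in range(len(x)):
        y = list(x)
        y[i] ^= 1
        count += f(y, k) != base
    return count

k = 3
n = 2 * k * k

# Witness 1-input: the first block holds exactly two adjacent ones.
# Every coordinate of that block is sensitive (flipping a one or a zero
# in it breaks the pattern), so the local sensitivity here is 2k.
x1 = [0] * n
x1[0] = x1[1] = 1
s1 = local_sensitivity(x1, k)      # 2k = 6

# At the all-zeros input, the k disjoint pairs {2j, 2j+1} inside each of
# the k blocks are all sensitive blocks, so block sensitivity >= k*k.
x0 = [0] * n
sensitive_pairs = 0
for b in range(k):
    for j in range(k):
        y = list(x0)
        y[2 * k * b + 2 * j] ^= 1
        y[2 * k * b + 2 * j + 1] ^= 1
        sensitive_pairs += f(y, k) != f(x0, k)   # k*k = 9 in total
```

With $k=3$ the witness input has local sensitivity $2k = 6$ while the all-zeros input already exhibits $k^2 = 9$ disjoint sensitive blocks, matching the claimed quadratic separation.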
  3. In blockchain and cryptocurrency, miners participate in a proof-of-work-based distributed consensus protocol to find and generate a valid block, process transactions, and earn the corresponding reward. Because cryptocurrency is designed to adapt to the dynamic miner network size, a miner's participation affects the block difficulty, which sets the expected amount of work needed to find a valid block. We study the dependency between mining power control and block difficulty, and we study a rational miner that exploits this dependency to dynamically control its mining power over a horizon longer than just the impending block. More specifically, we introduce the I-O Mining strategy, in which a miner takes advantage of the block difficulty adjustment rule and toggles between mining at full power and powering off across difficulty adjustments. In I-O Mining, the miner influences the block difficulty and mines only when the difficulty is low, gaming and violating the design integrity of the mining protocol for profit. We analyze I-O Mining's incentive/profit gain over static mining strategies and its negative impact on the rest of the blockchain mining network in terms of block/transaction scalability. Our results show that I-O Mining becomes even more effective and profitable as competition for mining and the reward grows and the cost difference shrinks, which are the prevailing trends in cryptocurrencies.
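The intuition behind on/off toggling can be illustrated with a deliberately simplified, deterministic expected-value model (the difficulty rule and all parameters below are assumptions for illustration, not the paper's actual analysis): if difficulty in each epoch is set by the total hash power observed in the previous epoch, a miner who idles for an epoch lowers the next epoch's difficulty and then mines at a discount.

```python
def epoch_profit(power_schedule, H, blocks_target, reward, cost):
    """Expected profit of a strategic miner under a toy difficulty rule.

    Toy model: difficulty for epoch e is calibrated to the total hash
    power of epoch e-1, so the expected block count in epoch e is
    blocks_target * P_e / P_{e-1}, and the strategic miner wins a share
    m_e / P_e of those blocks.  Mining costs `cost` per unit of power
    per epoch.  All of this is a hypothetical sketch, not the paper's model.
    """
    profit = 0.0
    prev_total = H + power_schedule[0]   # assume difficulty starts calibrated
    for m in power_schedule:
        total = H + m
        blocks = blocks_target * total / prev_total
        if m > 0:
            profit += reward * blocks * m / total - cost * m
        prev_total = total
    return profit

H, m = 100.0, 20.0        # honest network power, strategic miner power
B, R, c = 100.0, 1.0, 0.7 # target blocks/epoch, block reward, cost per power-epoch

static = epoch_profit([m] * 12, H, B, R, c)       # always-on mining
io = epoch_profit([m, 0.0] * 6, H, B, R, c)       # on/off "I-O" toggling
# Under these toy parameters the on/off schedule out-earns always-on mining,
# because each "on" epoch runs at a difficulty set while the miner was off.
```

The toggling miner mines only half the epochs, yet each active epoch yields a larger reward at the same per-epoch cost, so total profit can exceed the static strategy once the horizon is long enough for the advantage to outweigh the first uncalibrated epoch.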
  4. Abstract

    We investigate the present-day strike-slip faulting and intracontinental deformation of North China using horizontal GPS velocities and a block modeling strategy. The data demonstrate that the fault slip rates and block rotations in North China can be better constrained using GPS velocities in which the influence of groundwater extraction has been reduced by applying median spatial filtering to the velocity field. The modeled senses of motion and slip rates on active faults are generally in good agreement with the geological estimates. The left-lateral slip rate along the boundary fault of Weihe Graben is apparently lower than the differential motion between the South China Block and Ordos Block. The missing left-lateral slip is accommodated by the counterclockwise rotation of the Ordos Block. The sinistral slip rate along the eastern segment of the Altyn Tagh-Haiyuan-Qinling fault system decreases eastward from ∼1.0 to ∼0 mm/yr, much slower than expected under the fast eastward extrusion hypothesis. We interpret the systematic counterclockwise rotation of blocks and left/right-lateral faulting in North China to be driven both by the left-lateral shear between the South China Block to the south and the Yinshan-Yanshan Block to the north and by a push from the Tibetan Plateau on the southwestern margin of the Ordos Block.

     
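The median spatial filtering mentioned above might be sketched as follows: replace each station's horizontal velocity with the componentwise median over nearby stations, which suppresses localized signals such as groundwater-extraction deformation while preserving the regional field. This is a crude nearest-neighbor version with assumed parameters; the authors' actual filter may differ.

```python
import numpy as np

def median_filter_velocities(lonlat_deg, vel, radius_km=50.0):
    """Componentwise median of horizontal velocities over all stations
    within radius_km of each station (the station itself included).

    lonlat_deg : (N, 2) array of station longitude/latitude in degrees
    vel        : (N, 2) array of east/north velocities
    """
    rad = np.radians(lonlat_deg)
    R = 6371.0                                # mean Earth radius, km
    lat0 = rad[:, 1].mean()
    # Flat-Earth (equirectangular) projection: fine for a regional network.
    xy = np.column_stack([R * np.cos(lat0) * rad[:, 0], R * rad[:, 1]])
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    out = np.empty_like(vel)
    for i in range(len(vel)):
        nbr = d[i] <= radius_km               # stations in the neighborhood
        out[i] = np.median(vel[nbr], axis=0)  # robust to local outliers
    return out
```

A station dominated by subsidence-related motion is pulled back to the median of its neighbors, so it no longer biases the block-model inversion.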
  5. Abstract

    Seismic hazard assessment, such as the U.S. Geological Survey (USGS) National Seismic Hazard Model (NSHM), relies on estimates of fault slip rate based on geology and/or geodetic observations such as the Global Navigation Satellite System (GNSS), including the Global Positioning System. Geodetic fault slip rates may be estimated within a 3D spherical block model, in which the crust is divided into microplates bounded by mapped faults; fault slip rates are determined by the relative rotations of adjacent microplates. Uncertainty in selecting appropriate block-bounding faults and in forming closed microplates has limited the interpretability of block models for seismic hazard modeling. By introducing an automated block closure algorithm and regularizing the resulting densely spaced block model with total variation regularization, I develop the densest and most complete block model of the western continental United States to date. The model includes 853 blocks bounded by 1017 geologically identified fault sections from the USGS NSHM Fault Sections database. Microplate rotations and fault slip rates are constrained by 4979 GNSS velocities and 1243 geologic slip rates. I identify a regularized solution that fits the GNSS velocity field with a root mean square misfit of 1.9 mm/yr and reproduces 57% of geologic slip rates within reported geologic uncertainty and model sensitivity, consistent with other geodetic-based models in this Focus Section. This block model includes slip on faults that are not included in the USGS NSHM Fault Sections database (but are required to form closed blocks) for an estimate of "off-fault" deformation of 3.62 × 10^19 N·m/yr, 56% of the total calculated moment accumulation rate in the model.
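The core kinematic step of any block model — converting the relative rotation of two adjacent microplates into a fault slip rate — is v_rel = (ω₁ − ω₂) × r, projected onto the fault strike direction. The sketch below uses hypothetical helper names and a spherical Earth; real block models add elastic strain accumulation terms.

```python
import numpy as np

def ecef(lon_deg, lat_deg, R=6.371e6):
    """Earth-centered Cartesian position (m) of a surface point on a sphere."""
    lon, lat = np.radians([lon_deg, lat_deg])
    return R * np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])

def slip_rate(omega1, omega2, r, strike_unit):
    """Strike-slip rate (m/yr) at point r on a fault separating two blocks
    with Euler rotation vectors omega1, omega2 (rad/yr, ECEF frame)."""
    v_rel = np.cross(omega1 - omega2, r)   # velocity of block 1 relative to 2
    return v_rel @ strike_unit             # component along the fault strike
```

For example, a relative rotation of 1e-9 rad/yr (about 0.057°/Myr) about the Earth's spin axis produces roughly a 6.4 mm/yr strike-parallel rate on an east-striking fault at the equator, a geologically plausible magnitude.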