Let $\mathscr{M}$ be a geometrically finite hyperbolic manifold. We present a very general theorem on the shrinking target problem for the geodesic flow, using its exponential mixing. This includes a strengthening of Sullivan's logarithm law for the excursion rate of the geodesic flow. More generally, we prove logarithm laws for the first hitting time for shrinking cusp neighborhoods, shrinking tubular neighborhoods of a closed geodesic, and shrinking metric balls, as well as give quantitative estimates for the time a generic geodesic spends in such shrinking targets.
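For orientation, one classical special case of the logarithm law being strengthened here is Sullivan's law for noncompact, finite-volume hyperbolic n-manifolds (stated as a hedged illustration; in the geometrically finite setting the constant is replaced by quantities involving the critical exponent and the ranks of the cusps): for a fixed basepoint $x_0$ and almost every unit tangent vector $v$,

$$ \limsup_{t \to \infty} \frac{\operatorname{dist}\!\left(x_0,\, g_t v\right)}{\log t} \;=\; \frac{1}{n-1}, $$

where $g_t$ denotes the geodesic flow. The shrinking-target results above refine this by controlling hitting times and sojourn times for much more general families of targets.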
Mechanism and kinetics of enzymatic degradation of polyester microparticles using a shrinking particle–shrinking core model
Generalized shrinking particle (SPM) and shrinking core (SCM) models were developed to describe the kinetics of heterogeneous enzymatic degradation of polymer microparticles in a continuous microflow system.
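As context for the kinetics, here is a minimal sketch of the classical (Levenspiel-type) conversion-time relations that shrinking particle and shrinking core treatments start from, for a sphere of initial radius r0. The rate constant k and characteristic time tau are illustrative placeholders; the paper's generalized microflow model is not reproduced here.

import numpy as np

def spm_radius(t, r0, k):
    """Surface-reaction-controlled shrinking particle: the radius
    recedes linearly, r(t) = r0 - k*t, until the particle is consumed."""
    return np.clip(r0 - k * t, 0.0, None)

def spm_conversion(t, r0, k):
    """Conversion of a shrinking sphere: X = 1 - (r/r0)**3."""
    return 1.0 - (spm_radius(t, r0, k) / r0) ** 3

def scm_time_fraction(X):
    """Shrinking core controlled by diffusion through the degraded
    product layer: t/tau = 1 - 3*(1-X)**(2/3) + 2*(1-X)."""
    return 1.0 - 3.0 * (1.0 - X) ** (2.0 / 3.0) + 2.0 * (1.0 - X)

# Example: a reaction-controlled particle (r0 = 10 um, k = 0.1 um/min)
# is fully consumed at t = r0/k = 100 min; at t = 50 min, X = 0.875.
print(spm_conversion(50.0, 10.0, 0.1))   # -> 0.875
print(scm_time_fraction(0.5))            # ~0.11 of tau for X = 0.5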
- Award ID(s): 1719875
- PAR ID: 10548647
- Publisher / Repository: Royal Society of Chemistry
- Date Published:
- Journal Name: Lab on a Chip
- Volume: 23
- Issue: 20
- ISSN: 1473-0197
- Page Range / eLocation ID: 4456 to 4465
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Functionalized nanoparticles (NPs) are the foundation of diverse applications. In particular, many biosensing applications require concentrating suspended NPs onto a surface without deteriorating their biofunction in order to improve the detection limit, which remains a great challenge. In this work, biocompatible deposition of functionalized NPs onto optically transparent surfaces is demonstrated using shrinking bubbles. Leveraging the shrinking phase of the bubble mitigates the biomolecule degradation problems encountered in traditional photothermal deposition techniques. The deposited NPs are closely packed, and the functional molecules survive the process, as verified by their strong fluorescence signals. High-speed videography reveals that the contracting contact line of the shrinking bubble forces the NPs captured at the contact line into a highly concentrated island. Such shrinking surface bubble deposition (SSBD) is inherently a low-temperature process, as no heat is added during deposition. Using a hairpin DNA-functionalized gold NP suspension as a model system, SSBD is shown to yield a much stronger fluorescence signal than optical-pressure deposition and conventional thermal bubble contact-line deposition. The demonstrated SSBD technique, capable of directly depositing functionalized NPs, may significantly simplify biosensor fabrication and thus benefit a wide range of relevant applications.
-
We study shrinking target problems for discrete time flows on a homogeneous space Γ\G, with G a semisimple group and Γ an irreducible lattice. Our results apply to both diagonalizable and unipotent flows and to very general families of shrinking targets. As a special case, we establish logarithm laws for cusp excursions of unipotent flows, settling a problem raised by Athreya and Margulis.
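For orientation, logarithm laws of this type are usually packaged as statements about first hitting times. The following is a schematic template only (the precise hypotheses on mixing rates and target regularity vary by setting): for a flow $(g_t)$ preserving a probability measure $\mu$ on $X = \Gamma\backslash G$ and a shrinking family of measurable targets $B_r \subset X$,

$$ \tau_{B_r}(x) := \inf\{\, t > 0 : g_t x \in B_r \,\}, \qquad \lim_{r \to 0} \frac{\log \tau_{B_r}(x)}{-\log \mu(B_r)} \;=\; 1 \quad \text{for } \mu\text{-almost every } x. $$

Specializing $B_r$ to shrinking cusp neighborhoods yields logarithm laws for cusp excursions.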
-
CNNs are increasingly deployed across different hardware, dynamic environments, and low-power embedded devices. This has led to the design and training of CNN architectures that maximize accuracy subject to such variable deployment constraints. As the number of deployment scenarios grows, there is a need for scalable solutions to design and train specialized CNNs. Once-for-all training has emerged as a scalable approach that jointly co-trains many models (subnets) at once with a constant training cost, from which specialized CNNs are extracted later. The scalability is achieved by training the full model and simultaneously reducing it to smaller subnets that share model weights (weight-shared shrinking). However, existing once-for-all training approaches incur huge training costs, reaching 1200 GPU hours. We argue this is because they start the process of shrinking the full model either too early or too late. Hence, we propose Delayed Epsilon-Shrinking (DepS), which starts shrinking the full model when it is partially trained, leading to lower training cost and better in-place knowledge distillation to smaller models. The approach also includes novel heuristics that dynamically adjust subnet learning rates incrementally, improving weight-shared knowledge distillation from larger to smaller subnets. As a result, DepS outperforms state-of-the-art once-for-all training techniques on accuracy and cost across datasets including CIFAR10/100, ImageNet-100, and ImageNet-1k. It achieves higher ImageNet-1k top-1 accuracy, or the same accuracy with a 1.3x reduction in FLOPs and a 2.5x drop in training cost (GPU hours).
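To make the training-schedule idea concrete, here is a runnable toy sketch in plain NumPy of delayed weight-shared shrinking with in-place distillation. It illustrates the control flow only: the "model" is a single linear regressor, subnets are weight-shared prefixes of the weight vector, and the delay fraction and learning-rate ramp are assumed placeholders, not DepS's actual heuristics.

import numpy as np

rng = np.random.default_rng(0)
D, N = 16, 256                       # feature dim, dataset size
X = rng.normal(size=(N, D))
y = X @ rng.normal(size=D)           # toy regression target
W = rng.normal(size=D) * 0.1         # full model: one weight vector

TOTAL_STEPS = 2000
START = TOTAL_STEPS // 4             # assumed delay: shrink after 25%
BASE_LR = 0.05

def predict(w, x, width):
    # A weight-shared subnet uses only the first k weights.
    k = max(1, int(width * len(w)))
    return x[:, :k] @ w[:k]

for step in range(TOTAL_STEPS):
    # Phase 1 (always): ordinary full-model training step.
    grad = 2 * X.T @ (X @ W - y) / N
    W -= BASE_LR * grad
    if step >= START:
        # Phase 2 (delayed): in-place distillation. A sampled subnet is
        # trained to match the partially trained full model's outputs,
        # updating only the weights it shares with the full model.
        width = float(rng.choice([0.25, 0.5, 0.75]))
        k = max(1, int(width * D))
        teacher = X @ W
        g = 2 * X[:, :k].T @ (predict(W, X, width) - teacher) / N
        ramp = (step - START) / (TOTAL_STEPS - START)   # assumed LR ramp
        W[:k] -= BASE_LR * ramp * g

print("full-model loss:", np.mean((X @ W - y) ** 2))
print("half-width subnet loss:", np.mean((predict(W, X, 0.5) - y) ** 2))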
-
Relying on the classical second moment formula of Rogers, we give an effective asymptotic formula for the number of integer vectors v in a ball of radius t, with value Q(v) in a shrinking interval of size t^{-λ}, that is valid for almost all indefinite quadratic forms in n variables for any λ
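For context, the shape of such a counting asymptotic can be anticipated from a volume heuristic (a sketch only, not the paper's precise statement; the admissible range of λ and the error terms are exactly what the effective result controls):

$$ N_{t,\lambda}(Q, a) := \#\{\, v \in \mathbb{Z}^n : \|v\| \le t,\ |Q(v) - a| < t^{-\lambda} \,\} \;\sim\; c_{Q,a}\, t^{\,n-2-\lambda}, $$

since for an indefinite form the region where $\|v\| \le t$ and $|Q(v) - a| < \varepsilon$ has volume of order $\varepsilon\, t^{\,n-2}$, and here $\varepsilon = t^{-\lambda}$. The notation $N_{t,\lambda}$ and the constant $c_{Q,a}$ are introduced here for illustration.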

