
Title: Weakly holomorphic modular forms on $\Gamma_0(4)$ and Borcherds products on the unitary group $\mathrm{U}(2,1)$
Award ID(s): 1762289
PAR ID: 10090367
Author(s) / Creator(s):
Date Published:
Journal Name: Research in Number Theory
Volume: 4
Issue: 1
ISSN: 2522-0160
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. The increasing computing demands of autonomous driving applications have driven the adoption of multicore processors in real-time systems, which in turn makes energy optimization critical for reducing required battery capacity and, with it, vehicle weight. A typical energy optimization method for traditional real-time systems finds a critical speed under a static deadline, yielding conservative energy savings that cannot exploit dynamic changes in the system and environment. We capture the dynamic deadlines that arise from the vehicle's changing velocity and driving context as an additional energy optimization opportunity. In this article, we extend our preliminary work for uniprocessors [66] to multicore processors, which introduces several challenges. We use state-of-the-art real-time gang scheduling [5] to mitigate some of them. However, it entails an NP-hard combinatorial problem: tasks must be grouped into gangs (gang formation), and the grouping can significantly affect the achievable energy savings. We therefore present EASYR, an adaptive system optimization and reconfiguration approach that generates gangs of tasks from a given directed acyclic graph for multicore processors and dynamically adapts the scheduling parameters and processor speeds to satisfy dynamic deadlines while consuming as little energy as possible. Timing constraints are also satisfied across system reconfigurations through our proposed safe mode change protocol. Extensive experiments with randomly generated task graphs show that our gang formation heuristic performs 32% better than the state-of-the-art one. Using an autonomous driving task set from Bosch and real-world driving data, our experiments show that EASYR achieves energy reductions of up to 30.3% on average in typical driving scenarios compared with a conventional energy optimization method using the current state-of-the-art gang formation heuristic in real-time systems, demonstrating great potential for dynamic energy optimization gains by exploiting dynamic deadlines.
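     The core idea above, picking the slowest processor speed that still meets a deadline that itself relaxes as the vehicle slows down, can be illustrated with a toy calculation. The sketch below is not the EASYR algorithm or the paper's gang formation heuristic; the deadline rule, the power model, and every name in it (dynamic_deadline, min_feasible_speed, the frequency list) are illustrative assumptions.

     ```python
     # Toy illustration of dynamic-deadline-driven frequency scaling.
     # Not the EASYR algorithm: the deadline model, power model, and all
     # constants below are illustrative assumptions, not values from the paper.
     from typing import Optional

     FREQS_GHZ = [0.6, 0.9, 1.2, 1.5, 1.8]   # hypothetical DVFS levels

     def dynamic_deadline(safety_distance_m: float, velocity_mps: float) -> float:
         """Deadline (s) for one sensing-to-actuation cycle: the slower the
         vehicle, the more time is available before it covers a fixed
         safety distance (a deliberately simplified rule)."""
         return safety_distance_m / max(velocity_mps, 0.1)

     def min_feasible_speed(wcec_gcycles: float, deadline_s: float) -> Optional[float]:
         """Lowest frequency whose worst-case execution time fits the deadline.
         wcec_gcycles: worst-case execution cycles of the gang (in 1e9 cycles)."""
         for f in FREQS_GHZ:                  # ascending: first feasible = most efficient
             if wcec_gcycles / f <= deadline_s:
                 return f
         return None                          # infeasible even at the top speed

     def dynamic_energy(wcec_gcycles: float, f_ghz: float, c_eff: float = 1.0) -> float:
         """Classic convex CMOS model: E ~ C_eff * f^2 * cycles (arbitrary units)."""
         return c_eff * (f_ghz ** 2) * wcec_gcycles

     if __name__ == "__main__":
         wcec = 0.9                           # 0.9e9 cycles per frame (assumed)
         for v in (5.0, 15.0, 30.0):          # m/s: city, suburban, highway
             d = dynamic_deadline(30.0, v)
             f = min_feasible_speed(wcec, d)
             if f is None:
                 print(f"v={v:5.1f} m/s  deadline={d:5.2f}s  infeasible")
             else:
                 print(f"v={v:5.1f} m/s  deadline={d:5.2f}s  f={f} GHz  "
                       f"E={dynamic_energy(wcec, f):.2f}")
     ```

     In EASYR the per-gang speed choice is made jointly with gang formation and is re-evaluated at each safe mode change; the toy version only shows why a deadline that relaxes at lower velocity translates directly into a lower feasible frequency and hence lower energy.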
  2. Abstract: We describe the projective superspace approach to supersymmetric models with off-shell (0, 4) supersymmetry in two dimensions. In addition to the usual superspace coordinates, projective superspace has extra bosonic variables, one doublet for each SU(2) in the R-symmetry SU(2) × SU(2), which are interpreted as homogeneous coordinates on $\mathbb{CP}^1 \times \mathbb{CP}^1$. The superfields are analytic in the $\mathbb{CP}^1$ coordinates, and this analyticity plays an important role in our description; for instance, it leads to stringent constraints on the interactions one can write down for a given superfield content of the model. As an example, we describe in projective superspace Witten's ADHM sigma model, a linear sigma model with non-derivative interactions whose target is $\mathbb{R}^4$ with a Yang-Mills instanton solution. The hyperkähler nature of the target space and the twistor description of instantons by Ward and by Atiyah, Hitchin, Drinfeld and Manin are natural outputs of our construction.
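     For orientation, the following is a minimal sketch of the generic projective-superspace setup the abstract refers to, written in standard textbook notation rather than the paper's own conventions; the doublets $v^i$, $u^{i'}$, the inhomogeneous coordinates $\zeta$, $\tilde\zeta$, and the "arctic-type" expansion are assumptions made for illustration only.

     ```latex
     % Generic sketch (not the paper's conventions): one extra bosonic doublet per
     % R-symmetry SU(2), read as homogeneous coordinates on CP^1 x CP^1.
     \[
       v^{i} \in \mathbb{C}^{2}\setminus\{0\}, \qquad
       u^{i'} \in \mathbb{C}^{2}\setminus\{0\}, \qquad
       \zeta = \frac{v^{2}}{v^{1}}, \qquad
       \tilde\zeta = \frac{u^{2'}}{u^{1'}} .
     \]
     % A projective superfield of "arctic" type is analytic in these coordinates
     % near one pole of each sphere, e.g.
     \[
       \Upsilon(\zeta,\tilde\zeta)
         = \sum_{m\ge 0}\sum_{n\ge 0} \Upsilon_{mn}\,\zeta^{m}\,\tilde\zeta^{\,n},
     \]
     % and requiring interactions to respect this analyticity (together with
     % homogeneity in v and u) is what constrains the allowed couplings.
     ```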
  3. Sparse deep neural networks (DNNs) have the potential to deliver compelling performance and energy efficiency without significant accuracy loss. However, their benefits can quickly diminish if their training is oblivious to the target hardware; for example, even the few critical connections that remain after pruning can incur significant overhead if they translate into long-distance communication on the target hardware. Therefore, hardware-aware sparse training is needed to leverage the full potential of sparse DNNs. To this end, we propose a novel and comprehensive communication-aware sparse DNN optimization framework for tile-based in-memory computing (IMC) architectures. The proposed technique, CANNON, first maps the DNN layers onto the tiles of the target architecture. It then replaces the fully connected and convolutional layers with communication-aware sparse connections. Finally, CANNON optimizes the communication cost with minimal impact on the DNN accuracy. Extensive experimental evaluations with a wide range of DNNs and datasets show up to 3.0× lower communication energy, 3.1× lower communication latency, and 6.8× lower energy-delay product compared to state-of-the-art pruning approaches, with negligible impact on classification accuracy on IMC-based machine learning accelerators.
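     To make the "communication-aware sparse connections" idea concrete, here is a small sketch of distance-penalized magnitude pruning on a tile mesh. It is not CANNON's actual mapping or optimization step; the scoring rule, the Manhattan-distance cost, and every name in it are illustrative assumptions.

     ```python
     # Toy communication-aware pruning sketch (not the CANNON algorithm).
     # Weights whose source and destination neurons sit on distant tiles of a
     # 2D mesh are penalized, so pruning prefers to drop long-distance links.
     import numpy as np

     def tile_of(neuron: int, neurons_per_tile: int, mesh_cols: int) -> tuple:
         """Map a neuron index to the (row, col) of the tile it is assumed to occupy."""
         t = neuron // neurons_per_tile
         return divmod(t, mesh_cols)

     def manhattan(a: tuple, b: tuple) -> int:
         return abs(a[0] - b[0]) + abs(a[1] - b[1])

     def comm_aware_prune(w: np.ndarray, keep_ratio: float, lam: float = 0.05,
                          neurons_per_tile: int = 64, mesh_cols: int = 4) -> np.ndarray:
         """Keep the `keep_ratio` fraction of weights with the best score
         |w| - lam * hop_distance; everything else is zeroed out."""
         out_n, in_n = w.shape
         dist = np.empty_like(w)
         for i in range(out_n):
             ti = tile_of(i, neurons_per_tile, mesh_cols)
             for j in range(in_n):
                 dist[i, j] = manhattan(ti, tile_of(j, neurons_per_tile, mesh_cols))
         score = np.abs(w) - lam * dist
         k = int(keep_ratio * w.size)
         thresh = np.partition(score.ravel(), -k)[-k]   # k-th largest score
         return w * (score >= thresh)

     if __name__ == "__main__":
         rng = np.random.default_rng(0)
         w = rng.normal(size=(256, 256)).astype(np.float32)
         w_sparse = comm_aware_prune(w, keep_ratio=0.1)
         print("kept:", int((w_sparse != 0).sum()), "of", w.size)
     ```

     A real IMC-aware framework would also co-optimize the layer-to-tile mapping and fine-tune the network after pruning; the sketch only shows how a distance term steers which weights survive.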