-
Despite its long history, a canonical formulation of quantum ergodicity that applies to general classes of quantum dynamics, including driven systems, has not been fully established. Here we introduce and study a notion of quantum ergodicity for closed systems with time-dependent Hamiltonians, defined as statistical randomness exhibited in their long-time dynamics. Concretely, we consider the temporal ensemble of quantum states (time-evolution operators) generated by the evolution, and investigate the conditions necessary for them to be statistically indistinguishable from uniformly random states (operators) in the Hilbert space (space of unitaries). We find that the number of driving frequencies underlying the Hamiltonian needs to be sufficiently large for this to occur. Conversely, we show that statistical indistinguishability, up to some large but finite moment, can already be achieved by a quantum system driven with a single frequency, i.e., a Floquet system, as long as the driving period is sufficiently long. Our work relates the complexity of a time-dependent Hamiltonian to that of the resulting quantum dynamics, and offers a fresh perspective on the established topics of quantum ergodicity and chaos through the lens of quantum information. Published by the American Physical Society, 2024. Free, publicly accessible full text available December 1, 2025.
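The moment-by-moment comparison described above is commonly diagnosed with the frame potential: an ensemble of states matches the Haar (uniformly random) ensemble up to the k-th moment exactly when its k-th frame potential attains the Haar value. The sketch below is illustrative only, not the paper's code; the dimension, sample count, and sampling method are assumptions for the toy.

```python
# Illustrative sketch: the k-th frame potential F_k of a state ensemble.
# F_k equals the Haar value 1/C(d+k-1, k) iff the ensemble matches Haar
# up to the k-th moment (a state k-design); larger values mean less random.
# Parameters (d = 4, 200 samples) are arbitrary toy choices.
import random

random.seed(0)

def random_state(d):
    """Haar-random pure state in C^d via normalized complex Gaussians."""
    v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(d)]
    norm = sum(abs(z) ** 2 for z in v) ** 0.5
    return [z / norm for z in v]

def frame_potential(states, k):
    """F_k = average over state pairs of |<psi_i|psi_j>|^(2k)."""
    n = len(states)
    total = 0.0
    for a in states:
        for b in states:
            ip = sum(x.conjugate() * y for x, y in zip(a, b))
            total += abs(ip) ** (2 * k)
    return total / n ** 2

d = 4
ensemble = [random_state(d) for _ in range(200)]
f1 = frame_potential(ensemble, 1)
haar_f1 = 1 / d  # Haar value of F_1 for k = 1 is 1/d
print(f"F_1 = {f1:.3f} (Haar value: {haar_f1})")
```

For any ensemble the frame potential is bounded below by the Haar value, so a temporal ensemble generated by actual driven dynamics can be tested the same way: compute F_k over snapshots of the evolution and check how close it comes to the Haar floor.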
-
The increasing popularity of deep learning models has created new opportunities for developing AI-based recommender systems. Designing recommender systems with deep neural networks requires careful architecture design, and further optimization demands extensive co-design efforts to jointly optimize model architecture and hardware. Design automation, such as Automated Machine Learning (AutoML), is necessary to fully exploit the potential of recommender model design, including model choices and model-hardware co-design strategies. We introduce a novel paradigm that utilizes weight sharing to explore abundant solution spaces. Our paradigm creates a large supernet to search for optimal architectures and co-design strategies, addressing the challenges of data multi-modality and heterogeneity in the recommendation domain. From a model perspective, the supernet includes a variety of operators, dense connectivity, and dimension search options. From a co-design perspective, it encompasses versatile Processing-In-Memory (PIM) configurations to produce hardware-efficient models. Our solution space's scale, heterogeneity, and complexity pose several challenges, which we address by proposing various techniques for training and evaluating the supernet. Our crafted models show promising results on three Click-Through Rate (CTR) prediction benchmarks, outperforming both manually designed and AutoML-crafted models with state-of-the-art performance when focusing solely on architecture search. From a co-design perspective, we achieve 2× FLOPs efficiency, 1.8× energy efficiency, and 1.5× performance improvements in recommender models. Free, publicly accessible full text available December 9, 2025.
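The weight-sharing idea at the core of the supernet approach can be sketched in a few lines. This is a toy illustration under assumed semantics, not the paper's implementation: each (layer, operator) pair owns one shared parameter, and every sampled sub-architecture reuses those parameters instead of training from scratch, which is what makes searching an abundant solution space affordable. Operator names and sizes below are invented for the example.

```python
# Illustrative sketch of weight sharing in a supernet (not the paper's code):
# all candidate sub-architectures index into one shared parameter table,
# so training the supernet amortizes the cost of evaluating many models.
import random

random.seed(0)

OPS = ["dense", "dot_product", "mlp"]  # hypothetical candidate operators
NUM_LAYERS = 3

# One shared parameter per (layer, operator): the "supernet" weights.
shared = {(layer, op): random.uniform(-1, 1)
          for layer in range(NUM_LAYERS) for op in OPS}

def sample_architecture():
    """Pick one operator per layer: a sub-architecture of the supernet."""
    return [random.choice(OPS) for _ in range(NUM_LAYERS)]

def forward(arch, x):
    """Toy forward pass: each chosen operator scales the input by its shared weight."""
    for layer, op in enumerate(arch):
        x = x * shared[(layer, op)]
    return x

arch_a = sample_architecture()
arch_b = sample_architecture()
# Both sub-architectures read from (and, during training, would update)
# the same shared parameter table.
print(arch_a, forward(arch_a, 1.0))
print(arch_b, forward(arch_b, 1.0))
```

In a real supernet the scalar weights would be operator-specific tensors and the search would also cover connectivity, dimensions, and (here) PIM configurations, but the sharing mechanism is the same: one parameter store, many sampled architectures.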
-
The rise of deep neural networks offers new opportunities for optimizing recommender systems. However, optimizing recommender systems with deep neural networks requires delicate architecture fabrication. We propose NASRec, a paradigm that trains a single supernet and efficiently produces abundant models/sub-architectures by weight sharing. To overcome the data multi-modality and architecture heterogeneity challenges in the recommendation domain, NASRec establishes a large supernet (i.e., search space) to search the full architectures. The supernet incorporates a versatile choice of operators and dense connectivity to minimize the human effort spent finding priors. The scale and heterogeneity of NASRec impose several challenges, such as training inefficiency, operator imbalance, and degraded rank correlation. We tackle these challenges by proposing single-operator any-connection sampling, operator-balancing interaction modules, and post-training fine-tuning. Our crafted models, NASRecNet, show promising results on three Click-Through Rate (CTR) prediction benchmarks, indicating that NASRec outperforms both manually designed models and existing NAS methods with state-of-the-art performance. Our work is publicly available.
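The sampling scheme named above can be illustrated with a small sketch. The semantics are assumed from the name, not taken from the paper: at each training step every choice block activates exactly one operator ("single-operator"), so all operators receive comparably many gradient updates, while the connections feeding each block are sampled freely from all earlier blocks ("any-connection"), covering the dense-connectivity search space. Block counts and operator names are invented for the example.

```python
# Illustrative sketch of "single-operator any-connection" sampling
# (semantics assumed from the name; not the NASRec implementation).
import random

random.seed(1)

NUM_BLOCKS = 4
OPS = ["fc", "attention", "elementwise"]  # hypothetical operator pool

def sample_step():
    """Sample one training-step configuration of the supernet."""
    # Single-operator: exactly one operator active per choice block,
    # keeping gradient updates balanced across the operator pool.
    ops = [random.choice(OPS) for _ in range(NUM_BLOCKS)]
    # Any-connection: block i may take input from any subset of
    # earlier blocks, exploring the dense-connectivity space.
    conns = [[j for j in range(i) if random.random() < 0.5]
             for i in range(NUM_BLOCKS)]
    return ops, conns

ops, conns = sample_step()
print("operators: ", ops)
print("connections:", conns)
```

Resampling at every step means each shared operator weight is trained under many connectivity patterns, which is one plausible way such a scheme mitigates the operator-imbalance and rank-correlation issues the abstract mentions.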