Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.
- 
Abstract
Background: Coronavirus disease 2019 (COVID-19) presents critical diagnostic challenges for managing the pandemic. We investigated the 30-month changes in COVID-19 testing modalities and functional testing sites from the early period of the pandemic to the most recent Omicron surge in 2022 in Kyoto City, Japan.
Methods: This is a retrospective observational study using a local anonymized population database that included patients' demographic and clinical information, testing methods, and testing facilities from January 2020 to June 2022, a total of 30 months. We computed the distribution of symptomatic presentation, testing methods, and testing facilities among cases. Differences over time were tested using chi-square tests of independence.
Results: During the study period, 133,115 confirmed COVID-19 cases were reported, of which 90.9% were symptomatic. Although nucleic acid amplification testing accounted for 68.9% of all testing, the share of lateral flow devices (LFDs) rose rapidly in 2022. As the pandemic continued, testing capacity shifted from COVID-19-designated facilities to general practitioners, who became the leading testing providers (57.3% of 99,945 tests in 2022).
Conclusions: There was a dynamic shift in testing modality during the first 30 months of the pandemic in Kyoto City. General practitioners substantially increased their role as the use of LFDs spread dramatically in 2022. Documenting and understanding this evolution of testing methods and testing locations should help establish a more efficient testing infrastructure for the next pandemic.
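As a concrete illustration of the statistical comparison described in the abstract above, a chi-square test of independence over a period-by-test-method contingency table can be run as in the following minimal sketch; the counts, labels, and use of scipy are illustrative assumptions, not data or code from the study.

```python
# Minimal sketch of a chi-square test of independence, of the kind used to compare
# testing-method distributions across periods. The counts below are hypothetical,
# not data from the Kyoto City study.
from scipy.stats import chi2_contingency

counts = [
    [5200, 300, 150],    # hypothetical 2020 counts: NAAT, LFD, other
    [21000, 9400, 800],  # hypothetical 2022 counts: NAAT, LFD, other
]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.1f}, dof={dof}, p={p_value:.3g}")
```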
- 
Abstract The Covid‐19 pandemic greatly impacted global public policy implementation. There is a lack of research synthesizing the lessons learned during Covid‐19 from a policy perspective. A systematic review was conducted following PRISMA guidelines to examine the literature on public policy implementation during the Covid‐19 pandemic in order to gain comprehensive insights into current topics and future directions. Five clusters of topics were identified: lessons from science, crisis governance, behavior and mental health, beyond the crisis, and frontlines and trust. Extensive collaboration among public health departments emerged as a significant research theme. Thirty recommendations for future research were identified, including the examination of frontline worker behavior, the use of just tech in policy implementation, and the investigation of policies driving improvements in global public health. The findings indicate that current research on public policy implementation during the Covid‐19 pandemic extends beyond health and economic crisis‐related policies. However, further studies in a post‐pandemic context are needed to validate the identified topics and future directions.
- 
Abstract Location-based alerts have gained increasing popularity in recent years, whether in the context of healthcare (e.g., COVID-19 contact tracing), marketing (e.g., location-based advertising), or public safety. However, serious privacy concerns arise when location data are used in the clear in the process. Several solutions employ searchable encryption (SE) to achieve secure alerts directly on encrypted locations. While doing so preserves privacy, the performance overhead incurred is high. We focus on a prominent SE technique in the public-key setting, hidden vector encryption (HVE), and propose a graph embedding technique to encode location data in a way that significantly boosts the performance of processing on ciphertexts. We show that finding the optimal encoding is NP-hard, and we provide three heuristics that obtain significant performance gains: gray optimizer, multi-seed gray optimizer, and scaled gray optimizer. Furthermore, we investigate the more challenging case of dynamic alert zones, where the area of interest changes over time. Our extensive experimental evaluation shows that our solutions can significantly reduce computational overhead compared to existing baselines.
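To make the encoding idea concrete, the sketch below uses a binary reflected Gray code so that adjacent grid cells differ in a single bit, which lets a small contiguous alert zone be covered by one wildcard pattern of the kind HVE tokens are built from. The cell layout, bit width, and helper functions are illustrative assumptions; this is not the paper's gray optimizer.

```python
# Illustrative sketch only: Gray-code encoding of grid-cell indices so adjacent
# cells differ in a single bit. Not the paper's optimizer; it only shows why such
# encodings can shrink the wildcard patterns used to build HVE tokens.
def gray_encode(i: int) -> int:
    return i ^ (i >> 1)

def wildcard_pattern(cells, bits: int = 3) -> str:
    """Fix bit positions on which all encoded cells agree; mark the rest '*'."""
    codes = [format(gray_encode(c), f"0{bits}b") for c in cells]
    return "".join(col[0] if len(set(col)) == 1 else "*" for col in zip(*codes))

# Adjacent cells 3 and 4: Gray codes 010 and 110 share the pattern "*10",
# whereas plain binary (011 vs. 100) would need wildcards in all three bits.
print(wildcard_pattern([3, 4]))
```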
- 
Machine unlearning aims to eliminate the influence of a subset of training samples (i.e., unlearning samples) from a trained model. Effectively and efficiently removing the unlearning samples without negatively impacting overall model performance is challenging. Existing works mainly exploit the input and output spaces and the classification loss, which can result in ineffective unlearning or performance loss. In addition, they use unlearning or remaining samples ineffectively, sacrificing either unlearning efficacy or efficiency. Our main insight is that direct optimization in the representation space, using both unlearning and remaining samples, can effectively remove the influence of unlearning samples while maintaining the representations learned from remaining samples. We propose a contrastive unlearning framework that leverages the concept of representation learning for more effective unlearning. It removes the influence of unlearning samples by contrasting their embeddings against the remaining samples' embeddings so that their embeddings become closer to those of unseen samples. Experiments on a variety of datasets and models, covering both class unlearning and sample unlearning, show that contrastive unlearning achieves the best unlearning effect and efficiency with the lowest performance loss compared with state-of-the-art algorithms. In addition, it generalizes to different contrastive frameworks and other models such as vision-language models. Our main code is available at github.com/Emory-AIMS/Contrastive-Unlearning.
Free, publicly-accessible full text available September 1, 2026.
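The following is a minimal sketch of a contrastive-style unlearning objective of the kind the abstract above describes, written in PyTorch; the loss formulation, tensor shapes, and hyperparameters are illustrative assumptions, and the authors' actual implementation is the one in the linked repository.

```python
# Illustrative sketch of a contrastive-style unlearning loss (not the authors'
# released code). For each unlearning sample, the loss pushes its embedding away
# from same-class "remaining" embeddings and toward other-class ones, so the
# sample no longer clusters with the class it should be forgotten from.
import torch
import torch.nn.functional as F

def contrastive_unlearning_loss(z_unlearn, z_remain, y_unlearn, y_remain, tau=0.5):
    z_u = F.normalize(z_unlearn, dim=1)          # (U, d) embeddings to unlearn
    z_r = F.normalize(z_remain, dim=1)           # (R, d) embeddings to retain
    sim = z_u @ z_r.T / tau                      # (U, R) scaled cosine similarities
    same_class = (y_unlearn[:, None] == y_remain[None, :]).float()
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Treat different-class remaining samples as "positives": maximizing their
    # probability mass pushes each unlearning sample out of its own class cluster.
    diff_class = 1.0 - same_class
    loss = -(diff_class * log_prob).sum(1) / diff_class.sum(1).clamp(min=1.0)
    return loss.mean()
```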
- 
With the growing adoption of privacy-preserving machine learning algorithms, such as Differentially Private Stochastic Gradient Descent (DP-SGD), training or fine-tuning models on private datasets has become increasingly prevalent. This shift has led to the need for models offering varying privacy guarantees and utility levels to satisfy diverse user requirements. Managing numerous versions of large models introduces significant operational challenges, including increased inference latency, higher resource consumption, and elevated costs. Model deduplication is a technique widely used by many model serving and database systems to support high-performance and low-cost inference queries and model diagnosis queries. However, none of the existing model deduplication works has considered privacy, leading to unbounded aggregation of privacy costs for certain deduplicated models and inefficiencies when applied to deduplicate DP-trained models. We formalize the problem of deduplicating DP-trained models for the first time and propose a novel privacy- and accuracy-aware deduplication mechanism to address the problem. We developed a greedy strategy to select and assign base models to target models to minimize storage and privacy costs. When deduplicating a target model, we dynamically schedule accuracy validations and apply the Sparse Vector Technique to reduce the privacy costs associated with private validation data. Compared to baselines, our approach improved the compression ratio by up to 35× for individual models (including large language models and vision transformers). We also observed up to 43× inference speedup due to the reduction of I/O operations.
Free, publicly-accessible full text available June 17, 2026.
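For context, the Sparse Vector Technique mentioned above for privacy-cheap accuracy validations is commonly instantiated as the AboveThreshold mechanism; the sketch below is that textbook variant with hypothetical thresholds and queries, not the paper's scheduling logic.

```python
# Minimal sketch of the Sparse Vector Technique (AboveThreshold variant).
# Thresholds, sensitivity, and the validation queries are hypothetical; this is
# not the paper's exact deduplication or validation-scheduling mechanism.
import numpy as np

def above_threshold(queries, threshold, epsilon, sensitivity=1.0):
    """Answer a stream of low-sensitivity queries, reporting only whether each is
    above a noisy threshold; halts after the first 'above' answer, and the whole
    run costs a single epsilon regardless of how many queries were tested."""
    noisy_threshold = threshold + np.random.laplace(scale=2 * sensitivity / epsilon)
    answers = []
    for q in queries:
        noisy_q = q + np.random.laplace(scale=4 * sensitivity / epsilon)
        if noisy_q >= noisy_threshold:
            answers.append(True)
            break  # stop after the first above-threshold report
        answers.append(False)
    return answers

# e.g. accuracy drops of candidate deduplicated models checked against a tolerance
print(above_threshold([0.2, 0.4, 1.3, 0.1], threshold=1.0, epsilon=0.5))
```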