-
City-wide free WiFi is one of the most common initiatives of smart city infrastructures. While city-wide free WiFi services are not subject to privacy-focused regulations and appeal to a broad demographic, how users perceive privacy in such services is unknown. This study explores the perspectives of users in the United States regarding the privacy practices of such services, as well as their expectations. We conducted surveys with 199 US participants, consisting of those who had used such services (i.e., experienced users, n=99) and those who had not (i.e., potential users, n=100), assessing their satisfaction with the services, their perceptions of the data privacy practices of city-wide free WiFi services, and their general expectations of privacy. By analyzing the responses, we identify 14 key findings. We found that participants are aware of the data collection and data sharing performed by the WiFi services and are uncomfortable with both, yet remain inclined to use the services: the need for WiFi outweighs privacy, and participants place significant trust in the services because of their non-profit, government-run nature. Our analysis provides actionable takeaways for researchers and practitioners, arguing for long-term privacy gains through a regulatory approach that treats city-wide WiFi as a utility, given the trust consumers place in it and their overall tendency to trade off privacy for WiFi access in this context.
-
Prior work has developed numerous systems that test the security and safety of smart homes. For these systems to be applicable in practice, it is necessary to test them with realistic scenarios that represent the use of the smart home, i.e., home automation, in the wild. This demo paper presents the technical details and usage of Helion, a system that uses n-gram language modeling to learn the regularities in user-driven programs, i.e., routines developed for the smart home, and predicts natural scenarios of home automation, i.e., event sequences that reflect realistic home automation usage. We demonstrate the HelionHA platform, developed by integrating Helion with the popular Home Assistant smart home platform. HelionHA allows an end-to-end exploration of Helion's scenarios by executing them as test cases with real and virtual smart home devices.
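
To make the scenario-prediction idea concrete, here is a minimal sketch of the general technique the paper builds on: training an n-gram model over routine event sequences and sampling a plausible scenario from it. The routine data and event names below are invented for illustration, and the code does not reflect Helion's actual implementation or the Home Assistant API.

from collections import Counter, defaultdict
import random

# Hypothetical training data: each routine is a sequence of smart-home events.
# Event names are invented for illustration, not taken from Helion's dataset.
routines = [
    ["motion_detected", "light_on", "thermostat_heat", "light_off"],
    ["door_unlocked", "light_on", "music_on", "light_off"],
    ["motion_detected", "light_on", "music_on", "light_off"],
]

def train_ngram(sequences, n=2):
    # Count how often each event follows each (n-1)-event context.
    model = defaultdict(Counter)
    for seq in sequences:
        padded = ["<s>"] * (n - 1) + seq + ["</s>"]
        for i in range(len(padded) - n + 1):
            context = tuple(padded[i:i + n - 1])
            model[context][padded[i + n - 1]] += 1
    return model

def generate_scenario(model, n=2, max_events=10):
    # Sample an event sequence that reflects the learned regularities.
    context = ("<s>",) * (n - 1)
    scenario = []
    for _ in range(max_events):
        candidates = model.get(context)
        if not candidates:
            break
        events, counts = zip(*candidates.items())
        nxt = random.choices(events, weights=counts, k=1)[0]
        if nxt == "</s>":
            break
        scenario.append(nxt)
        context = context[1:] + (nxt,)
    return scenario

bigram_model = train_ngram(routines, n=2)
print(generate_scenario(bigram_model))  # e.g., ['motion_detected', 'light_on', 'music_on', 'light_off']

The sketch only conveys the flavor of predicting event sequences from learned regularities; Helion itself is trained on real user-driven routines and executes the predicted scenarios against devices through Home Assistant.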
-
Code completion aims at speeding up code writing by predicting the next code token(s) the developer is likely to write. Work in this field has focused on improving the accuracy of the generated predictions, with substantial leaps forward made possible by deep learning (DL) models. However, code completion techniques are mostly evaluated in the scenario of predicting the next token to type, with few exceptions pushing the boundaries to the prediction of an entire code statement. Thus, little is known about the performance of state-of-the-art code completion approaches in more challenging scenarios in which, for example, an entire code block must be generated. We present a large-scale study exploring the capabilities of state-of-the-art Transformer-based models in supporting code completion at different granularity levels, including single tokens, one or multiple entire statements, up to entire code blocks (e.g., the iterated block of a for loop). We experimented with several variants of two recently proposed Transformer-based models, namely RoBERTa and the Text-To-Text Transfer Transformer (T5), for the task of code completion. The achieved results show that Transformer-based models, and in particular the T5, represent a viable solution for code completion, with perfect predictions ranging from ~29%, obtained when asking the model to guess entire blocks, up to ~69%, reached in the simpler scenario of a few tokens masked from the same code statement.
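
As a rough illustration of span-level completion with a T5-style model, the sketch below asks a generic, publicly available checkpoint (t5-small, via the Hugging Face transformers library) to fill a masked span marked with T5's <extra_id_0> sentinel. This is an assumption-laden stand-in: the study's models were trained on code, whereas t5-small is general-purpose, and the masked snippet is invented.

# Minimal sketch of masked-span prediction with a generic T5 checkpoint.
# NOTE: t5-small is not the code-trained T5 from the study; predictions here
# will be much weaker than the reported ~29%-~69% perfect-prediction rates.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# The span to complete is replaced by T5's sentinel token <extra_id_0>.
masked_code = "def sum_list(values): total = 0 <extra_id_0> return total"

input_ids = tokenizer(masked_code, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=32)

# The text predicted for <extra_id_0> appears between sentinel tokens.
print(tokenizer.decode(output_ids[0], skip_special_tokens=False))

In this framing, token-level completion is the degenerate case where the masked span is a single token, while block-level completion corresponds to masking a larger region such as the whole body of a for loop.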