
Title: Smart Webcam Cover: Exploring the Design of an Intelligent Webcam Cover to Improve Usability and Trust
Laptop webcams can be covertly activated by malware and law enforcement agencies. Consequently, 59% of Americans manually cover their webcams to avoid being surveilled. However, manual covers are prone to human error: in a survey of 200 users, we found that 61.5% occasionally forget to re-attach their cover after using their webcam. To address this problem, we developed Smart Webcam Cover (SWC): a thin film (a PDLC overlay) that covers the webcam by default until a user manually uncovers it, and that automatically re-covers the webcam when it is not in use. Through a two-phase design iteration process, we evaluated SWC with 20 webcam-cover users in a remote study using a video prototype of SWC, compared it to manual operation, and discuss the factors that influence users' trust in the effectiveness of SWC and their perceptions of its utility.
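The cover policy the abstract describes (covered by default, opened only by a deliberate user action, re-covered automatically once the camera goes idle) can be sketched as a small state machine. The function and constant names below are illustrative, not from the paper, and the hardware I/O (PDLC film driver, camera-in-use detection) is stubbed out:

```python
# Hypothetical cover-control policy for an SWC-style device.
COVERED, UNCOVERED = "covered", "uncovered"

def next_state(state, user_uncovered, camera_in_use):
    """One step of the cover state machine."""
    if state == COVERED:
        # Only a deliberate user action opens the cover; software alone cannot.
        return UNCOVERED if user_uncovered else COVERED
    # Already uncovered: close again as soon as the camera is no longer in use.
    return UNCOVERED if camera_in_use else COVERED
```

Note that a covered webcam stays covered even while malware holds the camera open, which is the trust property the design targets.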
Authors:
Award ID(s):
2029519
Publication Date:
NSF-PAR ID:
10352044
Journal Name:
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Volume:
5
Issue:
4
Page Range or eLocation-ID:
1 to 21
ISSN:
2474-9567
Sponsoring Org:
National Science Foundation
More Like this
  1. We present the design of a multiuser networked wireless system to remotely configure and control the lighting of multiple webcam users at different locations. The system uses a Raspberry Pi and a wireless DMX transmitter as the wireless interface that controls the DMX webcam lights. The lighting control software OLA runs on the Raspberry Pi. A web interface issues commands to the OLA API running on the Raspberry Pi to control the DMX lights associated with that Raspberry Pi. Multiple wireless interfaces, each for a specific user at a different location, can be configured and managed simultaneously through the web interface, which controls the intensity and color of the DMX lights. The web interface follows a model-view-controller design and makes HTTP calls to the OLA software running on the Raspberry Pi. The proposed system enables an operator to provide optimal and artistic lighting effects for a group of online presenters.
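The HTTP calls from such a web interface to OLA can be illustrated with olad's /set_dmx endpoint, which accepts a universe number and a comma-separated list of channel values. The host name, port, and helper names below are assumptions; check the endpoint details against the OLA documentation before relying on them:

```python
from urllib import parse, request

OLA_HOST = "http://raspberrypi.local:9090"  # olad's web port (assumed)

def set_dmx_payload(universe, channels):
    """Build the form body for OLA's /set_dmx endpoint.

    `channels` is a list of 0-255 DMX channel values for the universe.
    """
    return parse.urlencode({"u": universe,
                            "d": ",".join(str(v) for v in channels)})

def set_dmx(universe, channels):
    """POST the channel values to olad; requires a reachable OLA instance."""
    body = set_dmx_payload(universe, channels).encode()
    req = request.Request(f"{OLA_HOST}/set_dmx", data=body, method="POST")
    return request.urlopen(req)  # network call; not exercised here
```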
  2. Bardanis, M. (Ed.)
    The distribution of soil suction in the field cannot realistically be predicted deterministically. It is well established that there are various sources of uncertainty in the measurement of matric suction, and suction measurements in the field are even more critical because of heterogeneities in field conditions. Hence it becomes necessary to characterize field suction probabilistically for enhanced reliability. The objective of this study was to conduct a probabilistic analysis of measured soil suction in two different test landfill covers, a compacted clay cover (CC) and an engineered turf cover (ETC), under similar meteorological events. Each test cover was 3 m × 3 m (10 ft × 10 ft) in area and 1.2 m (4 ft) in depth. The covers were constructed by excavating the existing subgrade, placing 6-mil plastic sheets, and backfilling the excavated soil, followed by layered compaction. The covers were then instrumented identically with soil water potential sensors down to specified depths. One cover acted as the CC; the other was the ETC, in which engineered turf was laid over the compacted soil. The engineered turf consisted of a structured LLDPE geomembrane overlain by synthetic turf (polyethylene fibers tufted through a double layer of woven polypropylene geotextiles). The sensors were connected to an automated data logging system, and the collected data were analyzed probabilistically using the R program. There were significant differences in the descriptive statistical parameters of the measured soil suction at the two covers under the same climatic conditions. Soil suction measured in the field ranged from approximately 12 to 44 kPa in the ETC, while it ranged from approximately 1 to 2,020 kPa in the CC. The histogram and quantile-quantile (Q-Q) plots showed the data to be non-normally distributed in the field. A heavy-tailed leptokurtic (kurtosis = 13) distribution of suction with substantial outliers was observed in the ETC.
In contrast, the suction distribution in the CC was right-skewed with a thinner tail, indicating an almost platykurtic distribution. The distribution of suction under the engineered turf was reasonably consistent over time compared to bare soil under the same meteorological events. The results of this study show the engineered turf system to be an effective barrier against climate-induced changes in soil suction.
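The descriptive statistics the study relies on (skewness and kurtosis of the suction sample) can be reproduced for any list of readings. The study used R; the following is a minimal Python sketch of the same population moments, with a function name of our own choosing:

```python
def sample_moments(xs):
    """Mean, variance, skewness, and excess kurtosis (population formulas).

    Excess kurtosis is 0 for a normal distribution, positive for a
    heavy-tailed (leptokurtic) sample, negative for a platykurtic one.
    """
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n  # variance
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    excess_kurtosis = m4 / m2 ** 2 - 3
    return mean, m2, skew, excess_kurtosis
```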
  3. Culbertson, J. ; Perfors, A. ; Rabagliati, H. ; Ramenzoni, V. (Ed.)
    Source-goal events involve an object moving from a Source to a Goal. In this work, we focus on the representation of the object, which has received relatively little attention in the study of source-goal events. Specifically, this study investigates the mapping between language and mental representations of object locations in transfer-of-possession events (e.g., throwing, giving). We investigate two grammatical factors that may influence the representation of object location in such events: (a) grammatical aspect (e.g., threw vs. was throwing) and (b) verb semantics (guaranteed transfer, e.g., give, vs. no guaranteed transfer, e.g., throw). We conducted a visual-world eye-tracking study using a novel webcam-based eye-tracking paradigm (WebGazer; Papoutsaki et al., 2016) to investigate how grammatical aspect and verb semantics in the linguistic input guide real-time and final representations of object locations. We show that grammatical cues guide both the real-time and final representations of object locations.
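Visual-world analyses of this kind typically aggregate gaze samples into time bins and compute the proportion of looks to a region of interest (here, the Goal location). A generic aggregation sketch, not the authors' pipeline, might look like this:

```python
from collections import defaultdict

def goal_look_proportion(samples, bin_ms=100):
    """Proportion of gaze samples on the Goal region per time bin.

    `samples` is a list of (time_ms, region) pairs with region in
    {'goal', 'source', 'other'}; returns {bin_index: proportion}.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for t, region in samples:
        b = int(t // bin_ms)
        totals[b] += 1
        hits[b] += region == "goal"
    return {b: hits[b] / totals[b] for b in sorted(totals)}
```

Comparing these per-bin proportions across aspect and verb-semantics conditions is the standard way to expose real-time differences in object-location representations.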
  4. It takes great effort to manually or semi-automatically convert free-text phenotype narratives (e.g., morphological descriptions in taxonomic works) into a computable format before they can be used in large-scale analyses. We argue that neither a manual curation approach nor an information extraction approach based on machine learning is a sustainable solution for producing computable phenotypic data that are FAIR (Findable, Accessible, Interoperable, Reusable) (Wilkinson et al. 2016): these approaches do not scale to all of biodiversity, and they do not stop the publication of free-text phenotypes that would need post-publication curation. In addition, both manual and machine learning approaches face great challenges. Inter-curator variation (curators interpreting or converting a phenotype differently from each other) in manual curation, and the translation of keywords to ontology concepts in automated information extraction, make it difficult for either approach to produce data that are truly FAIR. Our empirical studies show that inter-curator variation in translating phenotype characters to Entity-Quality statements (Mabee et al. 2007) is as high as 40% even within a single project. With this level of variation, curated data integrated from multiple curation projects may still not be FAIR. The key causes of this variation are semantic vagueness in the original phenotype descriptions and difficulties in using standardized vocabularies (ontologies). We argue that the authors describing characters are the key to the solution. Given the right tools and appropriate attribution, authors should be in charge of developing a project's semantics and ontology. This will speed up ontology development and improve the semantic clarity of descriptions from the moment of publication.
In this presentation, we will introduce the Platform for Author-Driven Computable Data and Ontology Production for Taxonomists, which consists of three components: a web-based, ontology-aware software application called 'Character Recorder,' which features a spreadsheet as the data entry platform and gives authors the flexibility of using their preferred terminology in recording characters for a set of specimens (this application also facilitates semantic clarity and consistency across species descriptions); a set of services that produces RDF graph data, collects terms added by authors, detects potential conflicts between terms, dispatches conflicts to the third component, and updates the ontology with resolutions; and an Android mobile application, 'Conflict Resolver,' which displays ontological conflicts and accepts solutions proposed by multiple experts. Fig. 1 shows the system diagram of the platform.
The presentation will consist of: a report on the findings from a recent survey of 90+ participants on the need for a tool like Character Recorder; a methods section that describes how we provide semantics for an existing vocabulary of quantitative characters through a set of properties that explain where and how a measurement (e.g., length of perigynium beak) is taken, and how a custom color palette of RGB values, obtained from real specimens or high-quality specimen images, can be used to help authors choose standardized color descriptions for plant specimens; and a software demonstration, in which we show how Character Recorder and Conflict Resolver work together to construct both human-readable descriptions and RDF graphs using morphological data derived from species in the plant genus Carex (sedges).
The key difference between this system and other ontology-aware systems is that authors can directly add needed terms to the ontology as they wish and can update their data according to ontology updates. The software modules currently incorporated in Character Recorder and Conflict Resolver have undergone formal usability studies. We are actively recruiting Carex experts to participate in a 3-day usability study of the entire Platform for Author-Driven Computable Data and Ontology Production for Taxonomists. Participants will use the platform to record 100 characters of one Carex species. In addition to usability data, we will collect the terms that participants submit to the underlying ontology and data related to conflict resolution. Such data allow us to examine the types and quantities of logical conflicts that may result from user-added terms and to use discrete event simulation models to understand whether and how term additions and conflict resolutions converge. We look forward to a discussion of how the tools described in our presentation (Character Recorder is online at http://shark.sbs.arizona.edu/chrecorder/public) can contribute to producing and publishing FAIR data in taxonomic studies.
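The conflict-detection service described above can be illustrated with a toy model: if two authors place the same term under different parent terms, the term is flagged for Conflict Resolver. This sketch uses plain tuples rather than the platform's RDF graphs, and the data structures are ours, not the platform's:

```python
def detect_term_conflicts(submissions):
    """Flag terms that different authors placed under different parents.

    `submissions` is a list of (author, term, parent) tuples. Returns a
    list of (term, first_placement, conflicting_placement) tuples.
    """
    parents = {}    # term -> (author, parent) of first submission
    conflicts = []
    for author, term, parent in submissions:
        prev = parents.setdefault(term, (author, parent))
        if prev[1] != parent:
            conflicts.append((term, prev, (author, parent)))
    return conflicts
```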
  5. Two common approaches for automating IoT smart spaces are having users write rules using trigger-action programming (TAP) or training machine learning models based on observed actions. In this paper, we unite these approaches. We introduce and evaluate Trace2TAP, a novel method for automatically synthesizing TAP rules from traces (time-stamped logs of sensor readings and manual actuations of devices). We present a novel algorithm that uses symbolic reasoning and SAT-solving to synthesize TAP rules from traces. Compared to prior approaches, our algorithm synthesizes generalizable rules more comprehensively and fully handles nuances like out-of-order events. Trace2TAP also iteratively proposes modified TAP rules when users manually revert automations. We implemented our approach on Samsung SmartThings. Through formative deployments in ten offices, we developed a clustering/ranking system and visualization interface to intelligibly present the synthesized rules to users. We evaluated Trace2TAP through a field study in seven additional offices. Participants frequently selected rules ranked highly by our clustering/ranking system. Participants varied in their automation priorities, and they sometimes chose rules that would seem less desirable by traditional metrics like precision and recall. Trace2TAP supports these differing priorities by comprehensively synthesizing TAP rules and bringing humans into the loop during automation.
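As a rough intuition for synthesizing trigger-action rules from traces (the paper's actual algorithm uses symbolic reasoning and SAT-solving, not the frequency heuristic below), one can enumerate trigger-action candidates and rank them by how often the action follows the trigger within a time window:

```python
from collections import Counter

def mine_tap_rules(trace, window=2):
    """Rank candidate (trigger, action) rules mined from a trace.

    `trace` is a list of (timestamp, event) pairs sorted by time. A rule
    'T -> A' is scored by the fraction of T occurrences followed by A
    within `window` time units (a crude precision proxy).
    """
    scores = Counter()
    counts = Counter()
    for i, (t1, e1) in enumerate(trace):
        counts[e1] += 1
        for t2, e2 in trace[i + 1:]:
            if t2 - t1 > window:
                break  # trace is time-sorted, so no later event qualifies
            if e2 != e1:
                scores[(e1, e2)] += 1
    ranked = sorted(((hits / counts[trig], (trig, act))
                     for (trig, act), hits in scores.items()),
                    reverse=True)
    return [rule for _, rule in ranked]
```

This toy ranker already shows why comprehensive synthesis matters: a rule that scores poorly on precision may still be the one a particular user wants, which is why Trace2TAP presents clustered, ranked candidates rather than a single "best" rule.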