We present the design of a multiuser networked wireless system for remotely configuring and controlling the lighting of multiple webcam users at different locations. The system uses a Raspberry Pi and a wireless DMX transmitter as the wireless interface that controls each user's DMX webcam lights. The lighting control software OLA (Open Lighting Architecture) runs on the Raspberry Pi, and a web interface issues commands to the OLA API on each Raspberry Pi to control the DMX lights attached to it. Multiple wireless interfaces, one per user at each location, can be configured and managed simultaneously through the interactive web interface, which controls the intensity and color of the DMX lights. The web interface follows a model-view-controller design and makes HTTP calls to the OLA software running on each Raspberry Pi. The proposed system enables an operator to provide optimal, artistic lighting effects for a group of online presenters.
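The control path described here can be exercised through OLA's built-in web service, which by default listens on port 9090 and accepts `POST /set_dmx` requests. The abstract does not give the exact endpoint or fixture layout, so the sketch below is only a plausible illustration of how the central web interface might fan out a lighting command to several per-user Raspberry Pis; the hostnames, universe number, and channel assignments are assumptions.

```python
import urllib.parse
import urllib.request

# Each remote Raspberry Pi runs olad, whose web service listens on
# port 9090 by default and accepts POST /set_dmx requests.
# The hostnames below are placeholders for the per-user interfaces.
PI_HOSTS = ["pi-user1.local", "pi-user2.local"]

def set_dmx(host: str, universe: int, channels: list[int]) -> str:
    """Send DMX channel values (0-255) to one Pi's OLA instance."""
    data = urllib.parse.urlencode({
        "u": universe,
        "d": ",".join(str(v) for v in channels),
    }).encode()
    req = urllib.request.Request(f"http://{host}:9090/set_dmx", data=data)
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

# Example: warm white on an assumed RGBW fixture (channels 1-4),
# pushed to every presenter's light from the central interface.
for host in PI_HOSTS:
    set_dmx(host, universe=1, channels=[255, 180, 120, 0])
```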
Smart Webcam Cover: Exploring the Design of an Intelligent Webcam Cover to Improve Usability and Trust
Laptop webcams can be covertly activated by malware and law enforcement agencies. Consequently, 59% of Americans manually cover their webcams to avoid being surveilled. However, manual covers are prone to human error: in a survey of 200 users, we found that 61.5% occasionally forget to re-attach their cover after using their webcam. To address this problem, we developed Smart Webcam Cover (SWC): a thin film (a PDLC overlay) that covers the webcam by default until a user manually uncovers it, and that automatically covers the webcam again when it is not in use. Through a two-phase design iteration process, we evaluated SWC with 20 webcam-cover users in a remote study using a video prototype of SWC, compared against manual operation, and discussed the factors that influence users' trust in the effectiveness of SWC and their perceptions of its utility.
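The automatic re-cover behavior can be approximated in software: a daemon watches whether any process holds the camera device and, when none does, returns the PDLC film to its opaque state. The sketch below assumes a Linux host, the `fuser` utility, and a hypothetical `set_film` driver standing in for the GPIO/relay circuit that switches the film; none of these implementation details come from the paper.

```python
import subprocess
import time

def webcam_in_use(device: str = "/dev/video0") -> bool:
    """True if some process holds the camera device open; `fuser`
    exits 0 when the file is in use (Linux, psmisc package)."""
    return subprocess.run(
        ["fuser", device],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    ).returncode == 0

def set_film(transparent: bool) -> None:
    """Hypothetical PDLC driver: an energized film turns clear, a
    de-energized film scatters light and covers the lens. On real
    hardware this would toggle a GPIO-driven relay; here it logs."""
    print("film:", "transparent" if transparent else "opaque")

# The film starts opaque, and uncovering is a deliberate user action;
# this loop implements only the automatic re-cover: as soon as no
# process is using the camera, the cover returns by itself.
set_film(transparent=False)
while True:
    if not webcam_in_use():
        set_film(transparent=False)
    time.sleep(1)
```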
- Award ID(s): 2029519
- Publication Date:
- NSF-PAR ID: 10352044
- Journal Name: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
- Volume: 5
- Issue: 4
- Page Range or eLocation-ID: 1 to 21
- ISSN: 2474-9567
- Sponsoring Org: National Science Foundation
More Like this
Bardanis, M. (Ed.) The distribution of soil suction in the field cannot be predicted deterministically. Matric suction measurements are subject to various well-established sources of uncertainty, and field measurements are even more uncertain because of heterogeneous field conditions. Hence, suction in the field must be characterized probabilistically for enhanced reliability. The objective of this study was to conduct a probabilistic analysis of measured soil suction in two different test landfill covers, a compacted clay cover (CC) and an engineered turf cover (ETC), under the same meteorological events. Each test cover was 3 m × 3 m (10 ft × 10 ft) in plan and 1.2 m (4 ft) deep. The covers were constructed by excavating the existing subgrade, placing 6-mil plastic sheets, and backfilling the excavated soil in compacted layers. Both covers were then instrumented identically with soil-water potential sensors to specified depths. One cover served as the CC; in the other, the ETC, engineered turf was laid over the compacted soil. The engineered turf consisted of a structured LLDPE geomembrane overlain by synthetic turf (polyethylene fibers …
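As a minimal illustration of what probabilistically characterizing field suction can mean in practice, the sketch below fits a lognormal distribution (a common first model for strictly positive, right-skewed suction data) to a set of placeholder matric suction readings and reports the mean, coefficient of variation, and an exceedance probability. The readings and the 50 kPa threshold are illustrative assumptions, not the study's data or method.

```python
import numpy as np
from scipy import stats

# Placeholder matric suction readings (kPa) from one sensor depth;
# not the study's data, just a shape for the analysis.
suction = np.array([18.0, 22.5, 31.0, 27.4, 40.2, 24.8, 35.6, 29.1])

# Suction is strictly positive and right-skewed, so a lognormal fit
# is a common first model for probabilistic characterization.
shape, loc, scale = stats.lognorm.fit(suction, floc=0)

mean = stats.lognorm.mean(shape, loc, scale)
std = stats.lognorm.std(shape, loc, scale)
print(f"mean = {mean:.1f} kPa, CoV = {std / mean:.2f}")

# Probability that suction exceeds an (assumed) design threshold.
print("P(suction > 50 kPa) =", 1 - stats.lognorm.cdf(50.0, shape, loc, scale))
```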
Culbertson, J.; Perfors, A.; Rabagliati, H.; Ramenzoni, V. (Eds.) Source-Goal events involve an object moving from a Source to a Goal. In this work, we focus on the representation of the object, which has received relatively little attention in the study of Source-Goal events. Specifically, this study investigates the mapping between language and mental representations of object locations in transfer-of-possession events (e.g., throwing, giving). We examine two grammatical factors that may influence the representation of object location in such events: (a) grammatical aspect (e.g., threw vs. was throwing) and (b) verb semantics (guaranteed transfer, e.g., give, vs. no guaranteed transfer, e.g., throw). We conducted a visual-world eye-tracking study using a novel webcam-based eye-tracking paradigm (WebGazer; Papoutsaki et al., 2016) to investigate how grammatical aspect and verb semantics in the linguistic input guide the real-time and final representations of object locations. We show that these grammatical cues guide both the real-time and final representations of object locations.
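A typical downstream analysis of such webcam eye-tracking data is the proportion of looks to a region of interest over time, compared across conditions. The sketch below computes Goal-region look proportions per time bin from toy fixation logs; the conditions shown, the bin size, and all values are placeholders, not the study's data or analysis code.

```python
import numpy as np

# Toy fixation logs: one row per trial, one column per 50 ms bin after
# verb onset; 1 if the gaze sample fell in the Goal region, else 0.
# Conditions cross grammatical aspect with verb semantics.
looks_by_condition = {
    ("perfective",  "give"):  np.array([[0, 1, 1, 1, 1],
                                        [0, 0, 1, 1, 1]]),
    ("progressive", "throw"): np.array([[0, 0, 0, 1, 0],
                                        [0, 0, 1, 0, 1]]),
}

# Proportion of Goal looks per time bin, averaged over trials: the
# curve that visual-world studies compare across conditions.
for (aspect, verb), looks in looks_by_condition.items():
    print(f"{aspect:>11} / {verb}: {looks.mean(axis=0)}")
```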
It takes great effort to manually or semi-automatically convert free-text phenotype narratives (e.g., morphological descriptions in taxonomic works) into a computable format before they can be used in large-scale analyses. We argue that neither a manual curation approach nor a machine-learning-based information extraction approach is a sustainable way to produce computable phenotypic data that are FAIR (Findable, Accessible, Interoperable, Reusable) (Wilkinson et al. 2016), because these approaches do not scale to all of biodiversity and do not stop the publication of free-text phenotypes that would need post-publication curation. In addition, both approaches face great challenges: inter-curator variation (curators interpreting or converting a phenotype differently from one another) in manual curation, and keyword-to-ontology-concept translation in automated information extraction, make it difficult for either approach to produce data that are truly FAIR. Our empirical studies show that inter-curator variation in translating phenotype characters to Entity-Quality statements (Mabee et al. 2007) is as high as 40%, even within a single project. With this level of variation, curated data integrated from multiple curation projects may still not be FAIR. The key causes of this variation have been identified as semantic vagueness …
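Inter-curator variation of the kind reported here can be quantified as the fraction of characters that two curators translate to different Entity-Quality (EQ) statements. The sketch below shows that computation on placeholder annotations; the entity and quality labels are invented stand-ins, not real ontology IDs or data from the cited studies.

```python
# Two curators' translations of the same characters into
# (entity, quality) pairs. All labels are invented placeholders.
curator_a = {
    "char1": ("E:pelvic fin", "Q:reduced"),
    "char2": ("E:forelimb",   "Q:absent"),
    "char3": ("E:dorsal fin", "Q:elongated"),
    "char4": ("E:premaxilla", "Q:fused"),
    "char5": ("E:scale",      "Q:absent"),
}
curator_b = {
    "char1": ("E:pelvic fin", "Q:reduced"),
    "char2": ("E:limb",       "Q:absent"),            # broader entity chosen
    "char3": ("E:dorsal fin", "Q:increased length"),  # different quality term
    "char4": ("E:premaxilla", "Q:fused"),
    "char5": ("E:scale",      "Q:absent"),
}

# Variation = share of shared characters translated differently.
shared = curator_a.keys() & curator_b.keys()
disagreements = sum(curator_a[c] != curator_b[c] for c in shared)
print(f"inter-curator variation: {disagreements / len(shared):.0%}")
```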
Two common approaches for automating IoT smart spaces are having users write rules using trigger-action programming (TAP) or training machine learning models based on observed actions. In this paper, we unite these approaches. We introduce and evaluate Trace2TAP, a novel method for automatically synthesizing TAP rules from traces (time-stamped logs of sensor readings and manual actuations of devices). We present a novel algorithm that uses symbolic reasoning and SAT-solving to synthesize TAP rules from traces. Compared to prior approaches, our algorithm synthesizes generalizable rules more comprehensively and fully handles nuances like out-of-order events. Trace2TAP also iteratively proposes modified TAP rules when users manually revert automations. We implemented our approach on Samsung SmartThings. Through formative deployments in ten offices, we developed a clustering/ranking system and visualization interface to intelligibly present the synthesized rules to users. We evaluated Trace2TAP through a field study in seven additional offices. Participants frequently selected rules ranked highly by our clustering/ranking system. Participants varied in their automation priorities, and they sometimes chose rules that would seem less desirable by traditional metrics like precision and recall. Trace2TAP supports these differing priorities by comprehensively synthesizing TAP rules and bringing humans into the loop during automation.
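Trace2TAP's actual synthesizer uses symbolic reasoning and SAT solving; as a far simpler stand-in, the sketch below pairs each actuation in a toy trace with the sensor event that immediately precedes it and keeps only triggers that explain every occurrence of that actuation. It ignores conditions, out-of-order events, and rule ranking, all of which the paper's algorithm handles; the trace and device names are invented.

```python
# Toy trace: time-stamped sensor readings and manual actuations.
trace = [
    (1, "motion", "active"),    (2, "lamp", "on"),
    (10, "motion", "inactive"), (11, "lamp", "off"),
    (20, "motion", "active"),   (21, "lamp", "on"),
]
ACTUATORS = {"lamp"}  # device names whose events count as actuations

def synthesize(trace):
    """Keep a trigger for an actuation only if it immediately precedes
    every occurrence of that actuation in the trace."""
    candidates = {}
    prev_sensor = None
    for _, name, value in trace:
        if name in ACTUATORS:
            candidates.setdefault((name, value), set()).add(prev_sensor)
        else:
            prev_sensor = (name, value)
    return [(t.pop(), act) for act, t in candidates.items() if len(t) == 1]

# Prints: IF motion=active THEN set lamp=on, and the inactive/off rule.
for (t_dev, t_val), (a_dev, a_val) in synthesize(trace):
    print(f"IF {t_dev}={t_val} THEN set {a_dev}={a_val}")
```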