Title: Selectively localized: Temporal and visual structure of smartphone screen activity across media environments
This study demonstrates how localization and homogenization can co-occur in different aspects of smartphone usage. Smartphones afford individualization of media behavior: users can begin, end, or switch between countless tasks at any time. This individualization, however, is shaped by shared environments, so smartphone usage may be similar among people who share an environment yet differ, or localize, across environments or regions. At the same time, screen interactions for all users are bounded and guided by nearly identical smartphone interfaces, suggesting that usage may be similar, or homogenized, across individuals regardless of environment. We study homogenization and localization by comparing the temporal, visual, and experiential composition of screen activity among individuals in three dissimilar media environments (the United States, China, and Myanmar) using one week of screenshot data captured passively every 5 s by the novel Screenomics framework. We find that overall usage levels are consistently dissimilar across media environments, while metrics that depend more on moment-level decisions and user-interface design do not vary significantly across media environments. These results suggest that quantitative research on homogenization and localization should analyze behavior driven by user interfaces and by contextually determined parameters, respectively.
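The capture scheme above lends itself to a simple reconstruction of screen sessions: because a screenshot is taken every 5 s only while the screen is on, a gap larger than the capture interval implies the screen was off. A minimal sketch (the gap threshold and function names are illustrative, not part of the Screenomics framework):

```python
def sessions_from_timestamps(timestamps, max_gap=10):
    """Group screenshot timestamps (seconds) into screen sessions.

    A new session starts whenever the gap between consecutive
    screenshots exceeds max_gap, i.e., capture paused because the
    screen was off. Returns a list of (start, end) tuples.
    """
    if not timestamps:
        return []
    ts = sorted(timestamps)
    sessions = []
    start = prev = ts[0]
    for t in ts[1:]:
        if t - prev > max_gap:
            sessions.append((start, prev))
            start = t
        prev = t
    sessions.append((start, prev))
    return sessions

def total_screen_time(sessions, capture_interval=5):
    # Each session spans end - start seconds, plus one capture
    # interval to account for the final screenshot's dwell time.
    return sum(end - start + capture_interval for start, end in sessions)
```

From sessions like these, per-day usage levels (the "overall usage" metric) and session-switching rates (the "moment-level" metrics) could both be derived.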
Award ID(s): 1831481
PAR ID: 10379526
Author(s) / Creator(s): ; ; ;
Date Published:
Journal Name: Mobile Media & Communication
Volume: 10
Issue: 3
ISSN: 2050-1579
Page Range / eLocation ID: 487 to 509
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. NavCog3 is a smartphone turn-by-turn navigation assistance system we developed specifically to enable independent navigation for people with visual impairments. Using off-the-shelf Bluetooth beacons installed in the surrounding environment and a commodity smartphone carried by the user, NavCog3 achieves unparalleled localization accuracy in real-world large-scale scenarios. By leveraging its accurate localization capabilities, NavCog3 guides the user through the environment and signals the presence of semantic features and points of interest in the vicinity (e.g., doorways, shops). To assess the capability of NavCog3 to promote independent mobility of individuals with visual impairments, we deployed and evaluated the system in two challenging real-world scenarios. The first scenario demonstrated the scalability of the system, which was permanently installed in a five-story shopping mall spanning three buildings and a public underground area. During the study, 10 participants traversed three fixed routes, and 43 participants traversed free-choice routes across the environment. The second scenario validated the system's usability in the wild in a hotel complex temporarily equipped with NavCog3 during a conference for individuals with visual impairments. In the hotel, almost 14.2 h of system usage data were collected from 37 unique users who performed 280 travels across the environment, for a total of 30,200 m.
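The abstract does not specify NavCog3's localization algorithm, so the following is only a generic illustration of beacon-based positioning: a log-distance path-loss model converts RSSI readings to distances, and a distance-weighted centroid of the beacon positions estimates the user's location. All names, coordinates, and constants are hypothetical.

```python
def rssi_to_distance(rssi, tx_power=-59, n=2.0):
    # Log-distance path-loss model: estimated distance in metres.
    # tx_power is the expected RSSI (dBm) at 1 m; n is the
    # environment's path-loss exponent (assumed values).
    return 10 ** ((tx_power - rssi) / (10 * n))

def weighted_centroid(beacons):
    """Estimate (x, y) from [((x, y), rssi), ...] readings.

    Closer beacons (shorter estimated distance) get higher weight.
    """
    weights = [((x, y), 1.0 / rssi_to_distance(rssi))
               for (x, y), rssi in beacons]
    total = sum(w for _, w in weights)
    x = sum(px * w for (px, _), w in weights) / total
    y = sum(py * w for (_, py), w in weights) / total
    return x, y
```

Production systems typically layer filtering (e.g., particle or Kalman filters) on top of such raw estimates to reach the accuracy a turn-by-turn guide requires.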
  2.
    Whereas social visual attention has been examined in computer-mediated (e.g., shared screen) or video-mediated (e.g., FaceTime) interaction, it has yet to be studied in mixed-media interfaces that combine video of the conversant with other UI elements. We analyzed eye gaze of 37 dyads (74 participants) who were tasked with negotiating the price of a new car (as a buyer and seller) using mixed-media video conferencing under competitive or cooperative negotiation instructions (experimental manipulation). We used multidimensional recurrence quantification analysis to extract spatio-temporal patterns corresponding to mutual gaze (individuals look at each other), joint attention (individuals focus on the same elements of the interface), and gaze aversion (an individual looks at their partner, who is looking elsewhere). Our results indicated that joint attention predicted the sum of points attained by the buyer and seller (i.e., the joint score). In contrast, gaze aversion was associated with faster time to complete the negotiation, but with a lower joint score. Unexpectedly, mutual gaze was highly infrequent and unrelated to the negotiation outcomes, and none of the gaze patterns predicted subjective perceptions of the negotiation. There were also no effects of gender composition or negotiation condition on the gaze patterns or negotiation outcomes. Our results suggest that social visual attention may operate differently in mixed-media collaborative interfaces than in face-to-face interaction. As mixed-media collaborative interfaces gain prominence, our work can be leveraged to inform the design of gaze-sensitive user interfaces that support remote negotiations among other tasks.
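The three gaze patterns can be illustrated with a frame-by-frame coding of the two participants' area-of-interest (AOI) streams. This is a toy sketch, not the multidimensional recurrence quantification analysis the study used, and the AOI labels are hypothetical.

```python
def code_gaze_frames(gaze_a, gaze_b):
    """Label each frame of a dyad's gaze streams.

    gaze_a / gaze_b: per-frame AOI labels, where "partner" means the
    participant is looking at the other person's video feed, and any
    other label names a UI element (document, chat, etc.).
    """
    labels = []
    for a, b in zip(gaze_a, gaze_b):
        if a == "partner" and b == "partner":
            labels.append("mutual_gaze")        # looking at each other
        elif a == b:
            labels.append("joint_attention")    # same UI element
        elif (a == "partner") != (b == "partner"):
            labels.append("gaze_aversion")      # one looks, one away
        else:
            labels.append("other")
    return labels
```

Counting label runs over such a coding would give the frequency and duration statistics that the recurrence analysis quantifies more formally.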
  3. Bottoni, Paolo; Panizzi, Emanuele (Ed.)
    Many questions regarding single-hand text entry on modern smartphones (in particular, large-screen smartphones) remain under-explored, such as: (i) do existing single-handed keyboards suit large-screen smartphone users, and (ii) does individual customization improve single-handed keyboard performance? In this paper, we study single-handed typing behaviors on several representative keyboards on large-screen mobile devices. We found that (i) the user-adaptable-shape curved keyboard performs best among all the studied keyboards; (ii) users' familiarity with the Qwerty layout plays a significant role at the beginning, but after several sessions of training, the user-adaptable curved keyboard has the best learning curve and performs best; and (iii) statistical decoding algorithms combining spatial and language models generally handle the input noise from single-handed typing well.
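The spatial-plus-language-model decoding in finding (iii) is typically a noisy-channel computation: each candidate word is scored by a Gaussian spatial likelihood of the tap points around the intended key centres, combined with a language-model prior. A toy sketch with hypothetical key coordinates and a two-word lexicon (none of these values come from the paper):

```python
import math

# Hypothetical key centres (x, y) for a few Qwerty keys, in key-width units.
KEY_CENTERS = {"t": (4.0, 0.0), "h": (5.5, 1.0), "e": (2.0, 0.0),
               "y": (5.0, 0.0), "r": (3.0, 0.0)}

def spatial_log_likelihood(taps, word, sigma=0.5):
    # Sum of isotropic Gaussian log-densities (up to a constant),
    # one per tap, centred on each intended key.
    if len(taps) != len(word):
        return float("-inf")
    ll = 0.0
    for (tx, ty), ch in zip(taps, word):
        kx, ky = KEY_CENTERS[ch]
        ll += -((tx - kx) ** 2 + (ty - ky) ** 2) / (2 * sigma ** 2)
    return ll

def decode(taps, lexicon):
    """Pick the word maximising spatial likelihood x unigram prior."""
    return max(lexicon,
               key=lambda w: spatial_log_likelihood(taps, w)
                             + math.log(lexicon[w]))
```

For example, taps drifting toward the thumb's reach still decode to the likelier word, which is how such decoders absorb single-handed input noise.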
  4.
    The effectiveness of user interfaces is limited by the tendency for the human mind to wander. Intelligent user interfaces can combat this by detecting when mind wandering occurs and attempting to regain user attention through a variety of intervention strategies. However, collecting data to build mind wandering detection models can be expensive, especially considering the variety of media available and potential differences in mind wandering across them. We explored the possibility of using eye gaze to build cross-domain models of mind wandering, where models trained on data from users in one domain are used for different users in another domain. We built supervised classification models using a dataset of 132 users whose mind wandering reports were collected in response to thought-probes while they completed tasks from seven different domains for six minutes each (five domains are investigated here: Illustrated Text, Narrative Film, Video Lecture, Naturalistic Scene, and Reading Text). We used global eye gaze features to build within- and cross-domain models using 5-fold user-independent cross validation. The best performing within-domain models yielded AUROCs ranging from .57 to .72, which were comparable for the cross-domain models (AUROCs of .56 to .68). Models built from coarse-grained locality features capturing the spatial distribution of gaze resulted in slightly better transfer on average (transfer ratios of .61 vs. .54 for global models) due to improved performance in certain domains. Instance-based and feature-level domain adaptation did not result in any improvements in transfer. We found that seven gaze features likely contributed to transfer, as they were among the top ten features for at least four domains. Our results indicate that gaze features are suitable for domain adaptation between similar domains, but more research is needed to improve domain adaptation between more dissimilar domains.
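"User-independent cross validation" means every fold keeps all of a given user's data on one side of the split, so the model is always evaluated on unseen users; AUROC can then be computed with the rank-sum (Mann-Whitney) formula. A minimal stdlib sketch, where the round-robin fold assignment is an illustrative choice rather than the paper's procedure:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the rank-sum formula:
    the fraction of positive/negative pairs the scores order
    correctly, counting ties as half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def user_independent_folds(user_ids, k=5):
    """Assign each unique user to one of k folds, so no user's data
    appears in both the train and test side of any split."""
    users = sorted(set(user_ids))
    fold_of = {u: i % k for i, u in enumerate(users)}
    return [fold_of[u] for u in user_ids]
```

In practice a library routine such as scikit-learn's GroupKFold does the same grouping with shuffling and balancing options.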
    This analysis focuses on a smartphone app known as "Transit" that is used to unlock shared bicycles in Chicago. Data from the app were utilized in a three-part analysis. First, Transit app bikeshare usage patterns were compared with system-wide bikeshare utilization using publicly available data. The results revealed that hourly usage on weekdays generally follows classical peaked commuting patterns; however, daily usage reached its highest level on weekends. This suggests that there may be large numbers of both commuting and recreational users. The second part aimed to identify distinct user groups via cluster analysis; the results revealed six different clusters: (1) commuters, (2) utility users, (3) leisure users, (4) infrequent commuters, (5) weekday visitors, and (6) weekend visitors. The group unlocking the most shared bikes (45.58% of all Transit app unlocks) was commuters, who represent 10% of Transit app bikeshare users. The third part proposed a trip chaining algorithm to identify "trip chaining bikers." This term refers to bikeshare users who return a shared bicycle and immediately check out another, presumably to avoid paying extra usage fees for trips over 30 min. The algorithm revealed that 27.3% of Transit app bikeshare users exhibited this type of "bike chaining" behavior. However, this varied substantially between user groups; notably, 66% of Transit app bikeshare users identified as commuters made one or more bike chaining unlocks. The implications are important for bikeshare providers to understand the impact of pricing policies, particularly in encouraging the turnover of bicycles.
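A trip chaining detector of the kind described can be sketched as a single pass over time-ordered trips, flagging any unlock that occurs within a small window after the same user's previous return. The record layout and the two-minute window below are assumptions for illustration, not the paper's algorithm:

```python
from datetime import datetime, timedelta

def find_chained_unlocks(trips, max_gap_minutes=2):
    """Flag unlocks that immediately follow the same user's previous
    return, suggesting a chain to reset the 30-minute fee clock.

    trips: list of (user_id, unlock_time, return_time) tuples,
    ordered by unlock_time. Returns indices of chained unlocks.
    """
    last_return = {}  # user_id -> most recent return time
    chained = []
    for i, (user, unlock, ret) in enumerate(trips):
        prev = last_return.get(user)
        if prev is not None and \
                timedelta() <= unlock - prev <= timedelta(minutes=max_gap_minutes):
            chained.append(i)
        last_return[user] = ret
    return chained
```

Grouping the flagged indices by user would yield the per-cluster chaining rates (e.g., the 66% figure for commuters) reported in the analysis.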