It is well known that text-based passwords are hard to remember and that users prefer simple (and insecure) passwords. However, despite extensive research on the topic, no principled account exists that explains when a password will be forgotten. This paper contributes new data and a set of analyses building on the ecological theory of memory and forgetting. We propose that human memory naturally adapts according to an estimate of how often a password will be needed, such that frequently used, important passwords are less likely to be forgotten. We derive models for login duration and odds of recall as functions of rate of use and number of uses thus far. The models achieved a root-mean-square error (RMSE) of 1.8 seconds for login duration and 0.09 for recall odds on data collected in a month-long field experiment in which frequency of password use was controlled. The theory and data shed new light on password management, account usage, and password security and memorability.
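The functional form of these models is not reproduced above, so the following is only a minimal sketch of the idea: recall probability modeled as a logistic function of log rate of use and log number of prior uses. The coefficients are placeholders, not the paper's fitted values.

```python
import math

def recall_probability(rate_per_day: float, n_prior_uses: int,
                       b0: float = -1.0, b1: float = 0.8, b2: float = 0.6) -> float:
    """Hypothetical logistic model of recall as a function of usage.

    rate_per_day : how often the password is needed (uses per day)
    n_prior_uses : number of successful uses so far
    b0, b1, b2   : placeholder coefficients, NOT the paper's fitted values
    """
    log_odds = (b0
                + b1 * math.log(max(rate_per_day, 1e-6))
                + b2 * math.log(max(n_prior_uses, 1)))
    return math.exp(log_odds) / (1.0 + math.exp(log_odds))

# Example: a password used about once a day, 20 times so far
print(f"P(recall) ~ {recall_probability(1.0, 20):.2f}")
```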
Recognition and Recall of Geographic Data in Cartograms
In this paper we investigate the memorability of two types of cartograms, both in terms of recognition of the visualization and recall of the data. A cartogram, or value-by-area map, is a map representation in which geographic regions are modified to reflect a given statistic, such as population or income. Of the many different types of cartograms, the contiguous and Dorling types are among the most popular and most effective. With this in mind, we evaluate the memorability of these two cartogram types in a human-subjects study, using visualization tasks based on Bertin’s map reading levels. In particular, our results indicate that Dorling cartograms are associated with better recall of general patterns and trends. This, together with additional significant differences between the two cartogram types, has implications for the design and use of cartograms where memorability matters.
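As a rough illustration of the value-by-area idea behind a Dorling cartogram (circle areas proportional to a statistic such as population), here is a minimal sketch with made-up values; it is not code or data from the study.

```python
import math

# Hypothetical population values (millions); placeholders, not study data
populations = {"A": 39.5, "B": 29.1, "C": 21.5, "D": 12.8}

def dorling_radii(values: dict, max_radius: float = 50.0) -> dict:
    """Scale circle radii so that AREA (not radius) is proportional to the value."""
    largest = max(values.values())
    return {region: max_radius * math.sqrt(v / largest) for region, v in values.items()}

print(dorling_radii(populations))
```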
- Publication Date:
- NSF-PAR ID: 10179516
- Journal Name: 13th International Conference on Advanced Visual Interfaces (AVI)
- Sponsoring Org: National Science Foundation
More Like this
With phishing attacks, password breaches, and brute-force login attacks presenting constant threats, it is clear that passwords alone are inadequate for protecting the web applications entrusted with our personal data. Instead, web applications should practice defense in depth and give users multiple ways to secure their accounts. In this paper we propose login rituals, which define actions that a user must take to authenticate, and web tripwires, which define actions that a user must not take to remain authenticated. These actions outline the expected behavior of users familiar with their individual setups on applications they use often. We show how we can detect and prevent intrusions from web attackers who lack this familiarity with their victim's behavior. We design a modular and application-agnostic system that incorporates these two mechanisms, allowing us to add an additional layer of deception-based security to existing web applications without modifying the applications themselves. In addition to testing our system and evaluating its performance when applied to five popular open-source web applications, we demonstrate the promise of these mechanisms through a user study. Specifically, we evaluate the detection rate of tripwires against simulated attackers, 88% of whom clicked on at least one tripwire. We also observe web users' …
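As an illustration of the tripwire idea (an action a legitimate, familiar user would never take, so touching it signals an intruder), here is a minimal sketch; the decoy paths and session handling are hypothetical and do not reproduce the authors' system.

```python
# Hypothetical tripwire check: these paths are decoys that a legitimate,
# familiar user never visits; hitting one invalidates the session.
TRIPWIRE_PATHS = {"/settings/legacy-export", "/admin/old-dashboard"}

def check_request(path: str, session: dict) -> dict:
    """Inspect one request and return the (possibly invalidated) session."""
    if path in TRIPWIRE_PATHS:
        session["authenticated"] = False              # kick the suspected intruder out
        session["alert"] = f"tripwire hit: {path}"    # record the event for review
    return session

# Example: an attacker exploring the account stumbles onto a decoy link
print(check_request("/admin/old-dashboard", {"authenticated": True}))
```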
Abstract. Purpose. This study aims to develop and validate a multi-view learning method that combines primary tumor radiomics and lymph node (LN) radiomics for the preoperative prediction of LN status in gastric cancer (GC). Methods. A total of 170 contrast-enhanced abdominal CT images from GC patients were enrolled in this retrospective study. After data preprocessing, a two-step feature selection approach, comprising Pearson correlation analysis and a supervised feature selection method based on test-time budget (FSBudget), was applied to remove redundancy from the tumor and LN radiomics features, respectively. Two types of discriminative features were then learned by an unsupervised multi-view partial least squares (UMvPLS) method to obtain a latent common space on which a logistic regression classifier is trained. Five repeated random hold-out experiments were employed. Results. On the 20-dimensional latent common space, the area under the receiver operating characteristic curve (AUC), precision, accuracy, recall, and F1-score are 0.9531 ± 0.0183, 0.9260 ± 0.0184, 0.9136 ± 0.0174, 0.9468 ± 0.0106, and 0.9362 ± 0.0125 for the training cohort, respectively, and 0.8984 ± 0.0536, 0.8671 ± 0.0489, 0.8500 ± 0.0599, 0.9118 ± 0.0550, and 0.8882 ± 0.0440 for the validation cohort, respectively (reported as mean ± standard deviation). The method shows better discrimination capability than single-view methods, our previous method, and eight baseline methods. When the dimension was reduced to 2, the model not only has effective prediction performance …
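As a rough sketch of this kind of two-view pipeline on synthetic data: Pearson-based filtering of each view, a two-view projection (scikit-learn's CCA is used here only as a stand-in for UMvPLS, which is not a packaged estimator), and a logistic regression classifier on the concatenated latent features. Nothing below reproduces the paper's implementation or results.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for tumor and lymph-node radiomics feature matrices
# (the real study extracts features from contrast-enhanced CT).
n = 170
X_tumor = rng.normal(size=(n, 50))
X_ln = rng.normal(size=(n, 50))
y = rng.integers(0, 2, size=n)

def pearson_filter(X, y, keep=20):
    """Keep the `keep` features most correlated with the label (a crude
    stand-in for the paper's Pearson + FSBudget two-step selection)."""
    r = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return X[:, np.argsort(r)[::-1][:keep]]

Xt, Xl = pearson_filter(X_tumor, y), pearson_filter(X_ln, y)

# Two-view latent space: CCA here is only a rough substitute for UMvPLS.
cca = CCA(n_components=10)
Zt, Zl = cca.fit_transform(Xt, Xl)
Z = np.hstack([Zt, Zl])            # 20-dimensional common representation

clf = LogisticRegression(max_iter=1000).fit(Z, y)
print("AUC on synthetic data:", roc_auc_score(y, clf.predict_proba(Z)[:, 1]))
```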
Electroencephalography (EEG) is a popular clinical monitoring tool used for diagnosing brain-related disorders such as epilepsy [1]. As monitoring EEGs in a critical-care setting is an expensive and tedious task, there is great interest in developing real-time EEG monitoring tools to improve patient care quality and efficiency [2]. However, clinicians require automatic seizure detection tools that provide decisions with at least 75% sensitivity and less than one false alarm (FA) per 24 hours [3]. Some commercial tools recently claim to reach such performance levels, including the Olympic Brainz Monitor [4] and Persyst 14 [5]. In this abstract, we describe our efforts to transform a high-performance offline seizure detection system [3] into a low-latency real-time, or online, seizure detection system. An overview of the system is shown in Figure 1. The main difference between an online and an offline system is that an online system must be causal and have minimum latency, which is often defined by domain experts. The offline system, shown in Figure 2, uses two phases of deep learning models with postprocessing [3]. The channel-based long short-term memory (LSTM) model (Phase 1 or P1) processes linear frequency cepstral coefficient (LFCC) [6] features from each EEG …
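As a toy illustration of the causality constraint that separates an online system from an offline one, the sketch below buffers only past samples and emits a decision whose latency is bounded by the window length; the energy threshold stands in for the LSTM's posterior and is not the authors' model.

```python
from collections import deque

class OnlineDetector:
    """Causal sliding-window wrapper: only past samples are ever used, so the
    decision latency is bounded by the window length (a toy stand-in for the
    LFCC/LSTM pipeline described in the abstract)."""

    def __init__(self, window_len: int, threshold: float):
        self.buf = deque(maxlen=window_len)
        self.threshold = threshold

    def push(self, sample: float) -> bool:
        """Feed one new EEG sample; return True if the current window is flagged."""
        self.buf.append(sample)
        if len(self.buf) < self.buf.maxlen:
            return False                          # not enough history yet
        energy = sum(x * x for x in self.buf) / len(self.buf)
        return energy > self.threshold            # placeholder for the model's posterior

# Example: stream a synthetic signal one sample at a time
det = OnlineDetector(window_len=250, threshold=2.0)
flags = [det.push(0.1 * i % 3) for i in range(1000)]
print("windows flagged:", sum(flags))
```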
Information Retrieval (IR) plays a pivotal role in diverse Software Engineering (SE) tasks, e.g., bug localization and triaging, bug report routing, code retrieval, requirements analysis, etc. SE tasks operate on diverse types of documents, including code, text, stack traces, and structured, semi-structured, and unstructured metadata that often contain specialized vocabularies. As the performance of any IR-based tool critically depends on the underlying document types, and given the diversity of SE corpora, it is essential to understand which models work best for which types of SE documents and tasks. We empirically investigate the interaction between IR models and document types for two representative SE tasks (bug localization and relevant-project search), carefully chosen because they require a diverse set of SE artifacts (mixtures of code and text), and confirm that the models' performance varies significantly with the mix of document types. Leveraging this insight, we propose a generalized framework, SRCH, to automatically select the most favorable IR model(s) for a given SE task. We evaluate SRCH w.r.t. these two tasks and confirm its effectiveness. Our preliminary user study shows that SRCH's intelligent adaptation of the IR model(s) to the task at hand not only improves precision and recall for SE tasks but may also improve users' …
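As a toy illustration of selecting an IR model per task from held-out effectiveness (SRCH's actual selection procedure is not described above), here is a minimal sketch with hypothetical model names and a fake evaluation metric.

```python
# Toy stand-in for automatic IR-model selection: score each candidate model on a
# small validation set for the task at hand and keep the best one. The models,
# scores, and document types are hypothetical, not SRCH's actual procedure.
from statistics import mean

def select_ir_model(candidates, validation_queries, evaluate):
    """candidates: {name: model}; evaluate(model, query) -> effectiveness score."""
    scores = {name: mean(evaluate(m, q) for q in validation_queries)
              for name, m in candidates.items()}
    best = max(scores, key=scores.get)
    return best, scores

# Example with placeholder models and a fabricated metric
models = {"bm25_code": object(), "vsm_text": object()}
fake_eval = lambda model, query: 0.7 if model is models["bm25_code"] else 0.5
print(select_ir_model(models, ["q1", "q2", "q3"], fake_eval))
```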