The concept of surprise has special significance in information retrieval for attracting user attention and arousing curiosity. In this paper, we introduced two computational measures of the amount of surprise contained in a piece of text, and validated them against the surprise perceived by users with different levels of background knowledge. We used a crowdsourcing approach and a lab-based user study to reach a large number of users. The findings can be used to propose or refine future computational approaches that better predict the human feeling of surprise triggered by reading a body of text.
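The abstract does not spell out the two measures. For orientation only, one standard information-theoretic proxy for the surprise carried by text is average token surprisal under a language model; the sketch below assumes a hypothetical `lm_prob` scoring function and is not the paper's method.

```python
import math
from typing import Callable, List

def mean_surprisal(tokens: List[str],
                   lm_prob: Callable[[List[str], str], float]) -> float:
    """Average token surprisal in bits: -log2 P(token | preceding tokens).

    `lm_prob(context, token)` is a hypothetical placeholder for any language
    model that returns a conditional probability; it is an assumption here,
    not something the paper provides.
    """
    total = 0.0
    for i, token in enumerate(tokens):
        p = max(lm_prob(tokens[:i], token), 1e-12)  # guard against zero probability
        total += -math.log2(p)                      # Shannon surprisal of this token
    return total / max(len(tokens), 1)

# Toy usage with a uniform "model" over a 1,000-word vocabulary:
uniform = lambda context, token: 1.0 / 1000
print(mean_surprisal("the outcome was completely unexpected".split(), uniform))
```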
                    
                            
                            SURPRISE! and When to Schedule It.
                        
                    
    
Information flow measures, over the duration of a game, the audience's belief about who will win, and thus can reflect the amount of surprise in a game. To quantify the relationship between information flow and audiences' perceived quality, we conduct a case study in which subjects watch one of the world's biggest esports events, LOL S10. In addition to eliciting information flow, we also ask subjects to report their rating for each game. We find that the amount of surprise at the end of the game plays a dominant role in predicting the rating. This suggests the importance of incorporating when the surprise occurs, in addition to the amount of surprise, into perceived-quality models. For content providers, it implies that, everything else being equal, it is better for twists to be more likely to happen toward the end of a show rather than uniformly throughout.
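As a rough illustration (not the study's exact information-flow measure), surprise can be read off a win-probability trajectory as the size of each belief update, and a late-game weighting reflects the finding that twists near the end matter most. The trajectory values below are invented for the example.

```python
import numpy as np

# Hypothetical win-probability trajectory for the eventual winner,
# sampled over the course of one game (illustrative numbers only).
p = np.array([0.50, 0.55, 0.40, 0.35, 0.20, 0.65, 0.90, 1.00])

# Per-step surprise as the magnitude of each belief update
# (a common proxy; the paper's measure may differ).
step_surprise = np.abs(np.diff(p))
total_surprise = step_surprise.sum()

# End-weighted surprise: weight each update by how late it occurs,
# so a twist near the end contributes more than an early one.
t = np.linspace(0.0, 1.0, len(step_surprise))   # normalized time of each update
end_weighted_surprise = (t * step_surprise).sum()

print(total_surprise, end_weighted_surprise)
```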
        
    
- Award ID(s): 2007256
- PAR ID: 10316793
- Date Published:
- Journal Name: Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- It is common to evaluate a set of items by soliciting people to rate them. For example, universities ask students to rate the teaching quality of their instructors, and conference organizers ask authors of submissions to evaluate the quality of the reviews. However, in these applications, students often give a higher rating to a course if they receive higher grades in it, and authors often give a higher rating to the reviews if their papers are accepted to the conference. In this work, we call these external factors the "outcome" experienced by people, and consider the problem of mitigating these outcome-induced biases in the given ratings when some information about the outcome is available. We formulate the information about the outcome as a known partial ordering on the bias. We propose a debiasing method that solves a regularized optimization problem under this ordering constraint, and we also provide a carefully designed cross-validation method that adaptively chooses the appropriate amount of regularization. We provide theoretical guarantees on the performance of our algorithm, as well as experimental evaluations. (An illustrative toy sketch of such a formulation appears after this list.)
- In the classical information theoretic framework, information "value" is proportional to how novel/surprising the information is. Recent work building on such notions claimed that false news spreads faster than truth online because false news is more novel and therefore surprising. However, another determinant of surprise, semantic meaning (e.g., information's consistency or inconsistency with prior beliefs), should also influence value and sharing. Examining sharing behavior on Twitter, we observed separate relations of novelty and belief consistency with sharing. Though surprise could not be assessed in those studies, belief consistency should relate to less surprise, suggesting the relevance of semantic meaning beyond novelty. In two controlled experiments, belief-consistent (vs. belief-inconsistent) information was shared more despite consistent information being the least surprising. Manipulated novelty did not predict sharing or surprise. Thus, classical information theoretic predictions regarding perceived value and sharing would benefit from considering semantic meaning in contexts where people hold pre-existing beliefs.
- The growing amount of online information today has increased the opportunity to discover interesting and useful information, and various recommender systems have been designed to help people discover it. Yet no matter how accurately recommender algorithms perform, users' engagement with recommended results is often reported to be less than ideal. In this study, we consider two human-centered objectives for recommender systems: user satisfaction and curiosity, both of which are believed to play a role in maintaining user engagement and sustaining it in the long run. Specifically, we leveraged the concept of surprise and used an existing computational model of surprise to identify relevant, surprising health articles, aiming to improve user satisfaction and inspire curiosity. We designed a user study to first test the validity of the surprise model in a health news recommender system, called LuckyFind, and then evaluated user satisfaction and curiosity. We find that the computational surprise model helped identify surprising recommendations at little cost to user satisfaction. Users gave higher ratings on interestingness than on usefulness for those surprising recommendations. Curiosity was inspired more in individuals with a larger capacity to experience curiosity. Over half of the users changed their preferences after using LuckyFind, either discovering new areas, reinforcing their existing interests, or dropping interests they no longer wanted to follow. These insights should lead researchers and practitioners to rethink the objectives of today's recommender systems as being more human-centered, beyond algorithmic accuracy.
- Software developers have difficulty understanding the rationale and intent behind the original developers' design decisions. Code histories aim to provide richer context for code changes over time, but can add a large amount of information to the already cognitively demanding task of code comprehension. Storytelling has shown benefits in communicating complex, time-dependent information, yet programmers are reluctant to write stories for their code changes. We explored the utility of narratives produced by generative AI. We conducted a within-subjects study comparing the performance of 16 programmers recalling code history information from a list-view format versus a comparable AI-generated narrative format. Our study found that when using the story view, participants were 16% more successful at recalling code history information and had 30% less error when assessing the correctness of their responses. We did not find any significant differences in programmers' perceived mental effort or their attitudes toward reuse when using narrative code stories.
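As an illustration of the rating-debiasing item above, the sketch below shows one simplified way to pose "a regularized optimization problem under an ordering constraint": attribute part of each rating to a bias term that must respect the known order and is shrunk toward zero by a weight λ. The specific data, the pairwise order, the quadratic penalty, and the value of λ are assumptions for illustration, not the paper's exact estimator.

```python
import cvxpy as cp
import numpy as np

# Toy ratings and a hypothetical partial order on the bias terms:
# (i, j) means b[i] <= b[j], e.g. a rater with a worse outcome is assumed
# to be no more positively biased than one with a better outcome.
y = np.array([4.5, 3.0, 2.5, 5.0])
order = [(0, 1), (2, 3)]
lam = 1.0  # regularization weight; the paper selects it by cross-validation

b = cp.Variable(len(y))
# Attribute part of each rating to bias while shrinking the bias toward zero,
# subject to the known ordering constraints.
objective = cp.Minimize(cp.sum_squares(y - b) + lam * cp.sum_squares(b))
constraints = [b[i] <= b[j] for (i, j) in order]
cp.Problem(objective, constraints).solve()

x_hat = y - b.value  # debiased quality estimates
print(np.round(x_hat, 2))
```

With these toy numbers the ordering constraint binds for the first pair, so their bias estimates are tied together before being subtracted from the ratings.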