- Award ID(s):
- 1850097
- PAR ID:
- 10424905
- Date Published:
- Journal Name:
- Proceedings of the International AAAI Conference on Web and Social Media
- Volume:
- 17
- ISSN:
- 2162-3449
- Page Range / eLocation ID:
- 95 to 102
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Did you know health is not just about not being sick? It is about feeling well. In healthy ecosystems, you can find plants, animals, water, rocks, and soil, all interacting with many microbes. Thanks to this biodiversity we have clean air, fresh water, and nutritious food. Bees and other animals pollinate flowers to help grow fruits and vegetables. Birds spread seeds that grow into trees and forests. Plants clean the air we breathe. And people feel better in nature. Healthy ecosystems, therefore, keep people healthy. While public health programs teach people about healthy food and give them access to medicines, people make ecosystems healthier by protecting nature. You can help too, by taking care of your health and your surrounding ecosystem, learning about the world, and supporting decisions and actions that protect nature and people. By becoming guardians of Earth’s biodiversity, we can all have a healthy future together.
-
Cross-modal recipe retrieval has gained prominence due to its ability to retrieve a text representation given an image representation and vice versa. Clustering these recipe representations by similarity is essential for retrieving relevant information about unknown food images. Existing studies cluster similar recipe representations in the latent space based on class names. Due to inter-class similarity and intra-class variation, associating a recipe with a class name does not provide sufficient knowledge about the recipe to determine similarity. In contrast, the recipe title, ingredients, and cooking actions provide detailed knowledge about a recipe and are a better determinant of similar recipes. In this study, we utilized this additional knowledge, such as the ingredients and recipe title, to identify similar recipes, with particular attention to rare ingredients. To incorporate this knowledge, we propose Ki-Cook, a knowledge-infused multimodal cooking representation learning network built on the procedural attributes of the cooking process. To the best of our knowledge, this is the first study to adopt a comprehensive recipe similarity determinant to identify and cluster similar recipe representations. The proposed network also incorporates ingredient images to learn a multimodal cooking representation. Since the motivation for clustering similar recipes is to retrieve relevant information for an unknown food image, we evaluated our model on the ingredient retrieval task. An empirical analysis shows that our proposed model improves the Coverage of Ground Truth by 12% and the Intersection over Union by 10% compared to the baseline models. On average, the representations learned by our model contain an additional 15.33% of rare ingredients compared to the baseline models.
Owing to this difference, our qualitative evaluation shows a 39% improvement in clustering similar recipes in the latent space compared to the baseline models, with an inter-annotator agreement (Fleiss' kappa) of 0.35.
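For context on the evaluation metrics named above, ingredient retrieval is typically scored by comparing the retrieved ingredient set against the ground-truth set. A minimal sketch of set-based Coverage of Ground Truth and Intersection over Union (the function names and example ingredients are illustrative, not taken from the paper):

```python
def coverage(retrieved, ground_truth):
    """Fraction of ground-truth ingredients that appear in the retrieved set."""
    gt = set(ground_truth)
    return len(gt & set(retrieved)) / len(gt)

def iou(retrieved, ground_truth):
    """Intersection over union (Jaccard index) of the two ingredient sets."""
    r, gt = set(retrieved), set(ground_truth)
    return len(r & gt) / len(r | gt)

# Toy example: three of four ground-truth ingredients are retrieved.
retrieved = ["flour", "sugar", "egg", "saffron"]
truth = ["flour", "sugar", "egg", "butter"]
print(coverage(retrieved, truth))  # 3 of 4 ground-truth items found -> 0.75
print(iou(retrieved, truth))       # 3 shared out of 5 total items -> 0.6
```

Both metrics are set-based, so duplicate ingredient mentions and ordering have no effect on the score.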
-
When people receive advice while making difficult decisions, they often make better decisions in the moment and also increase their knowledge in the process. However, such incidental learning can only occur when people cognitively engage with the information they receive and process it thoughtfully. How do people process the information and advice they receive from AI, and do they engage with it deeply enough to enable learning? To answer these questions, we conducted three experiments in which individuals were asked to make nutritional decisions and received simulated AI recommendations and explanations. In the first experiment, we found that when people were presented with both a recommendation and an explanation before making their choice, they made better decisions than they did when they received no such help, but they did not learn. In the second experiment, participants first made their own choice and only then saw a recommendation and an explanation from AI; this condition also resulted in improved decisions, but no learning. However, in our third experiment, participants were presented with just an AI explanation but no recommendation and had to arrive at their own decision. This condition led to both more accurate decisions and learning gains. We hypothesize that the learning gains in this condition were due to the deeper engagement with explanations needed to arrive at the decisions. This work provides some of the most direct evidence to date that simply providing people with AI-generated recommendations and explanations is not sufficient to ensure that they engage carefully with the AI-provided information. It also presents one technique that enables incidental learning and, by implication, can help people process AI recommendations and explanations more carefully.
-
Textual explanations have been shown to improve user satisfaction with machine-made recommendations. However, current mainstream solutions only loosely connect the learning of explanations with the learning of recommendations: for example, they are often modeled separately as rating prediction and content generation tasks. In this work, we propose to strengthen their connection by enforcing sentiment alignment between a recommendation and its corresponding explanation. At training time, the two learning tasks are joined by a latent sentiment vector, which is encoded by the recommendation module and used to make word choices during explanation generation. At both training and inference time, the explanation module is required to generate explanation text that matches the sentiment predicted by the recommendation module. Extensive experiments demonstrate that our solution outperforms a rich set of baselines on both recommendation and explanation tasks, especially in the quality of its generated explanations. More importantly, our user studies confirm that the generated explanations help users better recognize the differences between recommended items and understand why an item is recommended.
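The sentiment-alignment idea described above can be illustrated with a toy joint objective: a rating-prediction term plus a penalty for mismatch between the recommender's predicted sentiment and the sentiment expressed by the generated explanation. This is a hypothetical sketch for intuition only; the loss terms, the `lam` weight, and the scalar sentiment scores are assumptions, not the paper's actual formulation:

```python
def joint_loss(pred_rating, true_rating, pred_sentiment, expl_sentiment, lam=0.5):
    """Toy joint objective: rating error plus a sentiment-alignment penalty.

    Sentiments are assumed to be scalars in [-1, 1]; lam weights the
    alignment term. Both choices are illustrative assumptions.
    """
    rating_loss = (pred_rating - true_rating) ** 2       # rating prediction error
    align_loss = (pred_sentiment - expl_sentiment) ** 2  # sentiment mismatch penalty
    return rating_loss + lam * align_loss

# An explanation whose sentiment matches the predicted rating's sentiment
# incurs a smaller loss than a contradictory one.
aligned = joint_loss(4.2, 4.0, pred_sentiment=0.8, expl_sentiment=0.75)
misaligned = joint_loss(4.2, 4.0, pred_sentiment=0.8, expl_sentiment=-0.6)
print(aligned < misaligned)  # True
```

Training against such a combined objective is what couples the two modules: the recommendation module cannot lower the total loss without the explanation module producing text of a matching sentiment.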
-
Abstract Climate change poses a multifaceted, complex, and existential threat to human health and well-being, but efforts to communicate these threats to the public lag behind what we know how to do in communication research. Effective communication about climate change’s health risks can improve a wide variety of individual and population health-related outcomes by: (1) helping people better make the connection between climate change and health risks and (2) empowering them to act on that newfound knowledge and understanding. The aim of this manuscript is to highlight communication methods that have received empirical support for improving knowledge uptake and/or driving higher-quality decision making and healthier behaviors and to recommend how to apply them at the intersection of climate change and health. This expert consensus about effective communication methods can be used by healthcare professionals, decision makers, governments, the general public, and other stakeholders including sectors outside of health. In particular, we argue for the use of 11 theory-based, evidence-supported communication strategies and practices. These methods range from leveraging social networks to making careful choices about the use of language, narratives, emotions, visual images, and statistics. Message testing with appropriate groups is also key. When implemented properly, these approaches are likely to improve the outcomes of climate change and health communication efforts.