ABSTRACT A key challenge in conducting comparative analyses across social units, such as religions, ethnicities, or cultures, is that data on these units is often encoded in distinct and incompatible formats across diverse datasets. This can involve simple differences in the variables and values used to encode these units (e.g., Roman Catholic is V130 = 1 vs. Q98A = 2 in two different datasets) or differences in the resolutions at which units are encoded (Maya vs. Kaqchikel Maya). These disparate encodings can create substantial challenges for the efficiency and transparency of data syntheses across diverse datasets. We introduce a user‐friendly set of tools to help users translate four kinds of categories (religion, ethnicity, language, and subdistrict) across multiple, external datasets. We outline the platform's key functions and current progress, as well as long‐range goals for the platform. 
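The core mechanic the abstract describes is a crosswalk from dataset-specific encodings to a shared set of labels. Below is a minimal sketch of that idea; the survey names, the CROSSWALK table, and the translate() helper are hypothetical illustrations built around the abstract's own example (Roman Catholic as V130 = 1 vs. Q98A = 2), not the platform's actual interface, which the abstract does not specify.

```python
from typing import Optional

# Minimal sketch of a category crosswalk, NOT the platform's actual API.
# The variable/value codes come from the abstract's example: "Roman Catholic"
# is encoded as V130 = 1 in one survey and Q98A = 2 in another. The survey
# names, the CROSSWALK table, and translate() are hypothetical.
CROSSWALK = {
    ("survey_a", "V130", 1): "Roman Catholic",
    ("survey_b", "Q98A", 2): "Roman Catholic",
    # Resolution mismatches (e.g., "Kaqchikel Maya" vs. "Maya") can be handled
    # by also mapping finer-grained labels to a coarser shared label.
    ("survey_c", "ETHNICITY", "Kaqchikel Maya"): "Maya",
}

def translate(dataset: str, variable: str, value) -> Optional[str]:
    """Return the shared label for a dataset-specific (variable, value) pair, if known."""
    return CROSSWALK.get((dataset, variable, value))

print(translate("survey_a", "V130", 1))  # Roman Catholic
print(translate("survey_b", "Q98A", 2))  # Roman Catholic
```

A table of this kind lets analyses written against the shared labels run unchanged across datasets that encode the same units differently.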
Evaluation of Ad Transparency Systems
            In this research proposal, we outline our plans to examine the characteristics and affordances of ad transparency systems provided by 22 online platforms. We outline a user study designed to evaluate the usability of eight of these systems by studying the actions and behaviors each system enables, as well as users' understanding of these transparency systems. 
- Award ID(s): 2149680
- PAR ID: 10500580
- Publisher / Repository: Proceedings of ConPro 2024: IEEE SPW Workshop on Technology and Consumer Protection
- Date Published:
- Journal Name: ConPro 2024: IEEE SPW Workshop on Technology and Consumer Protection
- Format(s): Medium: X
- Location: San Francisco, CA
- Sponsoring Org: National Science Foundation
More Like this
- Our work is situated in research on Computer Science (CS) learning in informal learning environments and literature on the factors that influence girls to enter CS. In this article, we outline design choices around the creation of a summer programming camp for middle school youth. In addition, we describe a near-peer mentoring model we used that was influenced by Bandura's self-efficacy theory. The purpose of this article, apart from promoting transparency of program design, was to evaluate the effectiveness of our camp design in terms of increasing youths' interest, self-efficacy beliefs, and perceptions of parental support. We found significant gains for all three of these concepts. Additionally, we make connections between our design choices (e.g., videos, peer support, mentor support) and the affective gains by thematically analyzing interview data concerning the outcomes found in our camps.
- Recent investments in automation and AI are reshaping the hospitality sector. Driven by social and economic forces affecting service delivery, these new technologies have transformed the labor that acts as the backbone of the industry, namely frontline service work performed by housekeepers, front desk staff, line cooks, and others. We describe the context for recent technological adoption, with particular emphasis on algorithmic management applications. Through this work, we identify gaps in existing literature and highlight areas in need of further research in the domain of worker-centered technology development. Our analysis highlights how technologies such as algorithmic management shape roles and tasks in the high-touch service sector. We outline how harms produced through automation are often due to a lack of attention to non-management stakeholders. We then describe an opportunity space for researchers and practitioners to elicit worker participation at all stages of technology adoption, and offer methods for centering workers, increasing transparency, and accounting for the context of use through holistic implementation and training strategies.
- As news organizations embrace transparency practices on their websites to distinguish themselves from those spreading misinformation, HCI designers have the opportunity to help them effectively utilize the ideals of transparency to build trust. How can we utilize transparency to promote trust in news? We examine this question through a qualitative lens by interviewing journalists and news consumers, the two stakeholders in a news system. We designed a scenario to demonstrate transparency features using two fundamental news attributes that convey the trustworthiness of a news article: source and message. In the interviews, our news consumers expressed the idea that news transparency could be best shown by providing indicators of objectivity in two areas (news selection and framing) and by providing indicators of evidence in four areas (presence of source materials, anonymous sourcing, verification, and corrections upon erroneous reporting). While our journalists agreed with news consumers' suggestions of using evidence indicators, they also suggested additional transparency indicators in areas such as the news reporting process and personal/organizational conflicts of interest. Prompted by our scenario, participants offered new design considerations for building trustworthy news platforms, such as designing for easy comprehension, presenting appropriate details in news articles (e.g., showing the number and nature of corrections made to an article), and comparing attributes across news organizations to highlight diverging practices. Comparing the responses from our two stakeholder groups reveals conflicting suggestions with trade-offs between them. Our study has implications for HCI designers in building trustworthy news systems.
- Recently, artificial intelligence (AI) and machine learning (ML) models have demonstrated remarkable progress, with applications developed in various domains. It is also increasingly argued that AI and ML models and applications should be transparent, explainable, and trustworthy. Accordingly, the field of Explainable AI (XAI) is expanding rapidly. XAI holds substantial promise for improving trust and transparency in AI-based systems by explaining how complex models such as deep neural networks (DNNs) produce their outcomes. Moreover, many researchers and practitioners consider that using provenance to explain these complex models will help improve transparency in AI-based systems. In this paper, we conduct a systematic literature review of provenance, XAI, and trustworthy AI (TAI) to explain the fundamental concepts and illustrate the potential of using provenance as a medium to help accomplish explainability in AI-based systems. We also discuss the patterns of recent developments in this area and offer a vision for research in the near future. We hope this literature review will serve as a starting point for scholars and practitioners interested in learning about essential components of provenance, XAI, and TAI.