MOKA: Moral Knowledge Augmentation for Moral Event Extraction
                    - Award ID(s):
- 2127749
- PAR ID:
- 10567885
- Publisher / Repository:
- Association for Computational Linguistics
- Date Published:
- Page Range / eLocation ID:
- 4481 to 4502
- Format(s):
- Medium: X
- Location:
- Mexico City, Mexico
- Sponsoring Org:
- National Science Foundation
More Like this
- To enable robots to exert positive moral influence, we need to understand the impacts of robots' moral communications, the ways robots can phrase their moral language to be most clear and persuasive, and the ways that these factors interact. Previous work has suggested, for example, that for certain types of robot moral interventions to be successful (i.e., moral interventions grounded in particular ethical frameworks), those interventions may need to be followed by opportunities for moral reflection, during which humans can critically engage with not only the contents of the robot's moral language, but also with the way that moral language connects with their social-relational ontology and broader moral ecosystem. We conceptually replicate this prior work (N = 119) using a design that more precisely manipulates moral reflection. Our results confirm that opportunities for moral reflection are indeed critical to the success of robotic moral interventions, regardless of the ethical framework in which those interventions are grounded.
- We use network psychometrics to map a subsection of moral belief systems predicted by moral foundations theory (MFT). This approach conceptualizes moral systems as networks, with moral beliefs represented as nodes connected by direct relations. As such, it advances a novel test of MFT's claim that liberals and conservatives have different systems of foundational moral values, which we test in three large datasets (Sample 1: N = 854; Sample 2: N = 679; Sample 3: N = 2,572) from two countries (the United States and New Zealand). Results supported our first hypothesis that liberals' moral systems show more segregation between individualizing and binding foundations than conservatives'. Results showed only weak support for our second hypothesis, that this pattern would be more typical of more highly educated than less educated liberals/conservatives. Findings support a systems approach to MFT and show the value of modeling moral belief systems as networks.
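The network-psychometric framing in the last abstract can be illustrated with a short sketch. This is not the study's implementation: it assumes foundation-level scores as input, estimates edges from a partial-correlation matrix (a common estimator in network psychometrics), and scores "segregation" as a simple within- minus between-community edge-weight contrast between the individualizing and binding foundations. The node labels, threshold, and simulated data are all hypothetical.

```python
import numpy as np
import networkx as nx

# Hypothetical node labels: MFT's individualizing vs. binding foundations.
INDIVIDUALIZING = ["care", "fairness"]
BINDING = ["loyalty", "authority", "sanctity"]
NODES = INDIVIDUALIZING + BINDING

def partial_correlations(data: np.ndarray) -> np.ndarray:
    """Estimate partial correlations by standardizing the inverse covariance matrix."""
    precision = np.linalg.pinv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcorr = -precision / np.outer(d, d)
    np.fill_diagonal(pcorr, 0.0)
    return pcorr

def belief_network(data: np.ndarray, threshold: float = 0.05) -> nx.Graph:
    """Build a weighted network: beliefs as nodes, partial correlations as edges."""
    pcorr = partial_correlations(data)
    g = nx.Graph()
    g.add_nodes_from(NODES)
    for i in range(len(NODES)):
        for j in range(i + 1, len(NODES)):
            w = pcorr[i, j]
            if abs(w) > threshold:  # prune near-zero edges (hypothetical cutoff)
                g.add_edge(NODES[i], NODES[j], weight=w)
    return g

def segregation(g: nx.Graph) -> float:
    """Mean within-community edge weight minus mean between-community edge weight."""
    within, between = [], []
    for u, v, w in g.edges(data="weight"):
        same = {u, v} <= set(INDIVIDUALIZING) or {u, v} <= set(BINDING)
        (within if same else between).append(abs(w))
    if not within or not between:
        return 0.0
    return float(np.mean(within) - np.mean(between))

# Usage with simulated responses (rows = respondents, columns = the five foundations).
rng = np.random.default_rng(0)
scores = rng.normal(size=(500, len(NODES)))
net = belief_network(scores)
print(f"segregation index: {segregation(net):.3f}")
```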