Title: Human-Centric Versus State-Driven: A Comparative Analysis of the European Union's and China's Artificial Intelligence Governance Using Risk Management
This research examines the contrasting artificial intelligence (AI) governance strategies of the European Union (EU) and China, focusing on the dichotomy between human-centric and state-driven policies. The EU's approach, exemplified by the EU AI Act, emphasizes transparency, fairness, and individual rights protection, enforcing strict regulations for high-risk AI applications to build public trust. Conversely, China's state-driven model prioritizes rapid AI deployment and national security, often at the expense of individual privacy, as reflected in its flexible regulatory framework and substantial investment in AI innovation. By applying the United States' National Institute of Standards and Technology (NIST) AI Risk Management Framework's Map, Measure, Manage, and Govern functions, this study explores how both regions balance technological advancement with ethical oversight. The study ultimately suggests that a harmonized approach, integrating elements of both models, could promote responsible global AI development and regulation.
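As a purely illustrative aid, the grid below organizes the abstract's characterizations of each regime under the NIST AI RMF's four functions; the structure and one-line summaries are paraphrases added here, not scores or findings from the paper itself.

```python
# Illustrative comparison grid keyed to the NIST AI RMF functions (Map, Measure,
# Manage, Govern). The one-line characterizations paraphrase the abstract above;
# they are not quantitative results from the paper.
NIST_RMF_FUNCTIONS = ["Map", "Measure", "Manage", "Govern"]

comparison = {
    "EU (human-centric)": {
        "Map": "risk tiers defined around impacts on individual rights",
        "Measure": "transparency and fairness requirements for high-risk AI",
        "Manage": "strict obligations before high-risk systems reach the market",
        "Govern": "binding, rights-based regulation (EU AI Act)",
    },
    "China (state-driven)": {
        "Map": "risk framed around national security and rapid deployment",
        "Measure": "flexible, sector-specific technical requirements",
        "Manage": "iterative rules favoring fast AI rollout",
        "Govern": "state-led oversight with heavy investment in AI innovation",
    },
}

for regime, functions in comparison.items():
    print(regime)
    for fn in NIST_RMF_FUNCTIONS:
        print(f"  {fn:>8}: {functions[fn]}")
```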
Award ID(s):
2100934
PAR ID:
10593163
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
IGI Global
Date Published:
Journal Name:
International Journal of Intelligent Information Technologies
Volume:
21
Issue:
1
ISSN:
1548-3657
Page Range / eLocation ID:
1 to 13
Format(s):
Medium: X Other: PDF
Sponsoring Org:
National Science Foundation
More Like this
  1. Cross-sectional surveys, despite their value, are unable to probe the dynamics of risk perceptions over time. An earlier longitudinal panel study of Americans’ views on Ebola risk inspired this partial replication focused on Americans’ views of Zika risks, using multilevel modeling to assess temporal changes in these views and the inter-individual factors affecting them, and to determine whether similar factors were influential for both non-epidemics in the USA. Baseline Zika risk scores – as in the Ebola study – were influenced by dread of the Zika virus, perceptions of a near-miss outbreak, and perceived likelihood of an outbreak. Judgments of both personal risk and national risk from Zika declined significantly, and individuals’ rates of news-following predicted slower decline of perceived national risk in both cases. However, few other factors affected changes in Zika risk judgments, and these effects did not replicate in a validation half-sample, whereas several factors slowed or increased the rate of decline in Ebola judgments of the U.S. risk. These differences might reflect differences in the diseases caused by these two viruses – e.g., Ebola’s much greater lethality – but more longitudinal studies across multiple diseases will be needed to test that speculation. Benefits of such studies to health risk analysis outweigh the difficulties they pose.
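For readers unfamiliar with the modeling approach, a minimal sketch of a multilevel (mixed-effects) model of repeated risk judgments follows; the data file, variable names, and formula are hypothetical and do not reproduce the study's actual specification.

```python
# Minimal sketch of a multilevel (mixed-effects) model for repeated risk
# judgments across survey waves. Column names (person_id, wave, personal_risk,
# dread, news_following) and the CSV file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Panel data: one row per respondent per survey wave (hypothetical file).
panel = pd.read_csv("zika_panel.csv")

# Random intercept and random slope for time (wave) per respondent; fixed
# effects capture baseline predictors and their interaction with time.
model = smf.mixedlm(
    "personal_risk ~ wave + dread + news_following + wave:news_following",
    data=panel,
    groups=panel["person_id"],
    re_formula="~wave",
)
result = model.fit()
print(result.summary())
```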
  2. Abstract Artificial intelligence (AI) methods have revolutionized and redefined the landscape of data analysis in business, healthcare, and technology. These methods have transformed the applied mathematics, computer science, and engineering fields and are showing considerable potential for risk science, especially in the disaster risk domain. The disaster risk field has yet to establish itself as a necessary application domain for AI by defining how to responsibly balance AI and disaster risk. This study addresses four questions: (1) How is AI being used for disaster risk applications, and how do these applications address the principles and assumptions of risk science? (2) What are the benefits of using AI for risk applications, and what are the benefits of applying risk principles and assumptions to AI-based applications? (3) What are the synergies between AI and risk science applications? (4) What are the characteristics of effective use of fundamental risk principles and assumptions for AI-based applications? This study develops and disseminates an online survey questionnaire that leverages expertise from risk and AI professionals to identify the most important characteristics related to AI and risk, then presents a framework for gauging how AI and disaster risk can be balanced. This study is the first to develop a classification system for applying risk principles to AI-based applications. This classification contributes to the understanding of AI and risk by exploring how AI can be used to manage risk, how AI methods introduce new or additional risks, and whether fundamental risk principles and assumptions are sufficient for AI-based applications.
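As one hedged illustration of how expert survey ratings might be summarized to identify the most important characteristics, a short sketch follows; the file and column names are assumptions, not the study's instrument or data.

```python
# Hypothetical sketch: rank survey-rated characteristics by mean importance,
# one simple way an expert questionnaire like the one described above might be
# summarized. The CSV file and column names are illustrative assumptions.
import pandas as pd

# One row per expert; rating columns (e.g. 1-5 importance scales).
responses = pd.read_csv("expert_survey.csv")
rating_cols = [c for c in responses.columns if c != "respondent_id"]

ranking = (
    responses[rating_cols]
    .mean()
    .sort_values(ascending=False)
    .rename("mean_importance")
)
print(ranking.head(10))
```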
  3. Abstract Drones are increasingly popular for collecting behaviour data on group-living animals, offering inexpensive and minimally disruptive observation methods. Imagery collected by drones can be rapidly analysed using computer vision techniques to extract information, including behaviour classification, habitat analysis and identification of individual animals. While computer vision techniques can rapidly analyse drone-collected data, the success of these analyses often depends on careful mission planning that considers downstream computational requirements—a critical factor frequently overlooked in current studies. We present a comprehensive summary of research in the growing AI-driven animal ecology (ADAE) field, which integrates data collection with automated computational analysis focused on aerial imagery for collective animal behaviour studies. We systematically analyse current methodologies, technical challenges and emerging solutions in this field, from drone mission planning to behavioural inference. We illustrate computer vision pipelines that infer behaviour from drone imagery and present the computer vision tasks used for each step. We map specific computational tasks to their ecological applications, providing a framework for future research design. Our analysis reveals that AI-driven animal ecology studies of collective animal behaviour using drone imagery focus on detection and classification computer vision tasks. While convolutional neural networks (CNNs) remain dominant for detection and classification, newer architectures such as transformer-based models and specialized video analysis networks (e.g. X3D, I3D, SlowFast) designed for temporal pattern recognition are gaining traction for pose estimation and behaviour inference. However, reported model accuracy varies widely by computer vision task, species, habitat and evaluation metric, complicating meaningful comparisons between studies. Based on current trends, we conclude that semi-autonomous drone missions will be increasingly used to study collective animal behaviour. While manual drone operation remains prevalent, autonomous drone manoeuvres, powered by edge AI, can scale and standardise collective animal behaviour studies while reducing the risk of disturbance and improving data quality. We propose guidelines for AI-driven animal ecology drone studies adaptable to various computer vision tasks, species and habitats. This approach aims to collect high-quality behaviour data while minimising disruption to the ecosystem.
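The sketch below illustrates, in broad strokes, the detection-then-classification style of pipeline the review describes; the specific models, behaviour labels and file names are assumptions for illustration and do not represent any particular study's method.

```python
# Illustrative detection-then-classification pipeline for a single drone frame:
# detect animals, then classify behaviour from each crop. Model choices and the
# behaviour head are hypothetical; a real study would fine-tune on annotated
# drone imagery of the target species.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained object detector (COCO weights used here only as a stand-in).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

# Placeholder behaviour classifier over detected crops, e.g. three hypothetical
# classes such as "grazing" / "vigilant" / "moving".
behaviour_head = torchvision.models.resnet18(weights="DEFAULT")
behaviour_head.fc = torch.nn.Linear(behaviour_head.fc.in_features, 3)
behaviour_head.eval()

frame = Image.open("drone_frame.jpg").convert("RGB")  # hypothetical input image
with torch.no_grad():
    detections = detector([to_tensor(frame)])[0]
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score < 0.5:
            continue  # skip low-confidence detections
        x0, y0, x1, y1 = box.int().tolist()
        crop = to_tensor(frame.crop((x0, y0, x1, y1))).unsqueeze(0)
        crop = torch.nn.functional.interpolate(crop, size=(224, 224))
        behaviour = behaviour_head(crop).argmax(dim=1).item()
        print(f"animal at {(x0, y0, x1, y1)} -> behaviour class {behaviour}")
```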
  4. Abstract Is AI disrupting jobs and creating unemployment? This question has stirred public concern for job stability and motivated studies assessing occupations’ automation risk. These studies used readily available employment and wage statistics to quantify occupational changes for employed workers. However, they did not directly examine unemployment dynamics, primarily due to the lack of data across occupations, geography, and time. Here, we overcome this barrier using monthly occupation-level unemployment data from each US state’s unemployment insurance office from 2010 to 2020 to assess AI exposure models, job separations, and unemployment through a new measure called unemployment risk. We demonstrate that standard employment statistics are inadequate proxies for occupations’ unemployment risk and find that individual AI exposure models are poor predictors of occupations’ unemployment risk, states’ total unemployment rates, and states’ total job separation rates. However, an ensemble approach exhibits substantial predictive power, accounting for an additional 18% of variation in unemployment risk across occupations, states, and time compared to a baseline model that controls for education, occupations’ skills, seasonality, and regional effects. These results suggest that competing models may capture different aspects of AI exposure and that automation shapes US unemployment. Our results demonstrate the value of occupation-specific job disruption data and show that efforts relying on a single AI exposure score will misrepresent AI’s impact on the future of work.
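A minimal sketch of the kind of comparison described, a baseline model with controls versus one augmented with several AI-exposure scores evaluated by held-out R^2, is given below; the variables, data file, and estimator are assumptions, not the paper's actual specification.

```python
# Hypothetical sketch: compare a baseline regression (controls only) with a
# model that adds several AI-exposure scores as an ensemble of predictors.
# Column names and the data file are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("occupation_state_month.csv")  # hypothetical panel data

controls = ["education_years", "skill_index", "month", "region_code"]
exposure_scores = ["exposure_model_a", "exposure_model_b", "exposure_model_c"]
target = "unemployment_risk"

train, test = train_test_split(df, test_size=0.25, random_state=0)

baseline = LinearRegression().fit(train[controls], train[target])
ensemble = LinearRegression().fit(train[controls + exposure_scores], train[target])

r2_base = r2_score(test[target], baseline.predict(test[controls]))
r2_ens = r2_score(test[target], ensemble.predict(test[controls + exposure_scores]))
print(f"baseline R^2: {r2_base:.3f}  with exposure ensemble: {r2_ens:.3f}")
print(f"additional variation explained: {r2_ens - r2_base:.3f}")
```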
  5. With the rapid development of decision aids that are driven by AI models, the practice of AI-assisted decision making has become increasingly prevalent. To improve the human-AI team performance in decision making, earlier studies mostly focus on enhancing humans' capability in better utilizing a given AI-driven decision aid. In this paper, we tackle this challenge through a complementary approach—we aim to train behavior-aware AI by adjusting the AI model underlying the decision aid to account for humans' behavior in adopting AI advice. In particular, as humans are observed to accept AI advice more when their confidence in their own judgement is low, we propose to train AI models with a human-confidence-based instance weighting strategy, instead of solving the standard empirical risk minimization problem. Under an assumed, threshold-based model characterizing when humans will adopt the AI advice, we first derive the optimal instance weighting strategy for training AI models. We then validate the efficacy and robustness of our proposed method in improving the human-AI joint decision making performance through systematic experimentation on synthetic datasets. Finally, via randomized experiments with real human subjects along with their actual behavior in adopting the AI advice, we demonstrate that our method can significantly improve the decision making performance of the human-AI team in practice. 
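A minimal sketch of confidence-based instance weighting under a simple threshold-style adoption assumption appears below; the data are simulated and the weighting rule is an illustration, not the paper's derived optimal weights.

```python
# Illustrative sketch of human-confidence-based instance weighting versus plain
# empirical risk minimization: instances where the (simulated) human is less
# confident, and hence more likely to follow AI advice, receive more weight.
# The data, threshold, and weighting rule are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Simulated human confidence in [0, 1] for each instance.
human_confidence = rng.uniform(size=n)
threshold = 0.6  # hypothetical: advice is adopted when confidence < threshold

# Up-weight instances where AI advice is likely to be adopted.
weights = np.where(human_confidence < threshold, 1.0, 0.2)

standard_model = LogisticRegression().fit(X, y)                       # plain ERM
weighted_model = LogisticRegression().fit(X, y, sample_weight=weights)

print("standard accuracy:", standard_model.score(X, y))
print("weighted accuracy:", weighted_model.score(X, y))
```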