

Title: Approximate Expected Utility Rationalization
Abstract

We propose a new measure of deviations from expected utility theory. For any positive number e, we give a characterization of the datasets with a rationalization that is within e (in beliefs, utility, or perceived prices) of expected utility (EU) theory, under the assumption of risk aversion. The number e can then be used as a measure of how far the data is from EU theory. We apply our methodology to data from three large-scale experiments. Many subjects in these experiments are consistent with utility maximization, but not with EU maximization. Our measure of distance to expected utility is correlated with the subjects’ demographic characteristics.
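As a loose illustration (not the paper's actual characterization or code), the flavor of such a test can be sketched as a linear feasibility problem: risk-averse EU implies a first-order condition at every observation that becomes linear in the unknowns after taking logs, an e-perturbation of perceived prices relaxes each equation by a multiplicative band, and concavity enters as monotonicity of the marginal utility. The encoding, function names, and search bounds below are all assumptions.

```python
# Illustrative sketch only -- not the authors' characterization or code.
# Risk-averse EU with beliefs mu and concave utility u implies, at each
# observation k and state s, the first-order condition
#     mu[s] * u'(x[k,s]) = lam[k] * p[k,s].
# In logs this is linear in the unknowns (log mu, log u', log lam); allowing
# perceived prices to be off by a factor of at most (1+e) relaxes each
# equation by +/- log(1+e).  Concavity enters as monotonicity of u':
#     x[k,s] > x[j,t]  =>  log u'(x[k,s]) <= log u'(x[j,t]).
import numpy as np
from scipy.optimize import linprog

def eu_feasible(x, p, e, tol=1e-9):
    """Is the K x S dataset (x, p) rationalizable within e? (sketch)"""
    K, S = x.shape
    n = S + K * S + K                      # unknowns: log mu, log u', log lam
    mu = lambda s: s
    v = lambda k, s: S + k * S + s
    lam = lambda k: S + K * S + k
    A, b = [], []
    band = np.log(1.0 + e)
    for k in range(K):
        for s in range(S):
            # |log mu[s] + log u'[k,s] - log lam[k] - log p[k,s]| <= band
            row = np.zeros(n)
            row[mu(s)], row[v(k, s)], row[lam(k)] = 1.0, 1.0, -1.0
            A.append(row);  b.append(np.log(p[k, s]) + band)
            A.append(-row); b.append(band - np.log(p[k, s]))
    obs = [(x[k, s], k, s) for k in range(K) for s in range(S)]
    for xa, k, s in obs:                   # concavity: u' is decreasing
        for xb, j, t in obs:
            if xa > xb + tol:
                row = np.zeros(n)
                row[v(k, s)], row[v(j, t)] = 1.0, -1.0
                A.append(row); b.append(0.0)
    res = linprog(np.zeros(n), A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(None, None)] * n, method="highs")
    return res.status == 0                 # status 0: a feasible point exists

def distance_to_eu(x, p, hi=10.0, iters=40):
    """Binary search for the smallest feasible e (sketch)."""
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if eu_feasible(x, p, mid) else (mid, hi)
    return hi
```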

 
Award ID(s):
1919263
NSF-PAR ID:
10468380
Publisher / Repository:
Oxford University Press
Journal Name:
Journal of the European Economic Association
Volume:
21
Issue:
5
ISSN:
1542-4766
Format(s):
Medium: X
Size(s):
p. 1821-1864
Sponsoring Org:
National Science Foundation
More Like this
  1. We consider the problem of Influence Maximization (IM), the task of selecting k seed nodes in a social network such that the expected number of nodes influenced is maximized. We propose a community-aware divide-and-conquer framework that involves (i) learning the inherent community structure of the social network, (ii) generating candidate solutions by solving the influence maximization problem for each community, and (iii) selecting the final set of seed nodes using a novel progressive budgeting scheme. Our experiments on real-world social networks show that the proposed framework outperforms the standard methods in terms of run-time and the heuristic methods in terms of influence. We also study the effect of the community structure on the performance of the proposed framework. Our experiments show that community structures with higher modularity lead the proposed framework to perform better in terms of run-time and influence.
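For concreteness, here is one way the three stages could look in code. This is a simplified sketch under assumed choices (greedy-modularity community detection, Monte-Carlo independent-cascade spread estimation), not the paper's implementation; every function name is illustrative.

```python
# Simplified sketch of the three stages -- not the paper's implementation.
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def ic_spread(G, seeds, p=0.05, runs=200, rng=random.Random(0)):
    """Monte-Carlo estimate of the expected number of IC-influenced nodes."""
    total = 0
    for _ in range(runs):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            new = []
            for u in frontier:
                for w in G.neighbors(u):
                    if w not in active and rng.random() < p:
                        active.add(w); new.append(w)
            frontier = new
        total += len(active)
    return total / runs

def community_im(G, k, p=0.05):
    """(i) detect communities, (ii) greedy IM per community, (iii) budget."""
    comms = [G.subgraph(c) for c in greedy_modularity_communities(G)]
    candidates = []
    for H in comms:                         # per-community greedy candidates
        chosen = []
        for _ in range(min(k, H.number_of_nodes())):
            best = max((v for v in H if v not in chosen),
                       key=lambda v: ic_spread(H, chosen + [v], p))
            chosen.append(best)
        candidates.append(chosen)
    seeds, ptr = [], [0] * len(candidates)  # progressive budgeting:
    while len(seeds) < k:                   # spend on best marginal candidate
        base = ic_spread(G, seeds, p)
        gains = [(ic_spread(G, seeds + [c[ptr[i]]], p) - base, i)
                 for i, c in enumerate(candidates) if ptr[i] < len(c)]
        if not gains:
            break
        _, i = max(gains)
        seeds.append(candidates[i][ptr[i]]); ptr[i] += 1
    return seeds
```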
  2. Abstract: Engineering design involves information acquisition decisions such as selecting designs in the design space for testing, selecting information sources, and deciding when to stop design exploration. Existing literature has established normative models for these decisions, but there is a lack of knowledge about how human designers make these decisions and which strategies they use. This knowledge is important for accurately modeling design decisions, identifying sources of inefficiencies, and improving the design process. Therefore, the primary objective in this study is to identify models that provide the best description of a designer’s information acquisition decisions when multiple information sources are present and the total budget is limited. We conduct a controlled human subject experiment with two independent variables: the amount of fixed budget and the monetary incentive proportional to the saved budget. Using the experimental observations, we perform Bayesian model comparison on various simple heuristic models and expected utility (EU)-based models. As expected, the subjects’ decisions are better represented by the heuristic models than the EU-based models. While the EU-based models result in better net payoff, the heuristic models used by the subjects generate better design performance. The net payoff using heuristic models is closer to the EU-based models in experimental treatments where the budget is low and there is incentive for saving the budget. This indicates the potential for nudging designers’ decisions toward maximizing the net payoff by setting the fixed budget at low values and providing monetary incentives proportional to the saved budget.
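The abstract does not specify the candidate models or priors, but the comparison machinery itself is standard. A toy sketch of Bayes-factor style comparison via Monte-Carlo estimates of the log marginal likelihood follows; the coin-bias example and all names are hypothetical stand-ins, not the study's decision models.

```python
# Toy sketch of Bayesian model comparison machinery only; the actual
# candidate models (heuristics vs. EU-based) are not specified here.
import numpy as np
from scipy.stats import binom

def log_marginal_likelihood(loglik, prior_sampler, data, draws=5000, seed=0):
    """Monte-Carlo estimate of log p(data | model), theta drawn from prior."""
    rng = np.random.default_rng(seed)
    lls = np.array([loglik(data, th) for th in prior_sampler(draws, rng)])
    m = lls.max()
    return m + np.log(np.mean(np.exp(lls - m)))   # log-mean-exp for stability

# Dummy example: which of two priors on a coin's bias explains 8/10 heads?
data = (8, 10)
loglik = lambda d, th: binom.logpmf(d[0], d[1], th)
prior_a = lambda n, rng: rng.beta(20, 20, n)      # "near fair" model
prior_b = lambda n, rng: rng.beta(8, 2, n)        # "biased" model
log_bf = (log_marginal_likelihood(loglik, prior_b, data)
          - log_marginal_likelihood(loglik, prior_a, data))
print(f"log Bayes factor (b over a): {log_bf:.2f}")
```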
  3. Purpose

    To introduce a quantitative tool that enables rapid forecasting of T1 and T2 parameter map errors due to normal and aliasing noise as a function of the MR fingerprinting (MRF) sequence, which can be used in sequence optimization.

    Theory and Methods

    The variances of normal noise and aliasing artifacts in the collected signal are related to the variances in T1 and T2 maps through derived quality factors. This analytical result is tested against the results of a Monte-Carlo approach for analyzing MRF sequence encoding capability in the presence of aliasing noise, and verified with phantom experiments at 3 T. To further show the utility of our approach, our quality factors are used to find efficient MRF sequences with fewer repetitions.
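    For readers unfamiliar with the Monte-Carlo baseline mentioned above, a toy version is sketched below. The signal model is a deliberate placeholder (a real MRF study would use a Bloch or EPG simulation of the actual sequence), and the parameter ranges and noise level are purely illustrative.

```python
# Toy Monte-Carlo parameter-error estimate, in the spirit of the baseline
# above.  Placeholder signal model and illustrative ranges only.
import numpy as np

def fingerprint(t1, t2, TR=0.01, n=400):
    t = np.arange(n) * TR
    return (1 - np.exp(-t / t1)) * np.exp(-t / t2)   # placeholder signal

t1_grid = np.linspace(0.5, 2.0, 40)                  # seconds, illustrative
t2_grid = np.linspace(0.05, 0.3, 40)
atoms = [(a, b) for a in t1_grid for b in t2_grid]
D = np.array([fingerprint(a, b) for a, b in atoms])
D /= np.linalg.norm(D, axis=1, keepdims=True)        # normalized dictionary

def mc_errors(t1, t2, sigma=0.02, trials=500, seed=0):
    """Std of matched (T1, T2) across noisy realizations of one fingerprint."""
    rng = np.random.default_rng(seed)
    s = fingerprint(t1, t2)
    hits = []
    for _ in range(trials):
        y = s + rng.normal(0.0, sigma, s.size)
        k = int(np.argmax(D @ (y / np.linalg.norm(y))))  # dictionary match
        hits.append(atoms[k])
    return np.asarray(hits).std(axis=0)                  # (std T1, std T2)

print(mc_errors(1.0, 0.1))
```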

    Results

    Experimental results verify the ability of our quality factors to rapidly assess the efficiency of an MRF sequence in the presence of both normal and aliasing noise. Quality factor assessment of MRF sequences is in agreement with the results of a Monte-Carlo approach. Analysis of MRF parameter map errors from phantom experiments is consistent with the derived quality factors, with T1 (T2) data yielding goodness of fit R² ≥ 0.92 (0.80). In phantom and in vivo experiments, the efficient pulse sequence, determined through quality factor maximization, led to comparable or improved accuracy and precision relative to a longer sequence, demonstrating quality factor utility in MRF sequence design.

    Conclusion

    The quality factor framework introduced here allows for rapid analysis and optimization of MRF sequence design through T1 and T2 error forecasting.

     
  4. Purpose

    A new method for enhancing the sensitivity of diffusion MRI (dMRI) by combining the data from single (sPFG) and double (dPFG) pulsed field gradient experiments is presented.

    Methods

    This method uses our JESTER framework to combine microscopic anisotropy information from dPFG experiments, using a new method called diffusion tensor subspace imaging (DiTSI), to augment the macroscopic anisotropy information from sPFG data analyzed using our method guided by entropy spectrum pathways. This new method, called joint estimation diffusion imaging (JEDI), combines the sensitivity to macroscopic diffusion anisotropy of sPFG with the sensitivity to microscopic diffusion anisotropy of dPFG methods.

    Results

    Its ability to produce significantly more detailed anisotropy maps and more complete fiber tracts than existing methods, within both brain white matter (WM) and gray matter (GM), is demonstrated in normal human subjects using data collected with a novel fast, robust, and clinically feasible sPFG/dPFG acquisition.

    Conclusions

    The potential utility of this method is suggested by an initial demonstration of its ability to mitigate the problem of gyral bias. The capability of more completely characterizing the tissue structure and connectivity throughout the entire brain has broad implications for the utility and scope of dMRI in a wide range of research and clinical applications.

     
  5. In many real-world applications such as social network analysis and online advertising/marketing, one of the most important and popular problems is called influence maximization (IM), which finds a set of k seed users that maximize the expected number of influenced user nodes. In practice, however, maximizing the number of influenced nodes may be far from satisfactory for real applications such as opinion promotion and collective buying. In this paper, we explore the importance of stability and triangles in social networks, and formulate a novel problem in the influence spread scenario, named triangular stability maximization, over social networks, and generalize it to a general triangle influence maximization problem, which is proved to be NP-hard. We develop an efficient reverse influence sampling (RIS)-based framework for the triangle IM with theoretical guarantees. To enable unbiased estimators, it demands probabilistic sampling of triangles, that is, sampling triangles according to their probabilities. We propose an edge-based triple sampling approach, which is exactly equivalent to probabilistic sampling and avoids costly triangle enumeration and materialization. We also design several pruning and reduction techniques, as well as a cost-model-guided heuristic algorithm. Extensive experiments and a case study over real-world graphs confirm the effectiveness of our proposed algorithms and the superiority of triangular stability maximization and triangle influence maximization.
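To make "sampling triangles according to their probabilities" concrete, a naive live-edge illustration under the independent-cascade model is sketched below: realize each edge independently with its probability and enumerate the triangles that survive. The paper's edge-based triple sampling is designed precisely to avoid this kind of enumeration; the names and edge attribute here are illustrative.

```python
# Naive live-edge illustration -- not the paper's edge-based triple sampling.
import random
import networkx as nx

def sample_live_triangles(G, default_p=0.1, rng=random.Random(0)):
    """One live-edge realization of G and the triangles that survive in it."""
    live = nx.Graph()
    live.add_nodes_from(G)
    live.add_edges_from(e for e in G.edges
                        if rng.random() < G.edges[e].get("p", default_p))
    tris = set()
    for u, v in live.edges:
        for w in set(live[u]) & set(live[v]):    # common live neighbors
            tris.add(tuple(sorted((u, v, w))))   # assumes sortable node labels
    return tris
```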