

Search for: All records

Creators/Authors contains: "Ahn, S."


  1. Free, publicly-accessible full text available May 1, 2024
  2. Conversational partners develop shared knowledge. In referential communication tasks, partners collaboratively establish brief labels for hard-to-name images. These image-label mappings are associated in memory with that partner, evidenced by use of those brief labels with the same partner and longer descriptions with new partners. According to the people-as-contexts view, the conversational partner functions as a contextual cue that supports retrieval of conversationally relevant information. Inspired by findings from the memory literature that context effects can be stronger when retrieval is more explicit, two experiments test the hypothesis that the speaker will be more likely to invoke the partner as a retrieval cue when retrieval processes are more explicit. The results indicated a strong effect of partner that, contrary to these predictions, was not boosted by explicit retrieval processes. The lack of an effect of retrieval processes speaks to the ubiquity with which language use in conversation is tailored to the particular people with whom we converse.
  3. Human gaze behavior prediction is important for behavioral vision and for computer vision applications. Most models mainly focus on predicting free-viewing behavior using saliency maps, but do not generalize to goal-directed behavior, such as when a person searches for a visual target object. We propose the first inverse reinforcement learning (IRL) model to learn the internal reward function and policy used by humans during visual search. We modeled the viewer’s internal belief states as dynamic contextual belief maps of object locations. These maps were learned and then used to predict behavioral scanpaths for multiple target categories. To train and evaluate our IRL model we created COCO-Search18, which is now the largest dataset of high-quality search fixations in existence. COCO-Search18 has 10 participants searching for each of 18 target-object categories in 6202 images, yielding about 300,000 goal-directed fixations. When trained and evaluated on COCO-Search18, the IRL model outperformed baseline models in predicting search fixation scanpaths, both in terms of similarity to human search behavior and search efficiency. Finally, reward maps recovered by the IRL model reveal distinctive target-dependent patterns of object prioritization, which we interpret as a learned object context.
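The abstract above describes recovering a reward map and using it to predict fixation scanpaths. The following is only a toy sketch of that second step, not the authors' IRL model: it greedily reads fixations off a given 2-D reward map with Gaussian inhibition of return. The function name, parameters, and the inhibition scheme are illustrative assumptions.

```python
import numpy as np

def predict_scanpath(reward_map, n_fixations=6, sigma=1.0):
    """Greedy scanpath from a 2-D reward map with Gaussian inhibition of return."""
    H, W = reward_map.shape
    r = reward_map.astype(float).copy()
    ys, xs = np.mgrid[0:H, 0:W]
    path = []
    for _ in range(n_fixations):
        # Fixate the location with the highest remaining reward
        y, x = np.unravel_index(np.argmax(r), r.shape)
        path.append((y, x))
        # Suppress reward around the visited location so the next
        # fixation moves on (inhibition of return)
        r -= r.max() * np.exp(-((ys - y) ** 2 + (xs - x) ** 2) / (2 * sigma ** 2))
    return path
```

In the paper the reward map is target-dependent and learned from human fixations; here it would simply be supplied as an array, and the greedy argmax stands in for sampling from a learned policy.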
  4. Augmented reality (AR) technologies have seen significant improvement in recent years, with several consumer and commercial solutions being developed. New security challenges arise as AR becomes increasingly ubiquitous. Previous work has proposed techniques for securing the output of AR devices and used reinforcement learning (RL) to train security policies, which can be difficult to define manually. However, whether such systems and policies can be deployed on a physical AR device without degrading performance was left an open question. We develop a visual output security application using an RL-trained policy and deploy it on a Magic Leap One head-mounted AR device. The demonstration illustrates that RL-based visual output security systems are feasible.
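As a rough illustration of what a visual output security policy does per frame, the sketch below keeps only the AR objects a policy allows to draw, given their overlap with critical real-world regions. The names (`secure_render`, `Box`), the overlap-based state, and the simple callable standing in for the RL-trained policy are all assumptions for illustration, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: int
    y: int
    w: int
    h: int

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0, min(a.x + a.w, b.x + b.w) - max(a.x, b.x))
    iy = max(0, min(a.y + a.h, b.y + b.h) - max(a.y, b.y))
    inter = ix * iy
    union = a.w * a.h + b.w * b.h - inter
    return inter / union if union else 0.0

def secure_render(ar_boxes, critical_boxes, policy):
    """Per frame, keep only the AR objects the security policy allows to draw."""
    visible = []
    for b in ar_boxes:
        # State fed to the policy: worst-case overlap with any
        # critical real-world region (e.g. a sign the user must see)
        overlap = max((iou(b, c) for c in critical_boxes), default=0.0)
        if policy(overlap):
            visible.append(b)
    return visible
```

A hand-written threshold such as `lambda overlap: overlap < 0.2` can stand in for the learned policy when trying the function out; the point of the RL approach in the paper is precisely that such thresholds are hard to specify well by hand.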