Search for: All records
Total Resources: 4
- Author / Contributor
  - Adeli, Hossein (4)
  - Ahn, Seoyoung (3)
  - Chen, Yupei (2)
  - Hoai, Minh (2)
  - Huang, Lihan (2)
  - Yang, Zhibo (2)
  - Zelinsky, Gregory J. (2)
  - Ahn, S (1)
  - Samaras, Dimitrios (1)
  - Samaras, Dimitris (1)
  - Wei, Zijun (1)
  - Zelinsky, Greg (1)
  - Zelinsky, Gregory (1)
Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without charge during the embargo (administrative interval). Some links on this page may take you to non-federal websites, whose policies may differ from this site's.
- Ahn, Seoyoung; Adeli, Hossein; Zelinsky, Gregory (Proceedings of 2023 Conference on Cognitive Computational Neuroscience)
- Zelinsky, Gregory J.; Ahn, Seoyoung; Chen, Yupei; Yang, Zhibo; Adeli, Hossein; Huang, Lihan; Samaras, Dimitrios; Hoai, Minh (Neurons, Behavior, Data analysis, and Theory)
- Zelinsky, Greg; Yang, Zhibo; Huang, Lihan; Chen, Yupei; Ahn, S; Wei, Zijun; Adeli, Hossein; Samaras, Dimitris; Hoai, Minh (2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW))
  The prediction of human shifts of attention is a widely studied question in both behavioral science and computer vision, especially in the context of a free-viewing task. However, search behavior, where fixation scanpaths depend strongly on the viewer's goals, has received far less attention, even though visual search constitutes much of a person's everyday behavior. One reason for this is the absence of real-world image datasets on which search models can be trained. In this paper we present a carefully created dataset for two target categories, microwaves and clocks, curated from the COCO2014 dataset. A total of 2183 images were presented to multiple participants, who were tasked with searching for one of the two categories. This yields a total of 16184 validated fixations used for training, making our microwave-clock dataset currently one of the largest datasets of eye fixations in categorical search. We also present a 40-image testing dataset in which each image depicts both a microwave and a clock target. Distinct fixation patterns emerged depending on whether participants searched for a microwave (n=30) or a clock (n=30) in the same images, meaning that models must predict different search scanpaths from the same pixel inputs. We report the results of several state-of-the-art deep network models that were trained and evaluated on these datasets. Collectively, these datasets and our protocol for evaluation provide what we hope will be a useful test-bed for the development of new methods for predicting category-specific visual search behavior.
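The abstract above notes that models must predict different search scanpaths from the same pixel inputs, which presupposes a way to represent and compare scanpaths. A minimal sketch of one common approach, assuming fixations are (x, y) pixel tuples; it quantizes each scanpath to a coarse grid and scores agreement with a normalized edit-distance similarity. This is a standard scanpath-comparison technique, not necessarily the evaluation metric used by the authors:

```python
# Hypothetical sketch (not the paper's evaluation code): scanpaths as ordered
# (x, y) fixation sequences, quantized to a coarse grid and compared with a
# normalized Levenshtein (string-edit) similarity.

def quantize(scanpath, width, height, grid=5):
    """Map each (x, y) fixation to a grid-cell index, giving a string-like sequence."""
    return [
        int(y * grid // height) * grid + int(x * grid // width)
        for x, y in scanpath
    ]

def edit_distance(a, b):
    """Classic Levenshtein distance between two sequences (one-row DP)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[len(b)]

def scanpath_similarity(p, q, width, height, grid=5):
    """1.0 for identical quantized scanpaths, approaching 0.0 for disjoint ones."""
    a, b = quantize(p, width, height, grid), quantize(q, width, height, grid)
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

# Two hypothetical viewers scanning the same 640x480 image for different targets:
microwave_path = [(100, 100), (320, 240), (500, 400)]
clock_path = [(600, 50), (320, 240), (100, 400)]
```

Under this scoring, a model that emitted the same scanpath regardless of the search target would necessarily do poorly on one of the two conditions, which is the point the testing dataset is designed to probe.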