-
Neuroimaging data typically undergoes several preprocessing steps before further analysis and mining can be done. Affine image registration is one of the important tasks during preprocessing. Recently, several image registration methods based on Convolutional Neural Networks (CNNs) have been proposed. However, due to the high computational and memory requirements of CNNs, these methods cannot be used in real time on large neuroimaging data such as fMRI. In this paper, we propose a Dual-Attention Recurrent Network (DRN) which uses a hard attention mechanism to allow the model to focus on small, but task-relevant, parts of the input image, thus reducing computational …
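
A minimal sketch of the idea described above, not the DRN architecture itself: the layer sizes, patch size, and network layout are assumptions for illustration. At each recurrent step the model crops a small patch (hard attention) from the moving and fixed images instead of processing the full volume, updates a recurrent state, and regresses both the next attention location and the affine parameters.

```python
import torch
import torch.nn as nn


def crop_patch(image, center, size=32):
    """Hard attention: extract a small square patch around `center` (y, x)."""
    _, _, h, w = image.shape
    y = int(center[0].clamp(size // 2, h - size // 2))
    x = int(center[1].clamp(size // 2, w - size // 2))
    return image[:, :, y - size // 2:y + size // 2, x - size // 2:x + size // 2]


class HardAttentionRegistrator(nn.Module):
    """Illustrative recurrent hard-attention model for 2-D affine registration."""

    def __init__(self, hidden=128, steps=8):
        super().__init__()
        self.steps = steps
        self.encoder = nn.Sequential(             # shared patch encoder
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rnn = nn.GRUCell(32, hidden)
        self.where = nn.Linear(hidden, 2)          # next attention location
        self.affine = nn.Linear(hidden, 6)         # 2-D affine parameters

    def forward(self, moving, fixed):
        b, _, h, w = moving.shape
        state = torch.zeros(b, self.rnn.hidden_size, device=moving.device)
        loc = torch.tensor([h / 2.0, w / 2.0], device=moving.device)
        for _ in range(self.steps):
            # Only a small patch is encoded per step, which is what keeps the
            # cost low compared with running a CNN over the whole image.
            patch = torch.cat([crop_patch(moving, loc),
                               crop_patch(fixed, loc)], dim=1)
            state = self.rnn(self.encoder(patch), state)
            loc = self.where(state)[0]             # move attention for the next step
        return self.affine(state).view(b, 2, 3)    # predicted affine matrix
```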
-
Attention-based image classification has gained increasing popularity in recent years. State-of-the-art methods for attention-based classification typically require a large training set and operate under the assumption that the label of an image depends solely on a single object (i.e., region of interest) in the image. However, in many real-world applications (e.g., medical imaging), it is very expensive to collect a large training set. Moreover, the label of each image is usually determined jointly by multiple regions of interest (ROIs). Fortunately, for such applications, it is often possible to collect the locations of the ROIs in each training image. In this …
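
A hedged sketch of how ROI annotations can supervise attention, not the paper's method: the backbone, loss weights, and use of a binary ROI mask are assumptions. The classification loss is combined with a term that encourages the attention map to cover every annotated ROI, so that several regions can jointly drive the label even with a small training set.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ROISupervisedAttentionNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(            # small convolutional backbone
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.attn = nn.Conv2d(64, 1, 1)           # per-pixel attention logits
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        f = self.features(x)                      # (B, 64, H, W)
        a = torch.sigmoid(self.attn(f))           # attention map in [0, 1]
        # Attention-weighted pooling over the spatial dimensions.
        pooled = (f * a).sum(dim=(2, 3)) / (a.sum(dim=(2, 3)) + 1e-6)
        return self.classifier(pooled), a


def loss_fn(logits, attn_map, labels, roi_mask, lam=0.5):
    """Classification loss plus attention supervision from ROI annotations."""
    cls_loss = F.cross_entropy(logits, labels)
    # roi_mask marks every annotated ROI, so the attention map is pushed to
    # cover all regions that jointly determine the label.
    attn_loss = F.binary_cross_entropy(attn_map, roi_mask)
    return cls_loss + lam * attn_loss
```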
-
With the rapid development of social media, visual sentiment analysis from images and videos has become a hot topic in visual understanding research. In this work, we propose an effective approach using visual and textual fusion for sentiment analysis of short GIF videos with textual descriptions. We extract both sequence-level and frame-level visual features for each given GIF video. Next, we build a visual sentiment classifier using the extracted features. We also define a mapping function, which converts the sentiment probability from the classifier to a sentiment score used in our fusion function. At the same time, for the …
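
An illustrative stand-in for the pipeline the abstract describes, not the mapping or fusion functions defined in the paper: a visual classifier yields a sentiment probability, a mapping turns it into a signed score, and a fusion function combines the visual score with a textual one. The linear mapping and the weight `alpha` are assumptions.

```python
def probability_to_score(p_positive: float) -> float:
    """Map a positive-class probability in [0, 1] to a sentiment score in [-1, 1]."""
    return 2.0 * p_positive - 1.0


def fuse(visual_score: float, textual_score: float, alpha: float = 0.6) -> float:
    """Weighted fusion of visual and textual sentiment scores (alpha is assumed)."""
    return alpha * visual_score + (1.0 - alpha) * textual_score


# Example: a GIF whose frames look mildly positive but whose description is
# strongly positive ends up with a positive fused sentiment.
visual = probability_to_score(0.62)   # output of the visual sentiment classifier
textual = probability_to_score(0.90)  # textual sentiment from the description
print(fuse(visual, textual))          # > 0, i.e. positive overall sentiment
```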