Vision science, particularly machine vision, has been revolutionized by the introduction of large-scale image datasets and statistical learning approaches. Yet human neuroimaging studies of visual perception still rely on small numbers of images (around 100) due to time-constrained experimental procedures. To apply statistical learning approaches in neuroscience, the number of images used in neuroimaging studies must be significantly increased. We present BOLD5000, a human functional MRI (fMRI) study that includes almost 5,000 distinct images depicting real-world scenes. Beyond dramatically increasing image dataset size relative to prior fMRI studies, BOLD5000 also accounts for image diversity, overlapping with standard computer vision datasets by incorporating images from the Scene UNderstanding (SUN), Common Objects in Context (COCO), and ImageNet datasets. The scale and diversity of these image datasets, combined with a slow event-related fMRI design, enable fine-grained exploration of the neural representation of a wide range of visual features, categories, and semantics. At the same time, BOLD5000 brings us closer to realizing Marr's dream of a singular vision science: the intertwined study of biological and computer vision.
On the performance of machine learning fairness in image classification
Computer vision is a rapidly evolving field that aims to enable machines to interpret and understand visual information from their environment; in recent years, it has made significant strides across a wide range of tasks, from image classification and segmentation to image generation and video analysis. One key task is image classification, in which algorithms identify and categorize objects in images based on their visual features. Image classification has a wide range of applications, from image search and recommendation systems to autonomous driving and medical diagnosis. However, recent research has highlighted the presence of bias in image classification algorithms, particularly with respect to human-sensitive attributes such as gender, race, and ethnicity. For example, models have been found to predict the label "computer programmer" more accurately for images depicting men than for images depicting women, and classification accuracy can differ between greyscale and color images. Such discrepancies arise from correlations the algorithm learns between objects and their surrounding context, a phenomenon known as contextual bias. This bias can result in inaccurate decisions, with potential consequences in areas such as hiring, healthcare, and security. In this paper, we conduct an empirical study of bias in the image classification domain with respect to the sensitive attribute of gender, using deep convolutional neural networks (CNNs) trained via transfer learning, and we mitigate bias within the image context using data augmentation to improve overall model performance. In addition, cross-dataset generalization experiments are conducted to evaluate model robustness across popular open-source image datasets.
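To make the setup concrete, the following is a minimal sketch of a transfer-learning pipeline with augmentation of the kind the abstract describes, assuming PyTorch/torchvision; the dataset path, backbone choice, and hyperparameters are illustrative placeholders, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Augmentations chosen to perturb color and context cues (illustrative choices).
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.RandomGrayscale(p=0.1),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical dataset path; any ImageFolder-style directory works here.
train_ds = datasets.ImageFolder("data/train", transform=train_tf)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Pretrained backbone with a fresh classification head (transfer learning).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                       # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                     # one illustrative epoch
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```

Augmentations such as random cropping and grayscale conversion perturb contextual and color cues, which is one plausible way to weaken the spurious correlations the abstract attributes to contextual bias.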
- Award ID(s): 2039583
- NSF-PAR ID: 10447288
- Editor(s): Solomon, Latasha; Schwartz, Peter J.
- Date Published:
- Journal Name: Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications V
- Page Range / eLocation ID: 60
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Scanned images of patent or historical documents often contain localized zigzag noise introduced by the digitizing process; when viewed as a whole image, the global structures are apparent to humans, but not to machines. Existing denoising methods work well for natural images but not for binary diagram images, which makes feature extraction difficult for computer vision and machine learning methods. We propose a topological graph-based representation to tackle this denoising problem. The graph representation emphasizes the shapes and topology of diagram images, making it ideal for machine learning applications such as classification and matching of scientific diagram images. Our approach provides essential structure and lays an important foundation for computer vision tasks such as scene-graph-based applications, because the topological relations and spatial arrangement among objects in an image are captured and stored in our skeleton graph. In addition, whereas the parameters of almost all pixel-based methods are not adaptive, our method is robust in that it requires only a single parameter, which is adaptive. Experimental comparisons with existing methods show the effectiveness of our approach.
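As an illustration of the general idea (not the authors' exact algorithm), the sketch below builds a skeleton graph from a binary diagram image and drops tiny connected components as localized scanning noise, assuming scikit-image and NetworkX; the `min_size` pruning threshold is a hypothetical stand-in for the single adaptive parameter the abstract mentions.

```python
import networkx as nx
import numpy as np
from skimage.morphology import skeletonize

def skeleton_graph(binary_img: np.ndarray, min_size: int = 10) -> nx.Graph:
    """Build an 8-connected graph over skeleton pixels; prune small components."""
    skel = skeletonize(binary_img.astype(bool))
    g = nx.Graph()
    ys, xs = np.nonzero(skel)
    pixels = set(zip(ys.tolist(), xs.tolist()))
    for y, x in pixels:
        g.add_node((y, x))
        # Connect each skeleton pixel to its 8-neighbors on the skeleton.
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if (dy, dx) != (0, 0) and (y + dy, x + dx) in pixels:
                    g.add_edge((y, x), (y + dy, x + dx))
    # Components smaller than min_size are treated as localized zigzag noise.
    for comp in list(nx.connected_components(g)):
        if len(comp) < min_size:
            g.remove_nodes_from(comp)
    return g
```

The resulting graph keeps the diagram's topology (junctions, strokes, enclosed regions) while discarding pixel-level noise, which is what makes graph-level matching and classification possible downstream.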
-
Geospatial analysis lacks methods like the word vector representations and pre-trained networks that significantly boost performance across a wide range of natural language and computer vision tasks. To fill this gap, we introduce Tile2Vec, an unsupervised representation learning algorithm that extends the distributional hypothesis from natural language (words appearing in similar contexts tend to have similar meanings) to spatially distributed data. We demonstrate empirically that Tile2Vec learns semantically meaningful representations for both image and non-image datasets. Our learned representations significantly improve performance in downstream classification tasks and, similarly to word vectors, allow visual analogies to be obtained via simple arithmetic in the latent space.
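The spatial analogue of the distributional hypothesis is typically trained with a triplet objective: tiles sampled near an anchor should embed close to it, and far-away tiles should embed far from it. Below is a minimal sketch of that objective, assuming PyTorch; the toy encoder, tile size, and margin are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy encoder for small image tiles (stand-in for a real CNN encoder).
encoder = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 64),
)

def tile2vec_loss(anchor, neighbor, distant, margin=1.0):
    """Triplet loss: nearby tiles pulled together, distant tiles pushed apart."""
    za, zn, zd = encoder(anchor), encoder(neighbor), encoder(distant)
    pos = F.pairwise_distance(za, zn)
    neg = F.pairwise_distance(za, zd)
    return F.relu(pos - neg + margin).mean()

# One illustrative optimization step on random stand-in tiles.
a, n, d = (torch.randn(8, 3, 50, 50) for _ in range(3))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt.zero_grad()
loss = tile2vec_loss(a, n, d)
loss.backward()
opt.step()
```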
-
Existing public face image datasets are strongly biased toward Caucasian faces, and other races (e.g., Latino) are significantly underrepresented. Models trained on such datasets suffer from inconsistent classification accuracy, which limits the applicability of face analytic systems to non-White race groups. To mitigate the race bias in these datasets, we constructed a novel face image dataset containing 108,501 images that is balanced on race. We define 7 race groups: White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latino. Images were collected from the YFCC-100M Flickr dataset and labeled with race, gender, and age groups. Evaluations were performed on existing face attribute datasets as well as novel image datasets to measure generalization performance. We find that the model trained on our dataset is substantially more accurate on novel datasets, and its accuracy is consistent across race and gender groups. We also compare several commercial computer vision APIs and report their balanced accuracy across gender, race, and age groups. Our code, data, and models are available at https://github.com/joojs/fairface.
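The balanced accuracy reported here is simply the unweighted mean of per-group accuracies, so no group's performance is masked by its sample size. A minimal sketch of that computation, assuming pandas and hypothetical column names:

```python
import pandas as pd

# Toy evaluation table; "correct" is 1 if the model's prediction was right.
df = pd.DataFrame({
    "race":    ["White", "Black", "East Asian", "Latino", "Black", "White"],
    "correct": [1, 0, 1, 1, 1, 0],
})

per_group = df.groupby("race")["correct"].mean()   # accuracy within each race group
balanced_accuracy = per_group.mean()               # unweighted mean across groups

print(per_group)
print(f"balanced accuracy: {balanced_accuracy:.3f}")
```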
-
Language is increasingly being used to define rich visual recognition problems with supporting image collections sourced from the web. Structured prediction models are used in these tasks to take advantage of correlations between co-occurring labels and visual input but risk inadvertently encoding social biases found in web corpora. In this work, we study data and models associated with multilabel object classification and visual semantic role labeling. We find that (a) datasets for these tasks contain significant gender bias and (b) models trained on these datasets further amplify existing bias. For example, the activity cooking is over 33% more likely to involve females than males in a training set, and a trained model further amplifies the disparity to 68% at test time. We propose to inject corpus-level constraints for calibrating existing structured prediction models and design an algorithm based on Lagrangian relaxation for collective inference. Our method results in almost no performance loss for the underlying recognition task but decreases the magnitude of bias amplification by 47.5% and 40.5% for multilabel classification and visual semantic role labeling, respectively.
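As a loose illustration of the constraint idea (not the paper's algorithm), the sketch below uses a single dual variable, updated by projected subgradient ascent, to push a test-time gender ratio back toward a training-set target. It assumes NumPy; the scores, the target ratio, and the one-sided constraint are illustrative simplifications of the full Lagrangian-relaxation formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy model scores for one activity; columns = (female, male) agent.
scores = rng.normal(size=(200, 2))
scores[:, 0] += 1.0                # inject a spurious skew so the constraint binds
train_ratio, margin = 0.66, 0.05   # training P(female | activity) and allowed slack
lam, step = 0.0, 0.1               # dual variable and subgradient step size

for _ in range(100):
    # Collective inference: lam penalizes the over-predicted column.
    adjusted = scores + np.array([-lam, 0.0])
    preds = adjusted.argmax(axis=1)
    ratio = (preds == 0).mean()    # fraction predicted "female" at test time
    # Projected subgradient ascent on the one-sided constraint
    # ratio <= train_ratio + margin; lam stays non-negative.
    lam = max(0.0, lam + step * (ratio - (train_ratio + margin)))

print(f"calibrated female ratio: {(adjusted.argmax(axis=1) == 0).mean():.2f}")
```

The dual variable only grows while the constraint is violated, so the recognition scores are perturbed just enough to bring the test-time ratio back within the margin, mirroring the "almost no performance loss" behavior the abstract describes.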