Title: Computational Cut-Ups: The Influence of Dada
Can a tool designed to detect dogs detect Dada? We apply a cutting-edge image analysis tool, convolutional neural networks (CNNs), to a collection of page images from modernist journals. This process radically deforms the images from cultural artifacts into lists of numbers. We determine whether the system can, nevertheless, distinguish Dada from other, non-Dada avant-garde work, and in the process learn something about the cohesiveness of Dada as a visual form. We can also analyze the "mistakes" made in classifying Dada to search for the visual influence of Dada as a movement.
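The "radical deformation" the abstract describes is the core of how any CNN sees a page: convolution and pooling turn a scanned image into a flat vector of numbers that a classifier can then separate. The following is a minimal, illustrative sketch of that pipeline in plain NumPy; it is not the authors' model, and the kernel and image sizes are arbitrary stand-ins.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2-D convolution (really cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling; crops any ragged edge."""
    h, w = (fmap.shape[0] // size) * size, (fmap.shape[1] // size) * size
    fmap = fmap[:h, :w]
    return fmap.reshape(h // size, size, w // size, size).max(axis=(1, 3))

def page_to_feature_vector(page, kernel):
    """A page image becomes a flat list of numbers -- the 'radical deformation'."""
    return max_pool(np.maximum(convolve2d(page, kernel), 0)).ravel()

page = np.random.default_rng(0).random((8, 8))      # stand-in for a scanned page
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # crude vertical-edge detector
features = page_to_feature_vector(page, edge_kernel)
print(features.shape)  # (9,): 7x7 feature map pooled to 3x3, then flattened
```

A real Dada/non-Dada classifier would stack many such learned kernels and feed the resulting vector into trained fully connected layers; the point here is only how the cultural artifact ends up as a list of numbers.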
Award ID(s):
1652536
PAR ID:
10092207
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of modern periodical studies
Volume:
8
Issue:
2
ISSN:
2152-9272
Page Range / eLocation ID:
179-195
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Today, creators of data-hungry deep neural networks (DNNs) scour the Internet for training fodder, leaving users with little control over or knowledge of when their data, and in particular their images, are used to train models. To empower users to counteract unwanted use of their images, we design, implement and evaluate a practical system that enables users to detect if their data was used to train a DNN model for image classification. We show how users can create special images we call isotopes, which introduce "spurious features" into DNNs during training. With only query access to a model and no knowledge of the model-training process, nor control of the data labels, a user can apply statistical hypothesis testing to detect if the model learned these spurious features by training on the user's images. Isotopes can be viewed as an application of a particular type of data poisoning. In contrast to backdoors and other poisoning attacks, our purpose is not to cause misclassification but rather to create tell-tale changes in confidence scores output by the model that reveal the presence of isotopes in the training data. Isotopes thus turn DNNs' vulnerability to memorization and spurious correlations into a tool for data provenance. Our results confirm efficacy in multiple image classification settings, detecting and distinguishing between hundreds of isotopes with high accuracy. We further show that our system works on public ML-as-a-service platforms and larger models such as ImageNet, can use physical objects in images instead of digital marks, and remains robust against several adaptive countermeasures.
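The detection step the abstract describes — query-only access, then a hypothesis test on confidence scores — can be illustrated with a two-sample test comparing the model's confidences on isotope-marked probes against clean probes. This is a hedged, stdlib-only sketch with made-up confidence values, not the paper's exact procedure.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic: a large |t| means the two confidence
    distributions differ, suggesting the spurious feature was learned."""
    ma, mb = mean(sample_a), mean(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    na, nb = len(sample_a), len(sample_b)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical confidence scores returned by a black-box classifier:
# probes carrying the isotope mark vs. otherwise-similar clean probes.
marked = [0.91, 0.88, 0.93, 0.90, 0.89, 0.92]
clean  = [0.55, 0.61, 0.58, 0.52, 0.60, 0.57]

t = welch_t(marked, clean)
# A |t| far above ~3 would reject "no difference" at conventional levels,
# i.e. evidence that the model trained on the user's isotope images.
print(round(t, 1))
```

Only the model's outputs are needed here, which matches the abstract's threat model: no access to training data, labels, or the training process.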
  2.
    Corrosion on steel bridge members is one of the most important bridge deficiencies that must be carefully monitored by inspectors. Human visual inspection is typically conducted first, and additional measures such as tapping bolts and measuring section losses can be used to assess the level of corrosion. This process becomes a challenge when some of the connections are placed in locations where inspectors have to climb up or down the steel members. To assist this inspection process, we developed a computer-vision-based Unmanned Aerial Vehicle (UAV) system for monitoring the health of critical steel bridge connections (bolts, rivets, and pins). We used a UAV to collect images from a steel truss bridge. Then we fed the collected datasets into an instance-level segmentation model, a region-based convolutional neural network, to learn the visual characteristics of corrosion at steel connections from sets of labeled image data. The segmentation model identified the locations of the connections in images and efficiently detected the members with corrosion on them. We evaluated the model based on how precisely it can detect rivets, bolts, pins, and corrosion damage on these members. The results showed the robustness and practicality of our system, which can also provide useful health information to bridge owners for future maintenance. The collected image data can be used to quantitatively track temporal changes and to monitor the progression of damage in aging steel structures. Furthermore, the system can also assist inspectors in making decisions for further detailed inspections.
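Evaluating "how precisely" a detector finds rivets, bolts, and pins is conventionally done by matching predicted regions to ground-truth regions at an intersection-over-union (IoU) threshold. The sketch below shows that standard metric on hypothetical bounding boxes; the abstract does not state the exact evaluation protocol, so this is illustrative only.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def precision_at_iou(predictions, ground_truth, threshold=0.5):
    """Fraction of predicted boxes matching some ground-truth box at IoU >= threshold."""
    hits = sum(1 for p in predictions
               if any(iou(p, g) >= threshold for g in ground_truth))
    return hits / len(predictions) if predictions else 0.0

# Hypothetical rivet detections on one inspection image.
truth = [(10, 10, 30, 30), (50, 50, 70, 70)]
preds = [(12, 11, 31, 29), (100, 100, 120, 120)]
print(precision_at_iou(preds, truth))  # one of two predictions matches -> 0.5
```

Instance segmentation models score mask overlap rather than box overlap, but the matching logic is the same.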
  3. The purpose of a routine bridge inspection is to assess the physical and functional condition of a bridge according to a regularly scheduled interval. The Federal Highway Administration (FHWA) requires these inspections to be conducted at least every 2 years. Inspectors use simple tools and visual inspection techniques to determine the conditions of both the elements of the bridge structure and the bridge overall. While in the field, the data is collected in the form of images and notes; after the field work is complete, inspectors need to generate a report based on these data to document their findings. The report generation process includes several tasks: (1) evaluating the condition rating of each bridge element according to the FHWA Recording and Coding Guide for Structure Inventory and Appraisal of the Nation's Bridges; and (2) updating and organizing the bridge inspection images for the report. Both tasks are time-consuming. This study focuses on assisting with the latter task by developing an artificial intelligence (AI)-based method to rapidly organize bridge inspection images and generate a report. In this paper, an image organization schema based on the FHWA Recording and Coding Guide for the Structure Inventory and Appraisal of the Nation's Bridges and the Manual for Bridge Element Inspection is described, and several convolutional neural network-based classifiers are trained with real inspection images collected in the field. Additionally, exchangeable image file (EXIF) information is automatically extracted to organize inspection images according to their time stamps. Finally, the Automated Bridge Image Reporting Tool (ABIRT) is described as a browser-based system built on the trained classifiers. Inspectors can directly upload images to this tool and rapidly obtain organized images and an associated inspection report using any computer with an internet connection. The authors provide recommendations to inspectors for gathering future images to make the best use of this tool.
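The EXIF-based organization step is easy to sketch: EXIF stores capture time (tag DateTimeOriginal) as a `"YYYY:MM:DD HH:MM:SS"` string, which can be parsed and used to bucket images by inspection day. The filenames below are hypothetical, and in real code the timestamp would be read from the file with a library such as Pillow (`Image.open(path).getexif()`); this stdlib-only sketch covers only the grouping logic.

```python
from collections import defaultdict
from datetime import datetime

# EXIF stores capture time as "YYYY:MM:DD HH:MM:SS" (tag DateTimeOriginal).
EXIF_FMT = "%Y:%m:%d %H:%M:%S"

def organize_by_date(records):
    """Group (filename, exif_timestamp) pairs into per-day buckets,
    sorted chronologically within each day -- the report-ordering step."""
    buckets = defaultdict(list)
    for name, stamp in records:
        ts = datetime.strptime(stamp, EXIF_FMT)
        buckets[ts.date().isoformat()].append((ts, name))
    return {day: [n for _, n in sorted(items)] for day, items in buckets.items()}

# Hypothetical inspection photos from two field days.
records = [
    ("span2_girder.jpg",  "2023:05:14 10:42:00"),
    ("pier1_bearing.jpg", "2023:05:14 09:15:30"),
    ("deck_overview.jpg", "2023:05:15 08:05:12"),
]
report = organize_by_date(records)
print(report["2023-05-14"])  # ['pier1_bearing.jpg', 'span2_girder.jpg']
```

ABIRT additionally classifies each image against the FHWA element schema; the timestamp grouping shown here is just the temporal half of the organization.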
  4. Abstract As machine vision technology generates large amounts of data from sensors, it requires efficient computational systems for visual cognitive processing. Recently, in-sensor computing systems have emerged as a potential solution for reducing unnecessary data transfer and realizing fast and energy-efficient visual cognitive processing. However, they still lack the capability to process stored images directly within the sensor. Here, we demonstrate a heterogeneously integrated 1-photodiode and 1 memristor (1P-1R) crossbar for in-sensor visual cognitive processing, emulating a mammalian image encoding process to extract features from the input images. Unlike other neuromorphic vision processes, the trained weight values are applied as an input voltage to the image-saved crossbar array instead of storing the weight value in the memristors, realizing the in-sensor computing paradigm. We believe the heterogeneously integrated in-sensor computing platform provides an advanced architecture for real-time and data-intensive machine-vision applications via bio-stimulus domain reduction. 
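Electrically, the readout the abstract describes — trained weights applied as voltages to an image-saved crossbar — is a matrix-vector multiply: by Ohm's and Kirchhoff's laws, each column current is the sum of row voltages times stored conductances. The toy simulation below illustrates that arithmetic only; the values are arbitrary and it models none of the device physics of the 1P-1R array.

```python
import numpy as np

def crossbar_readout(conductances, voltages):
    """Column currents of a resistive crossbar: I_j = sum_i V_i * G_ij.
    Here the *image* pixels are stored as conductances and the *trained
    weights* arrive as row voltages, matching the in-sensor scheme where
    weights are applied to an image-saved array rather than stored in it."""
    return voltages @ conductances

# 2x2 toy: a stored image patch (as normalized conductances) and one
# hypothetical trained feature filter applied as voltages.
image_patch = np.array([[0.2, 0.8],
                        [0.6, 0.4]])
weight_voltages = np.array([1.0, -1.0])  # a simple difference filter
currents = crossbar_readout(image_patch, weight_voltages)
print(currents)  # [-0.4  0.4]
```

Because the multiply-accumulate happens where the image already resides, no raw pixel data has to leave the sensor — the data-transfer saving the abstract motivates.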
  5. This paper presents a novel strategy to train keypoint detection models for robotics applications. Our goal is to develop methods that can robustly detect and track natural features on robotic manipulators. Such features can be used for vision-based control and pose estimation purposes when placing artificial markers (e.g. ArUco) on the robot's body is not possible or practical at runtime. Prior methods require accurate camera calibration and robot kinematic models in order to label training images for the keypoint locations. In this paper, we remove these dependencies by utilizing inpainting methods: in the training phase, we attach ArUco markers along the robot's body and then label the keypoint locations as the centers of those markers. We then use an inpainting method to reconstruct the parts of the robot occluded by the ArUco markers. As such, the markers are artificially removed from the training images, and labeled data is obtained to train markerless keypoint detection algorithms without the need for camera calibration or robot models. Using this approach, we trained a model for real-time keypoint detection and used the inferred keypoints as control features for an adaptive visual servoing scheme. We obtained successful control results with this fully model-free control strategy, utilizing natural robot features at runtime and requiring neither camera calibration nor robot models at any stage of the process.
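The labeling trick in this last abstract — keypoint label = center of a detected ArUco marker — reduces to averaging the four corner coordinates that a marker detector (such as OpenCV's cv2.aruco module) returns per marker. The sketch below uses hypothetical, hand-written detections; real code would obtain the corners from the detector and then hand the frame to the inpainting step.

```python
import numpy as np

def marker_center(corners):
    """Keypoint label = centroid of the four ArUco corner coordinates,
    in the order a detector such as OpenCV's cv2.aruco returns them."""
    return np.asarray(corners, dtype=float).mean(axis=0)

def label_frame(detections):
    """Map marker id -> keypoint (x, y) for one training frame; after the
    markers are inpainted away, these become the markerless labels."""
    return {mid: tuple(marker_center(c)) for mid, c in detections.items()}

# Hypothetical detections on one frame of the robot arm.
detections = {
    7:  [(100, 200), (140, 200), (140, 240), (100, 240)],  # elbow marker
    12: [(300, 120), (330, 125), (328, 155), (298, 150)],  # wrist marker
}
labels = label_frame(detections)
print(labels[7])  # (120.0, 220.0)
```

Because the labels come from the markers themselves, no camera calibration or kinematic model is needed — the dependency the paper removes.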