Convolutional neural networks (CNNs) use convolutional layers to exploit spatial/temporal adjacency when constructing new feature representations. As a result, CNNs are commonly applied to data with strong temporal/spatial correlations, but they cannot be directly applied to generic learning tasks. In this paper, we propose to enable CNN learning from generic data to improve classification accuracy. To take full advantage of CNN’s feature learning power, we convert each instance of the original dataset into a synthetic matrix/image format. To maximize the correlation in the constructed matrix/image, we use 0/1 optimization to reorder features so that strongly correlated features are adjacent to each other. Using the resulting feature reordering matrix, we create a synthetic image to represent each instance. Because the constructed synthetic image preserves the original feature values and correlations, a CNN can then learn effective features for classification. Experiments and comparisons on 22 benchmark datasets demonstrate a clear performance gain from applying CNNs to generic datasets, compared with conventional machine learning methods. Furthermore, our method consistently outperforms approaches that apply CNNs to generic datasets in naive ways. This research allows deep learning to be broadly applied to generic datasets.
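The abstract does not include implementation details; the following is a minimal sketch of the described pipeline, with a simple greedy correlation-based ordering standing in for the paper's 0/1 optimization. All function names and parameters are illustrative assumptions, not the authors' code.

```python
# Minimal sketch: convert a tabular dataset into per-instance "images" by
# reordering features so strongly correlated features become adjacent.
# The paper formulates the reordering as a 0/1 optimization; a simple greedy
# heuristic stands in for it here (illustrative only, not the paper's method).
import numpy as np

def greedy_feature_order(X):
    """Chain features so each next feature is highly correlated with the previous one."""
    corr = np.abs(np.corrcoef(X, rowvar=False))  # |correlation| between feature columns
    np.fill_diagonal(corr, 0.0)
    order = [0]                                  # arbitrary starting feature
    remaining = set(range(1, X.shape[1]))
    while remaining:
        nxt = max(remaining, key=lambda j: corr[order[-1], j])
        order.append(nxt)
        remaining.remove(nxt)
    return np.array(order)

def to_images(X, order):
    """Reorder features and reshape each instance into a square single-channel matrix."""
    n, d = X.shape
    side = int(np.ceil(np.sqrt(d)))
    padded = np.zeros((n, side * side))
    padded[:, :d] = X[:, order]                  # zero-pad to fill the square
    return padded.reshape(n, 1, side, side)      # (N, channels, H, W), ready for a 2-D CNN

X = np.random.rand(100, 30)                      # random data standing in for a generic dataset
images = to_images(X, greedy_feature_order(X))   # feed to any standard CNN classifier
```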
Work-in-Progress: Novel Ethnographic Approaches for Investigating Engineering Practice
The aim of this work-in-progress paper is to introduce an exploratory project that will test innovative approaches to data collection and analysis for rapidly generating new knowledge about engineering practice.
- Award ID(s): 1832966
- PAR ID: 10192285
- Date Published:
- Journal Name: Proceedings of the American Society for Engineering Education Virtual Conference
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
The ability to exert self-control varies within and across taxa. Some species can exert self-control for several seconds, whereas others, such as large-brained vertebrates, can tolerate delays of up to several minutes. Advanced self-control has been linked to better performance in cognitive tasks and has been hypothesized to evolve in response to specific socio-ecological pressures. These pressures are difficult to uncouple because previously studied species face similar socio-ecological challenges. Here, we investigate self-control and learning performance in cuttlefish, an invertebrate that is thought to have evolved under partially different pressures to previously studied vertebrates. To test self-control, cuttlefish were presented with a delay maintenance task, which measures an individual's ability to forgo immediate gratification and sustain a delay for a better but delayed reward. Cuttlefish maintained delay durations of up to 50–130 s. To test learning performance, we used a reversal-learning task, whereby cuttlefish were required to learn to associate the reward with one of two stimuli and then subsequently learn to associate the reward with the alternative stimulus. Cuttlefish that delayed gratification for longer had better learning performance. Our results demonstrate that cuttlefish can tolerate delays to obtain food of higher quality, a level of self-control comparable to that of some large-brained vertebrates.
The dataset includes measurements of individual subduction zones defined in the convergence-parallel, trench-perpendicular, and spreading-parallel directions.
Table S3. Location of each trench, arc, and back-arc defined in a direction parallel to the convergence, with the corresponding distance from the trench to the arc (D_TA), subarc slab depth (H), and distance from the trench to the back-arc spreading center (D_TB). The slab dip is measured at 50 km (Dip50), 100 km (Dip100), and 200 km (Dip200), and averaged from 0 to 50 km (Dip050), 0 to 100 km (Dip0100), 0 to 200 km (Dip0200), and 50 to 200 km (Dip50200).
Table S4. Same measurements as Table S3, with trench, arc, and back-arc locations defined in a direction perpendicular to the trench.
Table S5. Same measurements as Table S3, with trench, arc, and back-arc locations defined in a direction parallel to the spreading direction.
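For orientation only, a short pandas sketch of how the columns described above might be read; the file name `table_s3.csv` and the exact column headers are assumptions based on this description, not the dataset's actual distributed files.

```python
# Illustrative only: read one of the tables described above with pandas and
# summarize a few columns. "table_s3.csv" and the column names are assumed
# from the description, not taken from the actual distributed files.
import pandas as pd

df = pd.read_csv("table_s3.csv")                 # convergence-parallel measurements
print(df[["D_TA", "H", "D_TB", "Dip50", "Dip100", "Dip200"]].describe())

# e.g., mean slab dip averaged from 50 to 200 km depth, restricted to
# subduction zones that have a back-arc spreading center
print(df.dropna(subset=["D_TB"])["Dip50200"].mean())
```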
Interactions between bids to show ads online can lead to an advertiser's ad being shown to more men than women even when the advertiser does not target towards men. We design bidding strategies that advertisers can use to avoid such emergent discrimination without having to modify the auction mechanism. We mathematically analyze the strategies to determine the additional cost to the advertiser for avoiding discrimination, proving our strategies to be optimal in some settings. We use simulations to understand other settings.
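The abstract gives no mechanism details; the toy simulation below, with made-up bid values, only illustrates the emergent effect it describes: a gender-neutral bidder can lose nearly all impressions for one group when a competitor bids higher for that group.

```python
# Toy auction, purely illustrative numbers: advertiser A bids the same amount
# for every user, advertiser B bids more for men. A's ad ends up shown almost
# only to women even though A never targeted by gender.
import random

random.seed(0)
impressions_won_by_a = {"men": 0, "women": 0}
for _ in range(10_000):
    group = random.choice(["men", "women"])
    bid_a = 1.0                                  # gender-neutral bid
    bid_b = 1.5 if group == "men" else 0.5       # competitor values men more highly
    if bid_a > bid_b:                            # A wins the impression
        impressions_won_by_a[group] += 1

print(impressions_won_by_a)                      # roughly {'men': 0, 'women': ~5000}
```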
For biomedical applications in targeted therapy delivery and interventions, a large swarm of micro-scale particles (“agents”) has to be moved through a maze-like environment (“vascular system”) to a target region (“tumor”). Due to limited on-board capabilities, these agents cannot move autonomously; instead, they are controlled by an external global force that acts uniformly on all particles. In this work, we demonstrate how to use a time-varying magnetic field to gather particles to a desired location. We use reinforcement learning to train networks to efficiently gather particles. Methods to overcome the simulation-to-reality gap are explained, and the trained networks are deployed on a set of mazes and goal locations. The hardware experiments demonstrate fast convergence and robustness to both sensor and actuation noise. To encourage extensions and to serve as a benchmark for the reinforcement learning community, the code is available on GitHub.
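The authors' code is on GitHub (link not reproduced here); as a rough illustration of the setting only, the sketch below is an assumed, simplified environment in which a single global action is applied uniformly to all particles and the reward encourages gathering them at a goal. It is not the authors' implementation.

```python
# Minimal assumed environment (not the authors' code): every particle receives
# the same global 2-D action each step, and the reward is the negative mean
# distance of the swarm to the goal, which encourages gathering.
import numpy as np

class GlobalControlGather:
    def __init__(self, n_particles=50, goal=(0.0, 0.0)):
        self.goal = np.array(goal)
        self.n = n_particles
        self.reset()

    def reset(self):
        self.pos = np.random.uniform(-1.0, 1.0, size=(self.n, 2))
        return self.pos.copy()

    def step(self, action):
        # One global force (e.g., from a time-varying magnetic field) acts
        # uniformly on all particles; positions are clipped at the workspace walls.
        self.pos = np.clip(self.pos + 0.05 * np.asarray(action), -1.0, 1.0)
        reward = -float(np.mean(np.linalg.norm(self.pos - self.goal, axis=1)))
        return self.pos.copy(), reward

# A trained policy would map observed positions to the next global action; this
# naive controller just pushes toward the goal from the swarm's mean position.
env = GlobalControlGather()
obs = env.reset()
for _ in range(200):
    direction = env.goal - obs.mean(axis=0)
    obs, reward = env.step(direction / (np.linalg.norm(direction) + 1e-8))
```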