Title: Investigating Underdetermination Through Interactive Computational Handweaving
Computational handweaving combines the repeatable precision of digital fabrication with relatively high production demands on the user: a weaver must be physically engaged with the system to enact a pattern, line by line, into a fabric. Rather than treating co-presence and repetitive labor as negative aspects of design, we look to current practices in procedural generation (most commonly used in game design and screen-based new media art) to understand how designers can create room for surprise and emergent phenomena within systems of precision and constraint. We developed three designs for blending real-time input with predetermined pattern features: using camera imagery sampled at weaving time; a 1:1 scale tool for composing patterns on the loom; and a live "Twitch" stream where spectators determine the woven pattern. We discuss how the experiential qualities of these systems led to different balances of underdetermination in procedural generation, and how such an approach might help us think beyond an artifact/experience dichotomy in fabrication.
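The paper publishes no code, but the mechanism the three designs share, computing each woven row at weaving time by blending live input with a predetermined draft, is easy to sketch. The Python below is purely illustrative: the function names, the 2/2 twill base, and the probabilistic blend are our assumptions, not the authors' implementation.

```python
import random

def twill_row(row_index, width, shift=1):
    """Base 2/2 twill draft: lift two warps, drop two, offset each row."""
    return [(col + row_index * shift) % 4 < 2 for col in range(width)]

def camera_row(pixels, threshold=128):
    """Binarize one row of grayscale camera pixels (0-255): dark = lift."""
    return [p < threshold for p in pixels]

def blend_row(base, live, live_weight=0.5):
    """Per warp thread, take the live bit with probability live_weight."""
    return [l if random.random() < live_weight else b
            for b, l in zip(base, live)]

frame_row = [200, 40, 90, 255, 10, 30, 180, 60]  # stand-in camera data
row = blend_row(twill_row(0, 8), camera_row(frame_row))
print("".join("#" if lift else "." for lift in row))
```

Turning live_weight up or down is one concrete way a design can tune how underdetermined the resulting cloth is relative to the predetermined pattern.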
Award ID(s):
1718651
NSF-PAR ID:
10191351
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
Proceedings of the 2020 ACM Designing Interactive Systems Conference
Page Range / eLocation ID:
1033 to 1046
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Arbitrary-precision integer multiplication is the core kernel of many applications, including scientific computing and cryptographic algorithms. Existing accelerators for arbitrary-precision integer multiplication include CPUs, GPUs, FPGAs, and ASICs. To leverage the hardware's intrinsic low-bit function units (32/64-bit), arbitrary-precision integer multiplication can be computed using Karatsuba or schoolbook decomposition, which splits the two large operands into several small operands, generating a set of low-bit multiplications that can be processed either spatially or sequentially on the low-bit function units, e.g., CPU vector instructions, GPU CUDA cores, or FPGA digital signal processing (DSP) blocks. Among these accelerators, reconfigurable computing, e.g., FPGA accelerators, promises both good energy efficiency and flexibility. We implement the state-of-the-art (SOTA) FPGA accelerator and compare it with the SOTA libraries on CPUs and GPUs. Surprisingly, we find that the FPGA has the lowest energy efficiency, i.e., 0.29x of the CPU and 0.17x of the GPU at the same fabrication generation. Key questions therefore arise: Where do the energy efficiency gains of CPUs and GPUs come from? Can reconfigurable computing do better, and if so, how? We first identify that the biggest energy efficiency gains of CPUs and GPUs come from their dedicated vector units, i.e., vector instruction units in CPUs and CUDA cores in GPUs. The FPGA composes the needed computation from DSPs and lookup tables (LUTs), which incurs overhead compared to using vector units directly. A new kind of reconfigurable computing, e.g., "FPGA + vector units", is a feasible way to improve energy efficiency. In this paper, we propose to map arbitrary-precision integer multiplication onto such an "FPGA + vector units" platform, the AMD/Xilinx Versal ACAP architecture: a heterogeneous reconfigurable computing platform that features 400 AI engine tensor cores (AIEs) running at 1 GHz, FPGA programmable logic (PL), and a general-purpose CPU, fabricated in TSMC 7nm technology. Designing on Versal ACAP raises several challenges, so we propose AIM: Arbitrary-precision Integer Multiplication on Versal ACAP, which automates and optimizes the design. The AIM accelerator is composed of AIEs, PL, and the CPU. The AIM framework includes analytical models to guide design space exploration and automatic code generation to facilitate system design and on-board verification. We deploy the AIM framework on three applications, large integer multiplication (LIM), RSA, and Mandelbrot, on the AMD/Xilinx Versal ACAP VCK190 evaluation board. Our experimental results show that, compared to existing accelerators, AIM achieves up to 12.6x and 2.1x energy efficiency gains over the Intel Xeon Ice Lake 6346 CPU and the NVIDIA A5000 GPU respectively, making reconfigurable computing the most energy-efficient platform of the three.
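For readers unfamiliar with the decomposition the abstract refers to, here is a minimal Python sketch of Karatsuba: one full-width multiply becomes three half-width multiplies plus shifts and adds, with recursion bottoming out at a native multiplier width. The base_bits cutoff stands in for a CPU vector lane, CUDA core, or DSP block; this illustrates the math only, not the AIM mapping.

```python
def karatsuba(x, y, base_bits=64):
    """Multiply nonnegative integers via Karatsuba decomposition."""
    if x.bit_length() <= base_bits or y.bit_length() <= base_bits:
        return x * y  # stands in for a native low-bit multiply unit
    half = max(x.bit_length(), y.bit_length()) // 2
    x_hi, x_lo = x >> half, x & ((1 << half) - 1)
    y_hi, y_lo = y >> half, y & ((1 << half) - 1)
    hi = karatsuba(x_hi, y_hi, base_bits)
    lo = karatsuba(x_lo, y_lo, base_bits)
    # Three multiplies instead of four: mid recovers the cross terms.
    mid = karatsuba(x_hi + x_lo, y_hi + y_lo, base_bits) - hi - lo
    return (hi << (2 * half)) + (mid << half) + lo

a, b = 3**200, 7**150
assert karatsuba(a, b) == a * b
```

The schoolbook alternative would generate four half-width multiplies per level; Karatsuba trades one multiply for extra additions, a trade-off whose value depends on how cheap additions are on the target function units.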
  2. Typical cybersecurity solutions emphasize achieving defense functionality. However, execution efficiency and scalability are equally important, especially for real-world deployment. Straightforward mappings of cybersecurity applications onto HPC platforms may significantly underutilize the HPC devices' capacities, while sophisticated implementations are quite difficult: they require an in-depth understanding of both cybersecurity domain-specific characteristics and the HPC architecture and system model. In our work, we investigate three sub-areas of cybersecurity: mobile software security, network security, and system security. They have the following performance issues, respectively: 1) Flow- and context-sensitive static analysis of large and complex Android APKs is incredibly time-consuming; existing CPU-only frameworks/tools must set a timeout threshold that cuts the analysis short, trading precision for performance. 2) Network intrusion detection systems (NIDS) use automata processing as their search core and require line-speed processing, but achieving high-speed automata processing is exceptionally difficult in both algorithm and implementation. 3) It is unclear how cache configurations impact the performance of time-driven cache side-channel attacks; this question remains open because comparative measurements are difficult to conduct. In this dissertation, we demonstrate how application-specific characteristics can be leveraged to optimize implementations on various types of HPC for faster and more scalable cybersecurity executions. For example, we present a new GPU-assisted framework and a collection of optimization strategies for fast Android static data-flow analysis that achieve up to 128X speedups over the plain GPU implementation. For network intrusion detection systems (NIDS), we design and implement an algorithm that eliminates state explosion under out-of-order packet arrival, reducing memory overhead by up to 400X. We also present tools for improving the usability of Micron's Automata Processor. To study the impact of cache configurations on time-driven cache side-channel attacks, we design an approach to comparative measurement: we propose a quantifiable success rate metric for time-driven cache attacks and use the GEM5 platform to emulate the configurable cache.
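The abstract does not define its "quantifiable success rate metric", but a common formulation in the side-channel literature counts the fraction of attack trials in which the true secret ranks first among all candidates. The Python sketch below shows that formulation with synthetic scores; it is an assumption for illustration, not the dissertation's actual metric.

```python
import random

def rank_of_true_key(scores, true_key):
    """Position of the true key when candidates are sorted by score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(true_key)

def success_rate(trials, true_key=0x3C, n_candidates=256, boost=3.0):
    """Fraction of trials where the true key byte ranks first."""
    hits = 0
    for _ in range(trials):
        # Synthetic correlation scores: noise for every candidate,
        # plus a signal boost for the true key.
        scores = {k: random.gauss(0.0, 1.0) for k in range(n_candidates)}
        scores[true_key] += boost
        hits += rank_of_true_key(scores, true_key) == 0
    return hits / trials

print(f"success rate: {success_rate(1000):.2f}")
```

A metric of this shape is what makes attack performance comparable across emulated cache configurations: rerun the trials under each GEM5 cache setting and compare the resulting rates.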
  3.
    Reconfigurable antenna systems have gained much attention for potential use in next-generation wireless systems. However, conventional direction-of-arrival (DoA) estimation algorithms for antenna arrays cannot be applied directly to reconfigurable antennas because of their different design. In this paper, we present an adjacent pattern power ratio (APPR) algorithm for two-port composite right/left-handed (CRLH) reconfigurable leaky-wave antennas (LWAs), and we compare the performance of the APPR algorithm against LWA-based MUSIC algorithms. We study how the computational complexity and the performance of the algorithms depend on the number of selected radiation patterns. In addition, we evaluate the performance of the APPR and MUSIC algorithms with numerical simulations as well as real-world indoor measurements containing both line-of-sight and non-line-of-sight components. Our evaluations show that the DoA estimates agree well with the true DoAs, especially for the APPR algorithm. In summary, the APPR and MUSIC algorithms, together with the planar and compact LWA layout, can be a valuable solution for enhancing the performance of wireless communication in next-generation systems.
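The power-ratio idea can be illustrated compactly: the LWA cycles through radiation patterns steered to known angles, and the ratio of received powers in the strongest pair of adjacent patterns interpolates the DoA between their steering angles. The sketch below uses synthetic Gaussian beam shapes and illustrative angles; it is our reading of the principle, not the paper's algorithm.

```python
import math

def estimate_doa(powers, steer_angles):
    """powers[i] = received power with radiation pattern i active."""
    # Pick the adjacent pair of patterns with the largest combined power.
    i = max(range(len(powers) - 1),
            key=lambda k: powers[k] + powers[k + 1])
    ratio = powers[i] / (powers[i] + powers[i + 1])
    # Interpolate between the two patterns' steering angles.
    return steer_angles[i] * ratio + steer_angles[i + 1] * (1 - ratio)

# Synthetic measurement: source at 22 deg, Gaussian beams steered -30..30.
angles = [-30, -15, 0, 15, 30]
true_doa, beamwidth = 22.0, 18.0
powers = [math.exp(-((true_doa - a) / beamwidth) ** 2) for a in angles]
print(f"estimated DoA: {estimate_doa(powers, angles):.1f} deg")
```

This also shows where the complexity/performance trade-off in the abstract comes from: more selected radiation patterns means finer angular spacing between adjacent beams, but more power measurements per estimate.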
  4. Microfluidic cell sorters have shown great potential to revolutionize current techniques for enriching rare cells. In the past decades, researchers have developed different microfluidic cell sorters for separating circulating tumor cells, T-cells, and other biological markers from blood samples. However, it typically takes months or even years to design these microfluidic cell sorters by hand, so researchers tend to use computer simulation (usually finite element analysis) to verify their designs before fabrication and experimental testing. Even so, precise finite element analysis of microfluidic devices is computationally expensive and labor-intensive. To address this issue, we recently presented a microfluidic simulation method that can simulate the behavior of fluids and particles in typical microfluidic chips instantaneously. Our method decomposes the chip into channels and intersections. The behavior of fluid in each channel is determined by leveraging analogies with electronic circuits, and the behavior of fluid and particles in each intersection is determined by querying a database containing 92,934 pre-simulated channel intersections. While this approach successfully predicts the behavior of complex microfluidic chips in a fraction of the time required by existing techniques, we identified three major limitations: (1) the library of pre-simulated channel intersections is unnecessarily large (only 2,072 of the 92,934 were used); (2) the library contains only cross-shaped intersections (and no other intersection geometries); and (3) the range of fluid flow rates in the library is limited to 0 to 2 cm/s. To address these deficiencies, in this work we present an improved method for instantaneously simulating the trajectories of particles in microfluidic chips. First, inspired by dynamic programming, our new method optimizes the generation of pre-simulated intersection units and avoids generating unnecessary simulations. Second, we constructed a cloud database (http://cloud.microfluidics.cc) to share our pre-simulated results and to let users become contributors by uploading their own simulation results, to the benefit of the whole microfluidic simulation community. Finally, we investigated the impact of different channel angles and fluid flow rates on predicting particle trajectories, and found a wide range of device geometries and flow rates over which our existing simulation results can be extended without performing additional simulations. Our method should accelerate the simulation of particles in microfluidic chips and enable researchers to design new microfluidic cell sorter chips more efficiently.
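The electronic-circuit analogy the channel model leverages maps pressure drop to voltage, volumetric flow to current, and a channel's hydraulic resistance to electrical resistance, so flows in a channel network follow Ohm's and Kirchhoff's laws. A minimal sketch, assuming the standard low-aspect-ratio rectangular-channel resistance approximation and illustrative geometry (not the paper's actual solver):

```python
def hydraulic_resistance(length, width, height, viscosity=1e-3):
    """Approximate R = 12*mu*L / (w*h^3*(1 - 0.63*h/w)), valid for h < w.
    SI units: meters, Pa*s; returns Pa*s/m^3."""
    return (12 * viscosity * length
            / (width * height**3 * (1 - 0.63 * height / width)))

# A Y-junction: one inlet channel feeding two parallel outlet branches
# that share the same outlet pressure.
r_in = hydraulic_resistance(1e-2, 100e-6, 50e-6)   # 1 cm inlet
r_a = hydraulic_resistance(5e-3, 100e-6, 50e-6)    # 5 mm branch A
r_b = hydraulic_resistance(1e-2, 100e-6, 50e-6)    # 1 cm branch B

r_parallel = 1 / (1 / r_a + 1 / r_b)
delta_p = 1000.0                          # Pa, inlet-to-outlet drop
q_total = delta_p / (r_in + r_parallel)   # total flow, m^3/s
q_a = q_total * r_b / (r_a + r_b)         # current-divider split
print(f"branch A carries {q_a / q_total:.0%} of the flow")
```

The shorter branch has half the resistance and so, by the current-divider rule, carries two thirds of the flow; it is the intersections, where this lumped analogy breaks down, that the pre-simulated library handles.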
  5. We (Meltzoff et al., 2018) described how Oostenbroek et al.'s (2016) design likely dampened infant imitation. In their commentary, Oostenbroek et al. (2018) argue that our points are post hoc. It is important for readers to know that they are not. Our paper restated "best practices" described in published papers. Based on the literature, the design used by Oostenbroek et al. (2016) would be predicted to dampen infant imitation. First, Oostenbroek et al.'s (2016) test periods were too brief. The stimulus presentation for each type of gesture was too short to ensure that neonates saw the display, and the response measurement period did not allow neonates sufficient time to organize a motor response. Meltzoff and Moore (1983a, 1994) introduced experimental procedures specifically designed to address these issues (see also Simpson, Murray, Paukner, & Ferrari, 2014). Oostenbroek et al. did not capitalize on these procedural advances. Second, Oostenbroek et al. allowed uncontrolled experimenter–infant interactions during the test session itself. Previous papers on imitation analyzed how uncontrolled interactions with the experimenter can introduce "noise" in experiments of facial imitation (Meltzoff & Moore, 1983b, 1994). Third, Oostenbroek et al. used suboptimal eliciting conditions. Neonates cannot support their own heads; in Oostenbroek et al., infants' heads were allowed to flop from side to side unsupported on the experimenter's lap while the experimenter gestured with both hands. In addition, previous papers have described techniques for maximizing visual attention, such as controlled lighting and a homogeneous background (Meltzoff & Moore, 1989, 1994); Oostenbroek et al. tested infants on a couch in the home. Despite a design that would blunt imitation, our reanalysis of Oostenbroek et al.'s data showed a response pattern consistent with the imitation of tongue protrusion (TP). In their commentary, Oostenbroek et al. (2018) now propose limiting analyses to a subset of their original controls. We reanalyzed their data accordingly. Again, the results support early imitation. Their cross-sectional data (Oostenbroek et al., 2016, Table S4), collapsed across age, show significantly more infant TP in response to the TP demonstration than to the mean of the six dynamic face controls (mouth, happy, sad, mmm, ee, and click): t(104) = 4.62, p = 0.00001. The results are also significant using a narrower subset of stimuli (mouth, happy, and sad): t(104) = 3.20, p = 0.0018. These results rule out arousal, because the adult TP demonstration was significantly more effective in eliciting infant tongue protrusions than the category of dynamic face controls. Tongue protrusion matching is a robust phenomenon successfully elicited in more than two dozen studies (reviews: Meltzoff & Moore, 1997; Nagy, Pilling, Orvos, & Molnar, 2013; Simpson et al., 2014). There are more general lessons to be drawn. Psychology is experiencing what some call a "replication crisis." Those who attempt to reproduce effects have scientific responsibilities, as do original authors; both can help psychology become a more cumulative science. It is crucial for investigators to label whether or not a study is a direct replication attempt. If it is not a direct replication, procedural alterations and associated limitations should be discussed. It sows confusion to use procedures that are already predicted to dampen effects without alerting readers.
Psychology will be advanced by more stringent standards for reporting and evaluating studies aimed at reproducing published effects. Infant imitation is a fundamental skill, present prior to language, that contributes to the development of social cognition. On this, both Oostenbroek et al. and we agree.
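For readers who want the arithmetic behind a statistic like t(104) = 4.62: a paired t test divides the mean per-infant difference (response to the TP demonstration minus the mean response to the controls) by its standard error. The sketch below uses synthetic placeholder numbers, not Oostenbroek et al.'s Table S4 data.

```python
import math
import random
import statistics

random.seed(1)
n = 105  # df = 104 implies 105 infants

# Synthetic per-infant response rates (placeholders for real data):
tp_counts = [random.gauss(1.6, 1.2) for _ in range(n)]      # to TP model
control_means = [random.gauss(0.9, 1.0) for _ in range(n)]  # control mean

# Paired t statistic: mean difference over its standard error.
diffs = [t - c for t, c in zip(tp_counts, control_means)]
t_stat = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
print(f"t({n - 1}) = {t_stat:.2f}")
```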