Database and data structure research can improve machine learning performance in many ways; one way is to design better algorithms over the underlying data structures. This paper combines incremental computation with sequential and probabilistic filtering to enable “forgetful” tree-based learning algorithms to cope with streaming data that suffers from concept drift. (Concept drift occurs when the functional mapping from input to classification changes over time.) The forgetful algorithms described in this paper achieve high performance while maintaining high-quality predictions on streaming data. Specifically, the algorithms are up to 24 times faster than state-of-the-art incremental algorithms with at most a 2% loss of accuracy, or at least twice as fast with no loss of accuracy. This makes such structures suitable for high-volume streaming applications.
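The combination the abstract describes, incremental maintenance of learned state plus deliberate forgetting of stale examples, can be illustrated with a minimal sketch (an assumption-level illustration, not the paper's tree-based algorithm): class counts over a sliding window, updated in constant time per arriving and expiring example instead of being recomputed from scratch.

```python
from collections import deque, Counter

class ForgetfulClassCounts:
    """Minimal sketch: incrementally maintained class counts over a sliding
    window, so old examples are 'forgotten' as the stream drifts.
    Illustrates the general idea only, not the paper's tree algorithm."""

    def __init__(self, window_size=1000):
        self.window = deque()      # recent (features, label) pairs, oldest first
        self.counts = Counter()    # label -> count within the current window
        self.window_size = window_size

    def update(self, x, y):
        # Incremental step: adjust counts for one new example...
        self.window.append((x, y))
        self.counts[y] += 1
        # ...and forget the oldest example once the window is full,
        # rather than recomputing all counts from scratch.
        if len(self.window) > self.window_size:
            _, old_y = self.window.popleft()
            self.counts[old_y] -= 1

    def predict(self):
        # Majority class under the current (drift-adapted) window.
        return self.counts.most_common(1)[0][0] if self.counts else None

# Usage on a toy stream of (features, label) pairs:
model = ForgetfulClassCounts(window_size=3)
for example in [((1.0,), "a"), ((2.0,), "a"), ((3.0,), "b"), ((4.0,), "b"), ((5.0,), "b")]:
    model.update(*example)
print(model.predict())  # "b": the early "a" examples have been forgotten
```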
Incremental Computation: What Is the Essence? (Invited Contribution)
Incremental computation aims to compute more efficiently on changed input by reusing previously computed results. We give a high-level overview of works on incremental computation, and highlight the essence underlying all of them, which we call incrementalization—the discrete counterpart of differentiation in calculus. We review the gist of a systematic method for incrementalization, and a systematic method centered around it, called Iterate-Incrementalize-Implement, for program design and optimization, as well as algorithm design and optimization. At a meta-level, with historical contexts and for future directions, we stress the power of high-level data, control, and module abstractions in developing new and better algorithms and programs as well as their precise complexities.
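As a small illustration of what incrementalization means in this sense (a sketch of the general idea, not code from the paper), consider maintaining an aggregate under insertions and deletions: instead of recomputing on the changed input, the previously computed result is reused and adjusted by the discrete change.

```python
# Minimal sketch of incrementalization (not code from the paper): rather than
# recomputing a result from scratch after every change to the input, maintain
# it by applying the "discrete derivative" of the computation to that change.

def total_from_scratch(xs):
    # Baseline: O(n) recomputation after every update.
    return sum(xs)

class IncrementalTotal:
    # Incrementalized version: O(1) maintenance per insertion or deletion.
    def __init__(self, xs):
        self.total = sum(xs)   # computed once

    def insert(self, x):
        self.total += x        # reuse the previous result

    def delete(self, x):
        self.total -= x

xs = [3, 1, 4, 1, 5]
inc = IncrementalTotal(xs)
inc.insert(9)
inc.delete(1)
assert inc.total == total_from_scratch([3, 4, 1, 5, 9])
```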
- Award ID(s):
- 1954837
- PAR ID:
- 10511075
- Publisher / Repository:
- ACM
- Date Published:
- ISBN:
- 9798400704871
- Page Range / eLocation ID:
- 39 to 52
- Format(s):
- Medium: X
- Location:
- London UK
- Sponsoring Org:
- National Science Foundation
More Like this
-
This paper presents a novel technique to reduce the energy consumption of a machine learning classifier based on incremental-precision feature computation and classification. Specifically, the algorithm starts with features computed at the lowest possible precision. Depending on the classification accuracy, the features from the previous level are combined with incremental-precision updates to compute the features at a higher precision. This process continues until the desired accuracy is obtained. A threshold that allows many samples to be classified by a low-precision classifier reduces energy consumption but increases misclassification error. To implement hardware that provides the required updates in precision, an incremental-precision architecture based on data-path decomposition is proposed. One novel aspect of this work lies in the design of appropriate thresholds for multi-level classification using training data, such that a family of designs can be obtained that trades off classification accuracy against energy consumption. Another novel aspect involves the design of hardware architectures based on data-path decomposition that can incrementally increase precision on demand. Using a seizure detection example, it is shown that the proposed incremental-precision multi-level classification approach can reduce energy consumption by 35% while maintaining high sensitivity, or by about 50% at the expense of a 15% degradation in sensitivity, compared to similar seizure detection approaches in the literature. The reduction in energy comes at the expense of small area, timing, and memory overheads, since multiple classification steps are used instead of a single step. (A minimal sketch of this refine-precision-on-demand idea appears after this list.)
-
Electromigration (EM) is a major failure effect for on-chip power grid networks of deep submicron VLSI circuits. EM degradation of metal grid lines can lead to excessive voltage drops (IR drops) before the target lifetime. In this paper, we propose a fast data-driven EM-induced IR drop analysis framework for power grid networks, named GridNet, based on conditional generative adversarial networks (CGANs). It aims to accelerate incremental full-chip EM-induced IR drop analysis, as well as IR drop violation fixing during power grid design and optimization. More importantly, GridNet can naturally leverage the differentiable nature of deep neural networks (DNNs) to obtain the sensitivity of node voltage with respect to wire resistance (or width) at marginal cost. GridNet treats continuous time and the given electrical features as input conditions, and the EM-induced time-varying voltages of the power grid network as the conditional outputs, which are represented as data series images. We show that GridNet is able to learn the temporal dynamics of the aging process in the continuous time domain. Moreover, we can take advantage of the sensitivity information provided by GridNet to perform efficient localized IR drop violation fixing in late-stage design and optimization. Numerical results on 36,000 synthesized power grid network samples demonstrate that the new method leads to a 10^5× speedup over a recently proposed full-chip coupled EM and IR drop analysis tool. We further show that localized IR drop violation fixing for the same set of power grid networks can be performed remarkably efficiently using the cheap sensitivity computation from GridNet.
-
A Halbach array is a specialized arrangement of permanent magnets designed to generate a strong, uniform magnetic field in the designated region. This unique configuration has been widely utilized in various applications, including magnetic levitation (maglev) systems, electric motors, particle accelerators, and magnetic seals. The advantages of Halbach arrays include high efficiency, reduced weight, and precise directional control of the magnetic field. Halbach arrays are commonly categorized into two configurations: linear and cylindrical. A linear Halbach array produces a concentrated magnetic field on one face and is frequently employed in maglev trains and conveyor systems to ensure stable and efficient operation. In contrast, a cylindrical Halbach array consists of magnets arranged in a ring, generating a uniform magnetic field within the cylinder while suppressing the external field. This configuration is particularly advantageous in applications such as brushless electric motors and magnetic resonance imaging (MRI) systems. Traditionally, the design of electromagnetic systems incorporating Halbach arrays relied on engineers’ expertise and intuition due to the complexity of the permanent magnet configuration. However, advancements in numerical methods, particularly topology optimization, have introduced a systematic approach to optimizing the shape and distribution of permanent magnets within a given design domain. In the context of Halbach array design, topology optimization aims to maximize the total magnetic flux within a designated region while simultaneously determining the optimal material distribution to achieve a specified design objective. This approach enhances the performance and efficiency of Halbach arrays, providing a more precise and automated framework for their development. In this paper, we propose a Cardinal Basis Function (CBF)-based level-set method for designing a circular Halbach array capable of generating a uniform magnetic field within a designated region. The CBF-based level-set method offers significant computational advantages by reducing the computational cost and accelerating the convergence process. This approach enhances the efficiency of the optimization process, making it a promising technique for the systematic design of Halbach arrays.
-
Formal computational approaches in the realm of engineering and architecture, such as parametric modelling and optimization, are increasingly powerful, allowing for systematic and rigorous design processes. However, these methods often bring a steep learning curve, require previous expertise, or are unintuitive and unnatural to human design. On the other hand, analog design methods such as hand sketching are commonly used by architects and engineers alike, and constitute quick, easy, and almost primal modes of generating and transferring design concepts, which in turn facilitates the sharing of ideas and feedback. With the advent of increasing computational power and developments in data analysis, deep learning, and other emerging technologies, there is potential to bridge the gap between these seemingly divergent processes and develop new hybrid approaches to design. Such methods can provide designers with new opportunities to harness the systematic and data-driven power of computation and performance analysis while maintaining a more creative and intuitive design interface. This paper presents a new method for interpreting human designs in sketch format and predicting their structural performance using recent advances in deep learning. The paper also demonstrates how this new technique can be used in design workflows, including performance-based guidance and interpolations between concepts.
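Returning to the incremental-precision classification abstract above, the following minimal sketch illustrates its refine-precision-on-demand control flow. The precision levels, confidence margins, and the toy classifier used here are illustrative assumptions, not the hardware architecture described in that abstract.

```python
# Sketch of incremental-precision classification: classify with low-precision
# features first, and refine precision only for samples whose decision is not
# confident enough. Most samples stop at low precision, which is where the
# energy savings would come from in hardware.

def quantize(x, bits):
    # Quantize a feature in [0, 1) to the given number of bits.
    levels = 1 << bits
    return min(int(x * levels), levels - 1) / levels

def classify_incremental_precision(x, precisions=(2, 4, 8), margins=(0.20, 0.10, 0.0),
                                   decision_threshold=0.5):
    """Return (label, bits_used); all numeric settings here are illustrative."""
    for bits, margin in zip(precisions, margins):
        score = quantize(x, bits)  # stand-in for a feature/classifier score
        if abs(score - decision_threshold) >= margin:
            return (score >= decision_threshold), bits   # confident: stop early
    # Fall through: use the highest-precision decision.
    return (score >= decision_threshold), precisions[-1]

print(classify_incremental_precision(0.91))   # confident at 2 bits
print(classify_incremental_precision(0.52))   # escalates to higher precision
```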