Probit regression was first proposed by Bliss in 1934 to study mortality rates of insects. Since then, an extensive body of work has analyzed and used probit or related binary regression methods (such as logistic regression) in numerous applications and fields. This paper provides a fresh angle on such well-established binary regression methods. Concretely, we demonstrate that linearizing the probit model in combination with linear estimators performs on par with state-of-the-art nonlinear regression methods, such as posterior mean or maximum a posteriori estimation, for a broad range of real-world regression problems. We derive exact, closed-form, and nonasymptotic expressions for the mean-squared error of our linearized estimators, which sets them apart from nonlinear regression methods that are typically difficult to analyze. We showcase the efficacy of our methods and results for a number of synthetic and real-world datasets, which demonstrates that linearized binary regression finds potential use in a variety of inference, estimation, signal processing, and machine learning applications that deal with binary-valued observations or measurements.
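To make the idea concrete, here is a minimal, illustrative sketch of the kind of setup the abstract describes: binary observations generated as y = sign(Ax + n) with Gaussian noise, recovered with a simple linear estimator rather than a nonlinear posterior-mean or MAP solver. The problem sizes, the pseudo-inverse estimator, and the cosine-similarity check are illustrative assumptions, not the paper's exact estimators or its closed-form MSE expressions.

```python
import numpy as np

# Assumed setup (illustrative, not the paper's exact estimators):
# binary observations y = sign(A x + n) with Gaussian noise n, recovered by a
# simple linear estimator instead of a nonlinear (posterior mean / MAP) solver.
rng = np.random.default_rng(0)
n_obs, dim, noise_std = 500, 20, 0.3

A = rng.standard_normal((n_obs, dim))
x_true = rng.standard_normal(dim)
y = np.sign(A @ x_true + noise_std * rng.standard_normal(n_obs))

# Illustrative linearized estimate: apply the pseudo-inverse directly to the
# binary measurements and compare directions (the signal amplitude is only
# weakly identifiable from sign measurements, so we look at the angle to x_true).
x_hat = np.linalg.pinv(A) @ y
cosine = (x_hat @ x_true) / (np.linalg.norm(x_hat) * np.linalg.norm(x_true))
print(f"cosine similarity between estimate and truth: {cosine:.3f}")
```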
Long Short-Term Memory with Spin-Based Binary and Non-Binary Neurons
- Award ID(s): 1739635
- PAR ID: 10298978
- Date Published:
- Journal Name: IEEE International Midwest Symposium on Circuits and Systems (MWSCAS)
- Page Range / eLocation ID: 317 to 320
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Binary Chirps (BCs) are 2^m-dimensional complex vectors employed in deterministic compressed sensing and in random/unsourced multiple access in wireless networks. The vectors are obtained by exponentiating codewords from a second-order Reed-Muller code defined over Z4, the ring of integers modulo 4. We double the size of the BC codebook without compromising performance in wireless multiple access.
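As a rough illustration of the construction described above, the sketch below exponentiates a second-order Reed-Muller codeword over Z4 to build one binary chirp. The parameterization (a binary symmetric matrix S and binary vector b) follows a common textbook formulation and is an assumption, not necessarily the exact codebook construction used in this work.

```python
import itertools
import numpy as np

def binary_chirp(S, b):
    """Return a 2^m-dimensional binary chirp (sketch of one common construction):
    exponentiate the second-order Reed-Muller codeword over Z4,
    c(a) = a^T S a + 2 b^T a (mod 4), where S is a binary symmetric m x m
    matrix and b is a binary length-m vector."""
    m = len(b)
    chirp = np.empty(2**m, dtype=complex)
    for idx, a in enumerate(itertools.product([0, 1], repeat=m)):
        a = np.array(a)
        phase = (a @ S @ a + 2 * (b @ a)) % 4   # Z4 codeword symbol
        chirp[idx] = 1j ** phase                 # exponentiation step
    return chirp / np.sqrt(2**m)                 # unit-norm vector

# usage: m = 3, a random binary symmetric S and binary b
rng = np.random.default_rng(0)
T = rng.integers(0, 2, (3, 3))
S = np.triu(T) + np.triu(T, 1).T   # symmetric binary matrix
b = rng.integers(0, 2, 3)
w = binary_chirp(S, b)              # length-8 complex chirp vector
```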
We present the analysis of the microlensing event KMT-2022-BLG-0086, whose overall light curve is not described by a binary-lens single-source (2L1S) model, suggesting the existence of an extra lens or an extra source. We find that the event is best explained by a binary-lens binary-source (2L2S) model, although the 2L2S model is favored over the triple-lens single-source (3L1S) model by only Δχ² ≃ 9. Although the event has noticeable anomalies around the peak of the light curve, they are not sufficiently covered to constrain the angular Einstein radius θE, so we measure only a minimum angular Einstein radius. From the Bayesian analysis, it is found that the binary lens system is a binary star, while the triple lens system is a brown dwarf or a massive giant planet in a low-mass binary-star system, indicating a disk lens system. The 2L2S model yields a relative lens-source proper motion of μrel ≥ 4.6 mas yr⁻¹, which is consistent with the Bayesian result, whereas the 3L1S model yields μrel ≥ 18.9 mas yr⁻¹, more than three times larger than the ∼6 mas yr⁻¹ of a typical disk object and thus inconsistent with the Bayesian result. This suggests that the event is most likely explained by the binary-lens binary-source model.
Emerging edge devices such as sensor nodes are increasingly tasked with non-trivial sensor-data processing and even application-level inference on that data. These devices are, however, extraordinarily resource-constrained in terms of CPU power (often Cortex M0-3 class CPUs), available memory (a few KB to a few MB), and energy. Under these constraints, we explore a novel approach to character recognition using local binary pattern networks, or LBPNet, that can learn and perform bit-wise operations in an end-to-end fashion. LBPNet is well suited to characters whose features are composed of structured strokes and distinctive outlines. LBPNet uses local binary comparisons and random projections in place of conventional convolution (or approximation-of-convolution) operations, providing an important means of improving memory efficiency as well as inference speed. We evaluate LBPNet on a number of character recognition benchmark datasets as well as several object classification datasets and demonstrate its effectiveness and efficiency.
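As a loose illustration of the operation described above (local binary comparisons followed by a fixed random projection in place of a learned convolution), here is a small NumPy sketch. The layer shape, neighbor offsets, and projection matrix are hypothetical choices for illustration, not the authors' LBPNet architecture or training procedure.

```python
import numpy as np

def lbp_layer(image, offsets, proj, threshold=0.0):
    """Sketch of a local-binary-pattern-style layer (hypothetical, not the
    authors' LBPNet): each bit map records whether a shifted neighbor exceeds
    the center pixel, and a fixed random projection mixes the bit maps into
    output channels instead of a learned convolution."""
    H, W = image.shape
    pad = max(max(abs(dy), abs(dx)) for dy, dx in offsets)
    padded = np.pad(image, pad)
    bits = []
    for dy, dx in offsets:
        neighbor = padded[pad + dy : pad + dy + H, pad + dx : pad + dx + W]
        bits.append((neighbor > image + threshold).astype(np.float32))
    bits = np.stack(bits, axis=-1)   # H x W x len(offsets) binary maps
    return bits @ proj               # H x W x out_channels features

# usage: 8 neighbor comparisons projected to 4 output channels
rng = np.random.default_rng(0)
offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
proj = rng.standard_normal((len(offsets), 4)).astype(np.float32)
img = rng.random((28, 28)).astype(np.float32)
features = lbp_layer(img, offsets, proj)   # shape (28, 28, 4)
```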

