We expand on recent exciting work of Debris-Alazard, Ducas, and van Woerden [Transactions on Information Theory, 2022], which introduced the notion of basis reduction for codes, in analogy with the extremely successful paradigm of basis reduction for lattices. We generalize DDvW's LLL algorithm and size-reduction algorithm from codes over F_2 to codes over F_q, and we further develop the theory of proper bases. We then show how to instantiate for codes the BKZ and slide-reduction algorithms, which are the two most important generalizations of the LLL algorithm for lattices. Perhaps most importantly, we show a new and very efficient basis-reduction algorithm for codes, called full backward reduction. This algorithm is quite specific to codes and seems to have no analogue in the lattice setting. We prove that this algorithm finds vectors as short as LLL does in the worst case (i.e., within the Griesmer bound) and does so in less time. We also provide both heuristic and empirical evidence that it outperforms LLL in practice, and we give a variant of the algorithm that provably outperforms LLL (in some sense) for random codes. Finally, we explore the promise and limitations of basis reduction for codes. In particular, we show upper and lower bounds on how "good" of a basis a code can have, and we show two additional illustrative algorithms that demonstrate some of the promise and the limitations of basis reduction for codes.
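For reference, the Griesmer bound invoked above states that every [n, k, d]_q linear code satisfies n >= sum_{i=0}^{k-1} ceil(d / q^i). The following is a minimal Python sketch of that bound, offered purely as background; nothing here is code from the paper, and the example parameters are arbitrary.

    import math

    def griesmer_bound(k, d, q=2):
        """Smallest length n allowed by the Griesmer bound for an [n, k, d]_q
        linear code: n >= sum_{i=0}^{k-1} ceil(d / q**i)."""
        return sum(math.ceil(d / q**i) for i in range(k))

    # Example (arbitrary parameters): a binary code with dimension k = 4 and
    # minimum distance d = 8 must have length at least 8 + 4 + 2 + 1 = 15.
    print(griesmer_bound(4, 8, 2))  # 15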
Bending Seams - How to Create Couture Curves
Adapting ideas from [1], we design pattern pieces for a scarf that exhibits negative curvature. We also discuss the methods and materials used for constructing versions in both wool and polyester.
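As background (this is a standard discrete-geometry fact, not a description of the paper's specific patterns), the "negative curvature" of a sewn surface can be made precise: when flat pieces are joined and meet at an interior point p with total fabric angle theta_p, the Gaussian curvature concentrated at p is the angle defect

    K_p = 2\pi - \theta_p

A dart (theta_p < 2\pi) produces a positively curved, cone-like point, whereas easing extra fabric into a seam (theta_p > 2\pi) makes K_p negative, giving the saddle-like ruffling that a negatively curved scarf exhibits.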
- Award ID(s): 1852519
- PAR ID: 10347623
- Date Published:
- Journal Name: Proceedings of Bridges 2021: Mathematics, Art, Music, Architecture, Culture
- Page Range / eLocation ID: 249–252
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- We study pure-strategy Nash equilibrium (PSNE) computation in k-dimensional congestion games (k-DCGs), where the weights or demands of the players are k-dimensional vectors. We first show that deciding the existence of a PSNE in a k-DCG is NP-complete even when players have binary and unit demand vectors. We then focus on computing PSNE for k-DCGs and their variants with general, linear, and exponential cost functions. For general (potentially non-monotonic) cost functions, we provide the first configuration-space framework to find a PSNE if one exists. For linear and exponential cost functions, we provide potential-function-based algorithms to find a PSNE. These algorithms run in polynomial time under certain assumptions. We also study structured demands and cost functions, giving polynomial-time algorithms to compute PSNE for several cases. For general cost functions, we give a constructive proof of existence for an (α, β)-PSNE (for certain α and β), where α and β are multiplicative and additive approximation factors, respectively.
- The complete reason behind a decision is a Boolean formula that characterizes why the decision was made. This recently introduced notion has a number of applications, which include generating explanations, detecting decision bias and evaluating counterfactual queries. Prime implicants of the complete reason are known as sufficient reasons for the decision and they correspond to what is known as PI explanations and abductive explanations. In this paper, we refer to the prime implicates of a complete reason as necessary reasons for the decision. We justify this terminology semantically and show that necessary reasons correspond to what is known as contrastive explanations. We also study the computation of complete reasons for multi-class decision trees and graphs with nominal and numeric features for which we derive efficient, closed-form complete reasons. We further investigate the computation of shortest necessary and sufficient reasons for a broad class of complete reasons, which include the derived closed forms and the complete reasons for Sentential Decision Diagrams (SDDs). We provide an algorithm which can enumerate their shortest necessary reasons in output polynomial time. Enumerating shortest sufficient reasons for this class of complete reasons is hard even for a single reason. For this problem, we provide an algorithm that appears to be quite efficient as we show empirically.
- Iwata, Satoru; Kakimura, Naonori (Ed.) We give a new rapid mixing result for a natural random walk on the independent sets of a graph G. We show that when G has bounded treewidth, this random walk - known as the Glauber dynamics for the hardcore model (a short sketch of this chain appears after this list) - mixes rapidly for all fixed values of the standard parameter λ > 0, giving a simple alternative to existing sampling algorithms for these structures. We also show rapid mixing for analogous Markov chains on dominating sets, b-edge covers, b-matchings, maximal independent sets, and maximal b-matchings. (For b-matchings, maximal independent sets, and maximal b-matchings we also require bounded degree.) Our results imply simpler alternatives to known algorithms for the sampling and approximate counting problems in these graphs. We prove our results by applying a divide-and-conquer framework we developed in a previous paper, as an alternative to the projection-restriction technique introduced by Jerrum, Son, Tetali, and Vigoda. We extend this prior framework to handle chains for which the application of that framework is not straightforward, strengthening existing results by Dyer, Goldberg, and Jerrum and by Heinrich for the Glauber dynamics on q-colorings of graphs of bounded treewidth and bounded degree.
- Regularization plays a key role in improving the prediction of emotions using attributes such as arousal, valence and dominance. Regularization is particularly important with deep neural networks (DNNs), which have millions of parameters. While previous studies have reported competitive performance for arousal and dominance, the prediction results for valence using acoustic features are significantly lower. We hypothesize that higher regularization can lead to better results for valence. This study focuses on exploring the role of dropout as a form of regularization for valence, suggesting the need for higher regularization. We analyze the performance of regression models for valence, arousal and dominance as a function of the dropout probability. We observe that the optimum dropout rates are consistent for arousal and dominance. However, the optimum dropout rate for valence is higher. To understand the need for higher regularization for valence, we perform an empirical analysis to explore the nature of emotional cues conveyed in speech. We compare regression models with speaker-dependent and speaker-independent partitions for training and testing. The experimental evaluation suggests stronger speaker-dependent traits for valence. We conclude that higher regularization is needed for valence to force the network to learn global patterns that generalize across speakers.
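The random walk named in the third related item above (the Glauber dynamics for the hardcore model) is a standard single-site heat-bath chain. The following minimal Python sketch shows only the textbook update rule on an arbitrary toy graph; the graph and the fugacity value are illustrative assumptions, and nothing here is taken from the paper itself.

    import random

    def glauber_step(adj, occupied, lam):
        """One heat-bath update for the hardcore model at fugacity lam > 0:
        resample a uniformly random vertex conditioned on its neighbors."""
        v = random.choice(list(adj))
        occupied.discard(v)
        # v may be occupied only if none of its neighbors is occupied.
        if not any(u in occupied for u in adj[v]):
            if random.random() < lam / (1.0 + lam):
                occupied.add(v)
        return occupied

    # Toy demo on a path graph 0-1-2-3; after many steps the occupied set is
    # an approximate sample from the hardcore distribution at lam = 1.
    adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    S = set()
    for _ in range(10000):
        S = glauber_step(adj, S, lam=1.0)
    print(S)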
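For the last related item above, on valence prediction, the following PyTorch-style sketch shows the kind of setup in which the dropout probability is the single knob being swept. The layer sizes, the 88-dimensional acoustic feature vector, and the grid of rates are assumptions for illustration only, not the authors' configuration.

    import torch
    import torch.nn as nn

    class ValenceRegressor(nn.Module):
        """Toy acoustic-feature regressor; p_drop is the dropout probability
        being swept (all sizes here are illustrative assumptions)."""
        def __init__(self, n_features=88, p_drop=0.5):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, 256), nn.ReLU(), nn.Dropout(p_drop),
                nn.Linear(256, 256), nn.ReLU(), nn.Dropout(p_drop),
                nn.Linear(256, 1),  # predicted valence (or arousal/dominance)
            )

        def forward(self, x):
            return self.net(x)

    # Sweep the dropout rate; the study's hypothesis is that the best rate
    # for valence sits higher than the best rate for arousal or dominance.
    for p in (0.3, 0.5, 0.7):
        model = ValenceRegressor(p_drop=p)
        out = model(torch.randn(4, 88))  # dummy batch of 4 feature vectors
        print(p, out.shape)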