We expand on recent exciting work of Debris-Alazard, Ducas, and van Woerden [Transactions on Information Theory, 2022], which introduced the notion of basis reduction for codes, in analogy with the extremely successful paradigm of basis reduction for lattices. We generalize DDvW's LLL algorithm and size-reduction algorithm from codes over F_2 to codes over F_q, and we further develop the theory of proper bases. We then show how to instantiate for codes the BKZ and slide-reduction algorithms, which are the two most important generalizations of the LLL algorithm for lattices. Perhaps most importantly, we show a new and very efficient basis-reduction algorithm for codes, called full backward reduction. This algorithm is quite specific to codes and seems to have no analogue in the lattice setting. We prove that this algorithm finds vectors as short as LLL does in the worst case (i.e., within the Griesmer bound) and does so in less time. We also provide both heuristic and empirical evidence that it outperforms LLL in practice, and we give a variant of the algorithm that provably outperforms LLL (in some sense) for random codes. Finally, we explore the promise and limitations of basis reduction for codes. In particular, we show upper and lower bounds on how "good" of a basis a code can have, and we present two additional algorithms that illustrate both the promise and the limitations of this approach.
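As a point of reference for the Griesmer bound invoked above: an [n, k, d]_q code must satisfy n ≥ Σ_{i=0}^{k-1} ⌈d/qⁱ⌉. A minimal sketch of that arithmetic (the function name is ours, not from the paper):

```python
from math import ceil

def griesmer_length(k: int, d: int, q: int = 2) -> int:
    """Smallest length n permitted by the Griesmer bound for an [n, k, d]_q code:
    n >= sum_{i=0}^{k-1} ceil(d / q**i)."""
    return sum(ceil(d / q**i) for i in range(k))

# Example: the binary [7, 4, 3] Hamming code meets the bound with equality.
print(griesmer_length(4, 3))  # 3 + 2 + 1 + 1 = 7
```

A code whose shortest basis vector sits within this bound is, in this sense, as good as what LLL-style reduction guarantees in the worst case.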
Optimized SVM constellations for SDM fibers
Stokes vector modulation (SVM) can be used together with multimode and multicore fibers. We study various algorithms for optimal geometric constellation shaping and bit-to-symbol mapping in the generalized Stokes space. We evaluate the performance of these constellations for optically preamplified direct-detection receivers.
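As a toy illustration of geometric constellation shaping on a sphere (this is not the paper's algorithm; the function name and the repulsion heuristic are ours), spreading points by pairwise repulsion tends to maximize the minimum inter-symbol distance, which for dim=3 mimics shaping on the Poincaré sphere:

```python
import numpy as np

def shape_constellation(n_points, dim=3, iters=3000, step=0.02, seed=1):
    """Toy geometric shaping: spread n_points on the unit sphere in `dim`
    dimensions by inverse-square pairwise repulsion, projecting back to the
    sphere after every step."""
    rng = np.random.default_rng(seed)
    pts = rng.normal(size=(n_points, dim))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    for _ in range(iters):
        diff = pts[:, None, :] - pts[None, :, :]            # pairwise differences
        dist = np.linalg.norm(diff, axis=-1) + np.eye(n_points)
        force = (diff / dist[..., None]**3).sum(axis=1)     # repulsion ~ 1/d^2
        pts += step * force
        pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # back onto the sphere
    return pts

pts = shape_constellation(4)  # 4 points settle near a regular tetrahedron
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(d[d > 0].min())  # approaches sqrt(8/3) ~ 1.633, the tetrahedron edge
```

Real constellation optimization would target a receiver-specific metric (e.g. error rate under optically preamplified direct detection) rather than raw Euclidean separation; this sketch only conveys the geometric-shaping idea.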
- Award ID(s): 1911183
- PAR ID: 10297207
- Date Published:
- Journal Name: IEEE Photonics Conference
- ISSN: 2374-0140
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
We study truthful mechanisms for approximating the Maximin-Share (MMS) allocation of agents with additive valuations for indivisible goods. Algorithmically, constant factor approximations exist for the problem for any number of agents. When adding incentives to the mix, a jarring result by Amanatidis, Birmpas, Christodoulou, and Markakis [EC 2017] shows that the best possible approximation for two agents and m items is ⌊m/2⌋. We adopt a learning-augmented framework to investigate what is possible when some prediction on the input is given. For two agents, we give a truthful mechanism that takes agents' ordering over items as prediction. When the prediction is accurate, we give a 2-approximation to the MMS (consistency), and when the prediction is off, we still get an ⌈m/2⌉-approximation to the MMS (robustness). We further show that the mechanism's performance degrades gracefully with the number of "mistakes" in the prediction; i.e., we interpolate (up to constant factors) between the two extremes: when there are no mistakes, and when there is a maximum number of mistakes. We also show an impossibility result on the obtainable consistency for mechanisms with finite robustness. For the general case of n ≥ 2 agents, we give a 2-approximation mechanism for accurate predictions, with relaxed fallback guarantees. Finally, we give experimental results which illustrate when different components of our framework, made to ensure consistency and robustness, come into play.
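For concreteness, an agent's maximin share is the best worst-bundle value achievable over all partitions of the goods into n bundles; for n = 2 it can be computed by brute force over subsets. A small illustrative sketch of that definition (not the paper's mechanism; the function name is ours):

```python
from itertools import combinations

def mms_two_agents(values):
    """Maximin share of an agent with additive values over indivisible goods,
    for n = 2 agents: the best min-bundle value over all 2-partitions,
    found by exhaustive search over subsets (exponential time)."""
    m = len(values)
    total = sum(values)
    best = 0
    for r in range(m + 1):
        for bundle in combinations(range(m), r):
            s = sum(values[i] for i in bundle)
            best = max(best, min(s, total - s))
    return best

print(mms_two_agents([4, 3, 2, 1]))  # {4,1} vs {3,2}: both worth 5 -> MMS = 5
```

A c-approximation mechanism guarantees each agent a bundle worth at least MMS/c; the mechanism above is only the benchmark being approximated, not an allocation rule.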
-
The complete reason behind a decision is a Boolean formula that characterizes why the decision was made. This recently introduced notion has a number of applications, which include generating explanations, detecting decision bias and evaluating counterfactual queries. Prime implicants of the complete reason are known as sufficient reasons for the decision and they correspond to what is known as PI explanations and abductive explanations. In this paper, we refer to the prime implicates of a complete reason as necessary reasons for the decision. We justify this terminology semantically and show that necessary reasons correspond to what is known as contrastive explanations. We also study the computation of complete reasons for multi-class decision trees and graphs with nominal and numeric features for which we derive efficient, closed-form complete reasons. We further investigate the computation of shortest necessary and sufficient reasons for a broad class of complete reasons, which include the derived closed forms and the complete reasons for Sentential Decision Diagrams (SDDs). We provide an algorithm which can enumerate their shortest necessary reasons in output polynomial time. Enumerating shortest sufficient reasons for this class of complete reasons is hard even for a single reason. For this problem, we provide an algorithm that appears to be quite efficient as we show empirically.
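For tiny examples, sufficient reasons (PI explanations) can be enumerated by brute force from their standard definition: minimal subsets of the instance's feature settings that force the decision regardless of the remaining features. A purely illustrative, exponential-time sketch (not the paper's algorithms; names are ours):

```python
from itertools import combinations, product

def sufficient_reasons(f, instance):
    """Enumerate the minimal subsets S of the instance's variable settings such
    that fixing S forces f to its decision on `instance`, no matter how the
    remaining variables are set. Brute force, shortest subsets first."""
    n = len(instance)
    decision = f(instance)
    reasons = []
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            # keep only minimal subsets: skip supersets of a found reason
            if any(set(s) <= set(subset) for s in reasons):
                continue
            fixed = {i: instance[i] for i in subset}
            if all(f([fixed.get(i, b) for i, b in enumerate(bits)]) == decision
                   for bits in product([0, 1], repeat=n)):
                reasons.append(subset)
    return reasons

# Example: f = x0 OR (x1 AND x2), instance (1, 1, 0) decided positively.
f = lambda x: x[0] or (x[1] and x[2])
print(sufficient_reasons(f, (1, 1, 0)))  # [(0,)] : x0 = 1 alone suffices
```

The hardness result quoted above explains why such enumeration cannot scale: even producing a single shortest sufficient reason is hard for the class of complete reasons the paper considers.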
-
Regularization plays a key role in improving the prediction of emotions using attributes such as arousal, valence and dominance. Regularization is particularly important with deep neural networks (DNNs), which have millions of parameters. While previous studies have reported competitive performance for arousal and dominance, the prediction results for valence using acoustic features are significantly lower. We hypothesize that higher regularization can lead to better results for valence. This study focuses on exploring the role of dropout as a form of regularization for valence, suggesting the need for higher regularization. We analyze the performance of regression models for valence, arousal and dominance as a function of the dropout probability. We observe that the optimum dropout rates are consistent for arousal and dominance. However, the optimum dropout rate for valence is higher. To understand the need for higher regularization for valence, we perform an empirical analysis to explore the nature of emotional cues conveyed in speech. We compare regression models with speaker-dependent and speaker-independent partitions for training and testing. The experimental evaluation suggests stronger speaker-dependent traits for valence. We conclude that higher regularization is needed for valence to force the network to learn global patterns that generalize across speakers.
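The dropout mechanism whose rate the study varies can be sketched in a few lines. This is standard inverted dropout applied to a bare activation vector, not the paper's DNN (the function name and shapes are ours):

```python
import numpy as np

def dropout(x, p, training=True, rng=None):
    """Inverted dropout: during training, zero each unit with probability p and
    rescale survivors by 1/(1-p) so the expected activation is unchanged.
    At inference (training=False) the layer is the identity."""
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= p       # keep each unit with prob. 1 - p
    return x * mask / (1.0 - p)

x = np.ones(10000)
out = dropout(x, p=0.5)
print(out.mean())  # ~1.0 in expectation, despite half the units being zeroed
```

Raising p strengthens the regularization, which is exactly the knob the study tunes separately for valence versus arousal and dominance.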
-
Abstract: In our earlier work [1], we introduced a lattice Hamiltonian for Adjoint QCD2 using staggered Majorana fermions. We found the gauge invariant space of states explicitly for the gauge group SU(2) and used them for numerical calculations of observables, such as the spectrum and the expectation value of the fermion bilinear. In this paper, we carry out a more in-depth study of our lattice model, extending it to any compact and simply-connected gauge group G. We show how to find the gauge invariant space of states and use it to study various observables. We also use the lattice model to calculate the mixed 't Hooft anomalies of Adjoint QCD2 for arbitrary G. We show that the matrix elements of the lattice Hamiltonian can be expressed in terms of the Wigner 6j-symbols of G. For G = SU(3), we perform exact diagonalization for lattices of up to six sites and study the low-lying spectrum, the fermion bilinear condensate, and the string tension. We also show how to write the lattice strong coupling expansion for ground state energies and operator expectation values in terms of the Wigner 6j-symbols. For SU(3) we carry this out explicitly and find good agreement with the exact diagonalizations, and for SU(4) we give expansions that can be compared with future numerical studies.