Title: Optimizing Minimum Redundancy Arrays for Robustness
Sparse arrays have received considerable attention due to their capability of resolving O(N^2) uncorrelated sources with N physical sensors, unlike the uniform linear array (ULA), which identifies at most N − 1 sources. This is because sparse arrays have an O(N^2)-long ULA segment in the difference coarray, defined as the set of differences between sensor locations. Among existing array configurations, minimum redundancy arrays (MRA) have the largest hole-free ULA segment in the difference coarray. In practice, however, the ULA is robust, in the sense that its coarray is invariant to sensor failure, while the MRA is not. This paper proposes a novel array geometry, named the robust MRA (RMRA), that maximizes the size of the hole-free difference coarray subject to the same level of robustness as the ULA. The RMRA can be found by solving an integer program, which is computationally expensive. Even so, it will be shown that the RMRA still owns O(N^2) elements in the hole-free difference coarray. In particular, for sufficiently large N, the aperture of the RMRA, which is approximately half the size of the difference coarray, is bounded between 0.0625N^2 and 0.2174N^2.
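As an illustrative sketch (not from the paper), the difference coarray and its central hole-free ULA segment can be computed directly from sensor positions. The N = 4 MRA {0, 1, 4, 6} used below is a standard textbook example; the helper names are ours.

```python
# Sketch: computing the difference coarray of a sparse array and the
# size of its central hole-free ULA segment. The N = 4 minimum
# redundancy array {0, 1, 4, 6} is a standard example; helper names
# are illustrative, not from the paper.

def difference_coarray(sensors):
    """Set of all pairwise differences between sensor locations."""
    return {a - b for a in sensors for b in sensors}

def central_ula_segment(diffs):
    """Length of the largest contiguous segment of the coarray
    centered at 0, i.e., 2m + 1 where 0, 1, ..., m are all present."""
    m = 0
    while m + 1 in diffs:
        m += 1
    return 2 * m + 1

mra = {0, 1, 4, 6}   # 4 sensors, aperture 6, hole-free coarray -6..6
ula = {0, 1, 2, 3}   # 4 sensors, aperture 3, coarray -3..3

print(central_ula_segment(difference_coarray(mra)))  # 13
print(central_ula_segment(difference_coarray(ula)))  # 7

# Fragility of the MRA: losing the sensor at position 1 removes the
# difference 1 from the coarray, collapsing the central segment.
print(central_ula_segment(difference_coarray(mra - {1})))  # 1
```

The last line illustrates the robustness gap the paper addresses: one sensor failure destroys the MRA's hole-free coarray, while an interior ULA failure leaves its coarray intact.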
Award ID(s):
1712633
PAR ID:
10094556
Author(s) / Creator(s):
Date Published:
Journal Name:
Asilomar Conf. Sig., Sys., and Computers
Page Range / eLocation ID:
79 to 83
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In array processing, minimum redundancy arrays (MRA) can identify up to O(N^2) uncorrelated sources (the O(N^2) property) with N physical sensors, but this property is susceptible to sensor failures. On the other hand, uniform linear arrays (ULA) are robust, but they resolve only O(N) sources. Recently, the robust MRA (RMRA) was shown to possess the O(N^2) property and to be as robust as the ULA, but finding the RMRA is computationally difficult for large N. This paper proposes a novel array geometry called the composite Singer array, which is related to a classic 1938 paper by Singer and to other results in number theory. For large N, composite Singer arrays can possess the O(N^2) property and are as robust as the ULA. Furthermore, the sensor locations of the composite Singer array can be readily computed by the proposed recursive procedure. These properties are also demonstrated using numerical examples.
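One common way to formalize robustness to sensor failure, sketched below as a toy check (this is illustrative, not the paper's method), is to count "essential" sensors: those whose deletion changes the difference coarray. A ULA with at least four sensors has only its two endpoints essential, whereas every sensor of an MRA is typically essential.

```python
# Sketch of one way to quantify robustness to sensor failure: a
# sensor is "essential" if deleting it changes the difference
# coarray. A ULA with >= 4 sensors has only its two endpoints
# essential; every sensor of an MRA is typically essential.
# This toy check is illustrative, not the paper's method.

def coarray(sensors):
    return {a - b for a in sensors for b in sensors}

def essential_sensors(sensors):
    """Sensors whose removal changes the difference coarray."""
    base = coarray(sensors)
    return {s for s in sensors if coarray(sensors - {s}) != base}

ula = {0, 1, 2, 3, 4}
mra = {0, 1, 4, 6}

print(essential_sensors(ula))  # only the endpoints: {0, 4}
print(essential_sensors(mra))  # every sensor: {0, 1, 4, 6}
```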
  2. ABSTRACT Next-generation aperture arrays are expected to consist of hundreds to thousands of antenna elements with substantial digital signal processing to handle large operating bandwidths of a few tens to hundreds of MHz. Conventionally, FX correlators are used as the primary signal processing unit of the interferometer. These correlators have computational costs that scale as O(N^2) for large arrays. An alternative imaging approach is implemented in the E-field Parallel Imaging Correlator (EPIC), which was recently deployed on the Long Wavelength Array station at the Sevilleta National Wildlife Refuge (LWA-SV) in New Mexico. EPIC uses a novel architecture that produces electric field or intensity images of the sky at the angular resolution of the array, with full or partial polarization and the full spectral resolution of the channelizer. By eliminating the intermediate cross-correlation data products, the computational costs can be lowered significantly in comparison to a conventional FX or XF correlator, from O(N^2) to O(N log N) for dense (but otherwise arbitrary) array layouts. EPIC can also lower the output data rates by directly yielding polarimetric image products for science analysis. We have optimized EPIC and have now commissioned it at LWA-SV as a commensal all-sky imaging back-end that can potentially detect and localize sources of impulsive radio emission on millisecond timescales. In this article, we review the architecture of EPIC, describe code optimizations that improve performance, and present initial validations from commissioning observations. Comparisons between EPIC measurements and simultaneous beam-formed observations of bright sources show spectral-temporal structures in good agreement.
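The direct-imaging idea can be sketched in a few lines: grid the complex antenna voltages, take a spatial FFT, square, and accumulate intensity images, avoiding explicit pairwise cross-correlations. The grid size, frame count, and random data below are purely illustrative; this is not the EPIC implementation.

```python
# Toy sketch of the direct-imaging idea behind EPIC-style
# correlators: grid the E-field, spatial FFT, square, accumulate.
# All sizes and data are illustrative, not the EPIC implementation.

import numpy as np

rng = np.random.default_rng(0)
n_grid = 16      # side length of the aperture grid (illustrative)
n_frames = 100   # time samples to accumulate

accum = np.zeros((n_grid, n_grid))
for _ in range(n_frames):
    # gridded complex E-field for one time/frequency sample
    efield = (rng.standard_normal((n_grid, n_grid))
              + 1j * rng.standard_normal((n_grid, n_grid)))
    image = np.fft.fft2(efield)      # spatial Fourier transform
    accum += np.abs(image) ** 2      # intensity image of the sky

avg_image = accum / n_frames         # time-averaged dirty image
```

Each frame costs one 2-D FFT rather than all pairwise products, which is where the O(N log N) versus O(N^2) scaling comes from.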
  3. Gas sensor arrays, also known as electronic noses, leverage a diverse set of materials to identify the components of complex gas mixtures. Metal-organic frameworks (MOFs) have emerged as promising materials for electronic noses due to their high surface areas and their chemical and structural tunability. Using our recently reported genetic-algorithm design approach, we examined a set of 50 MOFs and searched through over 1.125 × 10^15 unique array combinations to identify optimal arrays for the detection of CO2 in air. We found that despite individual MOFs having lower selectivity for O2 or N2 relative to CO2, intelligently selecting the right combinations of MOFs enables accurate prediction of the concentrations of all components in the mixture (i.e., CO2, O2, N2). We also analyzed the physical properties of the elements in the arrays to develop an intuition for improving array design. Notably, we found that an array whose MOFs have diverse volumetric surface areas exhibits improved sensing. Consistent with this observation, we found that the best arrays consistently had greater structural diversity (e.g., in pore sizes, void fractions, and surface areas) than the worst arrays.
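The core sensing principle can be illustrated with a toy linear model: if each sensor's response is approximately linear in the gas composition, a sufficiently diverse array lets the composition be recovered by least squares. The response matrix below is invented for illustration; real MOF responses would come from adsorption measurements or simulations, and this is not the paper's genetic-algorithm method.

```python
# Toy sketch of array sensing: with an (assumed) linear response
# model, a diverse sensor array recovers a gas composition by least
# squares. The response matrix R is made up for illustration.

import numpy as np

# rows: 4 hypothetical sensors; columns: CO2, O2, N2 sensitivities
R = np.array([
    [0.9, 0.2, 0.1],
    [0.3, 0.8, 0.2],
    [0.1, 0.3, 0.7],
    [0.5, 0.5, 0.5],
])

true_conc = np.array([0.04, 0.21, 0.75])   # CO2, O2, N2 fractions
signal = R @ true_conc                     # noiseless array responses

# solve R @ conc ~= signal in the least-squares sense
est, *_ = np.linalg.lstsq(R, signal, rcond=None)
print(est)   # recovers the composition up to numerical error
```

Diversity matters here because a nearly rank-deficient response matrix (sensors that all respond alike) makes the inversion ill-conditioned, echoing the paper's observation that structurally diverse arrays sense better.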
  4. Writing concurrent programs is notoriously hard due to scheduling non-determinism. The most common concurrency bugs are data races, which are pairs of conflicting accesses to a shared resource that can be executed concurrently. Dynamic data-race prediction is the standard technique for detecting data races: given an observed, data-race-free trace t, the task is to determine whether t can be reordered to a trace t* that exposes a data race. Although the problem has received significant practical attention for over three decades, its complexity has remained elusive. In this work, we address this lacuna, identifying sources of intractability and conditions under which the problem is efficiently solvable. Given a trace t of size n over k threads, our main results are as follows. First, we establish a general O(k · n^(2(k−1))) upper bound, as well as an O(n^k) upper bound when certain parameters of t are constant. In addition, we show that the problem is NP-hard and even W[1]-hard parameterized by k, and thus unlikely to be fixed-parameter tractable. Second, we study the problem over acyclic communication topologies, such as server-client hierarchies. We establish an O(k^2 · d · n^2 · log n) upper bound, where d is the number of shared variables accessed in t. In addition, we show that even for traces with k = 2 threads, the problem has no O(n^(2−ε)) algorithm under the Orthogonal Vectors conjecture. Since any trace with 2 threads defines an acyclic topology, our upper bound for this case is optimal up to polynomial improvements for moderate values of k and d. Finally, motivated by existing heuristics, we study a distance-bounded version of the problem, where the task is to expose a data race by a witness trace that is similar to t. We develop an algorithm that works in O(n) time when certain parameters of t are constant.
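To make the prediction task concrete, here is a deliberately naive, exponential-time sketch in a simplified event model (reads/writes and locks only, ignoring reads-from consistency): search the lock-respecting interleavings of the per-thread event sequences for a state where two threads are about to perform conflicting accesses. This toy is nothing like the paper's algorithms; all names are ours.

```python
# Toy brute-force illustration of dynamic data-race prediction in a
# simplified model (no reads-from consistency, one held lock per
# thread). Nothing like the paper's algorithms; names are ours.

from itertools import combinations

def conflicts(e1, e2):
    """Two accesses to the same variable, at least one a write."""
    return (e1[0] in ('r', 'w') and e2[0] in ('r', 'w')
            and e1[1] == e2[1] and 'w' in (e1[0], e2[0]))

def predicts_race(threads):
    """Search lock-respecting interleavings for a state where two
    threads' next events form a data race."""
    def enabled(pcs, held):
        ts = []
        for t, pc in enumerate(pcs):
            if pc == len(threads[t]):
                continue                   # thread finished
            op, obj = threads[t][pc]
            if op == 'acq' and obj in held.values():
                continue                   # lock currently taken
            ts.append(t)
        return ts

    def dfs(pcs, held):
        ts = enabled(pcs, held)
        for a, b in combinations(ts, 2):   # concurrent next events?
            if conflicts(threads[a][pcs[a]], threads[b][pcs[b]]):
                return True
        for t in ts:                       # otherwise, explore
            op, obj = threads[t][pcs[t]]
            new_held = dict(held)
            if op == 'acq':
                new_held[t] = obj
            elif op == 'rel':
                new_held.pop(t, None)
            new_pcs = list(pcs)
            new_pcs[t] += 1
            if dfs(tuple(new_pcs), new_held):
                return True
        return False

    return dfs((0,) * len(threads), {})

# thread 0 writes x holding lock l; thread 1 writes x with no lock
t0 = [('acq', 'l'), ('w', 'x'), ('rel', 'l')]
t1 = [('w', 'x')]
print(predicts_race([t0, t1]))   # True: the unprotected write races
```

The exponential state-space search here is exactly what the paper's upper bounds improve on for constant parameters and for acyclic topologies.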
  5. ABSTRACT Future generations of radio interferometers targeting the 21 cm signal at cosmological distances with N ≫ 1000 antennas could face a significant computational challenge in building correlators with the traditional architecture, whose computational resource requirement scales as O(N^2) with array size. The fundamental output of such correlators is the set of cross-correlation products of all antenna pairs in the array. The FFT-correlator architecture reduces the computational scaling to O(N log N) by computing cross-correlation products through a spatial Fourier transform. However, the output of the FFT-correlator is meaningful only when the input antenna voltages are gain- and phase-calibrated. Traditionally, interferometric calibration has used the O(N^2) cross-correlations produced by a standard correlator. This paper proposes two real-time calibration schemes that could work in parallel with an FFT-correlator as a self-contained O(N log N) correlator system that can be scaled to large-N redundant arrays. We compare the performance and scalability of these two calibration schemes and find that they result in antenna gains whose variance decreases as 1/log N as the array size increases.
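The identity underlying the FFT-correlator can be checked numerically: for voltages on a regular grid, the power spectrum of the spatial FFT equals the Fourier transform of the full set of circular pairwise correlations, so imaging can skip the O(N^2) cross-correlation step. Grid size and data below are illustrative.

```python
# Numerical check of the FFT-correlator identity: FFT of the full
# set of circular pairwise correlations == |FFT of the voltages|^2.
# Grid size and random data are illustrative.

import numpy as np

rng = np.random.default_rng(1)
N = 8
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # voltages

# O(N^2) route: circular correlations of all antenna pairs, by lag
corr = np.array([sum(v[n] * np.conj(v[(n - k) % N]) for n in range(N))
                 for k in range(N)])
image_from_corr = np.fft.fft(corr)

# O(N log N) route: FFT the voltages directly, then square
image_direct = np.abs(np.fft.fft(v)) ** 2

print(np.allclose(image_from_corr, image_direct))  # True
```

As the abstract notes, this shortcut only yields meaningful images once the voltages are gain- and phase-calibrated, which is the problem the paper's two calibration schemes address.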