Search for: All records

Creators/Authors contains: "Kumar, Mrinal"


  1. This article presents a new evidential reasoning approach for estimating the state of an evolving wildfire in real time. Given assumed terrain information and localized wind velocity distribution parameters, a probabilistic representation (i.e., the belief state) of a wildfire is forecast across the spatiotemporal domain through a compilation of fire spread simulations. The forecast is updated through information fusion based on observations provided by: 1) embedded temperature sensors and 2) mobile vision agents that are directed toward the most informative locations based on the current state estimate. This combination of uncertain sources is performed under Dempster's rule of combination, and the updated estimate then drives sensor reconfiguration. This research finds that the evidential belief combination vastly outperforms the standard forecasting approach (where no sensor data are incorporated) in the presence of imprecise environmental parameters.
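    As background for the fusion step, here is a minimal sketch of Dempster's rule of combination for two mass functions over a small frame of discernment; the frame, mass values, and function names are illustrative and not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    over the same frame of discernment with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # conflicting mass (falls on the empty set)
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources; combination undefined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}   # normalize by 1 - K

# Toy example: is a grid cell burning (B) or not (N)?
frame = frozenset({"B", "N"})
m_temperature = {frozenset({"B"}): 0.6, frame: 0.4}                       # embedded sensor
m_vision = {frozenset({"B"}): 0.7, frozenset({"N"}): 0.1, frame: 0.2}     # vision agent
print(dempster_combine(m_temperature, m_vision))
```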
  2. This paper considers path planning with resource constraints and dynamic obstacles for an unmanned aerial vehicle (UAV), modeled as a Dubins agent. Incorporating these complex constraints at the guidance stage expands the scope of UAV operations in challenging environments containing path-dependent integral constraints and time-varying obstacles. Path-dependent integral constraints, also known as resource constraints, can occur when the UAV is subject to a hazardous environment that exposes it to cumulative damage over its traversed path. A noise penalty function was selected as the resource constraint for this study; it is modeled as a path integral that exerts a path-dependent load on the UAV and is stipulated not to exceed an upper bound. Weather phenomena such as storms, turbulence, and ice are modeled as dynamic obstacles. In this paper, ice data from the Aviation Weather Service are employed to create training data sets for learning the dynamics of ice phenomena. Dynamic mode decomposition (DMD) is used to learn and forecast the evolution of ice conditions at flight level. This approach is presented as a computationally scalable means of propagating obstacle dynamics. The reduced-order DMD representation of time-varying ice obstacles is integrated with a recently developed backtracking hybrid A* graph search algorithm. The backtracking mechanism allows a feasible path to be determined in a computationally scalable manner in the presence of resource constraints. Illustrative numerical results are presented to demonstrate the effectiveness of the proposed path-planning method.
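    As a rough illustration of the forecasting machinery (not the paper's ice-field setup), the sketch below fits an exact-DMD model to synthetic snapshot data and produces a one-step forecast; the rank, data, and variable names are placeholder assumptions.

```python
import numpy as np

def exact_dmd(X, Xp, r):
    """Exact DMD: fit a rank-r linear operator A with Xp ≈ A @ X,
    where the columns of X and Xp are consecutive snapshots."""
    U, S, Vh = np.linalg.svd(X, full_matrices=False)
    Ur, Sr, Vr = U[:, :r], S[:r], Vh[:r, :].conj().T
    Atilde = Ur.conj().T @ Xp @ Vr / Sr        # reduced-order operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = (Xp @ Vr / Sr) @ W                 # exact DMD modes
    return Ur, Atilde, eigvals, modes

# Synthetic snapshots: each column is a flattened "ice field" at one time step
rng = np.random.default_rng(0)
data = np.cumsum(rng.standard_normal((50, 20)), axis=1)   # stand-in for gridded weather data
X, Xp = data[:, :-1], data[:, 1:]
Ur, Atilde, eigvals, modes = exact_dmd(X, Xp, r=5)

# One-step forecast of the next snapshot, propagated in the reduced coordinates
forecast = Ur @ (Atilde @ (Ur.conj().T @ data[:, -1]))
```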

     
  3. Multivariate multipoint evaluation is the problem of evaluating a multivariate polynomial, given as a coefficient vector, simultaneously at multiple evaluation points. In this work, we show that there exists a deterministic algorithm for multivariate multipoint evaluation over any finite field F that outputs the evaluations of an m-variate polynomial of degree less than d in each variable at N points in time (d^m + N)^{1+o(1)} · poly(m, d, log |F|) for all m ∈ N and all sufficiently large d ∈ N. A previous work of Kedlaya and Umans (FOCS 2008, SICOMP 2011) achieved the same time complexity when the number of variables m is at most d^{o(1)} and had left the problem of removing this condition as an open problem. A recent work of Bhargava, Ghosh, Kumar and Mohapatra (STOC 2022) answered this question when the underlying field is not too large and has characteristic less than d^{o(1)}. In this work, we remove this constraint on the number of variables over all finite fields, thereby answering the question of Kedlaya and Umans over all finite fields. Our algorithm relies on a non-trivial combination of ideas from three seemingly different previously known algorithms for multivariate multipoint evaluation, namely the algorithms of Kedlaya and Umans, that of Björklund, Kaski and Williams (IPEC 2017, Algorithmica 2019), and that of Bhargava, Ghosh, Kumar and Mohapatra, together with a result of Bombieri and Vinogradov from analytic number theory about the distribution of primes in an arithmetic progression. We also present a second algorithm for multivariate multipoint evaluation that is completely elementary and, in particular, avoids the use of the Bombieri–Vinogradov Theorem. However, it requires a mild assumption that the field size is bounded by an exponential tower in d of bounded height.
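    For context, the brute-force baseline that fast multipoint evaluation improves on is easy to state: evaluate each point against every monomial, costing roughly N · d^m field operations. The sketch below does exactly that over a prime field; the polynomial, prime, and names are illustrative.

```python
def naive_multipoint_eval(coeffs, points, p):
    """Evaluate an m-variate polynomial over F_p (p prime) at every point.

    coeffs maps exponent tuples (e_1, ..., e_m) to coefficients in F_p.
    This is the ~N * d^m baseline that fast multipoint evaluation beats."""
    results = []
    for pt in points:
        val = 0
        for exps, c in coeffs.items():
            term = c
            for x, e in zip(pt, exps):
                term = term * pow(x, e, p) % p
            val = (val + term) % p
        results.append(val)
    return results

# Example: f(x1, x2) = 3*x1^2*x2 + x2 + 5 over F_7, evaluated at two points
print(naive_multipoint_eval({(2, 1): 3, (0, 1): 1, (0, 0): 5}, [(1, 2), (3, 4)], 7))   # [6, 5]
```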
  4. We show that there is an equation of degree at most poly(n) for the (Zariski closure of the) set of non-rigid matrices: that is, we show that for every large enough field 𝔽, there is a non-zero n²-variate polynomial P ∈ 𝔽[x_{1,1}, ..., x_{n,n}] of degree at most poly(n) such that every matrix M that can be written as a sum of a matrix of rank at most n/100 and a matrix of sparsity at most n²/100 satisfies P(M) = 0. This confirms a conjecture of Gesmundo, Hauenstein, Ikenmeyer, and Landsberg [9] and improves the best upper bound known for this problem from exp(n²) [9, 12] down to poly(n). We also show a similar polynomial degree bound for the (Zariski closure of the) set of all matrices M such that the linear transformation represented by M can be computed by an algebraic circuit with at most n²/200 edges (without any restriction on the depth). As far as we are aware, no such bound was known prior to this work when the depth of the circuits is unbounded. Our methods are elementary and short and rely on a polynomial map of Shpilka and Volkovich [21] to construct low-degree “universal” maps for non-rigid matrices and small linear circuits. Combining this construction with a simple dimension-counting argument to show that any such polynomial map has a low-degree annihilating polynomial completes the proof. As a corollary, we show that any derandomization of the polynomial identity testing problem will imply new circuit lower bounds. A similar (but incomparable) theorem was proved by Kabanets and Impagliazzo [11].
  5. This work provides a finite-time stable disturbance observer design for the discretized dynamics of an unmanned vehicle in three-dimensional translational and rotational motion. The dynamics of this vehicle is discretized using a Lie group variational integrator as a grey-box dynamics model that also accounts for an unknown additive disturbance force and torque; the input-state dynamics is therefore only partly known. The unknown dynamics is lumped into a single disturbance force and a single disturbance torque, both of which are estimated using the disturbance observer we design. This disturbance observer is finite-time stable (FTS) and works like a real-time machine learning scheme for the unknown dynamics.
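    To give a flavor of disturbance estimation from partly known input-state dynamics, here is a deliberately simplified discrete-time observer for a single translational channel with an unknown constant force. This is a generic Luenberger-style sketch, not the paper's Lie group variational integrator or finite-time stable design, and all gains, models, and names are assumptions.

```python
def observer_step(v_meas, v_hat, d_hat, u, h, mass, L1=5.0, L2=20.0):
    """One update of a simple disturbance observer for
    v_{k+1} = v_k + (h/mass) * (u_k + d_k), with d_k an unknown slowly varying force.

    Propagates the velocity estimate with the current disturbance estimate,
    then corrects both estimates using the velocity measurement error."""
    e = v_meas - v_hat                                   # velocity estimation error
    v_hat_next = v_hat + (h / mass) * (u + d_hat) + h * L1 * e
    d_hat_next = d_hat + h * L2 * e                      # integral-type disturbance update
    return v_hat_next, d_hat_next

# Quick check against a constant disturbance acting on the "true" plant
h, mass = 0.01, 2.0
v, v_hat, d_hat, d_true, u = 0.0, 0.0, 0.0, 1.5, 0.0
for _ in range(2000):
    v = v + (h / mass) * (u + d_true)                    # true translational dynamics
    v_hat, d_hat = observer_step(v, v_hat, d_hat, u, h, mass)
print(round(d_hat, 3))                                   # approaches the true disturbance 1.5
```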
  6. This paper considers resource-constrained path planning for a Dubins agent. Resource constraints are modeled as path integrals that exert a path-dependent load on the agent that must not exceed an upper bound. A backtracking mechanism is proposed for the Hybrid A* graph search algorithm to determine the minimum-time path in the presence of the path loading constraint. The new approach is built on the premise that the inadmissibility of a node on the graph must depend on the load accumulated along the path taken to arrive at its location. Conventional Hybrid A* does not account for this, causing it to become suboptimal or even infeasible in the presence of resource constraints. The new approach "resets" the graph search by backing away from a node when the loading constraint is exceeded and redirecting the search to explore alternate routes to the same location while keeping the path load under its stipulated threshold. The backtracking stopping criterion is based on relaxation of the path load along the search path. Case studies are presented, and numerical comparisons are made with the Lagrange relaxation method for solving equivalent resource-constrained shortest path problems.
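    The core difficulty, that a node's admissibility depends on the load accumulated along the way, can be illustrated with a standard state-augmentation search in which the accumulated load is part of the search state. This is not the paper's backtracking Hybrid A*; the grid, costs, and load map below are toy assumptions.

```python
import heapq

def constrained_shortest_path(grid_load, start, goal, load_limit):
    """Minimum-step path on a 4-connected grid whose accumulated cell load stays <= load_limit.
    The search state is (cell, accumulated load), so a cell can be revisited along a lighter path."""
    rows, cols = len(grid_load), len(grid_load[0])
    best = {}                                    # (cell, load) -> best time found so far
    pq = [(0, start, grid_load[start[0]][start[1]])]
    while pq:
        t, cell, load = heapq.heappop(pq)
        if cell == goal:
            return t, load
        if best.get((cell, load), float("inf")) < t:
            continue                             # stale queue entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nload = load + grid_load[nr][nc]
                if nload <= load_limit and t + 1 < best.get(((nr, nc), nload), float("inf")):
                    best[((nr, nc), nload)] = t + 1
                    heapq.heappush(pq, (t + 1, (nr, nc), nload))
    return None                                  # no feasible path under the load limit

# A "noisy" corridor in the middle forces a detour when the load budget is tight
grid = [[0, 0, 0, 0],
        [0, 5, 5, 0],
        [0, 0, 0, 0]]
print(constrained_shortest_path(grid, (1, 0), (1, 3), load_limit=3))   # (5, 0): detours around the 5s
```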
  7. Byrka, Jaroslaw; Meka, Raghu (Eds.)
    In this work, we prove new relations between the bias of multilinear forms, the correlation between multilinear forms and lower degree polynomials, and the rank of tensors over F₂. We show the following results for multilinear forms and tensors. Correlation bounds. We show that a random d-linear form has exponentially low correlation with low-degree polynomials. More precisely, for d = 2^{o(k)}, we show that a random d-linear form f(X₁, X₂, …, X_d) : (F₂^k)^d → F₂ has correlation 2^{-k(1-o(1))} with any polynomial of degree at most d/2 with high probability. This result is proved by giving near-optimal bounds on the bias of a random d-linear form, which is in turn proved by giving near-optimal bounds on the probability that a sum of t random d-dimensional rank-1 tensors is identically zero. Tensor rank vs. bias. We show that if a 3-dimensional tensor has small rank then its bias, when viewed as a 3-linear form, is large. More precisely, given any 3-dimensional tensor T: [k]³ → F₂ of rank at most t, the bias of the 3-linear form f_T(X₁, X₂, X₃) := ∑_{(i₁, i₂, i₃) ∈ [k]³} T(i₁, i₂, i₃) ⋅ X_{1,i₁} ⋅ X_{2,i₂} ⋅ X_{3,i₃} is at least (3/4)^t. This bias vs. tensor-rank connection suggests a natural approach to proving nontrivial tensor-rank lower bounds. In particular, we use this approach to give a new proof that the finite field multiplication tensor has tensor rank at least 3.52k, which is the best known rank lower bound for any explicit tensor in three dimensions over F₂. Moreover, this relation between bias and tensor rank holds for d-dimensional tensors for any fixed d.
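    To make the central quantity concrete, the snippet below brute-forces the bias of the 3-linear form attached to a small tensor over F₂, using bias(f) = |Pr[f = 0] − Pr[f = 1]|; the all-ones rank-1 tensor and the tiny dimension k are toy choices.

```python
from itertools import product

def bias_of_3linear_form(T, k):
    """Brute-force bias of f_T(X1, X2, X3) = sum_{i,j,l} T[i][j][l]*x1_i*x2_j*x3_l over F_2.

    bias(f) = | E[(-1)^f] | over uniformly random X1, X2, X3 in F_2^k.
    Exponential in k, so only usable for tiny examples."""
    total = 0
    for x1 in product((0, 1), repeat=k):
        for x2 in product((0, 1), repeat=k):
            for x3 in product((0, 1), repeat=k):
                f = 0
                for i in range(k):
                    for j in range(k):
                        for l in range(k):
                            f ^= T[i][j][l] & x1[i] & x2[j] & x3[l]   # F_2 arithmetic: AND/XOR
                total += 1 if f == 0 else -1
    return abs(total) / 2 ** (3 * k)

# Rank-1 all-ones tensor with k = 2: f = (x1_1 + x1_2)(x2_1 + x2_2)(x3_1 + x3_2) over F_2
k = 2
T = [[[1] * k for _ in range(k)] for _ in range(k)]
print(bias_of_3linear_form(T, k))   # 0.75, matching the (3/4)^t bound with t = 1
```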
  8. A method is developed for transforming chance constrained optimization problems into a numerically solvable form. The transformation reformulates the chance constraints as nonlinear constraints using a method that combines the previously developed Split-Bernstein approximation and kernel density estimator (KDE) methods. The Split-Bernstein approximation, in a particular form, is a biased kernel density estimator, and the bias of this kernel leads to a nonlinear approximation that does not violate the bounds of the original chance constraint. Applying biased KDEs to reformulate chance constraints as nonlinear constraints transforms the chance constrained optimization problem into a deterministic optimization problem that retains key properties of the original problem and can be solved numerically. This method can be applied to chance constrained optimal control problems; accordingly, the Split-Bernstein and Gaussian kernels are applied to a chance constrained optimal control problem and the results are compared.
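    As a simple illustration of the reformulation idea, the sketch below replaces a scalar chance constraint Pr[x + ξ ≤ 0] ≥ 1 − ε with a smooth, sample-based KDE estimate. A Gaussian kernel is used for convenience, so this is not the paper's Split-Bernstein kernel, and the constraint, samples, and tolerances are assumptions.

```python
import numpy as np
from math import erf, sqrt

def kde_probability(threshold, samples, bandwidth):
    """Estimate Pr[xi <= threshold] with a Gaussian KDE (average of kernel CDFs)."""
    z = (threshold - samples) / bandwidth
    return np.mean([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z])

def chance_constraint(x, samples, eps=0.05, bandwidth=0.2):
    """Nonlinear reformulation of Pr[x + xi <= 0] >= 1 - eps, i.e. Pr[xi <= -x] >= 1 - eps.
    Returns a value that is >= 0 exactly when the KDE-estimated constraint is satisfied."""
    return kde_probability(-x, samples, bandwidth) - (1.0 - eps)

rng = np.random.default_rng(1)
xi_samples = rng.normal(0.0, 1.0, size=500)        # samples of the uncertain parameter xi
for x in (-3.0, -1.0, 0.0):
    print(x, chance_constraint(x, xi_samples) >= 0)   # only x = -3.0 satisfies the constraint
```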