Search for: All records
Creators/Authors contains: "Citti, Giovanna"


  1. We review the existing literature concerning regularity for the gradient of weak solutions of the subelliptic $p$-Laplacian differential operator in a domain $\Omega$ in the Heisenberg group $\mathbb{H}^n$, with $1 \le p < \infty$, and of its parabolic counterpart. We present some open problems and outline some of the difficulties they pose.
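     For reference, the subelliptic $p$-Laplace equation in question can be written, in standard notation (our transcription, not quoted from the paper), as
     $$\sum_{i=1}^{2n} X_i\left(|\nabla_0 u|^{p-2}\, X_i u\right) = 0 \quad \text{in } \Omega \subset \mathbb{H}^n,$$
     where $\nabla_0 u = (X_1 u, \dots, X_{2n} u)$ denotes the horizontal gradient with respect to a basis $X_1, \dots, X_{2n}$ of left-invariant horizontal vector fields on $\mathbb{H}^n$.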
  2. We prove local Lipschitz regularity for weak solutions to a class of degenerate parabolic PDEs modeled on the parabolic $p$-Laplacian $\partial_t u = \sum_{i=1}^{2n} X_i\left(|\nabla_0 u|^{p-2} X_i u\right)$ in a cylinder $\Omega \times \mathbb{R}^+$, where $\Omega$ is a domain in the Heisenberg group $\mathbb{H}^n$ and $2 \le p \le 4$. The result continues to hold in the more general setting of contact sub-Riemannian manifolds.
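     For the reader's convenience: in one common normalization (conventions differ by constant factors across the literature), writing points of $\mathbb{H}^n$ as $(x, y, \tau) \in \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}$, the horizontal frame is
     $$X_i = \partial_{x_i} - \frac{y_i}{2}\,\partial_\tau, \qquad X_{n+i} = \partial_{y_i} + \frac{x_i}{2}\,\partial_\tau, \qquad i = 1, \dots, n,$$
     so that $[X_i, X_{n+i}] = \partial_\tau$, and the sum in the equation above runs over this frame.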
  3. We present a new algorithm for learning unknown governing equations from trajectory data, using a family of neural networks. Given samples of solutions $x(t)$ to an unknown dynamical system $\dot{x}(t) = f(t, x(t))$, we approximate the function $f$ using a family of neural networks. We express the equation in integral form and use the Euler method to predict the solution at every successive time step, using at each iteration a different neural network as a prior for $f$. This procedure yields $M-1$ time-independent networks, where $M$ is the number of time steps at which $x(t)$ is observed. Finally, we obtain a single function $f(t, x(t))$ by neural network interpolation. Unlike our earlier work, where we numerically computed the derivatives of the data and used them as targets in a Lipschitz-regularized neural network to approximate $f$, our new method avoids numerical differentiation, which is unstable in the presence of noise. We test the new algorithm on multiple examples in a high-noise setting. We show empirically that generalization and recovery of the governing equation improve when a Lipschitz regularization term is added to our loss function, and that this method improves on our previous one especially in the high-noise regime, when numerical differentiation provides low-quality target data. Finally, we compare our results with other state-of-the-art methods for system identification.
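A minimal sketch of this training scheme on a toy 1-D ODE follows, assuming PyTorch. The helper names (`make_net`, `lipschitz_penalty`, `f_hat`), the finite-difference form of the Lipschitz penalty, and all hyperparameters are illustrative choices rather than details from the paper, and the final neural-network interpolation step is stood in for by a naive nearest-step lookup.

```python
# Illustrative sketch of the scheme described above, not the authors' code.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: noisy samples of x(t) solving x'(t) = -x, at M time steps.
M, dt = 20, 0.1
t = torch.arange(M, dtype=torch.float32) * dt
x = torch.exp(-t).unsqueeze(1) + 0.05 * torch.randn(M, 1)  # high-noise samples

def make_net():
    # One small time-independent network per step, acting as a prior for f.
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def lipschitz_penalty(net, xs, eps=1e-3):
    # Crude finite-difference surrogate for a Lipschitz regularization term.
    d = (net(xs + eps) - net(xs)) / eps
    return (d ** 2).mean()

lam = 1e-2                          # illustrative regularization weight
nets = []
for k in range(M - 1):              # yields M-1 networks, as in the abstract
    net = make_net()
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(300):
        opt.zero_grad()
        # Integral (forward-Euler) form: x_{k+1} ≈ x_k + dt * f_k(x_k);
        # no numerical derivative of the noisy data is ever computed.
        pred = x[k:k + 1] + dt * net(x[k:k + 1])
        loss = ((pred - x[k + 1:k + 2]) ** 2).mean() \
               + lam * lipschitz_penalty(net, x[k:k + 1])
        loss.backward()
        opt.step()
    nets.append(net)

def f_hat(tq, xq):
    # Stand-in for the paper's neural-network interpolation: evaluate the
    # network attached to the enclosing time step.
    k = min(int(tq / dt), M - 2)
    with torch.no_grad():
        return nets[k](xq)

print(f_hat(0.5, torch.tensor([[0.6]])))  # estimate of f(0.5, 0.6); true value -0.6
```

The point the sketch makes concrete is the one the abstract emphasizes: each network is fitted only through the Euler update, so the noisy trajectory is never differentiated numerically.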