Title: The ghost stairs stabilize to sharp symplectic embedding obstructions
Award ID(s):
1711976
PAR ID:
10056391
Author(s) / Creator(s):
 ;  ;  
Publisher / Repository:
Oxford University Press (OUP)
Date Published:
Journal Name:
Journal of Topology
Volume:
11
Issue:
2
ISSN:
1753-8416
Page Range / eLocation ID:
309 to 378
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper we present new proofs of the non-embeddability of countably branching trees into Banach spaces satisfying property beta_p and of countably branching diamonds into Banach spaces which are l_p-asymptotic midpoint uniformly convex (p-AMUC) for p>1. These proofs are entirely metric in nature and are inspired by previous work of Jiří Matoušek. In addition, using this metric method, we succeed in extending these results to metric spaces satisfying certain embedding obstruction inequalities. Finally, we give Tessera-type lower bounds on the compression for a class of Lipschitz embeddings of the countably branching trees into Banach spaces containing l_p-asymptotic models for p>=1. 
  2. Autoencoders have been proposed as a powerful tool for model-independent anomaly detection in high-energy physics. The operating principle is that events which do not belong to the space of training data will be reconstructed poorly, thus flagging them as anomalies. We point out that in a variety of examples of interest, the connection between large reconstruction error and anomalies is not so clear. In particular, for data sets with nontrivial topology, there will always be points that erroneously seem anomalous due to global issues. Conversely, neural networks typically have an inductive bias or prior to locally interpolate such that undersampled or rare events may be reconstructed with small error, despite actually being the desired anomalies. Taken together, these facts are in tension with the simple picture of the autoencoder as an anomaly detector. Using a series of illustrative low-dimensional examples, we show explicitly how the intrinsic and extrinsic topology of the dataset affects the behavior of an autoencoder and how this topology is manifested in the latent space representation during training. We ground this analysis in the discussion of a mock "bump hunt" in which the autoencoder fails to identify an anomalous "signal" for reasons tied to the intrinsic topology of n-particle phase space.
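The operating principle described in this abstract — flag events whose reconstruction error exceeds that of the training data — can be sketched minimally. In this illustrative example (not the paper's method), a linear PCA projection stands in for a trained autoencoder; the "background" and "signal" data are synthetic and all names are hypothetical.

```python
# Minimal sketch of reconstruction-error anomaly flagging.
# A linear projection (top principal component) stands in for a
# trained autoencoder; data and thresholds are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# "Background" events lie near a 1-D subspace of R^3, plus small noise.
background = rng.normal(size=(200, 1)) @ np.array([[1.0, 2.0, 0.5]])
background += 0.01 * rng.normal(size=background.shape)
mean = background.mean(axis=0)

# "Train" the autoencoder: keep the top principal direction.
_, _, vt = np.linalg.svd(background - mean, full_matrices=False)
direction = vt[0]  # shared 1-D latent space for encoder and decoder

def reconstruction_error(x):
    """Distance between each event and its rank-1 reconstruction."""
    centered = x - mean
    recon = np.outer(centered @ direction, direction)
    return np.linalg.norm(centered - recon, axis=-1)

# An event far off the training subspace reconstructs poorly (anomalous),
# while background events reconstruct well.
signal = np.array([[0.0, 0.0, 5.0]])
bg_err = reconstruction_error(background).mean()
sig_err = reconstruction_error(signal)[0]
assert sig_err > 10 * bg_err
```

The abstract's caveat applies even here: a point lying *on* the background subspace would reconstruct well and evade this test regardless of how rare or anomalous it is, which is exactly the failure mode the paper examines for data with nontrivial topology.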