Title: Gabor single-frame and multi-frame multipliers in any given dimension
Award ID(s):
1712602
PAR ID:
10298462
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Journal of Functional Analysis
Volume:
280
Issue:
9
ISSN:
0022-1236
Page Range / eLocation ID:
108960
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
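For orientation on the record's topic (a sketch of standard definitions, not material quoted from the paper): a Gabor system over L²(ℝᵈ), the frame condition, and the associated Gabor (frame) multiplier with a bounded symbol m are commonly written as follows.

```latex
% Gabor system generated by a window g in L^2(R^d) over the lattice aZ^d x bZ^d
\[
  \mathcal{G}(g,a,b) \;=\; \bigl\{\, g_{k,l}(x) = e^{2\pi i\, l b\cdot x}\, g(x-ka) \;:\; k,l\in\mathbb{Z}^d \,\bigr\}.
\]
% G(g,a,b) is a frame for L^2(R^d) if there exist constants 0 < A <= B such that
\[
  A\,\|f\|_2^2 \;\le\; \sum_{k,l\in\mathbb{Z}^d} \bigl|\langle f, g_{k,l}\rangle\bigr|^2 \;\le\; B\,\|f\|_2^2
  \qquad \text{for all } f\in L^2(\mathbb{R}^d).
\]
% Given analysis/synthesis windows g, h and a bounded symbol m = (m_{k,l}),
% the Gabor multiplier reweights the Gabor coefficients pointwise:
\[
  M_{m,g,h}\,f \;=\; \sum_{k,l\in\mathbb{Z}^d} m_{k,l}\,\langle f, g_{k,l}\rangle\, h_{k,l}.
\]
```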
More Like this
  1. Abstract Differential operators are widely used in geometry processing for problem domains like spectral shape analysis, data interpolation, parametrization and mapping, and meshing. In addition to the ubiquitous cotangent Laplacian, anisotropic second‐order operators, as well as higher‐order operators such as the Bilaplacian, have been discretized for specialized applications. In this paper, we study a class of operators that generalizes the fourth‐order Bilaplacian to support anisotropic behavior. The anisotropy is parametrized by a symmetric frame field, first studied in connection with quadrilateral and hexahedral meshing, which allows for fine‐grained control of local directions of variation. We discretize these operators using a mixed finite element scheme, verify convergence of the discretization, study the behavior of the operator under pullback, and present potential applications. (A minimal sketch of the standard discrete Bilaplacian that these operators generalize appears after this list.)
  2. While image captioning provides isolated descriptions for individual images, and video captioning offers a single narrative for an entire video clip, our work explores an important middle ground: progress-aware video captioning at the frame level. This novel task aims to generate temporally fine-grained captions that not only accurately describe each frame but also capture the subtle progression of actions throughout a video sequence. Despite the strong capabilities of existing leading vision language models, they often struggle to discern the nuances of frame-wise differences. To address this, we propose ProgressCaptioner, a captioning model designed to capture the fine-grained temporal dynamics within an action sequence. Alongside the model, we develop the FrameCap dataset to support training and the FrameCapEval benchmark to assess caption quality. The results demonstrate that ProgressCaptioner significantly surpasses leading captioning models, producing precise captions that accurately capture action progression and set a new standard for temporal precision in video captioning. Finally, we showcase practical applications of our approach, specifically in aiding keyframe selection and advancing video understanding, highlighting its broad utility.
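As a point of reference for the operator class in item 1 above, the isotropic discrete Bilaplacian on a triangle mesh is commonly assembled as K = L M⁻¹ L from a cotangent Laplacian L and a lumped mass matrix M. The sketch below is illustrative only (the function names and the SciPy-based assembly are assumptions, not taken from the paper); the anisotropic operators studied there generalize this baseline by letting the weights depend on a frame field.

```python
# Minimal sketch: cotangent Laplacian + lumped mass matrix on a triangle mesh,
# combined into the standard discrete Bilaplacian K = L M^{-1} L.
import numpy as np
import scipy.sparse as sp

def cotan_laplacian(V, F):
    """V: (n, 3) vertex positions, F: (m, 3) triangle vertex indices."""
    n = V.shape[0]
    I, J, W = [], [], []
    area = np.zeros(n)
    for tri in F:
        for k in range(3):
            i, j, l = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
            e1, e2 = V[i] - V[l], V[j] - V[l]
            # cotangent of the angle at vertex l, opposite edge (i, j)
            cot = np.dot(e1, e2) / np.linalg.norm(np.cross(e1, e2))
            I += [i, j, i, j]
            J += [j, i, i, j]
            W += [-0.5 * cot, -0.5 * cot, 0.5 * cot, 0.5 * cot]
        # barycentric (lumped) mass: one third of the triangle area per vertex
        a = 0.5 * np.linalg.norm(np.cross(V[tri[1]] - V[tri[0]], V[tri[2]] - V[tri[0]]))
        area[tri] += a / 3.0
    L = sp.csr_matrix((W, (I, J)), shape=(n, n))  # duplicate entries are summed
    M = sp.diags(area)
    return L, M

def bilaplacian(L, M):
    """Discrete Bilaplacian K = L M^{-1} L (M is diagonal, so inversion is cheap)."""
    Minv = sp.diags(1.0 / M.diagonal())
    return L @ Minv @ L
```

A typical use of such an operator is biharmonic interpolation or smoothing, e.g. solving K u = M f with values pinned at selected vertices; anisotropic variants replace the scalar cotangent weights with frame-field-dependent ones.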