

Search for: all records where Creators/Authors contains "Giannakis, Georgios B."

Note: Clicking on a Digital Object Identifier (DOI) number takes you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.

  1. Free, publicly-accessible full text available August 15, 2025
  2. Free, publicly-accessible full text available July 11, 2025
  3. Free, publicly-accessible full text available July 11, 2025
  4. Free, publicly-accessible full text available July 11, 2025
  5. Free, publicly-accessible full text available July 11, 2025
  6. Free, publicly-accessible full text available May 11, 2025
  7. Free, publicly-accessible full text available May 11, 2025
  8. Free, publicly-accessible full text available April 19, 2025
  9. Utilizing task-invariant prior knowledge extracted from related tasks, meta-learning is a principled framework for learning a new task, especially when data records are limited. A fundamental challenge in meta-learning is how to quickly "adapt" the extracted prior so that a task-specific model can be trained within a few optimization steps. Existing approaches deal with this challenge using a preconditioner that enhances convergence of the per-task training process. Though effective at locally representing a quadratic training loss, such simple linear preconditioners can hardly capture complex loss geometries. The present contribution addresses this limitation by learning a nonlinear mirror map, which induces a versatile distance metric capable of capturing, and thus aiding optimization over, a wide range of loss geometries, thereby facilitating per-task training. Numerical tests on few-shot learning datasets demonstrate the superior expressiveness and convergence of the advocated approach. (A minimal sketch of a mirror-descent adaptation step appears after this list.)
    Free, publicly-accessible full text available April 14, 2025
  10. Free, publicly-accessible full text available April 19, 2025
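
To make the idea in item 9 concrete, below is a minimal sketch of the kind of mirror-descent inner loop that a nonlinear mirror map enables. This is not the paper's parameterization: the elementwise map grad_phi(theta) = sinh(a*theta)/a, the toy quadratic loss, and all step sizes here are illustrative assumptions. The point is only that the gradient step is taken in the dual space induced by the (potentially meta-learned) mirror map, rather than directly in parameter space.

    # Illustrative sketch of per-task adaptation via mirror descent.
    # Mirror map chosen for easy invertibility (an assumption, not the paper's):
    #   grad_phi(theta)  = sinh(a * theta) / a   (strictly increasing, invertible)
    #   grad_phi_inv(y)  = arcsinh(a * y) / a
    # As a -> 0 this recovers plain gradient descent; a scalar like `a` is the
    # kind of quantity one could meta-learn across related tasks.
    import numpy as np

    def grad_phi(theta, a):
        # Map parameters to the dual (mirror) space.
        return np.sinh(a * theta) / a

    def grad_phi_inv(y, a):
        # Map dual-space points back to parameter space.
        return np.arcsinh(a * y) / a

    def mirror_descent_adapt(theta0, grad_loss, a=1.0, lr=0.1, steps=5):
        """Adapt task-specific parameters with a few mirror-descent steps."""
        theta = theta0.copy()
        for _ in range(steps):
            y = grad_phi(theta, a)          # go to dual space
            y = y - lr * grad_loss(theta)   # gradient step in dual space
            theta = grad_phi_inv(y, a)      # return to parameter space
        return theta

    # Toy quadratic task loss L(theta) = 0.5 * ||theta - target||^2 (made up).
    target = np.array([1.0, -2.0, 0.5])
    grad_loss = lambda theta: theta - target

    theta_init = np.zeros(3)  # stands in for the meta-learned prior
    theta_adapted = mirror_descent_adapt(theta_init, grad_loss, a=2.0, lr=0.3, steps=10)
    print(theta_adapted)      # approaches `target` as steps increase

With a = 0 (i.e., the identity map, in the limit), the update reduces to ordinary gradient descent; a nonlinear map reshapes the implicit distance metric, which is how the abstract's "wide range of loss geometries" can be accommodated within a few adaptation steps.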