Search for: All records
Total Resources: 3
- Author / Contributor
  - Jain, Shubham (3)
  - Cao, Bryan Bo (2)
  - Sharma, Abhinav (2)
  - Ashok, Ashwin (1)
  - Bhumireddy, Venkata (1)
  - Bronzino, Francesco (1)
  - Coss, Michael (1)
  - Dana, Kristin J. (1)
  - Das, Samir R (1)
  - Gandhi, Anshul (1)
  - Gruteser, Marco (1)
  - Mandayam, Narayan B. (1)
  - O’Gorman, Lawrence (1)
  - Rachuri, Sri Pramodh (1)
  - Rahman, Md Rashed (1)
  - Sethuraman, T. V. (1)
  - Singh, Manavjeet (1)
- Free, publicly-accessible full text available December 4, 2025
- Cao, Bryan Bo; Sharma, Abhinav; O’Gorman, Lawrence; Coss, Michael; Jain, Shubham (27th International Conference on Pattern Recognition (ICPR))
  Although accuracy and computation benchmarks are widely available to help choose among neural network models, these are usually trained on datasets with many classes, and do not give a good idea of performance for few (<10) classes. The conventional procedure to predict performance involves repeated training and testing on the different models and dataset variations. We propose an efficient cosine similarity-based classification difficulty measure S that is calculated from the number of classes and the intra- and inter-class similarity metrics of the dataset. After a single stage of training and testing per model family, relative performance for different datasets and models of the same family can be predicted by comparing difficulty measures, without further training and testing. Our proposed method is verified by extensive experiments on 8 CNN and ViT models and 7 datasets. Results show that S is highly correlated with model accuracy (correlation coefficient r = 0.796), outperforming the baseline Euclidean distance at r = 0.66. We show how a practitioner can use this measure to help select an efficient model 6 to 29x faster than through repeated training and testing. We also describe using the measure for an industrial application in which options are identified to select a model 42% smaller than the baseline YOLOv5-nano model, and, if merging from 3 to 2 classes meets requirements, 85% smaller. (See the illustrative sketch after this results list.)
  Free, publicly-accessible full text available November 30, 2025
- Rahman, Md Rashed; Sethuraman, T. V.; Gruteser, Marco; Dana, Kristin J.; Jain, Shubham; Mandayam, Narayan B.; Ashok, Ashwin (IEEE Access)
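The ICPR record above describes computing a classification difficulty measure S from the number of classes and intra-/inter-class cosine similarity, but the abstract does not give its closed form. The sketch below is only a minimal illustration of that kind of computation on per-sample feature embeddings; the function names (class_similarity_stats, difficulty_score), the feature source, and the way the terms are combined into a single score are assumptions for illustration, not the authors' formula.

```python
# Minimal sketch (not the paper's exact measure S): estimate a cosine-similarity-based
# classification difficulty score from per-class feature embeddings.
# The embedding source and the final combination of the intra-/inter-class terms
# are assumptions for illustration only.
import numpy as np

def class_similarity_stats(features, labels):
    """Return mean intra-class and inter-class cosine similarity.

    features: (N, D) array of feature vectors
    labels:   (N,) array of integer class ids
    """
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = feats @ feats.T                       # pairwise cosine similarities
    same = labels[:, None] == labels[None, :]    # same-class pairs
    off_diag = ~np.eye(len(labels), dtype=bool)  # ignore self-similarity
    intra = sims[same & off_diag].mean()
    inter = sims[~same].mean()
    return intra, inter

def difficulty_score(features, labels):
    """Heuristic difficulty: higher when classes overlap more (assumed form)."""
    intra, inter = class_similarity_stats(features, labels)
    n_classes = len(np.unique(labels))
    # Assumed combination: smaller intra/inter separation and more classes
    # both make the dataset harder.
    return (1.0 - (intra - inter)) * np.log(n_classes)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 3 well-separated Gaussian clusters -> relatively low difficulty
    centers = rng.normal(size=(3, 16)) * 5
    X = np.vstack([c + rng.normal(size=(50, 16)) for c in centers])
    y = np.repeat(np.arange(3), 50)
    print(round(difficulty_score(X, y), 3))
```

Under these assumptions, a dataset whose classes embed far apart (intra-class similarity well above inter-class similarity) yields a lower score than one whose classes overlap, mirroring the abstract's intuition that fewer, better-separated classes are easier to classify.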