Transformers have shown great success in medical image segmentation. However, transformers may exhibit a limited generalization ability due to the underlying single-scale self-attention (SA) mechanism. In this paper, we address this issue by introducing a Multiscale hiERarchical vIsion Transformer (MERIT) backbone network, which improves the generalizability of the model by computing SA at multiple scales. We also incorporate an attention-based decoder, namely Cascaded Attention Decoding (CASCADE), for further refinement of the multi-stage features generated by MERIT. Finally, we introduce an effective multi-stage feature mixing loss aggregation (MUTATION) method for better model training via implicit ensembling. Our experiments on two widely used medical image segmentation benchmarks (i.e., Synapse Multi-organ and ACDC) demonstrate the superior performance of MERIT over state-of-the-art methods. Our MERIT architecture and MUTATION loss aggregation can be used with other downstream medical image and semantic segmentation tasks.
Free, publicly accessible full text available July 1, 2024.
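The "multi-stage feature mixing loss aggregation" idea described above can be sketched roughly as follows: compute a loss for every non-empty combination of the decoder's stage-wise prediction maps, so that training implicitly ensembles the stages. This is an illustrative sketch only; the function names are assumptions, and the simple MSE stand-in does not reproduce the segmentation losses a real implementation would use.

```python
import numpy as np
from itertools import combinations

def mse(pred, target):
    # Simple stand-in loss for illustration; an actual segmentation
    # setup would typically use dice and/or cross-entropy losses.
    return float(np.mean((pred - target) ** 2))

def mutation_loss(stage_preds, target, loss_fn=mse):
    """Aggregate a loss over every non-empty combination (here, sum)
    of multi-stage prediction maps -- a hypothetical sketch of
    mixing-based loss aggregation, not the authors' code."""
    total = 0.0
    indices = range(len(stage_preds))
    for r in range(1, len(stage_preds) + 1):
        for combo in combinations(indices, r):
            mixed = sum(stage_preds[i] for i in combo)  # mix stage outputs
            total += loss_fn(mixed, target)
    return total
```

With n stages this accumulates 2^n - 1 loss terms; e.g. for two stage predictions it evaluates the loss on stage 1, stage 2, and their mixture, which is one way to realize implicit ensembling during training.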
- Rahman, R.; Bandyopadhyay, S. (Applied Science)
- Sahasrabudhe, H.; Novakovic, B.; Nakamura, J.; Fallahi, S.; Povolotskyi, M.; Klimeck, G.; Rahman, R.; Manfra, M. J. (Physical Review B)
- Lansbergen, G. P.; Rahman, R.; Verduijn, J.; Tettamanzi, G. C.; Collaert, N.; Biesemans, S.; Klimeck, G.; Hollenberg, L. C.; Rogge, S. (Physical Review Letters)