Ultraviolet (UV), visible, and near-infrared (NIR) broadband organic photodetectors are fabricated by sequential solution-based thin-film coating of a polymer electron blocking layer (EBL) and a polymer photoactive layer. To avoid damage to the preceding polymer EBL during the subsequent solution-based coating of the polymer photoactive layer, which arises from a lack of solvent orthogonality, 2-(((4-azido-2,3,5,6-tetrafluorobenzoyl)oxy)methyl)-2-ethylpropane-1,3-diyl bis(4-azido-2,3,5,6-tetrafluorobenzoate) (FPA-3F) is used as a novel organic cross-linking agent activated by UV irradiation at a wavelength of 254 nm. Solution-processed poly[N,N′-bis(4-butylphenyl)-N,N′-bis(phenyl)-benzidine] (poly-TPD) films, cross-linked with the FPA-3F photocrosslinker, serve as the preceding polymer EBL. A ternary blend film composed of PTB7-Th, COi8DFIC, and PC71BM is used as the NIR-sensitive organic photoactive layer, with strong photosensitivity across the multispectral (UV–visible–NIR) wavelength range of 300–1050 nm. Poly-TPD films are successfully cross-linked even with a very small amount (1 wt%) of FPA-3F, and such small amounts have little detrimental effect on the electrical and optoelectronic properties of the cross-linked poly-TPD EBL. Finally, organic NIR photodetectors with a poly-TPD EBL cross-linked by a small addition of FPA-3F (1 wt%) show detectivity values higher than 1 × 10^12 Jones over the entire UV–visible–NIR wavelength range from 300 nm to 1050 nm, with maximum detectivity values of 1.41 × 10^13 Jones and 8.90 × 10^12 Jones at NIR wavelengths of 900 and 1000 nm, respectively.
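The specific detectivity values quoted above (in Jones, i.e., cm·Hz^0.5·W^-1) are commonly estimated from the shot-noise-limited expression D* = R / sqrt(2·q·J_dark), where R is the responsivity and J_dark is the dark current density. A minimal sketch of that calculation, using hypothetical input values that are not taken from the paper:

```python
import math

Q_E = 1.602e-19  # elementary charge (C)

def shot_noise_limited_detectivity(responsivity_a_per_w: float,
                                   dark_current_density_a_per_cm2: float) -> float:
    """Specific detectivity D* in Jones (cm * Hz^0.5 / W), assuming the
    noise is dominated by shot noise of the dark current:
        D* = R / sqrt(2 * q * J_dark)
    """
    return responsivity_a_per_w / math.sqrt(
        2 * Q_E * dark_current_density_a_per_cm2)

# Hypothetical example values (NOT from the paper):
# R = 0.4 A/W at 900 nm, J_dark = 1e-9 A/cm^2
d_star = shot_noise_limited_detectivity(0.4, 1e-9)
print(f"D* = {d_star:.2e} Jones")  # on the order of 10^13 Jones
```

This shot-noise-limited form is an upper bound; measured detectivities that also account for thermal and flicker noise are lower.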
Ludan, Josh Magnus; Meng, Yixuan; Nguyen, Tai; Shah, Saurabh; Lyu, Qing; Apidianaki, Marianna; Callison-Burch, Chris (Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics)

Large Language Models (LLMs) are so powerful that they sometimes learn correlations between labels and features that are irrelevant to the task, leading to poor generalization on out-of-distribution data. We propose explanation-based finetuning as a general approach to mitigate LLMs' reliance on spurious correlations. Unlike standard finetuning, where the model only predicts the answer given the input, we finetune the model to additionally generate a free-text explanation supporting its answer. To evaluate our method, we finetune the model on artificially constructed training sets containing different types of spurious cues, and test it on a test set without these cues. Compared to standard finetuning, our method makes GPT-3 (davinci) remarkably more robust against spurious cues in terms of accuracy drop across four classification tasks: ComVE (+1.2), CREAK (+9.1), e-SNLI (+15.4), and SBIC (+6.5). The efficacy generalizes across multiple model families and scales, with greater gains for larger models. Finally, our method also works well with explanations generated by the model, implying its applicability to more datasets without human-written explanations.
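The explanation-based finetuning setup described in this abstract amounts to a data-preparation change: instead of training on (input → label) pairs, each training target is the label followed by a free-text rationale. A minimal, hypothetical sketch of that construction (the record fields, helper names, and example text are illustrative, not taken from the paper):

```python
def build_target(label: str, explanation: str) -> str:
    """Standard finetuning would use only `label` as the target;
    explanation-based finetuning appends a free-text rationale."""
    return f"{label} because {explanation}"

def to_finetune_record(example: dict, with_explanation: bool = True) -> dict:
    """Convert a labeled example into a prompt/completion training pair."""
    completion = (build_target(example["label"], example["explanation"])
                  if with_explanation else example["label"])
    return {"prompt": example["input"], "completion": completion}

# Hypothetical e-SNLI-style example (illustrative only):
ex = {
    "input": "Premise: A man plays guitar. Hypothesis: A person makes music.",
    "label": "entailment",
    "explanation": "playing guitar is a way of making music",
}
record = to_finetune_record(ex)
print(record["completion"])
```

Because the completion still begins with the label, task accuracy can be evaluated exactly as in standard finetuning, while the appended rationale discourages the model from latching onto spurious input features.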
Figueroa, Benjamin; Fu, Walter; Nguyen, Tai; Shin, Kseniya; Manifold, Bryce; Wise, Frank; Fu, Dan (Biomedical Optics Express)