-
Oleylamine, a hydrophobic long-chain molecule, is used to modify the spiro-OMeTAD matrix, which is then adopted as the hole-transport layer in perovskite solar cells. After moderate doping, the power conversion efficiency of the devices increases from 17.82 (±1.47)% to 20.68 (±0.77)%, with an optimized efficiency of 21.57% (AM 1.5G, 100 mW/cm²). The improved efficiency is ascribed to enhanced charge extraction and retarded charge recombination, as reflected by transient photovoltage/photocurrent curves and impedance spectroscopy measurements. In addition, grazing-incidence photoluminescence spectra reveal that oleylamine doping blue-shifts the luminescence peak of the surface layer of the halide perovskite film, while a Mott–Schottky study observes a 100 mV increase in the built-in potential; both indicate possible defect-passivation behavior at the perovskite surface. Moreover, an accelerated damp test shows that the moisture resistance of the device is also improved, owing to the enhanced hydrophobicity of the modified spiro-OMeTAD matrix.
-
In this paper, we study the stability of stochastic gradient descent (SGD) and its trade-off with optimization error in the pairwise learning setting. Pairwise learning refers to a learning task whose loss function depends on pairs of instances; notable examples include bipartite ranking, metric learning, area under the ROC curve (AUC) maximization, and the minimum error entropy (MEE) principle. Our contribution is twofold. First, we establish stability results for SGD for pairwise learning in the convex, strongly convex, and non-convex settings, from which generalization errors can be naturally derived. Second, we establish the trade-off between stability and optimization error of SGD algorithms for pairwise learning. This is achieved by lower-bounding the sum of stability and optimization error by the minimax statistical error over a prescribed class of pairwise loss functions. From this fundamental trade-off, we obtain lower bounds on the optimization error of SGD algorithms and on the excess expected risk over a class of pairwise losses. In addition, we illustrate our stability results with specific examples from AUC maximization, metric learning, and MEE.
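To make the pairwise setting concrete, the following is a minimal sketch of SGD on a pairwise loss, using AUC maximization with a linear scorer and a pairwise hinge surrogate. The function names, step-size schedule, and toy data are illustrative assumptions, not the paper's exact algorithm or analysis.

```python
import numpy as np

def pairwise_sgd_auc(X_pos, X_neg, n_steps=10_000, eta=0.01, seed=0):
    """SGD for AUC maximization with a linear scorer w.x and the
    pairwise hinge surrogate l(w; x+, x-) = max(0, 1 - w.(x+ - x-)).
    Each step samples one positive/negative pair and follows a
    (sub)gradient of the pairwise loss on that pair."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X_pos.shape[1])
    for t in range(1, n_steps + 1):
        xp = X_pos[rng.integers(len(X_pos))]
        xn = X_neg[rng.integers(len(X_neg))]
        diff = xp - xn
        # Hinge subgradient is nonzero only when the margin is violated.
        if 1.0 - w @ diff > 0.0:
            w += (eta / np.sqrt(t)) * diff  # decaying step size
    return w

def empirical_auc(w, X_pos, X_neg):
    """Fraction of (positive, negative) pairs ranked correctly by w.x."""
    s_pos = X_pos @ w
    s_neg = X_neg @ w
    return np.mean(s_pos[:, None] > s_neg[None, :])

# Toy data: two Gaussian clouds with shifted means.
rng = np.random.default_rng(1)
X_pos = rng.normal(loc=+0.5, size=(200, 5))
X_neg = rng.normal(loc=-0.5, size=(200, 5))
w = pairwise_sgd_auc(X_pos, X_neg)
print(f"empirical AUC: {empirical_auc(w, X_pos, X_neg):.3f}")
```

The key difference from pointwise SGD is visible in the sampling step: the stochastic gradient is computed on a pair of instances rather than a single example, which is what drives the stability analysis in the pairwise setting.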