The performance of electrocatalysts is critical for renewable energy technologies. While electrocatalytic activity can be modulated through structural and compositional engineering following the Sabatier principle, the insufficiently explored catalyst-electrolyte interface offers a promising route to promote microkinetic processes such as physisorption and desorption. By combining experimental designs with high-accuracy, explicit-solvent molecular dynamics simulations, we demonstrate that dimethylformamide can act as an effective surface molecular pump that facilitates the entrapment of oxygen and the outflux of water. Dimethylformamide disrupts the interfacial hydrogen-bond network, enhancing the activity of the oxygen reduction reaction by a factor of 2 to 3. This strategy is generally applicable to platinum-alloy catalysts, and we introduce an optimal model PtCuNi catalyst with an unprecedented specific activity of 21.8 ± 2.1 mA/cm² at 0.9 V versus the reversible hydrogen electrode, nearly double the previous record, and an ultrahigh mass activity of 10.7 ± 1.1 A/mgPt.
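For context, the two activity metrics quoted above are linked through the electrochemically active surface area (ECSA) by a standard relation that is not stated in the abstract; a quick unit check on the reported values reads:

    mass activity (A/mgPt) = specific activity (mA/cm²Pt) × ECSA (m²/gPt) / 100
    ECSA ≈ 100 × 10.7 / 21.8 ≈ 49 m²/gPt

so the reported specific and mass activities jointly imply an ECSA of roughly 49 m²/gPt (an inferred figure, not one reported in the abstract).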
Free, publicly-accessible full text available September 6, 2025
In the past decade, we have witnessed exponential growth in deep learning (DL) models, platforms, and applications. While existing DL applications and Machine Learning as a Service (MLaaS) frameworks assume fully trusted models, the need for privacy-preserving DNN evaluation arises. In a secure multi-party computation scenario, both the model and the data are considered proprietary, i.e., the model owner does not want to reveal the highly valuable DL model to the user, while the user does not wish to disclose their private data samples either. Conventional privacy-preserving deep learning solutions ask users to send encrypted samples to the model owner, who must handle the heavy lifting of ciphertext-domain computation with homomorphic encryption. In this paper, we present a novel solution, PrivDNN, which (1) offloads the computation to the user side by sharing an encrypted deep learning model with them, (2) significantly improves the efficiency of DNN evaluation using partial DNN encryption, and (3) ensures model accuracy and model privacy using a core neuron selection and encryption scheme. Experimental results show that PrivDNN reduces privacy-preserving DNN inference time and memory requirements by up to 97% while maintaining model performance and privacy. Code can be found at https://github.com/LiangqinRen/PrivDNN
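As an illustration only (this is not the PrivDNN implementation; the selection criterion, the names select_core_neurons and EncryptedSubLayer, and the placeholder "encrypted" evaluator are all assumptions made for the sketch), the following Python snippet shows the general shape of partial DNN encryption: a small set of core neurons in a layer is selected, their weights are routed through an owner-controlled component that would run under homomorphic encryption in the real system, and the remaining neurons are computed in plaintext on the user side.

```python
# Hedged sketch of partial DNN encryption; real HE is replaced by a placeholder.
import numpy as np

rng = np.random.default_rng(0)

def select_core_neurons(W, fraction=0.2):
    """Hypothetical criterion: pick the columns of W with the largest L2 norm."""
    k = max(1, int(fraction * W.shape[1]))
    scores = np.linalg.norm(W, axis=0)
    return np.sort(np.argsort(scores)[-k:])

class EncryptedSubLayer:
    """Stand-in for the model owner's encrypted computation.
    In a real system these weights would be shared only as ciphertexts and
    evaluated homomorphically; here we compute in the clear to stay runnable."""
    def __init__(self, W_core, b_core):
        self._W = W_core
        self._b = b_core

    def forward(self, x):
        return x @ self._W + self._b

def partial_encrypt_layer(W, b, fraction=0.2):
    """Split one dense layer into a plaintext part and a protected 'core' part."""
    core = select_core_neurons(W, fraction)
    plain = np.setdiff1d(np.arange(W.shape[1]), core)
    return core, plain, EncryptedSubLayer(W[:, core], b[core])

def forward(x, W, b, core, plain, enc_part):
    """User-side inference: plaintext neurons locally, core neurons via enc_part."""
    out = np.empty((x.shape[0], W.shape[1]))
    out[:, plain] = x @ W[:, plain] + b[plain]   # cheap local computation
    out[:, core] = enc_part.forward(x)           # would be ciphertext-domain
    return np.maximum(out, 0.0)                  # ReLU

# Usage: one 8 -> 16 dense layer with 20% of its neurons treated as "core".
W = rng.standard_normal((8, 16))
b = rng.standard_normal(16)
core, plain, enc_part = partial_encrypt_layer(W, b, fraction=0.2)
x = rng.standard_normal((4, 8))
print(forward(x, W, b, core, plain, enc_part).shape)  # -> (4, 16)
```

The point of such a split is that only the core columns of the weight matrix would ever leave the owner as ciphertext, which is what keeps both the encrypted-computation cost and the exposed fraction of the model small.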
Free, publicly-accessible full text available July 1, 2025