A high-speed super-resolution computational imaging technique is introduced, based on classical and quantum correlation functions extracted from photon counts collected from quantum emitters under spatiotemporally structured illumination. The structured illumination is delocalized, allowing selective excitation of separate groups of emitters as the modulation of the illumination advances. A recorded set of photon counts contains rich quantum and classical information: by processing the counts, multiple orders of Glauber correlation functions are extracted, and combinations of the normalized Glauber correlation functions convert the photon counts into signals of increasing order that carry increasing spatial frequency content. However, the amount of information above the noise floor drops at higher correlation orders, so the finer spatial frequency content contained in the higher-order signals becomes progressively less accessible. We demonstrate an efficient and robust computational imaging algorithm that fuses the low-spatial-frequency content available in the classical information with the spatial frequency content in the quantum signals. Because the low spatial frequencies overlap, the higher signal-to-noise ratio (SNR) information concentrated there stabilizes the lower-SNR higher spatial frequencies in the higher-order quantum signals. This joint fusion of classical and quantum computational single-pixel imaging performs robustly, with marked increases in spatial frequency content, leading to super-resolution imaging, and substantially lower mean squared errors in the reconstructed images.
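To make the correlation machinery concrete, the following is a minimal, illustrative Python sketch (not the authors' code) of how normalized zero-delay Glauber correlation functions can be estimated from per-pulse photon counts via factorial moments. The function name and the single-channel estimator are assumptions for illustration; the abstract does not specify the estimator used.

```python
import numpy as np

# Illustrative sketch, assuming photon-number-resolved counts per pulse.
# For a single detection channel, the n-th order zero-delay Glauber
# correlation can be estimated from factorial moments:
#   g^(n)(0) = <k(k-1)...(k-n+1)> / <k>^n
# where k is the photon count in each pulse window.

def glauber_gn(counts, order):
    """Estimate g^(n)(0) from an array of per-pulse photon counts."""
    counts = np.asarray(counts, dtype=float)
    # Falling factorial k(k-1)...(k-order+1), evaluated per pulse
    fact = np.ones_like(counts)
    for m in range(order):
        fact *= counts - m
    return fact.mean() / counts.mean() ** order

# Sanity check: Poissonian (coherent) light gives g^(2)(0) ~ 1,
# whereas antibunched single-photon emitters give g^(2)(0) < 1.
rng = np.random.default_rng(0)
coherent = rng.poisson(0.8, size=100_000)
print(glauber_gn(coherent, 2))  # ~ 1.0
```

Higher-order estimates (order = 3, 4, ...) use rarer multi-photon events, which is one way to see why SNR drops with correlation order, as the abstract notes.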
The BigCode community, an open-scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder and StarCoderBase: 15.5B-parameter models with an 8K context length, infilling capabilities, and fast large-batch inference enabled by multi-query attention. StarCoderBase is trained on 1 trillion tokens sourced from The Stack, a large collection of permissively licensed GitHub repositories with inspection tools and an opt-out process. We fine-tuned StarCoderBase on 35B Python tokens to create StarCoder. We perform the most comprehensive evaluation of Code LLMs to date and show that StarCoderBase outperforms every open Code LLM that supports multiple programming languages and matches or outperforms the OpenAI code-cushman-001 model. Furthermore, StarCoder outperforms every model that is fine-tuned on Python, can be prompted to achieve 40% pass@1 on HumanEval, and still retains its performance on other programming languages. We take several important steps towards a safe open-access model release, including an improved PII redaction pipeline and a novel attribution tracing tool, and make the StarCoder models publicly available under a more commercially viable version of the Open Responsible AI Model license.
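As a hypothetical usage illustration (not part of the paper), the sketch below shows how one might exercise the infilling capability through the Hugging Face transformers library. The model id bigcode/starcoder and the <fim_prefix>/<fim_suffix>/<fim_middle> sentinel tokens are assumptions based on the public BigCode release, not details given in this abstract.

```python
# Assumed checkpoint and FIM sentinel tokens from the public release.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder"  # assumed Hugging Face model id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Fill-in-the-middle: the model generates the span between the
# given prefix and suffix of a source file.
prompt = (
    "<fim_prefix>def fib(n):\n"
    "<fim_suffix>\n    return fib(n - 1) + fib(n - 2)<fim_middle>"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```

The multi-query attention mentioned in the abstract shares key/value projections across heads, which is what makes large-batch decoding of this kind comparatively fast.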