Precisely measuring the three-dimensional position and orientation of individual fluorophores is challenging due to the substantial photon shot noise in single-molecule experiments. To cope with this limited photon budget, numerous techniques have been developed to encode 2D and 3D position and 2D and 3D orientation information into fluorescence images. In this work, we adapt classical and quantum estimation theory and propose a mathematical framework to derive the best possible precision for measuring the position and orientation of dipole-like emitters for any fixed imaging system. We find that it is impossible to design an instrument that achieves the maximum sensitivity limit for measuring all possible rotational motions. Further, our vectorial dipole imaging model shows that the best quantum-limited localization precision is 4%–8% worse than that suggested by a scalar monopole model. Overall, we conclude that no single instrument can be optimized for maximum precision across all possible 2D and 3D localization and orientation measurement tasks.
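As a hedged illustration of the classical side of such a framework, the sketch below numerically evaluates a Cramér–Rao bound for lateral localization under Poisson shot noise, using a scalar Gaussian PSF as a stand-in for the full vectorial dipole model; the photon count, PSF width, and pixel size are illustrative assumptions, not values from the paper.

```python
# Minimal numerical sketch (not the paper's derivation): Cramér–Rao bound on
# lateral position under Poisson shot noise for a pixelated Gaussian PSF.
import numpy as np

def crb_lateral(n_photons=1000, psf_sigma=100.0, pixel_size=58.5, roi=15):
    """Return the localization CRB (nm) on x; all lengths in nm."""
    axis = (np.arange(roi) - roi // 2) * pixel_size
    xx, yy = np.meshgrid(axis, axis)

    def mu(x0):
        # Expected photons per pixel for an emitter at lateral position x0.
        g = np.exp(-((xx - x0) ** 2 + yy ** 2) / (2 * psf_sigma ** 2))
        return n_photons * g / g.sum()

    dx = 1e-3                                   # nm, for a numerical derivative
    dmu = (mu(dx) - mu(-dx)) / (2 * dx)
    fisher = np.sum(dmu ** 2 / mu(0.0))         # Fisher information for Poisson data
    return 1.0 / np.sqrt(fisher)                # best-case standard deviation (nm)

print(f"CRB on x: {crb_lateral():.1f} nm")      # roughly psf_sigma / sqrt(n_photons)
```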
Deep-SMOLM: deep learning resolves the 3D orientations and 2D positions of overlapping single molecules with optimal nanoscale resolution
Dipole-spread function (DSF) engineering reshapes the images of a microscope to maximize the sensitivity of measuring the 3D orientations of dipole-like emitters. However, severe Poisson shot noise, overlapping images, and the need to simultaneously fit high-dimensional information (both orientation and position) greatly complicate image analysis in single-molecule orientation-localization microscopy (SMOLM). Here, we report a deep learning-based estimator, termed Deep-SMOLM, that achieves superior 3D orientation and 2D position measurement precision within 3% of the theoretical limit (3.8° orientation, 0.32 sr wobble angle, and 8.5 nm lateral position using 1000 detected photons). Deep-SMOLM also demonstrates state-of-the-art estimation performance on overlapping images of emitters, e.g., a 0.95 Jaccard index for emitters separated by 139 nm, corresponding to a 43% image overlap. Deep-SMOLM accurately and precisely reconstructs 5D information of both simulated biological fibers and experimental amyloid fibrils from images containing highly overlapped DSFs, at a speed ~10 times faster than iterative estimators.
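For readers unfamiliar with the detection metric quoted above, the snippet below is a hypothetical illustration (not Deep-SMOLM's code) of how a Jaccard index can be computed by greedily matching detected emitters to ground-truth positions within a distance tolerance; the `tol` value and the example coordinates are assumptions.

```python
# Hedged sketch: detection Jaccard index = TP / (TP + FP + FN), where a detection
# counts as a true positive if it lies within `tol` nm of an unmatched true emitter.
import numpy as np

def jaccard_index(truth, found, tol=50.0):
    truth, found = np.asarray(truth, float), np.asarray(found, float)
    unmatched = list(range(len(truth)))
    tp = 0
    for f in found:
        if not unmatched:
            break
        d = np.linalg.norm(truth[unmatched] - f, axis=1)
        i = int(np.argmin(d))
        if d[i] <= tol:
            tp += 1
            unmatched.pop(i)       # each true emitter can be matched only once
    fp = len(found) - tp
    fn = len(truth) - tp
    return tp / (tp + fp + fn)

# Example: two true emitters 139 nm apart, two good detections, one spurious one.
print(jaccard_index([[0, 0], [139, 0]], [[5, -3], [141, 8], [400, 400]]))  # 2/3
```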
- Award ID(s): 1653777
- PAR ID: 10371986
- Publisher / Repository: Optical Society of America
- Date Published:
- Journal Name: Optics Express
- Volume: 30
- Issue: 20
- ISSN: 1094-4087; OPEXFF
- Format(s): Medium: X
- Size(s): Article No. 36761
- Sponsoring Org: National Science Foundation
More Like this
GaN has recently been shown to host bright, photostable, defect single-photon emitters in the 600–700 nm wavelength range that are promising for quantum applications. The nature and origin of these defect emitters remain elusive. In this work, we study the optical dipole structures and orientations of these defect emitters using the defocused imaging technique. In this technique, the far-field radiation pattern of an emitter in the Fourier plane is imaged to obtain information about the structure of the optical dipole moment and its orientation in 3D. Our experimental results, backed by numerical simulations, show that these defect emitters in GaN exhibit a single dipole moment that is oriented almost perpendicular to the wurtzite crystal c-axis. Data collected from many different emitters show that the angular orientation of the dipole moment in the plane perpendicular to the c-axis exhibits a distribution that shows peaks centered at the angles corresponding to the nearest Ga–N bonds and also at the angles corresponding to the nearest Ga–Ga (or N–N) directions. Moreover, the in-plane angular distribution shows little difference among defect emitters with different emission wavelengths in the 600–700 nm range. Our work sheds light on the nature and origin of these GaN defect emitters.
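As a rough, hedged illustration of why the Fourier-plane pattern encodes dipole orientation, the toy calculation below assumes an ideal electric dipole in a uniform medium (ignoring the GaN/air interface and the collection optics): the far-field intensity falls off as the squared sine of the angle between the emission direction and the dipole axis.

```python
# Toy model (assumption: ideal dipole, homogeneous medium, no interface effects):
# relative far-field intensity ~ sin^2(theta), theta between emission direction
# and dipole axis; this angular anisotropy is what defocused imaging reads out.
import numpy as np

def dipole_intensity(emission_dir, dipole_dir):
    """Relative far-field intensity for a unit dipole along dipole_dir."""
    k = np.asarray(emission_dir, float); k /= np.linalg.norm(k)
    p = np.asarray(dipole_dir, float);  p /= np.linalg.norm(p)
    cos_t = np.dot(k, p)
    return 1.0 - cos_t ** 2          # sin^2(theta)

# Taking z as the c-axis (an assumption for this example), a dipole lying in the
# plane perpendicular to it radiates strongly along z and not along its own axis.
print(dipole_intensity([0, 0, 1], [1, 0, 0]))  # 1.0
print(dipole_intensity([1, 0, 0], [1, 0, 0]))  # 0.0
```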
Interactions between biomolecules are characterized by where they occur and how they are organized, e.g., the alignment of lipid molecules to form a membrane. However, spatial and angular information are mixed within the image of a fluorescent molecule, i.e., the microscope's dipole-spread function (DSF). We demonstrate the pixOL algorithm, which simultaneously optimizes all pixels within a phase mask to produce an engineered Green's tensor, the dipole extension of point-spread function engineering. The pixOL DSF achieves optimal precision for simultaneously measuring the 3D orientation and 3D location of a single molecule, i.e., 4.1° orientation, 0.44 sr wobble angle, 23.2 nm lateral localization, and 19.5 nm axial localization precisions in simulations over a 700 nm depth range using 2500 detected photons. The pixOL microscope accurately and precisely resolves the 3D positions and 3D orientations of Nile red within a spherical supported lipid bilayer, resolving both membrane defects and differences in cholesterol concentration in six dimensions.
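The sketch below is a toy version of pixel-wise phase-mask optimization in the spirit of pixOL, under strong simplifications: a scalar (monopole) pupil model and a PSF-sharpness objective standing in for the actual Cramér–Rao-bound-based orientation/position objective. It does not reproduce the published method; it only shows the pattern of treating every mask pixel as a free parameter and optimizing by gradient descent.

```python
# Hedged sketch: optimize all phase-mask pixels jointly with a toy objective.
import torch

N = 64
yy, xx = torch.meshgrid(torch.linspace(-1, 1, N), torch.linspace(-1, 1, N), indexing="ij")
pupil = ((xx ** 2 + yy ** 2) <= 1.0).float()     # circular aperture (scalar model)
phase = torch.zeros(N, N, requires_grad=True)     # per-pixel phase mask to optimize

opt = torch.optim.Adam([phase], lr=0.05)
for step in range(200):
    field = pupil * torch.exp(1j * phase)          # pupil field with the mask applied
    psf = torch.fft.fftshift(torch.fft.fft2(field)).abs() ** 2
    psf = psf / psf.sum()
    loss = -(psf ** 2).sum()                       # toy sharpness objective, not a CRB
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final loss: {loss.item():.3e}")
```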
Cochlear hair cell stereocilia bundles are key organelles required for normal hearing. Often, deafness mutations cause aberrant stereocilia heights or morphology that are visually apparent but challenging to quantify. As actin-based structures, stereocilia are easily and most often labeled with phalloidin and then imaged with 3D confocal microscopy. Unfortunately, phalloidin non-specifically labels all the actin in the tissue and cells, resulting in a challenging segmentation task wherein the stereocilia phalloidin signal must be separated from the rest of the tissue. This can require many hours of manual human effort for each 3D confocal image stack. Currently, there are no existing software pipelines that provide an end-to-end automated solution for 3D stereocilia bundle instance segmentation. Here we introduce VASCilia, a Napari plugin designed to automatically generate 3D instance segmentation and analysis of 3D confocal images of cochlear hair cell stereocilia bundles stained with phalloidin. This plugin combines user-friendly manual controls with advanced deep learning-based features to streamline analyses. With VASCilia, users can begin their analysis by loading image stacks. The software automatically preprocesses these samples and displays them in Napari. At this stage, users can select their desired range of z-slices, adjust their orientation, and initiate 3D instance segmentation. After segmentation, users can remove any undesired regions and obtain measurements including volume, centroids, and surface area. VASCilia introduces unique features that measure bundle heights, determine their orientation with respect to the planar polarity axis, and quantify the fluorescence intensity within each bundle. The plugin is also equipped with trained deep learning models that differentiate between inner hair cells and outer hair cells and predict their tonotopic position within the cochlear spiral. Additionally, the plugin includes a training section that allows other laboratories to fine-tune our model with their own data, provides responsive mechanisms for manual corrections through event handlers that check user actions, and allows users to share their analyses by uploading a pickle file containing all intermediate results. We believe this software will become a valuable resource for the cochlea research community, which has traditionally lacked specialized deep learning-based tools for obtaining high-throughput image quantitation. Furthermore, we plan to release our code along with a manually annotated dataset that includes approximately 55 3D stacks featuring instance segmentation. This dataset comprises a total of 1,870 instances of hair cells, distributed between 410 inner hair cells and 1,460 outer hair cells, all annotated in 3D. As the first open-source dataset of its kind, we aim to establish a foundational resource for constructing a comprehensive atlas of cochlear hair cell images. Together, this open-source tool and dataset will greatly accelerate the analysis of stereocilia bundles and demonstrate the power of deep learning-based algorithms for challenging segmentation tasks in biological imaging research. Ultimately, this initiative will support the development of foundational models adaptable to various species, markers, and imaging scales to advance and accelerate research within the cochlea research community.
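As a hedged sketch of the kind of per-bundle measurements mentioned above (volume, centroid, surface area), and not VASCilia's own code, the snippet below pulls these quantities out of a 3D instance-label volume with scikit-image; the voxel spacing is an assumed placeholder.

```python
# Hedged sketch: per-instance volume, centroid, and approximate surface area
# from a 3D label volume, using scikit-image regionprops and marching cubes.
import numpy as np
from skimage import measure

def bundle_stats(labels, voxel_size=(0.3, 0.1, 0.1)):
    """voxel_size is an assumed (z, y, x) spacing in microns."""
    dz, dy, dx = voxel_size
    stats = []
    for region in measure.regionprops(labels):
        mask = labels == region.label
        # Surface area estimated from a marching-cubes mesh of this instance.
        verts, faces, _, _ = measure.marching_cubes(mask.astype(float), level=0.5,
                                                    spacing=voxel_size)
        stats.append({
            "label": region.label,
            "volume_um3": region.area * dz * dy * dx,   # voxel count x voxel volume
            "centroid_um": tuple(c * s for c, s in zip(region.centroid, voxel_size)),
            "surface_um2": measure.mesh_surface_area(verts, faces),
        })
    return stats

# Example: two toy "bundles" in a small synthetic label volume.
labels = np.zeros((20, 20, 20), dtype=int)
labels[2:8, 2:8, 2:8] = 1
labels[10:18, 10:15, 10:15] = 2
for s in bundle_stats(labels):
    print(s)
```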
The ability to combine microscopy and spectroscopy is beneficial for directly monitoring physical and biological processes. Spectral imaging approaches, where a transmission diffraction grating is placed near an imaging sensor to collect both the spatial image and the spectrum for each object in the field of view, provide a relatively simple method to simultaneously collect images and spectroscopic responses on the same sensor. Initially demonstrated with fluorescence spectroscopy, the use of spectral imaging in Raman spectroscopy and surface-enhanced Raman spectroscopy (SERS) can provide a vibrational spectrum containing molecularly specific information that can inform on chemical changes. However, a major complication of this approach is the spectral overlap that occurs when objects are spaced closely together horizontally. In this work, we add a dove prism to a spectral imaging instrument developed for SERS imaging, enabling rotation of the collected SERS image and dispersed spectrum onto the imaging complementary metal-oxide-semiconductor (CMOS) sensor. We demonstrate that this effectively reduces spectral overlap both for emitters with clear separation between them and for emitters with slightly overlapping point spread functions, thereby facilitating the collection of unambiguous spectra from each emitter.
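The toy geometric model below (an assumption-laden sketch, not the instrument's analysis code) captures the idea: whether the dispersed traces of two emitters overlap depends on their separation projected onto and across the dispersion axis, so rotating that axis, as the dove prism permits, can eliminate the overlap. The `disp_len` and `trace_width` parameters are made-up values in camera pixels.

```python
# Hedged sketch: 1D dispersion of length disp_len along a chosen axis; two traces
# overlap roughly when the emitters are close across the axis and within disp_len along it.
import numpy as np

def spectra_overlap(p1, p2, angle_deg, disp_len=200.0, trace_width=10.0):
    """True if the dispersed traces of two point emitters overlap (toy model)."""
    d = np.array([np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))])
    n = np.array([-d[1], d[0]])                  # direction perpendicular to dispersion
    sep = np.asarray(p2, float) - np.asarray(p1, float)
    along = abs(sep @ d)                          # separation along the dispersion axis
    across = abs(sep @ n)                         # separation across it
    return along < disp_len and across < trace_width

p1, p2 = (0.0, 0.0), (0.0, 40.0)                  # two emitters stacked vertically
print(spectra_overlap(p1, p2, angle_deg=90))      # True: dispersing vertically overlaps
print(spectra_overlap(p1, p2, angle_deg=0))       # False: rotated dispersion avoids it
```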
