Abstract
The field of nucleic acid self‐assembly has advanced significantly, enabling the creation of multi‐dimensional nanostructures with precise sizes and shapes. These nanostructures hold great potential for various applications, including biocatalysis, smart materials, molecular diagnosis, and therapeutics. Here, dynamic light scattering (DLS) and nanoparticle tracking analysis (NTA) are employed to investigate DNA origami nanostructures, focusing on size distribution and particle concentration. Compared to DLS, NTA provided higher resolution in size measurement with a smaller full‐width at half‐maximum (FWHM), making it particularly suitable for characterizing DNA nanostructures. To enhance sensitivity, a fluorescent NTA method is developed by incorporating an intercalation dye to amplify the fluorescence signals of DNA origami. This method is validated by analyzing various DNA origami structures, ranging from flexible 1D and 2D structures to compact 3D shapes, and evaluating structural assembly yields. Additionally, NTA is used to analyze dynamic DNA nanocages that undergo conformational switches among linear, square, and pyramid shapes in response to the addition of trigger strands. Quantitative size distribution data is crucial not only for production quality control but also for providing mechanistic insights into the various applications of DNA nanomaterials.
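For intuition, NTA infers size by tracking the Brownian motion of individual particles and applying the Stokes–Einstein relation. A minimal Python sketch of that estimate follows; the frame rate, temperature, viscosity, and synthetic track are illustrative assumptions, not data from this work.

```python
# Minimal sketch of the size estimate underlying NTA: track a particle's
# Brownian motion, get its diffusion coefficient from the mean squared
# displacement, then apply the Stokes-Einstein relation. All values are
# illustrative placeholders.
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter(track_xy, dt, temp_k=298.15, viscosity=8.9e-4):
    """Estimate hydrodynamic diameter (m) from one 2D Brownian track (m)."""
    steps = np.diff(track_xy, axis=0)           # frame-to-frame displacements
    msd = np.mean(np.sum(steps**2, axis=1))     # mean squared displacement per step
    diff_coeff = msd / (4.0 * dt)               # 2D diffusion: MSD = 4*D*dt
    return KB * temp_k / (3.0 * np.pi * viscosity * diff_coeff)  # Stokes-Einstein

# Sanity check with a synthetic 50 nm particle filmed at 30 fps
rng = np.random.default_rng(0)
d_true, dt = 50e-9, 1 / 30
d_coef = KB * 298.15 / (3 * np.pi * 8.9e-4 * d_true)
track = np.cumsum(rng.normal(0, np.sqrt(2 * d_coef * dt), (2000, 2)), axis=0)
print(f"estimated diameter: {hydrodynamic_diameter(track, dt) * 1e9:.0f} nm")
```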
A user-friendly graphical user interface for dynamic light scattering data analysis
Dynamic light scattering (DLS) is a commonly used analytical tool for characterizing the size distribution of colloids in a dispersion or a solution. Typically, the intensity of light scattered by the sample at a fixed angle from an incident laser beam is recorded as a function of time and converted into a time autocorrelation function, which can be inverted to estimate the distribution of colloid diffusivities and, from it, the colloid size distribution. For polydisperse samples, this inversion problem, being a Fredholm integral equation of the first kind, is ill-posed and is typically handled using cumulant expansions or regularization methods. Here, we introduce a user-friendly graphical user interface (GUI) for analyzing the measured scattering intensity time autocorrelation data using both the cumulant expansion method and regularization methods, with the latter implemented using various commonly employed algorithms, including NNLS, CONTIN, REPES, and DYNALS. The GUI allows the user to adjust any and all of the fit parameters, offering considerable flexibility, and it also enables a comparison of the size distributions generated by the various algorithms and an evaluation of their performance. We present fit results obtained from the GUI for model monomodal and bimodal dispersions to highlight the strengths, limitations, and scope of applicability of these algorithms for analyzing time autocorrelation data from DLS.
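As a hedged illustration of the two inversion routes described above (not the GUI's actual code; the synthetic data, grids, and parameter values are invented for the example), a minimal Python sketch might look like:

```python
# Sketch of cumulant and NNLS-regularized analysis of a measured intensity
# autocorrelation g2(tau); all values below are hypothetical.
import numpy as np
from scipy.optimize import nnls

# Synthetic monodisperse sample: g2 = 1 + beta * exp(-2*Gamma*tau)  (Siegert)
tau = np.logspace(-6, -1, 200)                 # lag times, s
beta, gamma_true = 0.9, 2.0e3                  # coherence factor, decay rate 1/s
g2 = 1.0 + beta * np.exp(-2.0 * gamma_true * tau)

# --- Cumulant expansion: ln g1(tau) = -Gamma_bar*tau + (mu2/2)*tau^2 + ...
g1 = np.sqrt(np.clip(g2 - 1.0, 1e-12, None) / beta)
mask = g1 > 0.1                                # fit only the well-resolved decay
coeffs = np.polyfit(tau[mask], np.log(g1[mask]), 2)
gamma_bar, mu2 = -coeffs[1], 2.0 * coeffs[0]
print(f"mean decay rate: {gamma_bar:.3e} 1/s, PDI ~ {mu2 / gamma_bar**2:.3f}")

# --- NNLS regularization: g1(tau) = sum_j A_j exp(-Gamma_j*tau), A_j >= 0
gammas = np.logspace(1, 6, 80)                 # candidate decay-rate grid
kernel = np.exp(-np.outer(tau, gammas))        # discretized Fredholm kernel
amplitudes, _ = nnls(kernel, g1)
print(f"NNLS peak at Gamma = {gammas[np.argmax(amplitudes)]:.3e} 1/s")
```

In a real analysis, the fitted decay rates would be converted to hydrodynamic sizes through the scattering vector q and the Stokes–Einstein relation (D = Γ/q², R_h = k_BT/6πηD), and the regularized inversion would typically include an explicit smoothness penalty (as in CONTIN) to stabilize the ill-posed problem.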
- Award ID(s): 2048285
- PAR ID: 10451848
- Date Published:
- Journal Name: Soft Matter
- Volume: 19
- Issue: 34
- ISSN: 1744-683X
- Page Range / eLocation ID: 6535 to 6544
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Nucleic acid self-assembly has rapidly advanced to produce multi-dimensional nanostructures with precise sizes and shapes. DNA nanostructures hold great potential for a wide range of applications, including biocatalysis, smart materials, molecular diagnosis, and therapeutics. Here, we present a study using dynamic light scattering (DLS) and nanoparticle tracking analysis (NTA) to analyze DNA origami nanostructures for their size distribution and particle concentration. Compared to DLS, NTA demonstrated higher resolution of size measurement with a smaller full-width at half-maximum (FWHM) and was well suited for characterizing multimerization of DNA nanostructures. We further used an intercalation dye to enhance the fluorescence signals of DNA origami and increase the detection sensitivity. By optimizing the intercalation dyes and the dye-to-DNA origami ratio, fluorescent NTA was able to accurately quantify the concentration of dye-intercalated DNA nanostructures, closely matching the values obtained by UV absorbance at 260 nm. This optimized fluorescent NTA method offers an alternative approach for determining the concentration of DNA nanostructures based on their size distribution, in addition to the commonly used UV absorbance quantification. This detailed size and concentration information is not only crucial for production and quality control but could also provide mechanistic insights into various applications of DNA nanomaterials.
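For reference, the UV absorbance quantification that fluorescent NTA is compared against follows the Beer–Lambert law. A hedged Python sketch using the common approximation of 50 µg/mL dsDNA per A260 unit is shown below; the strand lengths are illustrative (the 7,249-nt M13mp18 scaffold is a typical example), not values from the paper.

```python
# Hedged sketch of UV-absorbance (Beer-Lambert) quantification of a DNA
# origami. The 50 ug/mL-per-A260 dsDNA approximation and ~330 g/mol per
# nucleotide are common rules of thumb; strand lengths are illustrative.

def origami_concentration_nM(a260, scaffold_nt=7249, staple_nt=7249, path_cm=1.0):
    """Approximate molar concentration (nM) of a DNA origami from A260."""
    mass_conc = 50e-6 * a260 / path_cm   # g/mL of dsDNA (A260 = 1 ~ 50 ug/mL)
    total_nt = scaffold_nt + staple_nt   # fully folded: staples ~ scaffold length
    mw = total_nt * 330.0                # g/mol, average nucleotide mass
    return mass_conc / mw * 1e3 * 1e9    # g/mL -> mol/L -> nM

print(f"{origami_concentration_nM(0.5):.1f} nM at A260 = 0.5")
```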
-
Continuous-time event data are common in applications such as individual behavior data, financial transactions, and medical health records. Modeling such data can be very challenging, in particular for applications with many different types of events, since it requires a model to predict the event types as well as the time of occurrence. Recurrent neural networks that parameterize time-varying intensity functions are the current state-of-the-art for predictive modeling with such data. These models typically assume that all event sequences come from the same data distribution. However, in many applications event sequences are generated by different sources, or users, and their characteristics can be very different. In this paper, we extend the broad class of neural marked point process models to mixtures of latent embeddings, where each mixture component models the characteristic traits of a given user. Our approach relies on augmenting these models with a latent variable that encodes user characteristics, represented by a mixture model over user behavior that is trained via amortized variational inference. We evaluate our methods on four large real-world datasets and demonstrate systematic improvements from our approach over existing work for a variety of predictive metrics such as log-likelihood, next event ranking, and source-of-sequence identification.
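A minimal PyTorch sketch of this idea, not the paper's architecture: a GRU encodes each event sequence, an amortized encoder yields a per-sequence latent user embedding, and the ELBO combines a waiting-time likelihood (simplified here to a piecewise-constant exponential intensity rather than a fully time-varying one), a categorical mark likelihood, and a KL term. All layer names and sizes are hypothetical.

```python
# Hedged sketch: neural marked point process with a per-sequence latent
# user embedding trained by amortized variational inference (ELBO).
import torch
import torch.nn as nn

class LatentMarkedTPP(nn.Module):
    def __init__(self, n_marks, d_emb=16, d_hid=32, d_latent=8):
        super().__init__()
        self.mark_emb = nn.Embedding(n_marks, d_emb)
        self.rnn = nn.GRU(d_emb + 1, d_hid, batch_first=True)
        # amortized encoder: summarizes the whole sequence into q(z | sequence)
        self.enc_mu = nn.Linear(d_hid, d_latent)
        self.enc_logvar = nn.Linear(d_hid, d_latent)
        self.intensity = nn.Linear(d_hid + d_latent, 1)    # log event rate
        self.mark_head = nn.Linear(d_hid + d_latent, n_marks)

    def forward(self, marks, dts):
        # marks: (B, T) int64 event types; dts: (B, T) inter-event times
        x = torch.cat([self.mark_emb(marks), dts.unsqueeze(-1)], dim=-1)
        h, _ = self.rnn(x)                                  # (B, T, d_hid)
        mu, logvar = self.enc_mu(h[:, -1]), self.enc_logvar(h[:, -1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        zt = z.unsqueeze(1).expand(-1, h.size(1) - 1, -1)
        hz = torch.cat([h[:, :-1], zt], dim=-1)             # history up to t-1
        lam = self.intensity(hz).squeeze(-1).exp()          # rate per interval
        # exponential waiting-time log-lik + categorical mark log-lik
        ll_time = (torch.log(lam) - lam * dts[:, 1:]).sum(-1)
        logp = torch.log_softmax(self.mark_head(hz), dim=-1)
        ll_mark = logp.gather(-1, marks[:, 1:].unsqueeze(-1)).squeeze(-1).sum(-1)
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1)
        return (ll_time + ll_mark - kl).mean()              # ELBO per sequence

model = LatentMarkedTPP(n_marks=10)
elbo = model(torch.randint(0, 10, (4, 20)), torch.rand(4, 20))
(-elbo).backward()                                          # maximize ELBO
```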
-
spOccupancy: An R package for single‐species, multi‐species, and integrated spatial occupancy models
Abstract
Occupancy modelling is a common approach to assess species distribution patterns, while explicitly accounting for false absences in detection–nondetection data. Numerous extensions of the basic single‐species occupancy model exist to model multiple species, spatial autocorrelation and to integrate multiple data types. However, development of specialized and computationally efficient software to incorporate such extensions, especially for large datasets, is scarce or absent. We introduce the spOccupancy R package designed to fit single‐species and multi‐species spatially explicit occupancy models. We fit all models within a Bayesian framework using Pólya‐Gamma data augmentation, which results in fast and efficient inference. spOccupancy provides functionality for data integration of multiple single‐species detection–nondetection datasets via a joint likelihood framework. The package leverages Nearest Neighbour Gaussian Processes to account for spatial autocorrelation, which enables spatially explicit occupancy modelling for potentially massive datasets (e.g. 1,000s–100,000s of sites). spOccupancy provides user‐friendly functions for data simulation, model fitting, model validation (by posterior predictive checks), model comparison (using information criteria and k‐fold cross‐validation) and out‐of‐sample prediction. We illustrate the package's functionality via a vignette, simulated data analysis and two bird case studies. The spOccupancy package provides a user‐friendly platform to fit a variety of single‐ and multi‐species occupancy models, making it straightforward to address detection biases and spatial autocorrelation in species distribution models even for large datasets.
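For context, the basic single-species spatial occupancy model that such packages fit can be written as follows (standard formulation; the notation is generic, not necessarily spOccupancy's):

```latex
z_i \sim \mathrm{Bernoulli}(\psi_i), \qquad
y_{ij} \mid z_i \sim \mathrm{Bernoulli}(z_i \, p_{ij}), \\
\operatorname{logit}(\psi_i) = \mathbf{x}_i^{\top} \boldsymbol{\beta} + w(\mathbf{s}_i), \qquad
\operatorname{logit}(p_{ij}) = \mathbf{v}_{ij}^{\top} \boldsymbol{\alpha}
```

Here z_i is the true occupancy state of site i, y_ij the detection outcome on visit j, psi_i and p_ij the occupancy and detection probabilities, and w(s_i) a spatial random effect, which spOccupancy models with a Nearest Neighbour Gaussian Process.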
-
Beyond the Return: Off-policy Function Estimation under User-specified Error-measuring Distributions
Off-policy evaluation often refers to two related tasks: estimating the expected return of a policy and estimating its value function (or other functions of interest, such as density ratios). While recent works on marginalized importance sampling (MIS) show that the former can enjoy provable guarantees under realizable function approximation, the latter is only known to be feasible under much stronger assumptions such as prohibitively expressive discriminators. In this work, we provide guarantees for off-policy function estimation under only realizability, by imposing proper regularization on the MIS objectives. Compared to commonly used regularization in MIS, our regularizer is much more flexible and can account for an arbitrary user-specified distribution, under which the learned function will be close to the ground truth. We provide an exact characterization of the optimal dual solution that needs to be realized by the discriminator class, which determines the data-coverage assumption in the case of value-function learning. As another surprising observation, the regularizer can be altered to relax the data-coverage requirement, and completely eliminate it in the ideal case with strong side information.
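For context, the standard marginalized importance sampling setup behind this line of work (generic notation, not the paper's exact regularized objective) estimates the return through a state–action density ratio:

```latex
w^{\pi}(s, a) = \frac{d^{\pi}(s, a)}{d^{\mu}(s, a)}, \qquad
J(\pi) = \frac{1}{1 - \gamma} \, \mathbb{E}_{(s, a) \sim d^{\mu}}\!\left[ w^{\pi}(s, a) \, r(s, a) \right]
```

where d^pi and d^mu are the normalized discounted state–action occupancies of the target and behavior policies; the paper's regularizers act on objectives of this type so that the learned function stays close to the ground truth under a user-specified distribution.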