<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Journal Article</dc:product_type><dc:title>PHYSICS-INFORMED NEURAL NETWORKS FOR INFORMED VACCINE DISTRIBUTION IN META-POPULATIONS</dc:title><dc:creator>Arulandu, Alvan Caleb; Seshaiyer, Padmanabhan</dc:creator><dc:corporate_author/><dc:editor/><dc:description>&lt;p&gt;Accurate numerical and physical models play an important role in modeling the spread of infectious disease as well as informing policy decisions. Vaccination programs rely on the estimation of disease parameters from limited, error-prone reported data. Using physics-informed neural networks (PINNs) as universal function approximators of the susceptible-infected-recovered (SIR) compartmentalized differential equation model, we create a data-driven framework that uses reported data to estimate disease spread and approximate corresponding disease parameters. We apply this to data from a London boarding school, demonstrating the framework's ability to produce accurate disease and parameter estimations despite noisy data. However, real-world populations contain sub-populations, each exhibiting different levels of risk and activity. Thus, we expand our framework to model meta-populations of preferentially-mixed subgroups with various contact rates, introducing a new substitution to decrease the number of parameters. Optimal parameters are estimated through PINNs which are then used in a negative gradient approach to calculate an optimal vaccine distribution plan for informed policy decisions. We also manipulate a new hyperparameter in the loss function of the PINNs network to expedite training.
Together, our work creates a data-driven tool for future infectious disease vaccination efforts in heterogeneously mixed populations.&lt;/p&gt;</dc:description><dc:publisher>Begell House</dc:publisher><dc:date>2023-01-01</dc:date><dc:nsf_par_id>10535752</dc:nsf_par_id><dc:journal_name>Journal of Machine Learning for Modeling and Computing</dc:journal_name><dc:journal_volume>4</dc:journal_volume><dc:journal_issue>3</dc:journal_issue><dc:page_range_or_elocation>83 to 99</dc:page_range_or_elocation><dc:issn>2689-3967</dc:issn><dc:isbn/><dc:doi>https://doi.org/10.1615/JMachLearnModelComput.2023047642</dc:doi><dcq:identifierAwardId>2230117</dcq:identifierAwardId><dc:subject/><dc:version_number/><dc:location/><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>