<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Conference Paper</dc:product_type><dc:title>Augmenting Neural Networks with First-order Logic</dc:title><dc:creator>Li, Tao; Srikumar, Vivek</dc:creator><dc:corporate_author/><dc:editor/><dc:description>Today, the dominant paradigm for training neural networks involves minimizing task loss on a large dataset. Using world knowledge to inform a model, yet retaining the ability to perform end-to-end training, remains an open question. In this paper, we present a novel framework for introducing declarative knowledge to neural network architectures in order to guide training and prediction. Our framework systematically compiles logical statements into computation graphs that augment a neural network without extra learnable parameters or manual redesign. We evaluate our modeling strategy on three tasks: machine comprehension, natural language inference, and text chunking. Our experiments show that knowledge-augmented networks can strongly improve over baselines, especially in low-data regimes.</dc:description><dc:publisher/><dc:date>2019-07-01</dc:date><dc:nsf_par_id>10175282</dc:nsf_par_id><dc:journal_name>Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics</dc:journal_name><dc:journal_volume/><dc:journal_issue/><dc:page_range_or_elocation/><dc:issn/><dc:isbn/><dc:doi>https://doi.org/10.18653/v1/P19-1028</dc:doi><dcq:identifierAwardId>1801446</dcq:identifierAwardId><dc:subject/><dc:version_number/><dc:location/><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>