<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Conference Paper</dc:product_type><dc:title>Human Mobility Challenge: Are Transformers Effective for Human Mobility Prediction?</dc:title><dc:creator>Kong, Ruochen; Amiri, Hossein; Liu, Yueyang; Kennedy, Lance; Gupta, Misha; Kim, Joon-Seok; Züfle, Andreas</dc:creator><dc:corporate_author/><dc:editor/><dc:description>Transformer-based models are popular for time series forecasting and spatiotemporal prediction due to their ability to infer semantic correlations in long sequences. However, for human mobility prediction, temporal correlations, such as location patterns at the same time on previous days or weeks, are essential. While positional encodings help retain order, the self-attention mechanism causes a loss of temporal detail. To test this claim, we entered the 2nd ACM SIGSPATIAL Human Mobility Prediction Challenge with a simple approach that predicts locations from past temporal patterns, weighted by reliability scores to account for missing data. This simple approach placed among the top 10 competitors and significantly outperformed the Transformer-based model that won the 2023 challenge.</dc:description><dc:publisher>ACM</dc:publisher><dc:date>2024-10-29</dc:date><dc:nsf_par_id>10582625</dc:nsf_par_id><dc:journal_name/><dc:journal_volume/><dc:journal_issue/><dc:page_range_or_elocation>60 to 63</dc:page_range_or_elocation><dc:issn/><dc:isbn>9798400711503</dc:isbn><dc:doi>https://doi.org/10.1145/3681771.3700130</dc:doi><dcq:identifierAwardId>2109647</dcq:identifierAwardId><dc:subject>Human Mobility, Patterns of Life, Historical Heuristic</dc:subject><dc:version_number/><dc:location>Atlanta GA USA</dc:location><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>