<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Journal Article</dc:product_type><dc:title>Learning Terrain-Aware Bipedal Locomotion via Reduced-Dimensional Perceptual Representations</dc:title><dc:creator>Castillo, Guillermo A [Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH, USA] (ORCID:0000000313265836); Lodha, Himanshu [Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH, USA] (ORCID:0000000150034063); Hereid, Ayonga [Department of Mechanical and Aerospace Engineering, The Ohio State University, Columbus, OH, USA] (ORCID:0000000241562013)</dc:creator><dc:corporate_author/><dc:editor/><dc:description>This work introduces a hierarchical strategy for terrain-aware bipedal locomotion that integrates reduced-dimensional perceptual representations to enhance the reinforcement learning (RL)-based high-level (HL) policies for real-time gait generation. Unlike end-to-end approaches, our framework leverages latent terrain encodings via a convolutional variational autoencoder (CNN-VAE) alongside reduced-order robot dynamics, optimizing the locomotion decision process with a compact state. We systematically analyze the impact of latent space dimensionality on learning efficiency and policy robustness. In addition, we extend our method to be history-aware, incorporating sequences of recent terrain observations into the latent representation to improve robustness. To address real-world feasibility, we introduce a distillation method to learn the latent representation directly from depth camera images and provide preliminary hardware validation by comparing simulated and real sensor data. 
We further validate our framework using the high-fidelity Agility Robotics (AR) simulator, incorporating realistic sensor noise, state estimation, and actuator dynamics. The results confirm the robustness and adaptability of our method, underscoring its potential for hardware deployment.</dc:description><dc:publisher>IEEE</dc:publisher><dc:date>2026-01-01</dc:date><dc:nsf_par_id>10674043</dc:nsf_par_id><dc:journal_name>IEEE Transactions on Control Systems Technology</dc:journal_name><dc:journal_volume/><dc:journal_issue/><dc:page_range_or_elocation>1 to 13</dc:page_range_or_elocation><dc:issn>1063-6536</dc:issn><dc:isbn/><dc:doi>https://doi.org/10.1109/TCST.2026.3664022</dc:doi><dcq:identifierAwardId>2144156</dcq:identifierAwardId><dc:subject/><dc:version_number/><dc:location/><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>