Hierarchical RL Controller for High-Speed Autonomous Racing
Location: Nagoya, JP | 05/2026 - 08/2026
GitHub: https://github.com/ruben-fonseca-castro/hierarchical_sim2real_RL_ruben
This project develops a hierarchical controller for high-speed autonomous racing, combining Model Predictive Control (MPC) for long-horizon planning with a Reinforcement Learning (RL) waypoint follower that handles transient dynamics. Using NVIDIA Isaac Lab and the open-source Wheeled Lab framework, an RL policy was iteratively designed, trained, and refined for robust waypoint tracking under randomized environmental conditions.
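As a rough illustration of training under randomized environmental conditions, the sketch below samples per-episode environment parameters. The parameter names and ranges are hypothetical, not taken from the project's actual Isaac Lab configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical randomization ranges (illustrative only), drawn fresh
# at each episode reset so the policy cannot overfit to one setting.
RANDOMIZATION = {
    "ground_friction": (0.6, 1.1),      # coefficient of friction
    "chassis_mass_scale": (0.9, 1.1),   # multiplier on nominal mass
    "waypoint_noise_m": (0.0, 0.05),    # positional jitter in meters
}

def sample_episode_params():
    """Draw one set of environment parameters for an episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANDOMIZATION.items()}

params = sample_episode_params()
```

In Isaac Lab itself this kind of per-reset sampling is handled by the framework's event configuration rather than hand-rolled code; the sketch only shows the underlying idea.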
The final policy demonstrated strong performance in simulation, though with some overshoot and limited maneuverability. A deployment pipeline into a ROS 1 simulator was established, enabling trained policies to interface with an existing racing simulation environment. Current ROS integration suffers performance issues caused by action normalization errors, but the infrastructure for rapid policy iteration and deployment is in place. Future work will integrate the MPC layer, refine reward shaping, and validate performance in both sim2sim and sim2real settings for the Roboracer competition.
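The action normalization issue mentioned above is a common sim2real pitfall: policies are typically trained on actions in [-1, 1], and the ROS bridge must rescale them to physical actuator ranges before publishing. A minimal sketch, with hypothetical steering and speed limits not taken from the project:

```python
import numpy as np

# Hypothetical actuator limits for a small racing platform
# (illustrative values, not the project's actual configuration).
STEER_RANGE = (-0.34, 0.34)  # steering angle, rad
SPEED_RANGE = (0.0, 5.0)     # forward speed, m/s

def denormalize(action: float, low: float, high: float) -> float:
    """Map a policy output in [-1, 1] onto the actuator range [low, high]."""
    a = np.clip(action, -1.0, 1.0)
    return low + 0.5 * (a + 1.0) * (high - low)

# A normalized action pair from the policy becomes physical commands;
# skipping (or doubling) this rescaling produces the kind of
# out-of-range commands that degrade ROS-side performance.
steer_cmd = denormalize(-0.5, *STEER_RANGE)  # -> -0.17 rad
speed_cmd = denormalize(0.2, *SPEED_RANGE)   # -> 3.0 m/s
```

Keeping this mapping in one place, shared by the training environment and the ROS bridge, is the usual way to rule out mismatches between the two sides.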