Problem Settings
Given a training dataset of state-control trajectories collected under different disturbance realizations, we want to learn an adaptive controller, consisting of a trajectory-tracking controller that compensates for the disturbances and an adaptation law that estimates the disturbances online.
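As a concrete sketch of the setup (the notation below is ours, not fixed by the text), the data can be grouped by disturbance realization, with each group containing short state-control trajectories:

```latex
\mathcal{D} = \{ \mathcal{D}_1, \dots, \mathcal{D}_M \}, \qquad
\mathcal{D}_j = \bigl\{ \bigl( t^{(i)}_{0:N},\ \mathbf{x}^{(i)}_{0:N},\ \mathbf{u}^{(i)} \bigr) \bigr\}_{i=1}^{D_j},
```

where x collects the robot state, u is the control applied over the corresponding short horizon (e.g. held constant), and all trajectories in D_j share the same unknown disturbance. The goal is to learn from D both the disturbance model and the controller/adaptation pair described above.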
The dynamics of many robots, including ground, aerial, and underwater vehicles, are described in terms of their SE(3) pose and generalized velocity. The pose evolves on the SE(3) manifold, and the dynamics satisfy energy-conservation principles.
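As an illustrative sketch (our notation, not verbatim from the text), such dynamics can be written in a port-Hamiltonian form with an additive disturbance:

```latex
\begin{aligned}
\begin{bmatrix} \dot{\mathbf{q}} \\ \dot{\mathbf{p}} \end{bmatrix}
&= \mathcal{J}(\mathbf{q}, \mathbf{p})
\begin{bmatrix} \partial \mathcal{H} / \partial \mathbf{q} \\ \partial \mathcal{H} / \partial \mathbf{p} \end{bmatrix}
+ \begin{bmatrix} \mathbf{0} \\ \mathbf{g}(\mathbf{q})\,\mathbf{u} + \mathbf{d}(\mathbf{x}) \end{bmatrix},
\\
\mathcal{H}(\mathbf{q}, \mathbf{p})
&= \tfrac{1}{2}\,\mathbf{p}^{\top} \mathbf{M}^{-1}(\mathbf{q})\,\mathbf{p} + V(\mathbf{q}),
\end{aligned}
```

where q is the SE(3) pose, p the generalized momentum, J a skew-symmetric interconnection matrix encoding the manifold structure, g the input gain, and d the disturbance. Without inputs and disturbances, the Hamiltonian H is constant along trajectories, which is the energy-conservation property mentioned above.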
Technical Approach
Our approach learns disturbance features by training a neural ODE model on these datasets.
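A minimal training sketch, assuming a PyTorch/torchdiffeq setup, a flat state vector, and a linearly parameterized disturbance W(x) theta (the module names and architecture below are illustrative, not the authors' implementation):

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # pip install torchdiffeq

class DisturbedDynamics(nn.Module):
    """Illustrative learned dynamics: x_dot = f(x, u) + W(x) @ theta."""
    def __init__(self, state_dim, ctrl_dim, dist_dim, hidden=64):
        super().__init__()
        self.state_dim, self.dist_dim = state_dim, dist_dim
        self.f = nn.Sequential(nn.Linear(state_dim + ctrl_dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, state_dim))
        # Disturbance features W(x): map the state to a (state_dim x dist_dim) matrix.
        self.W = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, state_dim * dist_dim))

    def forward(self, x, u, theta):
        W = self.W(x).view(-1, self.state_dim, self.dist_dim)
        dist = (W @ theta.unsqueeze(-1)).squeeze(-1)   # W(x) theta
        return self.f(torch.cat([x, u], dim=-1)) + dist

def train_step(model, theta, optimizer, t, x_traj, u):
    """One gradient step on a batch of short trajectories that share the
    control u and the disturbance parameter theta over the horizon t."""
    def rhs(_, x):                         # u and theta are held constant
        return model(x, u, theta)
    x_pred = odeint(rhs, x_traj[0], t)     # (T, batch, state_dim)
    loss = torch.mean((x_pred - x_traj) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, theta can be a per-realization learnable vector registered with the optimizer or supplied externally; the text only states that the disturbance features are learned from data.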
The Hamiltonian structure is used to design a trajectory-tracking controller from an energy perspective, while the disturbance model with parameters theta is used to develop the adaptation law.
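As a rough illustration (a standard adaptive-control form, not the exact law from the text), with a linearly parameterized disturbance d = W(x) theta, the controller uses the current estimate to cancel the disturbance, and the estimate is updated from the tracking error:

```latex
\mathbf{u} = \mathbf{u}_{\mathrm{track}}(\mathbf{x}, \mathbf{x}^{*})
           - \mathbf{g}^{\dagger}(\mathbf{x})\, \mathbf{W}(\mathbf{x})\, \hat{\boldsymbol{\theta}},
\qquad
\dot{\hat{\boldsymbol{\theta}}} = \boldsymbol{\Gamma}\, \mathbf{W}^{\top}(\mathbf{x})\, \mathbf{e}(\mathbf{x}, \mathbf{x}^{*}),
```

where u_track is the energy-based tracking controller derived from the learned Hamiltonian, x* the desired trajectory, e a tracking-error signal, Gamma a positive-definite adaptation gain, and g^dagger a pseudoinverse of the input gain.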
Results
For the pendulum, we collect data by applying random control inputs in a pendulum Gym simulator. After learning the disturbance model from data, the proposed adaptive controller drives the pendulum toward the angle 3*pi/4 using the learned dynamics.
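A minimal data-collection sketch, assuming the Gym Pendulum-v1 environment (Gym >= 0.26 API) as a stand-in for the simulator described above; the environment name and sampling details are assumptions:

```python
import numpy as np
import gym  # gymnasium exposes the same API

def collect_pendulum_data(num_episodes=20, horizon=200, seed=0):
    """Roll out random torques and record (state, control) trajectories."""
    env = gym.make("Pendulum-v1")
    rng = np.random.default_rng(seed)
    dataset = []
    for _ in range(num_episodes):
        obs, _ = env.reset(seed=int(rng.integers(1_000_000)))
        states, controls = [obs], []
        for _ in range(horizon):
            u = env.action_space.sample()                  # random torque
            obs, _, terminated, truncated, _ = env.step(u)
            states.append(obs)
            controls.append(u)
            if terminated or truncated:
                break
        dataset.append((np.asarray(states), np.asarray(controls)))
    env.close()
    return dataset
```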
For quadrotors, we use a Crazyflie drone simulator in PyBullet and collect training data from 9 flights under 8 disturbance realizations.
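One simple way to organize such data (an assumption about bookkeeping, not the authors' pipeline) is to group flights by disturbance realization, so that each group can share a single disturbance parameter vector during training:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Flight:
    times: np.ndarray      # (N,) sample times
    states: np.ndarray     # (N, state_dim) SE(3) pose + generalized velocity
    controls: np.ndarray   # (N - 1, ctrl_dim) motor commands between samples

@dataclass
class DisturbanceGroup:
    description: str                       # e.g. "wind along +x, rotor 2 at 80% thrust"
    flights: list = field(default_factory=list)
    # All flights in a group would share one disturbance parameter theta_j
    # during training (hypothetical bookkeeping choice).

# 8 disturbance realizations, 9 flights in total spread across them.
dataset = [DisturbanceGroup(description=f"realization {j}") for j in range(8)]
```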
Experiment 1 (no adaptive control + spiral trajectory): the drone drifts due to external winds and defective rotors.
Experiment 1 (adaptive control + spiral trajectory): we can estimate the disturbance after a few seconds, compensate for it and successfully finish the tracking task.
Experiment 2 (no adaptive control + diamond-shaped trajectory): as the rotors suddenly become defective after 5 s, the drone drifts and drops to the ground due to the disturbances.
Experiment 2 (adaptive control + diamond-shaped trajectory): the drone recovers and finishes tracking the trajectory.