Grab & Dodge
Work in progress: combining deep reinforcement learning with potential-field methods so a robot can reach and grab targets while avoiding obstacles.
Current methodology:
- Use potential fields and gradient descent. The inputs to the gradient-descent step are two types of vectors: attractive vectors pointing towards the goal and repulsive vectors pointing away from obstacles (see the sketch after this list).
- Problem: the robot gets stuck wherever the attractive and repulsive forces cancel, i.e. the net force is zero (a local minimum).
- To escape such local minima, we give an RL agent the same inputs as gradient descent, plus its current position and the obstacles ahead of it. With that extra context the agent can plan its movement ahead and steer around an obstacle instead of getting stuck.
- Obstacles are rasterized into a 2D occupancy grid, where each cell's value encodes the height of the obstacle at that position.
- The occupancy grid can be fed into the network as an image and processed with convolutional layers (see the network sketch after this list).
- The attractive vectors, repulsive vectors, current location, and occupancy grid together should be all the information the agent needs to reach the goal.
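A minimal sketch of the potential-field step described above, assuming point obstacles in a 2D workspace and classic FIRAS-style repulsion; the gains `k_att`, `k_rep` and the influence radius are illustrative placeholders, not tuned values:

```python
import numpy as np

def potential_field_step(pos, goal, obstacles,
                         k_att=1.0, k_rep=0.5, influence=1.0, step=0.05):
    """One gradient-descent step on the combined potential field.

    pos, goal: (2,) arrays; obstacles: (N, 2) array of point obstacles.
    Gains and the influence radius are illustrative, not tuned values.
    """
    # Attractive vector: pulls the robot straight towards the goal.
    f_att = k_att * (goal - pos)

    # Repulsive vectors: push away from each obstacle inside its influence radius.
    f_rep = np.zeros(2)
    for obs in obstacles:
        diff = pos - obs
        dist = np.linalg.norm(diff)
        if 1e-9 < dist < influence:
            # Repulsion grows sharply as the robot nears the obstacle.
            f_rep += k_rep * (1.0 / dist - 1.0 / influence) / dist**2 * (diff / dist)

    net = f_att + f_rep
    # Zero net force is exactly the stuck case described above.
    if np.linalg.norm(net) < 1e-6:
        return pos, True   # stuck in a local minimum
    return pos + step * net / np.linalg.norm(net), False
```

Stepping this repeatedly traces the gradient-descent path; the `stuck` flag marks the zero-net-force situation the RL agent is meant to resolve.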
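A rough sketch of how a policy network could consume these observations, with a small CNN over the occupancy grid and the vector inputs concatenated onto its features. The grid size (64x64), layer widths, and the 2D observation layout are all assumptions for illustration, written in PyTorch:

```python
import torch
import torch.nn as nn

class GrabDodgePolicy(nn.Module):
    """Sketch of a policy over the observations listed above (assumed 2D case).

    The occupancy grid goes through a small CNN; the attractive vector,
    repulsive vector, and current location are concatenated onto the CNN
    features. Grid size and all layer widths are illustrative assumptions.
    """
    def __init__(self, grid_size=64, vec_dim=6, action_dim=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        conv_out = 32 * (grid_size // 4) ** 2
        self.head = nn.Sequential(
            nn.Linear(conv_out + vec_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, grid, vectors):
        # grid: (B, 1, H, W) height-encoded occupancy grid
        # vectors: (B, 6) = [attractive (2), repulsive (2), current position (2)]
        features = self.conv(grid)
        return self.head(torch.cat([features, vectors], dim=1))
```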
Other notes:
- Giving the agent direct control over the joint angles of the robot would mean it has to learn inverse kinematics as well.
- Movement generated by the agent can be very unsteady, so it needs smoothing (e.g. fit a B-spline through the waypoints, as sketched below).
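A minimal smoothing sketch using SciPy's parametric B-spline fitting, assuming the agent's output is a sequence of 2D waypoints; the smoothing factor and sample count are illustrative values:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_path(waypoints, num_samples=100, smoothing=0.01):
    """Fit a smoothing B-spline through noisy 2D waypoints.

    waypoints: (N, 2) array from the agent; smoothing factor is illustrative.
    Returns (num_samples, 2) points on a smooth curve approximating the path.
    """
    x, y = waypoints[:, 0], waypoints[:, 1]
    # splprep fits a parametric spline; s > 0 trades fidelity for smoothness.
    tck, _ = splprep([x, y], s=smoothing)
    u = np.linspace(0.0, 1.0, num_samples)
    xs, ys = splev(u, tck)
    return np.column_stack([xs, ys])
```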