Rigid robots are widely used, even in fields where they are suboptimal: they are typically difficult to transport, prone to breaking, and struggle to adapt to new environments. Despite this, rigid robots are still deployed where better alternatives exist, such as space exploration. Soft robots, and tensegrity robots in particular, are both more versatile and more durable. A tensegrity robot is composed of rigid struts held together by tensile elements, in our case springs. The largely hollow structure makes a tensegrity much lighter than a comparable rigid robot, and because it rests in a state of equilibrium it can be squashed flat, taking up considerably less space during transport, and can recover from external forces, either by simply returning to its equilibrium state or, if a member breaks, by learning a new motor policy suited to the altered structure. Our tensegrity robot, VVValtr, moves by vibrating motors mounted on three of its six struts, and my research focuses on efficiently learning a motor policy that produces linear movement. In my talk I will begin by outlining tensegrities, their benefits, and the types of tasks they are well suited to. I will then discuss the benefits and drawbacks of using open-loop, model-free Bayesian Optimization to learn a motor policy, and how we found it unsuitable for a vibrational tensegrity. Next, I will describe how we used the closed-loop, model-based Blackdrops package to implement a new learning approach, covering Blackdrops' efficiency and effectiveness as well as the benefits and drawbacks of this approach. I will conclude by exploring further policy options and explaining how our results with Bayesian Optimization and Blackdrops can inform the design of more efficient learning policies.
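To make the first technique concrete: open-loop, model-free Bayesian Optimization treats each robot trial as a black-box evaluation of a fixed motor setting, with no feedback during the run. Below is a minimal sketch, assuming the policy is one vibration frequency per motor and using scikit-optimize's gp_minimize; the frequency ranges, the policy parameterization, and the run_trial stand-in (a toy surrogate here, in place of running VVValtr and measuring displacement) are all illustrative assumptions, not details from the talk.

```python
# Minimal sketch: open-loop, model-free Bayesian Optimization of a
# vibrational motor policy, using scikit-optimize's gp_minimize.
# Assumption: the policy is a fixed vibration frequency for each of the
# three motorized struts; nothing here is VVValtr's actual interface.
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

def run_trial(freqs_hz):
    """Hypothetical stand-in for a real trial: drive the three motors at
    the given frequencies for a fixed duration and return the measured
    forward displacement in meters. A smooth toy landscape is used here
    so the sketch runs end to end."""
    f = np.asarray(freqs_hz)
    # Toy surrogate: displacement peaks when all motors sit near 40 Hz.
    return float(np.exp(-np.sum((f - 40.0) ** 2) / 200.0))

def objective(freqs_hz):
    # gp_minimize minimizes, so negate the displacement we want to maximize.
    return -run_trial(freqs_hz)

# One frequency dimension per vibrating strut motor (ranges are assumed).
space = [Real(10.0, 90.0, name=f"motor_{i}_hz") for i in range(3)]

result = gp_minimize(objective, space, n_calls=30, random_state=0)
print("best frequencies (Hz):", [round(x, 1) for x in result.x])
print("best displacement (m):", -result.fun)
```

The closed-loop, model-based alternative the abstract describes works differently: rather than spending a physical trial on every candidate setting, a package like Blackdrops fits a dynamics model from the trials it has run and evaluates candidate policies against that model, which is what makes it more data-efficient.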