The Bi-Touch system, developed by a team at the University of Bristol and the Bristol Robotics Laboratory, was trained for specific tasks in just a couple of hours using a learning method based on reward and punishment.
The development could lead to applications in industries such as fruit picking and domestic service, the researchers said. It could even be used to help recreate touch in artificial limbs.
“With our Bi-Touch system, we can easily train AI agents in a virtual world within a couple of hours to achieve bimanual tasks that are tailored towards touch. And, more importantly, we can directly apply these agents from the virtual world to the real world without further training,” said Yijiong Lin, lead author of a new paper on the work.
“The tactile bimanual agent can solve tasks even under unexpected perturbations, and manipulate delicate objects in a gentle way.”
The team trained the AI agents using a technique called deep reinforcement learning (deep RL), one of the most advanced techniques in robot learning. It is designed to teach robots to do things by letting them learn from trial and error, rewarding them when they accomplish goals and punishing them when they make mistakes.
The researchers taught the system its ‘bimanual’ (using both hands) skills by first building a virtual world containing two robot arms equipped with tactile sensors. They then designed reward functions and a ‘goal update mechanism’ that encouraged the agents to learn.
For robotic manipulation, the agent learns to make decisions by attempting various behaviours to achieve designated tasks, such as lifting up objects without dropping or breaking them. When it succeeds it gets a reward, and when it fails, it learns what not to do.
With time, it works out the best ways to pick things up. The AI agent is visually ‘blind’, relying only on proprioceptive feedback – the body’s sense of its own movement, action and location – and on tactile feedback.
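The reward-and-punishment loop described above can be illustrated with a toy sketch. This is not the Bristol team’s Bi-Touch code – the force levels, success band and single-step Q-learning setup are all invented for illustration – but it shows the same principle: an agent tries actions, is rewarded for gentle successful lifts and penalised for dropping or crushing, and gradually settles on the behaviour that works.

```python
import random

# Illustrative only: a toy version of learning by reward and punishment,
# not the Bi-Touch system itself. The agent must choose a grip force:
# too little drops the object, too much crushes it (all values hypothetical).
FORCE_LEVELS = list(range(10))   # discrete candidate grip forces
SUCCESS_RANGE = range(4, 7)      # hypothetical "gentle but firm" band

def reward(force: int) -> float:
    """+1 for a successful gentle lift, -1 for dropping or crushing."""
    return 1.0 if force in SUCCESS_RANGE else -1.0

def train(episodes: int = 2000, lr: float = 0.1, eps: float = 0.2,
          seed: int = 0) -> dict:
    """Single-step Q-learning: try forces, reinforce the ones that work."""
    rng = random.Random(seed)
    q = {f: 0.0 for f in FORCE_LEVELS}  # estimated value of each force
    for _ in range(episodes):
        # Epsilon-greedy: mostly exploit the best-known force,
        # occasionally explore a random one (trial and error).
        if rng.random() < eps:
            force = rng.choice(FORCE_LEVELS)
        else:
            force = max(q, key=q.get)
        # Move the estimate toward the observed reward or punishment.
        q[force] += lr * (reward(force) - q[force])
    return q

if __name__ == "__main__":
    q = train()
    print("learned grip force:", max(q, key=q.get))
```

In the real system the decision is made by a deep neural network from high-dimensional tactile and proprioceptive inputs rather than a lookup table, but the learning signal – reward on success, punishment on failure – is the same.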
The engineers then applied the agent to a real-world ‘tactile dual-arm robot system’ they had developed. The robot used what it had learned to successfully lift fragile items, including a single Pringle crisp.
Bimanual manipulation with tactile feedback will be “key” to human-level robot dexterity, the research announcement said. The researchers said the topic has been explored less than single-arm manipulation, partly due to the limited availability of suitable hardware and the complexity of designing effective controllers.
Co-author Professor Nathan Lepora said: “Our Bi-Touch system showcases a promising approach, with affordable software and hardware for learning bimanual behaviours with touch in simulation, which can be directly applied to the real world. Our developed tactile dual-arm robot simulation allows further research on… different tasks as the code will be open-source.”
Yijiong added: “Our Bi-Touch system allows a tactile dual-arm robot to learn solely from simulation, and to achieve various manipulation tasks in a gentle way, in the real world.”
The work was published in IEEE Robotics and Automation Letters.
Content published by Professional Engineering does not necessarily represent the views of the Institution of Mechanical Engineers.