Horizon2020 SecondHands Consortium Develops a Collaborative Robot Which Can Proactively Assist Humans in Maintenance Tasks

May 5, 2020

Tested in real-world industry for robustness and high performance, the breakthroughs in AI learning, natural language processing and robotic manipulation achieve collaborative robotic capabilities that exceed the state of the art

The EU Horizon2020 SecondHands consortium is today announcing the completion of its revolutionary robotic platform, ARMAR-6. Launched in 2015, the mission was to develop a collaborative robot, or “cobot”, to make industrial environments safer by proactively offering useful support and assistance to humans in maintenance tasks.

The SecondHands project was given its name because the robot provides a “second pair of hands” to workers. The robot can offer useful assistance - like holding, lifting, reaching, or passing objects. People can concentrate on the ‘skilled’ part of a job whilst the robot takes responsibility for the heavy lifting and support roles - thus enabling human and machine to actively enhance each other’s complementary strengths.

The underlying robot platform, the ARMAR-6, was developed at the Karlsruhe Institute of Technology (KIT) specifically for the project requirements with respect to human-robot interaction. The robot is the 6th generation and youngest member of the ARMAR family of humanoid robots.

The H2020 consortium partners are:

  • Ocado Technology (project coordinator)
  • Karlsruhe Institute of Technology (KIT)
  • École polytechnique fédérale de Lausanne (EPFL)
  • University College London (UCL)
  • University of Rome, La Sapienza

Grasping and Manipulation: Humanoid Robots Which Use Tools
The ARMAR-6, a humanoid robot, can safely and intuitively collaborate with humans.
Thanks to its high level of hardware-software integration, it combines advanced sensorimotor skills with learning and reasoning abilities. It can infer when a human needs help, and proactively offer the most appropriate assistance.

The robot can recognise human activities and intentions, reason about situations, and interact with humans in a natural way. It can also grasp and manipulate objects bimanually to accurately and safely use tools such as power drills and hammers.

Prof. Tamim Asfour from KIT explains: “Robots with sophisticated manipulation, interaction and learning abilities, such as ARMAR-6, will provide a second pair of hands to people in need of help at home and at work. The achievements in the project are important steps towards building humanoid robots with embodied intelligence.”

Key breakthroughs made by the consortium members in this space include:

  • A new humanoid robot incorporating two compliant 8-DoF arms, under-actuated hands, a holonomic platform, a head with five cameras, and a functional control architecture for integrating sensorimotor skills with learning and reasoning abilities
  • Novel methods for grasping objects and maintenance tools by combining visual and haptic sensing with model-based and data-driven machine learning approaches
  • Methods for grasping small and challenging objects with under-actuated hands
  • Methods for manipulating large objects in collaboration with humans, where the robot decides how to grasp an object or tool depending on the human’s actions
  • Learning techniques ranging from explorative learning to teaching or coaching by humans

Prof. Aude Billard from EPFL commented: “The breakthroughs demonstrated in the ‘hand-over’, ‘guard removal and insertion’, ‘guard co-manipulation’ and ‘obstacle avoidance’ scenarios will play a crucial role in having reliable robots that can be used in everyday scenarios.”

Task Understanding for Proactive Collaboration
ARMAR-6’s versatile artificial intelligence capabilities allow it to act in situations that were not foreseen at programming time. This means it is capable of autonomously performing maintenance tasks in industrial facilities. It can recognise its collaboration partners’ need for help and offer assistance. The ability to teach machines to see in 3D without requiring fully supervised 3D training data will revolutionise the field of machine learning.

Key breakthroughs made by the consortium members in this space include:

  • Learning:
    • New algorithms for gathering knowledge, from activity and action-sequence recognition to tool segmentation and context classification
  • Reacting:
    • Novel architectures for help recognition, enabling fast interaction and assistance via anticipation and forecasting techniques
    • Advances in guaranteeing safety - a state-dependent dynamical system providing compliant interaction with humans
    • Reactive motion planning that enables the robot to respond to human and environment interaction forces
  • Scene Recognition:
    • New algorithms for dynamic 3D scene understanding that can build geometric and semantic 3D maps of the environment even when objects move independently from the camera.
    • Pioneering weakly supervised and unsupervised approaches for 3D reconstruction of human poses and semantic scene understanding that can learn from just 2D labels or even no labels at all.
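The compliant-interaction idea listed above can be illustrated in miniature. The sketch below is purely hypothetical and not the consortium's actual controller: a state-dependent dynamical system commands a velocity toward a target, and an admittance-style term lets sensed human interaction forces deflect the motion so the robot yields rather than resists. All function names and gains are illustrative assumptions.

```python
# Hypothetical sketch of a state-dependent dynamical system with
# compliant (admittance-style) modulation by external forces.

def ds_velocity(x, target, gain=1.0):
    """Nominal dynamical system: velocity pointing from state x toward target."""
    return [gain * (t - xi) for xi, t in zip(x, target)]

def compliant_velocity(x, target, f_ext, compliance=0.5, gain=1.0):
    """Blend the nominal motion with the sensed external force so that
    contact with a human deflects the commanded motion instead of
    being fought by the controller."""
    v_nom = ds_velocity(x, target, gain)
    return [v + compliance * f for v, f in zip(v_nom, f_ext)]

# With no contact force the command heads straight for the target:
print(compliant_velocity([0.0, 0.0], [1.0, 0.0], [0.0, 0.0]))  # [1.0, 0.0]
# A sideways push from the human deflects the commanded motion:
print(compliant_velocity([0.0, 0.0], [1.0, 0.0], [0.0, 2.0]))  # [1.0, 1.0]
```

In a real controller the compliance term would be a state-dependent gain matrix rather than a scalar, but the principle - shaping motion as a function of both state and sensed force - is the same.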

Natural Language for Effective Human/Robot interaction
With collaborative robots, trust and adoption are key: what you build needs to fit into the natural ways people work, which includes being able to respond in human-suited timescales and in ways that are meaningful to humans. This means that developments in natural language are crucial to usability.

Key breakthroughs made by the consortium members in this space include:

  • Creation of a speech interface between humans and robots that is solely based on all-neural models: all-neural automatic speech recognition, all-neural dialog modelling and all-neural speech synthesis.

“The breakthroughs made in areas like natural language interfaces and task understanding will lead to better acceptance of ‘cobots’ by humans, and allow them to be used in an easier, more natural way. This is an important building block that will contribute to the emergence of true collaboration between humans and cobots,” says Dr. Sebastian Stüker of KIT.

Tested in Real World Scenarios For Increased Robustness
The ability to test this use case in the real-world environment of Ocado’s highly automated warehouse was fundamental to the programme’s success. By providing the engineering expertise for dealing with large, complex distributed software systems, Ocado Technology enabled the academic partners to focus on the scientific and technical challenges rather than on the process of “gluing” their solutions together.

Prof. Lourdes Agapito, UCL commented: “It was really valuable to be able to test in the Ocado warehouse as a real-world environment. The dynamic, constantly changing conditions tested our vision, robotics and language processing algorithms to go beyond the current state-of-the-art.”

Collaboration with industry to amplify the benefits of academic expertise
This project has brought together industry and academia, engaging world-leading experts from a range of robotics and scientific disciplines. The consortium partners have developed best-of-breed technologies across robotic perception, robotic communication, and human-robot interaction. After 5 years of collaboration, the innovations that have come out of this project far exceed what each partner could have accomplished alone.

Prof. Lourdes Agapito, UCL, went on to say: “Scientists in different areas of AI often work in isolation, so designing vision algorithms for ARMAR-6 has given us the chance for close collaborations with top roboticists, and has pushed us out of our ‘comfort zone’ towards increasing the robustness and performance of our algorithms.”

Prof. Fiora Pirri, University La Sapienza, commented: “These artificial intelligence innovations have been made possible thanks to the close collaboration of all the partners. The challenging real-world environment of the Ocado CFC (Customer Fulfilment Centre) allowed us to develop a reasoning system and perception algorithms to provide support in industrial maintenance tasks. It was an honour to work with KIT and EPFL’s creative haptic control and the inspiring UCL algorithms for 3D vision, all effectively coordinated by Ocado Technology, in order to realise these crucial breakthroughs in human-robot help.”

Solving Societal Challenges Beyond The Warehouse
Eventually these collaborative robots will be key for solving many societal challenges - both as assistive robots in industry, and out in the wider world. They will bring about major benefits in terms of allowing new levels of collaboration.

Dr Graham Deacon, Robotics Research Fellow, Ocado Technology: “Humanoid robots are key for improving flexibility and safety in industrial contexts in a way that is genuinely useful. The same technologies which enable the ARMAR-6 to communicate and interact with humans, like natural language comprehension, soft manipulation and 3D spatial awareness, also mean the robot could be developed further to help in other situations, like in helping to reduce contamination, or in assisted living.”

About Us

University College London (UCL)
UCL Computer Science is home to some of the world's most influential and creative researchers in the field of computer science.
A global leader in experimental computer science research, our degree programmes recognise the importance of computer systems in commerce, industry, government and science. Offering teaching by the brightest minds, we create innovative technologies that change lives with computers. The Research Excellence Framework (REF), which assesses the quality of research in UK higher education institutes, ranked us top in its most recent evaluation.

École polytechnique fédérale de Lausanne (EPFL)
Learning Algorithms and Systems Laboratory (LASA). Research at LASA develops means by which humans can teach robots to perform skills with the level of dexterity displayed by humans in similar tasks. Our robots move seamlessly with smooth motions. They adapt adequately and on-the-fly to the presence of obstacles and to sudden perturbations, hence mimicking humans’ immediate response when facing unexpected and dangerous situations.

Karlsruhe Institute of Technology (KIT)
The High Performance Humanoid Technologies Lab (H²T) at the Institute of Anthropomatics and Robotics, Department of Informatics, is the home of the ARMAR humanoid robots. Research at H²T focuses on the human-centred engineering of high-performance 24/7 humanoid robotics, as well as on the mechano-informatics of humanoids as the synergetic integration of informatics, artificial intelligence and mechatronics into humanoid robot systems which are able to predict, act and interact in the real world. The major research topics include humanoid mechatronics, grasping and mobile manipulation, learning from human observation and experience, and software and hardware robot architectures.

The Interactive Systems Lab (ISL) at KIT focuses on technologies that facilitate the human experience, human mutual understanding and communication. Research ranges from speech recognition, translation and speech synthesis to language and vision technologies, person tracking and recognition, multi-modal and cross-modal perceptual interfaces, smart rooms, and pervasive computing.

Ocado Technology
Ocado Technology is the online grocery technology pioneer. Part of Ocado Group, our world-class systems and solutions in automation, robotics, artificial intelligence, machine learning, simulation and big data power the online operations of grocery retailers around the world. Through our unique end-to-end ecommerce, fulfilment, and logistics platform, Ocado Smart Platform (OSP), Ocado enables our partners around the world to do online grocery scalably, sustainably and profitably. Ocado Technology continues to innovate every day by using its unparalleled technology, competencies, IP and know-how to transform the online grocery retail industry, and beyond.

University of Rome, La Sapienza
ALCOR Lab. The research activity focuses mainly on the perceptual inference processes a cognitive robotics system should be able to prompt in order to develop its knowledge about the environment and act as a mindful system. We work both on the perceptual inference processes and on the control and planning methods based on perception. The objective is to deal with many kinds of conditions: from factory settings to harsh and unstructured settings, such as post-disaster areas. We build our algorithms under two approaches: bottom-up attention and 3D reconstruction. The attention-based approach starts from early attention. Under this perspective we study many aspects of visual salience, as induced by motion, by actions, and by interactions. We are also interested in determining how classification of the different aspects of saliency can lead to the recognition of different patterns of human behaviour.
