Collaborative Robot Workshop Summary

The following is a technical summary of the Collaborative Robots: Frontiers and Challenges seminar held in Guangzhou on August 6 and 7, together with introductions to the corresponding projects and expectations for the future market, provided for reference only. We explore the technical difficulties and challenges of collaborative robots from five aspects: end effectors, sensors, robot software, safety, and robot standards. More slides from the talks will be released on our WeChat public account: Wuhan Cooper.

A: Technical summary

1. End effector

As the "last centimeter" of the collaborative robot, the end effector plays a vital role. It is a direct channel that the robot uses to physically interact with the outside world. The usual form is a terminal two-finger clamp, a terminal dexterous hand, and an end suction cup.


In academia, grasp planning and grasp control for dexterous hands have been studied for many years, but what dexterous hands can actually do today is still very limited. In practical applications, suction cups and two-finger grippers remain the most commonly used solutions. Consider the 2015 Amazon Picking Challenge shown in the picture below: the top three teams basically all used a suction cup combined with a two-finger gripper. This also shows, from one angle, that the dexterous hand is still quite far from real application.

If we are to realize truly collaborative robots, the robot hand is a major challenge, from both an application and a research perspective. Professor Kenji Tahara of Kyushu University in Japan, an expert in robotic hand design and control, gave a talk titled "Dynamic Dexterous Manipulation with Multi-Fingered Robot Hands", focusing on the importance of grasp dynamics for improving the functionality of robotic hands.

Professor Kenji Tahara first showed, from the design point of view, how adding an extra degree of freedom at the fingertip enables dexterous manipulations that the human hand cannot perform. He also stressed that we should pay more attention to the function of a robot hand rather than to its shape.

Professor Kenji Tahara then introduced his latest work on dynamic object manipulation. Unlike manipulation with a single robot arm, in-hand object manipulation requires multiple fingers to cooperate within a small workspace, and because of sliding between the fingertips and the object, it is difficult to accurately measure the state of the manipulated object, such as its position and pose. Professor Tahara proposed the concept of a virtual object frame, which uses only the kinematics of the fingers to estimate the state of the object, without relying on a large number of external sensors. This work is quite innovative in dexterous manipulation and has attracted follow-up research from the German Aerospace Center (DLR), the Berkeley robotics group, and EPFL in Lausanne.
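To make the idea concrete, the sketch below is a simplified illustration under our own assumptions, not Professor Tahara's actual formulation: it builds a virtual object frame from nothing but the fingertip positions given by finger kinematics, placing the frame origin at the centroid of the contacts and taking its orientation from their principal directions.

```python
# Minimal sketch (not Prof. Tahara's formulation): estimate a "virtual object
# frame" purely from fingertip positions obtained via finger kinematics,
# without any external sensor on the object itself.
import numpy as np

def virtual_object_frame(fingertips: np.ndarray):
    """fingertips: (N, 3) fingertip positions from forward kinematics.
    Returns (origin, rotation) of a virtual frame attached to the grasped object."""
    origin = fingertips.mean(axis=0)        # centroid of the contact points
    centered = fingertips - origin
    # Principal directions of the contact points define the frame orientation.
    _, _, vt = np.linalg.svd(centered)
    rotation = vt.T                         # columns = frame axes
    if np.linalg.det(rotation) < 0:         # keep the frame right-handed
        rotation[:, -1] *= -1
    return origin, rotation

# Example: three fingertips holding an object
tips = np.array([[0.10,  0.02, 0.30],
                 [0.12, -0.03, 0.31],
                 [0.08, -0.01, 0.33]])
origin, R = virtual_object_frame(tips)
print("virtual frame origin:", origin)
```

As the fingers move (and even if they slide slightly on the object), this frame can be recomputed at the control rate, which is what makes it usable as a feedback signal.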

Finally, Professor Kenji Tahara demonstrated how visual information can be combined to further improve the estimate of the object's state. Because vision has a low update rate, latency, and suffers from occlusion, a controller that relies on it directly can hardly meet real-time requirements. In this work, the virtual frame is controlled in real time, and the actual object information from vision is used only to update the desired state of the object.
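The dual-rate structure can be pictured roughly as follows; the rates, gains, and variable names are assumptions made for the example, not values from the talk. The point is that the fast loop never waits for the camera: vision only refreshes the desired object state.

```python
# Illustrative sketch of the dual-rate idea: a fast inner loop servos the
# virtual frame, while slow, delayed vision only refreshes the *desired* state.
import numpy as np

CONTROL_HZ = 500          # fast inner loop driven by finger kinematics (assumed)
VISION_HZ = 15            # slow, delayed camera updates (assumed)

desired_state = np.zeros(3)    # desired object position expressed in the virtual frame
virtual_state = np.zeros(3)    # state estimated from finger kinematics only

def fast_control_step(virtual_state, desired_state, gain=2.0, dt=1.0 / CONTROL_HZ):
    """Proportional correction applied every control tick."""
    return virtual_state + gain * (desired_state - virtual_state) * dt

for tick in range(CONTROL_HZ):                    # simulate one second of control
    if tick % (CONTROL_HZ // VISION_HZ) == 0:
        # Vision arrives rarely (and late); it only moves the goal, never the
        # fast feedback signal, so real-time behaviour does not depend on it.
        desired_state = np.array([0.05, 0.00, 0.02]) + 0.001 * np.random.randn(3)
    virtual_state = fast_control_step(virtual_state, desired_state)

print("final virtual state:", virtual_state)
```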

Professor Kenji Tahara's work, approached from a mechanical point of view, contributes a series of innovations spanning hand design, control, and object manipulation. The whole framework is built on dynamic modeling, but in reality, because of uncertainty in the objects, the robot, and the sensors, the gap between model and reality is hard to ignore.

To partially bridge this gap between model and reality, Dr. Li Wei from EPFL (founder of Wuhan Cooper Technology) gave an afternoon report titled "Sensor-Guided Object Manipulation for Collaborative Applications: Design, Control and Learning". The report mainly introduced how learning can help with design, planning, and control in object manipulation. Most work on object manipulation or robotic grasping is usually limited to one aspect of the whole problem, such as hand design or grasp planning. But we have to admit that a robotic manipulation system is a whole, and the problem must be solved as a whole.



From a holistic perspective, however, there are many factors to consider. In particular, the first question we must answer is: what do we want the robot hand to do? In other words, what is the task?

As shown in the figure below, a robot hand used in the kitchen and one used on the factory floor obviously need different characteristics. But how do we derive a suitable robot-hand design from such abstract task descriptions? The next-generation flexible industrial robot project that Dr. Li Wei is developing in Wuhan in collaboration with Professor Kenji Tahara aims to solve exactly this problem through a combination of industry and academia.

Dr. Li Wei then introduced how machine learning can be used to learn the inverse kinematics of robotic hands. This work turns a complex nonlinear optimization problem into an efficient search problem, which makes it possible to compute the best grasp configuration online.
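The general pattern of trading offline computation for an online search can be sketched as follows. This is a toy example with a hypothetical two-joint planar finger, not Dr. Li's actual learned model: sample the joint space offline, store the resulting fingertip positions in a spatial index, and answer inverse-kinematics queries online with a nearest-neighbour lookup.

```python
# Hedged sketch: replace runtime nonlinear optimization with an offline
# sampling stage plus a fast online nearest-neighbour search.
import numpy as np
from scipy.spatial import cKDTree

def toy_forward_kinematics(q):
    """Hypothetical 2-joint planar finger: joint angles -> fingertip (x, y)."""
    l1, l2 = 0.05, 0.04
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

# Offline: densely sample the joint space and remember where the fingertip lands.
rng = np.random.default_rng(0)
joint_samples = rng.uniform(-np.pi / 2, np.pi / 2, size=(20000, 2))
tip_positions = np.array([toy_forward_kinematics(q) for q in joint_samples])
index = cKDTree(tip_positions)

# Online: "inverse kinematics" becomes a lookup, fast enough for online grasp planning.
desired_tip = np.array([0.06, 0.03])
_, nearest = index.query(desired_tip)
print("joint angles:", joint_samples[nearest],
      "reached tip:", toy_forward_kinematics(joint_samples[nearest]))
```

In a real hand the query would be a full grasp configuration rather than a single fingertip position, but the offline/online split is the same.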

In the second part, Dr. Li Wei introduced how touch can be combined to achieve dynamic grasp adaptation. This work extends Professor Tahara's virtual-frame idea, using machine learning to organically combine tactile information, virtual-frame information, and the control strategy. The system can not only judge whether a grasp is stable, but also let the robot make quick adjustments, making this the first truly dynamic closed-loop grasp.
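One way to picture such a closed loop, purely as an assumed illustration rather than the published controller, is a learned stability score computed from tactile and virtual-frame features, with the grip force adjusted whenever the score drops:

```python
# Minimal illustration of closed-loop grasp adaptation (assumed structure):
# a learned classifier maps tactile + virtual-frame features to a stability
# score, and the grip force is tightened on incipient slip.
import numpy as np

# Placeholder weights standing in for a model trained on labelled grasp trials.
W = np.array([1.5, -2.0, 0.8])
B = -0.2

def grasp_stability(tactile_shear, normal_force, frame_drift):
    """Return the probability that the current grasp is stable (logistic model)."""
    features = np.array([normal_force, tactile_shear, -frame_drift])
    return 1.0 / (1.0 + np.exp(-(W @ features + B)))

def adapt_grip(force, tactile_shear, frame_drift, step=0.5, max_force=20.0):
    """One closed-loop tick: tighten the grasp while stability is low."""
    p_stable = grasp_stability(tactile_shear, force, frame_drift)
    if p_stable < 0.8:                      # incipient slip detected
        force = min(force + step, max_force)
    return force, p_stable

force = 2.0
for _ in range(10):                         # simulated control ticks
    shear = np.random.uniform(0.5, 1.5)     # fake tactile shear reading
    drift = np.random.uniform(0.0, 0.01)    # fake virtual-frame drift
    force, p = adapt_grip(force, shear, drift)
print("final grip force:", force, "stability:", round(p, 2))
```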


Summary: at present, industry relies heavily on suction cups and two-finger grippers, both customized for specific parts; the customization cycle is long, system integration is complicated, and the end effector lacks flexibility. In academic research, a large amount of work still stays at purely geometric or static analysis such as force closure, and is generally open-loop; closed-loop control has not yet been achieved. Future end effectors, especially those used on collaborative robots, must be intelligent systems that combine design, planning, control, and learning. There is still plenty of room to explore, whether in research or in the development of industrial applications, and we look forward to more partners joining us.


Next, we will continue to publish the summaries of the remaining topics; discussion and exchange are welcome!

2. Sensors

3. Robot software

4. Safety

5. Standards

----------------------------------

Update:

Related slides download:

dropbox.com/s/snymyn56g

dropbox.com/s/imfcfl06c

The latest IROS workshop on this topic:

zkks.w3.kanazawa-u.ac.jp
