The robot that is learning to suture by watching videos of how surgeons do it


Suturing a patient after an operation is a relatively simple task for a doctor: after all, it is just a matter of sewing while following some basic guidelines. But is it simple for a robot? In a new project, a collaboration between the University of California and Intel, researchers put this to the test by teaching a robot to suture.

Under the name Motion2Vec, the project aims to have a robot suture patients with the precision of a human. To do so, the robot has been equipped with a semi-supervised deep learning system and learns by watching public surgical videos in which sutures are performed. From there, it is the AI's job to learn the surgeon's movements and then reproduce them with sufficient precision.

Imitating surgeons

The trick the makers of Motion2Vec have used is a technique known from image recognition: the Siamese neural network. The idea is that two identical networks, sharing the same weights, each receive a separate input, and after processing and comparing the two the system produces a single result. In other words, on one side the AI gets the video of the surgeon making the suture, and on the other it gets a recording of the robot practising. By comparing the two, the system learns to improve the precision of the robot's movements.
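To make that idea concrete, here is a minimal sketch in PyTorch of how a Siamese setup compares two inputs through one shared encoder. The layer sizes, the per-frame feature dimension, and the contrastive-loss margin are illustrative assumptions for this sketch and do not reproduce Motion2Vec's actual model.

# Minimal Siamese-network sketch (illustrative assumptions, not Motion2Vec's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    """One 'twin': the same weights are applied to both inputs."""
    def __init__(self, in_dim=64, embed_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128),
            nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, x):
        return self.net(x)

def contrastive_loss(z1, z2, same_label, margin=1.0):
    """Pull embeddings of matching pairs together, push mismatches apart."""
    dist = F.pairwise_distance(z1, z2)
    loss_same = same_label * dist.pow(2)
    loss_diff = (1 - same_label) * F.relu(margin - dist).pow(2)
    return (loss_same + loss_diff).mean()

# Toy usage: pretend x_surgeon holds features from the surgeon's video and
# x_robot holds features from the robot's own recording of the same motion.
encoder = SiameseEncoder()
x_surgeon = torch.randn(8, 64)   # hypothetical per-frame features
x_robot = torch.randn(8, 64)
same = torch.ones(8)             # 1 = the pair shows the same motion
loss = contrastive_loss(encoder(x_surgeon), encoder(x_robot), same)
loss.backward()                  # gradients flow through both "twins" at once

Because both inputs pass through the same encoder, anything the network learns from the surgeon's footage is automatically applied when it embeds the robot's own attempts, which is what makes the comparison meaningful.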

(Image: sample of the videos the robot has been watching.)

The researchers add that they needed a total of 78 videos from the JIGSAWS database to teach the robot. As a result, Motion2Vec learned to suture with 85.5% precision and an average error of 0.94 cm when hitting the exact point. That is not yet the level required to operate on humans, but fortunately the robot did not practise on humans.

A robot like this is hardly going to replace a surgeon. As we have seen time and again, AI tends to fail more in practice than it does in theory. However, it is plausible that in the future it will become an extra help in the operating room for very basic, monotonous tasks that can be handled by someone with a qualification not as high as the doctor's. The researchers say that, using this technique, the robot could learn more than just suturing, such as removing impurities from a wound.