Robots are strong, durable, precise and increasingly smart. Science fiction tells us they’ll eventually take over the world.
But there’s one thing that they can’t do well: see. Robots can be equipped with cameras, but there is usually a set of human eyes at the other end. As a result, robots that carry out free-flowing movements, rather than repetitive tasks, must be steered by a human operator.
That is an obstacle CUHK robotics engineer Liu Yun-hui has overcome. He has pioneered a novel solution that gives robots “eyes” using what is known as vision-based 3D motion control. A robot programmed with that sort of vision can operate independently in a factory, a warehouse or even the operating theatre, assisting surgeons.
The main reason that robots can’t see well is that they have trouble with depth perception. Humans assess depth naturally and instantaneously, but it is very hard for a robot to judge quickly and accurately how far apart objects are, or even to locate itself in a room. Whatever it looks at appears as a flat canvas.
It is because of this inability to interpret the environment that machines like iRobot’s Roomba vacuum cleaner move around a room randomly, changing course when they bump into a wall or an object. That approach is inefficient and makes them likely to “miss a spot”.
Professor Liu wants to eliminate that uncertainty. His vision-based 3D motion control allows robots to locate themselves without using the Global Positioning System, which does not function well indoors. It also allows robots to manipulate soft, “deformable” objects that bend under pressure.
Professor Liu, who is the director of CUHK’s T. Stone Robotics Institute and holds a post in the Department of Mechanical and Automation Engineering, is applying his work to medicine. Working with the Prince of Wales Hospital, he and his team have created two robotic systems that are now in testing.
One helps with nasal surgery by maneuvering an endoscope intelligently. At the moment, a surgeon must manipulate the endoscope manually with one hand while performing the surgery with the other, which is obviously inefficient. The new robot can move on its own and be steered with a wearable pedal on the doctor’s foot, freeing up both hands.
Another robot, which Professor Liu hopes will enter clinical trials this year, helps with hysterectomies. Performing the surgery requires moving the uterus so the surgeon can reach the tissue that must be cut before the uterus can be removed. In the current procedure, an assistant must hold the uterus at the right angle with an instrument.
It is a tedious task over the three hours of the operation. Assistants have even fallen asleep on the job. Professor Liu’s robot, equipped with its own vision-based controller, can find the necessary tissue and tilt the uterus as required. Of course, it does not get bored.
In the medical field, robots are unlikely to function completely autonomously any time soon. Safety measures require a surgeon to be present at all times. But a smart robot is a valuable assistant on a delicate task.
“The idea is to make the surgeon and robot collaborate together,” Professor Liu says. He believes robots will in the future be able to perform basic medical procedures entirely independently. The issue is whether they will be allowed to do so.
His team is the first in the world to attempt image-based deformable manipulation. His robots can manipulate deformable objects such as a sponge, a piece of clothing or even a human organ, programmed to track a set of points on the object and move them until they match their target positions. It is as if there were two separate squares that the robot must maneuver until they overlap.
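The flavor of such a controller can be sketched in a few lines. The snippet below is a minimal illustration, not Professor Liu’s published algorithm: it assumes an unknown mapping between robot motion and the image positions of tracked points, estimates that mapping online with a Broyden-style rank-one update (a standard tool in uncalibrated visual servoing), and nudges the robot so the tracked points converge on their targets. All names and dimensions are hypothetical.

```python
import numpy as np

def broyden_update(J, dy, dr, alpha=0.5):
    """Rank-one (Broyden) update of the estimated Jacobian that maps
    robot motion dr to observed feature motion dy."""
    denom = dr @ dr
    if denom > 1e-9:
        J = J + alpha * np.outer(dy - J @ dr, dr) / denom
    return J

def control_step(J, y, y_target, gain=0.2):
    """Proportional law: command a motion that shrinks the feature error."""
    return -gain * np.linalg.pinv(J) @ (y - y_target)

# Toy demo: an unknown linear map stands in for the real object/camera.
rng = np.random.default_rng(0)
true_map = rng.normal(size=(4, 3))   # 2 tracked points (4 coords), 3-DOF robot
y = true_map @ rng.normal(size=3)    # current (reachable) feature positions
y_target = np.zeros(4)               # desired "overlapping" positions
J = np.eye(4, 3)                     # rough initial guess of the Jacobian

for _ in range(100):
    dr = control_step(J, y, y_target)
    dy = true_map @ dr               # the object's actual response
    J = broyden_update(J, dy, dr)
    y = y + dy

print(np.linalg.norm(y - y_target))  # the feature error has shrunk
```

A real deformable object responds nonlinearly, which is why the mapping must be re-estimated continuously as the object moves.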
To program the robot correctly, it was first necessary to model the problem mathematically. Previous attempts over the last 30 years had included depth in the equations as a denominator. Since the depth was variable, this made it impossible to come up with a solution.
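To see where the denominator comes from, consider the textbook pinhole-camera relation (illustrative only; the article does not specify the team’s exact formulation). A point at coordinates $(X, Y, Z)$ in front of a camera with focal length $f$ projects to image coordinates

$$ u = f\,\frac{X}{Z}, \qquad v = f\,\frac{Y}{Z}, $$

and when the camera translates with velocity $(v_x, v_y, v_z)$, the feature velocities again carry the unknown depth $Z$ in the denominator:

$$ \dot{u} = \frac{-f\,v_x + u\,v_z}{Z}, \qquad \dot{v} = \frac{-f\,v_y + v\,v_z}{Z}. $$

Because $Z$ changes as the robot moves, the feedback law ends up nonlinear in a quantity the camera cannot observe.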
It took Professor Liu and his team four years of trial and error to recalibrate the equation. The final solution was surprisingly simple: remove depth from the equation altogether. The result has a “very beautiful property,” the professor says, in that it is a simple linear equation.
“Originally the mapping was nonlinear because of depth, but that made the control very difficult, and nobody could solve that,” Professor Liu says. “We made a small revision that changed the nonlinear to a linear relationship, and we could then apply existing theory to that linear relationship.”
The change eliminated depth and substituted a scanner effect: amplifying and reducing the image as necessary essentially compensates for the need to measure actual depth.
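One way to picture that scanner effect (again an illustrative reading, not the team’s published derivation): moving a flat pattern from a reference depth $Z_0$ to depth $Z$ simply rescales its image, and the scale factor is directly observable:

$$ u = f\,\frac{X}{Z} = s\,u_0, \qquad s = \frac{Z_0}{Z}, \qquad u_0 = f\,\frac{X}{Z_0}. $$

A controller that tracks the measurable scale $s$ never needs the unmeasurable depth $Z$ itself.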
Mathematically, the necessary change was glaring once he and his team had recognized it. “Sometimes it takes 10 years,” Professor Liu says. “You’re gradually improving the program, improving the intelligence of robots.”
Robots enabled with vision can operate independently in a warehouse or on a factory floor. Working with CUHK’s extension at the Shenzhen Research Institute, Professor Liu has created a “smart forklift” equipped with his control system. It is now moving parts around a factory in Jiangsu Province that makes components for high-speed trains.
The applications, if not quite endless, are very extensive. Professor Liu wants to see his vision system applied to baggage handling, the manipulation of soft substances such as food, and services such as care for the elderly. “There are still many problems to be solved to make robots work reliably and intelligently in the natural environment, like human beings,” he says.
By Alex Frew McMillan
This article was originally published on CUHK Homepage in March 2016.