A new soft sensor gives robots the power to see, feel, and make decisions

Oct 5, 2023

Groundbreaking soft sensors enable robots to both see and feel, paving the way for robots that can autonomously interact with and understand their environment.
A robotic gripper

Turn on YouTube and you can find all manner of soft robots performing agility and precision tasks. These robots are flexible by design and have been modeled after things like snakes and human hands. They stretch, bend, grip, and squeeze, letting them manipulate objects and navigate their surroundings. But to improve their capabilities, more efficient and accurate sensors are needed.

For example, a robot hand designed to delicately manipulate objects cannot rely on a bulky camera for vision. The hand also needs tactile sensing to know how rough an object is, how to grab it, and, if needed, how to manipulate it. Living organisms like humans have multiple sensory inputs that act simultaneously, letting us see, grab, climb, and touch our environment. Replicating this in robots, however, has been challenging.

Li Wen's lab group at the School of Mechanical Engineering and Automation at Beihang University in Beijing is dedicated to giving robots similar abilities by designing powerful sensors that detect multiple stimuli while remaining soft, flexible, and relatively cheap to produce.

Recently, his group developed a sensor that provides both tactile and touchless information. The technology, published in Advanced Functional Materials, was demonstrated in a soft robotic hand that perceives and describes objects, reporting their material, roughness, and shape.

Harnessing electrical phenomena

The sensor works by harnessing two types of current-generating effects. The first is the triboelectric effect, the transfer of electric charge between two materials when they rub against, slide past, or come close to each other. Think of the static electricity that lets a rubbed balloon stick to a wall. This part of the sensor perceives an object and identifies what material it is made of with a touchless sweep over its surface.

The second effect is called the giant magnetoelastic effect, which works by detecting the changes in a magnetic field caused by distorting an array of aligned magnets. More simply put, micromagnets arranged in a conductive film produce a magnetic field that gets distorted when the film presses on an object. Distortions in the magnetic field then induce a detectable electrical signal. This component detects features like roughness by touching the object.
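To make the physics concrete, here is a toy numerical sketch, not the paper's model, of why a distorted field yields an electrical signal: by Faraday's law of induction, a changing magnetic flux Φ through a pickup coil induces a voltage V = −N dΦ/dt, so a press that disturbs the micromagnets shows up as a voltage pulse. The coil turn count and flux profile below are illustrative assumptions.

```python
# Toy Faraday's-law illustration (assumed numbers, not the paper's model):
# a press briefly dips the magnetic flux through a pickup coil, and the
# resulting dPhi/dt appears as a measurable voltage pulse.
import numpy as np

N = 50                                   # assumed number of coil turns
t = np.linspace(0.0, 1.0, 1000)          # time, seconds
# flux dips as the film is pressed around t = 0.5 s, then recovers
phi = 1e-4 * (1.0 - 0.3 * np.exp(-((t - 0.5) / 0.05) ** 2))  # weber
v = -N * np.gradient(phi, t)             # induced voltage, volts
print(f"peak signal: {np.abs(v).max() * 1e3:.1f} mV")
```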

With these two sensing modes combined, electrical signals are generated as the sensor approaches an object (touchless) and as it presses on it (tactile). This provides the robot with raw input data about the object while also generating the power needed to run the sensor.
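In practice, each interaction with an object therefore yields two synchronized signal traces. As a hedged illustration of one plausible data layout, assumed here rather than taken from the paper, the two streams can be stacked into a single two-channel sample:

```python
# Assumed data layout (illustrative, not from the paper): one interaction
# produces a touchless trace and a tactile trace, stacked into one sample.
import numpy as np

touchless = np.random.randn(256)   # triboelectric trace from the no-contact sweep
tactile = np.random.randn(256)     # magnetoelastic trace from pressing the object
sample = np.stack([touchless, tactile])  # shape (2, 256): one two-channel example
```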

Teaching a robot to touch and describe

After achieving the engineering feat of building the device, the team next had to interpret and separate the two incoming signals.

For this, a machine learning algorithm inspired by biological systems, called a convolutional neural network, was trained to recognize incoming data from the touchless and tactile parts of the sensor. As with all machine learning algorithms, training is critical, and the team first had the hand approach and move along known objects without touching them, allowing the algorithm to learn the touchless signals.

Next, they performed trials where the robot hand touched an object and recorded this data. Eventually, after many trials, the algorithm learned the differences between the signals. Subsequent tests showed that the robot hand could tell a user an object's shape, the material it is made from, and how rough it is with 97% accuracy.
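The paper's own code is not reproduced here, but a minimal sketch of such a classifier, assuming the two-channel sample layout above and using invented class names and dimensions, might look like this in PyTorch:

```python
# Minimal sketch (assumptions throughout) of a 1D convolutional network
# that classifies two-channel (touchless + tactile) traces into object
# classes; the architecture and sizes are placeholders, not the authors'.
import torch
import torch.nn as nn

class BimodalSignalCNN(nn.Module):
    def __init__(self, n_classes: int = 4, signal_len: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=5, padding=2),  # 2 input channels
            nn.ReLU(),
            nn.MaxPool1d(2),                             # length 256 -> 128
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),                             # length 128 -> 64
        )
        self.classifier = nn.Linear(32 * (signal_len // 4), n_classes)

    def forward(self, x):  # x: (batch, 2, signal_len)
        return self.classifier(self.features(x).flatten(1))

model = BimodalSignalCNN()
batch = torch.randn(8, 2, 256)                # 8 recorded sweep-and-press samples
logits = model(batch)                         # (8, 4) scores over object classes
labels = torch.randint(0, 4, (8,))            # stand-in ground-truth labels
loss = nn.CrossEntropyLoss()(logits, labels)  # the usual training criterion
```

A real pipeline would train this on the labelled sweep-and-press recordings described above; only the two-channel input is motivated directly by the sensor design.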

With this information, the hand can then grip and sort objects based on user-specified criteria, like shape or material.
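That final sorting step can be as simple as a lookup over the predicted attributes. In this hypothetical sketch, the attribute labels and bin names are invented for illustration:

```python
# Hypothetical sorting rule (labels and bins are invented): route an
# object to a bin based on one user-specified attribute of its
# predicted description.
def choose_bin(description: dict, criterion: str = "material") -> str:
    bins = {
        "material": {"plastic": "bin A", "metal": "bin B", "wood": "bin C"},
        "shape": {"sphere": "bin A", "cube": "bin B", "cylinder": "bin C"},
    }
    return bins[criterion].get(description[criterion], "reject bin")

# e.g., a description produced by the classifier above
print(choose_bin({"material": "wood", "shape": "cube"}))  # -> bin C
```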

Closing the loop between sensing and reacting

Further optimization of the design is still required. As the group notes in the paper, the current sensor is sensitive to environmental conditions like humidity and temperature. However, they are confident that this can be overcome and that more sensing capabilities can be added to the robot.

“So now we have two layers and I think in the future we can add more. Each one representing a new sensing functionality,” Wen said.

Ultimately, Wen believes this technology can close the loop between a robot’s ability to sense the environment and react to it. “Having one sensing feedback besides vision especially the tactile sensing is really important,” he said. “This kind of descriptive information could be very important for the future behavioral decision of the robot.”

Robots could be empowered to decide how to move across different surfaces, or to pick the right tool for a job involving, for example, wood or rock, all without specific instructions from a human operator.

Reference: Li Wen, et al., An Intelligent Robotic System Capable of Sensing and Describing Objects Based on Bimodal, Self-Powered Flexible Sensors, Advanced Functional Materials (2023). DOI: 10.1002/adfm.202306368

Feature image credit: Li Wen et al.
