3D sense to help robots do household chores

In a step towards making robots more suitable for daily chores, researchers have developed a new technology that enables machines to make sense of 3D objects in a richer and more human-like way.

Besides helping a robot recognise an object, the new technology also lets it fill in the blind spots in its field of vision and reconstruct the parts it cannot see.

"That has the potential to be invaluable in a lot of robotic applications," said Ben Burchfiel from Duke University in Durham, North Carolina.

A robot that clears dishes off a table, for example, must be able to adapt to an enormous variety of bowls, platters and plates in different sizes and shapes, left in disarray on a cluttered surface.

Humans can glance at a new object and intuitively know what it is, whether it is right side up, upside down or sideways, in full view or partially obscured by other objects.

Even when an object is partially hidden, we mentally fill in the parts we cannot see.

The robot perception algorithm that Burchfiel and George Konidaris from Brown University developed can simultaneously guess what a new object is and how it is oriented, without examining it from multiple angles first.

It can also "imagine" any parts that are out of view, the researchers said.

A robot with this technology would not need to see every side of a teapot, for example, to know that it probably has a handle, a lid and a spout, and whether it is sitting upright or off-kilter on the stove.

The researchers said their approach, which was presented at the 2017 Robotics: Science and Systems Conference in Cambridge, Massachusetts, makes fewer mistakes and is three times faster than the best current methods.

The researchers trained their algorithm on a dataset of roughly 4,000 complete 3D scans of common household objects — an assortment of bathtubs, beds, chairs, desks, dressers, monitors, nightstands, sofas, tables and toilets.

Each 3D scan was converted into tens of thousands of little cubes, or voxels, stacked on top of one another like LEGO blocks, to make the shapes easier to process.
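
The cube representation can be illustrated with a short sketch. The snippet below is not the researchers' code; it simply bins an arbitrary point cloud into a coarse occupancy grid, with the `voxelize` function, the grid resolution and the random test cloud all chosen for illustration.

```python
import numpy as np

def voxelize(points, resolution=30):
    """Convert an (N, 3) point cloud into a binary occupancy grid.

    Each little cube (voxel) is marked 1 if any scanned point falls
    inside it. The resolution is illustrative, not the one used
    in the study.
    """
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    # Scale points into [0, resolution) along each axis
    scaled = (points - mins) / np.maximum(maxs - mins, 1e-9) * (resolution - 1)
    idx = scaled.astype(int)
    grid = np.zeros((resolution, resolution, resolution), dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Example: a random cloud of 10,000 points becomes a 30x30x30 occupancy grid
cloud = np.random.rand(10_000, 3)
print(voxelize(cloud).sum(), "occupied voxels")
```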

The algorithm learned categories of objects by combing through examples of each one and figuring out how they vary and how they stay the same, using a version of a technique called probabilistic principal component analysis, the researchers said.
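
To give a rough idea of that learning step, the sketch below fits one probabilistic PCA model per category on stand-in data, then uses the best-scoring model to guess a new object's category and fill in its shape. Scikit-learn's PCA (which implements the Tipping-Bishop probabilistic PCA model) stands in for the researchers' own, more elaborate formulation, and the data, categories and dimensions are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
categories = ["chair", "table", "toilet"]

# Stand-in training data: 100 flattened 30x30x30 voxel grids per category
# (random numbers here, in place of real scans).
train = {c: rng.random((100, 30 * 30 * 30)) for c in categories}

# Learn how examples of each category vary and what they share,
# by fitting a low-dimensional probabilistic PCA model per category.
models = {c: PCA(n_components=10).fit(X) for c, X in train.items()}

# For a new object, pick the category whose model explains it best
# (highest log-likelihood under that category's probabilistic PCA model) ...
new_object = rng.random((1, 30 * 30 * 30))
best = max(categories, key=lambda c: models[c].score(new_object))

# ... and "imagine" the full shape by projecting onto the learned subspace
# and back, which fills the object in with the category's typical structure.
completed = models[best].inverse_transform(models[best].transform(new_object))
print("guessed category:", best, "completed shape:", completed.shape)
```

The projection step at the end is what corresponds, loosely, to the "imagining" described above: once the object is matched to a learned category, the model's low-dimensional representation supplies plausible values for the voxels the robot never saw.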