Unity flexes its robotic muscles in Object Pose Estimation demo

A new demonstration from 3D content development platform Unity shows how the engine can improve robotics.

The new demo is called Object Pose Estimation and shows off Unity’s AI and machine learning (ML) capabilities. Training data is collected in Unity and used to train a deep neural network that predicts the pose of a cube. This model is then deployed in a simulated robot pick-and-place task.
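Unity hasn't published the demo's training code, but the pipeline it describes (images rendered in simulation, labelled with ground-truth poses, and fed to a network that regresses position and orientation) is straightforward to sketch. The PyTorch snippet below is a minimal illustration under those assumptions; the PoseNet architecture, tensor shapes, and synthetic batch are all hypothetical, not Unity's actual model.

```python
# Hypothetical sketch of pose-regression training on synthetic data.
# All names and shapes are illustrative assumptions, not Unity's pipeline.
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    """Small CNN mapping an RGB image to a 7-D pose (xyz + quaternion)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 7)  # 3 translation + 4 quaternion values

    def forward(self, x):
        z = self.features(x).flatten(1)
        out = self.head(z)
        pos, quat = out[:, :3], out[:, 3:]
        # Normalise the quaternion so it represents a valid rotation.
        quat = quat / quat.norm(dim=1, keepdim=True)
        return torch.cat([pos, quat], dim=1)

model = PoseNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One illustrative training step on a random batch, standing in for the
# labelled images a simulator would generate.
images = torch.rand(16, 3, 224, 224)  # rendered views of the cube
poses = torch.rand(16, 7)             # ground-truth poses from the simulator
optimizer.zero_grad()
loss = loss_fn(model(images), poses)
loss.backward()
optimizer.step()
```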

Dr Danny Lange, Senior Vice President of Artificial Intelligence at Unity, said: 

“This is a powerful example of a system that learns instead of being programmed, and as it learns from the synthetic data, it is able to capture much more nuanced patterns than any programmer ever could.

“Layering our technologies together shows how we are crossing a line, and we are starting to deal with something that is truly AI, and in this case, demonstrating the efficiencies possible in training robots.”

Specifically, the demo uses Unity’s AI and ML capabilities to simulate real-world robot deployments. Unity’s built-in physics engine provides a virtual approximation of the real-world forces robots may face, such as high winds or fast currents.

Such virtual trials not only help to ensure safe robot deployments but also enable simulations of rare events and environments that are difficult or expensive to replicate in the real world.

Lange continued:

“With Unity, we have not only democratized data creation, we’ve also provided access to an interactive system for simulating advanced interactions in a virtual setting.

“You can develop the control systems for an autonomous vehicle, for example, or here for highly expensive robotic arms, without the risk of damaging equipment or dramatically increasing cost of industrial installations.

“To be able to prove the intended applications in a high-fidelity virtual environment will save time and money for the many industries poised to be transformed by robotics combined with AI and Machine Learning.”

Unity’s latest demonstration follows a series of releases from the platform aimed at supporting the Robot Operating System (ROS).

ROS is a popular framework for writing robot software, and Unity supports it with a growing set of tools:

Unity’s ROS-TCP-Connector reduces latency between ROS nodes and Unity to enable robots to react in near real-time in simulated environments.
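For context, the ROS side of that link is an ordinary ROS node. The rospy sketch below publishes a pose message of the sort a Unity scene could subscribe to through the connector; the node name, topic name, and placeholder values are illustrative assumptions, not part of Unity's package.

```python
#!/usr/bin/env python
# Minimal ROS 1 node publishing a pose that a Unity scene could consume
# via the ROS-TCP-Connector. Node and topic names are assumptions.
import rospy
from geometry_msgs.msg import Pose

def main():
    rospy.init_node("cube_pose_publisher")
    pub = rospy.Publisher("/cube_pose", Pose, queue_size=10)
    rate = rospy.Rate(30)  # publish at 30 Hz
    while not rospy.is_shutdown():
        msg = Pose()
        msg.position.x = 0.5     # placeholder values; a real node would
        msg.orientation.w = 1.0  # publish the estimated cube pose
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    main()
```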

Robotics developers can use Unity’s URDF Importer to import a robot into a Unity scene. Imported robots will also benefit from an update to Unity last year which added ArticulationBody, a physics component that makes it easier to simulate robotic arms and kinematic chains with realistic physics and movement.
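URDF files themselves are plain XML, which the importer turns into a hierarchy of links and joints backed by ArticulationBody components. The snippet below writes out a deliberately minimal two-link arm to show the shape of the format; the robot is a made-up illustration, not one of Unity's samples.

```python
# Write a minimal two-link URDF of the kind Unity's URDF Importer reads.
# The robot itself is a made-up illustration.
URDF = """<?xml version="1.0"?>
<robot name="demo_arm">
  <link name="base_link"/>
  <link name="upper_arm"/>
  <joint name="shoulder" type="revolute">
    <parent link="base_link"/>
    <child link="upper_arm"/>
    <axis xyz="0 0 1"/>
    <limit lower="-1.57" upper="1.57" effort="10" velocity="1"/>
  </joint>
</robot>
"""

with open("demo_arm.urdf", "w") as f:
    f.write(URDF)
```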

You can find out more about Unity’s work in robotics here.

(Image Credit: Unity)
