Robotic Manipulation

Perception, Planning, and Control

Russ Tedrake

Note: These are working notes used for a course being taught at MIT. They will be updated throughout the Fall 2023 semester. Lecture videos are available on YouTube.


PDF version of the notes

You can also download a PDF version of these notes (updated much less frequently) from here.

The PDF version of these notes is autogenerated from the HTML version. There are a few conversion/formatting artifacts that are easy to fix (please feel free to point them out). But there are also interactive elements in the HTML version that are not easy to put into the PDF. When possible, I try to provide a link. But I consider the online HTML version to be the main version.

Table of Contents

  • Chapter 1: Introduction
  • Manipulation is more than pick-and-place
  • Open-world manipulation
  • These notes are interactive
  • Model-based design and analysis
  • Organization of these notes
  • Chapter 2: Let's get you a robot
  • Robot description files
  • Position-controlled robots
  • Position Control.
  • An aside: link dynamics with a transmission.
  • Torque-controlled robots
  • A proliferation of hardware
  • Simulating the Kuka iiwa
  • Dexterous hands
  • Simple grippers
  • Soft/underactuated hands
  • Other end effectors
  • If you haven't seen it...
  • Putting it all together
  • HardwareStation
  • HardwareStationInterface
  • HardwareStation stand-alone simulation
  • More HardwareStation examples
  • Chapter 3: Basic Pick and Place
  • Monogram Notation
  • Pick and place via spatial transforms
  • Spatial Algebra
  • Representations for 3D rotation
  • Forward kinematics
  • The kinematic tree
  • Forward kinematics for pick and place
  • Differential kinematics (Jacobians)
  • Differential inverse kinematics
  • The Jacobian pseudo-inverse
  • Invertibility of the Jacobian
  • Defining the grasp and pre-grasp poses
  • A pick and place trajectory
  • Differential inverse kinematics with constraints
  • Pseudo-inverse as an optimization
  • Adding velocity constraints
  • Adding position and acceleration constraints
  • Joint centering
  • Tracking a desired pose
  • Alternative formulations
  • Chapter 4: Geometric Pose Estimation
  • Cameras and depth sensors
  • Depth sensors
  • Representations for geometry
  • Point cloud registration with known correspondences
  • Iterative Closest Point (ICP)
  • Dealing with partial views and outliers
  • Detecting outliers
  • Point cloud segmentation
  • Generalizing correspondence
  • Soft correspondences
  • Nonlinear optimization
  • Precomputing distance functions
  • Global optimization
  • Non-penetration and "free-space" constraints
  • Free space constraints as non-penetration constraints
  • Looking ahead
  • Chapter 5: Bin Picking
  • Generating random cluttered scenes
  • Falling things
  • Static equilibrium with frictional contact
  • Spatial force
  • Collision geometry
  • Contact forces between bodies in collision
  • The Contact Frame
  • The (Coulomb) Friction Cone
  • Static equilibrium as an optimization problem
  • Contact simulation
  • Model-based grasp selection
  • The contact wrench cone
  • Colinear antipodal grasps
  • Grasp selection from point clouds
  • Point cloud pre-processing
  • Estimating normals and local curvature
  • Evaluating a candidate grasp
  • Generating grasp candidates
  • The corner cases
  • Programming the Task Level
  • State Machines and Behavior Trees
  • Task planning
  • Large Language Models
  • A simple state machine for "clutter clearing"
  • Chapter 6: Motion Planning
  • Inverse Kinematics
  • From end-effector pose to joint angles
  • IK as constrained optimization
  • Global inverse kinematics
  • Inverse kinematics vs differential inverse kinematics
  • Grasp planning using inverse kinematics
  • Kinematic trajectory optimization
  • Trajectory parameterizations
  • Optimization algorithms
  • Sampling-based motion planning
  • Rapidly-exploring random trees (RRT)
  • The Probabilistic Roadmap (PRM)
  • Post-processing
  • Sampling-based planning in practice
  • Graphs of Convex Sets (GCS)
  • Graphs of Convex Sets
  • GCS (Kinematic) Trajectory Optimization
  • Convex decomposition of (collision-free) configuration space
  • Variations and Extensions
  • Time-optimal path parameterizations
  • Chapter 7: Mobile Manipulation
  • A New Cast of Characters
  • What's different about perception?
  • Partial views / active perception
  • Unknown (potentially dynamic) environments
  • Robot state estimation
  • What's different about motion planning?
  • Wheeled robots
  • Holonomic drives
  • Nonholonomic drives
  • Legged robots
  • What's different about simulation?
  • Mapping (in addition to localization)
  • Identifying traversable terrain
  • Chapter 8: Manipulator Control
  • The Manipulator-Control Toolbox
  • Assume your robot is a point mass
  • Trajectory tracking
  • (Direct) force control
  • Indirect force control
  • Hybrid position/force control
  • The general case (using the manipulator equations)
  • Joint stiffness control
  • Cartesian stiffness control
  • Some implementation details on the iiwa
  • Peg in hole
  • Chapter 9: Object Detection and Segmentation
  • Getting to big data
  • Crowd-sourced annotation datasets
  • Segmenting new classes via fine tuning
  • Annotation tools for manipulation
  • Synthetic datasets
  • Self-supervised learning
  • Even bigger datasets
  • Object detection and segmentation
  • Pretraining with self-supervised learning
  • Leveraging large-scale models
  • Chapter 10: Deep Perception for Manipulation
  • Pose estimation
  • Pose representation
  • Loss functions
  • Pose estimation benchmarks
  • Limitations
  • Grasp selection
  • (Semantic) Keypoints
  • Dense Correspondences
  • Task-level state
  • Other perceptual tasks / representations
  • Chapter 11: Reinforcement Learning
  • RL Software
  • Policy-gradient methods
  • Black-box optimization
  • Stochastic optimal control
  • Using gradients of the policy, but not the environment
  • REINFORCE, PPO, TRPO
  • Control for manipulation should be easy
  • Value-based methods
  • Model-based RL
  • Chapter 12: Soft Robots and Tactile Sensing
  • Soft robot hardware
  • Soft-body simulation
  • Tactile sensing
  • What information do we want/need?
  • Visuotactile sensing
  • Whole-body sensing
  • Simulating tactile sensors
  • Perception with tactile sensors
  • Control with tactile sensors
  • Appendix A: Spatial Algebra
  • Position, Rotation, and Pose
  • Spatial velocity
  • Appendix B: Drake
  • Online Jupyter Notebooks
  • Running on Deepnote
  • Enabling licensed solvers
  • Running on your own machine
  • Getting help
  • Appendix C: DrakeGym Environments
  • Appendix D: Setting up your own "Manipulation Station"
  • Message Passing
  • Kuka LBR iiwa + Schunk WSG Gripper
  • Franka Panda
  • Intel Realsense D415 Depth Cameras
  • Appendix E: Miscellaneous
  • How to cite these notes
  • Annotation tool etiquette
  • Some great final projects
  • Please give me feedback!

You can find documentation for the source code supporting these notes here.

I've always loved robots, but it's only relatively recently that I've turned my attention to robotic manipulation. I particularly like the challenge of building robots that can master physics to achieve human/animal-like dexterity and agility. It was passive dynamic walkers and the beautiful analysis that accompanies them that first helped cement this centrality of dynamics in my view of the world and my approach to robotics. From there I became fascinated with (experimental) fluid dynamics, and the idea that birds with articulated wings actually "manipulate" the air to achieve incredible efficiency and agility. Humanoid robots and fast-flying aerial vehicles in clutter forced me to start thinking more deeply about the role of perception in dynamics and control. Now I believe that this interplay between perception and dynamics is truly fundamental, and I am passionate about the observation that relatively "simple" problems in manipulation (how do I button up my dress shirt?) expose the problem beautifully.

My approach to programming robots has always been very computational/algorithmic. I started out using tools primarily from machine learning (especially reinforcement learning) to develop the control systems for simple walking machines; but as the robots and tasks got more complex I turned to more sophisticated tools from model-based planning and optimization-based control. In my view, no other discipline has thought so deeply about dynamics as has control theory, and the algorithmic efficiency and guaranteed performance/robustness that can be obtained by the best model-based control algorithms far surpasses what we can do today with learning control. Unfortunately, the mathematical maturity of controls-related research has also led the field to be relatively conservative in their assumptions and problem formulations; the requirements for robotic manipulation break these assumptions. For example, robust control typically assumes dynamics that are (nearly) smooth and uncertainty that can be represented by simple distributions or simple sets; but in robotic manipulation, we must deal with the non-smooth mechanics of contact and uncertainty that comes from varied lighting conditions, and different numbers of objects with unknown geometry and dynamics. In practice, no state-of-the-art robotic manipulation system to date (that I know of) uses rigorous control theory to design even the low-level feedback that determines when a robot makes and breaks contact with the objects it is manipulating. An explicit goal of these notes is to try to change that.

In the past few years, deep learning has had an unquestionable impact on robotic perception, unblocking some of the most daunting challenges in performing manipulation outside of a laboratory or factory environment. We will discuss relevant tools from deep learning for object recognition, segmentation, pose/keypoint estimation, shape completion, etc. Now relatively old approaches to learning control are also enjoying an incredible surge in popularity, fueled in part by massive computing power and increasingly available robot hardware and simulators. Unlike learning for perception, learning control algorithms are still far from a technology, with some of the most impressive looking results still being hard to understand and to reproduce. But the recent work in this area has unquestionably highlighted the pitfalls of the conservatism taken by the controls community. Learning researchers are boldly formulating much more aggressive and exciting problems for robotic manipulation than we have seen before -- in many cases we are realizing that some manipulation tasks are actually quite easy, but in other cases we are finding problems that are still fundamentally hard.

Finally, it feels that the time is ripe for robotic manipulation to have a real and dramatic impact in the world, in fields from logistics to home robots. Over the last few years, we've seen UAVs/drones transition from academic curiosities into consumer products. Even more recently, autonomous driving has transitioned from academic research to industry, at least in terms of dollars invested. Manipulation feels like the next big thing that will make the leap from robotic research to practice. It's still a bit risky for a venture capitalist to invest in, but nobody doubts the size of the market once we have the technology. How lucky are we to potentially be able to play a role in that transition?

So this is where the notes begin... we are at an incredible crossroads between learning and control and robotics with an opportunity to have immediate impact in industrial and consumer applications and potentially even to forge entirely new eras for systems theory and controls. I'm just trying to hold on and to enjoy the ride.

A manipulation toolbox

Another explicit goal of these lecture notes is to provide high-quality implementations of the most useful tools in a manipulation scientist's toolbox. When I am forced to choose between mathematical clarity and runtime performance, the clear formulation is always my first priority; I will try to include a performant formulation, too, if possible, or give pointers to alternatives. Manipulation research is moving quickly, and I aim to evolve these notes to keep pace. I hope that the software components provided in Drake and in these notes can be directly useful to you in your own work.
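To give a flavor of what using these components looks like, here is a minimal pydrake sketch that loads an arm model and simulates one second of passive dynamics. Treat it as a sketch, not code from the notes: the model URL and frame name are illustrative and vary across Drake versions.

    import numpy as np
    from pydrake.all import (AddMultibodyPlantSceneGraph, DiagramBuilder,
                             Parser, Simulator)

    # Build a diagram with a physics engine (MultibodyPlant) and a geometry
    # engine (SceneGraph), then parse a robot model into the plant.
    builder = DiagramBuilder()
    plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=1e-3)
    Parser(plant).AddModelsFromUrl(
        "package://drake/manipulation/models/iiwa_description/sdf/"
        "iiwa7_no_collision.sdf")  # illustrative path; varies by version
    plant.WeldFrames(plant.world_frame(), plant.GetFrameByName("iiwa_link_0"))
    plant.Finalize()
    diagram = builder.Build()

    # Fix the actuation input to zero torque and simulate for one second.
    simulator = Simulator(diagram)
    plant_context = plant.GetMyContextFromRoot(simulator.get_mutable_context())
    plant.get_actuation_input_port().FixValue(
        plant_context, np.zeros(plant.num_actuators()))
    simulator.AdvanceTo(1.0)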

If you would like to replicate any or all of the hardware that we use for these notes, you can find information and instructions in the appendix.

As you use the code, please consider contributing back (especially to the mature code in Drake). Even questions/bug reports can be important contributions. If you have questions or find issues with these notes, please submit them here.


© Russ Tedrake, 2020-2023

Robot Simulation

Author robot scenarios and incorporate sensor models to test autonomous robot algorithms in simulated environments. Validate your robot models in virtual simulation environments by co-simulating with Gazebo, Unreal Engine®, and Simulink® 3D Animation™.

  • Cuboid Scenario Simulation: scenarios with static meshes, robot platforms, and sensors
  • High-Fidelity Simulation: author scenes with realistic graphics and generate high-fidelity sensor data
  • Gazebo Co-Simulation: high-fidelity simulation using co-simulation with Gazebo
  • Bin-Picking Simulation: manipulator pick-and-place and bin-picking simulations
  • Warehouse Robot Simulation: multi-robot management, obstacle avoidance, and inventory management

Featured Examples

Gazebo Simulation of Semi-Structured Intelligent Bin Picking for UR5e Using YOLO and PCA-Based Object Detection


Detailed workflow for simulating intelligent bin picking using the Universal Robots UR5e cobot in Gazebo. The MATLAB project provided with this example consists of Initialize, DataGeneration, Perception, Motion Planning, and Integration modules (project folders) that create a complete bin-picking workflow.

Automate Virtual Assembly Line with Two Robotic Workcells


Simulation of an automated assembly line to demonstrate virtual commissioning applications. The assembly line is based on a modular industrial framework created by ITQ GmbH known as Smart4i. The system consists of four components: two robotic workcells connected by a shuttle track, and a conveyor belt. One of the two robots places cups onto the shuttle, while the other robot places balls in the cups. A slider then delivers those cups to a container. This simulation uses Stateflow® for the system control logic and demonstrates how you can use Unreal Engine™ to simulate a complete virtual commissioning application in Simulink®. For an example showing how to deploy the main logic in Stateflow using Simulink PLC Coder™, see Generate Structured Text Code for Shuttle and Robot Control.

Perform Path Planning Simulation with Mobile Robot


Create a scenario to simulate a mobile robot navigating a room. The example demonstrates how to create a scenario, model a robot platform from a rigid body tree object, obtain a binary occupancy grid map from the scenario, and plan a path for the mobile robot to follow using the mobileRobotPRM path planning algorithm.
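mobileRobotPRM is MATLAB functionality; purely as a language-neutral sketch of the underlying idea, here is a minimal probabilistic roadmap in Python on a hypothetical 0/1 occupancy grid (all names and parameters are illustrative, not the MATLAB API):

    import heapq
    import numpy as np

    def prm_path(grid, start, goal, n_nodes=200, radius=10.0, seed=0):
        # Sample random free cells; node 0 is the start, node 1 the goal.
        rng = np.random.default_rng(seed)
        free = np.argwhere(grid == 0)
        nodes = [tuple(start), tuple(goal)] + [
            tuple(free[i]) for i in rng.choice(len(free), n_nodes)]

        def segment_free(a, b):
            # Reject an edge if any sample along the segment is occupied.
            n = 2 * int(np.hypot(b[0] - a[0], b[1] - a[1])) + 2
            for t in np.linspace(0.0, 1.0, n):
                if grid[round(a[0] + t * (b[0] - a[0])),
                        round(a[1] + t * (b[1] - a[1]))]:
                    return False
            return True

        # Connect nearby node pairs with collision-free straight segments.
        edges = {i: [] for i in range(len(nodes))}
        for i in range(len(nodes)):
            for j in range(i + 1, len(nodes)):
                d = np.hypot(nodes[i][0] - nodes[j][0],
                             nodes[i][1] - nodes[j][1])
                if d <= radius and segment_free(nodes[i], nodes[j]):
                    edges[i].append((j, d))
                    edges[j].append((i, d))

        # Dijkstra over the roadmap from start (node 0) to goal (node 1).
        dist, prev, pq = {0: 0.0}, {}, [(0.0, 0)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == 1:  # reached the goal: walk back through prev
                path, k = [nodes[1]], 1
                while k != 0:
                    k = prev[k]
                    path.append(nodes[k])
                return path[::-1]
            if d > dist.get(u, np.inf):
                continue
            for v, w in edges[u]:
                if d + w < dist.get(v, np.inf):
                    dist[v], prev[v] = d + w, u
                    heapq.heappush(pq, (d + w, v))
        return None  # roadmap did not connect start and goal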

Control and Simulate Multiple Warehouse Robots


Control and simulate multiple robots working in a warehouse facility or distribution center.



From Robot Simulation to the Real World

Louise Poubel overviews Gazebo's architecture with examples of projects using Gazebo, describing how to bridge virtual robots to their physical counterparts.

Louise Poubel is a software engineer at Open Robotics working on free and open source tools for robotics, like the robot simulator Gazebo and the Robot Operating System (ROS).

About the conference

QCon.ai is a practical AI and machine learning conference bringing together software teams working on all aspects of AI and machine learning.

Poubel: Let's get started. About six years ago, there was this huge robotics competition going on. The stakes were really high, the prizes were in the millions of dollars, and robots had to do tasks like this: driving vehicles in a disaster scenario, handling the tools a person would handle in that kind of scenario, and traversing some tough terrain. The same robot had to do these tasks one after the other, in sequence. There were teams from all around the world competing, and as you can imagine (those pictures are from the finals in 2015), these tasks were really hard at the time, and they are still tough tasks for robots today.

The competition didn't start right there with "Let's do it with the physical robots." It actually had a first phase that took place entirely in simulation. The robots had to do the same things in simulation: drive a little vehicle inside the simulator, handle tools inside the simulator just like they would later in the physical competition, and traverse some tough terrain. The way the competition was structured, the teams that did best in the simulated competition would be granted a physical robot to compete in the physical competition later, so teams that couldn't afford their own physical robots, or didn't have a mechanical design of their own, could use the robots they received from the competition.

You can imagine the stakes were really high; these robots cost millions of dollars, and the simulation phase, which started in 2013, was a fierce competition as well. Teams were being very creative about how they solved things inside the simulation, and some teams had very interesting solutions to some of the problems. You can see that this is a very creative solution; it works, and it got the team qualified, but there is a very important little detail: you can't do that with the physical robot. The arms are not strong enough to withstand the robot's weight like that, and the hands are actually very delicate, so you can't be banging them on the floor like this.

You would never try to do this with the physical robot, but they did it in simulation and they qualified to compete later on with the physical robot. It's not like they didn't know; it's not like they tried this with the real robot and broke a million-dollar machine. They knew that there is this gap between the reality of the simulation and the reality of the physical world, and there always will be.

Today, I'll be talking a little bit about this process of going from simulation to real life, to the real robot interacting with the physical world, and about some of the things we have to be aware of when we make this transition: when we train things in simulation and then put the same code we wrote in simulation onto the real robot. We have to be aware of the compromises made during the simulation, and of the simplifying assumptions made while designing that simulation.

I'll be talking about this in the context of the simulator called Gazebo, which is where I'm running this presentation right now, a simulator that has been around for over 15 years. It's open source and free, and people have been using it for a variety of different use cases all around the world. The reason I'm focusing on Gazebo is that I am one of the core developers, and have been for the past five years; I work at Open Robotics as a software engineer. This picture here is from my master thesis, back when I still dealt with physical robots, not just robots that are virtual inside the screen. Later, I'll also talk a little about my experience working on that project, where I used simulation first and then went to the physical robot.

At Open Robotics, we work on open source software for robots, and Gazebo is one of the projects. Another project that we have, which some people here may have heard of, is ROS, the Robot Operating System, and I'll mention it a little bit later as well. We are a team of around 30 people all around the world; I'm right here in California, in the headquarters. All of us are split between Gazebo, ROS, and some other projects, and everything that we do is free and open source.

Why Use Simulation?

For people here who are not familiar with robotics simulation, you may be wondering why, why would you even use simulation? Why don't you just do your whole development directly in the physical robot since that's the final goal, you want to control that physical robot. There are many different reasons, I selected a few that I think would be important for this kind of crowd here who are interested in AI. The first important reason is you can get very fast iterations when dealing with a simulation that is always inside your computer.

Imagine if you're dealing with a drone that is flying one kilometer away inside a farm, and every time that you change one line of code, you have to fly the drone and the drone falls and you have to run and pick it up, and fix it, and then put it to fly again, that doesn't scale. You can iterate on your code, everybody who's a software engineer knows that you don't get things right the first time and you keep trying, you can keep tweaking your code. With simulation, you can iterate much quicker than you would in a physical robot.

You can also spare the hardware; hardware can be very expensive, and mistakes can be very expensive too. If you have a one-million-dollar robot, you don't want to be wearing out its parts, and you don't want to risk it falling and breaking parts all the time. In simulation, the robots are free; you just reset and the robot is back in one piece. There is also the safety matter: if you're developing with a physical robot and you are not sure exactly what the robot is going to do yet, you're in danger, depending on the size of the robot, what the robot is doing, and how it is moving in that environment. It's much safer to do the risky things in simulation first, and then go to the physical robot.

Related to all of this is scalability; in simulation, it's free. You can have 1,000 simulations running in parallel, while having 1,000 robots training and doing things in parallel costs much more money. Your whole team might share one robot, and if all the developers are trying to use that same robot, they are not going to move as fast as if each were working in a separate simulation.

When Simulation is Being Used

When are people using simulation? The part that most people here would be interested in is machine learning training. For training, you usually need thousands or millions of repetitions for your robot to learn how to perform a task. You don't want to do that on the real hardware, for all the reasons I mentioned before. This is a big one, and people are using Gazebo and other simulators for this goal. Besides that, there's also development: people doing good old-fashioned programming, sending commands to the robot to make it do what they want, to follow a line, or to pick up an object using some computer vision.

All these developments people were doing in simulation for the reasons I said before, but there's also prototyping. Sometimes you don't even have the physical robot yet, and you want to create the robot in simulation first, see how things work, and tweak the physical parameters of the robot even before you manufacture it. There's also testing: a lot of people already run CI on their robot code, so every time you make a change, maybe nightly or at every pull request, you run that simulation to see if your robot's behavior is still what it should be.

What You Can Simulate

What can people simulate inside Gazebo? These are some examples that I took from the ignitionrobotics.org website, where you can get free models for robotic simulation. You can see there are some ground vehicles here; all these examples are wheeled, but you can also have legged robots, either bipeds with two legs, quadrupeds, or any other kind of legged robot. You can see there are some smaller robots, there are self-driving cars with sensors, and some other form factors. There are also flying robots, both fixed-wing aircraft and quadcopters, hexacopters, you name it. Some more humanoid-like robots: this one is from NASA, and this one is the PR2 robot. This one is on wheels, but you could have a robot like Atlas, which I showed before, that has legs. Besides these, people are also simulating industrial robots and underwater robots. There are all sorts of robots being simulated inside Gazebo.

It all starts with how you describe your model. For all those models that I showed you before, I showed you the visual appearance, and you may think, "This is just a 3D mesh." There's so much more to it: for the simulation, you need all the physics information about the robot, like its dynamics. Where is the center of mass? What's the friction between each part of the robot and the external world? How bouncy is it? Where exactly are the joints connected, and are they springy? All this information has to be embedded into that robot model.

All those models that I showed you before are described in a format called the Simulation Description Format, SDF. This format doesn't describe just the robot; it also describes everything else in your scene, from the visual appearance, to where the lights are positioned and the characteristics of the lights and the colors, to whether there is wind or a magnetic field. Every single thing inside your simulation world is described using this format. It is an XML format, so everything is described with XML tags: you have a tag for the specular color of your materials, for example, or a tag for their friction.
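As a rough sketch of what that XML looks like, the snippet below assembles a minimal (and deliberately incomplete) SDF world in Python; the element names follow the SDF specification, but a real world file needs more detail, such as poses, full inertia, and visual geometry:

    import xml.etree.ElementTree as ET

    # A minimal SDF fragment: one box model with a mass and a friction tag.
    sdf = ET.Element("sdf", version="1.6")
    world = ET.SubElement(sdf, "world", name="demo")
    model = ET.SubElement(world, "model", name="box")
    link = ET.SubElement(model, "link", name="link")
    ET.SubElement(ET.SubElement(link, "inertial"), "mass").text = "1.0"
    collision = ET.SubElement(link, "collision", name="collision")
    geometry = ET.SubElement(collision, "geometry")
    ET.SubElement(ET.SubElement(geometry, "box"), "size").text = "0.1 0.1 0.1"
    ode = ET.SubElement(ET.SubElement(
        ET.SubElement(collision, "surface"), "friction"), "ode")
    ET.SubElement(ode, "mu").text = "0.8"  # Coulomb friction coefficient

    print(ET.tostring(sdf, encoding="unicode"))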

But there's only so far that you can go with XML, sometimes you need more flexibility to put more complex behavior there, some more complex logic. For that, you use C++ plugins, Gazebo provides a variety of different interfaces that you can use to change things in simulation from the rendering side of things, so you can write a C++ plugin that implements different visual characteristics, makes things blink in different ways that you wouldn't be able to do just with the XML. The same goes for the physics, you can implement some different sensor noise models that you wouldn't be able to just with the SDF description.

The main programming interface to Gazebo is C++ right now, but I'll talk a little bit later about how you can use other languages to interact with simulation in very meaningful ways.

When people think about robot simulation, the first thing that comes to mind is the physics: how is the robot colliding with other things in the world? How is gravity pulling the robot down? That's indeed the main part of the simulation. Unlike some other simulators, Gazebo doesn't implement its own physics engine. Instead, we have an abstraction layer that other people can use to integrate other physics engines. Right now, if you download the latest version of Gazebo, which is Gazebo 10, you're going to get the four physics engines that we support at the moment. The default is the Open Dynamics Engine, ODE, but we also support DART, Bullet, and Simbody. These are all external projects that are also open source, but they are not part of the core Gazebo code.

Instead, we have this abstraction layer. What happens is that you describe your world only once, you write your SDF file once, you write your C++ plugins only once, and at run time you can choose which physics engine to run with. Depending on your use case, you might prefer one or the other, according to how many robots you have and the kinds of interactions you have between objects, whether you're doing manipulation or more robot locomotion. All these things will affect which physics engine you choose to use in Gazebo.

Let's look a little bit at my little assistant for today; this is NAO. Let's see some of the characteristics of the physics simulation that you should be aware of when you're planning to use simulation and then bring the code to the physical world. Some of the simplifying assumptions you can see are, for example: if I visualize the collisions of the model here (let me make it transparent), you see these orange boxes, which are what the physics engine is actually seeing. The physics engine doesn't care about these blue and white parts; for collision purposes, it's only calculating these boxes. It's not that you couldn't use the more complex geometry; it would just be very computationally expensive and not really worth it. It really depends on your final use case.

If you're really interested in the details of how the parts are colliding with each other, then you want to use a more complex mesh, but for most use cases you're only interested in whether the robot really bumped into something, and for that, an approximation is much better, and you gain so much in simulation performance. You have to be aware of this before you put the code on a physical robot, and you have to be aware of how much you can tune it. Depending on what you're using the robot for, you may want to choose these collisions a little differently.

Some of the things you can see here, for example: I didn't put collisions on the fingers, so the fingers just pass through here. If you're doing manipulation, you obviously need collisions for the fingers, but if you're just making the robot play soccer, for example, you don't care about finger collisions; just remove them and gain a little bit of performance in your simulation. You can also see that the collision is actually hitting this box here, but if you're not looking at the collision geometry, the complex part itself makes it look like the robot is floating a little bit. For most use cases you really want the simplified shapes, but you have to keep that in mind before you go to the physical robot.

Another simplifying thing that you usually encounter: let's take a look at the joints and the center of mass of the robot. This is the center of mass for each of the parts of the robot, and you can see here the axes of the joints; on the neck, you can see that there is a joint up there that lets the neck go up and down, and then the neck can move like this. I think the robot has a total of 25 joints, and this description is made to spec: this is what the perfect robot would be like, and that's what you put in simulation. In reality, your physically manufactured robot is going to deviate a lot from this. The joints are not going to be perfectly aligned on both sides of the robot, one arm is going to be a little heavier than the other, and the center of mass may not be exactly in the center; maybe the battery moved inside and it's a little to the side. If you train your algorithms with a perfect robot inside the simulation, once you take that to the physical robot, if it's overfitting to the perfect model, it's not going to work on the real one.

One thing that people usually do is randomize this a little while training your algorithms: for each iteration, you move the center of mass a little bit, you decrease or increase the mass a little, you change the joints, you change all the parameters of the robot. The idea is not that you're going to find the real robot, because that doesn't exist: each physical robot is manufactured differently, and from one to the other they're going to differ. Even a single robot will change over time; it loses a screw and suddenly the center of mass has shifted. The idea of randomization is not to find the real world; it's to be robust over a wide enough range of variation that, once you put your policy on a real robot, the real robot falls somewhere in that range.
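A minimal sketch of that recipe, assuming the model lives in an SDF/XML file (the file name and the five-percent range are illustrative): perturb the nominal parameters before each training episode.

    import random
    import xml.etree.ElementTree as ET

    def randomized_model(sdf_path, scale=0.05, rng=random.Random(0)):
        # Perturb every <mass> element by up to +/- 5%; the same pattern
        # extends to inertia, joint limits, friction, and so on.
        tree = ET.parse(sdf_path)
        for mass in tree.iter("mass"):
            mass.text = str(float(mass.text) * (1.0 + rng.uniform(-scale, scale)))
        return ET.tostring(tree.getroot(), encoding="unicode")

    # One freshly perturbed model per training episode:
    # for episode in range(100000):
    #     model_xml = randomized_model("robot.sdf")
    #     ... spawn model_xml in the simulator and run the episode ...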

These are some of the interesting things; there are a bunch of others, like inertia, which is nice to look at too, but for that the robot can't be transparent anymore. Here is a little clip from my master thesis; I did it with a NAO robot, and I did most of the work inside simulation. Only when I had it working in simulation did I go and put the code on the real robot. A good rule of thumb is: if it works in simulation, it may work on the real robot; if it doesn't work in simulation, it most probably is not going to work on the real robot. So at least you can rule out all the cases that wouldn't work.

By the time I got here, I had put enough tolerances in the code, and I had also tested it a lot with the physical robot, because it's important to test periodically with the physical robot too, so I was confident that the algorithm was working. You can see that there are someone's hands there in case something goes wrong, and this is mainly because of a thing we just hadn't modeled in simulation. On the physical robot, I was putting so much strain on one of the feet all the time, because I was trying to balance, that the motors in the ankles were getting very hot, and the robot comes with a built-in safety mechanism where it just screams, "Motor hot," and turns off all of its joints. The poor thing had its forehead all scratched, so the hands are there for that kind of case.

Let's talk a little bit about sensors. We talked about physics, how your robot interacts with the world and how you describe its dynamics and kinematics, but how about the way the robot consumes information from the world in order to make decisions? Gazebo supports over 20 different types of sensors: cameras, GPS, IMUs, you name it. If you can put it on a robot, we support it one way or another. It's important to know that, by default, the simulation is going to give you very perfect data. It's always good to modify the data a little bit, to add that randomization, add that noise, so that your data is not so perfect.

Let's go back to NAO; it has a few sensors right now. Let's take a look at the cameras. I put two cameras in: one camera with noise and one camera with a perfect image. You can see the difference between them: this one is perfect, it doesn't have any noise; it's basically what you're seeing through the user interface of the simulator. And here, you can see that it has noise; I put in a little too much. If you have a camera on a real robot with this much noise, maybe you should buy a new camera. I put some noise here, and you can see there is also some distortion, because real cameras have a little bit of a fisheye effect, or the opposite, so you always have to take that into account. I did this all by passing parameters in XML; these are things that Gazebo just provides for you. But if your lens has a different kind of distortion, or you want to implement a different kind of noise (this is very simple Gaussian noise), you can always write a C++ plugin for it.

Let's take a look at another sensor. That was a rendering sensor, using the rendering engine to collect information, but there are also physical sensors, like an altimeter. I put this ball bouncing here, and it has an altimeter; we can take a look at the vertical position, and I also made it quite noisy, so you can see that the data is not perfect. If I hadn't put noise there, it would just look like a perfect parabola, because that's what the simulator is doing: calculating everything perfectly for you. This is more like what you would get from a real sensor, and I also set the update rate very low, so the graph looks better. The simulation is running at 1,000 hertz and, in theory, you could get data at 1,000 hertz, but then you have to ask: would your real sensor give you data at that rate, and would it have some delay? You can tweak all these little things in the simulation.
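The same idea in miniature, outside any particular simulator: degrade a perfect 1,000-hertz signal to a noisy, lower-rate one (the rates and noise level here are illustrative).

    import numpy as np

    rng = np.random.default_rng(0)

    def degrade(perfect, sim_hz=1000, sensor_hz=50, noise_std=0.02):
        # Subsample the perfect simulator signal to the sensor's update
        # rate, then add zero-mean Gaussian noise, as a real sensor would.
        samples = np.asarray(perfect)[::sim_hz // sensor_hz]
        return samples + rng.normal(0.0, noise_std, size=samples.shape)

    # A perfect parabola (falling ball) vs. what a noisy altimeter reports:
    t = np.linspace(0.0, 1.0, 1000)
    height = 5.0 - 0.5 * 9.81 * t**2
    noisy_height = degrade(height)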

Another thing to think about is interfaces. When you're programming your physical robot, depending on the robot you're using, it may provide an SDK or some APIs that you can use to program it, maybe from the manufacturer, maybe something else. But then, how do you do the same in simulation? You want to write all your code once, train in simulation, and then just flip the switch so that the same code is acting on the real robot; you don't want to write two separate code bases and duplicate the logic in two places.

One way that people commonly do this is using ROS, the Robot Operating System, which is also, as I mentioned earlier, an open source project that we maintain at Open Robotics. ROS provides a bunch of tools, a common communication layer, and libraries for you to debug your robots in a unified way, and it has integration with the simulation, so you can control your robot in simulation and then just switch to a physical robot and control it with the same code. It's very convenient, and ROS offers a variety of language interfaces: you can use JavaScript, Java, Python; it's not limited to C++ like Gazebo is. The interface between ROS and Gazebo is C++, but once you're using ROS, you have access to all those other languages.
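As a minimal sketch of that write-once pattern, assuming a ROS 1 setup in which the simulated robot and the physical robot both subscribe to the same /cmd_vel topic:

    #!/usr/bin/env python
    import rospy
    from geometry_msgs.msg import Twist

    # The very same node drives either the Gazebo robot or the real one;
    # only the process on the other end of the topic changes.
    rospy.init_node("drive_forward")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(10)  # 10 Hz command loop

    cmd = Twist()
    cmd.linear.x = 0.2  # m/s forward; angular.z stays zero

    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()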

Let's look at some examples of past projects we've done with Gazebo in which we had both a simulation and a physical-world component. This is a project called HAPTIX that happened a few years ago, and it was about controlling this prosthetic hand here. We developed the simulation, and it had the same interface for controlling the hand in simulation and the physical hand. In this case, we were using MATLAB: you could just send the commands in MATLAB, and the hand in simulation would perform the same way as the physical hand. We improved a lot the kind of surface-contact modeling that you need for successful grasping inside simulation.

That was one project; another one is a competition as well, called SASC, a tag competition between two swarms of drones. They could be fixed-wing or quadcopters, or a mix of the two. Each team had up to 50 drones; imagine what it would have been like to practice with that in the physical world, with all these drones flying. For every single thing that you want to try, you have to go collect all those drones; it's just not feasible.

The first phase of the competition was in simulation; teams were competing on the cloud. We had the simulation running on the cloud, and they would control their drones with the same controls that they would eventually use in the physical world. Once they had practiced enough, they had the physical competition, with swarms of drones playing tag in the real world; that's what the picture on the right shows.

This one was the Space Robotics Challenge that happened a couple of years ago. It was hosted by NASA using this robot here, which is called Valkyrie; it's a NASA robot, also known as Robonaut 5. The eventual goal for Valkyrie is to go to Mars and prepare the environment before humans arrive. The competition was all set on Mars, and you can see that the simulation here was set up on Mars: a red planet, red sky, and the robot had to perform some tasks just like those it's expected to do in the future.

Twenty teams from all around the world competed in the cloud. In this case, we weren't only simulating the physics and the sensors; we were also simulating the kind of communication you would have with Mars: you would have a delay, and you would have very limited bandwidth. These were all part of the challenge and the competition. What is super cool is that the winner of the competition had only interacted with the robot through simulation up until then. Once he won, he was invited to a lab where they had reconstructed some of the tasks from the simulation. This is funny: they constructed in the real world something that we had created for simulation, instead of going the other way around. It took him only one day to get the code that he used to win the competition in the virtual world to make the physical robot do the same thing.

This is another robot example that uses Gazebo, and there's also a physical robot. It was developed by a group in Spain called Acutronic Robotics, and they integrated Gazebo with OpenAI Gym to train this robot. I think it can be extended to other robots to perform tasks in simulation: it is trained in simulation, and then you can take what was learned and put the model on the physical robot to perform the same task.

Now that you know a lot about Gazebo, let me tell you that we are currently rewriting it. As I mentioned earlier, Gazebo is over 15 years old, and there is a lot of room for improvement. We want to make use of more modern things, like running simulation distributed across machines in the cloud, and we want to use more modern rendering technology, like physically-based rendering and ray tracing, to get more realistic images from the camera sensors.

We're in the process of taking Gazebo, which is currently a huge monolithic code base, and breaking it into smaller reusable libraries. It will have a lot of new features; for example, the physics abstraction is going to be more flexible, so you can just write a physics plugin and use a different physics engine with Gazebo. The same goes for the rendering engine: we're making a plugin interface so you can write plugins to interface with any rendering engine you want. Even if you have access to a proprietary one, you can just write a plugin and easily interface with it. There are a bunch of other improvements coming, and this is what I've been spending most of my time on recently. That's it; I hope I got you a little bit excited about simulation, and thank you.

Questions & Answers

Participant 1: You mentioned putting in all the randomization to train the models. I don't have much of a robotics background, so could you shed some light on what kind of models you mean, and what it means to train those models?

Poubel: What I meant by those randomizations is: in the description of your world, you have literally, in your XML, mass equals 1 kilogram; in the next simulation, instead of mass 1 kilogram, you can put mass 1.01 kilograms. You can change the positions a little bit, so every time you load the simulation, it is a little bit different from the one before. When you're training your algorithms, running 1,000, 10,000, or 100,000 simulations, having the model not be the same in every single one of them is going to make your final solution, your final model, much more robust to these variations. Once you move to the physical robot, it's going to be that much more robust.

Participant 2: Thanks for the talk. As a follow up to the previous question, does that mean if you use no randomization, then the simulation is completely deterministic?

Poubel: Mostly, yes. There are still sometimes some numerical errors, and there are some places where we use a random number generator whose seed you can set to make it deterministic, but there are always little differences, especially since we sometimes use asynchronous mechanisms, so depending on the order the messages arrive in, you may get a slightly different result.

Moderator: I was wondering if there are tips or tricks to use Gazebo in a continuous integration environment? Is it being done often?

Poubel: Yes, a lot of people are running CI and running Gazebo in the cloud. The first thing is to turn off the user interface; you're not going to need it. There is a headless mode: right now, I'm running two processes, one for the back end and one for the front end, and you don't need the front end when you're running tests. Gazebo comes with a test library that we use to test Gazebo itself and that some people use to test their code. It's based on gtest, so you will have pass/fail, and you can set expectations; you can say, "I expect my robot to hit the wall at this time," or, "I expect the robot not to have any disturbances during this whole run." These are some of the things you would use for CI.

Participant 3: What kinds of real-world effects does Gazebo support, like wind or heat, stuff like that? Does it handle them, or do we have to specify everything ourselves?

Poubel: I didn't get exactly what kind of real-world simulation.

Participant 4: In the real world, you have lots of physical effects, from heat to wind. Does the simulation have a library or something like that, or do we have to specify pretty much the whole simulation environment ourselves?

Poubel: A lot comes with Gazebo. Gazebo ships with support for wind, for example, but it's a very simple wind model: a global wind, always blowing in the same direction. If you want a more complex wind model, you have to write your own C++ plugin to make that happen, or maybe import wind data from different software.

We try to provide the basics and an example of each physical phenomenon. There is buoyancy if you're underwater, and there is lift-drag for fixed wings; we have the most basic things that apply to most use cases, and if you need something specific, you can always tweak: either download the code, or start a plugin from scratch and tweak those parameters. It's all done through plugins; you don't need to compile Gazebo from source.

Recorded at: Jun 28, 2019

Into Robotics

Webots Simulation Tutorials and Resources

  • September 6, 2023


Realistic simulation and modeling are the main features of the tool, which can also be used to program robots in different programming languages, including C++, Java, and Python.

From colors to textures, and from force simulation to sensor interfaces, Webots was designed for a long list of robotics projects, with a large choice of sensors and actuators as well as a multi-robot simulation platform.

Programs developed with the built-in IDE or other development tools can be tested and transferred to educational or commercial physical robots.

Using 3D modeling, you can create realistic environments and robot states, with the possibility of adding artificial intelligence or computer vision using integrated tools.

Webots Simulation Tool

A wide range of robots is supported by Webots. The list starts with well-known platforms like AIBO, and continues with Nao, iCub, HOAP-2, Lego Mindstorms, etc.

Below are a series of tutorials and guides, as well as other resources, to help both beginners and advanced users use the Webots tool.

Table of Contents

A collection of tutorials, from getting started with Webots to integration with MATLAB and adding new plug-ins. Webots is a popular simulation and modeling tool, used especially in research and educational projects. This chapter gathers tutorials and guides for learning Webots and starting the programming and simulation process.

  • Tutorials – a series of tutorials, from a first simulation to a guide on using ROS with Webots to build powerful 3D robots;
  • Introduction to Webots – very good documentation for getting started with the Webots simulation tool;
  • Controller Programming – ‘Hello World Example’, ‘Reading Sensors’, ‘Using Actuators’, and ‘How to use wb_robot_step()’ are just a few of the concepts described in this comprehensive guide (see the controller sketch just after this list);
  • Supervisor Programming – a programming example showing how to track and set the position of the robot;
  • Cyberbotics’ Robot Curriculum/Novice Programming Exercises – a simple guide to simulating and programming a wall-following algorithm;
  • Designing and Building Multi-Robot Systems – a presentation on learning to control motors and avoid obstacles in the Webots simulation software;
  • Robotics Lab Demonstrator – a guide to using Webots and Enki to simulate and program mobile robots with wheels, legs, or wings;
  • Modeling – a guide with tips and tricks for modeling in Webots, from building replicable/deterministic simulations to removing noise from the simulation;
  • Webots for NAO – Nao is a faithful customer of Webots, and this is a very helpful tutorial on getting started with Nao and Webots together;
  • DifferentialWheels – a simulation guide for differential steering;
  • Kinematics and Motion Analysis of a Three-Dimensional Sidewinding Snakelike Robot – a guide with the formulas and code for a complex simulation of a snake-like robot with a wide range of movements;
  • Running Monitor on Webots – a step-by-step guide to running MonitorShm on Webots;
  • Echo State Networks – Pattern Generation for Motor Control – a tutorial with formulas, a Webots simulation, and MATLAB integration;
  • Controller plug-in – the Webots guide to adding a controller plug-in, making it easier to develop code for a robot;
  • Webots Reference Manual – comprehensive material covering a wide range of Webots features, including the interfaces to sensors and actuators and textures in Webots;
  • C++/Java/Python – a collection of code examples in different programming languages such as C++, Java, and Python;
  • Using Visual C++ with Webots – a guide to using Visual C++ with Webots;
  • Webots – a guide to integrating Urbi with the Webots simulation software;
  • Intro to Controllers – a comprehensive overview from the Webots guide on programming controllers in Java;
  • A Neural Network Controller for Webots – an article with the basics for understanding and using neural networks to control a robot;
  • kaist_webots – a comprehensive guide to using Webots with ROS and installing kaist_webots;
  • Using MATLAB – a guide to integrating MATLAB and Webots;
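To give a flavor of controller programming, here is a minimal Webots controller sketch in Python, assuming a differential-drive robot whose motors are registered as "left wheel motor" and "right wheel motor" (device names vary by robot model, and getDevice requires a reasonably recent Webots version):

    from controller import Robot  # Webots Python controller API

    robot = Robot()
    timestep = int(robot.getBasicTimeStep())  # simulation step, in ms

    # Device names are model-specific; these are common for e-puck-like robots.
    left = robot.getDevice("left wheel motor")
    right = robot.getDevice("right wheel motor")
    for motor in (left, right):
        motor.setPosition(float("inf"))  # switch to velocity-control mode
        motor.setVelocity(0.0)

    # Main loop: step the simulation until Webots terminates the controller.
    while robot.step(timestep) != -1:
        left.setVelocity(2.0)   # rad/s
        right.setVelocity(2.0)  # equal speeds: drive straight ahead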


Building Realistic Robot Simulations with MATLAB and NVIDIA Isaac Sim

Posted by Mihir Acharya , September 7, 2023

In this blog post, my colleague Dave Schowalter will introduce you to a new ecosystem that combines the photo-realistic simulation capabilities of NVIDIA Isaac Sim™ and the sensor processing and AI modeling capabilities from MathWorks for building realistic robot simulations.

I am Dave Schowalter! As a Partner Programs Manager at MathWorks, I build technology partnerships in the robotics and autonomous systems industry.

Simulating autonomous robots in a photo-realistic environment provides several practical advantages, especially when incorporating sensors and perception in the models. It enables robotics engineers and researchers to thoroughly assess robot performance in diverse, complex settings, enhancing adaptability and problem-solving capabilities.

Autonomous robots use machine learning and sensor-based perception algorithms. Incorporating sensors and sensor data processing with a photo-realistic scene simulation helps with improving accuracy and performance of these algorithms. This also becomes an advantage when training an autonomous robot based upon synthetic sensor outputs.

NVIDIA Isaac Sim and MathWorks Model-Based Design together provide an integrated approach to create and perform these simulations. It offers an efficient platform for addressing safety concerns and refining sensor calibration, ultimately leading to more reliable real-world implementations.

With Isaac Sim™, NVIDIA has created a high level of photo-realism by using a combination of techniques, including:

  • Real-time Ray Tracing: Ray tracing is a technique that simulates the way light interacts with objects in the real world. This allows Isaac Sim to create realistic reflections, refractions, and shadows.
  • Physically-based Rendering: Physically-based rendering is a technique that uses the laws of physics to simulate the way light interacts with materials. This allows Isaac Sim to replicate the way actual materials such as metal, plastic, and wood reflect and scatter light.
  • High-Quality Textures: Isaac Sim uses high-quality textures to give objects a genuine appearance. These textures are created using a variety of methods, including scanning real objects and generating them using computer graphics techniques.
  • Advanced Lighting: Isaac Sim uses advanced lighting techniques to create authentic lighting conditions. These techniques include global illumination, which simulates the way light bounces off objects, and ambient occlusion, which simulates the way shadows are created by objects blocking light.
  • High-Performance Rendering: Isaac Sim uses NVIDIA’s GPUs to render scenes in real time. This allows users to perceive the interaction of simulation assets with the environment in a natural way.

Although Isaac Sim offers advanced physics simulation to recreate the behavior of objects in the real world, manipulation of synthetic data from multiple sensors (“sensor fusion”) and the use of those results to train AI algorithms and determine robot behavior can all be designed and managed in MATLAB and Simulink.  Such a system built in Simulink is depicted below.

Once designed, the entire robot behavior in its environment can be predicted and then adjusted as needed, before testing a physical prototype.  This integration of MATLAB and Simulink with Isaac Sim is most efficiently implemented through ROS, using the ROS Toolbox add-on in MATLAB.  A screenshot of the integration in use is shown below.

Finally, the program can be deployed on embedded hardware (for example on the NVIDIA Jetson platform) to drive the end application.

If you are interested in learning more about this workflow, please register for the joint NVIDIA/MathWorks webinar on September 12, “MATLAB and Isaac Sim.”   In the webinar, you will learn how to use this integration with workflows for manipulator and mobile robot applications.




Presentation slides inside robot simulations 🎥🤖

chapulina/simslides


Import PDF files into robot simulation and present flying from slide to slide.

SimSlides

SimSlides consists of plugins for two simulators: Gazebo Classic and Ignition Gazebo. There are different features for each simulator.

Ignition

  • Navigate through keyframes using mouse , keyboard or wireless presenter
  • Look at a slide (even if it has moved)
  • Move camera to a specific pose
  • Go through slides stacked on the same pose
  • ... plus all Ignition features!

Gazebo Classic

  • Import PDF files into simulation through the GUI
  • Seek to specific spot in a log file
  • Write copiable HTML text to a dialog
  • ... plus all Gazebo features!

Checking out a couple of other tutorials is also recommended if you want to use each simulator's potential to customize your presentations. Maybe you want to set up keyboard triggers? Control a robot using ROS ? The possibilities are endless!

SimSlides' main branch supports both Gazebo Classic and Ignition. It's OK if you don't have both simulators installed; only the plugin for the simulator that is present will be compiled.

The main branch supports Ignition Citadel, Edifice and Fortress.

Follow the official install instructions .

Gazebo Classic

The main branch has been tested on Gazebo version 11.

Extra dependencies:

It's also recommended that you make sure ImageMagick can convert PDFs; see this .

Build SimSlides

By default, SimSlides will try to build against Ignition Citadel and Gazebo 11. For other Ignition versions, set the IGNITION_VERSION environment variable before building. SimSlides can be built with a basic cmake workflow, and you should be sure to add your CMAKE_PREFIX_PATH to LD_LIBRARY_PATH before running.
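A minimal sketch of that sequence, assuming a standard out-of-source cmake build, Ignition Fortress, and an install prefix of $HOME/simslides_install (the version and prefix are illustrative choices, not the repository's documented ones):

    # choose the Ignition version to build against (illustrative: Fortress)
    export IGNITION_VERSION=fortress

    # standard out-of-source cmake build
    cd simslides
    mkdir build && cd build
    cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/simslides_install
    make install

    # make the installed libraries visible at runtime
    export CMAKE_PREFIX_PATH=$HOME/simslides_install
    export LD_LIBRARY_PATH=$CMAKE_PREFIX_PATH/lib:$LD_LIBRARY_PATH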

It's also possible to build SimSlides inside a colcon workspace.

Run SimSlides

Important: Source Gazebo first (the setup script may be in a different place depending on your Gazebo installation), then run simslides. This starts SimSlides in an empty world. You're ready to create your own presentation!

You can find a demo presentation inside the worlds directory. The same demo works for both simulators.

Run it as follows:

  • Move to the simslides clone directory
  • (Only for Gazebo Classic) Source Gazebo
  • Load the world

Your own presentation

You can generate your own presentation as follows:

Generate a new presentation

  • On the top menu, choose SimSlides -> Import PDF (or press F2)
  • Choose a PDF file from your computer
  • Choose the folder to save the generated slide models at
  • Choose a prefix for your model names; they will be named prefix-0, prefix-1, ...
  • Click Generate. A model will be created for each page of your PDF. This may take a while and the screen goes black... but it works in the end. Sometimes it looks like not all pages of the PDF become models... that's an open issue.

When it's done, all slides will show up on the world in a grid.

A world file is also created, so you can reload that any time.

Presentation mode

Once you have the slides loaded into the world, present as follows:

  • Press F5 or the play button on the top left to start presentation mode
  • Press the arrow keys to go back and forth on the slides

You're free to use the rest of Gazebo's interface while presenting. If you've navigated far away from the current slide, you can press F1 to return to it.

At any moment, you can press F6 to return to the initial camera pose.

Existing presentations

When this project was started, all presentations were kept in different branches of the same repository. Since mid 2019, new presentations are being created in their own repositories.

Until mid 2019

Move to the presentation branch; available ones are:

  • CppCon2015: CppCon, September 2015
  • BuenosAires_Nov2015: University of Buenos Aires, November 2015
  • Chile_Nov2015: Universidad de Chile, November 2015
  • IEEE_WiE_ILC_2016: IEEE Women in Engineering International Leadership Conference, May 2016
  • ROSCon_Oct2016: ROSCon, October 2016
  • ROSIndustrial_Jan2017: ROS Industrial web meeting, January 2017
  • OSS4DM_Mar2017: Open Source Software for Decision Making, March 2017
  • OSCON_May2017: Open Source Conference, May 2017
  • ROSCon_Sep2017: ROSCon, Sep 2017
  • Brasil_Mar2018: Brasil visits, Mar 2018
  • QConSF_Nov2018: QConSF, Nov 2018
  • UCSC_Feb2019: University of California, Santa Cruz, Feb 2019
  • QConAI_Apr2019: QCon.ai, Apr 2019

A lot changes from one presentation to the next. Follow instructions on that branch's README to run the presentation. I've done my best to document it all, but each presentation may take some massaging to work years later.

Since mid 2019

See each repository / world:

  • ROSConJP 2019 ( video )
  • ROSCon 2019 ( video )
  • ROS-Industrial Conference 2019: ( video )
  • All Things Open 2020 ( video )
  • Open Source 101 2021 ( video )

This project started as a few bash scripts for CppCon 2015. Back then, it used to be hosted on BitBucket using Mercurial.

Over the years, the project evolved into more handy GUI plugins, and is gaining more features for each presentation.

The repository was ported to GitHub + Git in August 2019, when BitBucket dropped Mercurial support.

Naos and UT Austin Villa

Week 0 (8/25): Class Overview

Week 1: Introduction to Motion Control

Week 2: Motion Control Continued

Week 3: Probability/Sensing

Week 4: Kalman Filters

  • CWMtx C++ Matrix library .

Week 5: Localization

  • Adapting the Particle Size in Particle Filters Through KLD-Sampling Dieter Fox. In the International Journal of Robotic Research, IJRR, 2003. (An excellent description of robot localization - has some overlap with the textbook)
  • Vision-Based Fast and Reactive Monte-Carlo Localization Thomas Roefer and Matthias Jungel. In the IEEE International Conference on Robotics and Automation, ICRA, 2003. (Another team's implementation details are in Sections III and IV )
  • Fast and Robust Edge-Based Localization in the Sony Four-Legged Robot League Thomas Roefer and Matthias Jungel. In the Seventh International RoboCup Symposium, 2003. (On using field edges in localization)
  • Making Use Of What You Don't See: Negative Information in Markov Localization Jan Hoffmann, Michael Spranger, Daniel Gohring and Matthias Jungel. In the IEEE International Conference on Intelligent Robots and Systems, IROS, 2005. (Recent article on using negative information in localization)
  • Simultaneous Localization and Mapping (SLAM): Part I The Essential Algorithms Hugh Durrant-Whyte and Tim Bailey.
  • Multiple Model Kalman Filters: A Localization Technique for RoboCup Soccer Quinlan and Middleton.

Week 6: Vision

  • The UT Austin Villa 2003 Four-Legged Team , Extended version The University of Texas at Austin, Department of Computer Sciences, AI Laboratory Tech report UT-AI-TR-03-304. Read Sections 4, 4.1-4.3, 14.
  • The UT Austin Villa 2004 RoboCup Four-Legged Team: Coming of Age Read Sections 3, 3.1,3.2 (and the first couple of appendices if you're interested)
  • Using Layered Color Precision for a Self-Calibrating Vision System Matthias Jungel Robocup 2004
  • Bayesian Color Estimation for Adaptive Vision-based Robot Localization. D. Schulz and D. Fox Proceedings of IROS, 2004.
  • Color Learning on a Mobile Robot: Towards Full Autonomy under Changing Illumination Mohan Sridharan and Peter Stone. In The 20th International Joint Conference on Artificial Intelligence, pp. 2212
  • B-Human Team Report and Code Release 2011. Thomas Röfer et al.
  • UT Austin Villa 2013 - Advances in Vision, Kinematics, and Strategy: Paper , Slides Jacob Menashe et al.

Week 7: Walking

  • Machine Learning for Fast Quadrupedal Locomotion Nate Kohl and Peter Stone In The Nineteenth National Conference on Artificial Intelligence, pp. 611-616, July 2004.
  • The development of Honda humanoid robot Hirai, K. and Hirose, M. and Haikawa, Y. and Takenaka, T. ICRA 1998.
  • On the Stability of Anthropomorphic Systems Vukobratovic, M. and Stepanenko, J. Mathematical Biosciences.
  • Legged robots that balance. Raibert, M. H.
  • Virtual Model Control of a Bipedal Walking Robot Pratt, J., Dilworth, P. and Pratt, G. ICRA 1997.
  • Hybrid Zero Dynamics of Planar Biped Walkers Westervelt, E.R., Grizzle, J.W. and Koditschek, D.E. IEEE Trans. on Automatic Control, Vol.48, No.1, pp.42-56, 2003.
  • Modeling and Control of Multi-Contact Centers of Pressure and Internal Forces in Humanoid Robots Luis Sentis, Jaeheung Park, and Oussama Khatib. IROS 2009.

Week 8: Action and Sensor Models

Week 9: Path Planning

Week 10: Behavior Architectures

Week 11: Multi-Robot Coordination

Week 12: Applications

Week 13: Social Implications


An Introductory Robot Programming Tutorial

By Nick McCrea

Nicholas is a professional software engineer with a passion for quality craftsmanship. He loves architecting and writing top-notch code.


Let’s face it, robots are cool. They’re also going to run the world some day, and hopefully, at that time they will take pity on their poor soft fleshy creators (a.k.a. robotics developers ) and help us build a space utopia filled with plenty. I’m joking of course, but only sort of .

In my ambition to have some small influence over the matter, I took a course in autonomous robot control theory last year, which culminated in my building a Python-based robotic simulator that allowed me to practice control theory on a simple, mobile, programmable robot.

In this article, I’m going to show how to use a Python robot framework to develop control software, describe the control scheme I developed for my simulated robot, illustrate how it interacts with its environment and achieves its goals, and discuss some of the fundamental challenges of robotics programming that I encountered along the way.

In order to follow this tutorial on robotics programming for beginners, you should have a basic knowledge of two things:

  • Mathematics —we will use some trigonometric functions and vectors
  • Python—since Python is among the more popular basic robot programming languages—we will make use of basic Python libraries and functions

The snippets of code shown here are just a part of the entire simulator, which relies on classes and interfaces, so in order to read the code directly, you may need some experience in Python and object oriented programming .

Finally, optional topics that will help you to better follow this tutorial are knowing what a state machine is and how range sensors and encoders work.

The Challenge of the Programmable Robot: Perception vs. Reality, and the Fragility of Control

The fundamental challenge of all robotics is this: It is impossible to ever know the true state of the environment. Robot control software can only guess the state of the real world based on measurements returned by its sensors. It can only attempt to change the state of the real world through the generation of control signals.

This graphic demonstrates the interaction between a physical robot and computer controls when practicing Python robot programming.

Thus, one of the first steps in control design is to come up with an abstraction of the real world, known as a model , with which to interpret our sensor readings and make decisions. As long as the real world behaves according to the assumptions of the model, we can make good guesses and exert control. As soon as the real world deviates from these assumptions, however, we will no longer be able to make good guesses, and control will be lost. Often, once control is lost, it can never be regained. (Unless some benevolent outside force restores it.)

This is one of the key reasons that robotics programming is so difficult. We often see videos of the latest research robot in the lab, performing fantastic feats of dexterity, navigation, or teamwork, and we are tempted to ask, “Why isn’t this used in the real world?” Well, next time you see such a video, take a look at how highly controlled the lab environment is. In most cases, these robots are only able to perform these impressive tasks as long as the environmental conditions remain within the narrow confines of their internal models. Thus, one key to the advancement of robotics is the development of more complex, flexible, and robust models—and said advancement is subject to the limits of the available computational resources.

[Side Note: Philosophers and psychologists alike would note that living creatures also suffer from dependence on their own internal perception of what their senses are telling them. Many advances in robotics come from observing living creatures and seeing how they react to unexpected stimuli. Think about it. What is your internal model of the world? Is it different from that of an ant, or that of a fish? (Hopefully.) However, like the ant and the fish, it is likely to oversimplify some realities of the world. When your assumptions about the world are not correct, it can put you at risk of losing control of things. Sometimes we call this “danger.” Just as our little robot struggles to survive against the unknown universe, so do we all. This is a powerful insight for roboticists.]

The Programmable Robot Simulator

The simulator I built is written in Python and very cleverly dubbed Sobot Rimulator . You can find v1.0.0 on GitHub . It does not have a lot of bells and whistles but it is built to do one thing very well: provide an accurate simulation of a mobile robot and give an aspiring roboticist a simple framework for practicing robot software programming. While it is always better to have a real robot to play with, a good Python robot simulator is much more accessible and is a great place to start.

In real-world robots, the software that generates the control signals (the “controller”) is required to run at a very high speed and make complex computations. This affects the choice of which robot programming languages are best to use: Usually, C++ is used for these kinds of scenarios, but in simpler robotics applications, Python is a very good compromise between execution speed and ease of development and testing.

The software I wrote simulates a real-life research robot called the Khepera but it can be adapted to a range of mobile robots with different dimensions and sensors. Since I tried to program the simulator as similar as possible to the real robot’s capabilities, the control logic can be loaded into a real Khepera robot with minimal refactoring, and it will perform the same as the simulated robot. The specific features implemented refer to the Khepera III, but they can be easily adapted to the new Khepera IV.

In other words, programming a simulated robot is analogous to programming a real robot. This is critical if the simulator is to be of any use to develop and evaluate different control software approaches.

In this tutorial, I will be describing the robot control software architecture that comes with v1.0.0 of Sobot Rimulator , and providing snippets from the Python source (with slight modifications for clarity). However, I encourage you to dive into the source and mess around. The simulator has been forked and used to control different mobile robots, including a Roomba2 from iRobot . Likewise, please feel free to fork the project and improve it.

The control logic of the robot is constrained to these Python classes/files:

  • models/supervisor.py —this class is responsible for the interaction between the simulated world around the robot and the robot itself. It evolves our robot state machine and triggers the controllers for computing the desired behavior.
  • models/supervisor_state_machine.py —this class represents the different states in which the robot can be, depending on its interpretation of the sensors.
  • The files in the models/controllers directory—these classes implement different behaviors of the robot given a known state of the environment. In particular, a specific controller is selected depending on the state machine.

Robots, like people, need a purpose in life. The goal of our software controlling this robot will be very simple: It will attempt to make its way to a predetermined goal point. This is usually the basic feature that any mobile robot should have, from autonomous cars to robotic vacuum cleaners. The coordinates of the goal are programmed into the control software before the robot is activated but could be generated from an additional Python application that oversees the robot movements. For example, think of it driving through multiple waypoints.

However, to complicate matters, the environment of the robot may be strewn with obstacles. The robot MAY NOT collide with an obstacle on its way to the goal. Therefore, if the robot encounters an obstacle, it will have to find its way around so that it can continue on its way to the goal.

The Programmable Robot

Every robot comes with different capabilities and control concerns. Let’s get familiar with our simulated programmable robot.

The first thing to note is that, in this guide, our robot will be an autonomous mobile robot . This means that it will move around in space freely and that it will do so under its own control. This is in contrast to, say, a remote-control robot (which is not autonomous) or a factory robot arm (which is not mobile). Our robot must figure out for itself how to achieve its goals and survive in its environment. This proves to be a surprisingly difficult challenge for novice robotics programmers.

Control Inputs: Sensors

There are many different ways a robot may be equipped to monitor its environment. These can include anything from proximity sensors, light sensors, bumpers, cameras, and so forth. In addition, robots may communicate with external sensors that give them information that they themselves cannot directly observe.

Our reference robot is equipped with nine infrared sensors —the newer model has eight infrared and five ultrasonic proximity sensors—arranged in a “skirt” in every direction. There are more sensors facing the front of the robot than the back because it is usually more important for the robot to know what is in front of it than what is behind it.

In addition to the proximity sensors, the robot has a pair of wheel tickers that track wheel movement. These allow you to track how many rotations each wheel makes, with one full forward turn of a wheel being 2,765 ticks. Turns in the opposite direction count backward, decreasing the tick count instead of increasing it. You don’t have to worry about specific numbers in this tutorial because the software we will write uses the traveled distance expressed in meters. Later I will show you how to compute it from ticks with an easy Python function.
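As a preview, the core of that conversion looks like this sketch (the wheel radius here is an illustrative value, not the robot's actual spec):

    import math

    TICKS_PER_REV = 2765      # one full forward wheel turn, per the text
    WHEEL_RADIUS = 0.021      # meters; illustrative value, not the real spec

    def ticks_to_meters(delta_ticks):
        """Distance traveled by a wheel for a given change in encoder ticks."""
        revolutions = delta_ticks / TICKS_PER_REV
        return revolutions * 2.0 * math.pi * WHEEL_RADIUS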

Control Outputs: Mobility

Some robots move around on legs. Some roll like a ball. Some even slither like a snake.

Our robot is a differential drive robot, meaning that it rolls around on two wheels. When both wheels turn at the same speed, the robot moves in a straight line. When the wheels move at different speeds, the robot turns. Thus, controlling the movement of this robot comes down to properly controlling the rates at which each of these two wheels turn.

In Sobot Rimulator, the separation between the robot “computer” and the (simulated) physical world is embodied by the file robot_supervisor_interface.py , which defines the entire API for interacting with the “real robot” sensors and motors:

  • read_proximity_sensors() returns an array of nine values in the sensors’ native format
  • read_wheel_encoders() returns an array of two values indicating total ticks since the start
  • set_wheel_drive_rates(v_l, v_r) takes two values (in radians per second) and sets the left and right speed of the wheels to those two values

This interface internally uses a robot object that provides the data from sensors and the possibility to move motors or wheels. If you want to create a different robot, you simply have to provide a different Python robot class that can be used by the same interface, and the rest of the code (controllers, supervisor, and simulator) will work out of the box!
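As a rough sketch, such an interface class might look like the following; the three public method names come from the list above, while the wrapped robot object and its attributes are assumptions:

    class RobotSupervisorInterface:
        """Thin API separating the robot 'computer' from the hardware."""

        def __init__(self, robot):
            self.robot = robot  # the underlying (simulated or real) robot object

        def read_proximity_sensors(self):
            # nine values in the sensors' native format
            return [sensor.read() for sensor in self.robot.ir_sensors]

        def read_wheel_encoders(self):
            # total ticks since the start, one value per wheel
            return [encoder.read() for encoder in self.robot.wheel_encoders]

        def set_wheel_drive_rates(self, v_l, v_r):
            # wheel angular velocities, in radians per second
            self.robot.set_wheel_drive_rates(v_l, v_r)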

The Simulator

Just as you would use a real robot in the real world without paying too much attention to the laws of physics involved, you can ignore how the robot is simulated and skip directly to how the controller software is programmed, since it will be almost the same between the real world and a simulation. But if you are curious, I will briefly introduce it here.

The file world.py is a Python class that represents the simulated world, with robots and obstacles inside. The step function inside this class takes care of evolving our simple world by:

  • Applying physics rules to the robot’s movements
  • Considering collisions with obstacles
  • Providing new values for the robot sensors

In the end, it calls the robot supervisors responsible for executing the robot brain software.

The step function is executed in a loop so that robot.step_motion() moves the robot using the wheel speed computed by the supervisor in the previous simulation step.

The apply_physics() function internally updates the values of the robot proximity sensors so that the supervisor will be able to estimate the environment at the current simulation step. The same concepts apply to the encoders.
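Put together, a sketch of the step function's structure might look like this (robot.step_motion() and apply_physics() are named above; the remaining attribute names are assumptions for illustration):

    def world_step(world, dt):
        """One step of the simulated world (structural sketch)."""
        # move each robot using the wheel rates computed in the previous step
        for robot in world.robots:
            robot.step_motion(dt)
        # check collisions with obstacles and refresh proximity sensor
        # and encoder values for the new poses
        world.apply_physics()
        # finally, run each robot's "brain" to pick new wheel rates
        for supervisor in world.supervisors:
            supervisor.step(dt)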

A Simple Model

First, our robot will have a very simple model. It will make many assumptions about the world. Some of the important ones include:

  • The terrain is always flat and even
  • Obstacles are never round
  • The wheels never slip
  • Nothing is ever going to push the robot around
  • The sensors never fail or give false readings
  • The wheels always turn when they are told to

Although most of these assumptions are reasonable inside a house-like environment, round obstacles could be present. Our obstacle avoidance software has a simple implementation and follows the border of obstacles in order to go around them. We will hint at how readers can improve the control framework of our robot with an additional check to avoid circular obstacles.

The Control Loop

We will now enter into the core of our control software and explain the behaviors that we want to program inside the robot. Additional behaviors can be added to this framework, and you should try your own ideas after you finish reading! Behavior-based robotics software was proposed more than 20 years ago and it’s still a powerful tool for mobile robotics. As an example, in 2007 a set of behaviors was used in the DARPA Urban Challenge—one of the first competitions for autonomous cars driving in an urban environment!

A robot is a dynamic system. The state of the robot, the readings of its sensors, and the effects of its control signals are in constant flux. Controlling the way events play out involves the following three steps:

  • Apply control signals.
  • Measure the results.
  • Generate new control signals calculated to bring us closer to our goal.

These steps are repeated over and over until we have achieved our goal. The more times we can do this per second, the finer control we will have over the system. The Sobot Rimulator robot repeats these steps 20 times per second (20 Hz), but many robots must do this thousands or millions of times per second in order to have adequate control. Remember our previous introduction about different robot programming languages for different robotics systems and speed requirements.
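Strung together, the loop looks something like this sketch (the supervisor's method names here are illustrative, not the simulator's actual API):

    import time

    CONTROL_HZ = 20  # Sobot Rimulator's control frequency, per the text

    def control_loop(interface, supervisor):
        """The sense-think-act cycle, repeated CONTROL_HZ times per second."""
        dt = 1.0 / CONTROL_HZ
        while not supervisor.at_goal():
            # measure the results of the last control signals
            proximity = interface.read_proximity_sensors()
            ticks = interface.read_wheel_encoders()
            # generate new control signals that bring us closer to the goal
            v_l, v_r = supervisor.execute(proximity, ticks, dt)
            # apply the control signals
            interface.set_wheel_drive_rates(v_l, v_r)
            time.sleep(dt)  # a real controller would use a fixed-rate timer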

In general, each time our robot takes measurements with its sensors, it uses these measurements to update its internal estimate of the state of the world—for example, the distance from its goal. It compares this state to a reference value of what it wants the state to be (for the distance, it wants it to be zero), and calculates the error between the desired state and the actual state. Once this information is known, generating new control signals can be reduced to a problem of minimizing the error which will eventually move the robot towards the goal.

A Nifty Trick: Simplifying the Model

To control the robot we want to program, we have to send a signal to the left wheel telling it how fast to turn, and a separate signal to the right wheel telling it how fast to turn. Let’s call these signals v_l and v_r. However, constantly thinking in terms of v_l and v_r is very cumbersome. Instead of asking, “How fast do we want the left wheel to turn, and how fast do we want the right wheel to turn?” it is more natural to ask, “How fast do we want the robot to move forward, and how fast do we want it to turn, or change its heading?” Let’s call these parameters velocity v and angular (rotational) velocity ω (read “omega”). It turns out we can base our entire model on v and ω instead of v_l and v_r, and only once we have determined how we want our programmed robot to move, mathematically transform these two values into the v_l and v_r we need to actually control the robot wheels. This is known as a unicycle model of control.

In robotics programming, it's important to understand the difference between unicycle and differential drive models.

Here is the Python code that implements the final transformation in supervisor.py . Note that if ω is 0, both wheels will turn at the same speed:
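In standalone form, this is the standard unicycle-to-differential-drive conversion (the parameter names here are illustrative):

    def uni_to_diff(v, omega, wheel_radius, wheel_base_length):
        """Convert unicycle commands (v, omega) to wheel rates (v_l, v_r).

        v     -> translational velocity (m/s)
        omega -> angular velocity (rad/s)
        Returns wheel angular velocities in rad/s.
        """
        R = wheel_radius
        L = wheel_base_length
        v_l = ((2.0 * v) - (omega * L)) / (2.0 * R)
        v_r = ((2.0 * v) + (omega * L)) / (2.0 * R)
        return v_l, v_r

When ω is 0 the two expressions are equal, so both wheels turn at the same speed and the robot drives straight.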

Estimating State: Robot, Know Thyself

Using its sensors, the robot must try to estimate the state of the environment as well as its own state. These estimates will never be perfect, but they must be fairly good because the robot will be basing all of its decisions on these estimations. Using its proximity sensors and wheel tickers alone, it must try to guess the following:

  • The direction to obstacles
  • The distance from obstacles
  • The position of the robot
  • The heading of the robot

The first two properties are determined by the proximity sensor readings and are fairly straightforward. The API function read_proximity_sensors() returns an array of nine values, one for each sensor. We know ahead of time that the seventh reading, for example, corresponds to the sensor that points 75 degrees to the left of the robot.

Thus, if this value shows a reading corresponding to 0.1 meters distance, we know that there is an obstacle 0.1 meters away, 75 degrees to the left. If there is no obstacle, the sensor will return a reading of its maximum range of 0.2 meters. Thus, if we read 0.2 meters on sensor seven, we will assume that there is actually no obstacle in that direction.

Because of the way the infrared sensors work (measuring infrared reflection), the numbers they return are a non-linear transformation of the actual distance detected. Thus, the Python function for determining the distance indicated must convert these readings into meters. This is done in supervisor.py as follows:
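A sketch of such a conversion, assuming the simulated sensors follow an exponential response curve (the constants here are illustrative calibration values for the simulated sensors, not authoritative ones):

    from math import log

    def ir_to_meters(raw_readings):
        """Invert an exponential IR response model to get distances in meters.

        The constants are calibration values for the simulated sensors and
        would differ for real hardware.
        """
        return [0.02 - log(raw / 3960.0) / 30.0 for raw in raw_readings]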

Again, we have a specific sensor model in this Python robot framework, while in the real world, sensors come with accompanying software that should provide similar conversion functions from non-linear values to meters.

Determining the position and heading of the robot (together known as the pose in robotics programming) is somewhat more challenging. Our robot uses odometry to estimate its pose. This is where the wheel tickers come in. By measuring how much each wheel has turned since the last iteration of the control loop, it is possible to get a good estimate of how the robot’s pose has changed—but only if the change is small .

This is one reason it is important to iterate the control loop very frequently in a real-world robot, where the motors moving the wheels may not be perfect. If we waited too long to measure the wheel tickers, both wheels could have turned quite a lot, and it would be impossible to estimate where we have ended up.

Given our current software simulator, we can afford to run the odometry computation at 20 Hz—the same frequency as the controllers. But it could be a good idea to have a separate Python thread running faster to catch smaller movements of the tickers.

Below is the full odometry function in supervisor.py that updates the robot pose estimation. Note that the robot’s pose is composed of the coordinates x and y , and the heading theta , which is measured in radians from the positive X-axis. Positive x is to the east and positive y is to the north. Thus a heading of 0 indicates that the robot is facing directly east. The robot always assumes its initial pose is (0, 0), 0 .
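In standalone form, the update looks like the following sketch (the function signature is illustrative; the supervisor method reads the ticks and robot parameters from its own attributes):

    from math import pi, cos, sin

    def update_odometry(pose, d_ticks_left, d_ticks_right, N, R, L):
        """Estimate the new pose (x, y, theta) from encoder tick deltas.

        N = ticks per wheel revolution, R = wheel radius (m),
        L = wheel base length (m). Valid only when the change is small.
        """
        x, y, theta = pose

        # distance traveled by each wheel since the last iteration
        d_left_wheel = 2.0 * pi * R * (d_ticks_left / N)
        d_right_wheel = 2.0 * pi * R * (d_ticks_right / N)
        d_center = 0.5 * (d_left_wheel + d_right_wheel)

        # move the pose estimate along the old heading, then update the heading
        new_x = x + d_center * cos(theta)
        new_y = y + d_center * sin(theta)
        new_theta = theta + (d_right_wheel - d_left_wheel) / L
        return new_x, new_y, new_theta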

Now that our robot is able to generate a good estimate of the real world, let’s use this information to achieve our goals.

Python Robot Programming Methods: Go-to-Goal Behavior

The supreme purpose in our little robot’s existence in this programming tutorial is to get to the goal point. So how do we make the wheels turn to get it there? Let’s start by simplifying our worldview a little and assume there are no obstacles in the way.

This then becomes a simple task and can be easily programmed in Python. If we go forward while facing the goal, we will get there. Thanks to our odometry, we know what our current coordinates and heading are. We also know what the coordinates of the goal are because they were pre-programmed. Therefore, using a little linear algebra, we can determine the vector from our location to the goal, as in go_to_goal_controller.py :
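In standalone form, the computation is a rotation of the world-frame goal vector into the robot's frame (the signature is illustrative; the controller itself reads the pose and goal from the supervisor):

    from math import cos, sin

    def goal_vector_in_robot_frame(robot_pose, goal):
        """Vector from the robot to the goal, expressed in the robot's frame."""
        x, y, theta = robot_pose
        gx, gy = goal
        # vector to the goal in world coordinates
        dx, dy = gx - x, gy - y
        # rotate it by -theta to express it in the robot's reference frame
        return (cos(theta) * dx + sin(theta) * dy,
                -sin(theta) * dx + cos(theta) * dy)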

Note that we are getting the vector to the goal in the robot’s reference frame , and NOT in world coordinates. If the goal is on the X-axis in the robot’s reference frame, that means it is directly in front of the robot. Thus, the angle of this vector from the X-axis is the difference between our heading and the heading we want to be on. In other words, it is the error between our current state and what we want our current state to be. We, therefore, want to adjust our turning rate ω so that the angle between our heading and the goal will change towards 0. We want to minimize the error:
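In proportional form, the core of it looks like this sketch (as noted below, the real controller wraps this single gain in a PID loop):

    from math import atan2

    def calculate_omega(gtg_heading_vector, kP):
        """Turn rate proportional to the heading error.

        The heading error is the angle of the goal vector in the robot's
        frame; when the robot faces the goal, the error (and omega) is 0.
        """
        theta_d = atan2(gtg_heading_vector[1], gtg_heading_vector[0])
        return kP * theta_d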

kP in the above snippet of the controller Python implementation is a control gain: a coefficient that determines how fast we turn in proportion to how far away from the goal we are facing. If the error in our heading is 0 , then the turning rate is also 0 . In the real Python function inside the file go_to_goal_controller.py , you will see more similar gains, since we used a PID controller instead of a simple proportional coefficient.

Now that we have our angular velocity ω , how do we determine our forward velocity v ? A good general rule of thumb is one you probably know instinctively: If we are not making a turn, we can go forward at full speed, and then the faster we are turning, the more we should slow down. This generally helps us keep our system stable and acting within the bounds of our model. Thus, v is a function of ω . In go_to_goal_controller.py the equation is:
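One plausible shape for that relationship is shown below (a sketch; the exact falloff used in the controller may differ):

    def calculate_velocity(omega, v_max):
        """Slow down as the turn rate grows: full speed when omega is 0,
        dropping off as |omega| increases."""
        return v_max / (abs(omega) + 1) ** 0.5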

A suggestion for elaborating on this formula is to consider that we usually slow down when near the goal in order to reach it with zero speed. How would this formula change? It would have to somehow replace v_max() with something proportional to the distance to the goal. OK, we have almost completed a single control loop. The only thing left to do is transform these two unicycle-model parameters into differential wheel speeds and send the signals to the wheels. Here’s an example of the robot’s trajectory under the go-to-goal controller, with no obstacles:

This is an example of the programmed robot's trajectory.

As we can see, the vector to the goal is an effective reference for us to base our control calculations on. It is an internal representation of “where we want to go.” As we will see, the only major difference between go-to-goal and other behaviors is that sometimes going towards the goal is a bad idea, so we must calculate a different reference vector.

Python Robot Programming Methods: Avoid-Obstacles Behavior

Going towards the goal when there’s an obstacle in that direction is a case in point. Instead of running headlong into things in our way, let’s try to program a control law that makes the robot avoid them.

To simplify the scenario, let’s now forget the goal point completely and just make the following our objective: When there are no obstacles in front of us, move forward. When an obstacle is encountered, turn away from it until it is no longer in front of us.

Accordingly, when there is no obstacle in front of us, we want our reference vector to simply point forward. Then ω will be zero and v will be maximum speed. However, as soon as we detect an obstacle with our proximity sensors, we want the reference vector to point in whatever direction is away from the obstacle. This will cause ω to shoot up to turn us away from the obstacle, and cause v to drop to make sure we don’t accidentally run into the obstacle in the process.

A neat way to generate our desired reference vector is by turning our nine proximity readings into vectors, and taking a weighted sum. When there are no obstacles detected, the vectors will sum symmetrically, resulting in a reference vector that points straight ahead as desired. But if a sensor on, say, the right side picks up an obstacle, it will contribute a smaller vector to the sum, and the result will be a reference vector that is shifted towards the left.

For a general robot with a different placement of sensors, the same idea can be applied but may require changes in the weights and/or additional care when sensors are symmetrical in front and in the rear of the robot, as the weighted sum could become zero.

When programmed correctly, the robot can avoid these complex obstacles.

Here is the code that does this in avoid_obstacles_controller.py :
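In simplified standalone form, the weighted sum might look like this (the placement and gain representations here are assumptions standing in for the project's own helpers):

    from math import cos, sin

    def ao_heading_vector(sensor_distances, sensor_placements, sensor_gains):
        """Weighted sum of per-sensor obstacle vectors, in the robot's frame.

        sensor_placements: (x, y, theta) of each sensor on the robot body
        sensor_gains: per-sensor weights
        """
        heading = [0.0, 0.0]
        for d, (sx, sy, stheta), gain in zip(sensor_distances,
                                             sensor_placements,
                                             sensor_gains):
            # point detected by this sensor, expressed in the robot's frame
            ox = sx + d * cos(stheta)
            oy = sy + d * sin(stheta)
            heading[0] += gain * ox
            heading[1] += gain * oy
        return heading

An obstacle on the right shortens the right-side vectors, so the sum (and therefore the reference vector) shifts toward the left, exactly as described above.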

Using the resulting ao_heading_vector as our reference for the robot to try to match, here are the results of running the robot software in simulation using only the avoid-obstacles controller, ignoring the goal point completely. The robot bounces around aimlessly, but it never collides with an obstacle, and even manages to navigate some very tight spaces:

This robot is successfully avoiding obstacles within the Python robot simulator.

Python Robot Programming Methods: Hybrid Automata (Behavior State Machine)

So far we’ve described two behaviors—go-to-goal and avoid-obstacles—in isolation. Both perform their function admirably, but in order to successfully reach the goal in an environment full of obstacles, we need to combine them.

The solution we will develop lies in a class of machines that has the supremely cool-sounding designation of hybrid automata . A hybrid automaton is programmed with several different behaviors, or modes, as well as a supervising state machine. The supervising state machine switches from one mode to another at discrete times (when goals are achieved or the environment suddenly changes too much), while each behavior uses sensors and wheels to react continuously to environmental changes. The solution is called hybrid because it evolves in both a discrete and a continuous fashion.

Our Python robot framework implements the state machine in the file supervisor_state_machine.py .

Equipped with our two handy behaviors, a simple logic suggests itself: When there is no obstacle detected, use the go-to-goal behavior. When an obstacle is detected, switch to the avoid-obstacles behavior until the obstacle is no longer detected.

As it turns out, however, this logic will produce a lot of problems. What this system will tend to do when it encounters an obstacle is to turn away from it, then as soon as it has moved away from it, turn right back around and run into it again. The result is an endless loop of rapid switching that renders the robot useless. In the worst case, the robot may switch between behaviors with every iteration of the control loop—a state known as a Zeno condition .

There are multiple solutions to this problem, and readers that are looking for deeper knowledge should check, for example, the DAMN software architecture .

What we need for our simple simulated robot is an easier solution: One more behavior specialized with the task of getting around an obstacle and reaching the other side.

Python Robot Programming Methods: Follow-Wall Behavior

Here’s the idea: When we encounter an obstacle, take the two sensor readings that are closest to the obstacle and use them to estimate the surface of the obstacle. Then, simply set our reference vector to be parallel to this surface. Keep following this wall until A) the obstacle is no longer between us and the goal, and B) we are closer to the goal than we were when we started. Then we can be certain we have navigated the obstacle properly.

With our limited information, we can’t say for certain whether it will be faster to go around the obstacle to the left or to the right. To make up our minds, we select the direction that will move us closer to the goal immediately. To figure out which way that is, we need to know the reference vectors of the go-to-goal behavior and the avoid-obstacle behavior, as well as both of the possible follow-wall reference vectors. Here is an illustration of how the final decision is made (in this case, the robot will choose to go left):

Utilizing a few types of behaviors, the programmed robot avoids obstacles and continues onward.

Determining the follow-wall reference vectors turns out to be a bit more involved than either the avoid-obstacle or go-to-goal reference vectors. Take a look at the Python code in follow_wall_controller.py to see how it’s done.

Final Control Design

The final control design uses the follow-wall behavior for almost all encounters with obstacles. However, if the robot finds itself in a tight spot, dangerously close to a collision, it will switch to pure avoid-obstacles mode until it is a safer distance away, and then return to follow-wall. Once obstacles have been successfully negotiated, the robot switches to go-to-goal. Here is the final state diagram, which is programmed inside the supervisor_state_machine.py :

This diagram illustrates the switching between robotics programming behaviors to achieve a goal and avoid obstacles.
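To give the flavor of it, here is an illustrative sketch of such switching logic (the state names, arguments, and thresholds are invented for this example and are not the exact ones in supervisor_state_machine.py):

    def next_state(state, d_goal, d_obstacle, progress_made,
                   at_goal_dist=0.05, danger_dist=0.04, caution_dist=0.15):
        """Illustrative hybrid-automaton transitions (thresholds in meters
        are made-up values for this sketch)."""
        if d_goal < at_goal_dist:
            return "AT_GOAL"
        if d_obstacle < danger_dist:
            return "AVOID_OBSTACLES"        # dangerously close: pure avoidance
        if state == "AVOID_OBSTACLES":
            return "FOLLOW_WALL"            # back at a safer distance
        if state == "GO_TO_GOAL" and d_obstacle < caution_dist:
            return "FOLLOW_WALL"            # obstacle encountered
        if state == "FOLLOW_WALL" and progress_made and d_obstacle > caution_dist:
            return "GO_TO_GOAL"             # obstacle successfully negotiated
        return state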

Here is the robot successfully navigating a crowded environment using this control scheme:

The robot simulator has successfully allowed the robot software to avoid obstacles and achieve its original purpose.

An additional feature of the state machine that you can try to implement is a way to avoid circular obstacles by switching to go-to-goal as soon as possible, instead of following the obstacle border until its end (which does not exist for circular objects!).

Tweak, Tweak, Tweak: Trial and Error

The control scheme that comes with Sobot Rimulator is very finely tuned. It took many hours of tweaking one little variable here, and another equation there, to get it to work in a way I was satisfied with. Robotics programming often involves a great deal of plain old trial-and-error. Robots are very complex and there are few shortcuts to getting them to behave optimally in a robot simulator environment…at least, not much short of outright machine learning, but that’s a whole other can of worms.

I encourage you to play with the control variables in Sobot Rimulator and observe and attempt to interpret the results. Changes to the following all have profound effects on the simulated robot’s behavior:

  • The error gain kP in each controller
  • The sensor gains used by the avoid-obstacles controller
  • The calculation of v as a function of ω in each controller
  • The obstacle standoff distance used by the follow-wall controller
  • The switching conditions used by supervisor_state_machine.py
  • Pretty much anything else

When Programmable Robots Fail

We’ve done a lot of work to get to this point, and this robot seems pretty clever. Yet, if you run Sobot Rimulator through several randomized maps, it won’t be long before you find one that this robot can’t deal with. Sometimes it drives itself directly into tight corners and collides. Sometimes it just oscillates back and forth endlessly on the wrong side of an obstacle. Occasionally it is legitimately imprisoned with no possible path to the goal. After all of our testing and tweaking, sometimes we must come to the conclusion that the model we are working with just isn’t up to the job, and we have to change the design or add functionality.

In the mobile robot universe, our little robot’s “brain” is on the simpler end of the spectrum. Many of the failure cases it encounters could be overcome by adding some more advanced software to the mix. More advanced robots make use of techniques such as mapping , to remember where they’ve been and avoid trying the same things over and over; heuristics , to generate acceptable decisions when there is no perfect decision to be found; and machine learning , to more perfectly tune the various control parameters governing the robot’s behavior.

A Sample of What’s to Come

Robots are already doing so much for us, and they are only going to be doing more in the future. While even basic robotics programming is a tough field of study requiring great patience, it is also a fascinating and immensely rewarding one.

In this tutorial, we learned how to develop reactive control software for a robot using the high-level programming language Python. But there are many more advanced concepts that can be learned and tested quickly with a Python robot framework similar to the one we prototyped here. I hope you will consider getting involved in the shaping of things to come!

Acknowledgement: I would like to thank Dr. Magnus Egerstedt and Jean-Pierre de la Croix of the Georgia Institute of Technology for teaching me all this stuff, and for their enthusiasm for my work on Sobot Rimulator.

Further Reading on the Toptal Blog:

  • An Introduction to Robot Operating System: The Ultimate Robot Application Framework
  • Learn to Code: Wisdom and Tools for the Journey
  • Single Responsibility Principle: A Recipe for Great Code
  • Forex Algorithmic Trading: A Practical Tale for Engineers
  • A Machine Learning Tutorial With Examples: An Introduction to ML Theory and Its Applications

Understanding the basics

What is a robot?

A robot is a machine with sensors and mechanical components connected to and controlled by electronic boards or CPUs. They process information and apply changes to the physical world. Robots are mostly autonomous and replace or help humans in everything from daily routines to very dangerous tasks.

What are robots used for?

Robots are used in factories and farms to do heavy or repetitive tasks. They are used to explore planets and oceans, clean houses, and help elderly people. Researchers and engineers are also trying to use robots in disaster situations, medical analysis, and surgery. Self-driving cars are also robots!

How do you build a robot?

The creation of a robot requires multiple steps: the mechanical layout of the parts, the design of the sensors and drivers, and the development of the robot’s software. Usually, the raw body is built in factories and the software is developed and tested on the first batch of working prototypes.

How do you program a robot?

There are three steps involved. First, you get motors and sensors running using off-the-shelf drivers. Then you develop basic building blocks so that you can move the robot and read its sensors. Finally, you use those building blocks to develop smart, complex software routines that create your desired behavior.

What is the best programming language for robotics?

Two main programming languages are the best when used in robotics: C++ and Python, often used together as each one has pros and cons. C++ is used in control loops, image processing and to interface low-level hardware. Python is used to handle high-level behaviors and to quickly develop tests or proof of concepts.

How can you program a robot using Java?

Assuming you are able to run the Java Virtual Machine on your robot, you can interface your Java code with the motor and sensor drivers using sockets or RPC. Writing device drivers directly in Java may be harder than in other languages such as C++, so it’s better to focus on developing high-level behavior!

What is robotics engineering?

Robotic engineering is a broad field of engineering focused on the design and integration of entire robotic systems. Thus it requires knowledge of mechanical, electronic, software, and control systems, interacting with the engineers specialized in each field to fulfill the requirements and goals for a given robot.

What is the difference between robotic process automation (RPA) and robotics programming?

Both fields develop software in order to help or replace humans, but RPA targets tasks usually done by a human in front of a computer, such as sending emails, filing receipts, or browsing a website. Robotics instead executes tasks in the real world such as cleaning, driving, building, or manufacturing.

Who invented the first robot in the world?

The first mobile robot was created in 1966 at Stanford Research Institute by a team led by Charles Rosen and Nils Nilsson. Using only a 24-bit CPU and 196 KB of RAM, it was able to move around an office autonomously while avoiding obstacles. Since it shook while it moved, its creators called it Shakey.


NVIDIA Isaac Sim

NVIDIA Isaac Sim™ is a reference application enabling developers to design, simulate, test, and train AI-based robots and autonomous machines in a physically-based virtual environment. Isaac Sim, built on NVIDIA Omniverse, is fully extensible, enabling developers to build their own custom simulators or integrate core Isaac Sim technologies into their existing testing and validation pipelines.


Introducing NVIDIA Isaac Lab

NVIDIA Isaac Lab is a lightweight sample application built on Isaac Sim and optimized for robot learning, which is pivotal for robot foundation model training. Isaac Lab is optimized for reinforcement and imitation learning, and can train all types of robot embodiments, including the Project GR00T foundation model for humanoids.


Key Benefits of Isaac Sim

Realistic Simulation

Isaac Sim makes the most of the Omniverse platform’s powerful simulation technologies. These include advanced GPU-enabled physics simulation with NVIDIA PhysX® 5, photorealism with real-time ray and path tracing, and MDL material definition support for physically based rendering.

Modular Architecture for a Variety of Applications

Isaac Sim is built to address many of the most common use cases, including manipulation, navigation, and synthetic data generation for training data. Its modular design also means the tool can be customized and extended to many new use cases.

Seamless Connectivity and Interoperability

Isaac Sim benefits from the Omniverse platform’s OpenUSD interoperability across 3D and simulation tools, enabling developers to easily design, import, build, and share robot models and virtual training environments. Now, you can easily connect the robot’s brain to a virtual world through the Isaac ROS/ROS 2 interface, full-featured Python scripting, and plug-ins for importing robot and environment models.

Key Features of Isaac Sim 


Pre-Populated Robots and Sensors

Get started faster using pre-existing robot models and sensors. Explore new robot models, including FANUC and Techman, and sensor ecosystem support for Orbbec, Sensing, Zvision, Ouster, and RealSense.


ROS/ROS 2 Support

Custom ROS messages and URDF/MJCF are now open sourced. Get support for custom ROS messages that allow standalone scripting to control the simulation steps manually. 


Scalable Synthetic Data Generation

Explore randomization in simulation added for manipulator and mobile base use cases. Environmental dynamics and other attributes of 3D assets—such as lighting, reflection, color, and position—are randomized to train and test mobile robots and manipulators. 


SimReady Assets 

Take advantage of OpenUSD SimReady warehouse scenes and assets. Use SimReady 3D assets to create and test scenarios and exercise robot solutions across warehouse configurations.

Developer Resources and Support

Get Live Help

Connect with experts live to get your questions answered. 

Chat with us on our Forums

 Attend an upcoming event

 See the weekly livestream calendar

Explore Resources

Learn at your own pace with free getting-started material.

Check out this documentation .

Follow along with self-paced trainings

Dive into Q&A forums .

Robotics DevOps

NVIDIA OSMO is a cloud-native workflow orchestration platform that lets you easily scale your workloads across distributed environments—from on-premises to private and public cloud resource clusters. It provides a single pane of glass for scheduling complex multi-stage and multi-container heterogeneous computing workflows.


Stanford University

Stanford Engineering Everywhere

CS223A - Introduction to Robotics

Course Description

The purpose of this course is to introduce you to the basics of modeling, design, planning, and control of robot systems. In essence, the material treated in this course is a brief survey of relevant results from geometry, kinematics, statics, dynamics, and control. The course is presented in a standard format of lectures, readings, and problem sets. There will be an in-class midterm and a final examination; these examinations will be open book. Lectures will be based mainly, but not exclusively, on material in the Lecture Notes book, and will follow roughly the same sequence as the material presented in the book, so it can be read in anticipation of the lectures. Topics: robotics foundations in kinematics, dynamics, control, motion planning, trajectory generation, programming, and design. Prerequisites: matrix algebra.


Khatib, Oussama

Prof. Khatib was the Program Chair of ICRA 2000 (San Francisco) and Editor of "The Robotics Review" (MIT Press). He has served as the Director of the Stanford Computer Forum, an industry affiliate program. He is currently the President of the International Foundation of Robotics Research (IFRR) and Editor of STAR, Springer Tracts in Advanced Robotics. Prof. Khatib is an IEEE Fellow, a Distinguished Lecturer of the IEEE, and a recipient of the JARA Award.


Course Sessions (16):

  1. Course Overview, History of Robotics Video, Robotics Applications, Related Stanford Robotics Courses, Lecture and Reading Schedule, Manipulator Kinematics, Manipulator Dynamics, Manipulator Control, Manipulator Force Control, Advanced Topics (58 min)
  2. Spatial Descriptions, Generalized Coordinates, Operational Coordinates, Rotation Matrix, Example - Rotation Matrix, Translations, Example - Homogeneous Transform, Operators, General Operators (1 hr 8 min)
  3. Homogeneous Transform Interpretations, Compound Transformations, Spatial Descriptions, Rotation Representations, Euler Angles, Fixed Angles, Example - Singularities, Euler Parameters, Example - Rotations (1 hr 17 min)
  4. Manipulator Kinematics, Link Description, Link Connections, Denavit-Hartenberg Parameters, Summary - DH Parameters, Example - DH Table, Forward Kinematics (1 hr 12 min)
  5. Summary - Frame Attachment, Example - RPRR Manipulator, Stanford Scheinman Arm, Stanford Scheinman Arm - DH Table, Forward Kinematics, Stanford Scheinman Arm - T-Matrices, Stanford Scheinman Arm - Final Results (1 hr 7 min)
  6. Instantaneous Kinematics, Jacobian, Jacobians - Direct Differentiation, Example 1, Scheinman Arm, Basic Jacobian, Position Representations, Cross Product Operator, Velocity Propagation, Example 2 (1 hr 11 min)
  7. Jacobian - Explicit Form, Jacobian Jv / Jw, Jacobian in a Frame, Jacobian in Frame {0}, Scheinman Arm, Scheinman Arm - Jacobian, Kinematic Singularity (1 hr 9 min)
  8. Scheinman Arm - Demo, Kinematic Singularity, Example - Kinematic Singularity, Puma Simulation, Resolved Rate Motion Control, Angular/Linear - Velocities/Forces, Velocity/Force Duality, Virtual Work, Example (1 hr 15 min)
  9. Intro - Guest Lecturer: Gregory Hager, Overview - Computer Vision, Computational Stereo, Stereo-Based Reconstruction, Disparity Maps, SIFT Feature Selection, Tracking Cycle, Face Stabilization Video, Future Challenges (1 hr 16 min)
  10. Guest Lecturer: Krasimir Kolarov, Trajectory Generation - Basic Problem, Cartesian Planning, Cubic Polynomial, Finding Via Point Velocities, Linear Interpolation, Higher Order Polynomials, Trajectory Planning with Obstacles (1 hr 2 min)
  11. Joint Space Dynamics, Newton-Euler Algorithm, Inertia Tensor, Example, Newton-Euler Equations, Lagrange Equations, Equations of Motion (1 hr 14 min)
  12. Lagrange Equations, Equations of Motion, Kinetic Energy, Equations of Motion - Explicit Form, Centrifugal and Coriolis Forces, Christoffel Symbols, Mass Matrix, V Matrix, Final Equation of Motion (1 hr 14 min)
  13. Control - Overview, Joint Space Control, Resolved Motion Rate Control, Natural Systems, Dissipative Systems, Example, Passive System Stability (1 hr 10 min)
  14. PD Control, Control Partitioning, Motion Control, Disturbance Rejection, Steady-State Error, PID Control, Effective Inertia (1 hr 13 min)
  15. Manipulator Control, PD Control Stability, Task Oriented Control, Task Oriented Equations of Motion, Operational Space Dynamics, Example, Nonlinear Dynamic Decoupling, Trajectory Tracking (1 hr 12 min)
  16. Compliance, Force Control, Dynamics, Task Description, Historical Robotics, Stanford Human-Safe Robot, Task Posture and Control, Multi-Contact Whole-Body Control (1 hr 10 min)



Robotics Simulation

Jul 22, 2014


Presentation Transcript

Robotics Simulation (Skynet) Andrew Townsend Advisor: Professor Grant Braught

Introduction • Robotics is a quickly emerging field in today’s array of technology • Robots are expensive, and each robot’s method of control is different • Used in Dickinson College’s Artificial Life class • dLife – universal robotics controller • Problem to be solved: How can multiple students do coursework for the Artificial Life class at the same time?

Three Critical Goals for the Solution • Accurate modeling of Hemisson and AIBO robots • Multiple simultaneous users controlling multiple robots at once • Solution must integrate with the existing dLife package

Proposed Solution • Simulation provides a cheap alternative software solution to this problem. • Many Robotics simulators already exist • Not all simulate multiple robots or accept multiple users • None of them integrate with dLife.

Robotics Simulator’s Role • Integrate transparently with dLife such that the process of using a virtual robot is the same as using a physical robot • Provide support for multiple users using multiple robots in the same ‘space’ • Accurate modeling of a robot’s expected actions, to provide an effective alternative for Artificial Life students.

Background • Several robotics simulators already exist • Some of these include Pyro, Gazebo, Stage, Simulator Bob, and SIMpact

Background.Pyro • Pyro is one of the most similar applications to dLife. • Concentrates on providing a single language to communicate with a robot so that the user does not have to learn a new language for each kind of robot • Integrates with another simulator called Stage.

Background.Stage • Closest to this project • Integrates with client packages, such as Pyro • Focuses on representing a world and the simulated robot’s interactions with that world

Architecture • Similar to the Pyro/Stage methodology • Client / Server Architecture • Communication should not be limited to the local machine • Use Socket-based communication.

Architecture.Client (dLife) • The client of a robotics simulation represents a user’s input to the robot • dLife already provides a unified interface to abstract the user’s control of the robot and relevant sensor readings, regardless of what kind of robot it is • dLife must simply be extended to allow for socket based communication, rather than using serial communication. • Other than connection mechanisms, all other processes should be essentially identical to dLife’s interaction with a physical robot

Architecture.Server • Most obvious choice is to use Java • Server must be able to simulate the robots while still accepting new connections. Threaded connection listening! • Server has two main roles: • Server must be able to accept some form of predefined world, and display it to the user. • Server must also be capable of modeling a robot’s actions in that world.
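The presentation implements the server in Java; purely to illustrate the same threaded-listener pattern, here is a sketch in Python (the port number and the ACK reply are invented for the example):

```python
import socket
import threading

def handle_robot(conn, addr):
    """One thread per simulated robot, mirroring the robotThread design:
    read commands over the socket, update that robot's state, reply."""
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break  # client disconnected
            # ...parse the motion command and update the simulated robot...
            conn.sendall(b"ACK\n")

def serve(port=5000):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", port))
    server.listen()
    while True:  # keep accepting new connections while robots run in threads
        conn, addr = server.accept()
        threading.Thread(target=handle_robot, args=(conn, addr), daemon=True).start()

if __name__ == "__main__":
    serve()
```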

The World is what you make of it • Clearly a simulator is useless if you can’t see what the robot is doing • Worlds are created from reading input from text files. • World Files assume ‘real world’ dimensions, in this case meters, and translate from there. • Axis flip

The World File • Each World File contains basic information such as the dimensions of the world and coloring. • Also allows for a SCALE statement for perspective purposes • Each world file can contain an unlimited number (barring system resources) of objects that reside in the world

The World File: Objectivity • Objects are currently limited to circles, rectangles, and lines • Objects are defined in the World File by their size, position, and coloring. • Currently objects are impassable objects as far as the robot is concerned, but moveable objects are a possibility for the future.

The World File: Syntax Checking • Upon loading a given World File, the server checks the file against the expected format and reports errors intelligently. • Erroneous data is reported to the user and discarded.
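The slides do not show the actual world-file syntax, so the parser sketch below invents keywords (WORLD, SCALE, CIRCLE, RECT, LINE) and field order purely for illustration; only the behavior (real-world units, per-line syntax checking, reporting and discarding bad data) follows the slides:

```python
# Hypothetical world-file parser in the spirit of the slides.
def parse_world_file(path):
    world = {"scale": 1.0, "objects": []}
    errors = []
    with open(path) as f:
        for lineno, raw in enumerate(f, start=1):
            tokens = raw.split()
            if not tokens or tokens[0].startswith("#"):
                continue  # skip blanks and comments
            kind, args = tokens[0].upper(), tokens[1:]
            try:
                if kind == "WORLD":        # WORLD width_m height_m
                    world["width"], world["height"] = map(float, args)
                elif kind == "SCALE":      # SCALE pixels_per_meter
                    world["scale"] = float(args[0])
                elif kind in ("CIRCLE", "RECT", "LINE"):
                    world["objects"].append((kind, [float(a) for a in args]))
                else:
                    raise ValueError(f"unknown statement {kind!r}")
            except ValueError as e:
                # Report erroneous data to the user and discard it.
                errors.append(f"line {lineno}: {e}")
    return world, errors
```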

Robot Modeling.Representation • Each kind of robot should be drawn in the same way, with slight variations to indicate which robot is which. • Currently, only the Hemisson robot is implemented, and is represented by a circle of the user’s choice in color, with a line pointing in the direction the robot is facing. • Server handshake: ‘Hemisson [x] [y] [direction] [color]’

Robot Modeling.Implementation • As each robot is meant to be independent of each other, robots are implemented as versions of the robotThread class. • Thread is associated with the socket connection, real-time communication. • All processes associated with that particular robot are handled exclusively in that thread. • Makes it easy to keep each kind of robot’s appearances, capabilities, and actions separate while still keeping it easy to add new kinds of robots to the mix. • Timing is everything

Skynet v.1b DEMONSTRATION

Status • World representation is essentially complete • dLife integration is complete except for one thing… • Robot representation is defined and almost entirely complete, but problems with directional values (rounding errors and conversion issues) • Collision algorithm only partially works • Only forward/backward movement is fully supported, although command parsing is implemented

Challenges • GUIs. • Keeping real world measurements separate from internal ‘backend’ measurements. Directional values are more elusive than they should be. • Writing code in such a way that test cases would work (more threading!) • Collision detection algorithm

Future Work • Mostly revolve around the “HemissonThread” class • Independent and different wheel speeds • Fix directional value calculations • Empirical testing for accurate modeling – tweak timer accordingly • Fix dLife dialog box. • Finishing collision detection algorithm • Add support for scripting? • Bring the noise • Implementation of simAIBOs!

Thank you! Questions?





Describing Robots from Design to Learning: Towards an Interactive Lifecycle Representation of Robots
(A preprint submitted to IEEE ICRA 2024.)

The robot development process is divided into several stages, which create barriers to the exchange of information between these different stages. We advocate for an interactive lifecycle representation, extending from robot morphology design to learning, and introduce the role of robot description formats in facilitating information transfer throughout this pipeline. We analyzed the relationship between design and simulation, enabling us to employ robot process automation methods for transferring information from the design phase to the learning phase in simulation. As part of this effort, we have developed an open-source plugin called ACDC4Robot for Fusion 360, which automates this process and transforms Fusion 360 into a user-friendly graphical interface for creating and editing robot description formats. Additionally, we offer an out-of-the-box robot model library to streamline and reduce repetitive tasks. All code is open source (https://github.com/bionicdl-sustech/ACDC4Robot).


1 INTRODUCTION

As autonomous machines capable of interacting with the real world, various types of robots, such as wheeled mobile robots, quadrupedal robots, and humanoid robots, are emerging in domestic, factory, and other environments to collaborate with humans or accomplish tasks independently. The morphology of a robot is the essential factor that most directly affects the robot's configuration space, thereby determining the robot's function [1]. Robot morphology is primarily determined during the design process, thanks to the development of computer-aided design (CAD) technology, which makes it cost-effective, time-saving, and efficient compared to the manufacturing process.

Beyond robot morphology, learning has become an essential topic in robotics because it enables robots to achieve complex tasks and, thus, better interact with the environment. However, training robots in hardware may lead to failures or damage, making it expensive and time-consuming. Simulation provides a more cost-effective and safer way to develop robots. Moreover, robot simulators incorporate domain randomization techniques that increase the exploration of the state-action space, facilitating the transfer of knowledge learned in simulation to real robots [2]. All robot simulators construct simulation instances from robot models derived from robot description formats.


Robot Description Format (RDF) is a class of formats that can describe the robot system in a structured manner following a set of rules. RDF contains information about the robot system, including kinematics, dynamics, actuators, sensors, and the environment with which the robot can interact. RDF can transfer information about the robot from the design phase to simulation; thus, it can be seen as the interface between robot morphology design and robot learning in a simulated environment.

1.1 File Formats from Design to Learning

Several file formats are used in robot morphology design and learning in a simulation environment. These file formats have specific features tailored to different application scenarios, hindering process interoperability. Various file formats make it challenging to transfer information from the design phase to the learning process in simulation.

In contemporary practice, robot morphology is typically designed using CAD software. File formats in the CAD field can be categorized into neutral and native formats. Neutral file formats adhere to cross-platform compatibility standards, including STEP files (.stp, .step), IGES files (.igs, .iges), COLLADA, and STL. Native file formats are platform-specific and contain precise information optimized for the respective platform, examples of which include SolidWorks (.sldprt, .sldasm), Fusion 360 (.f3d), Blender (.blend), and many others.

Several robot description formats are used in robot simulation. The most common format is the Unified Robot Description Format (URDF), which is supported by various robot simulators, including PyBullet, Gazebo, and MuJoCo. SDFormat is natively supported by Gazebo and partially supported by PyBullet. MJCF is natively supported by MuJoCo and is also supported by Isaac Sim and PyBullet. Other robot description formats are closer to native formats specific to particular simulators than URDF is. For example, the CoppeliaSim file format is designed for use with CoppeliaSim, and WBT is used in Webots.

1.2 A Brief Historical Review of Robot Description Formats

Robot Description Formats provide information for modeling the robot system and are widely used in robot simulators. Currently, research resources on robot description formats are limited, with most of the relevant information available only on their respective websites and forums, making research challenging. The authors in [3] compared existing formats and summarized their main advantages and limitations. Here, we offer a concise historical perspective on robot description formats to enhance understanding.

1.2.1 Before Unified Robot Description Format (URDF)

Research on robot modeling predates the concept of a robot description format by a considerable margin. Denavit and Hartenberg formulated a convention using four parameters to model robot manipulators in 1955 [4], which is still widely used in robotics. With the advent of computer simulation, robots could be defined using programming languages with variables [5]. While it is theoretically possible to describe a robotic system through a programming language's variables and data structures, the reliance on programming language features makes it cumbersome to exchange robot system information across platforms for various purposes. Representing robot system information in a unified, programming-language-independent manner therefore facilitates interchange across platforms and enhances development efficiency. Park et al. [6] discussed XML-based formats, which can describe robots thanks to XML's convenience in delivering information.

1.2.2 URDF, SDFormat, and Others

While developing a personal robotics platform, the idea of creating a "Linux for robotics" came to the minds of Eric Berger and Keenan Wyrobek [7]. With the first distribution of ROS, ROS Mango Tango, released in 2009, URDF was simultaneously introduced. URDF is an XML-based file format that enhances readability and describes robot links' information, including kinematics, dynamics, and geometries, along with robot joints' information, organized in a tree structure. URDF models robots in a universal way, making the models suitable for visualization, simulation, and planning within the ROS framework.
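To make that tree structure concrete, here is a minimal, hand-written illustrative URDF (not from the paper): two links connected by one revolute joint, the smallest possible tree:

```xml
<?xml version="1.0"?>
<robot name="two_link_arm">
  <link name="base"/>
  <link name="upper_arm"/>
  <!-- A joint names its parent and child links; the link/joint graph must be a tree. -->
  <joint name="shoulder" type="revolute">
    <parent link="base"/>
    <child link="upper_arm"/>
    <axis xyz="0 0 1"/>
    <limit lower="-1.57" upper="1.57" effort="10.0" velocity="1.0"/>
  </joint>
</robot>
```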

With the growing popularity of ROS, URDF has become a widely used robot description format supported by various simulation platforms, such as PyBullet, MuJoCo, and Isaac Sim, among others. However, an increasing number of roboticists have recognized the limitations and issues of URDF, such as its inability to support closed-loop chains. The community has endorsed proposals like URDF2 (https://sachinchitta.github.io/urdf2/) to address these concerns. The problems stemming from URDF's design may become increasingly challenging to resolve over time due to the diminishing activity in its development (the update frequency of the repository at https://github.com/ros/urdf has become very low). Therefore, new formats can draw upon URDF's experience to avoid such issues from the outset and expand their ability to describe a broader range of scenarios.

Rosen Diankov et al. [8] promoted an XML-based open standard called COLLADA, which allows for complex kinematics with closed-loop chains. SDFormat (Simulation Description Format) was initially developed as part of the Gazebo simulator and was separated from Gazebo as an independent project to enhance versatility across different platforms. SDFormat is also an XML-based format that shares a similar grammar with URDF but extends its ability to describe the environment with which the robot interacts. Furthermore, SDFormat is actively developed, making it more responsive to future robotics needs. MJCF is another XML-based file format initially used in the MuJoCo simulator. It can describe robot structures, including kinematics, dynamics, and other elements like sensors and motors.

Although these robot description formats enable more comprehensive modeling information for robotic systems and have resolved some of the limitations of URDF, URDF remains the most universally adopted robot description format in academia and industry. Fig. 2 provides a timeline representation of the release times of these robot description formats.

[Fig. 2: a timeline of robot description format releases.]

1.2.3 Beyond URDF

Daniella Tola et al. [9, 10] surveyed the user experience of URDF within the robotics community, including academia and industry. Their survey revealed problems associated with using URDF and spurred research on robot description formats. Some challenges are specific to URDF, for instance, the lack of support for closed-chain mechanisms. Additionally, some challenges are common to other robot description formats, such as the complex workflow involving multiple tools, including CAD software, text editors, and simulators.

One of the solutions is to create a new robot description format that can adequately describe robot systems and is also easy to use. A new attempt in this regard is the OpenUSD format (https://aousd.org/), which combines the strengths of academia and industry to drive progress in this field.

Another solution is to provide more tools that enhance the usability of robot description formats. Some tools, such as gz-usd (https://github.com/gazebosim/gz-usd) and sdformat_mjcf (https://github.com/gazebosim/gz-mujoco/tree/main/sdformat_mjcf), improve the interoperability of different robot description formats. CAD tools for exporting robot designs to robot description formats are in high demand within the roboticist community because they relieve developers from the tedious workflow of creating robot description files by hand. Such tools include the SolidWorks URDF exporter, Fusion2URDF, the Onshape-to-URDF exporter, and the Blender extension Phobos.

In the rest of this paper, Section 2 introduces methods for structuring the workflow from design to learning and presents an automation tool, ACDC4Robot, designed to address these challenges. Section 3 demonstrates the usage of the automation tool with examples and offers a robot model library for users that can be readily utilized. We conclude in Section 4 and discuss the limitations of our work and the future of the format for robot system development. This article’s contributions include promoting a lifecycle representation from robot design to robot learning, offering the ACDC4Robot tool within Fusion 360 to streamline the workflow from robot design to robot learning, and constructing an out-of-the-box robot model library for robot design and learning.

2 METHODOLOGY

We analyze the workflow to describe robots from design to learning, then describe an interactive lifecycle representation. Next, we employ robot process automation to streamline the processes of robot design to robot learning. An automation tool integrated with a CAD platform can achieve this lifecycle representation interactively.

2.1 An Interactive Lifecycle Representation

The process of robot development can be represented in various ways. Here, we separate the robot development process into four stages: design, simulation, learning, and application. In many robot learning approaches, robots are trained initially in a simulation environment and then transferred to real robots using Sim2Real methods. As a result, the simulation, learning, and application stages can be streamlined into a single workflow. However, the difference in file formats between the design and simulation stages poses a challenge in transferring information from robot design to simulation. To address this issue, we introduce the robot description format as a bridge that eliminates the gap between design and simulation (Fig. 3), allowing for the seamless connection of these stages to create a lifecycle representation of the robot development process.

For a robot description format based on the XML format, using a text editor is a straightforward but non-intuitive method for interacting with the robot description file. Creating or modifying the robot description file by hand becomes tedious, time-consuming, and error-prone. Since the robot description format contains information directly derived from robot design, the graphical interactive interface provided by CAD software can serve as a graphical editor for the robot description format. By utilizing CAD software as the GUI, the robot description format can be interacted with in a WYSIWYG (what you see is what you get) manner. Consequently, this entire process can be regarded as an interactive lifecycle representation of the robot.

[Fig. 3: the robot description format as a bridge between robot design and simulation.]

2.2 Robotic Process Automation from Design to Simulation

CAD software and robot simulators are two systems with distinct functions, each emphasizing different aspects of the robot. However, some features in CAD and robot simulators represent different forms of the same information. The way components are joined to construct a robot assembly in CAD software determines the kinematics of the robot. The physical properties of robot components in CAD software likewise carry the dynamics used by the robot simulator. The geometric shape of components can be used for visualization and collision information in the simulator. Fig. 4 shows that a one-to-one relationship between CAD and simulation systems makes automated conversion between these two processes feasible.

[Fig. 4: one-to-one correspondences between CAD features and simulator features.]

2.3 An Open-source Plug-in Using Fusion 360

We present an open-source plugin for Fusion 360 that achieves interactive lifecycle process automation from robot design to robot learning. Fusion 360, developed by Autodesk, is CAD software that is popular within the roboticist community. It provides API access for developers, which makes automation tasks possible.

Following J. Collins et al.'s work [11], we selected a set of popular simulators used in robot learning (RaiSim, Gazebo, Nvidia Isaac, MuJoCo, PyBullet, CARLA, Webots, and CoppeliaSim) and compared their compatibility with the robot description formats URDF, SDFormat, MJCF, and USD. Since we use Gazebo and PyBullet as our target simulation platforms, we chose URDF and SDFormat (per Table 1) as the robot description formats for carrying the design into the learning process.

[Table 1: simulator support for robot description formats; only partially recoverable from the source. One format is listed as supported by Gazebo, Nvidia Isaac, MuJoCo, PyBullet, and CoppeliaSim; another by RaiSim, Nvidia Isaac, MuJoCo, and PyBullet.]

3 DEMONSTRATIONS

In this section, we introduce the GUI of the Fusion 360 plugin that enables the interactive lifecycle process from robot design to robot simulation, along with a guide on how to use the plugin. We begin with the UR5e robot manipulator model as a predefined design to verify the plugin's ability to export a serial-chain robot into URDF for simulation. Next, we use a simple closed-loop four-bar linkage to demonstrate the workflow from scratch, confirming the plugin's ability to simulate closed-chain mechanisms via SDFormat in Gazebo. Finally, we present a robot library containing various out-of-the-box robot models that help users cut down on repetitive tasks.


3.1 The ACDC4Robot Plug-in with Fusion 360


ACDC4Robot is an open-source plugin for Fusion 360 that can automatically convert design information into a simulation data structure (a robot description format) for learning. The pipeline for using the ACDC4Robot plugin from design to learning is illustrated in Fig. 5. Users can import an existing robot model into Fusion 360 or design a robot from scratch in Fusion 360. ACDC4Robot provides a straightforward GUI to simplify the conversion process, as shown in Fig. 6. After completing the robot morphology design, click the start button of the ACDC4Robot plugin, and a settings panel appears on the right side of Fusion 360. This panel lets the user choose between URDF and SDFormat as the format for transferring design information to the simulation, and select the target simulator so that the export is compatible with it. The exported files can then be used in simulation for robot learning.

Compared to the traditional method of creating and editing robot description files in a text editor, using Fusion 360 as a graphical interface for modifying the robot design, with modifications directly reflected in the robot description file, is a more intuitive approach. With ACDC4Robot, users can generate robot descriptions for learning in simulation with a few button clicks, freeing them from the previously tedious workflow.

3.2 Demonstrations with Serial Chains

Using existing robot models for learning is common practice. Here, we use the UR5e robot manipulator model downloaded from UR's website to demonstrate the design-to-learning process with a serial-chain robot, a typical robot type. As shown in Fig. 7, the first step is to import the UR5e model into Fusion 360 and assemble it in a tree structure. Next, we use ACDC4Robot to export the design file UR5e.f3d into several files, including the text file UR5e.urdf and the mesh files referenced by the URDF for visualization and collision geometry. These robot description files can be loaded directly into PyBullet for learning.
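That last step is a one-liner in PyBullet. The sketch below loads a Panda model that ships with pybullet_data; an exported UR5e.urdf would be loaded the same way, by file path:

```python
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)  # use p.GUI for a visual window
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
p.loadURDF("plane.urdf")

# An exported file would be loaded as p.loadURDF("/path/to/UR5e.urdf").
robot = p.loadURDF("franka_panda/panda.urdf", useFixedBase=True)
print("joints:", p.getNumJoints(robot))

for _ in range(240):  # simulate one second at the default 240 Hz
    p.stepSimulation()
p.disconnect()
```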

3.3 Demonstrations with Closed Loops

Closed-chain mechanisms are also widely used in robots. We demonstrate creating a four-bar linkage from scratch to simulation in Fig. 8 . First, we draw sketches for the linkage bars and then create linkage geometry components from these sketches. Next, we assemble these components by joining them together. Finally, we use ACDC4Robot to export SDFormat files (as URDF cannot describe closed-loop chain mechanisms) to Gazebo.

[Fig. 8: a four-bar linkage taken from sketch to simulation in Gazebo.]

3.4 Robot Library of the ACDC4Robot Plugin

By enabling the automatic transfer of robot design information to learning in simulation through the ACDC4Robot plugin, the robot design model becomes metadata in the interactive lifecycle representation of a robot. Consequently, a library of robot design models helps construct robot applications using the interactive lifecycle pipeline.

Based on a survey of robot types modeled in URDF within the roboticist community [9], it was found that robotic arms, mobile robots, end effectors, and dual-arm robots are the most frequently used types of robots. Using this knowledge and investigating several robot datasets, including RoboSuite (https://robosuite.ai/docs/modules/robots.html), awesome-robot-descriptions (https://github.com/robot-descriptions/awesome-robot-descriptions), Gazebo models (http://models.gazebosim.org/), and CoppeliaRobotics models (https://github.com/CoppeliaRobotics/models), we have created a robot model library for Fusion 360. This library, listed in Table 2, has been tested with the ACDC4Robot plugin and can be used out of the box. The robot library can be downloaded from the ACDC4Robot repository.

Table 2. The robot model library.

Robot Name            Robot Type       Structure
Franka Emika Panda    Robotic Arm      Serial Chain
Franka Emika Hand     End Effector     Serial Chain
Kinova Gen3           Robotic Arm      Serial Chain
Rethink Sawyer        Robotic Arm      Serial Chain
Robotiq 2F85 Gripper  End Effector     Closed Loop
UR5e                  Robotic Arm      Serial Chain
ABB YuMi              Dual Arm Robot   Serial Chain
KUKA youBot           Mobile Robot     Serial Chain

4 DISCUSSIONS AND CONCLUSION

4.1 Towards a Lifecycle Representation

This article introduces an interactive lifecycle representation that spans from robot design to robot learning in simulation. We identify the gap in transferring design information to the robot simulation environment, leading us to advocate for using a robot description format to bridge the robot morphology design and robot learning in simulation. To facilitate smoother information transfer throughout the process, we have developed a robot process automation tool capable of converting design information into simulation data. This automation is made possible by the one-to-one mapping relationship between design and simulation platforms. As a result, we have created an open-source plugin called ACDC4Robot for Fusion 360. This plugin allows users to convert design information into robot description format, turning Fusion 360 into a graphical user interface (GUI) for interactive robot modeling. This interactive lifecycle process, spanning from robot design to learning, simplifies the development of robot applications.

4.2 Limitations of the ACDC4Robot Plugin

Although the ACDC4Robot plugin can achieve an interactive lifecycle process from robot design to learning in simulation, it still has some limitations.

The ACDC4Robot plugin is developed using the Fusion 360 API, making it dependent on Fusion 360. Developing a platform-independent tool capable of directly converting a design file into a robot description format for the entire lifecycle process from robot design to learning would be more versatile.

Besides, the ACDC4Robot plugin currently only supports URDF and SDFormat. SDFormat compensates to some extent for the limitations of URDF, such as modeling closed-chain robots, and together these two formats meet most needs in academia and industry. Still, adding support for exporting other robot description formats, such as MJCF, would broaden the applicability of this tool.

Furthermore, the number of robot models in the robot library is still relatively low compared to other robot databases. This is partly due to the challenge of obtaining publicly available robot models. Additionally, assembling these acquired robot models in Fusion 360 and testing them with the ACDC4Robot plugin is quite time-consuming. We plan to incrementally expand the robot library during the future development process.

4.3 Towards a Unified Robot Lifecycle Format

The modularity of the robot development process has led to separate formats for storing information, creating barriers to data exchange between different stages of development. While conversion tools have partially addressed this issue, their efficiency leaves room for improvement. An ultimate solution would be a universal format describing all the information across the robot development lifecycle; adopting such a format could significantly enhance the efficiency of robot development. The Universal Scene Description (USD) format is moving in this direction.

Acknowledgements

This work was partly supported by the Ministry of Education of China-Autodesk Joint Project on Engineering Education, the National Natural Science Foundation of China [62206119], and the Science, Technology, and Innovation Commission of Shenzhen Municipality [JCYJ20220818100417038, ZDSYS20220527171403009, and SGDX20220530110804030].

  • [1] Benedikt Feldotto, Fabrice O. Morin, and Alois Knoll. The Neurorobotics Platform robot designer: modeling morphologies for embodied learning experiments. Frontiers in Neurorobotics, 16:856727, 2022.
  • [2] Fabio Muratore, Fabio Ramos, Greg Turk, Wenhao Yu, Michael Gienger, and Jan Peters. Robot learning from randomized simulations: A review. Frontiers in Robotics and AI, page 31, 2022.
  • [3] Mikhail Ivanou, Stanislav Mikhel, and Sergei Savin. Robot description formats and approaches. In 2021 International Conference "Nonlinearity, Information and Robotics" (NIR), pages 1–5. IEEE, 2021.
  • [4] Jacques Denavit and Richard S. Hartenberg. A kinematic notation for lower-pair mechanisms based on matrices. 1955.
  • [5] Leon Žlajpah. Simulation in robotics. Mathematics and Computers in Simulation, 79(4):879–897, 2008.
  • [6] Ji Hwan Park, Tae Houn Song, Soon Mook Jung, and Jae Wook Jeon. XML based robot description language. In 2007 International Conference on Control, Automation and Systems, pages 2477–2482. IEEE, 2007.
  • [7] Keenan A. Wyrobek, Eric H. Berger, H. F. Machiel Van der Loos, and J. Kenneth Salisbury. Towards a personal robotics development platform: Rationale and design of an intrinsically safe personal robot. In 2008 IEEE International Conference on Robotics and Automation, pages 2165–2170. IEEE, 2008.
  • [8] Rosen Diankov, Ryohei Ueda, Kei Okada, and Hajime Saito. COLLADA: An open standard for robot file formats. In Proceedings of the 29th Annual Conference of the Robotics Society of Japan, AC2Q1–5, 2011.
  • [9] Daniella Tola and Peter Corke. Understanding URDF: A survey based on user experience. arXiv preprint arXiv:2302.13442, 2023.
  • [10] Daniella Tola and Peter Corke. Understanding URDF: A dataset and analysis. arXiv preprint arXiv:2308.00514, 2023.
  • [11] Jack Collins, Shelvin Chand, Anthony Vanderkop, and David Howard. A review of physics simulators for robotic applications. IEEE Access, 9:51416–51431, 2021.


KUKA.Sim

Smart simulation software for efficient offline programming of KUKA robots: with KUKA.Sim, you can optimize the operation of your systems and robots outside the production environment – quickly and easily.

More productivity, safety and competitiveness

The future-oriented KUKA.Sim software brings robot applications virtually to life – before the system has even been put into operation. The robot motion sequences programmed offline are depicted in real time and analyzed and optimized with regard to their cycle times. With features such as a reachability check and collision detection, you can make sure that robot programs and work cell layouts can really be implemented. Digital simulation thus offers maximum planning reliability for your manufacturing processes at minimum cost and effort. At the same time, production downtimes are kept as short as possible.

Simulation of a robot system in just a few minutes, without deep programming knowledge with KUKA.Sim

From offline programming to virtual commissioning

Convincing advantages of simulating production processes with KUKA.Sim

Time savings

Plan your system and robot concepts quickly, easily and individually – without actually having to build them in the real world.

Increased sales

KUKA.Sim helps your sales team to professionally present your solutions to end customers and to increase your sales success.

Planning reliability

Design system concepts in advance with very accurate cycle times for increased planning reliability and competitiveness.

Verifiability

The reachability check and collision detection features allow you to test the viability of your robot programs and cell layouts.

Maximum flexibility through modular software architecture

KUKA.Sim is based on a modular software architecture – with an efficient, flexible and durable toolbox principle. The basic package can be expanded with three add-ons: for powerful modeling of an individual component library, for virtual commissioning and for simulation of welding applications. This means customers only pay for the functional expansions they actually need. If their requirements change, users can easily add further add-ons in the future. The modular system stands out for its flexibility and durability.


KUKA.Sim Basic – scope of functions

  • 64-bit application for top CAD performance
  • Integrated CAD imports (CATIA V5, V6, JT, STEP, 2D RealDWG, etc.)
  • 3D export function (STEP-AP242, JT)
  • 2D export function (RealDWG)
  • Extensive online library of currently available robot models, etc.
  • Configurable collision check and reachability check
  • AVI HD video and 3D PDF export function
  • Modeling wizards for creating your own components
  • Accurate cycle time determination (even without KUKA.OfficeLite)
  • Virtual reality support (additional VR hardware required)
  • Mobile Viewer app available for mobile devices
  • Support for 3D mouse (e.g. 3Dconnexion)
  • NVIDIA PhysX support
  • New: KRL editor for advanced offline programming
  • New: 3D safety zone configuration
  • New: Advanced I/O signal editor
  • New: Robot stopping distance simulation 
  • New: WorkVisual project export 

System requirements (minimum)

  • Supported operating systems: WIN 10 64-bit
  • Graphics card with 1 GB RAM (recommended: graphics card with 2 GB RAM) and a resolution of at least 1024 x 768 pixels (recommended: 1920 x 1080 pixels)
  • Dual-core CPU with 8 GB RAM (recommended: Intel i7 with 16 GB RAM)
  • DirectX 9.0 support

Optional add-ons for functional expansion 

KUKA.Sim Modeling – easy model creation with the component library

The Modeling add-on expands the basic functionality of KUKA.Sim and makes it possible to create an individual component library from your own CAD data: with kinematic systems, sensors, signals, material flow or physical behavior. For even more realistic behavior simulation.

KUKA.Sim Connectivity – commissioning on virtual models (virtual commissioning)

KUKA.Sim ArcWelding – simulation of welding applications

Test a trial version free of charge or order KUKA.Sim now:

Download a free demo version of KUKA.Sim (KUKA.OfficeLite not included). In the KUKA Marketplace you can also order the software directly. All you need is a my.KUKA registration.


KUKA.Sim Video Tutorials

Step-by-step instructions for programmers of all levels of experience - from beginners to experts.


"KUKA.Sim is the intelligent simulation software package for saving time, increasing sales and improving competitiveness in a fast-moving market." – Quirin Goerz, Chief Information Officer at KUKA

Fast, simple and efficient with KUKA.Sim

Thanks to an intuitive user interface as well as a multitude of different functions and modules, KUKA.Sim is the optimum solution for maximum efficiency in offline programming. The simple yet powerful KRL editor (KUKA Robot Language) provides two views: for experts and for beginners. A visual program tree makes programming possible even without KRL knowledge, for example. You can create optimum layouts for your production systems at an early stage of the project. Simply drag the smart components from the extensive library and drop them in the required position. Investigate alternatives and verify concepts with a minimum of effort. Since motion execution takes place in real time, cycle times can be calculated precisely. At the end, the simulations can be provided in a variety of formats for almost any device: from mobile to desktop applications, from 2D to 3D.

Experience robot simulations on mobile devices





Open access | Published: 05 June 2024

Design and application of virtual simulation teaching platform for intelligent manufacturing

Pengfei Zheng, Junkai Yang, Jingjing Lou & Bo Wang

Scientific Reports volume 14, Article number: 12895 (2024)

Subjects: Electrical and electronic engineering · Information technology · Mechanical engineering

Practical teaching in intelligent manufacturing majors faces a shortage of equipment and instructors, together with what is often summarized as "three highs and three difficulties": high equipment investment, high material loss, and high teaching risk; internships that are difficult to arrange, production that is difficult to observe, and results that are difficult to reproduce. Taking the electrical automation technology, mechatronics technology, and industrial robotics technology majors as examples, we design and build a virtual simulation teaching platform for intelligent manufacturing majors based on the synergy of a cloud computing platform, edge computing technology, and terminal equipment. The platform comprises six virtual simulation modules: electrician electronics and PLC control; a virtual-real combination of a typical intelligent manufacturing production line; a dual-axis collaborative robotics workstation; digital twin simulation; virtual disassembly of industrial robots; and a flexible production line for magnetic yoke shafts. Together these modules cover basic principle experiments, advanced application experiments, and advanced integration experiments for intelligent manufacturing majors. To test the platform's effectiveness for practical engineering teaching, we organized a teaching practice activity involving 246 students from two parallel classes in each of the three majors. Over one year of teaching application, we analyzed grades in the 7 core courses of the three majors, the proportion of students participating in competitions and innovation activities, the number of awards and professional qualification certificates, and subjective questionnaires from the participants. The analysis shows that learners taught on the virtual simulation platform outperformed learners under the traditional teaching method: academic performance, participation in competitions and innovation activities, and the proportions of awards and certificates improved by more than 13%, 37%, 36%, 27%, and 22%, respectively. The platform therefore shows clear advantages in addressing the "three highs and three difficulties" of practical engineering teaching, and questionnaire feedback indicates that it can effectively alleviate the shortage of practical training equipment, stimulate interest in learning, and help broaden and improve learners' knowledge systems.


Introduction

Online education (e-learning) originated in the United States in the last century and then spread worldwide, gradually extending from North America and Europe to Asia. In the spring of 2020, the sudden Covid-19 epidemic profoundly changed how we live, work, and learn, and pushed online education into an unprecedented boom. The crisis seriously affected the basic elements of higher education; educational continuity during the epidemic depended on students using the Internet, television, and other distance learning channels for independent, cooperative, parent-guided, and innovative learning. However, practical engineering education, which is based mainly on experiments and hands-on training, cannot be addressed simply by transplanting the general model of online education. This is an important opportunity and challenge for online engineering education in the post-epidemic era and beyond.

For this reason, many scholars around the world have studied virtual simulation online education models and teaching platforms built on artificial intelligence, virtual simulation, virtual reality, augmented reality, and related technologies. Some researchers have studied virtual simulation teaching for high-risk experiments or courses in science, engineering, agriculture, medicine, and biochemistry. Krishnamoorthy [1] reviewed the possibilities of using existing online resources to teach chemistry experiments entirely virtually, helping teachers deliver chemistry experiments and instruments remotely through virtual platforms. Chen et al. [2] designed a psychological virtual simulation experimental teaching system to address the "invisibility" of human psychological activities. Kruger et al. [3] proposed integrating the benefits of different modes of control system learning in a mechatronics engineering course, and designed a low-cost ball-on-beam demonstrator with a Matlab/Simulink software interface to bridge the theory-practice divide in teaching state-space control theory. Elkhatat et al. [4] used the computer-based Aspen Plus Sensitivity Analysis Tool (APSAT) to build a virtual environment mimicking a gas absorption lab experiment in the Unit Operations Lab of a chemical engineering bachelor's curriculum. Nadeem et al. [5] presented a mobile application, aimed at novice learners, that uses technology for teaching and learning computer system engineering concepts. Lamo et al. [6] presented a case study of a flexible laboratory for the use of Arduino in a computer technology course with 153 students distributed across Spain and Latin America. Lai et al. [7] designed a three-dimensional integrated circuit virtual experiment platform based on Unity3d and 3ds Max, building three-dimensional models of the instruments, equipment, electronic components, and cleanroom laboratory scenes of an integrated circuit experiment. Lionetti et al. [8] studied two engaging options for teaching microscopy remotely. Sreekanth et al. [9] proposed an innovative teaching-learning system that conducts volumetric chemistry experiments through a hardware-enabled platform named avatar-shell. Zheng et al. [10] described the design, development, and application of virtual simulation teaching resources for "Safe Electricity" based on Unity3D. Reen et al. [11] explored integrating an immersive virtual reality simulation of a challenging molecular biology concept within an existing module taught at University College Cork. Additionally, other scholars have built virtual simulation systems for high-cost application environments or scenarios, such as transportation, water conservancy engineering, earthquake and disaster prevention, and robotic systems. Dong et al. [12] established a virtual simulation experiment teaching platform using 3D modeling, human-computer interaction, and VR technologies; their platform lets students understand the structure of a subway ventilation room and master the control requirements of the ventilation system during sudden fires, blockages, and failures. Rajabi et al. [13] evaluated the effect of education and premonition on the incorrect decision-making of residents under the stressful conditions of an earthquake, designing a virtual model of a classroom in a Tehran school to simulate the learning experience. Pang et al. [14] built an online remote robotics experiment system using digital twin (DT) and IoT technology and adopted the ADDIE (Analysis, Design, Development, Implementation, and Evaluation) teaching method. Zhao et al. [15] developed a virtual simulation cloud system for waterway engineering design based on outcome-based education. Cho et al. [16] proposed an immersive virtual reality simulation for environmental education based on a virtual ecosystem model, together with two applications of the simulation. Jayasundera et al. [17] showed that virtual reality simulation (VRS) supports students' transition back to patient care by increasing post-intervention confidence in clinical decision-making, management, and patient communication. Still other researchers have analyzed and summarized the effectiveness of virtual simulation technology in teaching. Fernández et al. [18] developed a platform and protocol (LICIEXAM) for in-advance seat reservation, simultaneous online and in-person class attendance, and examination tools and strategies, with special emphasis on avoiding online cheating. Cerra et al. [19] showed the advantages of integrating interactive self-assessment tools into CAD learning methodologies such as problem-based learning (PBL). Vergara et al. [20] carried out quantitative research on player profiles and which profile is considered best for learning, identified the sociological and academic factors that influence profile choices, and designed a survey for descriptive and inferential analysis of the answers given by a population of 532 engineering professors. Xie et al. [21] constructed a conceptual model and a structural equation model (SEM) to analyze the effect of virtual simulation experiment teaching. Davis et al. [22] examined a model of the impact of perceptual variables on instructional effectiveness that can enhance teaching efficacy and outcome expectancy when preservice teachers practice in a virtual classroom simulation. Ke et al. [23] found that VR simulation promoted lab-teaching knowledge development better than live simulation, whereas the latter better fostered class-teaching knowledge development. Li et al. [24] discussed the efficacy of a virtual simulation system in improving the critical thinking of engineering students engaged in NC (numerical control) machining. To address poor user experience and the lack of navigational guidance in virtual simulation pedagogy, Dong et al. [25] proposed a scheme for an intelligent navigational chemistry laboratory based on multimodal fusion. Khalilia et al. [26] concluded that the learning process is supported by implementing and studying modern learning strategies and activities for teaching crime scene investigation with virtual reality technology. An improved teaching mechanism empowered by edge-computing-driven VR has been proposed to enhance the education experience and improve the teaching environment [27]. Lu et al. [28] built an experimental teaching platform for vital pulpotomy using virtual simulation technology, with learning and examination modes, which effectively improves the teaching of pulpotomy. Zulfiqar et al. [29] discussed the concept and types of AR and the need for AR applications in education, and analyzed state-of-the-art AR applications in terms of platform, augmented virtual content, interaction, usability, usefulness, performance, effectiveness, and ease of use under a single taxonomy, giving a comprehensive summary of AR technology in education and teaching. Similarly, Chen et al. [30] reviewed artificial intelligence (AI) in education and assessed its impact. In the same vein, some researchers have conducted in-depth studies of assembly-operation environment simulation using virtual reality, augmented reality, and other technologies, applying them to training, workshop analysis, and design scenarios with good results. Dimitropoulos et al. [31] proposed a flexible framework providing the tools and functionality for non-expert users to create 1:1 replicas of industrial shop floors quickly. Apostolopoulos et al. [32] presented a novel operator training framework based on augmented reality (AR) technology. Michalos et al. [33] proposed a method to analyze and enhance industrial workplaces using immersive virtual reality. Aivaliotis et al. [34] and Kousi et al. [35] presented an augmented reality software suite supporting the operator's interaction with flexible mobile robot workers. Other scholars have applied digital twin technology to robotic assembly lines [36,37,38]. The virtual simulation practice teaching platforms above span many disciplines, most of them in electronic information and mechanical engineering; we categorize each platform in Table 1.

However, there are fewer studies of basic, shared virtual simulation teaching platforms designed for an entire category of majors, such as chemicals and chemical engineering, biomedicine, aerospace, or intelligent manufacturing. To explore new ideas, modes, and mechanisms for virtual-simulation-enabled experimental teaching, to support the digital upgrading of education, to respond to changes in the practical training environment of engineering majors, and to build a new model of practical teaching in science and engineering, this paper studies the construction of a virtual-real practical training platform under a three-body synergistic architecture of cloud computing, edge computing, and mobile terminal technology. For the basic content of intelligent manufacturing majors, virtual simulation technology is used to design modules for three-dimensional virtual wiring, PLC control, and electrical signal monitoring, realizing virtual-real practical teaching of basic electrical, electronic, and PLC control experiments. For higher-order applications, drawing on industry-education integration and school-enterprise cooperation, we develop a virtual simulation teaching module for an automated production line based on actual production and deploy it on the cloud training platform, realizing virtual simulation teaching of higher-order intelligent control experiments that can serve both students and enterprise employees. To address the shortage of experimental equipment and workstations in engineering, the cloud practical training platform is designed to meet students' needs for multi-location, remote, time-unconstrained learning, providing a new way to apply online education to practical courses.

The remainder of this paper is organized as follows. Section "Overall platform design" constructs a "cloud, edge, end" three-body synergistic virtual-real simulation teaching platform comprising the electrician electronics and PLC control virtual simulation, the virtual-real combination of a typical intelligent manufacturing production line, the dual-axis collaborative robotics workstation, digital twin simulation, virtual disassembly of industrial robots, and the magnetic yoke shaft flexible production line virtual simulation modules. Section "Applications and discussion" discusses the practical use of the simulation teaching platform and its teaching effect and evaluation. Section "Conclusions" presents the conclusions and future work. The overall framework is shown in Fig. 1.

Figure 1: Architecture of the virtual simulation teaching platform.

Overall platform design

At present, engineering training courses in colleges and universities still operate in a "teaching-based, teacher-oriented" mode; teachers and experimental equipment are in short supply, and it is difficult for students to enjoy "fair" access to teaching resources. Virtual simulation uses computer-generated virtual systems to mimic real systems, including the three-dimensional spatial environment of human perception (vision, hearing, touch) and high-fidelity real-time interactive response, realizing interaction between the virtual world and the human being that reflects the real world. China's Ministry of Education actively advocates the construction of virtual simulation training environments in colleges and universities and has issued guidelines for building model virtual simulation training bases. The guidelines require that virtual simulation training environments be based on the production environments and equipment of advanced industrial enterprises, absorb new ideas, technologies, techniques, norms, and standards, and match actual vocational situations. Against the dual background of the state calling on local universities to accelerate the construction of virtual simulation training bases and the urgent need to resolve the current shortage of practical training equipment for manufacturing majors in engineering colleges, we take the mechatronics technology, electrical automation technology, and industrial robotics technology majors under the equipment manufacturing catalog published by the Chinese Ministry of Education as an example and propose a virtual simulation laboratory construction program to meet the daily virtual simulation teaching needs of the course system shared by these three majors. The program covers virtual simulation training courses such as electrician and electronic technology, electrical control and PLC, hydraulic and pneumatic technology, industrial robot assembly and debugging, industrial robot on-site programming, and touch-screen human-machine interface technology, and targets first-, second-, and third-year university students of the three majors. Construction of the platform began in 2021 and was completed in 2023, at a total cost of approximately $700,000 USD. The overall flow of this research work is shown in Fig. 2.

Figure 2: Research flowchart of the study.

Accordingly, this study innovates the engineering training teaching mode. Taking intelligent manufacturing majors as an example, it integrates practical training equipment, a cloud platform, and cloud-edge-end technology, using cloud technology to provide functions such as equipment onboarding and communication, data transmission, cloud computing, and remote operation and maintenance. Focusing on the shortcomings of traditional engineering training, it continuously integrates virtual simulation technology into practical teaching and designs a virtual simulation teaching platform with modern engineering training characteristics. Adopting the idea of the "Internet of Everything", it realizes cross-boundary integration of hardware and software and integration across time and space by means of the cloud platform. Students can remotely operate on-campus practical training equipment through the digital campus, observe the training effect in real time, and experiment "on the cloud", as shown in Fig. 3.

Figure 3: Architecture of the "cloud training" platform.

In terms of technological innovation in the design of the virtual simulation teaching platform, this paper makes the following contributions. (1) To overcome virtual simulation teaching that is "virtual" but not "real", we propose a combined virtual-real arrangement of experimental hardware and software: the virtual machine is connected to the experimental hardware, realizing two-way interactive control between the virtual terminal (software) and the hardware terminal (equipment). (2) To break learners' spatial limitations, an industrial Internet intelligent transmission terminal (Flexem Box) transmits the experimental commands, programs, and other data of learners outside the laboratory to the industrial equipment in the remote laboratory, enabling the electrical and electronic experimental projects of the module in Section II. Equally, the Flexem Box transmits data from the relevant equipment, instruments, and video surveillance in the lab back to the cloud data center, letting learners perform remote data monitoring, equipment diagnosis, and similar tasks (a relay sketch of this bridging pattern appears after this paragraph). The Flexem Box is based on an ARM Cortex-A8 processor core and provides 3 Ethernet ports, 3 serial communication ports, and WiFi/GPRS/4G wireless interconnection. It serves as a data bridge between the on-site experimental equipment and the learner's client, opening the way to remote hardware experiments: learners can complete routine electrical and electronic experiments at home, with results that are real and observable (the relevant lab data and the motion of the actuators are viewed through a remote camera). This is a significant innovation of the proposed platform and a breakthrough in its integration technology, offering an effective attempt to overcome the long-standing limitation that hardware experiments can only be carried out on site. (3) The platform promotes a balanced distribution of engineering practical teaching resources, overcomes the limitation of the number of equipment sets in traditional practical teaching, and gives learners more opportunities for operation and for trial and error. Additionally, much practical content in intelligent manufacturing carries safety risks, such as electrician work and machine tool operation; the virtual simulation platform supports design, testing, and simulated operation, helping learners improve practical skills and expand practical knowledge in a safe environment. (4) Through virtual reality technology, the platform can simulate teaching scenes that are real but difficult to operate in practice, such as electrical fault debugging, personalized production line integration, and disassembly of the robot body. This highly realistic virtual teaching environment not only enhances the learner's sense of real experience and enjoyment, but also lets the learner intuitively understand and master complex knowledge points, effectively compensating for classroom content that is difficult to reproduce or disassemble. The platform therefore has a significant competitive advantage in teaching engineering specialties, especially in vocational education scenarios that emphasize the cultivation of practical operation ability. Since the platform is designed around the concept of an intelligent manufacturing professional group, with virtual simulation modules designed for the basic platform courses of each related specialty, it is applicable not only to electromechanical majors but also to the basic experimental teaching of electronic information majors. Compared with a virtual simulation platform for a single course, this multi-specialty platform has clear advantages in the reach of its teaching applications and the coherence of its knowledge system, avoiding duplicated construction investment and enabling cross-specialty, cross-school sharing of practical teaching resources.
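The paper does not specify the Flexem Box's wire protocol, but the pattern it describes (commands in, telemetry out, through a cloud data center) is the classic publish/subscribe bridge. Below is a minimal sketch of that pattern using the paho-mqtt library (1.x callback style); the broker hostname and topic names are hypothetical, not from the paper.

```python
import json
import time

import paho.mqtt.client as mqtt

BROKER = "cloud.example.edu"    # hypothetical cloud data-center broker
CMD_TOPIC = "lab/bench1/cmd"    # commands from remote learners (hypothetical)
DATA_TOPIC = "lab/bench1/data"  # telemetry streamed back (hypothetical)

def on_connect(client, userdata, flags, rc):  # paho-mqtt 1.x callback signature
    client.subscribe(CMD_TOPIC)

def on_message(client, userdata, msg):
    cmd = json.loads(msg.payload)
    # A real gateway would now write this command to the PLC over its
    # serial or Ethernet port; here we just log it.
    print("forwarding command to local equipment:", cmd)

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_start()  # network I/O runs in a background thread

while True:
    # Stand-in for readings sampled from the bench instruments.
    reading = {"voltage_V": 23.9, "current_A": 1.2, "ts": time.time()}
    client.publish(DATA_TOPIC, json.dumps(reading))
    time.sleep(1.0)
```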

Electrician electronics and PLC control virtual simulation module

This module supports experiments on conventional electrical components and wiring, stepper motors, DC motors, and AC motor control, together with the associated programming and communication experiments. Virtual wiring can be carried out both on a three-dimensional physical circuit and on a two-dimensional schematic circuit, as shown in Fig. 4. The module also provides circuit logic analysis, monitoring current flow and the actions of electrical components in real time. A virtual multimeter can measure voltage and current in both the 3D and 2D scenes to view and detect faults in real time, as shown in Fig. 5. PLC programs can be edited and written in the virtual scene to control the operation of a stepper motor, with the external components performing the corresponding actions as driven by the signals; the operation of the stepper motor, the PLC, and the other components is observed in real time, and faults trigger real-time alarms that indicate the likely fault location, as shown in Fig. 6.

Figure 4: Virtual simulation of electrical wiring.
Figure 5: Measurement with the virtual multimeter.
Figure 6: Virtual simulation of the I/O setting interface in PLC programming.

The main function of the module is to let learners install and adjust basic electrical control circuits and PLC control circuits. As shown in Fig. 4, the learner follows the electrical schematic provided by the system and, in the virtual environment, selects, installs, and wires the corresponding electrical components. The virtual multimeter can then measure resistance, voltage, current, and other basic physical quantities at the relevant nodes of the completed control circuit, simulating the debugging and troubleshooting of electrical circuits. In addition to virtual control circuits based on basic components such as relays, contactors, and travel switches, the module supports virtual debugging of basic control circuits with PLC controllers: the learner connects the PLC to the built-in virtual scene equipment through virtual wiring, as in the material loading control unit experiment of Fig. 6, sets up the PLC's input and output ports, and monitors the real-time signals of those ports. The virtual operation of all functions of this module supports two user interaction methods: mouse + keyboard, and VR glasses + virtual control handle.
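For readers new to PLCs, the behavior this module teaches reduces to the scan cycle: read the input image, evaluate the logic, write the output image, repeat. Here is a toy Python version with a classic start/stop seal-in rung for a motor; the tag names are illustrative, not taken from the platform.

```python
import time

inputs = {"start_pb": False, "stop_pb": False}  # process image of inputs
outputs = {"motor_run": False}                  # process image of outputs

def scan(inp, out):
    """One PLC scan of a classic start/stop seal-in rung:
    motor_run = (start_pb OR motor_run) AND NOT stop_pb"""
    out["motor_run"] = (inp["start_pb"] or out["motor_run"]) and not inp["stop_pb"]

# Simulated operator actions, one per scan (momentary push buttons).
events = [{"start_pb": True}, {}, {}, {"stop_pb": True}, {}]
for ev in events:
    inputs = {"start_pb": False, "stop_pb": False}
    inputs.update(ev)
    scan(inputs, outputs)
    print(inputs, "->", outputs)
    time.sleep(0.01)  # stand-in for a ~10 ms scan period
```

Note how the motor stays latched on after the start button is released (the seal-in), and how the stop button always wins; this is exactly the kind of behavior the virtual scene lets students observe on the animated stepper motor.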

Virtual-real module of intelligent manufacturing typical production line

This module integrates mechanical, pneumatic, electrical control, motor drive, sensor detection, PLC, and industrial network control technology into a virtual-real practical training platform for the typical application scenarios of an automated production line. The platform consists of a loading conveyor, a handling robot, a stacking robot, a three-dimensional warehouse, and other mechanical devices, and can serve the practical training and teaching needs of courses such as automated production lines and mechatronic system design. The mechanical structure and controls of the training bench adopt unified standard interfaces with high compatibility and expandability. It can be used to teach the installation and debugging of mechanical components and pneumatic systems, the installation of electrical control circuits, and PLC programming, and suits both single-skill training and comprehensive project training for intelligent manufacturing majors, as shown in Figs. 7, 8, 9, 10 and 11.

Figure 7: Experimental bench of a typical production line.
Figure 8: Virtual production line status monitoring (input signals).
Figure 9: Virtual production line status monitoring (I/O signals).
Figure 10: Converter setting interface.
Figure 11: Panoramic view of the virtual production line.

The internal scenes of the simulation module, such as the loading conveyor line, handling manipulator, stacking manipulator, and three-dimensional warehouse, are built to scale from the real components, and all equipment physically simulates the operating mode of its real counterpart. The production line offers several operating modes, including "physical control mode", "virtual control mode", and "view status monitoring", realizing virtual-environment control of virtual objects, real-environment control of virtual objects, real-environment control of physical objects, and virtual-environment control of physical objects: virtual-real interoperability with cross-linked operation.

The main function of this module is to integrate the simulation of units commonly found in typical automated production lines, such as conveyor belts, manipulators, and three-dimensional warehouses. Learners arrange the built-in models (material conveyor, workpiece clamping and handling robot, screw drive module, three-dimensional warehouse, air compressor, PLC controller, frequency converter, electrical control module, equipment racks, and others) in the virtual environment. The virtual wiring function connects the circuits of each unit, the PLC controller assigns the input and output addresses of each unit's controlled objects, and finally the linkage and debugging of the whole production line is controlled through PLC programming. The virtual operation of this module currently supports only mouse + keyboard interaction; a VR headset + digital glove interaction mode will be introduced in the future.

Dual-axis collaborative robotics workstation module

After completing the basic unit knowledge modules above, learners can commission the units online to learn control, programming, assembly, and commissioning techniques for complex systems. We therefore designed a dual-axis collaborative workstation for an intelligent manufacturing line based on digital twin technology, consisting of six execution units covering loading, machining, packaging, testing, and sorting, together with a ring production line system, a vision system, and two 6-axis collaborative robots. The components of the workstation are modular and can be used to teach the installation and debugging of electromechanical equipment, ring automatic control systems, and industrial network control systems, as shown in Fig. 12. This module is a physical workstation; only one set is currently deployed for funding reasons, and it is used in teaching together with the digital twin simulation module of Sect. "Digital twin simulation module".

Figure 12: Real view of the dual-axis collaborative robot workstation.

Digital twin simulation module

Because the dual-axis collaborative workstation is expensive, we designed a simulation module based on digital twin technology so that more students can work with the workstation in a digital environment at lower cost. The module uses the Unity engine and 3ds Max modeling to build a VR three-dimensional engine and an editing and development platform. Scenes can be browsed in VR with spatial positioning support; learners manipulate the controller to operate within the virtual scene, inspect the structure of the equipment, and carry out specific experimental steps in VR, as shown in Fig. 13.

Figure 13: Panoramic view of the digital twin simulation module.

The module can connect to an external real PLC through the built-in communication driver, and can also connect and communicate with a virtual PLC such as PLCSIM, as shown in Fig. 14. It integrates a robot model library covering most robot brands and supports importing 3D model data and adding non-standard components to the component library. Students can build a virtual production line by dragging and dropping with the mouse and assign motion parameters and control instructions to it, better reproducing the commissioning behavior of real equipment.
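The paper does not name the communication driver it uses, but connecting Python-side twin logic to a real Siemens PLC is commonly done with the open-source python-snap7 bindings. The following is a minimal polling sketch under that assumption; the PLC address, rack/slot, and data-block layout are hypothetical.

```python
import time

import snap7
from snap7.util import get_bool

client = snap7.client.Client()
client.connect("192.168.0.10", 0, 1)  # hypothetical PLC IP, rack 0, slot 1

try:
    while True:
        data = client.db_read(1, 0, 1)      # read 1 byte from DB1, offset 0
        conveyor_on = get_bool(data, 0, 0)  # bit 0.0: hypothetical "conveyor" flag
        # A twin would now drive the conveyor animation from this bit.
        print("conveyor running:", conveyor_on)
        time.sleep(0.1)
finally:
    client.disconnect()
```

Against a virtual PLC such as PLCSIM, the same read/write loop applies; only the connection target changes, which is what makes swapping real and virtual controllers cheap in a twin setup.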

Figure 14: Simulation of PLC communication in the digital twin.

This module has the functionality of the module in Sect. "Virtual-real module of intelligent manufacturing typical production line", with the addition of a 6-degree-of-freedom virtual collaborative robot and visual inspection. To fit the site and facilitate teaching, the linear conveyor belt is changed into a ring, and two collaborative robots transfer workpieces between work processes. The collaborative operation of the robots is emphasized to meet learners' training needs for online programming and operation of robots. At present, the virtual operation of this module supports only mouse + keyboard interaction.

Virtual disassembly module for industrial robots

As a typical execution unit of the intelligent manufacturing industry, industrial robots are widely used across equipment manufacturing. Knowledge of the internal mechanical structure of industrial robots and their electrical control systems is therefore an essential professional foundation for students majoring in intelligent manufacturing. However, the industrial robot body is expensive, and frequent, repeated disassembly reduces the robot's positioning accuracy and accelerates equipment wear, so physical robots cannot meet long-term, repetitive daily teaching needs. We therefore use MAYA modeling and the Unity3D runtime platform to design an industrial robot virtual disassembly simulation module based on dynamic simulation, combining 3D virtual reality technology and SteamVR asynchronous projection to realize virtual disassembly of industrial robots, giving the operator immersive roaming and interactive operation in the scene.

Taking a typical industrial robot arranged on campus, the ABB IRB 120, as the object, the core parts of the robot body (upper and lower arms, wrist side cover and housing, connectors, timing belt, harness bracket, reduction gearbox assembly, electrode assembly, joint assembly, base, etc.) and the key parts of the electrical control cabinet (cover plate, fan cover, power distribution unit, computer unit, circuit boards, power supply connector, drive unit, cables, etc.) were measured and modeled. The module covers three teaching demonstration functions, virtual disassembly, installation, and maintenance, using many small tasks for training in the disassembly and installation of each axis and of the complete robot, including bolt disassembly sequence, bolt installation torque, and parts lubrication, as shown in Fig. 15. Additionally, through interactive operation of the handle in the scene, the robot's electrical control cabinet can be disassembled and assembled; the electrical components can be arbitrarily rotated, translated, enlarged, and reduced, and simulated disassembly, installation, and maintenance can be carried out in accordance with the maintenance technical specifications, as shown in Fig. 16. The virtual disassembly operation of this module supports two user interaction modes: mouse + keyboard, and VR glasses + virtual control handle.

Figure 15: Virtual disassembly of the industrial robot body.
Figure 16: Virtual disassembly of the electrical control cabinet of an industrial robot.

Virtual simulation module for magnetic yoke shaft flexible production line

On the basis of the virtual simulation modules above, an enterprise production line is introduced through school-enterprise cooperation, and virtual simulation teaching is carried out with real product cases, connecting students to future jobs in advance and further improving the social adaptability of the platform. Taking the yoke shaft of a miniature special-purpose motor as the production object, a virtual automated production line is established through three-dimensional simulation; the module contains one heat treatment unit, two CNC lathes, one CNC grinder, and two robotic arms. The machining process includes yoke turning, dimensional inspection, yoke shaft finishing, yoke shaft grinding, cleaning and drying, dimensional inspection, and packaging of finished products, as shown in Fig. 17. Simultaneously, the IoT sensor data of the real production line's MES system is associated with the three-dimensional scene and equipment in the module, and real-time data is displayed on the virtual simulation model, realizing three-dimensional visualization of dynamic data and real-time updates of the equipment stations, sensors, and monitored equipment status in the scene (a polling sketch follows Fig. 17). The module lets learners build flexible production lines themselves from a library of typical equipment (CNC lathes, CNC milling machines, CNC grinders, heat treatment machines, industrial robots, conveyor belts, and other units), and also supports importing models in stl, obj, and other formats into the equipment library. Learners can virtually simulate the typical production steps of machining a magnetic yoke shaft, such as CNC roughing, fine turning, fine grinding, workpiece clamping and transfer, cleaning, and packaging. The motion trajectories of the CNC machine tools and industrial robots are driven virtually by entering PLC control commands and NC machining codes; the NC codes are used only for start-stop simulation of the CNC machine tools, while the effects of machining parameter settings and G-code operation are handled in separate, specialized CNC simulation software. Students can use this module for production line construction, learning about the related equipment, and knowledge assessment, meeting the teaching need to comprehensively apply relevant knowledge and skills to design and debug a small flexible production line relatively independently.

Figure 17: Virtual simulation module for the flexible production line of magnetic yoke shafts.
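The MES-to-scene association described above is, at its core, a polling loop that maps live readings onto named scene nodes. The paper does not document the MES interface, so the sketch below assumes a hypothetical HTTP/JSON endpoint and station names.

```python
import time

import requests  # assumes the MES exposes an HTTP/JSON endpoint (hypothetical)

MES_URL = "http://mes.example.local/api/stations"  # hypothetical endpoint

# Named nodes of the 3D scene, mirroring the stations of Fig. 17.
scene_state = {"lathe_1": {}, "lathe_2": {}, "grinder_1": {}, "robot_1": {}}

def update_twin(state):
    readings = requests.get(MES_URL, timeout=2).json()
    for station in readings:  # e.g. {"id": "lathe_1", "status": "running", ...}
        node = state.get(station.get("id"))
        if node is not None:
            node["status"] = station.get("status")
            node["spindle_rpm"] = station.get("spindle_rpm")
    return state

while True:
    update_twin(scene_state)  # a renderer would redraw the scene from this state
    time.sleep(1.0)           # refresh rate of the 3D status display
```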

The main functions of this module include reasonable layout of the flexible production line's unit equipment, virtual simulation of the production line construction steps, and training in the cycle-time control of the machines and robots, focusing on simulating the communication and process coordination among multiple devices. At present, virtual operation training with this module takes place in the professional computer room, where learners carry out virtual simulation operations such as layout, PLC programming, and program import for each piece of equipment through mouse + keyboard interaction.

Applications and discussion

To continuously improve the application of new-generation information technology such as virtual reality and artificial intelligence in practical training teaching, and to deeply integrate information technology with practical training facilities, this work constructs a virtual simulation training platform that is perceptive, immersive, interactive, conceptual, and intelligent; builds a virtual-real training system around it; and configures the corresponding virtual simulation training equipment, effectively resolving the difficulties of practical training teaching.

We innovate on traditional practical training methods with a hybrid approach based mainly on virtual simulation teaching, double verification through virtual-real combination, and a mix of online and offline linked testing and Q&A. The practice places are of three kinds:

  • Pure equipment training room. For example, the flexible production line (hardware) training room mentioned above, whose production line comes from the enterprise's real special-motor-shaft line; because there is only one set of equipment, it is used for trainee observation only.
  • Pure virtual simulation training room (computers). For example, the industrial robot virtual disassembly and flexible production line virtual simulation of Section II. The room consists of computers and the correspondingly customized software; the software is installed on the campus cloud server, with the local computer room acting as a mirror, and students log in and use it through the unified campus server platform, which supports 50 nodes operating simultaneously.
  • Virtual-real combined training room (computers & equipment). The training equipment and the virtual simulation software are arranged in the same place, and students first use the simulation software to rehearse the teaching content virtually. For example, in the experiment of using a handling robot to move specified items into the warehouse, students set the points of the robot's trajectory on the computer and write a simple PLC program to control the robot's grasping, moving, and warehousing, running the virtual simulation in software. On this basis, students connect the computer to the real PLC through a data cable, download the simulation-verified program into the real PLC, and drive the real manipulator to carry the specified items into the warehouse for debugging: a simple teaching case of virtual-real combined training.

All the computers in the virtual simulation training rooms run Windows 10 on an Intel Core i7-10700 processor (8 cores, 16 threads, 2.90 GHz base frequency) with 16 GB of RAM and an NVIDIA GeForce GTX 1650 graphics card. The simulation software is a secondary development customized to the training equipment; its PLC library supports the Siemens S7-200, S7-1200, S7-300, S7-400, and S7-1500 series and the Mitsubishi FR-FX2N, FR-FX3U, and FR-FX2NC series, and does not support adding other domestic PLC models. The hardware PLCs are the Siemens 1214C and Mitsubishi FR-FX3U series, supporting ladder diagram and SFC as programming languages; the other hardware uses standard integrated parts.

Students gave their informed consent to the processing of their data. The study was approved by the Yiwu Industrial and Commercial College and was in accordance with the principles of the Declaration of Helsinki.

Teaching application of simulation platform

Innovations in a virtual simulation teaching platform must be applied to teaching practice as an important instrument and impetus for promoting online education, alleviating the current shortage of practical teaching equipment, and increasing students' hands-on opportunities. Accordingly, we carried out teaching practice based on the virtual simulation platform for the intelligent manufacturing majors of our college (electrical automation technology, mechatronics technology, and industrial robotics technology). Two parallel classes (Class A and Class B) of different grades in each major were randomly selected as practice subjects. Class A of each major (Automation Class A, Mechatronics Class A, Robotics Class A) formed the first group of learning samples and used the virtual simulation teaching platform established in this paper for daily teaching; Class B (Automation Class B, Mechatronics Class B, Robotics Class B) formed the second group, keeping the original teaching mode in which batches of multiple students share the same set of real equipment for teaching and training. We then analyzed the data on mastery of the various core knowledge points.

To reflect the differences between the two teaching modes more comprehensively and objectively, we designed a test bank for different levels of mastery of the core knowledge points of the different majors (covering both theoretical and practical knowledge). Taking industrial robotics technology as an example, the test bank covers 6 knowledge modules: basics of industrial robotics (theory), installation and commissioning of common electrical wiring (practice), installation and commissioning of common PLC control circuits (practice), offline programming and simulation of industrial robot systems (practice), industrial robot assembly and overhaul (practice), and industrial robot application system integration (practice). Each module consists of 10 theoretical or practical questions worth 10 points each; the composition of the questions for the industrial robot application system integration module is shown in Table 2. In addition, learners' subjective impressions were surveyed through a feedback questionnaire on the student experience, shown in Table 3; the four options for questions 4-15 in Table 3 are scored 5, 3, 2, and 1.
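For concreteness, the grading scheme reduces to simple arithmetic: six modules of ten 10-point questions give a 600-point knowledge score, and questionnaire items 4-15 map their four options onto 5, 3, 2, and 1 points. A small sketch with made-up sample data (the option-to-score mapping shown is illustrative; the paper does not state which option earns which score):

```python
# Six knowledge modules, ten questions each, 10 points per question (600 max).
module_marks = {
    "robot_basics":      [10, 10, 8, 10, 6, 10, 10, 8, 10, 10],
    "electrical_wiring": [10, 8, 10, 10, 10, 6, 10, 10, 8, 10],
    # ...remaining four modules omitted for brevity
}
knowledge_score = sum(sum(marks) for marks in module_marks.values())

# Questionnaire items 4-15: four options scored 5, 3, 2, 1 (mapping illustrative).
OPTION_POINTS = {"A": 5, "B": 3, "C": 2, "D": 1}
answers = ["A", "A", "B", "A", "C", "A", "B", "A", "A", "D", "B", "A"]
questionnaire_score = sum(OPTION_POINTS[a] for a in answers)

print("knowledge:", knowledge_score, "questionnaire:", questionnaire_score)
```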

Discussion and evaluation of teaching effectiveness

According to the teaching effect comparison scheme, the two groups' mastery of professional core knowledge is quantitatively compared through grades, and the actual effect of the two teaching modes is evaluated along several dimensions: grades in the core courses of each major over one academic year, the proportion of each class participating in college vocational skills competitions and innovation activities and the number of awards, and the certificate rate of each class in vocational qualification exams. As shown in Fig. 18, the overall learning effect of the first group is significantly better than that of the second group across majors, and the first group is more outstanding in professional core competence, enthusiasm for participation, and innovation and practice. Comparing the six samples of the three majors on their professional core course scores (Electrical Automation: PLC Motion Control Technology (Course 1), Industrial Robot Field Programming (Course 2); Industrial Robotics: Industrial Robot Assembly and Commissioning (Course 3), Touch Screen Human-Machine Interface Technology (Course 4), Industrial Robot Application System Integration (Course 5); Mechatronics: PLC Application Technology (Course 6), Hydraulics and Pneumatics Technology (Course 7)) and the related academic indices, the one-year average core course scores of Automation A, Mechatronics A, and Robotics A in the first group exceed those of Automation B, Mechatronics B, and Robotics B in the second group by 11.5, 9.5, and 12.6 points, respectively. In addition, the proportions of the first group's three Class A samples participating in competitions and innovation activities, winning prizes, and obtaining vocational qualification certificates were significantly higher than those of the second group's three Class B samples. As Fig. 19 shows for the industrial robotics major, the four indices of the first group's sample exceed those of the second group's by 37%, 36%, 27%, and 22%, respectively.

Figure 18: Comparison of core curriculum grades for the two sample groups.
Figure 19: Comparison of the performance of the industrial robotics major on other academic indicators.

Consequently, the application of this platform shows that the virtual simulation teaching platform for intelligent manufacturing established in this paper has a substantial effect on practical engineering teaching: it significantly eases the shortage of experimental equipment in institutions, increases the average number of hands-on opportunities, effectively stimulates students' interest in learning, and improves students' overall academic level. Building a professional virtual simulation teaching platform helps institutions use new-generation information technology such as virtual reality and digital twins to develop resources, upgrade equipment, build courses, and form teams; it innovates the traditional practical teaching mode, effectively serves professional practical training and social training, and strengthens local institutions' capacity to cultivate versatile intelligent manufacturing talent.

Nevertheless, due to limited construction sites and funds, the program proposed in this paper does not cover all related majors under the equipment manufacturing category, and virtual simulation modules for some core courses, such as sensors and intelligent detection technology, are not yet included. Additionally, this work takes only three intelligent manufacturing majors as teaching practice samples, so the sample size and the representativeness of the sample group are somewhat limited, which constrains the universality and broad promotion of this study. With the expansion of our college's experimental sites and the construction of the new campus, we plan to build a larger modern engineering practice center there, striving to establish it as a national virtual simulation training teaching base; the second phase of the virtual simulation project will cover intelligent sensors, intelligent detection, robotic welding, multi-axis complex machining, collaborative robots, opto-mechatronic integration, and more. On this basis, we will join hands with more similar institutions in the region, in the form of a new engineering education alliance, to jointly carry out applied research on practical teaching reform for students majoring in equipment manufacturing and in electronics and information technology at more universities.

The design ideas proposed in this paper have positive significance for improving traditional engineering practice teaching; in particular, the logic of constructing a shared virtual simulation teaching platform for a major category or professional group has clear advantages for perfecting the curriculum system, enriching teaching methods, and reducing duplicated investment. This MOOC-like open practice pedagogy and its virtual simulation resources give educators and educational institutions practical construction ideas and programs that help break down the barriers to engineering practice teaching. Engineering practice teaching equipment is generally costly, and virtual simulation teaching directly addresses this dilemma; our proposal to build shared virtual simulation platforms across similar majors offers educational peers a sound teaching reform proposal and a practical attempt at it.

Conclusions

This study addresses the real problems of "high investment, high loss, high risk" and "difficult to implement, difficult to observe, difficult to reproduce" in practical training for intelligent manufacturing majors. Combined with the actual situation of our institution, a virtual simulation teaching platform for intelligent manufacturing majors was designed, constructed, and applied to teaching practice. Teaching practice shows that the platform stimulates students' interest in learning, enhances their skill level, and matches industrial development needs, and that a virtual simulation teaching platform developed under the school-enterprise cooperation mode is closer to market demand and more worthy of reference. The study makes the following contributions and innovative practices.

We designed virtual simulation teaching modules for the basic knowledge of intelligent manufacturing majors (such as electrical technology, electrical control, and PLC). Through virtual simulation of basic electronics and PLC experiments, the platform realizes simulated measurement of electrical signals, virtual wiring, and debugging simulation of PLC control units, serving the basic experimental teaching of intelligent manufacturing specialties.

For the core knowledge of intelligent manufacturing, we designed a virtual-real practical training module around a typical automated production line. By modeling objects such as mechanical transmissions, PLCs, inverters, touch screens, stepper systems, servo systems, and sensor systems, the module virtually simulates and monitors production-chain actions such as material conveying, sorting, handling, and palletizing. Using combined virtual and real control methods to realize two-way interoperability between the virtual simulation system and the physical experimental platform not only increases students' practical opportunities and the teaching effect, but also enhances the experimental experience and its authenticity.

For the application field of industrial robots, we designed the industrial robot virtual disassembly module and the dual-axis collaborative robot workstation module, building virtual simulation modules of industrial robot mechanical transmission, electrical control, and other components through modeling of the core carrier of intelligent manufacturing, the industrial robot. On this basis, we built a collaborative robot workstation consisting of six units (loading, processing, packaging, testing, sorting, etc.) and used digital twin technology to establish its corresponding digital twin training module, achieving the experimental effect of "virtual-real synchronization" between the physical workstation and the digital prototype.

For typical production scenarios in intelligent manufacturing, a virtual simulation system for the flexible production line of the magnetic yoke shaft of miniature special-purpose motors was established through school-enterprise cooperation. It provides a good on-campus teaching environment for students to apply professional knowledge to practical production problems and to perceive the complete process chain of a manufacturing workshop, and it fully demonstrates the great advantage of the virtual simulation teaching platform in solving the "difficult to implement, difficult to observe, difficult to reproduce" problems of production practical training.

This work proposes a design scheme for a virtual simulation teaching platform for intelligent manufacturing and reports the construction of the corresponding laboratories and the accompanying teaching reform at the authors' institution. Work on teaching standards for virtual simulation courses in intelligent manufacturing programs is, however, still lacking. Future research will therefore focus on expanding the national virtual simulation training base and integrating the virtual simulation teaching resources of regional institutions within an intelligent manufacturing professional group, so that multiple institutions can jointly develop and promote a virtual simulation teaching standard for intelligent manufacturing. In parallel, a broader evaluation of learner experience will be conducted with a standardized questionnaire across multiple universities to obtain comparable results. This is also an opportunity to respond to the times, iterate on new technologies, and bring new ideas, concepts, and advances from industry and education into the development of higher education.

Data availability

The data used to support the findings of this study are available from the corresponding author upon request.


Acknowledgements

This study was supported by the Special Project of Scientific Research and Development Center of Higher Education Institutions, Ministry of Education of the People's Republic of China (Grant No. ZJXF2022126), the new product pilot program projects of Zhejiang Province in 2022 (the second batch), China (Grant No. 2022D60SA7020371), the public Welfare Science and Technology Research Project of Jinhua, Zhejiang Province, China (Grant No. 2022-4-302), and Ywicc Special Interest Group on Machine Vision (YSIGMV).


Author information

Authors and affiliations

Xingzhi College, Zhejiang Normal University, Lanxi, 321100, China

Pengfei Zheng

School of Mechanical Information, Yiwu Industrial & Commercial College, Yiwu, 322000, China

Pengfei Zheng, Jingjing Lou & Bo Wang

School of Engineering, Huzhou University, Huzhou, 313000, China

Junkai Yang


Contributions

P.Z. led the design of the overall architecture and the implementation of the virtual simulation teaching platform and wrote the main manuscript text and later revisions. J.Y. designed the functions and the hardware integration scheme of the virtual-real teaching module. J.L. formulated the teaching application program, designed the teaching evaluation questionnaire, and prepared the figures. B.W. and J.Y. surveyed institutions and peers and analyzed the related data on virtual simulation teaching platforms. All authors participated in the construction and implementation of the platform and reviewed and approved the manuscript.

Corresponding authors

Correspondence to Pengfei Zheng or Junkai Yang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Zheng, P., Yang, J., Lou, J. et al. Design and application of virtual simulation teaching platform for intelligent manufacturing. Sci Rep 14, 12895 (2024). https://doi.org/10.1038/s41598-024-62072-5


Received: 27 January 2024

Accepted: 13 May 2024

Published: 05 June 2024

DOI: https://doi.org/10.1038/s41598-024-62072-5


Keywords

  • Online education
  • Computer technology
  • Intelligent manufacturing
  • Virtual simulation
  • Virtual-reality combination
  • Teaching platform
