Robotic Manipulation
Perception, Planning, and Control
Russ Tedrake
Note: These are working notes used for a course being taught at MIT. They will be updated throughout the Fall 2023 semester. Lecture videos are available on YouTube.
PDF version of the notes.
You can also download a PDF version of these notes (updated much less frequently) from here.
The PDF version of these notes is autogenerated from the HTML version. There are a few conversion/formatting artifacts that are easy to fix (please feel free to point them out). But there are also interactive elements in the HTML version that are not easy to put into the PDF. When possible, I try to provide a link. In any case, I consider the online HTML version to be the main version.
Table of Contents
- Chapter 1: Introduction
- Manipulation is more than pick-and-place
- Open-world manipulation
- These notes are interactive
- Model-based design and analysis
- Organization of these notes
- Chapter 2: Let's get you a robot
- Robot description files
- Position-controlled robots
- Position Control.
- An aside: link dynamics with a transmission.
- Torque-controlled robots
- A proliferation of hardware
- Simulating the Kuka iiwa
- Dexterous hands
- Simple grippers
- Soft/underactuated hands
- Other end effectors
- If you haven't seen it...
- Putting it all together
- HardwareStation
- HardwareStationInterface
- HardwareStation stand-alone simulation
- More HardwareStation examples
- Chapter 3: Basic Pick and Place
- Monogram Notation
- Pick and place via spatial transforms
- Spatial Algebra
- Representations for 3D rotation
- Forward kinematics
- The kinematic tree
- Forward kinematics for pick and place
- Differential kinematics (Jacobians)
- Differential inverse kinematics
- The Jacobian pseudo-inverse
- Invertibility of the Jacobian
- Defining the grasp and pre-grasp poses
- A pick and place trajectory
- Differential inverse kinematics with constraints
- Pseudo-inverse as an optimization
- Adding velocity constraints
- Adding position and acceleration constraints
- Joint centering
- Tracking a desired pose
- Alternative formulations
- Chapter 4: Geometric Pose Estimation
- Cameras and depth sensors
- Depth sensors
- Representations for geometry
- Point cloud registration with known correspondences
- Iterative Closest Point (ICP)
- Dealing with partial views and outliers
- Detecting outliers
- Point cloud segmentation
- Generalizing correspondence
- Soft correspondences
- Nonlinear optimization
- Precomputing distance functions
- Global optimization
- Non-penetration and "free-space" constraints
- Free space constraints as non-penetration constraints
- Looking ahead
- Chapter 5: Bin Picking
- Generating random cluttered scenes
- Falling things
- Static equilibrium with frictional contact
- Spatial force
- Collision geometry
- Contact forces between bodies in collision
- The Contact Frame
- The (Coulomb) Friction Cone
- Static equilibrium as an optimization problem
- Contact simulation
- Model-based grasp selection
- The contact wrench cone
- Colinear antipodal grasps
- Grasp selection from point clouds
- Point cloud pre-processing
- Estimating normals and local curvature
- Evaluating a candidate grasp
- Generating grasp candidates
- The corner cases
- Programming the Task Level
- State Machines and Behavior Trees
- Task planning
- Large Language Models
- A simple state machine for "clutter clearing"
- Chapter 6: Motion Planning
- Inverse Kinematics
- From end-effector pose to joint angles
- IK as constrained optimization
- Global inverse kinematics
- Inverse kinematics vs differential inverse kinematics
- Grasp planning using inverse kinematics
- Kinematic trajectory optimization
- Trajectory parameterizations
- Optimization algorithms
- Sampling-based motion planning
- Rapidly-exploring random trees (RRT)
- The Probabilistic Roadmap (PRM)
- Post-processing
- Sampling-based planning in practice
- Graphs of Convex Sets (GCS)
- Graphs of Convex Sets
- GCS (Kinematic) Trajectory Optimization
- Convex decomposition of (collision-free) configuration space
- Variations and Extensions
- Time-optimal path parameterizations
- Chapter 7: Mobile Manipulation
- A New Cast of Characters
- What's different about perception?
- Partial views / active perception
- Unknown (potentially dynamic) environments
- Robot state estimation
- What's different about motion planning?
- Wheeled robots
- Holonomic drives
- Nonholonomic drives
- Legged robots
- What's different about simulation?
- Mapping (in addition to localization)
- Identifying traversable terrain
- Chapter 8: Manipulator Control
- The Manipulator-Control Toolbox
- Assume your robot is a point mass
- Trajectory tracking
- (Direct) force control
- Indirect force control
- Hybrid position/force control
- The general case (using the manipulator equations)
- Joint stiffness control
- Cartesian stiffness control
- Some implementation details on the iiwa
- Peg in hole
- Chapter 9: Object Detection and Segmentation
- Getting to big data
- Crowd-sourced annotation datasets
- Segmenting new classes via fine tuning
- Annotation tools for manipulation
- Synthetic datasets
- Self-supervised learning
- Even bigger datasets
- Object detection and segmentation
- Pretraining with self-supervised learning
- Leveraging large-scale models
- Chapter 10: Deep Perception for Manipulation
- Pose estimation
- Pose representation
- Loss functions
- Pose estimation benchmarks
- Limitations
- Grasp selection
- (Semantic) Keypoints
- Dense Correspondences
- Task-level state
- Other perceptual tasks / representations
- Chapter 11: Reinforcement Learning
- RL Software
- Policy-gradient methods
- Black-box optimization
- Stochastic optimal control
- Using gradients of the policy, but not the environment
- REINFORCE, PPO, TRPO
- Control for manipulation should be easy
- Value-based methods
- Model-based RL
- Chapter 12: Soft Robots and Tactile Sensing
- Soft robot hardware
- Soft-body simulation
- Tactile sensing
- What information do we want/need?
- Visuotactile sensing
- Whole-body sensing
- Simulating tactile sensors
- Perception with tactile sensors
- Control with tactile sensors
- Appendix A: Spatial Algebra
- Position, Rotation, and Pose
- Spatial velocity
- Appendix B: Drake
- Online Jupyter Notebooks
- Running on Deepnote
- Enabling licensed solvers
- Running on your own machine
- Getting help
- Appendix C: DrakeGym Environments
- Appendix D: Setting up your own "Manipulation Station"
- Message Passing
- Kuka LBR iiwa + Schunk WSG Gripper
- Franka Panda
- Intel Realsense D415 Depth Cameras
- Appendix E: Miscellaneous
- How to cite these notes
- Annotation tool etiquette
- Some great final projects
- Please give me feedback!
You can find documentation for the source code supporting these notes here.
I've always loved robots, but it's only relatively recently that I've turned my attention to robotic manipulation. I particularly like the challenge of building robots that can master physics to achieve human/animal-like dexterity and agility. It was passive dynamic walkers and the beautiful analysis that accompanies them that first helped cement this centrality of dynamics in my view of the world and my approach to robotics. From there I became fascinated with (experimental) fluid dynamics, and the idea that birds with articulated wings actually "manipulate" the air to achieve incredible efficiency and agility. Humanoid robots and fast-flying aerial vehicles in clutter forced me to start thinking more deeply about the role of perception in dynamics and control. Now I believe that this interplay between perception and dynamics is truly fundamental, and I am passionate about the observation that relatively "simple" problems in manipulation (how do I button up my dress shirt?) expose the problem beautifully.
My approach to programming robots has always been very computational/algorithmic. I started out using tools primarily from machine learning (especially reinforcement learning) to develop the control systems for simple walking machines; but as the robots and tasks got more complex I turned to more sophisticated tools from model-based planning and optimization-based control. In my view, no other discipline has thought so deeply about dynamics as has control theory, and the algorithmic efficiency and guaranteed performance/robustness that can be obtained by the best model-based control algorithms far surpasses what we can do today with learning control. Unfortunately, the mathematical maturity of controls-related research has also led the field to be relatively conservative in their assumptions and problem formulations; the requirements for robotic manipulation break these assumptions. For example, robust control typically assumes dynamics that are (nearly) smooth and uncertainty that can be represented by simple distributions or simple sets; but in robotic manipulation, we must deal with the non-smooth mechanics of contact and uncertainty that comes from varied lighting conditions, and different numbers of objects with unknown geometry and dynamics. In practice, no state-of-the-art robotic manipulation system to date (that I know of) uses rigorous control theory to design even the low-level feedback that determines when a robot makes and breaks contact with the objects it is manipulating. An explicit goal of these notes is to try to change that.
In the past few years, deep learning has had an unquestionable impact on robotic perception, unblocking some of the most daunting challenges in performing manipulation outside of a laboratory or factory environment. We will discuss relevant tools from deep learning for object recognition, segmentation, pose/keypoint estimation, shape completion, etc. Now relatively old approaches to learning control are also enjoying an incredible surge in popularity, fueled in part by massive computing power and increasingly available robot hardware and simulators. Unlike learning for perception, learning control algorithms are still far from a technology, with some of the most impressive looking results still being hard to understand and to reproduce. But the recent work in this area has unquestionably highlighted the pitfalls of the conservatism taken by the controls community. Learning researchers are boldly formulating much more aggressive and exciting problems for robotic manipulation than we have seen before -- in many cases we are realizing that some manipulation tasks are actually quite easy, but in other cases we are finding problems that are still fundamentally hard.
Finally, it feels that the time is ripe for robotic manipulation to have a real and dramatic impact in the world, in fields from logistics to home robots. Over the last few years, we've seen UAVs/drones transition from academic curiosities into consumer products. Even more recently, autonomous driving has transitioned from academic research to industry, at least in terms of dollars invested. Manipulation feels like the next big thing that will make the leap from robotic research to practice. It's still a bit risky for a venture capitalist to invest in, but nobody doubts the size of the market once we have the technology. How lucky are we to potentially be able to play a role in that transition?
So this is where the notes begin... we are at an incredible crossroads between learning and control and robotics with an opportunity to have immediate impact in industrial and consumer applications and potentially even to forge entirely new eras for systems theory and controls. I'm just trying to hold on and to enjoy the ride.
A manipulation toolbox
Another explicit goal of these lecture notes is to provide high-quality implementations of the most useful tools in a manipulation scientist's toolbox. When I am forced to choose between mathematical clarity and runtime performance, the clear formulation is always my first priority; I will try to include a performant formulation too, if possible, or give pointers to alternatives. Manipulation research is moving quickly, and I aim to evolve these notes to keep pace. I hope that the software components provided in Drake and in these notes can be directly useful to you in your own work.
If you would like to replicate any or all of the hardware that we use for these notes, you can find information and instructions in the appendix.
As you use the code, please consider contributing back (especially to the mature code in Drake). Even questions and bug reports can be important contributions. If you have questions or find issues with these notes, please submit them here.
© Russ Tedrake, 2020-2023
Robot Simulation
Author robot scenarios and incorporate sensor models to test autonomous robot algorithms in simulated environments. Validate your robot models in virtual simulation environments by co-simulating with Gazebo, Unreal Engine®, and Simulink® 3D Animation™.
- Cuboid Scenario Simulation Scenarios with static meshes, robot platforms, sensors
- High-Fidelity Simulation Author scenes with realistic graphics, generate high-fidelity sensor data
- Gazebo Co-Simulation High fidelity simulation using co-simulation
- Bin-Picking Simulation Manipulator pick-and-place and bin-picking simulations
- Warehouse Robot Simulation Multi-robot management, obstacle avoidance, inventory management
Featured Examples
![Gazebo Simulation of Semi-Structured Intelligent Bin Picking for UR5e Using YOLO and PCA-Based Object Detection](https://www.mathworks.com/help/examples/urseries/win64/GazeboSimulationSemiStructuredIntelligentBinPickingUR5eExample_01.png)
Gazebo Simulation of Semi-Structured Intelligent Bin Picking for UR5e Using YOLO and PCA-Based Object Detection
Detailed workflow for simulating intelligent bin picking using Universal Robots UR5e cobot in Gazebo. The MATLAB project provided with this example consists of the Initialize, DataGeneration, Perception, Motion Planning, and Integration modules (project folders) to create a complete bin picking workflow.
![Automate Virtual Assembly Line with Two Robotic Workcells](https://www.mathworks.com/help/examples/shared_sl3d_robotics/win64/AutomationOfVirtualAssemblyLineExample_01.png)
Automate Virtual Assembly Line with Two Robotic Workcells
Simulation of an automated assembly line to demonstrate virtual commissioning applications. The assembly line is based on a modular industrial framework created by ITQ GmbH known as Smart4i. The system consists of four components: two robotic workcells connected by a shuttle track and a conveyor belt. One of the two robots places cups onto the shuttle, while the other robot places balls in the cups. A slider then delivers those cups to a container. This simulation uses Stateflow® to implement the system control and demonstrates how you can use Unreal Engine™ to simulate a complete virtual commissioning application in Simulink®. For an example showing how to deploy the main logic in Stateflow using Simulink PLC Coder™, see Generate Structured Text Code for Shuttle and Robot Control.
![Perform Path Planning Simulation with Mobile Robot](https://www.mathworks.com/help/examples/robotics/win64/PerformPathPlanningSimulationWithMobileRobotExample_04.png)
Perform Path Planning Simulation with Mobile Robot
Create a scenario to simulate a mobile robot navigating a room. The example demonstrates how to create a scenario, model a robot platform from a rigid body tree object, obtain a binary occupancy grid map from the scenario, and plan a path for the mobile robot to follow using the mobileRobotPRM path planning algorithm.
![Control and Simulate Multiple Warehouse Robots](https://www.mathworks.com/help/examples/robotics/win64/ControlAndSimulateMultipleWarehouseRobotsExample_16.png)
Control and Simulate Multiple Warehouse Robots
Control and simulate multiple robots working in a warehouse facility or distribution center.
From Robot Simulation to the Real World
Louise Poubel overviews Gazebo's architecture with examples of projects using Gazebo, describing how to bridge virtual robots to their physical counterparts.
Louise Poubel is a software engineer at Open Robotics working on free and open source tools for robotics, like the robot simulator Gazebo and the Robot Operating System (ROS).
About the conference
QCon.ai is a practical AI and machine learning conference bringing together software teams working on all aspects of AI and machine learning.
Poubel: Let's get started. About six years ago, there was this huge robotics competition going on. The stakes were really high, the prizes were in the millions of dollars, and robots had to do tasks like this: driving vehicles in a disaster scenario, handling the tools a person would handle in that kind of scenario, and traversing some tough terrain. The same robot had to do these tasks one after the other, in sequence. There were teams from all around the world competing and, as you can imagine, those were really hard tasks at the time. Those pictures are from the finals in 2015, and they're still tough tasks for robots to do today.
The competition didn't start right away with the physical robots, "Let's do it with the physical robots". It actually had a first phase inside simulation. The robots had to do the same things in simulation: drive a little vehicle inside the simulator, handle tools inside the simulator just as they would later in the physical competition, and traverse some tough terrain. The competition was structured so that the teams that did best in the simulated phase would be granted a physical robot to compete with in the physical competition later. That way, teams that couldn't afford their own physical robot, or that didn't have a mechanical design of their own, could use the robot they received from the competition.
You can imagine the stakes were really high; these robots cost millions of dollars, and the simulation phase, which started in 2013, was fiercely competitive as well. Teams were being very creative about how they solved things inside the simulation, and some teams had very interesting solutions to some of the problems. You can see that this is a very creative solution; it works, and it got the team qualified, but there is one very important little detail: you can't do that with the physical robot. Its arms are not strong enough to withstand the robot's weight like that, and the hands are actually very delicate, so you can't be banging them on the floor like this.
You would never try to do this with the physical robot, but they did it in simulation and they qualified to compete later on with the physical robot. And it's not like they didn't know. It's not like they tried this with the real robot and broke a million-dollar machine. They knew that there is a gap between the reality of the simulation and the reality of the physical world, and there always will be.
Today, I'll be talking to you a little bit about this process of going from simulation to real life: to the real robot, in its environment, interacting with the physical world. I'll cover some of the things we have to be aware of when making this transition, when we train things in simulation and then put the same code on the real robot. We have to be aware of the compromises made in the simulation, and of the simplifying assumptions that went into designing it.
I'll be talking about this in the context of a simulator called Gazebo, which is what I'm running this presentation in right now. It's a simulator that has been around for over 15 years; it's open source and free, and people have been using it for a variety of different use cases all around the world. The reason I'm focusing on Gazebo is that I'm one of the core developers, and have been for the past five years; I'm a software engineer at Open Robotics. This picture here is from my master's thesis, back when I still dealt with physical robots, not just robots that are virtual inside the screen. Later on, I'll also talk a little about my experience from that work, where I used simulation and then went to the physical robot.
At Open Robotics, we work on open source software for robots; Gazebo is one of the projects. Another project we have, which some people here may have heard of, is ROS, the Robot Operating System, and I'll mention it a little later as well. We are a team of around 30 people all around the world; I'm here in California, in the headquarters. All of us are split between Gazebo, ROS, and some other projects, and everything we do is free and open source.
Why Use Simulation?
For people here who are not familiar with robotics simulation, you may be wondering: why would you even use simulation? Why not do your whole development directly on the physical robot, since that's the final goal, controlling that physical robot? There are many different reasons; I selected a few that I think are important for a crowd interested in AI. The first important reason is that you get very fast iterations when dealing with a simulation that is always inside your computer.
Imagine you're dealing with a drone that is flying one kilometer away inside a farm, and every time you change one line of code you have to fly the drone; the drone falls, you have to run and pick it up, fix it, and put it up to fly again. That doesn't scale. Everybody who's a software engineer knows that you don't get things right the first time; you keep trying and keep tweaking your code. With simulation, you can iterate much more quickly than you can on a physical robot.
You can also spare the hardware: hardware can be very expensive, and mistakes can be very expensive too. If you have a one-million-dollar robot, you don't want to be wearing out its parts, and you don't want to risk it falling and breaking parts all the time. In simulation, the robots are free; you just reset, and the robot is back in one piece. There is also safety: if you're developing on a physical robot and you're not yet sure exactly what it's going to do, you're in danger, depending on the size of the robot, what it is doing, and how it is moving in the environment. It's much safer to do the risky things in simulation first, and then go to the physical robot.
Related to all of this is scalability. In simulation, it's free: you can have 1,000 simulations running in parallel, while having 1,000 robots training and doing things in parallel costs much more money. Your whole team might share a single robot, and if all the developers are trying to use the same robot, they're not going to move as fast as if each were working in a separate simulation.
When Simulation is Being Used
When are people using simulation? The part most people here are probably interested in is machine learning training. For training, you usually need thousands or millions of repetitions for your robot to learn how to perform a task, and you don't want to do that on real hardware, for all the reasons I mentioned before. This is a big one, and people are using Gazebo and other simulators for this goal. Besides that, there's also development: good old-fashioned sending commands to the robot to make it do what you want, to follow a line, or to pick up an object using computer vision.
People do all of this development in simulation for the reasons I gave before, but there's also prototyping. Sometimes you don't even have the physical robot yet; you want to create the robot in simulation first, see how things work, and tweak its physical parameters even before you manufacture it. There's also testing: a lot of people already run CI on their robot code, so every time you make a change, maybe nightly or at every pull request, you run the simulation to check whether your robot's behavior is still what it should be.
What You Can Simulate
What can people simulate inside Gazebo? These are some examples that I took from the ignitionrobotics.org website, where you can get free models to use in robotic simulation. You can see there are some ground vehicles here; all these examples are wheeled, but you can also have legged robots, either bipeds with two legs, quadrupeds, or any other kind of legged robot. There are some smaller robots, self-driving cars with sensors, and some other form factors. There are also flying robots, both fixed-wing aircraft and quadcopters, hexacopters, you name it. Some more humanoid-like robots: this one is from NASA, and this one is the PR2 robot, which is on wheels, but you could have a robot like Atlas, which I showed before, that has legs. Besides these, people are also simulating industrial robots and underwater robots. All sorts of robots are being simulated inside Gazebo.
It all starts with how you describe your model. For all those models I showed you before, I showed you the visual appearance, and you may think, "This is just a 3D mesh." There's so much more to it. For simulation, you need all the physics information about the robot, like its dynamics: where the center of mass is, the friction between each part of the robot and the external world, how bouncy it is, where exactly the joints are connected, whether they are springy. All this information has to be embedded into that robot model.
All those models that I showed you before are described in a format called the Simulation Description Format, SDF. This format describes not just the robot but everything else in your scene: everything in this world, from the visual appearance, to where the lights are positioned and their characteristics and colors, to whether there is wind or a magnetic field. Every single thing inside your simulation world is described using this format. It is an XML format, so everything is described with XML tags: there is a tag for the specular color of your materials, and a tag for the friction of your materials.
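As a rough sketch of what this looks like (the element names follow the SDF specification, but the model name and all numeric values here are invented for illustration), a minimal world containing a single box might be described as:

```xml
<?xml version="1.0"?>
<sdf version="1.6">
  <world name="default">
    <!-- World-level properties (lights, wind, magnetic field, ...) live here too. -->
    <include>
      <uri>model://sun</uri>
    </include>
    <model name="simple_box">
      <link name="link">
        <inertial>
          <mass>1.0</mass>  <!-- dynamics: mass in kg -->
        </inertial>
        <collision name="collision">
          <geometry>
            <box><size>0.1 0.1 0.1</size></box>
          </geometry>
          <surface>
            <friction>
              <ode><mu>0.6</mu></ode>  <!-- friction coefficient -->
            </friction>
            <bounce>
              <restitution_coefficient>0.2</restitution_coefficient>  <!-- "bounciness" -->
            </bounce>
          </surface>
        </collision>
        <visual name="visual">
          <geometry>
            <box><size>0.1 0.1 0.1</size></box>
          </geometry>
          <material>
            <specular>0.1 0.1 0.1 1.0</specular>  <!-- visual appearance -->
          </material>
        </visual>
      </link>
    </model>
  </world>
</sdf>
```

Note the separation between `<collision>` (what the physics engine computes with) and `<visual>` (what the rendering engine draws); the physics and appearance of every element are described side by side in the same file.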
But there's only so far you can go with XML; sometimes you need more flexibility to express more complex behavior, more complex logic. For that, you use C++ plugins. Gazebo provides a variety of interfaces you can use to change things in simulation. On the rendering side, you can write a C++ plugin that implements different visual characteristics, making things blink in ways you wouldn't be able to achieve with the XML alone. The same goes for the physics: you can implement sensor noise models that you couldn't express just with the SDF description.
The main programming interface to Gazebo is C++ right now, but I'll talk a little later about how you can use other languages to interact with the simulation in meaningful ways.
When people think about robot simulation, the first thing that comes to mind is the physics: how does the robot collide with other things in the world? How does gravity pull the robot down? That's indeed the main part of the simulation. Gazebo, unlike some other simulators, doesn't implement its own physics engine. Instead, we have an abstraction layer that other people can use to integrate other physics engines. Right now, if you download the latest version of Gazebo, which is Gazebo 10, you get the four physics engines we support at the moment. The default is the Open Dynamics Engine (ODE), but we also support DART, Bullet, and Simbody. These are all external projects that are also open source, but they are not part of the core Gazebo code.
Instead, we have this abstraction layer. What happens is that you describe your world, your SDF file, only once, and you write your C++ plugins only once; at run time, you choose which physics engine to run with. Depending on your use case, you might prefer one or the other: according to how many robots you have, the kinds of interactions between objects, and whether you're doing manipulation or robot locomotion. All of these things affect which physics engine you choose to use in Gazebo.
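For instance (a sketch; the physics name and step size below are made up, and command-line details may vary between Gazebo versions), the engine choice lives in the world's `<physics>` element, while the rest of the SDF file stays unchanged:

```xml
<world name="default">
  <!-- Swap type="ode" for "bullet", "dart", or "simbody"
       without touching the models or plugins below. -->
  <physics name="default_physics" type="ode">
    <max_step_size>0.001</max_step_size>
    <real_time_factor>1.0</real_time_factor>
  </physics>
  <!-- models, lights, plugins ... -->
</world>
```

Gazebo classic also exposes a `--physics` command-line flag for the same purpose, so the same world file can be exercised under different engines without editing it.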
Let's look a little bit at my little assistant for today, and see some of the characteristics of the physics simulation that you should be aware of when you're planning to use simulation and then bring the code to the physical world. You can see some of the simplifying assumptions if, for example, I visualize the collisions of the model (let me make it transparent): these orange boxes you're seeing are what the physics engine is actually seeing. The physics engine doesn't care about the blue and white parts; for collision purposes, it's only calculating these boxes. It's not that you couldn't use the more complex geometry, but it would be very computationally expensive and not really worth it. It really depends on your final use case.
If you're really interested in the details of which parts are colliding with each other, then you want to use a more complex mesh, but for most use cases you're only interested in whether the robot really bumped into something, and for that an approximation is much better and you gain a lot in simulation performance. You have to be aware of this before you put the code on a physical robot, and you have to be aware of how much you can tune it. Depending on what you're using the robot for, you may want to choose these collisions a little differently.
Some of the things you can see here, for example, are that I didn't put collisions on the fingers. The fingers just pass through here. If you're doing manipulation, you obviously need collisions for the fingers, but if you're just making the robot play soccer, for example, you don't care about finger collisions; just remove them and gain a little performance in your simulation. You can also see that the collision is actually hitting this box here, but if you look at the complex mesh instead of the collision, it looks like the robot is floating a little bit. For most use cases you really want the simplified shapes, but you have to keep that in mind before you go to the physical robot.
Another simplification you usually make: let's take a look at the joints and at the center of mass of the robot. These are the centers of mass for each part of the robot, and you can see the axes of the joints; on the neck, there is a joint up there that lets the head go up and down, and then the neck can turn like this. I think the robot has a total of 25 joints, and this description is made to spec: this is what the perfect robot would be like, and that's what you put in simulation. In reality, your physically manufactured robot is going to deviate a lot from this. The joints are not going to be perfectly aligned on both sides of the robot, one arm is going to be a little heavier than the other, and the center of mass may not be exactly in the center. Maybe the battery moved inside and it's a little to the side. If you train your algorithms with a perfect robot inside the simulation and the result overfits to the perfect model, once you take it to the physical robot it's not going to work on the real one.
One thing people usually do is randomize this a little while training: for each iteration, you move the center of mass a bit, you decrease or increase the mass, you change the joints, you change all the parameters of the robot. The idea is not that you're going to find the real robot, because that doesn't exist. Each physical robot is manufactured differently; from one to the other, they're going to be different. Even a single robot will change over time: it loses a screw and suddenly the center of mass has shifted. The idea of randomization is not to find the real world; it's to be robust over a range of variation so that once you put it on a real robot, the real robot falls somewhere in that range.
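The randomization idea can be sketched in a few lines of Python. Note that the parameter names and ranges below are illustrative, not Gazebo API calls; in practice you would perturb the values written into the SDF/URDF file before each run.

```python
import random

# Nominal model parameters, as they might appear in a robot description
# file. These names and ranges are illustrative, not a Gazebo API.
NOMINAL = {
    "mass_kg": 1.0,
    "com_offset_m": (0.0, 0.0, 0.0),   # center-of-mass offset
    "joint_friction": 0.1,
}

def randomize(params, rel_noise=0.05, com_jitter_m=0.005, rng=None):
    """Return a perturbed copy of the nominal parameters.

    Each training episode loads a slightly different robot: mass scaled
    by a few percent, center of mass shifted by a few millimeters, etc.
    """
    rng = rng or random.Random()
    out = dict(params)
    out["mass_kg"] = params["mass_kg"] * rng.uniform(1 - rel_noise, 1 + rel_noise)
    out["com_offset_m"] = tuple(
        c + rng.uniform(-com_jitter_m, com_jitter_m) for c in params["com_offset_m"]
    )
    out["joint_friction"] = params["joint_friction"] * rng.uniform(1 - rel_noise, 1 + rel_noise)
    return out

# Each simulation run gets its own slightly-off robot model.
episode_model = randomize(NOMINAL, rng=random.Random(42))
```

A policy trained across thousands of such perturbed models tends to tolerate the (unknown) deviations of any one physical robot.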
Those are some of the interesting things; there are a bunch of others, like inertia, which is nice to look at too, but let's make the robot not transparent anymore. Here is a little clip from my master's thesis, done with one of our robots, and I did most of the work inside simulation. Only when I had it working in simulation did I go and put the code on the real robot. A good rule of thumb: if it works in simulation, it may work on the real robot; if it doesn't work in simulation, it most probably is not going to work on the real robot. So at least you can rule out all the cases that wouldn't work.
By the time I got here, I had put enough tolerances in the code, and I had also tested a lot with the physical robot, because it's important to periodically test on the physical robot too, so I was confident the algorithm was working. You can see someone's hands there in case something goes wrong, and that's mainly because of something we just hadn't modeled in simulation. On the physical robot, I was putting a lot of strain on one of the feet all the time because I was trying to balance, and the motors in the ankles were getting very hot. The robot comes with a built-in safety mechanism where it just screams, "Motor hot," and turns off all of its joints. The poor thing had its forehead all scratched, so the hand is there for those cases.
Let's talk a little bit about sensors. We talked about physics, how your robot interacts with the world, how you describe the dynamics and kinematics of the robot, but what about how the robot consumes information from the world in order to make decisions? Gazebo supports over 20 different types of sensors: cameras, GPS, IMUs, you name it. If you can put it on a robot, we support it one way or another. It's important to know that, by default, the simulation is going to give you perfect data. It's always good to modify the data a little, to add that randomization, to add noise, so that your data is not so perfect.
Let's go back to the robot; it has a few sensors now. Let's take a look at the cameras. I put in two cameras: one with noise and one with the perfect image. You can see the difference between them. This one is perfect, it doesn't have any noise; it's basically what you're seeing through the user interface of the simulator. And here you can see that it has noise; I put in a little too much. If you have a camera on a real robot with this much noise, maybe you should buy a new camera. I put some noise here, and you can see there is also some distortion, because real cameras have a bit of a fisheye effect, or the opposite, so you always have to take that into account. I did all of this just by passing parameters in XML; these are things Gazebo provides for you. But if your lens has a different kind of distortion, or you want to implement a different kind of noise (this is very simple Gaussian noise), you can always write a C++ plugin for it.
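In Gazebo the noise model is configured through XML parameters on the sensor; as a language-neutral illustration of what such a zero-mean Gaussian pixel-noise model does, here is a small Python sketch (the function name and the tiny list-of-lists "image" are made up for the example):

```python
import random

def add_gaussian_noise(image, stddev=10.0, rng=None):
    """Add zero-mean Gaussian noise to an 8-bit grayscale image.

    `image` is a list of rows of pixel values in [0, 255]. This mirrors
    what a simulator does when you ask for a noisy camera instead of a
    pixel-perfect render: sample noise per pixel and clamp to the valid
    range.
    """
    rng = rng or random.Random()
    return [
        [min(255, max(0, round(p + rng.gauss(0.0, stddev)))) for p in row]
        for row in image
    ]

perfect = [[128] * 4 for _ in range(3)]        # a flat gray "render"
noisy = add_gaussian_noise(perfect, stddev=25.0, rng=random.Random(7))
```

The standard deviation plays the same role as the noise parameter you would set in the sensor's XML: larger values make the synthetic camera look more like cheap real hardware.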
Let's take a look at another sensor. The camera was a rendering sensor, using the rendering engine to collect information, but there are also physical sensors, like an altimeter. I put in this bouncing ball with an altimeter, and we can look at its vertical position. I made it quite noisy, so you can see the data is not perfect. If I hadn't put noise there, it would look like a perfect parabola, because that's what the simulator does: it calculates everything perfectly for you. This is more like what you would get from a real sensor. I also set the update rate very low, so the graph looks better. The simulation is running at 1,000 hertz and, in theory, you could get data at 1,000 hertz, but then you have to ask: would your real sensor give you data at that rate, and would it have some delay? You can tweak all these little things in the simulation.
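The same idea, sketched in Python: the physics engine knows the perfect trajectory, and the sensor model reports it at a limited rate with noise added. The drop height, noise level, and update rate below are illustrative numbers, not values from the talk.

```python
import random

G = 9.81  # m/s^2

def true_altitude(t, h0=2.0):
    """Height of a ball dropped from h0 meters, before the first bounce."""
    return max(0.0, h0 - 0.5 * G * t * t)

def sample_altimeter(duration_s, update_hz, stddev=0.05, rng=None):
    """Sample a simulated altimeter at `update_hz`, adding Gaussian noise.

    The simulation itself may step at 1,000 Hz, but the sensor only
    publishes at its own (much lower) update rate, like real hardware.
    """
    rng = rng or random.Random()
    dt = 1.0 / update_hz
    n = int(duration_s * update_hz)
    return [(i * dt, true_altitude(i * dt) + rng.gauss(0.0, stddev)) for i in range(n + 1)]

# Seven noisy readings over 0.6 s at 10 Hz, instead of 600 perfect ones.
readings = sample_altimeter(0.6, update_hz=10, rng=random.Random(3))
```

Setting `stddev=0.0` recovers the perfect parabola the simulator computes internally; a real sensor model would also add latency, which this sketch omits.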
Another thing to think about is interfaces. When you're programming for your physical robot, depending on the robot, it may provide an SDK or some APIs you can program against, maybe from the manufacturer, maybe something else. But how do you do the same in simulation? You want to write all your code once, train in simulation, and then just flip a switch so that the same code is acting on the real robot. You don't want to write two separate programs and duplicate the logic in two places.
One way people commonly do this is using ROS, the Robot Operating System, which is also, as I mentioned earlier, an open-source project that we maintain at Open Robotics. ROS provides a bunch of tools, a common communication layer, and libraries so you can debug your robots in a unified way, and it has integration with the simulation. You can control your robot in simulation, then switch to a physical robot and control it with the same code. It's very convenient, and ROS offers a variety of language interfaces: you can use JavaScript, Java, Python; it's not limited to C++ like Gazebo is. The interface between ROS and Gazebo is C++, but once you're using ROS, you have access to all those other languages.
Let's look at some examples of past projects we've done inside Gazebo that had both a simulation and a physical-world component. This is a project called HAPTIX that happened a few years ago, about controlling this prosthetic hand here. We developed the simulation, and it had the same interface for controlling the hand in simulation and the physical hand. In this case we were using MATLAB: you could send commands in MATLAB, and the hand in simulation would perform the same way as the physical hand. We greatly improved the kind of surface contact that you need for successful grasping inside simulation.
This was one project. Another one was also a competition, called SASC, a game of tag between two swarms of drones. They could be fixed-wing or quadcopters, or a mix of the two. Each team had up to 50 drones. Imagine how it would have been to practice that in the physical world, with all those drones flying: for every single thing you want to try, you have to go collect all those drones again. It's just not feasible.
The first phase of the competition was in simulation; teams were competing in the cloud. We had the simulation running on the cloud, and they would control their drones with the same controls they would eventually use in the physical world. Once they had practiced enough, there was a physical competition with swarms of drones playing tag in the real world; that's what the picture on the right shows.
This one is the Space Robotics Challenge, which happened a couple of years ago. It was hosted by NASA using this robot here, called Valkyrie; it's a NASA robot, also known as Robonaut 5. The eventual goal for Valkyrie is to go to Mars and prepare the environment before humans arrive. The competition was set on Mars, and you can see the simulation here was set up as Mars: a red planet, red sky, and the robot had to perform some tasks just like it's expected to do in the future.
Twenty teams from around the world competed in the cloud. In this case we weren't only simulating the physics and the sensors; we were also simulating the kind of communication you would have with Mars. You would have a delay and very limited bandwidth; these were all part of the challenge. What's super cool is that the winner of the competition had only interacted with the robot through simulation up until then. Once he won, he was invited to a lab where they had reconstructed some of the tasks from the simulation. This is funny: they constructed in the real world something we had created for simulation, instead of going the other way around. It took him only one day to get the code he used to win the competition in the virtual world to make the physical robot do the same thing.
This is another example of a robot that uses Gazebo and also exists physically. It was developed by a group in Spain called Acutronic Robotics, and they integrated Gazebo with OpenAI Gym to train this robot. I think it can be extended to other robots performing tasks in simulation: it is trained in simulation, and then you can take what was learned and put the model inside the physical robot to perform the same task.
Now that you know a lot about Gazebo, let me tell you that we are currently rewriting it. As I mentioned earlier, Gazebo is over 15 years old, and there is a lot of room for improvement. We want to make use of more modern capabilities, like running simulations distributed across machines in the cloud, and more modern rendering technology, like physically based rendering and ray tracing, to get more realistic images from the camera sensors.
We're in the process of taking Gazebo, which is currently a huge monolithic code base, and breaking it into smaller reusable libraries. It will have a lot of new features; for example, the physics abstraction is going to be more flexible, so you can just write a physics plugin and use a different physics engine with Gazebo. The same goes for the rendering engine: we're making a plugin interface so you can write plugins that interface with any rendering engine you want. Even if you have access to a proprietary one, you can write a plugin and easily interface with it. There are a bunch of other improvements coming, and this is what I've been spending most of my time on recently. That's it; I hope I got you a little excited about simulation, and thank you.
Questions & Answers
Participant 1: You mentioned putting in all that randomization to train the models. I don't have much of a robotics background, so could you shed some light on what kind of models you mean, and what it means to train them?
Poubel: What I meant by those randomizations is in the description of your world, where you literally have, in your XML, mass equals 1 kilogram; in the next simulation, instead of a mass of 1 kilogram, you can put 1.01 kilograms. You can change positions a little bit, so every time you load the simulation, it's a little different from the one before. When you're training your algorithms, running 1,000, 10,000, or 100,000 simulations, having the model not be the same in every single one is going to make your final solution, your final model, much more robust to these variations. Once you get to the physical robot, it's going to be that much more robust.
Participant 2: Thanks for the talk. As a follow up to the previous question, does that mean if you use no randomization, then the simulation is completely deterministic?
Poubel: Mostly, yes. There are still sometimes numerical errors, and there are places where we use a random number generator whose seed you can set to make it deterministic, but there's always a little bit of difference, especially since we sometimes use asynchronous mechanisms, so depending on the order the messages arrive in, you may get a slightly different result.
Moderator: I was wondering if there are tips or tricks to use Gazebo in a continuous integration environment? Is it being done often?
Poubel: Yes, a lot of people are running CI with Gazebo in the cloud. The first thing is to turn off the user interface; you're not going to need it. There is a headless mode: right now, I'm running two processes, one for the back end and one for the front end, and you don't need the front end when you're running tests. Gazebo comes with a test library that we use to test Gazebo itself and that some people use to test their own code. It's based on gtest, so you have pass/fail and you can set expectations: you can say, "I expect my robot to hit the wall at this time," or, "I expect the robot not to have any disturbances during this whole run." Those are some of the things you'll use for CI.
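Gazebo's own test library is gtest-based C++; as a language-neutral sketch of the same pass/fail expectation style, here is the pattern with a stubbed-out "headless run" (the scenario, function names, and numbers are invented for illustration):

```python
def run_headless_sim(duration_s, dt=0.01):
    """Stand-in for a headless simulation run: a robot drives at 0.5 m/s
    toward a wall at x = 1.0 m and stops on contact."""
    x, t, contact_time = 0.0, 0.0, None
    trajectory = []
    while t <= duration_s:
        trajectory.append((t, x))
        if x >= 1.0 and contact_time is None:
            contact_time = t
        if contact_time is None:
            x += 0.5 * dt
        t += dt
    return trajectory, contact_time

trajectory, contact_time = run_headless_sim(3.0)

# Pass/fail expectations, in the spirit of "I expect my robot to hit
# the wall at this time":
assert contact_time is not None and 1.9 < contact_time < 2.1
assert all(x <= 1.01 for _, x in trajectory)  # never tunnels through the wall
```

In a real CI job, the assertions would run against topics or state queried from the headless simulator rather than a stub, but the structure of the test is the same.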
Participant 3: What kinds of real-world effects does Gazebo support, like wind or heat, things like that? Does it do that, or do we have to specify everything ourselves?
Poubel: I didn't quite catch what kind of real-world simulation you mean.
Participant 4: In the real world, you have lots of physical effects, like heat and wind. Does Gazebo have a library for those, or do we have to specify pretty much the whole simulation environment ourselves?
Poubel: There's a lot that comes with Gazebo. It ships with support for wind, for example, but it's a very simple wind model: a global wind, always blowing in the same direction. If you want a more complex wind model, you would have to write your own C++ plugin, or maybe import wind data from different software.
We try to provide the basics and an example of each physical phenomenon. There is buoyancy if you're underwater, there is lift-drag for fixed wings; we have the most basic things that apply to most use cases, and if you need something specific, you can always tweak: either download the code or start a plugin from scratch and tweak those parameters. It's all done through plugins; you don't need to compile Gazebo from source.
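A "global wind, always blowing in the same direction" can be sketched very compactly. This is an illustrative Python sketch of the idea; the drag coefficient and function names are made up, not Gazebo's API.

```python
# One constant velocity vector, applied everywhere, always.
WIND_VELOCITY = (3.0, 0.0, 0.0)  # m/s, the same at every point in the world

def wind_force(body_velocity, drag_coeff=0.8):
    """Force on a body, proportional to the wind speed relative to it."""
    return tuple(
        drag_coeff * (w - v) for w, v in zip(WIND_VELOCITY, body_velocity)
    )

# A body at rest feels the full wind; one moving with the wind feels none.
force_at_rest = wind_force((0.0, 0.0, 0.0))
force_with_wind = wind_force((3.0, 0.0, 0.0))
```

A more realistic plugin would vary the wind over space and time, or read it from recorded weather data, which is exactly the kind of extension the plugin interface is for.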
Recorded at:
Jun 28, 2019
Louise Poubel
Into Robotics
Webots Simulation Tutorials and Resources
- September 6, 2023
Realistic simulation and modeling are the main features of the tool, which is also used for programming robots in different programming languages, including C++, Java, and Python.
From colors to textures, and from force simulation to sensor interfaces, Webots was designed for a long list of robotics projects, with a large choice of sensors and actuators as well as a multi-robot simulation platform.
Programs developed with the built-in IDE or other development tools can be tested and then transferred to educational or commercial physical robots.
Using 3D modeling, realistic environments and robot states can be created, with the possibility of adding artificial intelligence or computer vision using integrated tools.
Webots Simulation Tool
A wide range of robots is supported by Webots. The list starts with well-known robots like AIBO and continues with Nao, iCub, HOAP-2, Lego Mindstorms, and more.
Below is a series of tutorials and guides, as well as other resources, to help beginners and advanced users learn the Webots tool.
Table of Contents
A collection of tutorials, from getting started with Webots to integrating with Matlab or adding new plug-ins. Webots is a popular simulation and modeling tool, used especially in research and educational projects. This chapter gathers tutorials and guides to learn Webots and start the programming and simulation process.
- Tutorials – a series of tutorials, from a first simulation to a guide on using ROS with Webots to build powerful 3D robots;
- Introduction to Webots – very good documentation for getting started with the Webots simulation tool;
- Controller Programming – ‘Hello World Example’, ‘Reading Sensors’, ‘Using Actuators’, and ‘How to use wb_robot_step()’ are just a few of the concepts described in this comprehensive guide;
- Supervisor Programming – a programming example showing how to track and how to set the position of the robot;
- Cyberbotics’ Robot Curriculum/Novice Programming Exercises – a simple guide to simulating and programming a wall-following algorithm;
- Designing and Building Multi-Robot Systems – a presentation on how to control motors and avoid obstacles in the Webots simulation software;
- Robotics Lab Demonstrator – a guide on using Webots and Enki to simulate and program mobile robots with wheels, legs, or wings;
- Modeling – a guide with tips and tricks for modeling in Webots, from building replicable/deterministic simulations to removing noise from the simulation;
- Webots for NAO – Nao has long been supported by Webots, and this is a very helpful tutorial on how to start using Nao together with Webots;
- DifferentialWheels – a simulation guide for differential steering;
- Kinematics and Motion Analysis of a Three-Dimensional Sidewinding Snakelike Robot – a guide with the formulas and code for a complex simulation of a snake-like robot with a wide range of movements;
- Running Monitor on Webots – a step-by-step guide to running MonitorShm on Webots;
- Echo State Networks – Pattern Generation for Motor Control – a tutorial with formulas, a Webots simulation, and Matlab integration;
- Controller plug-in – the Webots guide to adding a controller plug-in, making it easier to develop code for a robot;
- Webots Reference Manual – comprehensive material covering a wide range of Webots features, including the interfaces to sensors and actuators and textures in Webots;
- C++/Java/Python – a collection of code examples in different programming languages such as C++, Java, or Python;
- Using Visual C++ with Webots – a guide on using Visual C++ with Webots;
- Webots – a guide on how to integrate Urbi with the Webots simulation software;
- Intro to Controllers – a comprehensive overview from the Webots guide on programming controllers in Java;
- A Neural Network Controller for Webots – an article with the basics of understanding and using neural networks to control a robot;
- kaist_webots – a comprehensive guide to getting started with Webots and ROS and installing kaist_webots;
- Using MATLAB – a guide to start integrating Matlab and Webots;
Building Realistic Robot Simulations with MATLAB and NVIDIA Isaac Sim
Posted by Mihir Acharya , September 7, 2023
In this blog post, my colleague Dave Schowalter will introduce you to a new ecosystem that combines the photo-realistic simulation capabilities of NVIDIA Isaac Sim™ with the sensor processing and AI modeling capabilities from MathWorks for building realistic robot simulations.
I am Dave Schowalter! As a Partner Programs Manager at MathWorks, I build technology partnerships in the robotics and autonomous systems industry.
Simulating autonomous robots in a photo-realistic environment provides several practical advantages, especially when incorporating sensors and perception in the models. It enables robotics engineers and researchers to thoroughly assess robot performance in diverse, complex settings, enhancing adaptability and problem-solving capabilities.
Autonomous robots use machine learning and sensor-based perception algorithms. Incorporating sensors and sensor data processing with a photo-realistic scene simulation helps with improving accuracy and performance of these algorithms. This also becomes an advantage when training an autonomous robot based upon synthetic sensor outputs.
NVIDIA Isaac Sim and MathWorks Model-Based Design together provide an integrated approach to create and perform these simulations. It offers an efficient platform for addressing safety concerns and refining sensor calibration, ultimately leading to more reliable real-world implementations.
With Isaac Sim™, NVIDIA has created a high level of photo-realism by using a combination of techniques, including:
- Real-time Ray Tracing : Ray tracing is a technique that simulates the way light interacts with objects in the real world. This allows Isaac Sim to create realistic reflections, refractions, and shadows.
- Physically-based Rendering : Physically-based rendering is a technique that uses the laws of physics to simulate the way light interacts with materials. This allows Isaac Sim to replicate the way actual materials such as metal, plastic, and wood reflect and scatter light.
- High-Quality Textures : Isaac Sim uses high-quality textures to give objects a genuine appearance. These textures are created using a variety of methods, including scanning real objects and generating them using computer graphics techniques.
- Advanced Lighting : Isaac Sim uses advanced lighting techniques to create authentic lighting conditions. These techniques include global illumination, which simulates the way light bounces off objects, and ambient occlusion, which simulates the way shadows are created by objects blocking light.
- High-Performance Rendering : Isaac Sim uses NVIDIA’s GPUs to render scenes in real time. This allows users to perceive the interaction of simulation assets with the environment in a natural way.
Although Isaac Sim offers advanced physics simulation to recreate the behavior of objects in the real world, manipulation of synthetic data from multiple sensors (“sensor fusion”) and the use of those results to train AI algorithms and determine robot behavior can all be designed and managed in MATLAB and Simulink. Such a system built in Simulink is depicted below.
Once designed, the entire robot behavior in its environment can be predicted and then adjusted as needed, before testing a physical prototype. This integration of MATLAB and Simulink with Isaac Sim is most efficiently implemented through ROS, using the ROS Toolbox add-on in MATLAB. A screenshot of the integration in use is shown below.
Finally, the program can be deployed on embedded hardware (for example on the NVIDIA Jetson platform) to drive the end application.
If you are interested in learning more about this workflow, please register for the joint NVIDIA/MathWorks webinar on September 12, “MATLAB and Isaac Sim.” In the webinar, you will learn how to use this integration with workflows for manipulator and mobile robot applications.
Overcoming 4 Key Challenges in Cobot Software Development
![robot simulation presentation robot simulation presentation](https://blogs.mathworks.com/student-lounge/wp-content/blogs.dir/16/files/2019/02/RobotSystemDesign_Artifacts-e1550783547326.png)
MATLAB and Simulink for Autonomous System Design
![robot simulation presentation robot simulation presentation](https://blogs.mathworks.com/wp-content/themes/mathworks_1.0/images/placeholder_3.jpg)
Image map example
![robot simulation presentation robot simulation presentation](https://www.mathworks.com/matlabcentral/mlc-downloads/downloads/968cc8c9-afb0-4a84-8a61-2fed79ffd3d1/769b282f-248f-46de-b543-7c4fb4884403/images/1694970787.png)
QuadBot-NeuroMorphic
![robot simulation presentation robot simulation presentation](https://www.mathworks.com/matlabcentral/mlc-downloads/downloads/f029c993-5ef5-4ecd-8547-2f86268b67d2/71f67b85-4467-4402-9316-39d6bce7401a/images/1683095209.png)
Intelligent Bin Picking with Simulink® for UR5e Cobot
![robot simulation presentation robot simulation presentation](https://www.mathworks.com/matlabcentral/images/default_screenshot.jpg)
BASIC OF FACE FILTERING AND ENHANCEMENT
To leave a comment, please click here to sign in to your MathWorks Account or create a new one.
chapulina/simslides: Presentation slides inside robot simulations 🎥🤖
Import PDF files into robot simulation and present flying from slide to slide.
![robot simulation presentation SimSlides](https://github.com/chapulina/simslides/raw/main/images/SimSlides_logo.png)
SimSlides consists of plugins for two simulators: Gazebo Classic and Ignition Gazebo. Each simulator has a different feature set.
Ignition Gazebo
- Navigate through keyframes using mouse , keyboard or wireless presenter
- Look at a slide (even if it has moved)
- Move camera to a specific pose
- Go through slides stacked on the same pose
- ... plus all Ignition features!
Gazebo Classic
- Import PDF files into simulation through the GUI
- Seek to specific spot in a log file
- Write copiable HTML text to a dialog
- ... plus all Gazebo features!
It's also recommended to check out a couple of other tutorials if you want to use each simulator's full potential to customize your presentations. Maybe you want to set up keyboard triggers? Control a robot using ROS ? The possibilities are endless!
SimSlides' main branch supports both Gazebo Classic and Ignition. It's OK if you don't have both simulators installed; only the plugin for the simulator that is present will be compiled.
The main branch supports Ignition Citadel, Edifice and Fortress.
Follow the official install instructions .
Gazebo Classic
The main branch has been tested on Gazebo version 11.
Extra dependencies:
It's also recommended that you make sure ImageMagick can convert PDFs, see this .
Build SimSlides
By default, SimSlides will try to build against Ignition Citadel and Gazebo 11. For other Ignition versions, set the IGNITION_VERSION environment variable before building. For example:
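A sketch of setting that variable to target, say, Ignition Fortress:

```shell
# Select the Ignition version before building (Citadel is the default)
export IGNITION_VERSION=fortress
```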
SimSlides can be built with a basic cmake workflow, for example:
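A typical out-of-source CMake workflow might look like the following sketch (the paths and job count are illustrative, not taken from the project docs):

```shell
# From the root of the simslides clone
mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=../install   # install prefix is illustrative
make -j4
make install
```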
Be sure to add your CMAKE_PREFIX_PATH to LD_LIBRARY_PATH , for example, when following the steps above, you should do this before running:
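For instance, assuming an install prefix of install/ under the current directory (an illustrative path):

```shell
# Make the installed plugin libraries discoverable at runtime
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/install/lib
```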
It's also possible to build SimSlides inside a colcon workspace.
Run SimSlides
Run simslides:
Important : Source Gazebo first. The setup script may be in a different place depending on your Gazebo installation:
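A sketch of these two steps (the setup-script path is a common default and may differ on your system):

```shell
# Source Gazebo's environment (path may differ per installation)
source /usr/share/gazebo/setup.sh

# Launch SimSlides
simslides
```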
This starts SimSlides in an empty world. You're ready to create your own presentation!
You can find a demo presentation inside the worlds directory. The same demo works for both simulators.
Run it as follows:
Move to the simslides clone directory
(Only for Gazebo classic) Source Gazebo
Load the world
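Those steps might look like the following for Gazebo Classic (the world filename is illustrative; check the worlds directory for the actual name):

```shell
cd simslides                        # move to the clone directory
source /usr/share/gazebo/setup.sh   # Gazebo Classic only; path may differ
gazebo worlds/demo.world            # load the demo world (filename is illustrative)
```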
Your own presentation
You can generate your own presentation as follows:
Generate a new presentation
On the top menu, choose SimSlides -> Import PDF (or press F2 )
Choose a PDF file from your computer
Choose the folder to save the generated slide models at
Choose a prefix for your model names, they will be named prefix-0 , prefix-1 , ...
Click Generate. A model will be created for each page of your PDF. This may take a while and the screen may go black, but it works in the end. Sometimes it looks like not all pages of the PDF become models; that's a known open issue.
When it's done, all slides will show up on the world in a grid.
A world file is also created, so you can reload that any time.
Presentation mode
Once you have the slides loaded into the world, present as follows:
Press F5 or the play button on the top left to start presentation mode
Press the arrow keys to go back and forth on the slides
You're free to use the rest of Gazebo's interface while presenting. If you've navigated far away from the current slide, you can press F1 to return to it.
At any moment, you can press F6 to return to the initial camera pose.
Existing presentations
When this project was started, all presentations were kept in different branches of the same repository. Since mid 2019, new presentations are being created in their own repositories.
Until mid 2019
Check out the presentation's branch; the available ones are:
CppCon2015 : CppCon, September 2015
BuenosAires_Nov2015 : University of Buenos Aires, November 2015
Chile_Nov2015 : Universidad de Chile, November 2015
IEEE_WiE_ILC_2016 : IEEE Women in Engineering International Leadership Conference, May 2016
ROSCon_Oct2016 : ROSCon, October 2016
ROSIndustrial_Jan2017 : ROS Industrial web meeting, January 2017
OSS4DM_Mar2017 : Open Source Software for Decision Making, March 2017
OSCON_May2017 : Open Source Conference, May 2017
ROSCon_Sep2017 : ROSCon, Sep 2017
Brasil_Mar2018 : Brasil visits, Mar 2018
QConSF_Nov2018 : QConSF, Nov 2018
UCSC_Feb2019 : University of California, Santa Cruz, Feb 2019
QConAI_Apr2019 : QCon.ai, Apr 2019
A lot changes from one presentation to the next. Follow instructions on that branch's README to run the presentation. I've done my best to document it all, but each presentation may take some massaging to work years later.
Since mid 2019
See each repository / world:
- ROSConJP 2019 ( video )
- ROSCon 2019 ( video )
- ROS-Industrial Conference 2019: ( video )
- All Things Open 2020 ( video )
- Open Source 101 2021 ( video )
This project started as a few bash scripts for CppCon 2015. Back then, it used to be hosted on BitBucket using Mercurial.
Over the years, the project evolved into more handy GUI plugins, and is gaining more features for each presentation.
The repository was ported to GitHub + Git in August 2019, when BitBucket dropped Mercurial support.
Naos and UT Austin Villa
Week 0 (8/25): Class overview
Week 1: Introduction to motion control
Week 2: Motion control continued
Week 3: Probability/sensing
Week 4: Kalman filters
- CWMtx C++ Matrix library .
Week 5: Localization
- Adapting the Particle Size in Particle Filters Through KLD-Sampling Dieter Fox. In the International Journal of Robotic Research, IJRR, 2003. (An excellent description of robot localization - has some overlap with the textbook)
- Vision-Based Fast and Reactive Monte-Carlo Localization Thomas Roefer and Matthias Jungel. In the IEEE International Conference on Robotics and Automation, ICRA, 2003. (Another team's implementation details are in Sections III and IV )
- Fast and Robust Edge-Based Localization in the Sony Four-Legged Robot League Thomas Roefer and Matthias Jungel. In the Seventh International RoboCup Symposium, 2003. (On using field edges in localization)
- Making Use Of What You Don't see: Negative Information in Markov Localization Jan Hoffmann, Michael Spranger, Daniel Gohring and Matthias Jungel. In the IEEE International Conference on Intelligent Robots and Systems, IROS, 2005. (Recent article on using negative information in localization)
- Simultaneous Localization and Mapping (SLAM): Part I The Essential Algorithms Hugh Durrant-Whyte and Tim Bailey.
- Multiple Model Kalman Filters: A Localization Technique for RoboCup Soccer Quinlan and Middleton.
Week 6: Vision
- The UT Austin Villa 2003 Four-Legged Team , Extended version The University of Texas at Austin, Department of Computer Sciences, AI Laboratory Tech report UT-AI-TR-03-304. Read Sections 4, 4.1-4.3, 14.
- The UT Austin Villa 2004 RoboCup Four-Legged Team: Coming of Age Read Sections 3, 3.1,3.2 (and the first couple of appendices if you're interested)
- Using Layered Color Precision for a Self-Calibrating Vision System Matthias Jungel Robocup 2004
- Bayesian Color Estimation for Adaptive Vision-based Robot Localization. D. Schulz and D. Fox Proceedings of IROS, 2004.
- Color Learning on a Mobile Robot: Towards Full Autonomy under Changing Illumination Mohan Sridharan and Peter Stone. In The 20th International Joint Conference on Artificial Intelligence, pp. 2212
- B-Human Team Report and Code Release 2011. Thomas Röfer et al.
- UT Austin Villa 2013 - Advances in Vision, Kinematics, and Strategy: Paper , Slides Jacob Menashe et al.
Week 7: Walking
- Machine Learning for Fast Quadrupedal Locomotion Nate Kohl and Peter Stone In The Nineteenth National Conference on Artificial Intelligence, pp. 611-616, July 2004.
- The development of Honda humanoid robot Hirai, K. and Hirose, M. and Haikawa, Y. and Takenaka, T. ICRA 1998.
- On the Stability of Anthropomorphic Systems Vukobratovic, M. and Stepanenko, J. Mathematical Biosciences.
- Legged robots that balance. Raibert, M. H.
- Virtual Model Control of a Bipedal Walking Robot Pratt, J., Dilworth, P. and Pratt, G. ICRA 1997.
- Hybrid Zero Dynamics of Planar Biped Walkers Westervelt, E.R., Grizzle, J.W. and Koditschek, D.E. IEEE Trans. on Automatic Control, Vol.48, No.1, pp.42-56, 2003.
- Modeling and Control of Multi-Contact Centers of Pressure and Internal Forces in Humanoid Robots Luis Sentis, Jaeheung Park, and Oussama Khatib. IROS 2009.
Week 8: Action and sensor models
Week 9: Path planning
Week 10: Behavior architectures
Week 11: Multi-robot coordination
Week 12: Applications
Week 13: Social implications
An Introductory Robot Programming Tutorial
Let’s face it, robots are cool. In this post, Toptal Engineer Nick McCrea provides a step-by-step, easy-to-follow tutorial (with code samples) that walks you through the process of building a basic autonomous mobile robot.
![robot simulation presentation An Introductory Robot Programming Tutorial](https://assets.toptal.io/images?url=https%3A%2F%2Fbs-uploads.toptal.io%2Fblackfish-uploads%2Fcomponents%2Fblog_post_page%2F4084019%2Fcover_image%2Fregular_1708x683%2Fcover-programming-a-robot-an-introductory-tutorial-214caa8fb76924253fa484af9f7e892b.png)
By Nick McCrea
Nicholas is a professional software engineer with a passion for quality craftsmanship. He loves architecting and writing top-notch code.
Let’s face it, robots are cool. They’re also going to run the world some day, and hopefully, at that time they will take pity on their poor soft fleshy creators (a.k.a. robotics developers ) and help us build a space utopia filled with plenty. I’m joking of course, but only sort of .
In my ambition to have some small influence over the matter, I took a course in autonomous robot control theory last year, which culminated in my building a Python-based robotic simulator that allowed me to practice control theory on a simple, mobile, programmable robot.
In this article, I’m going to show how to use a Python robot framework to develop control software, describe the control scheme I developed for my simulated robot, illustrate how it interacts with its environment and achieves its goals, and discuss some of the fundamental challenges of robotics programming that I encountered along the way.
In order to follow this tutorial on robotics programming for beginners, you should have a basic knowledge of two things:
- Mathematics —we will use some trigonometric functions and vectors
- Python—since Python is among the more popular basic robot programming languages—we will make use of basic Python libraries and functions
The snippets of code shown here are just a part of the entire simulator, which relies on classes and interfaces, so in order to read the code directly, you may need some experience in Python and object-oriented programming.
Finally, optional topics that will help you to better follow this tutorial are knowing what a state machine is and how range sensors and encoders work.
The Challenge of the Programmable Robot: Perception vs. Reality, and the Fragility of Control
The fundamental challenge of all robotics is this: It is impossible to ever know the true state of the environment. Robot control software can only guess the state of the real world based on measurements returned by its sensors. It can only attempt to change the state of the real world through the generation of control signals.
![robot simulation presentation This graphic demonstrates the interaction between a physical robot and computer controls when practicing Python robot programming.](https://assets.toptal.io/images?url=https%3A%2F%2Fuploads.toptal.io%2Fblog%2Fimage%2F126755%2Ftoptal-blog-image-1533163065595-2de053ca39ad573f7067c2d9986cb829.png)
Thus, one of the first steps in control design is to come up with an abstraction of the real world, known as a model , with which to interpret our sensor readings and make decisions. As long as the real world behaves according to the assumptions of the model, we can make good guesses and exert control. As soon as the real world deviates from these assumptions, however, we will no longer be able to make good guesses, and control will be lost. Often, once control is lost, it can never be regained. (Unless some benevolent outside force restores it.)
This is one of the key reasons that robotics programming is so difficult. We often see videos of the latest research robot in the lab, performing fantastic feats of dexterity, navigation, or teamwork, and we are tempted to ask, “Why isn’t this used in the real world?” Well, next time you see such a video, take a look at how highly-controlled the lab environment is. In most cases, these robots are only able to perform these impressive tasks as long as the environmental conditions remain within the narrow confines of its internal model. Thus, one key to the advancement of robotics is the development of more complex, flexible, and robust models—and said advancement is subject to the limits of the available computational resources.
[Side Note: Philosophers and psychologists alike would note that living creatures also suffer from dependence on their own internal perception of what their senses are telling them. Many advances in robotics come from observing living creatures and seeing how they react to unexpected stimuli. Think about it. What is your internal model of the world? How does it differ from that of an ant, or from that of a fish? (Hopefully it does.) However, like the ant and the fish, it is likely to oversimplify some realities of the world. When your assumptions about the world are not correct, it can put you at risk of losing control of things. Sometimes we call this “danger.” In the same way that our little robot struggles to survive against the unknown universe, so do we all. This is a powerful insight for roboticists.]
The Programmable Robot Simulator
The simulator I built is written in Python and very cleverly dubbed Sobot Rimulator . You can find v1.0.0 on GitHub . It does not have a lot of bells and whistles but it is built to do one thing very well: provide an accurate simulation of a mobile robot and give an aspiring roboticist a simple framework for practicing robot software programming. While it is always better to have a real robot to play with, a good Python robot simulator is much more accessible and is a great place to start.
In real-world robots, the software that generates the control signals (the “controller”) is required to run at a very high speed and make complex computations. This affects the choice of which robot programming languages are best to use: Usually, C++ is used for these kinds of scenarios, but in simpler robotics applications, Python is a very good compromise between execution speed and ease of development and testing.
The software I wrote simulates a real-life research robot called the Khepera but it can be adapted to a range of mobile robots with different dimensions and sensors. Since I tried to program the simulator as similar as possible to the real robot’s capabilities, the control logic can be loaded into a real Khepera robot with minimal refactoring, and it will perform the same as the simulated robot. The specific features implemented refer to the Khepera III, but they can be easily adapted to the new Khepera IV.
In other words, programming a simulated robot is analogous to programming a real robot. This is critical if the simulator is to be of any use to develop and evaluate different control software approaches.
In this tutorial, I will be describing the robot control software architecture that comes with v1.0.0 of Sobot Rimulator , and providing snippets from the Python source (with slight modifications for clarity). However, I encourage you to dive into the source and mess around. The simulator has been forked and used to control different mobile robots, including a Roomba2 from iRobot . Likewise, please feel free to fork the project and improve it.
The control logic of the robot is constrained to these Python classes/files:
- models/supervisor.py —this class is responsible for the interaction between the simulated world around the robot and the robot itself. It evolves our robot state machine and triggers the controllers for computing the desired behavior.
- models/supervisor_state_machine.py —this class represents the different states in which the robot can be, depending on its interpretation of the sensors.
- The files in the models/controllers directory—these classes implement different behaviors of the robot given a known state of the environment. In particular, a specific controller is selected depending on the state machine.
Robots, like people, need a purpose in life. The goal of our software controlling this robot will be very simple: It will attempt to make its way to a predetermined goal point. This is usually the basic feature that any mobile robot should have, from autonomous cars to robotic vacuum cleaners. The coordinates of the goal are programmed into the control software before the robot is activated but could be generated from an additional Python application that oversees the robot movements. For example, think of it driving through multiple waypoints.
However, to complicate matters, the environment of the robot may be strewn with obstacles. The robot MAY NOT collide with an obstacle on its way to the goal. Therefore, if the robot encounters an obstacle, it will have to find its way around so that it can continue on its way to the goal.
The Programmable Robot
Every robot comes with different capabilities and control concerns. Let’s get familiar with our simulated programmable robot.
The first thing to note is that, in this guide, our robot will be an autonomous mobile robot . This means that it will move around in space freely and that it will do so under its own control. This is in contrast to, say, a remote-control robot (which is not autonomous) or a factory robot arm (which is not mobile). Our robot must figure out for itself how to achieve its goals and survive in its environment. This proves to be a surprisingly difficult challenge for novice robotics programmers.
Control Inputs: Sensors
There are many different ways a robot may be equipped to monitor its environment. These can include anything from proximity sensors, light sensors, bumpers, cameras, and so forth. In addition, robots may communicate with external sensors that give them information that they themselves cannot directly observe.
Our reference robot is equipped with nine infrared sensors —the newer model has eight infrared and five ultrasonic proximity sensors—arranged in a “skirt” in every direction. There are more sensors facing the front of the robot than the back because it is usually more important for the robot to know what is in front of it than what is behind it.
In addition to the proximity sensors, the robot has a pair of wheel tickers that track wheel movement. These allow you to track how many rotations each wheel makes, with one full forward turn of a wheel being 2,765 ticks. Turns in the opposite direction count backward, decreasing the tick count instead of increasing it. You don’t have to worry about specific numbers in this tutorial because the software we will write uses the traveled distance expressed in meters. Later I will show you how to compute it from ticks with an easy Python function.
Control Outputs: Mobility
Some robots move around on legs. Some roll like a ball. Some even slither like a snake.
Our robot is a differential drive robot, meaning that it rolls around on two wheels. When both wheels turn at the same speed, the robot moves in a straight line. When the wheels move at different speeds, the robot turns. Thus, controlling the movement of this robot comes down to properly controlling the rates at which each of these two wheels turn.
In Sobot Rimulator, the separation between the robot “computer” and the (simulated) physical world is embodied by the file robot_supervisor_interface.py , which defines the entire API for interacting with the “real robot” sensors and motors:
- read_proximity_sensors() returns an array of nine values in the sensors’ native format
- read_wheel_encoders() returns an array of two values indicating total ticks since the start
- set_wheel_drive_rates( v_l, v_r ) takes two values (in radians-per-second) and sets the left and right speed of the wheels to those two values
This interface internally uses a robot object that provides the data from sensors and the possibility to move motors or wheels. If you want to create a different robot, you simply have to provide a different Python robot class that can be used by the same interface, and the rest of the code (controllers, supervisor, and simulator) will work out of the box!
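In sketch form, such an interface might look like this (the three method names come from the list above; the wrapped robot object and this class structure are assumptions, not the actual source):

```python
class RobotSupervisorInterface:
    """Thin wrapper separating the robot 'computer' from the (simulated) hardware."""

    def __init__(self, robot):
        # any object exposing the same sensor/motor methods will do
        self.robot = robot

    def read_proximity_sensors(self):
        # nine values in the sensors' native (non-linear) format
        return self.robot.read_proximity_sensors()

    def read_wheel_encoders(self):
        # two values: total ticks for the left and right wheels
        return self.robot.read_wheel_encoders()

    def set_wheel_drive_rates(self, v_l, v_r):
        # wheel speeds in radians per second
        self.robot.set_wheel_drive_rates(v_l, v_r)
```

Swapping in a different robot is then just a matter of passing a different object to the constructor.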
The Simulator
Just as you would use a real robot in the real world without paying too much attention to the laws of physics involved, you can ignore how the robot is simulated and skip directly to how the controller software is programmed, since it will be almost the same in the real world as in a simulation. But if you are curious, I will briefly introduce it here.
The file world.py is a Python class that represents the simulated world, with robots and obstacles inside. The step function inside this class takes care of evolving our simple world by:
- Applying physics rules to the robot’s movements
- Considering collisions with obstacles
- Providing new values for the robot sensors
In the end, it calls the robot supervisors responsible for executing the robot brain software.
The step function is executed in a loop so that robot.step_motion() moves the robot using the wheel speed computed by the supervisor in the previous simulation step.
The apply_physics() function internally updates the values of the robot proximity sensors so that the supervisor will be able to estimate the environment at the current simulation step. The same concepts apply to the encoders.
A Simple Model
First, our robot will have a very simple model. It will make many assumptions about the world. Some of the important ones include:
- The terrain is always flat and even
- Obstacles are never round
- The wheels never slip
- Nothing is ever going to push the robot around
- The sensors never fail or give false readings
- The wheels always turn when they are told to
Although most of these assumptions are reasonable inside a house-like environment, round obstacles could be present. Our obstacle-avoidance software uses a simple implementation that follows the border of obstacles in order to go around them. We will give readers hints on how to improve our robot's control framework with an additional check to avoid circular obstacles.
The Control Loop
We will now enter into the core of our control software and explain the behaviors that we want to program inside the robot. Additional behaviors can be added to this framework, and you should try your own ideas after you finish reading! Behavior-based robotics software was proposed more than 20 years ago, and it's still a powerful tool for mobile robotics. As an example, in 2007 a set of behaviors was used in the DARPA Urban Challenge, a landmark competition for autonomous cars!
A robot is a dynamic system. The state of the robot, the readings of its sensors, and the effects of its control signals are in constant flux. Controlling the way events play out involves the following three steps:
- Apply control signals.
- Measure the results.
- Generate new control signals calculated to bring us closer to our goal.
These steps are repeated over and over until we have achieved our goal. The more times we can do this per second, the finer control we will have over the system. The Sobot Rimulator robot repeats these steps 20 times per second (20 Hz), but many robots must do this thousands or millions of times per second in order to have adequate control. Remember our previous introduction about different robot programming languages for different robotics systems and speed requirements.
In general, each time our robot takes measurements with its sensors, it uses these measurements to update its internal estimate of the state of the world—for example, the distance from its goal. It compares this state to a reference value of what it wants the state to be (for the distance, it wants it to be zero), and calculates the error between the desired state and the actual state. Once this information is known, generating new control signals can be reduced to a problem of minimizing the error which will eventually move the robot towards the goal.
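The measure, estimate, act cycle described above might be sketched like this (the supervisor and interface methods are illustrative placeholders, not the actual Sobot Rimulator API):

```python
import time

def control_loop(interface, supervisor, hz=20):
    """Run the sense-think-act cycle at a fixed rate (20 Hz, as in Sobot Rimulator)."""
    dt = 1.0 / hz
    while not supervisor.goal_reached():
        readings = interface.read_proximity_sensors()   # measure the results
        supervisor.update_state(readings)               # update the world estimate
        v_l, v_r = supervisor.compute_wheel_rates()     # generate new control signals
        interface.set_wheel_drive_rates(v_l, v_r)       # apply control signals
        time.sleep(dt)                                  # hold the loop rate
```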
A Nifty Trick: Simplifying the Model
To control the robot we want to program, we have to send a signal to the left wheel telling it how fast to turn, and a separate signal to the right wheel telling it how fast to turn. Let’s call these signals v_L and v_R. However, constantly thinking in terms of v_L and v_R is very cumbersome. Instead of asking, “How fast do we want the left wheel to turn, and how fast do we want the right wheel to turn?” it is more natural to ask, “How fast do we want the robot to move forward, and how fast do we want it to turn, or change its heading?” Let’s call these parameters velocity v and angular (rotational) velocity ω (read “omega”). It turns out we can base our entire model on v and ω instead of v_L and v_R, and only once we have determined how we want our programmed robot to move, mathematically transform these two values into the v_L and v_R we need to actually control the robot wheels. This is known as a unicycle model of control.
![robot simulation presentation In robotics programming, it's important to understand the difference between unicycle and differential drive models.](https://assets.toptal.io/images?url=https%3A%2F%2Fuploads.toptal.io%2Fblog%2Fimage%2F126756%2Ftoptal-blog-image-1533163092561-995d084b82e4f6ebda91d70643f81f5b.png)
Here is the Python code that implements the final transformation in supervisor.py . Note that if ω is 0, both wheels will turn at the same speed:
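In standalone form, a sketch of that transformation follows directly from the differential-drive geometry (parameter names are illustrative; R and L denote the wheel radius and wheelbase of the particular robot):

```python
def uni_to_diff(v, omega, wheel_radius, wheel_base_length):
    """Convert unicycle commands into left/right wheel speeds (rad/s).

    v     -- forward velocity in m/s
    omega -- angular velocity in rad/s
    """
    R = wheel_radius
    L = wheel_base_length
    v_l = ((2.0 * v) - (omega * L)) / (2.0 * R)
    v_r = ((2.0 * v) + (omega * L)) / (2.0 * R)
    return v_l, v_r
```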
Estimating State: Robot, Know Thyself
Using its sensors, the robot must try to estimate the state of the environment as well as its own state. These estimates will never be perfect, but they must be fairly good because the robot will be basing all of its decisions on these estimations. Using its proximity sensors and wheel tickers alone, it must try to guess the following:
- The direction to obstacles
- The distance from obstacles
- The position of the robot
- The heading of the robot
The first two properties are determined by the proximity sensor readings and are fairly straightforward. The API function read_proximity_sensors() returns an array of nine values, one for each sensor. We know ahead of time that the seventh reading, for example, corresponds to the sensor that points 75 degrees to the right of the robot.
Thus, if this value shows a reading corresponding to 0.1 meters distance, we know that there is an obstacle 0.1 meters away, 75 degrees to the right. If there is no obstacle, the sensor will return a reading of its maximum range of 0.2 meters. Thus, if we read 0.2 meters on sensor seven, we will assume that there is actually no obstacle in that direction.
Because of the way the infrared sensors work (measuring infrared reflection), the numbers they return are a non-linear transformation of the actual distance detected. Thus, the Python function for determining the distance indicated must convert these readings into meters. This is done in supervisor.py as follows:
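A sketch of such a conversion, assuming the sensors follow an exponential reflection model (the constants here are illustrative stand-ins for the sensor's calibration data):

```python
from math import log

def proximity_distances(raw_readings, min_range=0.02):
    """Convert raw IR readings into distances in meters.

    Assumes the sensor model: reading = 3960 * exp(-30 * (d - min_range)),
    i.e. the raw value decays exponentially with distance. The constants
    3960 and 30 are illustrative calibration values.
    """
    return [min_range + log(3960.0 / r) / 30.0 for r in raw_readings]
```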
Again, we have a specific sensor model in this Python robot framework, while in the real world, sensors come with accompanying software that should provide similar conversion functions from non-linear values to meters.
Determining the position and heading of the robot (together known as the pose in robotics programming) is somewhat more challenging. Our robot uses odometry to estimate its pose. This is where the wheel tickers come in. By measuring how much each wheel has turned since the last iteration of the control loop, it is possible to get a good estimate of how the robot’s pose has changed—but only if the change is small .
This is one reason it is important to iterate the control loop very frequently in a real-world robot, where the motors moving the wheels may not be perfect. If we waited too long to measure the wheel tickers, both wheels could have turned quite a lot, and it would be impossible to estimate where we have ended up.
Given our current software simulator, we can afford to run the odometry computation at 20 Hz—the same frequency as the controllers. But it could be a good idea to have a separate Python thread running faster to catch smaller movements of the tickers.
Below is the full odometry function in supervisor.py that updates the robot pose estimation. Note that the robot’s pose is composed of the coordinates x and y , and the heading theta , which is measured in radians from the positive X-axis. Positive x is to the east and positive y is to the north. Thus a heading of 0 indicates that the robot is facing directly east. The robot always assumes its initial pose is (0, 0), 0 .
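An odometry update along these lines might be sketched as follows (tick counts per revolution and the wheel geometry are robot-specific constants; the names are illustrative):

```python
from math import pi, cos, sin

def update_odometry(pose, d_ticks_left, d_ticks_right,
                    wheel_radius, wheel_base_length, ticks_per_rev):
    """Advance a pose estimate (x, y, theta) from wheel-encoder tick deltas.

    theta is measured in radians from the positive X-axis (east).
    """
    x, y, theta = pose
    # distance each wheel has rolled since the last update
    d_left = 2.0 * pi * wheel_radius * (d_ticks_left / ticks_per_rev)
    d_right = 2.0 * pi * wheel_radius * (d_ticks_right / ticks_per_rev)
    d_center = 0.5 * (d_left + d_right)  # distance traveled by the robot's midpoint
    new_x = x + d_center * cos(theta)
    new_y = y + d_center * sin(theta)
    new_theta = theta + (d_right - d_left) / wheel_base_length
    return (new_x, new_y, new_theta)
```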
Now that our robot is able to generate a good estimate of the real world, let’s use this information to achieve our goals.
Python Robot Programming Methods: Go-to-Goal Behavior
The supreme purpose in our little robot’s existence in this programming tutorial is to get to the goal point. So how do we make the wheels turn to get it there? Let’s start by simplifying our worldview a little and assume there are no obstacles in the way.
This then becomes a simple task and can be easily programmed in Python. If we go forward while facing the goal, we will get there. Thanks to our odometry, we know what our current coordinates and heading are. We also know what the coordinates of the goal are because they were pre-programmed. Therefore, using a little linear algebra, we can determine the vector from our location to the goal, as in go_to_goal_controller.py :
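A sketch of that computation, rotating the world-frame goal vector into the robot's frame (names are illustrative, not the actual source):

```python
from math import cos, sin

def goal_vector_in_robot_frame(robot_pose, goal):
    """Return the vector from the robot to the goal, in the robot's frame.

    robot_pose -- (x, y, theta) in world coordinates
    goal       -- (gx, gy) in world coordinates
    """
    x, y, theta = robot_pose
    dx, dy = goal[0] - x, goal[1] - y  # vector to goal in the world frame
    # rotate by -theta to express it in the robot's reference frame
    return (cos(theta) * dx + sin(theta) * dy,
            -sin(theta) * dx + cos(theta) * dy)
```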
Note that we are getting the vector to the goal in the robot’s reference frame , and NOT in world coordinates. If the goal is on the X-axis in the robot’s reference frame, that means it is directly in front of the robot. Thus, the angle of this vector from the X-axis is the difference between our heading and the heading we want to be on. In other words, it is the error between our current state and what we want our current state to be. We, therefore, want to adjust our turning rate ω so that the angle between our heading and the goal will change towards 0. We want to minimize the error:
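In sketch form, that proportional turn command might look like this (the class structure and names are illustrative; atan2 recovers the angle of the goal vector, which in the robot's frame is exactly the heading error):

```python
from math import atan2

class GoToGoalController:
    def __init__(self, kP):
        self.kP = kP  # proportional gain for heading control

    def calculate_omega(self, goal_in_robot_frame):
        """Steer toward the goal.

        The angle of the goal vector in the robot's frame IS the heading
        error; atan2 keeps it in (-pi, pi]."""
        theta_error = atan2(goal_in_robot_frame[1], goal_in_robot_frame[0])
        return self.kP * theta_error
```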
self.kP in the above snippet of the controller Python implementation is a control gain: a coefficient that determines how fast we turn in proportion to how far our heading is from the goal direction. If the error in our heading is 0, then the turning rate is also 0. In the real Python function inside the file go_to_goal_controller.py, you will see more gains like it, since we used a full PID controller instead of a simple proportional coefficient.
Now that we have our angular velocity ω , how do we determine our forward velocity v ? A good general rule of thumb is one you probably know instinctively: If we are not making a turn, we can go forward at full speed, and then the faster we are turning, the more we should slow down. This generally helps us keep our system stable and acting within the bounds of our model. Thus, v is a function of ω . In go_to_goal_controller.py the equation is:
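The exact expression isn't reproduced here, but any function that starts at full speed and decays as |ω| grows captures the idea. One plausible sketch:

```python
def forward_velocity(omega, v_max=0.3):
    """Slow down in proportion to how hard we are turning.
    Illustrative shaping only, not the exact formula from
    go_to_goal_controller.py; the v_max value is assumed."""
    return v_max / (abs(omega) + 1.0)
```

At omega = 0 this returns v_max, and the forward speed falls smoothly toward zero as the turn rate grows.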
One way to elaborate on this formula: we usually want to slow down when near the goal in order to arrive with zero speed. How would the formula change? It would have to replace v_max() with something proportional to the remaining distance to the goal. OK, we have almost completed a single control loop. The only thing left to do is transform these two unicycle-model parameters into differential wheel speeds, and send the signals to the wheels. Here's an example of the robot's trajectory under the go-to-goal controller, with no obstacles:
![robot simulation presentation This is an example of the programmed robot's trajectory.](https://assets.toptal.io/images?url=https%3A%2F%2Fuploads.toptal.io%2Fblog%2Fimage%2F127227%2Ftoptal-blog-image-1537520502248-97c702361811ddfddfe7a1d58f552c8a.png)
As we can see, the vector to the goal is an effective reference for us to base our control calculations on. It is an internal representation of “where we want to go.” As we will see, the only major difference between go-to-goal and other behaviors is that sometimes going towards the goal is a bad idea, so we must calculate a different reference vector.
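One mechanical detail mentioned above deserves a sketch: converting the unicycle commands (v, ω) into left and right wheel speeds. This follows standard differential-drive kinematics; the wheel radius R and wheelbase L values are assumptions:

```python
def uni_to_diff(v, omega, R=0.021, L=0.0885):
    """Convert unicycle commands into left/right wheel angular
    velocities (rad/s) for a differential-drive robot."""
    v_l = (2.0 * v - omega * L) / (2.0 * R)  # left wheel
    v_r = (2.0 * v + omega * L) / (2.0 * R)  # right wheel
    return v_l, v_r
```

With omega = 0 both wheels spin at the same rate and the robot drives straight; a positive omega speeds up the right wheel and slows the left, turning the robot counterclockwise.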
Python Robot Programming Methods: Avoid-Obstacles Behavior
Going towards the goal when there’s an obstacle in that direction is a case in point. Instead of running headlong into things in our way, let’s try to program a control law that makes the robot avoid them.
To simplify the scenario, let’s now forget the goal point completely and just make the following our objective: When there are no obstacles in front of us, move forward. When an obstacle is encountered, turn away from it until it is no longer in front of us.
Accordingly, when there is no obstacle in front of us, we want our reference vector to simply point forward. Then ω will be zero and v will be maximum speed. However, as soon as we detect an obstacle with our proximity sensors, we want the reference vector to point in whatever direction is away from the obstacle. This will cause ω to shoot up to turn us away from the obstacle, and cause v to drop to make sure we don’t accidentally run into the obstacle in the process.
A neat way to generate our desired reference vector is by turning our nine proximity readings into vectors, and taking a weighted sum. When there are no obstacles detected, the vectors will sum symmetrically, resulting in a reference vector that points straight ahead as desired. But if a sensor on, say, the right side picks up an obstacle, it will contribute a smaller vector to the sum, and the result will be a reference vector that is shifted towards the left.
For a general robot with a different placement of sensors, the same idea can be applied but may require changes in the weights and/or additional care when sensors are symmetrical in front and in the rear of the robot, as the weighted sum could become zero.
![robot simulation presentation When programmed correctly, the robot can avoid these complex obstacles.](https://assets.toptal.io/images?url=https%3A%2F%2Fuploads.toptal.io%2Fblog%2Fimage%2F127228%2Ftoptal-blog-image-1537520579112-c85f06b4f9d4321053ef17bb90dcc308.png)
Here is the code that does this in avoid_obstacles_controller.py :
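The weighted sum can be sketched as follows; the sensor mounting angles and per-sensor weights below are illustrative, not the exact values in avoid_obstacles_controller.py:

```python
import math

# Nine proximity sensor mounting angles (radians, robot frame) and
# per-sensor weights. Both sets of values are assumptions.
SENSOR_ANGLES = [-1.57, -0.96, -0.66, -0.34, 0.0, 0.34, 0.66, 0.96, 1.57]
SENSOR_WEIGHTS = [0.2, 0.5, 1.0, 1.5, 2.0, 1.5, 1.0, 0.5, 0.2]

def ao_heading_vector(distances):
    """Weighted sum of the obstacle points detected by each sensor,
    expressed in the robot's reference frame."""
    hx = hy = 0.0
    for d, a, w in zip(distances, SENSOR_ANGLES, SENSOR_WEIGHTS):
        hx += w * d * math.cos(a)
        hy += w * d * math.sin(a)
    return hx, hy
```

With symmetric readings the y-components cancel and the vector points straight ahead; an obstacle on the right shrinks the right-hand contributions and tips the sum toward the left.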
Using the resulting ao_heading_vector as our reference for the robot to try to match, here are the results of running the robot software in simulation using only the avoid-obstacles controller, ignoring the goal point completely. The robot bounces around aimlessly, but it never collides with an obstacle, and even manages to navigate some very tight spaces:
![robot simulation presentation This robot is successfully avoiding obstacles within the Python robot simulator.](https://assets.toptal.io/images?url=https%3A%2F%2Fuploads.toptal.io%2Fblog%2Fimage%2F127229%2Ftoptal-blog-image-1537520633834-8b1eff97b6b2816f97c9750c6ff42585.png)
Python Robot Programming Methods: Hybrid Automata (Behavior State Machine)
So far we’ve described two behaviors—go-to-goal and avoid-obstacles—in isolation. Both perform their function admirably, but in order to successfully reach the goal in an environment full of obstacles, we need to combine them.
The solution we will develop lies in a class of machines that has the supremely cool-sounding designation of hybrid automata. A hybrid automaton is programmed with several different behaviors, or modes, as well as a supervising state machine. The supervising state machine switches from one mode to another at discrete times (when goals are achieved or the environment suddenly changes too much), while each behavior uses sensors and wheels to react continuously to environment changes. The system is called hybrid because it evolves in both a discrete and a continuous fashion.
Our Python robot framework implements the state machine in the file supervisor_state_machine.py .
Equipped with our two handy behaviors, a simple logic suggests itself: When there is no obstacle detected, use the go-to-goal behavior. When an obstacle is detected, switch to the avoid-obstacles behavior until the obstacle is no longer detected.
As it turns out, however, this logic will produce a lot of problems. What this system will tend to do when it encounters an obstacle is to turn away from it, then as soon as it has moved away from it, turn right back around and run into it again. The result is an endless loop of rapid switching that renders the robot useless. In the worst case, the robot may switch between behaviors with every iteration of the control loop—a state known as a Zeno condition .
There are multiple solutions to this problem, and readers who are looking for deeper knowledge should check, for example, the DAMN software architecture.
What we need for our simple simulated robot is an easier solution: One more behavior specialized with the task of getting around an obstacle and reaching the other side.
Python Robot Programming Methods: Follow-Wall Behavior
Here’s the idea: When we encounter an obstacle, take the two sensor readings that are closest to the obstacle and use them to estimate the surface of the obstacle. Then, simply set our reference vector to be parallel to this surface. Keep following this wall until A) the obstacle is no longer between us and the goal, and B) we are closer to the goal than we were when we started. Then we can be certain we have navigated the obstacle properly.
With our limited information, we can’t say for certain whether it will be faster to go around the obstacle to the left or to the right. To make up our minds, we select the direction that will move us closer to the goal immediately. To figure out which way that is, we need to know the reference vectors of the go-to-goal behavior and the avoid-obstacle behavior, as well as both of the possible follow-wall reference vectors. Here is an illustration of how the final decision is made (in this case, the robot will choose to go left):
![robot simulation presentation Utilizing a few types of behaviors, the programmed robot avoids obstacles and continues onward.](https://assets.toptal.io/images?url=https%3A%2F%2Fuploads.toptal.io%2Fblog%2Fimage%2F126757%2Ftoptal-blog-image-1533163132257-cef100a1a349722545f4f4275e083448.png)
Determining the follow-wall reference vectors turns out to be a bit more involved than either the avoid-obstacle or go-to-goal reference vectors. Take a look at the Python code in follow_wall_controller.py to see how it’s done.
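The first step, estimating the wall from the two nearest detected points, can be sketched like this (robot-frame coordinates; the standoff-distance correction that keeps the robot a fixed distance from the wall is omitted, and the function name is an assumption):

```python
import math

def fw_heading_vector(p_closest, p_second, follow_left=True):
    """Estimate the wall surface from the two nearest detected obstacle
    points and return a unit vector parallel to it. Sketch only: the
    real follow_wall_controller.py also maintains a standoff distance."""
    # wall surface direction, from the closest point toward the second
    wx = p_second[0] - p_closest[0]
    wy = p_second[1] - p_closest[1]
    if not follow_left:
        # follow the wall in the opposite direction
        wx, wy = -wx, -wy
    norm = math.hypot(wx, wy) or 1.0  # guard against coincident points
    return (wx / norm, wy / norm)
```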
Final Control Design
The final control design uses the follow-wall behavior for almost all encounters with obstacles. However, if the robot finds itself in a tight spot, dangerously close to a collision, it will switch to pure avoid-obstacles mode until it is a safer distance away, and then return to follow-wall. Once obstacles have been successfully negotiated, the robot switches to go-to-goal. Here is the final state diagram, which is programmed inside the supervisor_state_machine.py :
![robot simulation presentation This diagram illustrates the switching between robotics programming behaviors to achieve a goal and avoid obstacles.](https://assets.toptal.io/images?url=https%3A%2F%2Fuploads.toptal.io%2Fblog%2Fimage%2F126758%2Ftoptal-blog-image-1533163148607-9003726c530c8d43edc36083286b1115.png)
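That diagram can be condensed into a transition function like the sketch below. The state names, condition flags, and thresholds are assumptions rather than the real API of supervisor_state_machine.py:

```python
# Hypothetical thresholds (values assumed).
D_STOP = 0.05    # "dangerously close" distance
D_FOLLOW = 0.15  # distance at which follow-wall engages

def next_state(state, d_min, at_goal, progress_made, obstacle_cleared):
    """One transition of the behavior state machine.

    d_min            -- closest proximity-sensor reading
    at_goal          -- True when the goal has been reached
    progress_made    -- True when closer to the goal than at wall entry
    obstacle_cleared -- True when the obstacle no longer blocks the goal
    """
    if at_goal:
        return "stop"
    if d_min < D_STOP:
        return "avoid_obstacles"  # emergency: back away first
    if state == "avoid_obstacles" and d_min >= D_FOLLOW:
        return "follow_wall"      # safe again: go back to hugging the wall
    if state == "follow_wall" and progress_made and obstacle_cleared:
        return "go_to_goal"       # obstacle successfully negotiated
    if state == "go_to_goal" and d_min < D_FOLLOW:
        return "follow_wall"      # obstacle encountered
    return state
```

The supervisor would call this once per control loop and activate whichever controller the returned state names.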
Here is the robot successfully navigating a crowded environment using this control scheme:
![robot simulation presentation The robot simulator has successfully allowed the robot software to avoid obstacles and achieve its original purpose.](https://assets.toptal.io/images?url=https%3A%2F%2Fuploads.toptal.io%2Fblog%2Fimage%2F127230%2Ftoptal-blog-image-1537520675488-c0ae4ebb9a85f7f1d277ca32cd08cdb5.png)
An additional feature of the state machine that you could try to implement is a way to handle circular obstacles by switching back to go-to-goal as soon as possible, instead of following the obstacle border until its end (which does not exist for circular objects!).
Tweak, Tweak, Tweak: Trial and Error
The control scheme that comes with Sobot Rimulator is very finely tuned. It took many hours of tweaking one little variable here, and another equation there, to get it to work in a way I was satisfied with. Robotics programming often involves a great deal of plain old trial-and-error. Robots are very complex and there are few shortcuts to getting them to behave optimally in a robot simulator environment…at least, not much short of outright machine learning, but that’s a whole other can of worms.
I encourage you to play with the control variables in Sobot Rimulator and observe and attempt to interpret the results. Changes to the following all have profound effects on the simulated robot’s behavior:
- The error gain kP in each controller
- The sensor gains used by the avoid-obstacles controller
- The calculation of v as a function of ω in each controller
- The obstacle standoff distance used by the follow-wall controller
- The switching conditions used by supervisor_state_machine.py
- Pretty much anything else
When Programmable Robots Fail
We’ve done a lot of work to get to this point, and this robot seems pretty clever. Yet, if you run Sobot Rimulator through several randomized maps, it won’t be long before you find one that this robot can’t deal with. Sometimes it drives itself directly into tight corners and collides. Sometimes it just oscillates back and forth endlessly on the wrong side of an obstacle. Occasionally it is legitimately imprisoned with no possible path to the goal. After all of our testing and tweaking, sometimes we must come to the conclusion that the model we are working with just isn’t up to the job, and we have to change the design or add functionality.
In the mobile robot universe, our little robot's "brain" is on the simpler end of the spectrum. Many of the failure cases it encounters could be overcome by adding some more advanced software to the mix. More advanced robots make use of techniques such as mapping, to remember where they've been and avoid trying the same things over and over; heuristics, to generate acceptable decisions when there is no perfect decision to be found; and machine learning, to more perfectly tune the various control parameters governing the robot's behavior.
A Sample of What’s to Come
Robots are already doing so much for us, and they are only going to be doing more in the future. While even basic robotics programming is a tough field of study requiring great patience, it is also a fascinating and immensely rewarding one.
In this tutorial, we learned how to develop reactive control software for a robot using the high-level programming language Python. But there are many more advanced concepts that can be learned and tested quickly with a Python robot framework similar to the one we prototyped here. I hope you will consider getting involved in the shaping of things to come!
Acknowledgement: I would like to thank Dr. Magnus Egerstedt and Jean-Pierre de la Croix of the Georgia Institute of Technology for teaching me all this stuff, and for their enthusiasm for my work on Sobot Rimulator.
Further Reading on the Toptal Blog:
- An Introduction to Robot Operating System: The Ultimate Robot Application Framework
- Learn to Code: Wisdom and Tools for the Journey
- Single Responsibility Principle: A Recipe for Great Code
- Forex Algorithmic Trading: A Practical Tale for Engineers
- A Machine Learning Tutorial With Examples: An Introduction to ML Theory and Its Applications
Understanding the basics
What is a robot?
A robot is a machine with sensors and mechanical components connected to and controlled by electronic boards or CPUs. They process information and apply changes to the physical world. Robots are mostly autonomous and replace or help humans in everything from daily routines to very dangerous tasks.
What are robots used for?
Robots are used in factories and farms to do heavy or repetitive tasks. They are used to explore planets and oceans, clean houses, and help elderly people. Researchers and engineers are also trying to use robots in disaster situations, medical analysis, and surgery. Self-driving cars are also robots!
How do you build a robot?
The creation of a robot requires multiple steps: the mechanical layout of the parts, the design of the sensors and drivers, and the development of the robot’s software. Usually, the raw body is built in factories and the software is developed and tested on the first batch of working prototypes.
How do you program a robot?
There are three steps involved. First, you get motors and sensors running using off-the-shelf drivers. Then you develop basic building blocks so that you can move the robot and read its sensors. Finally, you use those building blocks to develop smart, complex software routines that create your desired behavior.
What is the best programming language for robotics?
Two main programming languages are the best when used in robotics: C++ and Python, often used together as each one has pros and cons. C++ is used in control loops, image processing and to interface low-level hardware. Python is used to handle high-level behaviors and to quickly develop tests or proof of concepts.
How can you program a robot using Java?
Assuming you are able to run the Java Virtual Machine on your robot, you can interface your Java code with the motor and sensor drivers using sockets or RPC. Writing device drivers directly in Java may be harder than in other languages such as C++, so it’s better to focus on developing high-level behavior!
What is robotics engineering?
Robotic engineering is a broad field of engineering focused on the design and integration of entire robotic systems. Thus it requires knowledge of mechanical, electronic, software, and control systems, interacting with the engineers specialized in each field to fulfill the requirements and goals for a given robot.
What is the difference between robotic process automation (RPA) and robotics programming?
Both fields develop software in order to help or replace humans, but RPA targets tasks usually done by a human in front of a computer, such as sending emails, filing receipts, or browsing a website. Robotics instead executes tasks in the real world such as cleaning, driving, building, or manufacturing.
Who invented the first robot in the world?
The first mobile robot was created in 1966 at Stanford Research Institute by a team led by Charles Rosen and Nils Nilsson. Using only a 24-bit CPU and 196 KB of RAM, it was able to move around an office autonomously while avoiding obstacles. Since it shook while it moved, its creators called it Shakey.
About the author: Nick McCrea, Denver, CO, United States.
NVIDIA Isaac Sim
NVIDIA Isaac Sim™ is a reference application enabling developers to design, simulate, test, and train AI-based robots and autonomous machines in a physically-based virtual environment. Isaac Sim, built on NVIDIA Omniverse, is fully extensible, enabling developers to build their own custom simulators or integrate core Isaac Sim technologies into their existing testing and validation pipelines.
Download Omniverse Download Container Forums
Introducing NVIDIA Isaac Lab
NVIDIA Isaac Lab is a lightweight sample application built on Isaac Sim and optimized for robot learning, which is pivotal for robot foundation model training. Isaac Lab supports reinforcement and imitation learning and can train all types of robot embodiments, including the Project GR00T foundation model for humanoids.
![robot simulation presentation NVIDIA Isaac Lab application for robot learning and foundation model training](https://developer.download.nvidia.com/images/isaac-lab-1980x1080.jpg)
Key Benefits of Isaac Sim
Realistic Simulation
Isaac Sim makes the most of the Omniverse platform’s powerful simulation technologies. These include advanced GPU-enabled physics simulation with NVIDIA PhysX® 5, photorealism with real-time ray and path tracing, and MDL material definition support for physically based rendering.
Modular Architecture for a Variety of Applications
Isaac Sim is built to address many of the most common use cases, including manipulation, navigation, and synthetic data generation for training data. Its modular design also means the tool can be customized and extended to many new use cases.
Seamless Connectivity and Interoperability
Isaac Sim benefits from the Omniverse platform’s OpenUSD interoperability across 3D and simulation tools - enabling developers to easily design, import, build, and share robot models and virtual training environments. Now, you can easily connect the robot’s brain to a virtual world through the Isaac ROS/ROS 2 interface, full-featured Python scripting, and plug-ins for importing robot and environment models.
Key Features of Isaac Sim
![robot simulation presentation Pre-populated robots and sensors in a warehouse](https://developer.download.nvidia.com/isaac/images/isaac-sim-new-robots-sensors-techman-ari.jpg)
Pre-Populated Robots and Sensors
Get started faster using pre-existing robot models and sensors. Explore new robot models, including FANUC and Techman, and sensor ecosystem support for Orbbec, Sensing, Zvision, Ouster, and RealSense.
![robot simulation presentation Open Robotics ROS logo](https://developer.download.nvidia.com/images/isaac/logo-ros.jpg)
ROS/ROS 2.0 Support
Custom ROS messages and URDF/MJCF are now open sourced. Get support for custom ROS messages that allow standalone scripting to control the simulation steps manually.
![robot simulation presentation NVIDIA Omniverse Replicator for scalable synthetic data generation in simulation](https://developer.download.nvidia.com/images/isaac/nvidia-isaac-sim-1920x1080(1).jpg)
Scalable Synthetic Data Generation
Explore randomization in simulation added for manipulator and mobile base use cases. Environmental dynamics and other attributes of 3D assets—such as lighting, reflection, color, and position—are randomized to train and test mobile robots and manipulators.
![robot simulation presentation OpenUSD SimReady warehouse scenes and assets](https://developer.download.nvidia.com/isaac/images/isaac-sim-warehouse-builder-Mezzanine-Floor-Main.png)
SimReady Assets
Take advantage of OpenUSD SimReady warehouse scenes and assets. Use SimReady 3D assets to create and test scenarios and exercise robot solutions across warehouse configurations.
Developer Resources and Support
Get Live Help
Connect with experts live to get your questions answered: chat with us on our Forums, attend an upcoming event, or see the weekly livestream calendar.
Explore Resources
Learn at your own pace with free getting-started material: check out the documentation, follow along with self-paced trainings, and dive into the Q&A forums.
Robotics DevOps
NVIDIA OSMO is a cloud-native workflow orchestration platform that lets you easily scale your workloads across distributed environments—from on-premises to private and public cloud resource clusters. It provides a single pane of glass for scheduling complex multi-stage and multi-container heterogeneous computing workflows.
![robot simulation presentation NVIDIA OSMO logo](https://developer.download.nvidia.com/images/issac-osmo-1920x1080.jpg)
Latest Robotics News
Get Started With Isaac Sim Today
Stanford University
Stanford Engineering Everywhere
CS223A - Introduction to Robotics
Course Description
The purpose of this course is to introduce you to the basics of modeling, design, planning, and control of robot systems. In essence, the material treated in this course is a brief survey of relevant results from geometry, kinematics, statics, dynamics, and control. The course is presented in a standard format of lectures, readings, and problem sets. There will be an in-class midterm and final examination. These examinations will be open book. Lectures will be based mainly, but not exclusively, on material in the Lecture Notes book. Lectures will follow roughly the same sequence as the material presented in the book, so it can be read in anticipation of the lectures. Topics: robotics foundations in kinematics, dynamics, control, motion planning, trajectory generation, programming, and design. Prerequisites: matrix algebra.
- DOWNLOAD All Course Materials
![Prof. Oussama Khatib](https://see.stanford.edu/Content/Images/Instructors/khatib.jpg)
Khatib, Oussama
Prof. Khatib was the Program Chair of ICRA 2000 (San Francisco) and Editor of "The Robotics Review" (MIT Press). He has served as the Director of the Stanford Computer Forum, an industry affiliate program. He is currently the President of the International Foundation of Robotics Research (IFRR) and Editor of STAR, Springer Tracts in Advanced Robotics. Prof. Khatib is an IEEE Fellow, a Distinguished Lecturer of IEEE, and a recipient of the JARA Award.
Lecture notes are provided for: Lecture 1; Lectures 1-3; Lectures 4-5; Lectures 6-8; Lecture 9; Lecture 10; Lectures 11-12; Lectures 13-15.
Assignments
Assignments accompany: Lecture 3; Lectures 4-5; Lectures 6-8; Lectures 8-10; Lectures 11-13; Lectures 14-15.
Course Sessions (16):
Each session is available to watch online or download, and transcripts are provided.
1. (58 min) Course Overview, History of Robotics Video, Robotics Applications, Related Stanford Robotics Courses, Lecture and Reading Schedule, Manipulator Kinematics, Manipulator Dynamics, Manipulator Control, Manipulator Force Control, Advanced Topics
2. (1 hr 8 min) Spatial Descriptions, Generalized Coordinates, Operational Coordinates, Rotation Matrix, Example - Rotation Matrix, Translations, Example - Homogeneous Transform, Operators, General Operators
3. (1 hr 17 min) Homogeneous Transform Interpretations, Compound Transformations, Spatial Descriptions, Rotation Representations, Euler Angles, Fixed Angles, Example - Singularities, Euler Parameters, Example - Rotations
4. (1 hr 12 min) Manipulator Kinematics, Link Description, Link Connections, Denavit-Hartenberg Parameters, Summary - DH Parameters, Example - DH Table, Forward Kinematics
5. (1 hr 7 min) Summary - Frame Attachment, Example - RPRR Manipulator, Stanford Scheinman Arm, Stanford Scheinman Arm - DH Table, Forward Kinematics, Stanford Scheinman Arm - T-Matrices, Stanford Scheinman Arm - Final Results
6. (1 hr 11 min) Instantaneous Kinematics, Jacobian, Jacobians - Direct Differentiation, Example 1, Scheinman Arm, Basic Jacobian, Position Representations, Cross Product Operator, Velocity Propagation, Example 2
7. (1 hr 9 min) Jacobian - Explicit Form, Jacobian Jv / Jw, Jacobian in a Frame, Jacobian in Frame {0}, Scheinman Arm, Scheinman Arm - Jacobian, Kinematic Singularity
8. (1 hr 15 min) Scheinman Arm - Demo, Kinematic Singularity, Example - Kinematic Singularity, Puma Simulation, Resolved Rate Motion Control, Angular/Linear - Velocities/Forces, Velocity/Force Duality, Virtual Work, Example
9. (1 hr 16 min) Intro - Guest Lecturer: Gregory Hager, Overview - Computer Vision, Computational Stereo, Stereo-Based Reconstruction, Disparity Maps, SIFT Feature Selection, Tracking Cycle, Face Stabilization Video, Future Challenges
10. (1 hr 2 min) Guest Lecturer: Krasimir Kolarov, Trajectory Generation - Basic Problem, Cartesian Planning, Cubic Polynomial, Finding Via Point Velocities, Linear Interpolation, Higher Order Polynomials, Trajectory Planning with Obstacles
11. (1 hr 14 min) Joint Space Dynamics, Newton-Euler Algorithm, Inertia Tensor, Example, Newton-Euler Equations, Lagrange Equations, Equations of Motion
12. (1 hr 14 min) Lagrange Equations, Equations of Motion, Kinetic Energy, Equations of Motion - Explicit Form, Centrifugal and Coriolis Forces, Christoffel Symbols, Mass Matrix, V Matrix, Final Equation of Motion
13. (1 hr 10 min) Control - Overview, Joint Space Control, Resolved Motion Rate Control, Natural Systems, Dissipative Systems, Example, Passive System Stability
14. (1 hr 13 min) PD Control, Control Partitioning, Motion Control, Disturbance Rejection, Steady-State Error, PID Control, Effective Inertia
15. (1 hr 12 min) Manipulator Control, PD Control Stability, Task Oriented Control, Task Oriented Equations of Motion, Operational Space Dynamics, Example, Nonlinear Dynamic Decoupling, Trajectory Tracking
16. (1 hr 10 min) Compliance, Force Control, Dynamics, Task Description, Historical Robotics, Stanford Human-Safe Robot, Task Posture and Control, Multi-Contact Whole-Body Control
Stanford Center for Professional Development
© Stanford University, Stanford, California 94305
![robot simulation presentation robotics simulation](https://cdn1.slideserve.com/2149359/robotics-simulation-n.jpg)
Robotics Simulation
Jul 22, 2014
Presentation Transcript
Robotics Simulation (Skynet) Andrew Townsend Advisor: Professor Grant Braught
Introduction • Robotics is a quickly emerging field in today’s array of technology • Robots are expensive, and each robot’s method of control is different • Used in Dickinson College’s Artificial Life class • dLife – universal robotics controller • Problem to be solved: How can multiple students do coursework for the Artificial Life class at the same time?
Three Critical Goals for the Solution • Accurate modeling of Hemisson and AIBO robots • Multiple simultaneous users controlling multiple robots at once • Solution must integrate with the existing dLife package
Proposed Solution • Simulation provides a cheap alternative software solution to this problem. • Many Robotics simulators already exist • Not all simulate multiple robots or accept multiple users • None of them integrate with dLife.
Robotics Simulator’s Role • Integrate transparently with dLife such that the process of using a virtual robot is the same as using a physical robot • Provide support for multiple users using multiple robots in the same ‘space’ • Accurate modeling of a robot’s expected actions, to provide an effective alternative for Artificial Life students.
Background • Several robotics simulators already exist • Some of these include Pyro, Gazebo, Stage, Simulator Bob, and SIMpact
Background.Pyro • Pyro is one of the applications most similar to dLife. • Concentrates on providing a single language to communicate with a robot so that the user does not have to learn a new language for each kind of robot • Integrates with another simulator called Stage.
Background.Stage • Closest to this project • Integrates with client packages, such as Pyro • Focuses on representing a world and the simulated robot’s interactions with that world
Architecture • Similar to the Pyro/Stage methodology • Client / Server Architecture • Communication should not be limited to the local machine • Use Socket-based communication.
Architecture.Client (dLife) • The client of a robotics simulation represents a user’s input to the robot • dLife already provides a unified interface to abstract the user’s control of the robot and relevant sensor readings, regardless of what kind of robot it is • dLife must simply be extended to allow for socket based communication, rather than using serial communication. • Other than connection mechanisms, all other processes should be essentially identical to dLife’s interaction with a physical robot
Architecture.Server • Most obvious choice is to use Java • Server must be able to simulate the robots while still accepting new connections. Threaded connection listening! • Server has two main roles: • Server must be able to accept some form of predefined world, and display it to the user. • Server must also be capable of modeling a robot’s actions in that world.
The World is what you make of it • Clearly a simulator is useless if you can’t see what the robot is doing • Worlds are created from reading input from text files. • World Files assume ‘real world’ dimensions, in this case meters, and translate from there. • Axis flip
The World File • Each World File contains basic information such as the dimensions of the world and coloring. • Also allows for a SCALE statement for perspective purposes • Each world file can contain an unlimited number (barring system resources) of objects that reside in the world
The World File: Objectivity • Objects are currently limited to circles, rectangles, and lines • Objects are defined in the World File by their size, position, and coloring. • Currently objects are impassable objects as far as the robot is concerned, but moveable objects are a possibility for the future.
The World File: Syntax Checking • Upon loading a given World File, the server checks the file against the expected format and reports errors intelligently. • Erroneous data is reported to the user and discarded.
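A syntax-checking loader in this spirit might look like the sketch below. The actual World File grammar is not given in the slides, so the statement names (WORLD, SCALE, CIRCLE, RECT) are invented for illustration; the report-and-discard error handling matches the behavior described above:

```python
def parse_world_file(lines):
    """Parse a simple world file; report and discard malformed lines.

    The line grammar here (WORLD/SCALE/CIRCLE/RECT) is hypothetical --
    the real World File format is not specified in the slides.
    """
    world = {"objects": [], "errors": []}
    for n, raw in enumerate(lines, start=1):
        tokens = raw.split()
        if not tokens or raw.lstrip().startswith("#"):
            continue  # skip blank lines and comments
        kind, args = tokens[0].upper(), tokens[1:]
        try:
            if kind == "WORLD":            # WORLD width height (meters)
                world["width"], world["height"] = float(args[0]), float(args[1])
            elif kind == "SCALE":          # SCALE pixels_per_meter
                world["scale"] = float(args[0])
            elif kind == "CIRCLE":         # CIRCLE x y radius color
                world["objects"].append(
                    ("circle", float(args[0]), float(args[1]), float(args[2]), args[3]))
            elif kind == "RECT":           # RECT x y width height color
                world["objects"].append(("rect", *map(float, args[:4]), args[4]))
            else:
                raise ValueError("unknown statement " + kind)
        except (IndexError, ValueError) as err:
            # Erroneous data is reported to the user and discarded.
            world["errors"].append(f"line {n}: {err}")
    return world
```

Collecting errors instead of aborting lets the server report every problem in the file at once, as the slides describe.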
Robot Modeling.Representation • Each kind of robot should be drawn in the same way, with slight variations to indicate which robot is which. • Currently, only the Hemisson robot is implemented, and is represented by a circle of the user’s choice in color, with a line pointing in the direction the robot is facing. • Server handshake: ‘Hemisson [x] [y] [direction] [color]’
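The handshake line above has a fixed field order, so parsing it is straightforward. The field types (floats for pose and heading, a string for color) are assumptions; the slides specify only the field order:

```python
def parse_handshake(message):
    """Parse the server handshake 'Hemisson [x] [y] [direction] [color]'.

    Field types are an assumption -- the slides give only the field order.
    """
    kind, x, y, direction, color = message.split()
    if kind != "Hemisson":
        # Only the Hemisson robot is implemented so far.
        raise ValueError("unsupported robot type: " + kind)
    return {"type": kind, "x": float(x), "y": float(y),
            "direction": float(direction), "color": color}
```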
Robot Modeling.Implementation • As each robot is meant to be independent of each other, robots are implemented as versions of the robotThread class. • Thread is associated with the socket connection, real-time communication. • All processes associated with that particular robot are handled exclusively in that thread. • Makes it easy to keep each kind of robot’s appearances, capabilities, and actions separate while still keeping it easy to add new kinds of robots to the mix. • Timing is everything
Skynet v.1b DEMONSTRATION
Status • World representation is essentially complete • dLife integration is complete except for one thing… • Robot representation is defined and almost entirely complete, but problems with directional values (rounding errors and conversion issues) • Collision algorithm only partially works • Only forward/backward movement is fully supported, although command parsing is implemented
Challenges • GUIs. • Keeping real world measurements separate from internal ‘backend’ measurements. Directional values are more elusive than they should be. • Writing code in such a way that test cases would work (more threading!) • Collision detection algorithm
Future Work • Mostly revolve around the “HemissonThread” class • Independent and different wheel speeds • Fix directional value calculations • Empirical testing for accurate modeling – tweak timer accordingly • Fix dLife dialog box. • Finishing collision detection algorithm • Add support for scripting? • Bring the noise • Implementation of simAIBOs!
Thank you! Questions?
Describing Robots from Design to Learning: Towards an Interactive Lifecycle Representation of Robots (a preprint submitted to the IEEE ICRA 2024)
The robot development process is divided into several stages, which creates barriers to the exchange of information between them. We advocate for an interactive lifecycle representation, extending from robot morphology design to learning, and introduce the role of robot description formats in facilitating information transfer throughout this pipeline. We analyze the relationship between design and simulation, which lets us employ robot process automation methods to transfer information from the design phase to the learning phase in simulation. As part of this effort, we have developed an open-source plugin called ACDC4Robot for Fusion 360, which automates this process and turns Fusion 360 into a user-friendly graphical interface for creating and editing robot description formats. Additionally, we offer an out-of-the-box robot model library to streamline and reduce repetitive tasks. All code is hosted open-source ( https://github.com/bionicdl-sustech/ACDC4Robot )
1 INTRODUCTION
As autonomous machines capable of interacting with the real world, various types of robots, such as wheeled mobile robots, quadrupedal robots, and humanoid robots, are emerging in domestic, factory, and other environments to collaborate with humans or accomplish tasks independently. The morphology of a robot is the essential factor that most directly affects the robot’s configuration space, thereby determining the robot’s function [ 1 ] . Robot morphology is primarily determined during the design process, where the development of computer-aided design (CAD) technology has made iteration cost-effective, time-saving, and efficient compared to iterating in manufacturing.
Beyond robot morphology, learning has become an essential topic in robotics because it enables robots to achieve complex tasks and, thus, better interact with the environment. However, training robots on hardware may lead to failures or damage, making it expensive and time-consuming. Simulation provides a more cost-effective and safer way to develop robots. Moreover, robot simulators incorporate domain randomization techniques that increase exploration of the state-action space, facilitating the transfer of knowledge learned in simulation to real robots [ 2 ] . All robot simulators construct simulation instances from robot models derived from robot description formats.
![The robot system development framework](https://arxiv.org/extracted/5305669/fig-Robot-System-Development-Framework.png)
Robot Description Format (RDF) is a class of formats that can describe the robot system in a structured manner following a set of rules. RDF contains information about the robot system, including kinematics, dynamics, actuators, sensors, and the environment with which the robot can interact. RDF can transfer information about the robot from the design phase to simulation; thus, it can be seen as the interface between robot morphology design and robot learning in a simulated environment.
1.1 File Formats from Design to Learning
Several file formats are used in robot morphology design and in learning within a simulation environment. These file formats have specific features tailored to different application scenarios, hindering interoperability between stages of the process. This variety makes it challenging to transfer information from the design phase to the learning process in simulation.
In contemporary practice, robot morphology is typically designed using CAD software. File formats in the CAD field can be categorized into neutral and native formats. Neutral file formats adhere to cross-platform compatibility standards, including STEP files (.stp, .step), IGES files (.igs, .iges), COLLADA, and STL. Native file formats are platform-specific and contain precise information optimized for the respective platform, examples of which include SolidWorks (.sldprt, .sldasm), Fusion 360 (.f3d), Blender (.blend), and many others.
Several robot description formats are used in robot simulation. The most common format is the Unified Robotics Description Format (URDF), which is supported by various robot simulators, including PyBullet, Gazebo, and MuJoCo. SDFormat is natively supported by Gazebo and partially supported by PyBullet. MuJoCo natively supports MJCF, which is also supported by Isaac Sim and PyBullet. Other robot description formats are closer to native formats tied to particular simulators than URDF is. For example, the CoppeliaSim scene file is designed for use with CoppeliaSim, and WBT is used in Webots.
1.2 A Brief Historical Review of Robot Description Formats
Robot Description Formats provide information for modeling the robot system and are widely used in robot simulators. Currently, research resources on robot description formats are limited, with most of the relevant information available only on their respective websites and forums, making research challenging. The authors in [ 3 ] compared existing formats and summarized their main advantages and limitations. Here, we offer a concise historical perspective on robot description formats to enhance understanding.
1.2.1 Before Unified Robot Description Format (URDF)
Research on robot modeling predates the concept of a robot description format by a considerable margin. Denavit and Hartenberg formulated a convention using four parameters to model robot manipulators in 1955 [ 4 ] , which is still widely used in robotics. With the advent of computer simulation, robots could be defined using programming-language variables [ 5 ] . While it is theoretically possible to describe a robotic system through a programming language’s variables and data structures, the reliance on language-specific features can make it cumbersome to exchange robot system information across platforms for various purposes. Therefore, representing robot system information in a unified, programming-language-independent manner facilitates interchange across platforms and enhances development efficiency. Park et al. [ 6 ] discussed XML-based formats, which suit robot description because XML conveniently structures and delivers information.
1.2.2 URDF, SDFormat, and Others
While developing a personal robotics platform, Eric Berger and Keenan Wyrobek conceived the idea of creating a “Linux for robotics” [ 7 ] . URDF was introduced alongside the first distribution of ROS, ROS Mango Tango, released in 2009. URDF is an XML-based file format that favors readability and describes robot links (kinematics, dynamics, geometries) and robot joints, organized in a tree structure. URDF models robots in a universal way, making it suitable for visualization, simulation, and planning within the ROS framework.
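Because URDF is plain XML, a minimal robot can be assembled programmatically. The sketch below builds a two-link, one-joint URDF tree with Python's standard library; the element and attribute names (robot, link, joint, parent, child, axis, limit) follow the URDF specification, while the link and joint names are arbitrary:

```python
import xml.etree.ElementTree as ET

def minimal_urdf():
    """Build a two-link, one-revolute-joint URDF and return it as a string."""
    robot = ET.Element("robot", name="two_link_arm")
    # Links are the nodes of the kinematic tree; joints are its edges.
    ET.SubElement(robot, "link", name="base_link")
    ET.SubElement(robot, "link", name="upper_arm")
    joint = ET.SubElement(robot, "joint", name="shoulder", type="revolute")
    ET.SubElement(joint, "parent", link="base_link")
    ET.SubElement(joint, "child", link="upper_arm")
    ET.SubElement(joint, "origin", xyz="0 0 0.1", rpy="0 0 0")
    ET.SubElement(joint, "axis", xyz="0 0 1")
    ET.SubElement(joint, "limit", lower="-1.57", upper="1.57",
                  effort="10.0", velocity="1.0")
    return ET.tostring(robot, encoding="unicode")
```

The tree structure is visible directly in the XML: every joint names exactly one parent link and one child link, which is also why plain URDF cannot express closed-loop chains.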
With the growing popularity of ROS, URDF has become a widely used robot description format supported by various simulation platforms, such as PyBullet, MuJoCo, and Isaac Sim, among others. However, an increasing number of roboticists have recognized the limitations and issues of URDF, such as its inability to support closed-loop chains. The community has endorsed proposals like URDF2 ( https://sachinchitta.github.io/urdf2/ ) to address these concerns. The problems stemming from URDF’s design may become increasingly challenging to resolve over time due to diminishing development activity (the update frequency of the repository at https://github.com/ros/urdf has become very low). Therefore, new formats can draw upon URDF’s experience to avoid such issues from the outset and expand their ability to describe a broader range of scenarios.
Rosen Diankov et al. [ 8 ] promoted an XML-based open standard called COLLADA, which allows for complex kinematics with closed-loop chains. SDFormat (Simulation Description Format) was initially developed as part of the Gazebo simulator and later separated from Gazebo as an independent project to enhance versatility across different platforms. SDFormat is also an XML-based format that shares a similar grammar with URDF but extends it to describe the environment with which the robot interacts. Furthermore, SDFormat is under active development, making it more responsive to future robotics needs. MJCF is another XML-based file format, initially used in the MuJoCo simulator; it can describe robot structures, including kinematics, dynamics, and other elements such as sensors and motors.
Although these robot description formats enable more comprehensive modeling information for robotic systems and have resolved some of the limitations of URDF, URDF remains the most universally adopted robot description format in academia and industry. Fig. 2 provides a timeline representation of the release times of these robot description formats.
![Timeline of robot description format releases](https://arxiv.org/extracted/5305669/fig-RDF-History.png)
1.2.3 Beyond URDF
Daniella Tola et al. [ 9 , 10 ] surveyed the user experience of URDF within the robotics community, including academia and industry. Their survey revealed problems associated with using URDF and inspired further research on robot description formats. Some challenges are specific to URDF, for instance, the lack of support for closed-chain mechanisms. Other challenges are common to all robot description formats, such as the complex workflow involving multiple tools, including CAD software, text editors, and simulators.
One solution is to create a new robot description format that can adequately describe robot systems and is also easy to use. A recent attempt in this regard is the OpenUSD format ( https://aousd.org/ ), which combines the strengths of academia and industry to drive progress in this field.
Another solution is to provide more tools that enhance the usability of robot description formats. Some tools, such as gz-usd ( https://github.com/gazebosim/gz-usd ) and sdformat_mjcf ( https://github.com/gazebosim/gz-mujoco/tree/main/sdformat_mjcf ), improve the interoperability of different robot description formats. CAD tools for exporting robot designs to robot description formats are in high demand within the roboticist community because they relieve developers from the tedious workflow of creating robot description formats by hand. Such tools include the SolidWorks URDF exporter, Fusion2URDF, the OnShape to URDF exporter, and the Blender extension Phobos.
In the rest of this paper, Section 2 introduces methods for structuring the workflow from design to learning and presents an automation tool, ACDC4Robot, designed to address these challenges. Section 3 demonstrates the usage of the automation tool with examples and offers a robot model library for users that can be readily utilized. We conclude in Section 4 and discuss the limitations of our work and the future of the format for robot system development. This article’s contributions include promoting a lifecycle representation from robot design to robot learning, offering the ACDC4Robot tool within Fusion 360 to streamline the workflow from robot design to robot learning, and constructing an out-of-the-box robot model library for robot design and learning.
2 METHODOLOGY
We first analyze the workflow of describing robots from design to learning and propose an interactive lifecycle representation. We then employ robotic process automation to streamline the path from robot design to robot learning; an automation tool integrated with a CAD platform realizes this lifecycle representation interactively.
2.1 An Interactive Lifecycle Representation
The robot development process can be represented in various ways. Here, we separate it into four stages: design, simulation, learning, and application. In many robot learning approaches, robots are first trained in a simulation environment and then transferred to real robots using Sim2Real methods, so the simulation, learning, and application stages can be streamlined into a single workflow. However, the difference in file formats between the design and simulation stages makes it difficult to transfer information from robot design to simulation. To address this, we use a robot description format as a bridge that closes the gap between design and simulation (Fig. 3), allowing these stages to be connected seamlessly into a lifecycle representation of the robot development process.
For an XML-based robot description format, a text editor is a straightforward but unintuitive way to interact with the description file: creating or modifying it by hand is tedious, time-consuming, and error-prone. Since the robot description format contains information derived directly from the robot design, the graphical interface of CAD software can serve as a graphical editor for the format. With CAD software as the GUI, the robot description can be edited in a WYSIWYG (what you see is what you get) manner. The entire process can then be regarded as an interactive lifecycle representation of the robot.
![Fig. 3: An interactive lifecycle representation of the robot development process](https://arxiv.org/extracted/5305669/fig-Lifecycle-Representation.png)
2.2 Robotic Process Automation from Design to Simulation
CAD software and robot simulators are two systems with distinct functions, each emphasizing different aspects of the robot, yet several of their features are different forms of the same information. The way components are joined into a robot assembly in CAD software determines the robot's kinematics. The physical properties of components in CAD correspond to the dynamics parameters in the simulator, and the geometric shape of components supplies both the visualization and the collision geometry. Fig. 4 shows how this one-to-one relationship between the CAD and simulation systems enables automated conversion between the two, replacing what previously had to be done by hand.
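As a sketch of this one-to-one mapping (the property dictionary below is hypothetical and does not use actual Fusion 360 API calls), a converter can translate CAD-derived mass properties and mesh references directly into the corresponding URDF elements:

```python
import xml.etree.ElementTree as ET

# Hypothetical properties as a CAD exporter might report them for one
# component; real values would come from the CAD API (e.g. Fusion 360).
cad_link = {
    "name": "upper_arm",
    "mass": 1.25,                       # kg
    "com": (0.0, 0.0, 0.12),            # center of mass, m
    "inertia": {"ixx": 0.01, "iyy": 0.01, "izz": 0.002,
                "ixy": 0.0, "ixz": 0.0, "iyz": 0.0},
    "mesh": "meshes/upper_arm.stl",
}

def link_to_urdf(props: dict) -> ET.Element:
    """Map CAD-derived properties onto the matching URDF <link> tags."""
    link = ET.Element("link", name=props["name"])

    inertial = ET.SubElement(link, "inertial")
    ET.SubElement(inertial, "origin",
                  xyz=" ".join(map(str, props["com"])), rpy="0 0 0")
    ET.SubElement(inertial, "mass", value=str(props["mass"]))
    ET.SubElement(inertial, "inertia",
                  **{k: str(v) for k, v in props["inertia"].items()})

    # The same CAD geometry serves both visualization and collision.
    for tag in ("visual", "collision"):
        elem = ET.SubElement(link, tag)
        geom = ET.SubElement(elem, "geometry")
        ET.SubElement(geom, "mesh", filename=props["mesh"])
    return link

xml_text = ET.tostring(link_to_urdf(cad_link), encoding="unicode")
print(xml_text)
```

Because every field on the simulation side is computable from the design side, the conversion is purely mechanical, which is exactly what makes it a candidate for robotic process automation.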
![Fig. 4: The one-to-one mapping between robot design in CAD and robot simulation](https://arxiv.org/extracted/5305669/fig-Design-Sim-Mapping.png)
2.3 An Open-source Plug-in Using Fusion 360
We present an open-source plugin for Fusion 360 that achieves this interactive lifecycle process automation from robot design to robot learning. Fusion 360, developed by Autodesk, is a CAD package popular within the robotics community, and its API gives developers the access needed to accomplish automation tasks.
Following the work of J. Collins et al. [11], we selected a set of popular simulators used in robot learning, including RaiSim, Gazebo, Nvidia Isaac, MuJoCo, PyBullet, CARLA, Webots, and CoppeliaSim, and compared their compatibility with the robot description formats URDF, SDFormat, MJCF, and USD. Since we opted for Gazebo and PyBullet as our target simulation platforms, we chose URDF and SDFormat, according to Table 1, as the robot description formats for carrying the design into the learning process.
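The selection logic amounts to intersecting the formats each target simulator can load. The table below is an illustrative, simplified sketch, not a reproduction of Table 1:

```python
# Illustrative (and simplified) compatibility table: which description
# formats each simulator can load. A sketch only, not the paper's Table 1.
SUPPORTED = {
    "Gazebo":   {"SDFormat", "URDF"},
    "PyBullet": {"URDF", "SDFormat", "MJCF"},
    "MuJoCo":   {"MJCF", "URDF"},
}

def usable_formats(targets):
    """Formats loadable by every target simulator."""
    sets = [SUPPORTED[t] for t in targets]
    return sorted(set.intersection(*sets))

print(usable_formats(["Gazebo", "PyBullet"]))  # ['SDFormat', 'URDF']
```

With Gazebo and PyBullet as the targets, the intersection leaves SDFormat and URDF, matching the choice made above; adding a simulator to the target list can only shrink the set of usable formats.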