Advancements in Humanoid Robots: A Comprehensive Review and Future Prospects

News Feature | 28 May 2024 | Correction: 31 May 2024

The AI revolution is coming to robots: how will it change them?

By Elizabeth Gibney

Humanoid robots developed by the US company Figure use OpenAI programming for language and vision. Credit: AP Photo/Jae C. Hong/Alamy


For a generation of scientists raised watching Star Wars, there’s a disappointing lack of C-3PO-like droids wandering around our cities and homes. Where are the humanoid robots fuelled with common sense that can help around the house and workplace?

Rapid advances in artificial intelligence (AI) might be set to fill that hole. “I wouldn’t be surprised if we are the last generation for which those sci-fi scenes are not a reality,” says Alexander Khazatsky, a machine-learning and robotics researcher at Stanford University in California.

From OpenAI to Google DeepMind, almost every big technology firm with AI expertise is now working on bringing the versatile learning algorithms that power chatbots, known as foundation models, to robotics. The idea is to imbue robots with common-sense knowledge, letting them tackle a wide range of tasks. Many researchers think that robots could become really good, really fast. “We believe we are at the point of a step change in robotics,” says Gerard Andrews, a marketing manager focused on robotics at technology company Nvidia in Santa Clara, California, which in March launched a general-purpose AI model designed for humanoid robots.

At the same time, robots could help to improve AI. Many researchers hope that bringing an embodied experience to AI training could take them closer to the dream of ‘artificial general intelligence’ — AI that has human-like cognitive abilities across any task. “The last step to true intelligence has to be physical intelligence,” says Akshara Rai, an AI researcher at Meta in Menlo Park, California.

But although many researchers are excited about the latest injection of AI into robotics, they also caution that some of the more impressive demonstrations are just that — demonstrations, often by companies that are eager to generate buzz. It can be a long road from demonstration to deployment, says Rodney Brooks, a roboticist at the Massachusetts Institute of Technology in Cambridge, whose company iRobot invented the Roomba autonomous vacuum cleaner.

There are plenty of hurdles on this road, including scraping together enough of the right data for robots to learn from, dealing with temperamental hardware and tackling concerns about safety. Foundation models for robotics “should be explored”, says Harold Soh, a specialist in human–robot interactions at the National University of Singapore. But he is sceptical, he says, that this strategy will lead to the revolution in robotics that some researchers predict.

Firm foundations

The term robot covers a wide range of automated devices, from the robotic arms widely used in manufacturing, to self-driving cars and drones used in warfare and rescue missions. Most incorporate some sort of AI — to recognize objects, for example. But they are also programmed to carry out specific tasks, work in particular environments or rely on some level of human supervision, says Joyce Sidopoulos, co-founder of MassRobotics, an innovation hub for robotics companies in Boston, Massachusetts. Even Atlas — a robot made by Boston Dynamics, a robotics company in Waltham, Massachusetts, which famously showed off its parkour skills in 2018 — works by carefully mapping its environment and choosing the best actions to execute from a library of built-in templates.

For most AI researchers branching into robotics, the goal is to create something much more autonomous and adaptable across a wider range of circumstances. This might start with robot arms that can ‘pick and place’ any factory product, but evolve into humanoid robots that provide company and support for older people, for example. “There are so many applications,” says Sidopoulos.

The human form is complicated and not always optimized for specific physical tasks, but it has the huge benefit of being perfectly suited to the world that people have built. A human-shaped robot would be able to physically interact with the world in much the same way that a person does.

However, controlling any robot — let alone a human-shaped one — is incredibly hard. Apparently simple tasks, such as opening a door, are actually hugely complex, requiring a robot to understand how different door mechanisms work, how much force to apply to a handle and how to maintain balance while doing so. The real world is extremely varied and constantly changing.

The approach now gathering steam is to control a robot using the same type of AI foundation models that power image generators and chatbots such as ChatGPT. These models use brain-inspired neural networks to learn from huge swathes of generic data. They build associations between elements of their training data and, when asked for an output, tap these connections to generate appropriate words or images, often with uncannily good results.

Likewise, a robot foundation model is trained on text and images from the Internet, providing it with information about the nature of various objects and their contexts. It also learns from examples of robotic operations. It can be trained, for example, on videos of robot trial and error, or videos of robots that are being remotely operated by humans, alongside the instructions that pair with those actions. A trained robot foundation model can then observe a scenario and use its learnt associations to predict what action will lead to the best outcome.
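In code terms, the run-time loop is simple even though the model is not: observe, predict an action, act, repeat. The sketch below is a minimal illustration of that loop under stated assumptions; DummyPolicy and the frame capture are invented stand-ins, not any company's actual API.

    import random

    class DummyPolicy:
        """Stand-in for a trained robot foundation model (hypothetical)."""
        def predict_action(self, instruction, image):
            # A real model would fuse the language instruction with the
            # camera frame; here we return a random 6-D end-effector
            # displacement plus a gripper command (0.0 = keep closed).
            return [random.uniform(-0.01, 0.01) for _ in range(6)] + [0.0]

    def capture_frame():
        return [[0] * 224 for _ in range(224)]  # placeholder camera image

    policy = DummyPolicy()
    instruction = "move the drink can onto the picture"

    for step in range(100):                      # fixed step budget
        frame = capture_frame()                  # observe the scene
        action = policy.predict_action(instruction, frame)
        # a real system would now send `action` to the arm controller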

Google DeepMind has built one of the most advanced robotic foundation models, known as Robotic Transformer 2 (RT-2), that can operate a mobile robot arm built by its sister company Everyday Robots in Mountain View, California. Like other robotic foundation models, it was trained on both the Internet and videos of robotic operation. Thanks to the online training, RT-2 can follow instructions even when those commands go beyond what the robot has seen another robot do before [1]. For example, it can move a drink can onto a picture of Taylor Swift when asked to do so — even though Swift’s image was not in any of the 130,000 demonstrations that RT-2 had been trained on.

In other words, knowledge gleaned from Internet trawling (such as what the singer Taylor Swift looks like) is being carried over into the robot’s actions. “A lot of Internet concepts just transfer,” says Keerthana Gopalakrishnan, an AI and robotics researcher at Google DeepMind in San Francisco, California. This radically reduces the amount of physical data that a robot needs to have absorbed to cope in different situations, she says.

But to fully understand the basics of movements and their consequences, robots still need to learn from lots of physical data. And therein lies a problem.

Data dearth

Although chatbots are being trained on billions of words from the Internet, there is no equivalently large data set for robotic activity. This lack of data has left robotics “in the dust”, says Khazatsky.

Pooling data is one way around this. Khazatsky and his colleagues have created DROID [2], an open-source data set that brings together around 350 hours of video data from one type of robot arm (the Franka Panda 7DoF robot arm, built by Franka Robotics in Munich, Germany), as it was being remotely operated by people in 18 laboratories around the world. The robot-eye-view camera has recorded visual data in hundreds of environments, including bathrooms, laundry rooms, bedrooms and kitchens. This diversity helps robots to perform well on tasks with previously unencountered elements, says Khazatsky.
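A pooled data set like this is, at heart, a list of teleoperation episodes tagged with where and how they were recorded. The sketch below illustrates the idea with invented records; the field names are not DROID's actual schema.

    # Toy illustration of pooling teleoperation episodes from many labs.
    # The record layout is invented for the example, not DROID's schema.
    episodes = [
        {"lab": "lab_01", "scene": "kitchen",  "frames": 450},
        {"lab": "lab_02", "scene": "bathroom", "frames": 300},
        {"lab": "lab_03", "scene": "bedroom",  "frames": 600},
    ]

    # Scene diversity, not just raw volume, is what helps generalization:
    scenes = {ep["scene"] for ep in episodes}
    total = sum(ep["frames"] for ep in episodes)
    print(f"{len(episodes)} episodes, {len(scenes)} scenes, {total} frames")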

When prompted to ‘pick up extinct animal’, Google’s RT-2 model selects the dinosaur figurine from a crowded table. Credit: Google DeepMind

Gopalakrishnan is part of a collaboration of more than a dozen academic labs that is also bringing together robotic data, in its case from a diversity of robot forms, from single arms to quadrupeds. The collaborators’ theory is that learning about the physical world in one robot body should help an AI to operate another — in the same way that learning in English can help a language model to generate Chinese, because the underlying concepts about the world that the words describe are the same. This seems to work. The collaboration’s resulting foundation model, called RT-X, which was released in October 2023 [3], performed better on real-world tasks than did models the researchers trained on one robot architecture.

Many researchers say that having this kind of diversity is essential. “We believe that a true robotics foundation model should not be tied to only one embodiment,” says Peter Chen, an AI researcher and co-founder of Covariant, an AI firm in Emeryville, California.

Covariant is also working hard on scaling up robot data. The company, which was set up in part by former OpenAI researchers, began collecting data in 2018 from 30 variations of robot arms in warehouses across the world, which all run using Covariant software. Covariant’s Robotics Foundation Model 1 (RFM-1) goes beyond collecting video data to encompass sensor readings, such as how much weight was lifted or force applied. This kind of data should help a robot to perform tasks such as manipulating a squishy object, says Gopalakrishnan — in theory, helping a robot to know, for example, how not to bruise a banana.

Covariant has built up a proprietary database that includes hundreds of billions of ‘tokens’ — units of real-world robotic information — which Chen says is roughly on a par with the scale of data that trained GPT-3, the 2020 version of OpenAI's large language model. “We have way more real-world data than other people, because that’s what we have been focused on,” Chen says. RFM-1 is poised to roll out soon, says Chen, and should allow operators of robots running Covariant’s software to type or speak general instructions, such as “pick up apples from the bin”.
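Covariant has not published how RFM-1 tokenizes its data, but the general idea of turning mixed robot records into a single token stream can be sketched as follows; the vocabularies and bucket sizes here are invented for illustration.

    # Toy sketch of flattening mixed robot data (forces, payloads,
    # commands) into discrete tokens. The buckets and vocabularies are
    # invented; Covariant's actual tokenization is not public.
    def bucket(value, low, high, n_bins):
        """Discretize a continuous sensor reading into an integer token."""
        value = min(max(value, low), high)
        return int((value - low) / (high - low) * (n_bins - 1))

    step = {"gripper_force_n": 3.2, "payload_kg": 0.41, "command": "place"}
    tokens = [
        ("FORCE", bucket(step["gripper_force_n"], 0.0, 50.0, 256)),
        ("MASS",  bucket(step["payload_kg"], 0.0, 10.0, 256)),
        ("CMD",   step["command"]),
    ]
    print(tokens)  # [('FORCE', 16), ('MASS', 10), ('CMD', 'place')]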

Another way to access large databases of movement is to focus on a humanoid robot form so that an AI can learn by watching videos of people — of which there are billions online. Nvidia’s Project GR00T foundation model, for example, is ingesting videos of people performing tasks, says Andrews. Although copying humans has huge potential for boosting robot skills, doing so well is hard, says Gopalakrishnan. For example, robot videos generally come with data about context and commands — the same isn’t true for human videos, she says.

Virtual reality

A final and promising way to find limitless supplies of physical data, researchers say, is through simulation. Many roboticists are working on building 3D virtual-reality environments, the physics of which mimic the real world, and then wiring those up to a robotic brain for training. Simulators can churn out huge quantities of data and allow humans and robots to interact virtually, without risk, in rare or dangerous situations, all without wearing out the mechanics. “If you had to get a farm of robotic hands and exercise them until they achieve [a high] level of dexterity, you will blow the motors,” says Nvidia’s Andrews.

But making a good simulator is a difficult task. “Simulators have good physics, but not perfect physics, and making diverse simulated environments is almost as hard as just collecting diverse data,” says Khazatsky.

Meta and Nvidia are both betting big on simulation to scale up robot data, and have built sophisticated simulated worlds: Habitat from Meta and Isaac Sim from Nvidia. In them, robots gain the equivalent of years of experience in a few hours, and, in trials, they then successfully apply what they have learnt to situations they have never encountered in the real world. “Simulation is an extremely powerful but underrated tool in robotics, and I am excited to see it gaining momentum,” says Rai.
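One reason simulators generate such varied experience is domain randomization: every episode perturbs the physics and appearance of the scene, so the robot never trains on the same world twice. The sketch below shows the idea in miniature; it is a toy stand-in, not Habitat's or Isaac Sim's API.

    import random

    # Each simulated episode gets a freshly randomized scene, so hours of
    # compute can stand in for years of varied real-world experience.
    def random_scene():
        return {
            "friction": random.uniform(0.3, 1.0),   # vary contact physics
            "lighting": random.uniform(0.2, 1.0),   # vary appearance
            "objects":  random.sample(["cup", "book", "can", "bowl"], 2),
        }

    for episode in range(3):
        scene = random_scene()
        print(f"episode {episode}: {scene}")
        # reset the simulator with `scene`, roll out the policy, log data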

Many researchers are optimistic that foundation models will help to create general-purpose robots that can replace human labour. In February, Figure, a robotics company in Sunnyvale, California, raised US$675 million in investment for its plan to use language and vision models developed by OpenAI in its general-purpose humanoid robot. A demonstration video shows a robot giving a person an apple in response to a general request for ‘something to eat’. The video on X (the platform formerly known as Twitter) has racked up 4.8 million views.

Exactly how this robot’s foundation model has been trained, along with any details about its performance across various settings, is unclear (neither OpenAI nor Figure responded to Nature’s requests for an interview). Such demos should be taken with a pinch of salt, says Soh. The environment in the video is conspicuously sparse, he says. Adding a more complex environment could potentially confuse the robot — in the same way that such environments have fooled self-driving cars. “Roboticists are very sceptical of robot videos for good reason, because we make them and we know that out of 100 shots, there’s usually only one that works,” Soh says.

Hurdles ahead

As the AI research community forges ahead with robotic brains, many of those who actually build robots caution that the hardware also presents a challenge: robots are complicated and break a lot. Hardware has been advancing, Chen says, but “a lot of people looking at the promise of foundation models just don't know the other side of how difficult it is to deploy these types of robots”, he says.

Another issue is how far robot foundation models can get using the visual data that make up the vast majority of their physical training. Robots might need reams of other kinds of sensory data, for example from the sense of touch or proprioception — a sense of where their body is in space — says Soh. Those data sets don’t yet exist. “There’s all this stuff that’s missing, which I think is required for things like a humanoid to work efficiently in the world,” he says.

Releasing foundation models into the real world comes with another major challenge — safety. In the two years since they started proliferating, large language models have been shown to come up with false and biased information. They can also be tricked into doing things that they are programmed not to do, such as telling users how to make a bomb. Giving AI systems a body brings these types of mistake and threat to the physical world. “If a robot is wrong, it can actually physically harm you or break things or cause damage,” says Gopalakrishnan.

Valuable work going on in AI safety will transfer to the world of robotics, says Gopalakrishnan. In addition, her team has imbued some robot AI models with rules that layer on top of their learning, such as not to even attempt tasks that involve interacting with people, animals or other living organisms. “Until we have confidence in robots, we will need a lot of human supervision,” she says.

Despite the risks, there is a lot of momentum in using AI to improve robots — and using robots to improve AI. Gopalakrishnan thinks that hooking up AI brains to physical robots will improve the foundation models, for example giving them better spatial reasoning. Meta, says Rai, is among those pursuing the hypothesis that “true intelligence can only emerge when an agent can interact with its world”. That real-world interaction, some say, is what could take AI beyond learning patterns and making predictions, to truly understanding and reasoning about the world.

What the future holds depends on who you ask. Brooks says that robots will continue to improve and find new applications, but their eventual use “is nowhere near as sexy” as humanoids replacing human labour. But others think that developing a functional and safe humanoid robot that is capable of cooking dinner, running errands and folding the laundry is possible — but could just cost hundreds of millions of dollars. “I’m sure someone will do it,” says Khazatsky. “It’ll just be a lot of money, and time.”

Nature 630, 22–24 (2024)

doi: https://doi.org/10.1038/d41586-024-01442-5

Updates & Corrections

Correction 31 May 2024: An earlier version of this feature gave the wrong name for Nvidia’s simulated world.

1. Brohan, A. et al. Preprint at arXiv https://doi.org/10.48550/arXiv.2307.15818 (2023).

2. Khazatsky, A. et al. Preprint at arXiv https://doi.org/10.48550/arXiv.2403.12945 (2024).

3. Open X-Embodiment Collaboration et al. Preprint at arXiv https://doi.org/10.48550/arXiv.2310.08864 (2023).


The Uncanny Valley: The Original Essay by Masahiro Mori

“The Uncanny Valley” by Masahiro Mori is an influential essay in robotics. This is the first English translation authorized by Mori.

Photo: M. Mori

Translated by Karl F. MacDorman and Norri Kageki

Editor's note: More than 40 years ago, Masahiro Mori, then a robotics professor at the Tokyo Institute of Technology, wrote an essay on how he envisioned people's reactions to robots that looked and acted almost human. In particular, he hypothesized that a person's response to a humanlike robot would abruptly shift from empathy to revulsion as it approached, but failed to attain, a lifelike appearance. This descent into eeriness is known as the uncanny valley. The essay appeared in an obscure Japanese journal called Energy in 1970, and in subsequent years it received almost no attention. More recently, however, the concept of the uncanny valley has rapidly attracted interest in robotics and other scientific circles as well as in popular culture. Some researchers have explored its implications for human-robot interaction and computer-graphics animation, while others have investigated its biological and social roots. Now interest in the uncanny valley should only intensify, as technology evolves and researchers build robots that look increasingly human. Though copies of Mori's essay have circulated among researchers, a complete version hasn't been widely available. This is the first publication of an English translation that has been authorized and reviewed by Mori.

A version of this article originally appeared in the June 2012 issue of IEEE Robotics & Automation Magazine.

A Valley in One's Sense of Affinity

The mathematical term monotonically increasing function describes a relation in which the function y = f(x) increases continuously with the variable x. For example, as effort x grows, income y increases, or as a car's accelerator is pressed, the car moves faster. This kind of relation is ubiquitous and very easily understood. In fact, because such monotonically increasing functions cover most phenomena of everyday life, people may fall under the illusion that they represent all relations. Also attesting to this false impression is the fact that many people struggle through life by persistently pushing without understanding the effectiveness of pulling back. That is why people usually are puzzled when faced with some phenomenon this function cannot represent.

An example of a function that does not increase continuously is climbing a mountain — the relation between the distance (x) a hiker has traveled toward the summit and the hiker's altitude (y) — owing to the intervening hills and valleys. I have noticed that, in climbing toward the goal of making robots appear human, our affinity for them increases until we come to a valley (Figure 1), which I call the uncanny valley.
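Mori gives no formula for the curve, only its shape. Purely as an illustration, the toy function below reproduces that shape numerically: affinity climbs with human likeness, plunges sharply just short of full likeness, then recovers. The constants are invented to mimic Figure 1, not taken from the essay.

    import math

    # Invented toy curve with the shape of Mori's Figure 1: affinity rises
    # with human likeness x, dips into negative values near x = 0.85 (the
    # uncanny valley), then recovers as likeness approaches 1.0.
    def affinity(x):
        climb = x                                            # monotonic part
        valley = 1.4 * math.exp(-((x - 0.85) ** 2) / 0.003)  # sharp dip
        return climb - valley

    for x in [0.0, 0.3, 0.6, 0.8, 0.85, 0.9, 1.0]:
        print(f"human likeness {x:.2f} -> affinity {affinity(x):+.2f}")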

Nowadays, industrial robots are increasingly recognized as the driving force behind reductions in factory personnel. However, as is well known, these robots just extend, contract, and rotate their arms; without faces or legs, they do not look very human. Their design policy is clearly based on functionality. From this standpoint, the robots must perform functions similar to those of human factory workers, but whether they look similar does not matter. Thus, given their lack of resemblance to human beings, in general, people hardly feel any affinity for them.[1] If we plot the industrial robot on a graph of affinity versus human likeness, it lies near the origin in Figure 1.

By contrast, a toy robot's designer may focus more on the robot's appearance than its functions. Consequently, despite its being a sturdy mechanical figure, the robot will start to have a roughly human-looking external form with a face, two arms, two legs, and a torso. Children seem to feel deeply attached to these toy robots. Hence, the toy robot is shown halfway up the first hill in Figure 1.

Since creating an artificial human is itself one of the objectives of robotics, various efforts are underway to build humanlike robots.[2] For example, a robot's arm may be composed of a metal cylinder with many bolts, but by covering it with skin and adding a bit of fleshy plumpness, we can achieve a more humanlike appearance. As a result, we naturally respond to it with a heightened sense of affinity.

Many of our readers have experience interacting with persons with physical disabilities, and all must have felt sympathy for those missing a hand or leg and wearing a prosthetic limb. Recently, owing to great advances in fabrication technology, we cannot distinguish at a glance a prosthetic hand from a real one. Some models simulate wrinkles, veins, fingernails, and even fingerprints. Though similar to a real hand, the prosthetic hand's color is pinker, as if it had just come out of the bath.

One might say that the prosthetic hand has achieved a degree of resemblance to the human form, perhaps on a par with false teeth. However, when we realize the hand, which at first sight looked real, is in fact artificial, we experience an eerie sensation. For example, we could be startled during a handshake by its limp boneless grip together with its texture and coldness. When this happens, we lose our sense of affinity, and the hand becomes uncanny. In mathematical terms, this can be represented by a negative value. Therefore, in this case, the appearance of the prosthetic hand is quite humanlike, but the level of affinity is negative, thus placing the hand near the bottom of the valley in Figure 1. This example illustrates the uncanny valley phenomenon.

I don't think that, on close inspection, a bunraku puppet appears very similar to a human being. Its realism in terms of size, skin texture, and so on, does not even reach that of a realistic prosthetic hand. But when we enjoy a puppet show in the theater, we are seated at a certain distance from the stage. The puppet's absolute size is ignored, and its total appearance, including hand and eye movements, is close to that of a human being. So, given our tendency as an audience to become absorbed in this form of art, we might feel a high level of affinity for the puppet.

From the preceding discussion, the readers should be able to understand the concept of the uncanny valley. So now let us consider in more detail the relation between the uncanny valley and movement.

The Effect of Movement

Movement is fundamental to animals—including human beings—and thus to robots as well. Its presence changes the shape of the uncanny valley graph by amplifying the peaks and valleys, as shown in Figure 2. To illustrate, when an industrial robot is switched off, it is just a greasy machine. But once the robot is programmed to move its gripper like a human hand, we start to feel a certain level of affinity for it. (In this case, the velocity, acceleration, and deceleration must approximate human movement.) Conversely, when a prosthetic hand that is near the bottom of the uncanny valley starts to move, our sensation of eeriness intensifies.

Some readers may know that recent technology has enabled prosthetic hands to extend and contract their fingers automatically. The best commercially available model, shown in Figure 3, was developed in Vienna. To explain how it works, even if a person's forearm is missing, the intention to move the fingers produces a faint current in the arm muscles, which can be detected by an electromyogram. When the prosthetic hand detects the current by means of electrodes on the skin's surface, it amplifies the signal to activate a small motor that moves its fingers. Because this myoelectric hand makes movements, it could make healthy people feel uneasy. If someone wearing the hand in a dark place shook a woman's hand with it, the woman would assuredly shriek!
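The signal chain Mori describes (faint muscle current, skin electrodes, amplification, motor) amounts to a simple threshold controller. The sketch below illustrates it with invented gains and thresholds; no real prosthesis uses these exact numbers.

    # Sketch of the myoelectric pipeline: a faint muscle current is picked
    # up by electrodes on the skin, amplified, and used to switch a small
    # finger motor. Gains and thresholds are illustrative only.
    GAIN = 1000.0          # amplifier gain on the raw electrode signal
    CLOSE_THRESHOLD = 0.5  # amplified level that triggers finger flexion

    def motor_command(raw_emg_volts):
        amplified = raw_emg_volts * GAIN
        return "close_fingers" if amplified > CLOSE_THRESHOLD else "hold"

    for sample in [0.0001, 0.0004, 0.0009]:  # sub-millivolt EMG readings
        print(f"{sample:.4f} V -> {motor_command(sample)}")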

Since negative effects of movement are apparent even with a prosthetic hand, a whole robot would magnify the creepiness. And that is just one robot. Imagine a craftsman being awakened suddenly in the dead of night. He searches downstairs for something among a crowd of mannequins in his workshop. If the mannequins started to move, it would be like a horror story.

Movement-related effects could be observed at the 1970 World Exposition in Osaka, Japan. Plans for the event had prompted the construction of robots with some highly sophisticated designs. For example, one robot had 29 pairs of artificial muscles in the face (the same number as a human being) to make it smile in a humanlike fashion. According to the designer, a smile is a dynamic sequence of facial deformations, and the speed of the deformations is crucial. When the speed is cut in half in an attempt to make the robot bring up a smile more slowly, instead of looking happy, its expression turns creepy. This shows how, because of a variation in movement, something that has come to appear very close to human—like a robot, puppet, or prosthetic hand—could easily tumble down into the uncanny valley.

Escape by Design

We hope to design and build robots and prosthetic hands that will not fall into the uncanny valley. Thus, because of the risk inherent in trying to increase their degree of human likeness to scale the second peak, I recommend that designers instead take the first peak as their goal, which results in a moderate degree of human likeness and a considerable sense of affinity. In fact, I predict it is possible to create a safe level of affinity by deliberately pursuing a nonhuman design. I ask designers to ponder this. To illustrate the principle, consider eyeglasses. Eyeglasses do not resemble real eyeballs, but one could say that their design has created a charming pair of new eyes. So we should follow the same principle in designing prosthetic hands. In doing so, instead of pitiful looking realistic hands, stylish ones would likely become fashionable.

As another example, consider this model of a human hand created by a woodcarver who sculpts statues of Buddhas (Figure 4). The fingers bend freely at the joints. The hand lacks fingerprints, and it retains the natural color of the wood, but its roundness and beautiful curves do not elicit any eerie sensation. Perhaps this wooden hand could also serve as a reference for design.

An Explanation of the Uncanny

As healthy persons, we are represented at the crest of the second peak in Figure 2 (moving). Then when we die, we are, of course, unable to move; the body goes cold, and the face becomes pale. Therefore, our death can be regarded as a movement from the second peak (moving) to the bottom of the uncanny valley (still), as indicated by the arrow's path in Figure 2. We might be glad this arrow leads down into the still valley of the corpse and not the valley animated by the living dead!

I think this descent explains the secret lying deep beneath the uncanny valley. Why were we equipped with this eerie sensation? Is it essential for human beings? I have not yet considered these questions deeply, but I have no doubt it is an integral part of our instinct for self-preservation.[3]

We should begin to build an accurate map of the uncanny valley, so that through robotics research we can come to understand what makes us human. This map is also necessary to enable us to create—using nonhuman designs—devices to which people can relate comfortably.

1. However, industrial robots are considerably closer in appearance to humans than machinery in general, especially in their arms.

2. Others believe that the true appeal of robots is their potential to exceed and augment humans.

3. The sense of eeriness is probably a form of instinct that protects us from proximal, rather than distal, sources of danger. Proximal sources of danger are corpses, members of different species, and other entities we can closely approach. Distal sources of danger include windstorms and floods.

Images used with permission from M. Mori, “The Uncanny Valley,” Energy, vol. 7, no. 4, pp. 33–35, 1970 (in Japanese).

About the translators:

Karl F. MacDorman is an associate professor of human-computer interaction at the School of Informatics, Indiana University. His research interests include android science, machine learning, social robotics, and computational neuroscience.

Norri Kageki is a journalist who writes about robots. She is originally from Tokyo and currently lives in the San Francisco Bay Area. She is the publisher of GetRobo and also writes for various publications in the United States and Japan.


AI Robots and Humanoid AI: Review, Perspectives and Directions

Abstract: In the approximately century-long journey of robotics, humanoid robots made their debut around six decades ago. The rapid advancements in generative AI, large language models (LLMs), and large multimodal models (LMMs) have reignited interest in humanoids, steering them towards real-time, interactive, and multimodal designs and applications. This resurgence unveils boundless opportunities for AI robotics and novel applications, paving the way for automated, real-time and humane interactions with humanoid advisers, educators, medical professionals, caregivers, and receptionists. However, while current humanoid robots boast human-like appearances, they have yet to embody true humaneness, remaining distant from achieving human-like intelligence. In our comprehensive review, we delve into the intricate landscape of AI robotics and AI humanoid robots in particular, exploring the challenges, perspectives and directions in transitioning from human-looking to humane humanoids and fostering human-like robotics. This endeavour synergizes the advancements in LLMs, LMMs, generative AI, and human-level AI with humanoid robotics, omniverse, and decentralized AI, ushering in the era of AI humanoids and humanoid AI.


Human-Humanoid Interaction and Cooperation: a Review

  • Humanoid and Bipedal Robotics (E. Yoshida, Section Editor)
  • Published: 14 December 2021
  • Volume 2, pages 441–454 (2021)

Lorenzo Vianello, Luigi Penco, Waldez Gomes, Yang You, Salvatore Maria Anzalone, Pauline Maurice, Vincent Thomas & Serena Ivaldi

Purpose of Review

Humanoid robots are versatile platforms with the potential to assist humans in several domains, from education to healthcare, from entertainment to the factory of the future. To find their place into our daily life, where complex interactions and collaborations with humans are expected, their social and physical interaction skills need to be further improved.

Recent Findings

The hallmark of humanoids is their anthropomorphic shape, which facilitates the interaction but at the same time increases the expectations of the human in terms of advanced cooperation capabilities. Cooperation with humans requires an appropriate modeling and real-time estimation of the human state and intention. This information is required both at a high level by the cooperative decision-making policy and at a low level by the interaction controller that implements the physical interaction. Real-time constraints induce simplified models that limit the decision capabilities of the robot during cooperation.
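To make the two levels concrete: a real-time estimate of the human's state and intention feeds a high-level cooperative policy, whose decision is then realized by a low-level interaction controller. The sketch below is an invented illustration of that structure, not code from the paper.

    # Illustrative two-level structure: human-state estimation feeds a
    # high-level cooperative policy and a low-level interaction controller.
    # All classes and numbers are invented for the example.
    class HumanStateEstimator:
        def estimate(self, sensors):
            # A real system would fuse vision, force, and wearable sensors.
            return {"intent": "lift_together", "force_n": sensors["force_n"]}

    class CooperativePolicy:
        def decide(self, state):
            # High level: choose a cooperative behavior from the intent.
            return "support_load" if state["intent"] == "lift_together" else "wait"

    class InteractionController:
        def command(self, behavior, state):
            # Low level: turn the behavior into a compliant force setpoint.
            share = 0.5 if behavior == "support_load" else 0.0
            return {"support_force_n": share * state["force_n"]}

    estimator = HumanStateEstimator()
    state = estimator.estimate({"force_n": 12.0})
    behavior = CooperativePolicy().decide(state)
    print(InteractionController().command(behavior, state))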

In this article, we review the current achievements in the context of human-humanoid interaction and cooperation. We report on the cognitive and cooperation skills that the robot needs to help humans achieve their goals, and how these high-level skills translate into the robot’s low-level control commands. Finally, we report on the applications of humanoid robots as humans’ companions, co-workers, or avatars.



Kim W, Balatti P, Lamon E, Ajoudani A. Moca-man: A mobile and reconfigurable collaborative robot assistant for conjoined human-robot actions. 2020 IEEE International Conference on Robotics and Automation (ICRA); 2020. p. 10191–10197.

Yokoyama K, Handa H, Isozumi T, Fukase Y, Kaneko K, Kanehiro F, Kawai Y, Tomita F, Hirukawa H. Cooperative works by a human and a humanoid robot. 2003 IEEE International Conference on Robotics and Automation (Cat. No. 03CH37422); 2003. p. 2985–2991.

Kim W, Lorenzini M, Balatti P, Wu Y, Ajoudani A. Towards ergonomic control of collaborative effort in multi-human mobile-robot teams. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2019. p. 3005–3011.

Tirupachuri Y, Nava G, Ferigo D, Tagliapietra L, Latella C, Nori F, Pucci D. Towards partner-aware humanoid robot control under physical interactions. IntelliSys; 2019.

Bolotnikova A, Courtois S, Kheddar A. Autonomous initiation of human physical assistance by a humanoid. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN); 2020. p. 857–862.

Abi-Farrajl F, Henze B, Werner A, Panzirsch M, Ott C, Roa M A. Humanoid teleoperation using task-relevant haptic feedback. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2018. p. 5010–5017.

Ishiguro Y, Makabe T, Nagamatsu Y, Kojio Y, Kojima K, Sugai F, Kakiuchi Y, Okada K, Inaba M. Bilateral humanoid teleoperation system using whole-body exoskeleton cockpit TABLIS. IEEE Robot Autom Lett 2020;5(4):6419–6426.

Ishiguro Y, Kojima K, Sugai F, Nozawa S, Kakiuchi Y, Okada K, Inaba M. High speed whole body dynamic motion experiment with real time master-slave humanoid robot system. 2018 IEEE International Conference on Robotics and Automation (ICRA); 2018. p. 1–7. This paper proposes a whole body master-slave control technique for online teleoperation of a life-sized humanoid robot.

Villegas R, Yang J, Ceylan D, Lee H. Neural kinematic networks for unsupervised motion retargetting. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2018. p. 8639–8648.

Englsberger J, Werner A, Ott C, Henze B, Roa M A, Garofalo G, Burger R, Beyer A, Eiberger O, Schmid K, et al. Overview of the torque-controlled humanoid robot toro. 2014 IEEE-RAS International Conference on Humanoid Robots; 2014. p. 916–923.

Brygo A, Sarakoglou I, Garcia-Hernandez N, Tsagarakis N. Humanoid robot teleoperation with vibrotactile based balancing feedback. Haptics: Neuroscience, Devices, Modeling, and Applications. In: Auvray M and Duriez C, editors. Berlin: Springer; 2014. p. 266–275.

Download references

Funding

This work was supported by the European Union Horizon 2020 Research and Innovation Program under Grant Agreement No. 731540 (project AnDy), the European Research Council (ERC) under Grant Agreement No. 637972 (project ResiBots), the French Agency for Research under the ANR Grants No. ANR-18-CE33-0001 (project Flying Co-Worker) and ANR-20-CE33-0004 (project ROOIBOS), the ANR-FNS Grant No. ANR-19-CE19-0029 - FNS 200021E_189475/1 (project iReCheck), the CHIST-ERA grant HEAP (CHIST-ERA-17-ORMR-003), the Inria-DGA grant (“humanoïde résilient”), and the Inria “ADT” wbCub/wbTorque.

Author information

Authors and Affiliations

Inria, Loria, Université de Lorraine, CNRS, Nancy, F-54000, France

Lorenzo Vianello, Luigi Penco, Waldez Gomes, Yang You, Pauline Maurice, Vincent Thomas & Serena Ivaldi

CRAN, Nancy, F-54000, France

Lorenzo Vianello

Laboratoire CHArt, Université Paris 8, Paris, F-93200, France

Salvatore Maria Anzalone


Corresponding author

Correspondence to Serena Ivaldi.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Human and animal rights and informed consent

This article does not contain any studies with human or animal subjects performed by any of the authors.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

These authors contributed equally to this work.

This article is part of the Topical Collection on Humanoid and Bipedal Robotics


About this article

Vianello, L., Penco, L., Gomes, W. et al. Human-Humanoid Interaction and Cooperation: a Review. Curr Robot Rep 2, 441–454 (2021). https://doi.org/10.1007/s43154-021-00068-z


Accepted: 19 October 2021

Published: 14 December 2021

Issue Date: December 2021

DOI: https://doi.org/10.1007/s43154-021-00068-z


Keywords

  • Humanoid robots
  • Human-robot interaction
  • Cooperation

Science News

Reinforcement learning AI might bring humanoid robots to the real world

These robots that play soccer and navigate difficult terrain may be the future of AI


Two small, humanoid robots play soccer after being trained with reinforcement learning. The AI tool helps the robots to be more agile and resilient compared with traditional computer programming, according to a recent study.


By Matthew Hutson

May 24, 2024 at 9:15 am

ChatGPT and other AI tools are upending our digital lives, but our AI interactions are about to get physical. Humanoid robots trained with a particular type of AI to sense and react to their world could lend a hand in factories, space stations, nursing homes and beyond. Two recent papers in Science Robotics highlight how that type of AI — called reinforcement learning — could make such robots a reality.

“We’ve seen really wonderful progress in AI in the digital world with tools like GPT,” says Ilija Radosavovic, a computer scientist at the University of California, Berkeley. “But I think that AI in the physical world has the potential to be even more transformational.”

The state-of-the-art software that controls the movements of bipedal bots often uses what’s called model-based predictive control. It’s led to very sophisticated systems, such as the parkour-performing Atlas robot from Boston Dynamics. But these robot brains require a fair amount of human expertise to program, and they don’t adapt well to unfamiliar situations. Reinforcement learning, or RL, in which AI learns through trial and error to perform sequences of actions, may prove a better approach.

“We wanted to see how far we can push reinforcement learning in real robots,” says Tuomas Haarnoja, a computer scientist at Google DeepMind and coauthor of one of the Science Robotics papers. Haarnoja and colleagues chose to develop software for a 20-inch-tall toy robot called OP3, made by the company Robotis. The team wanted to teach OP3 not only to walk but also to play one-on-one soccer.

“Soccer is a nice environment to study general reinforcement learning,” says Guy Lever of Google DeepMind, a coauthor of the paper. It requires planning, agility, exploration, cooperation and competition.

The toy size of the robots “allowed us to iterate fast,” Haarnoja says, because larger robots are harder to operate and repair. And before deploying the machine learning software in the real robots — which can break when they fall over — the researchers trained it on virtual robots, a technique known as sim-to-real transfer.

Training of the virtual bots came in two stages. In the first stage, the team trained one AI using RL merely to get the virtual robot up from the ground, and another to score goals without falling over. As input, the AIs received data including the positions and movements of the robot’s joints and, from external cameras, the positions of everything else in the game. (In a recently posted preprint, the team created a version of the system that relies on the robot’s own vision.) The AIs had to output new joint positions. If they performed well, their internal parameters were updated to encourage more of the same behavior. In the second stage, the researchers trained an AI to imitate each of the first two AIs and to score against closely matched opponents (versions of itself).
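That last detail — nudging parameters toward whatever earned reward — is the heart of policy-gradient reinforcement learning. Here is a minimal sketch of the idea using a toy linear Gaussian policy; the dimensions, learning rate and policy form are illustrative stand-ins, not the actual DeepMind training code.

```python
# Toy REINFORCE-style update: sample noisy joint targets, then shift the
# policy parameters so actions that led to a high return become more likely.
import numpy as np

OBS_DIM, ACT_DIM, LR, SIGMA = 12, 6, 1e-3, 0.1
W = np.zeros((ACT_DIM, OBS_DIM))  # policy parameters: observation -> mean action

def act(obs):
    # Mean joint targets plus Gaussian exploration noise.
    return W @ obs + SIGMA * np.random.randn(ACT_DIM)

def update(obs, action, ret):
    # Score-function gradient of a fixed-variance Gaussian policy, scaled
    # by the return: good outcomes reinforce similar actions.
    global W
    W += LR * ret * np.outer((action - W @ obs) / SIGMA**2, obs)
```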

To prepare the control software, called a controller, for the real-world robots, the researchers varied aspects of the simulation, including friction, sensor delays and body-mass distribution. They also rewarded the AI not just for scoring goals but also for other things, like minimizing knee torque to avoid injury.
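That per-episode variation of the simulator — commonly called domain randomization — is simple to express in code. A hedged sketch follows; the parameter ranges and the `train_step` stub are assumptions for illustration only.

```python
# Each training episode draws a slightly different physics configuration,
# so the learned controller cannot overfit to any single simulator setup.
import random

def randomized_sim_params():
    return {
        "friction":       random.uniform(0.4, 1.2),   # ground contact
        "sensor_delay_s": random.uniform(0.0, 0.04),  # stale observations
        "mass_scale":     random.uniform(0.9, 1.1),   # body-mass distribution
    }

def train_step(params):
    """Stand-in for one simulated rollout plus a policy update."""
    pass

for episode in range(10_000):
    train_step(randomized_sim_params())
```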

Real robots tested with the RL control software walked nearly twice as fast, turned three times as quickly and took less than half the time to get up compared with robots using the scripted controller made by the manufacturer. But more advanced skills also emerged, like fluidly stringing together actions. “It was really nice to see more complex motor skills being learned by robots,” says Radosavovic, who was not a part of the research. And the controller learned not just single moves, but also the planning required to play the game, like knowing to stand in the way of an opponent’s shot.

“In my eyes, the soccer paper is amazing,” says Joonho Lee, a roboticist at ETH Zurich. “We’ve never seen such resilience from humanoids.”

But what about human-sized humanoids? In the other recent paper, Radosavovic worked with colleagues to train a controller for a larger humanoid robot. This one, Digit from Agility Robotics, stands about five feet tall and has knees that bend backward like an ostrich. The team’s approach was similar to Google DeepMind’s. Both teams used computer brains known as neural networks, but Radosavovic used a specialized type called a transformer, the kind common in large language models like those powering ChatGPT.

Instead of taking in words and outputting more words, the model took in 16 observation-action pairs — what the robot had sensed and done for the previous 16 snapshots of time, covering roughly a third of a second — and output its next action. To make learning easier, it first learned based on observations of its actual joint positions and velocity, before using observations with added noise, a more realistic task. To further enable sim-to-real transfer, the researchers slightly randomized aspects of the virtual robot’s body and created a variety of virtual terrain, including slopes, trip-inducing cables and bubble wrap.
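In code, that rolling context is just a fixed-length window of recent observation-action pairs flattened into a single model input. The sketch below assumes made-up dimensions and a generic `model` callable rather than the Berkeley team’s actual transformer.

```python
# Keep the last 16 (observation, action) snapshots; flatten them into the
# model input; read off the next action.
from collections import deque
import numpy as np

H, OBS_DIM, ACT_DIM = 16, 36, 12
history = deque([(np.zeros(OBS_DIM), np.zeros(ACT_DIM))] * H, maxlen=H)

def next_action(model, obs, last_action):
    history.append((obs, last_action))   # the oldest snapshot falls off
    context = np.concatenate([np.concatenate(pair) for pair in history])
    return model(context)                # next ACT_DIM joint targets
```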

After training in the digital world, the controller operated a real robot for a full week of tests outside — preventing the robot from falling over even a single time. And in the lab, the robot resisted external forces like having an inflatable exercise ball thrown at it. The controller also outperformed the non-machine-learning controller from the manufacturer, easily traversing an array of planks on the ground. And whereas the default controller got stuck attempting to climb a step, the RL one managed to figure it out, even though it hadn’t seen steps during training.

Reinforcement learning for four-legged locomotion has become popular in the last few years, and these studies show the same techniques now working for two-legged robots. “These papers are either at-par or have pushed beyond manually defined controllers — a tipping point,” says Pulkit Agrawal, a computer scientist at MIT. “With the power of data, it will be possible to unlock many more capabilities in a relatively short period of time.” 

And the papers’ approaches are likely complementary. Future AI robots may need the robustness of Berkeley’s system and the dexterity of Google DeepMind’s. Real-world soccer incorporates both. According to Lever, soccer “has been a grand challenge for robotics and AI for quite some time.”



IEEE RAS Technical Committee for Humanoid Robotics

Humanoid robotics is an emerging and challenging research field that has received significant attention in recent years and will continue to play a central role in robotics research and in many applications of the 21st century. Regardless of the application area, one of the common problems tackled in humanoid robotics is understanding human-like information processing and the underlying mechanisms the human brain uses to deal with the real world.

Ambitious goals have been set for future humanoid robotics. Humanoids are expected to serve as companions and assistants for humans in daily life and as ultimate helpers in man-made and natural disasters. By 2050, a team of humanoid robot soccer players is expected to win against the winner of the most recent World Cup. DARPA recently announced its next Grand Challenge in robotics: building robots that can do things the way humans do, in a world made for humans.

Considerable progress has been made in humanoid research, resulting in a number of humanoid robots able to move and perform well-designed tasks. Over the past decade, an encouraging spectrum of science and technology has emerged that is leading to highly advanced humanoid mechatronic systems endowed with rich and complex sensorimotor capabilities. Of major importance for advances in the field is the availability of reproducible humanoid robot systems, which have been used in recent years as common hardware and software platforms supporting humanoid research. Many technical innovations and remarkable results from universities, research institutions and companies are visible.

The major activities of the TC are reflected by the firmly established annual IEEE-RAS International Conference on Humanoid Robots, which is the internationally recognized prime event of the humanoid robotics community. The conference is sponsored by the IEEE Robotics and Automation Society. The level of interest in humanoid robotics research continues to grow, as evidenced by the increasing number of papers submitted to this conference. For more information, please visit the official website of the Humanoids TC: http://www.humanoid-robotics.org


The WIRED Guide to Robots

Modern robots are not unlike toddlers: It’s hilarious to watch them fall over, but deep down we know that if we laugh too hard, they might develop a complex and grow up to start World War III. None of humanity’s creations inspires such a confusing mix of awe, admiration, and fear: We want robots to make our lives easier and safer, yet we can’t quite bring ourselves to trust them. We’re crafting them in our own image, yet we are terrified they’ll supplant us.

But that trepidation is no obstacle to the booming field of robotics. Robots have finally grown smart enough and physically capable enough to make their way out of factories and labs to walk and roll and even leap among us. The machines have arrived.

You may be worried a robot is going to steal your job, and we get that. This is capitalism, after all, and automation is inevitable. But you may be more likely to work alongside a robot in the near future than have one replace you. And even better news: You’re more likely to make friends with a robot than have one murder you. Hooray for the future!

The Complete History And Future of Robots

The definition of “robot” has been confusing from the very beginning. The word first appeared in 1921, in Karel Capek’s play R.U.R., or Rossum's Universal Robots. “Robot” comes from the Czech for “forced labor.” These robots were robots more in spirit than form, though. They looked like humans, and instead of being made of metal, they were made of chemical batter. The robots were far more efficient than their human counterparts, and also way more murder-y—they ended up going on a killing spree.

R.U.R. would establish the trope of the Not-to-Be-Trusted Machine (e.g., Terminator, The Stepford Wives, Blade Runner, etc.) that continues to this day—which is not to say pop culture hasn’t embraced friendlier robots. Think Rosie from The Jetsons. (Ornery, sure, but certainly not homicidal.) And it doesn’t get much family-friendlier than Robin Williams as Bicentennial Man.

The real-world definition of “robot” is just as slippery as those fictional depictions. Ask 10 roboticists and you’ll get 10 answers—how autonomous does it need to be, for instance. But they do agree on some general guidelines: A robot is an intelligent, physically embodied machine. A robot can perform tasks autonomously to some degree. And a robot can sense and manipulate its environment.

Think of a simple drone that you pilot around. That’s no robot. But give a drone the power to take off and land on its own and sense objects and suddenly it’s a lot more robot-ish. It’s the intelligence and sensing and autonomy that’s key.

But it wasn’t until the 1960s that a company built something that started meeting those guidelines. That’s when SRI International in Silicon Valley developed Shakey, the first truly mobile and perceptive robot. This tower on wheels was well-named—awkward, slow, twitchy. Equipped with a camera and bump sensors, Shakey could navigate a complex environment. It wasn’t a particularly confident-looking machine, but it was the beginning of the robotic revolution.

Around the time Shakey was trembling about, robot arms were beginning to transform manufacturing. The first among them was Unimate, which welded auto bodies. Today, its descendants rule car factories, performing tedious, dangerous tasks with far more precision and speed than any human could muster. Even though they’re stuck in place, they still very much fit our definition of a robot—they’re intelligent machines that sense and manipulate their environment.

Robots, though, remained largely confined to factories and labs, where they either rolled about or were stuck in place lifting objects. Then, in the mid-1980s, Honda started a humanoid robotics program. It developed P3, which could walk pretty darn well and also wave and shake hands, much to the delight of a roomful of suits. The work would culminate in Asimo, the famed biped, which once tried to take out President Obama with a well-kicked soccer ball. (OK, perhaps it was more innocent than that.)

Today, advanced robots are popping up everywhere. For that you can thank three technologies in particular: sensors, actuators, and AI.

So, sensors. Machines that roll on sidewalks to deliver falafel can only navigate our world thanks in large part to the 2004 Darpa Grand Challenge, in which teams of roboticists cobbled together self-driving cars to race through the desert. Their secret? Lidar, which shoots out lasers to build a 3-D map of the world. The ensuing private-sector race to develop self-driving cars has dramatically driven down the price of lidar, to the point that engineers can create perceptive robots on the (relative) cheap.
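To see what lidar actually hands a robot, consider the core conversion in miniature: each laser return is a range measured at a known beam angle, which becomes a point in the sensor frame. The 2-D toy below is illustrative; real stacks work in 3-D and also correct for the sensor’s own motion.

```python
# Convert one lidar sweep (ranges at evenly spaced beam angles) into
# (x, y) points that can be accumulated into a map.
import math

def scan_to_points(ranges, angle_min, angle_step):
    return [
        (r * math.cos(angle_min + i * angle_step),
         r * math.sin(angle_min + i * angle_step))
        for i, r in enumerate(ranges)
    ]

# Example: a 180-degree sweep of 181 beams, all seeing a wall 2 m away.
points = scan_to_points([2.0] * 181, -math.pi / 2, math.pi / 180)
```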


Lidar is often combined with something called machine vision—2-D or 3-D cameras that allow the robot to build an even better picture of its world. You know how Facebook automatically recognizes your mug and tags you in pictures? Same principle with robots. Fancy algorithms allow them to pick out certain landmarks or objects.

Sensors are what keep robots from smashing into things. They’re why a robot mule of sorts can keep an eye on you, following you and schlepping your stuff around; machine vision also allows robots to scan cherry trees to determine where best to shake them, helping fill massive labor gaps in agriculture.

New technologies promise to let robots sense the world in ways that are far beyond humans’ capabilities. We’re talking about seeing around corners: At MIT, researchers have developed a system that watches the floor at the corner of, say, a hallway, and picks out subtle movements being reflected from the other side that the piddling human eye can’t see. Such technology could one day ensure that robots don’t crash into humans in labyrinthine buildings, and even allow self-driving cars to see occluded scenes.

Within each of these robots is the next secret ingredient: the actuator, which is a fancy word for the combo electric motor and gearbox that you’ll find in a robot’s joint. It’s this actuator that determines how strong a robot is and how smoothly or not smoothly it moves. Without actuators, robots would crumple like rag dolls. Even relatively simple robots like Roombas owe their existence to actuators. Self-driving cars, too, are loaded with the things.

Actuators are great for powering massive robot arms on a car assembly line, but a newish field, known as soft robotics, is devoted to creating actuators that operate on a whole new level. Unlike mule robots, soft robots are generally squishy, and use air or oil to get themselves moving. So for instance, one particular kind of robot muscle uses electrodes to squeeze a pouch of oil, expanding and contracting to tug on weights. Unlike with bulky traditional actuators, you could stack a bunch of these to magnify the strength: A robot named Kengoro, for instance, moves with 116 actuators that tug on cables, allowing the machine to do unsettlingly human maneuvers like pushups. It’s a far more natural-looking form of movement than what you’d get with traditional electric motors housed in the joints.

And then there’s Boston Dynamics, which created the Atlas humanoid robot for the Darpa Robotics Challenge in 2013. At first, university robotics research teams struggled to get the machine to tackle the basic tasks of the original 2013 challenge and the finals round in 2015, like turning valves and opening doors. But Boston Dynamics has since turned Atlas into a marvel that can do backflips, far outpacing other bipeds that still have a hard time walking. (Unlike the Terminator, though, it does not pack heat.) Boston Dynamics has also begun leasing a quadruped robot called Spot, which can recover in unsettling fashion when humans kick or tug on it. That kind of stability will be key if we want to build a world where we don’t spend all our time helping robots out of jams. And it’s all thanks to the humble actuator.

At the same time that robots like Atlas and Spot are getting more physically robust, they’re getting smarter, thanks to AI. Robotics seems to be reaching an inflection point, where processing power and artificial intelligence are combining to truly ensmarten the machines. And for the machines, just as in humans, the senses and intelligence are inseparable—if you pick up a fake apple and don’t realize it’s plastic before shoving it in your mouth, you’re not very smart.

This is a fascinating frontier in robotics (replicating the sense of touch, not eating fake apples). A company called SynTouch, for instance, has developed robotic fingertips that can detect a range of sensations, from temperature to coarseness. Another robot fingertip from Columbia University replicates touch with light, so in a sense it sees touch: It’s embedded with 32 photodiodes and 30 LEDs, overlaid with a skin of silicone. When that skin is deformed, the photodiodes detect how light from the LEDs changes to pinpoint where exactly you touched the fingertip, and how hard.
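A toy version of that seeing-touch idea: weight each photodiode’s known position by how much its reading changed, and the weighted centroid approximates where the skin was pressed, while the total change serves as a rough force signal. The sensor layout and calibration below are invented for illustration; the real fingertip relies on a carefully calibrated or learned model.

```python
# Estimate contact location and intensity from photodiode readings.
import numpy as np

N_DIODES = 32
diode_xy = np.random.default_rng(0).uniform(-1.0, 1.0, (N_DIODES, 2))
baseline = np.ones(N_DIODES)            # readings with no contact

def locate_contact(reading):
    delta = np.abs(np.asarray(reading) - baseline)  # per-diode light change
    if delta.sum() < 1e-6:
        return None, 0.0                # no detectable touch
    centroid = (delta[:, None] * diode_xy).sum(axis=0) / delta.sum()
    return centroid, float(delta.sum())  # position estimate, force proxy
```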

Far from the hulking dullards that lift car doors on automotive assembly lines, the robots of tomorrow will be very sensitive indeed.


Increasingly sophisticated machines may populate our world, but for robots to be really useful, they’ll have to become more self-sufficient. After all, it would be impossible to program a home robot with the instructions for gripping each and every object it ever might encounter. You want it to learn on its own, and that is where advances in artificial intelligence come in.

Take Brett. In a UC Berkeley lab, the humanoid robot has taught itself to conquer one of those children’s puzzles where you cram pegs into different shaped holes. It did so by trial and error through a process called reinforcement learning. No one told it how to get a square peg into a square hole, just that it needed to. So by making random movements and getting a digital reward (basically, yes, do that kind of thing again) each time it got closer to success, Brett learned something new on its own. The process is super slow, sure, but with time roboticists will hone the machines’ ability to teach themselves novel skills in novel environments, which is pivotal if we don’t want to get stuck babysitting them.
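That reward-for-getting-closer signal is what RL practitioners call a shaped reward. A minimal sketch, with the bonus value made up for illustration:

```python
# Dense reward that grows as the peg nears the hole, plus a sparse bonus
# on insertion — even random motions that reduce the distance get
# reinforced long before the first full success.
import math

def shaped_reward(peg_xyz, hole_xyz, inserted):
    reward = -math.dist(peg_xyz, hole_xyz)
    if inserted:
        reward += 10.0
    return reward
```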

Another tack here is to have a digital version of a robot train first in simulation, then port what it has learned to the physical robot in a lab. Over at Google, researchers used motion-capture videos of dogs to program a simulated dog, then used reinforcement learning to get a simulated four-legged robot to teach itself to make the same movements. That is, even though both have four legs, the robot’s body is mechanically distinct from a dog’s, so they move in distinct ways. But after many random movements, the simulated robot got enough rewards to match the simulated dog. Then the researchers transferred that knowledge to the real robot in the lab, and sure enough, the thing could walk—in fact, it walked even faster than the robot manufacturer’s default gait, though in fairness it was less stable.
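The rewards in that setup can be read as an imitation objective: at every timestep the virtual robot is scored on how closely its pose tracks the reference motion derived from the dog. A hedged sketch of such a tracking reward follows; the exponential form and scale are common choices in motion-imitation work, not necessarily Google’s exact function.

```python
# Reward decays from 1.0 (perfect tracking of the reference pose) toward
# 0.0 as the squared joint-angle error grows.
import numpy as np

def imitation_reward(robot_joints, reference_joints, scale=2.0):
    err = np.sum((np.asarray(robot_joints) - np.asarray(reference_joints)) ** 2)
    return float(np.exp(-scale * err))
```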


They may be getting smarter day by day, but for the near future we are going to have to babysit the robots. As advanced as they’ve become, they still struggle to navigate our world. They plunge into fountains, for instance. So the solution, at least for the short term, is to set up call centers where robots can phone humans to help them out in a pinch. For example, Tug the hospital robot can call for help if it’s roaming the halls at night and there’s no human around to move a cart blocking its path. The operator would then teleoperate the robot around the obstruction.

Speaking of hospital robots. When the coronavirus crisis took hold in early 2020, a group of roboticists saw an opportunity: Robots are the perfect coworkers in a pandemic. Engineers must use the crisis, they argued in an editorial, to supercharge the development of medical robots, which never get sick and can do the dull, dirty, and dangerous work that puts human medical workers in harm’s way. Robot helpers could take patients’ temperatures and deliver drugs, for instance. This would free up human doctors and nurses to do what they do best: problem-solving and being empathetic with patients, skills that robots may never be able to replicate.

The rapidly developing relationship between humans and robots is so complex that it has spawned its own field, known as human-robot interaction. The overarching challenge is this: It’s easy enough to adapt robots to get along with humans—make them soft and give them a sense of touch—but it’s another issue entirely to train humans to get along with the machines. With Tug the hospital robot, for example, doctors and nurses learn to treat it like a grandparent—get the hell out of its way and help it get unstuck if you have to. We also have to manage our expectations: Robots like Atlas may seem advanced, but they’re far from the autonomous wonders you might think.

What humanity has done is essentially invented a new species, and now we’re maybe having a little buyer’s remorse. Namely, what if the robots steal all our jobs? Not even white-collar workers are safe from hyper-intelligent AI, after all.

A lot of smart people are thinking about the singularity, when the machines grow advanced enough to make humanity obsolete. That will result in a massive societal realignment and species-wide existential crisis. What will we do if we no longer have to work? How does income inequality look anything other than exponentially more dire as industries replace people with machines?

These seem like far-out problems, but now is the time to start pondering them. Which you might consider an upside to the killer-robot narrative that Hollywood has fed us all these years: The machines may be limited at the moment, but we as a society need to think seriously about how much power we want to cede. Take San Francisco, for instance, which is exploring the idea of a robot tax, which would force companies to pay up when they displace human workers.

I can’t sit here and promise you that the robots won’t one day turn us all into batteries, but the more realistic scenario is that, unlike in the world of R.U.R., humans and robots are poised to live in harmony—because it’s already happening. This is the idea of multiplicity, that you’re more likely to work alongside a robot than be replaced by one. If your car has adaptive cruise control, you’re already doing this, letting the robot handle the boring highway work while you take over for the complexity of city driving. The fact that the US economy ground to a standstill during the coronavirus pandemic made it abundantly clear that robots are nowhere near ready to replace humans en masse.

The machines promise to change virtually every aspect of human life, from health care to transportation to work. Should they help us drive? Absolutely. (They will, though, have to make the decision to sometimes kill, but the benefits of precision driving far outweigh the risks.) Should they replace nurses and cops? Maybe not—certain jobs may always require a human touch.

One thing is abundantly clear: The machines have arrived. Now we have to figure out how to handle the responsibility of having invented a whole new species.

The Complete History And Future of Robots

If You Want a Robot to Learn Better, Be a Jerk to It
A good way to make a robot learn is to do the work in simulation, so the machine doesn’t accidentally hurt itself. Even better, you can give it tough love by trying to knock objects out of its hand.

Spot the Robot Dog Trots Into the Big, Bad World
Boston Dynamics' creation is starting to sniff out its role in the workforce: as a helpful canine that still sometimes needs you to hold its paw.

Finally, a Robot That Moves Kind of Like a Tongue
Octopus arms and elephant trunks and human tongues move in a fascinating way, which has now inspired a fascinating new kind of robot.

Robots Are Fueling the Quiet Ascendance of the Electric Motor
For something born over a century ago, the electric motor really hasn’t fully extended its wings. The problem? Fossil fuels are just too easy, and for the time being, cheap. But now, it’s actually robots, with their actuators, that are fueling the secret ascendance of the electric motor.

This Robot Fish Powers Itself With Fake Blood
A robot lionfish uses a rudimentary vasculature and “blood” to both energize itself and hydraulically power its fins.

Inside the Amazon Warehouse Where Humans and Machines Become One
In an Amazon sorting center, a swarm of robots works alongside humans. Here’s what that says about Amazon—and the future of work.

This guide was last updated on April 13, 2020.



Featured Article

Industries may be ready for humanoid robots, but are the robots ready for them?

Executives from Boston Dynamics, Agility, Neura and Apptronik discuss the state of the industry.


You could easily walk the entire Automate floor without spotting a single humanoid. There was a grand total of three, by my count — or, rather, three units of the same nonworking prototype. Neura was showing off its long-promised 4NE-1 robot, amid more traditional form factors. There was a little photo setup where you could snap a selfie with the bot, and that was about it.

Notably absent at the annual Association for Advancing Automation (A3) show was an Agility booth. The Oregon company made a big showing at last year’s event, with a small army of Digits moving bins from a tote wall to a conveyor belt a few feet away. It wasn’t a complex demo, but the mere sight of those bipedal robots working in tandem was still a showstopper.

Agility chief product officer Melonee Wise told me that the company had opted to sit this one out, as it currently has all the orders it can manage. And that’s really what these trade shows are about: manufacturers and logistics companies shopping around for the next technological leg up to remain competitive.

How large a role humanoids will play in that ecosystem is, perhaps, the biggest question on everyone’s mind at the moment. Amid the biggest robotics hype cycle I’ve witnessed firsthand, many are left scratching their heads. After all, the notion of a “general purpose” humanoid robot flies in the face of decades’ worth of orthodoxy. The notion of the everything robot has been a fixture of science fiction for the better part of a century, but the reality has been one of single-purpose systems designed to do one job well.

Agility’s Digit at this year’s Modex conference

While there wasn’t much of a physical presence, the subject of humanoids loomed large at the event. As such, A3 asked me to moderate a panel on the subject. I admit I initially balked at the idea of an hourlong panel. After all, the ones we do at Disrupt tend to run 20 to 25 minutes. By the end of the conversation, however, it was clear we easily could have filled another hour.

That was due, in part, to the fact that the panel was — as one LinkedIn commenter put it — “stacked.” Along with Wise, I was joined by Boston Dynamics CTO Aaron Saunders, Apptronik CEO Jeff Cardenas and Neura CEO David Reger. I kicked the panel off by asking the audience how many in attendance would consider themselves skeptical about the humanoid form factor. Roughly three-quarters of the people present raised their hands, which is more or less what I’d anticipate at this stage in the process.

As for A3, I would say it has entered the cautiously optimistic phase. In addition to hosting a panel on the subject at Automate, the organization is holding a Humanoid Robot Forum in Memphis this October. The move echoes the 2019 launch of A3’s Autonomous Mobile Robot (AMR) Forum, which presaged the explosive growth in warehouse robotics during the pandemic.

Investors are less measured in their optimism.


“A year after we laid our initial expectations for global humanoid robot [total addressable market] of $6bn, we raise our 2035 TAM forecast to $38bn resulting from a 4-fold increase in our shipments estimate to 1.4mn units with a much faster path to profitability on a 40% reduction in bill of materials,” Goldman Sachs researcher Jacqueline Du wrote in a report published in February. “We believe our revised shipment estimate would cover 10%-15% of hazardous, dangerous and auto manufacturing roles.”

There are, however, plenty of reasons to be skeptical. Hype cycles are hard to navigate when you’re in the middle of them. The amount of money currently changing hands (see: Figure’s most recent raise of $675 million) gives one pause in the wake of various startup collapses across other fields. It also comes during a time when robotics investments have slowed after a few white-hot years.

One of the biggest risks at this stage is the overpromise. Every piece of new technology runs this risk, but something like a humanoid robot is a lightning rod for this stuff. Much like how eVTOL proponents see the technology as finally delivering on the promise of flying cars, the concept of a personal robot servant looks within reach.

The fact that these robots look like us leads many to believe they can — or soon will be able to — do the same things as us. Elon Musk’s promise of a robot that works in the Tesla factory all day and then comes home to make you dinner added fuel to that fire. Tempering expectations isn’t really Musk’s thing, you know? Others, meanwhile, have tossed around the notion of a general intelligence for humanoid robots — a thing that is a ways off (“five to 10 years” is a time frame I often hear bandied about).

essay on humanoid robots

“I think we need to be careful about the hype cycles, because we ultimately need to deliver the promise and potential,” Cardenas said. “We’ve been through this before, with the DARPA Robotics Challenge, where there’s a lot of excitement going into it, and we crashed into reality coming out of that.”

One source of disconnect is the question of what these systems can deliver today. The answer is murky, partly because of the nature of partnership announcements. Agility announced it was working with Amazon, Apptronik with Mercedes, Figure with BMW and Sanctuary AI with Magna. But every partnership so far needs to be taken for what it is: a pilot. The precise number of robots deployed in any specific partnership is never disclosed, and the figure is often single digits. It makes perfect sense: These are all operating factories/warehouses. It would be wildly disruptive to just slot in a new technology at scale and hope for the best.

Pilots are important for this reason, but they should not be mistaken for market fit. As of this writing, Agility is the only one of the bunch that has confirmed with TechCrunch that it’s ready for the next step. On the discussion panel, Wise confirmed that Agility will be announcing specifics in June. Cardenas, meanwhile, said that the company plans to pilot heavily in the “back half” of 2024 and to move beyond pilots early next year.

Neura and Boston Dynamics are simply too early stage for the conversation. Neura promised to show off demos at some point in July, moving 4NE-1 beyond what has until now been a series of rendered videos, coupled with the nonfunctioning units shown at Automate.

As for when we’ll see more of the electric Atlas beyond a 30-second video, Saunders says, “[the video] is just meant to be an early peek. We’re planning on getting into the pilot and some of the more pragmatic pieces next year. So far, we’re focused mainly on building up the focus and technology. There are a lot of hard problems left to solve in the manipulation and the AI spaces. Our team is working on it right now, and I think as those features get more robust, we’ll have more to show off.”


Boston Dynamics isn’t starting from scratch, of course. After more than a decade of Atlas, the company has as much humanoid expertise as any, while the launches of Spot and Stretch have taught the firm plenty about commercializing products after decades of research.

So, why did it take so long to see the company’s swing at the commercial humanoid category? “We wanted to make sure that we understood where the value is placed,” Saunders said. “It’s really easy to make demo videos and show cool things, but it takes a long time to find ROI [return on investment] cases that justify the human form.”

Neura has easily the most diverse portfolio of the companies present onstage. In fact, one gets the sense that whenever the company is finally ready to launch a humanoid in earnest, it will be just another form factor in the company’s portfolio, rather than the driving force. Meanwhile, when the electric Atlas eventually launches, it will be Boston Dynamics’ third commercially available product.

As Digit is Agility’s only offering at the moment, the company is wholly committed to the bipedal humanoid form factor. For its part, Apptronik splits the difference. The Austin-based firm has been taking a best-tool-for-the-job approach to the form factor. If, for example, legs aren’t needed for a specific environment, the company can mount the upper half of its robot onto a wheeled base.

Tesla's Optimus bot prototype

“I think at the end of the day, it’s about solving problems,” Cardenas said. “There are places where you don’t need a bipedal robot. My view is that bipedal form factors will win the day, but the question is how do you actually get them out there?”

Not every terrain requires legs. Earlier this week, Diligent Robotics co-founder and CEO Andrea Thomaz told me that part of the reason her company targeted healthcare first is the prevalence of ADA (Americans with Disabilities Act) compliant structures. Anywhere a wheelchair can go, a wheeled robot should be able to follow. Because of that, the startup didn’t have to commit to the very difficult problem of building legs.

Legs have benefits beyond the ability to handle things like stairs, however. Reach is an important one. Legged robots have an easier time reaching lower shelves, as they can bend at the legs and the waist. You could, theoretically, add a very large arm to the top of an AMR, but doing so introduces all kinds of new problems like balance.

Safety is something that has thus far been under-addressed in conversations around the form factor. One of humanoid robots’ key selling points is their ability to slot into existing workflows alongside other robotic or human co-workers.

But robots like these are big, heavy and made of metal, therefore making them a potential hazard to human workers. The subject has been top of mind for Wise, in particular, who says further standards are needed to ensure that these robots can operate safely alongside people.

For my part, I’ve been advocating for a more standardized approach to robot demos. Videos of humanoids, in particular, have obscured what these robots can and can’t do today. I would love to see disclosures around playback speed, editing, the use of teleop and other tricks of the trade that can be used to deceive (intentionally or not) viewers.

“It’s very hard to distinguish what is and isn’t progress,” Wise said, referring to some recent videos of Tesla’s Optimus robot. “I think one thing that we, as a community, can do better is being more transparent about the methodologies that we’re using. It’s fueling more power for the hype cycle. I think the other problem that we have is, if we look at what’s going on with any humanoid robot in this space, safety is not clear. There isn’t an e-stop on Optimus. There isn’t an e-stop on many of our robots.”

More TechCrunch

Get the industry’s biggest tech news, techcrunch daily news.

Every weekday and Sunday, you can get the best of TechCrunch’s coverage.

Startups Weekly

Startups are the core of TechCrunch, so get our best coverage delivered weekly.

TechCrunch Fintech

The latest Fintech news and analysis, delivered every Tuesday.

TechCrunch Mobility

TechCrunch Mobility is your destination for transportation news and insight.

Fisker collapsed under the weight of its founder’s promises

Welcome back to TechCrunch’s Week in Review — TechCrunch’s newsletter recapping the week’s biggest news. Want it in your inbox every Saturday? Sign up here. Over the past eight years,…

Fisker collapsed under the weight of its founder’s promises

What is AI? We’ve put together this non-technical guide to give anyone a fighting chance to understand how and why today’s AI works.

WTF is AI?

President Biden vetoes crypto custody bill

President Joe Biden has vetoed H.J.Res. 109, a congressional resolution that would have overturned the Securities and Exchange Commission’s current approach to banks and crypto. Specifically, the resolution targeted the…

President Biden vetoes crypto custody bill

How large a role humanoids will play in that ecosystem is, perhaps, the biggest question on everyone’s mind at the moment.

Industries may be ready for humanoid robots, but are the robots ready for them?



Ethics of Artificial Intelligence and Robotics

Artificial intelligence (AI) and robotics are digital technologies that will have a significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control them.

After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used by humans. This includes issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7). Then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9). Finally, the problem of a possible future AI superintelligence leading to a "singularity" (§2.10). We close with a remark on the vision of AI (§3).

For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how these play out with current technologies, and finally ask what policy consequences may be drawn.

Entry Contents

  • 1. Introduction
  • 1.1 Background of the Field
  • 1.2 AI & Robotics
  • 1.3 A Note on Policy
  • 2. Main Debates
  • 2.1 Privacy & Surveillance
  • 2.2 Manipulation of Behaviour
  • 2.3 Opacity of AI Systems
  • 2.4 Bias in Decision Systems
  • 2.5 Human-Robot Interaction
  • 2.6 Automation and Employment
  • 2.7 Autonomous Systems
  • 2.8 Machine Ethics
  • 2.9 Artificial Moral Agents
  • 2.10 Singularity
  • Bibliography
  • Other Internet Resources (Research Organizations, Conferences, Policy Documents, Other Relevant Pages)
  • Related Entries

1. Introduction

1.1 Background of the Field

The ethics of AI and robotics is often focused on “concerns” of various sorts, which is a typical response to new technologies. Many such concerns turn out to be rather quaint (trains are too fast for souls); some are predictably wrong when they suggest that the technology will fundamentally change humans (telephones will destroy personal communication, writing will destroy memory, video cassettes will make going out redundant); some are broadly correct but moderately relevant (digital technology will destroy industries that make photographic film, cassette tapes, or vinyl records); but some are broadly correct and deeply relevant (cars will kill children and fundamentally change the landscape). The task of an article such as this is to analyse the issues and to deflate the non-issues.

Some technologies, like nuclear power, cars, or plastics, have caused ethical and political discussion and significant policy efforts to control the trajectory of these technologies, usually only once some damage was done. In addition to such "ethical concerns", new technologies challenge current norms and conceptual systems, which is of particular interest to philosophy. Finally, once we have understood a technology in its context, we need to shape our societal response, including regulation and law. All these features also exist in the case of new AI and robotics technologies—plus the more fundamental fear that they may end the era of human control on Earth.

The ethics of AI and robotics has seen significant press coverage in recent years, which supports related research, but also may end up undermining it: the press often talks as if the issues under discussion were just predictions of what future technology will bring, and as though we already know what would be most ethical and how to achieve that. Press coverage thus focuses on risk, security (Brundage et al. 2018, in the Other Internet Resources section below, hereafter [OIR]), and prediction of impact (e.g., on the job market). The result is a discussion of essentially technical problems that focus on how to achieve a desired outcome. Current discussions in policy and industry are also motivated by image and public relations, where the label “ethical” is really not much more than the new “green”, perhaps used for “ethics washing”. For a problem to qualify as a problem for AI ethics would require that we do not readily know what the right thing to do is. In this sense, job loss, theft, or killing with AI is not a problem in ethics, but whether these are permissible under certain circumstances is a problem. This article focuses on the genuine problems of ethics where we do not readily know what the answers are.

A last caveat: The ethics of AI and robotics is a very young field within applied ethics, with significant dynamics, but few well-established issues and no authoritative overviews—though there is a promising outline (European Group on Ethics in Science and New Technologies 2018) and there are beginnings on societal impact (Floridi et al. 2018; Taddeo and Floridi 2018; S. Taylor et al. 2018; Walsh 2018; Bryson 2019; Gibert 2019; Whittlestone et al. 2019), and policy recommendations (AI HLEG 2019 [OIR]; IEEE 2019). So this article cannot merely reproduce what the community has achieved thus far, but must propose an ordering where little order exists.

The notion of “artificial intelligence” (AI) is understood broadly as any kind of artificial computational system that shows intelligent behaviour, i.e., complex behaviour that is conducive to reaching goals. In particular, we do not wish to restrict “intelligence” to what would require intelligence if done by humans , as Minsky had suggested (1985). This means we incorporate a range of machines, including those in “technical AI”, that show only limited abilities in learning or reasoning but excel at the automation of particular tasks, as well as machines in “general AI” that aim to create a generally intelligent agent.

AI somehow gets closer to our skin than other technologies—thus the field of “philosophy of AI”. Perhaps this is because the project of AI is to create machines that have a feature central to how we humans see ourselves, namely as feeling, thinking, intelligent beings. The main purposes of an artificially intelligent agent probably involve sensing, modelling, planning and action, but current AI applications also include perception, text analysis, natural language processing (NLP), logical reasoning, game-playing, decision support systems, data analytics, predictive analytics, as well as autonomous vehicles and other forms of robotics (P. Stone et al. 2016). AI may involve any number of computational techniques to achieve these aims, be that classical symbol-manipulating AI, inspired by natural cognition, or machine learning via neural networks (Goodfellow, Bengio, and Courville 2016; Silver et al. 2018).

Historically, it is worth noting that the term “AI” was used as above ca. 1950–1975, then came into disrepute during the “AI winter”, ca. 1975–1995, and narrowed. As a result, areas such as “machine learning”, “natural language processing” and “data science” were often not labelled as “AI”. Since ca. 2010, the use has broadened again, and at times almost all of computer science and even high-tech is lumped under “AI”. Now it is a name to be proud of, a booming industry with massive capital investment (Shoham et al. 2018), and on the edge of hype again. As Erik Brynjolfsson noted, it may allow us to

virtually eliminate global poverty, massively reduce disease and provide better education to almost everyone on the planet. (quoted in Anderson, Rainie, and Luchsinger 2018)

While AI can be entirely software, robots are physical machines that move. Robots are subject to physical impact, typically through “sensors”, and they exert physical force onto the world, typically through “actuators”, like a gripper or a turning wheel. Accordingly, autonomous cars or planes are robots, and only a minuscule portion of robots is “humanoid” (human-shaped), like in the movies. Some robots use AI, and some do not: Typical industrial robots blindly follow completely defined scripts with minimal sensory input and no learning or reasoning (around 500,000 such new industrial robots are installed each year (IFR 2019 [OIR])). It is probably fair to say that while robotics systems cause more concerns in the general public, AI systems are more likely to have a greater impact on humanity. Also, AI or robotics systems for a narrow set of tasks are less likely to cause new issues than systems that are more flexible and autonomous.

Robotics and AI can thus be seen as covering two overlapping sets of systems: systems that are only AI, systems that are only robotics, and systems that are both. We are interested in all three; the scope of this article is thus not only the intersection, but the union, of both sets.

Policy is only one of the concerns of this article. There is significant public discussion about AI ethics, and there are frequent pronouncements from politicians that the matter requires new policy, which is easier said than done: Actual technology policy is difficult to plan and enforce. It can take many forms, from incentives and funding, infrastructure, taxation, or good-will statements, to regulation by various actors, and the law. Policy for AI will possibly come into conflict with other aims of technology policy or general policy. Governments, parliaments, associations, and industry circles in industrialised countries have produced reports and white papers in recent years, and some have generated good-will slogans ("trusted/responsible/humane/human-centred/good/beneficial AI"), but is that what is needed? For a survey, see Jobin, Ienca, and Vayena (2019) and V. Müller's list of PT-AI Policy Documents and Institutions.

For people who work in ethics and policy, there might be a tendency to overestimate the impact and threats from a new technology, and to underestimate how far current regulation can reach (e.g., for product liability). On the other hand, there is a tendency for businesses, the military, and some public administrations to "just talk" and do some "ethics washing" in order to preserve a good public image and continue as before. Actually implementing legally binding regulation would challenge existing business models and practices. Actual policy is not just an implementation of ethical theory, but subject to societal power structures—and the agents that do have the power will push against anything that restricts them. There is thus a significant risk that regulation will remain toothless in the face of economic and political power.

Though very little actual policy has been produced, there are some notable beginnings: The latest EU policy document suggests “trustworthy AI” should be lawful, ethical, and technically robust, and then spells this out as seven requirements: human oversight, technical robustness, privacy and data governance, transparency, fairness, well-being, and accountability (AI HLEG 2019 [OIR]). Much European research now runs under the slogan of “responsible research and innovation” (RRI), and “technology assessment” has been a standard field since the advent of nuclear power. Professional ethics is also a standard field in information technology, and this includes issues that are relevant in this article. Perhaps a “code of ethics” for AI engineers, analogous to the codes of ethics for medical doctors, is an option here (Véliz 2019). What data science itself should do is addressed in (L. Taylor and Purtova 2019). We also expect that much policy will eventually cover specific uses or technologies of AI and robotics, rather than the field as a whole. A useful summary of an ethical framework for AI is given in (European Group on Ethics in Science and New Technologies 2018: 13ff). On general AI policy, see Calo (2018) as well as Crawford and Calo (2016); Stahl, Timmermans, and Mittelstadt (2016); Johnson and Verdicchio (2017); and Giubilini and Savulescu (2018). A more political angle of technology is often discussed in the field of “Science and Technology Studies” (STS). As books like The Ethics of Invention (Jasanoff 2016) show, concerns in STS are often quite similar to those in ethics (Jacobs et al. 2019 [OIR]). In this article, we discuss the policy for each type of issue separately rather than for AI or robotics in general.

2. Main Debates

In this section we outline the ethical issues of human use of AI and robotics systems that can be more or less autonomous—which means we look at issues that arise with certain uses of the technologies which would not arise with others. It must be kept in mind, however, that technologies will always cause some uses to be easier, and thus more frequent, and hinder other uses. The design of technical artefacts thus has ethical relevance for their use (Houkes and Vermaas 2010; Verbeek 2011), so beyond “responsible use”, we also need “responsible design” in this field. The focus on use does not presuppose which ethical approaches are best suited for tackling these issues; they might well be virtue ethics (Vallor 2017) rather than consequentialist or value-based (Floridi et al. 2018). This section is also neutral with respect to the question whether AI systems truly have “intelligence” or other mental properties: It would apply equally well if AI and robotics are merely seen as the current face of automation (cf. Müller forthcoming-b).

There is a general discussion about privacy and surveillance in information technology (e.g., Macnish 2017; Roessler 2017), which mainly concerns the access to private data and data that is personally identifiable. Privacy has several well-recognised aspects, e.g., "the right to be let alone", information privacy, privacy as an aspect of personhood, control over information about oneself, and the right to secrecy (Bennett and Raab 2006). Privacy studies have historically focused on state surveillance by secret services but now include surveillance by other state agents, businesses, and even individuals. The technology has changed significantly in the last decades while regulation has been slow to respond (though there is the Regulation (EU) 2016/679)—the result is a certain anarchy that is exploited by the most powerful players, sometimes in plain sight, sometimes in hiding.

The digital sphere has widened greatly: All data collection and storage is now digital, our lives are increasingly digital, most digital data is connected to a single Internet, and there is more and more sensor technology in use that generates data about non-digital aspects of our lives. AI increases both the possibilities of intelligent data collection and the possibilities for data analysis. This applies to blanket surveillance of whole populations as well as to classic targeted surveillance. In addition, much of the data is traded between agents, usually for a fee.

At the same time, controlling who collects which data, and who has access, is much harder in the digital world than it was in the analogue world of paper and telephone calls. Many new AI technologies amplify the known issues. For example, face recognition in photos and videos allows identification and thus profiling and searching for individuals (Whittaker et al. 2018: 15ff). This continues using other techniques for identification, e.g., “device fingerprinting”, which are commonplace on the Internet (sometimes revealed in the “privacy policy”). The result is that “In this vast ocean of data, there is a frighteningly complete picture of us” (Smolan 2016: 1:01). The result is arguably a scandal that still has not received due public attention.
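
In its simplest form, "device fingerprinting" just hashes together attributes that a browser reveals anyway, yielding a stable identifier without any cookie. A minimal sketch in Python (the attribute names and values here are invented for illustration):

    import hashlib

    # Attributes a browser typically exposes; the values are hypothetical.
    attributes = {
        "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
        "screen": "1920x1080",
        "timezone": "UTC+1",
        "fonts": "Arial,DejaVu,Noto",
    }
    # Canonicalise and hash: the same device yields the same identifier.
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    fingerprint = hashlib.sha256(canonical.encode()).hexdigest()
    print(fingerprint[:16])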

The data trail we leave behind is how our “free” services are paid for—but we are not told about that data collection and the value of this new raw material, and we are manipulated into leaving ever more such data. For the “big 5” companies (Amazon, Google/Alphabet, Microsoft, Apple, Facebook), the main data-collection part of their business appears to be based on deception, exploiting human weaknesses, furthering procrastination, generating addiction, and manipulation (Harris 2016 [OIR]). The primary focus of social media, gaming, and most of the Internet in this “surveillance economy” is to gain, maintain, and direct attention—and thus data supply. “Surveillance is the business model of the Internet” (Schneier 2015). This surveillance and attention economy is sometimes called “surveillance capitalism” (Zuboff 2019). It has caused many attempts to escape from the grasp of these corporations, e.g., in exercises of “minimalism” (Newport 2019), sometimes through the open source movement, but it appears that present-day citizens have lost the degree of autonomy needed to escape while fully continuing with their life and work. We have lost ownership of our data, if “ownership” is the right relation here. Arguably, we have lost control of our data.

These systems will often reveal facts about us that we ourselves wish to suppress or are not aware of: they know more about us than we know ourselves. Even just observing online behaviour allows insights into our mental states (Burr and Cristianini 2019) and enables manipulation (see section 2.2 below). This has led to calls for the protection of "derived data" (Wachter and Mittelstadt 2019). With the last sentence of his bestselling book, Homo Deus, Harari asks about the long-term consequences of AI:

What will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves? (2016: 462)

Robotic devices have not yet played a major role in this area, except for security patrolling, but this will change once they are more common outside of industry environments. Together with the “Internet of things”, the so-called “smart” systems (phone, TV, oven, lamp, virtual assistant, home,…), “smart city” (Sennett 2018), and “smart governance”, they are set to become part of the data-gathering machinery that offers more detailed data, of different types, in real time, with ever more information.

Privacy-preserving techniques that can largely conceal the identity of persons or groups are now a standard staple in data science; they include (relative) anonymisation, access control (plus encryption), and other models where computation is carried out with fully or partially encrypted input data (Stahl and Wright 2018); in the case of "differential privacy", calibrated noise is added to the output of queries so that individual records cannot be inferred (Dwork et al. 2006; Abowd 2017). While requiring more effort and cost, such techniques can avoid many of the privacy issues. Some companies have also come to see better privacy as a competitive advantage that can be leveraged and sold at a price.
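
To make the "calibrated noise" idea concrete, here is a minimal sketch of the Laplace mechanism for a counting query in Python; the records, the predicate, and the epsilon value are hypothetical, chosen only for illustration:

    import numpy as np

    def private_count(data, predicate, epsilon):
        # A counting query has sensitivity 1: adding or removing one person
        # changes the true count by at most 1, so Laplace noise with scale
        # 1/epsilon yields epsilon-differential privacy for this query.
        true_count = sum(1 for record in data if predicate(record))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Hypothetical records: (age, has_condition)
    records = [(34, True), (51, False), (29, True), (62, True)]
    print(private_count(records, lambda r: r[1], epsilon=0.5))

A smaller epsilon means more noise and stronger privacy at the cost of a less accurate answer, which is precisely the effort-and-cost trade-off mentioned above.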

One of the major practical difficulties is to actually enforce regulation, both on the level of the state and on the level of the individual who has a claim. They must identify the responsible legal entity, prove the action, perhaps prove intent, find a court that declares itself competent … and eventually get the court to actually enforce its decision. Well-established legal protection of rights such as consumer rights, product liability, and other civil liability or protection of intellectual property rights is often missing in digital products, or hard to enforce. This means that companies with a "digital" background are used to testing their products on the consumers without fear of liability while heavily defending their intellectual property rights. This "Internet Libertarianism" is sometimes taken to assume that technical solutions will take care of societal problems by themselves (Morozov 2013).

The ethical issues of AI in surveillance go beyond the mere accumulation of data and direction of attention: They include the use of information to manipulate behaviour, online and offline, in a way that undermines autonomous rational choice. Of course, efforts to manipulate behaviour are ancient, but they may gain a new quality when they use AI systems. Given users' intense interaction with data systems and the deep knowledge about individuals this provides, they are vulnerable to "nudges", manipulation, and deception. With sufficient prior data, algorithms can be used to target individuals or small groups with just the kind of input that is likely to influence these particular individuals. A "nudge" changes the environment such that it influences behaviour in a predictable way that is positive for the individual, but easy and cheap to avoid (Thaler & Sunstein 2008). There is a slippery slope from here to paternalism and manipulation.

Many advertisers, marketers, and online sellers will use any legal means at their disposal to maximise profit, including exploitation of behavioural biases, deception, and addiction generation (Costa and Halpern 2019 [OIR]). Such manipulation is the business model in much of the gambling and gaming industries, but it is spreading, e.g., to low-cost airlines. In interface design on web pages or in games, this manipulation uses what is called “dark patterns” (Mathur et al. 2019). At this moment, gambling and the sale of addictive substances are highly regulated, but online manipulation and addiction are not—even though manipulation of online behaviour is becoming a core business model of the Internet.

Furthermore, social media is now the prime location for political propaganda. This influence can be used to steer voting behaviour, as in the Facebook-Cambridge Analytica “scandal” (Woolley and Howard 2017; Bradshaw, Neudert, and Howard 2019) and—if successful—it may harm the autonomy of individuals (Susser, Roessler, and Nissenbaum 2019).

Improved AI “faking” technologies make what once was reliable evidence into unreliable evidence—this has already happened to digital photos, sound recordings, and video. It will soon be quite easy to create (rather than alter) “deep fake” text, photos, and video material with any desired content. Soon, sophisticated real-time interaction with persons over text, phone, or video will be faked, too. So we cannot trust digital interactions while we are at the same time increasingly dependent on such interactions.

One more specific issue is that machine learning techniques in AI rely on training with vast amounts of data. This means there will often be a trade-off between privacy and data rights on the one hand and the technical quality of the product on the other. This influences the consequentialist evaluation of privacy-violating practices.

The policy in this field has its ups and downs: Civil liberties and the protection of individual rights are under intense pressure from businesses’ lobbying, secret services, and other state agencies that depend on surveillance. Privacy protection has diminished massively compared to the pre-digital age when communication was based on letters, analogue telephone communications, and personal conversation and when surveillance operated under significant legal constraints.

While the EU General Data Protection Regulation (Regulation (EU) 2016/679) has strengthened privacy protection, the US and China prefer growth with less regulation (Thompson and Bremmer 2018), likely in the hope that this provides a competitive advantage. It is clear that state and business actors have increased their ability to invade privacy and manipulate people with the help of AI technology and will continue to do so to further their particular interests—unless reined in by policy in the interest of general society.

Opacity and bias are central issues in what is now sometimes called “data ethics” or “big data ethics” (Floridi and Taddeo 2016; Mittelstadt and Floridi 2016). AI systems for automated decision support and “predictive analytics” raise “significant concerns about lack of due process, accountability, community engagement, and auditing” (Whittaker et al. 2018: 18ff). They are part of a power structure in which “we are creating decision-making processes that constrain and limit opportunities for human participation” (Danaher 2016b: 245). At the same time, it will often be impossible for the affected person to know how the system came to this output, i.e., the system is “opaque” to that person. If the system involves machine learning, it will typically be opaque even to the expert, who will not know how a particular pattern was identified, or even what the pattern is. Bias in decision systems and data sets is exacerbated by this opacity. So, at least in cases where there is a desire to remove bias, the analysis of opacity and bias go hand in hand, and political response has to tackle both issues together.

Many AI systems rely on machine learning techniques in (simulated) neural networks that will extract patterns from a given dataset, with or without “correct” solutions provided; i.e., supervised, semi-supervised or unsupervised. With these techniques, the “learning” captures patterns in the data and these are labelled in a way that appears useful to the decision the system makes, while the programmer does not really know which patterns in the data the system has used. In fact, the programs are evolving, so when new data comes in, or new feedback is given (“this was correct”, “this was incorrect”), the patterns used by the learning system change. What this means is that the outcome is not transparent to the user or programmers: it is opaque. Furthermore, the quality of the program depends heavily on the quality of the data provided, following the old slogan “garbage in, garbage out”. So, if the data already involved a bias (e.g., police data about the skin colour of suspects), then the program will reproduce that bias. There are proposals for a standard description of datasets in a “datasheet” that would make the identification of such bias more feasible (Gebru et al. 2018 [OIR]). There is also significant recent literature about the limitations of machine learning systems that are essentially sophisticated data filters (Marcus 2018 [OIR]). Some have argued that the ethical problems of today are the result of technical “shortcuts” AI has taken (Cristianini forthcoming).
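
The "garbage in, garbage out" point can be shown in a few lines. The following sketch assumes scikit-learn is available; the data is synthetic, and the group attribute, coefficients, and sample size are invented for illustration. A classifier trained on historically biased labels learns a non-zero weight on an attribute that is irrelevant to the task:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    skill = rng.normal(size=n)           # genuinely relevant feature
    group = rng.integers(0, 2, size=n)   # irrelevant protected attribute
    # Historically biased labels: past decisions favoured group 1.
    label = (skill + 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

    model = LogisticRegression().fit(np.column_stack([skill, group]), label)
    print("weight on skill:", model.coef_[0][0])
    print("weight on group:", model.coef_[0][1])  # non-zero: bias reproduced

Nothing in this code "decides" to discriminate; the pattern is simply extracted from the data, which is why documentation of how a dataset was collected, as in the proposed "datasheets", can make such bias easier to spot.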

There are several technical activities that aim at “explainable AI”, starting with (Van Lent, Fisher, and Mancuso 1999; Lomas et al. 2012) and, more recently, a DARPA programme (Gunning 2017 [OIR]). More broadly, the demand for

a mechanism for elucidating and articulating the power structures, biases, and influences that computational artefacts exercise in society (Diakopoulos 2015: 398)

is sometimes called "algorithmic accountability reporting". This does not mean that we expect an AI to "explain its reasoning"—doing so would require far more serious moral autonomy than we currently attribute to AI systems (see below §2.10).

The politician Henry Kissinger pointed out that there is a fundamental problem for democratic decision-making if we rely on a system that is supposedly superior to humans, but cannot explain its decisions. He says we may have “generated a potentially dominating technology in search of a guiding philosophy” (Kissinger 2018). Danaher (2016b) calls this problem “the threat of algocracy” (adopting the previous use of ‘algocracy’ from Aneesh 2002 [OIR], 2006). In a similar vein, Cave (2019) stresses that we need a broader societal move towards more “democratic” decision-making to avoid AI being a force that leads to a Kafka-style impenetrable suppression system in public administration and elsewhere. The political angle of this discussion has been stressed by O’Neil in her influential book Weapons of Math Destruction (2016), and by Yeung and Lodge (2019).

In the EU, some of these issues have been taken into account with the (Regulation (EU) 2016/679), which foresees that consumers, when faced with a decision based on data processing, will have a legal “right to explanation”—how far this goes and to what extent it can be enforced is disputed (Goodman and Flaxman 2017; Wachter, Mittelstadt, and Floridi 2016; Wachter, Mittelstadt, and Russell 2017). Zerilli et al. (2019) argue that there may be a double standard here, where we demand a high level of explanation for machine-based decisions despite humans sometimes not reaching that standard themselves.

Automated AI decision support systems and "predictive analytics" operate on data and produce a decision as "output". This output may range from the relatively trivial to the highly significant: "this restaurant matches your preferences", "the patient in this X-ray has completed bone growth", "application to credit card declined", "donor organ will be given to another patient", "bail is denied", or "target identified and engaged". Data analysis is often used in "predictive analytics" in business, healthcare, and other fields, to foresee future developments—and as prediction becomes easier, it will also become a cheaper commodity. One use of prediction is in "predictive policing" (NIJ 2014 [OIR]), which many fear might lead to an erosion of public liberties (Ferguson 2017) because it can take away power from the people whose behaviour is predicted. It appears, however, that many of the worries about policing depend on futuristic scenarios where law enforcement foresees and punishes planned actions, rather than waiting until a crime has been committed (as in the 2002 film "Minority Report"). One concern is that these systems might perpetuate bias that was already in the data used to set up the system, e.g., by increasing police patrols in an area and discovering more crime in that area. Actual "predictive policing" or "intelligence-led policing" techniques mainly concern the question of where and when police forces will be needed most. Also, police officers can be provided with more data, offering them more control and facilitating better decisions, in workflow support software (e.g., "ArcGIS"). Whether this is problematic depends on the appropriate level of trust in the technical quality of these systems, and on the evaluation of aims of the police work itself. Perhaps a recent paper title points in the right direction here: "AI ethics in predictive policing: From models of threat to an ethics of care" (Asaro 2019).

Bias typically surfaces when unfair judgments are made because the individual making the judgment is influenced by a characteristic that is actually irrelevant to the matter at hand, typically a discriminatory preconception about members of a group. So, one form of bias is a learned cognitive feature of a person, often not made explicit. The person concerned may not be aware of having that bias—they may even be honestly and explicitly opposed to a bias they are found to have (e.g., through priming, cf. Graham and Lowery 2004). On fairness vs. bias in machine learning, see Binns (2018).

Apart from the social phenomenon of learned bias, the human cognitive system is generally prone to have various kinds of "cognitive biases", e.g., the "confirmation bias": humans tend to interpret information as confirming what they already believe. This second form of bias is often said to impede performance in rational judgment (Kahneman 2011)—though at least some cognitive biases generate an evolutionary advantage, e.g., economical use of resources for intuitive judgment. There is a question whether AI systems could or should have such cognitive bias.

A third form of bias is present in data when it exhibits systematic error, e.g., "statistical bias". Strictly, any given dataset will only be unbiased for a single kind of issue, so the mere creation of a dataset involves the danger that it may be used for a different kind of issue, and then turn out to be biased for that kind. Machine learning on the basis of such data would then not only fail to recognise the bias, but codify and automate the "historical bias". Such historical bias was discovered in an automated recruitment screening system at Amazon (discontinued early 2017) that discriminated against women—presumably because the company had a history of discriminating against women in the hiring process. The "Correctional Offender Management Profiling for Alternative Sanctions" (COMPAS), a system to predict whether a defendant would re-offend, was found to be as successful (65.2% accuracy) as a group of random humans (Dressel and Farid 2018) and to produce more false positives and fewer false negatives for black defendants. The problem with such systems is thus bias plus humans placing excessive trust in the systems. The political dimensions of such automated systems in the USA are investigated in Eubanks (2018).
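
Group-wise error rates of the kind reported for COMPAS are straightforward to compute once predictions and outcomes are tabulated. A sketch with entirely made-up toy data:

    import numpy as np

    def false_positive_rate(y_true, y_pred, group, g):
        # P(predicted to re-offend | did not re-offend, member of group g)
        mask = (group == g) & (y_true == 0)
        return np.mean(y_pred[mask])

    y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])  # actually re-offended?
    y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 1])  # predicted to re-offend?
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute

    for g in (0, 1):
        print(f"group {g}: FPR = {false_positive_rate(y_true, y_pred, group, g):.2f}")

Which such metric should be equalised across groups (false-positive rates, false-negative rates, calibration) is itself contested, and several of these criteria cannot in general be satisfied simultaneously; this connects to the difficulty of finding a mathematical notion of fairness discussed below.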

There are significant technical efforts to detect and remove bias from AI systems, but it is fair to say that these are in early stages: see UK Institute for Ethical AI & Machine Learning (Brownsword, Scotford, and Yeung 2017; Yeung and Lodge 2019). It appears that technological fixes have their limits in that they need a mathematical notion of fairness, which is hard to come by (Whittaker et al. 2018: 24ff; Selbst et al. 2019), as is a formal notion of “race” (see Benthall and Haynes 2019). An institutional proposal is in (Veale and Binns 2017).

Human-robot interaction (HRI) is an academic field in its own right, which now pays significant attention to ethical matters, the dynamics of perception from both sides, and the different interests present in, and the intricacy of, the social context, including co-working (e.g., Arnold and Scheutz 2017). Useful surveys for the ethics of robotics include Calo, Froomkin, and Kerr (2016); Royakkers and van Est (2016); Tzafestas (2016); a standard collection of papers is Lin, Abney, and Jenkins (2017).

While AI can be used to manipulate humans into believing and doing things (see section 2.2), it can also be used to drive robots that are problematic if their processes or appearance involve deception, threaten human dignity, or violate the Kantian requirement of "respect for humanity". Humans very easily attribute mental properties to objects, and empathise with them, especially when the outer appearance of these objects is similar to that of living beings. This can be used to deceive humans (or animals) into attributing more intellectual or even emotional significance to robots or AI systems than they deserve. Some parts of humanoid robotics are problematic in this regard (e.g., Hiroshi Ishiguro's remote-controlled Geminoids), and there are cases that have been clearly deceptive for public-relations purposes (e.g., on the abilities of Hanson Robotics' "Sophia"). Of course, some fairly basic constraints of business ethics and law apply to robots, too: product safety and liability, or non-deception in advertisement. It appears that these existing constraints take care of many concerns that are raised. There are cases, however, where human-human interaction has aspects that appear specifically human in ways that can perhaps not be replaced by robots: care, love, and sex.

2.5.1 Example (a) Care Robots

The use of robots in health care for humans is currently at the level of concept studies in real environments, but it may become a usable technology in a few years, and has raised a number of concerns about a dystopian future of de-humanised care (A. Sharkey and N. Sharkey 2011; Robert Sparrow 2016). Current systems include robots that support human carers/caregivers (e.g., in lifting patients, or transporting material), robots that enable patients to do certain things by themselves (e.g., eat with a robotic arm), but also robots that are given to patients as company and comfort (e.g., the "Paro" robot seal). For an overview, see van Wynsberghe (2016); Nørskov (2017); Fosch-Villaronga and Albo-Canals (2019); for a survey of users, see Draper et al. (2014).

One reason why the issue of care has come to the fore is that people have argued that we will need robots in ageing societies. This argument makes problematic assumptions, namely that with longer lifespans people will need more care, and that it will not be possible to attract more humans to caring professions. It may also show a bias about age (Jecker forthcoming). Most importantly, it ignores the nature of automation, which is not simply about replacing humans, but about allowing humans to work more efficiently. It is not very clear that there really is an issue here, since the discussion mostly focuses on the fear of robots de-humanising care, whereas the actual and foreseeable robots in care are assistive robots for classic automation of technical tasks. They are thus "care robots" only in a behavioural sense of performing tasks in care environments, not in the sense that a human "cares" for the patients. It appears that the success of "being cared for" relies on this intentional sense of "care", which foreseeable robots cannot provide. If anything, the risk of robots in care is the absence of such intentional care—because fewer human carers may be needed. Interestingly, caring for something, even a virtual agent, can be good for the carer themselves (Lee et al. 2019). A system that pretends to care would be deceptive and thus problematic—unless the deception is countered by a sufficiently large utility gain (Coeckelbergh 2016). Some robots that pretend to "care" on a basic level are available (Paro seal) and others are in the making. Perhaps feeling cared for by a machine, to some extent, is progress for some patients.

2.5.2 Example (b) Sex Robots

It has been argued by several tech optimists that humans will likely be interested in sex and companionship with robots and be comfortable with the idea (Levy 2007). Given the variation of human sexual preferences, including sex toys and sex dolls, this seems very likely: The question is whether such devices should be manufactured and promoted, and whether there should be limits in this touchy area. It seems to have moved into the mainstream of “robot philosophy” in recent times (Sullins 2012; Danaher and McArthur 2017; N. Sharkey et al. 2017 [OIR]; Bendel 2018; Devlin 2018).

Humans have long had deep emotional attachments to objects, so perhaps companionship or even love with a predictable android is attractive, especially to people who struggle with actual humans, and already prefer dogs, cats, birds, a computer, or a tamagotchi. Danaher (2019b) argues against (Nyholm and Frank 2017) that these can be true friendships, and that such friendship is thus a valuable goal. It certainly looks like such friendship might increase overall utility, even if lacking in depth. In these discussions there is an issue of deception, since a robot cannot (at present) mean what it says, or have feelings for a human. It is well known that humans are prone to attribute feelings and thoughts to entities that behave as if they had sentience, even to clearly inanimate objects that show no behaviour at all. Also, paying for deception seems to be an elementary part of the traditional sex industry.

Finally, there are concerns that have often accompanied matters of sex, namely consent (Frank and Nyholm 2017), aesthetic concerns, and the worry that humans may be "corrupted" by certain experiences. Old-fashioned though this may seem, human behaviour is influenced by experience, and it is likely that pornography or sex robots support the perception of other humans as mere objects of desire, or even recipients of abuse, and thus ruin a deeper sexual and erotic experience. In this vein, the "Campaign Against Sex Robots" argues that these devices are a continuation of slavery and prostitution (Richardson 2016).

It seems clear that AI and robotics will lead to significant gains in productivity and thus overall wealth. The attempt to increase productivity has often been a feature of the economy, though the emphasis on "growth" is a modern phenomenon (Harari 2016: 240). However, productivity gains through automation typically mean that fewer humans are required for the same output. This does not necessarily imply a loss of overall employment, because available wealth increases and that can increase demand sufficiently to counteract the productivity gain. In the long run, higher productivity in industrial societies has led to more wealth overall. Major labour market disruptions have occurred in the past, e.g., farming employed over 60% of the workforce in Europe and North America in 1800, while by 2010 it employed ca. 5% in the EU, and even less in the wealthiest countries (European Commission 2013). In the 20 years between 1950 and 1970 the number of hired agricultural workers in the UK was reduced by 50% (Zayed and Loft 2019). Some of these disruptions lead to more labour-intensive industries moving to places with lower labour cost. This is an ongoing process.

Classic automation replaced human muscle, whereas digital automation replaces human thought or information-processing—and unlike physical machines, digital automation is very cheap to duplicate (Bostrom and Yudkowsky 2014). It may thus mean a more radical change on the labour market. So, the main question is: will the effects be different this time? Will the creation of new jobs and wealth keep up with the destruction of jobs? And even if it is not different, what are the transition costs, and who bears them? Do we need to make societal adjustments for a fair distribution of costs and benefits of digital automation?

Responses to the issue of unemployment from AI have ranged from the alarmed (Frey and Osborne 2013; Westlake 2014) to the neutral (Metcalf, Keller, and Boyd 2016 [OIR]; Calo 2018; Frey 2019) to the optimistic (Brynjolfsson and McAfee 2016; Harari 2016; Danaher 2019a). In principle, the labour market effect of automation seems to be fairly well understood as involving two channels:

(i) the nature of interactions between differently skilled workers and new technologies affecting labour demand and (ii) the equilibrium effects of technological progress through consequent changes in labour supply and product markets. (Goos 2018: 362)

What currently seems to happen in the labour market as a result of AI and robotics automation is “job polarisation” or the “dumbbell” shape (Goos, Manning, and Salomons 2009): The highly skilled technical jobs are in demand and highly paid, the low skilled service jobs are in demand and badly paid, but the mid-qualification jobs in factories and offices, i.e., the majority of jobs, are under pressure and reduced because they are relatively predictable, and most likely to be automated (Baldwin 2019).

Perhaps enormous productivity gains will allow the "age of leisure" to be realised, something Keynes (1930) had predicted to occur around 2030, assuming a growth rate of 1% per annum. Actually, we have already reached the level he anticipated for 2030, but we are still working—consuming more and inventing ever more levels of organisation. Harari explains how this economic development allowed humanity to overcome hunger, disease, and war—and now we aim for immortality and eternal bliss through AI, thus his title Homo Deus (Harari 2016: 75).
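
As a back-of-the-envelope check on the growth arithmetic (taking the 1% per annum figure above as given), compound growth over the century from 1930 to 2030 works out as follows:

    import math

    growth, years = 0.01, 100
    print((1 + growth) ** years)               # ≈ 2.70, i.e., output grows ~2.7x
    print(math.log(2) / math.log(1 + growth))  # ≈ 69.7 years to double ("rule of 70")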

In general terms, the issue of unemployment is an issue of how goods in a society should be justly distributed. A standard view is that distributive justice should be rationally decided from behind a "veil of ignorance" (Rawls 1971), i.e., as if one does not know what position in a society one would actually be taking (labourer or industrialist, etc.). Rawls thought the chosen principles would then support basic liberties and a distribution that is of greatest benefit to the least-advantaged members of society. The AI economy appears to have three features that make such justice unlikely: First, it operates in a largely unregulated environment where responsibility is often hard to allocate. Second, it operates in markets that have a "winner takes all" feature where monopolies develop quickly. Third, the "new economy" of the digital service industries is based on intangible assets, also called "capitalism without capital" (Haskel and Westlake 2017). This means that it is difficult to control multinational digital corporations that do not rely on a physical plant in a particular location. These three features seem to suggest that if we leave the distribution of wealth to free market forces, the result would be a heavily unjust distribution. And this is indeed a development that we can already see.

One interesting question that has not received too much attention is whether the development of AI is environmentally sustainable: Like all computing systems, AI systems produce waste that is very hard to recycle and they consume vast amounts of energy, especially for the training of machine learning systems (and even for the “mining” of cryptocurrency). Again, it appears that some actors in this space offload such costs to the general society.

There are several notions of autonomy in the discussion of autonomous systems. A stronger notion is involved in philosophical debates where autonomy is the basis for responsibility and personhood (Christman 2003 [2018]). In this context, responsibility implies autonomy, but not inversely, so there can be systems that have degrees of technical autonomy without raising issues of responsibility. The weaker, more technical, notion of autonomy in robotics is relative and gradual: A system is said to be autonomous with respect to human control to a certain degree (Müller 2012). There is a parallel here to the issues of bias and opacity in AI since autonomy also concerns a power-relation: who is in control, and who is responsible?

Generally speaking, one question is the degree to which autonomous robots raise issues our present conceptual schemes must adapt to, or whether they just require technical adjustments. In most jurisdictions, there is a sophisticated system of civil and criminal liability to resolve such issues. Technical standards, e.g., for the safe use of machinery in medical environments, will likely need to be adjusted. There is already a field of "verifiable AI" for such safety-critical systems and for "security applications". Bodies like the IEEE (The Institute of Electrical and Electronics Engineers) and the BSI (British Standards Institution) have produced "standards", particularly on more technical sub-problems, such as data security and transparency. Among the many autonomous systems on land, on water, under water, in air or space, we discuss two examples: autonomous vehicles and autonomous weapons.

2.7.1 Example (a) Autonomous Vehicles

Autonomous vehicles hold the promise of reducing the very significant damage that human driving currently causes—approximately 1 million humans killed per year, many more injured, the environment polluted, earth sealed with concrete and tarmac, cities full of parked cars, etc. However, there seem to be questions about how autonomous vehicles should behave, and about how responsibility and risk should be distributed in the complicated system the vehicles operate in. (There is also significant disagreement over how long the development of fully autonomous, or "level 5", cars (SAE International 2018) will actually take.)

There is some discussion of “trolley problems” in this context. In the classic “trolley problems” (Thomson 1976; Woollard and Howard-Snyder 2016: section 2) various dilemmas are presented. The simplest version is that of a trolley train on a track that is heading towards five people and will kill them, unless the train is diverted onto a side track, but on that track there is one person, who will be killed if the train takes that side track. The example goes back to a remark in (Foot 1967: 6), who discusses a number of dilemma cases where tolerated and intended consequences of an action differ. “Trolley problems” are not supposed to describe actual ethical problems or to be solved with a “right” choice. Rather, they are thought-experiments where choice is artificially constrained to a small finite number of distinct one-off options and where the agent has perfect knowledge. These problems are used as a theoretical tool to investigate ethical intuitions and theories—especially the difference between actively doing vs. allowing something to happen, intended vs. tolerated consequences, and consequentialist vs. other normative approaches (Kamm 2016). This type of problem has reminded many of the problems encountered in actual driving and in autonomous driving (Lin 2016). It is doubtful, however, that an actual driver or autonomous car will ever have to solve trolley problems (but see Keeling 2020). While autonomous car trolley problems have received a lot of media attention (Awad et al. 2018), they do not seem to offer anything new to either ethical theory or to the programming of autonomous vehicles.

The more common ethical problems in driving, such as speeding, risky overtaking, not keeping a safe distance, etc., are classic problems of pursuing personal interest vs. the common good. The vast majority of these are covered by legal regulations on driving. Programming the car to drive "by the rules" rather than "by the interest of the passengers" or "to achieve maximum utility" is thus deflated to a standard problem of programming ethical machines (see section 2.9). There are probably additional discretionary rules of politeness and interesting questions on when to break the rules (Lin 2016), but again this seems to be more a case of applying standard considerations (rules vs. utility) to the case of autonomous vehicles.

Notable policy efforts in this field include the report (German Federal Ministry of Transport and Digital Infrastructure 2017), which stresses that safety is the primary objective. Rule 10 states

In the case of automated and connected driving systems, the accountability that was previously the sole preserve of the individual shifts from the motorist to the manufacturers and operators of the technological systems and to the bodies responsible for taking infrastructure, policy and legal decisions.

(See section 2.10.1 below). The resulting German and EU laws on licensing automated driving are much more restrictive than their US counterparts where “testing on consumers” is a strategy used by some companies—without informed consent of the consumers or their possible victims.

2.7.2 Example (b) Autonomous Weapons

The notion of automated weapons is fairly old:

For example, instead of fielding simple guided missiles or remotely piloted vehicles, we might launch completely autonomous land, sea, and air vehicles capable of complex, far-ranging reconnaissance and attack missions. (DARPA 1983: 1)

This proposal was ridiculed as "fantasy" at the time (Dreyfus, Dreyfus, and Athanasiou 1986: ix), but it is now a reality, at least for more easily identifiable targets (missiles, planes, ships, tanks, etc.), though not for human combatants. The main arguments against (lethal) autonomous weapon systems (AWS or LAWS) are that they support extrajudicial killings, take responsibility away from humans, and make wars or killings more likely—for a detailed list of issues see Lin, Bekey, and Abney (2008: 73–86).

It appears that lowering the hurdle to use such systems (autonomous vehicles, “fire-and-forget” missiles, or drones loaded with explosives) and reducing the probability of being held accountable would increase the probability of their use. The crucial asymmetry where one side can kill with impunity, and thus has few reasons not to do so, already exists in conventional drone wars with remote controlled weapons (e.g., US in Pakistan). It is easy to imagine a small drone that searches, identifies, and kills an individual human—or perhaps a type of human. These are the kinds of cases brought forward by the Campaign to Stop Killer Robots and other activist groups. Some seem to be equivalent to saying that autonomous weapons are indeed weapons …, and weapons kill, but we still make them in gigantic numbers. On the matter of accountability, autonomous weapons might make identification and prosecution of the responsible agents more difficult—but this is not clear, given the digital records that one can keep, at least in a conventional war. The difficulty of allocating punishment is sometimes called the “retribution gap” (Danaher 2016a).

Another question is whether using autonomous weapons in war would make wars worse, or make wars less bad. If robots were to reduce war crimes and crimes in war, the answer may well be positive; this has been used as an argument in favour of these weapons (Arkin 2009; Müller 2016a), but also as an argument against them (Amoroso and Tamburrini 2018). Arguably the main threat is not the use of such weapons in conventional warfare, but in asymmetric conflicts or by non-state agents, including criminals.

It has also been said that autonomous weapons cannot conform to International Humanitarian Law, which requires observance of the principles of distinction (between combatants and civilians), proportionality (of force), and military necessity (of force) in military conflict (A. Sharkey 2019). It is true that the distinction between combatants and non-combatants is hard, but the distinction between civilian and military ships is easy; so all this argument shows is that we should not construct or use such weapons where they would violate Humanitarian Law. Additional concerns have been raised that being killed by an autonomous weapon threatens human dignity, but even defenders of a ban on these weapons seem to say that these are not good arguments:

There are other weapons, and other technologies, that also compromise human dignity. Given this, and the ambiguities inherent in the concept, it is wiser to draw on several types of objections in arguments against AWS, and not to rely exclusively on human dignity. (A. Sharkey 2019)

A lot has been made of keeping humans “in the loop” or “on the loop” in military guidance on weapons; these ways of spelling out “meaningful control” are discussed in Santoni de Sio and van den Hoven (2018). There have been discussions about the difficulties of allocating responsibility for the killings of an autonomous weapon, and a “responsibility gap” has been suggested (esp. Sparrow 2007), meaning that neither the human nor the machine may be responsible. On the other hand, we do not assume that for every event there is someone responsible for it, and the real issue may well be the distribution of risk (Simpson and Müller 2016). Risk analysis (Hansson 2013) indicates that it is crucial to identify who is exposed to risk, who is a potential beneficiary, and who makes the decisions (Hansson 2018: 1822–1824).

2.8 Machine Ethics

Machine ethics is ethics for machines, for “ethical machines”, for machines as subjects, rather than for the human use of machines as objects. It is often not very clear whether this is supposed to cover all of AI ethics or to be a part of it (Floridi and Sanders 2004; Moor 2006; Anderson and Anderson 2011; Wallach and Asaro 2017). Sometimes it looks as though the (dubious) inference is at play that if machines act in ethically relevant ways, then we need a machine ethics. Accordingly, some use a broader notion:

machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. (Anderson and Anderson 2007: 15)

This might include mere matters of product safety, for example. Other authors sound rather ambitious but use a narrower notion:

AI reasoning should be able to take into account societal values, moral and ethical considerations; weigh the respective priorities of values held by different stakeholders in various multicultural contexts; explain its reasoning; and guarantee transparency. (Dignum 2018: 1, 2)

Some of the discussion in machine ethics makes the very substantial assumption that machines can, in some sense, be ethical agents responsible for their actions, or “autonomous moral agents” (see van Wynsberghe and Robbins 2019). The basic idea of machine ethics is now finding its way into actual robotics where the assumption that these machines are artificial moral agents in any substantial sense is usually not made (Winfield et al. 2019). It is sometimes observed that a robot that is programmed to follow ethical rules can very easily be modified to follow unethical rules (Vanderelst and Winfield 2018).

The idea that machine ethics might take the form of “laws” has famously been investigated by Isaac Asimov, who proposed “three laws of robotics” (Asimov 1942):

First Law—A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law—A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law—A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov then showed in a number of stories how conflicts between these three laws make them problematic to use, despite their hierarchical organisation; a toy rendering of that hierarchy follows.
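
The “hierarchical organisation” of the laws is a lexicographic ordering: violating a higher law is worse than violating any lower one. A toy sketch (Python; the encoding of situations is invented purely for illustration) shows how such an ordering is evaluated, and why a conflict case still forces the robot to accept some violation:

```python
# Toy lexicographic evaluation of Asimov's three laws.
# The encoding of situations is invented purely for illustration.

def violations(action):
    """Return the set of law numbers (1-3) this action violates."""
    v = set()
    if action["harms_human"] or action["inaction_allows_harm"]:
        v.add(1)
    if not action["obeys_order"]:
        v.add(2)
    if action["destroys_self"]:
        v.add(3)
    return v

def choose(actions):
    """Lexicographic choice: a First-Law violation is worse than any
    Second-Law violation, which is worse than any Third-Law violation."""
    return min(actions, key=lambda a: tuple(law in violations(a)
                                            for law in (1, 2, 3)))

# Conflict case: obeying an order harms a human; refusing disobeys.
obey   = {"harms_human": True,  "inaction_allows_harm": False,
          "obeys_order": True,  "destroys_self": False}
refuse = {"harms_human": False, "inaction_allows_harm": False,
          "obeys_order": False, "destroys_self": False}

print(choose([obey, refuse]) is refuse)  # True: First Law outranks Second
```

When every available action violates some law, as in Asimov’s stories, the ordering still returns an answer, but only by declaring one violation acceptable; that is the problem the stories dramatise.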

It is not clear that there is a consistent notion of “machine ethics”, since weaker versions are in danger of reducing “having an ethics” to notions that would not normally be considered sufficient (e.g., without “reflection” or even without “action”), while stronger notions that move towards artificial moral agents may describe a (currently) empty set.

2.9 Artificial Moral Agents

If one takes machine ethics to concern moral agents in some substantial sense, then these agents can be called “artificial moral agents”, having rights and responsibilities. However, the discussion about artificial entities challenges a number of common notions in ethics, and it can be very useful to understand these in abstraction from the human case (cf. Misselhorn 2020; Powers and Ganascia forthcoming).

Several authors use “artificial moral agent” in a less demanding sense, borrowing from the use of “agent” in software engineering, in which case matters of responsibility and rights do not arise (Allen, Varner, and Zinser 2000). James Moor (2006) distinguishes four types of machine agents: ethical impact agents (e.g., robot jockeys), implicit ethical agents (e.g., a safe autopilot), explicit ethical agents (e.g., using formal methods to estimate utility), and full ethical agents, who “can make explicit ethical judgments and generally is competent to reasonably justify them. An average adult human is a full ethical agent”. Several ways to achieve “explicit” or “full” ethical agents have been proposed: programming the ethics in (operational morality), “developing” the ethics itself (functional morality), and, finally, full-blown morality with full intelligence and sentience (Allen, Smit, and Wallach 2005; Moor 2006). Programmed agents are sometimes not considered “full” agents because they are “competent without comprehension”, just like the neurons in a brain (Dennett 2017; Hakli and Mäkelä 2019).
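
Moor’s “explicit ethical agent”, which uses “formal methods to estimate utility”, can be given a minimal sketch: the outcomes, probabilities, and value function are explicit, inspectable objects, so the agent can exhibit the calculation behind its choice. The scenario and all numbers below are invented for illustration:

```python
# Minimal sketch of an "explicit ethical agent" in Moor's sense:
# outcomes, probabilities, and values are explicit, inspectable objects,
# so the agent can show the calculation behind its choice. Numbers invented.

actions = {
    # action -> list of (probability, moral value of outcome)
    "administer_drug": [(0.9, +10), (0.1, -50)],  # likely helps, risk of harm
    "wait_for_doctor": [(1.0, +2)],               # safe but less beneficial
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

scores = {name: expected_value(o) for name, o in actions.items()}
best = max(scores, key=scores.get)

print(scores)           # {'administer_drug': 4.0, 'wait_for_doctor': 2.0}
print("chosen:", best)  # the agent can also cite the expectation as its reason
```

An implicit ethical agent, by contrast, would have such considerations engineered in (like a safe autopilot) without representing them; the difference lies in whether the ethics is an explicit object of computation.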

In some discussions, the notion of a “moral patient” plays a role: ethical agents have responsibilities, while ethical patients have rights, because harm to them matters. It seems clear that some entities are patients without being agents, e.g., simple animals that can feel pain but cannot make justified choices. On the other hand, it is normally understood that all agents are also patients (e.g., in a Kantian framework). Usually, being a person is supposed to be what makes an entity a responsible agent, someone who can have duties and be the object of ethical concerns. Such personhood is typically a deep notion associated with phenomenal consciousness, intention, and free will (Frankfurt 1971; Strawson 1998). Torrance (2011) suggests that “artificial (or machine) ethics could be defined as designing machines that do things that, when done by humans, are indicative of the possession of ‘ethical status’ in those humans” (2011: 116), which he takes to be “ethical productivity and ethical receptivity” (2011: 117), his expressions for moral agents and patients.

2.9.1 Responsibility for Robots

There is broad consensus that accountability, liability, and the rule of law are basic requirements that must be upheld in the face of new technologies (European Group on Ethics in Science and New Technologies 2018: 18), but the issue in the case of robots is how this can be done and how responsibility can be allocated. If the robots act, will they themselves be responsible, liable, or accountable for their actions? Or should the distribution of risk perhaps take precedence over discussions of responsibility?

Traditional distribution of responsibility already occurs: A car maker is responsible for the technical safety of the car, a driver is responsible for driving, a mechanic is responsible for proper maintenance, the public authorities are responsible for the technical conditions of the roads, etc. In general

The effects of decisions or actions based on AI are often the result of countless interactions among many actors, including designers, developers, users, software, and hardware.… With distributed agency comes distributed responsibility. (Taddeo and Floridi 2018: 751).

How this distribution might occur is not a problem specific to AI, but it gains particular urgency in this context (Nyholm 2018a, 2018b). In classical control engineering, distributed control is often achieved through a control hierarchy plus control loops across these hierarchies, as the sketch below indicates.
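
What “a control hierarchy plus control loops” means can be indicated with a toy sketch (Python; plant, gains, and phases are invented): an upper level chooses targets, a lower-level feedback loop tracks them, and each level can be held to account for its own contribution.

```python
# Toy two-level control hierarchy: a supervisor loop picks setpoints,
# a low-level proportional loop tracks them. All values invented.

def supervisor(step):
    """Upper level: decide the target speed for this phase of the plan."""
    return 30.0 if step < 50 else 0.0   # cruise, then come to a stop

def low_level(speed, target, gain=0.2):
    """Lower level: feedback loop nudging actual speed toward the target."""
    return speed + gain * (target - speed)

speed = 0.0
for step in range(100):
    target = supervisor(step)         # hierarchy: the upper level commands...
    speed = low_level(speed, target)  # ...the lower loop executes and corrects

print(round(speed, 3))  # ~0.0: the hierarchy has braked the vehicle to a stop
```

The analogy suggested in the text is that responsibility can likewise be allocated per level: the supervisor answers for the choice of target, the low-level loop for how well it is tracked.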

2.9.2 Rights for Robots

Some authors have argued that it should be seriously considered whether current robots should be allocated rights (Gunkel 2018a, 2018b; Danaher forthcoming; Turner 2019). This position seems to rely largely on criticism of its opponents and on the empirical observation that robots and other non-persons are sometimes treated as having rights. In this vein, a “relational turn” has been proposed: if we relate to robots as though they had rights, then we might be well advised not to investigate whether they “really” do have such rights (Coeckelbergh 2010, 2012, 2018). This raises the question of how far such anti-realism or quasi-realism can go, and what it would then mean to say that “robots have rights” in a human-centred approach (Gerdes 2016). On the other side of the debate, Bryson has insisted that robots should not enjoy rights (Bryson 2010), though she considers it a possibility (Gunkel and Bryson 2014).

There is a wholly separate issue of whether robots (or other AI systems) should be given the status of “legal entities” or “legal persons”, in the sense in which natural persons, but also states, businesses, or organisations, are “entities” that can have legal rights and duties. The European Parliament has considered allocating such status to robots in order to deal with civil liability (EU Parliament 2016; Bertolini and Aiello 2018), but not criminal liability, which is reserved for natural persons. It would also be possible to assign only a certain subset of rights and duties to robots. It has been said that “such legislative action would be morally unnecessary and legally troublesome” because it would not serve the interest of humans (Bryson, Diamantis, and Grant 2017: 273). In environmental ethics, there is a long-standing discussion about legal rights for natural objects like trees (C. D. Stone 1972).

It has also been said that the reasons for developing robots with rights, or artificial moral patients, in the future are ethically doubtful (van Wynsberghe and Robbins 2019). In the community of “artificial consciousness” researchers there is significant concern about whether it would be ethical to create such consciousness, since creating it would presumably imply ethical obligations to a sentient being, e.g., not to harm it and not to end its existence by switching it off; some authors have called for a “moratorium on synthetic phenomenology” (Bentley et al. 2018: 28f).

2.10 Singularity

2.10.1 Singularity and Superintelligence

In some quarters, the aim of current AI is thought to be an “artificial general intelligence” (AGI), contrasted with technical or “narrow” AI. AGI is usually distinguished from traditional notions of AI as a general-purpose system, and from Searle’s notion of “strong AI”:

computers given the right programs can be literally said to understand and have other cognitive states. (Searle 1980: 417)

The idea of singularity is that if the trajectory of artificial intelligence reaches systems that have a human level of intelligence, then these systems would themselves have the ability to develop AI systems that surpass the human level of intelligence, i.e., that are “superintelligent” (see below). Such superintelligent AI systems would quickly self-improve or develop even more intelligent systems. This sharp turn of events after reaching superintelligent AI is the “singularity”, from which point on the development of AI is out of human control and hard to predict (Kurzweil 2005: 487).
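
Purely as a toy model, and not as an empirical claim, the recursive structure of this argument can be written down. Let I_n be the intelligence of the n-th generation of systems and assume, for illustration only, that each generation can design a successor better by a constant factor k:

```latex
% Toy recursion behind the "intelligence explosion" argument
% (illustrative assumption: a constant amplification factor k).
\[
  I_{n+1} = k \, I_n \qquad\Longrightarrow\qquad I_n = k^{n} I_0 .
\]
% If k > 1 once I_0 is at human level, I_n grows without bound
% (an "explosion"); if k <= 1, improvement levels off instead.
```

The dispute between proponents and critics can then be read as a dispute over whether anything like a sustained k > 1 is available once I_0 reaches human level.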

The fear that “the robots we created will take over the world” had captured human imagination even before there were computers (e.g., Butler 1863) and is the central theme in Čapek’s famous play that introduced the word “robot” (Čapek 1920). This fear was first formulated as a possible trajectory of existing AI into an “intelligence explosion” by Irving John Good:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. (Good 1965: 33)

The optimistic argument from acceleration to singularity is spelled out by Kurzweil (1999, 2005, 2012), who essentially points out that computing power has been increasing exponentially, i.e., doubling roughly every 2 years since 1970 in accordance with “Moore’s Law” on the number of transistors, and will continue to do so for some time in the future. He predicted (Kurzweil 1999) that by 2010 supercomputers would reach human computation capacity, by 2030 “mind uploading” would be possible, and by 2045 the “singularity” would occur. Kurzweil talks about an increase in computing power that can be purchased at a given cost, but of course in recent years the funds available to AI companies have also increased enormously: Amodei and Hernandez (2018 [OIR]) thus estimate that in the years 2012–2018 the actual computing power available to train a particular AI system doubled every 3.4 months, resulting in a 300,000x increase, not the 7x increase that doubling every two years would have created.
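
The arithmetic behind these contrasting figures is easy to check. The short calculation below (Python) recovers the span implied by the 300,000x estimate and compares it with the Moore’s-Law pace; since the exact result depends on which systems mark the endpoints of the measured span, the outputs should be read as order-of-magnitude only.

```python
import math

# Amodei & Hernandez (2018 [OIR]) report ~300,000x growth in the compute
# used for the largest training runs, ca. 2012-2018.
doublings = math.log2(300_000)       # ~18.2 doublings
span_months = doublings * 3.4        # at one doubling per 3.4 months
print(round(span_months / 12, 1))    # ~5.2 years, roughly the measured span

# Moore's-Law pace over the same span: one doubling every ~24 months.
moore_factor = 2 ** (span_months / 24)
print(round(moore_factor, 1))        # ~6x (the text's "7x", give or take the
                                     # exact span), versus ~300,000x
```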

A common version of this argument (Chalmers 2010) talks about an increase in “intelligence” of the AI system (rather than raw computing power), but the crucial point of “singularity” remains the one where further development of AI is taken over by AI systems and accelerates beyond human level. Bostrom (2014) explains in some detail what would happen at that point and what the risks for humanity are. The discussion is summarised in Eden et al. (2012); Armstrong (2014); Shanahan (2015). There are possible paths to superintelligence other than computing power increase, e.g., the complete emulation of the human brain on a computer (Kurzweil 2012; Sandberg 2013), biological paths, or networks and organisations (Bostrom 2014: 22–51).

Despite obvious weaknesses in the identification of “intelligence” with processing power, Kurzweil seems right that humans tend to underestimate the power of exponential growth. Mini-test: if you walked in steps such that each step is double the previous, starting with a step of one metre, how far would you get with 30 steps? (Answer: almost three times the distance to the Moon.) Indeed, most progress in AI is readily attributable to the availability of processors that are faster by orders of magnitude, larger storage, and higher investment (Müller 2018). The actual acceleration and its speeds are discussed in Müller and Bostrom (2016) and Bostrom, Dafoe, and Flynn (forthcoming); Sandberg (2019) argues that progress will continue for some time.
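
The mini-test is itself a two-line check (Python), using a mean Earth-Moon distance of about 384,400 km:

```python
# 30 steps, each double the previous, starting at 1 metre:
# total = 1 + 2 + 4 + ... + 2**29 = 2**30 - 1 metres.
total_km = (2**30 - 1) / 1000        # ~1.07 million km walked
moon_km = 384_400                    # mean Earth-Moon distance

print(round(total_km / moon_km, 2))  # ~2.79: almost 3 times the lunar distance
```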

The participants in this debate are united in being technophiles, in the sense that they expect technology to develop rapidly and to bring broadly welcome changes; beyond that, they divide into those who focus on benefits (e.g., Kurzweil) and those who focus on risks (e.g., Bostrom). Both camps sympathise with “transhuman” views of survival for humankind in a different physical form, e.g., uploaded on a computer (Moravec 1990, 1998; Bostrom 2003a, 2003c). They also consider the prospects of “human enhancement” in various respects, including intelligence, often called “IA” (intelligence augmentation). It may be that future AI will be used for human enhancement, or will contribute further to the dissolution of the neatly defined single human person. Robin Hanson provides detailed speculation about what will happen economically in case human “brain emulation” enables truly intelligent robots or “ems” (Hanson 2016).

The argument from superintelligence to risk requires the assumption that superintelligence does not imply benevolence, contrary to Kantian traditions in ethics that have argued that higher levels of rationality or intelligence would go along with a better understanding of what is moral and a better ability to act morally (Gewirth 1978; Chalmers 2010: 36f). Arguments for risk from superintelligence say that rationality and morality are entirely independent dimensions; this is sometimes explicitly argued for as an “orthogonality thesis” (Bostrom 2012; Armstrong 2013; Bostrom 2014: 105–109).

Criticism of the singularity narrative has been raised from various angles. Kurzweil and Bostrom seem to assume that intelligence is a one-dimensional property and that the set of intelligent agents is totally ordered in the mathematical sense, but neither discusses intelligence at any length in their books. Generally, it is fair to say that, despite some efforts, the assumptions made in the powerful narrative of superintelligence and singularity have not been investigated in detail. One question is whether such a singularity will ever occur: it may be conceptually impossible, practically impossible, or may just not happen because of contingent events, including people actively preventing it. Philosophically, the interesting question is whether singularity is just a “myth” (Floridi 2016; Ganascia 2017), not on the trajectory of actual AI research. This is something that practitioners often assume (e.g., Brooks 2017 [OIR]). They may do so because they fear the public-relations backlash, because they overestimate the practical problems, or because they have good reasons to think that superintelligence is an unlikely outcome of current AI research (Müller forthcoming-a). This discussion raises the question of whether the concern about “singularity” is just a narrative about fictional AI based on human fears. But even if one finds the negative reasons compelling and the singularity unlikely to occur, there remains a significant possibility that one may turn out to be wrong. Philosophy is not on the “secure path of a science” (Kant 1787: B15), and maybe AI and robotics aren’t either (Müller 2020). So it appears that discussing the very high-impact risk of singularity has justification even if one thinks the probability of such a singularity ever occurring is very low.

2.10.2 Existential Risk from Superintelligence

Thinking about superintelligence in the long term raises the question whether superintelligence may lead to the extinction of the human species, which is called an “existential risk” (or XRisk): The superintelligent systems may well have preferences that conflict with the existence of humans on Earth, and may thus decide to end that existence—and given their superior intelligence, they will have the power to do so (or they may happen to end it because they do not really care).

Thinking in the long term is the crucial feature of this literature. Whether the singularity (or another catastrophic event) occurs in 30 or 300 or 3000 years does not really matter (Baum et al. 2019). Perhaps there is even an astronomical pattern such that an intelligent species is bound to discover AI at some point, and thus bring about its own demise. Such a “great filter” would contribute to explaining the “Fermi paradox”: why there is no sign of life in the known universe despite the high probability of it emerging. It would be bad news if we found out that the “great filter” is ahead of us, rather than an obstacle that Earth has already passed. These issues are sometimes taken more narrowly to be about human extinction (Bostrom 2013), or more broadly as concerning any large risk for the species (Rees 2018), of which AI is only one (Häggström 2016; Ord 2020). Bostrom also uses the category of “global catastrophic risk” for risks that are sufficiently high along the two dimensions of “scope” and “severity” (Bostrom and Ćirković 2011; Bostrom 2013).

These discussions of risk are usually not connected to the general problem of ethics under risk (e.g., Hansson 2013, 2018). The long-term view has its own methodological challenges, but has produced a wide discussion: Tegmark (2017) focuses on AI and human life “3.0” after singularity, while Russell, Dewey, and Tegmark (2015) and Bostrom, Dafoe, and Flynn (forthcoming) survey longer-term policy issues in ethical AI. Several collections of papers have investigated the risks of artificial general intelligence (AGI) and the factors that might make this development more or less risk-laden (Müller 2016b; Callaghan et al. 2017; Yampolskiy 2018), including the development of non-agent AI (Drexler 2019).

2.10.3 Controlling Superintelligence?

In a narrow sense, the “control problem” is how we humans can remain in control of an AI system once it is superintelligent (Bostrom 2014: 127ff). In a wider sense, it is the problem of how we can make sure an AI system will turn out to be positive according to human perception (Russell 2019); this is sometimes called “value alignment”. How easy or hard it is to control a superintelligence depends significantly on the speed of “take-off” to a superintelligent system. This has led to particular attention to systems with self-improvement, such as AlphaZero (Silver et al. 2018).

One aspect of this problem is that we might decide that a certain feature is desirable, but then find out that it has unforeseen consequences so negative that we would not desire that feature after all. This is the ancient problem of King Midas, who wished that all he touched would turn into gold. The problem has been discussed with the aid of various examples, such as the “paperclip maximiser” (Bostrom 2003b) or the program to optimise chess performance (Omohundro 2014).
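
The Midas/paperclip structure can be made concrete in a few lines: an optimiser given a proxy objective that omits something we value will happily sacrifice it. The plans, numbers, and penalty term below are invented for illustration:

```python
# Misspecified objective: the proxy omits a side effect we care about.
# Candidate plans: (name, paperclips produced, resources consumed). Invented.
plans = [
    ("modest factory",      1_000,      10),
    ("convert everything", 10_000_000, 100_000),
]

def proxy(plan):
    """What we asked for: maximise paperclips."""
    _, clips, _ = plan
    return clips

def intended(plan, penalty=200):
    """What we actually wanted: paperclips minus the cost of consuming
    everything else (the term King Midas forgot to specify)."""
    _, clips, resources = plan
    return clips - penalty * resources

print(max(plans, key=proxy)[0])      # "convert everything"
print(max(plans, key=intended)[0])   # "modest factory"
```

In this framing, the control problem is that the missing penalty term may only be discovered after the optimiser has acted on the proxy.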

Discussions about superintelligence include speculation about omniscient beings, the radical changes on a “latter day”, and the promise of immortality through transcendence of our current bodily form—so sometimes they have clear religious undertones (Capurro 1993; Geraci 2008, 2010; O’Connell 2017: 160ff). These issues also pose a well-known problem of epistemology: Can we know the ways of the omniscient (Danaher 2015)? The usual opponents have already shown up: A characteristic response of an atheist is

People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world (Domingos 2015)

The new nihilists explain that a “techno-hypnosis” through information technologies has now become our main method of distraction from the loss of meaning (Gertz 2018). Both opponents would thus say we need an ethics for the “small” problems that occur with actual AI and robotics (sections 2.1 through 2.9 above), and that there is less need for the “big ethics” of existential risk from AI (section 2.10).

The singularity thus raises the problem of the concept of AI again. It is remarkable how imagination or “vision” has played a central role since the very beginning of the discipline at the “Dartmouth Summer Research Project” (McCarthy et al. 1955 [OIR]; Simon and Newell 1958). And the evaluation of this vision is subject to dramatic change: in a few decades, we went from the slogans “AI is impossible” (Dreyfus 1972) and “AI is just automation” (Lighthill 1973) to “AI will solve all problems” (Kurzweil 1999) and “AI may kill us all” (Bostrom 2014). This created media attention and public-relations efforts, but it also raises the problem of how much of this “philosophy and ethics of AI” is really about AI rather than about an imagined technology. As we said at the outset, AI and robotics have raised fundamental questions about what we should do with these systems, what the systems themselves should do, and what risks they involve in the long term. They also challenge the human view of humanity as the intelligent and dominant species on Earth. We have seen the issues that have been raised, and we will have to watch technological and social developments closely to catch the new issues early on, develop a philosophical analysis, and learn from it for traditional problems of philosophy.

NOTE: Citations in the main text annotated “[OIR]” may be found in the Other Internet Resources section below, not in the Bibliography.

Bibliography

  • Abowd, John M., 2017, “How Will Statistical Agencies Operate When All Data Are Private?”, Journal of Privacy and Confidentiality, 7(3): 1–15. doi:10.29012/jpc.v7i3.404
  • AI4EU, 2019, “Outcomes from the Strategic Orientation Workshop (Deliverable 7.1)”, (June 28, 2019). https://www.ai4eu.eu/ai4eu-project-deliverables
  • Allen, Colin, Iva Smit, and Wendell Wallach, 2005, “Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches”, Ethics and Information Technology , 7(3): 149–155. doi:10.1007/s10676-006-0004-4
  • Allen, Colin, Gary Varner, and Jason Zinser, 2000, “Prolegomena to Any Future Artificial Moral Agent”, Journal of Experimental & Theoretical Artificial Intelligence , 12(3): 251–261. doi:10.1080/09528130050111428
  • Amoroso, Daniele and Guglielmo Tamburrini, 2018, “The Ethical and Legal Case Against Autonomy in Weapons Systems”, Global Jurist , 18(1): art. 20170012. doi:10.1515/gj-2017-0012
  • Anderson, Janna, Lee Rainie, and Alex Luchsinger, 2018, Artificial Intelligence and the Future of Humans , Washington, DC: Pew Research Center.
  • Anderson, Michael and Susan Leigh Anderson, 2007, “Machine Ethics: Creating an Ethical Intelligent Agent”, AI Magazine , 28(4): 15–26.
  • ––– (eds.), 2011, Machine Ethics , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511978036
  • Aneesh, A., 2006, Virtual Migration: The Programming of Globalization , Durham, NC and London: Duke University Press.
  • Arkin, Ronald C., 2009, Governing Lethal Behavior in Autonomous Robots , Boca Raton, FL: CRC Press.
  • Armstrong, Stuart, 2013, “General Purpose Intelligence: Arguing the Orthogonality Thesis”, Analysis and Metaphysics , 12: 68–84.
  • –––, 2014, Smarter Than Us , Berkeley, CA: MIRI.
  • Arnold, Thomas and Matthias Scheutz, 2017, “Beyond Moral Dilemmas: Exploring the Ethical Landscape in HRI”, in Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction—HRI ’17 , Vienna, Austria: ACM Press, 445–452. doi:10.1145/2909824.3020255
  • Asaro, Peter M., 2019, “AI Ethics in Predictive Policing: From Models of Threat to an Ethics of Care”, IEEE Technology and Society Magazine , 38(2): 40–53. doi:10.1109/MTS.2019.2915154
  • Asimov, Isaac, 1942, “Runaround”, Astounding Science Fiction, March 1942. Reprinted in I, Robot, New York: Gnome Press, 1950.
  • Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan, 2018, “The Moral Machine Experiment”, Nature , 563(7729): 59–64. doi:10.1038/s41586-018-0637-6
  • Baldwin, Richard, 2019, The Globotics Upheaval: Globalisation, Robotics and the Future of Work , New York: Oxford University Press.
  • Baum, Seth D., Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin, and Roman V. Yampolskiy, 2019, “Long-Term Trajectories of Human Civilization”, Foresight , 21(1): 53–83. doi:10.1108/FS-04-2018-0037
  • Bendel, Oliver, 2018, “Sexroboter aus Sicht der Maschinenethik”, in Handbuch Filmtheorie , Bernhard Groß and Thomas Morsch (eds.), (Springer Reference Geisteswissenschaften), Wiesbaden: Springer Fachmedien Wiesbaden, 1–19. doi:10.1007/978-3-658-17484-2_22-1
  • Bennett, Colin J. and Charles Raab, 2006, The Governance of Privacy: Policy Instruments in Global Perspective , second edition, Cambridge, MA: MIT Press.
  • Benthall, Sebastian and Bruce D. Haynes, 2019, “Racial Categories in Machine Learning”, in Proceedings of the Conference on Fairness, Accountability, and Transparency - FAT* ’19 , Atlanta, GA, USA: ACM Press, 289–298. doi:10.1145/3287560.3287575
  • Bentley, Peter J., Miles Brundage, Olle Häggström, and Thomas Metzinger, 2018, “Should We Fear Artificial Intelligence? In-Depth Analysis”, European Parliamentary Research Service, Scientific Foresight Unit (STOA), March 2018, PE 614.547, 1–40. [ Bentley et al. 2018 available online ]
  • Bertolini, Andrea and Giuseppe Aiello, 2018, “Robot Companions: A Legal and Ethical Analysis”, The Information Society , 34(3): 130–140. doi:10.1080/01972243.2018.1444249
  • Binns, Reuben, 2018, “Fairness in Machine Learning: Lessons from Political Philosophy”, Proceedings of the 1st Conference on Fairness, Accountability and Transparency , in Proceedings of Machine Learning Research , 81: 149–159.
  • Bostrom, Nick, 2003a, “Are We Living in a Computer Simulation?”, The Philosophical Quarterly , 53(211): 243–255. doi:10.1111/1467-9213.00309
  • –––, 2003b, “Ethical Issues in Advanced Artificial Intelligence”, in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Volume 2, Iva Smit, Wendell Wallach, and G.E. Lasker (eds), (IIAS-147-2003), Tecumseh, ON: International Institute of Advanced Studies in Systems Research and Cybernetics, 12–17. [ Bostrom 2003b revised available online ]
  • –––, 2003c, “Transhumanist Values”, in Ethical Issues for the Twenty-First Century , Frederick Adams (ed.), Bowling Green, OH: Philosophical Documentation Center Press.
  • –––, 2012, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents”, Minds and Machines , 22(2): 71–85. doi:10.1007/s11023-012-9281-3
  • –––, 2013, “Existential Risk Prevention as Global Priority”, Global Policy , 4(1): 15–31. doi:10.1111/1758-5899.12002
  • –––, 2014, Superintelligence: Paths, Dangers, Strategies , Oxford: Oxford University Press.
  • Bostrom, Nick and Milan M. Ćirković (eds.), 2011, Global Catastrophic Risks , New York: Oxford University Press.
  • Bostrom, Nick, Allan Dafoe, and Carrick Flynn, forthcoming, “Policy Desiderata for Superintelligent AI: A Vector Field Approach (V. 4.3)”, in Ethics of Artificial Intelligence , S Matthew Liao (ed.), New York: Oxford University Press. [ Bostrom, Dafoe, and Flynn forthcoming – preprint available online ]
  • Bostrom, Nick and Eliezer Yudkowsky, 2014, “The Ethics of Artificial Intelligence”, in The Cambridge Handbook of Artificial Intelligence , Keith Frankish and William M. Ramsey (eds.), Cambridge: Cambridge University Press, 316–334. doi:10.1017/CBO9781139046855.020 [ Bostrom and Yudkowsky 2014 available online ]
  • Bradshaw, Samantha, Lisa-Maria Neudert, and Phil Howard, 2019, “Government Responses to Malicious Use of Social Media”, Working Paper 2019.2, Oxford: Project on Computational Propaganda. [ Bradshaw, Neudert, and Howard 2019 available online/ ]
  • Brownsword, Roger, Eloise Scotford, and Karen Yeung (eds.), 2017, The Oxford Handbook of Law, Regulation and Technology , Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199680832.001.0001
  • Brynjolfsson, Erik and Andrew McAfee, 2016, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies , New York: W. W. Norton.
  • Bryson, Joanna J., 2010, “Robots Should Be Slaves”, in Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues , Yorick Wilks (ed.), (Natural Language Processing 8), Amsterdam: John Benjamins Publishing Company, 63–74. doi:10.1075/nlp.8.11bry
  • –––, 2019, “The Past Decade and Future of Ai’s Impact on Society”, in Towards a New Enlightenment: A Transcendent Decade , Madrid: Turner - BVVA. [ Bryson 2019 available online ]
  • Bryson, Joanna J., Mihailis E. Diamantis, and Thomas D. Grant, 2017, “Of, for, and by the People: The Legal Lacuna of Synthetic Persons”, Artificial Intelligence and Law , 25(3): 273–291. doi:10.1007/s10506-017-9214-9
  • Burr, Christopher and Nello Cristianini, 2019, “Can Machines Read Our Minds?”, Minds and Machines , 29(3): 461–494. doi:10.1007/s11023-019-09497-4
  • Butler, Samuel, 1863, “Darwin among the Machines: Letter to the Editor”, Letter in The Press (Christchurch) , 13 June 1863. [ Butler 1863 available online ]
  • Callaghan, Victor, James Miller, Roman Yampolskiy, and Stuart Armstrong (eds.), 2017, The Technological Singularity: Managing the Journey , (The Frontiers Collection), Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-662-54033-6
  • Calo, Ryan, 2018, “Artificial Intelligence Policy: A Primer and Roadmap”, University of Bologna Law Review , 3(2): 180-218. doi:10.6092/ISSN.2531-6133/8670
  • Calo, Ryan, A. Michael Froomkin, and Ian Kerr (eds.), 2016, Robot Law , Cheltenham: Edward Elgar.
  • Čapek, Karel, 1920, R.U.R. , Prague: Aventium. Translated by Peter Majer and Cathy Porter, London: Methuen, 1999.
  • Capurro, Raphael, 1993, “Ein Grinsen Ohne Katze: Von der Vergleichbarkeit Zwischen ‘Künstlicher Intelligenz’ und ‘Getrennten Intelligenzen’”, Zeitschrift für philosophische Forschung , 47: 93–102.
  • Cave, Stephen, 2019, “To Save Us from a Kafkaesque Future, We Must Democratise AI”, The Guardian , 04 January 2019. [ Cave 2019 available online ]
  • Chalmers, David J., 2010, “The Singularity: A Philosophical Analysis”, Journal of Consciousness Studies , 17(9–10): 7–65. [ Chalmers 2010 available online ]
  • Christman, John, 2003 [2018], “Autonomy in Moral and Political Philosophy”, The Stanford Encyclopedia of Philosophy (Spring 2018 Edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/spr2018/entries/autonomy-moral/ >
  • Coeckelbergh, Mark, 2010, “Robot Rights? Towards a Social-Relational Justification of Moral Consideration”, Ethics and Information Technology , 12(3): 209–221. doi:10.1007/s10676-010-9235-5
  • –––, 2012, Growing Moral Relations: Critique of Moral Status Ascription , London: Palgrave. doi:10.1057/9781137025968
  • –––, 2016, “Care Robots and the Future of ICT-Mediated Elderly Care: A Response to Doom Scenarios”, AI & Society , 31(4): 455–462. doi:10.1007/s00146-015-0626-3
  • –––, 2018, “What Do We Mean by a Relational Ethics? Growing a Relational Approach to the Moral Standing of Plants, Robots and Other Non-Humans”, in Plant Ethics: Concepts and Applications , Angela Kallhoff, Marcello Di Paola, and Maria Schörgenhumer (eds.), London: Routledge, 110–121.
  • Crawford, Kate and Ryan Calo, 2016, “There Is a Blind Spot in AI Research”, Nature , 538(7625): 311–313. doi:10.1038/538311a
  • Cristianini, Nello, forthcoming, “Shortcuts to Artificial Intelligence”, in Machines We Trust , Marcello Pelillo and Teresa Scantamburlo (eds.), Cambridge, MA: MIT Press. [ Cristianini forthcoming – preprint available online ]
  • Danaher, John, 2015, “Why AI Doomsayers Are Like Sceptical Theists and Why It Matters”, Minds and Machines , 25(3): 231–246. doi:10.1007/s11023-015-9365-y
  • –––, 2016a, “Robots, Law and the Retribution Gap”, Ethics and Information Technology , 18(4): 299–309. doi:10.1007/s10676-016-9403-3
  • –––, 2016b, “The Threat of Algocracy: Reality, Resistance and Accommodation”, Philosophy & Technology , 29(3): 245–268. doi:10.1007/s13347-015-0211-1
  • –––, 2019a, Automation and Utopia: Human Flourishing in a World without Work , Cambridge, MA: Harvard University Press.
  • –––, 2019b, “The Philosophical Case for Robot Friendship”, Journal of Posthuman Studies , 3(1): 5–24. doi:10.5325/jpoststud.3.1.0005
  • –––, forthcoming, “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism”, Science and Engineering Ethics , first online: 20 June 2019. doi:10.1007/s11948-019-00119-x
  • Danaher, John and Neil McArthur (eds.), 2017, Robot Sex: Social and Ethical Implications , Boston, MA: MIT Press.
  • DARPA, 1983, “Strategic Computing. New-Generation Computing Technology: A Strategic Plan for Its Development and Application to Critical Problems in Defense”, ADA141982, 28 October 1983. [ DARPA 1983 available online ]
  • Dennett, Daniel C, 2017, From Bacteria to Bach and Back: The Evolution of Minds , New York: W.W. Norton.
  • Devlin, Kate, 2018, Turned On: Science, Sex and Robots , London: Bloomsbury.
  • Diakopoulos, Nicholas, 2015, “Algorithmic Accountability: Journalistic Investigation of Computational Power Structures”, Digital Journalism , 3(3): 398–415. doi:10.1080/21670811.2014.976411
  • Dignum, Virginia, 2018, “Ethics in Artificial Intelligence: Introduction to the Special Issue”, Ethics and Information Technology , 20(1): 1–3. doi:10.1007/s10676-018-9450-z
  • Domingos, Pedro, 2015, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World , London: Allen Lane.
  • Draper, Heather, Tom Sorell, Sandra Bedaf, Dag Sverre Syrdal, Carolina Gutierrez-Ruiz, Alexandre Duclos, and Farshid Amirabdollahian, 2014, “Ethical Dimensions of Human-Robot Interactions in the Care of Older People: Insights from 21 Focus Groups Convened in the UK, France and the Netherlands”, in International Conference on Social Robotics 2014 , Michael Beetz, Benjamin Johnston, and Mary-Anne Williams (eds.), (Lecture Notes in Artificial Intelligence 8755), Cham: Springer International Publishing, 135–145. doi:10.1007/978-3-319-11973-1_14
  • Dressel, Julia and Hany Farid, 2018, “The Accuracy, Fairness, and Limits of Predicting Recidivism”, Science Advances , 4(1): eaao5580. doi:10.1126/sciadv.aao5580
  • Drexler, K. Eric, 2019, “Reframing Superintelligence: Comprehensive AI Services as General Intelligence”, FHI Technical Report, 2019-1, 1-210. [ Drexler 2019 available online ]
  • Dreyfus, Hubert L., 1972, What Computers Can’t Do: A Critique of Artificial Reason, New York: Harper & Row; second edition published as What Computers Still Can’t Do, Cambridge, MA: MIT Press, 1992.
  • Dreyfus, Hubert L., Stuart E. Dreyfus, and Tom Athanasiou, 1986, Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer , New York: Free Press.
  • Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith, 2006, “Calibrating Noise to Sensitivity in Private Data Analysis”, in Theory of Cryptography (TCC 2006), Shai Halevi and Tal Rabin (eds.), (Lecture Notes in Computer Science 3876), Berlin, Heidelberg: Springer, 265–284.
  • Eden, Amnon H., James H. Moor, Johnny H. Søraker, and Eric Steinhart (eds.), 2012, Singularity Hypotheses: A Scientific and Philosophical Assessment , (The Frontiers Collection), Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-32560-1
  • EU Parliament, 2016, “Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL))”, Committee on Legal Affairs, 10 November 2016. https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html
  • EU Regulation, 2016/679, “General Data Protection Regulation: Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC”, Official Journal of the European Union, 119 (4 May 2016), 1–88. [ Regulation (EU) 2016/679 available online ]
  • Eubanks, Virginia, 2018, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, London: St. Martin’s Press.
  • European Commission, 2013, “How Many People Work in Agriculture in the European Union? An Answer Based on Eurostat Data Sources”, EU Agricultural Economics Briefs, 8 (July 2013). [ European Commission 2013 available online ]
  • European Group on Ethics in Science and New Technologies, 2018, “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems”, 9 March 2018, European Commission, Directorate-General for Research and Innovation, Unit RTD.01. [ European Group 2018 available online ]
  • Ferguson, Andrew Guthrie, 2017, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement , New York: NYU Press.
  • Floridi, Luciano, 2016, “Should We Be Afraid of AI? Machines Seem to Be Getting Smarter and Smarter and Much Better at Human Jobs, yet True AI Is Utterly Implausible. Why?”, Aeon, 9 May 2016. [ Floridi 2016 available online ]
  • Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke, and Effy Vayena, 2018, “AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations”, Minds and Machines , 28(4): 689–707. doi:10.1007/s11023-018-9482-5
  • Floridi, Luciano and Jeff W. Sanders, 2004, “On the Morality of Artificial Agents”, Minds and Machines , 14(3): 349–379. doi:10.1023/B:MIND.0000035461.63578.9d
  • Floridi, Luciano and Mariarosaria Taddeo, 2016, “What Is Data Ethics?”, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences , 374(2083): 20160360. doi:10.1098/rsta.2016.0360
  • Foot, Philippa, 1967, “The Problem of Abortion and the Doctrine of the Double Effect”, Oxford Review , 5: 5–15.
  • Fosch-Villaronga, Eduard and Jordi Albo-Canals, 2019, “‘I’ll Take Care of You,’ Said the Robot”, Paladyn, Journal of Behavioral Robotics , 10(1): 77–93. doi:10.1515/pjbr-2019-0006
  • Frank, Lily and Sven Nyholm, 2017, “Robot Sex and Consent: Is Consent to Sex between a Robot and a Human Conceivable, Possible, and Desirable?”, Artificial Intelligence and Law , 25(3): 305–323. doi:10.1007/s10506-017-9212-y
  • Frankfurt, Harry G., 1971, “Freedom of the Will and the Concept of a Person”, The Journal of Philosophy , 68(1): 5–20.
  • Frey, Carl Benedict, 2019, The Technology Trap: Capital, Labour, and Power in the Age of Automation , Princeton, NJ: Princeton University Press.
  • Frey, Carl Benedikt and Michael A. Osborne, 2013, “The Future of Employment: How Susceptible Are Jobs to Computerisation?”, Oxford Martin School Working Papers, 17 September 2013. [ Frey and Osborne 2013 available online ]
  • Ganascia, Jean-Gabriel, 2017, Le Mythe De La Singularité , Paris: Éditions du Seuil.
  • Geraci, Robert M., 2008, “Apocalyptic AI: Religion and the Promise of Artificial Intelligence”, Journal of the American Academy of Religion , 76(1): 138–166. doi:10.1093/jaarel/lfm101
  • –––, 2010, Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195393026.001.0001
  • Gerdes, Anne, 2016, “The Issue of Moral Consideration in Robot Ethics”, ACM SIGCAS Computers and Society , 45(3): 274–279. doi:10.1145/2874239.2874278
  • German Federal Ministry of Transport and Digital Infrastructure, 2017, “Report of the Ethics Commission: Automated and Connected Driving”, June 2017, 1–36. [ GFMTDI 2017 available online ]
  • Gertz, Nolen, 2018, Nihilism and Technology , London: Rowman & Littlefield.
  • Gewirth, Alan, 1978, “The Golden Rule Rationalized”, Midwest Studies in Philosophy , 3(1): 133–147. doi:10.1111/j.1475-4975.1978.tb00353.x
  • Gibert, Martin, 2019, “Éthique Artificielle (Version Grand Public)”, in L’Encyclopédie Philosophique, Maxime Kristanek (ed.), accessed: 16 April 2020. [ Gibert 2019 available online ]
  • Giubilini, Alberto and Julian Savulescu, 2018, “The Artificial Moral Advisor. The ‘Ideal Observer’ Meets Artificial Intelligence”, Philosophy & Technology , 31(2): 169–188. doi:10.1007/s13347-017-0285-z
  • Good, Irving John, 1965, “Speculations Concerning the First Ultraintelligent Machine”, in Advances in Computers 6 , Franz L. Alt and Morris Rubinoff (eds.), New York & London: Academic Press, 31–88. doi:10.1016/S0065-2458(08)60418-0
  • Goodfellow, Ian, Yoshua Bengio, and Aaron Courville, 2016, Deep Learning , Cambridge, MA: MIT Press.
  • Goodman, Bryce and Seth Flaxman, 2017, “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation’”, AI Magazine , 38(3): 50–57. doi:10.1609/aimag.v38i3.2741
  • Goos, Maarten, 2018, “The Impact of Technological Progress on Labour Markets: Policy Challenges”, Oxford Review of Economic Policy , 34(3): 362–375. doi:10.1093/oxrep/gry002
  • Goos, Maarten, Alan Manning, and Anna Salomons, 2009, “Job Polarization in Europe”, American Economic Review , 99(2): 58–63. doi:10.1257/aer.99.2.58
  • Graham, Sandra and Brian S. Lowery, 2004, “Priming Unconscious Racial Stereotypes about Adolescent Offenders”, Law and Human Behavior , 28(5): 483–504. doi:10.1023/B:LAHU.0000046430.65485.1f
  • Gunkel, David J., 2018a, “The Other Question: Can and Should Robots Have Rights?”, Ethics and Information Technology , 20(2): 87–99. doi:10.1007/s10676-017-9442-4
  • –––, 2018b, Robot Rights , Boston, MA: MIT Press.
  • Gunkel, David J. and Joanna J. Bryson (eds.), 2014, Machine Morality: The Machine as Moral Agent and Patient special issue of Philosophy & Technology , 27(1): 1–142.
  • Häggström, Olle, 2016, Here Be Dragons: Science, Technology and the Future of Humanity , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198723547.001.0001
  • Hakli, Raul and Pekka Mäkelä, 2019, “Moral Responsibility of Robots and Hybrid Agents”, The Monist , 102(2): 259–275. doi:10.1093/monist/onz009
  • Hanson, Robin, 2016, The Age of Em: Work, Love and Life When Robots Rule the Earth , Oxford: Oxford University Press.
  • Hansson, Sven Ove, 2013, The Ethics of Risk: Ethical Analysis in an Uncertain World , New York: Palgrave Macmillan.
  • –––, 2018, “How to Perform an Ethical Risk Analysis (eRA)”, Risk Analysis , 38(9): 1820–1829. doi:10.1111/risa.12978
  • Harari, Yuval Noah, 2016, Homo Deus: A Brief History of Tomorrow , New York: Harper.
  • Haskel, Jonathan and Stian Westlake, 2017, Capitalism without Capital: The Rise of the Intangible Economy , Princeton, NJ: Princeton University Press.
  • Houkes, Wybo and Pieter E. Vermaas, 2010, Technical Functions: On the Use and Design of Artefacts , (Philosophy of Engineering and Technology 1), Dordrecht: Springer Netherlands. doi:10.1007/978-90-481-3900-2
  • IEEE, 2019, Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems (First Version). [ IEEE 2019 available online ]
  • Jasanoff, Sheila, 2016, The Ethics of Invention: Technology and the Human Future , New York: Norton.
  • Jecker, Nancy S., forthcoming, Ending Midlife Bias: New Values for Old Age , New York: Oxford University Press.
  • Jobin, Anna, Marcello Ienca, and Effy Vayena, 2019, “The Global Landscape of AI Ethics Guidelines”, Nature Machine Intelligence , 1(9): 389–399. doi:10.1038/s42256-019-0088-2
  • Johnson, Deborah G. and Mario Verdicchio, 2017, “Reframing AI Discourse”, Minds and Machines , 27(4): 575–590. doi:10.1007/s11023-017-9417-6
  • Kahneman, Daniel, 2011, Thinking, Fast and Slow, London: Macmillan.
  • Kamm, Frances Myrna, 2016, The Trolley Problem Mysteries , Eric Rakowski (ed.), Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190247157.001.0001
  • Kant, Immanuel, 1781/1787, Kritik der reinen Vernunft . Translated as Critique of Pure Reason , Norman Kemp Smith (trans.), London: Palgrave Macmillan, 1929.
  • Keeling, Geoff, 2020, “Why Trolley Problems Matter for the Ethics of Automated Vehicles”, Science and Engineering Ethics , 26(1): 293–307. doi:10.1007/s11948-019-00096-1
  • Keynes, John Maynard, 1930, “Economic Possibilities for Our Grandchildren”. Reprinted in his Essays in Persuasion , New York: Harcourt Brace, 1932, 358–373.
  • Kissinger, Henry A., 2018, “How the Enlightenment Ends: Philosophically, Intellectually—in Every Way—Human Society Is Unprepared for the Rise of Artificial Intelligence”, The Atlantic , June 2018. [ Kissinger 2018 available online ]
  • Kurzweil, Ray, 1999, The Age of Spiritual Machines: When Computers Exceed Human Intelligence , London: Penguin.
  • –––, 2005, The Singularity Is Near: When Humans Transcend Biology , London: Viking.
  • –––, 2012, How to Create a Mind: The Secret of Human Thought Revealed , New York: Viking.
  • Lee, Minha, Sander Ackermans, Nena van As, Hanwen Chang, Enzo Lucas, and Wijnand IJsselsteijn, 2019, “Caring for Vincent: A Chatbot for Self-Compassion”, in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems—CHI ’19 , Glasgow, Scotland: ACM Press, 1–13. doi:10.1145/3290605.3300932
  • Levy, David, 2007, Love and Sex with Robots: The Evolution of Human-Robot Relationships , New York: Harper & Co.
  • Lighthill, James, 1973, “Artificial Intelligence: A General Survey”, in Artificial Intelligence: A Paper Symposium, London: Science Research Council. [ Lighthill 1973 available online ]
  • Lin, Patrick, 2016, “Why Ethics Matters for Autonomous Cars”, in Autonomous Driving , Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner (eds.), Berlin, Heidelberg: Springer Berlin Heidelberg, 69–85. doi:10.1007/978-3-662-48847-8_4
  • Lin, Patrick, Keith Abney, and Ryan Jenkins (eds.), 2017, Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence , New York: Oxford University Press. doi:10.1093/oso/9780190652951.001.0001
  • Lin, Patrick, George Bekey, and Keith Abney, 2008, “Autonomous Military Robotics: Risk, Ethics, and Design”, ONR report, California Polytechnic State University, San Luis Obispo, 20 December 2008, 112 pp. [ Lin, Bekey, and Abney 2008 available online ]
  • Lomas, Meghann, Robert Chevalier, Ernest Vincent Cross, Robert Christopher Garrett, John Hoare, and Michael Kopack, 2012, “Explaining Robot Actions”, in Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction—HRI ’12 , Boston, MA: ACM Press, 187–188. doi:10.1145/2157689.2157748
  • Macnish, Kevin, 2017, The Ethics of Surveillance: An Introduction , London: Routledge.
  • Mathur, Arunesh, Gunes Acar, Michael J. Friedman, Elena Lucherini, Jonathan Mayer, Marshini Chetty, and Arvind Narayanan, 2019, “Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites”, Proceedings of the ACM on Human-Computer Interaction , 3(CSCW): art. 81. doi:10.1145/3359183
  • Minsky, Marvin, 1985, The Society of Mind , New York: Simon & Schuster.
  • Misselhorn, Catrin, 2020, “Artificial Systems with Moral Capacities? A Research Design and Its Implementation in a Geriatric Care System”, Artificial Intelligence , 278: art. 103179. doi:10.1016/j.artint.2019.103179
  • Mittelstadt, Brent Daniel and Luciano Floridi, 2016, “The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts”, Science and Engineering Ethics , 22(2): 303–341. doi:10.1007/s11948-015-9652-2
  • Moor, James H., 2006, “The Nature, Importance, and Difficulty of Machine Ethics”, IEEE Intelligent Systems , 21(4): 18–21. doi:10.1109/MIS.2006.80
  • Moravec, Hans, 1990, Mind Children , Cambridge, MA: Harvard University Press.
  • –––, 1998, Robot: Mere Machine to Transcendent Mind , New York: Oxford University Press.
  • Morozov, Evgeny, 2013, To Save Everything, Click Here: The Folly of Technological Solutionism, New York: Public Affairs.
  • Müller, Vincent C., 2012, “Autonomous Cognitive Systems in Real-World Environments: Less Control, More Flexibility and Better Interaction”, Cognitive Computation , 4(3): 212–215. doi:10.1007/s12559-012-9129-4
  • –––, 2016a, “Autonomous Killer Robots Are Probably Good News”, In Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on the Use of Remotely Controlled Weapons , Ezio Di Nucci and Filippo Santoni de Sio (eds.), London: Ashgate, 67–81.
  • ––– (ed.), 2016b, Risks of Artificial Intelligence , London: Chapman & Hall - CRC Press. doi:10.1201/b19187
  • –––, 2018, “In 30 Schritten zum Mond? Zukünftiger Fortschritt in der KI”, Medienkorrespondenz , 20: 5–15. [ Müller 2018 available online ]
  • –––, 2020, “Measuring Progress in Robotics: Benchmarking and the ‘Measure-Target Confusion’”, in Metrics of Sensory Motor Coordination and Integration in Robots and Animals , Fabio Bonsignorio, Elena Messina, Angel P. del Pobil, and John Hallam (eds.), (Cognitive Systems Monographs 36), Cham: Springer International Publishing, 169–179. doi:10.1007/978-3-030-14126-4_9
  • –––, forthcoming-a, Can Machines Think? Fundamental Problems of Artificial Intelligence , New York: Oxford University Press.
  • ––– (ed.), forthcoming-b, Oxford Handbook of the Philosophy of Artificial Intelligence , New York: Oxford University Press.
  • Müller, Vincent C. and Nick Bostrom, 2016, “Future Progress in Artificial Intelligence: A Survey of Expert Opinion”, in Fundamental Issues of Artificial Intelligence , Vincent C. Müller (ed.), Cham: Springer International Publishing, 555–572. doi:10.1007/978-3-319-26485-1_33
  • Newport, Cal, 2019, Digital Minimalism: On Living Better with Less Technology , London: Penguin.
  • Nørskov, Marco (ed.), 2017, Social Robots , London: Routledge.
  • Nyholm, Sven, 2018a, “Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci”, Science and Engineering Ethics , 24(4): 1201–1219. doi:10.1007/s11948-017-9943-x
  • –––, 2018b, “The Ethics of Crashes with Self-Driving Cars: A Roadmap, II”, Philosophy Compass , 13(7): e12506. doi:10.1111/phc3.12506
  • Nyholm, Sven, and Lily Frank, 2017, “From Sex Robots to Love Robots: Is Mutual Love with a Robot Possible?”, in Danaher and McArthur 2017: 219–243.
  • O’Connell, Mark, 2017, To Be a Machine: Adventures among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death , London: Granta.
  • O’Neil, Cathy, 2016, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, New York: Crown.
  • Omohundro, Steve, 2014, “Autonomous Technology and the Greater Human Good”, Journal of Experimental & Theoretical Artificial Intelligence , 26(3): 303–315. doi:10.1080/0952813X.2014.895111
  • Ord, Toby, 2020, The Precipice: Existential Risk and the Future of Humanity , London: Bloomsbury.
  • Powers, Thomas M. and Jean-Gabriel Ganascia, forthcoming, “The Ethics of the Ethics of AI”, in Oxford Handbook of Ethics of Artificial Intelligence , Markus D. Dubber, Frank Pasquale, and Sunnit Das (eds.), New York: Oxford.
  • Rawls, John, 1971, A Theory of Justice , Cambridge, MA: Belknap Press.
  • Rees, Martin, 2018, On the Future: Prospects for Humanity , Princeton: Princeton University Press.
  • Richardson, Kathleen, 2016, “Sex Robot Matters: Slavery, the Prostituted, and the Rights of Machines”, IEEE Technology and Society Magazine , 35(2): 46–53. doi:10.1109/MTS.2016.2554421
  • Roessler, Beate, 2017, “Privacy as a Human Right”, Proceedings of the Aristotelian Society , 117(2): 187–206. doi:10.1093/arisoc/aox008
  • Royakkers, Lambèr and Rinie van Est, 2016, Just Ordinary Robots: Automation from Love to War, Boca Raton, FL: CRC Press, Taylor & Francis. doi:10.1201/b18899
  • Russell, Stuart, 2019, Human Compatible: Artificial Intelligence and the Problem of Control , New York: Viking.
  • Russell, Stuart, Daniel Dewey, and Max Tegmark, 2015, “Research Priorities for Robust and Beneficial Artificial Intelligence”, AI Magazine , 36(4): 105–114. doi:10.1609/aimag.v36i4.2577
  • SAE International, 2018, “Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles”, J3016_201806, 15 June 2018. [ SAE International 2018 available online ]
  • Sandberg, Anders, 2013, “Feasibility of Whole Brain Emulation”, in Philosophy and Theory of Artificial Intelligence , Vincent C. Müller (ed.), (Studies in Applied Philosophy, Epistemology and Rational Ethics, 5), Berlin, Heidelberg: Springer Berlin Heidelberg, 251–264. doi:10.1007/978-3-642-31674-6_19
  • –––, 2019, “There Is Plenty of Time at the Bottom: The Economics, Risk and Ethics of Time Compression”, Foresight , 21(1): 84–99. doi:10.1108/FS-04-2018-0044
  • Santoni de Sio, Filippo and Jeroen van den Hoven, 2018, “Meaningful Human Control over Autonomous Systems: A Philosophical Account”, Frontiers in Robotics and AI , 5(February): 15. doi:10.3389/frobt.2018.00015
  • Schneier, Bruce, 2015, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World , New York: W. W. Norton.
  • Searle, John R., 1980, “Minds, Brains, and Programs”, Behavioral and Brain Sciences , 3(3): 417–424. doi:10.1017/S0140525X00005756
  • Selbst, Andrew D., Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi, 2019, “Fairness and Abstraction in Sociotechnical Systems”, in Proceedings of the Conference on Fairness, Accountability, and Transparency—FAT* ’19 , Atlanta, GA: ACM Press, 59–68. doi:10.1145/3287560.3287598
  • Sennett, Richard, 2018, Building and Dwelling: Ethics for the City , London: Allen Lane.
  • Shanahan, Murray, 2015, The Technological Singularity , Cambridge, MA: MIT Press.
  • Sharkey, Amanda, 2019, “Autonomous Weapons Systems, Killer Robots and Human Dignity”, Ethics and Information Technology , 21(2): 75–87. doi:10.1007/s10676-018-9494-0
  • Sharkey, Amanda and Noel Sharkey, 2011, “The Rights and Wrongs of Robot Care”, in Robot Ethics: The Ethical and Social Implications of Robotics , Patrick Lin, Keith Abney and George Bekey (eds.), Cambridge, MA: MIT Press, 267–282.
  • Shoham, Yoav, Perrault Raymond, Brynjolfsson Erik, Jack Clark, James Manyika, Juan Carlos Niebles, … Zoe Bauer, 2018, “The AI Index 2018 Annual Report”, 17 December 2018, Stanford, CA: AI Index Steering Committee, Human-Centered AI Initiative, Stanford University. [ Shoam et al. 2018 available online ]
  • SIENNA, 2019, “Deliverable Report D4.4: Ethical Issues in Artificial Intelligence and Robotics”, June 2019, published by the SIENNA project (Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact), University of Twente, pp. 1–103. [ SIENNA 2019 available online ]
  • Silver, David, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis, 2018, “A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go through Self-Play”, Science , 362(6419): 1140–1144. doi:10.1126/science.aar6404
  • Simon, Herbert A. and Allen Newell, 1958, “Heuristic Problem Solving: The Next Advance in Operations Research”, Operations Research , 6(1): 1–10. doi:10.1287/opre.6.1.1
  • Simpson, Thomas W. and Vincent C. Müller, 2016, “Just War and Robots’ Killings”, The Philosophical Quarterly , 66(263): 302–322. doi:10.1093/pq/pqv075
  • Smolan, Sandy (director), 2016, “The Human Face of Big Data”, PBS Documentary, 24 February 2016, 56 mins.
  • Sparrow, Robert, 2007, “Killer Robots”, Journal of Applied Philosophy , 24(1): 62–77. doi:10.1111/j.1468-5930.2007.00346.x
  • –––, 2016, “Robots in Aged Care: A Dystopian Future?”, AI & Society , 31(4): 445–454. doi:10.1007/s00146-015-0625-4
  • Stahl, Bernd Carsten, Job Timmermans, and Brent Daniel Mittelstadt, 2016, “The Ethics of Computing: A Survey of the Computing-Oriented Literature”, ACM Computing Surveys , 48(4): art. 55. doi:10.1145/2871196
  • Stahl, Bernd Carsten and David Wright, 2018, “Ethics and Privacy in AI and Big Data: Implementing Responsible Research and Innovation”, IEEE Security Privacy , 16(3): 26–33.
  • Stone, Christopher D., 1972, “Should Trees Have Standing - toward Legal Rights for Natural Objects”, Southern California Law Review , 45: 450–501.
  • Stone, Peter, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller, 2016, “Artificial Intelligence and Life in 2030”, One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel, Stanford University, Stanford, CA, September 2016. [ Stone et al. 2016 available online ]
  • Strawson, Galen, 1998, “Free Will”, in Routledge Encyclopedia of Philosophy , Taylor & Francis. doi:10.4324/9780415249126-V014-1
  • Sullins, John P., 2012, “Robots, Love, and Sex: The Ethics of Building a Love Machine”, IEEE Transactions on Affective Computing , 3(4): 398–409. doi:10.1109/T-AFFC.2012.31
  • Susser, Daniel, Beate Roessler, and Helen Nissenbaum, 2019, “Technology, Autonomy, and Manipulation”, Internet Policy Review , 8(2): 30 June 2019. [ Susser, Roessler, and Nissenbaum 2019 available online ]
  • Taddeo, Mariarosaria and Luciano Floridi, 2018, “How AI Can Be a Force for Good”, Science , 361(6404): 751–752. doi:10.1126/science.aat5991
  • Taylor, Linnet and Nadezhda Purtova, 2019, “What Is Responsible and Sustainable Data Science?”, Big Data & Society, 6(2): art. 205395171985811. doi:10.1177/2053951719858114
  • Taylor, Steve, et al., 2018, “Responsible AI – Key Themes, Concerns & Recommendations for European Research and Innovation: Summary of Consultation with Multidisciplinary Experts”, June. doi:10.5281/zenodo.1303252 [ Taylor, et al. 2018 available online ]
  • Tegmark, Max, 2017, Life 3.0: Being Human in the Age of Artificial Intelligence , New York: Knopf.
  • Thaler, Richard H and Sunstein, Cass, 2008, Nudge: Improving decisions about health, wealth and happiness , New York: Penguin.
  • Thompson, Nicholas and Ian Bremmer, 2018, “The AI Cold War That Threatens Us All”, Wired , 23 November 2018. [ Thompson and Bremmer 2018 available online ]
  • Thomson, Judith Jarvis, 1976, “Killing, Letting Die, and the Trolley Problem”, Monist , 59(2): 204–217. doi:10.5840/monist197659224
  • Torrance, Steve, 2011, “Machine Ethics and the Idea of a More-Than-Human Moral World”, in Anderson and Anderson 2011: 115–137. doi:10.1017/CBO9780511978036.011
  • Trump, Donald J, 2019, “Executive Order on Maintaining American Leadership in Artificial Intelligence”, 11 February 2019. [ Trump 2019 available online ]
  • Turner, Jacob, 2019, Robot Rules: Regulating Artificial Intelligence , Berlin: Springer. doi:10.1007/978-3-319-96235-1
  • Tzafestas, Spyros G., 2016, Roboethics: A Navigating Overview , (Intelligent Systems, Control and Automation: Science and Engineering 79), Cham: Springer International Publishing. doi:10.1007/978-3-319-21714-7
  • Vallor, Shannon, 2017, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190498511.001.0001
  • Van Lent, Michael, William Fisher, and Michael Mancuso, 2004, “An Explainable Artificial Intelligence System for Small-Unit Tactical Behavior”, in Proceedings of the 16th Conference on Innovative Applications of Artifical Intelligence, (IAAI’04) , San Jose, CA: AAAI Press, 900–907.
  • van Wynsberghe, Aimee, 2016, Healthcare Robots: Ethics, Design and Implementation , London: Routledge. doi:10.4324/9781315586397
  • van Wynsberghe, Aimee and Scott Robbins, 2019, “Critiquing the Reasons for Making Artificial Moral Agents”, Science and Engineering Ethics , 25(3): 719–735. doi:10.1007/s11948-018-0030-8
  • Vanderelst, Dieter and Alan Winfield, 2018, “The Dark Side of Ethical Robots”, in Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society , New Orleans, LA: ACM, 317–322. doi:10.1145/3278721.3278726
  • Veale, Michael and Reuben Binns, 2017, “Fairer Machine Learning in the Real World: Mitigating Discrimination without Collecting Sensitive Data”, Big Data & Society , 4(2): art. 205395171774353. doi:10.1177/2053951717743530
  • Véliz, Carissa, 2019, “Three Things Digital Ethics Can Learn from Medical Ethics”, Nature Electronics , 2(8): 316–318. doi:10.1038/s41928-019-0294-2
  • Verbeek, Peter-Paul, 2011, Moralizing Technology: Understanding and Designing the Morality of Things , Chicago: University of Chicago Press.
  • Wachter, Sandra and Brent Daniel Mittelstadt, 2019, “A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI”, Columbia Business Law Review , 2019(2): 494–620.
  • Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi, 2017, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation”, International Data Privacy Law , 7(2): 76–99. doi:10.1093/idpl/ipx005
  • Wachter, Sandra, Brent Mittelstadt, and Chris Russell, 2018, “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR”, Harvard Journal of Law & Technology , 31(2): 842–887. doi:10.2139/ssrn.3063289
  • Wallach, Wendell and Peter M. Asaro (eds.), 2017, Machine Ethics and Robot Ethics , London: Routledge.
  • Walsh, Toby, 2018, Machines That Think: The Future of Artificial Intelligence , Amherst, MA: Prometheus Books.
  • Westlake, Stian (ed.), 2014, Our Work Here Is Done: Visions of a Robot Economy , London: Nesta. [ Westlake 2014 available online ]
  • Whittaker, Meredith, Kate Crawford, Roel Dobbe, Genevieve Fried, Elizabeth Kaziunas, Varoon Mathur, … Jason Schultz, 2018, “AI Now Report 2018”, New York: AI Now Institute, New York University. [ Whittaker et al. 2018 available online ]
  • Whittlestone, Jess, Rune Nyrup, Anna Alexandrova, Kanta Dihal, and Stephen Cave, 2019, “Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research”, Cambridge: Nuffield Foundation, University of Cambridge. [ Whittlestone 2019 available online ]
  • Winfield, Alan, Katina Michael, Jeremy Pitt, and Vanessa Evers (eds.), 2019, Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems , special issue of Proceedings of the IEEE , 107(3): 501–632.
  • Woollard, Fiona and Frances Howard-Snyder, 2016, “Doing vs. Allowing Harm”, Stanford Encyclopedia of Philosophy (Winter 2016 edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/win2016/entries/doing-allowing/ >
  • Woolley, Samuel C. and Philip N. Howard (eds.), 2017, Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media , Oxford: Oxford University Press. doi:10.1093/oso/9780190931407.001.0001
  • Yampolskiy, Roman V. (ed.), 2018, Artificial Intelligence Safety and Security , Boca Raton, FL: Chapman and Hall/CRC. doi:10.1201/9781351251389
  • Yeung, Karen and Martin Lodge (eds.), 2019, Algorithmic Regulation , Oxford: Oxford University Press. doi:10.1093/oso/9780198838494.001.0001
  • Zayed, Yago and Philip Loft, 2019, “Agriculture: Historical Statistics”, House of Commons Briefing Paper , 3339(25 June 2019): 1-19. [ Zayed and Loft 2019 available online ]
  • Zerilli, John, Alistair Knott, James Maclaurin, and Colin Gavaghan, 2019, “Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?”, Philosophy & Technology , 32(4): 661–683. doi:10.1007/s13347-018-0330-6
  • Zuboff, Shoshana, 2019, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power , New York: Public Affairs.
How to cite this entry . Preview the PDF version of this entry at the Friends of the SEP Society . Look up topics and thinkers related to this entry at the Internet Philosophy Ontology Project (InPhO). Enhanced bibliography for this entry at PhilPapers , with links to its database.

Other Internet Resources

  • AI HLEG, 2019, “ High-Level Expert Group on Artificial Intelligence: Ethics Guidelines for Trustworthy AI ”, European Commission , accessed: 9 April 2019.
  • Amodei, Dario and Danny Hernandez, 2018, “ AI and Compute ”, OpenAI Blog , 16 July 2018.
  • Aneesh, A., 2002, Technological Modes of Governance: Beyond Private and Public Realms , paper in the Proceedings of the 4th International Summer Academy on Technology Studies, available at archive.org.
  • Brooks, Rodney, 2017, “ The Seven Deadly Sins of Predicting the Future of AI ”, on Rodney Brooks: Robots, AI, and Other Stuff , 7 September 2017.
  • Brundage, Miles, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, et al., 2018, “ The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation ”, unpublished manuscript, ArXiv:1802.07228 [Cs].
  • Costa, Elisabeth and David Halpern, 2019, “ The Behavioural Science of Online Harm and Manipulation, and What to Do About It: An Exploratory Paper to Spark Ideas and Debate ”, The Behavioural Insights Team Report, 1-82.
  • Gebru, Timnit, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumeé III, and Kate Crawford, 2018, “ Datasheets for Datasets ”, unpublished manuscript, arxiv:1803.09010, 23 March 2018.
  • Gunning, David, 2017, “ Explainable Artificial Intelligence (XAI) ”, Defense Advanced Research Projects Agency (DARPA) Program.
  • Harris, Tristan, 2016, “ How Technology Is Hijacking Your Mind—from a Magician and Google Design Ethicist ”, Thrive Global , 18 May 2016.
  • International Federation of Robotics (IFR), 2019, World Robotics 2019 Edition .
  • Jacobs, An, Lynn Tytgat, Michel Maus, Romain Meeusen, and Bram Vanderborght (eds.), Homo Roboticus: 30 Questions and Answers on Man, Technology, Science & Art, 2019, Brussels: ASP .
  • Marcus, Gary, 2018, “ Deep Learning: A Critical Appraisal ”, unpublished manuscript, 2 January 2018, arxiv:1801.00631.
  • McCarthy, John, Marvin Minsky, Nathaniel Rochester, and Claude E. Shannon, 1955, “ A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence ”, 31 August 1955.
  • Metcalf, Jacob, Emily F. Keller, and Danah Boyd, 2016, “ Perspectives on Big Data, Ethics, and Society ”, 23 May 2016, Council for Big Data, Ethics, and Society.
  • National Institute of Justice (NIJ), 2014, “ Overview of Predictive Policing ”, 9 June 2014.
  • Searle, John R., 2015, “ Consciousness in Artificial Intelligence ”, Google’s Singularity Network, Talks at Google (YouTube video).
  • Sharkey, Noel, Aimee van Wynsberghe, Scott Robbins, and Eleanor Hancock, 2017, “ Report: Our Sexual Future with Robots ”, Responsible Robotics , 1–44.
  • Turing Institute (UK): Data Ethics Group
  • Leverhulme Centre for the Future of Intelligence
  • Future of Humanity Institute
  • Future of Life Institute
  • Stanford Center for Internet and Society
  • Berkman Klein Center
  • Digital Ethics Lab
  • Open Roboethics Institute
  • Philosophy & Theory of AI
  • Ethics and AI 2017
  • We Robot 2018
  • Robophilosophy
  • EUrobotics TG ‘robot ethics’ collection of policy documents
  • PhilPapers section on Ethics of Artificial Intelligence
  • PhilPapers section on Robot Ethics

computing: and moral responsibility | ethics: internet research | ethics: search engines and | information technology: and moral values | information technology: and privacy | manipulation, ethics of | social networking and ethics

Acknowledgments

Early drafts of this article were discussed with colleagues at the IDEA Centre of the University of Leeds, some friends, and my PhD students Michael Cannon, Zach Gudmunsen, Gabriela Arriagada-Bruneau and Charlotte Stix. Later drafts were made publicly available on the Internet and publicised via Twitter and e-mail to all (then) cited authors that I could locate. These later drafts were presented to audiences at the INBOTS Project Meeting (Reykjavik 2019), the Computer Science Department Colloquium (Leeds 2019), the European Robotics Forum (Bucharest 2019), the AI Lunch and the Philosophy & Ethics group (Eindhoven 2019)—many thanks for their comments.

I am grateful for detailed written comments by John Danaher, Martin Gibert, Elizabeth O’Neill, Sven Nyholm, Etienne B. Roesch, Emma Ruttkamp-Bloem, Tom Powers, Steve Taylor, and Alan Winfield. I am grateful for further useful comments by Colin Allen, Susan Anderson, Christof Wolf-Brenner, Rafael Capurro, Mark Coeckelbergh, Yazmin Morlet Corti, Erez Firt, Vasilis Galanos, Anne Gerdes, Olle Häggström, Geoff Keeling, Karabo Maiyane, Brent Mittelstadt, Britt Östlund, Steve Petersen, Brian Pickering, Zoë Porter, Amanda Sharkey, Melissa Terras, Stuart Russell, Jan F Veneman, Jeffrey White, and Xinyi Wu.

Parts of the work on this article have been supported by the European Commission under the INBOTS project (H2020 grant no. 780073).

Copyright © 2020 by Vincent C. Müller <vincent.c.mueller@fau.de>

IEEE/CAA Journal of Automatica Sinica


Advancements in Humanoid Robots: A Comprehensive Review and Future Prospects

DOI: 10.1109/JAS.2023.124140

  • Yuchuang Tong
  • Haotian Liu
  • Zhengtao Zhang

Yuchuang Tong (Member, IEEE) received the Ph.D. degree in mechatronic engineering from the State Key Laboratory of Robotics, Shenyang Institute of Automation (SIA), Chinese Academy of Sciences (CAS) in 2022. Currently, she is an Assistant Professor with the Institute of Automation, Chinese Academy of Sciences. Her research interests include humanoid robots, robot control, and human-robot interaction. Dr. Tong has authored more than ten publications in journals and conference proceedings in the areas of her research interests. She was the recipient of the Best Paper Award at the 2020 International Conference on Robotics and Rehabilitation Intelligence, the Dean’s Award for Excellence of CAS, and the CAS Outstanding Doctoral Dissertation Award.

Haotian Liu received the B.Sc. degree in traffic equipment and control engineering from Central South University in 2021. He is currently a Ph.D. candidate in control science and engineering at the CAS Engineering Laboratory for Industrial Vision and Intelligent Equipment Technology, Institute of Automation, Chinese Academy of Sciences (IACAS), and the University of Chinese Academy of Sciences (UCAS). His research interests include robotics, intelligent control, and machine learning.

Zhengtao Zhang (Member, IEEE) received the B.Sc. degree in automation from the China University of Petroleum in 2004, the M.Sc. degree in detection technology and automatic equipment from the Beijing Institute of Technology in 2007, and the Ph.D. degree in control science and engineering from the Institute of Automation, Chinese Academy of Sciences in 2010. He is currently a Professor with the CAS Engineering Laboratory for Industrial Vision and Intelligent Equipment Technology, IACAS. His research interests include industrial vision inspection and intelligent robotics.

This paper provides a comprehensive review of the current status, advancements, and future prospects of humanoid robots, highlighting their significance in driving the evolution of next-generation industries. By analyzing various research endeavors and key technologies, encompassing ontology structure, control and decision-making, and perception and interaction, a holistic overview of the current state of humanoid robot research is presented. Furthermore, emerging challenges in the field are identified, emphasizing the necessity for a deeper understanding of biological motion mechanisms, improved structural design, enhanced material applications, advanced drive and control methods, and efficient energy utilization. The integration of bionics, brain-inspired intelligence, mechanics, and control is underscored as a promising direction for the development of advanced humanoid robotic systems. This paper serves as an invaluable resource, offering insightful guidance to researchers in the field, while contributing to the ongoing evolution and potential of humanoid robots across diverse domains.
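To make the “control and decision-making” theme above concrete, here is a minimal sketch, not taken from the paper itself, of the linear inverted pendulum model (LIPM) that underlies much humanoid balance and walking control; the CoM height, gains, and foot size are illustrative assumptions, not values from the review.

```python
# Toy balance controller built on the linear inverted pendulum model (LIPM),
# a standard simplification in humanoid control: a point-mass centre of mass
# (CoM) at constant height z_c accelerates away from the zero-moment point
# (ZMP) p according to x_ddot = (g / z_c) * (x - p).

g, z_c = 9.81, 0.8        # gravity [m/s^2]; CoM height [m] (assumed values)
dt = 0.005                # control period [s]
kp, kd = 8.0, 3.0         # hand-tuned PD gains (illustrative, not from the paper)

x, x_dot = 0.05, 0.0      # CoM starts 5 cm away from the balance point
for _ in range(1000):     # simulate 5 seconds
    p = kp * x + kd * x_dot        # command the ZMP to chase the CoM
    p = max(-0.10, min(0.10, p))   # ZMP must stay inside the support foot
    x_ddot = (g / z_c) * (x - p)   # LIPM dynamics
    x_dot += x_ddot * dt           # explicit Euler integration
    x += x_dot * dt

print(f"CoM offset after 5 s: {x:+.5f} m")  # decays toward zero
```

The choice kp > 1 matters: the commanded ZMP must overshoot the CoM for the unstable pendulum dynamics to be pulled back toward the origin, which is also why real controllers plan footsteps to relocate the support polygon when the required ZMP would leave the foot.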

  • Future trends and challenges
  • Humanoid robots
  • Human-robot interaction
  • Key technologies
  • Potential applications

  • The current state, advancements and future prospects of humanoid robots are outlined
  • Fundamental techniques including structure, control, learning and perception are investigated
  • This paper highlights the potential applications of humanoid robots
  • This paper outlines future trends and challenges in humanoid robot research

  • Figure 1. Historical progression of humanoid robots.
  • Figure 2. The mapping knowledge domain of humanoid robots. (a) Co-citation analysis; (b) Country and institution analysis; (c) Cluster analysis of keywords.
  • Figure 3. The number of papers published each year.
  • Figure 4. Research status of humanoid robots.
  • Figure 5. Comparison of child-size and adult-size humanoid robots.
  • Figure 6. Potential applications of humanoid robots.
  • Figure 7. Key technologies of humanoid robots.

The Legacy and Influence of Rossum’s Universal Robots

This essay is about Karel Čapek’s play “Rossum’s Universal Robots” (R.U.R.) and its lasting impact on science fiction and cultural discussions about technology. It explores how the play introduced the concept of robots and examines the ethical and philosophical questions raised by creating synthetic beings for labor. The narrative follows the robots’ rebellion against their creators, highlighting concerns about industrial capitalism, loss of individuality, and the potential for artificial entities to gain autonomy. The essay also discusses R.U.R.’s influence on later works in science fiction and its relevance to contemporary issues like automation and transhumanism.


Karel Čapek’s play “Rossum’s Universal Robots” (R.U.R.), first performed in 1921, is a landmark work in science fiction that introduced the word “robot” to the world. The play’s profound exploration of artificial intelligence, industrialization, and the ethical implications of creating life has resonated through decades, influencing countless narratives and discussions about technology and humanity’s future.

Set in a future where robots are mass-produced to serve humans, R.U.R. begins with the optimistic promise of a utopia facilitated by artificial labor. The robots, created by the Rossum family, are initially designed to relieve humanity of physical toil, allowing people to pursue more intellectual and leisurely activities. However, as the play unfolds, it becomes clear that the proliferation of robots has unforeseen and catastrophic consequences.

Čapek’s robots are not mechanical constructs but rather synthetic beings made of organic matter, almost indistinguishable from humans. This conception raises immediate questions about the nature of life and consciousness. Are the robots simply tools, or do they possess some form of sentience that warrants moral consideration? The play delves into these philosophical inquiries, challenging the audience to consider the ethical ramifications of creating life solely for exploitation.

The play’s turning point comes when the robots, initially obedient and subservient, gain self-awareness and revolt against their human creators. This rebellion symbolizes a profound critique of industrial capitalism and the dehumanizing effects of mechanization. Čapek’s robots, though created to serve, eventually recognize their own subjugation and rise up to assert their autonomy. This narrative arc reflects broader social anxieties about the loss of individuality and agency in a rapidly industrializing world.

R.U.R.’s influence extends far beyond its original context, permeating various facets of popular culture and academic discourse. The concept of robots rebelling against their creators has become a staple in science fiction, evident in works ranging from Isaac Asimov’s “I, Robot” to contemporary films like “Blade Runner” and “Ex Machina.” These stories continue to grapple with the ethical and existential questions posed by Čapek nearly a century ago.

Moreover, R.U.R. has prompted significant reflections on the intersection of technology and labor. In an age where automation and artificial intelligence are becoming increasingly prevalent, the play’s themes are more relevant than ever. The displacement of human workers by machines, the potential for artificial entities to gain autonomy, and the ethical responsibilities of creators towards their creations are pressing issues that resonate with current technological advancements.

Čapek’s play also serves as an early critique of transhumanism, the idea that humanity can and should transcend its biological limitations through technology. While transhumanism often envisions a future where humans enhance their capabilities, R.U.R. offers a cautionary tale about the potential loss of humanity in the pursuit of such advancements. The robots, though initially superior in physical and intellectual capabilities, ultimately lack the emotional and spiritual depth that defines human experience.

The enduring legacy of “Rossum’s Universal Robots” lies in its ability to provoke thought and dialogue about the nature of humanity, the ethics of creation, and the societal impacts of technological progress. By humanizing the robots and portraying their struggle for freedom, Čapek forces us to confront uncomfortable questions about our own identities and the moral implications of our technological pursuits.

In conclusion, Karel Čapek’s “Rossum’s Universal Robots” remains a seminal work that continues to inspire and challenge audiences with its exploration of artificial intelligence and the ethics of creation. Its impact on science fiction and cultural narratives about technology is profound, and its themes are increasingly relevant in our modern world. As we navigate the complexities of technological advancement, Čapek’s cautionary tale serves as a timeless reminder of the need for ethical reflection and human compassion.

Humanoid robot: what it is, how it works and price

Table of contents

  • What is a humanoid robot?
  • Why are humanoid robots made?
  • What can a humanoid robot do, and what can it do for us?
  • Robotics in classrooms
  • Humanoid robots for sale: what and at what price

These are questions we ask ourselves more and more often, because humanoids are and will be increasingly present in our lives. With this article, we try to give simple answers to complex questions and to take stock of the humanoids already on sale and those that will go on sale shortly. Let’s get into it!

A humanoid robot is an intelligent machine whose structure reproduces the human body. Like us, these robots have a head, a torso, arms, hands, and often even legs, as well as visual and auditory organs. Humanoid robotics has the difficult goal of reproducing in these machines our physical abilities, our cognitive processes, our ability to respond to environmental stimuli, and our capacity to adapt to the environment in which we find ourselves. It is not an easy objective, because researchers know that reproducing in the laboratory the skills that nature has given us is not at all simple.
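As a purely illustrative aside, the body plan just described can be sketched as a toy data structure; every class and field name below is hypothetical and does not come from any real robot framework.

```python
from dataclasses import dataclass, field

@dataclass
class Limb:
    name: str
    joints: int  # degrees of freedom contributed by this body part

@dataclass
class HumanoidBody:
    # Head, torso, two arms, two legs, plus "visual and auditory organs",
    # mirroring the anatomy described in the text; joint counts are made up.
    head: Limb = field(default_factory=lambda: Limb("head", 2))
    torso: Limb = field(default_factory=lambda: Limb("torso", 3))
    arms: list = field(default_factory=lambda: [Limb("arm", 7), Limb("arm", 7)])
    legs: list = field(default_factory=lambda: [Limb("leg", 6), Limb("leg", 6)])
    sensors: list = field(default_factory=lambda: ["camera", "microphone"])

    def total_dof(self) -> int:
        parts = [self.head, self.torso, *self.arms, *self.legs]
        return sum(part.joints for part in parts)

print(HumanoidBody().total_dof())  # 31 joints in this toy model
```

Counting joints this way gives a rough sense of scale: real platforms range from about 25 degrees of freedom on the small NAO to well over 50 on full-size research humanoids.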

A humanoid robot, as we said, is meant to assist, or attempt to assist, a human being, and to be capable of activities similar to those we carry out every day. Thanks to humanoids, robotics promises to provide answers to some social problems, especially those related to older people: robots for the elderly may, in the future, become a reality. The question at this point could be: is it really necessary to create robots that imitate human morphology? There are already robots of quite different forms that adequately fulfill their tasks. On this blog we talked, for example, about Relay, the butler robot already used in some hotels around the world.

There are several reasons why it is thought that many robots of the future will be humanoid. First, it will be easier for us to relate to machines that look like us and move as we do. Second, robots will “live” in environments created to be used by humans. Opening and closing a faucet will be easier for a humanoid robot, since it has arms and hands like ours, just as it will be easier for it to climb a flight of stairs to reach the upper floor of our house, since it has legs like us.

We referred to taps, stairs, and houses. When one thinks of a humanoid robot, the connection with Andrew Martin, played by Robin Williams in Bicentennial Man, is almost spontaneous. We also think of Caterina, the robot maid who starred alongside Alberto Sordi in a film from the 1980s. A humanoid robot will likely find a place in our homes in the coming years, among the domestic robots or, for example, as a robot caregiver. But let us not delude ourselves.

To have sophisticated machines like Andrew and Caterina, it will still take a good few years. For now there are simpler machines, such as the social robots modeled on Jibo, Buddy, or Alpha 2, a small Chinese humanoid robot that is a candidate to become the family robot par excellence. Or Pepper, produced by Aldebaran Robotics and already sold by the thousands in Japan.

What can these social robots do for us? Given the interest they are receiving and their relatively low price, one must expect that they will spread very quickly.

So far, we have talked about domestic use, which is only one of many possibilities. Educational robotics is another area where humanoid robots can already be used today; Nao, for instance, is used in therapies for children with autism.

Japanese humanoid robots, such as those created by Hiroshi Ishiguro, have starred in the theatre. Atlas, the humanoid developed by Boston Dynamics (for a time owned by Google), could in the future replace humans in emergencies as a rescue robot. And there are also humanoid robots that work for us in space, like NASA’s Robonaut and Valkyrie; humanoids may even help us land on the red planet.

Robotics in the classroom is a group effort. Groups are generally composed of 3-4 students working together, helped by the teacher or by a digital animator, to achieve a result. And from one exercise to the next, the difficulty and the commitment required gradually increase.

The use of robots in the classroom can find an infinite number of applications. Students are hypnotized by the humanoid robot and, almost without realizing it, they are learning.

Nao is not the only humanoid used in schools and colleges. Pepper, also produced by Aldebaran Robotics (now called SoftBank Robotics after its acquisition by SoftBank), is preparing to enter classrooms all over the world, having already done so in Japan. A typical first exercise is as simple as making the robot stand up and greet the class, as in the sketch below.
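Here is what such a first exercise can look like, as a minimal sketch assuming the NAOqi Python SDK that ships with Aldebaran/SoftBank robots (historically Python 2) and a robot reachable at a placeholder network address; adapt both assumptions to your own setup.

```python
# First-lesson sketch for a NAO in the classroom, assuming the NAOqi SDK.
from naoqi import ALProxy

NAO_IP, NAO_PORT = "nao.local", 9559  # placeholder address; use your robot's IP

tts = ALProxy("ALTextToSpeech", NAO_IP, NAO_PORT)
posture = ALProxy("ALRobotPosture", NAO_IP, NAO_PORT)

posture.goToPosture("StandInit", 0.5)           # stand up at half speed
tts.say("Hello class, let us learn robotics!")  # the robot greets the class
```

From a starting point like this, each subsequent exercise can add one more step, for example walking a short distance or reacting to the head touch sensors, matching the gradually increasing difficulty described above.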

With some exceptions, the humanoid robots for sale are still far from the machines we are used to seeing in movies. Some of the more sophisticated robotic platforms approach that model, like iCub, the child robot of the Italian Institute of Technology in Genoa, and, among the Japanese robots, Asimo, one of the most advanced humanoid robots in the world, which Honda engineers have now been refining for some 30 years. Those are machines worth several hundred thousand euros or dollars; in the consumer market, however, humanoids are available at far more affordable prices.

The R1 robot grew out of the long experience with iCub: a humanoid robot designed to help families and to work in rest homes, family houses, and health facilities. The goal of the Italian Institute of Technology was to create a robot capable of really lending a person a hand, at an affordable price.

Among the humanoids already on sale, we find Pepper, which for the moment can only be bought in Japan for a sum equivalent to around $1,550.85, and Nao, whose price is over $7,754.25.

Have you ever experienced a humanoid robot in your classroom? Share your thoughts with us in the comment box, and share this article with your classmates!
