
NEWS FEATURE | 28 May 2024

The AI revolution is coming to robots: how will it change them?

  • Elizabeth Gibney


Humanoid robots developed by the US company Figure use OpenAI programming for language and vision. Credit: AP Photo/Jae C. Hong/Alamy

For a generation of scientists raised watching Star Wars, there’s a disappointing lack of C-3PO-like droids wandering around our cities and homes. Where are the humanoid robots fuelled with common sense that can help around the house and workplace?

Rapid advances in artificial intelligence (AI) might be set to fill that hole. “I wouldn’t be surprised if we are the last generation for which those sci-fi scenes are not a reality,” says Alexander Khazatsky, a machine-learning and robotics researcher at Stanford University in California.

From OpenAI to Google DeepMind, almost every big technology firm with AI expertise is now working on bringing the versatile learning algorithms that power chatbots, known as foundation models, to robotics. The idea is to imbue robots with common-sense knowledge, letting them tackle a wide range of tasks. Many researchers think that robots could become really good, really fast. “We believe we are at the point of a step change in robotics,” says Gerard Andrews, a marketing manager focused on robotics at technology company Nvidia in Santa Clara, California, which in March launched a general-purpose AI model designed for humanoid robots.

At the same time, robots could help to improve AI. Many researchers hope that bringing an embodied experience to AI training could take them closer to the dream of ‘artificial general intelligence’ — AI that has human-like cognitive abilities across any task. “The last step to true intelligence has to be physical intelligence,” says Akshara Rai, an AI researcher at Meta in Menlo Park, California.

But although many researchers are excited about the latest injection of AI into robotics, they also caution that some of the more impressive demonstrations are just that — demonstrations, often by companies that are eager to generate buzz. It can be a long road from demonstration to deployment, says Rodney Brooks, a roboticist at the Massachusetts Institute of Technology in Cambridge, whose company iRobot invented the Roomba autonomous vacuum cleaner.

There are plenty of hurdles on this road, including scraping together enough of the right data for robots to learn from, dealing with temperamental hardware and tackling concerns about safety. Foundation models for robotics “should be explored”, says Harold Soh, a specialist in human–robot interactions at the National University of Singapore. But he is sceptical, he says, that this strategy will lead to the revolution in robotics that some researchers predict.

Firm foundations

The term robot covers a wide range of automated devices, from the robotic arms widely used in manufacturing, to self-driving cars and drones used in warfare and rescue missions. Most incorporate some sort of AI — to recognize objects, for example. But they are also programmed to carry out specific tasks, work in particular environments or rely on some level of human supervision, says Joyce Sidopoulos, co-founder of MassRobotics, an innovation hub for robotics companies in Boston, Massachusetts. Even Atlas — a robot made by Boston Dynamics, a robotics company in Waltham, Massachusetts, which famously showed off its parkour skills in 2018 — works by carefully mapping its environment and choosing the best actions to execute from a library of built-in templates.

For most AI researchers branching into robotics, the goal is to create something much more autonomous and adaptable across a wider range of circumstances. This might start with robot arms that can ‘pick and place’ any factory product, but evolve into humanoid robots that provide company and support for older people, for example. “There are so many applications,” says Sidopoulos.

The human form is complicated and not always optimized for specific physical tasks, but it has the huge benefit of being perfectly suited to the world that people have built. A human-shaped robot would be able to physically interact with the world in much the same way that a person does.

However, controlling any robot — let alone a human-shaped one — is incredibly hard. Apparently simple tasks, such as opening a door, are actually hugely complex, requiring a robot to understand how different door mechanisms work, how much force to apply to a handle and how to maintain balance while doing so. The real world is extremely varied and constantly changing.

The approach now gathering steam is to control a robot using the same type of AI foundation models that power image generators and chatbots such as ChatGPT. These models use brain-inspired neural networks to learn from huge swathes of generic data. They build associations between elements of their training data and, when asked for an output, tap these connections to generate appropriate words or images, often with uncannily good results.

Likewise, a robot foundation model is trained on text and images from the Internet, providing it with information about the nature of various objects and their contexts. It also learns from examples of robotic operations. It can be trained, for example, on videos of robot trial and error, or videos of robots that are being remotely operated by humans, alongside the instructions that pair with those actions. A trained robot foundation model can then observe a scenario and use its learnt associations to predict what action will lead to the best outcome.
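The training recipe described above can be caricatured in a few lines of Python. This is a deliberately tiny, hypothetical sketch: all names and data are invented, and a nearest-neighbour lookup stands in for the neural network. The point it illustrates is only the data layout, in which each demonstration pairs an observation and an instruction with the action taken, so that a "policy" can predict an action for a new scenario by analogy with its training examples.

```python
# Toy illustration of the demonstration data a robot foundation model
# learns from. Everything here is hypothetical; a real model would be a
# large neural network, not a lookup over stored examples.
from dataclasses import dataclass

@dataclass
class Demonstration:
    observation: tuple  # e.g. an embedded camera frame, here 2 numbers
    instruction: str    # e.g. the command paired with the recording
    action: str         # e.g. a low-level motor-command label

class NearestDemoPolicy:
    """Stand-in for a learned policy: return the action from the most
    similar stored demonstration (closest observation, matching
    instruction). A trained foundation model generalizes instead of
    looking things up, but consumes the same (obs, instr, action) triples."""
    def __init__(self, demos):
        self.demos = demos

    def act(self, observation, instruction):
        def score(d):
            obs_dist = sum((a - b) ** 2 for a, b in zip(d.observation, observation))
            instr_mismatch = 0 if d.instruction == instruction else 1
            return obs_dist + instr_mismatch
        return min(self.demos, key=score).action

demos = [
    Demonstration((0.0, 0.0), "pick up the can", "close_gripper"),
    Demonstration((1.0, 1.0), "open the drawer", "pull_handle"),
]
policy = NearestDemoPolicy(demos)
print(policy.act((0.1, 0.1), "pick up the can"))  # prints "close_gripper"
```

The lookup deliberately mixes visual similarity with instruction matching, mirroring the idea that the model conditions its action prediction on both what it sees and what it is told.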

Google DeepMind has built one of the most advanced robotic foundation models, known as Robotic Transformer 2 (RT-2), which can operate a mobile robot arm built by its sister company Everyday Robots in Mountain View, California. Like other robotic foundation models, it was trained on both Internet data and videos of robotic operation. Thanks to the online training, RT-2 can follow instructions even when those commands go beyond anything the robot has seen another robot do before [1]. For example, it can move a drink can onto a picture of Taylor Swift when asked to do so — even though Swift’s image was not in any of the 130,000 demonstrations that RT-2 had been trained on.

In other words, knowledge gleaned from Internet trawling (such as what the singer Taylor Swift looks like) is being carried over into the robot’s actions. “A lot of Internet concepts just transfer,” says Keerthana Gopalakrishnan, an AI and robotics researcher at Google DeepMind in San Francisco, California. This radically reduces the amount of physical data that a robot needs to have absorbed to cope in different situations, she says.

But to fully understand the basics of movements and their consequences, robots still need to learn from lots of physical data. And therein lies a problem.

Data dearth

Although chatbots are being trained on billions of words from the Internet, there is no equivalently large data set for robotic activity. This lack of data has left robotics “in the dust”, says Khazatsky.

Pooling data is one way around this. Khazatsky and his colleagues have created DROID [2], an open-source data set that brings together around 350 hours of video data from one type of robot arm (the Franka Panda 7DoF robot arm, built by Franka Robotics in Munich, Germany), as it was being remotely operated by people in 18 laboratories around the world. The robot-eye-view camera has recorded visual data in hundreds of environments, including bathrooms, laundry rooms, bedrooms and kitchens. This diversity helps robots to perform well on tasks with previously unencountered elements, says Khazatsky.


When prompted to ‘pick up extinct animal’, Google’s RT-2 model selects the dinosaur figurine from a crowded table. Credit: Google DeepMind

Gopalakrishnan is part of a collaboration of more than a dozen academic labs that is also bringing together robotic data, in its case from a diversity of robot forms, from single arms to quadrupeds. The collaborators’ theory is that learning about the physical world in one robot body should help an AI to operate another — in the same way that learning in English can help a language model to generate Chinese, because the underlying concepts about the world that the words describe are the same. This seems to work. The collaboration’s resulting foundation model, called RT-X, which was released in October 2023 [3], performed better on real-world tasks than did models the researchers trained on one robot architecture.

Many researchers say that having this kind of diversity is essential. “We believe that a true robotics foundation model should not be tied to only one embodiment,” says Peter Chen, an AI researcher and co-founder of Covariant, an AI firm in Emeryville, California.

Covariant is also working hard on scaling up robot data. The company, which was set up in part by former OpenAI researchers, began collecting data in 2018 from 30 variations of robot arms in warehouses across the world, which all run using Covariant software. Covariant’s Robotics Foundation Model 1 (RFM-1) goes beyond collecting video data to encompass sensor readings, such as how much weight was lifted or force applied. This kind of data should help a robot to perform tasks such as manipulating a squishy object, says Gopalakrishnan — in theory, helping a robot to know, for example, how not to bruise a banana.

Covariant has built up a proprietary database that includes hundreds of billions of ‘tokens’ — units of real-world robotic information — which Chen says is roughly on a par with the scale of data that trained GPT-3, the 2020 version of OpenAI's large language model. “We have way more real-world data than other people, because that’s what we have been focused on,” Chen says. RFM-1 is poised to roll out soon, says Chen, and should allow operators of robots running Covariant’s software to type or speak general instructions, such as “pick up apples from the bin”.

Another way to access large databases of movement is to focus on a humanoid robot form so that an AI can learn by watching videos of people — of which there are billions online. Nvidia’s Project GR00T foundation model, for example, is ingesting videos of people performing tasks, says Andrews. Although copying humans has huge potential for boosting robot skills, doing so well is hard, says Gopalakrishnan. For example, robot videos generally come with data about context and commands — the same isn’t true for human videos, she says.

Virtual reality

A final and promising way to find limitless supplies of physical data, researchers say, is through simulation. Many roboticists are working on building 3D virtual-reality environments, the physics of which mimic the real world, and then wiring those up to a robotic brain for training. Simulators can churn out huge quantities of data and allow humans and robots to interact virtually, without risk, in rare or dangerous situations, all without wearing out the mechanics. “If you had to get a farm of robotic hands and exercise them until they achieve [a high] level of dexterity, you will blow the motors,” says Nvidia’s Andrews.
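The data-generation loop that simulators provide can be sketched as follows. This is a hypothetical toy, not Habitat or Isaac Lab: a few-line point-mass "physics engine" plus domain randomization (varying mass, force and friction each episode) churns out unlimited labelled experience, which is exactly the property that makes simulation attractive for training.

```python
import random

def simulate_push(mass, force, friction, dt=0.1, steps=10):
    """Tiny point-mass 'physics engine': apply a constant force to an
    object with velocity-proportional friction and return how far it
    slides. Real simulators model full rigid-body dynamics; this toy
    only illustrates the data-generation loop around them."""
    velocity, position = 0.0, 0.0
    for _ in range(steps):
        accel = (force - friction * velocity) / mass
        velocity += accel * dt
        position += velocity * dt
    return position

def generate_dataset(n, seed=0):
    """Domain randomization: draw fresh physical parameters for every
    episode so a policy trained on the data does not overfit one
    particular simulated world."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        mass = rng.uniform(0.5, 2.0)
        force = rng.uniform(1.0, 5.0)
        friction = rng.uniform(0.1, 1.0)
        data.append({
            "mass": mass, "force": force, "friction": friction,
            "displacement": simulate_push(mass, force, friction),
        })
    return data

data = generate_dataset(1000)
print(len(data), "simulated episodes generated")
```

A real pipeline would replace `simulate_push` with a full physics engine and record images and joint states rather than a single displacement, but the economics are the same: each additional episode costs compute, not motor wear.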

But making a good simulator is a difficult task. “Simulators have good physics, but not perfect physics, and making diverse simulated environments is almost as hard as just collecting diverse data,” says Khazatsky.

Meta and Nvidia are both betting big on simulation to scale up robot data, and have built sophisticated simulated worlds: Habitat from Meta and Isaac Lab from Nvidia. In them, robots gain the equivalent of years of experience in a few hours, and, in trials, they then successfully apply what they have learnt to situations they have never encountered in the real world. “Simulation is an extremely powerful but underrated tool in robotics, and I am excited to see it gaining momentum,” says Rai.

Many researchers are optimistic that foundation models will help to create general-purpose robots that can replace human labour. In February, Figure, a robotics company in Sunnyvale, California, raised US$675 million in investment for its plan to use language and vision models developed by OpenAI in its general-purpose humanoid robot. A demonstration video shows a robot giving a person an apple in response to a general request for ‘something to eat’. The video on X (the platform formerly known as Twitter) has racked up 4.8 million views.

Exactly how this robot’s foundation model has been trained, along with any details about its performance across various settings, is unclear (neither OpenAI nor Figure responded to Nature’s requests for an interview). Such demos should be taken with a pinch of salt, says Soh. The environment in the video is conspicuously sparse, he says. Adding a more complex environment could potentially confuse the robot — in the same way that such environments have fooled self-driving cars. “Roboticists are very sceptical of robot videos for good reason, because we make them and we know that out of 100 shots, there’s usually only one that works,” Soh says.

Hurdles ahead

As the AI research community forges ahead with robotic brains, many of those who actually build robots caution that the hardware also presents a challenge: robots are complicated and break a lot. Hardware has been advancing, Chen says, but “a lot of people looking at the promise of foundation models just don't know the other side of how difficult it is to deploy these types of robots”.

Another issue is how far robot foundation models can get using only the visual data that make up the vast majority of their physical training. Robots might need reams of other kinds of sensory data, for example from the sense of touch or proprioception — a sense of where their body is in space — says Soh. Those data sets don’t yet exist. “There’s all this stuff that’s missing, which I think is required for things like a humanoid to work efficiently in the world,” he says.

Releasing foundation models into the real world comes with another major challenge — safety. In the two years since they started proliferating, large language models have been shown to come up with false and biased information. They can also be tricked into doing things that they are programmed not to do, such as telling users how to make a bomb. Giving AI systems a body brings these types of mistake and threat to the physical world. “If a robot is wrong, it can actually physically harm you or break things or cause damage,” says Gopalakrishnan.

Valuable work going on in AI safety will transfer to the world of robotics, says Gopalakrishnan. In addition, her team has imbued some robot AI models with rules that layer on top of their learning, such as not to even attempt tasks that involve interacting with people, animals or other living organisms. “Until we have confidence in robots, we will need a lot of human supervision,” she says.

Despite the risks, there is a lot of momentum in using AI to improve robots — and using robots to improve AI. Gopalakrishnan thinks that hooking up AI brains to physical robots will improve the foundation models, for example giving them better spatial reasoning. Meta, says Rai, is among those pursuing the hypothesis that “true intelligence can only emerge when an agent can interact with its world”. That real-world interaction, some say, is what could take AI beyond learning patterns and making predictions, to truly understanding and reasoning about the world.

What the future holds depends on who you ask. Brooks says that robots will continue to improve and find new applications, but their eventual use “is nowhere near as sexy” as humanoids replacing human labour. But others think that developing a functional and safe humanoid robot that is capable of cooking dinner, running errands and folding the laundry is possible — but could just cost hundreds of millions of dollars. “I’m sure someone will do it,” says Khazatsky. “It’ll just be a lot of money, and time.”

doi: https://doi.org/10.1038/d41586-024-01442-5

1. Brohan, A. et al. Preprint at arXiv https://doi.org/10.48550/arXiv.2307.15818 (2023).

2. Khazatsky, A. et al. Preprint at arXiv https://doi.org/10.48550/arXiv.2403.12945 (2024).

3. Open X-Embodiment Collaboration et al. Preprint at arXiv https://doi.org/10.48550/arXiv.2310.08864 (2023).

  • Research article
  • Open access
  • Published: 18 January 2021

Exploring the impact of Artificial Intelligence and robots on higher education through literature-based design fictions

  • A. M. Cox (ORCID: orcid.org/0000-0002-2587-245X)

International Journal of Educational Technology in Higher Education, volume 18, Article number: 3 (2021)


Artificial Intelligence (AI) and robotics are likely to have a significant long-term impact on higher education (HE). The scope of this impact is hard to grasp, partly because the literature is siloed and partly because the concepts themselves keep shifting in meaning. Moreover, developments are surrounded by controversies in terms of what is technically possible, what is practical to implement and what is desirable, pedagogically or for the good of society. Design fictions that vividly imagine future scenarios of AI or robotics in use offer a means both to explain and to query the technological possibilities. The paper describes the use of a wide-ranging narrative literature review to develop eight such design fictions that capture the range of potential uses of AI and robots in learning, administration and research. They prompt wider discussion by instantiating such issues as how these technologies might enable the teaching of higher-order skills or change staff roles, as well as exploring the impact on human agency and the nature of datafication.

Introduction

The potential of Artificial Intelligence (AI) and robots to reshape our future has attracted vast interest among the public, government and academia in the last few years. As in every other sector of life, higher education (HE) will be affected, perhaps in a profound way (Bates et al., 2020; DeMartini and Benussi, 2017). HE will have to adapt to educate people to operate in a new economy and potentially for a different way of life. AI and robotics are also likely to change how education itself works, altering what learning is like, the role of teachers and researchers, and how universities work as institutions.

However, the potential changes in HE are hard to grasp for a number of reasons. One reason is that the impact is, as Clay (2018) puts it, “wide and deep”, yet the research literature discussing it is siloed. AI and robotics for education are separate literatures, for example. AI for education, learning analytics (LA) and educational data mining also remain somewhat separate fields. Applications to HE research as opposed to learning, such as the robot scientist concept or text and data mining (TDM), are also usually discussed separately. Thus if we wish to grasp the potential impact of AI and robots on HE holistically, we need to extend our vision across the breadth of these diverse literatures.

A further reason why the potential implications of AI and robots for HE are hard to grasp is that, rather than being a single technology, something like AI is an idea or aspiration for how computers could participate in human decision-making. Faith in how to achieve this has shifted across different technologies over time, as have concepts of learning (Roll and Wylie, 2016). Also, because AI and robotics are ideas that have been pursued over many decades, there are some quite mature applications: impacts have already happened. Equally, there are potential applications that are being developed, and many that are only just beginning to be imagined. So, confusingly from a temporal perspective, uses of AI and robots in HE are past, present and future.

Although hard to fully grasp, it is important that a wider understanding and debate is achieved, because AI and robotics pose a range of pedagogic, practical, ethical and social justice challenges. A large body of educational literature explores the challenges of implementing new technologies in the classroom as a change management issue (e.g. as synthesised by Reid, 2014). Introducing AI and robots will not be a smooth process; it will have its own challenges and ironies. There is also a strong tradition in the educational literature of critical responses to technology in HE. These typically focus on issues such as the potential of technology to dehumanise the learning experience. They are often driven by fear of commercialisation or of neo-liberal ideologies wrapped up in technology. Similar arguments are developing around AI and robotics. There is a particularly strong concentration of critique around the datafication of HE. Thus the questions around the use of AI and robots are as much about what we should do as about what is possible (Selwyn, 2019a). Yet according to a recent literature review, most current research about AI in learning is from computer science and seems to neglect both pedagogy and ethics (Zawacki-Richter et al., 2019). Research on AIEd has also been recognised for some time to have a WEIRD (western, educated, industrialized, rich and democratic) bias (Blanchard, 2015).

One device to make the use of AI and robots more graspable is fiction, with its ability to help us imagine alternative worlds. Science fiction has already had a powerful influence on creating collective imaginaries of technology and so in shaping the future (Dourish and Bell, 2014). Science fiction has had a fascination with AI and robots, presumably because they enhance or replace defining human attributes: the mind and the body. To harness the power of fiction for the critical imagination, a growing body of work within Human Computer Interaction (HCI) studies adopts the use of speculative or critical narratives to destabilise assumptions through “design fictions” (Blythe, 2017): “a conflation of design, science fact, and science fiction” (Bleecker, 2009: 6). They can be used to pose critical questions about the impact of technology on society and to actively engage wider publics in how technology is designed. This is a promising route for making the impact of AI and robotics on HE easier to grasp. In this context, the purpose of this paper is to describe the development of a collection of design fictions to widen the debate about the potential impact of AI and robots on HE, based on a wide-ranging narrative literature review. First, the paper will explain more fully the design fiction method.

Method: design fictions

There are many types of fictions that are used for our thinking about the future. In strategic planning and in future studies, scenarios—essentially fictional narratives—are used to encapsulate contrasting possible futures (Amer et al., 2013; Inayatullah, 2008). These are then used collaboratively by stakeholders to make choices about preferred directions. On a more practical level, in designing information systems, traditional design scenarios are short narratives that picture use of a planned system and that are employed to explain how it could be used to solve existing problems. As Carroll (1999) argues, such scenarios are also essentially stories or fictions, and this reflects the fact that system design is inherently a creative process (Blythe, 2017). They are often used to involve stakeholders in systems design. The benefit is that the fictional scenario prompts reflection outside the constraints of trying to produce something that simply works (Carroll, 1999). But they tend to represent a system being used entirely as intended (Nathan et al., 2007). They typically only include immediate stakeholders and immediate contexts of use, rather than thinking about the wider societal impacts of pervasive use of the technology. A growing body of work in the study of HCI refashions these narratives:

Design fiction is about creative provocation, raising questions, innovation, and exploration. (Bleecker, 2009: 7).

Design fictions create a speculative space in which to raise questions about whether a particular technology is desirable, the socio-cultural assumptions built into technologies, the potential for different technologies to make different worlds, our relation to technology in general, and indeed our role in making the future happen.

Design fictions exist on a spectrum between speculative and critical. Speculative fictions are exploratory. More radical, critical fictions ask fundamental questions about the organisation of society and are rooted in traditions of critical design (Dunne and Raby, 2001). By definition they challenge technical solutionism: the way that technologies seem to be built to solve a problem that does not necessarily exist, or to ignore the contextual issues that might affect their success (Blythe et al., 2016).

Design fictions can be used in research in a number of ways, where:

Fictions are the output themselves, as in this paper.

Fictions (or an artefact such as a video based on them) are used to elicit research data, e.g. through interviews or focus groups (Lyckvi et al., 2018).

Fictions are co-created with the public as part of a process of raising awareness (e.g. Tsekleves et al., 2017).

For a study of the potential impact of AI and robots on HE, design fictions are a particularly suitable method. They are already used by some authors working in the field, such as Pinkwart (2016), Luckin and Holmes (2017) and Selwyn et al. (2020). As a research tool, design fictions can encapsulate key issues in a short, accessible form. Critically, they have the potential to change the scope of the debate, by shifting attention away from the existing literature and its focus on developing and testing specific AI applications (Zawacki-Richter et al., 2019) to weighing up more or less desirable directions of travel for society. They can be used to pose critical questions that are not being asked by developers because of the WEIRD bias in the research community itself (Blanchard, 2015), to shift focus onto ethical and social justice issues, and also to raise doubts based on practical obstacles to widespread adoption. Fictions engage readers imaginatively and on an affective level. Furthermore, because they are explicitly fictions, readers can challenge their assumptions, even get involved in actively rewriting them.

Design fictions are often individual texts. But collections of fictions create potential for reading against each other, further prompting thoughts about alternative futures. In a similar way, in future studies, scenarios are often generated around four or more alternatives, each premised on different assumptions (Inayatullah, 2008). This avoids the tendency towards a utopian/dystopian dualism found in some uses of fiction (Rummel et al., 2016; Pinkwart, 2016). Thus in this study the aim was to produce a collection of contrasting fictions that surface the range of debates revolving around the application of AI and robotics to HE.

The process of producing fictions is not easy to render transparent.

In this study the foundation for the fictions was a wide-ranging narrative review of the literature (Templier and Paré, 2015). The purpose of the review was to generate a picture of the pedagogic, social, ethical and implementation issues raised by the latest trends in the application of AI and robots to teaching, research and administrative functions in HE, as a foundation for narratives that could instantiate the issues in fictional form. We know from previous systematic reviews that these types of issues are neglected, at least in the literature on AIEd (Zawacki-Richter et al., 2019). So the chief novelty of the review lay in (a) focusing on social, ethical, pedagogic and management implications; (b) encompassing both AI and robotics as related aspects of automation; and (c) seeking to be inclusive across the full range of functions of HE, including impacts on learning, but also on research and scholarly communications, as well as administrative functions and estates management (the smart campus).

In order to gather references for the review, systematic searches on the ERIC database for relevant terms such as “AI or Artificial Intelligence”; “conversational agent”, “AIED” were conducted. Selection was made for items which either primarily addressed non-technical issues or which themselves contained substantial literature reviews that could be used to gain a picture of the most recent applications. This systematic search was combined with snowballing (also known as pearl growing techniques) using references by and to highly relevant matches to find other relevant material. While typically underreported in systematic reviews this method has been shown to be highly effective in retrieving more relevant items (Badampudi et al. 2015 ). Some grey literature was included because there are a large number of reports by governmental organisations summarizing the social implications of AI and robots. Because many issues relating to datafication are foreshadowed in the literature on learning analytics, this topic was also included. In addition, some general literature on AI and robots, while not directly referencing education, was deemed to be relevant, particularly as it was recognised that education might be a late adopter and so impacts would be felt through wider social changes rather than directly through educational applications. Literature reviews which suggested trends in current technologies were included but items which were detailed reports of the development of technologies were excluded. Items prior to 2016 tended also to be excluded, because the concern was with the latest wave of AI and robots. As a result of these searches in the order of 500 items were consulted, with around 200 items deemed to be of high relevance. As such there is no claim that this was an “exhaustive” review, rather it should be seen as complimenting existing systematic reviews by serving a different purpose. 
The review also identified a number of existing fictions in the literature that could then be rewritten to fit the needs of the study, for example to apply them to HE, to make them more concise or to add new elements (fictions 1, 3 and 4).

As an imaginative act, writing fictions is not reducible to a completely transparent method, although some aspects can be described (Lyckvi et al., 2018). Some techniques for creating effective critical designs are suggested by Auger (2013), such as placing something uncanny or unexpected against a backdrop of mundane normality, and creating a sense of verisimilitude (perhaps achieved through mixing fact and fiction). Fiction 6, for example, exploits the mundane feel of committee meeting minutes to help us imagine the debates that would occur among university leaders implementing AI. A common strategy is to ask "what if?", taking the implications of a central counterfactual premise to their logical conclusion. For example, fiction 7 extends existing strategies of gathering data and using chatbots to act on it to their logical end point: a comprehensive system of data surveillance. Another technique used here was to exploit certain genres of writing, as in fiction 5, where a style of writing drawn from marketing and PR reminds us of the role of EdTech companies in producing AI and robots.

Table 1 offers a summary of the eight fictions produced through this process. The fictions explore the potential of AI and robots in different areas of university activity: in learning, administration and research (Table 1, column 5). They seek to represent different types of technology (column 2). Some are rather futuristic; most seem feasible today or in the very near future (column 3). The full text of the fictions and supporting material can be downloaded from the University of Sheffield data repository, ORDA, and used under a CC-BY-SA licence ( https://doi.org/10.35542/osf.io/s2jc8 ). The following sections describe each fiction in turn, showing how it relates to the literature and surfaces relevant issues. Table 2 below summarises the issues raised.

In the following sections each of the eight fictions is described, set in the context of the literature review material that shaped their construction.

AI and robots in learning: Fiction 1, “AIDan, the teaching assistant”

Much of the literature around AI in learning focuses on tools that directly teach students (Baker and Smith, 2019 ; Holmes et al., 2019 ; Zawacki-Richter et al., 2019 ). This includes classes of systems such as:

Intelligent tutoring systems (ITS), which teach course content step by step, taking an approach personalised to the individual. Holmes et al. ( 2019 ) distinguish different types of ITS, based on whether they adopt a linear, dialogic or more exploratory model.

One emerging area of adaptivity is using sensors to detect the emotional and physical state of the learner, recognising the embodied and affective aspects of learning (Luckin, et al., 2016 ); a further link is being made to how virtual and augmented reality can be used to make the experience more engaging and authentic (Holmes et al., 2019 ).

Automatic writing evaluation (AWE) tools, which assess and offer feedback on writing style (rather than content), such as learnandwrite, Grammarly and Turnitin's Revision Assistant (Strobl et al., 2019; Hussein et al., 2019; Hockly, 2019).

Conversational agents (also known as chatbots or virtual assistants), which are AI tools designed to converse with humans (Winkler and Söllner, 2018).

The adaptive pedagogical agent, which is an “anthropomorphic virtual character used in an online learning environment to serve instructional purposes” (Martha and Santoso, 2017 ).

Many of these technologies, such as AWE and ITS, are rather mature. However, there is also a wide range of different types of system within each category: conversational agents, for example, can be designed for short- or long-term interaction, and could act as tutors, engage in language practice, answer questions, promote reflection or act as co-learners. They could be based on text or verbal interaction (Følstad et al., 2019; Wellnhammer et al., 2020).

Much of this literature reflects the development of AI technologies and their evaluation against other forms of teaching. However, according to a recent review it is primarily written by computer scientists, mostly from a technical point of view, with relatively little connection to pedagogy or ethics (Zawacki-Richter et al., 2019). In contrast, some authors, such as Luckin and Holmes, seek to move beyond the rather narrow development of tools and their evaluation to envisioning how AI can address the grand challenges of learning in the twenty-first century (Luckin et al., 2016; Holmes et al., 2019; Woolf et al., 2013). According to this vision, many of the inefficiencies and injustices of the current global education system can be addressed by applying AI.

To surface such discussion around what is possible, fiction 1 is loosely based on a narrative published by Luckin and Holmes ( 2017 ) themselves. In their paper, they imagine a school classroom ten years into the future from the time of writing, in which a teacher works with an AI teaching assistant. Built into their fiction are the key features of their vision of AI (Luckin et al., 2016); thus emphasis is given to:

AI designed to support teachers rather than replacing them;

Personalisation of learning experiences through adaptivity;

Replacement of one-off assessment by continuous monitoring of performance (Luckin, 2017 );

The monitoring of haptic data to adjust learning material to students’ emotional and physical state in real time;

The potential of AI to support learning twenty-first century skills, such as collaborative skills;

Teachers developing skills in data analysis as part of their role;

Students (and parents) as well as teachers having access to data about their learning.

While Luckin and Holmes ( 2017 ) acknowledge that their vision of AI sounds a "bit big brother", it is, as one would expect, an essentially optimistic piece in which all the key technologies they envisage are brought together to improve learning in a broad sense. The fiction developed here retains most of these elements, but reimagined for an HE context and with a number of other changes:

Reference is also made to rooting teaching in learning science, one of the arguments for AI that Luckin makes in a number of places (e.g. Luckin et al., 2016).

Students develop a long-term relationship with the AI, often seen as a desirable aspect of providing AI as a lifelong learning partner (Woolf et al., 2013).

Of course, the more sceptical reader may be troubled by some aspects of this vision, including the potential effects of continuous monitoring of performance as a form of surveillance. The emphasis on personalisation of learning through AI has been increasingly questioned (Selwyn, 2019a).

The following excerpt gives a flavour of the fiction:

Actually, I partly picked this Uni because I knew they had AI like AIDan which teach you on principles based in learning science. And exams are a thing of the past! AIDan continuously updates my profile and uses this to measure what I have learned. I have set tutorials with AIDan to analyse data on my performance. Jane often talks me through my learning data as well. I work with him planning things like my module choices too. Some of my data goes to people in the department (like my personal tutor) to student and campus services and the library to help personalise their services.

Social robots in learning: Fiction 2, “Footbotball”

Luckin and Holmes ( 2017 ) see AI as instantiated by sensors and cameras built into the classroom furniture. Their AI does not seem to have a physical form, though it does have a human name. But there is also a literature around educational robots: a type of social robot for learning.

a physical robot, in the same space as the student. It has an intelligence that can support learning tasks and students learn by interacting with it through suitable semiotic systems (Catlin et al., 2018 ).

There is some evidence that learning is better when the learner interacts with a physical entity rather than a purely virtual agent, and there might certainly be benefits where what is learned involves embodiment (Belpaeme et al., 2018). Fiction 2 offers an imaginative account of what learning alongside robots might be like, in the context of university sport rather than the curriculum. The protagonist describes how he is benefiting from using university facilities to participate in an imaginary sport, footbotball.

Maybe it’s a bit weird to say, but it’s about developing mutual understanding and… respect. Like the bots can sense your feelings too and chip in with a word just to pick you up if you make a mistake. And you have to develop an awareness of their needs too. Know when is the right time to say something to them to influence them in the right direction. When you watch the best teams they are always like talking to each other. But also just moving together, keeping eyes on and moving as a unit.

The protagonist in fiction 2 describes the high-level and employability skills he is learning from a sporting application of robotics. This also reminds us that the widespread use of AI and robots in wider social contexts may be a key driver for adoption in HE.

Fiction 3: “Criticalbot in conversation”

Both fictions 1 and 2 are glossy science fictions, with a strongly futuristic feel, and, as in traditional design scenarios, the technology seems to be used as intended by the designer. In contrast, the third fiction is inspired by Bayne's ( 2015 ) description of Teacherbot, a chatbot developed to support a MOOC on elearning and digital cultures. Teacherbot does not masquerade as human: students on the course are told what it is, and its interactions are clumsy enough to reveal this anyway. Yet Bayne's ( 2015 ) argument is that it offers positive non-human affordances. Students seem to learn from it, partly by reflecting on its place in their learning. Thus fiction 3, echoing the sample transcripts between Teacherbot and real students quoted in Bayne ( 2015 ), is a fictional dialogue between a conversational agent and a group of students working on an assignment (itself on the topic of bias in AI). Criticalbot, as its name suggests, is imagined being used to teach the students to be more critical, e.g. by prompting them to think harder about how to read an academic text, raising questions about the authority of authors, and prompting discussion around quotes from a key text.

Student 1: @CriticalBot we have been set the task to research bias in AIED. What do you know about bias in AIED?

CriticalBot: Blanchard is recommended in the reading list.

Student 2: I think he means rtfm.

Student 1: Blanchard?

CriticalBot: Sorry. I should have typed: Blanchard, E. G. (2015). Socio-cultural imbalances in AIED research: Investigations, implications and opportunities. International Journal of Artificial Intelligence in Education, 25(2), 204–228. No one's perfect.

Student 1: @CriticalBot What is the main point he makes?

CriticalBot: Nice try. Try reading the abstract. Abstracts summarise key points. It may also be useful to read the conclusion section to grasp the author's main argument before deciding on whether to undertake a closer reading.

As this quotation illustrates, echoing Bayne ( 2015 ), the conversation in fiction 3 is not necessarily smooth; misunderstandings and conflicts occur. The fiction brings into view the less compliant student who might wish to game the system, a potential problem with AI which is apparent in the literature on AWE (Hussein et al., 2019). This fiction encapsulates an important alternative imaginary of AI as a simple, low-tech intervention. At the same time, in being designed to promote critical thinking, it can also be seen as teaching a key, high-level skill. This challenges us to ask whether an AI can truly do that, and how.

The intelligent campus: Fiction 4, “The intelligent campus app”

The AIED literature, with its emphasis on the direct application of AI to learning, accounts for a large block of the literature about AI in Higher Education, but not all of it. Another, rather separate, literature exists around the smart or intelligent campus (e.g. JISC, 2018; Min-Allah and Alrashed, 2020; Dong et al., 2020). This is the application of the Internet of Things, and increasingly AI, to the management of the campus environment. It is often oriented towards estates management, such as monitoring room usage and controlling lighting and heating. But it also encompasses support for wayfinding, attendance monitoring, and ultimately the student experience, so it presents an interesting contrast to the AIEd literature.

The fourth fiction is adapted from a report, each section of which is introduced by quotes from an imaginary day in the life of a student, Leda, who reflects on the benefits of intelligent/smart campus technologies for her learning experience (JISC, 2018). The emphasis in the report is on:

Data driven support of wayfinding and time management;

Integration of smart campus with smart city features (e.g. bus and traffic news);

Attendance monitoring and delivery of learning resources.

The student also muses about the ethics of the AI. She is presented as a little ambivalent about the monitoring technologies, and, as in Luckin and Holmes ( 2017 ), they are referred to in her own words as potentially "a bit big brother" (JISC, 2018: 9). But ultimately she concludes that the smart campus improves her experience as a student. In this narrative, unlike in the Luckin and Holmes ( 2017 ) fiction, the AI is much more in the background and lacks a strong personality. It is a different sort of optimistic vision, geared towards convenience rather than excellence. There is much less of a futuristic feel; indeed, one could say that not only does the technology exist to deliver many of the services described, but those services are already available and in use, though perhaps not integrated within one application.

Sitting on the bus I look at the plan for the day suggested in the University app. A couple of timetabled classes; a group work meeting; and there is a reminder about that R205 essay I have been putting off. There is quite a big slot this morning when the App suggests I could be in the library planning the essay – as well as doing the prep work for one of the classes it has reminded me about. It is predicting that the library is going to be very busy after 11AM anyway, so I decide to go straight there.

The fiction seeks to bring out more about the idea of "nudging" to change behaviours, a concept often linked to AI, the ethics of which are queried by Selwyn ( 2019a ). The issue of how AI and robots might affect the agency of the learner recurs across the first four fictions.

AI and robotics in research: Fiction 5, "The Research Management Suite™"

So far this paper has mostly focused on the application of AI and robotics to learning. AI also has applications in university research, but this is an area far less commonly considered than learning and teaching. Only 1% of CIOs responding to a Gartner survey of HEIs had deployed AI for research, compared to 27% for institutional analytics and 10% for adaptive learning (Lowendahl and Williams, 2018). AI could be used directly in research, not just to perform analytical tasks but to generate hypotheses to be tested (Jones et al., 2019). The "robot scientist", being tireless and able to work in a precise way, could carry through many experiments and increase reproducibility (King et al., 2009; Sparkes et al., 2010). It might have the potential to make significant discoveries independently, perhaps by simply exploiting its tirelessness to test every possible hypothesis rather than using intuition to select promising ones (Kitano, 2016).

Another direct application of AI to research is text and data mining (TDM). Given the vast rate of academic publishing, there is a growing need to mine the published literature to offer summaries to researchers, or even to develop and test hypotheses (McDonald and Kelly, 2012). Advances in translation also offer the potential to make literature in other languages more accessible, with important benefits.

Developments in publishing give us a further insight into how AI might be applied in the research domain. Publishers are investing heavily in AI (Gabriel, 2019 ). One probable landmark was that in 2019, Springer published the first “machine generated research book” (Schoenenberger, 2019 : v): a literature review of research on Lithium-Ion batteries, written entirely automatically. This does not suggest the end of the academic author, Springer suggest, but does imply changing roles (Schoenenberger, 2019 ). AI is being applied to many aspects of the publication process: to identify peer reviewers (Price and Flach, 2017 ), to assist review by checking statistics, to summarise open peer reviews, to check for plagiarism or for the fabrication of data (Heaven, 2018 ), to assist copy editing, to suggest keywords and to summarise and translate text. Other tools claim to predict the future citation of articles (Thelwall, 2019 ). Data about academics, their patterns of collaboration and citation through scientometrics are currently based primarily on structured bibliographic data. The cutting edge is the application of text mining techniques to further analyse research methods, collaboration patterns, and so forth (Atanassova et al., 2019 ). This implies a potential revolution in the management and evaluation of research. It will be relevant to ask what responsible research metrics are in this context (Wilsdon, 2015 ).

Instantiating these developments, the fifth fiction revolves around a university licensing the "Research Management Suite™", a set of imaginary proprietary tools offering institutional-level support to its researchers to increase, and perhaps measure, their productivity. A flavour of the fiction can be gleaned from this excerpt:

Academic Mentor ™ is our premium meta analysis service. Drawing on historic career data from across the disciplines, it identifies potential career pathways to inform your choices in your research strategy. By identifying structural holes in research fields it enables you to position your own research within emerging research activity, so maximising your visibility and contribution. Mining data from funder strategy, the latest publications, preprints and news sources it identifies emergent interdisciplinary fields, matching your research skills and interests to the complex dynamics of the changing research landscape.

This fiction prompts questions about the nature of the researcher's role and ultimately about what research is. At what point does the AI become a co-author, because it is making a substantive intellectual contribution to writing a research output, making a creative leap or even securing funding? Given the centrality of research to academic identity, this may feel even more challenging than the teaching-related scenarios. This fiction also recognises the important role of EdTech companies in how AI reaches HE, partly because of the high cost of AI development. The reader is also prompted to wonder how the technology might disrupt the HE landscape if those investing in it were ambitious newer institutions keen to rise in university league tables.

Tackling pragmatic barriers: Fiction 6, “Verbatim minutes of University AI project steering committee: AI implementation phase 3”

A very large literature around technologies in HE in general focuses on the challenges of implementing them as a change management problem. Reid ( 2014 ), for example, seeks to develop a model of the differing factors that block the smooth implementation of learning technologies in the classroom, such as problems with access to the technology, project management challenges, and issues around teacher identity. Echoing these arguments, Tsai et al.'s ( 2017 , 2019 ) work captures why, for all the hype around them, learning analytics (LA) have not yet found extensive practical application in HE. Given that AI requires intensive use of data, by extension we can argue that the same barriers will probably apply to AI. Specifically, Tsai et al. ( 2017 , 2019 ) identify barriers in terms of technical, financial and other resource demands; ethics and privacy issues; failures of leadership; a failure to involve all stakeholders (students in particular) in development; a focus on technical issues and neglect of pedagogy; insufficient staff training; and a lack of evidence demonstrating the impact on learning. There are hints of similar challenges around the implementation of administration-focused applications (Nurshatayeva et al., 2020) and TDM (FutureTDM, 2016).

Reflecting these thoughts, the sixth fiction is an extract from an imaginary committee meeting, in which senior university managers discuss the challenges they face in implementing AI. It seeks to surface issues around teacher identity, disciplinary differences and resource pressures that might shape the extensive implementation of AI in practice.

Faculty of Humanities Director: But I think there is a pedagogic issue here. With the greatest of respect to Engineering, this approach to teaching simply does not fit our subject. You cannot debate a poem or a philosophical treatise with a machine.

Faculty of Engineering Director: The pilot project also showed improved student satisfaction. Data also showed better student performance. Less drop outs.

Faculty of Humanities Director: Maybe that's because…

Vice Chancellor: All areas where Faculty of Humanities has historically had a strategic issue.

Faculty of Engineering Director: The impact on employability has also been fantastic, in terms of employers starting to recognise the value of our degrees now fluency with automation is part of our graduate attributes statement.

Faculty of Humanities Director: I see the benefits, I really do. But you have to remember you are taking on deep seated assumptions within the disciplinary culture of Humanities at this university. Staff are already under pressure with student numbers not to mention in terms of producing world class research! I am not sure how far this can be pushed. I wouldn't want to see more industrial action.

Learning analytics and datafication: Fiction 7, “Dashboards”

Given the strong relation between "big data" and AI, the claimed benefits of, and the controversies that already exist around, LA are relevant to AI too (Selwyn, 2019a). The main argument for LA is that they give teachers and learners information with which to improve learning processes; advocates talk of an obligation to act. LA can also be used in administration, for admissions decisions and for ensuring retention. Chatbots are now being used to assist applicants through complex admissions processes or to maintain contact to ensure retention, and appear to offer a cheap and effective alternative (Page and Gehlbach, 2017; Nurshatayeva et al., 2020). Gathering more data about HE also promotes public accountability.

However, data use in AI does raise many issues. The greater the dependence on data, or on data-driven AI, the greater the security issues associated with the technology. Another inevitable concern is legality and the need to abide by appropriate privacy legislation, such as the GDPR in Europe. Clearly linked to this are privacy issues, implying consent, the right to control the use of one's data and the right to withdraw (Fjeld et al., 2020). Yet a recent study by Jones ( 2020 ) found that students knew little of how LA were being used in their institution, nor remembered consenting to the use of their data. These would all be recognised as issues by most AI projects.

However, critiques of AI in learning increasingly centre on the datafication of education (Jarke and Breiter, 2019; Williamson and Eynon, 2020; Selwyn, 2019a; Kwet and Prinsloo, 2020). A data-driven educational system has the potential to be used, or experienced, as a surveillance system. "What can be accomplished with data is usually a euphemism for what can be accomplished with surveillance" (Kwet and Prinsloo, 2020: 512). Not only might individual freedoms be threatened by institutions or commercial providers undertaking surveillance of student and teaching staff behaviour; there is also a chilling effect simply through the fear of being watched (Kwet and Prinsloo, 2020). Students become mere data points, as surveillance becomes intensified and normalised (Manolev et al., 2019). While access to their own learning data could be empowering for students, techniques such as nudging, intended to influence people without their knowledge, undermine human agency (Selwyn, 2019b). Loss of human agency is one of the central fears revolving around AI and robots.

Further, a key issue with AI is that although its predictions can be accurate or useful, it is often quite unclear how they were produced. Because AI "learns" from data, even the designers do not fully understand how the results were arrived at, so they are certainly hard to explain to the public. The result is a lack of transparency, and so of accountability, leading to deresponsibilisation.

Much of the current debate around big data and AI revolves around bias, created by using training data that does not represent the whole population and reinforced by the lack of diversity among the designers of such systems. If data is based on existing behaviour, it is likely to reproduce existing patterns of disadvantage in society, unless AI design takes social context into account; yet datafication is driven by standardisation. It could be argued that focusing on technology diverts attention from the real causes of achievement gaps, which lie in social structures (Macgilchrist, 2019). While often promoted as a means of empowering learners and their teachers, mass personalisation of education redistributes power away from local decision making (Jarke and Breiter, 2019; Zeide, 2017). In the context of AIEd there is potential for assumptions about what should be taught to show very strong cultural bias, in the same way that critics have already argued that plagiarism detection systems impose culturally specific notions of authorship and are marketed in a way that reinforces crude ethnic stereotypes (Canzonetta and Kannan, 2016).

Datafication also produces performativity: the tendency of institutions (and teachers and students) to shift their behaviour towards whatever scores well against the metric, in a league-table mentality. Yet what is measured is often a proxy for learning, or reductive of what learning in its full sense is, critics argue (Selwyn, 2019b). The potential impact is to turn HE further into a marketplace (Williamson, 2019). It is evident that AI developments are often partly a marketing exercise (Lacity, 2017 ), and EdTech companies play a dominant role in developing AI (Williamson and Eynon, 2020). Selwyn ( 2019a ) worries that those running education will be seduced by the glittering promises of techno-solutionism when the technology does not really work. The UK government has invested heavily in gathering more data about HE in order to promote the reform of HE in the direction of marketisation and student choice (Williamson and Eynon, 2020). Learning data could also increasingly become a commodity itself, further reinforcing the commercialisation of HE.

Thus fiction 7 explores the potential to gather data about learning on a huge scale, make predictions based on it and take action, whether by conveying information to humans or through chatbots. In the fiction the protagonist explains an imaginary institution-level system that is making data-driven decisions about applicants and current students.

Then here we monitor live progress of current students within their courses. We can dip down into attendance, learning environment use, library use, and of course module level performance and satisfaction plus the extra-curricula data. Really low-level stuff some of it. It’s pretty much all there, monitored in real time. We are really hot on transition detection and monitoring. The chatbots are used just to check in on students, see they are ok, nudge things along, gather more data. Sometimes you just stop and look at it ticking away and think “wow!”. That all gets crunched by the system. All the time we feed the predictives down into departmental dashboards, where they pick up the intervention work. Individual teaching staff have access via smart speaker. Meanwhile, we monitor the trend lines up here.

In the fiction the benefits in terms of being able to monitor and address attainment gaps are emphasised. The protagonist's description of projects being worked on suggests competing drivers behind such developments, including meeting government targets, cost saving and the potential to make money by reselling educational data.

Infrastructure: Fiction 8, “Minnie—the AI admin assistant”

A further dimension to the controversy around AI is its environmental cost and the societal impact of the wider infrastructures needed to support it. Brevini ( 2020 ) points out that training a common AI model in linguistics can create the equivalent of five times the lifetime emissions of an average US car. This foregrounds the often unremarked environmental impact of big data and AI. It also prompts us to ask questions about the infrastructure required for AI. Crawford and Joler's ( 2018 ) brilliant Anatomy of an AI System reveals that what makes possible the functioning of a physically rather unassuming AI, like the Amazon Echo, is a vast global infrastructure based on mass human labour, complex logistics chains and polluting industry.

The first part of fiction 8 describes a personal assistant based on voice recognition, like Siri, which answers all sorts of administrative questions. The protagonist expresses some unease with how the system works, reflecting the points made by Rummel et al. ( 2016 ) about the failure of systems which, despite their potential sophistication, lack nuance and flexibility in their application. There is also a sense of alienation (Griffiths, 2015). The second part of the fiction extends this unease to a wider perspective on the usually invisible, but very material, infrastructure which AI requires, as captured in Crawford and Joler ( 2018 ). In addition, imagery is drawn from Maughan's ( 2016 ) work, in which he travels backwards up the supply chain for consumer electronics, from the surreal landscape of hi-tech docks, through different types of factories, ending at a huge polluted lake created by mining operations for rare earth elements in China. This perspective queries all the other fictions, with their focus on using technologies or even campus infrastructure, by widening the vision to encompass the global infrastructures required to make AI possible.

The vast effort of global logistics to bring together countless components to build the devices through which we interact with AI. Lorries queuing at the container port as another ship comes in to dock. Workers making computer components in hi-tech factories in East Asia. All dressed in the same blue overalls and facemasks, two hundred workers queue patiently waiting to be scan searched as they leave work at the end of the shift. Exploitative mining extracting non-renewable, scarce minerals for computer components, polluting the environment and (it is suspected) reducing the life expectancy of local people. Pipes churn out a clayey sludge into a vast lake.

Conclusion: using the fictions together

As we have seen, each of the fictions seeks to open up different positive visions or dimensions of debate around AI (summarised in Table 2 below). All implicitly ask questions about the nature of human agency in relation to AI systems and robots, be that empowerment through access to learning data (fiction 1), the power to play against the system (fiction 3), the hidden effects of nudging (fiction 4) or the reinforcement of social inequalities. Many raise questions about the changing role of staff or the skills required to operate in this environment. They are written in a way that seeks to avoid taking sides, e.g. not always undercutting a utopian view or simply presenting a dark dystopia. Each contains elements that might be inspirational or a cause of controversy. Specifically, they can be read together to suggest tensions between different possible futures. In particular, fictions 7 and 8, and the commercial aspects implied by the presentation of fiction 5, reveal aspects of AI largely invisible in the glossy, strongly positive images of fictions 1 and 2, or the deceptive mundanity of fiction 3. It is also anticipated that the fictions will be read "against the grain" by readers wishing to question what the future is likely to be, or should be, like. This is one of the affordances of their being fictions.

The most important contribution of the paper was the wide-ranging narrative literature review emphasising the social, ethical, pedagogic and management implications of automation through AI and robots for HE as a whole. On the basis of the understanding gained from the literature review, a secondary contribution was the development of a collection of eight accessible, repurposable design fictions that prompt debate about the potential role of AI and robots in HE. These prompt us to notice common challenges, such as those around commodification and the changing role of data. The review encompasses work written by developers, by those with more visionary views, by those who see the challenges as primarily pragmatic, and by those coming from much more critical perspectives.

The fictions are intended to be used in data collection, to elicit staff and student views and explore their responses. The fictions could also be used in teaching to prompt debate among students, perhaps setting them the task of writing new fictions (Rapp, 2020). Students of education could use them to explore the potential impact of AI on educational institutions and to discuss the role of technologies in educational change more generally. The fictions could be used in teaching students of computer science, data science, HCI and information systems in courses about computer ethics, social responsibility and sustainable computing, as well as those directly dealing with AI. They could also be used in Media Studies and Communications, e.g. to compare them with other future imaginaries in science fiction or to design multimedia creations inspired by such fictions. They might also be used in management studies as a case study of strategizing around AI in a particular industry.

While there is an advantage in seeking to encompass the issues within a small collection of engaging fictions that in total runs to fewer than 5000 words, it must be acknowledged that not every issue is reflected. For example, the fictions do not cover the different ways that AI and robots might be used in teaching different disciplines, such as languages, computer science or history. The many ways that robots might be used in background functions, or might themselves play the role of learner, also require further exploration. Most of the fictions were located in a fairly near future, but there is also potential to develop much more futuristic fictions. These gaps leave room for the development of more fictions.

The paper has explained the rationale for and the process of writing design fictions. It seeks to contribute to the growing literature around design fictions by emphasising their use as collections, exploiting different narratives, styles and genres of writing to set up intertextual reflections that help us ask questions about technologies in the widest sense.

Availability of data and materials

Data from the project is available from the University of Sheffield repository, ORDA. https://doi.org/10.35542/osf.io/s2jc8 .

Amer, M., Daim, T., & Jetter, A. (2013). A review of scenario planning. Futures, 46, 23–40.


Atanassova, I., Bertin, M., & Mayr, P. (2019). Editorial: mining scientific papers: NLP-enhanced bibliometrics. Frontiers in Research Metrics and Analytics . https://doi.org/10.3389/frma.2019.00002 .

Auger, J. (2013). Speculative design: Crafting the speculation. Digital Creativity, 24 (1), 11–35.

Badampudi, D., Wohlin, C., & Petersen, K. (2015). Experiences from using snowballing and database searches in systematic literature studies. In Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering (pp. 1–10).

Baker, T., Smith, L., & Anissa, N. (2019). Educ-AI-tion Rebooted? Exploring the future of artificial intelligence in schools and colleges. NESTA. https://www.nesta.org.uk/report/education-rebooted/ .

Bates, T., Cobo, C., Mariño, O., & Wheeler, S. (2020). Can artificial intelligence transform higher education? International Journal of Educational Technology in Higher Education . https://doi.org/10.1186/s41239-020-00218-x .

Bayne, S. (2015). Teacherbot: interventions in automated teaching. Teaching in Higher Education, 20 (4), 455–467.

Belpaeme, T., Kennedy, J., Ramachandran, A., Scassellati, B., & Tanaka, F. (2018). Social robots for education: A review. https://doi.org/10.1126/scirobotics.aat5954 .

Blanchard, E. G. (2015). Socio-cultural imbalances in AIED research: Investigations, implications and opportunities. International Journal of Artificial Intelligence in Education, 25 (2), 204–228.

Bleecker, J. (2009). Design fiction: A short essay on design, science, fact and fiction. Near Future Lab.

Blythe, M. (2017). Research fiction: storytelling, plot and design. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 5400–5411).

Blythe, M., Andersen, K., Clarke, R., & Wright, P. (2016). Anti-solutionist strategies: Seriously silly design fiction. Conference on Human Factors in Computing Systems - Proceedings (pp. 4968–4978). Association for Computing Machinery.

Brevini, B. (2020). Black boxes, not green: Mythologizing artificial intelligence and omitting the environment. Big Data & Society, 7 (2), 2053951720935141.

Canzonetta, J., & Kannan, V. (2016). Globalizing plagiarism & writing assessment: a case study of Turnitin. The Journal of Writing Assessment , 9(2). http://journalofwritingassessment.org/article.php?article=104 .

Carroll, J. M. (1999) Five reasons for scenario-based design. In Proceedings of the 32nd Annual Hawaii International Conference on Systems Sciences . HICSS-32. Abstracts and CD-ROM of Full Papers, Maui, HI, USA, 1999, pp. 11. https://doi.org/10.1109/HICSS.1999.772890 .

Catlin, D., Kandlhofer, M., & Holmquist, S. (2018). EduRobot Taxonomy a provisional schema for classifying educational robots. 9th International Conference on Robotics in Education 2018, Qwara, Malta.

Clay, J. (2018). The challenge of the intelligent library. Keynote at What does your eResources data really tell you? 27th February, CILIP.

Crawford, K., & Joler, V. (2018) Anatomy of an AI system , https://anatomyof.ai/ .

Darby, E., Whicher, A., & Swiatek, A. (2017). Co-designing design fictions: a new approach for debating and priming future healthcare technologies and services. Archives of design research. Health Services Research, 30 (2), 2.


Demartini, C., & Benussi, L. (2017). Do Web 4.0 and Industry 4.0 Imply Education X.0? IT Pro , 4–7.

Dong, Z. Y., Zhang, Y., Yip, C., Swift, S., & Beswick, K. (2020). Smart campus: Definition, framework, technologies, and services. IET Smart Cities, 2 (1), 43–54.

Dourish, P., & Bell, G. (2014). “resistance is futile”: Reading science fiction alongside ubiquitous computing. Personal and Ubiquitous Computing, 18 (4), 769–778.

Dunne, A., & Raby, F. (2001). Design noir: The secret life of electronic objects . New York: Springer Science & Business Media.

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. SSRN Electronic Journal . https://doi.org/10.2139/ssrn.3518482 .

Følstad, A., Skjuve, M., & Brandtzaeg, P. (2019). Different chatbots for different purposes: Towards a typology of chatbots to understand interaction design. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 11551 LNCS , pp. 145–156. Springer Verlag.

Future TDM. (2016). Baseline report of policies and barriers of TDM in Europe. https://project.futuretdm.eu/wp-content/uploads/2017/05/FutureTDM_D3.3-Baseline-Report-of-Policies-and-Barriers-of-TDM-in-Europe.pdf .

Gabriel, A. (2019). Artificial intelligence in scholarly communications: An elsevier case study. Information Services & Use, 39 (4), 319–333.

Griffiths, D. (2015). Visions of the future, horizon report . LACE project. http://www.laceproject.eu/visions-of-the-future-of-learning-analytics/ .

Heaven, D. (2018). The age of AI peer reviews. Nature, 563, 609–610.

Hockly, N. (2019). Automated writing evaluation. ELT Journal, 73 (1), 82–88.

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial Intelligence in Education . Boston, MA: The Center for Curriculum Redesign.

Hussein, M., Hassan, H., & Nassef, M. (2019). Automated language essay scoring systems: A literature review. PeerJ Computer Science . https://doi.org/10.7717/peerj-cs.208 .

Inayatullah, S. (2008). Six pillars: Futures thinking for transforming. foresight, 10 (1), 4–21.

Jarke, J., & Breiter, A. (2019). Editorial: the datafication of education. Learning, Media and Technology, 44 (1), 1–6.

JISC. (2019). The intelligent campus guide. Using data to make smarter use of your university or college estate . https://www.jisc.ac.uk/rd/projects/intelligent-campus .

Jones, E., Kalantery, N., & Glover, B. (2019). Research 4.0 Interim Report. Demos.

Jones, K. (2019). “Just because you can doesn’t mean you should”: Practitioner perceptions of learning analytics ethics. Portal, 19 (3), 407–428.

Jones, K., Asher, A., Goben, A., Perry, M., Salo, D., Briney, K., & Robertshaw, M. (2020). “We’re being tracked at all times”: Student perspectives of their privacy in relation to learning analytics in higher education. Journal of the Association for Information Science and Technology . https://doi.org/10.1002/asi.24358 .

King, R. D., Rowland, J., Oliver, S. G., Young, M., Aubrey, W., Byrne, E., et al. (2009). The automation of science. Science, 324 (5923), 85–89.

Kitano, H. (2016). Artificial intelligence to win the nobel prize and beyond: Creating the engine for scientific discovery. AI Magazine, 37 (1), 39–49.

Kwet, M., & Prinsloo, P. (2020). The ‘smart’ classroom: a new frontier in the age of the smart university. Teaching in Higher Education, 25 (4), 510–526.

Lacity, M., Scheepers, R., Willcocks, L. & Craig, A. (2017). Reimagining the University at Deakin: An IBM Watson Automation Journey . The Outsourcing Unit Working Research Paper Series.

Lowendahl, J.-M., & Williams, K. (2018). 5 Best Practices for Artificial Intelligence in Higher Education. Gartner. Research note.

Luckin, R. (2017). Towards artificial intelligence-based assessment systems. Nature Human Behaviour, 1 (3), 1–3.

Luckin, R., & Holmes, W. (2017). A.I. is the new T.A. in the classroom. https://howwegettonext.com/a-i-is-the-new-t-a-in-the-classroom-dedbe5b99e9e .

Luckin, R., Holmes, W., Griffiths, M., & Pearson, L. (2016). Intelligence unleashed an argument for AI in Education. Pearson. https://www.pearson.com/content/dam/one-dot-com/one-dot-com/global/Files/about-pearson/innovation/open-ideas/Intelligence-Unleashed-v15-Web.pdf .

Lyckvi, S., Wu, Y., Huusko, M., & Roto, V. (2018). Eagons, exoskeletons and ecologies: On expressing and embodying fictions as workshop tasks. ACM International Conference Proceeding Series (pp. 754–770). Association for Computing Machinery.

Macgilchrist, F. (2019). Cruel optimism in edtech: When the digital data practices of educational technology providers inadvertently hinder educational equity. Learning, Media and Technology, 44 (1), 77–86.

Manolev, J., Sullivan, A., & Slee, R. (2019). The datafication of discipline: ClassDojo, surveillance and a performative classroom culture. Learning, Media and Technology, 44 (1), 36–51.

Martha, A. S. D., & Santoso, H. B. (2019). The design and impact of the pedagogical agent: A systematic literature review. Journal of Educators Online, 16 (1), n1.

Maughan, T. (2016). The hidden network that keeps the world running. https://datasociety.net/library/the-hidden-network-that-keeps-the-world-running/ .

McDonald, D., & Kelly, U. (2012). The value and benefits of text mining . England: HEFCE.

Min-Allah, N., & Alrashed, S. (2020). Smart campus—A sketch. Sustainable Cities and Society . https://doi.org/10.1016/j.scs.2020.102231 .

Nathan, L. P., Klasnja, P. V., & Friedman, B. (2007). Value scenarios: a technique for envisioning systemic effects of new technologies. In CHI'07 extended abstracts on human factors in computing systems (pp. 2585–2590).

Nurshatayeva, A., Page, L. C., White, C. C., & Gehlbach, H. (2020). Proactive student support using artificially intelligent conversational chatbots: The importance of targeting the technology. EdWorking paper, Annenberg University https://www.edworkingpapers.com/sites/default/files/ai20-208.pdf .

Page, L., & Gehlbach, H. (2017). How an artificially intelligent virtual assistant helps students navigate the road to college. AERA Open . https://doi.org/10.1177/2332858417749220 .

Pinkwart, N. (2016). Another 25 years of AIED? Challenges and opportunities for intelligent educational technologies of the future. International journal of artificial intelligence in education, 26 (2), 771–783.

Price, S., & Flach, P. (2017). Computational support for academic peer review: A perspective from artificial intelligence. Communications of the ACM, 60 (3), 70–79.

Rapp, A. (2020). Design fictions for learning: A method for supporting students in reflecting on technology in human–computer interaction courses. Computers & Education, 145, 103725.

Reid, P. (2014). Categories for barriers to adoption of instructional technologies. Education and Information Technologies, 19 (2), 383–407.

Renz, A., & Hilbig, R. (2020). Prerequisites for artificial intelligence in further education: Identification of drivers, barriers, and business models of educational technology companies. International Journal of Educational Technology in Higher Education . https://doi.org/10.1186/s41239-020-00193-3 .

Roll, I., & Wylie, R. (2016). Evolution and Revolution in Artificial Intelligence in Education. International Journal of Artificial Intelligence in Education, 26 (2), 582–599.

Rummel, N., Walker, E., & Aleven, V. (2016). Different futures of adaptive collaborative learning support. International Journal of Artificial Intelligence in Education, 26 (2), 784–795.

Schoenenberger, H. (2019). Preface. In H. Schoenenberger (Ed.), Lithium-ion batteries a machine-generated summary of current research (v–xxiii) . Berlin: Springer.

Selwyn, N. (2019a). Should robots replace teachers? AI and the future of education . New Jersey: Wiley.

Selwyn, N. (2019b). What’s the problem with learning analytics? Journal of Learning Analytics, 6 (3), 11–19.

Selwyn, N., Pangrazio, L., Nemorin, S., & Perrotta, C. (2020). What might the school of 2030 be like? An exercise in social science fiction. Learning, Media and Technology, 45 (1), 90–106.

Sparkes, A., Aubrey, W., Byrne, E., Clare, A., Khan, M. N., Liakata, M., et al. (2010). Towards robot scientists for autonomous scientific discovery. Automated Experimentation, 2 (1), 1.

Strobl, C., Ailhaud, E., Benetos, K., Devitt, A., Kruse, O., Proske, A., & Rapp, C. (2019). Digital support for academic writing: A review of technologies and pedagogies. Computers and Education, 131, 33–48.

Templier, M., & Paré, G. (2015). A framework for guiding and evaluating literature reviews. Communications of the Association for Information Systems, 37 (1), 6.

Thelwall, M. (2019). Artificial intelligence, automation and peer review . Bristol: JISC.

Tsai, Y., & Gasevic, D. (2017). Learning analytics in higher education—Challenges and policies: A review of eight learning analytics policies. ACM International Conference Proceeding Series (pp. 233–242). Association for Computing Machinery.

Tsai, Y. S., Poquet, O., Gašević, D., Dawson, S., & Pardo, A. (2019). Complexity leadership in learning analytics: Drivers, challenges and opportunities. British Journal of Educational Technology, 50 (6), 2839–2854.

Tsekleves, E., Darby, A., Whicher, A., & Swiatek, P. (2017). Co-designing design fictions: A new approach for debating and priming future healthcare technologies and services. Archives of Design Research, 30 (2), 5–21.

Wellnhammer, N., Dolata, M., Steigler, S., & Schwabe, G. (2020). Studying with the help of digital tutors: Design aspects of conversational agents that influence the learning process. Proceedings of the 53rd Hawaii International Conference on System Sciences , (pp. 146–155).

Williamson, B. (2019). Policy networks, performance metrics and platform markets: Charting the expanding data infrastructure of higher education. British Journal of Educational Technology, 50 (6), 2794–2809.

Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology. https://doi.org/10.1080/17439884.2020.1798995 .

Wilsdon, J. (2015). The metric tide: Independent review of the role of metrics in research assessment and management . Sage.

Winkler, R. & Söllner, M. (2018). Unleashing the potential of chatbots in education: A state-of-the-art analysis. In: Academy of Management Annual Meeting (AOM). Chicago, USA.

Woolf, B. P., Lane, H. C., Chaudhri, V. K., & Kolodner, J. L. (2013). AI grand challenges for education. AI Magazine, 34 (4), 66–84.

Zawacki-Richter, O., Marín, V., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—where are the educators? International Journal of Educational Technology in Higher Education . https://doi.org/10.1186/s41239-019-0171-0 .

Zeide, E. (2017). The structural consequences of big data-driven education. Big Data, 5 (2), 164–172.


Acknowledgements

Not applicable.

The project was funded by Society of Research into Higher Education—Research Scoping Award—SA1906.

Author information

Authors and affiliations

Information School, The University of Sheffield, Level 2, Regent Court, 211 Portobello, Sheffield, S1 4DP, UK


Contributions

AC conceived and wrote the entire article. All authors read and approved the final manuscript.

Corresponding author

Correspondence to A. M. Cox .

Ethics declarations

Competing interests

The author declares that he has no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Cox, A.M. Exploring the impact of Artificial Intelligence and robots on higher education through literature-based design fictions. Int J Educ Technol High Educ 18, 3 (2021). https://doi.org/10.1186/s41239-020-00237-8


Received: 04 September 2020

Accepted: 24 November 2020

Published: 18 January 2021

DOI: https://doi.org/10.1186/s41239-020-00237-8


  • Artificial Intelligence
  • Social robots
  • Learning analytics
  • Design fiction


Front Med (Lausanne)

A Review of Artificial Intelligence and Robotics in Transformed Health Ecosystems

Kerstin Denecke

1 Institute for Medical Information, Bern University of Applied Sciences, Bern, Switzerland

Claude R. Baudoin

2 Object Management Group, Needham, MA, United States

Associated Data

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.

Health care is shifting toward becoming proactive, following the concept of P5 medicine: a predictive, personalized, preventive, participatory and precision discipline. This patient-centered care heavily leverages the latest technologies of artificial intelligence (AI) and robotics, which support diagnosis, decision making and treatment. In this paper, we present the role of AI and robotic systems in this evolution, including example use cases. We categorize systems along multiple dimensions such as the type of system, the degree of autonomy, the care setting where the systems are applied, and the application area. These technologies have already achieved notable results in the prediction of sepsis or cardiovascular risk, the monitoring of vital parameters in intensive care units, and in the form of home care robots. Still, while much research is conducted on AI and robotics in health care, adoption in real-world care settings remains limited. To remove adoption barriers, we need to address issues such as safety, security, privacy and ethical principles; detect and eliminate bias that could result in harmful or unfair clinical decisions; and build trust in, and societal acceptance of, AI.

The Need for AI and Robotics in Transformed Health Ecosystems

“Artificial intelligence (AI) is the term used to describe the use of computers and technology to simulate intelligent behavior and critical thinking comparable to a human being” (1). Machine learning enables AI applications to improve their algorithms automatically (i.e., without being explicitly programmed to do so) through experience gained from cognitive inputs or from data. AI solutions provide data and knowledge to be used by humans or other technologies. The possibility of machines behaving in such a way was originally raised by Alan Turing and further explored starting in the 1950s. Medical expert systems such as MYCIN, designed in the 1970s for medical consultations (2), were internationally recognized as a revolution supporting the development of AI in medicine. However, their clinical acceptance was not very high. Similar disappointments across multiple domains led to the so-called “AI winter,” in part because rule-based systems do not allow the discovery of unknown relationships, and in part because of the limitations in computing power at the time. Since then, computational power has increased enormously.

Over the centuries, we have improved our knowledge of the structure and function of the human body, starting with organs, tissues, cells, sub-cellular components, etc. We have since advanced that knowledge to the molecular and sub-molecular level, including protein-coding genes, DNA sequences, non-coding RNA, etc., and their effects and behavior in the human body. This has resulted in a continuously improving understanding of the biology of diseases and disease progression (3). Nowadays, biomedical research and clinical practice are struggling with the size and complexity of the data produced by sequencing technologies, and with how to derive new diagnoses and treatments from it. Experimental results, often hidden in clinical data warehouses, must be aggregated, analyzed, and exploited to derive new, detailed and data-driven knowledge of diseases and enable better decision making.

New tools based on AI have been developed to predict disease recurrence and progression (4) or response to treatment; and robotics, often categorized as a branch of AI, plays an increasing role in patient care. In a medical context, AI means, for example, imitating the decision-making processes of health professionals (1). Whereas AI generates data and knowledge, robotics produces tangible outcomes or performs physical tasks. AI and robotics use knowledge and patient data for various tasks such as diagnosis; planning of surgeries; monitoring of patients' physical and mental wellness; and basic physical interventions to improve patient independence during physical or mental deterioration. We review concrete realizations in a later section of this paper.

These advances are causing a revolution in health care, enabling it to become proactive, as called for by the concept of P5 medicine: a predictive, personalized, preventive, participatory and precision discipline (5). AI can help interpret personal health information together with other data to stratify diseases and to predict, stop or treat their progression.

In this paper, we describe the impact of AI and robotics on P5 medicine and introduce example use cases. We then discuss challenges faced by these developments. We conclude with recommendations to help AI and robotics transform health ecosystems. We extensively refer to appropriate literature for details on the underlying methods and technologies. Note that we concentrate on applications in the care setting and will not address in more detail the systems used for the education of professionals, logistics, or related to facility management–even though there are clearly important applications of AI in these areas.

Classification of AI and Robotic Systems in Medicine

We can classify the landscape of AI and robotic systems in health care according to three dimensions (Figure 1): use, task, and technology. Within the "use" dimension, we can further distinguish the application area and the care setting. The "task" dimension is characterized by the system's degree of autonomy. Finally, within the "technology" dimension, we consider the degree of intrusion into a patient and the type of system. Clearly, this is a simplification and an aggregation: AI algorithms as such will not be located in a patient, and so on.

Figure 1. Categorization of systems based on AI and robotics in health care.

Classification Based on Type of System

We can distinguish two types of such systems: virtual and physical ( 6 ).

  • Virtual systems (relating to AI systems) range from applications such as electronic health record (EHR) systems, or text and data mining applications, to systems supporting treatment decisions.
  • Physical systems relate to robotics and include robots that assist in performing surgeries, smart prostheses for handicapped people, and physical aids for elderly care.

There can also be hybrid systems combining AI with robotics, such as social robots that interact with users or microrobots that deliver drugs inside the body.

All these systems exploit two enabling technologies: data and algorithms (see Figure 2). For example, a robotic system may collect data from different sensors: visual, physical, auditory or chemical. The robot's processor manipulates, analyzes, and interprets the data. Actuators then enable the robot to perform different functions, including visual, physical, auditory or chemical responses.
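The sensor-to-actuator pipeline described above can be sketched as a minimal sense/process/act control loop. All class and function names, channels and thresholds below are hypothetical, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """A single observation from one of the robot's sensors."""
    channel: str   # e.g. "visual", "physical", "auditory", "chemical"
    value: float   # normalized signal strength in [0, 1]

def process(readings):
    """Interpret raw sensor data and choose an action (illustrative rule)."""
    # Example: respond if any auditory signal exceeds a loudness threshold.
    auditory = [r.value for r in readings if r.channel == "auditory"]
    if auditory and max(auditory) > 0.8:
        return "emit_auditory_response"
    return "idle"

def control_step(readings):
    """One sense -> process -> act cycle; returns the chosen actuator command."""
    return process(readings)

readings = [SensorReading("auditory", 0.9), SensorReading("visual", 0.2)]
print(control_step(readings))  # -> emit_auditory_response
```

A real robot would run this cycle continuously, with the processing step replaced by learned models rather than a fixed rule.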

Figure 2. Types of AI-based systems and enabling technologies.

Two kinds of data are required: data that captures the knowledge and experience gained by the system during diagnosis and treatment, usually through machine learning; and individual patient data, which AI can assess and analyze to derive recommendations. Data can be obtained from physical sensors (wearable, non-wearable), from biosensors ( 7 ), or from other information systems such as an EHR application. From the collected data, digital biomarkers can be derived that AI can analyze and interpret ( 8 ).
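As an illustration of how a digital biomarker might be derived from wearable data, the following sketch estimates resting heart rate from a day of heart-rate samples. The definition (a low quantile of the samples) and the function name are assumptions for illustration, not a validated clinical method:

```python
def resting_heart_rate(samples, quantile=0.1):
    """Estimate resting heart rate as a low quantile of daily HR samples.

    `samples` is a list of beats-per-minute readings from a wearable.
    Using a low quantile rather than the minimum makes the estimate more
    robust to spurious readings. (Illustrative definition only.)
    """
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    k = int(quantile * (len(ordered) - 1))
    return ordered[k]

day = [72, 68, 95, 60, 110, 63, 58, 120, 61, 59]
print(resting_heart_rate(day))  # -> 58
```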

AI-specific algorithms and methods allow data analysis, reasoning, and prediction. AI consists of a growing number of subfields such as machine learning (supervised, unsupervised, and reinforcement learning), machine vision, natural language processing (NLP) and more. NLP enables computers to process and understand natural language (written or spoken). Machine vision or computer vision extracts information from images. An authoritative taxonomy of AI does not exist yet, although several standards bodies have started addressing this task.

AI methodologies can be divided into knowledge-based AI and data-driven AI ( 9 ).

  • Knowledge-based AI models human knowledge by asking experts for relevant concepts and knowledge they use to solve problems. This knowledge is then formalized in software ( 9 ). This is the form of AI closest to the original expert systems of the 1970s.
  • Data-driven AI starts from large amounts of data, which are typically processed by machine learning methods to learn patterns that can be used for prediction. Virtual or augmented reality and other types of visualizations can be used to present and explore data, which helps understand relations among data items that are relevant for diagnosis ( 10 ).
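The contrast between the two methodologies can be made concrete with a toy example: a hand-authored clinical rule versus a decision threshold learned from labelled data. Both functions and the 38 °C cut-off are illustrative sketches, not clinical guidance:

```python
# Knowledge-based AI: an expert-authored rule, formalized in software.
def fever_rule(temperature_c):
    """Rule elicited from an expert: flag temperatures above 38 degrees C."""
    return temperature_c > 38.0

# Data-driven AI: learn the decision threshold from labelled examples.
def learn_threshold(examples):
    """Pick the cut-off best separating (temperature, is_fever) pairs."""
    candidates = sorted(t for t, _ in examples)
    best_t, best_correct = candidates[0], -1
    for t in candidates:
        correct = sum((temp > t) == label for temp, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

data = [(36.5, False), (37.0, False), (37.9, False), (38.2, True), (39.1, True)]
print(fever_rule(38.5), learn_threshold(data))  # -> True 37.9
```

The learned threshold recovers something close to the expert rule here, but only because the labelled data happened to encode the same boundary; with biased data, the two approaches diverge.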

To more fully exploit the knowledge captured in computerized models, the concept of digital twin has gained traction in the medical field ( 11 ). The terms “digital patient model,” “virtual physiological human,” or “digital phenotype” designate the same idea. A digital twin is a virtual model fed by information coming from wearables ( 12 ), omics, and patient records. Simulation, AI and robotics can then be applied to the digital twin to learn about the disease progression, to understand drug responses, or to plan surgery, before intervening on the actual patient or organ, effecting a significant digital transformation of the health ecosystems. Virtual organs (e.g., a digital heart) are an application of this concept ( 13 ). A digital twin can be customized to an individual patient, thus improving diagnosis.

Regardless of the specific kind of AI, there are some requirements that all AI and robotic systems must meet. They must be:

  • Adaptive . Transformed health ecosystems evolve rapidly, especially since according to P5 principles they adapt treatment and diagnosis to individual patients.
  • Context-aware . They must infer the current activity state of the user and the characteristics of the environment in order to manage information content and distribution.
  • Interoperable . A system must be able to exchange data and knowledge with other ones ( 14 ). This requires common semantics between systems, which is the object of standard terminologies, taxonomies or ontologies such as SNOMED CT. NLP can also help with interoperability ( 15 ).
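A minimal sketch of what terminology-mediated interoperability looks like in code, using a plain dictionary lookup in place of a real terminology server. The SNOMED CT codes shown are the commonly cited ones for these concepts, but a real system would resolve them against a licensed terminology service:

```python
# Site-specific labels mapped to shared SNOMED CT concept codes.
LOCAL_TO_SNOMED = {
    "high blood pressure": "38341003",  # hypertensive disorder
    "diabetes": "73211009",             # diabetes mellitus
}

# The partner system's labels, keyed by the same shared codes.
SNOMED_TO_REMOTE = {
    "38341003": "Hypertensive disorder",
    "73211009": "Diabetes mellitus",
}

def translate(local_term):
    """Map a site-specific label to the partner system's label via SNOMED CT."""
    code = LOCAL_TO_SNOMED[local_term]
    return SNOMED_TO_REMOTE[code]

print(translate("high blood pressure"))  # -> Hypertensive disorder
```

The point of the shared code is that neither system needs to know the other's local vocabulary, only the common terminology.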

Classification Based on Degree of Autonomy

AI and robotic systems can be grouped along an assistive-to-autonomous axis ( Figure 3 ). Assistive systems augment the capabilities of their user by aggregating and analyzing data, performing concrete tasks under human supervision [for example, a semiautonomous ultrasound scanner ( 17 )], or learning how to perform tasks from a health professional's demonstrations. For example, a robot may learn from a physiotherapist how to guide a patient through repetitive rehabilitation exercises ( 18 ).

Figure 3. Levels of autonomy of robotic and AI systems [following models proposed by (16)].

Autonomous systems respond to real-world conditions, make decisions, and perform actions with minimal or no interaction with a human ( 19 ). They may be encountered in a clinical setting (autonomous implanted devices), in support functions to provide assistance 1 (carrying things around in a facility), or in the automation of non-physical work, such as a digital receptionist handling patient check-in ( 20 ).

Classification Based on Application Area

The diversity of users of AI and robotics in health care implies an equally broad range of application areas described below.

Robotics and AI for Surgery

Robotics-assisted surgery, "the use of a mechanical device to assist surgery in place of a human-being or in a human-like way" ( 21 ), is rapidly impacting many common general surgical procedures, especially minimally invasive surgery. Three types of robotic systems are used in surgery:

  • Active systems undertake pre-programmed tasks while remaining under the control of the operating surgeon;
  • Semi-active systems allow a surgeon to complement the system's pre-programmed component;
  • Master–slave systems lack any autonomous elements; they entirely depend on a surgeon's activity. In laparoscopic surgery or in teleoperation, the surgeon's hand movements are transmitted to surgical instruments, which reproduce them.

Surgeons can also be supported by navigation systems, which localize positions in space and help answer a surgeon's anatomical orientation questions. Real-time tracking of markers, realized in modern surgical navigation systems using a stereoscopic camera emitting infrared light, can determine the 3D position of prominent structures ( 22 ).

Robotics and AI for Rehabilitation

Various AI and robotic systems support rehabilitation tasks such as monitoring, risk prevention, or treatment ( 23 ). For example, fall detection systems ( 24 ) use smart sensors placed within an environment or in a wearable device, and automatically alert medical staff, emergency services, or family members if assistance is required. AI allows these systems to learn the normal behavioral patterns and characteristics of individuals over time. Moreover, systems can assess environmental risks, such as household lights that are off or proximity to fall hazards (e.g., stairwells). Physical systems can provide physical assistance (e.g., lifting items, opening doors), monitoring, and therapeutic social functions ( 25 ). Robotic rehabilitation applications can provide both physical and cognitive support to individuals by monitoring physiological progress and promoting social interaction. Robots can support patients in recovering motions after a stroke using exoskeletons ( 26 ), or recovering or supplementing lost function ( 27 ). Beyond directly supporting patients, robots can also assist caregivers. An overview on home-based rehabilitation robots is given by Akbari et al. ( 28 ). Virtual reality and augmented reality allow patients to become immersed within and interact with a 3D model of a real or imaginary world, allowing them to practice specific tasks ( 29 ). This has been used for motor function training, recovery after a stroke ( 30 ) and in pain management ( 31 ).
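
The behavioral-baseline idea behind such monitoring can be sketched in a few lines: learn a user's normal activity level over time, then flag large deviations that may warrant an alert. All data and thresholds below are invented for illustration, not taken from any deployed system.

```python
import statistics

# Minimal sketch of learning a "normal" activity pattern from wearable data
# and flagging deviations. Data and the 3-sigma threshold are illustrative.
def fit_baseline(daily_step_counts):
    """Summarize a user's normal activity as mean and spread."""
    return statistics.mean(daily_step_counts), statistics.stdev(daily_step_counts)

def is_anomalous(steps, baseline, k=3.0):
    """Flag a day far outside the learned pattern (e.g., inactivity after a fall)."""
    mu, sigma = baseline
    return abs(steps - mu) > k * sigma

history = [5200, 4800, 5100, 4950, 5300, 5050, 4900]  # a week of step counts
baseline = fit_baseline(history)
```

A real system would combine many such signals (location, posture, time of day) and only then alert staff or family members.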

Robotics and AI for Telemedicine

Systems supporting telemedicine address, among other functions, triage, diagnosis, non-surgical treatment, surgical treatment, consultation, monitoring, and the provision of specialty care ( 32 ).

  • Medical triage assesses current symptoms, signs, and test results to determine the severity of a patient's condition and the treatment priority. An increasing number of mobile health applications based on AI are used for diagnosis or treatment optimization ( 33 ).
  • Smart mobile and wearable devices can be integrated into “smart homes” using Internet-of-Things (IoT) technologies. They can collect patient and contextual data, assist individuals with everyday functioning, monitor progress toward individualized care and rehabilitation goals, issue reminders, and alert care providers if assistance is required.
  • Telemedicine for specialty care includes additional tools to track mood and behavior (e.g., pain diaries). AI-based chatbots can mitigate social isolation in home care environments 2 by offering companionship and emotional support to users, noting if they are not sleeping well, in pain, or depressed, which could indicate a more complex mental condition ( 34 ).
  • Beyond this, there are physical systems that can deliver specialty care: Robot DE NIRO can interact naturally, reliably, and safely with humans, autonomously navigate through environments on command, and intelligently retrieve or move objects ( 35 ).

Robotics and AI for Prediction and Precision Medicine

Precision medicine considers the individual patients, their genomic variations as well as contributing factors (age, gender, ethnicity, etc.), and tailors interventions accordingly ( 8 ). Digital health applications can also incorporate data such as emotional state, activity, food intake, etc. Given the amount and complexity of data this requires, AI can learn from comprehensive datasets to predict risks and identify the optimal treatment strategy ( 36 ). Clinical decision support systems (CDSS) that integrate AI can provide differential diagnoses, recognize early warning signs of patient morbidity or mortality, or identify abnormalities in radiological images or laboratory test results ( 37 ). They can increase patient safety, for example by reducing medication or prescription errors or adverse events and can increase care consistency and efficiency ( 38 ). They can support clinical management by ensuring adherence to the clinical guidelines or automating administrative functions such as clinical and diagnostic encoding ( 39 ), patient triage or ordering of procedures ( 37 ).
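
As a toy illustration of the kind of risk score a CDSS might surface, the sketch below applies a logistic model to a few patient features. The feature set, weights, and bias are invented for illustration and do not come from any validated clinical model.

```python
import math

# Toy risk score of the kind a CDSS might surface. All weights are invented;
# a real model would be trained on comprehensive datasets and validated.
WEIGHTS = {"age": 0.04, "systolic_bp": 0.02, "smoker": 0.8}
BIAS = -6.0

def risk(patient):
    """Logistic link: weighted sum of features mapped to a probability."""
    z = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

low  = risk({"age": 40, "systolic_bp": 115, "smoker": 0})
high = risk({"age": 80, "systolic_bp": 160, "smoker": 1})
```

Even this caricature shows why such scores fit the P5 agenda: the same model produces an individualized output per patient, not a population average.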

AI and Agents for Management and Support Tasks

NLP applications, such as voice transcription, have proved helpful for clinical note-taking ( 40 ), compiling electronic health records, and automatically generating medical reports from patient-doctor conversations or diagnostic reports ( 41 ). AI algorithms can help retrieve context-relevant patient data. Concept-based information retrieval can improve search accuracy and retrieval speed ( 42 ). AI algorithms can also improve the use and allocation of hospital resources by predicting patients' length of stay ( 43 ) or risk of re-admission ( 44 ).

Classification Based on Degree of Intrusion Into a Patient

Robotic systems can be used inside the body, on the body or outside the body. Those applied inside the body include microrobots ( 45 ), surgical robots and interventional robots. Microrobots are sub-millimeter untethered devices that can be propelled for example by chemical reactions ( 46 ), or physical fields ( 47 ). They can move unimpeded through the body and perform tasks such as targeted therapy (localized delivery of drugs) ( 48 ).

Microrobots can assist in physical surgery, for example by drilling through a blood clot or by opening up obstructions in the urinary tract to restore normal flow ( 49 ). They can provide directed local tissue heating to destroy cancer cells ( 50 ). They can be implanted to provide continuous remote monitoring and early awareness of an emerging disease.

Robotic prostheses, orthoses and exoskeletons are examples of robotic systems worn on the body. Exoskeletons are wearable robotic systems that are tightly physically coupled with a human body to provide assistance or enhance the wearer's physical capabilities ( 51 ). While they have often been developed for applications outside of health care, they can help workers with physically demanding tasks such as moving patients ( 52 ) or assist people with muscle weakness or movement disorders. Wearable technology can also be used to measure and transmit data about vital signs or physical activity ( 19 ).

Robotic systems applied outside the body can help avoid direct contact when treating patients with infectious diseases ( 53 ), assist in surgery (as already mentioned), including remote surgical procedures that leverage augmented reality ( 54 ) or assist providers when moving patients ( 55 ).

Classification Based on Care Setting

Another dimension of AI and robotics is the duration of their use, which directly correlates with the location of use. Both can significantly influence the requirements, design, and technology components of the solution. In a longer-term care setting, robotics can be used in a patient's home (e.g., for monitoring of vital signs) or for treatment in a nursing home. Shorter-term care settings include inpatient hospitals, palliative care facilities or inpatient psychiatric facilities. Example applications are listed in Table 1 .

Table 1. Selected care settings where robotic systems may be used [adapted from ( 62 )].

Sample Realizations

Having seen how to classify AI and robotic systems in health care, we turn to recent concrete achievements that illustrate their practical application. This list is by no means exhaustive, but it shows that we are no longer purely at the research or experimentation stage: the technology is starting to bear fruit in a very concrete way, that is, by improving outcomes, even if only in the context of clinical trials prior to regulatory approval for general use.

Sepsis Onset Prediction

Sepsis was recently identified as the leading cause of death worldwide, surpassing even cancer or cardiovascular diseases. 3 While timely diagnosis and treatment are difficult in every care setting, sepsis is also the leading cause of death in hospitals in the United States (Sepsis Fact Sheet 4 ). A key reason is the difficulty of recognizing precursor symptoms early enough to initiate effective treatment; early onset prediction therefore promises to save millions of lives each year. Here are four such projects:

  • Bayesian Health 5 , a startup founded by a researcher at Johns Hopkins University, applied its model to a test population of hospital patients and correctly identified 82% of the 9,800 patients who later developed sepsis.
  • Dascena, a California startup, has been testing its software on large cohorts of patients since 2017, achieving significant improvements in outcomes ( 63 ).
  • Patchd 6 uses wearable devices and deep learning to predict sepsis in high-risk patients. Early studies have shown that this technology can predict sepsis 8 h earlier, and more accurately, than under existing standards of care.
  • A team of researchers from Singapore developed a system that combines clinical measures (structured data) with physician notes (unstructured data), resulting in improved early detection while reducing false positives ( 64 ).
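
The detection figures quoted above are essentially recall (sensitivity) values. A minimal sketch of how such alert systems are commonly evaluated follows; the confusion-matrix counts are made up, chosen only so that sensitivity comes out at 82% as in the first example.

```python
# Sketch of evaluating an early-warning system's alerts. Counts are invented;
# only the 82% sensitivity mirrors the figure quoted in the text.
def alert_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": round(tp / (tp + fn), 3),  # sepsis cases that were flagged
        "ppv":         round(tp / (tp + fp), 3),  # alerts that were real cases
        "specificity": round(tn / (tn + fp), 3),  # non-cases correctly not flagged
    }

m = alert_metrics(tp=8036, fp=3000, fn=1764, tn=87200)
```

In practice the trade-off between sensitivity and alert fatigue (low PPV) largely decides whether clinicians adopt such a system.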

Monitoring Systems in the Intensive Care Unit

For patients in an ICU, the paradox is that large amounts of data are collected, displayed on monitors, and used to trigger alarms, but these various data streams are rarely used together, nor can doctors or nurses effectively observe all the data from all the patients all the time.

This is an area where much has been written, but most available information points to studies that have not resulted in actual deployments. A survey paper alluded in particular to the challenge of achieving effective collaboration between ICU staff and automated processes ( 65 ).

In one application example, machine learning helps resolve the asynchrony between a mechanical ventilator and the patient's own breathing reflexes, which can cause distress and complicate recovery ( 66 ).
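
One simple asynchrony pattern, double triggering (two ventilator cycles delivered within a single patient effort), can be sketched as a rule on breath timing. Real systems learn such patterns from full waveform data; the timing threshold and the breath times below are assumptions for illustration.

```python
# Highly simplified sketch of flagging "double triggering": two ventilator
# breaths starting suspiciously close together. Threshold and data invented.
def double_trigger_flags(breath_start_times, min_cycle=1.0):
    """Return one flag per breath interval; True marks a too-short cycle."""
    flags = []
    for prev, cur in zip(breath_start_times, breath_start_times[1:]):
        flags.append((cur - prev) < min_cycle)
    return flags

flags = double_trigger_flags([0.0, 3.1, 6.0, 6.4, 9.5])  # seconds
```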

Tumor Detection From Image Analysis

This is another area where research has provided evidence of the efficacy of AI, generally employed not alone but as an advisor to a medical professional, yet there are few actual deployments at scale.

These applications differ based on the location of the tumors, and therefore on the imaging techniques used to observe them. AI makes the interpretation of the images more reliable, generally by pointing radiologists to areas they might otherwise overlook.

  • In a study performed in Korea, AI appeared to improve the recognition of lung cancer in chest X-rays ( 67 ). AI by itself performed better than unaided radiologists, and the improvement was greater when AI was used as an aid by radiologists. Note however that the sample size was fairly small.
  • Several successive efforts aimed to use AI to classify dermoscopic images to discriminate between benign nevi and melanoma ( 68 ).

AI for COVID-19 Detection

The rapid and tragic emergence of the COVID-19 disease, and its continued evolution at the time of this writing, have mobilized many researchers, including the AI community. This domain naturally divides into two areas: diagnosis and treatment.

An example of AI applied to COVID-19 diagnosis is based on an early observation that the persistent cough that is one of the common symptoms of the disease "sounds different" from the cough caused by other ailments, such as the common cold. The MIT Opensigma project 7 has "crowdsourced" sound recordings of coughs from many people, most of whom do not have the disease while some know that they have it or had it. Several similar projects have been conducted elsewhere ( 69 ).

Another effort used AI to read computer tomography images to provide a rapid COVID-19 test, reportedly achieving over 90% accuracy in 15 s ( 70 ). Curiously, after this news was widely circulated in February-March 2020, nothing else was said for several months. Six months later, a blog post 8 from the University of Virginia radiology and medical department asserted that “CT scans and X-rays have a limited role in diagnosing coronavirus.” The approach pioneered in China may have been the right solution at a specific point in time (many cases concentrated in a small geographical area, requiring a massive detection effort before other rapid tests were available), thus overriding the drawbacks related to equipment cost and patient exposure to radiation.

Patient Triage and Symptom Checkers

While the word triage immediately evokes urgent decisions about what interventions to perform on acutely ill patients or accident victims, it can also be applied to remote patient assistance (e.g., telehealth applications), especially in areas underserved by medical staff and facilities.

In an emergency care setting, where triage decisions can result in the survival or death of a person, there is a natural reluctance to entrust such decisions to machines. However, AI as a predictor of outcomes could serve as an assistant to an emergency technician or doctor. A 2017 study of emergency room triage of patients with acute abdominal pain only showed an “acceptable level of accuracy” ( 71 ), but more recently, the Mayo Clinic introduced an AI-based “digital triage platform” from Diagnostic Robotics 9 to “perform clinical intake of patients and suggest diagnoses and hospital risk scores.” These solutions can now be delivered by a website or a smartphone app, and have evolved from decision trees designed by doctors to incorporate AI.

Cardiovascular Risk Prediction

Google Research announced in 2018 that it had achieved "prediction of cardiovascular risk factors from retinal fundus photographs via deep learning" with a level of accuracy similar to traditional methods such as blood tests for cholesterol levels ( 72 ). The novelty consists in the use of a neural network to analyze the retinal image, resulting in more power at the expense of explainability.

In practice, the future of such a solution is unclear: certain risk factors could be assessed from the retinal scan, but those were often factors that could be measured directly anyway, such as blood pressure.

Gait Analysis

Many physiological and neurological factors affect how someone walks, given the complex interactions between the sense of touch, the brain, the nervous system, and the muscles involved. Certain conditions, in particular Parkinson's disease, have been shown to affect a person's gait, causing visible symptoms that can help diagnose the disease or measure its progress. Even if an abnormal gait results from another cause, an accurate analysis can help assess the risk of falls in elderly patients.

Compared to other applications in this section, gait analysis has been practiced for a longer time (over a century) and has progressed incrementally as new motion capture methods (film, video, infrared cameras) were developed. In terms of knowledge representation, see for example the work done at MIT twenty years ago ( 73 ). Computer vision, combined with AI, can considerably improve gait analysis compared to a physician's simple observation. Companies such as Exer 10 offer solutions that physical therapists can use to assess patients, or that can help monitor and improve a home exercise program. This is an area where technology has already been deployed at scale: there are more than 60 clinical and research gait labs 11 in the U.S. alone.
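
A minimal sketch of the parameters such systems derive is shown below, assuming heel-strike timestamps have already been extracted from video by a pose-estimation pipeline; the timestamps themselves are invented for illustration.

```python
import statistics

# Sketch of deriving simple gait parameters from heel-strike timestamps,
# as a vision-based system might extract them. Timestamps are invented.
def gait_parameters(heel_strikes_s):
    """Return (cadence in steps/min, stride-time variability in seconds)."""
    strides = [b - a for a, b in zip(heel_strikes_s, heel_strikes_s[1:])]
    cadence_spm = 60.0 / statistics.mean(strides)
    variability = statistics.stdev(strides)  # irregularity is a clinical marker
    return round(cadence_spm, 1), round(variability, 3)

cadence, variability = gait_parameters([0.0, 0.52, 1.05, 1.55, 2.10])
```

Stride-time variability in particular is one of the measures studied in conditions such as Parkinson's disease and in fall-risk assessment.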

Home Care Robots

Robots that provide assistance to elderly or sick persons have been the focus of research and development for several decades, particularly in Japan due to the country's large aging population with above-average longevity. “Elder care robots” can be deployed at home (with cost being an obvious issue for many customers) or in senior care environments ( 74 ), where they will help alleviate a severe shortage of nurses and specialized workers, which cannot be easily addressed through the hiring of foreign help given the language barrier.

The types of robots used in such settings are proliferating. They range from robots that help patients move or exercise, to robots that help with common tasks such as opening the front door to a visitor or bringing a cup of tea, to robots that provide psychological comfort and even some form of conversation. PARO, for instance, is a robotic baby seal developed to provide treatment to patients with dementia ( 75 ).

Biomechatronics

Biomechatronics combines biology, mechanical engineering, and electronics to design assistive devices that interpret inputs from sensors and send commands to actuators–with both sensors and actuators attached in some manner to the body. The sensors, actuators, control system, and the human subject form together a closed-loop control system.
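
The closed loop described above can be sketched as a proportional controller driving a joint toward a target angle. The gain and the idealized one-line dynamics are assumptions for illustration, not a model of any real device.

```python
# Closed-loop control in caricature: a proportional controller drives a
# prosthetic joint toward a target angle. Gain and dynamics are invented.
def simulate(target_deg, steps=50, kp=0.3):
    angle = 0.0
    for _ in range(steps):
        error = target_deg - angle  # sensor reading vs. desired angle
        angle += kp * error         # actuator command proportional to error
    return angle

final = simulate(30.0)  # converges toward the 30-degree target
```

Real biomechatronic controllers add derivative and integral terms, sensor fusion, and safety limits, but the sense-decide-actuate loop is the same.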

Biomechatronic applications live at the boundary of prosthetics and robotics, for example to help amputees achieve close-to-normal motion of a prosthetic limb. This work has been demonstrated for many years, with impressive results, at the MIT Media Lab under Prof. Hugh Herr. 12 However, those applications have rarely left the lab environment due to the device cost. That cost could be lowered by production in large quantities, but coverage by health insurance companies or agencies is likely to remain problematic.

Mapping of Use Cases to Classification

Table 2 shows a mapping of the above use cases to the classification introduced in the first section of this paper.

Table 2. Mapping of use cases to our classification.

Adoption Challenges to AI and Robotics in Health Care

While the range of opportunities, and the achievements to date, of robotics and AI are impressive as seen above, multiple issues impede their deployment and acceptance in daily practice.

Issues related to trust, security, privacy and ethics are prevalent across all aspects of health care, and many are discussed elsewhere in this issue. We will therefore only briefly mention those challenges that are unique to AI and robotics.

Resistance to Technology

Health care professionals may ignore or resist new technologies for multiple reasons, including actual or perceived threats to professional status and autonomy ( 76 ), privacy concerns ( 77 ) or the unresolved legal and ethical questions of responsibility ( 78 ). The issues of worker displacement by robots are just as acute in health care as in other domains. Today, while surgery robots operate increasingly autonomously, humans still perform many tasks and play an essential role in determining the robot's course of operation (e.g., for selecting the process parameters or for the positioning of the patient) ( 79 ). This allocation of responsibilities is bound to evolve.

Transparency and Explainability

Explainability is “a characteristic of an AI-driven system allowing a person to reconstruct why a certain AI came up with the presented prediction” ( 80 ). In contrast to rule-based systems, AI-based predictions can often not be explained in a human-intelligible manner, which can hide errors or bias (the “black box problem” of machine learning). The explainability of AI models is an ongoing research area. When information on the reasons for an AI-based decision is missing, physicians cannot judge the reliability of the advice and there is a risk to patient safety.
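
For a transparent linear score, each feature's contribution is simply its weight times its value, which is roughly the kind of per-feature breakdown that attribution methods aim to provide for opaque models. The weights and patient values below are invented for illustration.

```python
# Toy illustration of a per-feature explanation for a linear risk score.
# Attribution methods for opaque models report analogous breakdowns.
WEIGHTS = {"age": 0.04, "creatinine": 0.9, "smoker": 0.8}  # invented weights

def contributions(patient):
    """Rank each feature's contribution (weight * value), largest drivers first."""
    c = {k: round(WEIGHTS[k] * patient[k], 2) for k in WEIGHTS}
    return sorted(c.items(), key=lambda kv: -abs(kv[1]))

top = contributions({"age": 70, "creatinine": 2.1, "smoker": 1})
```

A physician shown such a ranking can at least sanity-check whether the dominant factors are clinically plausible, which is exactly what the black-box setting denies.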

Responsibility, Accountability and Liability

Who is responsible when the AI or robot makes mistakes or creates harm in patients? Is it the programmer, manufacturer, end user, the AI/robotic system itself, the provider of the training dataset, or something (or someone) else? The answer depends on the system's degree of autonomy. The European Parliament's 2017 Resolution on AI ( 81 ) assigns legal responsibility for an action of an AI or robotic system to a human actor, which may be its owner, developer, manufacturer or operator.

Data Protection

Machine learning requires access to large quantities of data regarding patients as well as healthy people. This raises issues regarding the ownership of data, protection against theft, compliance with regulations such as HIPAA in the U.S. ( 82 ) or GDPR for European citizens ( 83 ), and what level of anonymization of data is necessary and possible. Regarding the last point, AI models could have unintended consequences, and the evolution of science itself could make patient re-identification possible in the future.

Data Quality and Integration

Currently, the reliability and quality of data received from sensors and digital health devices remain uncertain ( 84 )–a fact that future research and development must address. Datasets in medicine are naturally imperfect (due to noise, errors in documentation, incompleteness, differences in documentation granularities, etc.), hence it is impossible to develop error-free machine learning models ( 80 ). Furthermore, without a way to quickly and reliably integrate the various data sources for analysis, there is lost potential for fast diagnosis by AI algorithms.

Safety and Security

Introducing AI and robotics into the delivery of health care is likely to create new risks and safety issues. Those will exist even under normal functioning circumstances, when they may be due to design, programming or configuration errors, or improper data preparation ( 85 ).

These issues only get worse when considering the probability of cyberattacks:

  • Patient data may be exposed or stolen, perhaps by scammers who want to exploit it for profit.
  • Security vulnerabilities in robots that interact directly with patients may cause malfunctions that physically threaten the patient or professional. The robot may cause harm directly, or indirectly by giving a surgeon incorrect feedback. In case of unexpected robot behavior, it may be unclear to the user whether the robot is functioning properly or is under attack ( 86 ).

The EU Commission recently drafted a legal framework 13 addressing the risks of AI (not only in health care) in order to improve the safety of and trust in AI. The framework distinguishes four levels of risk: unacceptable, high, limited, and minimal. AI systems posing unacceptable risks will be prohibited; high-risk systems will have to meet strict obligations before release (e.g., risk assessment and mitigation, traceability of results). Limited-risk applications such as chatbots (which can be used in telemedicine) will require "labeling" so that users are made aware that they are interacting with an AI-powered system.

Bias

While P5 medicine aims at considering multiple factors–ethnicity, gender, socio-economic background, education, etc.–to come up with individualized care, current implementations of AI often demonstrate potential biases toward certain groups of the population. The training datasets may have under-represented those groups, or important features may be distributed differently across groups–for example, cardiovascular disease or Parkinson's disease progress differently in men and women ( 87 ), so the corresponding features will vary. These causes result in undesirable bias and "unintended or unnecessary discrimination" of subgroups ( 88 ).

On the flip side, careful implementations of AI could explicitly consider gender, ethnicity, etc. differences to achieve more effective treatments for patients belonging to those groups. This can be considered “desirable bias” that counteracts the undesirable kind ( 89 ) and gets us closer to the goals of P5 medicine.

Trust–An Evolving Relationship

The relationship between patients and medical professionals has evolved over time, and AI is likely to impact it by inserting itself into the picture (see Figure 4 ). Although AI and robotics are performing well, human surveillance is still essential. Robots and AI algorithms operate logically, but health care often requires acting empathically. If doctors become intelligent users of AI, they may retain the trust associated with their role, but most patients, who have a limited understanding of the technologies involved, would have much difficulty in trusting AI ( 90 ). Conversely, reliable and accurate diagnosis and beneficial treatment, and appropriate use of AI and robotics by the physician can strengthen the patient's trust ( 91 ).

Figure 4. Physician-patient-AI relationship.

This assumes of course that the designers of those systems adhere to established guidelines for trustworthy AI in the first place, which includes such requirements as creating systems that are lawful, ethical, and robust ( 92 , 93 ).

AI and Robotics for Transformed Health Care–A Converging Path

We can summarize the previous sections as follows:

  • There are many types of AI applications and robotic systems, which can be introduced in many aspects of health care.
  • AI's ability to digest and process enormous amounts of data, and derive conclusions that are not obvious to a human, holds the promise of more personalized and predictive care–key goals of P5 medicine.
  • There have been, over the last few years, a number of proof-of-concept and pilot projects that have exhibited promising results for diagnosis, treatment, and health maintenance. They have not yet been deployed at scale–in part because of the time it takes to fully evaluate their efficacy and safety.
  • There is a rather daunting list of challenges to address, most of which are not purely technical–the key one being demonstrating that the systems are effective and safe enough to warrant the confidence of both the practitioners and their patients.

Based on this analysis, what is the roadmap to success for these technologies, and how will they succeed in contributing to the future of health care? Figure 5 depicts the convergent approaches that need to be developed to ensure safe and productive adoption, in line with the P5 medicine principles.

Figure 5. Roadmap for transformed health care.

First, AI technology is currently undergoing a remarkable revival and being applied to many domains. Health applications will both benefit from and contribute to further advances. In areas such as image classification or natural language understanding, both of which have obvious utility in health care, the rate of progress is remarkable. Today's AI techniques may seem obsolete in ten years.

Second, the more technical challenges of AI–such as privacy, explainability, or fairness–are being worked on, both in the research community and in the legislative and regulatory world. Standard procedures for assessing the efficacy and safety of systems will be needed, but in reality, this is not a new concept: it is what has been developed over the years to approve new medicines. We need to be consistent and apply the same hard-headed validation processes to the new technologies.

Third, it should be clear from our exploration of this subject that education–of patients as well as of professionals–is key to the societal acceptance of the role that AI and robotics will be called upon to play. Every invention or innovation–from the steam engine to the telephone to the computer–has gone through this process. Practitioners must learn enough about how AI models and robotics work to build a "working relationship" with those tools and build trust in them–just as their predecessors learned to trust what they saw on an X-ray or CT scan. Patients, for their part, need to understand what AI and robotics can or cannot do, how the physician will remain in the loop when appropriate, and what data is being collected about them in the process. We will have a responsibility to ensure that complex systems that patients do not sufficiently understand cannot be misused against them, whether accidentally or deliberately.

Fourth, health care is also a business, involving financial transactions between patients, providers, and insurers (public or private, depending on the country). New cost and reimbursement models will need to be developed, especially given that when AI is used to assist professionals, not replace them, the cost of the system is additive to the human cost of assessing the data and reviewing the system's recommendations.

Fifth and last, clinical pathways have to be adapted and new role models for physicians have to be built. Clinical paths can already differ and make it harder to provide continuity of care to a patient who moves across care delivery systems that have different capabilities. This issue is being addressed by the BPM+ Health Community 14 using the business process, case management and decision modeling standards of the Object Management Group (OMG). The issue will become more complex by integrating AI and robotics: every doctor has similar training and a stethoscope, but not every doctor or hospital will have the same sensors, AI programs, or robots.

Eventually, the convergence of these approaches will help to build a complete digital patient model–a digital twin of each specific human being – generated out of all the data gathered from general practitioners, hospitals, laboratories, mHealth apps, and wearable sensors, along the entire life of the patient. At that point, AI will be able to support superior, fully personal and predictive medicine, while robotics will automate or support many aspects of treatment and care.

Data Availability Statement

Author Contributions

KD came up with the classification of AI and robotic systems. CB identified concrete application examples. Both authors contributed equally, identified adoption challenges, and developed recommendations for future work. Both authors contributed to the article and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

1 https://cmte.ieee.org/futuredirections/2019/07/21/autonomous-systems-in-healthcare/

2 https://emag.medicalexpo.com/ai-powered-chatbots-to-help-against-self-isolation-during-covid-19/

3 https://www.med.ubc.ca/news/sepsis-leading-cause-of-death-worldwide/

4 https://www.sepsis.org/wp-content/uploads/2017/05/Sepsis-Fact-Sheet-2018.pdf

5 https://medcitynews.com/2021/07/johns-hopkins-spinoff-looking-to-build-better-risk-prediction-tools-emerges-with-15m/

6 https://www.patchdmedical.com/

7 https://hisigma.mit.edu

8 https://blog.radiology.virginia.edu/covid-19-and-imaging/

9 https://hitinfrastructure.com/news/diagnostic-robotics-mayo-clinic-bring-triage-platform-to-patients

10 https://www.exer.ai

11 https://www.gcmas.org/map

12 https://www.media.mit.edu/groups/biomechatronics/overview/

13 https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

14 https://www.bpm-plus.org/


Computer Science > Human-Computer Interaction

Title: From Explainable to Interactive AI: A Literature Review on Current Trends in Human-AI Interaction

Abstract: AI systems are increasingly being adopted across various domains and application areas. With this surge, there is a growing research focus and societal concern for actively involving humans in developing, operating, and adopting these systems. Despite this concern, most existing literature on AI and Human-Computer Interaction (HCI) primarily focuses on explaining how AI systems operate and, at times, allowing users to contest AI decisions. Existing studies often overlook more impactful forms of user interaction with AI systems, such as giving users agency beyond contestability and enabling them to adapt and even co-design the AI's internal mechanics. In this survey, we aim to bridge this gap by reviewing the state of the art in Human-Centered AI literature, the domain where AI and HCI studies converge, extending past Explainable and Contestable AI into Interactive AI and beyond. Our analysis contributes to shaping the trajectory of future Interactive AI design and advocates for a more user-centric approach that provides users with greater agency, fostering not only their understanding of AI's workings but also their active engagement in its development and evolution.


Published on 27.5.2024 in Vol 26 (2024)

Advances in the Application of AI Robots in Critical Care: Scoping Review

Authors of this article:


  • Yun Li 1,2*, MD;
  • Min Wang 1,2*, MPH;
  • Lu Wang 1,2*, MD;
  • Yuan Cao 3*, MPH;
  • Yuyan Liu 1,2, MPH;
  • Yan Zhao 2, MPH;
  • Rui Yuan 1,2, MD;
  • Mengmeng Yang 2, MPH;
  • Siqian Lu 4, DSC;
  • Zhichao Sun 4, DSC;
  • Feihu Zhou 2, PhD;
  • Zhirong Qian 4,5,6, PhD;
  • Hongjun Kang 2, PhD

1 Medical School of Chinese PLA, Beijing, China

2 The First Medical Centre, Chinese PLA General Hospital, Beijing, China

3 The Second Hospital, Hebei Medical University, Hebei, China

4 Beidou Academic & Research Center, Beidou Life Science, Guangzhou, China

5 Department of Radiation Oncology, Fujian Medical University Union Hospital, Fujian, China

6 The Seventh Affiliated Hospital, Sun Yat-sen University, Shenzhen, China

*these authors contributed equally

Corresponding Author:

Hongjun Kang, PhD

The First Medical Centre

Chinese PLA General Hospital

28 Fuxing Road

Haidian District,

Beijing, 100853

Phone: 86 13811989878

Email: [email protected]

Background: In recent years, the field of critical care medicine has experienced significant advancements due to the integration of artificial intelligence (AI). Specifically, AI robots have evolved from theoretical concepts to being actively implemented in clinical trials and applications. The intensive care unit (ICU), known for its reliance on a vast amount of medical information, presents a promising avenue for the deployment of robotic AI, anticipated to bring substantial improvements to patient care.

Objective: This review aims to comprehensively summarize the current state of AI robots in the field of critical care by searching for previous studies, developments, and applications of AI robots related to ICU wards. In addition, it seeks to address the ethical challenges arising from their use, including concerns related to safety, patient privacy, responsibility delineation, and cost-benefit analysis.

Methods: Following the scoping review framework proposed by Arksey and O’Malley and the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, we conducted a scoping review to delineate the breadth of research on AI robots in the ICU and reported the findings. The literature search was carried out on May 1, 2023, across 3 databases: PubMed, Embase, and the IEEE Xplore Digital Library. Eligible publications were initially screened based on their titles and abstracts. Publications that passed the preliminary screening underwent a comprehensive review. Various research characteristics were extracted, summarized, and analyzed from the final publications.

Results: Of the 5908 publications screened, 77 (1.3%) underwent a full review. These studies collectively spanned 21 ICU robotics projects, encompassing their system development and testing, clinical trials, and approval processes. Using an expert-reviewed classification framework, these were categorized into 5 main types: therapeutic assistance robots, nursing assistance robots, rehabilitation assistance robots, telepresence robots, and logistics and disinfection robots. Most of these are already widely deployed and commercialized in ICUs, although a select few remain under testing. All robotic systems and tools are engineered to deliver more personalized, convenient, and intelligent medical services to patients in the ICU, concurrently aiming to reduce the substantial workload on ICU medical staff and promote therapeutic and care procedures. This review further explored the prevailing challenges, particularly focusing on ethical and safety concerns, proposing viable solutions or methodologies, and illustrating the prospective capabilities and potential of AI-driven robotic technologies in the ICU environment. Ultimately, we foresee a pivotal role for robots in a future scenario of a fully automated continuum from admission to discharge within the ICU.

Conclusions: This review highlights the potential of AI robots to transform ICU care by improving patient treatment, support, and rehabilitation processes. However, it also recognizes the ethical complexities and operational challenges that come with their implementation, offering possible solutions for future development and optimization.

Introduction

Artificial intelligence (AI) and robotics are 2 distinct yet interconnected concepts ubiquitous in contemporary media and digital platforms. The term artificial intelligence was first introduced as a Medical Subject Heading in the US National Library of Medicine’s PubMed database in 1986, defined as “Theory and development of computer systems which perform tasks that normally require human intelligence” [ 1 ]. The hallmarks of AI encompass autonomous thinking, learning, recognition, reasoning, judgment, and inference. Medicine has long been considered a promising application field for AI, where it can augment clinical diagnostics and decision-making capacities [ 2 ]. Robotics, as defined by the Medical Subject Headings of the US National Library of Medicine, pertains to “the application of electronic, computerized control systems to mechanical devices designed to perform human functions” [ 3 ]. Presently, a standardized definition for “AI robots” remains elusive. However, they can be perceived as “physical devices inheriting electronic, computerized, and mechanical control systems, capable of perception, reasoning, learning, decision-making, and task execution without direct human control, able to mimic and execute various tasks of human intelligence” [ 4 ]. The crux of this review is the discussion of AI robot applications within the intensive care unit (ICU). While AI technology encompasses machine learning, deep learning, predictive modeling, and natural language processing, this review did not delve into these distinct technologies. Rather, it concentrated on the practical products and application cases where these AI technologies are integrated into robotic systems.

The past few decades have witnessed an exponential proliferation of research into AI robots, particularly in the health care domain. However, most of these developments have remained confined to the stages of product development and testing, with few achieving large-scale clinical implementation. The year 2020 sparked an unforeseen public health incident that catalyzed intense interest in AI robots. The outbreak of COVID-19 expedited the transformative revolution of “Healthcare + AI + Robotics” technologies and their applications [ 5 , 6 ]. The deployment of AI robots significantly curtailed the infection risk for medical personnel in contagion hot spots, and AI robots stood on the front lines in the battle against COVID-19 transmission [ 7 ]. Serving as a technology that improves performance, precision, and time efficiency as well as reducing costs, AI robots have promoted the upgrade and development of modern industry and are being rapidly adopted by many industries [ 8 , 9 ]. The applicability of robots in society is already evident and growing significantly [ 10 ].

Despite the rapid advancements in this field across military, security, transportation, and manufacturing sectors, our focus remains on intensive care scenarios such as COVID-19. ICUs are specialized settings designed to provide systematic, high-quality medical care and life-saving treatment to patients with single- or multiorgan dysfunction, life-threatening illnesses, or potential high-risk factors [ 11 ]. As scientific and technological advancements accelerate, the contradiction between the high demand for quality care for patients who are critically ill and the chronic shortage of medical resources becomes increasingly pronounced [ 12 ]. In 1995, Hanson and Marshall [ 13 ] postulated that AI could reduce care costs for patients in the ICU and improve their prognosis. “There are plenty of areas in critical care where it would be extremely helpful to have efficacious, fair, and transparent AI systems,” notes Gary Weissman, professor of pulmonary and critical care medicine [ 14 ]. The full potential of AI will be realized once it becomes a trusted clinical assistant for intensivists. After all, ICUs, which routinely collect a significant volume of data, provide an ideal setting for the deployment of machine learning technologies [ 15 ].

Currently, the application of AI in intensive care predominantly focuses on “assisting” health care professionals. This review targeted AI robots in ICUs, primarily discussing relevant advancements in recent years and presenting and analyzing challenges faced in ICUs and potential solutions, as well as strategies for health care professionals to handle AI robotic technologies, for the reference of health care professionals and system developers.

We followed the scoping review methodology proposed by Arksey and O’Malley [ 16 ], which includes (1) identifying the research question; (2) identifying relevant studies; (3) study selection; (4) charting the data; and (5) collating, summarizing, and reporting the results. In addition, to ensure the rigor of the scoping review, we adhered to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines ( Multimedia Appendix 1 [ 17 ]).

Upon establishing the focus on the application of AI robots in ICUs, we embarked on a keyword search encompassing terms related to robots and intensive care. Due to the novelty and complexity of the research topic, the experimental designs and presentation of results vary significantly across the literature, making traditional methods of literature quality assessment inadequate. Consequently, we relied on the profound knowledge and practical experience of domain experts to provide essential perspectives for understanding the complexities presented in the literature. We engaged 4 experts (2 from the IT sector and 2 medical professionals) to devise a search strategy and select appropriate databases. Given the interdisciplinary nature of our research, spanning medicine, robotics engineering, and human-computer interaction design, our search was not confined to medical databases; we also included databases from the engineering field. Our literature search encompassed 3 electronic databases: PubMed, Embase, and the IEEE Xplore Digital Library. We restricted our search to publications in English, imposing no limits on the year of publication. The search was conducted over a brief period, from May 1, 2023, to May 5, 2023. Textbox 1 provides detailed insights into our search methodology.

Search strings

  • PubMed: (“Artificial Intelligence” [Medical Subject Heading (MeSH)] OR “AI” OR “Robotics” [MeSH] OR “Robots”) AND (“Intensive Care Units” [MeSH] OR “ICU” OR “Critical Care Units” OR “CCU”)
  • Embase: (“artificial intelligence”/exp OR “ai” OR “robotics”/exp OR “robots”) AND (“intensive care unit”/exp OR “icu” OR “critical care unit” OR “ccu”)
  • IEEE Xplore Digital Library: (“Artificial Intelligence” OR “AI” OR “Robotics” OR “Robots”) AND (“Intensive Care Units” OR “ICU” OR “Critical Care Units” OR “CCU”)
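The three boolean strings above share one structure, a technology block ANDed with a care-setting block, so they can be generated from a single pair of term lists. A minimal Python sketch (the helper names are ours; the MeSH and Emtree field tags used in PubMed and Embase are omitted for simplicity):

```python
def boolean_block(terms):
    """OR-join a list of quoted search terms into one parenthesised block."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

def build_query(tech_terms, setting_terms):
    """AND the technology block with the care-setting block."""
    return boolean_block(tech_terms) + " AND " + boolean_block(setting_terms)

# Term lists taken from Textbox 1
tech = ["Artificial Intelligence", "AI", "Robotics", "Robots"]
setting = ["Intensive Care Units", "ICU", "Critical Care Units", "CCU"]
query = build_query(tech, setting)
```

As written, `query` reproduces the IEEE Xplore string verbatim; the PubMed and Embase variants add their database-specific field tags on top of the same skeleton.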

During our initial search across the 3 databases, we retrieved a total of 5908 articles. These search results were first imported into EndNote (Clarivate Analytics), a literature management tool, to facilitate the deduplication process. Subsequently, LW and YC independently conducted a screening of titles and abstracts to weed out articles that were clearly irrelevant. Given the market’s saturation with robot products of similar functionalities and the constraints on this review’s length, we opted to focus on the most representative or widely adopted technologies among robots offering the same functions. This selection process led to the inclusion of 77 articles in our review. These studies encompassed multiple phases, including the design, development, and validation of robots, collectively covering 21 different AI robotic products, detailing their development, evolution, and application examples. Figure 1 shows a comprehensive view of our study selection process.

Figure 1. Study selection process.
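The deduplication step described above can be approximated in a few lines. This sketch is ours (the review actually used EndNote for deduplication); it collapses records whose titles differ only in case or punctuation:

```python
import re

def normalise(title: str) -> str:
    """Lowercase and strip punctuation/whitespace so near-identical titles collide."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records):
    """Keep the first record per normalised title.

    records: list of dicts, each with at least a 'title' key.
    """
    seen, unique = set(), []
    for rec in records:
        key = normalise(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```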

In our examination of the 77 articles selected for inclusion, we meticulously identified the key characteristics of each study. These characteristics encompassed the design purpose of the technology, the AI algorithms used, the anticipated application scenarios, and the target user groups. From this analysis, we developed an initial classification framework aimed at organizing technologies based on their primary functions and application contexts. For instance, technologies designed for nursing tasks were classified under nursing assistance robots, whereas those with therapeutic responsibilities were designated as therapeutic assistance robots. Similarly, devices integrated with ventilators capable of remote operation and achieving therapeutic goals were also classified as therapeutic assistance robots.

To validate the accuracy and logic behind our classification, we sought insights from additional experts in the domains of medicine and information engineering. Their feedback prompted adjustments and refinements to our framework, ensuring that it precisely represented the nuances and relationships among the various technologies.

Subsequently, applying the refined classification framework, we categorized the included studies into their respective groups, deriving organized results. This meticulous classification was undertaken with the aim of ensuring clarity and comprehensibility for all potential readers, including professionals (physicians and engineers), patients, and their families. Our goal was to provide a clear understanding of the applications and potential of various technologies in the ICU, making the information accessible and valuable to a broad audience.

Inclusion and Exclusion Criteria

This review included studies that met two main criteria: (1) the studies needed to focus on actual products or application cases that conformed to the definition of AI robots, and (2) the application scenario of the study had to be the ICU or any setting for patients discharged from the ICU.

We excluded studies that met either of the following conditions: (1) studies limited to AI algorithms, such as machine learning, deep learning, predictive models, or decision support systems; and (2) studies not published in English.

Application of AI Robots in Intensive Care

Currently, there is a broad array of experimental studies and AI robots applied in ICUs. We categorized these into 5 main types based on their application scenarios and functions in ICUs: therapeutic assistance robots, nursing assistance robots, rehabilitation assistance robots, telepresence robots, and logistics and disinfection robots ( Figure 2 ). The application of robotics in the medical field is extensive, encompassing myriad functions and scenarios, rendering their precise classification a formidable task. Multimedia Appendix 2 [ 6 , 18 - 90 ], which is neither exclusive nor exhaustive, summarizes the representative research groups and commercial suppliers to provide readers with a panoramic view of the field.

Figure 2. The 5 categories of AI robots applied in ICUs.
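To illustrate how such a classification framework can operate, the sketch below maps free-text robot descriptions onto the 5 categories via keyword rules. The keywords are invented for illustration and are not the authors' actual coding scheme:

```python
# Illustrative keyword rules for the 5 robot categories;
# NOT the expert-derived coding scheme used in the review.
CATEGORY_RULES = {
    "therapeutic assistance": ["ventilator", "anesthesia", "cpr", "resuscitation"],
    "nursing assistance": ["venipuncture", "suction", "kangaroo", "companion"],
    "rehabilitation assistance": ["gait", "exoskeleton", "rehabilitation"],
    "telepresence": ["teleconsultation", "remote rounds", "telepresence"],
    "logistics and disinfection": ["delivery", "uv", "disinfection", "transport"],
}

def classify(description: str) -> str:
    """Assign a robot description to the first category whose keywords match."""
    text = description.lower()
    for category, keywords in CATEGORY_RULES.items():
        if any(k in text for k in keywords):
            return category
    return "unclassified"
```

In practice the review's categorization was done by experts, not keywords; the point is only that each device maps to exactly one primary-function category.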

Therapeutic Assistance Robots

The ICU is a specialized department dedicated to the treatment of patients who are critically ill. Driven by the necessity to enhance therapeutic outcomes, mitigate medical risks, alleviate the workload of health care personnel, and provide personalized treatment regimens for patients, therapeutic assistance robots in the ICU have become indispensable [ 91 ].

Ventilator-Mounted Cartesian Robot

This can also be viewed as a type of telepresence robot. A teleoperated ventilator controller system has been developed, which consists of a custom robotic patient-side device and a touch-based master console. Computer vision tasks that enable an intuitive user interface and accurate robot control are executed on the master console. The system was initially developed and deployed for the most popular touch screen–controlled ventilator at Johns Hopkins Hospital, the Maquet Servo-u (Getinge AB) [ 18 ]. Unlike the traditional ventilators in most ICUs, the robot enables remote control and monitoring from outside the ICU room over the network. The actual ventilator settings are controlled and adjusted remotely via physical controls (such as buttons and knobs) with synchronized real-time video feedback [ 19 ]. It reduces the time staff spend entering the ICU to perform simple tasks such as changing ventilator settings, lowering the risk of infection and the strain on personal protective equipment supplies. Following a qualitative assessment in clinical environments, feedback from respiratory therapists highlighted that the system could significantly empower the respiratory care team by liberating valuable resources. The design’s Cartesian layout enables the robot to approach the operating table in as horizontal a manner as possible. This approach not only minimizes the installation burden but also ensures that the robot’s operation remains closely aligned with conventional manual procedures. Such a configuration facilitates ease of use and integration into existing medical workflows, thereby enhancing efficiency without compromising the quality of patient care [ 18 ]. In a simulated ICU environment, a Cartesian robot reduced the total time required for a respiratory therapist to make typical setup adjustments to a traditional ventilator from 271 to 109 seconds, which was 2.49 times faster [ 18 ].
More recently, Song et al [ 20 ] developed an integrated telemonitoring and operation system with an accurate XY positioner and a 3-df end effector for accurate manipulation (maximum positioning error of 0.695 mm; repeatability of 0.348 mm).

In ICUs where a single brand of ventilator is predominantly used, configuring Cartesian robots proves to be more convenient. Nevertheless, to accommodate a broader array of ICU wards and equipment types such as infusion pumps, it is imperative to expand the variety and range of control interfaces. This expansion aims to enhance the robot system’s capabilities for physical control interactions. However, a significant hurdle to the clinical application of these robotic systems is the challenge associated with their cleaning and disinfection. This issue could be mitigated in the future by enclosing the equipment within acrylic covers, thereby simplifying the process of maintaining hygiene and ensuring the equipment’s safety for patient care [ 6 ]. This solution not only addresses infection control concerns but also aids in the seamless integration of robotic systems into the rigorous and cleanliness-focused environment of the ICU.

McSleepy (Intelligent Technology in Anesthesia research group laboratory, McGill University, Montreal, Quebec, Canada), a real robot for anesthesia, is able to autonomously control hypnosis, analgesia, and neuromuscular block at the same time with regard to induction, maintenance, and emergence [ 21 ]. The anesthesiologist begins by entering patient data, including height, weight, type of surgery, and medication history, on the touch screen. In fully automatic mode, the system will use remifentanil, propofol, and rocuronium or succinylcholine to induce anesthesia. McSleepy assists the anesthesiologist in the same way that automatic transmission assists people when driving. Through a sensor that measures muscle movement, it monitors the patient’s depth of consciousness, pain severity, and muscle movement and injects the corresponding dose of medication intravenously according to a built-in algorithm based on the obtained data. It provides deep or peripheral muscle relaxation in different closed-loop models [ 22 ]. As such, anesthesiologists can focus more on other aspects of direct patient care. An additional feature is that the system can communicate with PDAs, making distant monitoring and anesthetic control possible [ 21 , 23 , 24 ].
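The closed-loop behavior described, monitoring a control variable and adjusting drug delivery accordingly, can be caricatured as a proportional feedback step. The sketch below is deliberately simplified (the gains, limits, and units are invented) and is not McSleepy's actual control algorithm:

```python
def update_infusion(rate, measured_bis, target_bis=45.0, gain=0.02,
                    min_rate=0.0, max_rate=10.0):
    """One proportional control step for a hypnotic infusion.

    rate: current infusion rate (mL/h, illustrative units).
    measured_bis: depth-of-hypnosis index (lower = deeper hypnosis).
    BIS above target means the patient is too light -> raise the rate;
    below target -> lower it. All gains and limits are illustrative only.
    """
    error = measured_bis - target_bis
    new_rate = rate + gain * error
    return max(min_rate, min(max_rate, new_rate))  # clamp to safe bounds
```

A real closed-loop system layers many safeguards on top of such a step, as the trial literature cited below emphasizes, but the monitor-compare-adjust cycle is the core idea.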

In a pilot study in the Department of Anesthesiology and Critical Care of the Bordeaux University Hospital, McSleepy performed automatic anesthesia for cardiac surgery with isoproterenol, remifentanil, and rocuronium without manual control. Automatic cardiac anesthesia was successfully performed in 80% (97.5% CI 53%-95%) of cases. Hypnosis was monitored using the bispectral index, which was <20% of the bispectral index target of 45, showing better control over hypnosis and duration of anesthesia [ 25 ]. A randomized controlled trial investigating a novel closed-loop drug delivery system found that automated delivery achieves superior sedation control over manual methods. This improvement is credited to the closed-loop system’s ability to frequently or continuously monitor control variables and adjust drug delivery rates more often, circumventing the fatigue that can impair manual administration [ 26 ]. To guarantee the safety of this automated system, numerous protective features have been implemented. These include preventing the administration of muscle relaxants when mask ventilation proves difficult and querying any manual actions that breach established safety protocols [ 27 ]. Recent advancements in research highlight the successful application of robotic technology in anesthesiology. Beyond the surgical anesthesia applications that we have detailed, delicate procedures such as endotracheal intubation are also seeing promising developments [ 28 ]. Anesthesiologists are encouraged to engage with robotic systems, leveraging their significant contributions to enhancing medical quality and efficacy, thereby ensuring the highest standard of patient care and treatment.

Cardiopulmonary Resuscitation Robots (Seoul University Medical College, South Korea)

Some reports indicate that failure of cardiopulmonary resuscitation (CPR) is an important limiting factor for life-prolonging treatment of patients in the ICU [ 29 , 30 ]. Robots have enough power to achieve high-quality CPR, which could overcome the shortcomings of manual CPR and mechanical CPR devices. Recently, Jung et al [ 31 ] developed an automated robotic CPR system that performs CPR automatically, analyzes the patient’s condition, and relays the information to the CPR system. In the initial state of the CPR process, the robot manipulator determines the optimal compression position by adjusting the point of pressure and, guided by end-tidal carbon dioxide levels, periodically and repeatedly delivers adequate speed and depth for performing CPR. The combined CPR system can accurately capture the compression condition of patients, overcome the blank periods when medical personnel alternately perform CPR, and increase the probability of cardiac resuscitation [ 31 , 32 ].
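The EtCO2-guided adjustment described above amounts to a feedback rule: compress deeper while end-tidal CO2 suggests inadequate perfusion, and back off otherwise, within safe depth limits. A hypothetical sketch (the target and bounds are illustrative, not taken from the cited system):

```python
def adjust_depth(depth_mm, etco2_mmhg, target=20.0, step=1.0,
                 min_depth=50.0, max_depth=60.0):
    """Step compression depth toward the EtCO2 target (values illustrative).

    EtCO2 below target suggests inadequate perfusion -> compress deeper;
    at or above target -> ease off slightly. Depth is clamped to a safe band.
    """
    if etco2_mmhg < target:
        depth_mm += step
    else:
        depth_mm -= step
    return max(min_depth, min(max_depth, depth_mm))
```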

In a study using a porcine model of cardiac arrest, the use of a CPR robot did not significantly enhance the success rate of resuscitation efforts compared to manual CPR. However, there was a notable improvement in the neurological deficit scores observed 48 hours after resuscitation, suggesting a potential benefit in postresuscitation neurological outcomes [ 32 ]. Another study indicated that robot-assisted CPR outperformed traditional manual CPR methods, achieving higher resuscitation success rates in patients without specific injuries [ 31 ]. The anticipated benefits of incorporating robot-assisted CPR into clinical settings extend beyond merely improving resuscitation success rates. They also include the potential to reduce labor costs and minimize the instances of ICU staff congregating around a patient’s bedside to alternate performing CPR. This technological advancement suggests a promising direction for enhancing the efficiency and effectiveness of resuscitation efforts, thereby improving patient outcomes while optimizing resource use within critical care environments.

Nursing Assistance Robots

Nursing care robots, designed to assist patients who are bedridden with simple long-term care services, have been widely used in hospitals to assist older adults and individuals with disabilities. The many invasive diagnostic procedures and heavy nursing workload (frequent multitasking) in the ICU leave medical staff vulnerable and overloaded, with very limited capacity to provide timely care to patients. Through the intervention of intelligent robots, the heavy physical labor of nursing work is alleviated to a certain extent, freeing nurses to devote their energies to more professional and meticulous care.

First, despite the proliferation of devices currently available on the market to assist with venipuncture, there remain certain limitations. Technologies such as ultrasound, which can easily penetrate human tissue, thereby enabling the high-resolution visualization of both shallow and deep tissue structures for vascular guidance, and devices such as the VeinViewer and AccuVein AV300 [ 33 , 34 ], which use near-infrared spectroscopy technology for vascular imaging, are fairly mature. However, these techniques are still constrained by their inability to provide the depth of the vein as well as the fact that imaging technology does not directly aid the insertion of the needle. VenousPro (VascuLogic, LLC), an automated robotic venipuncture device, addresses these issues. It identifies vessels suitable for cannulation and robotically guides an attached needle toward the lumen center, enabling the safe drawing of blood from peripheral forearm veins [ 35 , 36 ]. The device uses 940-nm near-infrared light to enhance the contrast of the subcutaneous peripheral vein, selecting the appropriate vein through real-time imaging and mapping of the 3D spatial coordinates of the subcutaneous vein [ 36 ]. Chen et al [ 36 ] evaluated the system’s cannulation accuracy on a vascular model through tracking, free-space localization, and use of a dark-skinned phlebotomy training model. Early versions of the device successfully demonstrated the feasibility of automatic venous access with a 100% success rate for venipuncture and placement of the needle in the desired position with high precision (mean positioning error 0.21 mm, SD 0.02 mm) [ 35 ]. This effectively reduced the risk of needlestick injury [ 37 ].
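The imaging step, selecting a vein from a near-infrared frame, can be illustrated crudely: veins absorb 940-nm light and so image dark, meaning a naive target picker simply looks for the darkest pixel. This toy sketch stands in for, and greatly simplifies, the device's real segmentation and 3D mapping pipeline:

```python
def candidate_vein_target(image):
    """Pick the darkest pixel of a grayscale NIR frame as the cannulation
    candidate (veins absorb 940-nm light, so they appear dark).

    image: 2D list of intensities in [0, 255]. Returns (row, col).
    A deliberately naive stand-in for real vessel segmentation.
    """
    best, best_rc = 256, None  # sentinel above any valid intensity
    for r, row in enumerate(image):
        for c, val in enumerate(row):
            if val < best:
                best, best_rc = val, (r, c)
    return best_rc
```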

Subsequently, the team developed an improved instrument based on a 9-df image-guided venipuncture robot, which increased radial rotation in response to rolling vein deformation and allowed for real-time adjustments to the pose and orientation of the needle [ 38 ]. Through robust optimization of near-infrared imaging coupled with image analysis and a robotic control system, this venipuncture robot realizes the automation of venipuncture, minimizing the likelihood of needlestick injuries and related bloodborne infections. This not only liberates nursing staff from the repetitive task of venipuncture but also protects the safety of practitioners [ 36 ]. In addition, this device may, in the future, be integrated with diagnostics, facilitating automated blood draws and rapid patient information synchronization. Moreover, the foundational imaging, computer vision, model recognition, and robotic technology developed for this device can be extended to arterial pathways, computer-assisted diagnostics, and miniature robotic surgery, potentially revolutionizing practice in ICUs and other departments.

Second, prompt and effective airway management can prove pivotal to the survival of patients who are critically ill. However, for patients with substantial secretions, sputum aspiration undertaken by nursing staff becomes a frequent necessity, significantly demanding the latter’s time and effort. Therefore, a sputum suction robot (Xi’an Jiaotong University, China) has been developed, which is a simple 6-df manipulator with a steering gear. Exhibiting stable movement, the sputum suction robot is proficient in smoothly performing the clamping, insertion, back-off protection, and removal of the suction tube, thereby achieving effective sputum aspiration [ 39 ]. Naturally, the current iteration of this sputum suction robot faces certain challenges, such as the complexity of the mechanical arm structure and limited mobility that may result in some sputum being left unaspirated. The simplification of the driving mechanism and the optimization of mobility present promising directions for future enhancements. Such improvements aim to realize high-precision, high-quality automated sputum suction within the ICU.

Third, kangaroo care is a nursing method aimed at premature infants. It serves as a nonpharmacological intervention for treating surgical pain in infants. By promoting skin-to-skin contact between the infant and parents, it reduces the adverse effects of repeated surgical pain on the long-term development of the nervous system [ 40 ]. However, a notable challenge within the neonatal ICU (NICU) setting is that parents cannot always be physically present to provide this essential care. This limitation calls for innovative solutions to ensure that premature infants still receive the benefits of kangaroo care, possibly through alternative methods or support systems that can simulate the presence and therapeutic effects of parental contact. Calmer (University of British Columbia, Vancouver, British Columbia, Canada) was developed to manage acute pain effectively for preterm infants in the NICU by simulating key pain-reducing components of human touch–based treatment. The human tactile pain treatment effect is achieved by placing the infant prone on the Calmer, which provides sensory intervention similar to parental skin contact while simulating the heartbeat and respiratory frequency of the parents with customized physiological signal processing software to mitigate the adverse neurodevelopmental effects of early pain exposure in preterm infants [ 41 ]. The effects of this robotic device on pain management during routine blood collection were studied in 10 infants. Calmer reduced physiological pain reactivity during and after painful blood collection procedures [ 34 , 40 ]. This approach not only aligns with the philosophy of nonpharmacological pain management but also potentially saves the NICU approximately US $380,000 per year in nursing time. However, it is crucial to clarify that Calmer is not intended to replace the role of parents or neonatal care. It merely provides an alternative solution when such avenues are not available.
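The stimulus Calmer delivers, a slow breathing motion with a smaller, faster heartbeat superimposed, can be modeled as two superimposed sinusoids. This is a toy sketch with invented frequencies and amplitudes, not Calmer's actual signal-processing software:

```python
import math

def calmer_motion(t, heart_bpm=70.0, breaths_per_min=15.0,
                  heart_amp=0.3, breath_amp=1.0):
    """Toy surface-motion signal at time t (seconds): a slow breathing
    swell plus a smaller, faster heartbeat ripple. All frequencies and
    amplitudes are illustrative only (arbitrary units)."""
    heart = heart_amp * math.sin(2 * math.pi * (heart_bpm / 60.0) * t)
    breath = breath_amp * math.sin(2 * math.pi * (breaths_per_min / 60.0) * t)
    return heart + breath
```

Sampling this function drives an actuator in the real device's role; the combined signal stays within the sum of the two amplitudes and repeats once both rates complete whole cycles.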

Fourth, older patients consistently represent a predominant segment of ICU admissions, manifesting unique characteristics and frequently presenting with cognitive impairments and mobility limitations. Physical assistive robots are strategically engineered to address the exigencies of daily living activities [ 92 ]. A robot named Paro has been extensively integrated within ICU environments and subsequent post-ICU discharge home care, offering socioemotional and psychological support to older patients in the ICU. With its lifelike appearance and behavior, Paro emulates genuine animal esthetics and movements, enabling it to perceive and respond to human vocalizations, touch, and actions. Recognized as one of the world’s premier nursing assistance robots, Paro has been commercialized and operationalized across diverse care settings, particularly within ICU care, in multiple countries [ 42 ]. While it is designed to provide cognitive stimulation, no definitive evidence has surfaced indicating immediate or long-term cognitive improvement after Paro intervention. However, we cannot overlook its positive psychological impact in several settings. Its widespread application and accessibility also serve as a blueprint for the development of companion robots [ 43 ].

Beyond the 4 nursing assistance robots previously mentioned, a diverse range of technologies is under development aimed at enhancing nursing practice. These technologies are designed to assist with various tasks, including feeding patients, transporting patients, bathing, providing emotional support, administering medication, and facilitating other daily activities. Notable examples include the Care-O-bot, a compact, highly integrated service robot that primarily functions as a household assistant [ 44 ]. Another example is “RI-MAN,” a soft-bodied robot designed to safely lift adults [ 45 ]. While these robots undertake certain nursing responsibilities and are potentially beneficial to the general population, they were not included in the scope of this review. This decision reflects our focus on specific categories of medical robot modules directly applicable to critical care and ICU settings, thereby delineating a clear boundary for our review’s content and objectives.

The extensive research by Locsin [ 93 ] underscores a future where the integration of nursing with AI technology is not just a possibility but an inevitability. This projection implies that, as technology evolves, nurses will need to become proficient in advanced technological tools, ensuring that their participation in clinical practices is grounded in informed consent. In the nursing practices of the future, technology will serve to augment human nursing activities. This includes leveraging technology for predictive interventions to enhance nursing efficiency. However, it is critical to note that human cognition and emotional intelligence will remain at the heart of technological care. This integration suggests a model of care where technology and human elements complement each other, ensuring that patient care remains personalized, empathetic, and efficient, thereby reflecting the intrinsic values of nursing while embracing the advancements of AI technology.

Thus, the future of nursing practice is envisioned as a symbiotic relationship between human nurses and medical robots. This partnership aims not merely at task completion but also at leveraging sophisticated technology to deepen patient understanding and enhance care, thereby elevating the standard of nursing services provided. Human nurses are poised to maintain their pivotal role in health care, with person-centered care continuing to serve as the foundational principle of nursing. Through this fusion of technology and human touch, nursing will evolve, ensuring that care remains empathetic, responsive, and fundamentally human at its core.

Rehabilitation Assistance Robots

Patients who are critically ill and confined to bed rest for extended periods show a high incidence of ICU-acquired weakness, which significantly influences their prognoses [ 94 ]. Rehabilitation therapy, a potential solution to this issue, can notably enhance patient outcomes and quality of life, and numerous studies have attested to the pivotal role of early rehabilitation in both respects. Rehabilitation robots are devised to aid the restoration of impaired sensory, motor, and cognitive skills [ 95 ]. Their deployment can relieve physicians of strenuous training tasks, and the robot-derived data collected during training sessions can be analyzed to evaluate patient rehabilitation status. It has been established that intensive locomotor training can effect improvements in walking function for patients with movement disorders after stroke or spinal cord injury (SCI) [ 96 - 98 ]. Rehabilitation robots facilitate more extensive and intensive training than traditional therapeutic methods, and they typically engage and challenge patients interactively to meet training objectives. Although their application within the ICU is currently minimal, these robots hold promising potential for assisting early rehabilitation of patients in the ICU. Several commercially available and experimental-stage noncommercial rehabilitation robots have been reported in the literature; some cases are presented in the following paragraphs.

First, Lokomat (Hocoma, Inc) is a sophisticated robotic system designed for gait rehabilitation. It targets patients who exhibit locomotor anomalies induced by brain injury, spinal cord damage, neurological disorders, muscular injuries, and orthopedic diseases, facilitating improved mobility in patients who are neurologically compromised [ 46 , 47 ]. The Lokomat system comprises a structural frame affixed to a treadmill, featuring a load-bearing arrangement such as a gait orthosis. When attached to the patient, it modulates hip and knee joint movements to generate a predefined gait pattern [ 48 ]. Regular Lokomat-assisted training has been used effectively to rehabilitate patients who were comatose after cerebral hemorrhage. Following a 4-month therapeutic regimen, patients exhibited remarkable improvement in the severity of their comatose state (as per a Glasgow Coma Scale score of 4), an extension in walking duration from 15 to 32 minutes, and recuperation of eye and joint movement [ 49 ].

In a rigorous evaluation, Chillura et al [ 47 ] combined Lokomat training with 6 months of intensive conventional rehabilitation therapy. The outcome was a marked improvement in muscle strength (42/60), physical and mental independence (80/126), and 6-minute walk distance (47 m) in patients in the ICU with acquired weakness. A separate prospective study corroborates the potential of Lokomat in ameliorating patient rehabilitation after stroke [ 50 ]. A randomized single-blind, parallel-group clinical trial (40 participants per group) showed that training with the Lokomat system for 3 to 6 months after injury can improve the walking ability of patients with incomplete SCI, manifested in increased walking endurance and enhanced lower-limb strength [ 51 ]. Furthermore, 6 additional randomized controlled trials using the Lokomat system corroborated these findings [ 52 - 57 ], underscoring the consistency and reliability of the Lokomat as an effective rehabilitation tool for improving mobility and strength in individuals with incomplete SCI. This body of evidence strongly supports the Lokomat system’s role in advancing recovery for patients with SCI, offering them a viable pathway to regain mobility and improve their quality of life.

Currently, advancements in robot-assisted lower-limb rehabilitation have led to the development of 3 main types of devices: exoskeletons, end effectors, and portable powered robotic exoskeletons. End-effector devices interact directly with the patient’s lower limbs, applying force, assisting movement, or guiding patients through specific movement patterns, exemplified by the “G-EO-Systems” [ 99 ] and “Haptic Walker” [ 100 ]. Exoskeleton-type devices, exemplified by “LOPES” from the University of Twente in the Netherlands [ 101 ], encase the patient’s legs and provide support and movement assistance directly aligned with the limb’s natural biomechanics. Among these, Lokomat stands out because of its superior customization capabilities, safety, comfort, and integration of virtual reality (VR). It also offers features such as participation detection and motivational elements to engage users more effectively in their rehabilitation. With 651 institutions worldwide adopting Lokomat, its widespread use underscores its significant advantages and effectiveness in facilitating the rehabilitation of patients with lower-limb impairments [ 58 ].

Such findings hint at the broad applicability of Lokomat in ICU settings, enabling both early and post-ICU rehabilitation to be personalized and intelligent. As AI and machine learning continue to advance, the Lokomat system’s ability to analyze patients’ gait data and physiological indicators to offer precise rehabilitation strategies and real-time feedback is becoming a reality. Leveraging VR technology may make the rehabilitation training environment for patients in the ICU more engaging and immersive, fostering increased patient participation and positivity during the recuperative process. Furthermore, integration with biometric sensors and remote monitoring technologies could enable real-time tracking and remote guidance of patients’ rehabilitation progress, heralding the future direction of Lokomat’s potential applications.

Second, aside from lower-limb or gait assistance rehabilitation robots, research into upper-limb rehabilitation robotics remains an active field of investigation. ArmeoSpring (Hocoma, Inc) is a rehabilitative exoskeleton that stabilizes the arm via a fixed frame. It is a passive exoskeleton that uses adjustable springs to deliver gravitational compensation for the patient, thus allowing the individual undergoing rehabilitation to concentrate solely on executing requisite tasks. Despite its passive nature, the ArmeoSpring exoskeleton is laden with sensors, enabling its application as an evaluation instrument for assessing the capacity and range of the user’s arm [ 59 ]. These sensors also facilitate interactive training and integration with VR, enabling patients to simulate task-oriented motor exercises within a virtual learning environment on a computer screen, providing auditory and visual performance feedback during and after interaction.

A 5-year randomized controlled trial within a subacute stroke program evaluated 215 patients with stroke who had moderate to severe arm impairment and were undergoing rehabilitation therapy [ 60 ]. There was no substantial difference in functional upper-extremity improvement with ArmeoSpring robotic intervention, although sensorimotor scores did demonstrate enhancement (13.32 vs 11.78). This equivocal result could be ascribed to a lack of sequencing of early testing interventions [ 61 ]. Conversely, another clinical trial involving a patient with mild to moderate hemiparesis demonstrated significant improvement in upper-extremity arm motion after 4 months of ArmeoSpring treatment [ 62 ]. It remains uncertain whether robotic-assisted rehabilitation definitively surpasses conventional physical therapy; rather, it appears to offer advantages as an adjunct to traditional treatment. In addition, in contrast to conventional physical therapy, careful patient selection is essential for robotic-assisted interventions: factors such as the degree of functional impairment, age, disease duration, and cognitive level play significant roles in this selection process. A study evaluating upper-limb movement parameters in patients after a stroke using the ArmeoSpring demonstrated its effectiveness in reliably and sensitively assessing motor impairments and the influence of therapeutic interventions on the motor learning process, highlighting the device’s potential as a valuable assessment tool for quantifying sensorimotor disorders in the upper limb [ 59 ].

Another end-effector robot, the MIT-MANUS (Massachusetts Institute of Technology, United States) [ 63 ], trains the upper limb by applying force at a single point on the patient’s arm. However, its linkage mechanism gives it a long mechanical arm and low torque output capacity, making high-load resistance training difficult, and it occupies a large footprint. Like the ArmeoSpring, it cannot perform complex movements of the upper-limb joints. Therefore, we did not include the MIT-MANUS in this review.

Third, patients in the ICU not only grapple with their primary illness but also frequently experience newly acquired long-term physical, psychological, and cognitive impairments, collectively referred to as post–intensive care syndrome [ 102 ]. Given the inconsistency in results from various screening tools, there is an urgent need for an objective, comprehensive, simplified, and unified assessment tool. The Kinesiological Instrument for Normal and Altered Reaching Movements (KINARM; Kingston, Ontario, Canada) is a robotic research tool specifically designed to execute quantitative neurological evaluations of sensorimotor, proprioceptive, and cognitive brain functions. It comprises a wheelchair and an upper-extremity exoskeleton tailored to patients based on their physical specifications. The KINARM permits researchers to gauge the coordination of limbs across multiple joints while also precisely measuring the joint-specific force exerted by the patient during task execution. The precision of this tool eliminates the subjectivity typically intrinsic to physiotherapeutic assessments of neurological status, such as muscle tone, spasticity, proprioception, and others [ 64 ].

A total of 104 patients in the ICU underwent sensorimotor and neurocognitive assessments using the KINARM 3 and 12 months after discharge. The team also performed a series of KINARM evaluations 3 and 12 months after discharge in stroke survivors and in patients who were critically ill and receiving acute renal replacement therapy; through KINARM scoring, a 0.3 correlation (90% power in 89 patients) was obtained between regional cerebral oxygen saturation (a surrogate marker of cerebral autonomic regulation) and delirium in patients who were critically ill [ 65 ]. The tool has also been used to assess the correlation between brain tissue oxygenation (a surrogate marker of brain perfusion) during the acute phase of critical illness (ie, the first 24 hours) and long-term neurological dysfunction [ 66 , 103 ], as well as to evaluate sensorimotor deficits in patients with stroke and traumatic brain injury [ 67 , 104 ]. The KINARM provides objective and quantifiable data on sensorimotor and neurocognitive function in ICU survivors. As a diagnostic and assessment tool, it aids rehabilitation and supports ICU survivors in regaining autonomy and independence in their daily lives. To address its limitations, efforts must continue to enhance its mobility and portability, broaden its applicability and scope, and reduce its cost and operational complexity.

Compared to traditional physiotherapy, robot-assisted rehabilitation based on AI and VR offers patients more intensive, systematic, repetitive, and task-oriented rehabilitation training, which plays a crucial role in promoting the process of functional recovery. Although many studies have shown that robot-assisted rehabilitation can effectively enhance the rehabilitation effect, a review of the clinical application of stroke rehabilitation points out that robot-assisted therapy has not shown obvious advantages in improving the motor function of patients with stroke [ 68 ]. Compared with traditional training or stand-alone training, its effect on the rehabilitation of patients with chronic stroke is still questionable. Similarly, the effectiveness of using exoskeleton devices for upper-limb motor function training also lacks sufficient evidence [ 105 ]. Therefore, at the current stage, robot-assisted rehabilitation therapy should be considered as a supplement to traditional physiotherapy, not a replacement. There is also no clear evidence to show that robot gait training can outperform traditional physical therapy when applied alone to patients with chronic stroke [ 106 ]. On the basis of the existing evidence, we can conclude that robot-assisted rehabilitation therapy can improve the motor function of patients needing rehabilitation and serve as an additional treatment intervention in combination with traditional rehabilitation therapy. However, with the further development of AI and machine learning technology in the future, we expect robot-assisted rehabilitation therapy to have greater development potential.

On the other hand, we must acknowledge that the current rehabilitative assistance robots face 3 core challenges: energy endurance, comfort assurance, and cost control. First, we need to change the existing endurance mode and adopt more effective energy supply methods. Second, we need to solve the comfort issues that may arise during the use of robots, such as blood circulation problems and muscle deformation that may be caused by wearing methods. Finally, we need to focus on controlling costs so that all patients who need rehabilitation can afford it.

Telepresence Robots

Telepresence represents a potential avenue for enhancing information accessibility for providers, encompassing aspects such as patients’ visual and auditory feedback, bedside care, and vital sign data facilitated by remote monitoring or telechecking [ 107 ]. Given that most patients in the ICU are susceptible to unpredictable conditions, there is an acute need for swift identification and prompt response during emergent situations. The significance of telepresence robots lies in their ability to deliver expert health care services over distances, effectively mitigating the need for colocation of physicians and patients. This approach greatly augments the accessibility of health care services for patients in remote areas. Moreover, it potentially eradicates the likelihood of infectious disease transmission between patients and health care professionals [ 108 ]. Using AI and human-machine interaction, these telepresence robots supplement diagnostic and therapeutic processes via medical professionals’ expertise, thereby enhancing the exchange of visual and electronic information between the patient and health care staff [ 109 ]. To illustrate this, we provide the following example.

InTouch Health Remote Presence-7 (RP-7; InTouch Health Systems) is a real-time audiovisual robotic telepresence system that provides communication among patients, hospital staff, and remote physicians [ 33 , 69 , 110 ]. Remote assessors used the RP-7 robot end point to conduct their clinical coma evaluations [ 69 ]. Compared with the total scores on the Glasgow Coma Scale or Full Outline of Unresponsiveness of the remote physician evaluators, the RP-7 robotic system had a similar score (difference in scores of 0.25 and 0.40, respectively), and it can serve as a reliable scoring system to help evaluate patients in a coma [ 69 ]. In an ICU setting, the RP-7 assisted in the assessment of increased efficiency, care coordination, and throughput, with a decrease in patient ICU stay (–0.8 days), an increase in hospital discharges (+11%), and a significant decrease in the number of unexpected events (–1.2 days) [ 70 , 109 ]. Regarding the team cooperation ability (attitude, behavior, and cognition) of clinicians, the use of the RP-7 maintained the cooperation, trust, communication, and psychological safety of the team [ 107 ]. However, the RP-7 does not enhance collaboration between nurses and physicians in patient care decisions as compared to traditional telephone night checks [ 111 ].

A year later, the InTouch Vita, developed by InTouch Health and iRobot, was found to improve the independence of remote clinicians in managing patient care [ 71 , 112 , 113 ]. The Vita has an improved navigation system with an autopilot feature that enables remote service providers to control it directly or automatically direct it to a predetermined location for improved efficiency [ 113 ]. In addition, the product provides real-time clinical access to patient data and has been cleared by the Food and Drug Administration for active patient monitoring that may be needed for immediate clinical action [ 72 ]. The Remote Presence Virtual + Independent Telemedicine Assistant (RP-VITA) also has an iPad interface that allows operators to browse quickly and easily [ 114 ]. Unlike the RP-7, which must be driven with an active joystick, the RP-VITA only requires the operator to log into the system from a mobile phone and issue verbal commands to move the robot to a designated location to complete a task [ 71 ]. To enable the robot to accurately follow a target person while maintaining a safe distance and speed, Long et al [ 73 ] used an improved Gaussian filter algorithm to estimate and correct the centroid of the human body in real time, effectively improving the stability and safety of human tracking.
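The improved Gaussian filter of Long et al [ 73 ] is only summarized above, not specified. As a minimal sketch of the underlying idea (recursively fusing noisy centroid measurements under Gaussian noise assumptions), a scalar Kalman-style smoother applied per axis might look as follows; the variance values are illustrative assumptions, not parameters from the cited work.

```python
def smooth_centroid(measurements, process_var=0.01, meas_var=0.04):
    """Per-axis recursive Gaussian (Kalman-style) smoothing of a noisy
    2-D body-centroid track. Variances are illustrative assumptions."""
    est, var, track = None, None, []
    for z in measurements:
        if est is None:                      # initialize from first measurement
            est, var = list(z), [meas_var, meas_var]
        else:
            for i in range(2):
                var[i] += process_var                # predict: uncertainty grows
                gain = var[i] / (var[i] + meas_var)  # Kalman gain
                est[i] += gain * (z[i] - est[i])     # blend in new measurement
                var[i] *= 1 - gain                   # updated uncertainty
        track.append(tuple(est))
    return track

# Noisy samples scattered around a true centroid at (1.0, 2.0)
measurements = [(1.1, 2.1), (0.9, 1.9)] * 10
track = smooth_centroid(measurements)
```

The smoothed track settles near the true centroid while damping the measurement jitter, which is the property a follower robot needs to keep its distance and speed stable.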

As a remote, virtual, and independent telemedicine assistant, the RP-VITA enables physicians to directly interact with patients from anywhere in the world. This technological advancement effectively transcends the traditional barriers of physical and biological constraints commonly encountered in ICU settings, where the immediacy of medical services is crucial yet often challenging to maintain. With the RP-VITA, physicians no longer need to undertake urgent commutes to the ward; instead, they can simply access the robot platform from the comfort of their home, offering a solution that is not only more expedient but also significantly more convenient [ 74 ]. This achieves truly meaningful health care services, delivering the right expertise to the right place at the right time to do the right thing at the right price. The RP-VITA exemplifies the future of health care delivery, emphasizing efficiency, accessibility, and the strategic allocation of medical expertise.

Second, Stevie has a stethoscope port and a high-definition pan-tilt-zoom camera, which can relay information during an examination of a patient and help physicians identify illnesses and diseases in the ICU [ 75 ]. It was developed by a research team from Trinity College, Dublin, Ireland, and is currently in use, in its second design iteration, at Steve Biko Academic Hospital, South Africa [ 76 ]. It was designed to be neutral so as to avoid perceptions of gender, race, and age [ 115 ]. The upper body of the robot consists of a humanoid head (digital display), trunk, and 2 short arms [ 116 ]; a humanoid form makes robots more acceptable [ 117 ]. The control system allows the user to operate the robot remotely, including controlling its voice and media volume as well as its motion. In addition, Stevie’s humanoid facial expressions convey clear emotions [ 76 ], and its humanlike limbs can form intuitive gestures, emphasize emotional states, and direct attention, conveying more information than facial expressions alone [ 116 ]. To provide static stability, the robot is equipped with omnidirectional wheels, allowing it to maneuver smoothly in any direction without shifting its base; this feature is essential for navigating complex environments and enhancing operational efficiency. Stevie has been called the “most popular baby” of the ICU team [ 115 ].

Third, MGI Ultrasound System-Remote 3 (MGIUS-R3; MGI Tech Co, Ltd), a robot-assisted teleultrasound diagnostic system, has considerable application value in the ICU [ 77 , 118 ]. It combines a robotic arm, an ultrasound imaging system, and audiovisual communication for remote manipulation, allowing physicians to manipulate the robotic arm and adjust parameters outside the ICU room for remote ultrasound workups via the fifth-generation (5G) network technology for real-time transmission of audio, video, and ultrasound images [ 78 ].

It has been used as a long-range ultrasound device to fight the COVID-19 outbreak in many places in China, such as Wuhan [ 79 ]. High cloud data transfer rates (up to 10 Gbps) with a management mode enable high-definition image capture and high-quality information transfer for large data set sharing. At distances of 700 and even 1479 km (between the isolation ward of Zhejiang Provincial People’s Hospital and Central South Hospital of Wuhan University in Hubei Province, and between Wuhan city in Hubei Province and Sanya city in Hainan Province), it successfully conducted remote ultrasound examinations of the lungs and other regions, with quality meeting the requirements for clinical diagnosis [ 80 , 119 ]. A double-blind diagnostic trial of COVID-19 in 22 cases showed that the diagnostic accuracy of the MGIUS-R3 ultrasound for positive lesions was good (93%), suggesting that it could replace the traditional scanning method to some extent (difference: P =.09) [ 77 ]. Another study successfully performed ultrasound examinations of the liver, gallbladder, pancreas, spleen, and kidney in 32 patients, and the high image quality (average score of 4.73) met the requirements for remote ultrasound diagnosis [ 78 ]. Overall, the MGIUS-R3 is noninvasive and repeatable, reduces the risk of infection for patients and physicians, increases safety, and is highly feasible in the ICU. However, all remote ultrasound procedures and communications for the robot are based on 5G networks, which require stable network support throughout the procedure [ 119 , 120 ].

Through the network, remote ultrasound robots transmit ultrasound images of challenging cases from remote areas to tertiary hospitals, where consulting experts diagnose and analyze them, providing decision feedback. This provides an effective solution for situations with limited medical resources, lack of expertise, and high risk of infection.

Fourth, the intrinsic characteristics of the ICU, despite its provision of the highest level of medical care and reliance on advanced organ-support therapies, inherently limit real-time visits and communication between patients and their family members. This restriction not only affects the emotional well-being of patients, who are deprived of verbal encouragement and emotional support from loved ones, but also poses a challenge to recovery within the high-stress environment of the ICU. To meet the safety and emotional needs of patients and their families, and to optimize resources, visiting robots have emerged. In 2021, the first 5G+ medical robot+VR visiting system was officially launched in the stroke ICU ward of West China Hospital of Sichuan University. The system enables remote visits from designated hospital locations equipped with matching VR glasses, allowing family members to interact with patients in the ICU in real time through 2-way communication and providing an immersive visitation experience at the bedside [ 121 ]. Family members need only wear the VR glasses at the designated place in the hospital and operate the robot remotely through a computer, iPad, or mobile phone; after receiving the instruction, the robot automatically travels to the bed of the patient to be visited. 5G networks, with high-speed, low-latency transmission (speeds of 10 to 30 Gbps), stream back real-time 8K (resolutions of up to 7680 × 4320 pixels) ultra-high-definition full-motion video, enabling families to “physically” visit a patient’s bedside [ 81 , 121 ].
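As a back-of-the-envelope check on these figures (assuming 24-bit color and 30 frames/s, neither of which is stated in the source), uncompressed 8K video alone would consume roughly 24 Gbps, near the top of the quoted 5G range, which is why such systems rely on video compression in practice:

```python
# Assumed values (not from the source): 24-bit color, 30 frames/s
width, height = 7680, 4320        # 8K resolution quoted in the text
bits_per_pixel, fps = 24, 30

# Raw (uncompressed) bitrate in gigabits per second
raw_gbps = width * height * bits_per_pixel * fps / 1e9
print(f"uncompressed 8K at 30 fps ~ {raw_gbps:.1f} Gbps")
```

A codec such as HEVC typically reduces this by roughly two orders of magnitude, bringing the stream comfortably within 5G capacity.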

The introduction of visiting robots represents a significant step forward in addressing a previously underappreciated aspect of health care—the impact of patient relationships, particularly with family members, on the process of disease recovery. By facilitating connections that were once hindered due to logistical, health, or institutional barriers, these robots enhance the humanization of health care services. They embody an innovative shift in the delivery models of health care services, ensuring that emotional support and the therapeutic benefits of family presence are not overlooked in patient care. This innovation underscores the importance of integrating technological advancements with the core values of empathy, care, and support, thereby enriching the patient experience.

Logistics and Disinfection Robots

The number of medical interventions in ICUs is larger than in general wards, and the interventions are more invasive. In addition, the physiological condition of patients who are critically ill is often fragile, making patients in the ICU particularly vulnerable to iatrogenic injury [ 122 , 123 ]. At the same time, ICU health care workers face a high risk of infection [ 124 ]. Logistics and disinfection robots help prevent the spread of viruses, maintain clean areas for clinicians and patients, and minimize the risk of infection for medical staff. Such robots also proved valuable in hospital settings during the COVID-19 pandemic, performing these same tasks and supporting patient management [ 125 ]. The common forms of logistics and disinfection robots are transport robots and infection control robots, which can alleviate ICU staff shortages and take over heavy, tedious work. A brief description follows.

Transport Robot HelpMate (HelpMate Robotics Inc)

The transportation of goods and patients is an indispensable part of intensive care. However, the large amount of reciprocating work consumes much of nurses’ physical strength, long-term patient handling can cause serious lumbar muscle injury in nurses, and manual handling may cause secondary injuries to patients. HelpMate is a mature solution that provides autonomous transportation of materials and supplies: users dispatch tasks through a console interface, and the robot operates autonomously using unsupervised navigation technologies such as proximity sensors for obstacle avoidance and map-based path planning [ 48 ]. In the ICU, the use of automated transport vehicles increases efficiency and avoids potential cross-infection, which was especially valuable during COVID-19 lockdowns [ 82 ].
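HelpMate’s navigation stack is proprietary. Purely as a toy illustration of map-based path planning with obstacle avoidance, a breadth-first search over a prestored occupancy grid (a deliberately simplified stand-in for the real planner) could look like this:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first shortest path on a prestored occupancy grid
    (1 = obstacle, 0 = free). Returns the list of cells from start
    to goal, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}            # visited set + backpointers
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:          # reconstruct path by walking backpointers
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

# A 3x3 corridor with a wall forcing a detour around row 1
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = shortest_path(grid, (0, 0), (2, 0))
```

A real hospital robot would plan on a metric map, replan around dynamic obstacles from its proximity sensors, and smooth the path; the grid search only conveys the core idea.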

As early as the 1980s, the object transfer robot HelpMate was used to carry medical supplies, meals, and experimental samples, among other things. HelpMate mainly navigates based on prestored maps and has a certain obstacle avoidance ability [ 83 ]. In 2003, the University of Maryland Medical Center began a pilot program to determine the logistical capabilities and functional utility of the automated pharmacy system II Robot-Rx (McKesson) robotic technology in the delivery of medications from satellite pharmacies to ICU patient care units [ 84 ].

Because it is often unsuitable to change a patient’s posture during transfer, Osaka General Medical Center in Japan trialed the Coupling-Parallel Adaption Merged robot, a bilateral transfer bed that mainly uses a conveyor belt to move patients between bed and stretcher. More than half of the nurses were willing to use the robot for patient transfer after trying it, and nearly half of the patients showed no discomfort during transfer [ 85 ].

Infection Control Robots

Infection control robots could make ICU wards safer and cleaner than ever before [ 126 , 127 ]. They are a major step up from traditional human cleaning methods, which take more time, are less effective, and often miss crevices that can harbor dangerous pathogens. This is especially important for ICUs and for clean rooms housing patients who are immunocompromised, and automated cleaning may soon become standard practice throughout the health care system. Existing disinfection robots can be divided mainly into 2 groups: UV robots and hydrogen peroxide vapor robots.

The Xenex robots use pulsed xenon to create intense bursts of broad-spectrum UV light that can cut bacterial contamination by a factor of 20 and kill 95% of deadly pathogens. More than 100 hospitals now use Xenex robots [ 86 ].

An “EPS” logistics disinfection robot (Ipsen Smart Health Tech [Shenzhen] Co, Ltd), which has both transport and disinfection functions, has been officially on duty in the ICU of Central South Hospital (Wuhan, China), undertaking the drug distribution work of nurses from the station to the ICU wards of patients with severe COVID-19 [ 87 ]. At the same time, it can customize the disinfection time and route for high-frequency areas of physician-patient activity and can carry out automatic UV disinfection.

The hydrogen peroxide vapor disinfecting robot (jointly developed by Shanghai Jiao Tong University and Lingzhi Technology, China) combines a hydrogen peroxide device with a robot [ 88 ]. The disinfecting system inside the robot generates disinfecting gas, and the robot can navigate and move autonomously in an unmanned environment. This disinfection robot has been used effectively in the prevention and control of the COVID-19 pandemic [ 89 ]. It is mainly deployed in ICUs, negative pressure isolation wards, infectious wards, and other closed spaces requiring sterilization.

Moreover, the literature has reported a novel robot system capable of real-time air pathogen monitoring in ICUs. The system comprises an automatic guided vehicle, an air sample collector, and a pathogen detection system. By autonomously patrolling and collecting air samples, the robot uses biosensor technology to perform real-time detection of airborne pathogens. This detection process includes lysis of pathogen particles, amplification of target sequences, and sensitive detection via the Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) associate system. This robot system allows for real-time monitoring of airborne pathogens within the ICU, aiding in the prevention of hospital-acquired infections and reducing the burden of ICU management. Endowed with a high degree of autonomy and real-time responsiveness, this system provides a new avenue for infection control within ICUs [ 90 ].

The ICU is a highly specific and complex area for monitoring patients who are critically ill, requiring not only 24-hour continuous care but also more intensive, timely, and coordinated interventions. Although the introduction of AI robots into ICUs is still at an initial stage, their autonomy, ease of training, and strong adaptability enhance their performance and provide substantial assistance to medical staff. In particular, the COVID-19 outbreak increased the demand for AI robots, which do not tire or become infected and may therefore reduce the fatigue of ICU medical staff, reduce medical errors, and improve patient safety. However, several needed robots have yet to appear, for example, monitoring robots for single-organ life-support devices, comprehensive cardiopulmonary-monitoring robots for patients with multiple organ failure, and robots flexible enough to handle the various unexpected tasks arising during patient care. Therefore, the demand for and development potential of AI robots in ICUs are huge.

Challenges and Potential Solutions Related to the Use of AI Robots in ICUs

Despite significant advancements and widespread applications of AI robots across all processes within the ICU, they face a host of challenges due to the complexity and uncertainty of the real world, algorithmic limitations, and ethical and moral considerations. These challenges encompass safety, dignity, privacy, and questions of liability. In addition, the implementation of advanced technologies requires considerable investment, making cost-benefit considerations indispensable. Our objective is to develop AI robots for ICU application that uphold human values, demonstrate ethically sound behavior, and draw robust conclusions [ 128 ]. It is imperative to accurately recognize these extant issues and propose effective solutions.

The “Asilomar AI Principles,” signed at the 2017 conference in Asilomar, California, United States, call on AI professionals worldwide to adhere to them to safeguard the interests and safety of humanity in the future [ 129 ]. The principles emphasize ethical standards and values, including privacy, security, fairness, and transparency. Although they provide only an ethical framework and guiding principles for AI development rather than specific policies, regulations, or standards, they still serve as reference guidelines for the development of the AI field.

Security Issues

With the continuous evolution of AI robotic technology, its capabilities and functions are constantly being enhanced. However, there is a lack of clear standards defining what safety and accuracy mean and how to evaluate specific programs. Narrow technical approaches are insufficient to ensure the safety of AI robots, which must be considered within the broader sociotechnical context in which they operate [ 130 ]. Therefore, systems with moral and social reasoning capabilities are becoming increasingly important. In some cases, human involvement can serve as a constraint on robot design, especially in decisions involving life and death. However, in the realm of high-speed decision-making, robots require built-in moral and social reasoning capabilities [ 131 ]. In addition, insufficient sample sizes, disease heterogeneity, and complex operations introduce biases into AI algorithms, which cannot guarantee the safety and effectiveness of robot treatments. These challenges should be evaluated alongside risks already familiar to humans, allowing us to set realistic expectations and foresee significant advancements in robot safety.

In an effort to confront the emerging safety challenges within the AI domain, various countries have embarked on proactive measures aimed at addressing these potential concerns. Initiatives such as the AI Foundation Model working group established by the United Kingdom in April 2023 [ 132 ] and the series of AI white papers updated annually by the Chinese Association for Artificial Intelligence are significant steps toward understanding and addressing these issues [ 133 ]. Moreover, the global AI Safety Summit held in November 2023 marked a pivotal moment, with 28 countries coming together to sign the “Bletchley Declaration.” This declaration represents a unified commitment to scrutinize the risks associated with the frontiers of AI technology, such as natural language processing, computer vision, and reinforcement learning, with a specific focus on the development of large language models by leading companies such as OpenAI, Meta, and Google [ 134 ]. These concerted efforts underline a global recognition of the complexities and challenges posed by AI, as well as a determined move toward collaborative solutions. By focusing on risk assessment, ethical standards, and safety protocols, these initiatives highlight an international resolve to navigate the advancements of AI technology in a manner that not only benefits human society but also safeguards against potential hazards.

Dignity Issues

Medicine has always been a humanistic science in which physicians are expected not only to adopt a scientific attitude toward patients but also to resonate emotionally with them, embodying empathetic care. Patients in the ICU typically exhibit both physical and psychological fragility, necessitating humanistic care and emotional support from medical staff. This cannot be substituted by a robot given its "mechanical," "preprogrammed," and, thus, "neutral" way of interacting with patients. Emotional recognition technology can be incorporated into AI robotic systems, providing corresponding emotional support by recognizing patient emotions, for instance, through voice, facial expressions, or body language. Communication between AI robots and patients is often unequal, leaning toward 1-way output from the robot. Before engaging in the discourse on equality between humans and AI robots, it is crucial to address a foundational question: should AI be classified as a tool or an agent? This distinction becomes especially pertinent in the context of conversational AI robots. When perceived merely as tools, there is a risk of undervaluing the anthropomorphic attributes and functionalities they embody. Conversely, viewing them as agents presents its own set of challenges as they inherently lack humanlike qualities such as empathy, intentionality, and the capacity to bear responsibility [ 135 ]. This debate is not new but remains central to the evolving conversation around AI's role in society. Recent technological developments suggest that, by integrating natural language processing and voice recognition technologies, robots can become more anthropomorphic and capable of responding to patients' language needs, alleviating feelings of loneliness and neglect in patients in the ICU. For example, the integration of the popular ChatGPT with AI robots may enhance their linguistic potential.
Intelligent interactive AI robots combining ChatGPT’s linguistic skills with the computer vision and tangible abilities of robots could revolutionize the way humans interact with technology [ 136 ]. They could be better at navigating the subtleties of human interaction, boasting superior natural language generation capabilities.

On the other hand, the narrow technical AI safety field lacks ideological and demographic diversity, leading to a lack of breadth and rigor in knowledge. Moreover, practitioners in this field are predominantly White and male, a makeup insufficient for the broad participation and shared humanistic concern needed for technological development, potentially leading to racial, gender, and other biases in technology application [ 137 , 138 ]. For instance, using Framingham Heart Study data to predict cardiovascular event risk in people of color may lead to both overestimation and underestimation of risk [ 139 ]. Similar racial biases may inadvertently be built into health care algorithms. Therefore, it is necessary to expand the space for broader participation, pursuing equality and common development at the source of the technology, to avoid defects in AI robot products [ 140 ].

Privacy Issues

To enhance the monitoring of patients who are critically ill, robots are often equipped with surveillance apparatuses to log pertinent data and wirelessly transmit information. Such actions may infringe upon the privacy rights of patients, jeopardizing their confidentiality. Nonetheless, these features also hold merit in ensuring patient safety [ 141 ]. As AI robots proliferate across various sectors, including health care, concerns over privacy safeguards have been a recurrent topic of critique. It is imperative that AI robots adhere to privacy regulations when handling medical data, such as the General Data Protection Regulation that is prevalent in Europe [ 142 ]. Encryption techniques should be used during data collection and storage, narrowing the scope of data acquisition and ensuring anonymization. Of paramount importance is the establishment of stringent access control mechanisms, ensuring that data are accessible only to authorized personnel. Concurrently, it is vital to instate ethical guidelines and standards focused on privacy protection, which will dictate the conduct of AI robots and their data-handling procedures. All these regulations must ensure that individuals retain a voice over the collection, storage, and use of their information [ 143 ].
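Two of the safeguards named above, keyed pseudonymization of identifiers and strict access control, can be sketched in a few lines. The policy table, role names, and secret handling below are illustrative assumptions, not a production design:

```python
import hashlib
import hmac

# Hypothetical facility-held secret key; in practice it would live in a
# secrets manager or hardware security module, never in source code.
PEPPER = b"replace-with-securely-stored-secret"

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256),
    so records can still be linked without exposing the identity."""
    return hmac.new(PEPPER, patient_id.encode(), hashlib.sha256).hexdigest()

# Minimal role-based access control: each data category is visible
# only to explicitly authorized roles.
ACCESS_POLICY = {
    "vital_signs": {"icu_physician", "icu_nurse"},
    "full_history": {"icu_physician"},
}

def can_access(role: str, data_category: str) -> bool:
    """Deny by default; allow only roles listed for the category."""
    return role in ACCESS_POLICY.get(data_category, set())

# A transmitted record carries the pseudonym, never the raw identifier.
record = {"patient": pseudonymize("MRN-001234"), "hr": 82}
```

In practice, the key would be rotated and held outside the robot, access decisions would be logged for audit, and data in transit would additionally be encrypted, in line with regulations such as the General Data Protection Regulation.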

Attribution of Liability Issues

The integration of AI into robotic systems has rendered questions of accountability more intricate. Endowing robots with autonomy and decision-making capabilities stands as a primary objective of AI integration into robotics. However, given the current trajectory of technological advancements, intelligent analytics still bear systemic decision-making risks. When malfunctions occur, attributing responsibility for the robot’s actions becomes contentious given that robots lack comprehension of reprimand, sanctions, accountability, or remorse [ 144 , 145 ]. Consequently, the liability arising from erroneous decisions made by robots can pose significant legal conundrums [ 141 ]. Accountability in computer science encompasses a multifaceted domain, probing the responsibility attribution across its creation, dissemination, and use phases [ 146 ]. Thus, devising a legal framework pertinent to AI robots is essential, delineating the responsibilities and obligations of robot manufacturers, operators, and users spanning every facet of robot design, production, operation, and use. Regulatory bodies also play a pivotal role as rigorous supervisory protocols and technical benchmarks are needed to scrutinize and accredit AI robot designs and operations. Penalties and sanctions become indispensable components of this framework. Introducing a robotic liability insurance system might also serve as an efficacious remedy to mitigate the damages and risks incurred by robots. Manufacturers, operators, and users could opt for such liability insurance, necessitating clear demarcations of responsibility. Meanwhile, both the United States and the European Union advocate for a focus on algorithm transparency and accountability, aiming to make their decision-making processes more transparent and comprehensible. This will facilitate a better assessment of responsibility and serve as a warning against outsourcing moral responsibility to algorithms [ 147 ]. 
This calls for collaboration between manufacturers and developers, with the National Institute for Health and Care Excellence in the United Kingdom requiring developers to program code in an environment of technological feasibility and respect for intellectual property rights, ensuring reproducibility and error checking [ 148 ].

Addressing the issue of robotic accountability necessitates a holistic perspective, incorporating technical, legal, ethical, and societal dimensions; fostering collaborative mechanisms with multistakeholder participation to collectively bear the decision-making risks and responsibilities; and ensuring the utmost protection of the interests and safety of robot users and beneficiaries.

The United Kingdom has introduced the Code of Conduct for Data-Driven Health and Care Technology, adopting a “Regulation as a Service” model. This innovative approach ensures that regulatory checks are embedded at various stages throughout the AI development cycle, aiming to uphold high standards of safety, efficacy, and ethics in the creation and implementation of AI technologies in health care [ 149 ]. In June 2021, the US Government Accountability Office released an AI accountability framework covering 4 aspects: governance (promoting accountability by establishing processes to manage, operate, and oversee implementation), data (ensuring quality, reliability, and representativeness of data sources and processing), performance (producing results consistent with program objectives), and monitoring (ensuring reliability and relevance over time). This accountability framework sets principles and directions for future legislation and policy making and also serves as a model for the advancement of accountability systems in other countries [ 150 ].

Cost-Benefit Issues

Economic viability remains a pivotal facet in the societal integration of any nascent technology, with the deployment of AI robots in ICUs presenting intricate cost-benefit deliberations. Being an avant-garde technology, the initial research and development expenditures for AI robots are considerable. Once successfully developed and incorporated into ICU settings, they are further subjected to financial burdens, encompassing but not confined to the costs of system enhancements and upgrades; resource use during staff training and acclimatization; and investments necessitated by data security, personal dignity, privacy safeguards, and responsibility allocation issues. However, when observed from a utility standpoint, the application of AI robots bears significant value, especially for populous nations such as China. In the face of scarce grassroots medical resources, AI robots serve as catalysts, facilitating the dispersion of health care provision to underserved and remote locales, meeting the medical needs of a vast populace—this aligns with the principal objective underpinning this technological advancement. In addition, the integration of AI robots can effectively bridge disparities in health care accessibility between high- and low-income nations, substantially augmenting the societal benefits of this technology [ 151 ]. Therefore, despite the steep initial investments and continual operational expenses, in the broader spectrum of health care service provision and societal equity realization, AI robots undoubtedly offer pronounced advantages and value.

Principal Findings

AI robots have firmly established their significance within intensive care, and their integration into the ICU regimen is continually deepening. This paper delineates 5 distinct application domains of AI robotic systems, experimental or commercial, within the ICU, addressing both technical impediments and prospective research avenues while proposing potential remedial strategies. The trajectory for AI robots within the ICU setting is promising. At this nascent phase of deployment, an extensive scope of endeavors remains to be pursued and myriad challenges to be surmounted. As pertinent technologies propagate, health care professionals should welcome such intelligent implementations with optimism, recognizing the present-day limits of AI apparatuses while synergistically combining human and system intellect to maximize the data analysis, prompting, and recommendation proficiencies of intelligent robots. Today's robotic entities coalesce seamlessly with AI: sophisticated robotic systems, characterized by safety and flexibility melded with augmented computational proficiencies, yield invaluable big data insights. Simultaneously, anthropomorphic designs provide patients in the ICU with a more comforting medical experience.

“There are plenty of areas in critical care where it would be extremely helpful to have efficacious, fair, and transparent AI systems,” notes Gary Weissman, assistant professor in pulmonary and critical care medicine at the University of Pennsylvania Perelman School of Medicine [ 152 ]. Similarly, Dr Brijesh Patel, an intensivist at the Royal Brompton Hospital in London, emphasizes that “intensive care is a specialty with special prospects for AI. After all, the ICU is a space where a large amount of data is routinely collected, making it an ideal place for deploying machine learning techniques” [ 14 ]. Dr Patel, who dedicates a considerable portion of his ward rounds to adjusting ventilator settings, points out that the continuous advancement in AI technology could automate such repetitive tasks.

However, there is a consensus among experts that AI is not poised to replace physicians entirely. Instead, it is seen as a tool to streamline certain tasks, enhancing efficiency where it is most needed. Aldo Faisal, a professor of AI and neuroscience at Imperial College London, emphasizes a balanced perspective on AI’s role within health care teams. He advocates for a realistic understanding of AI’s capabilities and limitations, suggesting that neither undue fear nor excessive reverence is helpful [ 153 ].

This paragraph presents a futuristic scenario representing the potential evolution and future blueprint of AI robotics in the ICU ward. Envision a scenario where a patient who is critically injured is admitted to the ICU and greeted by “IntelliGreet”—an intelligent reception robot that collates the patient’s fundamental data and medical history, seamlessly completing admission formalities. The therapeutic reins are taken over by “MediBot,” a state-of-the-art medical robot that executes an array of treatment procedures (ranging from drug administration and wound care to life support) predicated on the patient’s condition and physician’s directives. The patient’s nursing needs are catered to by “CareCompanion,” an omnipresent nursing robot that meticulously monitors vital signs, offering essential care services (such as cleaning, feeding, and movement) while concurrently assuming the responsibility of reporting to physicians. As the patient recuperates under their meticulous care and nears discharge, “DischargeDuty” steps in, formulating discharge plans and subsequent treatment regimens based on physician prescriptions, facilitating communication with both the patient and their kin, and overseeing the discharge processes. Finally, the domestic care robot “HomeCareHelper” persists in its caregiving, administering medication reminders, monitoring patient health, and even offering rudimentary domestic aid for solitary individuals. All AI robots in this envisioned setting are interlinked via a cloud data platform, enabling real-time sharing of patient medical data, achieving cohesive and synergized medical service delivery. Reflecting upon the advancements in AI robotic technology over recent decades, it is undeniable that this vision is poised to materialize.

Limitations of This Review

This review acknowledges a number of limitations that could affect the interpretation and applicability of its findings. One significant concern is the potential for publication bias, a common issue in scientific literature where studies with negative results are less likely to be published. This could lead to an overrepresentation of positive findings in this review. In addition, despite efforts to mitigate bias by involving interdisciplinary experts and employing dual reviewers during the literature search and data collection phases, subjective biases could still influence the selection and interpretation of the studies.

Another challenge is the absence of a universally accepted classification system for ICU robotic systems. In response, our classification framework was developed based on expert opinions and existing literature, striving for as comprehensive and rational an approach as possible. Nevertheless, the potential for omissions exists given the rapidly evolving nature of technology and the diverse applications of robotics in critical care settings. These limitations highlight the need for ongoing research and critical evaluation of emerging technologies in health care, emphasizing the importance of transparency and methodological rigor in scientific reviews.

Conclusions

This scoping review comprehensively covered AI robots in the ICU, detailing the most widely used or newly developed robotic devices on the market. Robots in ICU wards are becoming valuable assistants to physicians and nurses. Although ethical and safety concerns remain unresolved in this field, these challenges are inevitable in the development of new technologies, and experts and developers are focusing on addressing them. Future research should focus on developing policies and regulations to prevent or resolve these issues, making AI robots an integral part of ICUs and other hospital wards.

Acknowledgments

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Authors' Contributions

HK was responsible for hypothesis generation. HK and ZQ were responsible for the conception of this study. Y Li, MW, LW, YC, Y Liu, YZ, RY, MY, SL, and ZS contributed substantially to all aspects of the work, including literature review and manuscript drafting. FZ revised the manuscript. All authors contributed to the manuscript and approved the submitted version.

Conflicts of Interest

None declared.

PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist.

Classification of intensive care unit robots.

  • Bobrow DG, Stefik MJ. Perspectives on artificial intelligence programming. Science. Feb 28, 1986;231(4741):951-957. [ CrossRef ] [ Medline ]
  • Haug CJ, Drazen JM. Artificial intelligence and machine learning in clinical medicine, 2023. N Engl J Med. Mar 30, 2023;388(13):1201-1208. [ CrossRef ] [ Medline ]
  • Georgopoulos AP, Schwartz AB, Kettner RE. Response: neuronal coding and robotics. Science. Jul 17, 1987;237(4812):301. [ CrossRef ] [ Medline ]
  • McKendrick M, Yang S, McLeod GA. The use of artificial intelligence and robotics in regional anaesthesia. Anaesthesia. Jan 2021;76 Suppl 1:171-181. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Javaid M, Haleem A, Vaishya R, Bahl S, Suman R, Vaish A. Industry 4.0 technologies and their applications in fighting COVID-19 pandemic. Diabetes Metab Syndr. 2020;14(4):419-422. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Yang GZ, J Nelson B, Murphy RR, Choset H, Christensen H, H Collins S, et al. Combating COVID-19-the role of robotics in managing public health and infectious diseases. Sci Robot. Mar 25, 2020;5(40):eabb5589. [ CrossRef ] [ Medline ]
  • Fong SJ, Dey N, Chaki J. Artificial Intelligence for Coronavirus Outbreak. Singapore, Singapore. Springer; 2021.
  • Howard J. Artificial intelligence: implications for the future of work. Am J Ind Med. Nov 2019;62(11):917-926. [ CrossRef ] [ Medline ]
  • Agarwal S, Punn NS, Sonbhadra SK, Tanveer M, Nagabhushan P, Pandian KK, et al. Unleashing the power of disruptive and emerging technologies amid COVID-19: a detailed review. arXiv. Preprint posted online on May 23, 2020.
  • Siciliano B, Khatib O. Robotics and the handbook. In: Siciliano B, Khatib O, editors. Springer Handbook of Robotics. Cham, Switzerland. Springer; 2016:1-6.
  • Marshall JC, Bosco L, Adhikari NK, Connolly B, Diaz JV, Dorman T, et al. What is an intensive care unit? A report of the task force of the World Federation of Societies of Intensive and Critical Care Medicine. J Crit Care. Feb 2017;37:270-276. [ CrossRef ] [ Medline ]
  • Clancy TR. Artificial intelligence and nursing: the future is now. J Nurs Adm. Mar 2020;50(3):125-127. [ CrossRef ] [ Medline ]
  • Hanson CWIII, Marshall BE. Artificial intelligence applications in the intensive care unit. Crit Care Med. Feb 2001;29(2):427-435. [ CrossRef ] [ Medline ]
  • Burki TK. Artificial intelligence holds promise in the ICU. Lancet Respir Med. Aug 2021;9(8):826-828. [ CrossRef ]
  • Gutierrez G. Artificial intelligence in the intensive care unit. Crit Care. Mar 24, 2020;24(1):101. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. Feb 2005;8(1):19-32. [ CrossRef ]
  • Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. Oct 02, 2018;169(7):467-473. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Vagvolgyi BP, Khrenov M, Cope J, Deguet A, Kazanzides P, Manzoor S, et al. Telerobotic operation of intensive care unit ventilators. Front Robot AI. Jun 24, 2021;8:612964. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ehmke C, Kopp P, Karakikes M, Jahn R, Huang Y, Geiger J. A remote controller accessory for ventilators. GitHub. 2021. URL: https://github.com/claasehmke/Remote-Controller-Accessory-for-Ventilators [accessed 2024-04-29]
  • Song C, Yang G, Park S, Jang N, Jeon S, Oh SR, et al. On the design of integrated tele-monitoring/ operation system for therapeutic devices in isolation intensive care unit. IEEE Robot Autom Lett. Oct 2022;7(4):8705-8712. [ CrossRef ]
  • Hemmerling TM, Arbeid E, Wehbe M, Cyr S, Taddei R, Zaouter C. Evaluation of a novel closed-loop total intravenous anaesthesia drug delivery system: a randomized controlled trial. Br J Anaesth. Jun 2013;110(6):1031-1039. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hemmerling TM. Automated anesthesia. Curr Opin Anaesthesiol. Dec 2009;22(6):757-763. [ CrossRef ] [ Medline ]
  • Wehbe M, Arbeid E, Cyr S, Mathieu PA, Taddei R, Morse J, et al. A technical description of a novel pharmacological anesthesia robot. J Clin Monit Comput. Feb 2014;28(1):27-34. [ CrossRef ] [ Medline ]
  • Naidoo S. Understanding artificial intelligence and machine learning in anaesthesia. School of Clinical Medicine, Discipline of Anaesthesiology and Critical Care. 2021. URL: https://anaesthetics.ukzn.ac.za/wp-content/uploads/2021/01/2021-15-Jan-Artificial-Intelligence-in-Anaesthesia-S-Naidoo.pdf [accessed 2024-04-29]
  • Zaouter C, Hemmerling TM, Lanchon R, Valoti E, Remy A, Leuillet S, et al. The feasibility of a completely automated total IV anesthesia drug delivery system for cardiac surgery. Anesth Analg. Oct 2016;123(4):885-893. [ CrossRef ] [ Medline ]
  • Hemmerling TM, Charabati S, Zaouter C, Minardi C, Mathieu PA. A randomized controlled trial demonstrates that a novel closed-loop propofol system performs better hypnosis control than manual administration. Can J Anaesth. Aug 2010;57(8):725-735. [ CrossRef ] [ Medline ]
  • Hemmerling TM, Taddei R, Wehbe M, Morse J, Cyr S, Zaouter C. Robotic anesthesia - a vision for the future of anesthesia. Transl Med UniSa. Sep 2011;1:1-20. [ FREE Full text ] [ Medline ]
  • Tighe PJ, Badiyan SJ, Luria I, Lampotang S, Parekattil S. Robot-assisted airway support: a simulated case. Anesth Analg. Oct 2010;111(4):929-931. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sprung CL, Ricou B, Hartog CS, Maia P, Mentzelopoulos SD, Weiss M, et al. Changes in end-of-life practices in European intensive care units from 1999 to 2016. JAMA. Nov 05, 2019;322(17):1692-1704. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Levitov A, Frankel HL, Blaivas M, Kirkpatrick AW, Su E, Evans D, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients-part II: cardiac ultrasonography. Crit Care Med. Jun 2016;44(6):1206-1227. [ CrossRef ] [ Medline ]
  • Jung J, Kim J, Kim S, Kwon WY, Na SH, Kim KS, et al. Application of robot manipulator for cardiopulmonary resuscitation. In: Proceedings of the 2016 International Symposium on Experimental Robotics. 2016. Presented at: ISER '16; October 3-6, 2016:266-274; Tokyo, Japan. URL: https://link.springer.com/chapter/10.1007/978-3-319-50115-4_24 [ CrossRef ]
  • Suh GJ, Park J, Lee JC, Na SH, Kwon WY, Kim KS, et al. End-tidal CO2-guided automated robot CPR system in the pig. Preliminary communication. Resuscitation. Jun 2018;127:119-124. [ CrossRef ] [ Medline ]
  • Garingo A, Friedlich P, Tesoriero L, Patil S, Jackson P, Seri I. The use of mobile robotic telemedicine technology in the neonatal intensive care unit. J Perinatol. Jan 2012;32(1):55-63. [ CrossRef ] [ Medline ]
  • Williams N, MacLean K, Guan L, Collet JP, Holsti L. Pilot testing a robot for reducing pain in hospitalized preterm infants. OTJR (Thorofare N J). Apr 2019;39(2):108-115. [ CrossRef ] [ Medline ]
  • Balter ML, Chen AI, Maguire TJ, Yarmush ML. The system design and evaluation of a 7-DOF image-guided venipuncture robot. IEEE Trans Robot. Aug 2015;31(4):1044-1053. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Chen A, Nikitczuk K, Nikitczuk J, Maguire T, Yarmush M. Portable robot for autonomous venipuncture using 3D near infrared image guidance. Technology (Singap World Sci). Sep 2013;1(1):72-87. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lippi G, Cadamuro J. Novel opportunities for improving the quality of preanalytical phase. A glimpse to the future? J Med Biochem. Oct 2017;36(4):293-300. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Chen AI, Balter ML, Maguire TJ, Yarmush ML. Real-time needle steering in response to rolling vein deformation by a 9-DOF image-guided autonomous venipuncture robot. Rep U S. 2015;2015:2633-2638. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Tan W, Shen C, Luo Y, Zhu H, Ma T, Dong D, et al. Development of motion unit of simulated intelligent endotracheal suctioning robot. Zhongguo Yi Liao Qi Xie Za Zhi. Jan 30, 2019;43(1):17-20. [ CrossRef ] [ Medline ]
  • Kostandy RR, Ludington-Hoe SM. The evolution of the science of kangaroo (mother) care (skin-to-skin contact). Birth Defects Res. Sep 01, 2019;111(15):1032-1043. [ CrossRef ] [ Medline ]
  • Holsti L, MacLean K, Oberlander T, Synnes A, Brant R. Calmer: a robot for managing acute pain effectively in preterm infants in the neonatal intensive care unit. Pain Rep. 2019;4(2):e727. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Yu C, Sommerlad A, Sakure L, Livingston G. Socially assistive robots for people with dementia: systematic review and meta-analysis of feasibility, acceptability and the effect on cognition, neuropsychiatric symptoms and quality of life. Ageing Res Rev. Jun 2022;78:101633. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Demange M, Pino M, Kerhervé H, Rigaud AS, Cantegreil-Kallen I. Management of acute pain in dementia: a feasibility study of a robot-assisted intervention. J Pain Res. 2019;12:1833-1846. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Reiser U, Connette C, Fischer J, Kubacki J, Bubeck A, Weisshardt F. Care-O-bot® 3 - creating a product vision for service robot applications by integrating design and technology. In: Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems. 2009. Presented at: IROS '09; October 11-15, 2009:1992-1998; St. Louis, MO. URL: https://ieeexplore.ieee.org/document/5354526 [ CrossRef ]
  • Odashima T, Onishi M, Tahara K, Takagi K, Asano F, Kato Y. A soft human-interactive robot RI-MAN. In: Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems. 2006. Presented at: IROS '06; October 9-15, 2006:1; Beijing, China. URL: https://ieeexplore.ieee.org/abstract/document/4058352 [ CrossRef ]
  • Riener R, Lünenburger L, Maier I, Colombo G, Dietz V. Locomotor training in subjects with sensori-motor deficits: an overview of the Robotic Gait Orthosis Lokomat. J Healthc Eng. Jun 2010;1(2):197-216. [ CrossRef ]
  • Chillura A, Bramanti A, Tartamella F, Pisano MF, Clemente E, Lo Scrudato M, et al. Advances in the rehabilitation of intensive care unit acquired weakness: a case report on the promising use of robotics and virtual reality coupled to physiotherapy. Medicine (Baltimore). Jul 10, 2020;99(28):e20939. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Dahl TS, Kamel Boulos MN. Robots in health and social care: a complementary technology to home care and telehealthcare? Robotics. Dec 30, 2013;3(1):1-21. [ CrossRef ]
  • Rusek W, Adamczyk M, Baran J, Leszczak J, Pop T. Feasibility of Using Erigo and Lokomat in Rehabilitation for Patient in Vegetative State: A Case Report. J Clin Case Rep. 2020;10(1318):1-4. [ FREE Full text ]
  • Uivarosan D, Tit DM, Iovan C, Nistor-Cseppento DC, Endres L, Lazar L, et al. Effects of combining modern recovery techniques with neurotrophic medication and standard treatment in stroke patients. Sci Total Environ. Aug 20, 2019;679:80-87. [ CrossRef ] [ Medline ]
  • Alcobendas-Maestro M, Esclarín-Ruz A, Casado-López RM, Muñoz-González A, Pérez-Mateos G, González-Valdizán E, et al. Lokomat robotic-assisted versus overground training within 3 to 6 months of incomplete spinal cord lesion: randomized controlled trial. Neurorehabil Neural Repair. 2012;26(9):1058-1063. [ CrossRef ] [ Medline ]
  • Esclarín-Ruz A, Alcobendas-Maestro M, Casado-Lopez R, Perez-Mateos G, Florido-Sanchez MA, Gonzalez-Valdizan E, et al. A comparison of robotic walking therapy and conventional walking therapy in individuals with upper versus lower motor neuron lesions: a randomized controlled trial. Arch Phys Med Rehabil. Jun 2014;95(6):1023-1031. [ CrossRef ] [ Medline ]
  • Labruyère R, van Hedel HJ. Strength training versus robot-assisted gait training after incomplete spinal cord injury: a randomized pilot study in patients depending on walking assistance. J Neuroeng Rehabil. Jan 09, 2014;11:4. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Tang Q, Huang Q, Hu C. Research on design theory and compliant control for underactuated lower-extremity rehabilitation robotic systems code: (51175368); 2012.01-2015.12. J Phys Ther Sci. Oct 2014;26(10):1597-1599. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Shin JC, Kim JY, Park HK, Kim NY. Effect of robotic-assisted gait training in patients with incomplete spinal cord injury. Ann Rehabil Med. Dec 2014;38(6):719-725. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Duffell LD, Niu X, Brown G, Mirbagheri MM. Variability in responsiveness to interventions in people with spinal cord injury: do some respond better than others? Annu Int Conf IEEE Eng Med Biol Soc. 2014;2014:5872-5875. [ CrossRef ] [ Medline ]
  • Field-Fote EC, Lindley SD, Sherman AL. Locomotor training approaches for individuals with spinal cord injury: a preliminary report of walking-related outcomes. J Neurol Phys Ther. Sep 2005;29(3):127-137. [ CrossRef ] [ Medline ]
  • Home page. Hocoma. URL: https://www.hocoma.com/us/solutions/lokomat/ [accessed 2023-03-05]
  • Brihmat N, Loubinoux I, Castel-Lacanal E, Marque P, Gasq D. Kinematic parameters obtained with the ArmeoSpring for upper-limb assessment after stroke: a reliability and learning effect study for guiding parameter use. J Neuroeng Rehabil. Sep 29, 2020;17(1):130. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rémy-Néris O, Le Jeannic A, Dion A, Médée B, Nowak E, Poiroux É, et al. REM Investigative Team*. Additional, mechanized upper limb self-rehabilitation in patients with subacute stroke: the REM-AVC randomized trial. Stroke. Jun 2021;52(6):1938-1947. [ CrossRef ] [ Medline ]
  • Bernhardt J, Hayward KS. What is next after this well-conducted, but neutral, multisite trial testing self-rehabilitation approaches? Stroke. Jun 2021;52(6):1948-1950. [ CrossRef ] [ Medline ]
  • Colomer C, Baldoví A, Torromé S, Navarro M, Moliner B, Ferri J, et al. Efficacy of Armeo® Spring during the chronic phase of stroke. Study in mild to moderate cases of hemiparesis. Neurologia. Jun 2013;28(5):261-267. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Krebs HI, Ferraro M, Buerger SP, Newbery MJ, Makiyama A, Sandmann M, et al. Rehabilitation robotics: pilot trial of a spatial extension for MIT-Manus. J Neuroeng Rehabil. Oct 26, 2004;1(1):5. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Wood MD, Maslove DM, Muscedere J, Scott SH, Boyd JG, Canadian Critical Care Trials Group. Robotic technology provides objective and quantifiable metrics of neurocognitive functioning in survivors of critical illness: a feasibility study. J Crit Care. Dec 2018;48:228-236. [ CrossRef ] [ Medline ]
  • Jawa NA, Holden RM, Silver SA, Scott SH, Day AG, Norman PA, et al. Identifying neurocognitive outcomes and cerebral oxygenation in critically ill adults on acute kidney replacement therapy in the intensive care unit: the INCOGNITO-AKI study protocol. BMJ Open. Aug 17, 2021;11(8):e049250. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Wood M, Maslove D, Muscedere J, Scott S, Boyd J. Using robotic technology to quantify neurological deficits among survivors of critical illness: do they relate to brain tissue oxygen levels? a pilot study. Intensive Care Med Exp. Oct 1, 2015;3(S1):1-2. [ CrossRef ]
  • Debert CT, Herter TM, Scott SH, Dukelow S. Robotic assessment of sensorimotor deficits after traumatic brain injury. J Neurol Phys Ther. Jun 2012;36(2):58-67. [ CrossRef ] [ Medline ]
  • Chang WH, Kim YH. Robot-assisted therapy in stroke rehabilitation. J Stroke. Sep 2013;15(3):174-181. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Adcock AK, Kosiorek H, Parikh P, Chauncey A, Wu Q, Demaerschalk BM. Reliability of robotic telemedicine for assessing critically ill patients with the full outline of UnResponsiveness Score and Glasgow Coma Scale. Telemed J E Health. Jul 2017;23(7):555-560. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • McNelis J, Schwall GJ, Collins JF. Robotic remote presence technology in the surgical intensive care unit. J Trauma Acute Care Surg. Feb 2012;72(2):527-530. [ CrossRef ] [ Medline ]
  • Pransky J. The Pransky interview: Dr Yulun Wang, founder and CEO of InTouch Health. Ind Robot. 2015;42(5):381-385. [ CrossRef ]
  • Hand L. FDA clears telemedicine robot for use in hospitals. Medscape. 2013. URL: https://www.medscape.com/viewarticle/778672 [accessed 2023-05-19]
  • Long Y, Xu Y, Xiao Z, Shen Z. Kinect-based human body tracking system control of medical care service robot. In: Proceedings of the 2018 WRC Symposium on Advanced Robotics and Automation. 2018. Presented at: WRC SARA '18; August 15-19, 2018:281-288; Beijing, China. URL: https://ieeexplore.ieee.org/document/8584246 [ CrossRef ]
  • Ackerman E. iRobot and InTouch health announce RP-VITA telemedicine robot. IEEE Spectrum. 2012. URL: https://spectrum.ieee.org/irobot-and-intouch-health-announce-rpvita-telemedicine-robot [accessed 2024-03-03]
  • Swanepoel L. Steve Biko academic hospital has robot for ICU. NuusFlits. 2021. URL: https://find-it.co.za/2021/07/07/steve-biko-academic-hospital-has-robot-for-icu/ [accessed 2024-02-28]
  • Taylor L, Downing A, Noury GA, Masala G, Palomino M, McGinn C. Exploring the applicability of the socially assistive robot Stevie in a day center for people with dementia. In: Proceedings of the 30th IEEE International Conference on Robot & Human Interactive Communication. 2021. Presented at: RO-MAN '21; August 8-12, 2021:957-962; Vancouver, BC. URL: https://ieeexplore.ieee.org/document/9515423 [ CrossRef ]
  • Wu S, Wu D, Ye R, Li K, Lu Y, Xu J, et al. Pilot study of robot-assisted teleultrasound based on 5G network: a new feasible strategy for early imaging assessment during COVID-19 pandemic. IEEE Trans Ultrason Ferroelectr Freq Control. Nov 2020;67(11):2241-2248. [ CrossRef ]
  • Duan S, Liu L, Chen Y, Yang L, Zhang Y, Wang S, et al. A 5G-powered robot-assisted teleultrasound diagnostic system in an intensive care unit. Crit Care. Apr 07, 2021;25(1):134. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Li X, Guo L, Sun L, Yue PW, Xu H. Teleultrasound for the COVID-19 pandemic: a statement from China. Adv Ultrasound Diagn Ther. 2020;4(2):50-56. [ CrossRef ]
  • Wang J, Peng C, Zhao Y, Ye R, Hong J, Huang H, et al. Application of a robotic tele-echography system for COVID-19 pneumonia. J Ultrasound Med. Feb 2021;40(2):385-390. [ CrossRef ] [ Medline ]
  • Li D. 5G and intelligence medicine-how the next generation of wireless technology will reconstruct healthcare? Precis Clin Med. Dec 2019;2(4):205-208. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Khakharia A, Mehendale N. Robotic motion planning using Probabilistic Collision Hidden Markov Model (PCHMM) for contactless drug delivery. SSRN Journal.. . Preprint posted online on June 15, 2021. [ FREE Full text ] [ CrossRef ]
  • Evans J, Krishnamurthy B, Pong W, Croston R, Weiman C, Engelberger G. HelpMate™: a robotic materials transport system. Robot Auton Syst. Nov 1989;5(3):251-256. [ CrossRef ]
  • Summerfield MR, Seagull FJ, Vaidya N, Xiao Y. Use of pharmacy delivery robots in intensive care units. Am J Health Syst Pharm. Jan 01, 2011;68(1):77-83. [ CrossRef ] [ Medline ]
  • Wang H, Kasagami F. Careful-patient mover used for patient transfer in hospital. In: Proceedings of the 2007 IEEE/ICME International Conference on Complex Medical Engineering, 2007. Presented at: IEEE/ICME '07; May 23-27, 2007:20-26; Beijing, China. URL: https://ieeexplore.ieee.org/document/4381684 [ CrossRef ]
  • Stibich M, Stachowiak J. The microbiological impact of pulsed xenon ultraviolet disinfection on resistant bacteria, bacterial spore and fungi and viruses. S Afr J Infect Dis. Mar 31, 2016;31(1):12-15. [ CrossRef ]
  • Logistics Disinfection Robot of epsgroup Hospital Put into Anti-Epidemic Work in Central South Hospital of Wuhan University. Tencent QQ. 2020. URL: https://page.om.qq.com/page/OqCoH8_owiYNyNG8UvggcMHg0 [accessed 2024-02-29]
  • The "autonomous mobile disinfection robot" produced in Shanghai has arrived and has been used on the front line of the fight against the epidemic. China News Network. 2020. URL: https://baijiahao.baidu.com/s?id=1658226694285619350&wfr=spider&for=pc [accessed 2024-02-11]
  • Maclean M, McKenzie K, Moorhead S, Tomb RM, Coia JE, MacGregor SJ, et al. Decontamination of the hospital environment: new technologies for infection control. Curr Treat Options Infect Dis. Jan 30, 2015;7(1):39-51. [ CrossRef ]
  • Yu LH. Real-time monitoring of air pathogens in the ICU with biosensors and robots. Crit Care. Nov 14, 2022;26(1):353. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Weil MH, Tang W. From intensive care to critical care medicine: a historical perspective. Am J Respir Crit Care Med. Jun 01, 2011;183(11):1451-1453. [ CrossRef ] [ Medline ]
  • Site A, Lohan ES, Jolanki O, Valkama O, Hernandez RR, Latikka R, et al. Managing perceived loneliness and social-isolation levels for older adults: a survey with focus on wearables-based solutions. Sensors (Basel). Feb 01, 2022;22(3):1108. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Locsin RC. The co-existence of technology and caring in the theory of technological competency as caring in nursing. J Med Invest. 2017;64(1.2):160-164. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hermans G, Van den Berghe G. Clinical review: intensive care unit acquired weakness. Crit Care. Aug 05, 2015;19(1):274. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Calabrò RS, Cacciola A, Bertè F, Manuli A, Leo A, Bramanti A, et al. Robotic gait rehabilitation and substitution devices in neurological disorders: where are we now? Neurol Sci. Apr 2016;37(4):503-514. [ CrossRef ] [ Medline ]
  • van Kammen K, Boonstra AM, van der Woude LH, Visscher C, Reinders-Messelink HA, den Otter R. Lokomat guided gait in hemiparetic stroke patients: the effects of training parameters on muscle activity and temporal symmetry. Disabil Rehabil. Oct 2020;42(21):2977-2985. [ CrossRef ] [ Medline ]
  • Baronchelli F, Zucchella C, Serrao M, Intiso D, Bartolo M. The effect of robotic assisted gait training with Lokomat® on balance control after stroke: systematic review and meta-analysis. Front Neurol. 2021;12:661815. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Calafiore D, Negrini F, Tottoli N, Ferraro F, Ozyemisci-Taskiran O, de Sire A. Efficacy of robotic exoskeleton for gait rehabilitation in patients with subacute stroke : a systematic review. Eur J Phys Rehabil Med. Feb 2022;58(1):1-8. [ CrossRef ] [ Medline ]
  • Louie DR, Eng JJ, Lam T, Spinal Cord Injury Research Evidence (SCIRE) Research Team. Gait speed using powered robotic exoskeletons after spinal cord injury: a systematic review and correlational study. J Neuroeng Rehabil. Oct 14, 2015;12:82. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Mehrholz J, Pohl M. Electromechanical-assisted gait training after stroke: a systematic review comparing end-effector and exoskeleton devices. J Rehabil Med. Mar 2012;44(3):193-199. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Veneman JF, Kruidhof R, Hekman EE, Ekkelenkamp R, Van Asseldonk EH, van der Kooij H. Design and evaluation of the LOPES exoskeleton robot for interactive gait rehabilitation. IEEE Trans Neural Syst Rehabil Eng. Sep 2007;15(3):379-386. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Petrinec AB, Martin BR. Post-intensive care syndrome symptoms and health-related quality of life in family decision-makers of critically ill patients. Palliat Support Care. Dec 26, 2018;16(6):719-724. [ CrossRef ] [ Medline ]
  • Wood MD, Maslove D, Muscedere J, Scott SH, Day A, Boyd JG. Assessing the relationship between brain tissue oxygenation and neurological dysfunction in critically ill patients: study protocol. Int J Clin Trials. Aug 06, 2016;3(3):98-105. [ CrossRef ]
  • Dukelow SP, Herter TM, Moore KD, Demers MJ, Glasgow JI, Bagg SD, et al. Quantitative assessment of limb position sense following stroke. Neurorehabil Neural Repair. Feb 2010;24(2):178-187. [ CrossRef ] [ Medline ]
  • Veerbeek JM, Langbroek-Amersfoort AC, van Wegen EE, Meskers CG, Kwakkel G. Effects of robot-assisted therapy for the upper limb after stroke. Neurorehabil Neural Repair. Feb 24, 2017;31(2):107-121. [ CrossRef ] [ Medline ]
  • Schwartz I, Meiner Z. Robotic-assisted gait training in neurological patients: who may benefit? Ann Biomed Eng. May 2015;43(5):1260-1269. [ CrossRef ] [ Medline ]
  • Lazzara EH, Benishek LE, Patzer B, Gregory ME, Hughes AM, Heyne K, et al. Utilizing telemedicine in the trauma intensive care unit: does it impact teamwork? Telemed J E Health. Aug 2015;21(8):670-676. [ CrossRef ] [ Medline ]
  • Avgousti S, Christoforou EG, Panayides AS, Voskarides S, Novales C, Nouaille L, et al. Medical telerobotic systems: current status and future trends. Biomed Eng Online. Aug 12, 2016;15(1):96. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Vespa PM, Miller C, Hu X, Nenov V, Buxey F, Martin NA. Intensive care unit robotic telepresence facilitates rapid physician response to unstable patients and decreased cost in neurointensive care. Surg Neurol. Apr 2007;67(4):331-337. [ CrossRef ] [ Medline ]
  • Chung KK, Grathwohl KW, Poropatich RK, Wolf SE, Holcomb JB. Robotic telepresence: past, present, and future. J Cardiothorac Vasc Anesth. Aug 2007;21(4):593-596. [ CrossRef ] [ Medline ]
  • Bettinelli M, Lei Y, Beane M, Mackey C, Liesching TN. Does robotic telerounding enhance nurse-physician collaboration satisfaction about care decisions? Telemed J E Health. Aug 2015;21(8):637-643. [ CrossRef ] [ Medline ]
  • Jang SM, Hong YJ, Lee K, Kim S, Chiến BV, Kim J. Assessment of user needs for telemedicine robots in a developing nation hospital setting. Telemed J E Health. Jun 2021;27(6):670-678. [ CrossRef ] [ Medline ]
  • Bloss R. Unmanned vehicles while becoming smaller and smarter are addressing new applications in medical, agriculture, in addition to military and security. Ind Robot. 2014;41(1):82-86. [ CrossRef ]
  • Kristoffersson A, Coradeschi S, Loutfi A. A Review of Mobile Robotic Telepresence. Adv Hum Comput Interact. 2013;2013:1-17. [ CrossRef ]
  • Botha CF. Gender and humanoid robots: a somaesthetic analysis. Filos Theor J Afr Philos Cult Relig. Dec 13, 2021;10(3):119-130. [ CrossRef ]
  • McGinn C, Bourke E, Murtagh A, Donovan C, Lynch P, Cullinan MF, et al. Meet Stevie: a socially assistive robot developed through application of a ‘design-thinking’ approach. J Intell Robot Syst. Jul 17, 2019;98(1):39-58. [ CrossRef ]
  • Păvăloiu IB, Vasilățeanu A, Popa R, Scurtu D, Hang A, Goga N. Healthcare robotic telepresence. In: Proceedings of the 13th International Conference on Electronics, Computers and Artificial Intelligence. 2021. Presented at: ECAI '21; July 1-3, 2021:1-6; Pitesti, Romania. URL: https://ieeexplore.ieee.org/document/9515025 [ CrossRef ]
  • Evans KD, Yang Q, Liu Y, Ye R, Peng C. Sonography of the lungs: diagnosis and surveillance of patients with COVID-19. J Diagn Med Sonogr. Apr 21, 2020;36(4):370-376. [ CrossRef ]
  • Sarker S, Jamal L, Ahmed SF, Irtisam N. Robotics and artificial intelligence in healthcare during COVID-19 pandemic: a systematic review. Rob Auton Syst. Dec 2021;146:103902. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Teng R, Ding Y, See KC. Use of robots in critical care: systematic review. J Med Internet Res. May 16, 2022;24(5):e33380. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sichuan's first "5G+medical robot+VR" visitation system officially launched. China News Network. URL: https://www.chinanews.com.cn/jk/2021/02-08/9408104.shtml [accessed 2023-06-10]
  • Kelly FE, Fong K, Hirsch N, Nolan JP. Intensive care medicine is 60 years old: the history and future of the intensive care unit. Clin Med (Lond). Aug 2014;14(4):376-379. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Marshall JC. Critical illness is an iatrogenic disorder. Crit Care Med. Oct 2010;38(10 Suppl):S582-S589. [ CrossRef ] [ Medline ]
  • Buetti N, Ruckly S, de Montmollin E, Reignier J, Terzi N, Cohen Y, et al. COVID-19 increased the risk of ICU-acquired bloodstream infections: a case-cohort study from the multicentric OUTCOMEREA network. Intensive Care Med. Feb 2021;47(2):180-187. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sierra Marín SD, Gomez-Vargas D, Céspedes N, Múnera M, Roberti F, Barria P, et al. Expectations and perceptions of healthcare professionals for robot deployment in hospital environments during the COVID-19 pandemic. Front Robot AI. 2021;8:612746. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Di Lallo A, Murphy R, Krieger A, Zhu J, Taylor RH, Su H. Medical robots for infectious diseases: lessons and challenges from the COVID-19 pandemic. IEEE Robot Autom Mag. Mar 2021;28(1):18-27. [ CrossRef ]
  • Tamantini C, di Luzio FS, Cordella F, Pascarella G, Agro FE, Zollo L. A Robotic Health-Care Assistant for COVID-19 Emergency: A Proposed Solution for Logistics and Disinfection in a Hospital Environment. IEEE Robot Autom Mag. Mar 2021;28(1):71-81. [ CrossRef ]
  • Curtis N. To ChatGPT or not to ChatGPT? The impact of artificial intelligence on academic publishing. Pediatr Infect Dis J. Apr 01, 2023;42(4):275. [ CrossRef ] [ Medline ]
  • The FLI Team. A principled AI discussion in Asilomar. Future of Life Institute. 2017. URL: https://futureoflife.org/2017/01/17/principled-ai-discussion-asilomar/ [accessed 2024-02-28]
  • Lazar S, Nelson A. AI safety on whose terms? Science. Jul 14, 2023;381(6654):138. [ CrossRef ] [ Medline ]
  • Yang GZ, Bellingham J, Dupont PE, Fischer P, Floridi L, Full R, et al. The grand challenges of science robotics. Sci Robot. Jan 31, 2018;3(14):aar7650. [ CrossRef ] [ Medline ]
  • Department for Science, Innovation and Technology, AI Safety Institute, Smith C, Sunak R. Tech entrepreneur Ian Hogarth to lead UK’s AI foundation model taskforce. Government of UK. URL: https:/​/www.​gov.uk/​government/​news/​tech-entrepreneur-ian-hogarth-to-lead-uks-ai-foundation-model-taskforce [accessed 2023-05-10]
  • Chinese artificial intelligence white paper series. Chinese Association for Artificial Intelligence. URL: https://www.caai.cn/index.php?s=/home/article/detail/id/3188.html [accessed 2024-02-29]
  • The Bletchley declaration by countries attending the AI safety summit. Department for Science IT, Government of UK. URL: https://tinyurl.com/2xbn9wcv [accessed 2023-05-10]
  • Sedlakova J, Trachsel M. Conversational artificial intelligence in psychotherapy: a new therapeutic tool or agent? Am J Bioeth. May 2023;23(5):4-13. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Tustumi F, Andreollo NA, Aguilar-Nascimento JE. Future of the language models in healthcare: the role of ChatGPT. Arq Bras Cir Dig. 2023;36:e1727. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Crigger E, Reinbold K, Hanson C, Kao A, Blake K, Irons M. Trustworthy augmented intelligence in health care. J Med Syst. Jan 12, 2022;46(2):12. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Zou J, Schiebinger L. Ensuring that biomedical AI benefits diverse populations. EBioMedicine. May 2021;67:103358. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Gijsberts CM, Groenewegen KA, Hoefer IE, Eijkemans MJ, Asselbergs FW, Anderson TJ, et al. Race/ethnic differences in the associations of the Framingham risk factors with carotid IMT and cardiovascular events. PLoS One. 2015;10(7):e0132321. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kostick-Quenet KM, Gerke S. AI in the hands of imperfect users. NPJ Digit Med. Dec 28, 2022;5(1):197. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Safdar NM, Banja JD, Meltzer CC. Ethical considerations in artificial intelligence. Eur J Radiol. Jan 2020;122:108768. [ CrossRef ] [ Medline ]
  • Rumbold JM, Pierscionek BK. A critique of the regulation of data science in healthcare research in the European Union. BMC Med Ethics. Apr 08, 2017;18(1):27. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sanderson K. GPT-4 is here: what scientists think. Nature. Mar 2023;615(7954):773. [ CrossRef ] [ Medline ]
  • O'Sullivan S, Nevejans N, Allen C, Blyth A, Leonard S, Pagallo U, et al. Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. Int J Med Robot. Feb 09, 2019;15(1):e1968. [ CrossRef ] [ Medline ]
  • Da Silva M, Horsley T, Singh D, Da Silva E, Ly V, Thomas B, et al. Legal concerns in health-related artificial intelligence: a scoping review protocol. Syst Rev. Jun 17, 2022;11(1):123. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Liebrenz M, Schleifer R, Buadze A, Bhugra D, Smith A. Generating scholarly content with ChatGPT: ethical challenges for medical publishing. Lancet Digit Health. Mar 2023;5(3):e105-e106. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Feijóo C, Kwon Y, Bauer JM, Bohlin E, Howell B, Jain R, et al. Harnessing artificial intelligence (AI) to increase wellbeing for all: the case for a new technology diplomacy. Telecomm Policy. Jul 2020;44(6):101988. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Challen R, Denny J, Pitt M, Gompels L, Edwards T, Tsaneva-Atanasova K. Artificial intelligence, bias and clinical safety. BMJ Qual Saf. Mar 2019;28(3):231-237. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Greaves F, Joshi I, Campbell M, Roberts S, Patel N, Powell J. What is an appropriate level of evidence for a digital health intervention? Lancet. Dec 22, 2019;392(10165):2665-2667. [ CrossRef ] [ Medline ]
  • Artificial intelligence: an accountability framework for federal agencies and other entities. U.S. Government Accountability Office. URL: https://www.gao.gov/products/gao-21-519sp [accessed 2023-05-10]
  • Sarfraz Z, Sarfraz A, Iftikar HM, Akhund R. Is COVID-19 pushing us to the fifth industrial revolution (society 5.0)? Pak J Med Sci. 2021;37(2):591-594. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Cosgriff CV, Stone DJ, Weissman G, Pirracchio R, Celi LA. The clinical artificial intelligence department: a prerequisite for success. BMJ Health Care Inform. Jul 2020;27(1):e100183. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Gottesman O, Johansson F, Komorowski M, Faisal A, Sontag D, Doshi-Velez F, et al. Guidelines for reinforcement learning in healthcare. Nat Med. Jan 2019;25(1):16-18. [ FREE Full text ] [ CrossRef ] [ Medline ]


Edited by S Ma, T Leung; submitted 29.10.23; peer-reviewed by S Wei, TAR Sure; comments to author 26.02.24; revised version received 07.03.24; accepted 22.04.24; published 27.05.24.

©Yun Li, Min Wang, Lu Wang, Yuan Cao, Yuyan Liu, Yan Zhao, Rui Yuan, Mengmeng Yang, Siqian Lu, Zhichao Sun, Feihu Zhou, Zhirong Qian, Hongjun Kang. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 27.05.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

Advances in the Application of AI Robots in Critical Care: Scoping Review

Affiliations

  • 1 Medical School of Chinese PLA, Beijing, China.
  • 2 The First Medical Centre, Chinese PLA General Hospital, Beijing, China.
  • 3 The Second Hospital, Hebei Medical University, Hebei, China.
  • 4 Beidou Academic & Research Center, Beidou Life Science, Guangzhou, China.
  • 5 Department of Radiation Oncology, Fujian Medical University Union Hospital, Fujian, China.
  • 6 The Seventh Affiliated Hospital, Sun Yat-sen University, Shenzhen, China.
  • PMID: 38801765
  • DOI: 10.2196/54095

Background: In recent years, the field of critical care medicine has advanced substantially through the integration of artificial intelligence (AI). In particular, AI robots have evolved from theoretical concepts to systems actively implemented in clinical trials and applications. The intensive care unit (ICU), known for its reliance on a vast amount of medical information, presents a promising avenue for the deployment of robotic AI, which is anticipated to bring substantial improvements to patient care.

Objective: This review aims to comprehensively summarize the current state of AI robots in critical care by reviewing previous studies, developments, and applications of AI robots in ICU wards. In addition, it addresses the ethical challenges arising from their use, including concerns related to safety, patient privacy, delineation of responsibility, and cost-benefit analysis.

Methods: Following the scoping review framework proposed by Arksey and O'Malley and the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, we conducted a scoping review to delineate the breadth of research on AI robots in the ICU and reported the findings. The literature search was carried out on May 1, 2023, across 3 databases: PubMed, Embase, and the IEEE Xplore Digital Library. Eligible publications were initially screened by title and abstract, and those that passed the preliminary screening underwent a comprehensive full-text review. Various research characteristics were then extracted, summarized, and analyzed from the final publications.

Results: Of the 5908 publications screened, 77 (1.3%) underwent a full review. These studies collectively spanned 21 ICU robotics projects, encompassing their system development and testing, clinical trials, and approval processes. Based on an expert-reviewed classification framework, they were categorized into 5 main types: therapeutic assistance robots, nursing assistance robots, rehabilitation assistance robots, telepresence robots, and logistics and disinfection robots. Most are already widely deployed and commercialized in ICUs, although a select few remain under testing. All of these robotic systems and tools are engineered to deliver more personalized, convenient, and intelligent medical services to patients in the ICU while reducing the substantial workload on ICU medical staff and supporting therapeutic and care procedures. This review further explored the prevailing challenges, with particular focus on ethical and safety concerns; proposed viable solutions or methodologies; and illustrated the prospective capabilities of AI-driven robotic technologies in the ICU environment. Ultimately, we foresee a pivotal role for robots in a fully automated continuum from admission to discharge within the ICU.
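As a quick arithmetic check on the screening figures reported in the Results (the variable names below are ours, for illustration only), the stated 1.3% full-review rate follows directly from the two counts:

```python
# Screening counts as reported in the Results section
screened = 5908      # publications screened by title and abstract
full_review = 77     # publications that underwent full-text review

# Share of screened publications advancing to full review
pct = full_review / screened * 100
print(f"{pct:.1f}%")  # prints "1.3%"
```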

Conclusions: This review highlights the potential of AI robots to transform ICU care by improving patient treatment, support, and rehabilitation processes. However, it also recognizes the ethical complexities and operational challenges that come with their implementation, offering possible solutions for future development and optimization.

Keywords: AI; ICU; artificial intelligence; critical care medicine; intensive care unit; robotics.


MeSH terms

  • Artificial Intelligence*
  • Critical Care* / methods
  • Intensive Care Units
  • Robotics* / methods

A Literature Review on New Robotics: Automation from Love to War

  • Open access
  • Published: 09 April 2015
  • Volume 7, pages 549–570 (2015)


  • Lambèr Royakkers
  • Rinie van Est


This article investigates the social significance of robotics for the years to come in Europe and the US by studying robotics developments in five different areas: the home, health care, traffic, the police force, and the army. Our society accepts the use of robots to perform dull, dangerous, and dirty industrial jobs. But now that robotics is moving out of the factory, the relevant question is how far do we want to go with the automation of care for children and the elderly, of killing terrorists, or of making love? This literature review attempts to provide an engaged but sober (non-speculative) insight into the societal issues raised by the new robotics: which robot technologies are coming; what are they capable of; and which ethical and regulatory questions will they consequently raise?


1 Introduction

Until recently, robots were mainly used in factories for automating production processes. In the 1970s, the appearance of factory robots led to much debate on their influence on employment. Mass unemployment was feared. Although this did not come to pass, robots have radically changed the way work is done in countless factories. This article focuses on how the use of robotics outside the factory will change our lives over the coming decades.

New robotics no longer concerns only factory applications, but also the use of robotics in a more complex and unstructured outside world, that is, the automation of numerous human activities, such as caring for the sick, driving a car, making love, and killing people. New robotics, therefore, literally concerns automation from love to war. The military sector and the car industry are particularly strong drivers behind the development of this new information technology, and in fact they always have been: the car industry took the lead with the introduction of the industrial robot as well as with the robotisation of cars, while the military, especially in the United States, stood at the forefront of artificial intelligence development, as it earlier did with computers and the Internet. Robotics makes use of the existing ICT infrastructure and also implies a continued technological evolution of these networks. Through robotics, the Internet has gained, as it were, 'senses and hands and feet'. The new robot is thus not usually a self-sufficient system. In order to understand the possibilities and impossibilities of the new robotics, it is therefore important to realise that robots are usually supported by a network of information technologies, such as the Internet, and are therefore often presented as networked robots.

New robotics is driven by two long-term engineering ambitions. Firstly, there is the engineering dream of building machines that can move and act autonomously in complex and unstructured environments. Secondly, there is the dream of building machines that are capable of social behaviour and have the capacity for moral decision making. The notion that this may be technologically possible within a few decades is referred to as the ‘strong AI’ view (AI: artificial intelligence). It is highly doubtful that this will indeed happen. At the same time, the ‘strong AI’ view prevails in the media and is highly influential in the formulation and public financing of IT research. It is beyond dispute that this technology will strongly influence the various practices researched. This also puts many societally and politically sensitive issues on the political and public agenda. For example, according to Peter Singer, the robotisation of the army is ‘the biggest revolution within the armed forces since the atom bomb’ [ 82 ]. The robotisation of cars, too, appears to have begun causing large technological and cultural changes in the field of mobility. Netherlands Organisation for Applied Scientific Research (TNO) describes the introduction of car robots as a “gradual revolutionary development” [ 94 ]. Through robots, the police may enjoy an expansion of the current range of applications for surveillance technologies. Home automation and robotics make tele-care possible and will radically change health care practice over the coming years. Finally, we point to the fact that over the past years, ‘simple’ robotics technologies have given the entertainment industry a new face: think of Wii or Kinect. We will continue to be presented with such technological gadgets in the coming period.

New robotics offers numerous possibilities for making human life more pleasant, but it also raises countless difficult societal and ethical issues. The debate on the application of robotics to distant battlegrounds is very current, while the application of care robots is just appearing on the horizon. Prompted by the arrival of new robotics, the Rathenau Instituut in 2011 and 2012 investigated the social meaning of robotics for the years to come in Europe and the US by studying robotics developments in five application domains: the home, health care, traffic, the police, and the army [ 70 ]. For this study, a comprehensive literature review was carried out with the goal of selecting the most relevant articles on the robots of the five application domains and the related ethics. For each application domain, we present the main ethical issues and the most relevant findings obtained from the literature, with the focus on the following three central questions:

What is possible right now in terms of new robotic technologies, and what is expected to become possible in the medium and long term?

What ethical questions does the new robotics raise in the shorter and longer terms?

What regulatory issues are raised by these ethical issues? In other words, what points should be publicly discussed or put on the agenda by politicians and policymakers?

Based on the results of our literature review, this article firstly addresses the above questions in the following five sections relating to, respectively, the home, health care, traffic, the police, and the army. After that, this review provides the first overview of studies that investigate the ethical aspects of new robotics based on some key characteristics of new robotics that are discussed in Sect.  7 . We will end with an epilogue.

2 Home Robot

In this section we will discuss two types of home robots: the functional household robot and the entertainment robot. In relation to entertainment robots, we have made a distinction between the social interaction robot and the physical interaction robot such as the sex robot.

2.1 Household Robots

In relation to household robots, we see a huge gap between the high expectations concerning multifunctional, general-purpose robots that can completely take over housework and the actual performance of the robots currently available or expected in the coming years. In 1964, Meredith Wooldridge Thring [ 92 ] predicted that by around 1984 a robot would be developed that would take over most household tasks, and that the vast majority of housewives would want to be entirely relieved of the daily work in the household, such as cleaning the bathroom, scrubbing floors, cleaning the oven, doing laundry, washing dishes, dusting and sweeping, and making beds. Thring theorised that an investment of US$5 million would be sufficient for developing such a household robot within ten years. Despite a multitude of investments, the multifunctional home robot is still not within reach. During the last ten years, the first robots have made their entry into the household, but they are all 'one trick ponies' or monomaniacal: specialised machines that can only perform one task. According to Bill Gates [ 40 ]: '[w]e may be on the verge of a new era, when the PC will get up from the desktop and allow us to see, hear, touch and manipulate objects in places where we are not physically present.'

It is unlikely that households will adopt in droves the monomaniacal, simple cleaning robots such as vacuum cleaner robots, robot lawn mowers, and robots that clean windows with a chamois leather. These robots can only perform parts of the household tasks, and they also force the user to adapt and streamline part of their environment. The study by Sung et al. [ 88 ] showed that almost all users of a robotic vacuum cleaner made changes to the organisation of their home and their furniture. The tidier and less cluttered the household, the easier it is to make use of a robot vacuum cleaner. This process of rationalising the environment so that the robot vacuum cleaner can do its job better is known as 'roombarization' [ 89 ], after the vacuum cleaner robot Roomba. Typical modifications are moving or hiding cables and cords, removing deep-pile carpet, removing lightweight objects from the floor, and moving furniture. This need for a structured environment is probably an inhibiting factor for the rise of the commercial vacuum cleaner robot: the history of technology research shows that interest in new devices quickly decreases when existing practices require too many changes [ 66 ].

The expectation that the new generation of robots will be able to operate in more unstructured environments is unlikely to be fulfilled in the household. This is not only a matter of time and technological development; it also comes up against fundamental limitations. Housework turns out to be less simple than expected. Closer inspection shows that many situations in which a household task must be performed require a lot of common-sense decisions, for which no fixed algorithms exist. The degree of difficulty is shown by research from the University of California at Berkeley, which aims to develop a robot that is able to fold laundry. Eventually, a robot was developed that took 25 minutes to fold one towel [ 62 ].

Bill Gates’ prediction of “a robot in every home by 2015”, in our opinion, is highly unlikely to happen. We expect that this cannot be realised in the short or the medium term. Many technical challenges must be overcome before the home robot can convince the public that it can take over a variety of household chores efficiently.

2.2 Amusement Robots

2.2.1 Expectations

It seems that entertainment robots do meet expectations and social needs. Compared to the household robot, expectations concerning the entertainment robot are much less pre-defined. The goals are just communicating, playing, and relaxing. The need is not set, but arises in the interaction. We see an age-old dream come true: devices that resemble humans or animals and can interact with us. Examples are AIBO (a robot companion shaped like a dog), the fluffy cuddly toy Furby, the funny My Keepon (a little yellow robot that dances to the rhythm of music), and the sex robot: all four invite us to play out social and physical interaction. People become attached to the robot and attribute human features to it. This is called 'anthropomorphism', i.e. attributing human traits and behaviours to non-human subjects. People even assign robots a psychological and a moral status, which we previously only attributed to living humans [ 59 ]. Research shows that young children are much more attached to toy robots than to dolls or teddy bears, and even consider them as friends [ 90 ].

Nevertheless, we certainly cannot speak of a success story. The social interaction robots that are currently available are very limited in their social interaction and very predictable, so most consumers do not remain fascinated for long. This motivates researchers to work towards more efficient and effective interaction [ 12 , 29 , 45 ]. There is a lack of knowledge about the mechanisms that encourage communication between humans and robots, about how behaviour between humans and robots arises, and even about how the interaction between people actually works. Such knowledge is critical to the design of the social robot, because its success depends on successful interaction [ 13 ]. The research discipline of human–robot interaction is still in its infancy. At this time—and probably within the next ten years—we should therefore consider commercially available social interaction robots like Furby and My Keepon as fads and gadgets whose lustre soon fades, rather than as kinds of family friends. How the sex robot will develop is still unknown, but the sex industry and some robot technologists see a great future for this robot and consider it a driving force behind the development of social robots and human–robot interaction research (see, for example, [ 51 ]).

In order to let robots interact with humans in a successful manner, many obstacles must be overcome, especially to develop a social robot which has many of the social intelligence properties as defined by Fong et al. [ 38 ]: it can express and observe feelings, is able to communicate via a high-level dialogue, has the ability to learn social skills, the ability to maintain social relationships, the ability to provide natural cues such as looks and gestures, and has (or simulates) a certain personality and character. It will take decades before a social robot has matured enough to incorporate these properties, but modern technology will make it increasingly possible to interact with robots in a refined manner. This will turn out to be a very gradual process.

2.2.2 Ethical and Regulatory Issues

2.2.2.1 Emotional Development The entertainment robot is modelled on the principle of anthropomorphism. Since users are strongly inclined to anthropomorphism, robots quickly generate feelings [ 28 ]. This raises all sorts of social and ethical questions, particularly the question of what influence entertainment robots have on the development of children and on our human relationships. Sharkey and Sharkey [ 79 ] especially question nanny robots for children, as they think it will damage their emotional and social development and will lead to bonding problems in children. Sparrow [ 83 ] has already expressed worries about toy robots remaining as ‘simulacra’ for true social interaction. Emotions expressed by robots promise an emotional connection, which they can never give, for emotions displayed by robots are indeed merely imitations, and therein lies the danger, according to Sparrow, “imitation is likely to involve the real ethical danger that we will mistake our creations for what they are not” [ 83 ] (p. 317).

2.2.2.2 De-socialisation For Turkle [ 96 ], the advance of the robot for social purposes is worrying, and she fears that people will lose their social skills and become even lonelier. She is concerned that children will get used to perfect friendships with perfectly programmed positive robots, so they will not learn to deal with real-life people with all their complexities, problems, and bad habits. These ideas remain speculations, because there has been only limited research on the actual effects of the impact of social robots on children and adults [ 91 ]. In addition, Turkle sees the sex robot as a symbol of a great danger, namely that the robot’s influence stops us from being willing to exert the necessary effort required for regular human relations: “Dependence on a robot presents itself as risk free. But when one becomes accustomed to ‘companionship’ without demands, life with people may seem overwhelming. Dependence on a person is risky—because it makes us the subject of rejection—but it also opens us to deeply knowing another.” She states that the use of sex robots leads to de-socialisation.

According to Evans [ 36 ], a real friendship between robots and humans is impossible, since friendship is conditional. Intimate friendship, therefore, is a kind of paradox: on the one hand, we want a friend to be reliable and not to let us down, but when we receive complete devotion we lose interest. In addition, Evans argues that we can only really care about a robot when the robot can actually suffer. If robots cannot experience pain, we will just consider them to be objects. This raises further ethical questions, such as whether we should develop robots that can suffer and whether we should grant rights to robots (see [ 53 ]).

Little research has been done on socialisation and de-socialisation, but it is important that we think about boundaries; where and when do social robots have a positive socialising effect and where do we expect de-socialisation?

2.2.2.3 Sex with Robots The possibility of having sex with robots may reduce the incidence of cheating on a partner and adultery. There is still the question of whether having robot sex would be considered as being unfaithful, or whether robot sex would become just as humdrum and innocent as the use of a vibrator nowadays [ 56 ]. These robots could satisfy people’s desire for illicit sexual practices, such as paedophilia, and could be used in therapy to remedy the underlying problem. According to Levy [ 52 ], the sex robot, the ultimate symbol of the automation of lust, may in the future contribute to solving the general problem of sex slavery and the trafficking of women, issues which are currently ignored by many politicians. The sex robot could even be an alternative to a human prostitute. Research in this field is still lacking. Ian Yeoman, a futurologist in the tourism and leisure industry, predicts that the prostitution robot will be introduced [ 58 ]. In response to this prospect, Amanda Kloer [ 49 ] even sees robots as perfect prostitutes, but she inserts a note:

In a way, robots would be the perfect prostitutes. They have no shame, feel no pain, and have no emotional or physical fall-out from the trauma which prostitution often causes. As machines, they can’t be victims of human trafficking. It would certainly end the prostitution/human trafficking debate. But despite all the arguments I can think of for this being a good idea, I’ve gotta admit it creeps me out a little bit. Have we devalued sex so much that is doesn’t even matter if what we have sex with isn’t human? Has the commercial sex industry made sex so mechanical that it will inevitably become ... mechanical?

Related to the adult-sized sex robot is the issue of sex with child–robots and the associated question of whether child–robot sex should be punishable. The questions that now arise are whether child–robot sex contributes to a subculture that promotes the sexual abuse of children, or whether it reduces such abuse. No country's legislation establishes that sex with child–robots is a criminal offence. National legislators (or, for example, the European Commission) will have to create a legal framework if this behaviour is to be prohibited.

3 Care Robot

3.1 Expectations

A staffing shortage—due to future ageing—is often invoked as an argument for deploying robotics in long-term care. An ageing population is defined as a population that has an increase in the number of persons aged 65 and over compared with the rest of the population. According to the European Commission [ 35 ], the proportion of those aged 65 and over is projected to rise from 17 % in 2010 to 30 % in 2060. Moreover, it is expected that people will be living longer: life expectancy at birth is projected to increase from 76.6 years in 2010 to 84.6 in 2060 for males, and from 82.5 to 89.1 for females. One out of ten people aged 65 and over will be octogenarians or older. The growth of the very oldest group will put pressure on care services and will result in an increase in the demand for various services for the elderly: (1) assisting the elderly and/or their caregivers in daily tasks; (2) helping to monitor their behaviour and health; and (3) providing companionship [ 79 ]. Care-robot developers have high expectations: in the future, care robots will take the workload away from caregivers. However, the argument that robots can solve staff shortages in health care has no basis in hard evidence. Instead of replacing labour, the deployment of care robots rather leads to a shift and redistribution of responsibilities and tasks and forms new kinds of care [ 67 ].

During the next 10 years, care support robots may not widely enter the field of care. The use of care robots must be viewed primarily from the perspective of current development and deployment of home automation (domotics). These smart technologies, which at present are being incorporated widely into our environment, are the prelude to a future home with robots. Domotics allows the tele-monitoring of people; it offers the possibility of using a TV or computer screen at home in order to be able to easily talk with health care professionals [ 1 ]. Also, medical data such as blood glucose levels or electrocardiogram (ECG) can be uploaded to the doctor or hospital. We expect that the possibilities of automation will continuously expand and will become supportive of robotic technologies. However, in addition to technological challenges, the challenge is to make domotics and robotics applications cost-effective. This often necessitates many years of innovative trajectories of research, and, especially in the field of long-term care, innovation processes are usually difficult to finance [ 15 ].

Despite this difficult process, domotics will increasingly become part of health care practice because of the current trend of decentralisation and the fact that telecommunications technologies will allow people to receive more remote care at home. This may lengthen the period during which senior citizens will be able to live independently at home.

In the long term, robots will enter the field of care. A development such as the Japanese lifting robot RIBA II, which supports human carers in lifting their clients, has already brought the use of care robots one step closer. Apart from robots that support caregivers, there are already robots that allow people to live independently at home for longer, such as Kompai, a robot developed by the French company Robosoft that gives support to 'dependent' persons in their private homes. This wheeled robot, which has no arms, has a touch screen and is directly connected to the Internet, enabling videoconference contact between a resident and a doctor. It can recognise human speech and talk, understands commands, and can perform actions such as leaving the room on command, playing music, or creating a shopping list. Kompai is also able to determine its location within the house and will return independently to its docking station when its batteries need recharging. At the moment, the robot is unable to express human emotions, but Robosoft expects this functionality will be added in the future. Robosoft has launched Kompai as a basic robot platform, enabling technicians to continue tinkering so that new applications can be developed and added over time. Kompai is now deployed and will be further developed within several European projects, e.g. the Seventh Framework project Mobiserv (An Integrated Intelligent Home Environment for the Provision of Health, Nutrition and Mobility Services to the Elderly), which examines how innovative technologies can support the elderly in a user-friendly way so that they can live independently for as long as possible.
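The behaviour described for Kompai — mapping recognised spoken commands to actions, with battery-driven docking taking priority over everything else — can be illustrated with a small sketch. This is a hypothetical toy model, not Robosoft's actual software; the class name, command vocabulary, and battery threshold are all assumptions:

```python
# Toy model of a Kompai-style assistive robot's control loop (illustrative
# only): spoken commands are dispatched to handlers, and a low battery
# overrides any command by sending the robot back to its docking station.

class AssistiveRobot:
    LOW_BATTERY = 20  # percent; the threshold is an assumption

    def __init__(self):
        self.battery = 100
        self.location = "living room"
        self.shopping_list = []
        # Recognised spoken commands mapped to their handlers.
        self.commands = {
            "leave the room": self._leave_room,
            "play music": lambda: "playing music",
            "add milk to shopping list": lambda: self._add_item("milk"),
        }

    def _leave_room(self):
        self.location = "hallway"
        return "leaving the room"

    def _add_item(self, item):
        self.shopping_list.append(item)
        return f"added {item} to the shopping list"

    def step(self, spoken_command=None):
        """One control-loop tick: docking takes priority over commands."""
        self.battery -= 1  # crude battery-drain model
        if self.battery < self.LOW_BATTERY:
            self.location = "docking station"
            return "returning to dock to recharge"
        if spoken_command in self.commands:
            return self.commands[spoken_command]()
        return "idle"
```

The dispatch-table design mirrors how a small command vocabulary can be extended by technicians without touching the control loop, which is the platform idea Robosoft describes.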

3.2 Ethical and Regulatory Issues

The use of robotic technologies in care puts forward the question of to what extent the current and future health care system will have space to give actual care. Care implies concern about the welfare of people, entering into a relationship with them, dealing with their discomforts, and finding a balance between what is good for that person and whatever it is that they are asking for. Robots seem to be the epitome of effective and efficient care: the ultimate rationalisation of a concept that perhaps cannot be captured in sensors, figures, and data. The use of care robots requires a vision of care practice, and the discussion should be about what exactly we mean by 'care', taking into consideration aspects such as reciprocity, empathy, and warmth, and the role taken up by technology. In 2006, Robert and Linda Sparrow [ 85 ] noted their thoughts about the use of robots in care for senior citizens. Their message is utterly clear: the use of care robots is unethical. They emphasise that robots are unable to provide the care, companionship, or affection needed by ageing people. They see the use of robots by the elderly as an expression of a profound lack of respect for senior citizens. Other ethicists do not immediately reject the care robot, as they see opportunities for care robots if used in certain conditions and with particular qualifications (e.g. [ 11 , 19 , 23 ]).

3.2.2 Fine-Tuning

Developers should take into consideration the wishes and needs of caregivers as well as those of care recipients in their design process [ 103 ]. Technicians are therefore required to make a ‘value sensitive design’, a design which also takes into account the wishes and needs of different groups of users—caregivers as well as care recipients. Both of these users should be involved as early as possible in the design process [ 106 ]. The process also requires the use of tele-technologies and home automation for fine-tuning and getting clear coordination with other stakeholders such as general practitioners, hospitals, nursing homes, home health agencies, insurance companies, and family members. This takes us to the point made by Van Oost and Reed [ 105 ]: when reflecting on deploying care robots one must not focus only on those persons directly involved—the entire socio-technical context must also be examined.

3.2.3 Privacy

In the short term, ethical issues play a role in home automation. Registering and monitoring the behaviour of care recipients raises privacy issues. Exactly what information is collected on the people being tele-monitored? What does that data say about the daily activities within the household? Who has access to the data that is collected? How long will the data be stored? Are the care recipients aware of the fact that information is being collected about them? Is it justified to deploy these technologies and data-gathering methods when some people, for example because they have dementia, are unaware of the presence of such technologies? These questions about privacy should be taken into consideration by developers and politicians when they are advocating the deployment of both home automation and robotics. According to Borenstein and Pearson [ 11 ], the degree of control that the care recipient has over the information collected is important. When a person has actual control over the information collected, this enhances the autonomy of that person. This requires developers, right from the beginning of the design process, to consider the consequences for privacy of their robotic technologies. The challenge is to strike a proper balance between the protection of privacy and the need to keep living at home independently.

3.2.4 Human Dignity

Another important drawback put forward by ethicists is the feared reduction in human contact. Care recipients will no longer have direct contact with human caretakers; they will only have contact with devices or will have remote contact, mediated by technology. The increasing use of robots therefore raises social issues relating to the human dignity of the care recipient. The manner in which robots are deployed proves to be a crucial point. When robots are used to replace the caregiver, there is a risk that care becomes dehumanised [ 107 ]. Sharkey and Sharkey [ 79 ] also point to the danger of the objectification of care for senior citizens by using care robots. When robots take over tasks like feeding and lifting, the recipients may consider themselves as objects. They foresee the possibility that senior citizens may develop the idea that they have less control over their lives if they receive care from care robots compared to just receiving care from human caregivers. The risk of paternalism comes into play here in terms of the extent to which the robot may enforce actions. The ethical objection of the ‘objectification of the patient’ is consistent with the idea that robots cannot provide ‘real’ care. Human contact is often seen as crucial for the provision of good care [ 19 ]. According to Sparrow and Sparrow [ 85 ], robots are unable to meet the emotional and social needs that senior citizens have in relation to almost all aspects of caring.

The robot-as-companion technology raises controversial images: lonely elderly people who only have contact with robot animals or humanoid robots. The ethical concerns about the pet robot focus on the reduction of human contact that such technology brings about and the possibility of deceiving, for example, patients with dementia [ 11 , 19 , 79 , 85 , 95 ]. Sparrow and Sparrow [ 85 ] describe care robots for the elderly as 'simulacra' that replace real social interaction, just like the nanny robot for children (see Sect.  2 ). In addition, they wonder about the degree to which a robot can provide entertainment for longer periods; their expectation is that once the novelty of the robot has worn off, it will end up idle, discarded in a corner. Borenstein and Pearson [ 11 ] are more positive about the deployment of robots; they believe that although robots cannot provide real friendship, a companion robot such as the seal robot Paro may relieve feelings of loneliness and isolation.

The question underlying all of this is how much of a right a care recipient has to real human contact. Or, to put it more bluntly, how many minutes of real human contact is a care recipient entitled to receive each day? It is important to respect the choice of the care recipient. Some people might prefer a human caregiver, while others may prefer a support robot, depending on which one gives them a greater sense of self-esteem. Robots can thus be used to make people more independent (e.g. by assisting people when showering or going to the toilet) or to motivate them to go out more often to keep up their social contacts. The more control the care recipient has over the robot, the less likely he or she is to feel objectified by the care robot. Thus, the use of robotics should be tailor-made and should not lose sight of the needs of care recipients. We agree with the advice of Sharkey and Sharkey [ 79 ], who argue that in the use of robotics for health care a balance must always be sought between increasing the autonomy and quality of life of older people—by allowing them to remain at home longer—and protecting the individual rights of people and their physical and mental well-being.

3.2.5 Competences of Caregivers

In the medium term, the increasing use of care robots puts demands on the professional skills of caregivers [ 100 ]. The use of robotic technologies creates a new care practice in which caregivers get a new role, and their duties and responsibilities will shift [ 3 , 67 ]. Indeed, working with a lifting robot requires specific skills of caregivers: knowing how to steer the robot and how to predict potential failures. Providing care at a distance requires that caregivers are able to diagnose and tele-monitor people via a computer or TV screen and are able to reassure a patient. New skills are also expected of the care receiver, who should be able to deal with tele-conferencing and with forwarding data messages to a doctor. Obviously, this requires that care professionals have the ability to instruct patients about the technology and to familiarise them with its use. Dealing with robotic technology therefore opens a new chapter in the training of caregivers, so that they may easily cope with it and anticipate the possibilities and limitations of robotic technologies (see also [ 80 ]). The entire deployment of care technologies should be re-examined in the context of both practice and policy.

4 Robot Car

4.1 Expectations

The robotisation of the car is in full swing. Advanced Driver Assistance Systems (ADAS) support the driver but do not yet allow fully automated driving in traffic. The application of driver assistance systems is developing rapidly and is strongly stimulated by industry, research institutions, and governments. Expectations regarding the safety effects of these systems are high. The available driver assistance systems are probably only harbingers of a major development that will lead to the progressive automation of the driving task. This trend can already be observed: systems that originally only advised or warned (for example, alerting the driver when speeding or unintentionally veering off the roadway) are being developed into systems that actually intervene, for instance by returning the car to the correct lane when the driver unintentionally leaves it. In addition, car manufacturers compete with each other especially in terms of comfort and safety, because there is not much more that can be done to improve the quality of cars [ 43 ]. Intelligence therefore becomes the unique selling point for a new car.

Science also points to the potential of cooperative systems—in conjunction with traffic management—that operate through connected navigation. Much research is currently being carried out, and many of these research projects are funded by the European Union, such as Safespot (2006–2010), Footnote 3 CO-OPerative SystEms for Intelligent Road Safety (COOPERS, 2006–2010), Footnote 4 Cooperative Vehicle-Infrastructure Systems (CVIS, 2006–2010), Footnote 5 and Safe Road Trains for the Environment (SATRE, 2009–2013). Footnote 6 The first pilot projects have now been realised, and it is expected that these cooperative systems will lead to less congestion and better use of the road network. The European Commission [ 32 ] will propose the short-term technical specifications required to exchange data and information from vehicle to vehicle (V2V) and from vehicle to infrastructure (V2I). This proposed standardisation should give a push to the further implementation of these systems. 'Train driving' (i.e. cars that follow close behind each other and exchange information about their speed, position, and acceleration) requires standardisation of the cooperative driving electronics so that cars of different brands can join the 'train'. Despite the expected positive contribution of cooperative systems—in contrast with driver assistance systems—little research has been done on their safety and possible side effects. It will take some years before V2V and V2I communication is safe and reliable enough to be used in cooperative driving.

The development of cooperative systems will contribute to the further implementation of autonomous driving. Autonomous driving may first be applied on motorways using cooperative adaptive cruise control (ACC), for which V2V communication will be necessary. The infrastructure will not need to change much, because drivers can already get all information about local traffic regulations, traffic congestion, roadworks, and the like via the navigation system or any other in-car information source. Perhaps roadside systems could be placed along the road to guide autonomous driving, especially at motorway slip roads. This semi-autonomous driving allows the car to drive autonomously on certain roads with non-complex traffic situations, such as motorways, but not in places with more complex traffic situations, such as cities. Scientists consider that this will be realistic by about 2020 ([ 108 ]; see also the SATRE project). The expected result of this semi-autonomous driving is that cars will become more fuel-efficient, road safety on motorways will increase, and traffic congestion will be partly mitigated, especially shock wave congestion. During autonomous driving, the driver can read a book, use the Internet, have breakfast, and so on.
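The platooning behaviour described above can be sketched as a simple control law: each follower receives the leader's position, speed, and acceleration over V2V and adjusts its own acceleration to hold a fixed time gap. The message fields, gains, and gap policy below are illustrative assumptions for exposition, not any standardised cooperative driving protocol.

```python
from dataclasses import dataclass


@dataclass
class V2VMessage:
    """Illustrative V2V broadcast: position (m), speed (m/s), acceleration (m/s^2)."""
    position: float
    speed: float
    accel: float


def cacc_accel(own_pos: float, own_speed: float, lead: V2VMessage,
               time_gap: float = 0.6, standstill: float = 5.0,
               kp: float = 0.2, kv: float = 0.4, ka: float = 0.5) -> float:
    """Follower acceleration command under a constant-time-gap policy.

    The gains kp/kv/ka are placeholder values; a real CACC design
    would be tuned and proven string-stable across the platoon.
    """
    desired_gap = standstill + time_gap * own_speed
    gap_error = (lead.position - own_pos) - desired_gap
    speed_error = lead.speed - own_speed
    # Feeding forward the leader's acceleration is what V2V adds
    # over radar-only adaptive cruise control.
    return kp * gap_error + kv * speed_error + ka * lead.accel
```

At 25 m/s with a 0.6 s time gap, the desired spacing is 5 + 0.6 × 25 = 20 m; a follower trailing by 25 m therefore receives a small positive acceleration command to close the 5 m surplus gap.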

The autonomous car was first promised in 1939 by Bel Geddes in the Futurama exhibition he designed for General Motors at the New York World's Fair. With Futurama, Geddes speculated about what society would look like in the future. In his book Magic Motorways (1940), he writes, "[t]hese cars of 1960 and the highways on which they drive will have in them devices which will correct the faults of human beings as drivers. They will prevent the driver from committing errors." In 1958, General Motors' engineers demonstrated the first 'autonomous car'. This car was driven autonomously over a stretch of highway by way of magnets attached to the car and wiring in the roadway—so-called 'automatic highways'. In a press release, General Motors proudly announced the result [ 109 ]: "An automatically guided automobile cruised along a one-mile check road at the General Motors Technical Center today, steered by an electric cable beneath the concrete surface. It was the first demonstration of its kind with a full-size passenger car, indicating the possibility of a built-in guidance system for tomorrow's highways. ... The car rolled along the two-lane road and negotiated the check banked turn-around loops at either end without the driver's hands on the steering wheel." In 1974, a group of forty-six researchers predicted that these automatic highways would become a reality between 2000 and 2020 [ 97 ].

In 2010, it was announced that Google would undertake research on autonomous vehicles. Since then, the company has driven autonomous cars (six Toyota Priuses and one Audi TT) for thousands of test kilometres on Californian public roads. Legal fines were avoided by having the drivers keep their hands just above the steering wheel, ready to intervene in case of problems. Early in 2011, Google started to lobby the US state of Nevada to adjust its road traffic regulations. According to Google, autonomously driven vehicles should be legalised and the ban on text messaging from moving autonomous cars should be lifted. Meanwhile, the states of California, Nevada, and Florida are setting ground rules for self-driving cars on the roads [ 18 ]. According to research leader Sebastian Thrun, Google hopes that the development will ultimately contribute to better traffic flow and a reduction in the number of accidents. He estimates that the 1.2 million annual road deaths worldwide could be halved by the use of the autonomous car [ 93 ].

Since 2011, an autonomous car has also been driven in Berlin; it is called Made in Germany and is the successor to the Spirit of Berlin, which participated in the DARPA Urban Challenge in 2007. The car, a modified Volkswagen Passat, is the result of the AutoNOMOS project, subsidised by the German government and implemented by the Artificial Intelligence Group of the Free University of Berlin. Footnote 7 The developers have been awarded a licence to carry out car tests on the roads in the states of Berlin and Brandenburg. Their next goal is to drive the car across Europe. A notable feature of this project is that the car can be ordered with a smartphone. The developers have a clear vision of the future: cars should vanish from the road when they are not driving. In their view, future cars should remain in central car parks until an order call is made. As soon as the call is received, the car, a driverless taxi, sets off for the customer's location, picks up the customer, and takes them to the destination specified via smartphone. During the ride, the car's system can decide whether to pick up other customers it encounters whose destination matches the planned route. According to the researchers, in a city like Berlin, given the tie-in with existing public transport, private car use could still be efficient with a mere 10 % of the number of cars that now run daily in the city. Hence, the researchers see this development as a trend towards 'greener' cars.

4.2 Ethical and Regulatory Issues

4.2.1 Acceptance

In several European research projects, research is being carried out into the acceptance of robotic control of the car—in particular the acceptance of driver support and cooperative systems—such as in the projects European Field Operational Test on Active Safety Systems (EuroFOT) Footnote 8 and Adaptive Integrated Driver-vehicle InterfacE (AIDE). Footnote 9 This research focuses on two questions: (1) how do motorists feel about technology taking over the driving task, and (2) will motorists accept interference from these systems? In principle, drivers are hesitant about systems taking over driving tasks, because they often sense initial discomfort in a machine-dominated environment. However, according to a recent Cisco report on the consumer experience within the automotive industry, 57 % of global consumers trust autonomous cars. Footnote 10 Moreover, acceptance grows as motorists drive with the systems and come to trust them [ 104 ]. The RESPONSE project showed that for a successful market introduction of driver assistance systems, the focus should be on convincing the public that the systems are effective and safe [ 26 ]. In addition, drivers want the ability to intervene personally and to turn off the system.

4.2.2 Skilling Versus de-Skilling

The fact that many drivers come to rely on driver assistance systems makes them less alert. In addition, these systems can lead to de-skilling, so that driving ability may deteriorate. This can lead to dangerous situations at times when the (semi-autonomous) car does not respond autonomously and control should be taken over by a driver who has become less road savvy [ 27 ]. Consequently, driver assistance systems require new driver skills. It is important that attention is paid to this, and a possible solution could be that driving with driver assistance systems becomes a mandatory part of the driving licence.

4.2.3 The Better Driver

Autonomous cars by Google and the Free University of Berlin make the driver redundant. Many researchers see the autonomous car as a method of preventing traffic accidents, for conscious or unconscious human error is involved in almost all traffic accidents. Several studies show that more than 90 % of all accidents occur due to human error and that only 5–10 % are the result of deficiencies in the vehicle or the driving environment (see, for example, [ 14 , 25 ]). Autonomous vehicles maintain continuous and complete attention, keep within the speed limit, do not get drunk, abstain from aggressive behaviour, and so on. In addition, humans are no match for the technology when it comes to reaction time and alertness, both in routine and in critical situations [ 101 ]. But before the human factor can be taken out of traffic, the autonomous vehicle must be thoroughly tested in actual dynamic traffic to ensure it can function safely on the road. This could take many years; predictions range from five to thirty years. In the development process, a good step forward would be fitting autonomous cars with V2V and V2I systems, allowing multiple systems to monitor traffic situations at the same time.

4.2.4 Safety

The greatest benefit of these systems, sought by the European Commission [ 33 ] in particular, is in traffic safety. The Commission aims to halve the total number of road deaths in the European Union by 2020 as compared to 2010. This is a very ambitious goal, which in our opinion can only be achieved by rigorous measures such as mandating a number of driver assistance systems. The European Commission [ 34 ] has already made anti-lock braking systems, electronic stability control, and eCall (a warning system that automatically alerts the emergency services in case of an accident) compulsory. The next systems that could qualify for such an obligation are adaptive cruise control (ACC) and forward collision warning. Cars equipped with both systems could potentially affect up to 5.7 % of the injury accidents on motorways. The European Commission also seeks to achieve the strategic goal of halving the number of road casualties through stronger enforcement of traffic regulations. Speed is a basic risk factor in traffic and has a major impact on the number of traffic accidents [ 4 ]. This could easily be addressed by using a far-reaching variant of the intelligent speed assistance (ISA) system, so that drivers cannot violate the speed limit. This variant is not expected to be implemented easily, because public acceptance of such an intrusive system is quite low [ 60 ]. The car is considered by many an 'icon of freedom', and an intrusive system restricts the freedom of the motorist. The question is whether a moral duty exists to curtail this freedom in the interest of road safety. The European Commission could play an important role here by making this variant of the ISA system compulsory. Although this would meet a lot of resistance from both drivers and political parties, it would be an effective measure.
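In control terms, the 'far-reaching' intervening variant of ISA is simply a governor that clamps the driver's speed request to the posted limit. The function name, units, and tolerance parameter below are illustrative assumptions, not drawn from any particular ISA specification:

```python
def isa_limited_speed(requested_speed: float, speed_limit: float,
                      tolerance: float = 0.0) -> float:
    """Intervening ISA sketch: the vehicle never exceeds the posted limit.

    `requested_speed` is what the driver asks for (km/h); `speed_limit`
    would come from a digital map or roadside beacon. A permissive
    variant might allow a small `tolerance`; the intrusive variant
    discussed in the text sets it to zero.
    """
    return min(requested_speed, speed_limit + tolerance)
```

The advisory and warning ISA variants differ only in what happens with the excess: they inform the driver rather than overriding the request.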

4.2.5 Security

Cooperative systems have to deal with the security of the information and communication network. Cooperative driving, for example, requires both communication hardware and a link to the engine management system so that the vehicle can control its own speed. A disadvantage is that the system is fragile and the car could become the victim of hacking attempts. American researchers at the Center for Automotive Embedded Systems Security (CAESS) have shown that it is possible to hijack a car and take over full control of it [ 17 ]. In theory, malicious people could take over a motorway junction, completely disrupting traffic or causing injury. The European research project Preserve (Preparing Secure Vehicle-to-X Communication Systems), started in 2011, deals with the development and testing of a security system. Footnote 11 Securing the data is complicated because cryptography increases the amount of information that must be transmitted, while the available bandwidth is limited.
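The bandwidth tension noted above is easy to see: authenticating each broadcast adds a fixed per-message overhead. The sketch below uses a symmetric HMAC purely as an illustration, with an invented demo key; real V2X security designs use certificate-based digital signatures, whose overhead is larger still.

```python
import hashlib
import hmac
import json

KEY = b"demo-key"  # illustrative only; real systems use certificates, not shared keys


def sign_message(payload: dict) -> bytes:
    """Append a 32-byte authentication tag so receivers can detect tampering."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(KEY, body, hashlib.sha256).digest()
    return body + tag


def verify_message(frame: bytes) -> bool:
    """Recompute the tag over the body and compare in constant time."""
    body, tag = frame[:-32], frame[-32:]
    return hmac.compare_digest(tag, hmac.new(KEY, body, hashlib.sha256).digest())


frame = sign_message({"speed": 25.0, "position": 1042.7})
# The trailing 32 bytes are pure security overhead on every broadcast,
# which is why cryptography eats into the limited V2V bandwidth.
```

Per message the payload here grows by 32 bytes regardless of its size, so frequent short safety beacons pay proportionally the highest price.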

4.2.6 Privacy

The most pressing problems will arise in the short term in the field of privacy. Enforcement of road traffic rules can easily be increased via V2I systems that monitor a driver's behaviour, allowing the owner to be fined automatically for violations of traffic rules. In addition, insurance firms may introduce lower premiums for drivers who drive safely and consent to monitoring. This is still considered an invasion of privacy, but the question is whether, in the long term, traffic safety will beat privacy. The remaining question is therefore whether politicians can keep their promise that the eCall system will remain 'dormant' and will not be used as an electronic data recorder for tracking criminals or for fining drivers who violate traffic regulations [ 30 ]. This danger is real, as shown by CCTV cameras in Amsterdam that were installed solely to keep an eye on polluting trucks but were subsequently also used for other purposes, such as checking whether the owners of number plates had unpaid fines [ 55 ].

4.2.7 Autonomous Car

Fully autonomous driving will not be a realistic prospect before 2020, even though this is predicted by both General Motors and Google. However, given developments in the field of car robotics, it seems inevitable that the autonomous car will become commonplace; a more likely estimate is that these systems will function by around 2030. The launch will probably take place via taxi systems, as outlined by the German researchers in the AutoNOMOS project: it will be possible to call an autonomous car by mobile phone, and such cars will be waiting for passengers at stations, theatres, and the like (see also [ 14 ]).

The social impact of the introduction of the autonomous car could be very significant. Visions of the future of the autonomous car now lead to different scenarios, sometimes even diametrically opposed ones (see, for example, [ 7 ]). Therefore, policymakers and politicians must anticipate these possible scenarios. What exactly will the implications be for public transport, car ownership, road safety, and road use, et cetera? This should be investigated for each scenario, so that policymakers and politicians can design the road of the future and discourage undesirable developments at an early stage.

Yet it is high time that the various parties (governments, industry, research institutes, and interest groups) considered the technical and legal aspects of cooperative and autonomous driving. The autonomous car will force regulators to rethink the relationship between car and driver, and will possibly place more emphasis on the regulation of the car than of the driver. For instance, instead of certifying that a driver can pass a road test, the state might certify that a car can, upending the traditional driver licensing system. Questions also arise about liability for accidents when the technology that makes a car autonomous is an after-market system: if the car hits something, does the fault lie with the manufacturer or with the technology company that modified the car? These aspects take time to resolve and, if no moves are made now, will needlessly delay the introduction of car robotisation by several years.

5 Police Robot

5.1 Expectations

Within the global expansion of robotics, the police domain is an important application area, largely fuelled by developments in the field of military robots. In the field of police robots, the USA and Japan are making clear headway compared to Europe. We may conclude that the application of robotics within the police domain is still in an experimental, exploratory phase. Two applications are central: carrying out surveillance and disarming explosives. In most countries, the police have a number of ground and airborne robots outfitted with smart cameras. Over the past decade, a large increase in smart cameras has been observed in public areas, reinforced, especially since 2001, by the higher priority given to investigation and law enforcement by the police. The ground and airborne police robots are mobile unmanned systems with limited autonomy that can be deployed for a specific task, and they are tele-operated [ 61 ]. Robots can be particularly useful when the police use their authority to gain access to a location. A robot can, for example, be used to bring objects to, or pick up objects from, a so-called hot area that the police cannot enter, or to observe situations that police officers cannot see. In this way, robots strengthen the core missions of enforcement and investigation. They also increase the safety of police officers, who can thus avoid dangerous places. In the US, for example, remotely controlled so-called V-A1 robots are already deployed, especially in the state of Virginia [ 39 ]. They are equipped with cameras, chemical detection equipment, and a mechanical arm to grab objects. These robots enable their operators to assess dangerous situations from a distance without running risks themselves.

With regard to another police function, providing assistance to citizens, the use of robots lies further in the future. We may call this a prelude to the social robot, because for such robots to succeed on the street, the quality of their direct interaction with citizens will be crucial. In the long term, robots could come into service in police work, for example in the form of humanoids, operating visibly and having contact with the public. One can imagine a robot traffic cop, a robot as part of riot control, or a robot police officer on the street, patrolling, providing a service, and keeping its eyes open in the street or in a shopping mall. For the moment, these types of applications still face major challenges, given the complexity of the social and physical space in the public domain [ 10 ]. In addition, it is of great importance that police robots are accepted and obeyed by people who are panicky or violent. According to Salvini et al. [ 71 ], violence against police robots may constitute an obstacle to their deployment, because the robots somehow encourage young people to act aggressively towards them.

In Japan, we see experiments with robots in the street; they are called 'urban surveillance robots' or 'urban robots', and they can take over some of the responsibilities of a police officer, such as identifying criminal or suspicious behaviour and providing a service to the public. The Reborg-Q was tested in shopping malls, airports, and hotels. It is a behemoth of 100 kg, stands 1.50 m tall, and is equipped with cameras, a touchscreen, and an artificial voice. It is a tele-operated robot that can also patrol pre-programmed routes independently. The Reborg-Q can identify the faces of unwanted visitors while on patrol and respond by alerting human guards. Footnote 12

5.2 Ethical and Regulatory Issues

5.2.1 Privacy Versus Safety

A tricky issue with robots and intelligent cameras is the violation of privacy [ 75 ]. It is possible that in the short term the government will monitor our daily activities 24 h a day for the sake of safety. This creates tension between ensuring privacy and ensuring security. The very essence of the rule of law is that there should be a balance between protecting the public by the government and protecting it from the government. Without privacy protection, the government is a potential threat to the rights of its own citizens. At the same time, the government loses its legitimacy when it cannot guarantee safety for its citizens from outside threats from other parties. Therefore, it is important that it is clear when data may be collected, what data may be collected and stored, and for what purpose these robots and intelligent cameras may gather data, and that a clear distinction remains between the monitored public space and private life [ 72 ]. Moreover, the risk of the manipulation of sound and image recordings and the risk that data might end up in the hands of the wrong people should also be factored in.

5.2.2 Skilling Versus de-Skilling

The increasing deployment of police robots means that police officers must acquire new skills. The operation of tele-operated police robots and the performance of police actions using robotic technology place new operational and strategic demands on police personnel. The downside is that essential police skills—skills acquired through extensive training and experience—may be lost as officers get used to the deployment of police robots, leaving them less able to intervene in serious problems that cannot be solved using robots [ 44 ].

5.2.3 Deployment

A legal complication regarding the deployment of airborne robots for police purposes is that it is not yet clear how these robots can be deployed in accordance with existing laws and regulations [ 111 ]. From the perspective of the British police, the national Aviation Acts and regulations can be seen as obstacles in relation to the use of air robots for specific purposes [ 54 ]. For any type of aircraft there are laws and regulations to ensure safe traffic in the airspace. In this respect, one may question whether airborne robots seamlessly fit into existing laws and regulations. Does this technological innovation require a new, not yet existing category within the Aviation Acts and regulations? In other words, which restrictions of a national legal framework apply regarding, for example, the deployment of UAVs over a festival or a fire in some dunes?

Moreover, deployment reliability is an issue. This means that the robots must not pose any danger to civilians and that certain safety rules must be met. A failure of an airborne robot that hovers above a festival crowd could result in disastrous consequences. Extensive security measures must therefore be taken before making the decision to deploy these police robots.

In the short term, research will be needed on the laws and regulations relating to the deployment of police robots, especially the airborne robots, so that the applications of robots for police purposes can be adapted. It is important that safety in the air and on the ground is not compromised. To this end, governments could make a contribution by establishing safety standards for police robots.

5.2.4 Armed Police Robots

Armed police robots raise important ethical questions. In the short and medium term, armed autonomous police robots are not expected, but there are ongoing experiments with armed tele-operated police robots. In the USA, for example, there are concrete plans for tele-operated police robots equipped with a Taser. They can, among other tasks, be used for crowd control. When such a robot attacks a 'suspect' with its Taser, however, it creates a new, dangerous situation, different from that in which an officer arrests a suspect using a Taser. Neil Davison, an expert in the field of non-lethal weapons at the University of Bradford, says: "The victim would have to receive shocks for longer, or repeatedly, to give police time to reach the scene and restrain them, which carries greater risk to their health" [ 64 ].

The emergence of armed police robots requires a political stance. This position should address the desirability and legality of deploying robots for police tasks in the interest of public safety, in comparison with non-robotic alternatives, and the implications of these robots for the exercise of police force.

6 Military Robot

6.1 Expectations

During the invasion of Iraq in 2003, no use was made of robots, as conventional weapons were thought to yield enough ‘shock and awe’. However, the thousands of American soldiers and Iraqi civilians killed reduced popular support for the invasion and made the deployment of military robots desirable. By the end of 2008, there were 12,000 ground robots operating in Iraq, mostly being used to defuse roadside bombs, and 7,000 reconnaissance planes or drones were deployed [ 82 ]. The robot is therefore a technological development that has a great influence on contemporary military operations, and this is seen as a new military revolution. New robotics applications are constantly sought these days and are developed in order to perform dull, dangerous, and dirty jobs and to improve situational awareness, but also in order to kill targets.

During the last decade, advances have been made in the development of the armed military robot. Since 2009, more ground-based pilots—or cubicle warriors—have been trained to operate armed unmanned aircraft than have been trained as fighter pilots [ 102 ]. The expectation is that unmanned aircraft will increasingly replace manned aircraft, and in the medium term will even make manned aircraft obsolete. To this end, further technological developments are required, such as self-protection systems for unmanned systems, so that they become less vulnerable, and sense-and-avoid systems, so that they can be safely controlled in civilian airspace. In the short term, we do not expect the introduction of armed ground robots on the battlefield. These have already been developed, but have been deployed with little success in conflict zones.

A trend we are observing in military robotics development is a shift 'from in-the-loop to on-the-loop to out-of-the-loop' [ 78 ]. We have seen that cubicle warriors are increasingly being assigned monitoring tasks rather than a supervisory role. The next step would be for the cubicle warrior to become unnecessary and for the military robot to function autonomously. The autonomous robot is high on the US military agenda, and the US Air Force [ 98 ] assumes that by around 2050 it will be possible to deploy fully autonomous unmanned combat aerial vehicles (UCAVs). Given current developments and investment in military robotics technology, this US Air Force prediction seems not to be utopian but a real image of the future. The push for autonomous robots is mainly driven by the fact that tele-guided robots are more expensive: firstly, the production costs of tele-guided robots are higher, and secondly, these robots incur personnel costs because they need human flight support. One of the main goals of the Future Combat Systems programme, therefore, is to deploy military robots as 'force multipliers', so that one military operator can conduct a large-scale attack with multiple robots [ 76 ]. To this end, robots are programmed to cooperate in swarms so that they can fly coordinated missions. In 2003, the Americans conducted the first test with 120 small reconnaissance planes flying in mutual coordination [ 50 ]. This swarm technology is developing rapidly and will probably become military practice in a few years' time. This is a future in which the automation of death will become reality.
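Coordinated swarm flight of the kind described above is typically built from very simple local rules rather than central control. The toy update rule below moves each agent toward the swarm's centroid while pushing apart any pair that gets too close; the rule and all parameter values are illustrative assumptions for exposition, not any fielded military algorithm.

```python
import math


def step_swarm(positions, cohesion=0.1, min_sep=1.0, repel=0.5):
    """One update of a minimal flocking rule on 2-D positions.

    Each agent drifts toward the group centroid (cohesion) and is pushed
    away from any neighbour closer than `min_sep` (separation). Repeating
    the step yields loosely coordinated group motion with no leader.
    """
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    new = []
    for i, (x, y) in enumerate(positions):
        dx, dy = cohesion * (cx - x), cohesion * (cy - y)
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            dist = math.hypot(x - ox, y - oy)
            if 0 < dist < min_sep:
                # Too close: add a repulsive push along the separating direction.
                dx += repel * (x - ox) / dist
                dy += repel * (y - oy) / dist
        new.append((x + dx, y + dy))
    return new
```

The appeal for 'force multiplier' scenarios is that each agent needs only its neighbours' broadcast positions, so the rule scales to many robots without a dedicated operator per vehicle.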

6.2 Ethical and Regulatory Issues

6.2.1 The Better Soldier

Using armed military robots can also allow greater risks to be taken than with manned systems. During the Kosovo war, for the sake of the pilots' safety, NATO aircraft flew no lower than 15,000 feet so that hostile fire could not touch them. During one air raid from this altitude, NATO aircraft bombed a convoy of buses filled with refugees—believing they were hitting Serbian tanks [ 9 ]. Such tragic mistakes could be prevented by deploying unmanned aircraft that are able to fly lower because they are equipped with advanced sensors and cameras, allowing the operator to identify the target better. Furthermore, by using military robots, operators are in a better position to consider their decisions. These robots could be used in very dangerous military operations, including house-to-house searches in cities where the situation is unclear. In such a situation, a soldier has to determine within a fraction of a second who is a combatant and who is a civilian, and neutralise those persons who pose an immediate threat—before they open fire themselves. A military robot is able to enter a building without endangering soldiers or civilians, and an operator will open fire on someone only when that person has shot at the robot first. According to Strawser [ 86 ], it is morally reprehensible to command a soldier to run the risk of fatal injury if the task could be carried out by a military robot. In circumstances like this, Strawser holds that the use of armed military robots is ethically mandatory because of the 'principle of unnecessary risk'.

Arkin [ 6 ] suggests that in the future autonomous armed military robots will be able to make more rational and ethical decisions than regular soldiers, and will thus prevent acts of revenge and torture (see also [ 87 ]). In war, soldiers are exposed to enormous stress with all its consequences. This is evident from a report containing staggering figures on the morale of soldiers during the military operation Iraqi Freedom [ 65 ]. For example, the report found that less than half of the soldiers believed that citizens should be treated with respect, that a third felt that the torture of civilians should be allowed to save a colleague, and that 10 % indicated that they had abused Iraqi civilians. Furthermore, less than half said they would report a colleague who had behaved unethically, 12 % thought that Iraqi citizens could be regarded as rebels, and less than 10 % said that they would report an incident in which their unit had failed to adhere to fighting orders. The report also shows that after the loss of a fellow soldier, feelings of anger and revenge double the number of abuses of civilians, and that emotions can cloud soldiers' judgement during a war.

6.2.2 Abuse and Proliferation

The first signs of an international arms race in military robotics technology are already visible. All over the world, significant amounts of money are being invested in the development of armed military robots. This is happening in countries such as the United States, Britain, Canada, China, South Korea, Russia, Israel, and Singapore. Proliferation to other countries, e.g. through the transfer of robotics technology, materials, and knowledge, is almost inevitable. This is because, unlike with other weapons systems, the research and development of armed military robots is fairly transparent and accessible. Furthermore, robotics technology is relatively easy to copy, and the necessary equipment for armed military robots can easily be bought and is not too expensive [ 82 ].

In addition, much of the robotics technology is in fact open source and is so-called dual-use technology: technology that will potentially be geared towards applications in both the military and the civilian market. One threat is that in the future certain commercial robotic devices, which can be bought on the open market, could be transformed relatively easily into robot weapons.

It is also likely that unstable countries and terrorist organisations will deploy armed military robots. Singer [ 81 ] fears that armed military robots will become the ultimate weapon for ethnic rebels, fundamentalists, and terrorists. Noel Sharkey [ 77 ] also predicts that a robot will soon replace the suicide bomber. International regulations on the use of armed military robots will not solve this problem, as terrorists and insurgents disregard international humanitarian law. From this perspective, the American researcher and former fighter pilot Mary Cummings [ 22 ] outlines a doomsday scenario in which terrorists fly a small, unmanned aircraft loaded with a biological weapon into a sports stadium.

An important tool to curb the proliferation of armed military robots is obviously to control the production of these robots through global arms control treaties. A major problem, however, is that major powers such as the US and China are not parties to these treaties. In addition, legislation within the UN framework is needed on the export of armed military robots, to combat illicit trafficking in armed military robots and to establish licences for traders in armed military robotics technology.

6.2.3 Hacking

Another danger is that military robots may be hacked or become infected with a computer virus. In October 2011, US Predators and Reapers were infected by a computer virus [ 74 ]. The virus did nothing but keylogging: it recorded all the commands to and from these drones and forwarded them elsewhere. This particular incident was not serious, but the danger of such viruses could become immense. Through hacking, others could take over unmanned combat aircraft, and viruses could disrupt the robots in such a way that they become uncontrollable and could then be hijacked by people acting illegally. This is a real and present danger: in 2009, Iraqi insurgents intercepted drone video feeds using US$26 off-the-shelf software [ 42 ].

6.2.4 Targeted Killing

On 30 September 2011, in a northern province of Yemen, Anwar al-Awlaki, an American citizen and a senior figure in Al Qaeda in the Arabian Peninsula, was killed by an American drone, as was a second American citizen, of Pakistani origin, whom the drone operators did not realise was present [ 20 ]. Anwar al-Awlaki was thus killed with the President's intent, but without having been formally charged with a crime or convicted at trial. Khan [ 48 ] argues that the indiscriminate killing of suspected terrorists by drone attacks in Pakistan cannot be justified on moral grounds, because these attacks do not discriminate between terrorists and innocent women, children, and the elderly. This tactical use of drones is counterproductive and is “unwittingly helping terrorists”. He states that the US's international counterterrorism efforts can only be successful by devising a clear strategy: adopting transparent, legitimate procedures, with the help of Pakistan, to bring the culprits to book and to achieve long-term results. Other scholars [ 2 , 16 , 110 ] hold the opinion that the United States probably carries out illegal targeted killings in Pakistan, Yemen, Somalia, and elsewhere. The government must be held to account when it carries out such killings in violation of the Constitution and international law.

6.2.5 Loss of Public Support

Although armed military robots, due to their precision robotics, are much more accurate in hitting their targets, their use may eventually lead to more victims, because they will be deployed much faster and more frequently, even in civilian areas. The sharp increase in air strikes by unmanned aircraft could thus ultimately lead to more casualties than before. According to estimates made by the New America Foundation [ 63 ], US air strikes in Pakistan using UAVs increased from nine air attacks between 2004 and 2007, resulting in 100 victims, including nine civilians, to 118 air raids in 2010, with 800 casualties, of whom 46 were civilians. The Bureau of Investigative Journalism (TBIJ) estimates that in Pakistan more than 750 civilians were killed by drones between 2004 and 2013. Footnote 13 In addition, the use of armed unmanned systems is often considered a cowardly act by locals, and every civilian victim of such an automated device will be used by insurgents for propaganda. All this leads to a loss of psychological support among the local population, even though such support is essential for making a positive contribution, for example in stabilising a conflict (see, for example, [ 48 ]).

According to a recent report of Human Rights Watch on civilian casualties in Afghanistan, taking “tactical measures to reduce civilian deaths may at times put combatants at greater risk”, yet such measures are a prerequisite for maintaining the support of the local population [ 46 ] (p. 5), on which the success of the mission in Afghanistan in turn depends. Clearly, a mounting civilian death toll might very well strengthen resentment against the West and make recruitment easier for both the insurgency and the terrorist groups that the coalition troops are trying to fight. For example, Ghosh and Thompson [ 41 ] describe how, in Waziristan, the region of Pakistan afflicted by many drone attacks, the use of unmanned aircraft is seen as dishonourable and cowardly, which does not contribute to ‘winning the hearts and minds’. Just before he was killed by a UCAV, Baitullah Mehsud, the Pashtun commander of the Pakistani Taliban, even claimed that each drone attack “brings him three or four suicide bombers” [ 41 ], mainly recruited from the families of the drones' victims.

6.2.6 Responsibility and Autonomy

Military robots may in the future have ethical constraints built into their design (a so-called ethical governor) which suppress unethical lethal behaviour. Sponsored by the US Army, Arkin [ 5 ] has carried out research to create a mathematical decision mechanism consisting of prohibitions and obligations derived directly from the laws of war. The idea is that future military robots might give a warning if orders are, according to their ethical governor, illegal or unethical. For example, a military robot might advise a human operator not to fire because its analysis of the camera images indicates that the operator is about to attack non-combatants. An argument against this approach to ethical design is that ethical governors may form a ‘moral buffer’ between human operators and their actions, allowing operators to tell themselves that the military robot took the decision. According to Cummings [ 21 ] (p. 30), “[t]hese moral buffers, in effect, allow people to ethically distance themselves from their actions and diminish a sense of accountability and responsibility.” A consequence is that humans might simply exhibit the behaviour desired by the designers of the technology instead of explicitly choosing to act this way, and might thus over-rely on military robots (the “automation bias”). This can lead to a dangerous situation: because the technology is “imperfectly reliable”, the human operator must catch the instances in which some aspect of the technology fails.

According to several authors (for example, [ 8 , 37 , 78 , 84 ]), the assumption and allocation of responsibility is a fundamental condition of fighting a just war. Ethical governors might blur the line between non-autonomous and autonomous systems, as the decision of a human operator is no longer the result of deliberation but is mainly determined or even enforced by a military robot. In other words, human operators do not have sufficient freedom to make independent decisions, which makes the attribution of responsibility difficult. The moralising of the military robot can deprive the human operator of control of the situation; the operator's future role will be restricted to monitoring. The value of ‘keeping the man in-the-loop’ will then be eroded and replaced by ‘keeping the man on-the-loop’. This has consequences for the question of responsibility. Detert et al. [ 24 ] have argued that people who believe that they have little personal control in certain situations, such as those who merely monitor (i.e. who are on-the-loop), are more likely to go along with rules, decisions, and situations even if they are unethical or have harmful effects. This implies that it would be more difficult to hold a human operator reasonably responsible for his decisions, since it is not really the operator who takes the decisions, but a military robot system [ 69 ].

That this might become real follows from a report in The Intercept by Glenn Greenwald, which states that the US military and the CIA often rely on data from the National Security Agency's electronic surveillance programmes for targeted drone strikes and killings. According to a former drone operator, the NSA often identifies targets based on controversial metadata analysis and mobile phone-tracking technologies: “Rather than confirming a target's identity with operatives or information on the ground, the CIA or the US military then orders a strike based on the activity and location of the mobile phone a person is believed to be using.” In fact, no human intelligence is used to identify the target; the technology “geolocates” the SIM card or handset of a suspected terrorist's mobile phone, allowing the operator to push the button to kill the person using the device [ 73 ]. Furthermore, the drone operator said that while the technology has led to the deaths of terrorists and others involved in attacks against US forces, innocent people have “absolutely” been killed because of the NSA's reliance on these surveillance methods.

An important question regarding autonomous armed robots is whether they are able to meet the principles of proportionality and discrimination. Compliance with these principles often requires empathy and situational awareness. A tele-operated military robot can be helpful for an operator because of its highly sophisticated sensors, but it does not seem feasible that in the next decade military robots will possess the ability to empathise and exercise common sense. Some scientists wonder if this will be possible at all because of the dynamic and complex battle environments in which these autonomous robots will have to operate (see, for example, [ 76 ]).

For the development of military robotics technology, a broad international debate is required about the responsibilities of governments, industry, the scientific community, lawyers, non-governmental organisations, and other stakeholders. So far, the rapid development of military robotics has outpaced such a debate. A start was made during an informal meeting of experts on lethal autonomous weapons systems at the United Nations in Geneva in May 2014 [ 99 ]. The necessity of a broad international debate is underlined by contemporary technological developments in military robotics, which cannot always be qualified as ethical. The deployment of armed military robots affects the entire world, and it is therefore important that all stakeholders, with their variety of interests and views, enter into a mutual debate (see also [ 57 ]). The starting point for this debate must be the development of common legal and ethical principles for the responsible deployment of armed military robots.

7 Conclusion

In this section we summarise our findings (see Table  1 ). Our summary is based on some key characteristics of new robotics that evoke various social and ethical issues: (1) short-, medium- and long-term trends in the field of robot technology, (2) social gains of robotisation, (3) robots as information technology, (4) the lifelike appearance of robots, (5) the degree of autonomy of robots, (6) robotic systems as dehumanising systems, and (7) governance issues relating to new robotics.

7.1 Technology Trends

Both in Europe and the United States, the goal of developing robotics for the domestic environment, care, traffic, the police, and the army is embraced by policymakers and industry as a new research and societal goal. The aim is for technology to enable an increasing range of autonomous moral and social actions. Thus, a radical development path unfolds, namely the modelling, digitisation, and automation of human behaviour, decisions, and actions. This development is at least partially legitimated by speculative socio-technical imaginaries, such as multifunctional, autonomous, and socially and morally intelligent robots.

In the short and medium term, developments in the field of new robotics are mainly characterised by terms such as ‘man in-the-loop’ and ‘man on-the-loop’, which indicate that robotic systems are increasingly advising human operators on which action must be taken. Firstly, this signifies the digitisation of various previously low-technology fields, such as the sex industry and elderly care. For example, in the coming decade a combined breakthrough of home automation and tele-care is expected, which will place the caretaker and patient in a technological loop. The experimentation with care robots must be seen from this perspective. Secondly, in high-technology practices, such as the automotive industry and the military, we see a shift from ‘man in-the-loop’ to ‘man on-the-loop’ and even ‘man out-of-the loop’. It is not unlikely, for example, that autonomous cars will gradually become common by around 2030.

7.2 Expected Social Gains

Robotisation presents a way of rationalising social practices and reducing their dependence on people (cf. [ 68 ]). Rationalisation can have many benefits: higher efficiency, fewer mistakes, cheaper products, a higher quality of services, and so on. Rational systems aim for greater control over the uncertainties of life, in particular over people, who constitute a major source of uncertainty in social life. One way of limiting the dependence on people is to use machines to help them or even to replace them with machines. As Ritzer [ 68 ] (p. 105) argues: “With the coming of robots we have reached the ultimate stage in the replacement of humans with nonhuman technology.”

The development and use of robotic systems is often legitimated by the fact that they will take over “dirty, dull, and dangerous” work. Some claim that the ‘principle of unnecessary risk’ leads to an ethical obligation to apply robotics in certain situations. Strawser [ 86 ] believes it is morally unacceptable to give a soldier an order that may lead him to suffer lethal injuries if a military robot could perform the same task. This principle of unnecessary risk is also applicable to driving cars and to sex robots. Given the many degrading circumstances in prostitution, should the presence of a reasonable technological alternative not lead to an ethical obligation to replace human prostitutes with sex robots? And if robots can be better drivers that cause far fewer and less severe traffic accidents, are we not obliged to gradually replace the human driver with technology?

7.3 Robots as Information Technology

New robotics also brings up many ethical, legal, and social issues. Some of these issues are related to the fact that robotic systems are information systems. This means that social issues such as privacy, informed consent, cyber security, hacking, and data ownership also play a role in robotics. Because much of the robotics technology is open source, it is easier for the technology to proliferate and for terrorist organisations to abuse it. The great attention that new robotics pays to improving the interface between machines and humans raises new questions, especially in the area of privacy. The vision of affective computing, for example, can only be realised if the robot is allowed to measure and store data about our facial expressions.

7.4 Lifelike Appearance of Robots

The lifelike appearance of robots may raise various issues. To improve the interaction between humans and robots, robotics explicitly exploits the human tendency to anthropomorphise technology. This raises questions about the limits within which this psychological phenomenon may be used. Some fear that a widespread future use of socially intelligent nanny robots may negatively influence the emotional and social development of children. Others warn of the possibility that persuasive social robots may manipulate or mislead people, and may even try to deceive them. The possibility of building child-like robots raises the question of whether child–robot sex should be punishable.

7.5 Degree of Autonomy of Robots

Another characteristic of robotics that raises many issues concerns the degree of autonomy of the robot or, more precisely, the degree of control that is delegated by the user to the machine. The safety of tele-operated or autonomous robotic systems is an important topic. Tele-operation and semi-automation of tasks, whether in the field of care, driving, the police, or the army, require new skills of caregivers, drivers, police officers, and soldiers. When a controller delegates various tasks to the robot, this immediately raises questions in the field of responsibility and liability. The shift towards ‘man on-the-loop’ raises the question of to what extent the user still receives enough information to make well-informed decisions. The future option of taking people completely out of the loop raises the question of which decisions and actions we want to leave to a robotic machine. Do we want machines to autonomously make decisions about killing people or raising children?

7.6 Robot Systems as Dehumanising Systems

A central, almost existential, worry linked to the development of new robotics relates to the notion of dehumanisation. This occurs when robotisation as rationalisation overshoots its mark and leads to socio-technical systems that may become anti-human or even destructive of human beings. With regard to social robotics, there is a fear that people will gradually grow accustomed to pleasant, customised relationships with artificial companions and will lose interest in dealing with complex, real people. With respect to care robots, some fear that their use will ultimately lead to a reduction of human contact between the patient and the caregiver and to an objectification of the patient. The mechanisation of human encounters is at the core of this debate, regardless of whether it relates to sex or the act of killing.

7.7 Governance of New Robotics

In this article several large-scale socio-technical transitions were described, in particular the shift towards domotics and tele-care, smart mobility, and robotic warfare. These important transitions need to be guided by widespread public and political debate, and efforts should be made to regulate the many social and ethical issues identified in this article. In particular, a broad international debate is needed with regard to the use and proliferation of armed military robots. We also identified some more specific issues that need attention: how can society deal with armed police robots, how can one deal with the issue of sex with child–robots, can sex robots be an alternative to human prostitutes, and what is the influence of social interaction technology on our social capital?

The introduction of new robotics is paired with an enormous human challenge. Making use of opportunities and dealing with their social, legal, and ethical aspects calls for human wisdom. Trust in our technological capabilities is an important part of this. But let us hope that trust in technology will not get the upper hand. Trust in humans and the acceptance of human abilities, but also human failures, ought to be the driving force behind our actions. Like no other technology, new robotics is inspired by the physical and cognitive abilities of humans. This technology aims to copy and subsequently improve human characteristics and capacities. It appears to be a race between machines and us. Bill Joy [ 47 ] is afraid of this race, because he fears that humans will ultimately be worse off. His miserable worst-case scenario describes “a future that does not need us”. The ultimate goal of new robotics should not be to create an autonomous and socially and morally capable machine. This is an engineer’s dream. Such a vision should not be the leading principle. Robotics is not about building the perfect machine, but about supporting the well-being of humans.

Our exploratory study shows that social practices often possess a balance between ‘hard’ and ‘soft’ tasks. The police take care of law enforcement and criminal investigation, but also offer support to citizens. The war must be won, but so must the ‘hearts and minds’ of people. Care is about ‘taking care’ (washing and feeding) and ‘caring for’ through a kind word or a good conversation. We enjoy making love, but we especially want to give and receive love. Robotics can play a role in the functional side of social practices. In doing so, we must watch out that the focus on technology does not erase the ‘soft’ human side of the practice in question. Such a trap could easily lead to appalling practices: inhumane care, a repressive police force, a hardening of our sex culture, and cruel war crimes.

A central question is what decisive position people should take in the control hierarchy. The European Robotics Technology Platform looks to robots to play mainly a supporting role: “Robots should support, but not replace, human caretakers or teachers and should not imitate human form or behaviour” [ 31 ] (p. 9). Robotics does not exist for itself, but for society. Robotics ought to support humankind, not overshadow it. This begins with the realisation that new robotics offers numerous opportunities for improving the lives of people, but also that there is sometimes no space for robots. A robot exists that can play the trumpet very well. And yet it would be disgraceful if, for example, the daily performance of the Last Post in Ieper (Belgium), in memory of victims of the First World War, were to be outsourced to a robot. Furthermore, we must watch out that trust in technology does not lead to technological paternalism. Even if, in the very distant future, there are robots that are better at raising our children than we are, we must still do it ourselves. An important aspect of this is the notion of personal responsibility and the human right to make autonomous decisions and mistakes. Even if a robot can do something better than a human can, it could still be better that the human continues to do it less well.

www.robosoft.com/img/data/2010-03-Kompai-Robosoft_Press_Release_English_v2.pdf .

http://www.mobiserv.eu/index.php?option=com_content&view=article&id=46&Itemid=27&lang=en .

http://www.safespot-eu.org/ .

http://www.coopers-ip.eu/ .

http://www.cvisproject.org/ .

http://www.sartre-project.eu/ .

http://autonomos.inf.fu-berlin.de/ .

http://www.eurofot-ip.eu/ .

http://www.aide-eu.org/

http://newsroom.cisco.com/press-release-content?articleId=1184392&type=webcontent .

http://www.preserve-project.eu/ .

www.technovelgy.com/ct/Science-Fiction-News.asp?NewsNum=1330 .

www.thebureauinvestigates.com/ .

Aldrich FK (2003) Smarthomes: past, present and future. In: Harper R (ed) Inside the smart home. Springer, London, pp 17–39

Alley R (2013) The drone debate. Sudden bullet or slow boomerang (discussion paper nr. 14/13). Centre for Strategic Studies, Wellington

Akrich M (1992) The description of technical objects. In: Bijker W, Law J (eds) Shaping technology/building society: studies in sociotechnical change. MIT Press, Cambridge, pp 205–224

Archer J, Fotheringham N, Symmons M, Corben B (2008) The impact of lowered speed limits in urban and metropolitan areas (Report #276). Monash University Accident Research Centre ( www.monash.edu.au/miri/research/reports/muarc276.pdf )

Arkin RC (2009) Governing lethal behavior in autonomous robots. Taylor and Francis, Boca Raton

Arkin RC (2010) The case of ethical autonomy in unmanned systems. J Mil Ethics 9(4):332–341

Arth M (2010) Democracy and the common wealth: breaking the stranglehold of the special interests. Golden Apples Media, DeLand

Asaro PM (2008) How just could a robot war be? In: Briggle A, Waelbers K, Brey Ph (eds) Current issues in computing and philosophy. IOS Press, Amsterdam, pp 50–64

Bacevich AJ, Cohen EA (2001) War over Kosovo: politics and strategy in a global age. Columbia University Press, Columbia

Birk A, Kenn H (2002) RoboGuard, a teleoperated mobile security robot. Control Eng Pract 10(11):1259–1264

Borenstein J, Pearson Y (2010) Robot caregivers: harbingers of expanded freedom for all? Ethics Inf Technol 12(3):277–288

Breazeal C (2003) Toward sociable robots. Robot Auton Syst 42(3–4):167–175

Breazeal C, Takanski A, Kobayashi T (2008) Social robots that interact with people. In: Siciliano B, Khatib O (eds) Springer handbook of robotics. Springer, Berlin, pp 1349–1369

Broggi A, Zelinsky A, Parent M, Thorpe CE (2008) Intelligent vehicles. In: Siciliano B, Khatib O (eds) Springer handbook of robotics. Springer, Berlin, pp 1175–1198

Butter M, Rensma A, Van Boxsel J et al (2008) Robotics for healthcare (final report). DG Information Society, European Commission, Brussels

Camillero JA (2013) Drone warfare: defending the indefensible. e-International relations. ( http://www.e-ir.info/2013/07/20/drone-warfare-defending-the-indefensible/ ). Accessed 20 July 2013

Checkoway S, McCoy D, Kantor B et al. (2011) Comprehensive experimental analyses of automotive attack surfaces. In: Wagner D (ed) Proceedings of the 20th USENIX on security (SEC’11). USENIX Association, Berkeley. http://www.autosec.org/publications.html

Clark M (2013) States take the wheel on driverless cars. The Pew Charitable Trusts. ( http://www.pewtrusts.org/en/research-and-analysis/blogs/stateline/2013/07/29/states-take-the-wheel-on-driverless-cars ). Accessed 29 July 2013

Coeckelbergh M (2010) Health care, capabilities, and AI assistive technologies. Ethics Theory Moral Pract 13(2):181–190

Coll S (2012) Kill or capture. The New Yorker. ( http://www.newyorker.com/news/daily-comment/kill-or-capture ). Accessed 2 Aug 2012

Cummings ML (2006) Automation and accountability in decision support system interface design. J Technol Stud 32(1):23–31

Cummings ML (2010) Unmanned robotics and new warfare: a pilot/professor’s perspective. Harv Natl Secur J. http://harvardnsj.org/2010/03/unmanned-robotics-new-warfare-a-pilotprofessors-perspective/

Decker M (2008) Caregiving robots and ethical reflection: the perspective of interdisciplinary technology assessment. AI Soc 22(3):315–330

Detert J, Treviño L, Sweitzer V (2008) Moral disengagement in ethical decision making: a study of antecedents and outcomes. J Appl Psychol 93(2):374–391

Dewar RE, Olson PL (2007) Human factors in traffic safety, 2nd edn. Lawyers & Judges Publishing Company, Tucson

Donner E, Schollinski HL (2004) Deliverable D1, ADAS: market introduction scenarios and proper realisation. Response 2: advanced driver assistance systems: from introduction scenarios towards a code of practice for development and testing (Contract Number: ST 2001–37528). Köln

Dragutinovic N, Brookhuis KA, Hagenzieker MP, Marchau VAWJ (2005) Behavioural effects of advanced cruise control use. A meta-analytic approach. Eur J Transp Infrastruct Res 5(4):267–280

Duffy BR (2003) Anthropomorphism and the social robot. Robot Auton Syst 42(3–4):170–190

Duffy BR (2006) Fundamental issues in social robotics. Int Rev Inf Ethics 6:31–36

eCall Driving Group (2006) Recommendations of the DG eCall for the introduction of the pan-European eCall. eSafety Support, Brussels

EUROP (2009) Robotic visions to 2020 and beyond: the strategic research agenda for robotics in Europe, 07/2009. European Robotics Technology Platform (EUROP), Brussels

European Commission (2010) ICT Research: European Commission supports ’talking’ cars for safer and smarter mobility in Europe. European Commission, Brussels

European Commission (2010) European road safety action programme 2011–2020. European Commission, Brussels

European Commission (2011) Commission staff working paper. Impact assessment. Accompanying the document ‘Commission recommendation on support for an EU-wide eCall service in electronic communication networks for the transmission of in-vehicle emergency calls based on 112 (‘eCalls’). European Commission, Brussels

European Commission (2012) The 2012 Ageing Report: economic and budgetary projections for the EU27 Member States (2010–2060). European Commission, Brussels

Evans D (2010) Wanting the impossible. The Dilemma at the heart of intimate human-robot relationships. In: Wilks Y (ed) Close engagements with artificial companions. Key social, psychological, ethical and design issues. John Benjamins Publishing Company, Amsterdam, pp 75–88

Fieser J, Dowden B (2007) Just war theory. The internet encyclopedia of philosophy. http://www.iep.utm.edu/j/justwar.htm

Fong T, Nourbakhsh I, Dautenhahn K (2003) A survey of socially interactive robots. Robot Auton Syst 42(3–4):143–166

Gangloff M (2009) Curious about robot police used to greet Wytheville post office hostage suspect? The Roanoke Times. http://ww2.roanoke.com/news/roanoke/wb/230806 . Accessed 24 Dec 2009

Gates B (2007) A robot in every home. The leader of the PC revolution predicts that the next hot field will be robotics. Sci Am 296:58–65

Ghosh B, Thompson M (2009) The CIA’s silent war in Pakistan. Time. http://www.time.com/time/magazine/article/0,9171,1900248,00.html . Accessed 1 June 2009

Gorman S, Dreazen Y, Cole A (2009) Insurgents hack U.S. drones. Wall Str J. ( http://online.wsj.com/article/SB126102247889095011.html ). Accessed 17 Dec 2009

Gusikhin O, Filev D, Rychtyckyj N (2008) Intelligent vehicle systems: applications and new trends. Informatics in Control Automation and Robotic. Lect Notes Electr Eng 15:3–14

Hambling D (2010) Future police: meet the UK’s armed robot drones. Wired. ( http://www.wired.co.uk/news/archive/2010-02/10/future-police-meet-the-uk%2Cs-armed-robot-drones#comments ). Accessed 10 Feb 2010

Heerink M, Kröse BJA, Wielinga BJ, Evers V (2009) Influence of social presence on acceptance of an assistive social robot and screen agent by elderly users. Adv Robot 23(14):1909–1923

Human Rights Watch (2012) Losing humanity: the case against killer robots. ( http://www.hrw.org/reports/2012/11/19/losing-humanity-0 )

Joy B (2000) Why the future doesn’t need us. Wired. ( http://www.wired.com/wired/archive/8.04/joy.html ). Accessed 8 April 2000

Khan AN (2011) The US’ policy of targeted killings by drones in Pakistan. IPRI J 11(1):21–40

Kloer A (2010) Are robots the future of prostitution? PinoyExchange. ( http://www.pinoyexchange.com/forums/showthread.php?t=442361 ). Accessed 28 April 2010

Krishnan A (2009) Killer robots. Ashgate Publishing Limited, Farnham, Legality and ethicality of autonomous weapons

Levy D (2007) Love + sex with robots. HarperCollins Publishers, New York, The evolution of human-robot relationships

Levy D (2007) Robot prostitutes as alternatives to human sex workers. In: IEEE international conference on robotics and automation, Rome. ( http://www.roboethics.org/icra2007/contributions/LEVY%20Robot%20Prostitutes%20as%20Alternatives%20to%20Human%20Sex%20Workers.pdf ). Accessed 14 April 2007

Levy D (2009) The ethical treatment of artificially conscious robots. Int J Soc Robot 1(3):209–216

Lewis P (2010) CCTV in the sky: police plan to use military: style spy drones. The Guardian?(January). ( http://www.theguardian.com/uk/2010/jan/23/cctv-sky-police-plan-drones )

Logtenberg H (2011) Digitale ring scant alle auto’s. Het Parool. ( http://www.parool.nl/parool/nl/4/AMSTERDAM/article/detail/2894179/2011/09/07/Digitale-ring-scant-alle-auto-s.dhtml ). Accessed 7 Sept 2011

Maines RP (1999) The technology of orgasm: “Hysteria,” the vibrator, and women’s sexual satisfaction. Johns Hopkins University Press, Baltimore

Marchant GE, Allenby B, Arkin RC et al (2011) International governance of autonomous military robots. Sci Technol Law Rev 12:272–315

McGillycuddy C (2015) She makes love just like a real woman, yes she does. Independent.ie. ( http://www.independent.ie/opinion/analysis/she-makes-love-just-like-a-real-woman-yes-she-does-26562121.html ). Accessed 17 Mar 2015

Melson GF, Kahn PH, Beck A, Friedman B (2009) Robotic pets in human lives: implications for the human-animal bond and for human relationships with personified technologies. J Soc Issues 65(3):545–569

Morsink P, Goldenbeld Ch, Dragutinovic N, Marchau V, Walta L, Brookhuis K (2007) Speed support through the intelligent vehicle (R-2006-25). SWOV, Leidschendam

Nagenborg M, Capurro R, Weber J, Pingel C (2008) Ethical regulations on robotics in Europe. AI Soc 22(3):349–366

Ness C (2010) Researchers develop a robot that folds towels. NewsCenter. ( http://newscenter.berkeley.edu/2010/04/02/robot/ ). Accessed 2 April 2010

New America Foundation (2011). The year of the drone. An analysis of U.S. drone strikes in Pakistan, 2004–2011 (report). ( http://counterterrorism.newamerica.net/drones )

NewScientist (2007) Armed autonomous robots cause concern. NewScientist. ( http://www.newscientist.com/article/dn12207-armed-autonomous-robots-cause-concern.html ). Accessed 7 July 2007

Office of the Surgeon Multinational Force-Iraq (2006) Mental health advisory team (MHAT) IV. Operation Iraqi freedom 05–07 (final report). ( http://www.armymedicine.army.mil/reports/mhat/mhat_iv/mhat-iv.cfm )

Oldenziel R, De la Bruhèze A, De Wit O (2005) Europe’s mediation junction: technology and consumer society in the twentieth century. Hist Technol 21(1):107–139

Oudshoorn N (2008) Diagnosis at a distance: the invisible work of patients and healthcare professionals in cardiac telemonitoring technology. Soc Health Illn 30(2):272–288

Ritzer G (1983) The McDonaldization of society. J Am Cult 6(1):100–107

Royakkers LMM, Van Est QC (2010) The cubicle warrior: the marionette of digitalized warfare. Ethics Inf Technol 12(3):289–296

Royakkers LMM, Van Est QC, Daemen F (2012) Overal robots. Automatisering van de liefde tot de dood, Boom Lemma, The Hague

Salvini P, Ciaravella G, Yu W, Ferri G et al (2010) How safe are service robots in urban environments? Bullying a robot. In: Proceedings 19th IEEE international symposium in robot and human interactive vommunication, Viareggio. ( http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5654677 ). Accessed 12–15 Sept 2010

Sanfeliu A, Punsola A, Yoshimura Y, Llácer MR, Gramunt MD (2009) Legal challenges for networking robots deployment in European urban areas: the privacy issue. In: Workshop on network robots systems, IEEE international conference on robotics and automation, Kobe. ( http://urus.upc.es/files/articles/LegalIssuesWorkshopNRSICRA09.pdf )

Scahill J, Greenwald G (2014) The NSA’s secret role in the U.S. assassination program. The Intercept. ( https://firstlook.org/theintercept/2014/02/10/the-nsas-secret-role/ ). Accessed 2 Oct 2014

Shachtman N (2011) Computer virus hits U.S. drone fleet. Wired.com. ( http://www.wired.com/2011/10/virus-hits-drone-fleet/ ). Accessed 7 Oct 2011

Sharkey N (2008) 2084: big robot is watching you. Report on the future for policing, surveillance and security. ( http://www.dcs.shef.ac.uk/noel/Future%20robot%20policing%20report20Final.doc )

Sharkey N (2008) Grounds for discrimination: autonomous robot weapons. RUSI Def Syst 11(2):86–89

Sharkey N (2008) Killer military robots pose latest threat to humanity. Keynote-presention at the Royal United Services Institute, Whitehall. Accessed 27 Feb 2008

Sharkey N (2010) Saying ‘no!’ to lethal autonomous targeting. J Mil Ethics 9(4):369–383

Sharkey A, Sharkey N (2012) Granny and the robots: ethical issues in robot care for the elderly. Ethics Inf Technol 14:27–40

Shaw-Garlock G (2011) Loving machines: theorizing human and sociable-technology interaction. In: Lamers MH, Verbeek FJ (eds) Human-robot personal relationships (LNICTS 59). Springer, Heidelberg, pp 1–10

Singer PW (2009) Military robots and the laws of war. New Atl 23:25–45

Singer PW (2009) Wired for war: the robotics revolution and conflict in the twenty-first century. The Penguin Press, New York

Sparrow R (2002) The march of the robot dogs. Ethics Inf Technol 4(4):305–318

Article   MathSciNet   Google Scholar  

Sparrow R (2007) Killer robots. J Appl Philos 24(1):62–77

Sparrow R, Sparrow L (2006) In the hands of machines? The future of aged care. Mind Mach 16(2):141–161

Strawser BJ (2010) Moral predators: the duty to employ uninhabited aerial vehicles. J Mil Ethics 9(4):342–368

Sullins JP (2010) RoboWarfare: can robots be more ethical than humans on the battlefield? Ethics Inf Technol 12(3):263–275

Sung J-Y, Grinter RE, Christensen HI, Guo L (2008) Housewives or technophiles? Understanding domestic robot owners. In: Proceedings of 3rd ACM/IEEE intelligent conference human robot interaction, Amsterdam. ACM, Georgia, pp 128–136. March 2008

Sung J-Y, Guo L, Grinter RE, Christensen HI (2007) “My Roomba is Rambo”: intimate home appliances. In: Krumm J et al (eds) UbiComp 2007 (LNCS 4717). Springer, Berlin, pp 145–162

Tanaka F, Cicourel A, Movellan JR (2007) Socialization between toddlers and robots at an early childhood education center. Proc Natl Acad Sci USA 104(46):17954–17958

Tanaka F, Kimura T (2009) The use of robots in early education: a scenario based on ethical consideration. In: Proceedings of the 18th IEEE international symposium on robot and human interactive communication (RO-MAN 2009), Toyama, pp 558–560

Thring MW (1964) A robot in the house. In: Calder N (ed) The world in 1984. Penguin Books, Baltimore

Thrun S (2010) What we’re driving at. Google Official Blog. ( http://googleblog.blogspot.nl/2010/10/what-were-driving-at.html ). Accessed 9 Oct 2010

TNO (2008) TNO moving forward. to safer, cleaner and more efficient mobility. TNO, The Hague

Turkle S (2006) A nascent robotics culture: New complicities for companionship. AAAI Technical Report Series. http://mit.edu/sturkle/www/nascentroboticsculture.pdf )

Turkle S (2011) Alone together. Basic Books, New York, Why we expect more from technology and less from each other

Underwood SE, Ervin RD, Chen K (1989) The future of intelligent vehicle-highway systems: a Delphi forecast of markets and sociotechnological determinants. University of Michigan, Transportation Research Institute, Michigan

United States Air Force (2009) Unmanned aircraft systems flight plan 2009–2047. Headquaters, United States Air Force, Washington

UN News Centre (2014) UN meeting targets ‘killer robots’. UN News Centre. http://www.un.org/apps/news/story.asp?NewsID=47794 . Accessed 14 May 2014

Vallor S (2011) Carebots and caregivers: sustaining the ethical ideal of care in the twenty-first century. Philos Technol 24(3):251–268

Van Arem B (2007) Cooperative vehicle-infrastructure systems: an intelligent way forward? (TNO report 2007-D-R0158/B). TNO, Delft

Vanden Brook T (2009) More training on UAV’s than bombers, fighters. USA Today. http://www.airforcetimes.com/news/2009/06/gnsairforceuav061609w/ . Accessed 16 June 2009

Van der Plas A, Smits M, Wehrman C (2010) Beyond speculative robot ethics: a vision assessment study on the future of the robotic caretaker. Account Res Policies Qual Assur 17(6):299–315

Van Driel CJG, Hoedemaeker M, Van Arem B (2007) Impacts of a congestion assistant on driving behaviour and acceptance using a driving simulator. Transp Res Part F 10(2):139–152

Van Oost E, Reed D (2011) Towards a sociological understanding of robots as companions. In: Lamers MH, Verbeek FJ (eds) Human-robot personal relationships (LNICTS 59). Springer, Heidelberg, pp 11–18

Van Wynsberghe A (2013) Designing robots for care: care centered value-sensitive design. Sci Eng Ethics 19(2):407–433

Veruggio G, Operto F (2008) Robothics: social and ethical implications of robotics. In: Siciliano B, Khatib O (eds) Springer handbook of robotics. Springer, Berlin, pp 1499–1524

Visbeek M, Van Renswouw CCM (2008) C, mm, n. Your mobility, our future. Twente University, Enschede

Wetmore JM (2003) Driving the dream. The history and motivations behind 60 years of automated highway systems in America. Automative History Review (summers):4–19

Whetham D (2013) Drones and targeting killing: angels or assassins? In: Strawser BJ, McMaham J (eds) Kiling by remote control: the ethics of an unmanned military. Oxford University Press, Oxford, pp 69–83

Whittle R (2013) Drone skies: The unmanned aircraft revolution is coming. Popular mechanics. http://www.popularmechanics.com/military/a9407/drone-skies-the-unmanned-aircraft-revolution-is-coming-15894155/ . Accessed 9 Sept 2013

Download references

Author information

Authors and Affiliations

School of Innovation Sciences, Eindhoven University of Technology, Eindhoven, The Netherlands

Lambèr Royakkers

Rathenau Institute, The Hague, The Netherlands

Rinie van Est


Corresponding author

Correspondence to Lambèr Royakkers .

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Royakkers, L., van Est, R. A Literature Review on New Robotics: Automation from Love to War. Int J of Soc Robotics 7 , 549–570 (2015). https://doi.org/10.1007/s12369-015-0295-x


Accepted : 24 March 2015

Published : 09 April 2015

Issue Date : November 2015



  • Social robots

A Review of Artificial Intelligence and Robotics in Transformed Health Ecosystems


  • 1 Institute for Medical Information, Bern University of Applied Sciences, Bern, Switzerland
  • 2 Object Management Group, Needham, MA, United States

Health care is shifting toward becoming proactive, in line with the concept of P5 medicine–a predictive, personalized, preventive, participatory and precision discipline. This patient-centered care heavily leverages the latest technologies of artificial intelligence (AI) and robotics to support diagnosis, decision making and treatment. In this paper, we present the role of AI and robotic systems in this evolution, including example use cases. We categorize systems along multiple dimensions such as the type of system, the degree of autonomy, the care setting where the systems are applied, and the application area. These technologies have already achieved notable results in the prediction of sepsis or cardiovascular risk, the monitoring of vital parameters in intensive care units, or in the form of home care robots. Still, while much research is conducted around AI and robotics in health care, adoption in real-world care settings is still limited. To remove adoption barriers, we need to address issues such as safety, security, privacy and ethical principles; detect and eliminate bias that could result in harmful or unfair clinical decisions; and build trust in and societal acceptance of AI.

The Need for AI and Robotics in Transformed Health Ecosystems

“Artificial intelligence (AI) is the term used to describe the use of computers and technology to simulate intelligent behavior and critical thinking comparable to a human being” ( 1 ). Machine learning enables AI applications to improve their algorithms automatically (i.e., without being explicitly programmed to do so) through experience gained from cognitive inputs or from data. AI solutions provide data and knowledge to be used by humans or other technologies. The possibility of machines behaving in such a way was originally raised by Alan Turing and further explored starting in the 1950s. Medical expert systems such as MYCIN, designed in the 1970s for medical consultations ( 2 ), were internationally recognized as a revolution supporting the development of AI in medicine. However, their clinical acceptance was not very high. Similar disappointments across multiple domains led to the so-called “AI winter,” in part because rule-based systems do not allow the discovery of unknown relationships and in part because of the limitations in computing power at the time. Since then, computational power has increased enormously.

Over the centuries, we have improved our knowledge of the structure and function of the human body, starting with organs, tissues, cells and sub-cellular components, and advancing to the molecular and sub-molecular level, including protein-coding genes, DNA sequences and non-coding RNA, together with their effects and behavior in the human body. This has resulted in a continuously improving understanding of the biology of diseases and disease progression ( 3 ). Nowadays, biomedical research and clinical practice are struggling with the size and complexity of the data produced by sequencing technologies, and with how to derive new diagnoses and treatments from it. Experimental results, often hidden in clinical data warehouses, must be aggregated, analyzed, and exploited to build a new, detailed and data-driven knowledge of diseases and enable better decision making.

New tools based on AI have been developed to predict disease recurrence and progression ( 4 ) or response to treatment; and robotics, often categorized as a branch of AI, plays an increasing role in patient care. In a medical context, AI means, for example, imitating the decision-making processes of health professionals ( 1 ). In contrast to AI, which generates data, robotics delivers tangible outcomes or performs physical tasks. AI and robotics use knowledge and patient data for various tasks, such as diagnosis; planning of surgeries; monitoring of patients' physical and mental wellness; and basic physical interventions to improve patient independence during physical or mental deterioration. We will review concrete realizations in a later section of this paper.

These advances are causing a revolution in health care, enabling it to become proactive as called for by the concept of P5 medicine–a predictive, personalized, preventive, participatory and precision discipline ( 5 ). AI can help interpret personal health information together with other data to stratify diseases and to predict, stop or treat their progression.

In this paper, we describe the impact of AI and robotics on P5 medicine and introduce example use cases. We then discuss challenges faced by these developments. We conclude with recommendations to help AI and robotics transform health ecosystems. We extensively refer to appropriate literature for details on the underlying methods and technologies. Note that we concentrate on applications in the care setting and will not address in more detail the systems used for the education of professionals, logistics, or related to facility management–even though there are clearly important applications of AI in these areas.

Classification of AI and Robotic Systems in Medicine

We can classify the landscape of AI and robotic systems in health care along different dimensions ( Figure 1 ): use, task, and technology. Within the “use” dimension, we can further distinguish the application area and the care setting. The “task” dimension is characterized by the system's degree of autonomy. Finally, regarding the “technology” dimension, we consider the degree of intrusion into a patient and the type of system. Clearly, this is a simplification and aggregation: an AI algorithm as such will not be located inside a patient, for example.


Figure 1 . Categorization of systems based on AI and robotics in health care.
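As a reading aid, the classification dimensions above can be sketched as a small data model. All names, enum values and the example instance below are illustrative assumptions for this sketch, not definitions taken from the paper:

```python
from dataclasses import dataclass
from enum import Enum

class SystemType(Enum):        # "technology" dimension: type of system
    VIRTUAL = "virtual"        # e.g., decision support, text mining
    PHYSICAL = "physical"      # e.g., surgical robots, prostheses
    HYBRID = "hybrid"          # e.g., social robots, microrobots

class Autonomy(Enum):          # "task" dimension: degree of autonomy
    ASSISTIVE = "assistive"
    AUTONOMOUS = "autonomous"

class Intrusion(Enum):         # "technology" dimension: degree of intrusion
    INSIDE_BODY = "inside"     # microrobots
    ON_BODY = "on"             # exoskeletons, wearables
    OUTSIDE_BODY = "outside"   # service and surgical robots

@dataclass
class HealthSystem:
    name: str
    application_area: str      # "use" dimension
    care_setting: str          # "use" dimension
    autonomy: Autonomy
    system_type: SystemType
    intrusion: Intrusion

# Hypothetical example instance combining the dimensions:
surgical_robot = HealthSystem(
    name="teleoperated surgical robot",
    application_area="surgery",
    care_setting="inpatient hospital",
    autonomy=Autonomy.ASSISTIVE,
    system_type=SystemType.PHYSICAL,
    intrusion=Intrusion.OUTSIDE_BODY,
)
```

Modeling each dimension as an independent attribute mirrors the paper's point that the dimensions are orthogonal: any system occupies one position along each axis.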

Classification Based on Type of System

We can distinguish two types of such systems: virtual and physical ( 6 ).

• Virtual systems (relating to AI systems) range from applications such as electronic health record (EHR) systems, or text and data mining applications, to systems supporting treatment decisions.

• Physical systems relate to robotics and include robots that assist in performing surgeries, smart prostheses for handicapped people, and physical aids for elderly care.

There can also be hybrid systems combining AI with robotics, such as social robots that interact with users or microrobots that deliver drugs inside the body.

All these systems exploit enabling technologies that are data and algorithms (see Figure 2 ). For example, a robotic system may collect data from different sensors–visual, physical, auditory or chemical. The robot's processor manipulates, analyzes, and interprets the data. Actuators enable the robot to perform different functions including visual, physical, auditory or chemical responses.


Figure 2 . Types of AI-based systems and enabling technologies.

Two kinds of data are required: data that captures the knowledge and experience gained by the system during diagnosis and treatment, usually through machine learning; and individual patient data, which AI can assess and analyze to derive recommendations. Data can be obtained from physical sensors (wearable, non-wearable), from biosensors ( 7 ), or from other information systems such as an EHR application. From the collected data, digital biomarkers can be derived that AI can analyze and interpret ( 8 ).
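As a toy example of a digital biomarker, a nightly resting heart rate might be derived from raw wearable samples. The computation and data below are illustrative assumptions for this sketch, not a method from the cited work:

```python
def resting_heart_rate(samples_bpm: list[int], n_lowest: int = 5) -> float:
    """Derive a simple digital biomarker: the mean of the n lowest
    overnight heart-rate samples (a crude proxy for resting heart rate)."""
    return sum(sorted(samples_bpm)[:n_lowest]) / n_lowest

# Invented overnight samples from a hypothetical wrist sensor (beats/min)
night = [62, 58, 71, 55, 66, 54, 57, 60, 74, 56]
rhr = resting_heart_rate(night)   # mean of [54, 55, 56, 57, 58] -> 56.0
```

The point of the sketch is only that a biomarker is a derived quantity: raw sensor readings are reduced to a single interpretable value that AI can then track and analyze over time.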

AI-specific algorithms and methods allow data analysis, reasoning, and prediction. AI consists of a growing number of subfields such as machine learning (supervised, unsupervised, and reinforcement learning), machine vision, natural language processing (NLP) and more. NLP enables computers to process and understand natural language (written or spoken). Machine vision or computer vision extracts information from images. An authoritative taxonomy of AI does not exist yet, although several standards bodies have started addressing this task.

AI methodologies can be divided into knowledge-based AI and data-driven AI ( 9 ).

• Knowledge-based AI models human knowledge by asking experts for relevant concepts and knowledge they use to solve problems. This knowledge is then formalized in software ( 9 ). This is the form of AI closest to the original expert systems of the 1970s.

• Data-driven AI starts from large amounts of data, which are typically processed by machine learning methods to learn patterns that can be used for prediction. Virtual or augmented reality and other types of visualizations can be used to present and explore data, which helps understand relations among data items that are relevant for diagnosis ( 10 ).
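The contrast between the two methodologies can be illustrated with a deliberately tiny sketch. The expert rule, the toy temperature data and the learning procedure below are all invented for illustration and are not drawn from the paper:

```python
def expert_rule(temp_c: float) -> bool:
    """Knowledge-based AI: an expert's rule is encoded directly in software
    (here, the invented rule 'fever means a temperature above 38.0 °C')."""
    return temp_c > 38.0

def learn_threshold(samples: list[tuple[float, bool]]) -> float:
    """Data-driven AI: estimate the decision boundary from labelled data,
    here simply as the midpoint between the two class means."""
    pos = [t for t, label in samples if label]
    neg = [t for t, label in samples if not label]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

# Invented labelled examples: (temperature, has_fever)
data = [(36.5, False), (36.9, False), (37.1, False),
        (38.4, True), (39.0, True), (39.6, True)]
threshold = learn_threshold(data)   # close to the expert's 38.0 for this toy data
```

The expert rule never changes unless a human rewrites it, whereas the learned threshold shifts as new labelled data arrives, which is exactly the trade-off the two methodologies represent.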

To more fully exploit the knowledge captured in computerized models, the concept of digital twin has gained traction in the medical field ( 11 ). The terms “digital patient model,” “virtual physiological human,” or “digital phenotype” designate the same idea. A digital twin is a virtual model fed by information coming from wearables ( 12 ), omics, and patient records. Simulation, AI and robotics can then be applied to the digital twin to learn about the disease progression, to understand drug responses, or to plan surgery, before intervening on the actual patient or organ, effecting a significant digital transformation of the health ecosystems. Virtual organs (e.g., a digital heart) are an application of this concept ( 13 ). A digital twin can be customized to an individual patient, thus improving diagnosis.

Regardless of the specific kind of AI, there are some requirements that all AI and robotic systems must meet. They must be:

• Adaptive . Transformed health ecosystems evolve rapidly, especially since according to P5 principles they adapt treatment and diagnosis to individual patients.

• Context-aware . They must infer the current activity state of the user and the characteristics of the environment in order to manage information content and distribution.

• Interoperable . A system must be able to exchange data and knowledge with other ones ( 14 ). This requires common semantics between systems, which is the object of standard terminologies, taxonomies or ontologies such as SNOMED CT. NLP can also help with interoperability ( 15 ).

Classification Based on Degree of Autonomy

AI and robotic systems can be grouped along an assistive-to-autonomous axis ( Figure 3 ). Assistive systems augment the capabilities of their user by aggregating and analyzing data, performing concrete tasks under human supervision [for example, a semiautonomous ultrasound scanner ( 17 )], or learning how to perform tasks from a health professional's demonstrations. For example, a robot may learn from a physiotherapist how to guide a patient through repetitive rehabilitation exercises ( 18 ).


Figure 3 . Levels of autonomy of robotic and AI systems. [following models proposed by ( 16 )].

Autonomous systems respond to real-world conditions, make decisions, and perform actions with minimal or no interaction with a human ( 19 ). They may be encountered in a clinical setting (autonomous implanted devices), in support functions to provide assistance 1 (carrying things around in a facility), or in the automation of non-physical work, such as a digital receptionist handling patient check-in ( 20 ).

Classification Based on Application Area

The diversity of users of AI and robotics in health care implies an equally broad range of application areas described below.

Robotics and AI for Surgery

Robotics-assisted surgery, “the use of a mechanical device to assist surgery in place of a human-being or in a human-like way” ( 21 ), is rapidly impacting many common general surgical procedures, especially minimally invasive surgery. Three types of robotic systems are used in surgery:

• Active systems undertake pre-programmed tasks while remaining under the control of the operating surgeon;

• Semi-active systems allow a surgeon to complement the system's pre-programmed component;

• Master–slave systems lack any autonomous elements; they entirely depend on a surgeon's activity. In laparoscopic surgery or in teleoperation, the surgeon's hand movements are transmitted to surgical instruments, which reproduce them.
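As an illustration of the master–slave idea, a teleoperation pipeline typically scales down and smooths the surgeon's hand motion before applying it to the instrument. The sketch below is a generic illustration with made-up parameters, not the control algorithm of any particular surgical system:

```python
SCALE = 0.2   # illustrative 5:1 motion scaling: 10 mm of hand travel -> 2 mm at the tip
ALPHA = 0.3   # smoothing factor of a simple exponential low-pass filter (tremor damping)

def map_motion(hand_deltas_mm: list[float]) -> list[float]:
    """Map raw hand displacements to smoothed, scaled instrument motions."""
    out, smoothed = [], 0.0
    for d in hand_deltas_mm:
        smoothed = ALPHA * d + (1 - ALPHA) * smoothed  # exponential smoothing
        out.append(smoothed * SCALE)
    return out

# A sudden 10 mm hand jerk produces a damped, scaled-down instrument response
response = map_motion([10.0, 0.0, 0.0])
```

Even this toy version shows why such systems "lack autonomous elements": every output sample is a deterministic function of the surgeon's input, with the robot contributing only scaling and filtering.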

Surgeons can also be supported by navigation systems, which localize positions in space and help answer a surgeon's anatomical orientation questions. Real-time tracking of markers, realized in modern surgical navigation systems using a stereoscopic camera emitting infrared light, can determine the 3D position of prominent structures ( 22 ).

Robotics and AI for Rehabilitation

Various AI and robotic systems support rehabilitation tasks such as monitoring, risk prevention, or treatment ( 23 ). For example, fall detection systems ( 24 ) use smart sensors placed within an environment or in a wearable device, and automatically alert medical staff, emergency services, or family members if assistance is required. AI allows these systems to learn the normal behavioral patterns and characteristics of individuals over time. Moreover, systems can assess environmental risks, such as household lights that are off or proximity to fall hazards (e.g., stairwells). Physical systems can provide physical assistance (e.g., lifting items, opening doors), monitoring, and therapeutic social functions ( 25 ). Robotic rehabilitation applications can provide both physical and cognitive support to individuals by monitoring physiological progress and promoting social interaction. Robots can support patients in recovering motions after a stroke using exoskeletons ( 26 ), or recovering or supplementing lost function ( 27 ). Beyond directly supporting patients, robots can also assist caregivers. An overview on home-based rehabilitation robots is given by Akbari et al. ( 28 ). Virtual reality and augmented reality allow patients to become immersed within and interact with a 3D model of a real or imaginary world, allowing them to practice specific tasks ( 29 ). This has been used for motor function training, recovery after a stroke ( 30 ) and in pain management ( 31 ).
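A minimal sketch of how a monitoring system might learn an individual's "normal" pattern over time and raise an alert on deviation, as the fall-detection systems above do. The statistics and data are toy assumptions; real systems use far richer behavioral models:

```python
from statistics import mean, stdev

def fit_baseline(history: list[float]) -> tuple[float, float]:
    """Learn an individual's baseline as (mean, standard deviation)."""
    return mean(history), stdev(history)

def is_anomalous(reading: float, baseline: tuple[float, float],
                 k: float = 3.0) -> bool:
    """Flag readings more than k standard deviations from the baseline."""
    mu, sigma = baseline
    return abs(reading - mu) > k * sigma

# Invented hourly movement counts for one person
history = [52, 48, 50, 55, 47, 51, 49, 53]
baseline = fit_baseline(history)

normal_hour = is_anomalous(50, baseline)   # False: within the learned range
quiet_hour = is_anomalous(2, baseline)     # True: sudden inactivity -> alert
```

The design choice worth noting is that the threshold is personal: the same reading can be normal for one individual and an alert for another, which is the "learn the normal behavioral patterns of individuals" idea in the text.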

Robotics and AI for Telemedicine

Systems supporting telemedicine assist with, among other things, triage, diagnosis, non-surgical treatment, surgical treatment, consultation, monitoring, and the provision of specialty care ( 32 ).

• Medical triage assesses current symptoms, signs, and test results to determine the severity of a patient's condition and the treatment priority. An increasing number of mobile health applications based on AI are used for diagnosis or treatment optimization ( 33 ).

• Smart mobile and wearable devices can be integrated into “smart homes” using Internet-of-Things (IoT) technologies. They can collect patient and contextual data, assist individuals with everyday functioning, monitor progress toward individualized care and rehabilitation goals, issue reminders, and alert care providers if assistance is required.

• Telemedicine for specialty care includes additional tools to track mood and behavior (e.g., pain diaries). AI-based chatbots can mitigate social isolation in home care environments 2 by offering companionship and emotional support to users, noting whether they are not sleeping well, are in pain or are depressed, which could indicate a more complex mental condition ( 34 ).

• Beyond this, there are physical systems that can deliver specialty care: Robot DE NIRO can interact naturally, reliably, and safely with humans, autonomously navigate through environments on command, and intelligently retrieve or move objects ( 35 ).

Robotics and AI for Prediction and Precision Medicine

Precision medicine considers individual patients, their genomic variations as well as contributing factors (age, gender, ethnicity, etc.), and tailors interventions accordingly ( 8 ). Digital health applications can also incorporate data such as emotional state, activity, food intake, etc. Given the amount and complexity of the data this requires, AI can learn from comprehensive datasets to predict risks and identify the optimal treatment strategy ( 36 ). Clinical decision support systems (CDSS) that integrate AI can provide differential diagnoses, recognize early warning signs of patient morbidity or mortality, or identify abnormalities in radiological images or laboratory test results ( 37 ). They can increase patient safety, for example by reducing medication or prescription errors and adverse events, and can increase care consistency and efficiency ( 38 ). They can support clinical management by ensuring adherence to clinical guidelines or by automating administrative functions such as clinical and diagnostic encoding ( 39 ), patient triage or the ordering of procedures ( 37 ).
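The simplest form of such early-warning support can be sketched as a rule-based score over vital signs. The thresholds below are illustrative only, loosely inspired by the general shape of published early-warning scores, and are not clinically valid:

```python
def warning_score(resp_rate: int, heart_rate: int, sys_bp: int) -> int:
    """Toy early-warning score: each deranged vital sign adds points."""
    score = 0
    if resp_rate >= 25 or resp_rate <= 8:   # breaths per minute
        score += 3
    elif resp_rate >= 21:
        score += 2
    if heart_rate >= 131:                   # beats per minute
        score += 3
    elif heart_rate >= 111:
        score += 2
    if sys_bp <= 90:                        # systolic blood pressure, mmHg
        score += 3
    elif sys_bp <= 100:
        score += 2
    return score

# In such a scheme a high aggregate score might trigger an urgent review
stable = warning_score(18, 80, 120)        # unremarkable vitals -> 0
deteriorating = warning_score(26, 135, 88) # several deranged vitals -> high score
```

Modern CDSS replace these hand-set thresholds with models learned from data, but the interface is the same: vitals in, a prioritized risk signal out.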

AI and Agents for Management and Support Tasks

NLP applications, such as voice transcription, have proved helpful for clinical note-taking ( 40 ), compiling electronic health records, and automatically generating medical reports from patient-doctor conversations or diagnostic reports ( 41 ). AI algorithms can help retrieve context-relevant patient data; concept-based information retrieval can improve search accuracy and retrieval speed ( 42 ). AI algorithms can also improve the use and allocation of hospital resources by predicting patients' length of stay ( 43 ) or their risk of re-admission ( 44 ).

Classification Based on Degree of Intrusion Into a Patient

Robotic systems can be used inside the body, on the body or outside the body. Those applied inside the body include microrobots ( 45 ), surgical robots and interventional robots. Microrobots are sub-millimeter untethered devices that can be propelled for example by chemical reactions ( 46 ), or physical fields ( 47 ). They can move unimpeded through the body and perform tasks such as targeted therapy (localized delivery of drugs) ( 48 ).

Microrobots can assist in physical surgery, for example by drilling through a blood clot or by opening up obstructions in the urinary tract to restore normal flow ( 49 ). They can provide directed local tissue heating to destroy cancer cells ( 50 ). They can be implanted to provide continuous remote monitoring and early awareness of an emerging disease.

Robotic prostheses, orthoses and exoskeletons are examples of robotic systems worn on the body. Exoskeletons are wearable robotic systems that are tightly physically coupled with a human body to provide assistance or enhance the wearer's physical capabilities ( 51 ). While they have often been developed for applications outside of health care, they can help workers with physically demanding tasks such as moving patients ( 52 ) or assist people with muscle weakness or movement disorders. Wearable technology can also be used to measure and transmit data about vital signs or physical activity ( 19 ).

Robotic systems applied outside the body can help avoid direct contact when treating patients with infectious diseases ( 53 ), assist in surgery (as already mentioned), including remote surgical procedures that leverage augmented reality ( 54 ) or assist providers when moving patients ( 55 ).

Classification Based on Care Setting

Another dimension of AI and robotics is the duration of their use, which directly correlates with the location of use. Both can significantly influence the requirements, design, and technology components of the solution. In a longer-term care setting, robotics can be used in a patient's home (e.g., for monitoring of vital signs) or for treatment in a nursing home. Shorter-term care settings include inpatient hospitals, palliative care facilities or inpatient psychiatric facilities. Example applications are listed in Table 1 .


Table 1 . Classification by care setting.

Sample Realizations

Having seen how to classify AI and robotic systems in health care, we turn to recent concrete achievements that illustrate their practical application. This list is by no means exhaustive, but it shows that we are no longer purely at the research or experimentation stage: the technology is starting to bear fruit in a very concrete way–that is, by improving outcomes–even if so far only in the context of clinical trials prior to regulatory approval for general use.

Sepsis Onset Prediction

Sepsis was recently identified as the leading cause of death worldwide, surpassing even cancer and cardiovascular diseases. 3 While timely diagnosis and treatment are difficult in any care setting, sepsis is also the leading cause of death in hospitals in the United States (Sepsis Fact Sheet 4 ). A key reason is the difficulty of recognizing precursor symptoms early enough to initiate effective treatment. Therefore, early onset prediction promises to save millions of lives each year. Here are four such projects:

• Bayesian Health 5 , a startup founded by a researcher at Johns Hopkins University, applied its model to a test population of hospital patients and correctly identified 82% of the 9,800 patients who later developed sepsis.

• Dascena, a California startup, has been testing its software on large cohorts of patients since 2017, achieving significant improvements in outcomes ( 63 ).

• Patchd 6 uses wearable devices and deep learning to predict sepsis in high-risk patients. Early studies have shown that this technology can predict sepsis 8 h earlier, and more accurately, than under existing standards of care.

• A team of researchers from Singapore developed a system that combines clinical measures (structured data) with physician notes (unstructured data), resulting in improved early detection while reducing false positives ( 64 ).
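The systems above differ in their inputs and models, but the core idea they share is a risk score continuously updated from routine vitals and labs, with an alert when the score crosses a threshold. The sketch below is a minimal logistic-regression-style illustration of that idea; the coefficients, intercept, and threshold are entirely invented, and real systems such as those listed are trained on large EHR datasets and validated clinically:

```python
import math

# Hypothetical coefficients for illustration only.
COEFFS = {"heart_rate": 0.03, "resp_rate": 0.09, "temp_c": 0.4, "wbc": 0.05}
INTERCEPT = -20.0

def sepsis_risk(vitals: dict) -> float:
    """Logistic-regression-style risk score in [0, 1] from a vitals snapshot."""
    z = INTERCEPT + sum(COEFFS[k] * vitals[k] for k in COEFFS)
    return 1.0 / (1.0 + math.exp(-z))

def alert(vitals: dict, threshold: float = 0.5) -> bool:
    """Raise an early-warning flag when the estimated risk crosses a threshold."""
    return sepsis_risk(vitals) >= threshold

# A deteriorating patient (tachycardia, tachypnea, fever, elevated WBC)
# scores higher than a stable one.
stable = {"heart_rate": 72, "resp_rate": 14, "temp_c": 36.8, "wbc": 7.0}
septic = {"heart_rate": 128, "resp_rate": 28, "temp_c": 39.2, "wbc": 12.0}
```

In a deployment, such a score would be recomputed each time new observations arrive, which is what allows prediction hours before clinical recognition.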

Monitoring Systems in the Intensive Care Unit

For patients in an ICU, the paradox is that large amounts of data are collected, displayed on monitors, and used to trigger alarms, but these various data streams are rarely used together, nor can doctors or nurses effectively observe all the data from all the patients all the time.

This is an area where much has been written, but most available information points to studies that have not resulted in actual deployments. A survey paper pointed in particular to the challenge of achieving effective collaboration between ICU staff and automated processes ( 65 ).

In one application example, machine learning helps resolve the asynchrony between a mechanical ventilator and the patient's own breathing reflexes, which can cause distress and complicate recovery ( 66 ).
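To illustrate the kind of event being detected (with a deliberately simplified rule-based stand-in for the machine-learning approach of ( 66 )), one marker of asynchrony–the ineffective inspiratory effort–can be flagged by comparing patient effort timings against ventilator breath deliveries:

```python
def ineffective_efforts(patient_efforts, vent_breaths, tol=0.3):
    """Return patient inspiratory efforts (timestamps in seconds) that the
    ventilator did not answer with a breath within `tol` seconds: one simple
    marker of patient-ventilator asynchrony."""
    return [t for t in patient_efforts
            if not any(abs(t - b) <= tol for b in vent_breaths)]

def asynchrony_index(patient_efforts, vent_breaths, tol=0.3):
    """Fraction of patient efforts that went unanswered."""
    missed = ineffective_efforts(patient_efforts, vent_breaths, tol)
    return len(missed) / len(patient_efforts)
```

For efforts at 0.1, 2.0, 4.1, and 6.0 s against breaths delivered at 0.0, 2.1, and 6.1 s, the effort at 4.1 s goes unanswered, giving an asynchrony index of 0.25. A learned model would instead classify the raw flow and pressure waveforms, but the output it feeds to clinicians is of this same kind.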

Tumor Detection From Image Analysis

This is another area where research has provided evidence of the efficacy of AI–generally employed not alone but rather as an advisor to a medical professional–yet there are few actual deployments at scale.

These applications differ based on the location of the tumors, and therefore on the imaging techniques used to observe them. AI makes the interpretation of the images more reliable, generally by pointing radiologists to areas they might otherwise overlook.

• In a study performed in Korea, AI appeared to improve the recognition of lung cancer in chest X-rays ( 67 ). AI by itself performed better than unaided radiologists, and the improvement was greater when AI was used as an aid by radiologists. Note however that the sample size was fairly small.

• Several successive efforts aimed to use AI to classify dermoscopic images to discriminate between benign nevi and melanoma ( 68 ).

AI for COVID-19 Detection

The rapid and tragic emergence of the COVID-19 disease, and its continued evolution at the time of this writing, have mobilized many researchers, including the AI community. This domain naturally divides into two areas: diagnosis and treatment.

An example of AI applied to COVID-19 diagnosis is based on an early observation that the persistent cough that is one of the common symptoms of the disease “sounds different” from the cough caused by other ailments, such as the common cold. The MIT Opensigma project 7 has “crowdsourced” sound recordings of coughs from many people, most of whom do not have the disease while some know that they have it or had it. Several similar projects have been conducted elsewhere ( 69 ).
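Such projects typically reduce each recording to acoustic features before classification. The toy sketch below computes two classic features, short-time energy and zero-crossing rate, per frame; the actual crowdsourced-cough pipelines and their models are far more sophisticated, and this framing is illustrative only:

```python
def cough_features(samples, frame=256):
    """Per-frame short-time energy and zero-crossing rate: extremely
    simplified acoustic features of the kind fed to a cough classifier."""
    feats = []
    for i in range(0, len(samples) - frame + 1, frame):
        w = samples[i:i + frame]
        energy = sum(s * s for s in w) / frame          # average power
        zcr = sum(1 for a, b in zip(w, w[1:]) if a * b < 0) / (frame - 1)
        feats.append((energy, zcr))
    return feats
```

A rapidly oscillating (high-pitched) signal yields a higher zero-crossing rate than a slowly varying one at the same energy; a downstream classifier would learn which combinations of such features distinguish a COVID-19 cough from other coughs.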

Another effort used AI to read computed tomography images to provide a rapid COVID-19 test, reportedly achieving over 90% accuracy in 15 s ( 70 ). Curiously, after this news was widely circulated in February-March 2020, nothing else was said for several months. Six months later, a blog post 8 from the University of Virginia radiology and medical department asserted that “CT scans and X-rays have a limited role in diagnosing coronavirus.” The approach pioneered in China may have been the right solution at a specific point in time (many cases concentrated in a small geographical area, requiring a massive detection effort before other rapid tests were available), thus overriding the drawbacks related to equipment cost and patient exposure to radiation.

Patient Triage and Symptom Checkers

While the word triage immediately evokes urgent decisions about what interventions to perform on acutely ill patients or accident victims, it can also be applied to remote patient assistance (e.g., telehealth applications), especially in areas underserved by medical staff and facilities.

In an emergency care setting, where triage decisions can result in the survival or death of a person, there is a natural reluctance to entrust such decisions to machines. However, AI as a predictor of outcomes could serve as an assistant to an emergency technician or doctor. A 2017 study of emergency room triage of patients with acute abdominal pain only showed an “acceptable level of accuracy” ( 71 ), but more recently, the Mayo Clinic introduced an AI-based “digital triage platform” from Diagnostic Robotics 9 to “perform clinical intake of patients and suggest diagnoses and hospital risk scores.” These solutions can now be delivered by a website or a smartphone app, and have evolved from decision trees designed by doctors to incorporate AI.

Cardiovascular Risk Prediction

Google Research announced in 2018 that it had achieved “prediction of cardiovascular risk factors from retinal fundus photographs via deep learning” with a level of accuracy similar to traditional methods such as blood tests for cholesterol levels ( 72 ). The novelty consists in the use of a neural network to analyze the retina image, resulting in more power at the expense of explainability.

In practice, the future of such a solution is unclear: certain risk factors could be assessed from the retinal scan, but those were often factors that could be measured directly anyway–such as blood pressure.

Gait Analysis

Many physiological and neurological factors affect how someone walks, given the complex interactions between the sense of touch, the brain, the nervous system, and the muscles involved. Certain conditions, in particular Parkinson's disease, have been shown to affect a person's gait, causing visible symptoms that can help diagnose the disease or measure its progress. Even if an abnormal gait results from another cause, an accurate analysis can help assess the risk of falls in elderly patients.

Compared to other applications in this section, gait analysis has been practiced for a longer time (over a century) and has progressed incrementally as new motion capture methods (film, video, infrared cameras) were developed. In terms of knowledge representation, see for example the work done at MIT twenty years ago ( 73 ). Computer vision, combined with AI, can considerably improve gait analysis compared to a physician's simple observation. Companies such as Exer 10 offer solutions that physical therapists can use to assess patients, or that can help monitor and improve a home exercise program. This is an area where technology has already been deployed at scale: there are more than 60 clinical and research gait labs 11 in the U.S. alone.
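A vision-based pipeline typically tracks pose keypoints frame by frame and derives temporal gait parameters from them. The toy sketch below detects heel strikes as local minima of a heel keypoint's vertical trajectory and computes the mean stride time; the data and thresholds are invented, and real systems (such as Exer's) must cope with noisy keypoints and extract many more parameters:

```python
def heel_strikes(heel_y, fps=30):
    """Heel-strike times (s) taken as local minima of the heel keypoint's
    vertical position across video frames."""
    strikes = []
    for i in range(1, len(heel_y) - 1):
        if heel_y[i] < heel_y[i - 1] and heel_y[i] <= heel_y[i + 1]:
            strikes.append(i / fps)
    return strikes

def mean_stride_time(strikes):
    """Average interval between consecutive heel strikes of the same foot."""
    gaps = [b - a for a, b in zip(strikes, strikes[1:])]
    return sum(gaps) / len(gaps)
```

For a synthetic periodic heel trajectory with one strike per second, the detector recovers strikes at 0.5, 1.5, and 2.5 s and a stride time of 1.0 s. Deviations in stride time and its variability are exactly the kind of quantity used to track Parkinson's progression or fall risk.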

Home Care Robots

Robots that provide assistance to elderly or sick persons have been the focus of research and development for several decades, particularly in Japan due to the country's large aging population with above-average longevity. “Elder care robots” can be deployed at home (with cost being an obvious issue for many customers) or in senior care environments ( 74 ), where they will help alleviate a severe shortage of nurses and specialized workers, which cannot be easily addressed through the hiring of foreign help given the language barrier.

The types of robots used in such settings are proliferating. They range from robots that help patients move or exercise, to robots that help with common tasks such as opening the front door to a visitor or bringing a cup of tea, to robots that provide psychological comfort and even some form of conversation. PARO, for instance, is a robotic baby seal developed to provide treatment to patients with dementia ( 75 ).

Biomechatronics

Biomechatronics combines biology, mechanical engineering, and electronics to design assistive devices that interpret inputs from sensors and send commands to actuators–with both sensors and actuators attached in some manner to the body. The sensors, actuators, control system, and human subject together form a closed-loop control system.
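A classic way to close such a loop is feedback control. The sketch below uses a plain PID controller driving a crudely modeled joint toward a target angle; the gains and plant dynamics are invented for illustration and bear no relation to any real prosthetic device:

```python
class PID:
    """Minimal PID controller: one illustration of the sensor -> controller
    -> actuator loop that biomechatronic devices close around the wearer."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured, dt):
        err = setpoint - measured
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy plant: a joint angle (degrees) driven toward a 30-degree target,
# simulated for 20 seconds at 100 Hz.
pid = PID(kp=2.0, ki=0.5, kd=0.1)
angle, target, dt = 0.0, 30.0, 0.01
for _ in range(2000):
    torque = pid.step(target, angle, dt)
    angle += torque * dt  # crude first-order joint response
```

After the simulated 20 seconds the angle has settled at the target. In an actual device the "measured" value would come from joint encoders or EMG-derived intent, and the output would command a motor, with the human wearer inside the loop.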

Biomechatronic applications live at the boundary of prosthetics and robotics, for example to help amputees achieve close-to-normal motion of a prosthetic limb. This work has been demonstrated for many years, with impressive results, at the MIT Media Lab under Prof. Hugh Herr. 12 However, those applications have rarely left the lab environment due to the device cost. That cost could be lowered by production in large quantities, but coverage by health insurance companies or agencies is likely to remain problematic.

Mapping of Use Cases to Classification

Table 2 shows a mapping of the above use cases to the classification introduced in the first section of this paper.


Table 2 . Mapping of use cases to our classification.

Adoption Challenges to AI and Robotics in Health Care

While the range of opportunities, and the achievements to date, of robotics and AI are impressive as seen above, multiple issues impede their deployment and acceptance in daily practice.

Issues related to trust, security, privacy and ethics are prevalent across all aspects of health care, and many are discussed elsewhere in this issue. We will therefore only briefly mention those challenges that are unique to AI and robotics.

Resistance to Technology

Health care professionals may ignore or resist new technologies for multiple reasons, including actual or perceived threats to professional status and autonomy ( 76 ), privacy concerns ( 77 ) or the unresolved legal and ethical questions of responsibility ( 78 ). The issues of worker displacement by robots are just as acute in health care as in other domains. Today, while surgery robots operate increasingly autonomously, humans still perform many tasks and play an essential role in determining the robot's course of operation (e.g., for selecting the process parameters or for the positioning of the patient) ( 79 ). This allocation of responsibilities is bound to evolve.

Transparency and Explainability

Explainability is “a characteristic of an AI-driven system allowing a person to reconstruct why a certain AI came up with the presented prediction” ( 80 ). In contrast to rule-based systems, AI-based predictions often cannot be explained in a human-intelligible manner, which can hide errors or bias (the “black box problem” of machine learning). The explainability of AI models is an ongoing research area. When information on the reasons for an AI-based decision is missing, physicians cannot judge the reliability of the advice, and there is a risk to patient safety.
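One widely used model-agnostic probe from this research area is permutation importance: shuffle the values of a single input feature and measure how much the model's accuracy drops. A minimal sketch on a toy “diagnosis” rule follows; the data and model are invented purely for illustration:

```python
import random

def permutation_importance(model, X, y, feature, trials=20, seed=0):
    """Average accuracy drop when one feature's values are shuffled.
    A large drop means the prediction leaned on that feature; near zero
    means the feature was effectively ignored."""
    rng = random.Random(seed)
    def accuracy(rows):
        return sum(model(r) == t for r, t in zip(rows, y)) / len(y)
    base = accuracy(X)
    drops = []
    for _ in range(trials):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        shuffled = [dict(row, **{feature: v}) for row, v in zip(X, col)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / trials

# Toy "diagnosis" model that depends only on temperature, not on age.
model = lambda r: r["temp"] > 38.0
X = [{"temp": 36.5 + i * 0.5, "age": 40 + i} for i in range(8)]
y = [model(r) for r in X]
```

Here shuffling "age" never changes the output (importance 0), while shuffling "temp" degrades accuracy. Reporting such per-feature contributions alongside a prediction is one practical way to let a physician judge whether the advice rests on clinically plausible grounds.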

Responsibility, Accountability and Liability

Who is responsible when the AI or robot makes mistakes or creates harm in patients? Is it the programmer, manufacturer, end user, the AI/robotic system itself, the provider of the training dataset, or something (or someone) else? The answer depends on the system's degree of autonomy. The European Parliament's 2017 Resolution on AI ( 81 ) assigns legal responsibility for an action of an AI or robotic system to a human actor, which may be its owner, developer, manufacturer or operator.

Data Protection

Machine learning requires access to large quantities of data regarding patients as well as healthy people. This raises issues regarding the ownership of data, protection against theft, compliance with regulations such as HIPAA in the U.S. ( 82 ) or GDPR for European citizens ( 83 ), and what level of anonymization of data is necessary and possible. Regarding the last point, AI models could have unintended consequences, and the evolution of science itself could make patient re-identification possible in the future.

Data Quality and Integration

Currently, the reliability and quality of data received from sensors and digital health devices remain uncertain ( 84 )–a fact that future research and development must address. Datasets in medicine are naturally imperfect (due to noise, errors in documentation, incompleteness, differences in documentation granularities, etc.), hence it is impossible to develop error-free machine learning models ( 80 ). Furthermore, without a way to quickly and reliably integrate the various data sources for analysis, there is lost potential for fast diagnosis by AI algorithms.

Safety and Security

Introducing AI and robotics into the delivery of health care is likely to create new risks and safety issues. Those will exist even under normal functioning circumstances, when they may be due to design, programming or configuration errors, or improper data preparation ( 85 ).

These issues only get worse when considering the probability of cyberattacks:

• Patient data may be exposed or stolen, perhaps by scammers who want to exploit it for profit.

• Security vulnerabilities in robots that interact directly with patients may cause malfunctions that physically threaten the patient or professional. The robot may cause harm directly, or indirectly by giving a surgeon incorrect feedback. In case of unexpected robot behavior, it may be unclear to the user whether the robot is functioning properly or is under attack ( 86 ).

The EU Commission recently drafted a legal framework 13 addressing the risks of AI (not only in health care) in order to improve the safety of and trust in AI. The framework distinguishes four levels of risks: unacceptable risk, high risk, limited risk and minimal risk. AI systems with unacceptable risks will be prohibited, high-risk ones will have to meet strict obligations before release (e.g., risk assessment and mitigation, traceability of results). Limited-risk applications such as chatbots (which can be used in telemedicine) will require “labeling” so that users are made aware that they are interacting with an AI-powered system.

While P5 medicine aims at considering multiple factors–ethnicity, gender, socio-economic background, education, etc.–to come up with individualized care, current implementations of AI often demonstrate potential biases toward certain groups of the population. The training datasets may have under-represented those groups, or important features may be distributed differently across groups–for example, cardiovascular disease and Parkinson's disease progress differently in men and women ( 87 ), so the corresponding features will vary. These causes result in undesirable bias and “unintended or unnecessary discrimination” of subgroups ( 88 ).

On the flip side, careful implementations of AI could explicitly consider gender, ethnicity, etc. differences to achieve more effective treatments for patients belonging to those groups. This can be considered “desirable bias” that counteracts the undesirable kind ( 89 ) and gets us closer to the goals of P5 medicine.

Trust–An Evolving Relationship

The relationship between patients and medical professionals has evolved over time, and AI is likely to affect it by inserting itself into the picture (see Figure 4 ). Even when AI and robotics perform well, human oversight remains essential. Robots and AI algorithms operate logically, but health care often requires acting empathically. If doctors become intelligent users of AI, they may retain the trust associated with their role, but most patients, who have a limited understanding of the technologies involved, would have much difficulty in trusting AI ( 90 ). Conversely, reliable and accurate diagnosis, beneficial treatment, and appropriate use of AI and robotics by the physician can strengthen the patient's trust ( 91 ).


Figure 4 . Physician-patient-AI relationship.

This assumes of course that the designers of those systems adhere to established guidelines for trustworthy AI in the first place, which includes such requirements as creating systems that are lawful, ethical, and robust ( 92 , 93 ).

AI and Robotics for Transformed Health Care–A Converging Path

We can summarize the previous sections as follows:

1. There are many types of AI applications and robotic systems, which can be introduced in many aspects of health care.

2. AI's ability to digest and process enormous amounts of data, and derive conclusions that are not obvious to a human, holds the promise of more personalized and predictive care–key goals of P5 medicine.

3. There have been, over the last few years, a number of proof-of-concept and pilot projects that have exhibited promising results for diagnosis, treatment, and health maintenance. They have not yet been deployed at scale–in part because of the time it takes to fully evaluate their efficacy and safety.

4. There is a rather daunting list of challenges to address, most of which are not purely technical–the key one being demonstrating that the systems are effective and safe enough to warrant the confidence of both the practitioners and their patients.

Based on this analysis, what is the roadmap to success for these technologies, and how will they succeed in contributing to the future of health care? Figure 5 depicts the convergent approaches that need to be developed to ensure safe and productive adoption, in line with the P5 medicine principles.


Figure 5 . Roadmap for transformed health care.

First, AI technology is currently undergoing a remarkable revival and being applied to many domains. Health applications will both benefit from and contribute to further advances. In areas such as image classification or natural language understanding, both of which have obvious utility in health care, the rate of progress is remarkable. Today's AI techniques may seem obsolete in ten years.

Second, the more technical challenges of AI–such as privacy, explainability, or fairness–are being worked on, both in the research community and in the legislative and regulatory world. Standard procedures for assessing the efficacy and safety of systems will be needed, but in reality, this is not a new concept: it is what has been developed over the years to approve new medicines. We need to be consistent and apply the same hard-headed validation processes to the new technologies.

Third, it should be clear from our exploration of this subject that education–of patients as well as of professionals–is key to the societal acceptance of the role that AI and robotics will be called upon to play. Every invention or innovation–from the steam engine to the telephone to the computer–has gone through this process. Practitioners must learn enough about how AI models and robotics work to build a “working relationship” with those tools and build trust in them–just as their predecessors learned to trust what they saw on an X-ray or CT scan. Patients, for their part, need to understand what AI and robotics can or cannot do, how the physician will remain in the loop when appropriate, and what data is being collected about them in the process. We will have a responsibility to ensure that complex systems that patients do not sufficiently understand cannot be misused against them, whether accidentally or deliberately.

Fourth, health care is also a business, involving financial transactions between patients, providers, and insurers (public or private, depending on the country). New cost and reimbursement models will need to be developed, especially given that when AI is used to assist professionals, not replace them, the cost of the system is additive to the human cost of assessing the data and reviewing the system's recommendations.

Fifth and last, clinical pathways have to be adapted and new role models for physicians have to be built. Clinical paths can already differ and make it harder to provide continuity of care to a patient who moves across care delivery systems that have different capabilities. This issue is being addressed by the BPM+ Health Community 14 using the business process, case management and decision modeling standards of the Object Management Group (OMG). The issue will become more complex by integrating AI and robotics: every doctor has similar training and a stethoscope, but not every doctor or hospital will have the same sensors, AI programs, or robots.

Eventually, the convergence of these approaches will help to build a complete digital patient model–a digital twin of each specific human being–generated out of all the data gathered from general practitioners, hospitals, laboratories, mHealth apps, and wearable sensors, along the entire life of the patient. At that point, AI will be able to support superior, fully personal and predictive medicine, while robotics will automate or support many aspects of treatment and care.

Data Availability Statement

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.

Author Contributions

KD came up with the classification of AI and robotic systems. CB identified concrete application examples. Both authors contributed equally, identified adoption challenges, and developed recommendations for future work. Both authors contributed to the article and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

1. ^ https://cmte.ieee.org/futuredirections/2019/07/21/autonomous-systems-in-healthcare/

2. ^ https://emag.medicalexpo.com/ai-powered-chatbots-to-help-against-self-isolation-during-covid-19/

3. ^ https://www.med.ubc.ca/news/sepsis-leading-cause-of-death-worldwide/

4. ^ https://www.sepsis.org/wp-content/uploads/2017/05/Sepsis-Fact-Sheet-2018.pdf

5. ^ https://medcitynews.com/2021/07/johns-hopkins-spinoff-looking-to-build-better-risk-prediction-tools-emerges-with-15m/

6. ^ https://www.patchdmedical.com/

7. ^ https://hisigma.mit.edu

8. ^ https://blog.radiology.virginia.edu/covid-19-and-imaging/

9. ^ https://hitinfrastructure.com/news/diagnostic-robotics-mayo-clinic-bring-triage-platform-to-patients

10. ^ https://www.exer.ai

11. ^ https://www.gcmas.org/map

12. ^ https://www.media.mit.edu/groups/biomechatronics/overview/

13. ^ https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

14. ^ https://www.bpm-plus.org/

1. Amisha, Malik P, Pathania M, Rathaur VK. Overview of artificial intelligence in medicine. J Fam Med Prim Care. (2019) 8:2328–31. doi: 10.4103/jfmpc.jfmpc_440_19


2. van Melle W, Shortliffe EH, Buchanan BG. EMYCIN: a knowledge engineer's tool for constructing rule-based expert systems. In: Buchanan BG, Shortliffe EH, editors. Rule-Based Expert Systems . Reading, MA: Addison-Wesley Publishing Company (1984). p. 302–13.

3. Tursz T, Andre F, Lazar V, Lacroix L, Soria J-C. Implications of personalized medicine—perspective from a cancer center. Nat Rev Clin Oncol. (2011) 8:177–83. doi: 10.1038/nrclinonc.2010.222

4. van't Veer LJ, Dai H, van de Vijver MJ, He YD, Hart AAM, Mao M, et al. Gene expression profiling predicts clinical outcome of breast cancer. Nature. (2002) 415:530–6. doi: 10.1038/415530a

5. Auffray C, Charron D, Hood L. Predictive, preventive, personalized and participatory medicine: back to the future. Genome Med. (2010) 2:57. doi: 10.1186/gm178

6. Hamet P, Tremblay J. Artificial intelligence in medicine. Metabolism. (2017) 69:S36–40. doi: 10.1016/j.metabol.2017.01.011

7. Kim J, Campbell AS, de Ávila BE-F, Wang J. Wearable biosensors for healthcare monitoring. Nat Biotechnol. (2019) 37:389–406. doi: 10.1038/s41587-019-0045-y

8. Nam KH, Kim DH, Choi BK, Han IH. Internet of things, digital biomarker, and artificial intelligence in spine: current and future perspectives. Neurospine. (2019) 16:705–11. doi: 10.14245/ns.1938388.194

9. Steels L, Lopez de Mantaras R. The Barcelona declaration for the proper development and usage of artificial intelligence in Europe. AI Commun. (2018) 31:485–94. doi: 10.3233/AIC-180607

10. Olshannikova E, Ometov A, Koucheryavy Y, Olsson T. Visualizing big data with augmented and virtual reality: challenges and research agenda. J Big Data. (2015) 2:22. doi: 10.1186/s40537-015-0031-2


11. Björnsson B, Borrebaeck C, Elander N, Gasslander T, Gawel DR, Gustafsson M, et al. Digital twins to personalize medicine. Genome Med. (2019) 12:4. doi: 10.1186/s13073-019-0701-3

12. Bates M. Health care chatbots are here to help. IEEE Pulse. (2019) 10:12–4. doi: 10.1109/MPULS.2019.2911816

13. Corral-Acero J, Margara F, Marciniak M, Rodero C, Loncaric F, Feng Y, et al. The “Digital Twin” to enable the vision of precision cardiology. Eur Heart J. (2020) 41:4556–64. doi: 10.1093/eurheartj/ehaa159

14. Montani S, Striani M. Artificial intelligence in clinical decision support: a focused literature survey. Yearb Med Inform. (2019) 28:120–7. doi: 10.1055/s-0039-1677911

15. Oemig F, Blobel B. Natural language processing supporting interoperability in healthcare. In: Biemann C, Mehler A, editors. Text Mining. Cham: Springer International Publishing (2014). p. 137–56. (Theory and Applications of Natural Language Processing). doi: 10.1007/978-3-319-12655-5_7

16. Bitterman DS, Aerts HJWL, Mak RH. Approaching autonomy in medical artificial intelligence. Lancet Digit Health. (2020) 2:e447–9. doi: 10.1016/S2589-7500(20)30187-4

17. Carriere J, Fong J, Meyer T, Sloboda R, Husain S, Usmani N, et al. An Admittance-Controlled Robotic Assistant for Semi-Autonomous Breast Ultrasound Scanning. In: 2019 International Symposium on Medical Robotics (ISMR). Atlanta, GA: IEEE (2019). p. 1–7. doi: 10.1109/ISMR.2019.8710206


18. Tao R, Ocampo R, Fong J, Soleymani A, Tavakoli M. Modeling and emulating a physiotherapist's role in robot-assisted rehabilitation. Adv Intell Syst. (2020) 2:1900181. doi: 10.1002/aisy.201900181

19. Tavakoli M, Carriere J, Torabi A. Robotics, smart wearable technologies, and autonomous intelligent systems for healthcare during the COVID-19 pandemic: an analysis of the state of the art and future vision. Adv Intell Syst. (2020) 2:2000071. doi: 10.1002/aisy.202000071

20. Ahn HS, Yep W, Lim J, Ahn BK, Johanson DL, Hwang EJ, et al. Hospital receptionist robot v2: design for enhancing verbal interaction with social skills. In: 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). New Delhi: IEEE (2019). p. 1–6. doi: 10.1109/RO-MAN46459.2019.8956300

21. Lane T. A short history of robotic surgery. Ann R Coll Surg Engl. (2018) 100:5–7. doi: 10.1308/rcsann.supp1.5

22. Mezger U, Jendrewski C, Bartels M. Navigation in surgery. Langenbecks Arch Surg. (2013) 398:501–14. doi: 10.1007/s00423-013-1059-4

23. Luxton DD, June JD, Sano A, Bickmore T. Intelligent mobile, wearable, and ambient technologies for behavioral health care. In: Artificial Intelligence in Behavioral and Mental Health Care . Elsevier (2016). p. 137–62. Available online at: https://linkinghub.elsevier.com/retrieve/pii/B9780124202481000064


24. Casilari E, Oviedo-Jiménez MA. Automatic fall detection system based on the combined use of a smartphone and a smartwatch. PLoS ONE. (2015) 10:e0140929. doi: 10.1371/journal.pone.0140929

25. Sriram KNV, Palaniswamy S. Mobile robot assistance for disabled and senior citizens using hand gestures. In: 2019 International Conference on Power Electronics Applications and Technology in Present Energy Scenario (PETPES) . Mangalore: IEEE (2019). p. 1–6. doi: 10.1109/PETPES47060.2019.9003821

26. Nibras N, Liu C, Mottet D, Wang C, Reinkensmeyer D, Remy-Neris O, et al. Dissociating sensorimotor recovery and compensation during exoskeleton training following stroke. Front Hum Neurosci. (2021) 15:645021. doi: 10.3389/fnhum.2021.645021

27. Maciejasz P, Eschweiler J, Gerlach-Hahn K, Jansen-Troy A, Leonhardt S. A survey on robotic devices for upper limb rehabilitation. J NeuroEngineering Rehabil. (2014) 11:3. doi: 10.1186/1743-0003-11-3

28. Akbari A, Haghverd F, Behbahani S. Robotic home-based rehabilitation systems design: from a literature review to a conceptual framework for community-based remote therapy during COVID-19 pandemic. Front Robot AI. (2021) 8:612331. doi: 10.3389/frobt.2021.612331

29. Howard MC. A meta-analysis and systematic literature review of virtual reality rehabilitation programs. Comput Hum Behav. (2017) 70:317–27. doi: 10.1016/j.chb.2017.01.013

30. Gorman C, Gustafsson L. The use of augmented reality for rehabilitation after stroke: a narrative review. Disabil Rehabil Assist Technol . (2020) 17:409–17. doi: 10.1080/17483107.2020.1791264

31. Li A, Montaño Z, Chen VJ, Gold JI. Virtual reality and pain management: current trends and future directions. Pain Manag. (2011) 1:147–57. doi: 10.2217/pmt.10.15

32. Tulu B, Chatterjee S, Laxminarayan S. A taxonomy of telemedicine efforts with respect to applications, infrastructure, delivery tools, type of setting and purpose. In: Proceedings of the 38th Annual Hawaii International Conference on System Sciences . Big Island, HI: IEEE (2005). p. 147.

33. Lai L, Wittbold KA, Dadabhoy FZ, Sato R, Landman AB, Schwamm LH, et al. Digital triage: novel strategies for population health management in response to the COVID-19 pandemic. Healthc Amst Neth. (2020) 8:100493. doi: 10.1016/j.hjdsi.2020.100493

34. Valtolina S, Marchionna M. Design of a chatbot to assist the elderly. In: Fogli D, Tetteroo D, Barricelli BR, Borsci S, Markopoulos P, Papadopoulos GA, Editors. End-User Development . Cham: Springer International Publishing (2021). p. 153–68. (Lecture Notes in Computer Science; Bd. 12724).


35. Falck F, Doshi S, Tormento M, Nersisyan G, Smuts N, Lingi J, et al. Robot DE NIRO: a human-centered, autonomous, mobile research platform for cognitively-enhanced manipulation. Front Robot AI. (2020) 7:66. doi: 10.3389/frobt.2020.00066

36. Bohr A, Memarzadeh K, editors. The rise of artificial intelligence in healthcare applications. In: Artificial Intelligence in Healthcare. Oxford: Elsevier (2020). p. 25–60. doi: 10.1016/B978-0-12-818438-7.00002-2

37. Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digit Med. (2020) 3:17. doi: 10.1038/s41746-020-0221-y


38. Saddler N, Harvey G, Jessa K, Rosenfield D. Clinical decision support systems: opportunities in pediatric patient safety. Curr Treat Options Pediatr. (2020) 6:325–35. doi: 10.1007/s40746-020-00206-3

39. Deng H, Wu Q, Qin B, Chow SSM, Domingo-Ferrer J, Shi W. Tracing and revoking leaked credentials: accountability in leaking sensitive outsourced data. In: Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security . New York, NY: Association for Computing Machinery (2014). p. 425–34. (ASIA CCS'14). doi: 10.1145/2590296.2590342

40. Leventhal R. How Natural Language Processing is Helping to Revitalize Physician Documentation . Cleveland, OH: Healthc Inform (2017). Vol. 34, p. 8–13.

41. Gu Q, Nie C, Zou R, Chen W, Zheng C, Zhu D, et al. Automatic generation of electromyogram diagnosis report. In: 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) . Seoul: IEEE (2020). p. 1645–50.

42. Jain V, Wason R, Chatterjee JM, Le D-N, editor. Ontology-Based Information Retrieval For Healthcare Systems. 1 st ed . Wiley-Scrivener (2020). doi: 10.1002/9781119641391

43. Awad A, Bader–El–Den M, McNicholas J. Patient length of stay and mortality prediction: a survey. Health Serv Manage Res. (2017) 30:105–20. doi: 10.1177/0951484817696212

44. Mahajan SM, Mahajan A, Nguyen C, Bui J, Abbott BT, Osborne TF. Predictive models for identifying risk of readmission after index hospitalization for hip arthroplasty: a systematic review. J Orthop. (2020) 22:73–85. doi: 10.1016/j.jor.2020.03.045

45. Ceylan H, Yasa IC, Kilic U, Hu W, Sitti M. Translational prospects of untethered medical microrobots. Prog Biomed Eng. (2019) 1:012002. doi: 10.1088/2516-1091/ab22d5

46. Sánchez S, Soler L, Katuri J. Chemically powered micro- and nanomotors. Angew Chem Int Ed Engl. (2015) 54:1414–44. doi: 10.1002/anie.201406096

47. Schuerle S, Soleimany AP, Yeh T, Anand GM, Häberli M, Fleming HE, et al. Synthetic and living micropropellers for convection-enhanced nanoparticle transport. Sci Adv. (2019) 5:eaav4803. doi: 10.1126/sciadv.aav4803

48. Erkoc P, Yasa IC, Ceylan H, Yasa O, Alapan Y, Sitti M. Mobile microrobots for active therapeutic delivery. Adv Ther. (2019) 2:1800064. doi: 10.1002/adtp.201800064

49. Yu C, Kim J, Choi H, Choi J, Jeong S, Cha K, et al. Novel electromagnetic actuation system for three-dimensional locomotion and drilling of intravascular microrobot. Sens Actuators Phys. (2010) 161:297–304. doi: 10.1016/j.sna.2010.04.037

50. Chang D, Lim M, Goos JACM, Qiao R, Ng YY, Mansfeld FM, et al. Biologically Targeted magnetic hyperthermia: potential and limitations. Front Pharmacol. (2018) 9:831. doi: 10.3389/fphar.2018.00831

51. Phan GH. Artificial intelligence in rehabilitation evaluation based robotic exoskeletons: a review. EEO. (2021) 20:6203–11. doi: 10.1007/978-981-16-9551-3_6

52. Hwang J, Kumar Yerriboina VN, Ari H, Kim JH. Effects of passive back-support exoskeletons on physical demands and usability during patient transfer tasks. Appl Ergon. (2021) 93:103373. doi: 10.1016/j.apergo.2021.103373

53. Hager G, Kumar V, Murphy R, Rus D, Taylor R. The Role of Robotics in Infectious Disease Crises. ArXiv201009909 Cs (2020).

54. Walker ME, Hedayati H, Szafir D. Robot teleoperation with augmented reality virtual surrogates. In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI) . Daegu: IEEE (2019). p. 202–10. doi: 10.1109/HRI.2019.8673306

55. Ding M, Matsubara T, Funaki Y, Ikeura R, Mukai T, Ogasawara T. Generation of comfortable lifting motion for a human transfer assistant robot. Int J Intell Robot Appl. (2017) 1:74–85. doi: 10.1007/s41315-016-0009-z

56. Mohebali D, Kittleson MM. Remote monitoring in heart failure: current and emerging technologies in the context of the pandemic. Heart. (2021) 107:366–72. doi: 10.1136/heartjnl-2020-318062

57. Blasco R, Marco Á, Casas R, Cirujano D, Picking R. A smart kitchen for ambient assisted living. Sensors. (2014) 14:1629–53. doi: 10.3390/s140101629

58. Valentí Soler M, Agüera-Ortiz L, Olazarán Rodríguez J, Mendoza Rebolledo C, Pérez Muñoz A, Rodríguez Pérez I, et al. Social robots in advanced dementia. Front Aging Neurosci. (2015) 7:133. doi: 10.3389/fnagi.2015.00133

59. Bickmore TW, Mitchell SE, Jack BW, Paasche-Orlow MK, Pfeifer LM, O'Donnell J. Response to a relational agent by hospital patients with depressive symptoms. Interact Comput. (2010) 22:289–98. doi: 10.1016/j.intcom.2009.12.001

60. Chatzimina M, Koumakis L, Marias K, Tsiknakis M. Employing conversational agents in palliative care: a feasibility study and preliminary assessment. In: 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering (BIBE) . Athens: IEEE (2019). p. 489–96. doi: 10.1109/BIBE.2019.00095

61. Cecula P, Yu J, Dawoodbhoy FM, Delaney J, Tan J, Peacock I, et al. Applications of artificial intelligence to improve patient flow on mental health inpatient units - narrative literature review. Heliyon. (2021) 7:e06626. doi: 10.1016/j.heliyon.2021.e06626

62. Riek LD. Healthcare robotics. Comm ACM. (2017) 60:68–78. doi: 10.1145/3127874

63. Burdick H, Pino E, Gabel-Comeau D, McCoy A, Gu C, Roberts J, et al. Effect of a sepsis prediction algorithm on patient mortality, length of stay and readmission: a prospective multicentre clinical outcomes evaluation of real-world patient data from US hospitals. BMJ Health Care Inform. (2020) 27:e100109. doi: 10.1136/bmjhci-2019-100109

64. Goh KH, Wang L, Yeow AYK, Poh H, Li K, Yeow JJL, et al. Artificial intelligence in sepsis early prediction and diagnosis using unstructured data in healthcare. Nat Commun. (2021) 12:711. doi: 10.1038/s41467-021-20910-4

65. Uckun S. Intelligent systems in patient monitoring and therapy management. a survey of research projects. Int J Clin Monit Comput. (1994) 11:241–53. doi: 10.1007/BF01139876

66. Gholami B, Haddad WM, Bailey JM. AI in the ICU: in the intensive care unit, artificial intelligence can keep watch. IEEE Spectr. (2018) 55:31–5. doi: 10.1109/MSPEC.2018.8482421

67. Nam JG, Hwang EJ, Kim DS, Yoo S-J, Choi H, Goo JM, et al. Undetected lung cancer at posteroanterior chest radiography: potential role of a deep learning–based detection algorithm. Radiol Cardiothorac Imaging. (2020) 2:e190222. doi: 10.1148/ryct.2020190222

68. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. (2017) 542:115–8. doi: 10.1038/nature21056

69. Scudellari M. AI Recognizes COVID-19 in the Sound of a Cough . Available online at: https://spectrum.ieee.org/the-human-os/artificial-intelligence/medical-ai/ai-recognizes-covid-19-in-the-sound-of-a-cough (accessed November 4, 2020).

70. Ai T, Yang Z, Hou H, Zhan C, Chen C, Lv W, et al. Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases. Radiology. (2020) 296:E32–40. doi: 10.1148/radiol.2020200642

71. Farahmand S, Shabestari O, Pakrah M, Hossein-Nejad H, Arbab M, Bagheri-Hariri S. Artificial intelligence-based triage for patients with acute abdominal pain in emergency department; a diagnostic accuracy study. Adv J Emerg Med. (2017) 1:e5. doi: 10.22114/AJEM.v1i1.11

72. Poplin R, Varadarajan AV, Blumer K, Liu Y, McConnell MV, Corrado GS, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng. (2018) 2:158–64. doi: 10.1038/s41551-018-0195-0

73. Lee L. Gait analysis for classification . (Bd. Thesis Ph. D.)–Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science (2002). Available online at: http://hdl.handle.net/1721.1/8116

74. Foster M. Aging Japan: Robots May Have Role in Future of Elder Care . Healthcare & Pharma. Available online at: https://www.reuters.com/article/us-japan-ageing-robots-widerimage-idUSKBN1H33AB (accessed March 28, 2018).

75. Pu L, Moyle W, Jones C. How people with dementia perceive a therapeutic robot called PARO in relation to their pain and mood: a qualitative study. J Clin Nurs February. (2020) 29:437–46. doi: 10.1111/jocn.15104

76. Walter Z, Lopez MS. Physician acceptance of information technologies: role of perceived threat to professional autonomy. Decis Support Syst. (2008) 46:206–15. doi: 10.1016/j.dss.2008.06.004

77. Price WN, Cohen IG. Privacy in the age of medical big data. Nat Med. (2019) 25:37–43. doi: 10.1038/s41591-018-0272-7

78. Lamanna C, Byrne L. Should artificial intelligence augment medical decision making? the case for an autonomy algorithm. AMA J Ethics. (2018) 20:E902–910. doi: 10.1001/amajethics.2018.902

79. Fosch-Villaronga E, Drukarch H. On Healthcare Robots . Leiden: Leiden University (2021). Available online at: https://arxiv.org/ftp/arxiv/papers/2106/2106.03468.pdf

80. The The Precise4Q consortium, Amann J, Blasimme A, Vayena E, Frey D, Madai VI. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak . (2020) 20:310. doi: 10.1186/s12911-020-01332-6

81. European Parliament. Resolution with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). (2017). Available online at: http://www.europarl.europa.eu/

82. Mercuri RT. The HIPAA-potamus in health care data security. Comm ACM. (2004) 47:25–8. doi: 10.1145/1005817.1005840

83. Marelli L, Lievevrouw E, Van Hoyweghen I. Fit for purpose? the GDPR and the governance of European digital health. Policy Stud. (2020) 41:447–67. doi: 10.1080/01442872.2020.1724929

84. Poitras I, Dupuis F, Bielmann M, Campeau-Lecours A, Mercier C, Bouyer L, et al. Validity and reliability of wearable sensors for joint angle estimation: a systematic review. Sensors. (2019) 19:1555. doi: 10.3390/s19071555

85. Macrae C. Governing the safety of artificial intelligence in healthcare. BMJ Qual Saf June. (2019) 28:495–8. doi: 10.1136/bmjqs-2019-009484

86. Fosch-Villaronga E, Mahler T. Cybersecurity, safety and robots: strengthening the link between cybersecurity and safety in the context of care robots. Comput Law Secur Rev. (2021) 41:105528. doi: 10.1016/j.clsr.2021.105528

87. Miller IN, Cronin-Golomb A. Gender differences in Parkinson's disease: clinical characteristics and cognition. Mov Disord Off J Mov Disord Soc. (2010) 25:2695–703. doi: 10.1002/mds.23388

88. Cirillo D, Catuara-Solarz S, Morey C, Guney E, Subirats L, Mellino S, et al. Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare. Npj Digit Med. (2020) 3:81. doi: 10.1038/s41746-020-0288-5

89. Wolff RF, Moons KGM, Riley RD, Whiting PF, Westwood M, Collins GS, et al. PROBAST: a tool to assess the risk of bias and applicability of prediction model studies. Ann Intern Med. (2019) 170:51–8. doi: 10.7326/M18-1376

90. LaRosa E, Danks D. Impacts on trust of healthcare AI. In: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society . New Orleans, LA: ACM (2018). p. 210–5. doi: 10.1145/3278721.3278771

91. Lee D, Yoon SN. Application of artificial intelligence-based technologies in the healthcare industry: opportunities and challenges. Int J Environ Res Public Health. (2021) 18:271. doi: 10.3390/ijerph18010271

92. Smuha NA. Ethics guidelines for trustworthy AI. Comput Law Rev Int. (2019) 20:97–106. doi: 10.9785/cri-2019-200402

93. Grinbaum A, Chatila R, Devillers L, Ganascia J-G, Tessier C, Dauchet M. Ethics in robotics research: CERNA mission and context. IEEE Robot Autom Mag. (2017) 24:139–45. doi: 10.1109/MRA.2016.2611586

Keywords: artificial intelligence, robotics, healthcare, personalized medicine, P5 medicine

Citation: Denecke K and Baudoin CR (2022) A Review of Artificial Intelligence and Robotics in Transformed Health Ecosystems. Front. Med. 9:795957. doi: 10.3389/fmed.2022.795957

Received: 15 October 2021; Accepted: 15 June 2022; Published: 06 July 2022.

Copyright © 2022 Denecke and Baudoin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Kerstin Denecke, kerstin.denecke@bfh.ch

This article is part of the Research Topic

Managing Healthcare Transformation Towards P5 Medicine

2024 Theses Doctoral

Artificial Intelligence vs. Human Coaches: A Mixed Methods Randomized Controlled Experiment on Client Experiences and Outcomes

Barger, Amber

The rise of artificial intelligence (AI) challenges us to explore whether human-to-human relationships can extend to AI, potentially reshaping the future of coaching. The purpose of this study was to examine client perceptions of being coached by a simulated AI coach, embodied as a vocally conversational live-motion avatar, compared with client perceptions of a human coach. It explored if and how client ratings of coaching process and outcome measures aligned between the two coach treatments. In this mixed methods randomized controlled trial (RCT), 81 graduate students enrolled in the study and identified a personally relevant goal to pursue. The study deployed an alternative-treatments between-subjects design: one-third of participants received coaching from simulated AI coaches, another third engaged with seasoned human coaches, and the rest formed the control group. Both treatment groups had one 60-minute session guided by the CLEAR (contract, listen, explore, action, review) coaching model to help each person gain clarity about their goal and identify specific behaviors that could support progress toward it. Quantitative data were captured through three surveys; qualitative input was captured through open-ended survey questions and 27 debrief interviews. The study used a Wizard of Oz technique from human-computer interaction research to sidestep the rapid obsolescence of technology: participants unknowingly interacted with professional human coaches while believing they were using an advanced AI coaching system, allowing responses to AI coaching to be assessed in the absence of fully developed autonomous AI. The aim was to glean insights into client reactions to a future, fully autonomous AI with the expert capabilities of a human coach.
Contrary to expectations from previous literature, participants did not rate professional human coaches higher than simulated AI coaches in terms of working alliance, session value, or outcomes, which included self-rated competence and goal achievement. In fact, both coached groups made significant progress compared to the control group, with participants convincingly engaging with their respective coaches, as confirmed by a novel believability index. The findings challenge prevailing assumptions about human uniqueness in relation to technology. The rapid advancement of AI suggests a revolutionary shift in coaching, where AI could take on a central and surprisingly effective role, redefining what we thought only human coaches could do and reshaping their role in the age of AI.
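The three-arm, between-subjects allocation described in the abstract can be sketched in a few lines of Python. This is a minimal illustration only; the function name, arm labels, and fixed seed are assumptions for the example, not details taken from the study:

```python
import random

def assign_arms(participant_ids, arms=("ai_coach", "human_coach", "control"), seed=0):
    """Randomly split participants as evenly as possible across the arms."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)  # randomize order before slicing into groups
    k = len(ids) // len(arms)
    assignment = {}
    for i, arm in enumerate(arms):
        # The last arm absorbs any remainder when the split is uneven.
        group = ids[i * k:] if i == len(arms) - 1 else ids[i * k:(i + 1) * k]
        for pid in group:
            assignment[pid] = arm
    return assignment

assignment = assign_arms(range(81))
counts = {arm: list(assignment.values()).count(arm)
          for arm in ("ai_coach", "human_coach", "control")}
print(counts)  # 81 participants split into three arms of 27 each
```

With 81 participants and three arms, each group receives 27 people, matching the one-third split the study describes.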

  • Adult education
  • Artificial intelligence--Educational applications
  • Graduate students
  • Educational technology--Evaluation
  • Education, Higher--Technological innovations
  • Education, Higher--Effect of technological innovations on

This item is currently under embargo. It will be available starting 2029-05-14.



COMMENTS

  1. Artificial intelligence, machine learning and deep learning in advanced robotics, a review

    1. Introduction. Artificial intelligence (AI), machine learning (ML), and deep learning (DL) are all important technologies in the field of robotics [1]. The term artificial intelligence (AI) describes a machine's capacity to carry out operations that ordinarily require human intellect, such as speech recognition, understanding of natural language, and decision-making.

  2. Intelligent Physical Robots in Health Care: Systematic Literature Review

    Background. With the development of artificial intelligence (AI), physical robots with intelligent capabilities based on AI (hereinafter intelligent physical robots) have been applied in the health care context to expand the digitization of health care work processes and increase the use, fairness, and cost-effectiveness of health care services, such as in smart health care services [1,2 ...

  3. The AI revolution is coming to robots: how will it change them?

    Gopalakrishnan thinks that hooking up AI brains to physical robots will improve the foundation models, for example giving them better spatial reasoning. Meta, says Rai, is among those pursuing the ...

  4. A Systematic Review of Artificial Intelligence and Robots in Value Co

    As artificial intelligence (AI) and robots are increasingly taking place in practical service solutions, it is necessary to understand technology in value co-creation. We conducted a systematic literature review on the topic to advance theoretical analysis of AI and robots in value co-creation.

  5. A Systematic Literature Review on the Applications of Robots and ...

    Natural language processing (NLP) is the art of investigating others' positive and cooperative communication and rapprochement with others as well as the art of communicating and speaking with others. Furthermore, NLP techniques may substantially enhance most phases of the information-system lifecycle, facilitate access to information for users, and allow for new paradigms in the usage of ...

  6. Augmented Reality Meets Artificial Intelligence in Robotics: A

    To the best of our knowledge, this is the first literature review combining AR and AI in robotics where papers are systematically collected, reviewed, and analyzed. A categorical analysis is presented, where papers are classified based on which technology supports the other, i.e., AR supporting AI or vice versa, all under the hood of robotics.

  7. Recent Advances in Robotics and Intelligent Robots Applications

    Perception in robotics refers to the ability of robots to sense and understand their environment using various sensors, such as cameras, LiDAR, millimeter-wave radars, and ultrasonic sensors. This includes tasks such as object detection, recognition, localization, and mapping (well known as SLAM—simultaneous localization and mapping) [ 10 ].

  8. Electronics

    Intelligent robotics has the potential to revolutionize various industries by amplifying output, streamlining operations, and enriching customer interactions. This systematic literature review aims to analyze emerging technologies and trends in intelligent robotics, addressing key research questions, identifying challenges and opportunities, and proposing the best practices for responsible and ...

  9. Exploring the impact of Artificial Intelligence and robots on higher

    Artificial Intelligence (AI) and robotics are likely to have a significant long-term impact on higher education (HE). The scope of this impact is hard to grasp partly because the literature is siloed, as well as the changing meaning of the concepts themselves. But developments are surrounded by controversies in terms of what is technically possible, what is practical to implement and what is ...

  10. Primer on artificial intelligence and robotics

    Research on robotics and artificial intelligence builds off of the substantial body of literature surrounding innovation and technological development. Innovation is a key factor in contributing to economic growth (Solow 1957; Romer 1990) and has been an area of interest for both theorists and policymakers for decades.

  11. A Review of Artificial Intelligence and Robotics in Transformed Health

    Abstract. Health care is shifting toward becoming proactive according to the concept of P5 medicine: a predictive, personalized, preventive, participatory and precision discipline. This patient-centered care heavily leverages the latest technologies of artificial intelligence (AI) and robotics that support diagnosis, decision making and treatment.

  12. From Explainable to Interactive AI: A Literature Review on Current

    AI systems are increasingly being adopted across various domains and application areas. With this surge, there is a growing research focus and societal concern for actively involving humans in developing, operating, and adopting these systems. Despite this concern, most existing literature on AI and Human-Computer Interaction (HCI) primarily focuses on explaining how AI systems operate and, at ...

  13. Advances in the Application of AI Robots in Critical Care: Scoping Review

    The literature search was carried out on May 1, 2023, across 3 databases: PubMed, Embase, and the IEEE Xplore Digital Library. Eligible publications were initially screened based on their titles and abstracts. ... Conclusions: This review highlights the potential of AI robots to transform ICU care by improving patient treatment, support, and ...

  14. Human rights for robots? A literature review

    Abstract. This literature review of the most prominent academic and non-academic publications in the last 10 years on the question of whether intelligent robots should be entitled to human rights is the first review of its kind in the academic context. We review three challenging academic contributions and six non-academic but important popular ...

  15. A Review of Future and Ethical Perspectives of Robotics and AI

    In recent years, there has been increased attention on the possible impact of future robotics and AI systems. Prominent thinkers have publicly warned about the risk of a dystopian future when the complexity of these systems progresses further. These warnings stand in contrast to the current state-of-the-art of the robotics and AI technology. This article reviews work considering both the ...

  16. (PDF) A Systematic Review of Artificial Intelligence and Robots in

    With the identification of the first set of literature on AI and robots in value co-creation, we push forward an important sub-field of value co-creation literature.

  17. Artificial intelligence, robotics, advanced technologies and human

    Methodology. To delineate research patterns and discern avenues for future studies related to intelligent automation in HRM, we conducted a systematic literature review following the suggestions made by Tranfield et al. (Citation 2003) as well as Crossan and Apaydin (Citation 2010).A systematic approach was deemed appropriate because it enhances the overall quality of the review by using a ...

  18. Artificial Intelligence and Robotics for Prefabricated and Modular

    This paper aims to explore future research directions on AIR for prefabricated and modular construction through a systematic literature review drawing on a concept-methodology-value philosophical framework. The analysis involves 97 published journal articles carefully identified through the Web of Science and Scopus databases.

  19. Advances in the Application of AI Robots in Critical Care: Scoping Review

    The literature search was carried out on May 1, 2023, across 3 databases: PubMed, Embase, and the IEEE Xplore Digital Library. Eligible publications were initially screened based on their titles and abstracts. ... This review highlights the potential of AI robots to transform ICU care by improving patient treatment, support, and rehabilitation ...

  20. A Literature Review on New Robotics: Automation from Love to War

    This article investigates the social significance of robotics for the years to come in Europe and the US by studying robotics developments in five different areas: the home, health care, traffic, the police force, and the army. Our society accepts the use of robots to perform dull, dangerous, and dirty industrial jobs. But now that robotics is moving out of the factory, the relevant question ...

  21. Frontiers

    A Review of Artificial Intelligence and Robotics in Transformed Health Ecosystems. Kerstin Denecke 1* Claude R. Baudoin 2. 1 Institute for Medical Information, Bern University of Applied Sciences, Bern, Switzerland. 2 Object Management Group, Needham, MA, United States. Health care is shifting toward becoming proactive according to the concept of ...

  22. Robotics

    This literature review presents a comprehensive analysis of the use and potential application scenarios of collaborative robots in the industrial working world, focusing on their impact on human work, safety, and health in the context of Industry 4.0. The aim is to provide a holistic evaluation of the employment of collaborative robots in the current and future working world, which is being ...

  23. A Systematic Review of Artificial Intelligence and Robots in Value Co

    As artificial intelligence (AI) and robots are increasingly taking place in practical service solutions, it is necessary to understand technology in value co-creation. We conducted a systematic literature review on the topic to advance theoretical analysis of AI and robots in value co-creation. By systematically reviewing 61 AI

  24. Artificial Intelligence vs. Human Coaches: A Mixed Methods Randomized

    The rise of artificial intelligence (AI) challenges us to explore whether human-to-human relationships can extend to AI, potentially reshaping the future of coaching. The purpose of this study was to examine client perceptions of being coached by a simulated AI coach, who was embodied as a vocally conversational live-motion avatar, compared to client perceptions of a human coach. It explored ...

  25. AFR AI Summit 2024 LIVE: Ed Husic calls for company tax cuts

    Here's a recap of some of the news from the Summit so far: Husic calls for lower corporate taxes: Industry Minister Ed Husic has called for a lowering of corporate tax, either via direct ...

  26. 'Inflection point in history': Government unveils robotics plan

    Science and Industry Minister Ed Husic will unveil a long-planned National Robotics Strategy at the inaugural The Australian Financial Review Artificial Intelligence Summit in Sydney on Tuesday.