How to write a good scientific review article

Affiliation.

  • 1 The FEBS Journal Editorial Office, Cambridge, UK.
  • PMID: 35792782
  • DOI: 10.1111/febs.16565

Literature reviews are valuable resources for the scientific community. With research accelerating at an unprecedented speed in recent years and more and more original papers being published, review articles have become increasingly important as a means to keep up to date with developments in a particular area of research. A good review article provides readers with an in-depth understanding of a field and highlights key gaps and challenges to address with future research. Writing a review article also helps to expand the writer's knowledge of their specialist area and to develop their analytical and communication skills, amongst other benefits. Thus, the importance of building review-writing into a scientific career cannot be overstated. In this instalment of The FEBS Journal's Words of Advice series, I provide detailed guidance on planning and writing an informative and engaging literature review.

© 2022 Federation of European Biochemical Societies.

Publication types

  • Review Literature as Topic*

Writing a good review article

As a young researcher, you might wonder how to start writing your first review article, and the extent of the information that it should contain. A review article is a comprehensive summary of the current understanding of a specific research topic and is based on previously published research. Unlike research papers, it does not contain new results, but can propose new inferences based on the combined findings of previous research.

Types of review articles

Review articles are typically of three types: literature reviews, systematic reviews, and meta-analyses.

A literature review is a general survey of the research topic and aims to provide a reliable and unbiased account of the current understanding of the topic.

A systematic review, in contrast, is more specific and attempts to address a highly focused research question. Its presentation is more detailed, with information on the search strategy used, the eligibility criteria for inclusion of studies, the methods utilized to review the collected information, and more.

A meta-analysis is similar to a systematic review in that both are systematically conducted with a properly defined research question. However, unlike a systematic review, a meta-analysis compares and evaluates a defined number of similar studies. It is quantitative in nature and can help assess contrasting study findings.

Tips for writing a good review article

Here are a few practices that can make the time-consuming process of writing a review article easier:

  • Define your question: Take your time to identify the research question and carefully articulate the topic of your review paper. A good review should also add something new to the field in terms of a hypothesis, inference, or conclusion. A carefully defined scientific question will give you more clarity in determining the novelty of your inferences.
  • Identify credible sources: Identify relevant as well as credible studies that you can base your review on, with the help of multiple databases or search engines. It is also a good idea to conduct another search once you have finished your article to avoid missing relevant studies published during the course of your writing.
  • Take notes: A literature search involves extensive reading, which can make it difficult to recall relevant information subsequently. Therefore, make notes while conducting the literature search and note down the source references. This will ensure that you have sufficient information to start with when you finally get to writing.
  • Describe the title, abstract, and introduction: A good starting point to begin structuring your review is by drafting the title, abstract, and introduction. Explicitly writing down what your review aims to address in the field will help shape the rest of your article.
  • Be unbiased and critical: Evaluate every piece of evidence in a critical but unbiased manner. This will help you present a proper assessment and a critical discussion in your article.
  • Include a good summary: End by stating the take-home message and identifying the limitations of existing studies that need to be addressed through future research.
  • Ask for feedback: Ask a colleague to provide feedback on both the content and the language or tone of your article before you submit it.
  • Check your journal’s guidelines: Some journals only publish reviews, while some only publish research articles. Further, all journals clearly indicate their aims and scope. Therefore, make sure to check the appropriateness of a journal before submitting your article.

Writing review articles, especially systematic reviews or meta-analyses, can seem like a daunting task. However, Elsevier Author Services can guide you by providing useful tips on how to write an impressive review article that stands out and gets published!


Reviewing review articles

A review article is written to summarize the current state of understanding on a topic, and peer reviewing these types of articles requires a slightly different set of criteria compared with empirical articles. Unless it is a systematic review or meta-analysis, methods are typically not reported and are less central to the assessment. The quality of a review article can be judged on aspects such as timeliness, the breadth and accuracy of the discussion, and whether it indicates the best avenues for future research. The review article should present an unbiased summary of the current understanding of the topic, and the peer reviewer must therefore assess the selection of studies cited in the paper. Because a review article contains a large amount of detailed information, its structure and flow are also important.


How to Write an Article Review (With Examples)

Last Updated: April 24, 2024 Fact Checked

Preparing to Write Your Review

This article was co-authored by Jake Adams. Jake Adams is an academic tutor and the owner of Simplifi EDU, a Santa Monica, California-based online tutoring business offering learning resources and online tutors for academic subjects K-College, SAT & ACT prep, and college admissions applications. With over 14 years of professional tutoring experience, Jake is dedicated to providing his clients the very best online tutoring experience and access to a network of excellent undergraduate and graduate-level tutors from top colleges all over the nation. Jake holds a BS in International Business and Marketing from Pepperdine University. There are 12 references cited in this article, which can be found at the bottom of the page.

An article review is both a summary and an evaluation of another writer's article. Teachers often assign article reviews to introduce students to the work of experts in the field. Experts also are often asked to review the work of other professionals. Understanding the main points and arguments of the article is essential for an accurate summation. Logical evaluation of the article's main theme, supporting arguments, and implications for further research is an important element of a review . Here are a few guidelines for writing an article review.

Education specialist Alexander Peterman recommends: "In the case of a review, your objective should be to reflect on the effectiveness of what has already been written, rather than writing to inform your audience about a subject."

Article Review 101

  • Read the article very closely, and then take time to reflect on your evaluation. Consider whether the article effectively achieves what it set out to.
  • Write out a full article review by completing your intro, summary, evaluation, and conclusion. Don't forget to add a title, too!
  • Proofread your review for mistakes (like grammar and usage), while also cutting down on needless information.

Step 1 Understand what an article review is.

  • Article reviews present more than just an opinion. You will engage with the text to create a response to the scholarly writer's ideas. You will respond to and use ideas, theories, and research from your studies. Your critique of the article will be based on proof and your own thoughtful reasoning.
  • An article review only responds to the author's research. It typically does not provide any new research. However, if you are correcting misleading or otherwise incorrect points, some new data may be presented.
  • An article review both summarizes and evaluates the article.

Step 2 Think about the organization of the review article.

  • Summarize the article. Focus on the important points, claims, and information.
  • Discuss the positive aspects of the article. Think about what the author does well, good points she makes, and insightful observations.
  • Identify contradictions, gaps, and inconsistencies in the text. Determine if there is enough data or research included to support the author's claims. Find any unanswered questions left in the article.

Step 3 Preview the article.

  • Make note of words or issues you don't understand and questions you have.
  • Look up terms or concepts you are unfamiliar with, so you can fully understand the article. Read about concepts in-depth to make sure you understand their full context.

Step 4 Read the article closely.

  • Pay careful attention to the meaning of the article. Make sure you fully understand the article. The only way to write a good article review is to understand the article.

Step 5 Put the article into your own words.

  • With either method, make an outline of the main points made in the article and the supporting research or arguments. It is strictly a restatement of the main points of the article and does not include your opinions.
  • After putting the article in your own words, decide which parts of the article you want to discuss in your review. You can focus on the theoretical approach, the content, the presentation or interpretation of evidence, or the style. You will always discuss the main issues of the article, but you can sometimes also focus on certain aspects. This comes in handy if you want to focus the review towards the content of a course.
  • Review the summary outline to eliminate unnecessary items. Erase or cross out the less important arguments or supplemental information. Your revised summary can serve as the basis for the summary you provide at the beginning of your review.

Step 6 Write an outline of your evaluation.

  • What does the article set out to do?
  • What is the theoretical framework or assumptions?
  • Are the central concepts clearly defined?
  • How adequate is the evidence?
  • How does the article fit into the literature and field?
  • Does it advance the knowledge of the subject?
  • How clear is the author's writing?

Don't: include superficial opinions or your personal reaction. Do: pay attention to your biases, so you can overcome them.

Step 1 Come up with...

  • For example, in MLA, a citation may look like: Duvall, John N. "The (Super)Marketplace of Images: Television as Unmediated Mediation in DeLillo's White Noise." Arizona Quarterly 50.3 (1994): 127-53. Print. [9]

Step 3 Identify the article.

  • For example: The article, "Condom use will increase the spread of AIDS," was written by Anthony Zimmerman, a Catholic priest.

Step 4 Write the introduction.

  • Your introduction should only be 10-25% of your review.
  • End the introduction with your thesis. Your thesis should address the above issues. For example: Although the author has some good points, his article is biased and contains some misinterpretation of data from others’ analysis of the effectiveness of the condom.

Step 5 Summarize the article.

  • Use direct quotes from the author sparingly.
  • Review the summary you have written. Read over your summary many times to ensure that your words are an accurate description of the author's article.

Step 6 Write your critique.

  • Support your critique with evidence from the article or other texts.
  • The summary portion is very important for your critique. You must make the author's argument clear in the summary section for your evaluation to make sense.
  • Remember, this is not where you say if you liked the article or not. You are assessing the significance and relevance of the article.
  • Use a topic sentence and supportive arguments for each opinion. For example, you might address a particular strength in the first sentence of the opinion section, followed by several sentences elaborating on the significance of the point.

Step 7 Conclude the article review.

  • This should only be about 10% of your overall essay.
  • For example: This critical review has evaluated the article "Condom use will increase the spread of AIDS" by Anthony Zimmerman. The arguments in the article show the presence of bias, prejudice, argumentative writing without supporting details, and misinformation. These points weaken the author’s arguments and reduce his credibility.

Step 8 Proofread.

  • Make sure you have identified and discussed the 3-4 key issues in the article.


  • https://libguides.cmich.edu/writinghelp/articlereview
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4548566/
  • Jake Adams. Academic Tutor & Test Prep Specialist. Expert Interview. 24 July 2020.
  • https://guides.library.queensu.ca/introduction-research/writing/critical
  • https://www.iup.edu/writingcenter/writing-resources/organization-and-structure/creating-an-outline.html
  • https://writing.umn.edu/sws/assets/pdf/quicktips/titles.pdf
  • https://owl.purdue.edu/owl/research_and_citation/mla_style/mla_formatting_and_style_guide/mla_works_cited_periodicals.html
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4548565/
  • https://writingcenter.uconn.edu/wp-content/uploads/sites/593/2014/06/How_to_Summarize_a_Research_Article1.pdf
  • https://www.uis.edu/learning-hub/writing-resources/handouts/learning-hub/how-to-review-a-journal-article
  • https://writingcenter.unc.edu/tips-and-tools/editing-and-proofreading/

About This Article

Jake Adams

If you have to write an article review, read through the original article closely, taking notes and highlighting important sections as you read. Next, rewrite the article in your own words, either in a long paragraph or as an outline. Open your article review by citing the article, then write an introduction which states the article's thesis. Next, summarize the article, followed by your opinion about whether the article was clear, thorough, and useful. Finish with a paragraph that summarizes the main points of the article and your opinions.


MIT Sloan Management Review

How AI Skews Our Sense of Responsibility

Research shows how using an AI-augmented system may affect humans’ perception of their own agency and responsibility.


As artificial intelligence plays an ever-larger role in automated systems and decision-making processes, the question of how it affects humans’ sense of their own agency is becoming less theoretical — and more urgent. It’s no surprise that humans often defer to automated decision recommendations, with exhortations to “trust the AI!” spurring user adoption in corporate settings. However, there’s growing evidence that AI diminishes users’ sense of responsibility for the consequences of those decisions.

This question is largely overlooked in current discussions about responsible AI. In practice, responsible AI initiatives are intended mainly to manage legal and reputational risk, a limited view of responsibility if we draw on German philosopher Hans Jonas's useful conceptualization. He defined three types of responsibility, but AI practice appears concerned with only two. The first is legal responsibility, wherein an individual or corporate entity is held responsible for repairing damage or compensating for losses, typically via civil law; the second is moral responsibility, wherein individuals are held accountable via punishment, as in criminal law.


What we're most concerned about here is the third type, what Jonas called the sense of responsibility. It's what we mean when we speak admiringly of someone "acting responsibly." It entails critical thinking and predictive reflection on the purpose and possible consequences of one's actions, not only for oneself but for others. It's this sense of responsibility that AI and automated systems can alter.

To gain insight into how AI affects users’ perceptions of their own responsibility and agency, we conducted several studies. Two studies examined what influences a driver’s decision to regain control of a self-driving vehicle when the autonomous driving system is activated. In the first study, we found that the more individuals trust the autonomous system, the less likely they are to maintain situational awareness that would enable them to regain control of the vehicle in the event of a problem or incident. Even though respondents overall said they accepted responsibility when operating an autonomous vehicle, their sense of agency had no significant influence on their intention to regain control of the vehicle in the event of a problem or incident. On the basis of these findings, we might expect to find that a sizable proportion of users feel encouraged, in the presence of an automated system, to shun responsibility to intervene.

About the Author

Ryad Titah is associate professor and chair of the Academic Department of Information Technologies at HEC Montréal. The research in progress described in this article is being conducted with Zoubeir Tkiouat, Pierre-Majorique Léger, Nicolas Saunier, Philippe Doyon-Poulin, Sylvain Sénécal, and Chaïma Merbouh.


MIT Technology Review

Google DeepMind’s new AlphaFold can model a much larger slice of biological life

AlphaFold 3 can predict how DNA, RNA, and other molecules interact, further cementing its leading role in drug discovery and research. Who will benefit?

James O'Donnell

Google DeepMind has released an improved version of its biology prediction tool, AlphaFold, that can predict the structures not only of proteins but of nearly all the elements of biological life.

It’s a development that could help accelerate drug discovery and other scientific research. The tool is currently being used to experiment with identifying everything from resilient crops to new vaccines. 

While the previous model, released in 2020, amazed the research community with its ability to predict protein structures, researchers have been clamoring for the tool to handle more than just proteins.

Now, DeepMind says, AlphaFold 3 can predict the structures of DNA, RNA, and molecules like ligands, which are essential to drug discovery. DeepMind says the tool provides a more nuanced and dynamic portrait of molecule interactions than anything previously available. 

“Biology is a dynamic system,” DeepMind CEO Demis Hassabis told reporters on a call. “Properties of biology emerge through the interactions between different molecules in the cell, and you can think about AlphaFold 3 as our first big sort of step toward [modeling] that.”

AlphaFold 2 helped us better map the human heart, model antimicrobial resistance, and identify the eggs of extinct birds, but we don't yet know what advances AlphaFold 3 will bring.

Mohammed AlQuraishi, an assistant professor of systems biology at Columbia University who is unaffiliated with DeepMind, thinks the new version of the model will be even better for drug discovery. “The AlphaFold 2 system only knew about amino acids, so it was of very limited utility for biopharma,” he says. “But now, the system can in principle predict where a drug binds a protein.”

Isomorphic Labs, a drug discovery spinoff of DeepMind, is already using the model for exactly that purpose, collaborating with pharmaceutical companies to try to develop new treatments for diseases, according to DeepMind. 

AlQuraishi says the release marks a big leap forward. But there are caveats.

“It makes the system much more general, and in particular for drug discovery purposes (in early-stage research), it’s far more useful now than AlphaFold 2,” he says. But as with most models, the impact of AlphaFold will depend on how accurate its predictions are. For some uses, AlphaFold 3 has double the success rate of similar leading models like RoseTTAFold. But for others, like protein-RNA interactions, AlQuraishi says it’s still very inaccurate. 

DeepMind says that depending on the interaction being modeled, accuracy can range from 40% to over 80%, and the model will let researchers know how confident it is in its prediction. With less accurate predictions, researchers have to use AlphaFold merely as a starting point before pursuing other methods. Regardless of these ranges in accuracy, if researchers are trying to take the first steps toward answering a question like which enzymes have the potential to break down the plastic in water bottles, it's vastly more efficient to use a tool like AlphaFold than experimental techniques such as X-ray crystallography.

A revamped model  

AlphaFold 3’s larger library of molecules and higher level of complexity required improvements to the underlying model architecture. So DeepMind turned to diffusion techniques, which AI researchers have been steadily improving in recent years and now power image and video generators like OpenAI’s DALL-E 2 and Sora. It works by training a model to start with a noisy image and then reduce that noise bit by bit until an accurate prediction emerges. That method allows AlphaFold 3 to handle a much larger set of inputs.

That marked “a big evolution from the previous model,” says John Jumper, director at Google DeepMind. “It really simplified the whole process of getting all these different atoms to work together.”

It also presented new risks. As the AlphaFold 3 paper details, the use of diffusion techniques made it possible for the model to hallucinate, or generate structures that look plausible but in reality could not exist. Researchers reduced that risk by adding more training data to the areas most prone to hallucination, though that doesn’t eliminate the problem completely. 

Restricted access

Part of AlphaFold 3's impact will depend on how DeepMind divvies up access to the model. For AlphaFold 2, the company released the open-source code, allowing researchers to look under the hood to gain a better understanding of how it worked. It was also available for all purposes, including commercial use by drugmakers. For AlphaFold 3, Hassabis said, there are no current plans to release the full code. The company is instead releasing a public interface for the model called the AlphaFold Server, which imposes limitations on which molecules can be experimented with and can only be used for noncommercial purposes. DeepMind says the interface will lower the technical barrier and broaden the use of the tool to biologists who are less knowledgeable about this technology.



The Global Governance of Artificial Intelligence: Next Steps for Empirical and Normative Research

Jonas Tallberg, Eva Erman, Markus Furendal, Johannes Geith, Mark Klamberg, Magnus Lundgren, The Global Governance of Artificial Intelligence: Next Steps for Empirical and Normative Research, International Studies Review, Volume 25, Issue 3, September 2023, viad040, https://doi.org/10.1093/isr/viad040

Artificial intelligence (AI) represents a technological upheaval with the potential to change human society. Because of its transformative potential, AI is increasingly becoming subject to regulatory initiatives at the global level. Yet, so far, scholarship in political science and international relations has focused more on AI applications than on the emerging architecture of global AI regulation. The purpose of this article is to outline an agenda for research into the global governance of AI. The article distinguishes between two broad perspectives: an empirical approach, aimed at mapping and explaining global AI governance; and a normative approach, aimed at developing and applying standards for appropriate global AI governance. The two approaches offer questions, concepts, and theories that are helpful in gaining an understanding of the emerging global governance of AI. Conversely, exploring AI as a regulatory issue offers a critical opportunity to refine existing general approaches to the study of global governance.

Artificial intelligence (AI) represents a technological upheaval with the potential to transform human society. It is increasingly viewed by states, non-state actors, and international organizations (IOs) as an area of strategic importance, economic competition, and risk management. While AI development is concentrated to a handful of corporations in the United States, China, and Europe, the long-term consequences of AI implementation will be global. Autonomous weapons will have consequences for armed conflicts and power balances; automation will drive changes in job markets and global supply chains; generative AI will affect content production and challenge copyright systems; and competition around the scarce hardware needed to train AI systems will shape relations among both states and businesses. While the technology is still only lightly regulated, state and non-state actors are beginning to negotiate global rules and norms to harness and spread AI’s benefits while limiting its negative consequences. For example, in the past few years, the United Nations Educational, Scientific and Cultural Organization (UNESCO) adopted recommendations on the ethics of AI, the European Union (EU) negotiated comprehensive AI legislation, and the Group of Seven (G7) called for developing global technical standards on AI.

Our purpose in this article is to outline an agenda for research into the global governance of AI. 1 Advancing research on the global regulation of AI is imperative. The rules and arrangements that are currently being developed to regulate AI will have a considerable impact on power differentials, the distribution of economic value, and the political legitimacy of AI governance for years to come. Yet there is currently little systematic knowledge on the nature of global AI regulation, the interests influential in this process, and the extent to which emerging arrangements can manage AI’s consequences in a just and democratic manner. While poised for rapid expansion, research on the global governance of AI remains in its early stages (but see Maas 2021 ; Schmitt 2021 ).

This article complements earlier calls for research on AI governance in general ( Dafoe 2018 ; Butcher and Beridze 2019 ; Taeihagh 2021 ; Büthe et al. 2022 ) by focusing specifically on the need for systematic research into the global governance of AI. It submits that global efforts to regulate AI have reached a stage where it is necessary to start asking fundamental questions about the characteristics, sources, and consequences of these governance arrangements.

We distinguish between two broad approaches for studying the global governance of AI: an empirical perspective, informed by a positive ambition to map and explain AI governance arrangements; and a normative perspective, informed by philosophical standards for evaluating the appropriateness of AI governance arrangements. Both perspectives build on established traditions of research in political science, international relations (IR), and political philosophy, and offer questions, concepts, and theories that are helpful as we try to better understand new types of governance in world politics.

We argue that empirical and normative perspectives together offer a comprehensive agenda of research on the global governance of AI. Pursuing this agenda will help us to better understand characteristics, sources, and consequences of the global regulation of AI, with potential implications for policymaking. Conversely, exploring AI as a regulatory issue offers a critical opportunity to further develop concepts and theories of global governance as they confront the particularities of regulatory dynamics in this important area.

We advance this argument in three steps. First, we argue that AI, because of its economic, political, and social consequences, presents a range of governance challenges. While these challenges initially were taken up mainly by national authorities, recent years have seen a dramatic increase in governance initiatives by IOs. These efforts to regulate AI at global and regional levels are likely driven by several considerations, among them AI applications creating cross-border externalities that demand international cooperation and AI development taking place through transnational processes requiring transboundary regulation. Yet, so far, existing scholarship on the global governance of AI has been mainly descriptive or policy-oriented, rather than focused on theory-driven positive and normative questions.

Second, we argue that an empirical perspective can help to shed light on key questions about characteristics and sources of the global governance of AI. Based on existing concepts, the emerging governance architecture for AI can be described as a regime complex—a structure of partially overlapping and diverse governance arrangements without a clearly defined central institution or hierarchy. IR theories are useful in directing our attention to the role of power, interests, ideas, and non-state actors in the construction of this regime complex. At the same time, the specific conditions of AI governance suggest ways in which global governance theories may be usefully developed.

Third, we argue that a normative perspective raises crucial questions regarding the nature and implications of global AI governance. These questions pertain both to procedure (the process for developing rules) and to outcome (the implications of those rules). A normative perspective suggests that procedures and outcomes in global AI governance need to be evaluated in terms of how they meet relevant normative ideals, such as democracy and justice. How could the global governance of AI be organized to live up to these ideals? To what extent are emerging arrangements minimally democratic and fair in their procedures and outcomes? Conversely, the global governance of AI raises novel questions for normative theorizing, for instance, by invoking aims for AI to be “trustworthy,” “value aligned,” and “human centered.”

Advancing this agenda of research is important for several reasons. First, making more systematic use of social science concepts and theories will help us to gain a better understanding of various dimensions of the global governance of AI. Second, as a novel case of governance involving unique features, AI raises questions that will require us to further refine existing concepts and theories of global governance. Third, findings from this research agenda will be of importance for policymakers, by providing them with evidence on international regulatory gaps, the interests that have influenced current arrangements, and the normative issues at stake when developing this regime complex going forward.

The remainder of this article is structured in three substantive sections. The first section explains why AI has become a concern of global governance. The second section suggests that an empirical perspective can help to shed light on the characteristics and drivers of the global governance of AI. The third section discusses the normative challenges posed by global AI governance, focusing specifically on concerns related to democracy and justice. The article ends with a conclusion that summarizes our proposed agenda for future research on the global governance of AI.

Why does AI pose a global governance challenge? In this section, we answer this question in three steps. We begin by briefly describing the spread of AI technology in society, then illustrate the attempts to regulate AI at various levels of governance, and finally explain why global regulatory initiatives are becoming increasingly common. We argue that the growth of global governance initiatives in this area stems from AI applications creating cross-border externalities that demand international cooperation and from AI development taking place through transnational processes requiring transboundary regulation.

Due to its amorphous nature, AI escapes easy definition. Instead, the definition of AI tends to depend on the purposes and audiences of the research ( Russell and Norvig 2020 ). In the most basic sense, machines are considered intelligent when they can perform tasks that would require intelligence if done by humans ( McCarthy et al. 1955 ). This could happen through the guiding hand of humans, in “expert systems” that follow complex decision trees. It could also happen through “machine learning,” where AI systems are trained to categorize texts, images, sounds, and other data, using such categorizations to make autonomous decisions when confronted with new data. More specific definitions require that machines display a level of autonomy and capacity for learning that enables rational action. For instance, the EU’s High-Level Expert Group on AI has defined AI as “systems that display intelligent behaviour by analysing their environment and taking actions—with some degree of autonomy—to achieve specific goals” (2019, 1). Yet, illustrating the potential for conceptual controversy, this definition has been criticized for denoting both too many and too few technologies as AI ( Heikkilä 2022a ).

AI technology is already implemented in a wide variety of areas in everyday life and the economy at large. For instance, the conversational chatbot ChatGPT is estimated to have reached 100 million users just  two months after its launch at the end of 2022 ( Hu 2023 ). AI applications enable new automation technologies, with subsequent positive or negative effects on the demand for labor, employment, and economic equality ( Acemoglu and Restrepo 2020 ). Military AI is integral to lethal autonomous weapons systems (LAWS), whereby machines take autonomous decisions in warfare and battlefield targeting ( Rosert and Sauer 2018 ). Many governments and public agencies have already implemented AI in their daily operations in order to more efficiently evaluate welfare eligibility, flag potential fraud, profile suspects, make risk assessments, and engage in mass surveillance ( Saif et al. 2017 ; Powers and Ganascia 2020 ; Berk 2021 ; Misuraca and van Noordt 2022 , 38).

Societies face significant governance challenges in relation to the implementation of AI. One type of challenge arises when AI systems function poorly, such as when applications involving some degree of autonomous decision-making produce technical failures with real-world implications. The “Robodebt” scheme in Australia, for instance, was designed to detect mistaken social security payments, but the Australian government ultimately had to rescind 400,000 wrongfully issued welfare debts ( Henriques-Gomes 2020 ). Similarly, Dutch authorities recently implemented an algorithm that pushed tens of thousands of families into poverty after mistakenly requiring them to repay child benefits, ultimately forcing the government to resign ( Heikkilä 2022b ).

Another type of governance challenge arises when AI systems function as intended but produce impacts whose consequences may be regarded as problematic. For instance, the inherent opacity of AI decision-making challenges expectations on transparency and accountability in public decision-making in liberal democracies ( Burrell 2016 ; Erman and Furendal 2022a ). Autonomous weapons raise critical ethical and legal issues ( Rosert and Sauer 2019 ). AI applications for surveillance in law enforcement give rise to concerns of individual privacy and human rights ( Rademacher 2019 ). AI-driven automation involves changes in labor markets that are painful for parts of the population ( Acemoglu and Restrepo 2020 ). Generative AI upends conventional ways of producing creative content and raises new copyright and data security issues ( Metz 2022 ).

More broadly, AI presents a governance challenge due to its effects on economic competitiveness, military security, and personal integrity, with consequences for states and societies. In this respect, AI may not be radically different from earlier general-purpose technologies, such as the steam engine, electricity, nuclear power, and the internet ( Frey 2019 ). From this perspective, it is not the novelty of AI technology that makes it a pressing issue to regulate but rather the anticipation that AI will lead to large-scale changes and become a source of power for state and societal actors.

Challenges such as these have led to a rapid expansion in recent years of efforts to regulate AI at different levels of governance. The OECD AI Policy Observatory records more than 700 national AI policy initiatives from 60 countries and territories ( OECD 2021 ). Earlier research into the governance of AI has therefore naturally focused mostly on the national level ( Radu 2021 ; Roberts et al. 2021 ; Taeihagh 2021 ). However, a large number of governance initiatives have also been undertaken at the global level, and many more are underway. According to an ongoing inventory of AI regulatory initiatives by the Council of Europe, IOs overtook national authorities as the main source of such initiatives in 2020 ( Council of Europe 2023 ).  Figure 1 visualizes this trend.

Origins of AI governance initiatives, 2015–2022. Source: Council of Europe (2023).

According to this source, national authorities launched 170 initiatives from 2015 to 2022, while IOs put in place 210 initiatives during the same period. Over time, the share of regulatory initiatives emanating from IOs has thus grown to surpass the share resulting from national authorities. Examples of the former include the OECD Principles on Artificial Intelligence agreed in 2019, the UNESCO Recommendation on Ethics of AI adopted in 2021, and the EU’s ongoing negotiations on the EU AI Act. In addition, several governance initiatives emanate from the private sector, civil society, and multistakeholder partnerships. In the next section, we will provide a more developed characterization of these global regulatory initiatives.

Two concerns likely explain why AI is increasingly becoming subject to governance at the global level. First, AI creates externalities that do not follow national borders and whose regulation requires international cooperation. China’s Artificial Intelligence Development Plan, for instance, clearly states that the country is using AI as a leapfrog technology in order to enhance national competitiveness ( Roberts et al. 2021 ). Since states with less regulation might gain a competitive edge when developing certain AI applications, there is a risk that such strategies create a regulatory race to the bottom. International cooperation that creates a level playing field could thus be said to be in the interest of all parties.

Second, the development of AI technology is a cross-border process carried out by transnational actors—multinational firms in particular. Big tech corporations, such as Google, Meta, or the Chinese drone maker DJI, are investing vast sums into AI development. The innovations of hardware manufacturers like Nvidia enable breakthroughs but depend on complex global supply chains, and international research labs such as DeepMind regularly present cutting-edge AI applications. Since the private actors that develop AI can operate across multiple national jurisdictions, the efforts to regulate AI development and deployment also need to be transboundary. Only by introducing common rules can states ensure that AI businesses encounter similar regulatory environments, which both facilitates transboundary AI development and reduces incentives for companies to shift to countries with laxer regulation.

Successful global governance of AI could help realize many of the potential benefits of the technology while mitigating its negative consequences. For AI to contribute to increased economic productivity, for instance, there needs to be predictable and clear regulation as well as global coordination around standards that prevent competition between parallel technical systems. Conversely, a failure to provide suitable global governance could lead to substantial risks. The intentional misuse of AI technology may undermine trust in institutions, and if left unchecked, the positive and negative externalities created by automation technologies might fall unevenly across different groups. Race dynamics similar to those that arose around nuclear technology in the twentieth century—where technological leadership created large benefits—might lead international actors and private firms to overlook safety issues and create potentially dangerous AI applications ( Dafoe 2018 ; Future of Life Institute 2023 ). Hence, policymakers face the task of disentangling beneficial from malicious consequences, fostering the former while regulating the latter. Given the speed at which AI is developed and implemented, governance also risks constantly being one step behind the technological frontier.

A prime example of how AI presents a global governance challenge is the efforts to regulate military AI, in particular autonomous weapons capable of identifying and eliminating a target without the involvement of a remote human operator ( Hernandez 2021 ). Both the development and the deployment of military applications with autonomous capabilities transcend national borders. Multinational defense companies are at the forefront of developing autonomous weapons systems. Reports suggest that such autonomous weapons are now beginning to be used in armed conflicts ( Trager and Luca 2022 ). The development and deployment of autonomous weapons involve the types of competitive dynamics and transboundary consequences identified above. In addition, they raise specific concerns with respect to accountability and dehumanization ( Sparrow 2007 ; Stop Killer Robots 2023 ). For these reasons, states have begun to explore the potential for joint global regulation of autonomous weapons systems. The principal forum is the Group of Governmental Experts (GGE) within the Convention on Certain Conventional Weapons (CCW). Yet progress in these negotiations is slow as the major powers approach this issue with competing interests in mind, illustrating the challenges involved in developing joint global rules.

The example of autonomous weapons further illustrates how the global governance of AI raises urgent empirical and normative questions for research. On the empirical side, these developments invite researchers to map emerging regulatory initiatives, such as those within the CCW, and to explain why these particular frameworks become dominant. What are the principal characteristics of global regulatory initiatives in the area of autonomous weapons, and how do power differentials, interest constellations, and principled ideas influence those rules? On the normative side, these developments invite researchers to address key normative questions raised by the development and deployment of autonomous weapons. What are the key normative issues at stake in the regulation of autonomous weapons, both with respect to the process through which such rules are developed and with respect to the consequences of these frameworks? To what extent are existing normative ideals and frameworks, such as just war theory, applicable to the governing of military AI ( Roach and Eckert 2020 )? Despite the global governance challenge of AI development and use, research on this topic is still in its infancy (but see Maas 2021 ; Schmitt 2021 ). In the remainder of this article, we therefore present an agenda for research into the global governance of AI. We begin by outlining an agenda for positive empirical research on the global governance of AI and then suggest an agenda for normative philosophical research.

An empirical perspective on the global governance of AI suggests two main questions: How may we describe the emerging global governance of AI? And how may we explain the emerging global governance of AI? In this section, we argue that concepts and theories drawn from the general study of global governance will be helpful as we address these questions, but also that AI, conversely, raises novel issues that point to the need for new or refined theories. Specifically, we show how global AI governance may be mapped along several conceptual dimensions and submit that theories invoking power dynamics, interests, ideas, and non-state actors have explanatory promise.

Mapping AI Governance

A key priority for empirical research on the global governance of AI is descriptive: Where and how are new regulatory arrangements emerging at the global level? What features characterize the emergent regulatory landscape? In answering such questions, researchers can draw on scholarship on international law and IR, which have conceptualized mechanisms of regulatory change and drawn up analytical dimensions to map and categorize the resulting regulatory arrangements.

Any mapping exercise must consider the many different ways in which global AI regulation may emerge and evolve. Previous research suggests that legal development may take place in at least three distinct ways. To begin with, existing rules could be reinterpreted to also cover AI ( Maas 2021 ). For example, the principles of distinction, proportionality, and precaution in international humanitarian law could be extended, via reinterpretation, to apply to LAWS, without changing the legal source. Another manner in which new AI regulation may appear is via “add-ons” to existing rules. For example, in the area of global regulation of autonomous vehicles, AI-related provisions were added to the 1968 Vienna Road Traffic Convention through an amendment in 2015 ( Kunz and Ó hÉigeartaigh 2020 ). Finally, AI regulation may appear as a completely new framework, either through new state behavior that results in customary international law or through a new legal act or treaty ( Maas 2021 , 96). Here, one example of regulating AI through a new framework is the aforementioned EU AI Act, which would take the form of a new EU regulation.

Once researchers have mapped emerging regulatory arrangements, a central task will be to categorize them. Prior scholarship suggests that regulatory arrangements may be fruitfully analyzed in terms of five key dimensions (cf. Koremenos et al. 2001 ; Wahlgren 2022 , 346–347). A first dimension is whether regulation is horizontal or vertical. A horizontal regulation covers several policy areas, whereas a vertical regulation is a delimited legal framework covering one specific policy area or application. In the field of AI, emergent governance appears to populate both ends of this spectrum. For example, the proposed EU AI Act (2021), the UNESCO Recommendation on the Ethics of AI (2021), and the OECD Principles on AI (2019), which are not specific to any particular AI application or field, would classify as attempts at horizontal regulation. When it comes to vertical regulation, there are fewer existing examples, but discussions on a new protocol on LAWS within the CCW signal that this type of regulation is likely to become more important in the future ( Maas 2019a ).

A second dimension runs from centralization to decentralization . Governance is centralized if there is a single, authoritative institution at the heart of a regime, such as in trade, where the World Trade Organization (WTO) fulfills this role. In contrast, decentralized arrangements are marked by parallel and partly overlapping institutions, such as in the governance of the environment, the internet, or genetic resources (cf. Raustiala and Victor 2004 ). While some IOs with universal membership, such as UNESCO, have taken initiatives relating to AI governance, no institution has assumed the role as the core regulatory body at the global level. Rather, the proliferation of parallel initiatives, across levels and regions, lends weight to the conclusion that contemporary arrangements for the global governance of AI are strongly decentralized ( Cihon et al. 2020a ).

A third dimension is the continuum from hard law to soft law. While domestic statutes and treaties may be described as hard law, soft law is associated with guidelines of conduct, recommendations, resolutions, standards, opinions, ethical principles, declarations, board decisions, codes of conduct, negotiated agreements, and a large number of additional normative mechanisms ( Abbott and Snidal 2000 ; Wahlgren 2022 ). Even though such soft documents may initially have been drafted as non-legal texts, they may in actual practice acquire considerable strength in structuring international relations ( Orakhelashvili 2019 ). While some initiatives to regulate AI classify as hard law, including the EU’s AI Act, Burri (2017 ) suggests that AI governance is likely to be dominated by “supersoft law,” noting that there are currently numerous processes underway creating global standards outside traditional international law-making fora. In a phenomenon that might be described as “bottom-up law-making” ( Koven Levit 2017 ), states and IOs are bypassed, creating norms that defy traditional categories of international law ( Burri 2017 ).

A fourth dimension concerns private versus public regulation. The concept of private regulation partly overlaps with soft law, to the extent that private actors develop non-binding guidelines ( Wahlgren 2022 ). Significant harmonization of standards may be developed by private standardization bodies, such as the IEEE ( Ebers 2022 ). Public authorities may regulate the responsibility of manufacturers through tort law and product liability law ( Greenstein 2022 ). Even though contracts are originally matters between private parties, some contractual matters may still be regulated and enforced by law ( Ubena 2022 ).

A fifth dimension relates to the division between military and non-military regulation . Several policymakers and scholars describe how military AI is about to escalate into a strategic arms race between major powers such as the United States and China, similar to the nuclear arms race during the Cold War (cf. Petman 2017 ; Thompson and Bremmer 2018 ; Maas 2019a ). The process in the CCW Group of Governmental Experts on the regulation of LAWS is probably the largest single negotiation on AI ( Maas 2019b ) next to the negotiations on the EU AI Act. The zero-sum logic that appears to exist between states in the area of national security, prompting a military AI arms race, may not be applicable to the same extent to non-military applications of AI, potentially enabling a clearer focus on realizing positive-sum gains through regulation.

These five dimensions can provide guidance as researchers take up the task of mapping and categorizing global AI regulation. While the evidence is preliminary, in its present form, the global governance of AI must be understood as combining horizontal and vertical elements, predominantly leaning toward soft law, being heavily decentralized, primarily public in nature, and mixing military and non-military regulation. This multi-faceted and non-hierarchical nature of global AI governance suggests that it is best characterized as a regime complex , or a “larger web of international rules and regimes” ( Alter and Meunier 2009 , 13; Keohane and Victor 2011 ) rather than as a single, discrete regime.

If global AI governance can be understood as a regime complex, which some researchers already claim ( Cihon et al. 2020a ), future scholarship should look for theoretical and methodological inspiration in research on regime complexity in other policy fields. This research has found that regime complexes are characterized by path dependence, as existing rules shape the formulation of new rules; venue shopping, as actors seek to steer regulatory efforts to the fora most advantageous to their interests; and legal inconsistencies, as rules emerge from fractious and overlapping negotiations in parallel processes ( Raustiala and Victor 2004 ). Scholars have also considered the design of regime complexes ( Eilstrup-Sangiovanni and Westerwinter 2021 ), institutional overlap among bodies in regime complexes ( Haftel and Lenz 2021 ), and actors’ forum-shopping within regime complexes ( Verdier 2022 ). Establishing whether these patterns and dynamics are key features also of the AI regime complex stands out as an important priority in future research.

Explaining AI Governance

As our understanding of the empirical patterns of global AI governance grows, a natural next step is to turn to explanatory questions. How may we explain the emerging global governance of AI? What accounts for variation in governance arrangements and how do they compare with those in other policy fields, such as environment, security, or trade? Political science and IR offer a plethora of useful theoretical tools that can provide insights into the global governance of AI. However, at the same time, the novelty of AI as a governance challenge raises new questions that may require novel or refined theories. Thus far, existing research on the global governance of AI has been primarily concerned with descriptive tasks and has largely fallen short of engaging with explanatory questions.

We illustrate the potential of general theories to help explain global AI governance by pointing to three broad explanatory perspectives in IR ( Martin and Simmons 2012 )—power, interests, and ideas—which have served as primary sources of theorizing on global governance arrangements in other policy fields. These perspectives have conventionally been associated with the paradigmatic theories of realism, liberalism, and constructivism, respectively, but like much of the contemporary IR discipline, we prefer to formulate them as non-paradigmatic sources for mid-level theorizing of more specific phenomena (cf. Lake 2013 ). We focus our discussion on how accounts privileging power, interests, and ideas have explained the origins and designs of IOs and how they may help us explain wider patterns of global AI governance. We then discuss how theories of non-state actors and regime complexity, in particular, offer promising avenues for future research into the global governance of AI. Research fields like science and technology studies (e.g., Jasanoff 2016 ) or the political economy of international cooperation (e.g., Gilpin 1987 ) can provide additional theoretical insights, but these literatures are not discussed in detail here.

A first broad explanatory perspective is provided by power-centric theories, privileging the role of major states, capability differentials, and distributive concerns. While conventional realism emphasizes how states’ concern for relative gains impedes substantive international cooperation, viewing IOs as epiphenomenal reflections of underlying power relations ( Mearsheimer 1994 ), more refined power-oriented theories have highlighted how powerful states seek to design regulatory contexts that favor their preferred outcomes ( Gruber 2000 ) or shape the direction of IOs using informal influence ( Stone 2011 ; Dreher et al. 2022 ).

In research on global AI governance, power-oriented perspectives are likely to prove particularly fruitful in investigating how great-power contestation shapes where and how the technology will be regulated. Focusing on the major AI powerhouses, scholars have started to analyze the contrasting regulatory strategies and policies of the United States, China, and the EU, often emphasizing issues of strategic competition, military balance, and rivalry ( Kania 2017 ; Horowitz et al. 2018 ; Payne 2018 , 2021 ; Johnson 2019 ; Jensen et al. 2020 ). Here, power-centric theories could help explain the apparent emphasis on military AI in both the United States and China, as witnessed by the recent establishment of a US National Security Commission on AI and China’s ambitious plans to integrate AI into its military forces ( Ding 2018 ). The EU, for its part, is negotiating the comprehensive AI Act, seeking to use its market power to set a European standard for AI that can subsequently become the global standard, as it previously did with its GDPR law on data protection and privacy ( Schmitt 2021 ). Given the primacy of these three actors in AI development, their preferences and outlook regarding regulatory solutions will remain a key research priority.

Power-based accounts are also likely to provide theoretical inspiration for research on AI governance in the domain of security and military competition. Some scholars are seeking to assess the implications of AI for strategic rivalries, and their possible regulation, by drawing on historical analogies ( Leung 2019 ; see also Drezner 2019 ). Observing that, from a strategic standpoint, military AI exhibits some similarities to the problems posed by nuclear weapons, researchers have examined whether lessons from nuclear arms control have applicability in the domain of AI governance. For example, Maas (2019a ) argues that historical experience suggests that the proliferation of military AI can potentially be slowed down via institutionalization, while Zaidi and Dafoe (2021 ), in a study of the Baruch Plan for Nuclear Weapons, contend that fundamental strategic obstacles—including mistrust and fear of exploitation by other states—need to be overcome to make regulation viable. This line of investigation can be extended by assessing other historical analogies, such as the negotiations that led to the Strategic Arms Limitation Talks (SALT) in 1972 or more recent efforts to contain the spread of nuclear weapons, where power-oriented factors have shown continued analytical relevance (e.g., Ruzicka 2018 ).

A second major explanatory approach is provided by the family of theoretical accounts that highlight how international cooperation is shaped by shared interests and functional needs ( Keohane 1984 ; Martin 1992 ). A key argument in rational functionalist scholarship is that states are likely to establish IOs to overcome barriers to cooperation—such as information asymmetries, commitment problems, and transaction costs—and that the design of these institutions will reflect the underlying problem structure, including the degree of uncertainty and the number of involved actors (e.g., Koremenos et al. 2001 ; Hawkins et al. 2006 ; Koremenos 2016 ).

Applied to the domain of AI, these approaches would bring attention to how the functional characteristics of AI as a governance problem shape the regulatory response. They would also emphasize the investigation of the distribution of interests and the possibility of efficiency gains from cooperation around AI governance. The contemporary proliferation of partnerships and initiatives on AI governance points to the suitability of this theoretical approach, and research has taken some preliminary steps, surveying state interests and their alignment (e.g., Campbell 2019 ; Radu 2021 ). However, a systematic assessment of how the distribution of interests would explain the nature of emerging governance arrangements, both in the aggregate and at the constituent level, has yet to be undertaken.

A third broad explanatory perspective is provided by theories emphasizing the role of history, norms, and ideas in shaping global governance arrangements. In contrast to accounts based on power and interests, this line of scholarship, often drawing on sociological assumptions and theory, focuses on how institutional arrangements are embedded in a wider ideational context, which itself is subject to change. This perspective has generated powerful analyses of how societal norms influence states’ international behavior (e.g., Acharya and Johnston 2007 ), how norm entrepreneurs play an active role in shaping the origins and diffusion of specific norms (e.g., Finnemore and Sikkink 1998 ), and how IOs socialize states and other actors into specific norms and behaviors (e.g., Checkel 2005 ).

Examining the extent to which domestic and societal norms shape discussions on global governance arrangements stands out as a particularly promising area of inquiry. Comparative research on national ethical standards for AI has already indicated significant cross-country convergence, pointing to a cluster of normative principles that are likely to inspire governance frameworks in many parts of the world (e.g., Jobin et al. 2019 ). A closely related research agenda concerns norm entrepreneurship in AI governance. Here, preliminary findings suggest that civil society organizations have played a role in advocating norms relating to fundamental rights in the formulation of EU AI policy and other processes ( Ulnicane 2021 ). Finally, once AI governance structures have solidified further, scholars can begin to draw on norms-oriented scholarship to design strategies for the analysis of how those governance arrangements may play a role in socialization.

In light of the particularities of AI and its political landscape, we expect that global governance scholars will be motivated to refine and adapt these broad theoretical perspectives to address new questions and conditions. For example, considering China’s AI sector-specific resources and expertise, power-oriented theories will need to grapple with questions of institutional creation and modification occurring under a distribution of power that differs significantly from the Western-centric processes that underpin most existing studies. Similarly, rational functionalist scholars will need to adapt their tools to address questions of how the highly asymmetric distribution of AI capabilities—in particular between producers, which are few, concentrated, and highly resourced, and users and subjects, which are many, dispersed, and less resourced—affects the formation of state interests and bargaining around institutional solutions. For their part, norm-oriented theories may need to be refined to capture the role of previously understudied sources of normative and ideational content, such as formal and informal networks of computer programmers, which, on account of their expertise, have been influential in setting the direction of norms surrounding several AI technologies.

We expect that these broad theoretical perspectives will continue to inspire research on the global governance of AI, in particular for tailored, mid-level theorizing in response to new questions. However, a fully developed research agenda will gain from complementing these theories, which emphasize particular independent variables (power, interests, and norms), with theories and approaches that focus on particular issues, actors, and phenomena. There is an abundance of theoretical perspectives that can be helpful in this regard, including research on the relationship between science and politics ( Haas 1992 ; Jasanoff 2016 ), the political economy of international cooperation ( Gilpin 1987 ; Frieden et al. 2017 ), the complexity of global governance ( Raustiala and Victor 2004 ; Eilstrup-Sangiovanni and Westerwinter 2021 ), and the role of non-state actors ( Risse 2012 ; Tallberg et al. 2013 ). We focus here on the latter two: theories of regime complexity, which have grown to become a mainstream approach in global governance scholarship, as well as theories of non-state actors, which provide powerful tools for understanding how private organizations influence regulatory processes. Both literatures hold considerable promise in advancing scholarship on the global governance of AI beyond its current state.

As concluded above, the current structure of global AI governance fits the description of a regime complex. Thus, approaching AI governance through this theoretical lens, understanding it as a larger web of rules and regulations, can open new avenues of research (see Maas 2021 for a pioneering effort). One priority is to analyze the AI regime complex in terms of core dimensions, such as scale, diversity, and density ( Eilstrup-Sangiovanni and Westerwinter 2021 ). Pointing to the density of this regime complex, existing studies have suggested that global AI governance is characterized by a high degree of fragmentation ( Schmitt 2021 ), which has motivated assessments of the possibility of greater centralization ( Cihon et al. 2020b ). Another area of research is to examine the legal inconsistencies and tensions likely to emerge because of the diverging preferences of major AI players and the tendency of self-interested actors to forum-shop when engaging within a regime complex. Finally, given that the AI regime complex is still at a very early stage, it provides researchers with an excellent opportunity to trace the origins and evolution of this form of governance structure from the outset, thus providing a good case for both theory development and novel empirical applications.

If theories of regime complexity can shine a light on macro-level properties of AI governance, other theoretical approaches can guide research into micro-level dynamics and influences. Recognizing that non-state actors are central in both AI development and its emergent regulation, researchers should find inspiration in theories and tools developed to study the role and influence of non-state actors in global governance (for overviews, see Risse 2012 ; Jönsson and Tallberg forthcoming ). Drawing on such work will enable researchers to assess to what extent non-state actor involvement in the AI regime complex differs from previous experiences in other international regimes. It is clear that large tech companies, like Google, Meta, and Microsoft, have formed regulatory preferences and that their monetary resources and technological expertise enable them to promote these interests in legislative and bureaucratic processes. For example, the Partnership on AI (PAI), a multistakeholder organization with more than 50 members, includes American tech companies at the forefront of AI development and fosters research on issues of AI ethics and governance ( Schmitt 2021 ). Other non-state actors, including civil society watchdog organizations, like the Civil Liberties Union for Europe, have been vocal in the negotiations of the EU AI Act, further underlining the relevance of this strand of research.

When investigating the role of non-state actors in the AI regime complex, research may be guided by four primary questions. A first question concerns the interests of non-state actors regarding alternative AI global governance architectures. Here, a survey by Chavannes et al. (2020 ) on possible regulatory approaches to LAWS suggests that private companies developing AI applications have interests that differ from those of civil society organizations. Others have pointed to the role of actors rooted in research and academia who have sought to influence the development of AI ethics guidelines ( Zhu 2022 ). A second question is to what extent the regulatory institutions and processes are accessible to the aforementioned non-state actors in the first place. Are non-state actors given formal or informal opportunities to be substantively involved in the development of new global AI rules? Research points to a broad and comprehensive opening up of IOs over the past two decades ( Tallberg et al. 2013 ) and, in the domain of AI governance, early indications are that non-state actors have been granted access to several multilateral processes, including in the OECD and the EU (cf. Niklas and Dencik 2021 ). A third question concerns actual participation: Are non-state actors really making use of the opportunities to participate, and what determines the patterns of participation? In this vein, previous research has suggested that the participation of non-state actors is largely dependent on their financial resources ( Uhre 2014 ) or the political regime of their home country ( Hanegraaff et al. 2015 ). In the context of AI governance, this raises questions about whether and how the vast resource disparities and divergent interests between private tech corporations and civil society organizations may bias patterns of participation.
There is, for instance, research suggesting that private companies are contributing to a practice of ethics washing by committing to nonbinding ethical guidelines while circumventing regulation ( Wagner 2018 ; Jobin et al. 2019 ; Rességuier and Rodrigues 2020 ). Finally, a fourth question is to what extent, and how, non-state actors exert influence on adopted AI rules. Existing scholarship suggests that non-state actors typically seek to shape the direction of international cooperation via lobbying ( Dellmuth and Tallberg 2017 ), while others have argued that non-state actors use participation in international processes largely to expand or sustain their own resources ( Hanegraaff et al. 2016 ).

The previous section suggested that emerging global initiatives to regulate AI amount to a regime complex and that an empirical approach could help to map and explain these regulatory developments. In this section, we move beyond positive empirical questions to consider the normative concerns at stake in the global governance of AI. We argue that normative theorizing is needed both for assessing how well existing arrangements live up to ideals such as democracy and justice and for evaluating how best to specify what these ideals entail for the global governance of AI.

Ethical values frequently highlighted in the context of AI governance include transparency, inclusion, accountability, participation, deliberation, fairness, and beneficence ( Floridi et al. 2018 ; Jobin et al. 2019 ). A normative perspective suggests several ways in which to theorize and analyze such values in relation to the global governance of AI. One type of normative analysis focuses on application, that is, on applying an existing normative theory to instances of AI governance, assessing how well such regulatory arrangements realize their principles (similar to how political theorists have evaluated whether global governance lives up to standards of deliberation; see Dryzek 2011 ; Steffek and Nanz 2008 ). Such an analysis could also be pursued more narrowly by using a certain normative theory to assess the implications of AI technologies, for instance, by approaching the problem of algorithmic bias based on notions of fairness or justice ( Vredenburgh 2022 ). Another type of normative analysis moves from application to justification, analyzing the structure of global AI governance with the aim of theory construction. In this type of analysis, the goal is to construe and evaluate candidate principles for these regulatory arrangements in order to arrive at the best possible (most justified) normative theory. In this case, the theorist starts out from a normative ideal broadly construed (concept) and arrives at specific principles (conception).

In the remainder of this section, we will point to the promises of analyzing global AI governance based on the second approach. We will focus specifically on the normative ideals of justice and democracy. While many normative ideals could serve as focal points for an analysis of the AI domain, democracy and justice appear particularly central for understanding the normative implications of the governance of AI. Previous efforts to deploy political philosophy to shed light on normative aspects of global governance point to the promise of this focus (e.g., Caney 2005 , 2014 ; Buchanan 2013 ). It is also natural to focus on justice and democracy given that many of the values emphasized in AI ethics and existing ethics guidelines are analytically close to justice and democracy. Our core argument will be that normative research needs to be attentive to how these ideals would be best specified in relation to both the procedures and outcomes of the global governance of AI.

AI Ethics and the Normative Analysis of Global AI Governance

Although there is a rich literature on moral or ethical aspects related to specific AI applications, investigations into normative aspects of global AI governance are surprisingly sparse (for exceptions, see Müller 2020 ; Erman and Furendal 2022a , 2022b ). Researchers have so far focused mostly on normative and ethical questions raised by AI considered as a tool, enabling, for example, autonomous weapons systems ( Sparrow 2007 ) and new forms of political manipulation ( Susser et al. 2019 ; Christiano 2021 ). Some have also considered AI as a moral agent of its own, focusing on how we could govern, or be governed by, a hypothetical future artificial general intelligence ( Schwitzgebel and Garza 2015 ; Livingston and Risse 2019 ; cf. Tasioulas 2019 ; Bostrom et al. 2020 ; Erman and Furendal 2022a ). Examples such as these illustrate that there is, by now, a vibrant field of “AI ethics” that aims to consider normative aspects of specific AI applications.

As we have shown above, however, initiatives to regulate AI beyond the nation-state have become increasingly common, and they are often led by IOs, multinational companies, private standardization bodies, and civil society organizations. These developments raise normative issues that require a shift from AI ethics in general to systematic analyses of the implications of global AI governance. Exploring these normative dimensions is crucial, since how AI is governed raises key normative questions about the ideals that such governance ought to meet.

Apart from attempts to map or describe the central norms in the existing global governance of AI (cf. Jobin et al. 2019 ), most normative analyses of the global governance of AI can be said to have proceeded in two different ways. The dominant approach is to employ an outcome-based focus ( Dafoe 2018 ; Winfield et al. 2019 ; Taeihagh 2021 ), which starts by identifying a potential problem or promise created by AI technology and then seeks to identify governance mechanisms or principles that can minimize risks or make a desired outcome more likely. This approach can be contrasted with a procedure-based focus, which attaches comparatively more weight to how governance processes happen in existing or hypothetical regulatory arrangements. It recognizes that there are certain procedural aspects that are important and might be overlooked by an analysis that primarily assesses outcomes.

The benefits of this distinction become apparent if we focus on the ideals of justice and democracy. Broadly construed, we understand justice as an ideal for how to distribute benefits and burdens—specifying principles that determine “who owes what to whom”—and democracy as an ideal for collective decision-making and the exercise of political power—specifying principles that determine “who has political power over whom” ( Barry 1991 ; Weale 1999 ; Buchanan and Keohane 2006 ; Christiano 2008 ; Valentini 2012 , 2013 ). These two ideals can be analyzed with a focus on procedure or outcome, producing four fruitful avenues of normative research into global AI governance. First, justice could be understood as a procedural value or as a distributive outcome. Second, and likewise, democracy could be a feature of governance processes or an outcome of those processes. Below, we discuss existing research from the standpoint of each of these four avenues. We conclude that there is great potential for novel insights if normative theorists consider the relatively overlooked issues of outcome aspects of justice and procedural aspects of democracy in the global governance of AI.

Procedural and Outcome Aspects of Justice

Discussions around the implications of AI applications on justice, or fairness, are predominantly concerned with procedural aspects of how AI systems operate. For instance, ever since the problem of algorithmic bias—i.e., the tendency of AI-based decision-making to reflect and exacerbate existing biases toward certain groups—was brought to public attention, AI ethicists have offered suggestions as to why this is wrong, and AI developers have sought to construct AI systems that treat people “fairly” and thus produce “justice.” In this context, fairness and justice are understood as procedural ideals, which AI decision-making frustrates when it fails to treat like cases alike, and instead systematically treats individuals from different groups differently ( Fazelpour and Danks 2021 ; Zimmermann and Lee-Stronach 2022 ). Paradigmatic examples include automated predictions about recidivism among prisoners that have impacted decisions about people’s parole and algorithms used in recruitment that have systematically favored men over women ( Angwin et al. 2016 ; O'Neil 2017 ).

However, the emerging global governance of AI also has implications for how the benefits and burdens of AI technology are distributed among groups and states—i.e., outcomes ( Gilpin 1987 ; Dreher and Lang 2019 ). Like the regulation of earlier technological innovations ( Krasner 1991 ; Drezner 2019 ), AI governance may not only produce collective benefits, but also favor certain actors at the expense of others ( Dafoe 2018 ; Horowitz 2018 ). For instance, the concern about AI-driven automation and its impact on employment is that those who lose their jobs because of AI might carry a disproportionately large share of the negative externalities of the technology without being compensated through access to its benefits (cf. Korinek and Stiglitz 2019 ; Erman and Furendal 2022a ). Merely focusing on justice as a procedural value would overlook such distributive effects created by the diffusion of AI technology.

Moreover, this example illustrates that since AI adoption may produce effects throughout the global economy, regulatory efforts will have to go beyond issues relating to the technology itself. Recognizing the role of outcomes of AI governance entails that a broad range of policies need to be pursued by existing and emerging governance regimes. The global trade regime, for instance, may need to be reconsidered in order for the distribution of positive and negative externalities of AI technology to be just. Suggestions include pursuing policies that can incentivize certain kinds of AI technology or enable the profits gained by AI developers to be shared more widely (cf. Floridi et al. 2018 ; Erman and Furendal 2022a ).

In sum, with regard to outcome aspects of justice, theories are needed to settle which benefits and burdens created by global AI adoption ought to be fairly distributed and why (i.e., what the “site” and “scope” of AI justice are) (cf. Gabriel 2022 ). Similarly, theories of procedural aspects should look beyond individual applications of AI technology and ask whether a fairer distribution of influence over AI governance may help produce more fair outcomes, and if so how. Extending existing theories of distributive justice to the realm of global AI governance may put many of their central assumptions in a new light.

Procedural and Outcome Aspects of Democracy

Normative research could also fruitfully shed light on how emerging AI governance should be analyzed in relation to the ideal of democracy, such as what principles or criteria of democratic legitimacy are most defensible. It could be argued, for instance, that the decision process must be open to democratic influence for global AI governance to be democratically legitimate ( Erman and Furendal 2022b ). Here, normative theory can explain why it matters from the standpoint of democracy whether the affected public has had a say—either directly through open consultation or indirectly through representation—in formulating the principles that guide AI governance. The nature of the emerging AI regime complex—where prominent roles are held by multinational companies and private standard-setting bodies—suggests that it is far from certain that the public will have this kind of influence.

Importantly, it is likely that democratic procedures will take on different shapes in global governance compared to domestic politics ( Dahl 1999 ; Scholte 2011 ). A viable democratic theory must therefore make sense of how the unique properties of global governance raise issues or require solutions that are distinct from those in the domestic context. For example, the prominent influence of non-state actors, including the large tech corporations developing cutting-edge AI technology, suggests that it is imperative to ask whether different kinds of decision-making may require different normative standards and whether different kinds of actors may have different normative status in such decision-making arrangements.

Initiatives from non-state actors, such as the tech company-led PAI discussed above, often develop their own non-coercive ethics guidelines. Such documents may seek effects similar to coercively upheld regulation, such as the GDPR or the EU AI Act. For example, both Google and the EU specify that AI should not reinforce biases ( High-Level Expert Group on Artificial Intelligence 2019 ; Google 2022 ). However, from the perspective of democratic legitimacy, it may matter extensively which type of entity adopts AI regulations and on what grounds those decision-making entities have the authority to issue AI regulations ( Erman and Furendal 2022b ).

Apart from procedural aspects, a satisfying democratic theory of global AI governance will also have to include a systematic analysis of outcome aspects. Important outcome aspects of democracy include accountability and responsiveness. Accountability may be improved, for example, by instituting mechanisms to prevent corruption among decision-makers and to secure public access to governing documents, and responsiveness may be improved by strengthening the discursive quality of global decision processes, for instance, by involving international NGOs and civil movements that give voice to marginalized groups in society. With regard to tracing citizens’ preferences, some have argued that democratic decision-making can be enhanced by AI technology that tracks what people want and consistently reaches “better” decisions than human decision-makers (cf. König and Wenzelburger 2022 ). Apart from accountability and responsiveness, other relevant outcome aspects of democracy include, for example, the tendency to promote conflict resolution, to improve the epistemic quality of decisions, and to foster dignity and equality among citizens.

In addition, it is important to analyze how procedural and outcome concerns are related. This issue is often neglected, which again can be illustrated by the ethics guidelines from IOs, such as the OECD Principles on Artificial Intelligence and the UNESCO Recommendation on Ethics of AI. Such documents often stress the importance of democratic values and principles, such as transparency, accountability, participation, and deliberation. Yet they typically treat these values as discrete and rarely explain how they are interconnected ( Jobin et al. 2019 ; Schiff et al. 2020 ; Hagendorff 2020 , 103). Democratic theory can fruitfully step in to explain how the ideal of “the rule by the people” includes two sides that are intimately connected. First, there is an access side of political power, where those affected should have a say in the decision-making, which might require participation, deliberation, and political equality. Second, there is an exercise side of political power, where those very decisions should apply in appropriate ways, which in turn might require effectiveness, transparency, and accountability. In addition to efforts to map and explain norms and values in the global governance of AI, theories of democratic AI governance can hence help explain how these two aspects are connected (cf. Erman 2020 ).

In sum, the global governance of AI raises a number of issues for normative research. We have identified four promising avenues, focused on procedural and outcome aspects of justice and democracy in the context of global AI governance. Research along these four avenues can help to shed light on the normative challenges facing the global governance of AI and the key values at stake, as well as provide the impetus for novel theories on democratic and just global AI governance.

This article has charted a new agenda for research into the global governance of AI. While existing scholarship has been primarily descriptive or policy-oriented, we propose an agenda organized around theory-driven positive and normative questions. To this end, we have outlined two broad analytical perspectives on the global governance of AI: an empirical approach, aimed at conceptualizing and explaining global AI governance; and a normative approach, aimed at developing and applying ideals for appropriate global AI governance. Pursuing these empirical and normative approaches can help to guide future scholarship on the global governance of AI toward critical questions, core concepts, and promising theories. At the same time, exploring AI as a regulatory issue provides an opportunity to further develop these general analytical approaches as they confront the particularities of this important area of governance.

We conclude this article by highlighting the key takeaways from this research agenda for future scholarship on empirical and normative dimensions of the global governance of AI. First, research is required to identify where and how AI is becoming globally governed. Mapping and conceptualizing the emerging global governance of AI is a first necessary step. We argue that research may benefit from considering the variety of ways in which new regulation may come about, from the reinterpretation of existing rules and the extension of prevailing sectoral governance to the negotiation of entirely new frameworks. In addition, we suggest that scholarship may benefit from considering how global AI governance may be conceptualized in terms of key analytical dimensions, such as horizontal–vertical, centralized–decentralized, and formal–informal.

Second, research is necessary to explain why AI is becoming globally governed in particular ways. Having mapped global AI governance, we need to account for the factors that drive and shape these regulatory processes and arrangements. We argue that political science and IR offer a variety of theoretical tools that can help to explain the global governance of AI. In particular, we highlight the promise of theories privileging the role of power, interests, ideas, regime complexes, and non-state actors, but also recognize that research fields such as science and technology studies and political economy can yield additional theoretical insights.

Third, research is needed to identify what normative ideals global AI governance ought to meet. Moving from positive to normative issues, a first critical question pertains to the ideals that should guide the design of appropriate global AI governance. We argue that normative theory provides the tools necessary to engage with this question. While normative theory can suggest several potential principles, we believe that it may be especially fruitful to start from the ideals of democracy and justice, which are foundational and recurrent concerns in discussions about political governing arrangements. In addition, we suggest that these two ideals are relevant both for the procedures by which AI regulation is adopted and for the outcomes of such regulation.

Fourth, research is required to evaluate how well global AI governance lives up to these normative ideals. Once appropriate normative ideals have been selected, we can assess to what extent and how existing arrangements conform to these principles. We argue that previous research on democracy and justice in global governance offers a model in this respect. A critical component of such research is the integration of normative and empirical research: normative research for elucidating how normative ideals would be expressed in practice, and empirical research for analyzing data on whether actual arrangements live up to those ideals.

In all, the research agenda that we outline should be of interest to multiple audiences. For students of political science and IR, it offers an opportunity to apply and refine concepts and theories in a novel area of global governance of extensive future importance. For scholars of AI, it provides an opportunity to understand how political actors and considerations shape the conditions under which AI applications may be developed and used. For policymakers, it presents an opportunity to learn about evolving regulatory practices and gaps, interests shaping emerging arrangements, and trade-offs to be confronted in future efforts to govern AI at the global level.

A previous version of this article was presented at the Global and Regional Governance workshop at Stockholm University. We are grateful to Tim Bartley, Niklas Bremberg, Lisa Dellmuth, Felicitas Fritzsche, Faradj Koliev, Rickard Söder, Carl Vikberg, Johanna von Bahr, and three anonymous reviewers for ISR for insightful comments and suggestions. The research for this article was funded by the WASP-HS program of the Marianne and Marcus Wallenberg Foundation (Grant no. MMW 2020.0044).

We use “global governance” to refer to regulatory processes beyond the nation-state, whether on a global or regional level. While states and IOs often are central to these regulatory processes, global governance also involves various types of non-state actors ( Rosenau 1999 ).

Abbott Kenneth W. , and Snidal Duncan . 2000 . “ Hard and Soft Law in International Governance .” International Organization . 54 ( 3 ): 421 – 56 .

Acemoglu Daron , and Restrepo Pascual . 2020 . “ The Wrong Kind of AI? Artificial Intelligence and the Future of Labour Demand .” Cambridge Journal of Regions, Economy and Society . 13 ( 1 ): 25 – 35 .

Acharya Amitav , and Johnston Alistair Iain . 2007 . “ Conclusion: Institutional Features, Cooperation Effects, and the Agenda for Further Research on Comparative Regionalism .” In Crafting Cooperation: Regional International Institutions in Comparative Perspective , edited by Acharya Amitav , Johnston Alistair Iain , 244 – 78 .. Cambridge : Cambridge University Press .

Alter Karen J. , and Meunier Sophie . 2009 . “ The Politics of International Regime Complexity .” Perspectives on Politics . 7 ( 1 ): 13 – 24 .

Angwin Julia , Larson Jeff , Mattu Surya , and Kirchner Lauren . 2016 . “ Machine Bias .” ProPublica , May 23 . Internet (last accessed August 25, 2023): https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing .

Barry Brian . 1991 . “ Humanity and Justice in Global Perspective .” In Liberty and Justice , edited by Barry Brian . Oxford : Clarendon .

Berk Richard A . 2021 . “ Artificial Intelligence, Predictive Policing, and Risk Assessment for Law Enforcement .” Annual Review of Criminology . 4 ( 1 ): 209 – 37 .

Bostrom Nick , Dafoe Allan , and Flynn Carrick . 2020 . “ Public Policy and Superintelligent AI: A Vector Field Approach .” In Ethics of Artificial Intelligence , edited by Liao S. Matthew , 293 – 326 .. Oxford : Oxford University Press .

Buchanan Allen , and Keohane Robert O. . 2006 . “ The Legitimacy of Global Governance Institutions .” Ethics & International Affairs . 20 (4) : 405 – 37 .

Buchanan Allen . 2013 . The Heart of Human Rights . Oxford : Oxford University Press .

Burrell Jenna . 2016 . “ How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms .” Big Data & Society . 3 ( 1 ): 1 – 12 . https://doi.org/10.1177/2053951715622512 .

Burri Thomas . 2017 . “ International Law and Artificial Intelligence .” In German Yearbook of International Law , vol. 60 , 91 – 108 .. Berlin : Duncker and Humblot .

Butcher James , and Beridze Irakli . 2019 . “ What is the State of Artificial Intelligence Governance Globally?” . The RUSI Journal . 164 ( 5-6 ): 88 – 96 .

Büthe Tim , Djeffal Christian , Lütge Christoph , Maasen Sabine , and von Ingersleben-Seip Nora . 2022 . “ Governing AI—Attempting to Herd Cats? Introduction to the Special Issue on the Governance of Artificial Intelligence .” Journal of European Public Policy . 29 ( 11 ): 1721 – 52 .

Campbell Thomas A . 2019 . Artificial Intelligence: An Overview of State Initiatives . Evergreen, CO : FutureGrasp .

Caney Simon . 2005 . “ Cosmopolitan Justice, Responsibility, and Global Climate Change .” Leiden Journal of International Law . 18 ( 4 ): 747 – 75 .

Caney Simon . 2014 . “ Two Kinds of Climate Justice: Avoiding Harm and Sharing Burdens .” Journal of Political Philosophy . 22 ( 2 ): 125 – 49 .

Chavannes Esther , Klonowska Klaudia , and Sweijs Tim . 2020 . Governing Autonomous Weapon Systems: Expanding the Solution Space, From Scoping to Applying . The Hague : The Hague Center for Strategic Studies .

Checkel Jeffrey T . 2005 . “ International Institutions and Socialization in Europe: Introduction and Framework .” International organization . 59 ( 4 ): 801 – 26 .

Christiano Thomas . 2008 . The Constitution of Equality . Oxford : Oxford University Press .

Christiano Thomas . 2021 . “ Algorithms, Manipulation, and Democracy .” Canadian Journal of Philosophy . 52 ( 1 ): 109 – 124 .. https://doi.org/10.1017/can.2021.29 .

Cihon Peter , Maas Matthijs M. , and Kemp Luke . 2020a . “ Fragmentation and the Future: Investigating Architectures for International AI Governance .” Global Policy . 11 ( 5 ): 545 – 56 .

Cihon Peter , Maas Matthijs M. , and Kemp Luke . 2020b . “ Should Artificial Intelligence Governance Be Centralised? Design Lessons from History .” In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society , 228 – 34 . New York, NY: ACM .

Council of Europe . 2023 . “ AI Initiatives ,” accessed 16 June 2023, AI initiatives (coe.int).

Dafoe Allan . 2018 . AI Governance: A Research Agenda . Oxford: Governance of AI Program , Future of Humanity Institute, University of Oxford . www.fhi.ox.ac.uk/govaiagenda .

Dahl Robert . 1999 . “ Can International Organizations Be Democratic: A Skeptic's View .” In Democracy's Edges , edited by Shapiro Ian , Hacker-Córdon Casiano , 19 – 36 .. Cambridge : Cambridge University Press .

Dellmuth Lisa M. , and Tallberg Jonas . 2017 . “ Advocacy Strategies in Global Governance: Inside versus Outside Lobbying .” Political Studies . 65 ( 3 ): 705 – 23 .

Ding Jeffrey . 2018 . Deciphering China's AI Dream: The Context, Components, Capabilities and Consequences of China's Strategy to Lead the World in AI . Oxford: Centre for the Governance of AI , Future of Humanity Institute, University of Oxford .

Dreher Axel , and Lang Valentin . 2019 . “ The Political Economy of International Organizations .” In The Oxford Handbook of Public Choice , Volume 2, edited by Congleton Roger O. , Grofman Bernhard , Voigt Stefan . Oxford : Oxford University Press .

Dreher Axel , Lang Valentin , Rosendorff B. Peter , and Vreeland James R. . 2022 . “ Bilateral or Multilateral? International Financial Flows and the Dirty Work-Hypothesis .” The Journal of Politics . 84 ( 4 ): 1932 – 1946 .

Drezner Daniel W . 2019 . “ Technological Change and International Relations .” International Relations . 33 ( 2 ): 286 – 303 .

Dryzek John . 2011 . “ Global Democratization: Soup, Society, or System? ” Ethics & International Affairs , 25 ( 2 ): 211 – 234 .

Ebers Martin . 2022 . “ Explainable AI in the European Union: An Overview of the Current Legal Framework(s) .” In Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence , edited by Lianne Colonna and Stanley Greenstein . Stockholm: The Swedish Law and Informatics Institute, Stockholm University .

Eilstrup-Sangiovanni Mette , and Westerwinter Oliver . 2021 . “ The Global Governance Complexity Cube: Varieties of Institutional Complexity in Global Governance .” Review of International Organizations . 17 (2): 233 – 262 .

Erman Eva , and Furendal Markus . 2022a . “ The Global Governance of Artificial Intelligence: Some Normative Concerns .” Moral Philosophy & Politics . 9 (2): 267−291. https://www.degruyter.com/document/doi/10.1515/mopp-2020-0046/html .

Erman Eva , and Furendal Markus . 2022b . “ Artificial Intelligence and the Political Legitimacy of Global Governance .” Political Studies . https://journals.sagepub.com/doi/full/10.1177/00323217221126665 .

Erman Eva . 2020 . “ A Function-Sensitive Approach to the Political Legitimacy of Global Governance .” British Journal of Political Science . 50 ( 3 ): 1001 – 24 .

Fazelpour Sina , and Danks David . 2021 . “ Algorithmic Bias: Senses, Sources, Solutions .” Philosophy Compass . 16 ( 8 ): e12760.

Finnemore Martha , and Sikkink Kathryn . 1998 . “ International Norm Dynamics and Political Change .” International Organization . 52 ( 4 ): 887 – 917 .

Floridi Luciano , Cowls Josh , Beltrametti Monica , Chatila Raja , Chazerand Patrice , Dignum Virginia , Luetge Christoph et al.  2018 . “ AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations .” Minds and Machines . 28 ( 4 ): 689 – 707 .

Frey Carl Benedikt . 2019 . The Technology Trap: Capital, Labor, and Power in the Age of Automation . Princeton, NJ : Princeton University Press .

Frieden Jeffry , Lake David A. , and Lawrence Broz J. 2017 . International Political Economy: Perspectives on Global Power and Wealth . Sixth Edition. New York, NY : W.W. Norton .

Future of Life Institute , 2023 . “ Pause Giant AI Experiments: An Open Letter .” Accessed June 13, 2023. https://futureoflife.org/open-letter/pause-giant-ai-experiments/ .

Gabriel Iason , 2022 . “ Toward a Theory of Justice for Artificial Intelligence .” Daedalus , 151 ( 2 ): 218 – 31 .

Gilpin Robert . 1987 . The Political Economy of International Relations . Princeton, NJ : Princeton University Press .

Google . 2022 . “ Artificial Intelligence at Google: Our Principles .” Internet (last accessed August 25, 2023): https://ai.google/principles/ .

Greenstein Stanley . 2022 . “ Liability in the Era of Artificial Intelligence .” In Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence , edited by Colonna Lianne , Greenstein Stanley . Stockholm: The Swedish Law and Informatics Institute, Stockholm University .

Gruber Lloyd . 2000 . Ruling the World . Princeton, NJ : Princeton University Press .

Haas Peter . 1992 . “ Introduction: Epistemic Communities and International Policy Coordination .” International Organization . 46 ( 1 ): 1 – 36 .

Haftel Yoram Z. , and Lenz Tobias . 2021 . “ Measuring Institutional Overlap in Global Governance .” Review of International Organizations . 17(2) : 323 – 347 .

Hagendorff Thilo . 2020 . “ The Ethics of AI Ethics: an Evaluation of Guidelines .” Minds and Machines . 30 ( 1 ): 99 – 120 .

Hanegraaff Marcel , Beyers Jan , and De Bruycker Iskander . 2016 . “ Balancing Inside and Outside Lobbying: The Political Strategy of Lobbyists at Global Diplomatic Conferences .” European Journal of Political Research . 55 ( 3 ): 568 – 88 .

Hanegraaff Marcel , Braun Caelesta , De Bièvre Dirk , and Beyers Jan . 2015 . “ The Domestic and Global Origins of Transnational Advocacy: Explaining Lobbying Presence During WTO Ministerial Conferences .” Comparative Political Studies . 48 : 1591 – 621 .

Hawkins Darren G. , Lake David A. , Nielson Daniel L. , Tierney Michael J. Eds. 2006 . Delegation and Agency in International Organizations . Cambridge : Cambridge University Press .

Heikkilä Melissa . 2022a . “ AI: Decoded. IoT Under Fire—Defining AI?—Meta's New AI Supercomputer .” Accessed June 5, 2022, https://www.politico.eu/newsletter/ai-decoded/iot-under-fire-defining-ai-metas-new-ai-supercomputer-2 /.

Heikkilä Melissa 2022b . “ AI: Decoded. A Dutch Algorithm Scandal Serves a Warning to Europe—The AI Act won't Save Us .” Accessed June 5, 2022, https://www.politico.eu/newsletter/ai-decoded/a-dutch-algorithm-scandal-serves-a-warning-to-europe-the-ai-act-wont-save-us-2/ .

Henriques-Gomes Luke . 2020 . “ Robodebt: Government Admits It Will Be Forced to Refund $550 m under Botched Scheme .” The Guardian . sec. Australia news . Internet (last accessed August 25, 2023): https://www.theguardian.com/australia-news/2020/mar/27/robodebt-government-admits-it-will-be-forced-to-refund-550m-under-botched-scheme .

Hernandez Joe . 2021 . “ A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says .” National Public Radio . Internet (last accessed August 25, 2023): https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d .

High-Level Expert Group on Artificial Intelligence . 2019 . Ethics Guidelines for Trustworthy AI . Brussels: European Commission . https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai .

Horowitz Michael . 2018 . “ Artificial Intelligence, International Competition, and the Balance of Power .” Texas National Security Review . 1 ( 3 ): 37 – 57 .

Horowitz Michael C. , Allen Gregory C. , Kania Elsa B. , Scharre Paul . 2018 . Strategic Competition in an Era of Artificial Intelligence . Washington D.C. : Center for a New American Security .

Hu Krystal . 2023 . ChatGPT Sets Record for Fastest-Growing User Base—Analyst Note. Reuters , February 2, 2023, sec. Technology , Accessed June 12, 2023, https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ .

Jasanoff Sheila . 2016 . The Ethics of Invention: Technology and the Human Future . New York : Norton .

Jensen Benjamin M. , Whyte Christopher , and Cuomo Scott . 2020 . “ Algorithms at War: the Promise, Peril, and Limits of Artificial Intelligence .” International Studies Review . 22 ( 3 ): 526 – 50 .

Jobin Anna , Ienca Marcello , and Vayena Effy . 2019 . “ The Global Landscape of AI Ethics Guidelines .” Nature Machine Intelligence . 1 ( 9 ): 389 – 99 .

Johnson J. 2019 . “ Artificial intelligence & Future Warfare: Implications for International Security .” Defense & Security Analysis . 35 ( 2 ): 147 – 69 .

Jönsson Christer , and Tallberg Jonas . Forthcoming. “Opening up to Civil Society: Access, Participation, and Impact .” In Handbook on Governance in International Organizations , edited by Edgar Alistair . Cheltenham : Edward Elgar Publishing .

Kania E. B . 2017 . Battlefield singularity. Artificial Intelligence, Military Revolution, and China's Future Military Power . Washington D.C.: CNAS .

Keohane Robert O . 1984 . After Hegemony . Princeton, NJ : Princeton University Press .

Keohane Robert O. , and Victor David G. . 2011 . “ The Regime Complex for Climate Change .” Perspectives on Politics . 9 ( 1 ): 7 – 23 .

König Pascal D. and Georg Wenzelburger 2022 . “ Between Technochauvinism and Human-Centrism: Can Algorithms Improve Decision-Making in Democratic Politics? ” European Political Science , 21 ( 1 ): 132 – 49 .

Koremenos Barbara , Lipson Charles , and Snidal Duncan . 2001 . “ The Rational Design of International Institutions .” International Organization . 55 ( 4 ): 761 – 99 .

Koremenos Barbara . 2016 . The Continent of International Law: Explaining Agreement Design . Cambridge : Cambridge University Press .

Korinek Anton , and Stiglitz Joseph E. . 2019 . “ Artificial Intelligence and Its Implications for Income Distribution and Unemployment .” In The Economics of Artificial Intelligence: An Agenda , edited by Agrawal A. , Gans J. , and Goldfarb A. . Chicago : University of Chicago Press .

Koven Levit Janet . 2007 . “ Bottom-Up International Lawmaking: Reflections on the New Haven School of International Law .” Yale Journal of International Law . 32 : 393 – 420 .

Krasner Stephen D . 1991 . “ Global Communications and National Power: Life on the Pareto Frontier .” World Politics . 43 ( 3 ): 336 – 66 .

Kunz Martina , and hÉigeartaigh Seán Ó . 2020 . “ Artificial Intelligence and Robotization .” In The Oxford Handbook on the International Law of Global Security , edited by Geiss Robin , Melzer Nils . Oxford : Oxford University Press .

Lake David. A . 2013 . “ Theory is Dead, Long Live Theory: The End of the Great Debates and the rise of eclecticism in International Relations .” European Journal of International Relations . 19 ( 3 ): 567 – 87 .

Leung Jade . 2019 . “ Who Will Govern Artificial Intelligence? Learning from the History of Strategic Politics in Emerging Technologies .” Doctoral dissertation . Oxford : University of Oxford .

Livingston Steven , and Mathias Risse . 2019 , “ The Future Impact of Artificial Intelligence on Humans and Human Rights .” Ethics & International Affairs . 33 ( 2 ): 141 – 58 .

Maas Matthijs M . 2019a . “ How Viable is International Arms Control for Military Artificial Intelligence? Three Lessons from Nuclear Weapons .” Contemporary Security Policy . 40 ( 3 ): 285 – 311 .

Maas Matthijs M . 2019b . “ Innovation-proof Global Governance for Military Artificial Intelligence? How I Learned to Stop Worrying, and Love the Bot ,” Journal of International Humanitarian Legal Studies . 10 ( 1 ): 129 – 57 .

Maas Matthijs M . 2021 . Artificial Intelligence Governance under Change: Foundations, Facets, Frameworks . PhD dissertation . Copenhagen: University of Copenhagen .

Martin Lisa L . 1992 . “ Interests, Power, and Multilateralism .” International Organization . 46 ( 4 ): 765 – 92 .

Martin Lisa L. , and Simmons Beth A. . 2012 . “ International Organizations and Institutions .” In Handbook of International Relations , edited by Carlsnaes Walter , Risse Thomas , Simmons Beth A. . London : SAGE .

McCarthy John , Minsky Marvin L. , Rochester Nathaniel , and Shannon Claude E . 1955 . “ A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence .” AI Magazine . 27 ( 4 ): 12 – 14 (reprint) .

Mearsheimer John J. . 1994 . “ The False Promise of International Institutions .” International Security , 19 ( 3 ): 5 – 49 .

Metz Cade . 2022 . “ Lawsuit Takes Aim at the Way A.I. Is Built .” The New York Times , November 23 . Accessed June 21, 2023. https://www.nytimes.com/2022/11/23/technology/copilot-microsoft-ai-lawsuit.html .

Misuraca Gianluca , and van Noordt Colin . 2022 . “ Artificial Intelligence for the Public Sector: Results of Landscaping the Use of AI in Government Across the European Union .” Government Information Quarterly . 101714 . https://doi.org/10.1016/j.giq.2022.101714 .

Müller Vincent C . 2020 . “ Ethics of Artificial Intelligence and Robotics .” In Stanford Encyclopedia of Philosophy , edited by Zalta Edward N. Internet (last accessed August 25, 2023): https://plato.stanford.edu/archives/fall2020/entries/ethics-ai/ .

Niklas Jedrzen , Dencik Lina . 2021 . “ What Rights Matter? Examining the Place of Social Rights in the EU's Artificial Intelligence Policy Debate .” Internet Policy Review . 10 ( 3 ): 1 – 29 .

OECD . 2021 . “ OECD AI Policy Observatory .” Accessed February 17, 2022. https://oecd.ai .

O'Neil Cathy . 2017 . Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy . UK : Penguin Books .

Orakhelashvili Alexander . 2019 . Akehurst's Modern Introduction to International Law , Eighth Edition . London : Routledge .

Payne K . 2021 . I, Warbot: The Dawn of Artificially Intelligent Conflict . Oxford: Oxford University Press .

Payne Kenneth . 2018 . “ Artificial Intelligence: a Revolution in Strategic Affairs?” . Survival . 60 ( 5 ): 7 – 32 .

Petman Jarna . 2017 . Autonomous Weapons Systems and International Humanitarian Law: ‘Out of the Loop’ . Helsinki : The Eric Castren Institute of International Law and Human Rights .

Powers Thomas M. , and Ganascia Jean-Gabriel . 2020 . “ The Ethics of the Ethics of AI .” In The Oxford Handbook of Ethics of AI , edited by Dubber Markus D. , Pasquale Frank , Das Sunit , 25 – 51 .. Oxford : Oxford University Press .

Rademacher Timo . 2019 . “ Artificial Intelligence and Law Enforcement .” In Regulating Artificial Intelligence , edited by Wischmeyer Thomas , Rademacher Timo , 225 – 54 .. Cham: Springer .

Radu Roxana . 2021 . “ Steering the Governance of Artificial Intelligence: National Strategies in Perspective .” Policy and Society . 40 ( 2 ): 178 – 93 .

Raustiala Kal and David G. Victor . 2004 .“ The Regime Complex for Plant Genetic Resources .” International Organization , 58 ( 2 ): 277 – 309 .

Rességuier Anaïs , and Rodrigues Rowena . 2020 . “ AI Ethics Should Not Remain Toothless! A Call to Bring Back the Teeth of Ethics .” Big Data & Society . 7 ( 2 ). https://doi.org/10.1177/2053951720942541 .

Risse Thomas . 2012 . “ Transnational Actors and World Politics .” In Handbook of International Relations , 2nd ed., edited by Carlsnaes Walter , Risse Thomas , Simmons Beth A. . London : Sage .

Roach Steven C. , and Eckert Amy , eds. 2020 . Moral Responsibility in Twenty-First-Century Warfare: Just War Theory and the Ethical Challenges of Autonomous Weapons Systems . Albany, NY : State University of New York .

Roberts Huw , Cowls Josh , Morley Jessica , Taddeo Mariarosaria , Wang Vincent , and Floridi Luciano . 2021 . “ The Chinese Approach to Artificial Intelligence: An Analysis of Policy, Ethics, and Regulation .” AI & Society . 36 ( 1 ): 59 – 77 .

Rosenau James N . 1999 . “ Toward an Ontology for Global Governance .” In Approaches to Global Governance Theory , edited by Hewson Martin , Sinclair Timothy J. , 287 – 301 .. Albany, NY : SUNY Press .

Rosert Elvira , and Sauer Frank . 2018 . Perspectives for Regulating Lethal Autonomous Weapons at the CCW: A Comparative Analysis of Blinding Lasers, Landmines, and LAWS . Paper prepared for the workshop “New Technologies of Warfare: Implications of Autonomous Weapons Systems for International Relations,” 5th EISA European Workshops in International Studies , Groningen , 6-9 June 2018 . Internet (last accessed August 25, 2023): https://www.academia.edu/36768452/Perspectives_for_Regulating_Lethal_Autonomous_Weapons_at_the_CCW_A_Comparative_Analysis_of_Blinding_Lasers_Landmines_and_LAWS

Rosert Elvira , and Sauer Frank . 2019 . “ Prohibiting Autonomous Weapons: Put Human Dignity First .” Global Policy . 10 ( 3 ): 370 – 5 .

Russell Stuart J. , and Norvig Peter . 2020 . Artificial Intelligence: A Modern Approach . Boston, MA : Pearson .

Ruzicka Jan . 2018 . “ Behind the Veil of Good Intentions: Power Analysis of the Nuclear Non-proliferation Regime .” International Politics . 55 ( 3 ): 369 – 85 .

Saif Hassan , Dickinson Thomas , Kastler Leon , Fernandez Miriam , and Alani Harith . 2017 . “ A Semantic Graph-Based Approach for Radicalisation Detection on Social Media .” ESWC 2017: The Semantic Web—Proceedings, Part 1 , 571 – 87 .. Cham : Springer .

Schiff Daniel , Biddle Justin , Borenstein Jason , and Laas Kelly . 2020 . “ What’s Next for AI Ethics, Policy, and Governance? A Global Overview .” In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society . New York, NY : ACM .

Schmitt Lewin . 2021 . “ Mapping Global AI Governance: A Nascent Regime in a Fragmented Landscape .” AI and Ethics . 2 ( 2 ): 303 – 314 .

Scholte Jan Aart . ed. 2011 . Building Global Democracy? Civil Society and Accountable Global Governance . Cambridge : Cambridge University Press .

Schwitzgebel Eric , and Garza Mara . 2015 . “ A Defense of the Rights of Artificial Intelligences .” Midwest Studies In Philosophy . 39 ( 1 ): 98 – 119 .

Sparrow Robert . 2007 . “ Killer Robots .” Journal of Applied Philosophy . 24 ( 1 ): 62 – 77 .

Steffek Jens , and Nanz Patrizia . 2008 . “ Emergent Patterns of Civil Society Participation in Global and European Governance .” In Civil Society Participation in European and Global Governance , edited by Steffek Jens , Kissling Claudia , and Nanz Patrizia , 1 – 29 . Basingstoke : Palgrave Macmillan .

Stone Randall. W . 2011 . Controlling Institutions: International Organizations and the Global Economy . Cambridge : Cambridge University Press .

Stop Killer Robots . 2023 . “ About Us.” . Accessed June 13, 2023, https://www.stopkillerrobots.org/about-us/ .

Susser Daniel , Roessler Beate , and Nissenbaum Helen . 2019 . “ Technology, Autonomy, and Manipulation .” Internet Policy Review . 8 ( 2 ). https://doi.org/10.14763/2019.2.1410 .

Taeihagh Araz . 2021 . “ Governance of Artificial Intelligence .” Policy and Society . 40 ( 2 ): 137 – 57 .

Tallberg Jonas , Sommerer Thomas , Squatrito Theresa , and Jönsson Christer . 2013 . The Opening Up of International Organizations . Cambridge : Cambridge University Press .

Tasioulas John . 2019 . “ First Steps Towards an Ethics of Robots and Artificial Intelligence .” The Journal of Practical Ethics . 7 ( 1 ): 61-95. https://doi.org/10.2139/ssrn.3172840 .

Thompson Nicholas , and Bremmer Ian . 2018. “ The AI Cold War that Threatens us all .” Wired, October 23. Internet (last accessed August 25, 2023): https://www.wired.com/story/ai-cold-war-china-coulddoom-us-all/ .

Trager Robert F. , and Luca Laura M. . 2022 . “ Killer Robots Are Here—And We Need to Regulate Them .” Foreign Policy, May 11 . Internet (last accessed August 25, 2023): https://foreignpolicy.com/2022/05/11/killer-robots-lethal-autonomous-weapons-systems-ukraine-libya-regulation/

Ubena John . 2022 . “ Can Artificial Intelligence Be Regulated? Lessons from Legislative Techniques .” In Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence , edited by Colonna Lianne , Greenstein Stanley . Stockholm : The Swedish Law and Informatics Institute, Stockholm University .

Uhre Andreas Nordang . 2014 . “ Exploring the Diversity of Transnational Actors in Global Environmental Governance .” Interest Groups & Advocacy . 3 ( 1 ): 59 – 78 .

Ulnicane Inga . 2021 . “ Artificial Intelligence in the European Union: Policy, Ethics and Regulation .” In The Routledge Handbook of European Integrations , edited by Hoerber Thomas , Weber Gabriel , Cabras Ignazio . London : Routledge .

Valentini Laura . 2013 . “ Justice, Disagreement and Democracy .” British Journal of Political Science . 43 ( 1 ): 177 – 99 .

Valentini Laura . 2012 . “ Assessing the Global Order: Justice, Legitimacy, or Political Justice?” . Critical Review of International Social and Political Philosophy . 15 ( 5 ): 593 – 612 .

Vredenburgh Kate . 2022 . “ Fairness .” In The Oxford Handbook of AI Governance , edited by Bullock Justin B. , Chen Yu-Che , Himmelreich Johannes , Hudson Valerie M. , Korinek Anton , Young Matthew M. , Zhang Baobao . Oxford : Oxford University Press .

Verdier Daniel . 2022 . “ Bargaining Strategies for Governance Complex Games .” The Review of International Organizations , 17 ( 2 ): 349 – 371 .

Wagner Ben . 2018 . “ Ethics as an Escape from Regulation. From “Ethics-washing” to Ethics-shopping? .” In Being Profiled: Cogitas Ergo Sum. 10 Years of ‘Profiling the European Citizen , edited by Bayamiloglu Emre , Baraliuc Irina , Janssens Liisa , Hildebrandt Mireille Amsterdam : Amsterdam University Press .

Wahlgren Peter . 2022 . “ How to Regulate AI?” In Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence , edited by Colonna Lianne , Greenstein Stanley . Stockholm: The Swedish Law and Informatics Institute, Stockholm University .

Weale Albert . 1999 . Democracy . New York : St Martin's Press .

Winfield Alan F. , Michael Katina , Pitt Jeremy , and Evers Vanessa . 2019 . “ Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems .” Proceedings of the IEEE . 107 ( 3 ): 509 – 17 .

Zaidi Waqar , Dafoe Allan . 2021 . International Control of Powerful Technology: Lessons from the Baruch Plan for Nuclear Weapons . Working Paper 2021: 9 . Oxford : Centre for the Governance of AI .

Zhu J. 2022 . “ AI ethics with Chinese Characteristics? Concerns and preferred solutions in Chinese academia .” AI & Society . https://doi.org/10.1007/s00146-022-01578-w .

Zimmermann Annette , and Lee-Stronach Chad . 2022 . “ Proceed with Caution .” Canadian Journal of Philosophy . 52 ( 1 ): 6 – 25 .


  • Online ISSN 1468-2486
  • Print ISSN 1521-9488
  • Copyright © 2024 International Studies Association

Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.

  • View all journals
  • Explore content
  • About the journal
  • Publish with us
  • Sign up for alerts

Research articles

review articles for research

Stem cells derived exosomes as biological nano carriers for VCR sulfate for treating breast cancer stem cells

  • Ahmed H. Farouk
  • Ahmed N. Abdallah

review articles for research

Inducible deletion of microRNA activity in kidney mesenchymal cells exacerbates renal fibrosis

  • Hirofumi Sakuma
  • Keisuke Maruyama
  • Naoki Nakagawa

review articles for research

Hydraulic property variations with depth in a loess mudstone landslide

  • Gaochao Lin

review articles for research

Magnesium implantation as a continuous hydrogen production generator for the treatment of myocardial infarction in rats

review articles for research

Effect of weight loss program using prebiotics and probiotics on body composition, physique, and metabolic products: longitudinal intervention study

  • Nayera E. Hassan
  • Sahar A. El-Masry
  • Khadija Alian

review articles for research

Quantitative analysis of social influence and digital piracy contagion with differential equations on networks

  • Dibyajyoti Mallick
  • Kumar Gaurav
  • Sayantari Ghosh

review articles for research

Enhancing stress measurements accuracy control in the construction of long-span bridges

  • Alvaro Gaute-Alonso
  • David Garcia-Sanchez
  • Vasileios Ntertimanis

Reply to: Differences in response-scale usage are ubiquitous in cross-country comparisons and a potential driver of elusive relationships

  • Piotr Sorokowski
  • Marta Kowal

A comparative study of progressive failure of granite and marble rock bridges under direct shearing

  • Guangming Luo
  • Shengwen Qi
  • Bowen Zheng

Study on the fracture propagation of ground fissures with syn-depositional structure in Fenwei Basin, China

  • Quanzhong Lu
  • Feilong Chen

Scaling of damage mechanism for additively manufactured alloys at very high cycle fatigue

  • B. S. Voloskov
  • M. V. Bannikov
  • I. V. Sergeichev

Significance of atherosclerotic plaque location in recanalizing non-acute long-segment occlusion of the internal carotid artery

  • Tong-Yuan Zhao
  • Gang-Qin Xu
  • Bu-Lang Gao

Specific contact resistivity reduction in amorphous IGZO thin-film transistors through a TiN/IGTO heterogeneous interlayer

  • Joo Hee Jeong
  • Seung Wan Seo
  • Jae Kyeong Jeong

Statistical analysis on the incidence and predictors of death among second-line ART patients in public hospitals of North Wollo and Waghemira Zones, Ethiopia, 2021

  • Atitegeb Abera Kidie
  • Seteamlak Adane Masresha
  • Fassikaw Kebede Bizuneh

Enabling CO2-neutral metallurgy for ferrochromium production using bio-based reducing agents

  • Marcus Sommerfeld
  • Roberta Botinha
  • Bernd Friedrich

Evaluation of pretreatment methods for filamentous fungal detection

  • Xiaoli Jiang
  • Daiwen Xiao

Association between serum vitamin A and body mass index in adolescents from NHANES 1999 to 2006

  • Nishant Patel

Computation of molecular description of supramolecular Fuchsine model useful in medical data

  • Zunaira Kosar
  • Shahid Zaman
  • Melaku Berhe Belay

A quantitative comparison of urine centrifugation and filtration for the isolation and analysis of urinary nucleic acid biomarkers

  • Liz-Audrey Kounatse Djomnang
  • Iwijn De Vlaminck

A bicentric retrospective study of the correlation of EAU BCR risk groups with 18F-PSMA-1007 PET/CT detection in prostate cancer biochemical recurrence

  • Nathan Poterszman
  • Charles Merlin
  • François Somme

RSC Advances

Addressing preliminary challenges in upscaling the recovery of lithium from spent lithium ion batteries by the electrochemical method: a review

* Corresponding authors

a Department of Chemistry, Kulliyyah of Science, International Islamic University Malaysia, Jalan Sultan Ahmad Shah, 25200 Kuantan, Pahang, Malaysia E-mail: [email protected]

b Faculty of Applied Sciences, Universiti Teknologi MARA, 40450 Shah Alam, Selangor, Malaysia E-mail: [email protected]

c Department of Environment, Faculty of Forestry and Environment, Universiti Putra Malaysia, Serdang 43400, Selangor, Malaysia

d Kulliyyah of Architecture & Environmental Design, International Islamic University Malaysia, Gombak, 53100 Kuala Lumpur, Selangor, Malaysia

e Faculty of Artificial Intelligence, Universiti Teknologi Malaysia, 54100 Kuala Lumpur, Malaysia

f Group Research and Technology, PETRONAS Research Sdn. Bhd., Bandar Baru Bangi 43000, Selangor, Malaysia

g School of Chemical Engineering, College of Engineering, Universiti Teknologi MARA, 40450 Shah Alam, Selangor, Malaysia

The paramount importance of lithium (Li) nowadays and the mounting volume of untreated spent lithium-ion batteries (LIBs) have imposed pressure on innovators to tackle the near-term issue of Li resource depletion through recycling. The trajectory of research dedicated to recycling has skyrocketed in this decade, reflecting the global commitment to addressing the issues surrounding Li resources. Although metallurgical methods, such as pyro- and hydrometallurgy, are presently prevalent in Li recycling, they exhibit unsustainable operational characteristics, including elevated temperatures, the use of substantial quantities of expensive chemicals, and the generation of emissions containing toxic gases such as Cl2, SO2, and NOx. The alternative electrochemical method has therefore gained growing attention, as it involves a more straightforward operation that leverages ion-selective features and employs water as the main reagent, which is seen as more environmentally benign. Despite this, intensive efforts are still required to advance the electrochemical method toward commercialisation. This review highlights the key points in the electrochemical method that demand attention, including the feasibility of a large-scale setup, the substantial volume of electrolyte consumed, the design of membranes with the desired features, a suitable membrane layout, and the absence of techno-economic assessments for the electrochemical method. The perspectives presented herein provide a crucial understanding of the challenges of advancing the technology readiness level of the electrochemical method.

Graphical abstract: Addressing preliminary challenges in upscaling the recovery of lithium from spent lithium ion batteries by the electrochemical method: a review

M. A. Kasri, M. Z. Mohd Halizan, I. Harun, F. I. Bahrudin, N. Daud, M. F. Aizamddin, S. N. Amira Shaffee, N. A. Rahman, S. A. Shafiee and M. M. Mahat, RSC Adv. , 2024,  14 , 15515 DOI: 10.1039/D4RA00972J

A case of sodium bromfenac eye drop-induced toxic epidermal necrolysis and literature review

  • RESEARCH LETTER
  • Published: 11 May 2024
  • Volume 316, article number 167 (2024)

  • Liling Liu 1 ,
  • Xiaoqing Du 1 ,
  • Yanning Qi 1 &
  • Limin Yao 1  

Data availability

All relevant data to this case is reported in the manuscript. Further enquiries can be directed to the corresponding author.

References

Tsai TY, Huang IH, Chao YC et al (2021) Treating toxic epidermal necrolysis with systemic immunomodulating therapies: a systematic review and network meta-analysis. J Am Acad Dermatol 84(2):390–397. https://doi.org/10.1016/j.jaad.2020.08.122

Charlton OA, Harris V, Phan K, Mewton E, Jackson C, Cooper A (2020) Toxic epidermal necrolysis and steven-johnson syndrome: a comprehensive review. Adv Wound Care (New Rochelle) 9(7):426–439. https://doi.org/10.1089/wound.2019.0977

Creamer D, Walsh SA, Dziewulski P et al (2016) UK guidelines for the management of stevens-johnson syndrome/toxic epidermal necrolysis in adults 2016. J Plast Reconstr Aesthet Surg 69(6):e119–e153. https://doi.org/10.1016/j.bjps.2016.01.034

Sassolas B, Haddad C, Mockenhaupt M et al (2010) ALDEN, an algorithm for assessment of drug causality in Stevens-Johnson Syndrome and toxic epidermal necrolysis: comparison with case-control analysis. Clin Pharmacol Ther 88(1):60–68. https://doi.org/10.1038/clpt.2009.252

Schneck J, Fagot JP, Sekula P, Sassolas B, Roujeau JC, Mockenhaupt M (2008) Effects of treatments on the mortality of Stevens-Johnson syndrome and toxic epidermal necrolysis: a retrospective study on patients included in the prospective EuroSCAR Study. J Am Acad Dermatol 58(1):33–40. https://doi.org/10.1016/j.jaad.2007.08.039

Acknowledgements

We would like to acknowledge the hard and dedicated work of all the staff that implemented the intervention and evaluation components of the study.

This study was funded by the Medical Science Research Project of Hebei Province (20241336). The funding body had no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript.

Author information

Authors and affiliations

Department of Dermatology and Venereology, Bethune International Peace Hospital, No. 398 of Zhongshan West Road, Qiaoxi District, Shijiazhuang, 050200, China

Liling Liu, Xiaoqing Du, Yanning Qi & Limin Yao

Contributions

Liling Liu and Xiaoqing Du collected the data. Liling Liu and Xiaoqing Du analysed the data. Yanning Qi made a statistical analysis of the data. Limin Yao obtained the funding. Liling Liu and Limin Yao drafted the manuscript, then Yanning Qi, Liling Liu and Xiaoqing Du reviewed the manuscript. All authors read and approved the final draft.

Corresponding author

Correspondence to Limin Yao .

Ethics declarations

Conflict of interest

The authors declare that they have no competing interests.

Ethics approval

I confirm that I have read the Editorial Policy pages. This study was conducted with approval from the Ethics Committee of Bethune International Peace Hospital. This study was conducted in accordance with the declaration of Helsinki.

Informed consent

Written informed consent was obtained from all participants.

Consent for publication

All participants signed a document of informed consent.

About this article

Liu, L., Du, X., Qi, Y. et al. A case of sodium bromfenac eye drop-induced toxic epidermal necrolysis and literature review. Arch Dermatol Res 316 , 167 (2024). https://doi.org/10.1007/s00403-024-02914-4

Received: 21 March 2024

Revised: 12 April 2024

Accepted: 26 April 2024

Published: 11 May 2024

IMAGES

  1. Genuine Reasons for How to Critique a Research Article

  2. 50 Smart Literature Review Templates (APA) ᐅ TemplateLab

  3. Research Paper vs. Review Paper: Differences Between Research Papers and Review Papers

  4. Article Review Introduction Example / How To Write A Scientific Article

  5. Reading and Analyzing Articles

  6. Systematic Review Writing Service

VIDEO

  1. Elsevier Africa

  2. Difference between Research paper and a review. Which one is more important?

  3. Free Access to Review Articles

  4. How to find research papers and related research literature articles

  5. Types of Research Articles

  6. TYPES OF PUBLICATIONS IN MEDICAL LITERATURE

COMMENTS

  1. Writing a Scientific Review Article: Comprehensive Insights for

    According to Miranda and Garcia-Carpintero, review articles are, on average, three times more frequently cited than original research articles; they also asserted that a 20% increase in review authorship could result in a 40-80% increase in citations of the author. As a result, writing reviews can significantly impact a researcher's citation ...

  2. Review Articles

    The current state of the art of topological phenomena in photonics and acoustics is reviewed, and future research directions for ...

  3. How to write a superb literature review

    One of my favourite review-style articles 3 presents a plot bringing together data from multiple research papers (many of which directly contradict each other). This is then used to identify broad ...

  4. How to write a good scientific review article

    A good review article provides readers with an in-depth understanding of a field and highlights key gaps and challenges to address with future research. Writing a review article also helps to expand the writer's knowledge of their specialist area and to develop their analytical and communication skills, amongst other benefits.

  5. How to write a good scientific review article

    A good review article provides readers with an in-depth understanding of a field and highlights key gaps and challenges to address with future research. Writing a review article also helps to expand the writer's knowledge of their specialist area and to develop their analytical and communication skills, amongst other benefits. Thus, the ...

  6. Review articles: purpose, process, and structure

    Many research disciplines feature high-impact journals that are dedicated outlets for review papers (or review-conceptual combinations) (e.g., Academy of Management Review, Psychological Bulletin, Medicinal Research Reviews). The rationale for such outlets is the premise that research integration and synthesis provides an important, and possibly even a required, step in the scientific process.

  7. The New England Journal of Medicine

    The New England Journal of Medicine (NEJM) is a weekly general medical journal that publishes new medical research and review articles, and editorial opinion on a wide variety of topics of ...

  8. What is a review article?

    A review article can also be called a literature review, or a review of literature. It is a survey of previously published research on a topic. It should give an overview of current thinking on the topic. And, unlike an original research article, it will not present new experimental results. Writing a review of literature is to provide a ...

  9. Writing a good review article

    A review article is a comprehensive summary of the current understanding of a specific research topic and is based on previously published research. Unlike research papers, it does not contain new results, but can propose new inferences based on the combined findings of previous research. Types of review articles

  10. Review Articles in 2022

    The progress and the outstanding issues in understanding the correlated phases in the unconventional iron-based superconductors is reviewed. Rafael M. Fernandes. Amalia I. Coldea. Gabriel Kotliar ...

  11. Writing an impactful review article: What do we know and what do we

    Overall, successful review articles identify research gaps and set the future research agenda. 1. Introduction and scope of literature reviews. Subject areas advance when studies are synthesized and research gaps are identified (Kumar, Paul, & Unnithan, 2020). In this context, systematic literature reviews allow researchers to identify gaps in ...

  12. Google Scholar

    Google Scholar provides a simple way to broadly search for scholarly literature. Search across a wide variety of disciplines and sources: articles, theses, books, abstracts and court opinions.

  13. Writing a literature review

    Writing a literature review requires a range of skills to gather, sort, evaluate and summarise peer-reviewed published data into a relevant and informative unbiased narrative. Digital access to research papers, academic texts, review articles, reference databases and public data sets are all sources of information that are available to enrich ...

  14. Reviewing review articles

    A review article is written to summarize the current state of understanding on a topic, and peer reviewing these types of articles requires a slightly different set of criteria compared with empirical articles. ... and if it indicates the best avenues for future research. The review article should present an unbiased summary of the current ...

  15. How to Write a Literature Review

    Show how your research addresses a gap or contributes to a debate; Evaluate the current state of research and demonstrate your knowledge of the scholarly debates around your topic. Writing literature reviews is a particularly important skill if you want to apply for graduate school or pursue a career in research. We've written a step-by-step ...

  16. Literature review as a research methodology: An ...

    An effective and well-conducted review as a research method creates a firm foundation for advancing knowledge and facilitating theory development (Webster & Watson, 2002). By integrating findings and perspectives from many empirical findings, a literature review can address research questions with a power that no single study has.

  17. How to Write an Article Review (With Samples)

    3. Identify the article. Start your review by referring to the title and author of the article, the title of the journal, and the year of publication in the first paragraph. For example: The article, "Condom use will increase the spread of AIDS," was written by Anthony Zimmerman, a Catholic priest.

  18. JSTOR Home

    Harness the power of visual materials—explore more than 3 million images now on JSTOR. Enhance your scholarly research with underground newspapers, magazines, and journals. Explore collections in the arts, sciences, and literature from the world's leading museums, archives, and scholars. JSTOR is a digital library of academic journals ...

  19. Narrative Reviews: Flexible, Rigorous, and Practical

    Introduction. Narrative reviews are a type of knowledge synthesis grounded in a distinct research tradition. They are often framed as non-systematic, which implies that there is a hierarchy of evidence placing narrative reviews below other review forms. 1 However, narrative reviews are highly useful to medical educators and researchers. While a systematic review often focuses on a narrow ...

  20. A systematic review of sustainable business models: Opportunities

    Case study/survey accounts for nearly 39.68% of the total research articles reviewed in this study. Next, the partial least square-structural equation modeling technique has been used maximum among the quantitative methodologies, accounting for nearly 9.52%. Other than these, traditional methods such as SLR (14.28%), review (11.11%) and ...

  21. How AI Skews Our Sense of Responsibility

    It's this sense of responsibility that AI and automated systems can alter. To gain insight into how AI affects users' perceptions of their own responsibility and agency, we conducted several studies. Two studies examined what influences a driver's decision to regain control of a self-driving vehicle when the autonomous driving system is ...

  22. MIT Technology Review

    DeepMind says that depending on the interaction being modeled, accuracy can range from 40% to over 80%, and the model will let researchers know how confident it is in its prediction.

  23. Global Governance of Artificial Intelligence: Next Steps for Empirical

    The purpose of this article is to outline an agenda for research into the global governance of AI. The article distinguishes between two broad perspectives: an empirical approach, aimed at mapping and explaining global AI governance; and a normative approach, aimed at developing and applying standards for appropriate global AI governance.

  24. Research articles

    Deep learning segmentation of non-perfusion area from color fundus images and AI-generated fluorescein angiography. Kanato Masayoshi. Yusaku Katada. Toshihide Kurihara. Article Open Access 11 May ...

  25. Addressing preliminary challenges in upscaling the ...

    The trajectory of research dedicated to recycling has skyrocketed in this decade, reflecting the global commitment ... Article type: Review Article. Submitted 07 Feb 2024. Accepted 25 Apr 2024. First published 13 May 2024. This article is Open Access.

  26. A case of sodium bromfenac eye drop-induced toxic epidermal ...

    Archives of Dermatological Research - The 2016 UK Adult SJS/TEN Management Guidelines [] recommend utilizing the ALDEN scoring system [4, 5] to evaluate the likelihood of drugs causing TEN in patients exposed to multiple medications.After reviewing the medication history of the patient, considering factors such as drug half-life, TEN risk level, and liver and kidney function of the patient ...

  27. Increased Screen Time as a Cause of Declining Physical, Psychological

    According to research from the Centers for Disease Control and Prevention ... This review article studied the relationships between screen time and digital device usage, precisely during the night times, the quality of sleep, anxiety causes, feelings of depression, and issues related to self-esteem, as well as physical effects in individuals. ...