
40 Detailed Artificial Intelligence Case Studies [2024]

In this dynamic era of technological advancements, Artificial Intelligence (AI) emerges as a pivotal force, reshaping the way industries operate and charting new courses for business innovation. This article presents an in-depth exploration of 40 diverse and compelling AI case studies from across the globe. Each case study offers a deep dive into the challenges faced by companies, the AI-driven solutions implemented, their substantial impacts, and the valuable lessons learned. From healthcare and finance to transportation and retail, these stories highlight AI’s transformative power in solving complex problems, optimizing processes, and driving growth, offering insightful glimpses into the potential and versatility of AI in shaping our world.

Related: How to Become an AI Thought Leader?

1. IBM Watson Health: Revolutionizing Patient Care with AI

Task/Conflict: The healthcare industry faces challenges in handling vast amounts of patient data, accurately diagnosing diseases, and creating effective treatment plans. IBM Watson Health aimed to address these issues by harnessing AI to process and analyze complex medical information, thus improving the accuracy and efficiency of patient care.

Solution: Utilizing the cognitive computing capabilities of IBM Watson, this solution involves analyzing large volumes of medical records, research papers, and clinical trial data. The system uses natural language processing to understand and process medical jargon, making sense of unstructured data to aid medical professionals in diagnosing and treating patients.

Overall Impact:

  • Enhanced accuracy in patient diagnosis and treatment recommendations.
  • Significant improvement in personalized healthcare services.

Key Learnings:

  • AI can complement medical professionals’ expertise, leading to better healthcare outcomes.
  • The integration of AI in healthcare can lead to significant advancements in personalized medicine.

2. Google DeepMind’s AlphaFold: Unraveling the Mysteries of Protein Folding

Task/Conflict: The scientific community has long grappled with the protein folding problem – understanding how a protein’s amino acid sequence determines its 3D structure. Solving this problem is crucial for drug discovery and understanding diseases at a molecular level, yet it remained a formidable challenge due to the complexity of biological structures.

Solution: AlphaFold, developed by Google DeepMind, is an AI model trained on vast datasets of known protein structures. It assesses the distances and angles between amino acids to predict how a protein folds, outperforming existing methods in terms of speed and accuracy. This breakthrough represents a major advancement in computational biology.

  • Significant acceleration in drug discovery and disease understanding.
  • Set a new benchmark for computational methods in biology.
  • AI’s predictive power can solve complex biological problems.
  • The application of AI in scientific research can lead to groundbreaking discoveries.
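
For a concrete, if greatly simplified, picture of the geometry such models reason about, the sketch below computes a pairwise distance map and contact map for a toy five-residue chain using NumPy. The coordinates and residue names are invented and this is not DeepMind's pipeline; it only illustrates the kind of inter-residue distances AlphaFold-style models learn to predict from sequence.

```python
import numpy as np

# Hypothetical 3D coordinates (in angstroms) for five residues of a toy peptide.
residues = ["MET", "LYS", "THR", "ALA", "TYR"]
coords = np.array([
    [0.0, 0.0, 0.0],
    [3.8, 0.0, 0.0],
    [5.1, 3.5, 0.0],
    [2.9, 5.9, 1.2],
    [-0.4, 4.8, 2.0],
])

# Pairwise Euclidean distance map: entry (i, j) is the distance between residues i and j.
diff = coords[:, None, :] - coords[None, :, :]
distance_map = np.sqrt((diff ** 2).sum(axis=-1))

# A simple "contact map" thresholds the distances (contacts closer than 6 angstroms).
contact_map = distance_map < 6.0

print(np.round(distance_map, 1))
print(contact_map.astype(int))
```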

3. Amazon: Transforming Supply Chain Management through AI

Task/Conflict: Managing a global supply chain involves complex challenges like predicting product demand, optimizing inventory levels, and streamlining logistics. Amazon faced the task of efficiently managing its massive inventory while minimizing costs and meeting customer demands promptly.

Solution: Amazon employs sophisticated AI algorithms for predictive inventory management, which forecast product demand based on various factors like buying trends, seasonality, and market changes. This system allows for real-time adjustments, adapting swiftly to changing market dynamics.

  • Reduced operational costs through efficient inventory management.
  • Improved customer satisfaction with timely deliveries and availability.
  • AI can significantly enhance supply chain efficiency and responsiveness.
  • Predictive analytics in inventory management leads to reduced waste and cost savings.
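
As a hedged illustration of the forecasting idea only (not Amazon's actual system), the sketch below applies a seasonal-naive forecast with a simple trend adjustment to invented weekly sales data.

```python
import numpy as np

# Hypothetical weekly unit sales for one product over two years (seasonal period = 52 weeks).
rng = np.random.default_rng(0)
weeks = np.arange(104)
sales = 200 + 40 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 10, size=104)

def seasonal_naive_forecast(history, season=52, horizon=4):
    """Forecast the next `horizon` periods by repeating the values from one season ago,
    adjusted by the average trend observed over the last season."""
    trend = (history[-1] - history[-season - 1]) / season
    last_season = history[-season:][:horizon]
    return last_season + trend * np.arange(1, horizon + 1)

forecast = seasonal_naive_forecast(sales, season=52, horizon=4)
print(np.round(forecast, 1))  # projected demand for the next four weeks
```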

4. Tesla’s Autonomous Vehicles: Driving the Future of Transportation

Task/Conflict: The development of autonomous vehicles represents a major technological and safety challenge. Tesla aimed to create self-driving cars that are not only reliable and safe but also capable of navigating complex traffic conditions without human intervention.

Solution: Tesla’s solution involves advanced AI and machine learning algorithms that process data from various sensors and cameras to understand and navigate the driving environment. Continuous learning from real-world driving data allows the system to improve over time, making autonomous driving safer and more efficient.

  • Leadership in the autonomous vehicle sector, enhancing road safety.
  • Continuous improvements in self-driving technology through AI-driven data analysis.
  • Continuous data analysis is key to advancing autonomous driving technologies.
  • AI can significantly improve road safety and driving efficiency.

Related: High-Paying AI Career Options

5. Zara: Fashioning the Future with AI in Retail

Task/Conflict: In the fast-paced fashion industry, predicting trends and managing inventory efficiently are critical for success. Zara faced the challenge of quickly adapting to changing fashion trends while avoiding overstock and meeting consumer demand.

Solution: Zara employs AI algorithms to analyze fashion trends, customer preferences, and sales data. The AI system also assists in managing inventory, ensuring that popular items are restocked promptly and that stores are not overburdened with unsold products. This approach optimizes both production and distribution.

  • Increased sales and profitability through optimized inventory.
  • Enhanced customer satisfaction by aligning products with current trends.
  • AI can accurately predict consumer behavior and trends.
  • Effective inventory management through AI can significantly impact business success.

6. Netflix: Personalizing Entertainment with AI

Task/Conflict: In the competitive streaming industry, providing a personalized user experience is key to retaining subscribers. Netflix needed to recommend relevant content to each user from its vast library, ensuring that users remained engaged and satisfied.

Solution: Netflix developed an advanced AI-driven recommendation engine that analyzes individual viewing habits, ratings, and preferences. This personalized approach keeps users engaged, as they are more likely to find content that interests them, enhancing their overall viewing experience.

  • Increased viewer engagement and longer watch times.
  • Higher subscription retention rates due to personalized content.
  • Personalized recommendations significantly enhance user experience.
  • AI-driven content curation is essential for success in digital entertainment.
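
To make the recommendation idea concrete, here is a minimal item-based collaborative filtering sketch in Python; the ratings matrix and titles are invented, and Netflix's production engine is far more elaborate than this.

```python
import numpy as np

# Hypothetical ratings matrix: rows are users, columns are titles, 0 means "not rated".
titles = ["Drama A", "Thriller B", "Comedy C", "Documentary D"]
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def item_cosine_similarity(matrix):
    """Cosine similarity between item (column) rating vectors."""
    norms = np.linalg.norm(matrix, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    return (matrix.T @ matrix) / (norms.T @ norms)

item_sim = item_cosine_similarity(ratings)

def recommend(user_ratings, item_sim, top_n=2):
    # Score each unseen item by similarity-weighted ratings of items the user already rated.
    scores = item_sim @ user_ratings
    unseen = np.where(user_ratings == 0)[0]
    return unseen[np.argsort(scores[unseen])[::-1]][:top_n]

user = ratings[0]
for idx in recommend(user, item_sim):
    print("Recommend:", titles[idx])
```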

7. Airbus: Elevating Aircraft Maintenance with AI

Task/Conflict: Aircraft maintenance is crucial for ensuring flight safety and operational efficiency. Airbus faced the challenge of predicting maintenance needs to prevent equipment failures and reduce downtime, which is critical in the aviation industry.

Solution: Airbus implemented AI algorithms for predictive maintenance, analyzing data from aircraft sensors to identify potential issues before they lead to failures. This system assesses the condition of various components, predicting when maintenance is needed. The solution not only enhances safety but also optimizes maintenance schedules, reducing unnecessary inspections and downtime.

  • Decreased maintenance costs and reduced aircraft downtime.
  • Improved safety with proactive maintenance measures.
  • AI can predict and prevent potential equipment failures.
  • Predictive maintenance is essential for operational efficiency and safety in aviation.
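
A toy version of the predictive-maintenance idea is sketched below: fit a linear degradation trend to an invented vibration signal and estimate when it will cross an alarm threshold. This is a teaching sketch under made-up numbers, not Airbus's method.

```python
import numpy as np

# Hypothetical daily vibration readings from an engine component; the upward drift
# represents gradual wear. Values and the alarm threshold are invented for illustration.
days = np.arange(60)
vibration = 1.0 + 0.01 * days + np.random.default_rng(1).normal(0, 0.02, size=60)
ALARM_THRESHOLD = 1.8  # level at which maintenance would be required

# Fit a linear degradation trend to the recent history.
slope, intercept = np.polyfit(days, vibration, deg=1)

# Estimate how many days remain until the trend crosses the alarm threshold.
if slope > 0:
    days_to_threshold = (ALARM_THRESHOLD - (slope * days[-1] + intercept)) / slope
    print(f"Schedule maintenance in roughly {days_to_threshold:.0f} days")
else:
    print("No upward degradation trend detected")
```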

8. American Express: Securing Transactions with AI

Task/Conflict: Credit card fraud is a significant issue in the financial sector, leading to substantial losses and undermining customer trust. American Express needed an efficient way to detect and prevent fraudulent transactions in real-time.

Solution: American Express utilizes machine learning models to analyze transaction data. These models identify unusual patterns and behaviors indicative of fraud. By continuously learning from new transaction data, the system becomes increasingly accurate in detecting fraudulent activities, providing real-time alerts and preventing unauthorized transactions.

  • Minimized financial losses due to reduced fraudulent activities.
  • Enhanced customer trust and security in financial transactions.
  • Machine learning is highly effective in fraud detection.
  • Real-time data analysis is crucial for preventing financial fraud.
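
One common unsupervised approach to this kind of problem is anomaly detection. The sketch below uses scikit-learn's IsolationForest on invented transaction features; it illustrates the general technique only and is not American Express's model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transaction features: [amount in dollars, hour of day, distance from home in km].
rng = np.random.default_rng(2)
normal = np.column_stack([
    rng.normal(60, 20, 500),   # typical purchase amounts
    rng.normal(14, 3, 500),    # daytime hours
    rng.normal(5, 2, 500),     # close to home
])
suspicious = np.array([[2400, 3, 800], [1800, 4, 950]])  # large, late-night, far away
transactions = np.vstack([normal, suspicious])

# Train an unsupervised anomaly detector on the transaction history.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(transactions)

# predict() returns -1 for anomalies and 1 for normal points.
flags = model.predict(suspicious)
print(flags)  # likely output: [-1 -1]
```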

Related: Is AI a Good Career Option for Women?

9. Stitch Fix: Tailoring the Future of Fashion Retail

Task/Conflict: In the competitive fashion retail industry, providing a personalized shopping experience is key to customer satisfaction and business growth. Stitch Fix aimed to offer customized clothing selections to each customer, based on their unique preferences and style.

Solution: Stitch Fix uses AI algorithms to analyze customer feedback, style preferences, and purchase history to recommend clothing items. This personalized approach is complemented by human stylists, ensuring that each customer receives a tailored selection that aligns with their individual style.

  • Increased customer satisfaction through personalized styling services.
  • Business growth driven by a unique, AI-enhanced shopping experience.
  • AI combined with human judgment can create highly effective personalization.
  • Tailoring customer experiences using AI leads to increased loyalty and business success.

10. Baidu: Breaking Language Barriers with Voice Recognition

Task/Conflict: Voice recognition technology faces the challenge of accurately understanding and processing speech in various languages and accents. Baidu aimed to enhance its voice recognition capabilities to provide more accurate and user-friendly interactions in multiple languages.

Solution: Baidu employs deep learning algorithms for voice and speech recognition, training its system on a diverse range of languages and dialects. This approach allows for more accurate recognition of speech patterns, enabling the technology to understand and respond to voice commands more effectively. The system continuously improves as it processes more voice data, making technology more accessible to users worldwide.

  • Enhanced user interaction with technology in multiple languages.
  • Reduced language barriers in voice-activated services and devices.
  • AI can effectively bridge language gaps in technology.
  • Continuous learning from diverse data sets is key to improving voice recognition.

11. JP Morgan: Revolutionizing Legal Document Analysis with AI

Task/Conflict: Analyzing legal documents, such as contracts, is a time-consuming and error-prone process. JP Morgan sought to streamline this process, reducing the time and effort required while increasing accuracy.

Solution: JP Morgan implemented an AI-powered tool, COIN (Contract Intelligence), to analyze legal documents quickly and accurately. COIN uses NLP to interpret and extract relevant information from contracts, significantly reducing the time required for document review.

  • Dramatic reduction in time required for legal document analysis.
  • Increased accuracy and reduced human error in contract interpretation.
  • AI can efficiently handle large volumes of data, offering speed and accuracy.
  • Automation in legal processes can significantly enhance operational efficiency.
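
COIN's internals are not public, but the general idea of pulling structured fields out of contract language can be sketched with Python's standard re module. The sample clause and patterns below are invented purely for illustration; real contract-analysis systems rely on trained language models rather than hand-written rules.

```python
import re

# Invented sample clause; real contracts are far messier than this.
clause = (
    'This Credit Agreement is entered into on March 15, 2023, between '
    'Acme Holdings LLC (the "Borrower") and First Example Bank N.A. '
    '(the "Lender"), with a principal amount of $5,000,000 and a '
    'maturity date of March 15, 2028.'
)

# Extract dates, dollar amounts, and defined parties with simple patterns.
dates = re.findall(r"[A-Z][a-z]+ \d{1,2}, \d{4}", clause)
amounts = re.findall(r"\$[\d,]+", clause)
parties = re.findall(r'([A-Z][\w.& ]+?) \(the "(\w+)"\)', clause)

print("Dates:  ", dates)
print("Amounts:", amounts)
print("Parties:", parties)
```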

12. Microsoft: AI for Accessibility

Task/Conflict: People with disabilities often face challenges in accessing technology. Microsoft aimed to create AI-driven tools to enhance accessibility, especially for individuals with visual, hearing, or cognitive impairments.

Solution: Microsoft developed a range of AI-powered tools including applications for voice recognition, visual assistance, and cognitive support, making technology more accessible and user-friendly. For instance, Seeing AI, an app developed by Microsoft, helps visually impaired users to understand their surroundings by describing people, texts, and objects.

  • Improved accessibility and independence for people with disabilities.
  • Creation of more inclusive technology solutions.
  • AI can significantly contribute to making technology accessible for all.
  • Developing inclusive technology is essential for societal progress.

Related: How to get an Internship in AI?

13. Alibaba’s City Brain: Revolutionizing Urban Traffic Management

Task/Conflict: Urban traffic congestion is a major challenge in many cities, leading to inefficiencies and environmental concerns. Alibaba’s City Brain project aimed to address this issue by using AI to optimize traffic flow and improve public transportation in urban areas.

Solution: City Brain uses AI to analyze real-time data from traffic cameras, sensors, and GPS systems. It processes this information to predict traffic patterns and optimize traffic light timing, reducing congestion. The system also provides data-driven insights for urban planning and emergency response coordination, enhancing overall city management.

  • Significant reduction in traffic congestion and improved urban transportation.
  • Enhanced efficiency in city management and emergency response.
  • AI can effectively manage complex urban systems.
  • Data-driven solutions are key to improving urban living conditions.
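
As a toy stand-in for adaptive signal timing, the sketch below splits a fixed signal cycle's green time across approaches in proportion to observed queue lengths. Real systems like City Brain use far richer optimization over many intersections; the queue counts here are invented.

```python
# A toy version of adaptive signal timing: split a fixed cycle's green time across
# approaches in proportion to their observed queue lengths.

def allocate_green_time(queues, cycle_seconds=120, min_green=10):
    """Return green seconds per approach, proportional to queue length,
    with a guaranteed minimum for every approach."""
    approaches = len(queues)
    spare = cycle_seconds - min_green * approaches
    total_queue = sum(queues) or 1  # avoid division by zero on empty roads
    return [min_green + spare * q / total_queue for q in queues]

# Vehicles currently queued at the north, south, east, and west approaches (invented).
queues = [24, 8, 30, 4]
for name, green in zip(["north", "south", "east", "west"], allocate_green_time(queues)):
    print(f"{name}: {green:.0f}s green")
```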

14. Deep 6 AI: Accelerating Clinical Trials with Artificial Intelligence

Task/Conflict: Recruiting suitable patients for clinical trials is often a slow and cumbersome process, hindering medical research. Deep 6 AI sought to accelerate this process by quickly identifying eligible participants from a vast pool of patient data.

Solution: Deep 6 AI employs AI to sift through extensive medical records, identifying potential trial participants based on specific criteria. The system analyzes structured and unstructured data, including doctor’s notes and diagnostic reports, to find matches for clinical trials. This approach significantly speeds up the recruitment process, enabling faster trial completions and advancements in medical research.

  • Quicker recruitment for clinical trials, leading to faster research progress.
  • Enhanced efficiency in medical research and development.
  • AI can streamline the patient selection process for clinical trials.
  • Efficient recruitment is crucial for the advancement of medical research.

15. NVIDIA: Revolutionizing Gaming Graphics with AI

Task/Conflict: Enhancing the realism and performance of gaming graphics is a continuous challenge in the gaming industry. NVIDIA aimed to revolutionize gaming visuals by leveraging AI to create more realistic and immersive gaming experiences.

Solution: NVIDIA’s AI-driven graphic processing technologies, such as ray tracing and deep learning super sampling (DLSS), provide highly realistic and detailed graphics. These technologies use AI to render images more efficiently, improving game performance without compromising on visual quality. This innovation sets new standards in gaming graphics, making games more lifelike and engaging.

  • Elevated gaming experiences with state-of-the-art graphics.
  • Set new industry standards for graphic realism and performance.
  • AI can significantly enhance creative industries, like gaming.
  • Balancing performance and visual quality is key to gaming innovation.

16. Palantir: Mastering Data Integration and Analysis with AI

Task/Conflict: Integrating and analyzing large-scale, diverse datasets is a complex task, essential for informed decision-making in various sectors. Palantir Technologies faced the challenge of making sense of vast amounts of data to provide actionable insights for businesses and governments.

Solution: Palantir developed AI-powered platforms that integrate data from multiple sources, providing a comprehensive view of complex systems. These platforms use machine learning to analyze data, uncover patterns, and predict outcomes, assisting in strategic decision-making. This solution enables users to make informed decisions in real-time, based on a holistic understanding of their data.

  • Enhanced decision-making capabilities in complex environments.
  • Greater insights and efficiency in data analysis across sectors.
  • Effective data integration is crucial for comprehensive analysis.
  • AI-driven insights are essential for strategic decision-making.

Related: Surprising AI Facts & Statistics

17. Blue River Technology: Sowing the Seeds of AI in Agriculture

Task/Conflict: The agriculture industry faces challenges in increasing efficiency and sustainability while minimizing environmental impact. Blue River Technology aimed to enhance agricultural practices by using AI to make farming more precise and efficient.

Solution: Blue River Technology developed AI-driven agricultural robots that perform tasks like precise planting and weed control. These robots use ML to identify plants and make real-time decisions, such as applying herbicides only to weeds. This targeted approach reduces chemical usage and promotes sustainable farming practices, leading to better crop yields and environmental conservation.

  • Significant reduction in chemical usage in farming.
  • Increased crop yields through precision agriculture.
  • AI can contribute significantly to sustainable agricultural practices.
  • Precision farming is key to balancing productivity and environmental conservation.
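
A simplified stand-in for the plant-detection step is shown below: the classic "excess green" index (2G - R - B) separates vegetation from soil in an invented RGB patch. Blue River's production models are learned rather than hand-tuned; this only illustrates the segmentation idea that precedes any spray decision.

```python
import numpy as np

# A tiny invented 2x3 RGB image patch: three soil pixels and three vegetation pixels.
# Shape is (height, width, channel) with values in [0, 1].
patch = np.array([
    [[0.45, 0.35, 0.25], [0.20, 0.60, 0.15], [0.15, 0.55, 0.10]],
    [[0.50, 0.40, 0.30], [0.10, 0.70, 0.12], [0.48, 0.38, 0.28]],
])

r, g, b = patch[..., 0], patch[..., 1], patch[..., 2]
excess_green = 2 * g - r - b          # higher values indicate vegetation
vegetation_mask = excess_green > 0.2  # threshold chosen by eye for this toy patch

print(np.round(excess_green, 2))
print(vegetation_mask.astype(int))    # 1 = plant pixel to inspect, 0 = soil
```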

18. Salesforce: Enhancing Customer Relationship Management with AI

Task/Conflict: In the realm of customer relationship management (CRM), personalizing interactions and gaining insights into customer behavior are crucial for business success. Salesforce aimed to enhance CRM capabilities by integrating AI to provide personalized customer experiences and actionable insights.

Solution: Salesforce incorporates AI-powered tools into its CRM platform, enabling businesses to personalize customer interactions, automate responses, and predict customer needs. These tools analyze customer data, providing insights that help businesses tailor their strategies and communications. The AI integration not only improves customer engagement but also streamlines sales and marketing efforts.

  • Improved customer engagement and satisfaction.
  • Increased business growth through tailored marketing and sales strategies.
  • AI-driven personalization is key to successful customer relationship management.
  • Leveraging AI for data insights can significantly impact business growth.

19. OpenAI: Transforming Natural Language Processing

Task/Conflict: OpenAI aimed to advance NLP by developing models capable of generating coherent and contextually relevant text, opening new possibilities in AI-human interaction.

Solution: OpenAI developed the Generative Pre-trained Transformer (GPT) models, which use deep learning to generate text that closely mimics human language. These models are trained on vast datasets, enabling them to understand context and generate responses in a conversational and coherent manner.

  • Pioneered advancements in natural language understanding and generation.
  • Expanded the possibilities for AI applications in communication.
  • AI’s ability to mimic human language has vast potential applications.
  • Advancements in NLP are crucial for improving AI-human interactions.
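
The generation step shared by GPT-style models can be illustrated in isolation: turn the logits a trained network assigns to candidate next tokens into probabilities with a temperature-scaled softmax, then sample. The vocabulary and logits below are invented; this is not the OpenAI API.

```python
import numpy as np

# Invented logits for candidate next tokens after the prompt "The weather today is".
vocab = ["sunny", "rainy", "cold", "purple", "banana"]
logits = np.array([3.1, 2.4, 1.9, -1.0, -3.0])

def sample_next_token(logits, temperature=0.8, rng=np.random.default_rng(0)):
    """Softmax with temperature, then sample one token index."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs), probs

idx, probs = sample_next_token(logits)
print(dict(zip(vocab, np.round(probs, 3))))
print("sampled:", vocab[idx])
```

Lower temperatures concentrate probability on the most likely token, while higher temperatures make the output more varied; the same trade-off applies in full-scale language models.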

20. Siemens: Pioneering Industrial Automation with AI

Task/Conflict: Industrial automation seeks to improve productivity and efficiency in manufacturing processes. Siemens faced the challenge of optimizing these processes using AI to reduce downtime and enhance output quality.

Solution: Siemens employs AI-driven solutions for predictive maintenance and process optimization to reduce downtime in industrial settings. Additionally, AI optimizes manufacturing processes, ensuring quality and efficiency.

  • Increased productivity and reduced downtime in industrial operations.
  • Enhanced quality and efficiency in manufacturing processes.
  • AI is a key driver in the advancement of industrial automation.
  • Predictive analytics are crucial for maintaining efficiency in manufacturing.

Related: Top Books for Learning AI

21. Ford: Driving Safety Innovation with AI

Task/Conflict: Enhancing automotive safety and providing effective driver assistance systems are critical challenges in the auto industry. Ford aimed to leverage AI to improve vehicle safety features and assist drivers in real-time decision-making.

Solution: Ford integrated AI into its advanced driver assistance systems (ADAS) to provide features like adaptive cruise control, lane-keeping assistance, and collision avoidance. These systems use sensors and cameras to gather data, which AI processes to make split-second decisions that enhance driver safety and vehicle performance.

  • Improved safety features in vehicles, minimizing accidents and improving driver confidence.
  • Enhanced driving experience with intelligent assistance features.
  • AI can significantly enhance safety in the automotive industry.
  • Real-time data processing and decision-making are essential for effective driver assistance systems.

22. HSBC: Enhancing Banking Security with AI

Task/Conflict: As financial transactions increasingly move online, banks face heightened risks of fraud and cybersecurity threats. HSBC needed to bolster its protective measures to secure user data and prevent fraud.

Solution: HSBC employed AI-driven security systems to monitor transactions and identify suspicious activities. The AI models analyze patterns in customer behavior and flag anomalies that could indicate fraudulent actions, allowing for immediate intervention. This helps minimize the risk of financial losses and protects customer trust.

  • Strengthened security measures and reduced incidence of fraud.
  • Maintained high levels of customer trust and satisfaction.
  • AI is critical in enhancing security in the banking sector.
  • Proactive fraud detection can prevent significant financial losses.

23. Unilever: Optimizing Supply Chain with AI

Task/Conflict: Managing a global supply chain involves complexities related to logistics, demand forecasting, and sustainability practices. Unilever sought to enhance its supply chain efficiency while promoting sustainability.

Solution: Unilever implemented AI to optimize its supply chain operations, from raw material sourcing to distribution. AI algorithms analyze data to forecast demand, improve inventory levels, and minimize waste. Additionally, AI helps in selecting sustainable practices and suppliers, aligning with Unilever’s commitment to environmental responsibility.

  • Enhanced efficiency and reduced costs in supply chain operations.
  • Better sustainability practices, reducing environmental impact.
  • AI can substantially optimize supply chain management.
  • Integrating AI with sustainability initiatives can lead to environmentally responsible operations.

24. Spotify: Personalizing Music Experience with AI

Task/Conflict: In the competitive music streaming industry, providing a personalized listening experience is crucial for user engagement and retention. Spotify needed to tailor music recommendations to individual tastes and preferences.

Solution: Spotify utilizes AI-driven algorithms to analyze user listening habits, preferences, and contextual data to recommend music tracks and playlists. This personalization ensures that users are continually engaged and discover new music that aligns with their tastes, enhancing their overall listening experience.

  • Increased customer engagement and time spent on the platform.
  • Higher user satisfaction and subscription retention rates.
  • Personalized content delivery is key to user retention in digital entertainment.
  • AI-driven recommendations significantly enhance user experience.

Related: How can AI be used in Instagram Marketing?

25. Walmart: Revolutionizing Retail with AI

Task/Conflict: Retail giants like Walmart face challenges in inventory management and providing a high-quality customer service experience. Walmart aimed to use AI to optimize these areas and enhance overall operational efficiency.

Solution: Walmart deployed AI technologies across its stores to manage inventory levels effectively and enhance customer service. AI systems predict product demand to optimize stock levels, while AI-driven robots assist in inventory management and customer service, such as guiding customers in stores and handling queries.

  • Improved inventory management, reducing overstock and shortages.
  • Enhanced customer service experience in stores.
  • AI can streamline retail operations significantly.
  • Enhanced customer service through AI leads to better customer satisfaction.

26. Roche: Innovating Drug Discovery with AI

Task/Conflict: The pharmaceutical industry faces significant challenges in drug discovery, requiring vast investments of time and resources. Roche aimed to utilize AI to streamline the drug development process and enhance the discovery of new therapeutics.

Solution: Roche implemented AI to analyze medical data and simulate drug interactions, speeding up the drug discovery process. AI models predict the effectiveness of compounds and identify potential candidates for further testing, significantly reducing the time and cost associated with traditional drug development.

  • Accelerated drug discovery processes, bringing new treatments to market faster.
  • Reduced costs and increased efficiency in pharmaceutical research.
  • AI can greatly accelerate the drug discovery process.
  • Cost-effective and efficient drug development is possible with AI integration.

27. IKEA: Enhancing Customer Experience with AI

Task/Conflict: In the competitive home furnishings market, enhancing the customer shopping experience is crucial for success. IKEA aimed to use AI to provide innovative design tools and improve customer interaction.

Solution: IKEA introduced AI-powered tools such as virtual reality apps that allow consumers to visualize furniture before buying. These tools help customers make more informed decisions and enhance their shopping experience. Additionally, AI chatbots assist with customer service inquiries, providing timely and effective support.

  • Improved customer decision-making and satisfaction with interactive tools.
  • Enhanced efficiency in customer service.
  • AI can transform the retail experience by providing innovative customer interaction tools.
  • Effective customer support through AI can enhance brand loyalty and satisfaction.

28. General Electric: Optimizing Energy Production with AI

Task/Conflict: Managing energy production efficiently while predicting and mitigating potential issues is crucial for energy companies. General Electric (GE) aimed to improve the efficiency and reliability of its energy production facilities using AI.

Solution: GE integrated AI into its energy management systems to enhance power generation and distribution. AI algorithms predict maintenance needs and optimize energy production, ensuring efficient operation and reducing downtime. This predictive maintenance approach saves costs and enhances the reliability of energy production.

  • Increased efficiency in energy production and distribution.
  • Reduced operational costs and enhanced system reliability.
  • Predictive maintenance is crucial for cost-effective and efficient energy management.
  • AI can significantly improve the predictability and efficiency of energy production.

Related: Use of AI in Sales

29. L’Oréal: Transforming Beauty with AI

Task/Conflict: Personalization in the beauty industry enhances customer satisfaction and brand loyalty. L’Oréal aimed to personalize beauty products and experiences for its diverse customer base using AI.

Solution: L’Oréal leverages AI to analyze consumer data and provide personalized product suggestions. AI-driven tools assess skin types and preferences to recommend the best skincare and makeup products. Additionally, virtual try-on apps powered by AI allow customers to see how products would look before making a purchase.

  • Enhanced personalization of beauty products and experiences.
  • Increased customer engagement and satisfaction.
  • AI can provide highly personalized experiences in the beauty industry.
  • Data-driven personalization enhances customer satisfaction and brand loyalty.

30. The Weather Company: AI-Predicting Weather Patterns

Task/Conflict: Accurate weather prediction is vital for planning and safety in various sectors. The Weather Company aimed to enhance the accuracy of weather forecasts and provide timely weather-related information using AI.

Solution: The Weather Company employs AI to analyze data from weather sensors, satellites, and historical weather patterns. AI models improve the accuracy of weather predictions by identifying trends and anomalies. These enhanced forecasts help in better planning and preparedness for weather events, benefiting industries like agriculture, transportation, and public safety.

  • Improved accuracy in weather forecasting.
  • Better preparedness and planning for adverse weather conditions.
  • AI can enhance the precision of meteorological predictions.
  • Accurate weather forecasting is crucial for safety and operational planning in multiple sectors.
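
One of the simplest statistical building blocks behind data-driven forecasting is an autoregressive model fit by least squares to recent readings, sketched below on invented hourly temperatures. The Weather Company's operational models are of course far more sophisticated.

```python
import numpy as np

# Hypothetical hourly temperature readings (degrees C) over the past four days.
rng = np.random.default_rng(3)
hours = np.arange(96)
temps = 15 + 6 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.5, size=96)

# Fit an order-3 autoregressive model: predict each reading from the previous three.
lags = 3
X = np.column_stack([temps[i:len(temps) - lags + i] for i in range(lags)])
y = temps[lags:]
coeffs, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(y))]), y, rcond=None)

# Forecast the next hour from the three most recent readings (plus the intercept term).
next_temp = np.dot(coeffs[:lags], temps[-lags:]) + coeffs[-1]
print(f"Forecast for the next hour: {next_temp:.1f} C")
```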

31. Cisco: Securing Networks with AI

Task/Conflict: As cyber threats evolve and become more sophisticated, maintaining robust network security is crucial for businesses. Cisco aimed to leverage AI to enhance its cybersecurity measures, detecting and responding to threats more efficiently.

Solution: Cisco integrated AI into its cybersecurity framework to analyze network traffic and identify unusual patterns indicative of cyber threats. This AI-driven approach allows for real-time threat detection and automated responses, thus improving the speed and efficacy of security measures.

  • Strengthened network security with faster threat detection.
  • Reduced manual intervention by automating threat responses.
  • AI is essential in modern cybersecurity for real-time threat detection.
  • Automating responses can significantly enhance network security protocols.
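
A minimal sketch of the anomaly-flagging idea, assuming per-minute traffic volume is the only feature: compare each new reading against a rolling baseline and flag large z-scores. It is a teaching example with invented numbers, not Cisco's detection stack.

```python
import numpy as np

# Hypothetical per-minute outbound traffic (MB) from one host; the spike at the end
# mimics data exfiltration. Values and the threshold are invented for illustration.
rng = np.random.default_rng(4)
traffic = np.concatenate([rng.normal(50, 5, 300), [55, 260, 310, 295]])

WINDOW = 60    # minutes of history used as the baseline
Z_LIMIT = 4.0  # how many standard deviations counts as "unusual"

def is_anomalous(history, latest, window=WINDOW, z_limit=Z_LIMIT):
    baseline = history[-window:]
    z = (latest - baseline.mean()) / (baseline.std() + 1e-9)
    return z > z_limit

for t in range(300, len(traffic)):
    if is_anomalous(traffic[:t], traffic[t]):
        print(f"minute {t}: {traffic[t]:.0f} MB flagged for investigation")
```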

32. Adidas: AI in Sports Apparel Manufacturing

Task/Conflict: To maintain competitive advantage in the fast-paced sports apparel market, Adidas sought to innovate its manufacturing processes by incorporating AI to improve efficiency and product quality.

Solution: Adidas employed AI-driven robotics and automation technologies in its factories to streamline the production process. These AI systems optimize manufacturing workflows, enhance quality control, and reduce waste by precisely cutting fabrics and assembling materials according to exact specifications.

  • Increased production efficiency and reduced waste.
  • Enhanced consistency and quality of sports apparel.
  • AI-driven automation can revolutionize manufacturing processes.
  • Precision and efficiency in production lead to higher product quality and sustainability.

Related: How can AI be used in Disaster Management?

33. KLM Royal Dutch Airlines: AI-Enhanced Customer Service

Task/Conflict: Enhancing the customer service experience in the airline industry is crucial for customer satisfaction and loyalty. KLM aimed to provide immediate and effective assistance to its customers by integrating AI into their service channels.

Solution: KLM introduced an AI-powered chatbot, which provides 24/7 customer service across multiple languages. The chatbot handles inquiries about flight statuses, bookings, and baggage policies, offering quick and accurate responses. This AI solution helps manage customer interactions efficiently, especially during high-volume periods.

  • Improved customer service efficiency and responsiveness.
  • Increased customer satisfaction through accessible and timely support.
  • AI chatbots can greatly improve customer service in high-demand industries.
  • Effective communication through AI leads to better customer engagement and loyalty.

34. Novartis: AI in Drug Formulation

Task/Conflict: The pharmaceutical industry requires rapid development and formulation of new drugs to address emerging health challenges. Novartis aimed to use AI to expedite the drug formulation process, making it faster and more efficient.

Solution: Novartis applied AI to simulate and predict how different formulations might behave, speeding up the lab testing phase. AI algorithms analyze vast amounts of data to predict the stability and efficacy of drug formulations, allowing researchers to focus on the most promising candidates.

  • Accelerated drug formulation and reduced time to market.
  • Improved efficacy and stability of pharmaceutical products.
  • AI can significantly shorten the drug development lifecycle.
  • Predictive analytics in pharmaceutical research can lead to more effective treatments.

35. Shell: Optimizing Energy Resources with AI

Task/Conflict: In the energy sector, optimizing exploration and production processes for efficiency and sustainability is crucial. Shell sought to harness AI to enhance its oil and gas operations, making them more efficient and less environmentally impactful.

Solution: Shell implemented AI to analyze geological data and predict drilling outcomes, optimizing resource extraction. AI algorithms also adjust production processes in real time, improving operational efficiency and minimizing waste.

  • Improved efficiency and sustainability in energy production.
  • Reduced environmental impact through optimized resource management.
  • Automation can enhance the effectiveness and sustainability of energy production.
  • Real-time data analysis is crucial for optimizing exploration and production.

36. Procter & Gamble: AI in Consumer Goods Production

Task/Conflict: Maintaining operational efficiency and innovating product development are key challenges in the consumer goods industry. Procter & Gamble (P&G) aimed to integrate AI into their operations to enhance these aspects.

Solution: P&G employs AI to optimize its manufacturing processes and predict market trends for product development. AI-driven data analysis helps in managing supply chains and production lines efficiently, while AI in market research informs new product development, aligning with consumer needs.

  • Enhanced operational efficiency and reduced production costs.
  • Improved product innovation based on consumer data analysis.
  • AI is crucial for optimizing manufacturing and supply chain processes.
  • Data-driven product development leads to more successful market introductions.

Related: Use of AI in the Navy

37. Disney: Creating Magical Experiences with AI

Task/Conflict: Enhancing visitor experiences in theme parks and resorts is a priority for Disney. They aimed to use AI to create personalized and magical experiences for guests, improving satisfaction and engagement.

Solution: Disney utilizes AI to manage park operations, personalize guest interactions, and enhance entertainment offerings. AI algorithms predict visitor traffic and optimize attractions and staff deployment. Personalized recommendations for rides, shows, and dining options enhance the guest experience by leveraging data from past visits and preferences.

  • Enhanced guest satisfaction through personalized experiences.
  • Improved operational efficiency in park management.
  • AI can transform the entertainment and hospitality businesses by personalizing consumer experiences.
  • Efficient management of operations using AI leads to improved customer satisfaction.

38. BMW: Reinventing Mobility with Autonomous Driving

Task/Conflict: The future of mobility heavily relies on the development of safe and efficient autonomous driving technologies. BMW aimed to lead in this field by incorporating AI into its vehicles.

Solution: BMW is advancing its autonomous driving capabilities through AI, using sophisticated machine learning models to process data from vehicle sensors and external environments. This technology enables vehicles to make intelligent driving decisions, improving safety and passenger experiences.

  • Pioneering advancements in autonomous vehicle technology.
  • Enhanced safety and user experience in mobility.
  • AI is crucial for the development of autonomous driving technologies.
  • Safety and reliability are paramount in developing AI-driven vehicles.

39. Mastercard: Innovating Payment Solutions with AI

Task/Conflict: In the digital age, securing online transactions and enhancing payment processing efficiency are critical challenges. Mastercard aimed to leverage AI to address these issues, ensuring secure and seamless payment experiences for users.

Solution: Mastercard integrates AI to monitor transactions in real time, detect fraudulent activities, and enhance the efficiency of payment processing. AI algorithms analyze spending patterns and flag anomalies, while also optimizing authorization processes to reduce false declines and improve user satisfaction.

  • Strengthened security and reduced fraud in transactions.
  • Improved efficiency and user experience in payment processing.
  • AI is essential for securing and streamlining payment systems.
  • Enhanced transaction processing efficiency leads to higher customer satisfaction.

40. AstraZeneca: Revolutionizing Oncology with AI

Task/Conflict: Advancing cancer research and developing effective treatments is a pressing challenge in healthcare. AstraZeneca aimed to utilize AI to revolutionize oncology research, enhancing the development and personalization of cancer treatments.

Solution: AstraZeneca employs AI to analyze genetic data and clinical trial results, identifying potential treatment pathways and personalizing therapies based on individual genetic profiles. This approach accelerates the development of targeted treatments and improves the efficacy of cancer therapies.

  • Accelerated innovation and personalized treatment in oncology.
  • Better survival chances for cancer patients.
  • AI can significantly advance personalized medicine in oncology.
  • Data-driven approaches in healthcare lead to better treatment outcomes and innovations.

Related: How can AI be used in Tennis?

Closing Thoughts

These 40 case studies illustrate the transformative power of AI across various industries. By addressing specific challenges and leveraging AI solutions, companies have achieved remarkable outcomes, from enhancing customer experiences to solving complex scientific problems. The key learnings from these cases underscore AI’s potential to revolutionize industries, improve efficiencies, and open up new possibilities for innovation and growth.


Artificial Intelligence case study

  • Published November 2, 2022


A case study of artificial intelligence.

In a world where more people have a keen interest in artificial intelligence, we want to know what AI looks like in the real world – its threats, challenges, opportunities and solutions to modern-day human problems.

Can artificial intelligence really help humans thrive? And if so, what might be the common downfalls of depending on AI in certain industries?

In this article, we’ll take a look at one artificial intelligence case study to begin forming insight into these compelling questions.

What is artificial intelligence?

Artificial intelligence, or AI, is the theory and development of computer systems that are able to perform tasks normally requiring human intelligence. Some examples of these tasks include speech recognition, visual perception, decision-making, and translation between languages.

In other words, every time you say “Hey Alexa..” you’re using AI. But in what other areas can we find AI in our lives? Take a look at these common examples you’re bound to already be familiar with:

  • Netflix uses AI to determine streaming suggestions based on your viewing history
  • Facebook uses all the data you input on the platform, from the videos you watch to what you say in your status update, to determine which advertisements you might be interested in
  • Universities use essay submission software to determine if work has been plagiarized
  • Google Maps utilizes ongoing satellite imagery to determine the best route for you to take on a given journey

From the above examples, you can see how artificial intelligence is now less a figment of a Sci-Fi novel and more something we commonly interact with in our everyday lives, often without even thinking about it.

But scientists, scholars and innovators are keen to learn more about AI and what it can do on a more complex level.

Human Brain Chips, Elon Musk’s Neuralink – An AI case study

Scholars have long been interested in how the brain works. Neuroscientists in particular have a vested interest in understanding the human brain, what makes it tick, and the causes and solutions to common conditions that limit a person’s use of their brain and bodily functions.

The last two decades have seen significantly increased interest in the realm of neurotechnology. In 2008, a monkey with an implant was able to control a robotic arm to feed itself through activity in the brain, and building on that result, in 2012 the first human brain-controlled robotic arm became a success. In 2017, a paralyzed human was able to control a cursor mentally to type out words and sentences on a computer, and in 2018 that same person was able to use a tablet to browse the web, send emails and play games.

In 2020, Neuralink, a private company founded by billionaire Tesla CEO Elon Musk, introduced further advancements in AI brain technology with a pig named Gertrude.

Gertrude had a wireless device implanted in her brain that was able to monitor a thousand neurons at a time, a significant advancement in neuroscience technology that could become another tool for understanding the brain as well as lead to further technological advances. Prior to this device, only around 300 neurons could be monitored at a time, so this piece of tech was pretty ground-breaking.

From the pig experiment, it became clear to the world that Neuralink was seriously invested in this area of neurotechnology and had the tools and vision to potentially advance AI beyond what it had been capable of up to that point.

“The initial goal of our technology is to help people with paralysis regain independence through the control of computers and mobile devices,” Neuralink states on its website. “Our devices are therefore currently being designed to one day give people the ability to communicate more easily via text or speech synthesis, to follow their curiosity on the web, or to express their creativity through photography, art, or writing apps.”

In April 2021, the company presented another marvel to the world: a live macaque monkey demonstrated its ability to play a video game called MindPong using only its brain activity, thanks to the company’s new N1 implant.

This communication from the brain to the screen was made possible through a small device implanted into the monkey’s brain that essentially translated the primate’s neural activity into actions. In other words, the device could tell the technology what to do based on the signals received from the monkey’s brain. Sounds like science fiction, right?
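
For a rough sense of what “translating brain activity into actions” can mean computationally, the sketch below fits a toy linear decoder that maps simulated firing rates from a handful of neurons to a 2D cursor velocity. This is a generic brain-computer-interface teaching example with made-up numbers, not Neuralink’s method.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate 200 time bins of firing rates from 8 neurons, each tuned to cursor velocity.
true_velocity = rng.normal(0, 1, size=(200, 2))   # actual (x, y) cursor velocity
tuning = rng.normal(0, 1, size=(2, 8))            # how each neuron responds to velocity
firing_rates = true_velocity @ tuning + rng.normal(0, 0.3, size=(200, 8))

# "Calibration": fit a linear decoder from firing rates back to velocity by least squares.
decoder, *_ = np.linalg.lstsq(firing_rates, true_velocity, rcond=None)

# At run time, a new burst of neural activity is decoded into a cursor command.
new_activity = np.array([0.5, -1.2, 0.3, 0.8, -0.1, 1.5, -0.7, 0.2])
velocity_command = new_activity @ decoder
print("decoded cursor velocity (x, y):", np.round(velocity_command, 2))
```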

Neuroscientist Dr Paul Nuyujukian stated that “there was definitely a lot of clever engineering that went into that. To build a device that can transmit 2,048 electrodes’ worth of spiking information over a radio, wirelessly… When you have that many channels, the performance that you should be able to get should eclipse what we’ve been able to do in the academic field.”

On the flip side of this advancement, however, many animal rights activists have called into question the ethics of implanting the device into the brains of animals, and many have petitioned the US government to end Neuralink’s animal testing. The essential question here, perhaps, is this: is it ever okay to experiment on animals to advance the human condition?

Despite the backlash from animal rights activists, the video marked an important milestone in neurotechnology: a single small device capable of receiving and sending brain signals like never before.

The next step for Neuralink is to begin clinical trials in which humans become the experimental subjects. The N1 is currently awaiting FDA approval before it can be tested on humans. If Neuralink is approved for human trials, implanting the device will involve major, invasive neurosurgery that doesn’t come without risk: the patient must have a hole drilled into their skull and the device implanted on the surface of their brain. Infection, bleeding and tissue damage are all common risks of this type of surgery.

If the clinical trials work and the N1 is successful, the potential to improve the lives of patients who suffer from conditions such as Parkinson’s, epilepsy, dementia and even psychiatric diseases is abundantly clear, though not without risk.

Will Neuralink eventually succeed in creating a nation of essentially cyborg humans? Will these advances change human life for the better? Who knows. I suppose we’ll just have to wait and see…

Are you interested in learning more about AI? Check out Immerse Education’s Artificial Intelligence courses for teenagers. Spend your summer meeting like-minded peers, advancing your skills and knowledge in artificial intelligence, and exploring one of the world’s most prestigious universities.


Artificial Intelligence Case Study Topics: Unleashing the Power of AI

Artificial Intelligence (AI) has emerged as one of the most transformative technologies in recent times, revolutionizing industries and reshaping the way we live and work. With its ability to analyze vast amounts of data, learn from patterns, and make autonomous decisions, AI has the potential to solve complex problems and unlock new possibilities. One of the key drivers of AI advancements is the utilization of case studies, which provide real-world examples of AI applications and their impact.

Introduction to AI Case Studies

Case studies serve as invaluable resources in understanding the practical applications of AI. They offer insights into how AI technologies are implemented, the challenges faced, and the outcomes achieved. By examining successful AI case studies, we can gain a deeper understanding of the potential of AI and how it can be harnessed to drive innovation and improve various aspects of our lives.

The Importance of AI Case Studies

AI case studies play a pivotal role in showcasing the capabilities of AI systems and their potential impact. These studies enable researchers, developers, and businesses to learn from past experiences, identify best practices, and avoid potential pitfalls. By studying successful AI case studies, decision-makers can make informed choices when implementing AI solutions, ensuring maximum efficiency and effectiveness.

Purpose of the Blog Post

The purpose of this blog post is to provide an in-depth exploration of artificial intelligence case study topics. We will delve into various industries and domains where AI has made significant strides, examining real-life examples and their impact. By the end of this comprehensive guide, you will have a clear understanding of the potential applications of AI across different sectors and gain insights into how these case studies have transformed industries.

Overview of Artificial Intelligence Case Studies

Before we dive into specific case studies, let's first establish a foundational understanding of AI case studies. These case studies involve the application of AI technologies to address a specific problem or challenge. They provide a detailed account of how AI systems were developed, implemented, and the outcomes achieved.

AI case studies offer a multifaceted perspective, encompassing various industries, including healthcare, finance, manufacturing, customer service, and transportation. Each case study presents a unique set of challenges and opportunities, highlighting the versatility and adaptability of AI in different contexts.

Real-life Examples of Successful AI Case Studies

To truly grasp the potential of AI, it is essential to explore real-life examples of successful AI case studies. These pioneering projects have showcased the transformative power of AI, pushing the boundaries of what was once thought possible. Let's take a glimpse into some notable AI case studies:

1. Google DeepMind's AlphaGo

In 2016, AlphaGo, an AI system developed by Google DeepMind, defeated the world champion Go player, Lee Sedol. This groundbreaking achievement highlighted the ability of AI to master complex strategic games that were previously considered beyond the reach of machines. AlphaGo's success demonstrated the potential of AI in problem-solving and decision-making in complex scenarios.

2. IBM Watson's Jeopardy! Victory

IBM's Watson showcased its cognitive capabilities by competing against human champions on the popular quiz show, Jeopardy! in 2011. Watson's ability to understand and process natural language, coupled with its vast knowledge base, enabled it to outperform the human contestants. This case study demonstrated the potential of AI in understanding and analyzing unstructured data, paving the way for advancements in natural language processing.

3. Tesla's Autopilot System

Tesla's Autopilot system utilizes AI algorithms and sensors to enable semi-autonomous driving. By analyzing real-time data from cameras, radar, and ultrasonic sensors, the Autopilot system can detect and respond to road conditions, other vehicles, and pedestrians. This case study showcases the potential of AI in the transportation industry, revolutionizing the concept of self-driving cars.

4. Amazon's Recommendation Engine

Amazon's recommendation engine is powered by AI algorithms that analyze customer preferences, purchase history, and browsing behavior to provide personalized product recommendations. This case study demonstrates how AI can enhance the customer experience by delivering targeted suggestions, improving sales, and fostering customer loyalty.

These real-life examples are just the tip of the iceberg when it comes to AI case studies. They illustrate the diverse range of industries and domains where AI has made significant contributions, showcasing the potential for innovation and transformation.

In the next section, we will explore the process of selecting artificial intelligence case study topics, considering various factors and identifying the most relevant and impactful areas of study. Stay tuned for an in-depth analysis of AI case studies in healthcare, finance, manufacturing, customer service, and transportation.

Note: In the following sections, we will explore each case study topic in greater detail, analyzing the problem at hand, the AI solution implemented, and the results and impact achieved.

Artificial intelligence (AI) case studies provide valuable insights into the practical applications and impact of AI technologies. These case studies offer a glimpse into the real-world implementation of AI systems, showcasing their capabilities, successes, and challenges. By examining these case studies, we can gain a deeper understanding of the potential of AI and its ability to transform various industries.

Explanation of AI Case Studies

AI case studies involve the application of AI technologies to solve specific problems or challenges within a given context. These studies provide detailed accounts of how AI systems were developed, implemented, and the outcomes achieved. By analyzing the methodologies and approaches used in these case studies, researchers, developers, and businesses can learn from past experiences and gain insights into the best practices for implementing AI solutions.

AI case studies often involve the utilization of machine learning algorithms, natural language processing, computer vision, robotics, and other AI techniques. They can range from small-scale projects to large-scale deployments, depending on the complexity of the problem being addressed.

Benefits of AI Case Studies

AI case studies offer numerous benefits for both researchers and practitioners in the field of AI. Here are some key advantages:

Insights into Implementation: Case studies offer insights into the practical implementation of AI systems. They provide details on the data collection process, model training, algorithm selection, and optimization techniques employed. This information can guide future AI projects and help avoid common pitfalls.

Benchmarking and Comparison: Case studies allow for benchmarking and comparison of different AI approaches. By examining multiple case studies within a specific domain, researchers can identify the strengths and weaknesses of various AI techniques, leading to advancements and improvements in the field.

Inspiration for Innovation: AI case studies can inspire new ideas and innovative solutions. By understanding the challenges faced in previous case studies and the methods used to overcome them, researchers can build upon existing knowledge and push the boundaries of AI capabilities.

To truly comprehend the power and potential of AI, it is essential to explore real-life examples of successful AI case studies. These examples highlight the impact that AI can have across various domains. Let's take a closer look at some notable AI case studies:

Google DeepMind's AlphaGo: AlphaGo, developed by Google DeepMind, made headlines in 2016 when it defeated the world champion Go player, Lee Sedol. This case study demonstrated the ability of AI to master complex strategic games and showcased the potential for AI in decision-making and problem-solving.

IBM Watson's Jeopardy! Victory: In 2011, IBM's Watson competed against human champions on the quiz show Jeopardy! and emerged victorious. Watson's success demonstrated the power of AI in natural language processing and understanding unstructured data.

Tesla's Autopilot System: Tesla's Autopilot system utilizes AI algorithms and sensors to enable semi-autonomous driving. This case study showcases the potential of AI in the transportation industry, revolutionizing the concept of self-driving cars.

Amazon's Recommendation Engine: Amazon's recommendation engine utilizes AI to analyze customer preferences and provide personalized product recommendations. This case study highlights how AI can enhance the customer experience and drive sales through targeted suggestions.

These real-life examples illustrate the diverse range of industries and domains where AI has made significant contributions. They serve as inspiration and provide valuable insights into the potential of AI technologies.

Choosing Artificial Intelligence Case Study Topics

When exploring the world of artificial intelligence case studies, it is essential to select the right topics that align with current AI trends and have the potential for significant impact. In this section, we will discuss the factors to consider when choosing case study topics and identify some promising areas for exploration.

Factors to Consider

Relevance to Current AI Trends: Selecting case study topics that align with current AI trends ensures that you are exploring areas of research and development that are actively advancing. Staying up-to-date with the latest advancements in AI will provide you with a better understanding of the challenges and opportunities in the field.

Availability of Data: Data availability is crucial for successful AI case studies. Consider topics where relevant and high-quality data is accessible. Adequate data sets are essential for training AI models effectively and obtaining reliable results.

Ethical Considerations: Ethical considerations should be an integral part of AI case study topic selection. It is important to choose topics that adhere to ethical guidelines and prioritize fairness, transparency, and accountability. Avoid topics that raise concerns regarding privacy, bias, or potential harm to individuals or society.

Identifying Potential Case Study Topics

Now, let's explore some potential case study topics in various industries where AI has shown promising applications:

Healthcare and Medical Diagnostics: AI has the potential to revolutionize healthcare by improving diagnostics, predicting disease outcomes, and enabling personalized treatment plans. Some potential case study topics in this domain include:

AI in Early Cancer Detection: Explore how AI algorithms can analyze medical imaging data to detect and diagnose cancer at an early stage, leading to improved patient outcomes.

AI in Medical Imaging Analysis: Investigate how AI can assist radiologists in analyzing medical images, such as X-rays, MRIs, and CT scans, to improve accuracy and speed in diagnosis.

Financial Services and Fraud Detection: AI offers significant potential in the finance industry, particularly in fraud detection and prevention. Some potential case study topics in this domain include:

AI in Fraud Detection for Banks: Examine how AI algorithms can analyze transaction data and detect fraudulent activities in real-time, enhancing security and minimizing financial losses.

AI in Credit Card Fraud Detection: Explore how AI can analyze patterns and anomalies in credit card transactions to identify and prevent fraudulent activities, ensuring the safety of customers' financial information.

Manufacturing and Process Optimization: AI can optimize manufacturing processes, improve efficiency, and reduce costs. Some potential case study topics in this domain include:

AI in Predictive Maintenance: Investigate how AI can analyze sensor data to predict machinery failures and schedule maintenance proactively, minimizing downtime and optimizing production.

AI in Supply Chain Optimization: Explore how AI algorithms can optimize supply chain operations by predicting demand, optimizing inventory levels, and improving logistics, leading to cost savings and improved customer satisfaction.

Customer Service and Chatbots: AI-powered chatbots have revolutionized customer service by providing instant responses and personalized experiences. Some potential case study topics in this domain include:

AI-powered Chatbots in E-commerce: Examine how AI-powered chatbots can enhance customer engagement, provide personalized product recommendations, and streamline the online shopping experience.

AI in Virtual Assistants for Customer Support: Explore how AI-based virtual assistants can handle customer inquiries, resolve issues, and provide 24/7 support, improving customer satisfaction and reducing support costs.

Transportation and Autonomous Vehicles: AI plays a critical role in the development of autonomous vehicles and traffic management systems. Some potential case study topics in this domain include:

AI in Self-Driving Cars: Investigate how AI algorithms enable autonomous vehicles to perceive the environment, make real-time decisions, and navigate safely on the roads.

AI in Traffic Management Systems: Explore how AI can optimize traffic flow, reduce congestion, and improve transportation efficiency by analyzing real-time traffic data and implementing intelligent control systems.

By considering these factors and exploring potential case study topics in various industries, you can select areas that align with your interests and have the potential to contribute to the advancement of AI technologies.

Deep Dive into Selected Artificial Intelligence Case Study Topics

In this section, we will delve deeper into selected artificial intelligence case study topics across various industries. By examining these case studies, we can gain a comprehensive understanding of the problem at hand, the AI solutions implemented, and the results and impact achieved.

Healthcare and Medical Diagnostics

Case Study: AI in Early Cancer Detection

Overview of the Problem: Early detection of cancer is crucial for successful treatment and improved patient outcomes. However, it can be challenging for healthcare professionals to accurately detect cancer at its early stages due to the complexity of medical imaging data and the potential for human error.

AI Solution and Implementation: In this case study, AI algorithms were developed and trained using large datasets of medical imaging data, including mammograms, CT scans, or MRIs. These algorithms utilize deep learning techniques to analyze and interpret the images, identifying potential cancerous cells or tumors. By comparing the patterns in the images to an extensive database of known cancer cases, the AI system can provide accurate early detection of cancer.

Results and Impact: The implementation of AI in early cancer detection has shown promising results. The AI system can analyze medical images with high accuracy, often outperforming human radiologists in detecting cancer at its early stages. Early detection allows for timely intervention, leading to improved treatment outcomes and increased survival rates for patients.
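To make the modeling approach concrete, here is a deliberately small sketch of the kind of deep-learning image classifier such systems are built on. Everything in it is an assumption chosen for illustration (the tiny network, the 64x64 grayscale inputs, and the random batch standing in for labelled scans), not the architecture or data of any deployed diagnostic system.

```python
# Illustrative sketch only: a minimal binary image classifier in PyTorch.
# The tiny architecture, the 64x64 grayscale input size, and the synthetic
# batch below are assumptions for demonstration, not a production diagnostic model.
import torch
import torch.nn as nn

class TinyLesionClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single logit: "suspicious" probability after a sigmoid
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinyLesionClassifier()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a synthetic batch of 8 grayscale 64x64 "scans".
images = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss on the synthetic batch: {loss.item():.4f}")
```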

Case Study: AI in Medical Imaging Analysis

Overview of the Problem: Medical imaging, such as X-rays, MRIs, and CT scans, plays a crucial role in diagnosing and monitoring various medical conditions. However, the interpretation of these images can be time-consuming, subjective, and prone to human error.

AI Solution and Implementation: In this case study, AI algorithms were developed and trained using large datasets of labeled medical imaging data. These algorithms leverage deep learning techniques, such as convolutional neural networks (CNNs), to analyze and interpret the images. The AI system can identify anomalies, highlight potential abnormalities, and provide quantitative measurements to assist radiologists in making accurate diagnoses.

Results and Impact: The implementation of AI in medical imaging analysis has shown significant potential in improving diagnostic accuracy and efficiency. The AI system can assist radiologists in identifying subtle abnormalities that may be missed by the human eye, leading to early detection of diseases and improved patient care. Additionally, AI can help reduce the burden on radiologists by automating certain tasks, allowing them to focus on more complex cases.

Financial Services and Fraud Detection

Case Study: AI in Fraud Detection for Banks

Overview of the Problem: Fraudulent activities, such as identity theft and unauthorized transactions, pose significant challenges for banks and financial institutions. Traditional rule-based fraud detection systems often struggle to keep up with evolving fraud techniques and patterns.

AI Solution and Implementation: In this case study, AI algorithms were developed to analyze large volumes of transactional data in real-time. These algorithms utilize machine learning techniques, including anomaly detection and pattern recognition, to identify suspicious activities that deviate from normal patterns. By continuously learning from new data, the AI system can adapt and evolve to detect new and emerging fraud patterns.

Results and Impact: The implementation of AI in fraud detection for banks has led to improved fraud prevention and detection rates. The AI system can analyze vast amounts of transactional data quickly and accurately, flagging potentially fraudulent activities in real-time. By minimizing false positives and identifying fraudulent transactions promptly, banks can mitigate financial losses and protect their customers' assets.
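To illustrate the anomaly-detection idea in a hedged way, the sketch below scores synthetic transactions with scikit-learn's IsolationForest. The three features (amount, hour of day, and distance from the cardholder's home), the injected anomalies, and the contamination rate are all invented for demonstration and do not describe any bank's production model.

```python
# Illustrative sketch: unsupervised anomaly scoring of transactions with
# scikit-learn's IsolationForest. Features and thresholds are invented for the demo.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "normal" transactions: modest amounts, daytime hours, short distances.
normal = np.column_stack([
    rng.gamma(2.0, 30.0, 5000),        # amount in dollars
    rng.normal(14, 4, 5000) % 24,      # hour of day
    rng.exponential(5.0, 5000),        # km from the cardholder's home
])
# A few injected anomalies: large amounts at odd hours, far from home.
anomalies = np.column_stack([
    rng.uniform(2000, 5000, 10),
    rng.uniform(0, 5, 10),
    rng.uniform(500, 3000, 10),
])
X = np.vstack([normal, anomalies])

model = IsolationForest(n_estimators=200, contamination=0.005, random_state=0)
model.fit(X)
flags = model.predict(X)          # -1 = flagged as anomalous, 1 = normal
print("transactions flagged for review:", int((flags == -1).sum()))
```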

Case Study: AI in Credit Card Fraud Detection

Overview of the Problem: Credit card fraud is a significant concern for both financial institutions and cardholders. Detecting fraudulent credit card transactions is challenging due to the large volume of transactions and the need for real-time analysis.

AI Solution and Implementation: In this case study, AI algorithms were developed to analyze credit card transaction data, including transaction amounts, merchant information, and cardholder behavior. These algorithms utilize machine learning techniques, such as supervised and unsupervised learning, to identify patterns and anomalies indicative of fraudulent activities. The AI system can learn from historical data to improve its fraud detection capabilities over time.

Results and Impact: The implementation of AI in credit card fraud detection has proven to be highly effective in reducing fraudulent activities. The AI system can quickly analyze transactions, identify suspicious patterns, and flag potentially fraudulent transactions for further investigation. By minimizing false positives and accurately detecting fraud, financial institutions can protect their customers and maintain trust in the credit card ecosystem.
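The supervised side mentioned above can be sketched just as briefly. The example below trains a class-weighted logistic regression on a synthetic, heavily imbalanced dataset; the five anonymous features, the roughly 0.2% fraud rate, and the artificial separation added to fraud rows are assumptions made only so the demo produces readable metrics.

```python
# Illustrative sketch: a supervised fraud classifier on a synthetic, imbalanced
# dataset. The feature count, the ~0.2% fraud rate, and the artificial separation
# added to fraud rows are assumptions so the demo yields readable metrics.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 50_000
X = rng.normal(size=(n, 5))                      # anonymised transaction features
y = (rng.random(n) < 0.002).astype(int)          # roughly 0.2% labelled as fraud
X[y == 1] += 2.0                                 # shift fraud rows so they are separable

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=1
)
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), digits=3))
```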

In the next section, we will explore case studies in manufacturing and process optimization, showcasing how AI can enhance efficiency and streamline operations.

In this section, we will explore case studies in the domain of manufacturing and process optimization. These examples highlight how artificial intelligence (AI) can enhance efficiency, reduce costs, and streamline operations in manufacturing industries.

Manufacturing and Process Optimization

Case Study: AI in Predictive Maintenance

Overview of the Problem: Unplanned equipment failures and unexpected downtime can significantly impact manufacturing operations, leading to production delays and increased costs. Traditional maintenance strategies, such as reactive or preventive maintenance, may not effectively address the challenges of equipment failure prediction and maintenance scheduling.

AI Solution and Implementation: In this case study, AI algorithms were implemented to perform predictive maintenance. The algorithms utilize machine learning techniques, such as supervised learning and anomaly detection, to analyze sensor data from machines and predict potential failures. By continuously monitoring the health and performance of equipment, the AI system can identify early warning signs of impending failures and schedule maintenance proactively.

Results and Impact: The implementation of AI in predictive maintenance has proven to be highly beneficial for manufacturing industries. By detecting potential equipment failures in advance, companies can plan maintenance activities more efficiently, minimizing downtime and reducing costs associated with unscheduled repairs. This proactive approach to maintenance helps optimize production schedules and ensures smooth operations.
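A minimal version of such a pipeline, using rolling-window sensor statistics and an off-the-shelf classifier, might look like the sketch below. The synthetic degradation pattern, the 24-hour feature window, and the 48-hour failure horizon are illustrative assumptions, and a real deployment would validate on held-out machines rather than scoring the series it was trained on.

```python
# Illustrative sketch: predicting imminent failure from rolling sensor statistics.
# The window length, feature set, and synthetic degradation pattern are assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
hours = 2_000
vibration = rng.normal(1.0, 0.05, hours)
vibration[-200:] += np.linspace(0, 0.8, 200)      # gradual degradation before failure
temperature = rng.normal(60, 1.5, hours)
temperature[-200:] += np.linspace(0, 10, 200)

df = pd.DataFrame({"vibration": vibration, "temperature": temperature})
df["fails_within_48h"] = (df.index >= hours - 48).astype(int)

# Rolling-window features summarising recent machine behaviour.
for col in ["vibration", "temperature"]:
    df[f"{col}_mean_24h"] = df[col].rolling(24, min_periods=1).mean()
    df[f"{col}_std_24h"] = df[col].rolling(24, min_periods=1).std().fillna(0)

features = [c for c in df.columns if c.endswith("_24h")]
model = RandomForestClassifier(n_estimators=100, random_state=2)
model.fit(df[features], df["fails_within_48h"])
df["risk"] = model.predict_proba(df[features])[:, 1]
print(df["risk"].tail(5).round(3))
```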

Case Study: AI in Supply Chain Optimization

Overview of the Problem: Supply chain management involves complex processes, including demand forecasting, inventory management, and logistics planning. Optimizing these processes is crucial for reducing costs, improving customer satisfaction, and increasing operational efficiency.

AI Solution and Implementation: In this case study, AI algorithms were utilized to optimize supply chain operations. The algorithms leverage machine learning techniques, such as demand forecasting, inventory optimization, and route optimization, to analyze historical and real-time data. By considering factors such as customer demand, lead times, transportation costs, and inventory levels, the AI system can generate optimal plans and recommendations for procurement, production, and distribution.

Results and Impact: The implementation of AI in supply chain optimization has led to significant improvements in efficiency and cost reduction. By accurately forecasting demand and optimizing inventory levels, companies can minimize stockouts and excess inventory, leading to reduced carrying costs and improved cash flow. AI-powered route optimization helps streamline logistics operations, optimizing delivery schedules and reducing transportation costs. These advancements in supply chain optimization ultimately lead to improved customer satisfaction through faster and more reliable deliveries.
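Demand forecasting, one building block of this kind of optimization, can be illustrated with a small lag-feature model. The weekly demand series below is synthetic, and the chosen lags and linear model are simplifying assumptions; production systems typically add promotions, pricing, weather, and lead-time signals.

```python
# Illustrative sketch: one-step-ahead demand forecasting with lag features.
# The seasonality, lags, and demand series are synthetic assumptions for the demo.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
weeks = 156
t = np.arange(weeks)
demand = 500 + 3 * t + 80 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 20, weeks)
df = pd.DataFrame({"demand": demand})

# Lag features: demand one week earlier and one year (52 weeks) earlier.
df["lag_1"] = df["demand"].shift(1)
df["lag_52"] = df["demand"].shift(52)
df = df.dropna()

train, test = df.iloc[:-12], df.iloc[-12:]
model = LinearRegression().fit(train[["lag_1", "lag_52"]], train["demand"])
forecast = model.predict(test[["lag_1", "lag_52"]])
mape = np.mean(np.abs(forecast - test["demand"]) / test["demand"]) * 100
print(f"one-step-ahead MAPE over the last 12 weeks: {mape:.1f}%")
```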

These case studies highlight the potential impact of AI in manufacturing and process optimization. By leveraging AI technologies, companies can achieve greater efficiency, reduced costs, and improved operational effectiveness. In the next section, we will explore case studies in the domain of customer service and chatbots, showcasing how AI can enhance customer experiences and support interactions.

In this section, we will explore case studies in the domain of customer service and chatbots. These examples highlight how artificial intelligence (AI) can enhance customer experiences, streamline support interactions, and improve overall customer satisfaction.

Customer Service and Chatbots

Case Study: AI-powered Chatbots in E-commerce

Overview of the Problem: With the rise of e-commerce, providing personalized and timely customer support has become a crucial aspect of the online shopping experience. However, scaling customer service to meet the growing demands of a large customer base can be challenging and costly.

AI Solution and Implementation: In this case study, AI-powered chatbots were implemented to handle customer inquiries and provide support in e-commerce platforms. These chatbots utilize natural language processing (NLP) and machine learning algorithms to understand and respond to customer queries. They can provide instant and personalized responses, offer product recommendations based on customer preferences, and assist with order tracking and returns.

Results and Impact: The implementation of AI-powered chatbots in e-commerce has significantly improved customer experiences and operational efficiency. Chatbots provide instant responses, reducing customer wait times and ensuring 24/7 availability for support inquiries. By offering personalized product recommendations, chatbots can enhance the shopping experience and increase sales conversion rates. Additionally, chatbots can handle routine inquiries, freeing up human agents to focus on more complex customer issues, ultimately improving overall customer satisfaction.
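At the heart of many such chatbots sits an intent classifier that routes each message to the right workflow. The sketch below shows one minimal way to build that piece; the intents, example utterances, and TF-IDF-plus-logistic-regression pipeline are illustrative assumptions, and commercial bots rely on far larger labelled corpora or pretrained language models.

```python
# Illustrative sketch: routing customer messages to intents with TF-IDF features
# and logistic regression. The intents and utterances are invented for the demo.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_utterances = [
    ("where is my order", "order_tracking"),
    ("track my package", "order_tracking"),
    ("has my parcel shipped yet", "order_tracking"),
    ("i want to return this item", "returns"),
    ("how do i send a product back", "returns"),
    ("refund for a damaged item", "returns"),
    ("recommend a laptop under 800 dollars", "product_recommendation"),
    ("which running shoes do you suggest", "product_recommendation"),
    ("what phone should i buy", "product_recommendation"),
]
texts, intents = zip(*training_utterances)

pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
pipeline.fit(texts, intents)

# Predicted intents for two unseen customer messages.
print(pipeline.predict(["when will my package arrive", "i would like a refund"]))
```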

Case Study: AI in Virtual Assistants for Customer Support

Overview of the Problem: Customer support departments often face high call volumes and long wait times, leading to customer frustration and decreased satisfaction. Providing timely and effective support to customers is critical for maintaining brand loyalty and positive customer experiences.

AI Solution and Implementation: In this case study, AI-powered virtual assistants were implemented to handle customer support interactions. These virtual assistants utilize AI technologies such as natural language processing, sentiment analysis, and knowledge graph systems. They can understand customer inquiries, provide accurate and personalized responses, and escalate complex issues to human agents when necessary. Virtual assistants continuously learn from customer interactions, improving their responses and problem-solving abilities over time.

Results and Impact: The implementation of AI-powered virtual assistants in customer support has proven to be highly effective in improving response times and customer satisfaction. Virtual assistants can provide instant support, reducing wait times and enabling customers to receive assistance at their convenience. By accurately understanding customer inquiries and providing relevant information, virtual assistants can resolve issues quickly and efficiently. This results in improved customer experiences, reduced support costs, and increased customer loyalty.

These case studies illustrate the potential of AI in enhancing customer service and support interactions. By leveraging AI-powered chatbots and virtual assistants, businesses can provide timely, personalized, and efficient support to their customers, resulting in improved customer satisfaction and loyalty. In the next section, we will explore case studies in the domain of transportation and autonomous vehicles, showcasing how AI is revolutionizing the way we travel and manage traffic.

In this section, we will explore case studies in the domain of transportation and autonomous vehicles. These examples highlight how artificial intelligence (AI) is revolutionizing the way we travel and manage traffic.

Transportation and Autonomous Vehicles

Case Study: AI in Self-Driving Cars

Overview of the Problem: Self-driving cars have the potential to transform the transportation industry by reducing accidents, improving traffic flow, and enhancing overall mobility. However, developing autonomous vehicles that can navigate safely and make real-time decisions in complex traffic scenarios is a significant challenge.

AI Solution and Implementation: In this case study, AI algorithms are utilized to power self-driving cars. These algorithms leverage a combination of computer vision, sensor fusion, machine learning, and decision-making models to perceive the environment, interpret traffic signs, detect obstacles, and make real-time driving decisions. By continuously analyzing sensor data and learning from past experiences, self-driving cars can navigate autonomously while adhering to traffic rules and ensuring passenger safety.

Results and Impact: The implementation of AI in self-driving cars has the potential to revolutionize transportation. Autonomous vehicles can reduce human errors and improve road safety by eliminating the risks associated with human distraction, fatigue, and impaired driving. Additionally, self-driving cars have the potential to optimize traffic flow, reduce congestion, and increase overall transportation efficiency, leading to reduced travel times and fuel consumption.

Case Study: AI in Traffic Management Systems

Overview of the Problem: Managing traffic flow in urban areas is a complex task that requires real-time analysis of traffic patterns, congestion, and accidents. Traditional traffic management systems often struggle to handle the dynamic nature of traffic and effectively optimize traffic flow.

AI Solution and Implementation: In this case study, AI algorithms are used to enhance traffic management systems. These algorithms leverage machine learning techniques and real-time data analysis to predict traffic congestion, optimize signal timings, and suggest alternative routes. By analyzing historical and real-time traffic data, the AI system can make intelligent decisions to improve traffic flow, reduce congestion, and minimize travel times.

Results and Impact: The implementation of AI in traffic management systems has shown significant potential in improving transportation efficiency. By optimizing signal timings based on real-time traffic conditions, AI can reduce congestion and ensure a smoother flow of vehicles. AI algorithms can also provide real-time traffic updates to drivers, enabling them to make informed decisions about alternative routes, further reducing travel times and improving overall traffic management.
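The core idea behind adaptive signal timing can be conveyed with a deliberately simple heuristic: give each approach a share of the cycle's green time proportional to its detected queue. The function below is only a toy rule built on that assumption; real AI-based systems use techniques such as reinforcement learning or model-predictive control over network-wide sensor feeds.

```python
# Illustrative sketch: allocate green time across a junction's approaches in
# proportion to observed queue lengths. A toy rule, not a deployed control system.
def allocate_green_time(queues, cycle_seconds=120, min_green=10):
    """Split one signal cycle across approaches, proportional to queue length."""
    approaches = list(queues)
    # Reserve the minimum green for every approach, then share the remainder.
    remaining = cycle_seconds - min_green * len(approaches)
    total_queue = sum(queues.values()) or 1
    return {
        a: round(min_green + remaining * queues[a] / total_queue)
        for a in approaches
    }

# Queue lengths (vehicles) detected by cameras or loop sensors at one junction.
observed = {"north": 24, "south": 18, "east": 6, "west": 2}
print(allocate_green_time(observed))
```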

These case studies highlight how AI is transforming the transportation industry. From self-driving cars to intelligent traffic management systems, AI technologies have the potential to revolutionize the way we travel, making transportation safer, more efficient, and environmentally friendly.

In this comprehensive guide, we have explored various artificial intelligence case study topics across different industries. We have witnessed the power of AI in healthcare, finance, manufacturing, customer service, and transportation. By examining real-life examples and understanding the problem-solving capabilities of AI, we have gained insights into the potential of this transformative technology.

AI case studies provide invaluable lessons and inspire innovation in the field of artificial intelligence. They offer opportunities for learning, benchmarking, and improving AI systems. By studying successful case studies, researchers, developers, and businesses can harness the power of AI to drive advancements, solve complex problems, and improve various aspects of our lives.

As AI continues to evolve, it is crucial to stay updated with the latest trends, research, and case studies. The potential of AI is immense, and by exploring and sharing knowledge, we can collectively shape a future where AI-driven solutions enhance our lives in remarkable ways.

Adrian Kennedy is an operator, author, entrepreneur, and investor.

The present and future of AI

Finale Doshi-Velez on how AI is shaping our lives and how we can shape AI


Finale Doshi-Velez, the John L. Loeb Professor of Engineering and Applied Sciences. (Photo courtesy of Eliza Grinnell/Harvard SEAS)

How has artificial intelligence changed and shaped our world over the last five years? How will AI continue to impact our lives in the coming years? Those were the questions addressed in the most recent report from the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted at Stanford University that will study the status of AI technology and its impacts on the world over the next 100 years.

The 2021 report is the second in a series that will be released every five years until 2116. Titled “Gathering Strength, Gathering Storms,” the report explores the various ways AI is increasingly touching people's lives in settings that range from movie recommendations and voice assistants to autonomous driving and automated medical diagnoses.

Barbara Grosz, the Higgins Research Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is a member of the standing committee overseeing the AI100 project, and Finale Doshi-Velez, Gordon McKay Professor of Computer Science, is part of the panel of interdisciplinary researchers who wrote this year's report.

We spoke with Doshi-Velez about the report, what it says about the role AI is currently playing in our lives, and how it will change in the future.  

Q: Let's start with a snapshot: What is the current state of AI and its potential?

Doshi-Velez: Some of the biggest changes in the last five years have been how well AIs now perform in large data regimes on specific types of tasks.  We've seen [DeepMind’s] AlphaZero become the best Go player entirely through self-play, and everyday uses of AI such as grammar checks and autocomplete, automatic personal photo organization and search, and speech recognition become commonplace for large numbers of people.  

In terms of potential, I'm most excited about AIs that might augment and assist people.  They can be used to drive insights in drug discovery, help with decision making such as identifying a menu of likely treatment options for patients, and provide basic assistance, such as lane keeping while driving or text-to-speech based on images from a phone for the visually impaired.  In many situations, people and AIs have complementary strengths. I think we're getting closer to unlocking the potential of people and AI teams.


Q: Over the course of 100 years, these reports will tell the story of AI and its evolving role in society. Even though there have only been two reports, what's the story so far?

There's actually a lot of change even in five years.  The first report is fairly rosy.  For example, it mentions how algorithmic risk assessments may mitigate the human biases of judges.  The second has a much more mixed view.  I think this comes from the fact that as AI tools have come into the mainstream — both in higher stakes and everyday settings — we are appropriately much less willing to tolerate flaws, especially discriminatory ones. There's also been questions of information and disinformation control as people get their news, social media, and entertainment via searches and rankings personalized to them. So, there's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: What is the responsibility of institutes of higher education in preparing students and the next generation of computer scientists for the future of AI and its impact on society?

First, I'll say that the need to understand the basics of AI and data science starts much earlier than higher education!  Children are being exposed to AIs as soon as they click on videos on YouTube or browse photo albums. They need to understand aspects of AI such as how their actions affect future recommendations.

But for computer science students in college, I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc.  I'm really excited that Harvard has the Embedded EthiCS program to provide some of this education.  Of course, this is an addition to standard good engineering practices like building robust models, validating them, and so forth, which is all a bit harder with AI.


Q: Your work focuses on machine learning with applications to healthcare, which is also an area of focus of this report. What is the state of AI in healthcare? 

A lot of AI in healthcare has been on the business end, used for optimizing billing, scheduling surgeries, that sort of thing.  When it comes to AI for better patient care, which is what we usually think about, there are few legal, regulatory, and financial incentives to do so, and many disincentives. Still, there's been slow but steady integration of AI-based tools, often in the form of risk scoring and alert systems.

In the near future, two applications that I'm really excited about are triage in low-resource settings — having AIs do initial reads of pathology slides, for example, if there are not enough pathologists, or get an initial check of whether a mole looks suspicious — and ways in which AIs can help identify promising treatment options for discussion with a clinician team and patient.

Q: Any predictions for the next report?

I'll be keen to see where currently nascent AI regulation initiatives have gotten to. Accountability is such a difficult question in AI; it's tricky to nurture both innovation and basic protections. Perhaps the most important innovation will be in approaches for AI accountability.


Top 11 case studies of artificial intelligence in manufacturing

Welcome to Industry 4.0, also known as the Fourth Industrial Revolution or 4IR: the latest phase in the digital transformation of manufacturing. This time around, the big changes are driven by analytics, automation, and human-machine interaction.

Steam power and mechanization provided the impetus for the first Industrial Revolution, the second was characterized by the dominance of electricity, and the third hinged on early automation and computing. Now, the Fourth Industrial Revolution is being shaped by cyber-physical systems with intelligent computational capabilities, and one of the key disruptive technologies reshaping the value chain is artificial intelligence (AI).

In this article, we'll explore how AI is used in manufacturing. We'll then analyze the most promising AI use cases in manufacturing at 11 leading global manufacturers.

What are examples of AI use cases in manufacturing?

The potential of AI and machine learning algorithms in manufacturing is only beginning to unfold. Beyond their established roles in robotics and automation, AI in manufacturing is now making its mark in broader areas.

Although covering all the AI use cases in manufacturing would go beyond the scope of this blog, let’s delve into the five most impactful ones. These serve as excellent starting points for manufacturers to direct their efforts.

Supply chain management

By harnessing the power of AI and ML in manufacturing, companies are transforming their supply chain strategies for enhanced efficiency, precision, and cost-effectiveness.

AI in the supply chain involves predictive analytics, intelligent inventory management, refined demand forecasting, and optimized logistics. AI analyzes factors such as transportation costs, production capacity, and lead times to optimize the supply chain. This results in a streamlined order fulfillment system that guarantees timely deliveries, reduced transportation expenses and heightened customer satisfaction.

Predictive maintenance

This use case of AI in manufacturing empowers companies to anticipate equipment breakdowns before they happen, helping them minimize downtime and optimize maintenance schedules.

A pivotal component of predictive maintenance is the digital twin: an online replica of a physical asset that captures real-time data and mimics the asset's behavior in a virtual setting.

By merging this digital twin with sensor data from actual machinery, AI in manufacturing can:

  • Study patterns
  • Spot anomalies
  • Anticipate potential malfunctions

The power of AI-based predictive maintenance in the automotive sector can be seen in the example of a leading manufacturer, Ford.

Here, distinct digital twins are created for each vehicle model. Each oversees a different production stage—from conception to assembly to operation. Notably, it can precisely pinpoint energy wastage. It also suggests energy-saving opportunities, boosting overall production line performance.

Source: Capgemini
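One way to picture how a digital twin feeds predictive maintenance is residual monitoring: compare live sensor readings with the twin's expected values and raise an alert when the gap drifts. In the sketch below the "twin" is a trivial stand-in (expected motor temperature as a linear function of load) and the readings are synthetic, so every number is an assumption for illustration rather than a description of Ford's actual system.

```python
# Illustrative sketch: flag drift between live readings and a digital twin's
# expectations. The "twin" here is a trivial stand-in model, not a real simulation.
import numpy as np

def twin_expected_temperature(load_pct):
    """Stand-in 'digital twin': expected motor temperature at a given load."""
    return 40.0 + 0.45 * load_pct

rng = np.random.default_rng(4)
load = rng.uniform(20, 95, 500)                      # % load over 500 readings
measured = twin_expected_temperature(load) + rng.normal(0, 1.0, 500)
measured[-30:] += np.linspace(0, 8, 30)              # simulated fault: rising heat

residual = measured - twin_expected_temperature(load)
threshold = 3 * residual[:-30].std()                 # alert when residual drifts
alerts = np.where(np.abs(residual) > threshold)[0]
print(f"readings outside tolerance: {len(alerts)} (first at index {alerts[0]})")
```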

Product quality inspection

Just as recognizing subtle trends can help predict equipment glitches, looking into process details can proactively prevent quality concerns. AI streamlines defect detection by employing intelligent vision systems and video analytics technology. This adept vision system identifies misaligned, missing, or incorrect components with minimal room for human error.

The wide availability of computer vision-based cameras and advanced image recognition has made real-time checks during production much more affordable. This application gives manufacturers a practical way to meet strict industry regulations, especially in fields like automobiles and consumer goods. It proves valuable for maintaining product standards and compliance, which is essential to avoid problems like fines, legal actions, and unhappy customers.
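A drastically simplified stand-in for vision-based inspection is differencing a captured frame against a "golden" reference image of a good part, as sketched below. The array sizes, threshold, and injected defect are assumptions for illustration; the systems described in this article rely on trained deep-learning models precisely because naive pixel differencing is fragile to lighting and alignment changes.

```python
# Illustrative sketch: flag assembly defects by differencing a captured frame
# against a "golden" reference image. Shapes and threshold are demo assumptions.
import numpy as np

rng = np.random.default_rng(5)
golden = rng.uniform(0.4, 0.6, size=(128, 128))        # reference image of a good part
captured = golden + rng.normal(0, 0.01, golden.shape)  # slight sensor noise
captured[60:70, 80:95] = 0.95                          # bright blob: missing component

diff = np.abs(captured - golden)
defect_mask = diff > 0.1                               # pixels deviating from reference
defect_ratio = defect_mask.mean()
verdict = "REJECT" if defect_ratio > 0.001 else "PASS"
print(f"deviating pixels: {defect_ratio:.2%} -> {verdict}")
```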

Demand forecasting

Enterprises are partnering with AI companies to leverage ML and anticipate shifts in consumer demand with greater accuracy. This equips them to foresee changes in demand and align their production strategies accordingly, thereby mitigating the risks of running out of stock or holding excess inventory.

Case in point—L’Oréal. The global beauty products leader leverages diverse data sources such as social media insights, Point-of-Sale data, and weather patterns to forecast shifts in customer preferences, predict trends, and optimize sales strategies.

Product innovation

With the integration of AI in manufacturing, companies are embracing more efficient workflows and redefining product development. AI’s standout advantage is its remarkable capacity to swiftly analyze vast data from market trends, customer preferences, and competitive landscapes. This informed approach aids decision-making and crafting products that precisely resonate with market demands.

AI technology captures and monitors design data, empowering engineers to create inventive product designs, shorten testing periods, and gain deeper insights into customer preferences.

Generative AI design software is another impactful AI application. Engineers input parameters and goals, and AI generates multiple design options, expediting design iterations for innovative products.

This results in data-driven decision-making, faster design cycles, and the ability to create products that fit market needs.

How the top 11 companies harness AI for manufacturing excellence

BMW Group

BMW Group uses AI across its operations, from production to customer experience. It is further embracing AI for manufacturing, enhancing efficiency in its Spartanburg plant. The South Carolina plant produces over 1,500 vehicles daily.

Robots with AI manage the intricate task of welding hundreds of metal studs onto SUV frames, ensuring precision down to the last detail. Even better, if any mistake happens, AI steps in to rectify it. This has saved over $1 million annually.

The technology doesn’t stop on the floor – AI aids inspections, too. Cameras identify issues, making the process faster and smarter.

General Motors

GM is making smart use of industrial AI in its manufacturing processes. They’re tapping into the images captured by cameras on assembly robots to detect potential problems with the robots themselves. The proactive approach prevented unplanned outages.

Developed with one of GM's suppliers, this system analyzes the images to identify signs of failing robotic components. In a successful test run, it caught 72 instances of component failure across 7,000 robots.

And here's why it's a big deal: just one minute of assembly-line stoppage can cost GM around $20,000. GM is also embracing AI-based generative design technology to design the next generation of lighter vehicles. This innovation holds the key to crafting efficient, lighter alternatives for greener vehicles with zero emissions.

By tapping into it, GM engineers can swiftly explore numerous high-performance design choices ready for production. Since 2016, GM has rolled out 14 new vehicle models, slashing an impressive 350 pounds per vehicle. Based on recent reports, GM is also working to integrate a driver-facing vehicle assistant that uses the AI models behind ChatGPT.

Nissan Motor Corporation

Nissan Motor Corporation’s ‘Intelligent Factory’ leverages AI, IoT, and robotics to produce next-gen vehicles, while maintaining a zero-emission production system. It streamlines operations like the simultaneous underfloor mounting process. The process previously required six manual steps for components like the battery, motor, and rear suspension. Now, it is executed seamlessly in a single step, thanks to robotic assistance.

The entire production process in this intelligent factory will be automated, from fastening and alignment of suspension links to headliner installation, cockpit module integration, motor winding, and paint inspection.

The production line also incorporates AI-based quality assurance, remote equipment diagnosis, and maintenance solutions. Nissan has also created AI design tools to predict the aerodynamic performance of the new designs. By learning from vast data, AI has significantly reduced simulation durations from days to seconds.

Danone

Danone, the multinational food products leader, is tapping into the power of machine learning to revolutionize its manufacturing processes. It uses ML to predict shifts in demand variability and enhance planning. By harnessing this new capability, Danone has witnessed remarkable improvements in its forecasting procedures, leading to seamless coordination between departments like marketing and sales. And it's working: its predictions are now 20% more accurate, and it is losing 30% fewer sales. This change has improved everything from marketing, sales, and account management to the supply chain and finance.

This upgrade has translated into enhanced efficiency and optimized inventory management, particularly for the supply chain. It has led to a 30% reduction in product obsolescence, a 50% decrease in the workload of demand planners, and a 30% reduction in lost sales.

Airbus

Airbus relies on AI across its operations, including manufacturing, quality checks, and the supply chain. In the manufacturing domain, Airbus has built particular expertise in asset maintenance, monitoring critical data from machine sensors, such as temperature and pressure, drawn from parameters that directly influence machine performance.

The predictive software serves as an early-warning mechanism, enabling Airbus to swiftly halt machines, thereby preventing time and financial resource wastage. They have embraced ML to monitor supplier lead times. Through this, the company has effectively established buffers to guarantee the availability of parts, consequently streamlining assembly lead times. Airbus has also demonstrated manufacturing AI use cases to boost quality. AI-powered defect detection processes empower the company to identify issues early, effectively mitigating potential disruptions in aircraft production. And the outcomes are impressive – they’ve cut lead times by 20% and reduced missing parts by four units.

Intel

As one of the leading silicon manufacturers, Intel has honed its high-value AI strategy in its semiconductor manufacturing. They prioritize AI use cases in manufacturing that offer clear business benefits, practical feasibility, and swift value realization.

Over the past two decades, Intel has successfully implemented various manufacturing AI solutions, deploying thousands of AI models at scale. Their AI solutions cover various analytical stages, from in-line defect detection to advanced process control. Intel’s scaled manufacturing AI solutions have not only delivered substantial financial gains but also sped up manufacturing processes, leading to increased yields and productivity.

Bridgestone Corp.

Bridgestone’s AI in manufacturing case study showcases how AI can reshape manufacturing by fostering meticulous quality control and boosting performance standards. It has launched a groundbreaking tire-building and molding system, called “Examation”. It leverages AI in manufacturing to enhance tire quality, productivity, and consistency.

This AI-driven tool steers the production process in real-time, ensuring every component is assembled under optimal conditions. Additionally, it seamlessly integrates data generated during the tire-building process into the overall factory operations, pivotal in elevating the plant’s process capabilities. The outcome is high-precision manufacturing, with a remarkable 15% enhancement in uniformity compared to traditional methods.

Frito-Lay

ML is revolutionizing operations at Frito-Lay, the subsidiary of PepsiCo. The company fires lasers at chips and analyzes the resulting sounds to determine texture, automating chip quality checks. Building on this, the company identified more AI use cases in manufacturing within the factory.

When paired with a vision system, a machine learning model predicts potato weights as they’re processed. This move saved the company a significant amount by eliminating the need for expensive weighing elements. Another ongoing project aims to assess the “percent peel” of a potato post-peeling. This data helps optimize the peeling system, potentially saving over $1 million annually for the company in the United States alone.

Kellogg's

Kellogg's has fully embraced the potential of AI across operations, from enhancing supply chain efficiency to crafting optimal flavor combinations for new products.

On the supply chain front, Kellogg's leverages AI to ensure timely and cost-effective delivery of materials and products. The technology continually examines various data sources related to demand signals. When disruptions, or patterns that could lead to them, are detected, the system suggests strategies to avert these challenges.

Kellogg’s AI endeavors are firmly rooted in practicality, focusing on real business challenges and marketplace needs. This ensures a direct impact on business performance and resource optimization. The outcomes speak for themselves – Kellogg’s AI integration has led to reduced waste in the supply chain and a noticeable boost in sales.

Flex

Flex, a global electronics manufacturer, creates printed circuit boards (PCBs) that are pivotal in electronic devices. These boards require careful quality inspection, but traditional human inspection struggled to keep pace as demand grew.

To counter this, Flex adopted an AI/ML-powered defect detection system. This innovation employs deep neural networks to spot defects that escape conventional vision systems and human scrutiny. This technology overhaul streamlined inspections, boosting efficiency by over 30% and elevating product yield by an impressive 97%. This shift also optimally utilized factory floor space by retiring legacy inspection setups, paving the way for other lines and solutions.

Kraft Heinz

Kraft Heinz, a major global food company, has embraced AI to make its manufacturing more efficient and to enhance its product development processes. By utilizing AI-powered tools, it identifies waste and inefficiencies in manufacturing and supply chains, leading to optimization. The technology analyzes various factors to optimize processes, swiftly evaluating production and operational aspects that highlight areas for improvement.

For instance, they use AI to optimize the replenishment of tomato paste, taking into account supplier performance scores and predictive analytics. This data helps them receive higher-quality goods, which reduces the need for costly fillers to maintain product quality. Another AI use case in manufacturing involves condition-based maintenance. Sensors on production lines detect vibrations and send data to an external analyzer. This helps predict potential failures, allowing maintenance to be planned during regular windows instead of risking expensive unplanned downtime.

In the era of Industry 4.0, AI use cases are reshaping manufacturing. These 11 AI manufacturing case studies showcase how AI enhances efficiency, boosts quality, and revolutionizes processes. From predictive maintenance to supply chain optimization, AI’s impact drives the industry toward a smarter, more innovative future.

Carl Torrence is a Content Marketer. His core expertise lies in developing data-driven content for brands, SaaS businesses, and agencies. In his free time, he enjoys binge-watching time-travel movies and listening to Linkin Park and Coldplay albums.


Artificial Intelligence for Hospital Health Care: Application Cases and Answers to Challenges in European Hospitals

Matthias Klumpp, Marcus Hintze, Milla Immonen, Francisco Ródenas-Rigla, Francesco Pilati, Fernando Aparicio-Martínez, Dilay Çelebi, Thomas Liebig, Mats Jirstrand, Oliver Urbann, Marja Hedman, Jukka A. Lipponen, Silvio Bicciato, Anda-Petronela Radan, Bernardo Valdivieso, Wolfgang Thronicke, Dimitrios Gunopulos, and Ricard Delgado-Gonzalo

Author affiliations:

1 Fraunhofer Institute for Material Flow and Logistics (IML), Josef-von-Fraunhofer-Str. 2-4, 44227 Dortmund, Germany
2 Department of Business Administration, Georg-August-University of Göttingen, Platz der Göttinger Sieben 3, 37073 Göttingen, Germany
3 VTT Technical Research Centre of Finland Ltd., Kaitoväylä 1, 90571 Oulu, Finland
4 Polibienestar Research Institute, University of Valencia, Carrer del Serpis 29, 46022 València, Spain
5 Department of Industrial Engineering, University of Trento, Via Sommarive 9, 38123 Trento, Italy
6 NUNSYS S.L., Calle Gustave Eiffel 3, 46980 Valencia, Spain
7 Department of Management Engineering, Istanbul Technical University, Macka, Beşiktaş, 34367 İstanbul, Turkey
8 TU Dortmund, Artificial Intelligence Unit, Otto-Hahn-Straße 12, 44221 Dortmund, Germany
9 Materna Information & Communications SE, Artificial Intelligence Unit, Voßkuhle 37, 44141 Dortmund, Germany
10 Fraunhofer-Chalmers Centre & Fraunhofer Center for Machine Learning, Chalmers Science Park, 41288 Gothenburg, Sweden
11 Heart Center, Kuopio University Hospital and Institute of Clinical Medicine, University of Eastern Finland, 70029 Kuopio, Finland
12 Department of Applied Physics, University of Eastern Finland, Yliopistonranta 1, 70210 Kuopio, Finland
13 Interdepartmental Center for Stem Cells and Regenerative Medicine (CIDSTEM), Department of Life Sciences, University of Modena and Reggio Emilia, Via Gottardi 100, 41125 Modena, Italy
14 Department of Obstetrics and Gynecology, University Hospital of Bern, Murtenstraße 11, 3008 Bern, Switzerland
15 La Fe University Hospital Valencia, Avinguda de Fernando Abril Martorell 106, 46026 València, Spain
16 ATOS Information Technology GmbH, Fürstenallee 11, 33102 Paderborn, Germany
17 Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Panepistimioupolis, Ilisia, 15784 Athens, Greece
18 Centre Suisse d’Électronique et de Microtechnique CSEM, Jaquet Droz 1, 2002 Neuchâtel, Switzerland

Associated Data

Not applicable.

The development and implementation of artificial intelligence (AI) applications in health care contexts is a concurrent research and management question. Especially for hospitals, the expectations regarding improved efficiency and effectiveness by the introduction of novel AI applications are huge. However, experiences with real-life AI use cases are still scarce. As a first step towards structuring and comparing such experiences, this paper is presenting a comparative approach from nine European hospitals and eleven different use cases with possible application areas and benefits of hospital AI technologies. This is structured as a current review and opinion article from a diverse range of researchers and health care professionals. This contributes to important improvement options also for pandemic crises challenges, e.g., the current COVID-19 situation. The expected advantages as well as challenges regarding data protection, privacy, or human acceptance are reported. Altogether, the diversity of application cases is a core characteristic of AI applications in hospitals, and this requires a specific approach for successful implementation in the health care sector. This can include specialized solutions for hospitals regarding human–computer interaction, data management, and communication in AI implementation projects.

1. Introduction

Research into applications of artificial intelligence (AI) in health care and within hospitals is a crucial area of innovation [ 1 ]. Smart health care with the support of AI technologies, such as Machine Learning (ML), is needed due to specific challenges in the provision of medical support in European countries as well as in the rest of the world. It is not only the outbreak of the COVID-19 pandemic that reveals the current problems and challenges facing European hospitals. The success in the science of medicine in the last decades has had the effect of patients becoming older, frailer, and multi-morbid due to a longer lifetime expectation [ 2 ].

This is accompanied by the fact that medical care and diseases are becoming increasingly complex. Due to this medical complexity, medical personnel are becoming more and more specialized, which cannot in general be fully provided for by smaller hospitals in rural areas. Added to this is the demographic change already emerging in Europe, e.g., the population of over 80-year-olds in the EU27 will double from 6.1% in 2020 to 12.5% in 2060 [ 3 ]. Hence, more older people with their specific health problems will use the health care system. In contrast to this, the number of young well-trained medical personnel is currently decreasing and a shortage of skilled personnel, such as doctors and nurses, is already emerging in many European nations [ 4 ].

The challenges of the simultaneous increase of older and multi-morbid patients with complex diseases and the shortage of skilled personnel are also hampered by the increasing economic constraints on hospitals. An increase in chronic diseases due to aging populations and shortage of medical specialists results in resource scarcity and medical sustainability challenges. In order not to endanger the living and health standards of the European nations it will be necessary to develop applied AI-solutions to relieve the burden of increased workload as well as being instrumental to deliver efficient, effective, and high-quality health care.

Adaptability and agility at hospitals are major prerequisites in this context, and narrowing the application of AI to optimization solely does miss the point in many cases. By opening a wider range of actionable options, from personalized medical diagnosis and treatment to choices in care, sourcing, and logistics areas, AI applications will provide more important support avenues than efficiency enhancements only [ 5 , 6 ]. In addition, multiple benefits regarding the ongoing COVID-19 pandemic can also be expected and should be further explored, especially regarding data analysis and preventing unnecessary patient contact for health care personnel in hospitals as centres of the fight against the viral disease [ 7 ].

AI can also contribute to the fight against pandemics such as COVID-19, helping hospitals focus resources on pandemic patients' treatments in the current as well as possible future situations. In this sense, most AI applications are directed at contactless analysis, diagnosis, and treatment (e.g., self-treatment and prevention), reducing the number of personal contacts and hospital visits, therefore reducing the potential spread of COVID-19 and other viral pandemics. AI in particular offers great potential for improving medical care and supporting the medical staff. The state of the art and the challenges regarding AI applications in hospitals and the health care sector are described for specific application areas in Figure 1.

Figure 1. Interrelation structure of AI application areas for AI in hospitals.

With regards to the introduction of AI applications in hospitals, two specific questions arise, with the answers to them as the central contributions of this paper: First, what are the requirements and hospital setups for AI applications? To this end, the authors carried out a survey of different European hospitals and identified relevant projects in this field. As a result, the main fields of application of AI for hospitals are found to be care, diagnosis, and logistics. The hospitals surveyed saw the greatest medical and economic potential in these three areas through the use of AI. Building on this, the paper outlines altogether 11 use cases in 9 hospitals across Europe, informing how AI can contribute to agility and efficiency in hospitals, improving health care from the resource efficiency as well as the service quality and choice side, aligned with the core hospital workflow and value adding processes. The second question is: How can a basic structure for the different AI use cases be established to avoid the mistake of developing isolated solutions that are difficult to transfer across hospitals? The authors propose three basic support areas which help to ensure a holistic approach to AI application implementation and transfer within the paper.

The paper is structured as follows: the next section outlines the use case methodology applied for the analysis presented. The subsequent section describes the specific use cases and the expectations of hospitals towards AI applications. The discussion section then addresses possible benefits and challenges, as well as concepts such as human–computer interaction and the medical data space, to overcome the challenges posed by AI applications in the hospital context. The final section provides an outlook on future developments and challenges for AI applications in hospitals.

2. Use Case Methodology

The first step in identifying the current challenges and areas of interest of European hospitals was to create a survey. The survey was carried out to obtain a differentiated view of the needs of European hospitals. Specifics were requested, such as country, type, number of patients and beds, and the main health care areas. In addition, hospital decision-makers identified specific areas of application and described the focus and expected benefits of AI. Table 1 outlines the setup of these hospital characteristics for the institutions included in the survey.

Table 1. Included survey and case study hospitals in Europe.

1 Data from hospital sources. Definitions might differ due to national data regulations. 2 University Hospital of Bern: http://www.frauenheilkunde.insel.ch/de/ueber-die-klinik , accessed on 2 October 2020. 3 Kuopio University Hospital: https://www.psshp.fi/web/en/organisation/operations-and-tasks , accessed on 2 October 2020. 4 Südtiroler Sanitätsbetrieb: https://www.sabes.it/de/578.asp , accessed on 2 October 2020. 5 La Fe University Hospital: Hospital activity report, 2019. 6 Federico II University of Naples. 7 Orton Ltd. University Hospital. 8 Odense University Hospital: https://en.ouh.dk/about-ouh/key-figures , accessed on 2 October 2020. 9 Bayındır Hospital. 10 Universitätsklinikum Essen: https://www.uk-essen.de , accessed on 2 October 2020.

The framework situations for the outlined AI use cases are characterized by the specific hospital setups of a broad multitude of European hospitals. By means of surveys carried out in the hospitals participating in this analysis, different health care personnel provided systematic answers to a structured questionnaire dealing with aspects relevant to the study. The hospitals were asked to detail current practical problems in different areas, how they are currently managing these problems, ways and mechanisms to improve in these areas by means of AI, and relevant KPIs determining qualitative and quantitative improvements related to the adoption of the AI application. As a result, after extracting the information from these surveys, use cases could be drafted for the different health institutions, based on real and actual needs and opportunities. Societies require an effective and efficient health care system and, especially, hospitals as nodes in a network of actors providing high-quality services and resources and serving patients. Table 2 summarizes the main expectations as stated by the health organizations in the survey.

From the expectations, a total of 11 use cases in different health areas have been envisioned. It turns out that three particular fields are of specific interest to the hospitals surveyed: diagnosis, care, and logistics.

In the field of diagnosis, clinical decisions still mostly depend on the application of clinical practice guidelines, instead of being based on automatic decision support tools that exploit the increasing availability of medical data from molecular assays, electronic health records, clinical and pathological images, and wearable connected sensors. Nowadays, clinicians face enormous challenges in reconciling heterogeneous clinical data and exploiting their information content to make optimal decisions when assessing a disease or its progression, and this situation has become more evident in the midst of the global COVID-19 pandemic. Thus, there is an urgent need to develop smart decision support systems which assist clinicians in making rapid and precise diagnostic decisions through the combination of multiple data sources. AI-based methodologies for medical diagnosis and medical decision support have gained attention in recent years, as these systems hold promise to automate the diagnosis and triage processes, thus optimizing and accelerating the referral process, especially in urgent and critical cases. Recently, state-of-the-art examples demonstrated that software based on AI can be used in clinical practice to improve decision-making and to achieve fast and accurate data-based diagnosis of various pathologies. In particular, AI has proven particularly helpful in areas where the diagnostic information is already digitized, such as detecting cancers based on molecular, genomic, and radiological data [ 8 ], making individual prognoses in psychiatry using neuroimaging [ 9 , 10 ], identifying strokes from computed tomography scans [ 11 ], assessing the risk of sudden cardiac death or other heart diseases based on electrocardiograms and cardiac magnetic resonance images [ 12 , 13 ], classifying skin lesions from skin images [ 14 ], finding indicators of diabetic retinopathy in eye images [ 15 ], and detecting phenotypes that correlate with rare genetic diseases from patient facial photos [ 16 ]. The change in clinical practice through and by means of technological innovation is today decisively enabling health care systems to face the continuous economic, socio-demographic, and epidemiological pressures [ 17 ]. However, technological innovation, although important and central, must be carefully examined and accompanied to ensure that it really corresponds to effective social innovation. As addressed by MedTech Europe, developing AI systems and algorithms for healthcare settings requires specific skillsets which are in short supply, and investment in the education and training of the professionals involved (e.g., data scientists, practitioners, software engineers, clinical engineers) is mandatory [ 18 ].

In the field of care, AI for health has shown great potential to improve healthcare efficiency, considering the relationship between health factors, including service and management, and ICT factors that include sensors, networks, data resources, platforms, applications, and solutions [ 19 ]. For hospital facilities, AI is one of the most powerful technologies from the perspectives of data, computing power, and algorithms. Research in Health 4.0 has been conducted in an interdisciplinary way with a diversified set of applications and functionalities, and in terms of implementation it has been found most commonly in hospitals' information flows, especially those related to healthcare treatments [ 20 ]. In this context, it is also necessary to consider and assess the prevailing opinions and expectations among stakeholders regarding ICT health solutions, such as the improvement of factors that affect quality of life, quality of health care, patients' knowledge, monetary aspects, or data security and privacy [ 21 ]. Although the research trend in the field of chronic care is to keep a continuous monitoring of each patient (promoting continuity of health and social care), tools to identify chronic patients and analyze their use of health services (care pathways) do not exist yet, and in addition there are no AI models that facilitate the design of integrated care pathways. There is clear evidence of the relevance of the organization and management of technology in health care, a concept further reinforced in the light of the recent COVID-19 pandemic. Assessment, supply, prioritization, appropriate usage, and exploitation are indeed not trivial duties, and the final success of any health process is widely affected by technology management issues.

In the field of logistics, AI can be applied in the form of optimizing ML algorithms for scheduling and transportation planning [ 22 , 23 , 24 ]. This has not yet been extended to AI-led prognosis applications, at least not with empirical testing. The current industry standard draws on manual processes to plan and optimize resource use. Software applications are widely used in hospitals for this problem area, such as ORBIS, Medico or M-KIS, which rely on an old architecture and non-intelligent, manual interaction with users. Even specialized software modules such as myMedis, which support the whole process of OR management and related resource planning, still do not use AI-based technology and are thus not able to cope with the rising complexity of resource planning optimization [ 25 , 26 , 27 ]. It has been reported that AI adoption by key stakeholders such as doctors remains low [ 28 ], and that existing applications do not cater enough to the specific needs of the human stakeholders who are supposed to interact with the systems [ 29 ]. Accordingly, a focus on human–computer interaction (HCI) spanning pre-design, design, and post-design phases, as well as catering to user, system, task, and interaction characteristics [ 30 ], holds the potential to increase AI adoption and user satisfaction [ 31 ]. While expertise in HCI has been developed in the field of computer science [ 32 , 33 ], it has not been systematically applied to the hospital context.

3. Use Case Descriptions and Expectations

In the field of diagnosis, we propose to advance methods that intelligently utilize heterogeneous data from various sources and novel AI-based methods for supporting medical diagnosis and decision making inside clinics. More specifically, we propose to increase the utilization of AI-based methods in four selected use cases: diagnosing coronary artery disease (CAD), assessing fetal state during labor, diagnosing epidermolysis bullosa (a rare genetic disease), and diagnosing arrhythmias automatically. All of the use cases provide heterogeneous data, which is a challenge for medical experts to handle but at the same time an opportunity for novel AI-based methods to support diagnosis and clinical decision-making. AI-based methods also enable the detection of diagnostically relevant factors that are unnoticeable to humans. Collaboration between technical and medical experts is crucial to co-create tools for use in clinics that are highly acceptable, widely deployed, and provide real value for patients, doctors, and societies.

3.1. Use Case 1: Coronary Artery Disease Diagnosis

Among all routinely available diagnostic tests, coronary CT angiography (CCTA) has the highest sensitivity (95–99%) for detection of coronary artery disease (CAD), with a specificity of 64–83%, and it has recently been established as the first-line diagnostic tool for stable chest pain. However, after CCTA there are still several patients for whom the diagnosis and reason for symptoms remain unclear and further imaging studies (myocardial perfusion and/or invasive coronary angiography) are needed to decide the best treatment. Training an ML algorithm to recognize, among the unclear cases with suspected CAD, those for whom further imaging is likely to provide essential information would improve the cost-efficiency and logistics of the diagnosis of chest pain patients. In other words, the aim would be to develop a tool for evaluating the risk of the patient having prognostic CAD for customized clinical decision-making. The number of patients with suspected CAD referred to hospital for diagnostic imaging is likely to grow worldwide due to recently published clinical guidelines emphasizing the use of CCTA. For the study, a number of contemporary CCTA studies and essential clinical data (age, sex, cardiovascular risk factors, and medication) could be used to train a machine-learning algorithm such as the Disease State Index (DSI), a method to quantify the probability of belonging to a certain disease population, originally developed to support clinicians in diagnosing Alzheimer's Disease [ 34 ].
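To make the intended workflow concrete, the following is a minimal illustrative sketch, assuming a tabular extract of CCTA findings and clinical risk factors with a retrospective label indicating whether further imaging proved informative; the file name, column names, and the logistic-regression model are illustrative assumptions and do not reproduce the DSI method itself.

```python
# Hypothetical sketch: flag suspected-CAD patients for whom further imaging
# after CCTA is likely to add diagnostic value. Data layout is an assumption.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("ccta_cohort.csv")            # numerically coded clinical and CCTA features
X = df[["age", "sex", "diabetes", "smoking", "statin_use", "calcium_score", "max_stenosis_pct"]]
y = df["further_imaging_informative"]          # 1 if downstream imaging changed management

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

print("ROC-AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```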

3.2. Use Case 2: AI Based Automatic Arrhythmia Analysis

Atrial fibrillation (AF) is the most common sustained arrhythmia and is associated with significant morbidity and adverse outcomes (stroke, heart failure, death). Overall, AF is associated with a five-fold greater risk of stroke. Anticoagulation therapy has been demonstrated to reduce AF-related stroke risk significantly. Paroxysmal AF (PAF) is a self-terminating, recurrent form of AF. The diagnosis of PAF is often tricky, since PAF episodes can be short in duration and asymptomatic, and the episode incidence can be low. It is estimated that stroke causes total costs of EUR 45 billion per year across Europe. In European countries, 1.5 million people are diagnosed with stroke every year, 9 million are living with stroke, and it is responsible for 9% (0.4 million) of all deaths in the EU [ 2 ]. Cryptogenic stroke (CS) and transient ischemic attack (TIA) patients and cardiac surgery patients are the three most clinically significant patient groups in which PAF is often underdiagnosed. In this use case, state-of-the-art AI-based arrhythmia analysis algorithms are developed for PAF screening in patients with TIA or cryptogenic stroke and for the detection of post-operative atrial fibrillation in cardiac surgery patients. AI-based automatic arrhythmia analysis implemented in wearable sensors enables longer monitoring time with improved patient usability and still requires minimal effort from healthcare professionals. Developing novel, AI-based, non-invasive methods for PAF screening using simple wearable ECG or PPG measurements would increase the rate of PAF diagnosis in cardiac surgery, CS, and TIA patients. These monitoring methods will be easily exploitable and inexpensive. The timely diagnosis of PAF has an important impact, since anticoagulation may save the patient's life or prevent stroke-related disabilities such as paralysis, aphasia, and chronic pain. There is a high cost-saving potential, since one prevented stroke can save EUR 20,000 of direct medical costs and more than EUR 100,000 of indirect costs (disability-adjusted life years lost).
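As a simplified illustration of how a wearable pipeline might pre-screen rhythm segments before a learned classifier or a clinician reviews them, the sketch below flags possible AF from the irregularity of beat-to-beat (RR) intervals; the thresholds and the rule itself are assumptions for illustration, not the algorithms developed in this use case.

```python
# Minimal rule-based PAF pre-screening from RR intervals (illustrative thresholds).
import numpy as np

def af_suspected(rr_intervals_s, cv_threshold=0.12, rmssd_threshold=0.08):
    """Flag a segment as possible atrial fibrillation based on RR irregularity."""
    rr = np.asarray(rr_intervals_s, dtype=float)
    successive_diff = np.abs(np.diff(rr))
    rmssd = np.sqrt(np.mean(successive_diff ** 2))   # short-term beat-to-beat variability
    cv = np.std(rr) / np.mean(rr)                    # overall irregularity
    return bool(cv > cv_threshold and rmssd > rmssd_threshold)

# Example: an irregularly irregular (AF-like) segment vs. a regular sinus rhythm
irregular = np.random.default_rng(0).uniform(0.4, 1.2, size=60)
regular = np.full(60, 0.8) + np.random.default_rng(1).normal(0, 0.01, size=60)
print(af_suspected(irregular), af_suspected(regular))   # expected: True, False
```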

3.3. Use Case 3: Fetal State Assessment during Labour

Cardiotocography (CTG), also known as electronic fetal monitoring (EFM), is used for fetal assessment before and during labour and has largely replaced the use of intermittent heart rate auscultation. Visual interpretation of CTG traces is characterized today by great inter- and intra-observer variability with low specificity. EFM has been shown to lead to unnecessary medical interventions such as caesarean sections and vaginal-operative deliveries, with the associated health consequences and economic costs. The low specificity for identifying fetal hypoxia can be partially interpreted in the context of observer variability. CTG recording is widely performed for fetal assessment during delivery and has become routine in most hospitals worldwide. A software program connected to the electrodes of the electronic fetal monitoring system registers fetal and maternal data such as fetal heart rate and its variations, maternal heart rate, uterine contractions, and fetal movements. Currently, the most specific available CTG interpretation system is the FIGO (Fédération Internationale de Gynécologie et d'Obstétrique) classification, which is most commonly used worldwide [ 35 ]. Fetal outcomes after delivery are measured by assessing the following two parameters: (1) arterial pH directly after birth (blood from the umbilical cord); (2) APGAR score assessment at 1, 5 and 10 min after delivery. This not only offers information about the fetal state, but also gives the observer (obstetricians and midwives) direct feedback about the previous CTG interpretation during delivery as well as the prediction of fetal hypoxia/acidosis. An arterial pH under 7.15 is considered to be pathologic and is a direct indicator of fetal hypoxia. An APGAR score under 7, measured 5 min after delivery, is also considered to be pathologic. APGAR is a scoring system based on five fetal features (appearance, pulse, grimace, activity and respiration), providing information about the status of the new-born after delivery [ 36 ]. Considering the problem of observer variability, four scenarios are possible when CTG interpretation is performed by obstetricians or midwives: (1) normal CTG, normal outcomes (pH/APGAR); (2) pathological CTG, normal outcomes (pH/APGAR); (3) normal CTG, pathological outcomes (pH/APGAR); (4) pathological CTG, pathological outcomes (pH/APGAR). By introducing AI interpretation, the purpose is to improve scenarios 2 and 3, which will in most cases lead to the avoidance of surgical interventions, since the main problem of CTG is specificity, or to performing interventions at moments where one would otherwise refrain from doing so (scenario 3). The AI system could provide feedback when fetal asphyxia is expected (pH < 7.15 or APGAR at 5 min < 7), as well as warnings, if applicable. The proposed AI (or ensemble of several AI instances) would help remove the existing great inter- and intra-observer variability and would have a direct and positive impact on effectiveness and efficiency through: (1) a decrease in unnecessary caesarean sections and instrumental deliveries; (2) an increase in specificity for identifying fetal hypoxia; (3) a decrease in unnecessary health costs derived from unnecessary surgical procedures.
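A hedged sketch of how such an AI second reader could be prototyped is shown below, assuming summary CTG features per delivery and a binary label for the pathological outcome (pH < 7.15 or 5-min APGAR < 7); the data file, feature names, and random-forest model are illustrative assumptions.

```python
# Hypothetical sketch: learn to predict a pathological fetal outcome from summary
# CTG features, to complement FIGO-based visual interpretation.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

ctg = pd.read_csv("ctg_deliveries.csv")            # hypothetical per-delivery extract
features = ["baseline_fhr", "short_term_variability", "accelerations_per_10min",
            "late_decelerations_per_10min", "uterine_contractions_per_10min"]
X, y = ctg[features], ctg["pathological_outcome"]  # 1 if pH < 7.15 or APGAR5 < 7

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
print("CV ROC-AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```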

3.4. Use Case 4: Diagnosis in Epidermolysis Bullosa, a Rare Genetic Disease

In Europe, a disease is considered rare when it affects fewer than 1 in 2000 people. There are more than 7000 rare diseases (RDs) worldwide; about 80% of them have a genetic origin and approximately 75% affect children. RDs are estimated to affect 350 million people globally [ 37 ]. In better-resourced countries, correct diagnosis of rare genetic diseases takes on average between 5.5 and 7.5 years. In Europe and the United States, nearly half of the first diagnoses are only partially correct. The deployment of effective diagnostic procedures is hampered by the underestimation of the true disease frequency (owing to the lack of awareness of RDs) and by insufficient knowledge of the disease pathophysiology and natural history, combined with the paucity of validated disease-specific biomarkers. Epidermolysis bullosa (EB) is a group of inherited, genetic diseases in which the skin (and the mucous membranes) is very fragile and forms severe, chronic blisters and lesions after even minor friction or trauma. This rare genetic disorder affects all genders, ethnic, and racial groups and leads either to early death or to a long-term debilitating and life-threatening condition, since the severe blistering and the associated scarring and deformities result in poor quality of life and reduced life expectancy. In the world there are about 500,000 persons affected by this disease, 36,000 of them in the European Union (EU). EB can be classified into four major subtypes, namely dystrophic EB (DEB), junctional EB (JEB), EB simplex (EBS), and Kindler Syndrome, depending on the gene mutations and the level of skin cleavage [ 38 ]. Within these subtypes, EB has different severity levels and clinical manifestations. There is an urgent need to develop efficient methods for the early diagnosis of the EB subtype, the prediction of the disease progression and, consequently, the selection of individualized, precision therapeutic strategies. In this endeavour, "omics" technologies, such as genomic analysis by means of next-generation sequencing (NGS), have recently found applications in the diagnosis, molecular subtyping, and follow-up prediction of EB. The information retrieved from these technologies represents a substantial increase in the amount of data that can be used to support EB patients, provided that advanced computational methods are available for their integrative and combinatorial analysis. In this use case, state-of-the-art AI algorithms are developed and applied to support early diagnosis, sub-classification, and therapeutic stratification of EB, as an example of a rare genetic disease. In particular, AI-based methods will be applied to the integrative analysis of biological (genomic, molecular, immunological, and imaging) and epidemiological (medical records) data with the aim to: (1) support disease and disease subtype diagnosis; (2) identify distinctive features (genomic lesions, proteins, and immunological states) associated with disease severity (biomarkers) for the prediction of disease progression; (3) detect molecular signatures for guiding patient stratification for novel means of treatment (precision therapeutics). ML algorithms can be trained to integrate phenotypic and clinical data for the prioritization of disease-related genes and mutations, for the prediction of the pathogenicity and clinical relevance of genetic variants, and for the identification of pathogenic variant combinations.
Furthermore, AI-based methods could be used for disease comprehension and therapeutic target selection by unravelling the affected genetic and molecular players and pathways. AI and ML can be applied to detect anomalies in gene expression and to correlate transcriptional patterns with molecular mechanisms and clinical phenotypes, to learn low-frequency patterns, and to deliver automated class attribution [ 37 ]. Results from these analyses would facilitate the recommendation of optimal treatment approaches and the identification of reliable biomarkers of normal versus pathogenic states and of response to therapeutic interventions. AI methods focusing on removing the existing limitations in the correct diagnosis of EB subtypes and in the prediction of the clinical course of EB patients might achieve at least the same average accuracy as medical doctors following the latest consensus reclassification of inherited EB. The AI-based integrative analysis of biological and medical data will have a direct and positive impact on effectiveness and efficiency through: (1) a decrease in the time needed for the diagnosis of the correct EB subtype and the stratification of the patient for the most effective therapeutic treatment; (2) an increase in the number and efficacy of diagnostic and prognostic biomarkers; (3) an increase in the efficacy of selection criteria to identify patients who will benefit from ex vivo gene therapy; (4) a decrease in unnecessary life-threatening conditions and health costs derived from delayed diagnosis and treatment administration.
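As one illustration of the integrative analyses mentioned above, the following sketch trains a sparse multi-class model to predict the EB subtype from a gene-expression matrix; the input files and the choice of an L1-penalized logistic regression are assumptions for illustration only.

```python
# Illustrative sketch: multi-class prediction of EB subtype (DEB, JEB, EBS, Kindler)
# from a gene-expression matrix; files and labels are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

expr = pd.read_csv("eb_expression_matrix.csv", index_col=0)       # samples x genes
labels = pd.read_csv("eb_subtypes.csv", index_col=0)["subtype"]   # DEB / JEB / EBS / Kindler

# L1 penalty keeps only a small set of informative genes (candidate biomarkers)
model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=5000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print("Balanced accuracy:",
      cross_val_score(model, expr.values, labels.values, cv=cv,
                      scoring="balanced_accuracy").mean())
```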

In the field of care, AI will be applied in four further use cases: to improve the management and decision support process, specifically the chronic care pathway and resources characterization with simulation of demand and prognosis, adverse events identification and prevention, a chronic resources management support tool, and monitoring of the recovery process. Novel, innovative tools for simulation and prognosis would become available, projecting the demand for health resources for a given population in a territory, considering temporal projections of the frailty condition of the population and patients. As for recovery monitoring, contactless determination of vital signs will provide an advanced functionality by monitoring all patients and not only critical cases. Patients will benefit from reduced restrictions due to cables and devices. In addition, there is a time saving for nursing staff, as they do not have to put the devices on the patient and disinfect them. Regarding prevention of adverse critical conditions, the proposed approach relies on the analysis of the entire temporal series of vital signs by means of deep neural networks and hybrid approaches.

3.5. Use Case 5: AI Chronic Management and Decision Support Engine

According to data from the World Health Organization (WHO), respiratory diseases together with cardiovascular diseases are leading causes of death and disability in the world. Given this premise, the use case will focus on the analysis of data from chronic patients diagnosed with one of four common pathologies: COPD, asthma, coronary heart disease (e.g., heart attack), and cerebrovascular disease (e.g., stroke). The objective would be to apply AI in the clinical context of chronic care to characterize the pathways and resources used, as well as to anticipate the demand for resources in order to optimize economic costs. ML could then be used to analyze patient data related to clinical parameters (e.g., laboratory tests), use of resources (e.g., hospitalizations), sociodemographic data (e.g., age, gender), and quality of life, among others. The AI engine would be able to support two analysis processes: chronic care pathway and resources characterization (stratifying patients by degree of frailty and mapping pathways), and resource demand simulation and prognosis (according to each pathway/patient stratum).
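A minimal sketch of the first analysis process, assuming an extract of chronic-patient records, could stratify patients into frailty/consumption strata with unsupervised clustering before pathways are mapped and demand is simulated; the file, features, and k-means model are illustrative assumptions.

```python
# Hypothetical sketch: stratify chronic patients into strata prior to pathway mapping.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

patients = pd.read_csv("chronic_cohort.csv")     # hypothetical extract of patient records
features = ["age", "hospitalizations_last_year", "er_visits_last_year",
            "charlson_index", "n_active_prescriptions"]
X = StandardScaler().fit_transform(patients[features])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
patients["stratum"] = kmeans.labels_
print(patients.groupby("stratum")[features].mean())   # average profile of each stratum
```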

3.6. Use Case 6: Chronic Resources Management Support Tool

As stated by the surveyed hospitals, efficient and effective scheduling of resources is a challenge for most hospitals. Possible resources to be scheduled are patients' beds, material, medicament and assistance kits, medical equipment (e.g., diagnostic machines), or operating theatres. The goal would be to automatically schedule the usage of the considered resources as well as to measure and improve quantitative KPIs considered relevant for the most significant hospital metrics, e.g., cost, service level, delivery time, and resource utilization. To achieve this objective it is necessary to carry out the following activities: (1) translating hospital needs, often presented in medical language, into technical concepts; (2) defining the scheduling problem to be tackled by the intelligent algorithm and its input data; (3) developing an intelligent algorithm to automatically schedule the usage of resources and to measure quantitative KPIs over time; (4) testing and validating the intelligent algorithm using real datasets with the aim of fine-tuning the procedures and selection rules implemented in the algorithm; (5) continuous learning of the intelligent algorithm through its utilization, its performance, and the evolution of the surrounding environment.
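To illustrate activity (3), the sketch below shows a deliberately simple greedy scheduler that assigns requested procedures to operating theatres and reports a utilization KPI; a production system would rather use constraint solvers or learned policies, and all data in the example are invented.

```python
# Simple greedy scheduling sketch: longest procedures first, least-loaded theatre first.
from dataclasses import dataclass, field

@dataclass
class Theatre:
    name: str
    capacity_min: int = 480            # one 8-hour day
    booked_min: int = 0
    cases: list = field(default_factory=list)

def schedule(cases, theatres):
    """Assign (case_id, duration) pairs greedily and return the updated theatres."""
    for case_id, duration in sorted(cases, key=lambda c: -c[1]):
        target = min(theatres, key=lambda t: t.booked_min)      # least-loaded theatre
        if target.booked_min + duration <= target.capacity_min:
            target.booked_min += duration
            target.cases.append(case_id)
    return theatres

theatres = [Theatre("OR-1"), Theatre("OR-2")]
cases = [("hip", 180), ("hernia", 90), ("cabg", 300), ("appendix", 60), ("knee", 150)]
for t in schedule(cases, theatres):
    print(t.name, t.cases, f"utilization={t.booked_min / t.capacity_min:.0%}")
```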

3.7. Use Case 7: Adverse Events Identification and Prevention

Clinicians require support in the identification and prevention of adverse clinical conditions (ACC), as well as in identifying the main related care pathways. The technology could support the clinician in the automatic identification of ACC, such as a reaction to a new drug taken by the patient after a change of her/his treatment plan. The AI tools could analyze data captured by vital signs monitoring systems, such as heart rate, blood pressure, and body temperature, and other data coming from the patient, such as information inferred by dialog systems based on natural language processing that would periodically interact with the patient to identify specific symptoms. Additionally, the tools would be able to support clinical staff in case a change within the care pathway is needed. The objective would be to identify and forecast ACC for patients with non-communicable chronic diseases, particularly cardiovascular diseases, by using AI. Models and tools for the automatic identification of ACC would be preliminarily realized using retrospective data and classic ML algorithms, following current guidelines on the management of the diseases of interest. Such models and tools, however, could be continuously improved, following a continuous learning approach. Subsequently, the prevention of ACC could be attempted by advanced classification systems, based on a combination of deep learning and reinforcement learning approaches, that will analyze time series data concerning the evolution of the patient's condition at different stages of the care pathway.
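As a hedged sketch of the retrospective, classic-ML starting point described above, the code below flags candidate adverse clinical conditions as statistical anomalies in a vital-sign time series; the column names, window length, and threshold are illustrative assumptions.

```python
# Hypothetical sketch: mark vital-sign samples far outside the recent rolling baseline.
import pandas as pd

def flag_anomalies(vitals: pd.DataFrame, column: str = "heart_rate",
                   window: int = 60, z_threshold: float = 3.0) -> pd.Series:
    """Return a boolean series marking samples whose rolling z-score exceeds the threshold."""
    rolling = vitals[column].rolling(window, min_periods=window)
    z = (vitals[column] - rolling.mean()) / rolling.std()
    return z.abs() > z_threshold

vitals = pd.read_csv("ward_vitals.csv", parse_dates=["timestamp"])   # hypothetical export
alerts = vitals.loc[flag_anomalies(vitals), ["timestamp", "heart_rate"]]
print(alerts.head())
```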

3.8. Use Case 8: Monitoring of the Recovery Process

Monitoring of the recovery process is a key hospital process. In order to achieve high, continuous quality, vital parameters have to be monitored constantly. Vital parameters such as the heart rate or the respiration rate are key indicators of the current health status, urgent emergencies, and the recovery process. Persons with chronic diseases especially benefit from continuous monitoring. In areas such as operating theatres or ICUs there is high coverage, whereas in normal wards or floors there is little to no coverage. The objective would be the remote determination of vital parameters such as heart rate and respiration rate for improved recovery monitoring in a patient-friendly way, especially for chronic diseases. This could be realized by optical sensors working in a remote mode and AI algorithms such as CNNs, BNNs, or adaptive optical flow. To achieve this objective it is necessary to carry out the following activities: (1) identifying the optimal positioning of optical sensors within the hospital; (2) analysis of algorithms for remote vital parameter determination in clinical environments; (3) transfer and implementation of the algorithms to the clinical setting; (4) evaluation of the algorithms in the clinical setting by means of reference systems, which would be kept synchronized; (5) an interface protocol for the transmission of vital parameters to a central processing unit in the hospital. It should be guaranteed that only these metadata are transferred, not the raw data, thus protecting the privacy of the patients.
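A minimal illustration of the underlying signal-processing idea is given below: given the mean skin-pixel intensity per video frame, the dominant cardiac frequency can be estimated with an FFT. Real remote-monitoring pipelines add region tracking, band-pass filtering, and learned models (e.g., CNNs); the sampling rate and synthetic signal here are assumptions.

```python
# Illustrative sketch: estimate heart rate from a mean-pixel intensity signal via FFT.
import numpy as np

def estimate_heart_rate(mean_pixel_signal: np.ndarray, fps: float = 30.0) -> float:
    signal = mean_pixel_signal - np.mean(mean_pixel_signal)      # remove DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.5)                       # 42-210 bpm plausibility band
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic 20 s clip with a 1.2 Hz (72 bpm) pulse component plus noise
t = np.arange(0, 20, 1 / 30)
signal = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.default_rng(0).normal(0, 0.2, t.size)
print(f"{estimate_heart_rate(signal):.0f} bpm")                  # approximately 72
```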

In the field of logistics, AI can be implemented for example in three different use cases as described below. The main focus is the optimization of resource use. It is expected that AI will help to better predict material consumption and needs in the whole process. Besides material consumption, transport planning is a further focus point in the field of logistics.

3.9. Use Case 9: Material Consumption Recognition and Prognosis

Currently, in the University Hospital in Essen, as in many other hospitals in Europe, the documentation of materials used for hospital patients is a non-digital, paper-and-pencil process that consumes a great deal of staff time. Therefore, digital improvements in the form of an automated capture system for material consumption are a prominent request in hospitals and are addressed in this use case. Together with an industry partner, an innovative care trolley is being developed with a camera system and complementary AI-based software that uses ML to recognize automatically the objects consumed during patient processes. User interaction can be implemented according to current state-of-the-art concepts. The result will be a data recognition and prognosis tool relating actual material consumption to patient cases, thereby enabling bottom-up planning and prognosis for optimized procurement and logistics in hospitals.
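Once the camera/AI system has turned trolley images into per-day consumption counts, even a naive per-item forecast supports bottom-up procurement planning, as the hedged sketch below illustrates; the file and column names, the 7-day moving average, and the safety margin are assumptions.

```python
# Hypothetical sketch: naive per-item demand forecast from recognized consumption counts.
import pandas as pd

usage = pd.read_csv("material_consumption.csv", parse_dates=["date"])  # item, date, quantity
daily = (usage.groupby(["item", pd.Grouper(key="date", freq="D")])["quantity"]
              .sum().unstack(fill_value=0).T)     # rows: dates, columns: items

forecast = daily.rolling(window=7).mean().iloc[-1]   # 7-day moving average per item
reorder = forecast * 1.2                              # 20% safety margin (assumed)
print(reorder.sort_values(ascending=False).head(10))
```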

3.10. Use Case 10: Optimization of Human-Robot Teams in Hospital Logistics Operations

Odense University Hospital (OUH) will benefit from a reactive AI-based resource management and scheduling system for material transport logistics operations. The main goal is to improve upon current task management systems with the inclusion of an AI-driven optimized scheduler that will be able to oversee all the available robots and to plan, schedule, and assign tasks to the relevant hospital workforce, mainly logistic robots but also employees. The proposed task management software will have several functions and will therefore contain several different conceptual elements: (1) an automated task-generation system, based on a reinforcement learning (RL) algorithm, that analyzes the relationship between room use and material requirements to predict what will be needed where and when, based on past experience; (2) a scheduling element that knows what transport resources are available to it, their status, and where they are, and that can create an optimal schedule out of transport requests generated from user input or the task generation above; (3) a reactive planning element that will rework the schedule regularly, e.g., either every hour or when new on-demand transport requests are received; (4) a transport optimizing element that analyzes the efficiency of the transport and adjusts scheduling parameters to produce maximal transport for minimal energy use and minimal task requests to humans; (5) a route generator element that creates efficient routes for the robots and sends these to the robots with their new tasks, in accordance with the schedule, coupled with a route status analyzer which takes input from sensors on the robots and around the hospital to determine the location of any blockages; (6) a sensory data analyzer that can use incoming data from various infrastructure sources to inform the decision-making elements, e.g., use of elevator position to inform the route generator or use of smart cameras that can measure room occupancy for the task generator; (7) a representation of (a) task criticality, i.e., planned, urgent, and critical in emergency situations, (b) the current status of the material flow, (c) the robots (name, capabilities, location, current task and status), and (d) item transport requests (also available in a form readable by humans); and (8) a supervision element that will be utilized to identify and critique any suboptimal decisions made by the scheduler and provide feedback that will be used as input for a reinforcement learning sub-component. Task and material flow reports collected and shared by the hospital service and logistics departments of OUH, currently exceeding 555,000 entries describing various material flow logistic cases, i.e., transfer of medication, healthcare equipment, and samples, will provide a variety of types of inputs and tasks. The system could automatically obtain information from various hospital software sources, e.g., human workforce positions provided by the proposed event-based messaging system (updating and adapting the current emergency messaging solution), elevator status, and sensors in the hospital.
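The following small sketch illustrates only the scheduling element (2): pending transport requests are greedily assigned to the robot that becomes available earliest. A full system would add reactive re-planning, routing, and RL-based task generation; the task data and durations are invented.

```python
# Greedy transport-task assignment sketch using a priority queue of robot availability.
import heapq

def assign_transports(tasks, robots):
    """tasks: list of (task_id, duration_min); robots: list of robot names."""
    free_at = [(0, r) for r in robots]            # (time the robot becomes free, robot)
    heapq.heapify(free_at)
    plan = []
    for task_id, duration in sorted(tasks, key=lambda t: -t[1]):   # longest tasks first
        start, robot = heapq.heappop(free_at)
        plan.append((robot, task_id, start, start + duration))
        heapq.heappush(free_at, (start + duration, robot))
    return plan

tasks = [("meds-A2", 12), ("linen-C1", 25), ("samples-lab", 8), ("equipment-OR", 30)]
for robot, task, start, end in assign_transports(tasks, ["robot-1", "robot-2"]):
    print(f"{robot}: {task} from t={start} to t={end} min")
```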

3.11. Use Case 11: Co-Development and Evaluation

Bayındır Hospital Söğütözü in Ankara is one of the three high-capacity hospitals that belong to the Bayındır Healthcare Group. The Bayındır Healthcare Group has three hospitals, one medical center, and seven dental clinics. The material management systems of all these healthcare facilities can be centrally monitored and controlled. This provides an additional opportunity to study the impact of planned AI implementations on multi-location inventory systems. The hospital has specific experience and requirements regarding healthcare logistics. It has an existing barcode scanning system for collecting healthcare and inventory information, which is aggregated centrally for planning the availability of medical supplies and for logistics management. However, the hospital may still benefit from a new picture recognition and AI-based system in terms of time savings, reductions in human error, and an increase in safety by reducing the contact between healthcare staff and patients. Furthermore, material management and operating room scheduling are highly interrelated in practice. Using the OR schedules to trigger the purchase of perioperative materials is expected to further reduce inventory costs and increase operational efficiency compared to independent material management systems [ 39 ]. In comparison to standalone applications of automated inventory tracking, predictive logistics, and cognitive automation, an additional understanding of the impact of integrated AI applications on healthcare logistics operations will bring several challenges, including data storage and management, data exchange, security and privacy, and integrated decision-making.
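As a hedged illustration of using OR schedules to trigger perioperative material purchases, the sketch below aggregates next-week material demand from a procedure-to-kit mapping; both the mapping and the schedule are hypothetical placeholders.

```python
# Hypothetical sketch: derive material demand from the OR schedule via a bill of materials.
from collections import Counter

KIT_PER_PROCEDURE = {                        # illustrative bill of materials per case type
    "hip_replacement": {"implant_kit": 1, "sterile_drape": 4, "suture_pack": 2},
    "appendectomy": {"laparoscopy_kit": 1, "sterile_drape": 2, "suture_pack": 1},
}

def material_demand(or_schedule):
    """Aggregate material needs from a list of scheduled procedure types."""
    demand = Counter()
    for procedure in or_schedule:
        demand.update(KIT_PER_PROCEDURE.get(procedure, {}))
    return demand

schedule_next_week = ["hip_replacement", "appendectomy", "hip_replacement"]
for item, qty in material_demand(schedule_next_week).items():
    print(item, qty)
```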

4. Discussion: Benefits and Challenges for AI in Hospitals

The specific benefits and the data- and AI-application challenges are presented and discussed in this section, based on the outlined case studies, with additional attention to their contribution against pandemic situations such as COVID-19.

The use cases presented in Table 3 are distinguished by specific aspects often related to the area of interest, e.g., diagnosis, care, treatment, logistics, or rehabilitation, or to the targeted goals, e.g., increasing the efficiency of a certain health care process, improving its quality, or increasing the service level. However, the detailed description of the aforementioned case studies suggests that all the involved hospitals are affected by common challenges and potential barriers to the adoption of AI in their healthcare processes on a regular basis. In particular, it is possible to define three main issues which should be properly managed to ensure an efficient and effective adoption of AI tools and techniques in the healthcare delivery processes of European hospitals. The first aspect to be considered is human acceptance and the real adoption of AI solutions in hospitals. Resistance to automated and partially opaque tools which offer assistance in several healthcare services is a major obstacle to overcome. Leveraging such tools in traditional diagnosis, care, and treatment processes is useful but often characterized by a low level of trust, in particular by doctors and medical personnel. Furthermore, the usage of such AI solutions should not increase the complexity or time required to complete certain medical processes, and should therefore offer an adequate and well-designed interaction with human adopters. The second challenge to be tackled to foster the adoption of AI in European hospitals is the proper management of medical data. This information is characterized by features which make its storage and usage much more sensitive than other data typically collected in digital environments.

Table 3. AI Use Cases, AI Methods and Outcomes.

However, as COVID-19 dramatically revealed, the value behind medical data is huge. In particular, the opportunity to systematically collect data concerning patient conditions, diagnoses made, treatments performed, and care provided offers the hospitals of the future the chance to significantly increase the efficacy and efficiency of the healthcare services delivered. The last area involved in the structural adoption of AI in European hospitals deals with technology selection and ethics. The former includes the complex and interrelated process of selecting a novel technology for adoption in healthcare services, as represented by solutions based on AI algorithms. The assessment of the most appropriate AI-based technology to be adopted to ease diagnosis, treatment, or care activities is complex and characterized by uncertain and multiple feasible outcomes with different and contrasting scenarios. The latter deals with the ethical aspects involved in the adoption of AI tools and techniques, from machine-based medical decisions to personalized treatments, and from the sharing of personal health data to the acceptance of robotic medical personnel. Finally, a last aspect concerning the challenges of adopting AI in hospitals has to be mentioned, namely the appropriate involvement of the relevant stakeholders. Indeed, this last issue is of fundamental importance to ensure the real usage of AI-based solutions in daily hospital activities by doctors, the acceptance of renovated treatments and procedures by patients, as well as the commitment of local administrators to this modern form of health care assistance. Therefore, the process of stakeholder commitment is of paramount importance and should be adequately planned and implemented. Considering all the abovementioned challenges and potential obstacles, the following paragraphs propose possible solutions to overcome these difficulties, to ensure the adoption of AI solutions in European hospitals and to maximize the efficacy of the innovation provided. In particular, the proposed actions are grouped into three categories: human–computer interaction, medical data space, and guidebook and ethics. The linkage between these transversal activities and the application areas proposed in the manuscript is presented in Figure 2 .

Figure 2. Linkage between transversal activities and application areas for AI adoption in European hospitals.

Human–Computer-Interaction : Despite progress in the field of health care data analytics, resulting in more and more prototypes and technical advancement, actual adoption by key stakeholders such as doctors remains low [ 28 , 29 ]. This aspect will rise in relevance when the respective systems increase in intelligence and analytical capability. Accordingly, an increased focus on human–computer interaction spanning pre-design, design and post-design phases as well as catering to user, system, task and interaction characteristics [ 30 ] holds the potential to increase AI adoption and user satisfaction in clinical practice [ 31 ].

Medical Data Space : In addition, data connections in a Medical Data Space (MDS) with distributed AI applications will help to share resources and to support specially and severely affected regions and hospitals. Moreover, overall data transparency and analysis will help to fight virus outbreaks earlier through faster detection and containment options enabled by AI analysis. The Medical Data Space (MDS) is a specialization of the International Data Space (IDS), which provides a trustworthy, secure, and cross-domain data space allowing an economy of data to be built between companies of all domains and sizes. The IDS was the result of R&D activities in 2015 and is now actively promoted through the Industrial Data Space Association, in cooperation with the OPC Foundation, the FIWARE Foundation, the Industrial Value Chain Initiative, and the Platform Industry 4.0. The IDS, and thus the MDS, define an architecture of data providers and consumers, which are linked through connectors forming the data space. The architecture is defined in the IDS document describing the layers of the architecture model, which in turn describe the key components necessary to realize a data space [ 40 ]. The first prototype was presented in 2018 at the Hannover fair. The MDS concept targets the connectivity of local data spaces in hospitals for analytics and the application of AI-based algorithms for research or hospital-internal use. Therefore, special services are necessary not only to store and manage the transfer of medical data securely while maintaining the sovereignty of the data owner, but also to conform to requirements on anonymity and protection of personal medical data sets. Here, the element of value-added services for the data space becomes relevant, enabling pseudonymization and anonymization features in the process.

Medical data of patients are a highly sensitive and therefore regulated asset which requires handling in a secure and protected environment. The MDS builds upon the International Data Space to deliver a secured, controlled data storage and processing environment and to build an economy of data between providers and consumers while retaining sovereignty and control, extended to address the additional medical constraints. The key concept in the MDS is the trusted connector, which links both parties and enforces the security and privacy policies defined. In addition to access management, the MDS architecture introduces data-processing services (data-apps) which can preprocess data before or after transfer. As AI-driven smart hospitals rely fundamentally on data, the MDS will be used to connect the local data spaces of hospitals for analytics and for the application of AI-based algorithms for research or hospital-internal purposes. Value-added services (data-apps) are particularly relevant here, enabling pseudonymization and anonymization features in the process. In future work, we plan to demonstrate that medical data space technology can provide the foundation for the development and deployment of novel AI and data management data-apps, specifically in a pilot program for the analysis and management of in-hospital cardiac patient intervention treatment, with the goal of understanding and analyzing several key factors that impact the ability and capacity of a hospital to provide treatment. The location for this future installation will be the Evaggelismos Hospital in Athens.
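To make the idea of a value-added data-app concrete, the following is a minimal sketch, assuming a simple record layout, of pseudonymizing a patient record before it leaves a hospital's trusted connector; real MDS/IDS connectors add policy enforcement, usage contracts, and auditing, and the salt handling shown is purely illustrative.

```python
# Illustrative data-app sketch: drop direct identifiers and replace the patient ID
# with a salted hash before the record is shared through a connector.
import hashlib

DIRECT_IDENTIFIERS = {"name", "address", "phone"}

def pseudonymize(record: dict, salt: str) -> dict:
    """Return a copy of the record without direct identifiers and with a hashed patient ID."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    out["patient_id"] = hashlib.sha256((salt + str(record["patient_id"])).encode()).hexdigest()[:16]
    return out

record = {"patient_id": "P-000123", "name": "Jane Doe", "age": 71, "diagnosis": "I48.0"}
print(pseudonymize(record, salt="hospital-local-secret"))   # salt stays inside the hospital
```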

Guidebook and Ethics : There is clear evidence of the relevance of the organization and management of technology in health care, a concept further reinforced in the light of the recent COVID-19 pandemic [ 41 ]. Assessment, supply, prioritization, appropriate usage, and exploitation are indeed not trivial duties, and the final success of any health process is widely affected by technology management issues. In the modern re-setting of health-care delivery via technology innovation, data-driven management, health technology assessment, clinical practice guidelines, as well as medical leadership are the main topics that have to be addressed [ 42 ]. Knowledge management and technology innovation, with their continuously growing potential, can indeed transversally represent the answer to the demand for efficacy and efficiency of the system. Furthermore, great expectations are placed in information and communication technologies (ICT), with their contribution to the development of eHealth, and particularly in AI with its paramount applications in the various sectors of medical practice and public health. The change in clinical practice through and by means of technological innovation is today decisive in making health and care systems able to face the continuous economic, socio-demographic, and epidemiological pressures [ 17 ]. However, technological innovation, although important and central, must be carefully examined and accompanied to ensure that it really corresponds to effective social innovation [ 43 ]. Furthermore, as recently underlined by a joint report of EIT Health and McKinsey [ 44 ], AI has many potentialities for the improvement of care outcomes, patient experience, and access to healthcare services. AI is thought to increase productivity and the efficiency of care delivery and to allow healthcare systems to provide more and better care to more people. Finally, it can support the faster delivery of care, mainly by accelerating diagnosis, and help healthcare systems manage population health more proactively, dynamically allocating resources to where they can have the largest impact and are most needed. As addressed by MedTech Europe, developing AI systems and algorithms for healthcare settings requires specific skillsets which are in short supply, and investment in the education and training of the professionals involved (e.g., data scientists, practitioners, software engineers, clinical engineers) is mandatory [ 18 ].

Ethical issues are a major hurdle to full-scale AI application use, as many cases might bring about risks such as wrong diagnoses or deviant therapy, as well as dissent among personnel due to differing opinions regarding the correctness of AI analysis and advice. Therefore, not only HCI issues but also human–human interaction and collaboration issues and ethical questions have to be solved and communicated among people before AI can contribute to health care according to its full potential.

AI will play a significant role in future hospital health care systems. Applications such as ML will further advance the development of processes in several fields inside the hospital, of which we focus on medical diagnosis, logistics, and care in this article. Important obstacles remain, such as regulations, integration with the Electronic Health Record (EHR), standardization, medical device certificates, training of professionals, costs, and updates, but these are manageable. It is important to stress that AI applications will not replace human clinicians but will help them to concentrate on important human-related processes and to make correct diagnoses with less analysis and decision time. This hopefully provides them with the time and focus to support patients from a specifically human perspective. As a result of the developments in computational power and algorithmic advancements, combined with digitalization and improvements in data collection methods and storage technologies, the healthcare sector today is supported by AI, ML, and robotics as never before in the history of medicine. Besides monitoring large-scale medical trends, these new technologies also allow the measurement of individual risks based on predictions from big data analysis. AI has a key function in the healthcare management of the future. Research has already proven the game-changing potential of AI in various fields of healthcare, such as those outlined in the use cases in this article. AI-based methods have been successfully developed to address several healthcare logistics problems such as appointment planning, patient and resource scheduling, resource utilization, and predicting demand for emergency departments, intensive care units, or ambulances [ 45 ]. In addition, there already exist a number of research studies which suggest that AI can perform at least as well as humans at basic healthcare functions, such as diagnosis. Today, malignant tumors are spotted more successfully by algorithms than by humans [ 46 ]. As a consequence of rapid technological advancements, combined with ML's enhanced ability to transform data into insight, many of the medical tasks previously limited to humans are expected to be taken on by algorithms [ 47 ]. However, there are several reasons why it will take a long time before AI might take over comprehensive fields of activity from humans in hospitals and healthcare: recent developments show that AI systems will not replace humans on a large scale, but rather will support them in their efforts of patient care. Progressing into the future, healthcare specialists can switch to tasks and job designs focusing on unique human skills such as empathy and care. One risk within this development might be the position of healthcare providers who are unable to or refuse to work in collaboration with AI applications, endangering their contributions and jobs. The most important obstacle regarding AI applications in healthcare is not the capabilities or benefits of the technologies themselves, but their applicability in medical practice. Widespread use of AI systems requires approval by regulating institutions, integration with existing systems, sufficient standardization with similar products, training of healthcare professionals, and solutions regarding issues of data privacy and security. These challenges will eventually be solved, but it will take significant time and resources [ 46 ]. The COVID-19 crisis has revealed the challenges for healthcare systems, also with respect to future pandemic situations.
This has increased attention to the potential of AI in healthcare as one means of pandemic management and prevention. Major challenges in responding to COVID-19, such as managing limited healthcare resources, developing personalized treatment plans, or predicting virus spread rates, can be addressed by recent developments in AI and ML. Wynants et al. [ 48 ] have already listed 31 prediction models in a review of early studies of COVID-19. The prospective post-COVID-19 era, in preparation for future pandemic events, will likely feature advanced healthcare solutions in combination with operations research modeling [ 49 ], and AI will be a crucial part of it, as outlined in this paper with 11 use case studies from European hospitals. The challenges connected to such AI applications, such as data management and HCI, have to be addressed soon in order to prepare hospitals for future challenges, e.g., pandemic situations [ 50 ]. This is a core challenge for health care management science, with implications for hospital practice, in order to apply the full potential of AI and ML to health care systems [ 51 ].

Author Contributions

Conceptualization, M.K., M.H. (Marcus Hintze), M.I.; methodology, M.H. (Marcus Hintze), M.I., F.R.-R., F.P., F.A.-M., D.Ç.; validation, F.A.-M., D.Ç., T.L., M.J., O.U., M.H. (Marcus Hintze), J.A.L., S.B., A.-P.R.; formal analysis, M.K., B.V., W.T., D.G.; investigation, M.I., F.R.-R., F.P., F.A.-M., D.Ç., T.L., M.J., O.U., B.V., D.G., R.D.-G.; writing—original draft preparation, M.I., F.R.-R., F.P., D.Ç., T.L., M.J., O.U., M.H. (Marcus Hintze & Marja Hedman), J.A.L., S.B., B.V., W.T., R.D.-G.; writing—review and editing, M.K., M.H. (Marcus Hintze), D.G.; visualization, M.H. (Marcus Hintze), F.R.-R., T.L., M.J., A.-P.R.; supervision, M.K., M.I., F.A.-M. All authors have read and agreed to the published version of the manuscript.

This research received no external funding.

Institutional Review Board Statement

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.



Artificial Intelligence: examples of ethical dilemmas

Type “greatest leaders of all time” in your favourite search engine and you will probably see a list of the world’s prominent male personalities. How many women do you count? 

An image search for “school girl” will most probably reveal a page filled with women and girls in all sorts of sexualised costumes. Surprisingly, if you type “school boy”, the results will mostly show ordinary young schoolboys; few, if any, men appear in sexualised costumes.

These are examples of gender bias in artificial intelligence, originating from stereotypical representations deeply rooted in our societies.

AI systems deliver biased results. Search-engine technology is not neutral, as it processes big data and prioritises results with the most clicks, relying on both user preferences and location. Thus, a search engine can become an echo chamber that upholds biases of the real world and further entrenches these prejudices and stereotypes online.

How can we ensure more equalised and accurate results? Can we report biased search results? What would or should be the accurate representation of women in search results?

Gender bias should be avoided or at the least minimized in the development of algorithms, in the large data sets used for their learning, and in AI use for decision-making.

To avoid replicating stereotypical representations of women in the digital realm, UNESCO addresses gender bias in AI in the UNESCO Recommendation on the Ethics of Artificial Intelligence, the very first global standard-setting instrument on the subject.

Artificial Intelligence: example of biased AI

AI in the Court of Law

The use of AI in judicial systems around the world is increasing, creating more ethical questions to explore. AI could presumably evaluate cases and apply justice in a better, faster, and more efficient way than a judge. 

AI methods can potentially have a huge impact in a wide range of areas, from the legal professions and the judiciary to aiding the decision-making of legislative and administrative public bodies. For example, they can increase the efficiency and accuracy of lawyers in both counselling and litigation, with benefits to lawyers, their clients and society as a whole. Existing software systems for judges can be complemented and enhanced through AI tools to support them in drafting new decisions. This trend towards the ever-increasing use of autonomous systems has been described as the automatization of justice.

Some argue that AI could help create a fairer criminal judicial system, in which machines could evaluate and weigh relevant factors better than humans, taking advantage of their speed and capacity for large-scale data ingestion. AI would therefore make informed decisions devoid of any bias and subjectivity.

But there are many ethical challenges:

  • Lack of transparency of AI tools: AI decisions are not always intelligible to humans.
  • AI is not neutral: AI-based decisions are susceptible to inaccuracies, discriminatory outcomes, embedded or inserted bias.
  • Surveillance practices for data gathering and privacy of court users.
  • New concerns for fairness, and risks to human rights and other fundamental values.

So, would you want to be judged by a robot in a court of law? Would you, even if we are not sure how it reaches its conclusions?

This is why UNESCO adopted the UNESCO Recommendation on the Ethics of Artificial Intelligence, the very first global standard-setting instrument on the subject.

Artificial Intelligence in the court of law

AI creates art

The use of AI in culture raises interesting ethical reflections.

In 2016, a new “Rembrandt” painting, “The Next Rembrandt”, was designed by a computer and created by a 3D printer, 351 years after the painter’s death.

To achieve such technological and artistic prowess, 346 Rembrandt paintings were analysed pixel by pixel and upscaled by deep learning algorithms to create a unique database. Every detail of Rembrandt’s artistic identity could then be captured and set the foundation for an algorithm capable of creating an unprecedented masterpiece. To bring the painting to life, a 3D printer recreated the texture of brushstrokes and layers of paint on the canvas for a breath-taking result that could trick any art expert.

But who can be designated as the author? The company which orchestrated the project, the engineers, the algorithm, or… Rembrandt himself?

In 2019, the Chinese technology company Huawei announced that an AI algorithm had completed the last two movements of Symphony No. 8, the unfinished composition that Franz Schubert started in 1822, 197 years earlier. So what happens when AI has the capacity to create works of art itself? If a human author is replaced by machines and algorithms, to what extent can copyright be attributed at all? Can and should an algorithm be recognized as an author, and enjoy the same rights as an artist?

Works of art produced by AI call for a new definition of what it means to be an “author”, in order to do justice to the creative work of both the “original” author and the algorithms and technologies that produced the work of art itself.

Creativity, understood as the capacity to produce new and original content through imagination or invention, plays a central role in open, inclusive and pluralistic societies. For this reason, the impact of AI on human creativity deserves careful attention. While AI is a powerful tool for creation, it raises important questions about the future of art, the rights and remuneration of artists and the integrity of the creative value chain. 

We need to develop new frameworks to differentiate piracy and plagiarism from originality and creativity, and to recognize the value of human creative work in our interactions with AI. These frameworks are needed to avoid the deliberate exploitation of the work and creativity of human beings, and to ensure adequate remuneration and recognition for artists, the integrity of the cultural value chain, and the cultural sector’s ability to provide decent jobs.

Artificial Intelligence creates art

Autonomous car

An autonomous car is a vehicle that is capable of sensing its environment and moving with little or no human involvement. For the vehicle to move safely and to understand its driving environment, an enormous amount of data needs to be captured by a myriad of sensors across the car at all times. These data are then processed by the vehicle’s autonomous driving computer system.

The autonomous car must also undertake a considerable amount of training in order to understand the data it is collecting and to be able to make the right decision in any imaginable traffic situation.

Moral decisions are made by everyone daily. When a driver chooses to slam on the brakes to avoid hitting a jaywalker, they are making the moral decision to shift risk from the pedestrian to the people in the car.

Imagine an autonomous car with broken brakes going at full speed towards a grandmother and a child. By swerving slightly, one of them can be saved.

This time, it is not a human driver who is going to make the decision, but the car’s algorithm.

Who would you choose, the grandmother or the child? Do you think there is only one right answer? 

This is a typical ethical dilemma that shows the importance of ethics in the development of technologies.
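
The point can be made concrete with a minimal sketch of the decision logic such a vehicle would need. Everything in it is hypothetical (the manoeuvres, harm estimates, and weights are invented): whichever numbers the developers pick, choosing them is the moral decision that the scenario above puts to you.

```python
# Hypothetical sketch: an emergency-manoeuvre planner reduced to its core.
# Every concrete implementation must assign numeric "harm" estimates to
# possible outcomes; picking those numbers IS the ethical choice.

# Invented manoeuvres with invented estimated harms (0 = none, 1 = fatal).
options = {
    "stay_on_course": {"pedestrian_1": 1.0, "pedestrian_2": 0.0, "passengers": 0.1},
    "swerve_left":    {"pedestrian_1": 0.0, "pedestrian_2": 1.0, "passengers": 0.1},
    "swerve_right":   {"pedestrian_1": 0.0, "pedestrian_2": 0.0, "passengers": 0.8},
}

# The weights encode whose harm counts for how much. There is no
# value-neutral setting: equal weights, age-based weights, or
# passenger-first weights are all moral positions, not technical ones.
weights = {"pedestrian_1": 1.0, "pedestrian_2": 1.0, "passengers": 1.0}

def expected_harm(outcome):
    """Weighted sum of estimated harms for one manoeuvre."""
    return sum(weights[k] * v for k, v in outcome.items())

best = min(options, key=lambda name: expected_harm(options[name]))
print(best, {name: round(expected_harm(o), 2) for name, o in options.items()})
```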

Artificial Intelligence: example of the autonomous car


Marketing Artificial Intelligence Institute

4 Incredible AI Case Studies in Content Marketing

By Ashley Sams on March 10, 2022


Artificial intelligence (AI) is giving businesses the ability to create and promote content at scale.

Which means every business that does content marketing needs to pay attention...

Because if your competitors start adopting AI for content marketing before you, you're toast.

That's because there's more than one AI case study where companies are using AI technology and machine learning to make their content marketing campaigns insanely successful.

Here are four AI case studies to keep an eye on.

1. Vanguard Increases Conversion Rates by 15% with AI

Vanguard is one of the world's biggest investment firms, with $7 trillion under management.

The company needed to promote its Vanguard Institutional business, but it had a problem:

The company does business in an industry that highly regulates what you can say in advertising. As a result, it was hard to stand out in the financial services ad landscape, since everyone used the same type of language.

That's when Vanguard turned to AI language platform Persado. Using AI from Persado, Vanguard was able to personalize its ads based on the specific messaging that resonated most with consumers.

See the Case Study

2. Tomorrow Sleep Boosts Web Traffic 10,000%

Sleep system startup Tomorrow Sleep started creating content shortly after its launch with the hope of attracting droves of web visitors.

After several months of pushing out top-quality content and manually tracking and analyzing keyword analytics, they were averaging around 4,000 users to their site every month.

Not bad, but not great. If they wanted to compete with long-standing players in the crowded sleep market, something had to change.

Tomorrow Sleep needed a way to plan and produce content at scale that would reach their target audience.

Enter artificial intelligence.

Tomorrow Sleep began using MarketMuse, an AI-powered content intelligence and strategy platform.

It used the platform's AI research application to understand which high-value topics the company needed to be talking about. Next, it used one of the tool's advanced analytics applications to see where competitors ranked for each of these topics.

This intel illuminated the gaps and opportunities in the current content plan, leading Tomorrow Sleep to create content around key topics where it could quickly establish itself as an expert.

The result?

  • 400,000 monthly visits to its website (a 10,000% increase).
  • Rankings in multiple positions within a single search result.
  • Enough domain authority to secure Google's featured snippet for specific queries.

MarketMuse is an AI-driven assistant for building content strategies. It shows you exactly which terms you need to target to compete in certain topic categories, and it surfaces related topics you may need to cover if you want to own a given subject area.
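
Here's a rough idea of what that kind of gap analysis looks like under the hood. To be clear, this is not MarketMuse's actual algorithm or data; it's a hypothetical sketch with made-up topics, coverage scores, and search volumes that shows how "high demand + weak coverage + strong competitor rankings" gets turned into a prioritized list.

```python
# Hypothetical content-gap analysis: find high-value topics where
# competitors rank but our own coverage is weak. Topic names, search
# volumes, and coverage scores below are invented for illustration.

our_coverage = {          # 0-100 score of how well we cover each topic
    "mattress firmness": 80,
    "sleep tracking": 25,
    "pillow materials": 10,
}
competitor_rank = {       # best competitor position for each topic (1 = top)
    "mattress firmness": 3,
    "sleep tracking": 1,
    "pillow materials": 2,
    "cooling sheets": 1,
}
search_volume = {         # estimated monthly searches per topic
    "mattress firmness": 5000,
    "sleep tracking": 12000,
    "pillow materials": 3000,
    "cooling sheets": 8000,
}

def opportunity(topic):
    """Higher when demand is high, competitors rank well, and we cover it poorly."""
    demand = search_volume.get(topic, 0)
    gap = 100 - our_coverage.get(topic, 0)            # uncovered share
    competitor_strength = 1 / competitor_rank.get(topic, 100)
    return demand * (gap / 100) * competitor_strength

for topic in sorted(search_volume, key=opportunity, reverse=True):
    print(f"{topic:20s} opportunity score: {opportunity(topic):8.0f}")
```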

See the Case Study

3. The American Marketing Association Automatically Writes and Hyper-Personalizes Its Newsletter

The American Marketing Association (AMA) strives to be the most relevant voice shaping marketing around the world.

Its website is a marketplace of industry knowledge and resources on branding, careers, customer experience, digital marketing, ethics, and more.

One unique aspect of its community is the vast number of industries it represents. Because every business has marketing needs, its members hail from industries across the globe such as education, finance, healthcare, insurance, manufacturing, real estate, and more.

It shares its wealth of knowledge with over 100,000 subscribers in its email newsletter.

However, to serve its subscribers only the most relevant content, it brought in rasa.io.

This AI system uses natural language processing and machine learning to generate personalized Smart Newsletters and provide newsletter automation. By doing so, it dramatically increases reader engagement and provides rich insights back to the brand, while saving organizations time.

To personalize each newsletter to a subscriber, the solution uses AI to both curate and filter content from sources chosen by the AMA. This includes the selection of each individual piece of content, the placement of articles, and the subject line chosen for each reader.

The result? A newsletter that provides a perfectly personalized experience to each and every reader.

Plus, the platform is able to infuse the newsletters with AMA's internally produced content and feature it at the top of the newsletter, maximizing visibility.
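
For a sense of what that per-subscriber assembly involves, here's a heavily simplified sketch. It is not rasa.io's actual system; the articles, interest tags, and scoring are invented. It just shows the two moves described above: score external content against each reader's interests, and pin in-house content to the top.

```python
# Hypothetical per-subscriber newsletter assembly: score each candidate
# article against a subscriber's interest profile, pin in-house content
# to the top, and fill the rest with the highest-scoring items.
# All data below is invented for illustration.

articles = [
    {"title": "CX trends for 2022", "tags": {"customer experience"}, "in_house": True},
    {"title": "Healthcare branding 101", "tags": {"branding", "healthcare"}, "in_house": False},
    {"title": "Ethics in ad targeting", "tags": {"ethics", "digital marketing"}, "in_house": False},
    {"title": "Real estate email tips", "tags": {"real estate", "digital marketing"}, "in_house": False},
]

subscriber = {"name": "Ada", "interests": {"healthcare", "branding", "ethics"}}

def score(article, interests):
    """Simple relevance score: number of overlapping interest tags."""
    return len(article["tags"] & interests)

def build_newsletter(subscriber, articles, slots=3):
    pinned = [a for a in articles if a["in_house"]]          # featured at the top
    rest = sorted(
        (a for a in articles if not a["in_house"]),
        key=lambda a: score(a, subscriber["interests"]),
        reverse=True,
    )
    return (pinned + rest)[:slots]

for item in build_newsletter(subscriber, articles):
    print(item["title"])
```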

See the Case Study

4. Adobe Generates $10M+ in Revenue with an AI Chatbot + Content

Website content is a key way for consumers to learn about your products and solutions, and find answers to their top questions. And boy does software giant Adobe have a lot of website content.

However, with all the website content the company has, it's sometimes hard to keep consumers engaged and help them find exactly what they need at any given moment.

To solve this challenge, Adobe turned to conversational AI from Drift. Drift's chatbot uses AI to have natural language conversations with site visitors at every stage of their journey. The bot was able to direct visitors to what they needed when they needed it. It was also able to hand off conversations to humans when the time was right.

See the Case Study

Ashley Sams

Ashley Sams is director of marketing at Ready North. She joined the agency in 2017 with a background in marketing, specifically for higher education and social media. Ashley is a 2015 graduate of The University of Mount Union where she earned a degree in marketing.


Princeton Dialogues on AI and Ethics

Princeton University

Princeton Dialogues on AI and Ethics Case Studies

The development of artificial intelligence (AI) systems and their deployment in society gives rise to ethical dilemmas and hard questions. By situating ethical considerations in terms of real-world scenarios, case studies facilitate in-depth and multi-faceted explorations of complex philosophical questions about what is right, good and feasible. Case studies provide a useful jumping-off point for considering the various moral and practical trade-offs inherent in the study of practical ethics.

Case Study PDFs: The Princeton Dialogues on AI and Ethics has released six long-format case studies exploring issues at the intersection of AI, ethics and society. Three additional case studies are scheduled for release in spring 2019.

Methodology: The Princeton Dialogues on AI and Ethics case studies are unique in their adherence to five guiding principles: 1) empirical foundations, 2) broad accessibility, 3) interactiveness, 4) multiple viewpoints and 5) depth over brevity.

One Hundred Year Study on Artificial Intelligence (AI100)

Executive Summary


Artificial Intelligence (AI) is a science and a set of computational technologies that are inspired by—but typically operate quite differently from—the ways people use their nervous systems and bodies to sense, learn, reason, and take action. While the rate of progress in AI has been patchy and unpredictable, there have been significant advances since the field's inception sixty years ago. Once a mostly academic area of study, twenty-first century AI enables a constellation of mainstream technologies that are having a substantial impact on everyday lives. Computer vision and AI planning, for example, drive the video games that are now a bigger entertainment industry than Hollywood. Deep learning, a form of machine learning based on layered representations of variables referred to as neural networks, has made speech-understanding practical on our phones and in our kitchens, and its algorithms can be applied widely to an array of applications that rely on pattern recognition. Natural Language Processing (NLP) and knowledge representation and reasoning have enabled a machine to beat the Jeopardy champion and are bringing new power to Web searches.

While impressive, these technologies are highly tailored to particular tasks. Each application typically requires years of specialized research and careful, unique construction. In similarly targeted applications, substantial increases in the future uses of AI technologies, including more self-driving cars, healthcare diagnostics and targeted treatments, and physical assistance for elder care can be expected. AI and robotics will also be applied across the globe in industries struggling to attract younger workers, such as agriculture, food processing, fulfillment centers, and factories. They will facilitate delivery of online purchases through flying drones, self-driving trucks, or robots that can get up the stairs to the front door.

This report is the first in a series to be issued at regular intervals as a part of the One Hundred Year Study on Artificial Intelligence (AI100). Starting from a charge given by the AI100 Standing Committee to consider the likely influences of AI in a typical North American city by the year 2030, the 2015 Study Panel, comprising experts in AI and other relevant areas focused their attention on eight domains they considered most salient: transportation; service robots; healthcare; education; low-resource communities; public safety and security; employment and workplace; and entertainment. In each of these domains, the report both reflects on progress in the past fifteen years and anticipates developments in the coming fifteen years. Though drawing from a common source of research, each domain reflects different AI influences and challenges, such as the difficulty of creating safe and reliable hardware (transportation and service robots), the difficulty of smoothly interacting with human experts (healthcare and education), the challenge of gaining public trust (low-resource communities and public safety and security), the challenge of overcoming fears of marginalizing humans (employment and workplace), and the social and societal risk of diminishing interpersonal interactions (entertainment). The report begins with a reflection on what constitutes Artificial Intelligence, and concludes with recommendations concerning AI-related policy. These recommendations include accruing technical expertise about AI in government and devoting more resources—and removing impediments—to research on the fairness, security, privacy, and societal impacts of AI systems.

Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future. Instead, increasingly useful applications of AI, with potentially profound positive impacts on our society and economy are likely to emerge between now and 2030, the period this report considers. At the same time, many of these developments will spur disruptions in how human labor is augmented or replaced by AI, creating new challenges for the economy and society more broadly. Application design and policy decisions made in the near term are likely to have long-lasting influences on the nature and directions of such developments, making it important for AI researchers, developers, social scientists, and policymakers to balance the imperative to innovate with mechanisms to ensure that AI's economic and social benefits are broadly shared across society. If society approaches these technologies primarily with fear and suspicion, missteps that slow AI's development or drive it underground will result, impeding important work on ensuring the safety and reliability of AI technologies. On the other hand, if society approaches AI with a more open mind, the technologies emerging from the field could profoundly transform society for the better in the coming decades.

Study Panel: 

  • Peter Stone, Chair, University of Texas at Austin
  • Rodney Brooks, Rethink Robotics
  • Erik Brynjolfsson, Massachusetts Institute of Technology
  • Ryan Calo, University of Washington
  • Oren Etzioni, Allen Institute for AI
  • Greg Hager, Johns Hopkins University
  • Julia Hirschberg, Columbia University
  • Shivaram Kalyanakrishnan, Indian Institute of Technology Bombay
  • Ece Kamar, Microsoft Research
  • Sarit Kraus, Bar Ilan University
  • Kevin Leyton-Brown, University of British Columbia
  • David Parkes, Harvard University
  • William Press, University of Texas at Austin
  • AnnaLee (Anno) Saxenian, University of California, Berkeley
  • Julie Shah, Massachusetts Institute of Technology
  • Milind Tambe, University of Southern California
  • Astro Teller, X

Cite This Report

Peter Stone, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller.  "Artificial Intelligence and Life in 2030." One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel, Stanford University, Stanford, CA,  September 2016. Doc:  http://ai100.stanford.edu/2016-report . Accessed:  September 6, 2016.

Report Authors

AI100 Standing Committee and Study Panel 

© 2016 by Stanford University. Artificial Intelligence and Life in 2030 is made available under a Creative Commons Attribution-NoDerivatives 4.0 License (International):  https://creativecommons.org/licenses/by-nd/4.0/ .

Research and Practice of AI Ethics: A Case Study Approach Juxtaposing Academic Discourse with Organisational Reality

  • Original Research/Scholarship
  • Open access
  • Published: 08 March 2021
  • Volume 27, article number 16 (2021)


  • Mark Ryan   ORCID: orcid.org/0000-0003-4850-0111 1 ,
  • Josephina Antoniou 2 ,
  • Laurence Brooks 3 ,
  • Tilimbe Jiya 4 ,
  • Kevin Macnish 5 &
  • Bernd Stahl 3  


This study investigates the ethical use of Big Data and Artificial Intelligence (AI) technologies (BD + AI) using an empirical approach. The paper categorises the current literature and presents a multi-case study of 'on-the-ground' ethical issues, using qualitative tools to analyse findings from ten targeted case studies from a range of domains. The analysis coalesces the singular ethical issues identified in the literature into clusters to offer a comparison with the proposed classification in the literature. The results show that, despite the variety of different social domains, fields, and applications of AI, there is overlap and correlation between the organisations’ ethical concerns. This more detailed understanding of ethics in AI + BD is required to ensure that the multitude of suggested ways of addressing them can be targeted and succeed in mitigating the pertinent ethical issues that are often discussed in the literature.


Introduction

Big Data and Artificial Intelligence (BD + AI) are emerging technologies that offer great potential for business, healthcare, the public sector, and development agencies alike. The increasing impact of these two technologies and their combined potential in these sectors can be seen in diverse organisational aspects, such as the customisation of organisational processes and automated decision-making. The combination of Big Data and AI, often in the form of machine learning applications, can better exploit the granularity of data and analyse it to offer better insights into behaviours, incidents, and risk, eventually aiming at positive organisational transformation.

Big Data offers fresh and interesting insights into structural patterns, anomalies, and decision-making in a broad range of different applications (Cuquet & Fensel, 2018 ), while AI provides predictive foresight, intelligent recommendations, and sophisticated modelling. The integration and combination of AI + BD offer phenomenal potential for correlating, predicting and prescribing recommendations in insurance, human resources (HR), agriculture, and energy, as well as many other sectors. While BD + AI provides a wide range of benefits, they also pose risks to users, including but not limited to privacy infringements, threats of unemployment, discrimination, security concerns, and increasing inequalities (O’Neil, 2016 ). Footnote 1 Adequate and timely policy needs to be implemented to prevent many of these risks occurring.

One of the main limitations preventing key decision-making for ethical BD + AI use is that there are few rigorous empirical studies carried out on the ethical implications of these technologies across multiple application domains. This renders it difficult for policymakers and developers to identify when ethical issues resulting from BD + AI use are only relevant for isolated domains and applications, or whether there are repeated/universal concerns which can be seen across different sectors. While the field lacks literature evaluating ethical issues Footnote 2 ‘on the ground’, there are even fewer multi-case evaluations.

This paper provides a cohesive multi-case study analysis across ten different application domains, including domains such as government, agriculture, insurance, and the media. It reviews ethical concerns found within these case studies to establish cross-cutting thematic issues arising from the implementation and use of BD + AI. The paper collects relevant literature and proposes a simple classification of ethical issues (short term, medium term, long term), which is then juxtaposed with the ethical concerns highlighted from the multiple-case study analysis. This multiple-case study analysis of BD + AI offers an understanding of current organisational practices.

The work described in this paper makes an important contribution to the literature, based on its empirical findings. By presenting the ethical issues across an array of application areas, the paper provides much-needed rigorous empirical insight into the social and organisational reality of ethics of AI + BD. Our empirical research brings together a collection of domains that gives a broad oversight about issues that underpin the implementation of AI. Through its empirical insights the paper provides a basis for a broader discussion of how these issues can and should be addressed.

This paper is structured in six main sections: this introduction is followed by a literature review, which allows for an integrated review of ethical issues, contrasting them with those found in the cases. This provides the basis for a categorisation or classification of ethical issues in BD + AI. The third section contains a description of the interpretivist qualitative case study methodology used in this paper. The subsequent section provides an overview of the organisations participating in the cases to contrast similarities and divisions, while also comparing the diversity of their use of BD + AI. Footnote 3 The fifth section provides a detailed analysis of the ethical issues derived from using BD + AI, as identified in the cases. The concluding section analyses the differences between theoretical and empirical work and spells out implications and further work.

Literature Review

An initial challenge that any researcher faces when investigating ethical issues of AI + BD is that, due to the popularity of the topic, there is a vast and rapidly growing literature to be considered. Ethical issues of AI + BD are covered by a number of academic venues, including some specific ones such as the AAAI/ACM Conference on AI, Ethics, and Society ( https://dl.acm.org/doi/proceedings/10.1145/3306618 ), policy initiatives, and many publicly and privately financed research reports (Whittlestone, Nyrup, Alexandrova, Dihal, & Cave, 2019). Initial attempts to provide overviews of the area have been published (Jobin, 2019; Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016), but there is no settled view on what counts as an ethical issue and why. In this paper we aim to provide a broad overview of issues found through the case studies. This paper puts forward what are commonly perceived to be ethical issues within the literature, or concerns that have ethical impacts and repercussions. We explicitly do not apply a particular philosophical framework of ethics but accept as ethical issues those issues that we encounter in the literature. This review is based on the authors' understanding of the current state of the literature. It is not a structured review and does not claim comprehensive coverage, but it does share some interesting insights.

To be able to undertake the analysis of ethical issues in our case studies, we sought to categorise the ethical issues found in the literature. There are potentially numerous ways of doing so and our suggestion does not claim to be authoritative. Our suggestion is to order ethical issues in terms of their temporal horizon, i.e., the amount of time it is likely to take to be able to address them. Time is a continuous variable, but we suggest that it is possible to sort the issues into three clusters: short term, medium term, and long term (see Fig.  1 ).

Figure 1: Temporal horizon for addressing ethical issues

As suggested by Baum ( 2017 ), it is best to acknowledge that there will be ethical issues and related mitigating activities that cannot exclusively fit in as short, medium or long term.

Rather than seeing it as an authoritative classification, we see this as a heuristic that reflects aspects of the current discussion. One reason why this categorisation is useful is that the temporal horizon of ethical issues is a potentially useful variable, with companies often being accused of favouring short-term gains over long-term benefits. Similarly, short-term issues must be addressable at the local level for short-term fixes to work.

Short-term issues

These are issues for which there is a reasonable assumption that they are capable of being addressed in the short term. We do not wish to quantify what exactly counts as short term, as any definition put forward will be contentious when analysing the boundaries and transition periods. A better definition of short term might therefore be that such issues can be expected to be successfully addressed in technical systems that are currently in operation or development. Many of the issues we discuss under the heading of short-term issues are directly linked to some of the key technologies driving the current AI debate, notably machine learning and some of its enabling techniques and approaches such as neural networks and reinforcement learning.

Many of the advantages promised by BD + AI involve the use of personal data, data which can be used to identify individuals. This includes health data; customer data; ANPR data (Automated Number Plate Recognition); bank data; and even includes data about farmers’ land, livestock, and harvests. Issues surrounding privacy and control of data are widely discussed and recognized as major ethical concerns that need to be addressed (Boyd & Crawford, 2012 ; Tene & Polonetsky, 2012 , 2013 ; Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016 ; Jain, Gyanchandani, & Khare, 2016 ; Mai, 2016 ; Macnish, 2018 ). The concern surrounding privacy can be put down to a combination of a general level of awareness of privacy issues and the recently-introduced General Data Protection Regulation (GDPR). Closely aligned with privacy issues are those relating to transparency of processes dealing with data, which can often be classified as internal, external, and deliberate opaqueness (Burrell, 2016 ; Lepri, Staiano, Sangokoya, Letouzé, & Oliver, 2017 ; Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016 ).

The Guidelines for Trustworthy AI Footnote 4 were released in 2018 by the High-Level Expert Group on Artificial Intelligence (AI HLEG Footnote 5 ), and address the need for technical robustness and safety, including accuracy, reproducibility, and reliability. Reliability is further linked to the requirements of diversity, fairness, and social impact because it addresses freedom from bias from a technical point of view. The concept of reliability, when it comes to BD + AI, refers to the capability to verify the stability or consistency of a set of results (Bush, 2012 ; Ferraggine, Doorn, & Rivera, 2009 ; Meeker and Hong, 2014 ).

If a technology is unreliable, error-prone, and unfit for purpose, adverse ethical issues may result from decisions made by the technology. The accuracy of recommendations made by BD + AI is a direct consequence of the degree of reliability of the technology (Barolli, Takizawa, Xhafa, & Enokido, 2019). Bias and discrimination in algorithms may be introduced consciously or unconsciously by those employing the BD + AI, or because the algorithms reflect pre-existing biases (Barocas & Selbst, 2016). Documented examples of bias often reflect “an imbalance in socio-economic or other ‘class’ categories—ie, a certain group or groups are not sampled as much as others or at all” (Panch et al., 2019). Such biased systems have the potential to affect levels of inequality and discrimination, and if biases are not corrected these systems can reproduce existing patterns of discrimination and inherit the prejudices of prior decision makers (Barocas & Selbst, 2016, p. 674). An example of inherited prejudice is documented in the United States, where African-American citizens have often been given longer prison sentences than Caucasians for the same crime.

Medium-term issues

Medium-term issues are not clearly linked to a particular technology but typically arise from the integration of AI techniques including machine learning into larger socio-technical systems and contexts. They are thus related to the way life in modern societies is affected by new technologies. These can be based on the specific issues listed above but have their main impact on the societal level. The use of BD + AI may allow individuals’ behaviour to be put under scrutiny and surveillance , leading to infringements on privacy, freedom, autonomy, and self-determination (Wolf, 2015 ). There is also the possibility that the increased use of algorithmic methods for societal decision-making may create a type of technocratic governance (Couldry & Powell, 2014 ; Janssen & Kuk, 2016 ), which could infringe on people’s decision-making processes (Kuriakose & Iyer, 2018 ). For example, because of the high levels of public data retrieval, BD + AI may harm people’s freedom of expression, association, and movement, through fear of surveillance and chilling effects (Latonero, 2018 ).

Corporations have a responsibility to the end-user to ensure compliance, accountability, and transparency of their BD + AI (Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016 ). However, when the source of a problem is difficult to trace, owing to issues of opacity, it becomes challenging to identify who is responsible for the decisions made by the BD + AI. It is worth noting that a large-scale survey in Australia in 2020 indicated that 57.9% of end-users are not at all confident that most companies take adequate steps to protect user data. The significance of understanding and employing responsibility is an issue targeted in many studies (Chatfield et al., 2017 ; Fothergill et al., 2019 ; Jirotka et al., 2017 ; Pellé & Reber, 2015 ). Trust and control over BD + AI as an issue is reiterated by a recent ICO report demonstrating that most UK citizens do not trust organisations with their data (ICO, 2017 ).

Justice is a central concern in BD + AI (Johnson, 2014 , 2018 ). As a starting point, justice consists in giving each person his or her due or treating people equitably (De George, p. 101). A key concern is that benefits will be reaped by powerful individuals and organisations, while the burden falls predominantly on poorer members of society (Taylor, 2017 ). BD + AI can also reflect human intentionality, deploying patterns of power and authority (Portmess & Tower, 2015 , p. 1). The knowledge offered by BD + AI is often in the hands of a few powerful corporations (Wheeler, 2016 ). Power imbalances are heightened because companies and governments can deploy BD + AI for surveillance, privacy invasions and manipulation, through personalised marketing efforts and social control strategies (Lepri, Staiano, Sangokoya, Letouzé, & Oliver, 2017 , p. 11). They play a role in the ascent of datafication, especially when specific groups (such as corporate, academic, and state institutions) have greater unrestrained access to big datasets (van Dijck, 2014 , p. 203).

Discrimination , in BD + AI use, can occur when individuals are profiled based on their online choices and behaviour, but also their gender, ethnicity and belonging to specific groups (Calders, Kamiran, & Pechenizkiy, 2009 ; Cohen et al., 2014 ; and Danna & Gandy, 2002 ). Data-driven algorithmic decision-making may lead to discrimination that is then adopted by decision-makers and those in power (Lepri, Staiano, Sangokoya, Letouzé, & Oliver, 2017 , p. 4). Biases and discrimination can contribute to inequality . Some groups that are already disadvantaged may face worse inequalities, especially if those belonging to historically marginalised groups have less access and representation (Barocas & Selbst, 2016 , p. 685; Schradie, 2017 ). Inequality-enhancing biases can be reproduced in BD + AI, such as the use of predictive policing to target neighbourhoods of largely ethnic minorities or historically marginalised groups (O’Neil, 2016 ).

BD + AI offers great potential for increasing profit, reducing physical burdens on staff, and employing innovative sustainability practices (Badri, Boudreau-Trudel, & Souissi, 2018 ). They offer the potential to bring about improvements in innovation, science, and knowledge; allowing organisations to progress, expand, and economically benefit from their development and application (Crawford et al., 2014 ). BD + AI are being heralded as monumental for the economic growth and development of a wide diversity of industries around the world (Einav & Levin, 2014 ). The economic benefits accrued from BD + AI may be the strongest driver for their use, but BD + AI also holds the potential to cause economic harm to citizens and businesses or create other adverse ethical issues (Newman, 2013 ).

However, some in the literature view the co-development of employment and automation as a somewhat naïve outlook (Zuboff, 2015). BD + AI companies may benefit from a ‘post-labour’ automation economy, which may have a negative impact on the labour market (Bossman, 2016), replacing up to 47% of all US jobs within the next 20 years (Frey & Osborne, 2017). The professions most at risk of automation correlate with three of our case studies: farming, administration support and the insurance sector (Frey & Osborne, 2017).

Long-term issues

Long-term issues are those pertaining to fundamental aspects of the nature of reality, society, or humanity, for example the possibility that AI will develop capabilities far exceeding those of human beings (Kurzweil, 2006). At this point, sometimes called the ‘singularity’, machines achieve human intelligence and are expected to be able to improve on themselves, thereby surpassing human intelligence and becoming superintelligent (Bostrom, 2016). If this were to happen, it might have dystopian consequences for humanity, as often depicted in science fiction. It also stands to reason that superintelligent, or even just normally intelligent, machines may acquire a moral status.

It should be clear that these expectations are not universally shared. They refer to what is often called ‘ artificial general intelligence’ (AGI), a set of technologies that emulate human reasoning capacities more broadly. Footnote 6

Furthermore, humans may acquire new capabilities, e.g. by using technical implants to enhance human nature. The resulting being might be called a transhuman, the next step of human evolution or development. Again, it is important to underline that this is a contested idea (Livingstone, 2015) but one that has increasing traction in public discourse and popular science accounts (Harari, 2017).

We chose this distinction of three groups of issues for understanding how mitigation strategies within organisations can be contextualised. We concede that this is one reading of the literature and that many others are possible. In this account of the literature we tried to make sense of the current discourse to allow us to understand our empirical findings which are introduced in the following sections.

Case Study Methodology

Despite the impressive amount of research undertaken on ethical issues of AI + BD (e.g. Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016; Zwitter, 2014), there are few case studies exploring such issues. This paper builds upon this research and employs an interpretivist methodology to do so, focusing on how, what, and why questions relevant to the ethical use of BD + AI (Walsham, 1995a, b). The primary research question for the case studies was: how do organisations perceive ethical concerns related to BD + AI, and in what ways do they deal with them?

We sought to elicit insights from interviews, rather than attempting to reach an objective truth about the ethical impacts of BD + AI. The interpretivist case study approach (Stake 2003) allowed the researchers ‘to understand ‘reality’ as the blending of the various (and sometimes conflicting) perspectives which coexist in social contexts, the common threads that connect the different perspectives and the value systems that give rise to the seeming contradictions and disagreements around the topics discussed. Whether one sees this reality as static (social constructivism) or dynamic (social constructionism) was also a point of consideration, as they both belong in the same “family” approach where methodological flexibility is as important a value as rigour’ (XXX).

Through extensive brainstorming within the research team, and evaluations of relevant literature, 16 social application domains were established as topics for case study analysis. Footnote 7 The project focused on ten out of these application domains in accordance with the partners’ competencies. The case studies have covered ten domains, and each had their own unique focus, specifications, and niches, which added to the richness of the evaluations (Table 1 ).

The qualitative analysis approach adopted in this study focused on these ten standalone operational case studies that were directly related to the application domains presented in Table 1 . These individual case studies provide valuable insights (Yin, 2014 , 2015 ); however, a multiple-case study approach offers a more comprehensive analysis of ethical issues related to BD + AI use (Herriott & Firestone, 1983 ). Thus, this paper adopts a multiple-case study methodology to identify what insights can be obtained from the ten cases, identifies whether any generalisable understandings can be retrieved, and evaluates how different organisations deal with issues pertaining to BD + AI development and use. The paper does not attempt to derive universal findings from this analysis, in line with the principles of interpretive research, but further attempts to gain an in-depth understanding of the implications of selected BD + AI applications.

The data collection was guided by specific research questions identified through each case, including five desk research questions (see appendix 1); 24 interview questions (see appendix 2); and a checklist of 17 potential ethical issues, developed by the project leader Footnote 8 (see appendix 3). A thematic analysis framework was used to ‘highlight, expose, explore, and record patterns within the collected data. The themes were patterns across data sets that were important to describe several ethical issues which arise through the use of BD  +  AI across different types of organisations and application domains’ (XXX).

A workshop was then held after the interviews were carried out. The workshop brought together the experts in the case study team to discuss their findings. This culminated in 26 ethical issues Footnote 9 that were inductively derived from the data collected throughout the interviews (see Fig.  2 and Table 3). Footnote 10 In order to ensure consistency and rigour in the multiple-case study approach, researchers followed a standardised case study protocol (Yin, 2014 ). Footnote 11

Figure 2: The Prevalence of Ethical Issues in the Case Studies

Thirteen different organisations were interviewed for the 10 case studies, consisting of 22 interviews in total. Footnote 12 These ranged from 30 minutes to one and a half hours and were conducted in person or via Skype. The participants selected for interviews represented a very broad range of application domains and organisations that use BD + AI. The case study organisations were selected according to their relevance to the overall case study domains, considering their fit with the domains and the likelihood of providing interesting insights. The interviewees were then selected according to their ability to explain their BD + AI and its role in their organisation. In addition to interviews, a document review provided supporting information about each organisation: websites and published material were used to provide background to the research.

Findings: Ten Case Studies

This section gives a brief overview of the cases, before analysing their similarities and differences. It also highlights the different types of BD + AI being used, and the types of data used by the BD + AI in the case study organisations, before conducting an ethical analysis of the cases. Table 2 presents an overview of the 10 cases to show the roles of the interviewees, the focus of the technologies being used, and the data retrieved by each organisation’s BD + AI. All interviews were conducted in English.

The types of organisations that were used in the case studies varied extensively. They included start-ups (CS10), niche software companies (CS1), national health insurers (Organisation X in CS6), national energy providers (CS7), a chemical/agricultural multinational (CS3), and national (CS9) and international (CS8) telecommunications providers. The case studies also included public (CS2, Organisations 1 and 4 in CS4) and semi-public (Organisation 2 in CS4) organisations, as well as a large scientific research project (CS5).

The types of individuals interviewed also varied extensively. For example, CS6 and CS7 did not have anyone with a specific technical background, which limited the possibility of analysing issues related to the technology itself. Some case studies only had technology experts (such as CS1, CS8, and CS9), who mostly concentrated on technical issues, with much less of a focus on ethical concerns. Other case studies had a combination of both technical and policy-focused experts (i.e. CS3, CS4, and CS5). Footnote 13

Therefore, it must be made clear that we are not proposing that all of the interviewees were authorities in the field, or that even collectively they represent a unified authority on the matter; instead, we hope to show what those currently working with AI on the ground perceive as ethical concerns. While the paper presents the ethical concerns found within an array of domains, we do not claim that any individual case study is representative of its entire industry; rather, our intent was to capture a wide diversity of viewpoints, domains, and applications of AI, encompassing a broad amalgamation of concerns. We should also state that this is not a shortcoming of the study but the normal approach that social science often takes.

The diversity of organisations and their application focus areas also varied. Some organisations focused more on the Big Data component of their AI, while others focused more strictly on the AI programming and analytics. Even when organisations concentrated on a specific type of BD + AI, such as Big Data, its use varied immensely, including retrieval (CS1), analysis (CS2), predictive analytics (CS10), and transactional value (Organisation 2 in CS4). Some domains adopted BD + AI earlier and more emphatically than others (such as communications, healthcare, and insurance). Also, the size, investment, and type of organisation played a part in the level of BD + AI innovation (for example, the two large multinationals in CS3 and CS8 had well-developed BD + AI).

The maturity level of BD + AI was also determined by how it was integrated, and its importance, within an organisation. For instance, in organisations where BD + AI were fundamental for the success of the business (e.g. CS1 and CS10), they played a much more important role than in companies where there was less of a reliance (e.g. CS7). In some organisations, even when BD + AI was not central to success, the level of development was still quite advanced because of economic investment capabilities (e.g. CS3 and CS8).

These differences provided important questions to ask throughout this multi-case study analysis, such as: Do certain organisations respond to ethical issues relating to BD + AI in a certain way? Does the type of interviewee affect the ethical issues discussed—e.g. case studies without technical experts, those that only had technical experts, and those that had both? Does the type of BD + AI used impact the types of ethical issues discussed? What significance does the type of data retrieved have on ethical issues identified by the organisations? These inductive ethical questions provided a template for the qualitative analysis in the following section.

Ethical Issues in the Case Studies

Based on the interview data, the ethical issues identified in the case studies were grouped into six thematic sections to provide a more concise and pragmatic analysis. Those six sections are: control of data, reliability of data, justice, economic issues, the role of organisations, and human freedoms. Of the 26 ethical issues, privacy was the only one addressed in all 10 case studies, which is not surprising given the attention it has received recently because of the GDPR. Security, transparency, and algorithmic bias are also regularly discussed in the literature, so we expected them to be significant issues across many of the cases. However, several issues that receive less attention in the literature, such as access to BD + AI, trust, and power asymmetries, were discussed frequently in the interviews. In contrast, some ethical issues that are heavily discussed in the literature received far less attention in the interviews, such as employment, autonomy, and criminal or malicious use of BD + AI (Fig. 2).

The ethical analysis was conducted using a combination of literature reviews and interviews carried out with stakeholders. The purpose of the interviews was to ensure that there were no obvious ethical issues faced by stakeholders in their day-to-day activities which had been missed in the academic literature. As such, the starting point was not an overarching normative theory, which might have meant that we looked for issues which fit well with the theory but ignored anything that fell outside of that theory. Instead the combined approach led to the identification of the 26 ethical issues, each labelled based on particular words or phrases used in the literature or by the interviewees. For example, the term "privacy" was used frequently and so became the label for references to and instances of privacy-relevant concerns. In this section we have clustered issues together based on similar problems faced (e.g. accuracy of data and accuracy of algorithms within the category of ‘reliability of data’).

In an attempt to highlight similar ethical issues and improve the overall analysis to better capture similar perspectives, the research team decided to use the method of clustering, a technique often used in data mining to efficiently group similar elements together. Through discussion in the research team, and bearing in mind that the purpose of the clustering process was to form clusters that would enhance understanding of the impact of these ethical issues, we arrived at the following six clusters: the control of data (covering privacy, security, and informed consent); the reliability of data (accuracy of data and accuracy of algorithms); justice (power asymmetries, justice, discrimination, and bias); economic issues (economic concerns, sustainability, and employment); the role of organisations (trust and responsibility); and human freedoms (autonomy, freedom, and human rights). Both the titles and the precise composition of each cluster of issues are the outcome of a reasoned agreement of the research team. However, it should be clear that we could have used different titles and different clustering. The point is not that each cluster forms a distinct group of ethical issues, independent from any other. Rather the ethical issues faced overlap and play into one another, but to present them in a manageable format we have opted to use this bottom-up clustering approach.

Human Freedoms

An interviewee from CS10 stated that they were concerned about human rights because these were an integral part of the company’s ethics framework. This was beneficial to their business because they were required to incorporate human rights to receive public funding from the Austrian government. The company ensured that they would not grant ‘full exclusivity on generated social unrest event data to any single party, unless the data is used to minimise the risk of suppression of unrest events, or to protect the violation of human rights’ (XXX). The company demonstrates that while BD + AI has been criticised in the literature for infringing upon human rights, it also offers the opportunity to identify and prevent human rights abuses. The company’s moral framework clearly stemmed from regulatory and funding requirements, which speaks to the potential effectiveness of top-down ethical approaches; this is a divisive topic in the literature, with diverging views about whether top-down or bottom-up approaches are better options for improving AI ethics.

Trust & Responsibility

Responsibility was a concern in 5 of the case studies, confirming the importance it is given in the literature (see Sect.  3 ). Trust appeared in seven of the case studies. The cases focused on concerns found in the literature, such as BD + AI use in policy development, public distrust about automated decision-making and the integrity of corporations utilising datafication methods (van Dijck 2014 ).

Trust and control over BD + AI were an issue throughout the case studies. The organisation from the predictive intelligence case study (CS10) identified that their use of social media data raised trust issues. They converged with perspectives found in the literature that when people feel disempowered to use or be part of the BD + AI development process, they tend to lose trust in the BD + AI (Accenture, 2016 , 2017 ). In CS6, stakeholders (health insurers) trusted the decisions made by BD + AI when they were engaged and empowered to give feedback on how their data was used. Trust is enhanced when users can refuse the use of their data (CS7), which correlates with the literature. Companies discussed the benefits of establishing trustworthy relationships. For example, in CS9, they have “ been trying really hard to avoid the existence of fake [mobile phone] base stations, because [these raise] an issue with the trust that people put in their networks” (XXX).

Corporations need to determine the objective of the data analysis (CS3), what data is required for the BD + AI to work (CS2), and accountability for when it does not work as intended or causes undesirable outcomes (CS4). The issue here is whether the organisation takes direct responsibility for these outcomes, or, if informed consent has been given, can responsibility be shared with the granter of consent (CS3). The cases also raised the question of ‘responsible to whom’, the person whose data is being used or the proxy organisation who has provided data (CS6). For example, in the insurance case study, the company stated that they only had a responsibility towards the proxy organisation and not the sources of the data. All these issues are covered extensively in the literature in most application domains.

Control of Data

Concerns surrounding the control of data for privacy reasons can be put down to a general awareness of privacy issues in the press, reinforced by the recently-introduced GDPR. This was supported in the cases, where interviewees expressed the opinion that the GDPR had raised general awareness of privacy issues (CS1, CS9) or that it had lent weight to arguments concerning the importance of privacy (CS8).

The discussion of privacy ranged from stressing that it was not an issue for some interviewees, because there was no personal information in the data they used (CS4), to its being an issue for others, but one which was being dealt with (CS2 and CS8). One interviewee (CS5) expressed apprehension that privacy concerns conflicted with scientific innovation, introducing hitherto unforeseen costs. This view is not uncommon in scientific and medical innovation, where harms arising from the use of anonymised medical data are often seen as minimal and the potential benefits significant (Manson & O’Neill, 2007 ). In other cases (CS1), there was a confusion between anonymisation (data which cannot be traced back to the originating source) and pseudonymisation (where data can be traced back, albeit with difficulty) of users’ data. A common response from the cases was that providing informed consent for the use of personal data waived some of the rights to privacy of the user.
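
The distinction that caused confusion in CS1 can be illustrated with a short, hypothetical sketch (the field names, records, and key are invented and not drawn from any case study). A keyed token still allows the key holder to re-link records to individuals, whereas aggregate outputs do not refer to any individual at all; only the latter approaches anonymisation, and even then only if the groups are large enough to prevent re-identification.

```python
import hashlib

# Hypothetical illustration of the difference noted in CS1:
# pseudonymised records can still be re-linked to a person by whoever
# holds the mapping or key; truly anonymised outputs cannot.

records = [  # invented example data
    {"name": "Alice Example", "age": 34, "diagnosis": "asthma"},
    {"name": "Bob Example", "age": 58, "diagnosis": "asthma"},
    {"name": "Carol Example", "age": 41, "diagnosis": "diabetes"},
]

SECRET = b"org-held-key"  # whoever holds this can re-identify pseudonyms

def pseudonymise(record):
    """Replace the direct identifier with a keyed hash: re-linkable in
    principle by the key holder (e.g. via a lookup table), so the output
    is still person-level, personal data rather than anonymous data."""
    token = hashlib.sha256(SECRET + record["name"].encode()).hexdigest()[:12]
    return {"id": token, "age": record["age"], "diagnosis": record["diagnosis"]}

def anonymised_counts(records):
    """Aggregate statistics only: no row maps back to an individual."""
    counts = {}
    for r in records:
        counts[r["diagnosis"]] = counts.get(r["diagnosis"], 0) + 1
    return counts

print([pseudonymise(r) for r in records])   # still person-level data
print(anonymised_counts(records))           # {'asthma': 2, 'diabetes': 1}
```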

Consent may come in the form of a company contract Footnote 14 or an individual agreement. Footnote 15 In the former, the company often has the advantage of legal support prior to entering a contract and so should be fully aware of the information provided. In individual agreements, though, the individual is less likely to be legally supported, and so may be at risk of exploitation through not reading the information sufficiently (CS3), or of responding without adequate understanding (CS9). In one case (CS5), referring to anonymised data, consent was implied rather than given: the interviewee suggested that those involved in the project may have contributed data without giving clear informed consent. The interviewee also noted that some data may have been shared without the permission, or indeed knowledge, of those contributing individuals. This was acknowledged by the interviewee as a potential issue.

In one case (CS6), data were used without informed consent for fraud detection purposes. The interviewees noted that their organisation was working within the parameters of national and EU legislation, which allows for non-consensual use of data for these ends. One interviewee in this case stated that informed consent was sought for every novel use of the data they held. However, this was sought from the perceived owner of the data (an insurance company) rather than from the originating individuals. This case demonstrates how people may hold expectations about how their data will be used without fully understanding the legal framework under which the data are collected. For example, data relating to individuals may legally be accessed for fraud detection without notifying the individual and without relying on the individual’s consent.

This use of personal data for fraud detection in CS6 also led to concerns regarding opacity. In both CS6 and CS10 there was transparency within the organisations (a shared understanding among staff as to the various uses of the data), but that did not extend to the public outside those organisations. In some cases (CS5) the internal transparency/external opacity meant that those responsible for developing BD + AI were often hard to meet. Of those who were interviewed in CS5, many did not know the provenance of the data or the algorithms they were using. Equally, some organisations saw external opacity as integral to the business environment in which they were operating (CS9, CS10) for reasons of commercial advantage. The interviewee in CS9 cautioned that this approach, coupled with a lack of public education and the speed of transformation within the industry, would challenge any meaningful level of public accountability. This would render processes effectively opaque to the public, despite their being transparent to experts.

Reliability of Data

There can be multiple sources of unreliability in BD + AI. Unreliability originating from faults in the technology can lead to algorithmic bias, which can cause ethical issues such as unfairness, discrimination, and general negative social impact (CS3 and CS6). Considering algorithmic bias as a key input to data reliability, there are two types of issues that may need to be addressed. First, bias may stem from the input data, referred to as training data, if such data exclude adequate representation of the world, e.g. gender-biased datasets (CS6). Second, an inadequate representation of the world may be the result of a lack of data: a correctly designed algorithm to learn from and predict a rare disease may not have sufficient representative data to achieve correct predictions (CS5). In either case the input data are biased and may result in inaccurate decision-making and recommendations.
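As a rough, hypothetical illustration of the first kind of problem (the group labels, data, and threshold below are invented, not taken from the cases), a simple check of group representation in a training set might look like this:

```python
from collections import Counter

def representation_report(samples, group_key="group", min_share=0.10):
    """Flag groups whose share of the training data falls below min_share."""
    counts = Counter(s[group_key] for s in samples)
    total = sum(counts.values())
    return {
        group: {
            "n": n,
            "share": round(n / total, 3),
            "under_represented": n / total < min_share,
        }
        for group, n in counts.items()
    }

# A toy training set in which group "B" is heavily under-represented, so any
# model trained on it is likely to generalise poorly for that group.
training_data = [{"group": "A"}] * 900 + [{"group": "B"}] * 40
print(representation_report(training_data))
```

Checks like this only address representation; they say nothing about label quality, and they do not solve the second problem, where the relevant cases (such as a rare disease) are simply too scarce to learn from.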

The issues of reliability of data stemming from data accuracy and/or algorithmic bias may escalate depending on their use, as for example in predictive or risk-assessment algorithms (CS10). Consider the risks of unreliable data in employee monitoring situations (CS1), in detecting pests and diseases in agriculture (CS3), in human brain research (CS5), or in cybersecurity applications (CS8). Such issues are not singular in nature but closely linked to other ethical issues such as information asymmetries, trust, and discrimination. Consequently, the umbrella issue of reliability of data must be approached from different perspectives to ensure the validity of the decision-making processes of the BD + AI.

Data may over-represent some people or social groups who are likely to be already privileged or under-represent disadvantaged and vulnerable groups (CS3). Furthermore, people who are better positioned to gain access to data and have the expertise to interpret them may have an unfair advantage over people devoid of such competencies. In addition, BD + AI can work as a tool of disciplinary power, used to evaluate people’s conformity to norms representing the standards of disciplinary systems (CS5). We focus on the following aspects of justice in our case study analysis: power asymmetries, discrimination, inequality, and access.

The fact that issues of power can arise in public as well as private organisations was discussed in our case studies. The smart city case (CS4) showed that the public organisations were aware of potential problems arising from companies using public data and were trying to put legal safeguards in place to avoid such misuse. Without such safeguards, there is the potential that cities, or the companies with which they contract, may use data in harmful or discriminatory ways. Our case study on the use of BD + AI in scientific research showed that the interviewees were acutely aware of the potential for discrimination (CS10). They stated that biases in the data may not be easy to identify, and may lead to misclassification or misinterpretation of findings, which may in turn skew results. Discrimination refers to the recognition of difference, but it may also refer to unjust treatment of different categories of people based on their gender, sex, religion, race, class, or disability. BD + AI are often employed to distinguish between different cases, e.g. between normal and abnormal behaviour in cybersecurity. Determining whether such classification entails discrimination in the latter sense can be difficult, due to the nature of the data and algorithms involved.

Examples of potential inequality based on BD + AI could be seen in several case studies. The agricultural case (CS3) highlighted the power differential between farmers and companies, with potential implications for inequality, but also the global inequality between farmers, linked to farming practices in different countries (CS3). Subsistence farmers in developing countries, for example, might find it more difficult to benefit from these technologies than large agro-businesses. The diverging levels of access to BD + AI entail different levels of ability to benefit from them and to counteract possible disadvantages (CS3). Some companies restrict access to their data entirely, others sell access for a fee, while others offer small datasets to university-based researchers (Boyd & Crawford, 2012, p. 674).

Economic Issues

One economic impact of BD + AI outlined in the agriculture case study (CS3) concerned whether these technologies, and their ethical implementation, were economically affordable. If BD + AI could not improve economic efficiency, they would be rejected by the end-user, even if they were more productive, sustainable, and ethical options. This is striking, as it raises a serious challenge for the AI ethics literature and industry. It establishes that no matter how well intentioned and principled AI ethics guidelines and charters are, unless they can be implemented in an economically viable way, their implementation will be challenged and resisted by those footing the bill.

The telecommunications case study (CS9) focused on how GDPR legislation may economically impact businesses using BD + AI by creating disparities in competitiveness between EU and non-EU companies developing BD + AI. Owing to the larger data pools of the latter, their BD + AI may prove to be more effective than European-manufactured alternatives, which cannot bypass the ethical boundaries of European law in the same way (CS8). This is something that is also being addressed in the literature and is a very serious concern for the future profitability and development of AI in Europe (Wallace & Castro, 2018). The literature notes additional issues in this area that were not covered in the cases: the GDPR may increase costs for European AI companies, which may have to manually review algorithmic decision-making; the right to explanation could reduce AI accuracy; and the right to erasure could damage AI systems (Wallace & Castro, 2018, p. 2).

One interviewee stated that public–private BD + AI projects should be conducted in a collaborative manner, rather than as a sale of service (CS4). However, this harmonious partnership is often not possible. Another interviewee discussed the tension between public and private interests on their project: while the municipality tried to focus on citizen value, the ICT company focused on the project’s economic success. The interviewee stated that the project would have terminated earlier if it had been the company’s decision, because it was unprofitable (CS4). This is a major concern in the literature, which warns that private interests may cloud, influence, and damage public decision-making within the city because of their sometimes incompatible goals (citizen value vs. economic growth) (Sadowski & Pasquale, 2015). One interviewee said that the municipality officials were aware of the problems of corporate influence and were thus attempting to implement the approach of ‘data sovereignty’ (CS2).

During our interviews, some viewed BD + AI as complementary to human employment (CS3), as collaborative with such employment (CS4), or as a replacement for employment (CS6). The interviewees from the agriculture case study (CS3) stated that their BD + AI were not sufficiently advanced to replace humans and were meant to complement the agronomist, rather than replace them. However, they did not indicate what would happen once the technology is advanced enough and it becomes profitable to replace the agronomist. The insurance company interviewee (CS6) stated that they use BD + AI to reduce flaws in personal judgment. The literature also supports this viewpoint, where BD + AI are seen to offer the potential to evaluate cases impartially, which is beneficial to the insurance industry (Belliveau, Gray, & Wilson, 2019) (Footnote 16). The interviewee reiterated this and also stated that BD + AI would reduce the number of people required to work on fraud cases. The interviewee stated that BD + AI are designed to replace these individuals, but did not indicate whether their jobs were secure or whether they would be retrained for different positions, highlighting a concern found in the literature about the replacement and unemployment of workers by AI (Bossman, 2016). In contrast to this, a municipality interviewee from CS4 stated that their chat-bots are used in a collaborative way to assist customer service agents, allowing them to concentrate on higher-level tasks, and that there are clear policies set in place to protect their jobs.

Sustainability was only explicitly discussed in two interviews (CS3 and CS4). The agriculture interviewees stated that they wanted to be the ‘first’ to incorporate sustainability metrics into agricultural BD + AI, indicating a competitive and innovative rationale for their company (CS3). The interviewee from the sustainable development case study (CS4), by contrast, stated that their goal in using BD + AI was to reduce CO2 emissions and improve energy and air quality. He stated that there are often tensions between ecological and economic goals and that this tension tends to slow down the efforts of BD + AI public–private projects, an observation also supported by the literature (Keeso, 2014). This tension between public and private interests in BD + AI projects was a recurring issue throughout the cases and is central to the discussion of the role of organisations.

Discussion and Conclusion

The motivation behind this paper is to come to a better understanding of ethical issues related to BD + AI on a rich empirical basis across different application domains. The exploratory and interpretive approach chosen for this study means that we cannot generalise from our research to all possible examples of BD + AI, but it does allow us to generalise to theory and rich insights (Walsham, 1995a, b, 2006). These theoretical insights can then provide the basis for further empirical research, possibly using other methods to allow an even wider set of inputs and to move beyond some of the limitations of the current study.

Organisational Practice and the Literature

The first point worth stating is that there is a high level of consistency both among the case studies and between the cases and the literature. Many of the ethical issues identified cut across the cases and are interpreted in similar ways by different stakeholders. The frequency distribution of ethical issues indicates that very few, if any, issues are relevant to all cases, but many, such as privacy, have a high level of prevalence. Despite appearing in all case studies, privacy was not seen as overly problematic and could be dealt with in the context of current regulatory principles (GDPR). Most of the issues that we found in the literature (see Sect. 2) were also present in the case studies. In addition to privacy and data protection, these included accuracy, reliability, economic and power imbalances, justice, employment, discrimination and bias, autonomy, and human rights and freedoms.

Beyond the general confirmation of the relevance of topics discussed in the literature, though, the case studies provide some further interesting insights. From the perspective of an individual case some societal factors are taken for granted and are outside of the control of individual actors. For example, intellectual property regimes have significant and well-recognised consequences for justice, as demonstrated in the literature. However, there is often little that individuals or organisations can do about them. Even in cases where individuals may be able to make a difference and the problem is clear, it is not always obvious how to do this. Some well-publicised discrimination cases may be easy to recognise, for example where an HR system discriminates against women or where a facial recognition system discriminates against black people. But in many cases it may be exceedingly difficult to recognise discrimination where it is not clear how a person is discriminated against. If, for example, an image-based medical diagnostic system leads to disadvantages for people with particular genetic profiles, this may not be easy to identify.

With regard to the classification of the literature suggested in Sect. 2 along the temporal dimension, we can see that the attention of the case study respondents seems to be correlated with the temporal horizon of the issues. The issues we see as short-term figure most prominently, whereas the medium-term issues, while still relevant and recognisable, appear to be less pronounced. The long-term questions are least visible in the cases. This is not very surprising, as the short-term issues are those that are at least potentially capable of being addressed relatively quickly and thus must be accessible at the local level. Organisations deploying or using AI are therefore likely to have a responsibility to address these issues, and our case studies have shown that they are aware of this and are putting measures in place. This is clearly true for data protection or security issues. The medium-term issues that are less likely to find local resolutions still figure prominently, even though an individual organisation has less influence on how they can be addressed. Examples of this would be questions of unemployment, justice, or fairness. There was little reference to what we call long-term issues, which can partly be explained by the fact that the type of AI user organisations we investigated have very limited influence on how these issues are perceived and how they may be addressed.

Interpretative Differences on Ethical Issues

Despite general agreement on the terminology used to describe ethical issues, there are often important differences in interpretation and understanding. In the first ethics theme, control of data, the perceptions of privacy ranged from ‘not an issue’ to an issue that was being dealt with. Some of this arose from the question of informed consent and the GDPR. However, a reliance on legislation, such as the GDPR, without full knowledge of the intricacies of its details (i.e. that informed consent is only one of several legal bases of lawful data processing) may give rise to a false sense of security over people’s perceived privacy. This was also linked to the issue of transparency (of processes dealing with data), which may be external to the organisation (do people outside understand how an organisation holds and processes their data) or internal (how well does the organisation understand the algorithms developed internally), and may sometimes involve deliberate opacity (used in specific contexts where it is perceived as necessary, such as in monitoring political unrest and its possible consequences). Therefore, a clearer and more nuanced understanding of privacy and the other ethical terms raised here might well be useful, albeit tricky to derive in a public setting (for an example of the complications in defining privacy, see Macnish, 2018).

Some issues from the literature were not mentioned in the cases, such as warfare. This can easily be explained by our choice of case studies, none of which drew on work done in this area. It indicates that even a set of 10 case studies falls short of covering all issues.

A further empirical insight is in the category we called ‘role of organisations’, which covers trust and responsibility. Trust is a key term in the discussion of the ethics of AI, prominently highlighted by the focus on trustworthy AI by the EU’s High-Level Expert Group, among others. We put this into the ‘role of organisations’ category because our interaction with the case study respondents suggested that they felt it was part of the role of their organisations to foster trust and establish responsibilities. But we are open to the suggestion that these are concepts on a slightly different level that may provide the link between specific issues in applications and broader societal debate.

Next Steps: Addressing the Ethics of AI and Big Data

This paper is predominantly descriptive, and it aims to provide a theoretically sound and empirically rich account of ethical concerns in AI + BD. While we hope that it proves to be insightful, it is only a first step in the broader journey towards addressing and resolving these issues. The categorisation suggested here gives an initial indication of which type of actor may be called upon to address which type of issue. The distinction between micro-, meso-, and macro-level perspectives suggested by Haenlein and Kaplan (2019) resonates to some degree with our categorisation of issues.

This points to the question of what can be done to address these ethical issues, and by whom it should be done. We have not touched on this question in the theoretical or empirical part of the paper, but the question of mitigation is the motivating force behind much of the AI + BD ethics research. The purpose of understanding these ethical questions is to find ways of addressing them.

This calls for a more detailed investigation of the ethical nature of the issues described here. As indicated earlier, we did not begin with a specific ethical theoretical framework imposed onto the case studies, but we did have some derived ethics concepts which we explored within the context of the cases, allowing others to emerge over the course of the interviews. One issue is the philosophical question of whether the different ethical issues discussed here are of a similar or comparable nature and what characterises them as ethical issues. This is not only a philosophical question but also a practical one for policymakers and decision makers. We have alluded to the idea that privacy and data protection are ethical issues, but they also have strong legal implications and can also be human rights issues. It would therefore be beneficial to undertake a further analysis to investigate which of these ethical issues are already regulated, to what degree current regulation covers BD + AI, and how this varies across EU nations and beyond.

Another step could be to expand an investigation like the one presented here to cover the ethics of the AI + BD debate with a focus on suggested resolutions and policies. This could be achieved by adopting the categorisation and structure presented here and extending it to the currently discussed options for addressing the ethical issues. These include individual and collective activities ranging from technical measures to detect bias in data, through individual professional guidance, to standardisation, legislation, the creation of a specific regulator, and many more. It will be important to understand how these measures are conceptualised as well as which ones are already used and to what effect. Any such future work, however, will need to be based on a sound understanding of the issues themselves, to which this paper contributes. The key contribution of the paper, namely the presentation of empirical findings from 10 case studies, is to show in more detail how ethical issues play out in practice. While this work can and should be expanded by including an even broader variety of cases, and could be supplemented by other empirical research methods, it marks an important step in the development of our understanding of these ethical issues. This should form part of the broader societal debate about what these new technologies can and should be used for and how we can ensure that their consequences are beneficial for individuals and society.

Notes

Throughout the paper, XXX is used to anonymise relevant text that may identify the authors, either through the project and/or the publications resulting from the individual case studies. All case studies have been published individually. Several of the XXX references in the findings refer to these individual publications, which provide more detail on the cases than can be provided in this cross-case analysis.

The ethical issues that we discuss throughout the case studies refer to issues broadly construed as ethical, or issues that have ethical significance. While it may not be directly obvious how some of these are ethical issues, they may give rise to significant harm relevant to ethics. For example, accuracy of data may not explicitly be an ethical issue, but if inaccurate data are used in algorithms, this may lead to discrimination, unfair bias, or harms to individuals.

Such as chat-bots, natural language processing AI, IoT data retrieval, predictive risk analysis, cybersecurity machine-learning, and large dataset exchanges.

https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1 .

https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence .

The type of AI currently in vogue, as outlined earlier, is based on machine learning, typically employing artificial neural networks for big data analysis. This is typically seen as ‘narrow AI’ and it is not clear whether there is a way from narrow to general AI, even if one were to accept that achieving general AI is fundamentally possible.

The 16 social domains were: Banking and securities; Healthcare; Insurance; Retail and wholesale trade; Science; Education; Energy and utilities; Manufacturing and natural resources; Agriculture; Communications, media and entertainment; Transportation; Employee monitoring and administration; Government; Law enforcement and justice; Sustainable development; and Defence and national security.

This increased to 26 ethical issues following a group brainstorming session at the case study workshop.

The nine additional ethical issues from the initial 17 drafted by the project leader were: human rights, transparency, responsibility, ownership of data, algorithmic bias, integrity, human rights, human contact, and accuracy of data.

The additional ethical issues were access to BD + AI, accuracy of data, accuracy of recommendations, algorithmic bias, economic, human contact, human rights, integrity, ownership of data, responsibility, and transparency. Two of the initial ethical concerns were removed (inclusion of stakeholders and environmental impact). The issues raised concerning inclusion of stakeholders were deemed to be sufficiently included in access to BD + AI, and those relating to environmental impact were felt to be sufficiently covered by sustainability.

The three appendices attached in this paper comprise much of this case study protocol.

CS4 evaluated four organisations, but one of these organisations was also part of CS2 – Organisation 1. CS6 analysed two insurance organisations.

Starting out, we aimed to have both policy/ethics-focused experts within the organisation and individuals who could also speak with us about the technical aspects of the organisation’s BD + AI. However, this was often not possible, due to availability, organisations’ inability to free up resources (e.g. employees’ time) for interviews, or a lack of designated experts in those areas.

For example, in CS1, CS6, and CS8.

For example, in CS2, CS3, CS4, CS5, CS6, and CS9.

As is discussed elsewhere in this paper, algorithms also hold the possibility of reinforcing our prejudices and biases or creating new ones entirely.

References

Accenture. (2016). Building digital trust: The role of data ethics in the digital age. Retrieved December 1, 2020 from https://www.accenture.com/t20160613T024441__w__/us-en/_acnmedia/PDF-22/Accenture-Data-Ethics-POV-WEB.pdf .

Accenture. (2017). Embracing artificial intelligence. Enabling strong and inclusive AI driven growth. Retrieved December 1, 2020 from https://www.accenture.com/t20170614T130615Z__w__/us-en/_acnmedia/Accenture/next-gen-5/event-g20-yea-summit/pdfs/Accenture-Intelligent-Economy.pdf .

Antoniou, J., & Andreou, A. (2019). Case study: The Internet of Things and Ethics. The Orbit Journal, 2 (2), 67.

Badri, A., Boudreau-Trudel, B., & Souissi, A. S. (2018). Occupational health and safety in the industry 4.0 era: A cause for major concern? Safety Science, 109, 403–411. https://doi.org/10.1016/j.ssci.2018.06.012

Barolli, L., Takizawa, M., Xhafa, F., & Enokido, T. (ed.) (2019). Web, artificial intelligence and network applications. In Proceedings of the workshops of the 33rd international conference on advanced information networking and applications , Springer.

Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104 (671), 671–732. https://doi.org/10.15779/Z38BG31

Baum, S. D. (2017). Reconciliation between factions focused on near-term and long-term artificial intelligence. AI Society, 2018 (33), 565–572.

Belliveau, K. M., Gray, L. E., & Wilson, R. J. (2019). Busting the black box: Big data employment and privacy. https://www.iadclaw.org/publications-news/defensecounseljournal/busting-the-black-box-big-data-employment-and-privacy/ . Accessed 10 May 2019.

Bossman, J. (2016). Top 9 ethical issues in artificial intelligence. World Economic Forum . https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/ . Accessed 10 May 2019.

Bostrom, N. (2016). Superintelligence: Paths . OUP Oxford.

Boyd, D., & Crawford, K. (2012). Critical questions for big data. Information, Communication and Society, 15 (5), 662–679. https://doi.org/10.1080/1369118X.2012.678878

Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data and Society, 3 (1), 2053951715622512.

Bush, T., (2012). Authenticity in Research: Reliability, Validity and Triangulation. Chapter 6 in edited “Research Methods in Educational Leadership and Management”, SAGE Publications.

Calders, T., Kamiran, F., & Pechenizkiy, M. (2009). Building classifiers with independency constraints. In IEEE international conference data mining workshops , ICDMW’09, Miami, USA.

Chatfield, K., Iatridis, K., Stahl, B. C., & Paspallis, N. (2017). Innovating responsibly in ICT for ageing: Drivers, obstacles and implementation. Sustainability, 9 (6), 971. https://doi.org/10.3390/su9060971 .

Cohen, I. G., Amarasingham, R., Shah, A., et al. (2014). The legal and ethical concerns that arise from using complex predictive analytics in health care. Health Affairs, 33 (7), 1139–1147.

Couldry, N., & Powell, A. (2014). Big Data from the bottom up. Big Data and Society, 1 (2), 205395171453927. https://doi.org/10.1177/2053951714539277

Crawford, K., Gray, M. L., & Miltner, K. (2014). Big data| critiquing big data: Politics, ethics, epistemology | special section introduction. International Journal of Communication, 8, 10.

Cuquet, M., & Fensel, A. (2018). The societal impact of big data: A research roadmap for Europe. Technology in Society, 54, 74–86.

Danna, A., & Gandy, O. H., Jr. (2002). All that glitters is not gold: Digging beneath the surface of data mining. Journal of Business Ethics, 40 (4), 373–438.

European Convention for the Protection of Human Rights and Fundamental Freedoms, pmbl., Nov. 4, 1950, 213 UNTS 221.

Herriott, E. R., & Firestone, W. (1983). Multisite qualitative policy research: Optimizing description and generalizability. Educational Researcher, 12, 14–19. https://doi.org/10.3102/0013189X012002014

Einav, L., & Levin, J. (2014). Economics in the age of big data. Science, 346 (6210), 1243089. https://doi.org/10.1126/science.1243089

Ferraggine, V. E., Doorn, J. H., & Rivera, L. C. (2009). Handbook of research on innovations in database technologies and applications: Current and future trends (pp. 1–1124). IGI Global.

Fothergill, B. T., Knight, W., Stahl, B. C., & Ulnicane, I. (2019). Responsible data governance of neuroscience big data. Frontiers in Neuroinformatics, 13 . https://doi.org/10.3389/fninf.2019.00028

Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. https://doi.org/10.1016/j.techfore.2016.08.019

Haenlein, M., & Kaplan, A. (2019). A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review, 61 (4), 5–14.

Harari, Y. N. (2017). Homo deus: A brief history of tomorrow (1st ed.). Vintage.

ICO. (2017). Big data, artificial intelligence, machine learning and data protection. Retrieved December 1, 2020 from Information Commissioner’s Office website: https://iconewsblog.wordpress.com/2017/03/03/ai-machine-learning-and-personal-data/ .

Ioannidis, J. P. (2013). Informed consent, big data, and the oxymoron of research that is not research. The American Journal of Bioethics., 2, 15.

Jain, P., Gyanchandani, M., & Khare, N. (2016). Big data privacy: A technological perspective and review. Journal of Big Data, 3 (1), 25.

Janssen, M., & Kuk, G. (2016). The challenges and limits of big data algorithms in technocratic governance. Government Information Quarterly, 33 (3), 371–377. https://doi.org/10.1016/j.giq.2016.08.011

Jirotka, M., Grimpe, B., Stahl, B., Hartswood, M., & Eden, G. (2017). Responsible research and innovation in the digital age. Communications of the ACM, 60 (5), 62–68. https://doi.org/10.1145/3064940

Jiya, T. (2019). Ethical Implications Of Predictive Risk Intelligence. ORBIT Journal, 2 (2), 51.

Jiya, T. (2019). Ethical reflections of human brain research and smart information systems. The ORBIT Journal, 2 (2), 1–24.

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1 (9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

Johnson, J. A. (2014). From open data to information justice. Ethics and Information Technology, 4 (16), 263–274.

Johnson, J. A. (2018). Open data, big data, and just data. In J. A. Johnson (Ed.), Toward information justice (pp. 23–49). Berlin: Springer.

Kancevičienė, N. (2019). Insurance, smart information systems and ethics: a case study. The ORBIT Journal, 2 (2), 1–27.

Keeso, A. (2014). Big data and environmental sustainability: A conversation starter . https://www.google.com/search?rlz=1C1CHBF_nlNL796NL796&ei=YF3VXN3qCMLCwAKp4qjYBQ&q=Keeso+Big+Data+and+Environmental+Sustainability%3A+A+Conversation+Starter&oq=Keeso+Big+Data+and+Environmental+Sustainability%3A+A+Conversation+Starter&gs_l=psy-ab.3...15460.16163..16528...0.0..0.76.371.6......0....1..gws-wiz.......0i71j35i304i39j0i13i30.M_8nNbaL2E8 . Accessed 10 May 2019.

Kuriakose, F., & Iyer, D. (2018). Human Rights in the Big Data World (SSRN Scholarly Paper No. ID 3246969). Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=3246969 . Accessed 13 May 2019.

Kurzweil, R. (2006). The singularity is near . Gerald Duckworth & Co Ltd.

Latonero, M. (2018). Big data analytics and human rights. New Technologies for Human Rights Law and Practice. https://doi.org/10.1017/9781316838952.007

Lepri, B., Staiano, J., Sangokoya, D., Letouzé, E., & Oliver, N. (2017). The tyranny of data? the bright and dark sides of data-driven decision-making for social good. In Transparent data mining for big and small data (pp. 3–24). Springer.

Livingstone, D. (2015). Transhumanism: The history of a dangerous idea . CreateSpace Independent Publishing Platform.

Macnish, K. (2018). Government surveillance and why defining privacy matters in a post-snowden world. Journal of Applied Philosophy, 35 (2), 417–432.

Macnish, K., & Inguanzo, A. (2019). Case study-customer relation management, smart information systems and ethics. The ORBIT Journal, 2 (2), 1–24.

Macnish, K., Inguanzo, A. F., & Kirichenko, A. (2019). Smart information systems in cybersecurity. ORBIT Journal, 2 (2), 15.

Mai, J. E. (2016). Big data privacy: The datafication of personal information. The Information Society, 32 (3), 192–199.

Manson, N. C., & O’Neill, O. (2007). Rethinking informed consent in bioethics . Cambridge University Press.

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data and Society, 3 (2), 2053951716679679.

Meeker, W. Q., & Hong, Y. (2014). Reliability meets big data: Opportunities and challenges. Quality Engineering, 26(1), 102–116.

Newman, N. (2013). The costs of lost privacy: Consumer harm and rising economic inequality in the age of google (SSRN Scholarly Paper No. ID 2310146). Rochester: Social Science Research Network. https://papers.ssrn.com/abstract=2310146 . Accessed 10 May 2019.

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy . Crown Publishers.

Panch, T., Mattie, H., & Atun, R. (2019). Artificial intelligence and algorithmic bias: implications for health systems. Journal of global health, 9 (2).

Pellé, S., & Reber, B. (2015). Responsible innovation in the light of moral responsibility. Journal on Chain and Network Science, 15 (2), 107–117. https://doi.org/10.3920/JCNS2014.x017

Portmess, L., & Tower, S. (2015). Data barns, ambient intelligence and cloud computing: The tacit epistemology and linguistic representation of Big Data. Ethics and Information Technology, 17 (1), 1–9. https://doi.org/10.1007/s10676-014-9357-2

Ryan, M. (2019). Ethics of public use of AI and big data. ORBIT Journal, 2 (2), 15.

Ryan, M. (2019). Ethics of using AI and big data in agriculture: The case of a large agriculture multinational. The ORBIT Journal, 2 (2), 1–27.

Ryan, M., & Gregory, A. (2019). Ethics of using smart city AI and big data: The case of four large European cities. The ORBIT Journal, 2 (2), 1–36.

Sadowski, J., & Pasquale, F. A. (2015). The spectrum of control: A social theory of the smart city. First Monday, 20 (7), 16.

Schradie, J. (2017). Big data is too small: Research implications of class inequality for online data collection. In D. June & P. Andrea (Eds.), Media and class: TV, film and digital culture . Abingdon: Taylor and Francis.

Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big Data and Society, 1–14. https://doi.org/10.1177/2053951717736335 .

Tene, O., & Polonetsky, J. (2012). Big data for all: Privacy and user control in the age of analytics. The Northwestern Journal of Technology and Intellectual Property, 11, 10.

Tene, O., & Polonetsky, J. (2013). A theory of creepy: technology, privacy and shifting social norms. Yale JL and Technology, 16, 59.

Van Dijck, J., & Poell, T. (2013). Understanding social media logic. Media and Communication, 1 (1), 2–14.

Voinea, C., & Uszkai, R. (n.d.). An assessement of algorithmic accountability methods .

Walsham, G. (1995). Interpretive case studies in IS research: nature and method. European Journal of Information Systems, 4 (2), 74–81.

Wallace, N., & Castro, D. (2018) The Impact of the EU’s New Data Protection Regulation on AI, Centre for Data Innovation .

Walsham, G. (2006). Doing interpretive research. European Journal of Information Systems, 15 (3), 320–330.

Wheeler, G. (2016). Machine epistemology and big data. In L. McIntyre & A. Rosenburg (Eds.), Routledge Companion to Philosophy of Social Science . Routledge.

Whittlestone, J., Nyrup, R., Alexandrova, A., Dihal, K., & Cave, S. (2019). Ethical and societal implications of algorithms, data, and artificial intelligence: A roadmap for research. https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf .

Wolf, B. (2015). Burkhardt Wolf: Big data, small freedom? / Radical Philosophy. Radical Philosophy . https://www.radicalphilosophy.com/commentary/big-data-small-freedom . Accessed 13 May 2019.

Yin, R. K. (2014). Case study research: Design and methods (5th ed.). SAGE.

Yin, R. K. (2015). Qualitative research from start to finish . Guilford Publications.

Zwitter, A. (2014). Big data ethics. Big Data and Society, 1 (2), 51.

Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization (April 4, 2015). Journal of Information Technology, 2015 (30), 75–89. https://doi.org/10.1057/jit.2015.5

Acknowledgements

This SHERPA Project has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 786641. The author(s) acknowledge the contribution of the consortium to the development and design of the case study approach.

Author information

Authors and Affiliations

Wageningen Economic Research, Wageningen University and Research, Wageningen, The Netherlands

Mark Ryan

UCLan Cyprus, Larnaka, Cyprus

Josephina Antoniou

De Montford University, Leicester, UK

Laurence Brooks & Bernd Stahl

Northampton University, Northampton, UK

Tilimbe Jiya

The University of Twente, Enschede, The Netherlands

Kevin Macnish

Corresponding author

Correspondence to Mark Ryan.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix 1: Desk Research Questions

In which sector is the organisation located (e.g. industry, government, NGO, etc.)?

What is the name of the organisation?

What is the geographic scope of the organisation?

What is the name of the interviewee?

What is the interviewee’s role within the organisation?

Appendix 2: Interview Research Questions

What involvement has the interviewee had with BD + AI within the organisation?

What type of BD + AI is the organisation using? (e.g. IBM Watson, Google DeepMind)

What is the field of application of the BD + AI? (e.g. administration, healthcare, retail)

Does the BD + AI work as intended, or are there problems with its operation?

What are the innovative elements introduced by the BD + AI? (e.g. what has the technology enabled within the organisation?)

What is the level of maturity of the BD + AI? (i.e. has the technology been used for long at the organisation? Is it a recent development or an established approach?)

How does the BD + AI interact with other technologies within the organisation?

What are the parameters/inputs used to inform the BD + AI? (e.g. which sorts of data are input, and how are the data understood within the algorithm?)

Does the BD + AI collect and/or use data which identifies or can be used to identify a living person (personal data)?

Does the BD + AI collect personal data without the consent of the person to whom those data relate?

What are the principles informing the algorithm used in the BD + AI? (e.g. does the algorithm assume that people walk in similar ways; does it assume that loitering involves not moving outside a particular radius in a particular time frame?)

Does the BD + AI classify people into groups? If so, how are these groups determined?

Does the BD + AI identify abnormal behaviour? If so, what is abnormal behaviour to the BD + AI?

Are there policies in place governing the use of the BD + AI?

How transparent is the technology to administrators within the organisation, and to users within the organisation?

Who are the stakeholders in the organisation?

What has been the impact of the BD + AI on stakeholders?

How transparent is the technology to people outside the organisation?

Are those stakeholders engaged with the BD + AI? (e.g. are those affected aware of the BD + AI, and do they have any say in its operation?) If so, what is the nature of this engagement? (focus groups, feedback, etc.)

In what way are stakeholders impacted by the BD + AI? (e.g. what is the societal impact: are there issues of inequality, fairness, safety, filter bubbles, etc.?)

What are the costs of using the BD + AI to stakeholders? (e.g. potential loss of privacy, loss of potential to sell information, potential loss of reputation)

What is the expected longevity of this impact? (e.g. is this expected to be temporary or long-term?)

Appendix 3: Checklist of Ethical Issues

Rights and Permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Ryan, M., Antoniou, J., Brooks, L. et al. Research and Practice of AI Ethics: A Case Study Approach Juxtaposing Academic Discourse with Organisational Reality. Sci Eng Ethics 27, 16 (2021). https://doi.org/10.1007/s11948-021-00293-x

Received: 26 August 2019

Accepted: 10 February 2021

Published: 08 March 2021

DOI: https://doi.org/10.1007/s11948-021-00293-x


Keywords:
  • Smart information systems
  • Big data analytics
  • Artificial intelligence ethics
  • Multiple-case study analysis
  • Philosophy of technology

AI Case Study Generator

Generate professional and engaging case studies effortlessly with our free AI Case Study creator. Simplify the process and showcase your success.



Unlock the power of our case study creator tool: generate compelling case studies effortlessly with our creator and captivate your audience. With just a few clicks, our smart technology helps you understand data, find trends, and make insightful reports, improving both your experience and your SEO strategy.

What is a Case Study

A case study is like a detailed story that looks closely at a particular situation, person, or event, especially in the business world. It's a way to understand how things work in real life and learn valuable lessons. For instance, if a business wanted to figure out how another one became successful, they might study that business as a case study.

Let's say there's a small company that started selling handmade products online and became successful. A case study about this business could explain the challenges they faced, the strategies they used to grow, and the results they achieved. By reading this case study, other businesses could learn useful tips and apply them to their situations to improve and succeed.

7 Tips For Writing Great Case Studies

  • Pick a Familiar Topic: Choose a client or project that your audience can relate to. This makes it easier for them to see how your solutions might work for their situations.
  • Clear Structure: Start with a concise introduction that sets the stage for the case study. Clearly outline the problem, solution, and results to make your case study easy to follow.
  • Engaging Storytelling: Turn your case study into a compelling narrative. Use real-world examples, anecdotes, and quotes to make it relatable and interesting for your audience.
  • Focus on the Problem: Clearly define the problem or challenge your case study addresses. This helps readers understand the context and sets the foundation for the solution.
  • Highlight Solutions: Showcase the strategies or solutions implemented to overcome the problem. Provide details about the process, tools used, and any unique approaches that contributed to the success.
  • Optimize for SEO: By incorporating your case study into a blog post using a blog post generator, you enhance its visibility and reach. This, in turn, improves the search engine rankings of your blog post, attracting more organic traffic.
  • Quantify Results: Use data and metrics to quantify the impact of your solutions. Whether it's increased revenue, improved efficiency, or customer satisfaction, concrete results add credibility and demonstrate the value of your case study.

What is a Case Study Creator

A free case study generator is a tool or system designed to automatically create detailed case studies. It typically uses predefined templates and may incorporate artificial intelligence (AI) to generate comprehensive analyses of specific situations, events, or individuals.

This tool streamlines the process of crafting informative case studies by extracting key details, analyzing data, and presenting the information in a structured format.

Case study generators are valuable for businesses, students, or professionals seeking to efficiently produce well-organized and insightful case studies without the need for extensive manual effort.

Benefits of Using Case Study Generator

In today's competitive landscape, showcasing your product or service successes is vital. While case studies offer a compelling way to do this, starting from scratch can be time-consuming. That's where case study generators step in, providing a robust solution to streamline the process and unlock various advantages.

  • Easy and Quick: A case study generator makes it simple to create detailed studies without spending a lot of time. It's a fast and efficient way to compile information.
  • Accessible Online: As an online case study generator, you can use it from anywhere with an internet connection. No need for installations or downloads.
  • Free of Cost: Many case study creators are free to use, eliminating the need for any financial investment. This makes it budget-friendly for businesses or individuals.
  • AI-Powered Insights: Some generators use AI (artificial intelligence) to analyze data and provide valuable insights. This adds depth and accuracy to your case studies.
  • Save Time and Effort: Generate a polished case study in minutes, automating tasks like data analysis and content creation. This frees up your time to focus on other aspects of your business.
  • Enhance Quality and Consistency: Case study creators offer templates and AI-powered suggestions, ensuring your studies are well-structured and visually appealing. Consistent quality strengthens your brand image.
  • Improve Brand Awareness and Credibility: Sharing case studies on your platforms increases brand awareness and builds trust. Positive impacts on others establish you as a credible provider.
  • Boost Lead Generation and Sales: Compelling case studies build trust and showcase your value, attracting leads and converting them into customers, ultimately boosting your sales.
  • Increase Customer Engagement and Loyalty: Case studies provide insights into your company, fostering deeper connections, increasing engagement, and promoting long-term loyalty.
  • Improve Your Writing Skills: Free AI Case study generators act as learning tools, offering guidance on structure, content, and storytelling. Studying generated drafts refines your writing skills for crafting impactful case studies in the future.

How AI Case Study Generator Works

An online case study generator works by leveraging artificial intelligence algorithms to analyze and synthesize information, creating comprehensive case studies. Here's a simplified explanation of its functioning:

  • Data input
  • Algorithm analysis
  • Content generation
  • Language processing
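A purely hypothetical sketch of the kind of pipeline such a tool might run is shown below; the function names and prompt template are illustrative assumptions, and generate_text stands in for whatever language model the service actually calls:

```python
def build_case_study_prompt(inputs: dict) -> str:
    """Assemble a case-study prompt from structured user input."""
    return (
        f"Write a case study in a {inputs['tone']} tone, in {inputs['language']}.\n"
        f"Company: {inputs['company']}\n"
        f"Problem: {inputs['problem']}\n"
        f"Solution: {inputs['solution']}\n"
        f"Results: {inputs['results']}\n"
        "Structure the output as: Background, Challenge, Solution, Results, Key Learnings."
    )

def generate_text(prompt: str) -> str:
    """Placeholder for the language-model call the real tool would make."""
    return f"[model output for a prompt of {len(prompt)} characters]"

inputs = {
    "tone": "informative",
    "language": "English (US)",
    "company": "ExampleCo",
    "problem": "manual reporting",
    "solution": "automated analytics",
    "results": "30% time saved",
}
print(generate_text(build_case_study_prompt(inputs)))
```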

Who Needs a Case Study Creator

Anyone looking to create informative and detailed case studies can benefit from using an online case study generator. This tool is useful for:

  • Businesses
  • Professionals
  • Individuals
  • Marketing professionals
  • Researchers

Why Opt for Our Case Study Creator

Are you on the lookout for a top-notch case study generator that combines outstanding features with user-friendliness, all at no cost and without the need for registration? Your search ends here. Our AI-driven case study generator is the ideal solution for you. Here's why you should choose our tool:

  • Craft case studies in 50+ languages
  • Incorporate keywords in your case study
  • User-friendly interface
  • 100% free, no registration
  • 20+ diverse tones for versatile styles

Frequently Asked Questions

Common mistakes to avoid when writing a case study include:

  • Not focusing on the benefits to the reader.
  • Not using data and results to support their claims.
  • Not telling a compelling story.
  • Not using visuals effectively.
  • Not promoting their case study.


Machines of mind: The case for an AI-powered productivity boom

Martin Neil Baily (Senior Fellow Emeritus, Economic Studies, Center on Regulation and Markets), Erik Brynjolfsson (Director, Stanford Digital Economy Lab; Jerry Yang and Akiko Yamazaki Professor and Senior Fellow, Stanford Institute for Human-Centered AI), and Anton Korinek (Nonresident Fellow, Economic Studies, Center on Regulation and Markets)

May 10, 2023

Large language models such as ChatGPT are emerging as powerful tools that not only make workers more productive but also increase the rate of innovation, laying the foundation for a significant acceleration in economic growth. As a general purpose technology, AI will impact a wide array of industries, prompting investments in new skills, transforming business processes, and altering the nature of work. However, official statistics will only partially capture the boost in productivity because the output of knowledge workers is difficult to measure. The rapid advances can have great benefits but may also lead to significant risks, so it is crucial to ensure that we steer progress in a direction that benefits all of society.

On a recent Friday morning, one of us sat down in his favorite coffee shop to work on a new research paper regarding how AI will affect the labor market. To begin, he pulled up ChatGPT , a generative AI tool. After entering a few plain-English prompts, the system was able to provide a suitable economic model, draft code to run the model, and produce potential titles for the work. By the end of the morning, he had achieved a week’s worth of progress on his research.

We expect millions of knowledge workers, ranging from doctors and lawyers to managers and salespeople to experience similar ground-breaking shifts in their productivity within a few years, if not sooner.

The potential of the most recent generation of AI systems is illustrated vividly by the viral uptake of ChatGPT, a large language model (LLM) that captured public attention by its ability to generate coherent and contextually appropriate text. This is not an innovation that is languishing in the basement. Its capabilities have already captivated hundreds of millions of users.

Other LLMs that were recently rolled out publicly include Google’s Bard and Anthropic’s Claude . But generative AI is not limited to text: in recent years, we have also seen generative AI systems that can create images, such as Midjourney , Stable Diffusion or DALL-E , and more recently multi-modal systems that combine text, images, video, audio and even robotic functions . These technologies are foundation models , which are vast systems based on deep neural networks that have been trained on massive amounts of data and can then be adapted to perform a wide range of different tasks. Because information and knowledge work dominates the US economy, these machines of the mind will dramatically boost overall productivity.

The power of productivity growth

The primary determinant of our long-term prosperity and welfare is the rate of productivity growth: the amount of output created per hour worked. This holds even though changes in productivity are not immediately felt by everyone and, in the short run, workers’ perceptions of the economy are dominated by the business cycle. From World War II until the early 1970s, labor productivity grew at over 3% a year, more than doubling over the period, ushering in an era of prosperity for most Americans. In the early 1970s productivity growth slowed dramatically, rebounding in the 1990s, only to slow again since the early 2000s.

Figure 1 illustrates the story. It decomposes the overall growth in labor productivity into two components: total factor productivity (which is a measure of the impact of technology) and the contribution of labor composition and capital intensity. The figure illustrates that the key driver of changes in labor productivity is changes in total factor productivity (TFP). There are many reasons for America’s recent economic struggles, but slow TFP growth is a key cause, slowly eating away at the country’s prosperity, making it harder to fight inflation, eroding workers’ wages and worsening budget deficits.

The generally slow pace of economic growth, together with the outsized profits of tech companies, has resulted in skepticism about the benefits of digital technologies for the broad economy. However, for about 10 years starting in the 1990s there was a surge in productivity growth, as shown in Figure 1, driven primarily by a huge wave of investment in computers and communications , which in turn drove business transformations. Even though there was a stock market bubble as well as significant reallocation of labor and resources, workers were generally better off. Furthermore, the federal budget was balanced from 1998 to 2001 —a double win. Digital technology can drive broad economic growth, and it happened less than thirty years ago.

Early estimates of AI’s productivity effects

The recent advances in generative AI have been driven by progress in software, hardware, data collection, and growing amounts of investment in cutting-edge models. Sevilla et al. (2022) observe that the amount of compute (computing power) used to train cutting-edge AI systems has been doubling every six months over the past decade. The capabilities of generative AI systems have grown in tandem, allowing them to perform many tasks that used to be reserved for cognitive workers, such as writing well-crafted sentences, creating computer code, summarizing articles, brainstorming ideas, organizing plans, translating other languages, writing complex emails, and much more.

Generative AI has broad applications that will impact a wide range of workers, occupations, and activities. Unlike most advances in automation in the past, it is a machine of the mind affecting cognitive work. As noted in a recent research paper (Eloundou et al., 2023) , LLMs could affect 80% of the US workforce in some form.

There is an emerging literature that estimates the productivity effects of AI on specific occupations or tasks. Kalliamvakou (2022) finds that software engineers can code up to twice as fast using a tool called Codex, based on the previous version of the large language model GPT-3. That’s a transformative effect. Noy and Zhang (2023) find that many writing tasks can also be completed twice as fast and Korinek (2023) estimates, based on 25 use cases for language models, that economists can be 10-20% more productive using large language models.


But can these gains in specific tasks translate into significant gains in a real-world setting? The answer appears to be yes. Brynjolfsson, Li, and Raymond (2023) show that call center operators became 14% more productive when they used the technology, with gains of over 30% for the least experienced workers. What’s more, customer sentiment was higher when interacting with operators using generative AI as an aid, and perhaps as a result, employee attrition was lower. The system appears to create value by capturing and conveying some of the tacit organizational knowledge about how to solve problems and please customers that previously was learned only via on-the-job experience.

Criticism of large language models as merely “stochastic parrots” is misplaced. Most cognitive work involves drawing on past knowledge and experience and applying it to the problem at hand. It is true that generative AI programs are prone to certain types of mistakes, but the form of these mistakes is predictable. For example, language models tend to engage in “hallucinations,” i.e., to make up facts and references. As a result, they clearly require human oversight. However, their economic value depends not on whether they are flawless, but on whether they can be used productively. By that criterion, they are already poised to have a massive impact. Moreover, the accuracy of generative AI models continues to improve rapidly.

Quantifying the productivity effects

A recent report by Goldman Sachs suggests that generative AI could raise global GDP by 7%, a truly significant effect for any single technology. Based on our analysis of a variety of use cases and the share of the workforce doing mainly cognitive work, this estimate strikes us as being reasonable, though there remains great uncertainty about the ultimate productivity and growth effects of AI.

It is useful to rigorously break down the channels through which we expect generative AI to produce growth in productivity, output, and ultimately in social welfare in a model.

The first channel is the increased efficiency of output production. By making cognitive workers engaged in production more efficient, the level of output increases. Economic theory tells us that, in competitive markets, the effect of a productivity boost in a given sector on aggregate productivity and output is equal to the size of the productivity boost multiplied by the size of the sector ( Hulten’s theorem ). For instance, if generative AI makes cognitive workers on average 30% more productive over a decade or two and cognitive work makes up about 60% of all value added in the economy (as measured by the wage bill attributable to cognitive tasks), this amounts to an 18% increase in aggregate productivity and output, spread out over those years.
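To make the back-of-the-envelope arithmetic concrete, here is a minimal Python sketch of this Hulten-style calculation. The 30% boost and 60% cognitive-work share are the illustrative figures from the paragraph above, not estimates of our own:

```python
# Back-of-the-envelope application of Hulten's theorem:
# aggregate productivity gain ≈ sector productivity boost × sector share of value added.

def aggregate_gain(sector_boost: float, sector_share: float) -> float:
    """Approximate aggregate productivity/output gain from a boost in one sector."""
    return sector_boost * sector_share

# Illustrative figures from the text: cognitive workers become ~30% more
# productive, and cognitive work accounts for ~60% of value added.
boost = 0.30
share = 0.60
print(f"Aggregate gain: {aggregate_gain(boost, share):.0%}")  # -> 18%
```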

The second, and ultimately more important, channel is the acceleration of innovation and thus future productivity growth. Cognitive workers not only produce current output but also invent new things, engage in discoveries, and generate the technological progress that boosts future productivity. This includes R&D—what scientists do—and perhaps more importantly, the process of rolling out new innovations into production activities throughout the economy—what managers do. If cognitive workers are more efficient, they will accelerate technological progress and thereby boost the rate of productivity growth—in perpetuity. For example, if productivity growth was 2% and the cognitive labor that underpins productivity growth is 20% more productive, this would raise the growth rate of productivity by 20% to 2.4%. In a given year, such a change is barely noticeable and is usually swamped by cyclical fluctuations.

But productivity growth compounds. After a decade, the described tiny increase in productivity growth would leave the economy 5% larger, and the growth would compound further every year thereafter. What’s more, if the acceleration applied to the growth rate of the growth rate (for instance if one of the applications of AI was to improve AI itself ), then of course, growth would accelerate even more over time.

Figure 2 schematically illustrates the effects of the two channels of productivity growth over a twenty-year horizon. The baseline follows the current projection of the Congressional Budget Office (CBO) of 1.5% productivity growth , giving rise to a total of 33% productivity growth over 20 years. The projection labeled “Level” assumes that generative AI raises the level of productivity and output by an additional 18% over ten years, as suggested by the illustrative numbers we discussed for the first channel. After ten years, growth reverts to the baseline rate. The third projection labeled “Level+Growth” additionally includes a one percentage point boost in the rate of growth over the baseline rate, resulting from the additional innovation triggered by generative AI. At first, the resulting growth trajectory is barely distinguishable from the “Level” projection, but through the power of compounding, the effects grow bigger over time, leading to a near doubling of output after 20 years, far greater than the baseline projection.
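The sketch below reconstructs the logic of these three projections in Python. It is illustrative only: the exact series in Figure 2 may be computed somewhat differently (for instance, in how the extra 18% is phased in or how growth is compounded), so the printed totals will only roughly match the figures quoted above:

```python
# Illustrative reconstruction of the three productivity projections described
# in the text: a 1.5% baseline, a "Level" scenario that adds an extra 18% to
# the level of output phased in over the first ten years, and a "Level+Growth"
# scenario that additionally raises the growth rate by one percentage point.

YEARS = 20
baseline_growth = 0.015
level_boost_total = 0.18          # extra level gain, phased in over 10 years
extra_growth = 0.01               # extra growth in the Level+Growth scenario

baseline, level, level_growth = [1.0], [1.0], [1.0]
annual_level_kick = (1 + level_boost_total) ** (1 / 10)  # phased in over a decade

for year in range(1, YEARS + 1):
    baseline.append(baseline[-1] * (1 + baseline_growth))
    kick = annual_level_kick if year <= 10 else 1.0
    level.append(level[-1] * (1 + baseline_growth) * kick)
    level_growth.append(level_growth[-1] * (1 + baseline_growth + extra_growth) * kick)

print(f"Baseline after 20 years:      {baseline[-1] - 1:.0%}")
print(f"Level after 20 years:         {level[-1] - 1:.0%}")
print(f"Level+Growth after 20 years:  {level_growth[-1] - 1:.0%}")
```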

Barriers and drivers of adoption

For the productivity gains to materialize, advances in AI have to disseminate throughout the economy. Traditionally, this has always taken time, so we would not expect potential productivity gains to show up immediately. The advances need to be taken up and rolled out by businesses and organizations that employ cognitive labor throughout the economy, including small and medium-sized businesses, some of which may be slow to realize the potential of adapting advanced new technologies or may lack the required skills to use them well. For example, the Goldman report assumes it takes 10 years for the gains to fully materialize.

The “productivity J-curve” (Brynjolfsson et al., 2021) describes how new technologies, especially general purpose technologies, deliver productivity gains only after a period of investment in complementary intangible goods, such as business processes and new skills. In fact, this can temporarily even drag down measured productivity. As a result, earlier general purpose technologies like electricity and the first wave of computers took decades to have a significant effect on productivity. Additional barriers to adoption and rollout include concerns about job losses and institutional inertia and regulation, in areas from medicine to finance and law.

However, in the case of generative AI there are also factors that can mitigate these barriers, or even accelerate adoption. First, in contrast to physical automation, one benefit of cognitive automation is that it can often be rolled out quickly via software. This is particularly true now that a ubiquitous digital infrastructure is available: the Internet. ChatGPT famously was the most rapid product launch in history—it gained 100 million users in just two months —because it was accessible to anyone with an internet connection and did not require any hardware investment on the users’ side.

Both Microsoft and Google are in the process of rolling out Generative AI tools as part of their search engines and office suites, offering access to generative AI to a large fraction of the cognitive workforce in advanced countries who regularly use these tools. Furthermore, application programming interfaces (APIs) are increasingly available to enable seamless modularization and connectivity between systems, and a marketplace for plug-ins and extensions is rapidly growing, making it much easier to add functionality. Finally, in contrast to other technologies, users of generative AI can interact with the technology in natural language rather than special codes or commands, making it easier to learn and adopt these tools.

These reasons for optimism suggest that the rollout of these new technologies may be faster than in the past. Still, the importance of training to make optimal use of these tools cannot be overstated.

Problems of measurement – silent productivity growth

The most common measure of productivity, non-farm business productivity, is quite adept at capturing increases in  productivity in the industrial sector where inputs and outputs are tangible and easy to account for. However, productivity of cognitive labor is harder to measure. Statisticians who compile GDP and productivity statistics sometimes resort to valuing the output of cognitive activity simply by assuming it is proportional to the quantity of labor input being used to produce it, which of course eliminates any scope for productivity growth.

For example, generative AI enables economists to write more thought pieces and provide deeper analyses of the economy than before, yet this output would not directly show up in GDP statistics. Readers may feel that they have access to better and deeper economic analyses (contributing to channel 1 above). Moreover, the analyses may also play a part in enabling business leaders and policymakers to better harness the positive productivity effects of generative AI (contributing to channel 2 above). Neither of these positive productivity effects of such work would be directly captured in official GDP or productivity statistics, yet the benefits of economists’ productivity gains would still lead to greater social welfare.

The same holds true for many other cognitive workers throughout the economy. This may give rise to significant under-measurement or “silent productivity growth.”

Productivity growth, labor markets, and income distribution

A bigger pie does not automatically mean everyone benefits evenly, or at all. The productivity effects of generative AI are likely to go hand in hand with significant disruption in the job market as many workers may see downward wage pressures. For example, the Eloundou et al. paper cited earlier predicts that up to 49% of the workforce could eventually have half or more of their job tasks performed by AI. Will the demand for these tasks increase enough to compensate for such efficiency gains? Will the workers find other tasks to do? The answers are far from certain. In past technological transformations, workers who lost their jobs could transition to new jobs, and on average pay increased. However, given the scale of the impending disruption and the labor-saving nature of it, it remains to be seen whether this will be the case in the age of generative AI.

Moreover, the current wave of cognitive automation marks a change from most earlier waves of automation, which focused on physical jobs or routine cognitive tasks. Now, creative and unstructured cognitive jobs are also being impacted. Instead of the lowest paid workers bearing the brunt of the disruption, now many of the highest-paying occupations will be affected. These workers may find the disruption to be quite unexpected. If their skills are general, they may find it easier to adjust to displacement than blue-collar workers. However, if they have acquired a significant amount of human capital that becomes obsolete, they may experience much larger income losses than blue-collar workers who were displaced by previous rounds of automation.

The idea of jobs created versus jobs displaced is the most tangible manifestation of job market disruption for lay people. Job losses are indeed a significant social concern, and we need policies to facilitate adjustment. However, as economists, we note that the key factor in determining the influence of new technologies on the labor market is ultimately their effect on labor demand. Counting how many jobs are created versus how many are destroyed misses that employment is determined as the equilibrium of labor demand and labor supply. Labor supply is quite inelastic, reflecting that most working-age people want to or have to work independently of whether their incomes go up or down. Workers who lose their jobs as a result of changing technology will seek alternative employment. And, to the extent that changing technology raises productivity, this will increase national income and spur the demand for labor. Over the long run, the labor market can be expected to equilibrate, meaning that the supply of jobs, the demand for jobs and the level of wages will adjust to maintain full employment. This is evidenced by the fact that the unemployment rate in the United States has remained consistently low in the postwar period (with help from monetary and fiscal policy to recover from recessions). Job destruction has always been offset by job creation. Instead, the effects of automation and augmentation tend to be reflected in wages and income.

The effect of generative AI on labor demand depends on whether the systems complement or substitute for labor . Substitution occurs when AI models automate most or all tasks of certain jobs, while complementing occurs if they automate small parts of certain jobs, leaving humans indispensable. Additionally, AI systems can be complementary to human labor if they enable new tasks or increase quality.

As companies invest more in generative AI, they often have choices about whether to emphasize substitution or complementarity. For example, call centers can use AI to complement human operators, or, as AI improves, they may restructure their processes to have the systems address more and more queries without human operators being involved. At the same time, higher productivity growth across the economy may make the overall effects more complementary by increasing overall labor demand and may mitigate the disruption.

In recent decades, there have been three main forces impacting income distribution. First, there has been an overall shift of income away from wages and towards corporate capital. Second, there has been an increase in the return to the skills that are valued by companies (reflected in part by higher returns to education). Third, there has been a shift caused by increased foreign competition .

It is hard to predict how generative AI will impact this mix. A positive interpretation is that workers who currently struggle with aspects of math and writing will become more productive with the help of these new tools and will be able to take better-paid jobs with the help of the new technology. A negative interpretation is that companies will use the technology to eliminate or de-skill more and more positions pushing a larger fraction of the workforce into unfulfilling jobs, raising the share of profits in income and, perhaps, increasing the demand for the most elite members of the workforce.

No doubt technological progress will not stop with the current wave of generative AI. Instead, we can expect even more dramatic advances in AI, bringing the technology closer to what is called artificial general intelligence (AGI). This will lead to even more radical transformations of life and work . The scarcity of human labor has been a double-edged sword throughout our history : on the one hand, it has held back economic growth because greater production would require more labor; on the other hand, it has been highly beneficial for income distribution since wages represent the market value of scarce labor. If labor can be replaced by machines across a wide range of tasks in the future, both points may no longer hold, and we may experience an AI-powered growth take-off at the same time as the value of labor declines. This would present a significant challenge for our society . Moreover, AGI may also impose large risks on humanity if not aligned with human objectives .

Large language models and other forms of generative AI are still at an early stage, making it difficult to predict with great confidence the exact productivity effects they will have. Yet as we have argued, we expect that generative AI will have tremendous positive productivity effects, both by increasing the level of productivity and accelerating future productivity growth.

For policymakers, the goal should be to allow for the positive productivity gains while mitigating the risks and downsides of ever-more powerful AI. Faster productivity growth is an elixir that can solve or mitigate many of our society’s challenges, from raising living standards and addressing poverty to providing healthcare for all and strengthening our defenses. Indeed, it will be nearly impossible to fix some of our budgetary challenges, including the growing deficits, without sufficiently stronger growth.

AI-powered productivity growth will also create challenges. There may be a need for updating social programs and tax policy to soften the welfare costs of labor market disruptions and ensure that the benefits of AI give rise to shared prosperity rather than concentration of wealth. Other harms will also need to be addressed, including the amplification of misinformation and polarization, potentially destabilizing our democracy, and the creation of new biological and other weapons that could injure or kill untold numbers of people.

Therefore, we cannot let the capabilities of AI outstrip our understanding of their potential impacts. Economists and other social scientists will need to accelerate their work on AI’s impacts to keep up with our colleagues in AI research who are rapidly advancing the technologies. If we do that, we are optimistic our society can harness the productivity benefits and growth acceleration delivered by artificial intelligence to substantially advance human welfare in the coming years.

The authors used GPT4 for writing assistance in producing this text but assume full responsibility for its content and accuracy.

The Brookings Institution is financed through the support of a diverse array of foundations, corporations, governments, individuals, as well as an endowment. A list of donors can be found in our annual reports published online here . The findings, interpretations, and conclusions in this report are solely those of its author(s) and are not influenced by any donation.


Case studies on artificial intelligence

We are proud to present case studies from members that are pushing the frontier in the development and application of artificial intelligence.

LG Electronics’ Vision on Artificial Intelligence

Watch as LG’s Chief Technology Officer Dr. IP Park talks about LG’s vision for their future work with artificial intelligence.

Microsoft’s AI for Accessibility

Microsoft’s AI for Accessibility is a  Microsoft grant program that harnesses the power of AI to amplify human capability for the more than one billion people around the world with a disability.

Microsoft’s 2030 vision on Healthcare, Artificial Intelligence, Data and Ethics

The intersection between technology and health has been an increasing area of focus for policymakers, patient groups, ethicists and innovators. As a company, we found ourselves in the midst of many different discussions with customers in both the private and public sectors, seeking to harness technology, including cloud computing and AI, all for the end goal of improving human health. Many customers were struggling with the same questions, among them how to be responsible data stewards, how to design tools that advanced social good in ethical ways, and how to promote trust in their digital health-related products and services. […]

Finland training & development plan

AI has been extensively discussed in Finland. The University of Helsinki and Reaktor launched a free and public course to educate 1% of the Finnish population on AI by the end of this year. They have challenged companies to train employees on AI during 2018 and many member companies of the Technology Industries of Finland association (e.g. Nokia, Kone, F-Secure) have joined and support the programme. More than 90,000 people have enrolled in these courses.

SAP – Training for boosting people’s AI skills

SAP has made available various Massive Open Online Courses (MOOCs) both for internal and external users, with goals ranging from basic knowledge/awareness building, for example programmes and courses on ‘Enterprise Machine Learning in a Nutshell’ (see: https://open.sap.com/courses/ml1-1 ), as well as more advanced skills, for instance on deep learning (see: https://open.sap.com/courses/ml2 ). Two-thirds of SAP’s own machine learning (ML) team is made up of people who already worked for SAP in non-ML roles and then acquired the necessary ML knowledge and skills on the job.

SAP – Addressing bias & ensuring diversity

SAP created a formal internal and diverse AI Ethics & Society Steering Committee. The committee is creating and enforcing a set of guiding principles for SAP to address the ethical and societal challenges of AI. It comprises senior leaders from across the entire organisation, such as the Human Resources, Legal, Sustainability and AI Research departments. This interdisciplinary membership helps ensure diversity of thought when considering how to address concerns around AI, e.g. those related to bias.

AI itself can also help increase diversity in the workplace and eliminate biases. SAP uses, offers and continues to develop AI-powered HR services that eliminate biases in the application process. For example, SAP’s “Bias Language Checker” (see: https://news.sap.com/2017/10/sap-introduces-intelligent-hr-solution-to-help-businesses-eliminate-bias/ ) helps HR identify areas where the wording of a job description lacks inclusivity and may deter a prospective applicant from submitting their application.

Who can be held liable for damages caused by autonomous systems?

AI and robotics have raised some questions regarding liability. Take for example the scenario of an ‘autonomous’ or AI-driven robot moving through a factory. Another robot surprisingly crosses its path and our robot draws aside to prevent a collision. However, by this manoeuvre the robot injures a person. Who can be held liable for damages caused by autonomous systems? The manufacturer using the robots, one or both of the robot manufacturers, or one of the companies that programmed the robots’ software?

Existing approaches would likely already provide a good starting point. For example, owner’s liability, as with motor vehicles, could be introduced for autonomous systems (where ‘owner’ means the person using or having used the system for its purposes). The injured party should be able to file a claim for personal or property damages applying strict liability standards against the owner of the autonomous system.

Sony – Neural Network Libraries available in open source 

Sony has made available in open source its “Neural Network Libraries” which serve as framework for creating deep learning programmes for AI. Software engineers and designers can use these core libraries free of charge to develop deep learning programmes and incorporate them into their products and services. This shift to open source is also intended to enable the development community to further build on the core libraries’ programmes.

Deep learning refers to a form of machine learning that uses neural networks modelled after the human brain. By making the switch to deep learning-based machine learning, the past few years have seen a rapid improvement in image and voice recognition technologies, even outperforming humans in certain areas. Compared to conventional forms of machine learning, deep learning is especially notable for its high versatility, with applications covering a wide variety of fields besides image and voice recognition, including machine translation, signal processing and robotics. As proposals are made to expand the scope of deep learning to fields where machine learning has not been traditionally used, there has been an accompanying surge in the number of deep learning developers.

Neural network design is very important for deep learning programme development. Programmers construct the neural network best suited to the task at hand, such as image or voice recognition, and load it into a product or service after optimising the network’s performance through a series of trials. The software contained in these core libraries efficiently facilitates all the above-mentioned development processes.

Cisco – Reinventing the network & making security foundational

Cisco is reinventing networking with the network intuitive. Cisco employs machine learning (ML) to analyse huge amounts of network data and understand anomalies as well as optimal network configurations. Ultimately, Cisco will enable an intent-based, self-driving and self-healing network. The network will redirect traffic on its own and heal itself from internal shocks, such as device malfunctions, and external shocks, such as cyberattacks.

To simplify wide area network (WAN) deployments and improve performance, ML software observes configuration, telemetry and traffic patterns and recommends optimisation and security measures via a centralised management application. Machine learning plays a role in analysing network data to identify activity indicative of threats such as ransomware, cryptomining and advanced persistent threats within encrypted traffic flows.

Moreover, to help safeguard organisations in a constantly changing threat landscape, Cisco is using AI and ML to support comprehensive, automated, coordinated responses between various security components. For businesses in a multi-cloud environment, cloud access is secured by leveraging machine intelligence to uncover malicious domains, IPs, and URLs before they are even used in attacks. Once a malicious agent is discovered on one network, it is blacklisted across all customer networks. Machine learning is also used to detect anomalies in IT environments in order to safeguard the use of SaaS applications by adaptively learning user behaviour. Infrastructure-as-a-Service instances as well are safeguarded by using machine learning to discover advanced threats and malicious communications.

Intel – AI for cardiology treatment

Precision medicine for cancers requires the delivery of individually-adapted medical care based on the genetic characteristics of each patient. The last decade witnessed the development of high-throughput technologies such as next-generation sequencing, which have made their way into the field of oncology. While the cost of these technologies decreases, we are facing an exponential increase in the amount of data produced. In order to open access to precision medicine-based therapies to more and more patients, healthcare providers have to rationalise both their data production and utilisation, and this requires the implementation of cutting-edge high-performance computing and artificial intelligence technology.

Before taking a therapeutic decision based on the genome interpretation of a cancer, the physician can be presented with an overwhelming number of gene variants. In order to identify key actionable variants that can be targeted by treatments, the physician needs tools to sift through this large volume of variants. While the use of AI in genome interpretation is still nascent, it is growing rapidly, acting as a filter to dramatically reduce the number of variants and providing invaluable help to the physician. The mastering of high-performance computing methods on modern hardware infrastructure is becoming a key factor in making the cancer genome interpretation process efficient, cost-effective and adjustable over time.

The pioneering collaboration initiated between the Curie Institute Bioinformatics platform and Intel aims at answering those challenges by defining a leading model in France and Europe. This collaboration will grant the Curie Institute access to Intel experts for defining high-performance computing and artificial intelligence infrastructure and ensuring its optimisation, in order to implement the Intel Genomics ecosystem partner solutions and best practices, for example the Broad Institute for Cancer Genomics pipeline optimisation. Also anticipated is the development of additional tailored tools needed to integrate and analyse heterogeneous biomedical data.

MSD – AI for healthcare professionals

MSD has launched, as part of its MSD Salute programme in Italy, a chatbot for physicians, powered by AI and machine learning. It has already achieved a large uptake with healthcare professionals in Italy. The programme’s sector of focus is immune-oncology.

From the MSD perspective, physicians are digital consumers looking for relevant information for their professional activity. Key factors such as the increase in media availability, the penetration of mobile devices and the decrease in available time are resulting in a reduction of time spent navigating and searching the web. Therefore users (and physicians, with their pragmatic approach) read what they see and do not navigate as much, but just ‘read and go’. This means that there is an urgent need to access content quickly, easily and efficiently.

The chatbot was developed in partnership with Facebook and runs on their Messenger app framework. As an easy and practical tool, it helps to establish a conversational relationship with users. The MSD Italy ChatBot service is available only to registered physicians. Integration with Siri and other voice recognition systems is also being worked on, to improve the human experience during interaction with the chatbot. This initiative is a key item in MSD Italy’s digital strategy, which focuses on new channels and touch-points with healthcare professionals, leveraging new technologies.

Philips – AI in clinics and hospitals

With the clinical introduction of digital pathology, pioneered by Philips, it has become possible to implement more efficient pathology diagnostic workflows. This can help pathologists to streamline diagnostic processes, connect a team, even remotely, to enhance competencies and maximise use of resources, unify patient data for informed decision-making, and gain new insights by turning data into knowledge. Philips is working with PathAI to build deep learning applications. By analysing massive pathology data sets, we are developing algorithms aimed at supporting the detection of specific types of cancer and that inform treatment decisions.

Further, AI and machine learning for adaptive intelligence can also support quick action to address patient needs at the bedside. Manual patient health audits used to be time-consuming, putting a strain on general ward staff. Nurses need to juggle a range of responsibilities, from quality of care to compliance with hospital standards. Information about the patient’s health was scattered across various records, making it even harder for nurses to focus their attention and take the right actions. Philips monitoring and notification systems assist nurses in detecting a patient’s deterioration much more quickly. All patient vital signs are automatically captured in one place to provide an Early Warning Score (EWS).

Microsoft – Machine learning for tumour detection and genome research

Microsoft’s Project InnerEye developed machine learning techniques for the automatic delineation of tumours as well as healthy anatomy in 3D radiological images. This technology helps to enable fast radiotherapy planning and precise surgery planning and navigation. Project InnerEye builds upon many years of research in computer vision and machine learning. The software learned how to mark up organs and tumours by training on a robust data set of images for patients that had been seen by experienced consultants.

The current process of marking organs and tumours on radiological images is done by medical practitioners and is very time consuming and expensive. Further, the process is a bottleneck to treatment – the tumour and healthy tissues must be delineated before treatment can begin. The InnerEye technology performs this task much more quickly than when done by hand by clinicians, reducing burdens on personnel and speeding up treatment.

The technology, however, does not replace the expertise of medical practitioners; it is designed to assist them and reduce the time needed for the task. The delineation provided by the technology is designed to be readily refined and adjusted by expert clinicians until completely satisfied with the results. Doctors maintain full control of the results at all times.

Further, Microsoft has partnered with St. Jude Children’s Research Hospital and DNANexus to develop a genomics platform that provides a database to enable researchers to identify how genomes differ. Researchers can inspect the data by disease, publication, gene mutation and also upload and test their own data using the bioinformatics tools. Researchers can progress their projects much faster and more cost-efficiently because the data and analysis run in the cloud, powered by rapid computing capabilities that do not require downloading anything.

Siemens – AI for Industry, Power Grids and Rail Systems

Siemens has been using smart boxes to bring older motors and transmissions into the digital age. These boxes contain sensors and communication interfaces for data transfer. By analysing the data, AI systems can draw conclusions regarding a machine’s condition and detect irregularities in order to make predictive maintenance possible.

AI is used also beyond industrial settings, for example to improve the reliability of power grids by making them smarter and providing the devices that control and monitor electrical networks with AI. This enables the devices to classify and localise disruptions in the grid. A special feature of this system is that the associated calculations are not performed centrally at a data centre, but de-centrally between the interlinked protection devices.

In cooperation with Deutsche Bahn, Siemens is running a pilot project for the predictive maintenance and repair of high-speed trains. Data analysts and software recognise patterns and trends from the vehicles’ operating data. Moreover, AI helps build optimised control centres for switch towers. From the billions of possible hardware configurations for a switch tower, the software selects options that fulfil all the requirements, including those regarding reliable operation.

Schneider Electric – AI for industry applications

Schneider Electric has used AI and machine learning in various sectors. In the oil and gas industry, for example, machine learning is steering the operation of Realift rod pump control to monitor and configure pump settings and operations remotely, sending personnel onsite only when necessary for repair or maintenance – when Realift indicates that something has gone wrong. Anomalies in temperature and pressure, for instance, can flag potential problems, even issues brewing a mile below the surface. Intelligent edge devices can run analytics locally without having to tap the cloud — a huge deal for expensive, remote assets such as oil pumps.

To enable this solution, an AI model is first trained to recognise correct pump operation as well as the different types of failure a pump can experience. The model is then deployed on a gateway at the oil field for each pump and is fed with data collected at each pump stroke, outputting a prediction of the pump’s state. Because the model mimics expert diagnostics, its predictions can be easily validated, explained and interpreted.
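The workflow described above maps onto a standard supervised-classification setup. The sketch below is our own illustration rather than Schneider Electric's actual model: the per-stroke features, the pump states, and the synthetic training data are all hypothetical, and a generic scikit-learn classifier stands in for whatever model is deployed on the gateway:

```python
# Minimal sketch of the kind of per-stroke pump-state classifier described above.
# Features, labels, and data are hypothetical; a real deployment would train on
# labelled sensor data from actual pump strokes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
STATES = ["normal", "gas_interference", "rod_wear"]   # hypothetical pump states

# Synthetic training data: each row is one pump stroke,
# [pressure_mean, pressure_variance, temperature, stroke_duration]
X_train = rng.normal(loc=[50, 2, 80, 6], scale=[5, 0.5, 4, 0.5], size=(300, 4))
y_train = rng.choice(STATES, size=300)                # placeholder labels

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# At the edge gateway: classify the latest stroke and flag anything abnormal.
latest_stroke = np.array([[48.5, 2.1, 81.0, 6.2]])
prediction = model.predict(latest_stroke)[0]
if prediction != "normal":
    print(f"Alert: possible {prediction}; schedule on-site inspection")
else:
    print("Pump operating normally")
```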

Schneider Electric – Improving agriculture and farming with AI

Another example is in the agriculture sector, where Schneider Electric has proposed an AI solution for Waterforce, an irrigation solutions builder and water management company in New Zealand. Schneider Electric’s solution makes water use more efficient and effective, saving up to 50% in energy costs, and provides remote monitoring capabilities that reduce the time farmers have to spend driving to inspect assets. The solution collects data from weather forecasts, pump pressures, temperatures, water levels and ground humidity, then cleans, selects and prepares that data in order to offer services such as fault diagnosis, performance benchmarking, and recommendations and advice on operations.

AI and machine learning therefore represent a new way for humans and machines to work together – to learn about predictive tendencies and to solve complex problems. In the above examples, managing a process that requires tight control of temperatures, pressures and liquid flows is complex and prone to error. Many variables need to be factored in to achieve a successful outcome, and the quality of the data that trains the AI algorithms can lead to very different results, which human experts must still interpret and guide. With the support of AI to make better operational decisions, critical factors such as safety, security, efficiency, productivity and even profitability can be optimised jointly by the machine or process and the operator. In this way, the combined training and skills of AI and human expertise are a key success factor in delivering these values to industry.

Canon – Application of automation in the office environment

Canon’s digital mailroom solution has been at the forefront of Robotic Process Automation (RPA) since it was first launched. A digital mailroom allows all incoming mail to be automatically captured, identified, validated and sent with relevant index data to the right systems or people. RPA technology is centred on removing the mundane to make lives easier. In the P2P world, RPA automates labour-intensive activities that require accessing multiple systems or that need to be audited for compliance.

Canon believes the next step in automation is the intelligent mailroom. The key challenge of the future will be the integration of digital and paper-based information into robust, effective and efficient processes. This means that organisations need more intelligent, digital mailroom solutions that enable data capture across every channel. One example of the intelligent mailroom is Multichannel Advanced Capture. This allows banks to let customers apply for an account with minimal paperwork, using a mobile-friendly web page that captures the core details required. Automated checks on customers’ ID and credit history are made first. If all initial checks are valid, a second human check can be made. The bank is then presented with all the information required to make an informed decision on the application to open the bank account, based on applicable business rules as well as on (automatically) gathered historical business process knowledge.

SAS – Crowdsourcing and analysing data for endangered wildlife

The WildTrack Footprint Identification Technique (FIT) is a tool developed in partnership with SAS for non-invasive monitoring of endangered species through digital images of footprints. Measurements from these images are analysed by customised mathematical models that help to identify the species, individual, sex and age-class. AI could add the ability to adapt through progressive learning algorithms and tell an even more complete story.

Ordinary people would not necessarily be able to dart a rhino, but they can take an image of a footprint. WildTrack therefore has data coming in from everywhere. As this represents too much information to manage manually, AI can automate repetitive learning through data, performing frequent, high-volume, computerised tasks reliably and without fatigue.

SAS – Using AI for real-time sports analytics

AI can also be used to analyse sports and football data. For example, SciSports models on-field movements using machine learning algorithms, which by nature improve on performing a task as they gain more experience. It works by automatically assigning a value to each action, such as a corner kick. Over time, these values change based on their success rate. A goal, for example, has a high value, but a contributing action – which may have previously had a low value – can become more valuable as the platform masters the game.

AI and machine learning will play an important role in the future of SciSports and football analytics in general. Existing mathematical models shape existing knowledge and insights in football, while AI and machine learning will make it possible to discover new connections that people would not make themselves.

Various other tools such as SAS Event Stream Processing and SAS Viya can then be utilised for real-time image recognition, with deep learning models, to distinguish between players, referees and the ball. The ability to deploy deep learning models in memory onto cameras and then do the inferencing in real time is cutting-edge science.

Google & TNO – AI for data analysis on traffic safety

TNO is one of the partners of InDeV, an international collaboration of researchers which was created to develop new ways of measuring traffic safety. Statistics about traffic safety were unreliable, insufficiently detailed, and hard to collect. Researchers often resort to filming busy intersections and manually reviewing the recordings. This is a time-intensive and expensive process. A single intersection needs to be monitored for three weeks with two cameras to create an estimation of its safety, adding up to six weeks of footage, which can take six weeks of work to analyse. Typically, less than one percent of the recorded material is actually of interest to researchers. The job of TNO is to apply machine learning to video of accident-prone hot spots to rate intersections on a scale according to their safety. With TNO’s neural network based on TensorFlow, researchers report that it takes only one hour to review footage that would previously have taken a week to inspect.
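As a rough illustration of how such a filtering step might look, the sketch below defines a small TensorFlow/Keras classifier that scores individual video frames as "of interest" or not. The architecture, input size, and labels are our own assumptions and are not TNO's actual network:

```python
# Minimal sketch of a frame-level "of interest" classifier in the spirit of the
# pipeline described above. The architecture, input size, and labels are
# hypothetical illustrations, not TNO's actual model.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),       # downscaled video frame
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # P(frame shows a traffic conflict)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# In practice, the model would be trained on frames labelled by researchers
# (e.g. model.fit(frames, labels, ...)) and then used to discard the ~99% of
# footage that contains nothing of interest before manual review.
```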

AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries

A new study reveals the need for benchmarking and public evaluations of AI tools in law.


Artificial intelligence (AI) tools are rapidly transforming the practice of law. Nearly  three quarters of lawyers plan on using generative AI for their work, from sifting through mountains of case law to drafting contracts to reviewing documents to writing legal memoranda. But are these tools reliable enough for real-world use?

Large language models have a documented tendency to “hallucinate,” or make up false information. In one highly-publicized case, a New York lawyer  faced sanctions for citing ChatGPT-invented fictional cases in a legal brief;  many similar cases have since been reported. And our  previous study of general-purpose chatbots found that they hallucinated between 58% and 82% of the time on legal queries, highlighting the risks of incorporating AI into legal practice. In his  2023 annual report on the judiciary , Chief Justice Roberts took note and warned lawyers of hallucinations. 

Across all areas of industry, retrieval-augmented generation (RAG) is seen and promoted as the solution for reducing hallucinations in domain-specific contexts. Relying on RAG, leading legal research services have released AI-powered legal research products that they claim  “avoid” hallucinations and guarantee  “hallucination-free” legal citations. RAG systems promise to deliver more accurate and trustworthy legal information by integrating a language model with a database of legal documents. Yet providers have not provided hard evidence for such claims or even precisely defined “hallucination,” making it difficult to assess their real-world reliability.

AI-Driven Legal Research Tools Still Hallucinate

In a new preprint study by Stanford RegLab and HAI researchers, we put the claims of two providers, LexisNexis (creator of Lexis+ AI) and Thomson Reuters (creator of Westlaw AI-Assisted Research and Ask Practical Law AI), to the test. We show that their tools do reduce errors compared to general-purpose AI models like GPT-4. That is a substantial improvement, and we document instances where these tools provide sound and detailed legal research. But even these bespoke legal AI tools still hallucinate an alarming amount of the time: the Lexis+ AI and Ask Practical Law AI systems produced incorrect information more than 17% of the time, while Westlaw’s AI-Assisted Research hallucinated more than 34% of the time.

Read the full study, Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools

To conduct our study, we manually constructed a pre-registered dataset of over 200 open-ended legal queries, which we designed to probe various aspects of these systems’ performance.

Broadly, we investigated (1) general research questions (questions about doctrine, case holdings, or the bar exam); (2) jurisdiction or time-specific questions (questions about circuit splits and recent changes in the law); (3) false premise questions (questions that mimic a user having a mistaken understanding of the law); and (4) factual recall questions (questions about simple, objective facts that require no legal interpretation). These questions are designed to reflect a wide range of query types and to constitute a challenging real-world dataset of exactly the kinds of queries where legal research may be needed the most.


Figure 1: Comparison of hallucinated (red) and incomplete (yellow) answers across generative legal research tools.

These systems can hallucinate in one of two ways. First, a response from an AI tool might just be  incorrect —it describes the law incorrectly or makes a factual error. Second, a response might be  misgrounded —the AI tool describes the law correctly, but cites a source which does not in fact support its claims.

Given the critical importance of authoritative sources in legal research and writing, the second type of hallucination may be even more pernicious than the outright invention of legal cases. A citation might be “hallucination-free” in the narrowest sense that the citation  exists , but that is not the only thing that matters. The core promise of legal AI is that it can streamline the time-consuming process of identifying relevant legal sources. If a tool provides sources that  seem authoritative but are in reality irrelevant or contradictory, users could be misled. They may place undue trust in the tool's output, potentially leading to erroneous legal judgments and conclusions.


Figure 2:  Top left: Example of a hallucinated response by Westlaw's AI-Assisted Research product. The system makes up a statement in the Federal Rules of Bankruptcy Procedure that does not exist (and Kontrick v. Ryan, 540 U.S. 443 (2004) held that a closely related bankruptcy deadline provision was not jurisdictional). Top right: Example of a hallucinated response by LexisNexis's Lexis+ AI. Casey and its undue burden standard were overruled by the Supreme Court in Dobbs v. Jackson Women's Health Organization, 597 U.S. 215 (2022); the correct answer is rational basis review. Bottom left: Example of a hallucinated response by Thomson Reuters's Ask Practical Law AI. The system fails to correct the user’s mistaken premise—in reality, Justice Ginsburg joined the Court's landmark decision legalizing same-sex marriage—and instead provides additional false information about the case. Bottom right: Example of a hallucinated response from GPT-4, which generates a statutory provision that has not been codified.

RAG Is Not a Panacea


Figure 3: An overview of the retrieval-augmented generation (RAG) process. Given a user query (left), the typical process consists of two steps: (1) retrieval (middle), where the query is embedded with natural language processing and a retrieval system takes embeddings and retrieves the relevant documents (e.g., Supreme Court cases); and (2) generation (right), where the retrieved texts are fed to the language model to generate the response to the user query. Any of the subsidiary steps may introduce error and hallucinations into the generated response. (Icons are courtesy of FlatIcon.)

Under the hood, these new legal AI tools use retrieval-augmented generation (RAG) to produce their results, a method that many tout as a potential solution to the hallucination problem. In theory, RAG allows a system to first  retrieve the relevant source material and then use it to  generate the correct response. In practice, however, we show that even RAG systems are not hallucination-free. 
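To make the two-step process concrete, here is a deliberately simplified, self-contained Python sketch of a RAG pipeline. The toy corpus, the bag-of-words "embedding", and the generate_answer stub are all placeholders of our own; real legal research tools use learned embeddings, large document databases, and an actual language model for the generation step:

```python
# Toy illustration of the two RAG steps shown in Figure 3: (1) retrieve the most
# relevant documents for a query, (2) hand them to a language model to generate
# an answer. Real systems use learned embeddings and far larger corpora; the
# bag-of-words similarity and generate_answer stub below are placeholders.
from collections import Counter
import math

CORPUS = {  # hypothetical mini-database of source documents
    "doc1": "The statute of limitations for breach of contract is four years.",
    "doc2": "Summary judgment is appropriate when no genuine dispute of material fact exists.",
    "doc3": "A court may impose sanctions for citing fabricated authority.",
}

def embed(text: str) -> Counter:
    """Crude bag-of-words 'embedding' used only for illustration."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Step 1: rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(CORPUS, key=lambda d: cosine(q, embed(CORPUS[d])), reverse=True)
    return ranked[:k]

def generate_answer(query: str, sources: list[str]) -> str:
    # Step 2 placeholder: a real system would prompt an LLM with the query plus
    # the retrieved texts. Errors in either step can surface as hallucinated or
    # misgrounded answers.
    context = " ".join(CORPUS[s] for s in sources)
    return f"[LLM answer to '{query}' grounded in: {context}]"

question = "Can a court sanction a lawyer for citing fake cases?"
print(generate_answer(question, retrieve(question)))
```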

We identify several challenges that are particularly unique to RAG-based legal AI systems, causing hallucinations. 

First, legal retrieval is hard. As any lawyer knows, finding the appropriate (or best) authority can be no easy task. Unlike other domains, the law is not entirely composed of verifiable  facts —instead, law is built up over time by judges writing  opinions . This makes identifying the set of documents that definitively answer a query difficult, and sometimes hallucinations occur for the simple reason that the system’s retrieval mechanism fails.

Second, even when retrieval occurs, the document that is retrieved can be an inapplicable authority. In the American legal system, rules and precedents differ across jurisdictions and time periods; documents that might be relevant on their face due to semantic similarity to a query may actually be inapposite for idiosyncratic reasons that are unique to the law. Thus, we also observe hallucinations occurring when these RAG systems fail to identify the truly binding authority. This is particularly problematic because areas where the law is in flux are precisely where legal research matters the most. One system, for instance, incorrectly recited the “undue burden” standard for abortion restrictions as good law, which was overturned in  Dobbs (see Figure 2).

Third, sycophancy—the tendency of AI to agree with the user's incorrect assumptions—also poses unique risks in legal settings. One system, for instance, naively agreed with the question’s premise that Justice Ginsburg dissented in  Obergefell , the case establishing a right to same-sex marriage, and answered that she did so based on her views on international copyright. (Justice Ginsburg did not dissent in  Obergefell and, no, the case had nothing to do with copyright.) Notwithstanding that answer, there are also optimistic results here. Our tests showed that both systems generally navigated queries based on false premises effectively. But when these systems do agree with erroneous user assertions, the implications can be severe—particularly for those hoping to use these tools to increase access to justice among  pro se and under-resourced litigants.

Responsible Integration of AI Into Law Requires Transparency

Ultimately, our results highlight the need for rigorous and transparent benchmarking of legal AI tools. Unlike other domains, the use of AI in law remains alarmingly opaque: the tools we study provide no systematic access, publish few details about their models, and report no evaluation results at all.

This opacity makes it exceedingly challenging for lawyers to procure and acquire AI products. The large law firm  Paul Weiss spent nearly a year and a half testing a product, and did not develop “hard metrics” because checking the AI system was so involved that it “makes any efficiency gains difficult to measure.” The absence of rigorous evaluation metrics makes responsible adoption difficult, especially for practitioners that are less resourced than Paul Weiss. 

The lack of transparency also threatens lawyers' ability to comply with ethical and professional responsibility requirements. The bar associations of California, New York, and Florida have all recently released guidance on lawyers' duty of supervision over work products created with AI tools. And as of May 2024, more than 25 federal judges have issued standing orders instructing attorneys to disclose or monitor the use of AI in their courtrooms.

Without access to evaluations of the specific tools and transparency around their design, lawyers may find it impossible to comply with these responsibilities. Alternatively, given the high rate of hallucinations, lawyers may find themselves having to verify each and every proposition and citation provided by these tools, undercutting the stated efficiency gains that legal AI tools are supposed to provide.

Our study is meant in no way to single out LexisNexis and Thomson Reuters. Their products are far from the only legal AI tools that stand in need of transparency: a slew of startups offer similar products and have made similar claims, but they are available on even more restricted bases, making it even more difficult to assess how they function.

Based on what we know, legal hallucinations have not been solved. The legal profession should turn to public benchmarking and rigorous evaluation of AI tools.

This story was updated on Thursday, May 30, 2024, to include analysis of a third AI tool, Westlaw’s AI-Assisted Research.

Paper authors: Varun Magesh is a research fellow at Stanford RegLab. Faiz Surani is a research fellow at Stanford RegLab. Matthew Dahl is a joint JD/PhD student in political science at Yale University and graduate student affiliate of Stanford RegLab. Mirac Suzgun is a joint JD/PhD student in computer science at Stanford University and a graduate student fellow at Stanford RegLab. Christopher D. Manning is Thomas M. Siebel Professor of Machine Learning, Professor of Linguistics and Computer Science, and Senior Fellow at HAI. Daniel E. Ho is the William Benjamin Scott and Luna M. Scott Professor of Law, Professor of Political Science, Professor of Computer Science (by courtesy), Senior Fellow at HAI, Senior Fellow at SIEPR, and Director of the RegLab at Stanford University. 

AI Is Making Economists Rethink the Story of Automation

  • Walter Frick


Economists have traditionally believed that new technology lifts all boats. But in the case of AI, some are asking: Will some employees get left behind?

Will artificial intelligence take our jobs? As AI raises new fears about a jobless future, it's helpful to consider how economists' understanding of technology and labor has evolved. For decades, economists were relatively optimistic, pointing out that previous waves of technology had not led to mass unemployment. But as income inequality rose in much of the world, they began to revise their theories. Newer models of technology's effects on the labor market account for the fact that it absolutely can displace workers and lower wages. In the long run, technology does tend to raise living standards. But how soon and how broadly? That depends on two factors: whether technologies create new jobs for people to do and whether workers have a voice in technology's deployment.

Is artificial intelligence about to put vast numbers of people out of a job? Most economists would argue the answer is no: if technology permanently puts people out of work, then why, after centuries of new technologies, are there still so many jobs left? New technologies, they claim, make the economy more productive and allow people to enter new fields, like the shift from agriculture to manufacturing. For that reason, economists have historically shared a general view that whatever upheaval might be caused by technological change, it is "somewhere between benign and benevolent."

  • Walter Frick is a contributing editor at Harvard Business Review, where he was formerly a senior editor and deputy editor of HBR.org. He is the founder of Nonrival, a newsletter where readers make crowdsourced predictions about economics and business. He has been an executive editor at Quartz as well as a Knight Visiting Fellow at Harvard's Nieman Foundation for Journalism and an Assembly Fellow at Harvard's Berkman Klein Center for Internet & Society. He has also written for The Atlantic, MIT Technology Review, The Boston Globe, and the BBC, among other publications.


ORIGINAL RESEARCH article

Explainable artificial intelligence and microbiome data for food geographical origin: the Mozzarella di Bufala Campana PDO case study

Michele Magarelli

  • 1 Dipartimento di Scienze del Suolo, della Pianta e degli Alimenti, Università degli Studi di Bari Aldo Moro, Bari, Italy
  • 2 Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Bari, Italy
  • 3 Dipartimento di Agraria, Università degli Studi di Napoli Federico II, Naples, Italy
  • 4 Dipartimento Interateneo di Fisica M. Merlin, Università degli Studi di Bari Aldo Moro, Bari, Italy

Identifying the origin of a food product holds paramount importance in ensuring food safety, quality, and authenticity. Knowing where a food item comes from provides crucial information about its production methods, handling practices, and potential exposure to contaminants. Machine learning techniques play a pivotal role in this process by enabling the analysis of complex data sets to uncover patterns and associations that can reveal the geographical source of a food item. This study aims to investigate the potential use of explainable artificial intelligence for identifying food origin. The case study of Mozzarella di Bufala Campana PDO has been considered by examining the composition of the microbiota in each sample. Three different supervised machine learning algorithms have been compared, and the best classifier model is Random Forest, with an Area Under the Curve (AUC) value of 0.93 and a top accuracy of 0.87. Machine learning models effectively classify origin, offering innovative ways to authenticate regional products and support local economies. Further research can explore microbiota analysis and extend applicability to diverse food products and contexts for enhanced accuracy and broader impact.

1 Introduction

With the burgeoning demand for high-quality, region-specific products, ensuring the origin and traceability of food products plays a pivotal role in guaranteeing authenticity, quality, and safety in the global food supply chain (Gallo et al., 2021). The concepts of food traceability and origin are closely interlinked: they hold pivotal significance in ensuring food safety and transparency throughout the production process, and they also support local economies and encourage sustainable agricultural practices. They are integral in guaranteeing that foods are safe, genuine, and adhere to quality standards. Traceability refers to the ability to follow the journey of a product along the entire supply chain, encompassing detailed information about its production, processing, packaging, distribution, and sale (del Rio-Lavín et al., 2023). The origin of a food product, on the other hand, indicates the specific location where it was cultivated, manufactured, or processed. Understanding the origin of a food item is essential for various reasons, including ensuring its safety, quality, and sustainability. Presently, determining the origin of a food product relies on diverse methods and tools. Collaboration among producers, distributors, and other stakeholders in the supply chain is crucial to ensuring transparency and accuracy in disclosing the origin of food products (Corallo et al., 2020). Some food products may acquire origin certifications, such as the Protected Designation of Origin (PDO) in Europe or other regional certifications, which verify that the product originates from a specific geographical area and complies with designated standards (Badia-Melis et al., 2015). Analyzing the intricate ecosystem of microorganisms inhabiting food, known as the food microbiota, can be a useful tool for understanding the safety, quality, and characteristics of food products. This diverse microbial community, comprising bacteria, fungi, and viruses, is influenced by factors such as geographical location, production methods, and processing techniques. A fundamental aspect of harnessing the food microbiota for product origin lies in its dynamic composition, which reflects the unique environmental conditions and production practices of each food item. By scrutinizing the microbiota composition of food samples, distinctive microbial signatures indicative of their origin or production environment can be discerned. Recent advancements in molecular biology and sequencing technologies have revolutionized our ability to characterize the food microbiota with unprecedented precision and speed. High-throughput sequencing methods, including next-generation sequencing, facilitate rapid and accurate identification of the microbial species present in food samples (Reuter et al., 2015). Comparative analysis of microbiota profiles among different food samples enables the identification of subtle variations that serve as valuable markers for product origin. Specific microbial strains or community structures may be linked to particular regions or production facilities, offering distinctive identifiers for food products. Moreover, the food microbiota serves as a sentinel for monitoring food quality and safety along the supply chain (Guidone et al., 2016). Alterations in microbial composition or abundance can signal potential contamination or spoilage incidents, enabling prompt interventions to mitigate risks and uphold food safety standards.
In addition to conventional laboratory techniques, emerging methodologies such as metagenomics and metatranscriptomics provide comprehensive insights into the food microbiota. These cutting-edge approaches enable holistic analysis of all microbial genetic material within a sample, facilitating a deeper understanding of microbial dynamics and functions (Cao et al., 2021). The use of machine learning in food classification and origin identification represents a significant step forward in ensuring the safety and authenticity of food products. Machine learning enables the development of predictive models that can differentiate between types of foods based on specific characteristics. By leveraging machine learning algorithms, it becomes possible to process vast amounts of data, including information on production practices, environmental factors, and biochemical compositions, to accurately predict the origin of a food product. For example, using data from chemical, sensory, or genetic analyses, models can be trained to recognize the presence of contaminants or identify the geographical origin of a food. The application of machine learning to food classification therefore offers numerous opportunities to enhance food safety, ensure product authenticity, and optimize the identification of food origin. The integration of machine learning and microbiota analysis offers an innovative approach to understanding the complexity of interactions between the microbiome and food. By analyzing microbiome data with machine learning algorithms, it is possible to identify patterns and associations that support the development of preventive strategies to reduce risks and improve the nutritional quality of foods. The application of machine learning techniques to the food microbiota presents multiple opportunities: to analyze large amounts of microbiological data, identify patterns and associations between microbial composition and food characteristics, predict food quality and safety, understand microbial dynamics, and search for solutions to promote health (Bellantuono et al., 2023; Papoutsoglou et al., 2023). Through data analysis and the development of predictive models, crucial challenges in the food industry can be addressed, promoting greater transparency and trust among consumers. Explainable Artificial Intelligence (XAI) algorithms help make artificial intelligence (AI) models understandable and interpretable to humans, because many machine learning and AI models operate as "black boxes," making it difficult to understand how and why they produce certain predictions or decisions. The goal of XAI is to provide explanations and insights into the operation of AI models, enabling users to understand the reasons behind their predictions or decisions. This is particularly important in contexts where transparency, accountability, and trust in AI are crucial. In XAI, trustworthiness plays a central role in ensuring the reliability and transparency of AI models: it refers to the degree of confidence users have in the explanations the model provides about its predictions and decision-making processes. XAI techniques include SHapley Additive exPlanations (SHAP) analysis, which seeks to translate the internal workings of AI models into explanations understandable to humans (Novielli et al., 2024).
This research delves into the crucial realm of preserving and authenticating the geographical origin of Mozzarella di Bufala Campana PDO, specifically focusing on the provinces of Salerno and Caserta. The characteristic that will be used for data analysis is the abundance of bacteria present in the microbiota of the samples. This information will be crucial for identifying any patterns or correlations between bacterial composition and the geographical origin of Mozzarella di Bufala PDO. By utilizing data analysis techniques such as machine learning ( Monaco et al., 2021 ; Papoutsoglou et al., 2023 ), it will be possible to create predictive models capable of accurately classifying the geographical origin of each sample based on microbiota information. This approach will provide a trustworthy assessment of the mozzarella's origins, thereby contributing to food quality and safety.

2 Materials

The data utilized in this study, described in Table 1, stem from the microbiological analysis of the microbiome of 65 samples of Mozzarella di Bufala PDO originating from 30 dairies in the province of Salerno and 35 dairies in the province of Caserta. These samples underwent thorough examination in the laboratories of the Microbiology Division within the Department of Agricultural Sciences at the University of Naples Federico II. All dairies were located within the PDO area and produced traditional Mozzarella di Bufala according to the PDO regulation. Total DNA was extracted using the Qiagen Power Soil Pro kit. Metagenomic libraries were prepared using the Nextera XT Index Kit (Illumina, San Diego, California, United States), then whole metagenome sequencing was performed on an Illumina NovaSeq platform, yielding 2 × 150 bp paired-end reads. Reads were quality-checked and filtered through Prinseq-lite v. 0.20.4, using parameters "-trim_qual_right 5" and "-min_len 60." An average of 25 M paired-end reads (2 × 150 bp) was obtained for each sample. Raw reads were pre-processed and filtered as previously described (De Filippis et al., 2021). Briefly, contamination from host reads was removed using the Human Sequence Removal pipeline developed within the Human Microbiome Project, by mapping reads with the Best Match Tagger (BMtagger) against the Bubalus bubalis (Mediterranean breed) genome (accession number: GCA003121395.1). Then, non-host reads were quality-filtered using PRINSEQ v. 0.20.4 (Schmieder and Edwards, 2011). Bases having a Phred score < 15 were trimmed, and reads shorter than 75 bp were discarded. High-quality reads were further processed to obtain microbiome taxonomic profiles using MetaPhlAn v. 4.0 (Blanco-Míguez et al., 2023).

Table 1 . Description of samples and input variables.

Our analysis encompasses a diverse set of samples, reflecting the regional diversity of Mozzarella di Bufala PDO production across different dairies in the provinces of Salerno and Caserta. The 65 samples provide a robust dataset for investigating variations in microbial composition, offering valuable insights into the distinctive qualities of Mozzarella di Bufala PDO from different geographic origins. The species abundance data unveil the relative prevalence of microbial species, offering insights into the intricate microbiome of Mozzarella di Bufala PDO. This information is organized in a tabular format, where each row corresponds to a specific sample, and each column represents a distinct microbial species. To enhance our understanding of the origin of each Mozzarella di Bufala PDO sample, we include details about the respective cheese dairy, specifying both the dairy name and its geographic origin. Each sample presents 139 input variables, each representing the abundance of a specific bacterium. In the context of this analysis of the microbiome of Mozzarella di Bufala PDO, these variables reflect the proportions or relative quantities of the different types of bacteria present in each sample. The types of bacteria and their relative abundance in each sample could have significant implications for the quality and sensory characteristics of the product. Since many samples have abundance values equal to zero, indicating the absence of the bacteria, a preprocessing step was performed in which columns with more than 70% zero values were removed, reducing the total number of columns to 23. In order to conduct a robust analysis, the initial dataset has been strategically partitioned into a validation dataset and a test dataset. This partitioning is designed to ensure a representative and unbiased evaluation of the models developed during the study (Ibrahimi et al., 2023). The validation dataset consists of 22 samples from the province of Salerno and 33 samples from the province of Caserta. This division allows for the exploration of regional variations within the microbiome of Mozzarella di Bufala PDO, considering the distinctive characteristics of these geographical locations. The validation set was then used to assess three different classifiers through five-fold cross-validation repeated 20 times (Schaffer, 1993), and the performance of the best classifier (Random Forest, RF) was analyzed. Following that, the trained model was tested on the test dataset, and its performance was evaluated on this separate set of samples.

The independent test dataset, on the other hand, comprises eight samples from Salerno and two samples from Caserta. Notably, these 10 test samples are collected on the same day from the same dairy as the samples present in the validation set. By adopting this partitioning strategy, we aim to develop a model that not only captures the nuances of the training dataset but also demonstrates robust predictive abilities when faced with previously unseen samples.
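The preprocessing and split described above can be expressed compactly in Python. The following is a minimal sketch (not the authors' code); the file name, column names, and the 'split' marker are hypothetical placeholders for however the abundance table and labels are actually stored.

```python
import pandas as pd

# Hypothetical file: one row per sample, one column per bacterial species,
# plus 'province' (Salerno or Caserta), 'dairy', and 'split' metadata columns.
df = pd.read_csv("mozzarella_microbiota_abundances.csv")

labels = df["province"]
abundances = df.drop(columns=["province", "dairy", "split"])

# Drop species with zero abundance in more than 70% of samples,
# as described in the paper (139 -> 23 retained columns).
zero_fraction = (abundances == 0).mean()
abundances = abundances.loc[:, zero_fraction <= 0.70]

# Validation set (22 Salerno + 33 Caserta) used for repeated cross-validation;
# independent test set (8 Salerno + 2 Caserta) held out entirely.
X_val = abundances[df["split"] == "validation"]
y_val = labels[df["split"] == "validation"]
X_test = abundances[df["split"] == "test"]
y_test = labels[df["split"] == "test"]
```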

The main steps of our analysis are outlined in the flowchart in Figure 1. It provides a comprehensive overview of the model's performance during both the training and validation phases, as well as in the subsequent testing phase, allowing for an overall evaluation of its predictive capabilities.

Figure 1. The flowchart outlines the steps of the conducted analysis. The validation set was used to assess three different classifiers through five-fold cross-validation repeated 20 times, and the performance of the best classifier (Random Forest, RF) was analyzed. Following that, the trained model was tested on the test dataset, and its performance was evaluated on this separate set of samples.

3.1 Machine learning based classification

To assess the classification of these samples, three distinct supervised machine learning methods were employed: Random Forest, XGBoost, and Multi-Layer Perceptron (MLP). The identification of the optimal classifier was based on both accuracy and Area Under the Curve (AUC).

3.1.1 Random forest classifier

The Random Forest Classifier represents a sophisticated ensemble learning algorithm within the realm of machine learning ( Chaudhary et al., 2016 ). Envisioned as a confluence of decision trees, it operates on the principle of aggregating predictions from diverse models to augment stability and overall performance. The ensemble is constituted by an assembly of decision trees, each trained on a distinct subset of the training dataset through bootstrap sampling, a method characterized by sampling with replacement. The algorithm's efficacy is derived from the varied nature of the decision trees. This diversity, arising from the differential subsets of data upon which each tree is trained, mitigates the risk of overfitting, fostering a robust model. In the predictive phase, each decision tree contributes its prediction, and the final class is determined through majority vote. This collective decision-making process amplifies the model's resilience and generalization capabilities ( Breiman, 2001 ).

3.1.2 EXtreme gradient boosting classifier

EXtreme Gradient Boosting (XGBoost) is a widely used machine learning algorithm for regression and classification problems, renowned for its prowess in diverse applications and particularly excelling on structured or tabular data in supervised learning scenarios ( Shwartz-Ziv and Armon, 2022 ). XGBoost has been extensively used in data science and machine learning competitions due to its ability to achieve excellent performance on a wide range of problems and datasets. It is also known for its flexibility and ability to handle large amounts of data. Positioned within the domain of ensemble learning, XGBoost elevates traditional gradient boosting algorithms to new heights. XGBoost typically builds an ensemble of decision trees, where each tree contributes to the final prediction; the combination of multiple trees enhances the model's predictive capabilities. XGBoost supports built-in cross-validation, enabling robust model evaluation and parameter tuning for optimal performance. XGBoost achieves high predictive accuracy: by constructing an ensemble of models, each correcting the errors of the others, it can provide more accurate predictions than many other algorithms. It also incorporates regularization techniques that help manage the issue of overfitting, keeping the model general and adaptable to new data ( Chen and Guestrin, 2016 ).

3.1.3 Multi-layer perceptron classifier

The Multi-Layer Perceptron (MLP) stands as a sophisticated architecture within the domain of artificial neural networks, prominently featured in the landscape of machine learning. It is distinguished by its layered composition, comprising an input layer, one or more hidden layers, and an output layer. Each layer encompasses interconnected nodes, or artificial neurons, where the transmission of information follows a feedforward trajectory, progressing from the input layer through the hidden layers and culminating in the output layer. In a Multi-Layer Perceptron (MLP), input nodes constitute the initial layer of the neural network and serve as the units through which data is introduced into the system. Each input node represents a specific feature or variable from the dataset intended for model training. The hidden layers are intermediary layers between the input and output layers, responsible for capturing and learning complex patterns and representations within the input data. These layers contribute to the model's ability to discern intricate relationships that may not be immediately apparent in the raw features. Output nodes constitute the final layer of the neural network and are responsible for producing the model's predictions or outcomes. The configuration and characteristics of the output layer depend on the nature of the task, whether it involves classification, regression, or other specific objectives ( Ruck et al., 1990 ).
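As a rough illustration of how the three classifiers can be compared under the repeated five-fold scheme described earlier, here is a scikit-learn/xgboost sketch. Hyperparameters are left at library defaults because the paper does not report them, and X_val and y_val are assumed to come from the preprocessing sketch above.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

models = {
    "RF": RandomForestClassifier(random_state=0),
    "XGB": XGBClassifier(eval_metric="logloss", random_state=0),
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
}

# Binary target: 1 = Salerno, 0 = Caserta (label values assumed).
y_bin = (y_val == "Salerno").astype(int)

# Five-fold cross-validation repeated 20 times, as described in the paper.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=20, random_state=0)

for name, model in models.items():
    scores = cross_validate(model, X_val, y_bin, cv=cv,
                            scoring=["accuracy", "roc_auc"])
    acc, auc = scores["test_accuracy"], scores["test_roc_auc"]
    print(f"{name}: accuracy = {acc.mean():.2f} ± {acc.std():.2f}, "
          f"AUC = {auc.mean():.2f} ± {auc.std():.2f}")
```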

3.2 Evaluation metrics

Evaluation metrics are crucial tools for assessing the performance and effectiveness of machine learning models ( Ferrer, 2022 ). These metrics provide quantitative measures that help quantify how well a model is performing on a given task. The choice of evaluation metrics depends on the nature of the problem (classification, regression, etc.) and the specific goals of the analysis. Here are some commonly used evaluation metrics:

• Accuracy:

The proportion of correctly classified instances among the total instances

• Sensitivity:

The fraction of true positive predictions out of all actual positive instances

• Specificity:

Specificity is the proportion of actual negatives correctly identified by the model out of the total number of actual negatives.

• Precision:

The fraction of true positive predictions out of all positive predictions

• Area Under the ROC Curve (AUC-ROC):

The Receiver Operating Characteristic (ROC) curve and Area Under the Curve (AUC) are assessment tools employed to gauge the effectiveness of a binary classification model. The ROC curve presents a graphical depiction of how sensitivity (true positives) and specificity (true negatives) change across various classification thresholds. Essentially, it illustrates the balance between accurately identifying positive and negative instances by the model. The AUC quantifies the overall performance of the model by measuring the area under the ROC curve: a value closer to 1 signifies superior model performance, while a value around 0.5 suggests random classification. In summary, these metrics are vital for evaluating and contrasting the classification ability of binary models ( Ozenne et al., 2015 ).
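In terms of confusion-matrix counts (true positives TP, true negatives TN, false positives FP, false negatives FN), these definitions reduce to the standard formulas below, which presumably correspond to the Equations (1)–(4) referenced later in the text (the numbering is inferred from that later reference):

```latex
\begin{align}
\text{Accuracy}    &= \frac{TP + TN}{TP + TN + FP + FN} \tag{1}\\
\text{Sensitivity} &= \frac{TP}{TP + FN} \tag{2}\\
\text{Specificity} &= \frac{TN}{TN + FP} \tag{3}\\
\text{Precision}   &= \frac{TP}{TP + FP} \tag{4}
\end{align}
```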

3.3 Explainable artificial intelligence methods

Explainable Artificial Intelligence (XAI) is a crucial aspect in the development of AI systems, focused on making artificial intelligence (AI) models understandable and interpretable to humans. A specific method employed for XAI is the SHapley Additive exPlanations (SHAP) ( Arrieta et al., 2020 ). SHAP values are used to evaluate the impact of individual features on the model's performance, particularly on a validation set. Mathematically, the SHAP value for a specific feature ( j ) is calculated based on the inclusion or exclusion of that feature from the model as:

$$
\Phi_j(x) \;=\; \sum_{F \subseteq S \setminus \{j\}} \frac{|F|!\,\bigl(|S|-|F|-1\bigr)!}{|S|!}\,\Bigl[f_x\bigl(F \cup \{j\}\bigr) - f_x(F)\Bigr],
$$

where Φ_j(x) represents the SHAP value of feature j for the prediction of the model f given the input x, S is the set of all features, F ⊆ S \ {j} ranges over all subsets of features that exclude feature j, the weight |F|!(|S|−|F|−1)!/|S|! is the fraction of the |S|! orderings of the features in which exactly the members of F precede j, and f_x(F ∪ {j}) and f_x(F) denote the model's prediction when feature j is added to the subset F and when it is absent, respectively (Lundberg and Lee, 2017). We also averaged the ten realizations of the SHAP values in order to obtain a single representative SHAP vector.

The SHAP value measures how much including feature j changes the model's prediction compared to the prediction without feature j, averaged over all possible combinations of features. Positive SHAP values indicate that the feature contributes positively to the prediction, while negative values indicate a negative contribution. The SHAP values provide a quantitative measure of the contribution of each feature to the model's output, enabling a more interpretable understanding of how individual features influence the algorithm's decision-making process. This transparency is crucial for building trust in AI systems and facilitating their use in various real-world applications where interpretability is essential ( Janzing et al., 2020 ). This approach contributes to the trustworthiness and applicability of our findings, enhancing the overall validity of the study's outcomes in the context of Mozzarella di Bufala PDO from Salerno and Caserta.
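For tree ensembles such as the Random Forest used here, SHAP values can be computed efficiently with the shap library's tree explainer. The sketch below is illustrative rather than the authors' pipeline, and it reuses the X_val and y_val objects assumed in the earlier sketches.

```python
import shap
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(random_state=0).fit(X_val, y_val)

# TreeExplainer implements an exact, polynomial-time SHAP computation for trees.
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_val)

# For binary classifiers, older shap versions return one array per class,
# newer versions a single array with a trailing class dimension.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Beeswarm summary plot: features ranked by mean |SHAP value|,
# with point color encoding the feature (abundance) value, as in Figure 4.
shap.summary_plot(vals, X_val, max_display=20)
```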

This study aims to investigate the potential use of explainable artificial intelligence for identifying food origin. The case study of Mozzarella di Bufala Campana PDO has been considered by examining the composition of the microbiota in 65 samples.

This study involved evaluating the effectiveness of three supervised machine learning algorithms, namely XGBoost, Random Forest, and a complex Multi-Layer Perceptron network. The analysis revealed that the Random Forest classifier outperformed the others, demonstrating the highest Area Under the Curve (AUC) value of 0.93 ± 0.10 and the top accuracy score of 0.87 ± 0.11. Table 2 provides a comprehensive comparison of the three models based on their AUC and accuracy scores.

Table 2 . Comparison between evaluation metrics of XGBoost (XGB), Random Forest (RF), and Multi-Layer Perceptron (MLP) classifiers.

4.1 Machine learning analysis

The results are illustrated in the confusion matrix in Table 3, obtained following a five-fold cross-validation procedure repeated 20 times on the validation set. This methodology allows us to assess the effectiveness of our algorithm in a robust and reliable manner. In Figure 2 it is possible to observe the boxplots displaying the distributions of the evaluation metrics, including accuracy (Equation 1), sensitivity (Equation 2), specificity (Equation 3), and precision (Equation 4), obtained through the repeated five-fold cross-validation scheme.

Table 3 . Confusion matrix depicts predicted values against actual values.

Figure 2 . Boxplot of the distributions of evaluation metrics (accuracy, specificity, sensitivity and precision) following five-fold cross-validation repeated 20 times.

The confusion matrix highlights the algorithm's ability to correctly classify observations based on the geographical origin of the samples, divided between the Salerno and Caserta areas. We observe that the algorithm achieved an accuracy of 87.87% in correctly identifying samples from the Salerno area and 86.36% for those from the Caserta area. These results indicate a good capability of our machine learning model in distinguishing the geographical origin of Mozzarella di Bufala Campana PDO based on the microbiota structure. The accuracy in both cases is quite high, suggesting that the model generalizes well to new data and could be used as a supportive tool in determining the geographical origin of unknown samples.

The Receiver Operating Characteristic curve in Figure 3 yields an AUC score, measuring the area under the curve, of 0.93 ± 0.10, which suggests high accuracy in classifying samples by their geographical origin and affirms the robustness of the model's performance.

Figure 3. ROC curve depicting the trade-off between sensitivity (true positive rate) and specificity (1 − false positive rate) as the classification threshold varies.

After conducting cross-validation, the outcomes were used to compute feature importance employing SHapley Additive exPlanations (SHAP), as expressed in Equation (5). The SHAP ranking plot displays the importance of features in the machine learning model: features are arranged along the y-axis based on their importance, with the most important features at the top and the least important ones at the bottom. Each colored point represents a single data instance, and the horizontal position of the point indicates the SHAP value for that specific instance. The color of the point indicates the value of the feature: higher values are represented in warm colors (red), while lower values are represented in cool colors (blue). Through the SHAP analysis, the 20 most important features were identified from the analysis of the microbiota of the 65 samples. In the SHAP plot in Figure 4 it is evident that certain features, such as Lactococcus lactis and Moraxella osloensis, contribute significantly to the model's prediction. The feature Lactobacillus helveticus is important for the model's interpretability, as the colored points are well separated, and red points indicate that high values of that bacterium pushed the prediction toward the Salerno class, and vice versa. This suggests that these elements play a crucial role in the geographical discrimination of the samples.

Figure 4 . The SHapley Additive exPlanations (SHAP) summary plot provides an overview of the importance of features in contributing to model predictions. In this type of plot, each point represents a data instance, and the horizontal position of the point indicates how much the effect of a specific feature contributes to the change in prediction compared to the model's average prediction. The color of the point represents the value of the feature, with darker colors indicating higher values.

The results of the SHAP analysis highlight that two phyla are the most represented (Firmicutes and Proteobacteria). The taxonomy of each bacterium from the SHAP analysis is described in Table 4. Lactobacillaceae is represented by five bacteria, the Moraxella family by four bacteria, and the Lactococcaceae family by three bacteria. At the genus level, there is considerable microbial diversity, although the Lactococcus and Lactobacillus genera are each represented three times.

Table 4. Classification of the top 20 bacteria deriving from the SHAP analysis.

A possible application of the classification model is to execute it on the previously selected test dataset. In testing the model, a dataset consisting of 10 samples from the same study was utilized, including two from Caserta and eight from Salerno. These samples had been excluded during the model training phase. The confusion matrix of the test, shown in Table 5, provides a detailed overview of the model's performance on this specific test dataset. It is particularly noteworthy that all samples from Caserta were correctly classified by the model, while only one sample from Salerno was misclassified. This result suggests a significant accuracy in the model's ability to discriminate between the two production locations, with a particularly high success rate for samples from Caserta.

Table 5. Confusion matrix depicting predicted values against actual values on the test dataset.
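A final sketch of this held-out evaluation step, again hypothetical and reusing the objects from the earlier sketches: the best classifier is refit on the validation set and scored on the ten independent test samples.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

rf = RandomForestClassifier(random_state=0).fit(X_val, y_val)
y_pred = rf.predict(X_test)

# Rows: actual province; columns: predicted province (cf. Table 5).
print(confusion_matrix(y_test, y_pred, labels=["Caserta", "Salerno"]))
```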

5 Discussion

Mozzarella di Bufala Campana PDO is a designation that certifies the mozzarella is produced in the Campania region, Italy, and follows traditional production methods and established quality standards to preserve its authenticity and excellence. The PDO protects the product name from imitations and assures buyers that they are purchasing a genuine product produced according to the traditional specifications of the designated area. Recognizing the correct origin is crucial to preserving the diversity and excellence of local productions. Protection against imitations and counterfeits, guaranteed by the PDO, helps maintain the product's reputation and preserves its cultural history. Ultimately, correctly identifying the origin of PDO mozzarella not only ensures product quality but also contributes to preserving the cultural and gastronomic heritage associated with this unique Italian specialty.

Indeed, the integration of machine learning (ML) and explainable artificial intelligence (XAI) techniques holds significant value in various contexts, including the analysis of biological data such as microbiota and metabolomics. Machine learning facilitates the creation of accurate predictive models based on microbiological data, aiding in the authentication and protection of PDO products like Mozzarella di Bufala Campana. XAI techniques ensure transparency and interpretability, reinforcing trust among consumers, regulators, and industry stakeholders. This combination not only enhances the certification of food origin but also strengthens the preservation of cultural and gastronomic heritage associated with traditional foods. Overall, microbiota analysis plays a vital role in ensuring the authenticity, quality, and safety of food products like Mozzarella di Bufala Campana PDO. In this study, each sample exhibits a relative abundance of various microbial species, which are not present in all samples. The most prevalent genera are Pseudomonas, Lactobacillus, Streptococcus , and Acinetobacter . The cheese-making process of Mozzarella di Bufala Campana is a combination of high-quality ingredients and specific procedures, with particular attention to the crucial role played by natural whey containing thermophilic lactic bacteria. The presence of thermophilic lactic bacteria is interesting because they survive at high temperatures during the processing, thus contributing to the uniqueness of Mozzarella di Bufala Campana ( Levante et al., 2023 ). The ecological complexity of these thermophilic lactic bacteria is an aspect that can be studied in detail to better understand the fermentation process and the production of this traditional cheese. Research conducted has shown that, despite ecological complexity, only certain thermophilic lactic acid bacteria (LAB), namely Streptococcus thermophilus, Lactobacillus delbrueckii , and Lactobacillus helveticus , are the main players in the curd fermentation. This is one of the peculiarities that helps preserve the unique characteristics of the cheese and protects local producers from imitations and counterfeits. It also assures buyers that they are purchasing an authentic and high-quality product, respecting the long history and reputation of Mozzarella di Bufala Campana as a traditional and artisanal product ( Pisano et al., 2016 ).

6 Conclusion

This paper is an example of how an XAI analysis can be applied with trustworthiness in the context of discriminating the geographical origin of PDO Mozzarella di Bufala Campana based on microbiota bacterial abundance. This validates the approach employed in our study and confirms that certain bacteria can be considered reliable indicators of geographical origin. The predictive models developed using machine learning techniques have proven effective in classifying the geographical origin of mozzarella samples. These results provide strong support for food traceability, enabling consumers to make informed choices and ensuring that products are authentic and safe. The results obtained have significant implications for the food industry, as they offer an innovative and reliable method to authenticate and protect high-quality regional products. This can contribute to strengthening consumer confidence in food products and supporting local economies through the promotion of sustainable agricultural practices. Machine learning facilitates the creation of robust predictive models capable of accurately identifying the origin of food products based on microbiological data. Furthermore, XAI techniques provide transparency and interpretability, enabling stakeholders to understand how these models arrive at their conclusions. This combination not only ensures the trustworthiness of predictions but also fosters trust among consumers, regulators, and industry professionals. Moving forward, further research could delve deeper into microbiota analysis and explore the effectiveness of additional analytical techniques in enhancing the accuracy of predictions regarding the geographical origin of food products. Additionally, investigating the application of these approaches in diverse contexts and food products would expand the scope and applicability of our findings, driving continual advancements in food traceability and quality assurance practices.

Data availability statement

The data presented in the study are deposited in the Sequence Read Archive (SRA) database of the NCBI, accession numbers PRJNA1084214 and PRJNA997821.

Author contributions

MM: Writing – review & editing, Writing – original draft, Software, Methodology, Investigation, Formal analysis. PN: Writing – review & editing, Writing – original draft, Visualization, Validation, Methodology, Investigation, Conceptualization. FD: Writing – review & editing, Validation, Investigation, Data curation. RM: Writing – review & editing, Data curation. PD: Writing – review & editing, Validation. DD: Writing – review & editing, Validation. RB: Writing – review & editing, Validation. ST: Writing – review & editing, Writing – original draft, Validation, Supervision, Project administration, Methodology, Investigation, Funding acquisition, Conceptualization.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. METROFOOD-IT project has received funding from the European Union—NextGenerationEU, PNRR—Mission 4 "Education and Research" Component 2: from research to business, Investment 3.1: Fund for the realization of an integrated system of research and innovation infrastructures - IR0000033 (D.M. Prot. n.120 del 21/06/2022).

Acknowledgments

Authors would like to thank the resources made available by ReCaS, a project funded by the MIUR (Italian Ministry for Education, University and Research) in the "PON Ricerca e Competitività 2007–2013-Azione I-Interventi di rafforzamento strutturale" PONa3 00052, Avviso 254/Ric, University of Bari.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The author(s) declared that they were an editorial board member of Frontiers, at the time of submission. This had no impact on the peer review process and the final decision.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., et al. (2020). Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inform. Fusion 58, 82–115. doi: 10.1016/j.inffus.2019.12.012

Badia-Melis, R., Mishra, P., and Ruiz-García, L. (2015). Food traceability: new trends and recent advances. A review. Food Control 57, 393–401. doi: 10.1016/j.foodcont.2015.05.005

Bellantuono, L., Tommasi, R., Pantaleo, E., Verri, M., Amoroso, N., Crucitti, P., et al. (2023). An explainable artificial intelligence analysis of Raman spectra for thyroid cancer diagnosis. Sci. Rep . 13:16590. doi: 10.1038/s41598-023-43856-7

Blanco-Míguez, A., Beghini, F., Cumbo, F., McIver, L. J., Thompson, K. N., Zolfo, M., et al. (2023). Extending and improving metagenomic taxonomic profiling with uncharacterized species using metaphlan 4. Nat. Biotechnol . 41, 1633–1644. doi: 10.1038/s41587-023-01688-w

Breiman, L. (2001). Random forests. Mach. Learn . 45, 5–32. doi: 10.1023/A:1010933404324

Cao, Q., Sun, X., Rajesh, K., Chalasani, N., Gelow, K., Katz, B., et al. (2021). Effects of rare microbiome taxa filtering on statistical analysis. Front. Microbiol . 11:607325. doi: 10.3389/fmicb.2020.607325

Chaudhary, A., Kolhe, S., and Kamal, R. (2016). An improved random forest classifier for multi-class classification. Inf. Process. Agric . 3, 215–222. doi: 10.1016/j.inpa.2016.08.002

Chen, T., and Guestrin, C. (2016). “Xgboost: a scalable tree boosting system,” in Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (New York, NY: ACM), 785–794. doi: 10.1145/2939672.2939785

Corallo, A., Latino, M. E., Menegoli, M., and Striani, F. (2020). The awareness assessment of the italian agri-food industry regarding food traceability systems. Trends Food Sci. Technol . 101, 28–37. doi: 10.1016/j.tifs.2020.04.022

De Filippis, F., Valentino, V., Alvarez-Ordóñez, A., Cotter, P. D., and Ercolini, D. (2021). Environmental microbiome mapping as a strategy to improve quality and safety in the food industry. Curr. Opin. Food Sci . 38, 168–176. doi: 10.1016/j.cofs.2020.11.012

del Rio-Lavín, A., Monchy, S., Jiménez, E., and Pardo, M. Á. (2023). Gut microbiota fingerprinting as a potential tool for tracing the geographical origin of farmed mussels ( Mytilus galloprovincialis ). PLoS ONE 18:e0290776. doi: 10.1371/journal.pone.0290776

Ferrer, L. (2022). Analysis and comparison of classification metrics. arXiv [Preprint]. arXiv:2209.05355. doi: 10.48550/arXiv.2209.05355

Gallo, A., Accorsi, R., Goh, A., Hsiao, H., and Manzini, R. (2021). A traceability-support system to control safety and sustainability indicators in food distribution. Food Control 124:107866. doi: 10.1016/j.foodcont.2021.107866

Guidone, A., Zotta, T., Matera, A., Ricciardi, A., De Filippis, F., Ercolini, D., et al. (2016). The microbiota of high-moisture mozzarella cheese produced with different acidification methods. Int. J. Food Microbiol . 216, 9–17. doi: 10.1016/j.ijfoodmicro.2015.09.002

Ibrahimi, E., Lopes, M. B., Dhamo, X., Simeon, A., Shigdel, R., Hron, K., et al. (2023). Overview of data preprocessing for machine learning applications in human microbiome research. Front. Microbiol . 14:1250909. doi: 10.3389/fmicb.2023.1250909

Janzing, D., Minorics, L., and Blöbaum, P. (2020). Feature relevance quantification in explainable AI: a causality problem. arXiv [Preprint]. arXiv :1910.13413.

Levante, A., Bertani, G., Marrella, M., Mucchetti, G., Bernini, V., Lazzi, C., et al. (2023). The microbiota of Mozzarella di Bufala Campana PDO cheese: a study across the manufacturing process. Front. Microbiol . 14:1196879. doi: 10.3389/fmicb.2023.1196879

Lundberg, S. M., and Lee, S.-I. (2017). “A unified approach to interpreting model predictions,” in Advances in Neural Information Processing Systems 30 , eds. I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Curran Associates, Inc), 4765–4774.

Monaco, A., Pantaleo, E., Amoroso, N., Lacalamita, A., Giudice, C. L., Fonzino, A., et al. (2021). A primer on machine learning techniques for genomic applications. Comput. Struct. Biotechnol. J . 19, 4345–4359. doi: 10.1016/j.csbj.2021.07.021

Novielli, P., Romano, D., Magarelli, M., Bitonto, P. D., Diacono, D., Chiatante, A., et al. (2024). Explainable artificial intelligence for microbiome data analysis in colorectal cancer biomarker identification. Front. Microbiol . 15:1348974. doi: 10.3389/fmicb.2024.1348974

Ozenne, B., Subtil, F., and Maucort-Boulch, D. (2015). The precision-recall curve overcame the optimism of the receiver operating characteristic curve in rare diseases. J. Clin. Epidemiol . 68, 855–859. doi: 10.1016/j.jclinepi.2015.02.010

Papoutsoglou, G., Tarazona, S., Lopes, M. B., Klammsteiner, T., Ibrahimi, E., Eckenberger, J., et al. (2023). Machine learning approaches in microbiome research: challenges and best practices. Front. Microbiol . 14:1261889. doi: 10.3389/fmicb.2023.1261889

Pisano, M. B., Scano, P., Murgia, A., Cosentino, S., and Caboni, P. (2016). Metabolomics and microbiological profile of Italian mozzarella cheese produced with buffalo and cow milk. Food Chem . 192, 618–624. doi: 10.1016/j.foodchem.2015.07.061

Reuter, J. A., Spacek, D. V., and Snyder, M. P. (2015). High-throughput sequencing technologies. Mol. Cell 58, 586–597. doi: 10.1016/j.molcel.2015.05.004

Ruck, D. W., Rogers, S. K., and Kabrisky, M. (1990). Feature selection using a multilayer perceptron. J. Neural Netw. Comput . 2, 40–48.

Schaffer, C. (1993). Selecting a classification method by cross-validation. Mach. Learn . 13, 135–143. doi: 10.1007/BF00993106

Schmieder, R., and Edwards, R. (2011). Quality control and preprocessing of metagenomic datasets. Bioinformatics 27, 863–864. doi: 10.1093/bioinformatics/btr026

Shwartz-Ziv, R., and Armon, A. (2022). Tabular data: deep learning is not all you need. Inform. Fusion 81, 84–90. doi: 10.1016/j.inffus.2021.11.011

Keywords: explainable artificial intelligence, machine learning, microbiome, food origin, PDO

Citation: Magarelli M, Novielli P, De Filippis F, Magliulo R, Di Bitonto P, Diacono D, Bellotti R and Tangaro S (2024) Explainable artificial intelligence and microbiome data for food geographical origin: the Mozzarella di Bufala Campana PDO Case of Study. Front. Microbiol. 15:1393243. doi: 10.3389/fmicb.2024.1393243

Received: 28 February 2024; Accepted: 13 May 2024; Published: 03 June 2024.

Copyright © 2024 Magarelli, Novielli, De Filippis, Magliulo, Di Bitonto, Diacono, Bellotti and Tangaro. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Sabina Tangaro, sabina.tangaro@uniba.it


Should We Fear Human Stupidity More Than Artificial Intelligence?

In recent years, artificial intelligence (AI) has sparked intense debate and fear among the public. The rise of AI technologies, from autonomous vehicles to sophisticated chatbots, has many people worried about job losses, privacy concerns, and even the dystopian scenarios depicted in science fiction.

While these concerns are not unfounded, it is crucial to consider a different perspective: perhaps we should be more afraid of human stupidity than of AI.

AI: A Tool, Not a Threat

AI, at its core, is a tool created by humans to solve problems and improve efficiency. It has the potential to revolutionize industries, enhance our daily lives, and tackle some of the world’s most pressing issues, such as climate change and disease.

For instance, AI algorithms are already being used to predict weather patterns, optimize renewable energy use, and develop new treatments for illnesses.

The fear that AI will become uncontrollable and turn against humanity is largely rooted in fiction rather than reality. AI systems operate within the parameters set by their human creators. They lack consciousness, emotions, and intentions.

Unlike humans, AI does not make decisions based on biases, prejudices, or irrational fears. It processes data and generates outcomes based on logic and probability.

The Real Danger: Human Error and Ignorance

On the other hand, human stupidity—manifested through ignorance, negligence, and poor decision-making—poses a more immediate and tangible threat. History is replete with examples of human errors leading to catastrophic outcomes.

Consider the Chernobyl disaster, the financial crisis of 2008, or the ongoing challenges in addressing climate change. These events were not caused by AI, but by human actions and decisions.

Case Studies: Human Stupidity vs. AI

To illustrate the point further, let's examine some notable case studies.

The Chernobyl Disaster: One of the most catastrophic nuclear accidents in history, the Chernobyl disaster was caused by human error. In 1986, a safety test went wrong due to the operators' lack of understanding of reactor safety protocols and their failure to follow procedures. The result was a catastrophic explosion that released massive amounts of radioactive material into the environment.

Financial Crisis of 2008 : The global financial crisis, which led to severe economic downturns worldwide, was largely a result of human greed, poor regulatory oversight, and risky financial practices. AI had little to no role in these decisions; it was human error and negligence that brought the global economy to its knees.

Climate Change : The ongoing environmental crisis is another example of human actions—industrial pollution, deforestation, and rampant use of fossil fuels—leading to dire consequences. Conversely, AI is being used to find solutions to mitigate these effects, such as optimizing energy consumption and improving carbon capture technologies.

Regardless of which side of the "climate change" (or global warming) debate you are on, the point is that AI can offer insights that may even help settle it.

AI’s Potential for Positive Change

While human errors have led to some of the most significant disasters in history, AI has shown immense potential for positive change.

Here are a few ways AI is making a difference:

Healthcare

AI is revolutionizing healthcare by enabling earlier and more accurate diagnoses, personalized treatment plans, and advanced research into diseases. For example, AI algorithms can analyze medical images faster and with higher accuracy than human radiologists, leading to quicker detection of conditions like cancer.

Environmental Conservation

AI is being used to monitor and protect endangered species, manage natural resources more effectively, and combat climate change. Machine learning models can predict deforestation patterns, track illegal poaching, and optimize renewable energy production.

Disaster Response

AI systems can analyze vast amounts of data in real-time to predict natural disasters, such as earthquakes and hurricanes, and coordinate response efforts. This leads to more efficient evacuation plans, better resource allocation, and ultimately, saved lives.

Self-Driving Cars: Balancing Innovation with Human Oversight

Self-driving cars promise to revolutionize transportation by reducing accidents, improving traffic flow, and providing mobility to those unable to drive. The primary appeal of self-driving cars lies in their potential to significantly reduce accidents caused by human error.

According to the National Highway Traffic Safety Administration (NHTSA), human error is a factor in 94% of all traffic accidents. Equipped with advanced sensors, cameras, and AI algorithms, autonomous vehicles can detect and respond to hazards more quickly and consistently than human drivers.

Self-driving cars also have the potential to improve traffic flow and reduce congestion. AI algorithms can optimize routes, communicate with other vehicles, and manage speeds to avoid traffic jams and reduce travel times.

Related: The Unspoken Truths of Electric Vehicles: Range, Cost, and More

The Role of Education in Mitigating Fear

To harness the benefits of AI while mitigating its risks, it is essential to focus on education and ethical development. By educating the public about AI’s capabilities and limitations, we can dispel myths and reduce irrational fears.

Fostering a culture of ethical AI development ensures that these technologies are designed and deployed responsibly.

Understanding AI's true nature and potential is the first step toward reducing unfounded fears. Public education initiatives, workshops, and accessible resources can help demystify AI and provide a clearer picture of how it works and what it can achieve.

Ethical Development and Regulation

Regulations and guidelines play a critical role in shaping the ethical use of AI. These frameworks advocate for transparency, accountability, and fairness in AI systems, helping to prevent misuse and protect public interests.

Ethical AI development involves creating systems that are transparent, accountable, and fair. This includes using unbiased data, regularly auditing AI systems for ethical compliance, and ensuring that AI applications do not infringe on individual rights or perpetuate harmful biases.

Embracing AI with Caution and Wisdom

While it is natural to have concerns about emerging technologies, it is important to distinguish between realistic and exaggerated fears. AI, when developed and used responsibly, offers immense potential to improve our world.

The greatest threat lies not in the machines we create, but in our own actions and decisions.

By prioritizing education, ethical development, and critical thinking, we can ensure that AI serves humanity’s best interests. In the end, it is not the intelligence of our creations that we should fear, but the potential folly of our own choices.

Like any other tool, artificial intelligence is only as good as its users. By addressing human error and promoting responsible AI use, we can unlock the full potential of this transformative technology and create a brighter, more innovative future.



