• Open access
  • Published: 08 June 2022

A systematic review on digital literacy

  • Hasan Tinmaz   ORCID: orcid.org/0000-0003-4310-0848 1 ,
  • Yoo-Taek Lee   ORCID: orcid.org/0000-0002-1913-9059 2 ,
  • Mina Fanea-Ivanovici   ORCID: orcid.org/0000-0003-2921-2990 3 &
  • Hasnan Baber   ORCID: orcid.org/0000-0002-8951-3501 4  

Smart Learning Environments, volume 9, Article number: 21 (2022)


The purpose of this study is to discover the main themes and categories of research studies on digital literacy. To serve this purpose, the databases of WoS/Clarivate Analytics, ProQuest Central, Emerald Management Journals, Jstor Business College Collections and Scopus/Elsevier were searched with four keyword combinations, and a final set of forty-three articles was included in the dataset. The researchers applied a systematic literature review method to the dataset. The preliminary findings demonstrate a growing prevalence of digital literacy articles starting from the year 2013. The dominant research methodology of the reviewed articles is qualitative. The four major themes revealed by the qualitative content analysis are: digital literacy, digital competencies, digital skills and digital thinking. Under each theme, the categories and their frequencies are analysed. Recommendations for further research and for real-life implementation are provided.

Introduction

The extant literature on digital literacy, skills and competencies is rich in definitions and classifications, but there is still no consensus on the larger themes and the categories subsumed under them (Heitin, 2016). To exemplify, existing inventories of Internet skills suffer from 'incompleteness and over-simplification, conceptual ambiguity' (van Deursen et al., 2015), and Internet skills are only a part of digital skills. While there is already a plethora of research in this field, this paper aims to provide a general framework of digital areas and themes that can best describe digital (cap)abilities in the novel context of Industry 4.0 and the accelerated pandemic-triggered digitalisation. These areas and themes can represent the starting point for drafting a contemporary digital literacy framework.

Sousa and Rocha (2019) explained that digital skills are at stake for disruptive digital business, connecting them to the latest developments such as the Internet of Things (IoT), cloud technology, big data, artificial intelligence, and robotics. The topic is even more important given the large disparities in digital literacy across regions (Tinmaz et al., 2022). More precisely, digital inequalities encompass skills, along with access, usage and self-perceptions. These inequalities need to be addressed, as they are credited with a 'potential to shape life chances in multiple ways' (Robinson et al., 2015), e.g., academic performance, labour market competitiveness, health, and civic and political participation. Steps have been successfully taken to address physical access gaps, but skills gaps are still looming (Van Deursen & Van Dijk, 2010a). Moreover, digital inequalities have grown larger due to the COVID-19 pandemic, and they have influenced the very state of health of the most vulnerable categories of the population and their employability at a time when digital skills are required (Baber et al., 2022; Beaunoyer, Dupéré & Guitton, 2020).

The systematic review the researchers propose is a useful, updated instrument of classification and inventory for digital literacy. Considering the latest developments in the economy and current digitalisation needs, such an instrument may assist policymakers in various fields, e.g., education, administration and the healthcare system, as well as managers of companies and other organisations that need to stay competitive and employ a competitive workforce. Therefore, it is vital to comprehend the big picture of digital literacy related research.

Literature review

Since the advent of digital literacy, scholars have been concerned with identifying and classifying the various (cap)abilities related to its operation. From the most cited academic papers in this stream of research, several classifications of digital-related literacies, competencies and skills have emerged.

Digital literacies

Digital literacy, which is one of the challenges of integrating technology in academic courses (Blau, Shamir-Inbal & Avdiel, 2020), has been defined in the current literature as the competencies and skills required for navigating a fragmented and complex information ecosystem (Eshet, 2004). A 'Digital Literacy Framework' was designed by Eshet-Alkalai (2012), comprising six categories: (a) photo-visual thinking (understanding and using visual information); (b) real-time thinking (simultaneously processing a variety of stimuli); (c) information thinking (evaluating and combining information from multiple digital sources); (d) branching thinking (navigating in non-linear hyper-media environments); (e) reproduction thinking (creating outcomes using technological tools by designing new content or remixing existing digital content); (f) social-emotional thinking (understanding and applying cyberspace rules). According to Heitin (2016), digital literacy comprises the following clusters: (a) finding and consuming digital content; (b) creating digital content; (c) communicating or sharing digital content. Hence, the literature describes digital literacy in many ways, associating it with a set of various technical and non-technical elements.

Digital competencies

The Digital Competence Framework for Citizens (DigComp 2.1.), the most recent framework proposed by the European Union, which is currently under review and undergoing an updating process, contains five competency areas: (a) information and data literacy, (b) communication and collaboration, (c) digital content creation, (d) safety, and (e) problem solving (Carretero, Vuorikari & Punie, 2017 ). Digital competency had previously been described in a technical fashion by Ferrari ( 2012 ) as a set comprising information skills, communication skills, content creation skills, safety skills, and problem-solving skills, which later outlined the areas of competence in DigComp 2.1, too.

Digital skills

Ng (2012) pointed out the following three categories of digital skills: (a) technological (using technological tools); (b) cognitive (thinking critically when managing information); (c) social (communicating and socialising). A set of Internet skills was suggested by Van Deursen and Van Dijk (2009, 2010b), which contains: (a) operational skills (basic skills in using internet technology); (b) formal Internet skills (navigation and orientation skills); (c) information Internet skills (fulfilling information needs); and (d) strategic Internet skills (using the internet to reach goals). In 2014, the same authors added communication and content creation skills to the initial framework (van Dijk & van Deursen, 2014). Similarly, Helsper and Eynon (2013) put forward a set of four digital skills: technical, social, critical, and creative skills. Furthermore, van Deursen et al. (2015) built a set of items and factors to measure Internet skills: operational, information navigation, social, creative, and mobile. More recent literature (van Laar et al., 2017) divides digital skills into seven core categories: technical, information management, communication, collaboration, creativity, critical thinking, and problem solving.

It is worth mentioning that the various methodologies used to classify digital literacy are overlapping or non-exhaustive, which confirms the conceptual ambiguity mentioned by van Deursen et al. ( 2015 ).

Digital thinking

Thinking skills (along with digital skills) have been acknowledged to be a significant element of digital literacy in the context of the educational process (Ferrari, 2012). In fact, critical thinking, creativity, and innovation are at the very core of DigComp. Information and Communication Technology as a support for thinking is a learning objective in any school curriculum. In the same vein, analytical thinking and interdisciplinary thinking, which help solve problems, are yet other concerns of educators in Industry 4.0 (Ozkan-Ozen & Kazancoglu, 2021).

However, we have recently witnessed a shift of focus from learning how to use information and communication technologies to using them while staying safe in the cyber-environment and being aware of alternative facts. Digital thinking would encompass identifying fake news, misinformation, and echo chambers (Sulzer, 2018). Not least important, concern about cybersecurity has grown, especially in times of political, social or economic turmoil, such as elections or the COVID-19 crisis (Sulzer, 2018; Puig, Blanco-Anaya & Perez-Maceira, 2021).

Ultimately, this systematic review paper addresses the following major research questions:

Research question 1: What is the yearly distribution of digital literacy related papers?

Research question 2: What are the research methods for digital literacy related papers?

Research question 3: What are the main themes in digital literacy related papers?

Research question 4: What are the concentrated categories (under revealed main themes) in digital literacy related papers?

This study employed the systematic review method, whereby the authors scrutinized the existing literature around the major research question of digital literacy. As Uman (2011) pointed out, in a systematic literature review the findings of earlier research are examined to identify consistent and repetitive themes. The systematic review method differs from a literature review in its well-managed and highly organized qualitative scrutiny processes, whereas researchers writing a literature review tend to cover less material from a smaller number of databases (Kowalczyk & Truluck, 2013; Robinson & Lowe, 2015).

Data collection

To address the major research objectives, the following five databases were selected due to their dominance in digital literacy focused research: (1) WoS/Clarivate Analytics, (2) ProQuest Central, (3) Emerald Management Journals, (4) Jstor Business College Collections, and (5) Scopus/Elsevier.

The search was made in the second half of June 2021, in abstracts and keywords written in English. We only kept research articles and book chapters (herein referred to as papers). Our purpose was to identify a set of digital literacy areas, or an inventory of such areas and topics. To serve that purpose, the systematic review used the following synonymous keywords to find the mainstream literature dealing with the topic: 'digital literacy', 'digital skills', 'digital competence' and 'digital fluency'. These keywords emerged from consultation with subject matter experts (two board members of the Korean Digital Literacy Association and two professors from a technology studies department). Below are the four keyword combinations used in the search: "Digital literacy AND systematic review", "Digital skills AND systematic review", "Digital competence AND systematic review", and "Digital fluency AND systematic review".

A sequential systematic search was made in the five databases mentioned above. Thus, from one database to another, duplicate papers were manually excluded in a cascade manner to extract only unique results and to make the research smoother to conduct. At this stage, we kept 47 papers. Further exclusion criteria were applied. Thus, only full-text items written in English were selected, and in doing so, three papers were excluded (no full text available), and one other paper was excluded because it was not written in English, but in Spanish. Therefore, we investigated a total number of 43 papers, as shown in Table 1 . “ Appendix A ” shows the list of these papers with full references.
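The deduplication and screening steps described above can be expressed as a short, purely illustrative sketch. This is not the authors' actual tooling; the file names and the columns ("title", "language", "full_text") are assumed for the sake of the example.

```python
import pandas as pd

def screen_records(csv_paths):
    # Load the export from each database, tagging rows with their source file
    frames = [pd.read_csv(path).assign(database=path) for path in csv_paths]
    records = pd.concat(frames, ignore_index=True)

    # Cascade de-duplication: from one database to the next, keep only the
    # first occurrence of each title
    records["title_key"] = records["title"].str.lower().str.strip()
    unique = records.drop_duplicates(subset="title_key", keep="first")

    # Exclusion criteria: keep only full-text items written in English
    screened = unique[(unique["full_text"]) & (unique["language"] == "English")]
    return screened.drop(columns="title_key")

# Hypothetical usage, with one export file per database searched:
# final_set = screen_records(
#     ["wos.csv", "proquest.csv", "emerald.csv", "jstor.csv", "scopus.csv"]
# )
# len(final_set)  # 43 papers in the authors' dataset
```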

Data analysis

The 43 papers selected after the application of the inclusion and exclusion criteria were reviewed independently by two researchers from two different countries. The researchers identified all topics pertaining to digital literacy as they appeared in the papers. Next, a third researcher independently analysed these findings and excluded duplicates. A qualitative content analysis was performed manually by calculating the frequency of the major themes across all papers, and the raw data were compared and contrasted (Fraenkel et al., 2012). All three reviewers independently listed the terms and the contexts in which they appeared, and then collectively decided how each should be categorized. Lastly, it is worth noting that the literature review of this article was written after the themes had been identified through our qualitative analyses. Therefore, the authors decided to shape the structure of the literature review based on those themes.
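A minimal sketch of the manual frequency count described above, assuming the reviewers' codings were transcribed to a file with one (paper, theme, category) row per observation; the file name and column names are hypothetical.

```python
from collections import Counter
import csv

def count_categories(coding_file):
    """Return a mapping of theme -> Counter of category frequencies."""
    counts = {}
    with open(coding_file, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            theme = row["theme"].strip().lower()
            category = row["category"].strip().lower()
            counts.setdefault(theme, Counter())[category] += 1
    return counts

# Hypothetical usage:
# frequencies = count_categories("codings.csv")
# frequencies["skills"].most_common(5)  # five most frequent skill categories
```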

As an answer to the first research question (the yearly distribution of digital literacy related papers), Fig. 1 demonstrates the yearly distribution of digital literacy related papers. It shows an increasing trend in digital literacy papers.

Figure 1. Yearly distribution of digital literacy related papers

Research question number two (the research methods for digital literacy related papers) concentrates on what research methods were employed in these papers. As Fig. 2 shows, most of the papers used qualitative methods; 'not stated' refers to book chapters.

Figure 2. Research methods used in the reviewed articles

When forty-three articles were analysed for the main themes as in research question number three (The main themes in digital literacy related papers), the overall findings were categorized around four major themes: (i) literacies, (ii) competencies, (iii) skills, and (iv) thinking. Under every major theme, the categories were listed and explained as in research question number four (The concentrated categories (under revealed main themes) in digital literacy related papers).

The authors utilized an overt categorization for the depiction of these major themes. For example, when 'creativity' was labelled as a skill, the authors categorized it under the 'skills' theme. Similarly, when 'creativity' was mentioned as a competency, the authors listed it under the 'competencies' theme. Therefore, it is possible to encounter the same finding under different major themes.

Major theme 1: literacies

Digital literacy, being the major concern of this paper, was observed to be explicitly mentioned in five papers out of forty-three. One of these articles described digital literacy as the human proficiencies needed to live, learn and work in the current digital society. In addition to these five articles, two further papers used the term 'critical digital literacy', describing it as a person's or a society's level of access to, assessment of and interaction with digital technologies to utilize and/or create information. Table 2 summarizes the major categories under the 'Literacies' major theme.

Computer literacy, media literacy and cultural literacy were the second most common literacies (n = 5). One of the articles divides computer literacy into tool literacy (dealing with software and hardware use) and resource literacy (focusing on the information-processing capacity of a computer). Cultural literacy was emphasized as a vital element for functioning in an intercultural team on a digital project.

Disciplinary literacy (n = 4) referred to utilizing different computer programs (n = 2) or technical gadgets (n = 2), with a specific emphasis on the cognitive, affective and psychomotor skills required to work in any digital context (n = 3), serving the use (n = 2) as well as the creation and application (n = 2) of digital literacy in real life.

Data literacy, technology literacy and multiliteracy were the third most frequent categories (n = 3). 'Multiliteracy' referred to the innate nature of digital technologies, which have been infused into many aspects of human lives.

Last but not least, Internet literacy, mobile literacy, web literacy, new literacy, personal literacy and research literacy were also discussed in the findings of the forty-three articles. Web literacy focused on being able to connect with people on the web (n = 2), discover web content (especially navigation on a hyper-textual platform), and learn web related skills through practical web experiences. Personal literacy highlighted digital identity management. Research literacy concentrated not only on the ability to conduct scientific research but also on finding available scholarship online.

Twenty-four other categories emerged from the results sections of the forty-three articles. Table 3 presents the list of these other literacies, sorted in ascending alphabetical order without any other sorting criterion. Primarily, search, tagging, filtering and attention literacies mainly underlined their roles in information processing. Furthermore, social-structural literacy was indicated as the recognition of the social circumstances and generation of information. Another information-related literacy was publishing literacy, the ability to disseminate information via different digital channels.

While the above-listed personal literacy referred to digital identity management, network literacy was explained as one's social networking ability to manage digital relationships with other people. Additionally, participatory literacy was defined as the abilities necessary to join an online team working on online content production.

Emerging technology literacy was stipulated as an essential ability to recognize and appreciate the most recent and innovative technologies, along with making smart choices related to these technologies. Additionally, critical literacy was added as the ability to make smart judgements based on a cost-benefit analysis of these recent technologies.

Last of all, basic, intermediate and advanced digital assessment literacies were specified for educational institutions that are planning to integrate various digital tools to conduct instructional assessments within their organizations.

Major theme 2: competencies

The second major theme revealed was competencies. The authors directly categorized the findings that were explicitly labelled with the word 'competency'. Table 4 summarizes the entire category set for the competencies major theme.

The most common category was 'digital competence' (n = 14), with one of the articles referring to it as 'generic digital competence', i.e., someone's creativity in multimedia development (video editing was emphasized). Under this broad category, the following sub-categories were associated:

Problem solving (n = 10)

Safety (n = 7)

Information processing (n = 5)

Content creation (n = 5)

Communication (n = 2)

Digital rights (n = 1)

Digital emotional intelligence (n = 1)

Digital teamwork (n = 1)

Big data utilization (n = 1)

Artificial Intelligence utilization (n = 1)

Virtual leadership (n = 1)

Self-disruption (in line with the pace of digitalization) (n = 1)

In addition to 'digital competency', five articles specifically coined the term 'digital competence as a life skill'. Deeper analysis demonstrated the following points: social competences (n = 4), communication in the mother tongue (n = 3) and in a foreign language (n = 2), entrepreneurship (n = 3), civic competence (n = 2), fundamental science (n = 1), technology (n = 1) and mathematics (n = 1) competences, learning to learn (n = 1) and self-initiative (n = 1).

Moreover, competencies were linked to workplace digital competencies in three articles and highlighted as significant for employability (n = 3) and 'economic engagement' (n = 3). Digital competencies were also detailed for leisure (n = 2) and communication (n = 2). Furthermore, two articles pointed to digital competencies as an inter-cultural competency and one as a cross-cultural competency. Lastly, 'digital nativity' (n = 1) was defined as someone's innate competency of feeling contented and satisfied with digital technologies.

Major theme 3: skills

The third major observed theme was ‘skills’, which was dominantly gathered around information literacy skills (n = 19) and information and communication technologies skills (n = 18). Table 5 demonstrates the categories with more than one occurrence.

Table 6 summarizes the sub-categories of the two most frequent categories of the 'skills' major theme. The information literacy skills noticeably concentrate on the steps of information processing: evaluating (n = 6), utilizing (n = 4), finding (n = 3), and locating (n = 2) information. Moreover, the importance of the trial/error process, being a lifelong learner, feeling a need for information and so forth were evidently listed under this sub-category. On the other hand, ICT skills were grouped around cognitive and affective domains. For instance, while technical skills in general, and the use of social media, coding, multimedia, chat or emailing in particular, were reported in the cognitive domain, attitude, intention, and belief towards ICT were mentioned as elements of the affective domain.

Communication skills (n = 9) were multi-dimensional for different societies, cultures, and globalized contexts, requiring linguistic skills. Collaboration skills (n = 9) are also recurrently cited with an explicit emphasis for virtual platforms.

'Ethics for the digital environment' encapsulated the ethical use of information (n = 4) and of different technologies (n = 2), knowing digital laws (n = 2) and responsibilities (n = 2) along with digital rights and obligations (n = 1), having digital awareness (n = 1), following digital etiquette (n = 1), and treating other people with respect (n = 1), including no cyber-bullying (n = 1) and not stealing from or damaging other people (n = 1).

'Digital fluency' involved digital access (n = 2) through the use of different software and hardware (n = 2) in online platforms (n = 1), communication tools (n = 1) or programming environments (n = 1). Digital fluency also underlined following recent technological advancements (n = 1) and knowledge (n = 1), including the digital health and wellness dimension (n = 1).

‘Social intelligence’ related to understanding digital culture (n = 1), the concept of digital exclusion (n = 1) and digital divide (n = 3). ‘Research skills’ were detailed with searching academic information (n = 3) on databases such as Web of Science and Scopus (n = 2) and their citation, summarization, and quotation (n = 2).

‘Digital teaching’ was described as a skill (n = 2) in Table 4 whereas it was also labelled as a competence (n = 1) as shown in Table 3 . Similarly, while learning to learn (n = 1) was coined under competencies in Table 3 , digital learning (n = 2, Table 4 ) and life-long learning (n = 1, Table 5 ) were stated as learning related skills. Moreover, learning was used with the following three terms: learning readiness (n = 1), self-paced learning (n = 1) and learning flexibility (n = 1).

Table 7 shows other categories listed below the ‘skills’ major theme. The list covers not only the software such as GIS, text mining, mapping, or bibliometric analysis programs but also the conceptual skills such as the fourth industrial revolution and information management.

Major theme 4: thinking

The last identified major theme was the different types of 'thinking'. As Table 8 shows, 'critical thinking' was the most frequent thinking category (n = 4). Except for computational thinking, the other categories were not detailed.

Computational thinking (n = 3) was associated with the general logic of how a computer works and was sub-categorized into the following steps: construction of the problem (n = 3), abstraction (n = 1), disintegration of the problem (n = 2), data collection (n = 2), data analysis (n = 2), algorithmic design (n = 2), parallelization and iteration (n = 1), automation (n = 1), generalization (n = 1), and evaluation (n = 2).
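To make these steps more tangible, the following toy sketch (our illustration, not drawn from any reviewed article) walks through decomposition, abstraction, algorithmic design and evaluation for a simple grading problem.

```python
# Toy illustration of the computational thinking steps listed above.

# Problem construction and disintegration: split "report class performance"
# into computing an average and flagging failing grades.

def average(grades):
    # Abstraction: ignore student names, keep only the numeric grades
    return sum(grades) / len(grades)

def failing(grades, threshold=50):
    # Algorithmic design: a simple filter over the collected data
    return [g for g in grades if g < threshold]

# Data collection and analysis
grades = [72, 45, 88, 63, 39]
print(average(grades))   # 61.4
print(failing(grades))   # [45, 39]

# Generalization and evaluation: the same functions work for any list of
# grades, and the printed outputs can be checked against expected values.
```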

A transversal analysis of digital literacy categories reveals the following fields of digital literacy application:

Technological advancement (IT, ICT, Industry 4.0, IoT, text mining, GIS, bibliometric analysis, mapping data, technology, AI, big data)

Networking (Internet, web, connectivity, network, safety)

Information (media, news, communication)

Creative-cultural industries (culture, publishing, film, TV, leisure, content creation)

Academia (research, documentation, library)

Citizenship (participation, society, social intelligence, awareness, politics, rights, legal use, ethics)

Education (life skills, problem solving, teaching, learning, education, lifelong learning)

Professional life (work, teamwork, collaboration, economy, commerce, leadership, decision making)

Personal level (critical thinking, evaluation, analytical thinking, innovative thinking)

This systematic review on digital literacy concentrated on forty-three articles from the databases of WoS/Clarivate Analytics, ProQuest Central, Emerald Management Journals, Jstor Business College Collections and Scopus/Elsevier. The initial results revealed an increasing trend in digital literacy focused academic papers. Research work on digital literacy is critical in a context of disruptive digital business and, more recently, of pandemic-triggered accelerated digitalisation (Beaunoyer, Dupéré & Guitton, 2020; Sousa & Rocha, 2019). Moreover, most of these papers employed qualitative research methods. The raw data of these articles were analysed qualitatively using systematic literature review to reveal major themes and categories. The four major themes that emerged are: digital literacy, digital competencies, digital skills and digital thinking.

Whereas the mainstream literature describes digital literacy as a set of photo-visual, real-time, information, branching, reproduction and social-emotional thinking (Eshet-Alkalai, 2012) or as a set of specific operations, i.e., finding, consuming, creating, communicating and sharing digital content (Heitin, 2016), this study reveals that digital literacy revolves around and is connected with the concepts of computer literacy, media literacy, cultural literacy and disciplinary literacy. In other words, the present systematic review indicates that digital literacy is far broader than specific tasks, encompassing the entire sphere of computer operation and media use in a cultural context.

The digital competence yardstick, DigComp (Carretero, Vuorikari & Punie, 2017 ) suggests that the main digital competencies cover information and data literacy, communication and collaboration, digital content creation, safety, and problem solving. Similarly, the findings of this research place digital competencies in relation to problem solving, safety, information processing, content creation and communication. Therefore, the findings of the systematic literature review are, to a large extent, in line with the existing framework used in the European Union.

The investigation of the main keywords associated with digital skills revealed that information literacy, ICT, communication, collaboration, digital content creation, research and decision-making skills are the most representative. In a structured way, the existing literature groups these skills into technological, cognitive, and social categories (Ng, 2012) or, more extensively, into operational, formal, information Internet, strategic, communication and content creation skills (van Dijk & van Deursen, 2014). In time, the literature has become richer in frameworks, and prolific authors have refined their results. As such, more recent research (van Laar et al., 2017) uses the following categories: technical, information management, communication, collaboration, creativity, critical thinking, and problem solving.

Whereas digital thinking was observed to be mostly related with critical thinking and computational thinking, DigComp connects it with critical thinking, creativity, and innovation, on the one hand, and researchers highlight fake news, misinformation, cybersecurity, and echo chambers as exponents of digital thinking, on the other hand (Sulzer, 2018 ; Puig, Blanco-Anaya & Perez-Maceira, 2021 ).

This systematic review research study aims to offer an initial step and guideline for the development of a more contemporary digital literacy framework covering the major themes and factors of digital literacy. The researchers provide the following recommendations for both researchers and practitioners.

Recommendations for prospective research

By considering the major qualitative research trend, it seems apparent that more quantitative research-oriented studies are needed. Although it requires more effort and time, mixed method studies will help understand digital literacy holistically.

As digital literacy is an umbrella term for many different technologies, specific case studies need to be designed, such as digital literacy for artificial intelligence or digital literacy for drone usage.

Digital literacy affects different areas of human lives, such as education, business, health, governance, and so forth. Therefore, different case studies could be carried out for each of these unique dimensions of our lives. For instance, it is worth investigating the role of digital literacy on lifelong learning in particular, and on education in general, as well as the digital upskilling effects on the labour market flexibility.

Further experimental studies on digital literacy are necessary to realize how certain variables (for instance, age, gender, socioeconomic status, cognitive abilities, etc.) affect this concept overtly or covertly. Moreover, the digital divide issue needs to be analysed through the lens of its main determinants.

New bibliometric analysis methods can be applied to digital literacy documents to reveal more information on how these works are related and on which major topics they are centred. This visual approach will help reveal the big picture within the digital literacy framework.
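As an illustration of the kind of analysis meant here (a sketch under assumed data, not a prescription; dedicated tools such as VOSviewer are commonly used in practice), keyword co-occurrence counts can be computed as follows:

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword sets, one per document
documents = [
    {"digital literacy", "systematic review", "education"},
    {"digital skills", "digital literacy", "employability"},
    {"digital competence", "education", "digital literacy"},
]

co_occurrence = Counter()
for keywords in documents:
    for pair in combinations(sorted(keywords), 2):
        co_occurrence[pair] += 1

# The most frequent pairs indicate which topics the literature links most often
print(co_occurrence.most_common(3))
```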

Recommendations for practitioners

The digital literacy stakeholders, policymakers in education and managers in private organizations need to be aware that there are many dimensions and variables involved in the implementation of digital literacy. Stakeholders must therefore understand their beneficiaries or participants more deeply to increase the effect of digital literacy related activities. For example, critical thinking and problem-solving skills and abilities are mentioned as affecting digital literacy. Hence, stakeholders first have to understand whether the participants have sufficient entry-level critical thinking and problem-solving skills.

Development of digital literacy for different groups of people requires more energy, since each group might require a different set of skills, abilities, or competencies. Hence, different subject matter experts, such as technologists, instructional designers, content experts, should join the team.

It is vital to develop different digital frameworks for different technologies (basic or advanced) and different contexts (different levels of schooling or various industries).

These frameworks should be updated regularly as digital fields are evolving rapidly. Every year, committees should gather to understand new technological trends and decide whether to incorporate the changes into their frameworks.

Understanding digital literacy in a thorough manner can enable decision makers to correctly implement and apply policies addressing the digital divide that is reflected onto various aspects of life, e.g., health, employment and education, especially in turbulent times such as the COVID-19 pandemic.

Lastly, it is also essential to state the study limitations. This study is limited to the analysis of a certain number of papers, obtained from using the selected keywords and databases. Therefore, an extension can be made by adding other keywords and searching other databases.

Availability of data and materials

The authors present the articles used for the study in “ Appendix A ”.

Baber, H., Fanea-Ivanovici, M., Lee, Y. T., & Tinmaz, H. (2022). A bibliometric analysis of digital literacy research and emerging themes pre-during COVID-19 pandemic. Information and Learning Sciences . https://doi.org/10.1108/ILS-10-2021-0090 .


Beaunoyer, E., Dupéré, S., & Guitton, M. J. (2020). COVID-19 and digital inequalities: Reciprocal impacts and mitigation strategies. Computers in Human Behavior, 111, 106424. https://doi.org/10.1016/j.chb.2020.106424

Blau, I., Shamir-Inbal, T., & Avdiel, O. (2020). How does the pedagogical design of a technology-enhanced collaborative academic course promote digital literacies, self-regulation, and perceived learning of students? The Internet and Higher Education, 45 , 100722. https://doi.org/10.1016/j.iheduc.2019.100722

Carretero, S., Vuorikari, R., & Punie, Y. (2017). DigComp 2.1: The Digital Competence Framework for Citizens with eight proficiency levels and examples of use (No. JRC106281). Joint Research Centre, https://publications.jrc.ec.europa.eu/repository/handle/JRC106281

Eshet, Y. (2004). Digital literacy: A conceptual framework for survival skills in the digital era. Journal of Educational Multimedia and Hypermedia , 13 (1), 93–106, https://www.learntechlib.org/primary/p/4793/

Eshet-Alkalai, Y. (2012). Thinking in the digital era: A revised model for digital literacy. Issues in Informing Science and Information Technology, 9 (2), 267–276. https://doi.org/10.28945/1621

Ferrari, A. (2012). Digital competence in practice: An analysis of frameworks. JRC IPTS, Sevilla. https://ifap.ru/library/book522.pdf

Fraenkel, J. R., Wallen, N. E., & Hyun, H. H. (2012). How to design and evaluate research in education (8th ed.). Mc Graw Hill.


Heitin, L. (2016). What is digital literacy? Education Week, https://www.edweek.org/teaching-learning/what-is-digital-literacy/2016/11

Helsper, E. J., & Eynon, R. (2013). Distinct skill pathways to digital engagement. European Journal of Communication, 28 (6), 696–713. https://doi.org/10.1177/0267323113499113

Kowalczyk, N., & Truluck, C. (2013). Literature reviews and systematic reviews: What is the difference? Radiologic Technology, 85(2), 219–222.

Ng, W. (2012). Can we teach digital natives digital literacy? Computers & Education, 59 (3), 1065–1078. https://doi.org/10.1016/j.compedu.2012.04.016

Ozkan-Ozen, Y. D., & Kazancoglu, Y. (2021). Analysing workforce development challenges in the Industry 4.0. International Journal of Manpower . https://doi.org/10.1108/IJM-03-2021-0167

Puig, B., Blanco-Anaya, P., & Perez-Maceira, J. J. (2021). “Fake News” or Real Science? Critical thinking to assess information on COVID-19. Frontiers in Education, 6 , 646909. https://doi.org/10.3389/feduc.2021.646909

Robinson, L., Cotten, S. R., Ono, H., Quan-Haase, A., Mesch, G., Chen, W., Schulz, J., Hale, T. M., & Stern, M. J. (2015). Digital inequalities and why they matter. Information, Communication & Society, 18 (5), 569–582. https://doi.org/10.1080/1369118X.2015.1012532

Robinson, P., & Lowe, J. (2015). Literature reviews vs systematic reviews. Australian and New Zealand Journal of Public Health, 39 (2), 103. https://doi.org/10.1111/1753-6405.12393

Sousa, M. J., & Rocha, A. (2019). Skills for disruptive digital business. Journal of Business Research, 94 , 257–263. https://doi.org/10.1016/j.jbusres.2017.12.051

Sulzer, A. (2018). (Re)conceptualizing digital literacies before and after the election of Trump. English Teaching: Practice & Critique, 17 (2), 58–71. https://doi.org/10.1108/ETPC-06-2017-0098

Tinmaz, H., Fanea-Ivanovici, M., & Baber, H. (2022). A snapshot of digital literacy. Library Hi Tech News, (ahead-of-print).

Uman, L. S. (2011). Systematic reviews and meta-analyses. Journal of the Canadian Academy of Child and Adolescent Psychiatry, 20 (1), 57–59.

Van Deursen, A. J. A. M., Helsper, E. J., & Eynon, R. (2015). Development and validation of the Internet Skills Scale (ISS). Information, Communication & Society, 19 (6), 804–823. https://doi.org/10.1080/1369118X.2015.1078834

Van Deursen, A. J. A. M., & van Dijk, J. A. G. M. (2009). Using the internet: Skills related problems in users’ online behaviour. Interacting with Computers, 21 , 393–402. https://doi.org/10.1016/j.intcom.2009.06.005

Van Deursen, A. J. A. M., & van Dijk, J. A. G. M. (2010a). Measuring internet skills. International Journal of Human-Computer Interaction, 26 (10), 891–916. https://doi.org/10.1080/10447318.2010.496338

Van Deursen, A. J. A. M., & van Dijk, J. A. G. M. (2010b). Internet skills and the digital divide. New Media & Society, 13 (6), 893–911. https://doi.org/10.1177/1461444810386774

van Dijk, J. A. G. M., & Van Deursen, A. J. A. M. (2014). Digital skills, unlocking the information society . Palgrave MacMillan.

van Laar, E., van Deursen, A. J. A. M., van Dijk, J. A. G. M., & de Haan, J. (2017). The relation between 21st-century skills and digital skills: A systematic literature review. Computer in Human Behavior, 72 , 577–588. https://doi.org/10.1016/j.chb.2017.03.010


This research is funded by Woosong University Academic Research in 2022.

Author information

Authors and affiliations

AI & Big Data Department, Endicott College of International Studies, Woosong University, Daejeon, South Korea

Hasan Tinmaz

Endicott College of International Studies, Woosong University, Daejeon, South Korea

Yoo-Taek Lee

Department of Economics and Economic Policies, Bucharest University of Economic Studies, Bucharest, Romania

Mina Fanea-Ivanovici

Abu Dhabi School of Management, Abu Dhabi, United Arab Emirates

Hasnan Baber


Contributions

The authors worked together on the manuscript equally. All authors have read and approved the final manuscript.

Corresponding author

Correspondence to Hasnan Baber .

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Tinmaz, H., Lee, Y.T., Fanea-Ivanovici, M. et al. A systematic review on digital literacy. Smart Learn. Environ. 9, 21 (2022). https://doi.org/10.1186/s40561-022-00204-y


Received : 23 February 2022

Accepted : 01 June 2022

Published : 08 June 2022

DOI : https://doi.org/10.1186/s40561-022-00204-y


Keywords

  • Digital literacy
  • Systematic review
  • Qualitative research

Heliyon, 8(3), March 2022

Influence of computers in students’ academic achievement

Associated data

Data will be made available on request.

With fast-growing technology, schools have to adapt and constantly use technology as a tool to grow. This study aims to understand the influence of computer factors on students' academic achievement. We propose a model of the influence of computer attitudes, computer learning environments, computer learning motivations, computer confidence, computer use, computer self-efficacy, loneliness, mothers' education, parents' marital status and family size on academic achievement (AA). To validate the conceptual model, 286 students aged 16–18 answered an online questionnaire. The most important drivers that positively affect AA are computer use, employment motivations, and mothers' education, while enjoyment attitudes, school environment, interest motivations, and loneliness influence AA negatively. Also, family size and computer self-efficacy work as moderators, and computer use works as a mediator between computer learning environments and academic achievement.

Academic achievement; Computers; Family; Learning; Students.

1. Introduction

Countries are constantly facing ever-changing economic challenges and social transformations due to globalisation and technology development. Education helps overcome these challenges by developing knowledge and high-level skills, allowing better opportunities and faster economic progression (OECD, 2019). Computers and information technology have become key to educational institutions worldwide (Hsu and Huang, 2006). With the advantages of the digital era through digital markets and advanced scientific and social networks, there is growth in innovation, development and employment (OECD, 2015). Education needs to adapt to social changes, students' needs, and technology growth (OECD, 2019); the perfect example of this adaptation is the recent pandemic. The COVID-19 pandemic ("CO" for corona, "VI" for virus, "D" for disease, and "19" for 2019) started in December 2019 in Wuhan, China. It is caused by a highly contagious virus that has already claimed millions of lives worldwide (Roy et al., 2020). The virus forced schools to close, and since classes had to continue, teachers and students had to adapt, resorting to virtual classes (Ng and Peggy, 2020). However, it impacted academic life in yet unknown dimensions (Rajkumar, 2020).

Digital technology provides access to high-quality learning and consequently allows schools to develop their teaching and learning methods (Ertmer et al., 2012). Nonetheless, access to computers or the internet at home is not equal in every dwelling, and some students have the disadvantage of lacking parental support or engagement to learn by themselves online. For these reasons, the pandemic can bring tremendous advantages in digital education and academic achievement or significant disadvantages, mostly in developing countries. Therefore, access to technology is not enough; fostering a close relationship between families and teachers is essential (OECD, 2020). Technology has been an invaluable tool, and it is being taken into consideration in students' academic achievement, including not only access to the internet but also the way students use it (Levine and Donitsa-Schmidt, 1998; Torres-Díaz et al., 2016; Voogt et al., 2013). Schools are expected to have a particular concern regarding integrating computers in classroom teaching (Schmid and Petko, 2019), and technical devices such as computers, laptops, tablets and mobile phones should be included wisely in adolescent education. Given this background, this study was motivated mainly by the current pandemic context and the important role technology plays in academic achievement.

Over the years, researchers have tried to identify the variables that contribute to academic excellence in an attempt to understand which factors lead to better students' performance ( Valli Jayanthi et al., 2014 ). A vast number of studies have been conducted to identify predictors of academic achievement ( Gonzalez-pienda et al., 2002 ; J. Lee, Shute and Lee, 2010 ; Suárez-álvarez et al., 2014 ) although few have studied computer influences on the prediction of students' academic achievement.

Since there is a need to extend innovations in education (Admiraal et al., 2017), we identified a need to investigate how students' relationships with computers impact their academic performance, in order to understand the real impact of computers on schooling. To the best of our knowledge, some studies address computers' impact on academic achievement, but the data available are not totally enlightening. In the actual context of the pandemic, this subject gains additional importance, comparing technology use and academic achievement (AA) in such a tumultuous time for the world. This study presents three contributions. Firstly, it identifies the best computer-related determinants of AA through a research model that links computer-related variables to students' grades. In this way, we identify the factors that lead to better academic achievement, helping schools and parents use them as a strategic advantage. Secondly, it investigates the moderation effect of family size and computer self-efficacy and the mediation effect of computer use between the factors identified and AA. Finally, using the variable loneliness, we explore how the forced social isolation of the COVID-19 pandemic affected computer use and students' academic achievement.

A literature review is presented in the next section. Section 3 introduces a theoretical model explaining academic achievement. Section 4 elucidates the data-collection methods, followed by the results in Section 5. The results are discussed in Section 6, and conclusions are outlined in the final section.

2. Literature review and hypotheses

2.1. Computer attitudes

Attitudes and perceptions play a pivotal role in learning behaviours. Some researchers tested a model based on the concept of attitude-behaviour theory, which argues that beliefs lead to attitudes, and attitudes are an essential factor in predicting behaviour (Levine and Donitsa-Schmidt, 1998). They predicted that computer use leads to more computer confidence and positive attitudes towards computers, and that these elements influence each other. Computer attitudes refer to students' opinions about those who use computers the most (stereotypes), the use of computers for educational purposes (educational), and the use of computers for fun (enjoyment). In their view, student achievement is a reflection of their behaviour in school. Even with the change of technology over time, recent studies support their theory that positive computer attitudes and positive computer confidence continue to lead to better outcomes (Lee et al., 2019). Stereotypes associated with computers are usually about gender, promoting the idea that women have less computer knowledge than men (Punter et al., 2017). However, there are no results on whether other stereotypes, such as the supposed lack of computer use by athletes or the notion that people who use computers are 'nerds', negatively affect the confidence of those who use computers.

Regarding attitudes towards the enjoyment and educational use of computers, there is no consensus in the literature. Some researchers found a positive association between students' academic achievement and computer use for interactive social media and video gaming, as well as for educational purposes (Bowers and Berland, 2013; Tang and Patrick, 2018). Other researchers have found that students who play more videogames have worse results in school (Bae and Wickrama, 2015), while some previous studies suggest that technology interventions have a positive effect on students' attitudes toward the use of computers for educational purposes (Gibson et al., 2014). Others raise concerns about the effects of technology and social media use on students' outcomes and confirm that students with lower grades spend more time using computers for fun (Bae and Wickrama, 2015; Tang and Patrick, 2018), while others find no evidence that using computers for fun causes higher or lower achievement (Hamiyet, 2015). Milani et al. (2019) demonstrated that computer use with moderate levels of video gaming may improve student achievement because it increases visual-spatial skills, when complemented with educational uses such as homework, extracurricular activities, and reading (Bowers and Berland, 2013). Regarding the effect on computer confidence, we expect students to feel confident about using computers when using them for school (Claro et al., 2012) and even more when using them for recreational purposes. Taking this background into account, we propose the following hypotheses.

Educational attitudes have a positive effect on computer confidence.

Educational attitudes have a positive effect on academic achievement.

Stereotype attitudes have a negative effect on computer confidence.

Enjoyment attitudes have a positive effect on computer confidence.

Enjoyment attitudes have a negative effect on academic achievement.

2.2. Learning environments and motivations

The environment where students learn can affect their attitudes ( Hsu and Huang, 2006 ). Studies have found that students achieve higher grades when they have a computer at home ( Fairlie, 2012 ; Fairlie et al., 2010 ) and use it daily to facilitate their school work ( Gu and Xu, 2019 ), suggesting that home computers improve educational outcomes and computer skills, leading to more efficient use of computers ( Fairlie and London, 2012 ). Many researchers pointed to a positive impact of computer use in schools on students' educational outcomes ( Bayrak and Bayram, 2010 ; Murillo-Zamorano et al., 2019 ; Xiao and Sun, 2021 ). The integration of computers in the classroom positively influences the interaction between students and increases learning and teaching ( Murillo-Zamorano et al., 2019 ). Experimental class manipulations using a computer in class were tested over the years, with positive results: students' academic achievement increases when a computer assists them in learning ( Bayrak and Bayram, 2010 ). However, most students show dissatisfaction with the learning environment of schools ( Hsu and Huang, 2006 ). So, we propose that home and school environments positively influence computer use in general and student achievement particularly, as hypothesised below.

Home environments have a positive effect on computer use.

Home environments have a positive effect on academic achievement.

Computer use mediates the effect of home environment on academic achievement

School environments have a positive effect on computer use.

School environments have a positive effect on academic achievement

Computer use mediates the effect of school environment on academic achievement

Regarding motivations, several types of motivations have already been studied to predict academic achievement, and the best predictor so far is associated with interest. If students are interested, they will engage in the activity independently, and there is also evidence that interest motivations directly affect reading achievement (Habók et al., 2020). When analysing students' motivations for using computers, studies show that using computers at school and for schoolwork results in higher motivation when studying and positively impacts academic achievement (Partovi and Razavi, 2019). Likewise, when students' perceptions of learning motivations improve, computer use by the students increases and, as a result, their computer self-efficacy (their perceived skill in computer use; Rohatgi et al., 2016) is enhanced indirectly (Hsu and Huang, 2006). Therefore, in order to increase computer self-efficacy, students need to use computers more frequently. Previous results indicate that interest motivations positively affect computer use and computer self-efficacy, predicting that when students' interest in computers is higher, their computer self-efficacy increases. Students are also motivated by employment and recognise that computer abilities can help them get a good job (Hsu and Huang, 2006). This factor can be predicted by self-efficacy because it defines the confidence and ability to achieve success (Serge et al., 2018). A study showed that learners who are more engaged and motivated use more technology for their learning purposes, more likely for individual learning than for collaborative tasks (Lee et al., 2019). Regarding the use of technology, students who use it more are more motivated to do so and have better grades (Higgins, Huscroft-D'Angelo and Crawford, 2019), and students who are motivated by attaining better grades tend to use e-learning more (Dunn and Kennedy, 2019). In line with the literature, we expect the confirmation of the hypotheses presented below.

Interest motivations have a positive effect on computer use.

Interest motivations have a positive effect on academic achievement.

Interest motivations have a positive effect on computer self-efficacy.

Employment motivations have a positive effect on computer self-efficacy.

Employment motivations have a positive effect on academic achievement.

2.3. Computer confidence, computer use & computer self-efficacy

Hands-on experience with technology is the most important factor in increasing students' confidence while using it and consequently increasing their perceived computer self-efficacy ( Hatlevik and Bjarnø, 2021 ). Students with access to a computer are more involved and interested in their classwork ( Gibson et al., 2014 ). Higher commitment to school, curiosity, and positivism can help students develop motivation and interest in school subjects, leading to higher self-efficacy and consequently better academic achievement ( Stajkovic et al., 2018 ).

Computer use has a positive effect on computer confidence.

Computer confidence has a positive effect on computer self-efficacy.

Computer confidence has a positive effect on academic achievement.

Computer use has a positive effect on academic achievement.

We know from the previous literature that employment motivations positively influence academic achievement, and that computer self-efficacy is also a significant factor influencing employment (Serge et al., 2018), so we believe that computer self-efficacy can moderate this relation, as proposed in H14.

Computer self-efficacy moderates the effect of employment motivations on academic achievement.
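A moderation hypothesis of this kind is commonly probed with an interaction term. The sketch below is only illustrative and is not the authors' estimation procedure; the data file and variable names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data with one row per student
data = pd.read_csv("survey.csv")  # columns: aa, employment_motivation, computer_self_efficacy

# The '*' operator adds both main effects and their interaction term
model = smf.ols(
    "aa ~ employment_motivation * computer_self_efficacy",
    data=data,
).fit()

# A significant interaction coefficient would suggest that computer
# self-efficacy moderates the effect of employment motivation on AA
print(model.summary())
```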

2.4. Loneliness

Due to the coronavirus pandemic, schools were closed to slow down the virus transmission as a control measure, affecting half of the students globally (Viner et al., 2020). Schools were forced to adapt during coronavirus outbreaks since campus classes were suspended, and online platforms have been exploited to conduct virtual classes (Ng and Peggy, 2020). Ng and Peggy (2020) state that virtual classes can improve students' learning outcomes if all students are self-disciplined. However, self-isolation may affect people's mental health (Roy et al., 2020), primarily impacting adolescents and influencing their behaviours and achievement in academic pursuits. Interaction with others is a pivotal factor in academic performance, since students who engage with colleagues and teachers tend to have more academic success than those who study by themselves (Torres-Díaz et al., 2016). Loneliness or social isolation is linked to anxiety and self-esteem (Helm et al., 2020), leading to unhealthy smartphone use (Shen and Wang, 2019) and sedentary behaviours (Werneck et al., 2019), motivating us to posit the following.

Loneliness has a negative effect on academic achievement.

2.5. Family and students' factors

Technology use is linked to additional factors that influence adolescents' academic outcomes, such as family socioeconomic factors, in particular parents' occupation, marital status ( Abosede and Akintola, 2016 ; Asendorpf and Conner, 2012 ), parents' educational level ( Chesters and Daly, 2017 ) and family size, and student socio-emotional factors, such as relationships with colleagues, student motivation and anxiety ( Balogun et al., 2017 ). Family involvement and closeness to children have positive impacts on their achievements ( Fang, 2020 ), so we believe that the relation between using computers in a school environment and academic achievement, examined above, may change depending on family size. Also, we know from previous results that computer use has increased with the pandemic due to online classes and that the family context has a significant impact on home computer use, so we predict a moderation effect on the relation between computer use and academic achievement. The psychological status of parents, mostly their marital status and economic status, has a powerful association with the family environment and, consequently, with their child's educational attainment ( Poon, 2020 ). We predict a positive impact of mothers' education on academic achievement, since the maternal figure is the most relevant for children ( Abosede and Akintola, 2016 ): the higher the mother's level of education, the better the student's results at school. We also predict that having married parents has a positive influence on students' results, H15 and H16 .

H14a. Family size moderates the effect of the school environment on academic achievement.

H14b. Family size moderates the effect of computer use on academic achievement.

H15. Parents' marital status has a positive effect on academic achievement.

H16. Mothers' education has a positive effect on academic achievement.

Students' grades can differ according to their age and gender independently of their family characteristics: female students tend to achieve higher scores than male students ( Valli Jayanthi et al., 2014 ) and older students show lower grades compared to younger students ( Chowa et al., 2015 ). Some of these factors are not of primary interest for this study. Nevertheless, it is crucial to include them in the research to control for bias, since they influence the association between the use of technology and adolescents' outcomes ( Tang and Patrick, 2018 ). We have therefore used age and gender as control variables in our research model.

2.6. Conceptual model

Figure 1 illustrates our proposed model. We focus our research on computers and their influence on academic achievement. The drivers shown in the research model emerged from the literature above. We first gathered information and identified the main factors that influence academic achievement through computer use and, from the most significant constructs relating to computers and academic achievement, examined and analysed their viability for the study. From the computer context, the most significant constructs found were computer attitudes (educational attitudes, enjoyment attitudes, stereotype attitudes), computer use, computer confidence ( Levine and Donitsa-Schmidt, 1998 ), computer self-efficacy, learning environments (home environment, school environment) and learning motivations (interest motivations, employment motivations) ( Hsu and Huang, 2006 ). We identified loneliness as the most relevant construct from the pandemic context, considering its impact on academic achievement ( Helm et al., 2020 ). We identified mothers' education, marital status and family size as the most relevant influencers from the family context. Finally, with our central construct, academic achievement, we try to understand how it is impacted by computers, the pandemic and family factors from the students' point of view. The proposed model thus tries to predict academic achievement through students' computer attitudes, learning environments, learning motivations, computer confidence, computer use, computer self-efficacy and loneliness, adding sociodemographic data related to students and their families (parents' marital status, mothers' education and family size, where the latter works only as a moderator) and two control variables, age and gender. This model integrates several constructs from the literature relevant to the study of computers' influence on academic achievement, since it is essential to fortify and unify the knowledge in this investigation field. As explained above, the model merges two existing models ( Hsu and Huang, 2006 ; Levine and Donitsa-Schmidt, 1998 ), allowing us to update previous results and test new hypotheses. Additionally, the integration of the COVID-19 pandemic context brings a different and important analysis of today's reality.

Figure 1

Conceptual model.

3.1. Participants and procedure

For this study, we developed a questionnaire for students enrolled in public high schools. The survey, with an estimated completion time of 8 min, was sent by e-mail to several schools in Portugal to achieve more diversity within the collected answers. The participants consented to the use of their information as long as it was anonymous and confidential. The questionnaire was answered online and comprised 26 closed questions (please see Appendix A ) inquiring about computer attitudes, motivations, use at home and school, frequency of use, students' grade average from 0 to 20 marks, and sociodemographic information. With these data, we can compare and analyse the impact of students' type of use of, and opinions about, computers on their achievement in school. The study's target population were 16 to 18-year-old adolescents in the 10 th , 11 th and 12 th grades at secondary schools. This range allowed us to capture a group of students with similar maturity and similar needs in digital use. We chose to study public school students because teaching methods in private schools are quite different, as are the types of students and families who choose private schools. Also, most students in Portugal study at public schools, so it seemed more coherent and more feasible to study public education only. The project was reviewed by the Ethics Committee of NOVA IMS and the MagIC Research Center and was considered to meet the requirements of their regulations, being approved.

A pilot test with 30 answers allowed us to assess the viability of some survey questions and their order; afterwards, when evaluating the model, the strength of the constructs led us to drop a few items due to their lack of importance and weak correlations. The pilot test allowed us to improve the questionnaire to facilitate answering and to adapt the research model initially built. After the complete collection of data, we considered only fully completed student responses, amounting to 286 valid responses out of a total of 465 answers. Among the respondents there were 98 boys and 188 girls, with an average age of 17 years and an average global grade of 15 points (on a scale from 0 to 20). Students' academic achievement was measured through their average grades in reading, in mathematics and overall. Computer use was measured on a frequency scale from 1 (never) to 5 (every day). A 3-item loneliness scale ( Hughes et al., 2004 ), based on the UCLA Loneliness Scale ( Russel, 1996 ), was used to assess the loneliness construct. This scale has been used in several recent studies ( Helm et al., 2020 ; Liu et al., 2020 ; Shen and Wang, 2019 ) of loneliness as a consequence of the coronavirus. The remaining items, apart from the demographic variables (age, gender, marital status, mothers' education, family size), were measured on a scale from 1 (strongly disagree) to 5 (strongly agree).
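As an illustration of how these measures can be operationalised, the minimal sketch below scores the loneliness construct and pairs it with the frequency-of-use and grade variables. The column names and values are hypothetical placeholders, not the actual item labels or data from Appendix A.

```python
import pandas as pd

# Hypothetical item names and values; the real wording and coding come from Appendix A.
df = pd.DataFrame({
    "lonely_1": [2, 3, 1],               # 3-item loneliness scale (Hughes et al., 2004)
    "lonely_2": [1, 3, 2],               # coded so that higher values mean lonelier
    "lonely_3": [2, 2, 1],
    "computer_use_freq": [5, 3, 4],      # 1 = never ... 5 = every day
    "grade_global": [15.2, 12.0, 17.5],  # average grade on the 0-20 scale
})

# The loneliness score is taken here as the mean of its three items.
df["loneliness"] = df[["lonely_1", "lonely_2", "lonely_3"]].mean(axis=1)
print(df[["loneliness", "computer_use_freq", "grade_global"]])
```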

4. Analysis and results

We used structural equation modelling (SEM) to test the relations estimated in our theoretical model and their effects ( Marsh et al., 2004 ). Specifically, we applied partial least squares (PLS), a method used to develop theories in explanatory research. PLS-SEM aims to maximise the explained variance of the dependent constructs and to evaluate data quality; it handles larger sample sizes and more complex models while imposing less restrictive assumptions on the data (Joe F Hair et al., 2014 ). We followed the recommended two-step approach, which first tests the reliability and validity of the measurement model and then assesses the structural model ( Anderson and Gerbing, 1988 ).

4.1. Measurement model

Measurement models measure the relation between the latent variables and their indicators for both reflective and formative constructs. In this study, all constructs are reflective except computer use, which is formative.

The internal consistency, convergent validity and discriminant validity must be verified to assess the reflective measurement model. The composite reliability (CR), shown in Appendix B, is higher than 0.7 for all constructs, reflecting internal consistency ( Mcintosh et al., 2014 ). Also, the loadings of the items are all higher than 0.6, so we can conclude there is indicator reliability. To demonstrate convergent validity, we verify the average variance extracted (AVE) values of the constructs, which are all higher than 0.5 (please see Appendix B), confirming convergent validity ( Sarstedt et al., 2017 ). To analyse discriminant validity, we applied three methods: the Fornell-Larcker criterion, the loadings and cross-loadings analysis, and the heterotrait-monotrait ratio (HTMT). The Fornell-Larcker criterion requires that the square root of each construct's AVE be higher than its correlations with the other constructs ( Fornell and Larcker, 1981 ), which Appendix B confirms. The second criterion requires that the loadings be higher than the respective cross-loadings (Joseph F Hair et al., 2014 ), which is observed in Appendix C. The HTMT method requires that the HTMT values be lower than 0.9 (Joseph F Hair et al., 2017 ; Sarstedt et al., 2017 ), which Appendix D confirms. Thus, all the constructs have discriminant validity.
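For readers less familiar with these statistics, the following minimal sketch shows how composite reliability, AVE and the Fornell-Larcker check can be computed from standardized loadings. The loading and correlation values are illustrative and are not those reported in Appendix B.

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    loadings = np.asarray(loadings, dtype=float)
    errors = 1.0 - loadings ** 2          # error variance of standardized indicators
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings):
    """AVE = mean of squared standardized loadings."""
    loadings = np.asarray(loadings, dtype=float)
    return (loadings ** 2).mean()

# Illustrative loadings for one reflective construct (not the study's actual values).
loadings = [0.78, 0.84, 0.71]
cr, ave = composite_reliability(loadings), average_variance_extracted(loadings)
print(f"CR = {cr:.3f} (threshold 0.7), AVE = {ave:.3f} (threshold 0.5)")

# Fornell-Larcker: sqrt(AVE) of a construct must exceed its correlations with the others.
construct_corr = 0.45                      # illustrative inter-construct correlation
print("Discriminant validity:", np.sqrt(ave) > construct_corr)
```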

In order to assess the validity of the formative construct computer use, we assessed the model for multicollinearity using the variance inflation factor (VIF). Table 1 shows that the VIF values are all under the threshold of 5 (Joseph F Hair et al., 2017 ), so the model does not have multicollinearity problems. In terms of significance, the three items are statistically significant (p < 0.05), as Table 1 confirms, so the formative construct is reliable.

Table 1

Formative measurement model evaluation.

Note: ∗p < 0.05, ∗∗p < 0.01, ∗∗∗p < 0.001.
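A collinearity check of this kind can be reproduced with standard regression tooling. The sketch below computes VIF values for three hypothetical formative computer-use indicators with statsmodels; the data are randomly generated for illustration only, so the printed values do not correspond to Table 1.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Illustrative data for three hypothetical formative computer-use indicators.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.normal(size=(286, 3)), columns=["use_1", "use_2", "use_3"])

X = sm.add_constant(items)
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=items.columns,
)
print(vif)  # values below 5 indicate no collinearity problem among the indicators
```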

We can conclude that both reflective and formative constructs present a good measurement model. For this reason, we can move to the structural model.

4.2. Structural model

To estimate the structural model, first, we assessed the VIF to check the model for multicollinearity issues. The VIF values are below the threshold of 5 ( Sarstedt et al., 2017 ), so the model does not have multicollinearity problems. To evaluate the statistical significance of the path coefficients, we did a bootstrap with 5000 resamples. Results from the model are presented in Figure 2 .

Figure 2

Conceptual model results.
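The bootstrap procedure used to assess the significance of the path coefficients can be illustrated in simplified form. A full PLS-SEM bootstrap re-estimates all outer weights and path coefficients in every resample; the sketch below applies the same resampling logic to a single standardized path, with synthetic data standing in for the survey responses.

```python
import numpy as np

def bootstrap_path(x, y, n_boot=5000, seed=0):
    """Bootstrap the standardized slope of y on x; return the coefficient,
    its bootstrap standard error and an approximate t statistic."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    beta = np.corrcoef(x, y)[0, 1]               # standardized simple path
    n = len(x)
    boots = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)              # resample cases with replacement
        boots[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    se = boots.std(ddof=1)
    return beta, se, beta / se

# Illustrative data: computer use vs. academic achievement (not the study's data).
rng = np.random.default_rng(1)
use = rng.normal(size=286)
grade = 0.25 * use + rng.normal(size=286)
beta, se, t = bootstrap_path(use, grade)
print(f"beta = {beta:.3f}, bootstrap SE = {se:.3f}, t = {t:.2f}")  # |t| > 1.96 => p < 0.05
```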

The model explains 30.5% of computer confidence. Educational attitudes (β = 0.307, p < 0.001), stereotype attitudes (β = - 0.160, p < 0.01), enjoyment attitudes (β = 0.236, p < 0.001) and computer use (β = 0.136, p < 0.05) are statistically significant in explaining computer confidence, confirming hypotheses H1a, H2, H3a and H8. The explained variance of computer use is 42.5%. The results show that home environment (β = 0.421, p < 0.001), school environment (β = 0.317, p < 0.05) and interest motivations (β = 0.124, p < 0.05) are statistically significant and have a positive influence on computer use, thus hypotheses H4a, H5a and H6a are supported. The model explains 35.8% of computer self-efficacy. The home environment construct (β = 0.200, p < 0.01), interest motivations (β = - 0.156, p < 0.05) and employment motivations (β = 0.217, p < 0.01) are statistically significant; however, while home environment and employment motivations show a positive influence on computer self-efficacy, supporting hypotheses H4c and H7a, interest motivations show a negative influence where we expected a positive one, rejecting H6c.

The model explains 31.1% of students' academic achievement. Enjoyment attitudes (β = - 0.162, p < 0.05), employment motivations (β = 0.183, p < 0.05), computer use (β = 0.257, p < 0.05), loneliness (β = - 0.150, p < 0.05) and mother's education (β = 0.135, p < 0.05) are statistically significant in explaining academic achievement, supporting hypotheses H3b, H7b, H11, H13 and H16. We reject hypotheses H5b and H6b, despite school environment (β = - 0.246, p < 0.001) and interest motivations (β = - 0.159, p < 0.05) being statistically significant, because we suggested that school environment and interest motivations would positively influence academic achievement, whereas the results show a negative influence. Educational attitudes (β = - 0.003, p > 0.05), home environment (β = 0.100, p > 0.05), computer confidence (β = 0.105, p > 0.05) and parental marital status (β = 0.067, p > 0.05) show a non-significant effect on academic achievement, rejecting H1b, H4b, H10 and H15. The moderation effect of computer self-efficacy on the relation between employment motivations and academic achievement (β = 0.108, p < 0.05) is statistically significant, supporting H12. The moderation effects of family size on the school environment (β = 0.141, p < 0.05) and on computer use (β = - 0.233, p < 0.01) are statistically significant, supporting H14a and H14b.

Table 2 summarises the research hypotheses results. We can conclude that 17 of the 25 proposed hypotheses were supported.

Table 2

Research hypotheses results.

Notes: n.a. - not applicable; ∗ significant at p < 0.05; ∗∗ significant at p < 0.01; ∗∗∗ significant at p < 0.001.

5. Discussion

This research model contributes to and extends the literature on computers and academic achievement. This study relates academic achievement with loneliness, family variables and computer-related variables such as computer confidence, computer self-efficacy, computer attitudes, computer learning motivations and computer learning environments.

The results show that educational and enjoyment computer attitudes positively influence computer confidence, while stereotype attitudes negatively influence it. We expected this negative relation, since similar results have been reported for gender and age stereotypes ( Punter et al., 2017 ), although comparable results concerning stereotypes about computer users have not yet been found. As for the influence of attitudes on academic achievement, educational computer attitudes do not have a statistically significant relationship with academic achievement. On the other hand, enjoyment computer attitudes have a significant negative impact on academic achievement, which leads us to conclude that there is no relation between computers as an educational tool and academic achievement. In fact, apart from some specific high school vocational courses oriented towards computing skills, most classes take place in a classic lecture setting and rely mostly on textbook manuals as learning tools, which may help explain the results regarding educational computer attitudes. However, using computers for recreational purposes negatively influences students' academic achievement, as similar results have already been observed: students who play more video games have lower achievement ( Tang and Patrick, 2018 ). Two possible reasons can explain this phenomenon: first, young adults are so engaged and skilled in using technology for gaming and social media that they do not make the best use of these skills for academic purposes ( Gurung and Rutledge, 2014 ); second, excessive use and multitasking can lead to distraction and a lack of time to study ( Rashid and Asghar, 2016 ).

The construct computer use, measured as the frequency of use, positively impacts computer confidence and academic achievement. Thus, the greater the use of computers, the more confident students are while using them, and the better the performance achieved. Several other studies contradict the negative influence we found of the school environment on academic achievement ( Bayrak and Bayram, 2010 ; Carle et al., 2009 ; Murillo-Zamorano et al., 2019 ). However, this can be explained by the rapid development of computer technology and the massive use of computers at home compared with their limited use at school, due to schools' technology being obsolete and students preferring the home environment.

The results demonstrate that computer use works as a full mediator between home environment and academic achievement, since there is no direct relation between home environment and academic achievement, contrary to another study ( Fairlie et al., 2010 ). However, with computer use as a mediator, we suggest that the home environment influences academic achievement when computer use increases, since there is a positive relation between home environment and computer use ( Hsu and Huang, 2006 ), i.e., students who use a computer at home have better results. Also, computer use works as a partial mediator between the school environment and academic achievement. Hence, we suggest that, although the use of computers at school already directly (but negatively) influences students' performance, computer use mediates this relation positively. This effect is likely due to the fact that, even though there is an effort to implement digital transformation in the education sector, there is still a lack of computers at schools: most students do not have easy access to computers in school (high schools in Portugal have an average of 4.2 students per computer), but those who use them benefit in their grades. These results allow us to confirm our second contribution, the investigation of the mediation effect of computer use between the identified factors and academic achievement. The mediation results are shown in Table 3 .

Table 3

Hypotheses testing on mediation.

Note: ∗ |t|> 1.96 and p-value = 0.05.; ∗∗ |t| > 2.57 and p-value = 0.01; ∗∗∗ |t| > 3.291 and p-value = 0.001.

Regarding motivations, interest motivation impacts computer use positively, in line with other similar findings ( Rohatgi et al., 2016 ), i.e. the more interested students are in computers, the more they use them. Nonetheless, it negatively influences academic achievement and computer self-efficacy, so the greater the interest motivation, the greater the use of computers but the lower the achievement and the computer self-efficacy. These two negative relations are quite controversial compared to the literature. However, they may mean that the more interest in computers, the more use for recreational purposes, negatively impacting academic achievement ( Rashid and Asghar, 2016 ), and that as students' interest in and knowledge of the devices grows, their perceived efficacy starts to decrease. Further research is needed to draw any conclusions on this.

Computer confidence has a strong positive effect on computer self-efficacy, meaning that perceived computer self-efficacy increases when confidence with the device is higher, as stated in similar findings ( Hatlevik and Bjarnø, 2021 ). However, we cannot conclude there is a relation between computer confidence and academic achievement. All the previous results allow us to reflect on the influence that the studied computer-related variables have on student performance, contributing data for future research and confirming the first contribution of our study.

The loneliness construct, used as a measure of coronavirus effects, negatively influenced academic achievement, as expected. While students were in lockdown attending remote classes, without any in-person contact with their school, teachers and colleagues, the feeling of loneliness and isolation indeed negatively impacted their performance, as observed in our results. These results confirm our contribution to understanding how the COVID-19 pandemic influences students' academic achievement. Recent studies found negative impacts of loneliness on students ( Roy et al., 2020 ), demonstrating the importance of cooperating with colleagues ( Torres-Díaz et al., 2016 ). However, no results have yet been reported on the direct impact of pandemic-related loneliness on academic achievement.

There are three moderation hypotheses, using family size and computer self-efficacy. From the family size moderator, we can conclude that family size influences the relation between school environment and academic achievement. In Figure 3 , we can see that as family size decreases, the negative impact of the school environment on academic achievement increases, suggesting that students from smaller families tend to have worse grades when studying in a school environment. Regarding family size in the relation between computer use and academic achievement, shown in Figure 4 , as family size decreases, computer use becomes more important in explaining academic achievement: when the family is small, students need to use the computer more to achieve better results. Relating to the computer self-efficacy moderator, Figure 5 shows that it positively affects the relationship between employment motivations and academic achievement, meaning that the higher students perceive their computer self-efficacy to be, the stronger the positive impact of employment motivation on academic achievement. This effect can be explained by the growth of technology-related jobs: students who feel more capable in their computer skills (with higher computer self-efficacy) and are more motivated to pursue a technological career have higher academic achievement. These results allow us to confirm our second contribution, the investigation of the moderation effects of family size and computer self-efficacy.

Figure 3

Structural model (variance-based technique) for academic achievement.

Figure 4

In this study, we found that marital status does not have any effect on academic achievement, but mothers' education has a positive impact on students' achievement, reinforcing the literature ( Abosede and Akintola, 2016 ).

5.1. Practical implications

Academic achievement is a widely studied topic because there is an ongoing concern with understanding the factors that lead to better academic achievement. Since students nowadays practically depend on computers for school, we related the most studied computer variables in the literature to academic achievement, expecting results that answer the gaps identified in the literature. To our knowledge, no study has yet provided a conclusion on the influence on academic achievement of the loneliness provoked by the COVID-19 pandemic, nor of interest and employment motivations. Moreover, there is no consensus in the literature on the influence of using computers for fun on academic performance. We contribute to the literature with answers to these questions: students who feel lonely have worse academic achievement, students motivated by an interest in computers have worse academic achievement, and students motivated by the expectation of getting a good job have better grades. Also, enjoyment computer attitudes negatively influence academic achievement, so students who see the computer as a good tool for recreational purposes have worse grades.

Contrary to the literature, we found that computer confidence does not influence academic achievement; apart from this, we concur with the available results published by other researchers. There are clear positive implications of using computers in education and, consequently, for students' outcomes. Therefore, teachers and parents should encourage the use of computers in adolescents' education to improve their school performance and future.

5.2. Limitations and further research

The present study has some limitations that point to future research directions on students' academic achievement and its predictors. First, the data collected lack diversity in geographic dispersion and gender balance, since most participants were girls from Portugal. Also, better results could be obtained with a larger sample. Secondly, the pandemic forced schools and students to move classes online, which, on the one hand, is an advantage because it provided the opportunity to study pandemic-related loneliness. On the other hand, it could bias the students' answers to the questionnaire and the subsequent results, because their opinion of computers may have changed during home-schooling compared to the usual schooling method, since the literature relates to regular in-person school attendance.

In further research, other factors regarding loneliness should be studied to better understand the impact of the coronavirus on students' lives, comparing pre-pandemic and pandemic daily computer usage. Other factors, such as addiction to technology, should also be analysed.

6. Conclusions

This study proposes a theoretical model of the influence of several computer factors on the academic achievement of high school students. The results, in general, empirically support similar findings in the literature. The proposed conceptual model explains 31.1% of academic achievement. We found that students who use computers for recreational purposes, or who see the computer as a tool to "pass the time" or play games, are those with the worst grades. We can conclude this from the negative relation between enjoyment attitudes and academic achievement. Nevertheless, there is no relation between students perceiving computers as an educational tool and their academic achievement. We believe this conclusion results from teenagers using their computers and smartphones excessively without prioritising schoolwork, leading to the observed results. Our results also show that there are still stereotypes about who uses computers most. Respondents believe that peers who play sports are less likely to use computers excessively and that those who frequently use computers are not sociable. This mindset leads to lower confidence with computers.

A significant conclusion was reached regarding the computer use environment through the mediation effect of computer use. When students use the computer at home, they need to use it frequently for it to influence their academic achievement, but when students use the computer at school, it influences their academic achievement positively independently of the frequency of use. However, the frequency of computer use itself influences academic achievement. As we expected, the feelings of loneliness associated with the coronavirus negatively influence students' academic achievement, an important new conclusion in the literature. The moderation effect of family size allows us to conclude that students from smaller families tend to have worse grades when studying in a school environment and need to use computers more to achieve better school results than those from larger families. Moreover, the moderation effect of computer self-efficacy lets us conclude that, for students who perceive higher computer self-efficacy, employment motivation has a stronger positive influence on grades and academic achievement.

Declarations

Author contribution statement.

Sofia Simões: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.

Tiago Oliveira: Conceived and designed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.

Catarina Nunes: Analyzed and interpreted the data; Wrote the paper.

Funding statement

This work was supported by FCT (Fundação para a Ciência e a Tecnologia) under project DSAIPA/DS/0032/2018 (DS4AA).

Data availability statement

Declaration of interests statement.

The authors declare no conflict of interest.

Additional information

No additional information is available for this paper.

Appendix A. Constructs table

Notes: 1, 2, 3, 4, 5, 6, 7, 9, 10 Range scale from 1 (Strongly Disagree) to 5 (Strongly Agree); 8 Range scale from 1 (Never) to 5 (Everyday); 11 Ordinal Scale (Hardly ever, some of the time, often); 12 Ratio scale from 0 to 20 (number); 13 Nominal scale (number); 14 Nominal scale (married, divorced, in a domestic partnership, widowed, other); 15 Ordinal scale (less than high school, high school or equivalent, bachelor's degree, master's degree, doctorate, other); 16 Ratio scale (number); 17 Nominal scale (male, female).

Appendix B. Descriptive statistics, correlation, composite reliability (CR), and average variance extracted (AVE)

Note: Values in diagonal (bold) are the AVE square root.

Appendix C. Outer Loadings and Cross-Loadings

Appendix D. Heterotrait-Monotrait Ratio (HTMT)

  • Abosede S., Akintola O. Mothers' employment, marital status and educational level on students' academic achievement in business studies. Multidiscip. Res. 2016; 4 (2):159–165. [ Google Scholar ]
  • Admiraal W., Louws M., Lockhorst D., Paas T., Buynsters M., Cviko A., Kester L. Teachers in school-based technology innovations: a typology of their beliefs on teaching and technology. Comput. Educ. 2017; 114 :57–68. [ Google Scholar ]
  • Anderson J.C., Gerbing D.W. Structural equation modeling in practice : a review and recommended two-step approach. Psychol. Bull. 1988; 103 (3):411–423. [ Google Scholar ]
  • Asendorpf J.B., Conner M. Conflict resolution as a dyadic mediator: considering the partner pespective on conflict resolution. Eur. J. Pers. 2012; 119 :108–119. [ Google Scholar ]
  • Bae D., Wickrama K.A.S. Family socioeconomic status and academic achievement among Korean adolescents: linking mechanisms of family processes and adolescents’ time use. J. Early Adolesc. 2015; 35 (7):1014–1038. [ Google Scholar ]
  • Balogun A.G., Balogun S.K., Onyencho C.V. Test anxiety and academic performance among undergraduates: the moderating role of achievement motivation. Spanish J. Psychol. 2017; 20 :1–8. [ PubMed ] [ Google Scholar ]
  • Bayrak B.K., Bayram H. The effect of computer aided teaching method on the student’s academic achievement in the science and technology course. Procedia - Soc. Behav. Sci. 2010; 9 :235–238. [ Google Scholar ]
  • Bowers A.J., Berland M. Does recreational computer use affect high school achievement? Educ. Technol. Res. Dev. 2013; 61 (1):51–69. [ Google Scholar ]
  • Carle A.C., Jaffee D., Miller D. Engaging college science students and changing academic achievement with technology: a quasi-experimental preliminary investigation. Comput. Educ. 2009; 52 (2):376–380. [ Google Scholar ]
  • Chesters J., Daly A. Do peer effects mediate the association between family socio-economic status and educational achievement? Aust. J. Soc. Issues. 2017; 52 (1):65–77. [ Google Scholar ]
  • Chowa G.A.N., Masa R.D., Ramos Y., Ansong D. How do student and school characteristics influence youth academic achievement in Ghana? A hierarchical linear modeling of Ghana YouthSave baseline data. Int. J. Educ. Dev. 2015; 45 :129–140. [ Google Scholar ]
  • Claro M., Preiss D.D., San Martín E., Jara I., Hinostroza J.E., Valenzuela S., Nussbaum M. Assessment of 21st century ICT skills in Chile: test design and results from high school level students. Comput. Educ. 2012; 59 (3):1042–1053. [ Google Scholar ]
  • Dunn T.J., Kennedy M. Technology enhanced learning in higher education; motivations, engagement and academic achievement. Comput. Educ. 2019; 137 (March):104–113. [ Google Scholar ]
  • Ertmer P.A., Ottenbreit-Leftwich A.T., Sadik O., Sendurur E., Sendurur P. Teacher beliefs and technology integration practices: a critical relationship. Comput. Educ. 2012; 59 (2):423–435. [ Google Scholar ]
  • Fairlie R.W. Academic achievement, technology and race: experimental evidence. Econ. Educ. Rev. 2012; 31 (5):663–679. [ Google Scholar ]
  • Fairlie R.W., Beltran D.O., Das K.K. Home computers and educational outcomes: evidence from the NLSY97 and cps. Econ. Inq. 2010; 48 (3):771–792. [ Google Scholar ]
  • Fairlie R.W., London R.A. The effects of home computers on educational outcomes: evidence from a field experiment with community college students. Econ. J. 2012; 122 (561):727–753. [ Google Scholar ]
  • Fang L. Acculturation and academic achievement of rural to urban migrant youth: the role of school satisfaction and family closeness. Int. J. Intercult. Relat. 2020; 74 (November 2019):149–160. [ Google Scholar ]
  • Fornell C., Larcker D. Evaluating structural equation models with unobservable variables and measurement error. J. Market. Res. 1981; 18 (1):39–50. [ Google Scholar ]
  • Gibson P.A., Stringer K., Cotten S.R., Simoni Z., O’Neal L.J., Howell-Moroney M. Changing teachers, changing students? the impact of a teacher-focused intervention on students’ computer usage, attitudes, and anxiety. Comput. Educ. 2014; 71 :165–174. [ Google Scholar ]
  • Gonzalez-pienda J.A., Nunez J.C., Gonzalez-pumariega S., Alvarez L., Roces C., Garcia M. A structural equation model of parental involvement, motivational and aptitudinal characteristics, and academic achievement. J. Exp. Educ. 2002; 70 (3):257–287. [ Google Scholar ]
  • Gu X., Xu H. Missing piece in understanding student learning: out-of-school computer use. J. Educ. Comput. Res. 2019; 57 (2):320–342. [ Google Scholar ]
  • Gurung B., Rutledge D. Digital learners and the overlapping of their personal and educational digital engagement. Comput. Educ. 2014; 77 :91–100. [ Google Scholar ]
  • Habók A., Magyar A., Németh M.B., Csapó B. Motivation and self-related beliefs as predictors of academic achievement in reading and mathematics: structural equation models of longitudinal data. Int. J. Educ. Res. 2020; 103 (May):101634. [ Google Scholar ]
  • Hair Joe F., Ringle C.M., Sarstedt M. 2014. PLS-SEM: Indeed a Silver Bullet; pp. 37–41. [ Google Scholar ]
  • Hair Joseph F., Hult Gt., Ringle C.M., Sarstedt M. second ed. SAGE Publications, Inc; Los Angeles: 2017. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM) [ Google Scholar ]
  • Hamiyet S. The effects of computer games on the achievement of basic mathematical skills. Educ. Res. Rev. 2015; 10 (22):2846–2853. [ Google Scholar ]
  • Hatlevik O.E., Bjarnø V. Examining the relationship between resilience to digital distractions, ICT self-efficacy, motivation, approaches to studying, and time spent on individual studies. Teach. Teach. Educ. 2021; 102 :103326. [ Google Scholar ]
  • Helm P.J., Jimenez T., Bultmann M., Lifshin U., Greenberg J., Arndt J. Existential isolation, loneliness, and attachment in young adults. Pers. Indiv. Differ. 2020; 159 (February):109890. [ Google Scholar ]
  • Higgins K., Huscroft-D’Angelo J., Crawford L. Effects of technology in mathematics on achievement, motivation, and attitude: a Meta-Analysis. J. Educ. Comput. Res. 2019; 57 (2):283–319. [ Google Scholar ]
  • Hsu W.K.K., Huang S.H.S. Determinants of computer self-efficacy - an examination of learning motivations and learning environments. J. Educ. Comput. Res. 2006; 35 (3):245–265. [ Google Scholar ]
  • Hughes M.E., Waite L.J., Hawkley L.C., Cacioppo J.T. A short scale for measuring loneliness in large surveys: results from two population-based studies. Res. Aging. 2004; 26 (6):655–672. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Lee C., Yeung A.S., Cheung K.W. Learner perceptions versus technology usage: a study of adolescent English learners in Hong Kong secondary schools. Comput. Educ. 2019; 133 (August 2017):13–26. [ Google Scholar ]
  • Lee J., Shute V.J. 2010. Personal and Social-Contextual Factors in K–12 Academic Performance: an Integrative Perspective on Student Learning; pp. 37–41. [ Google Scholar ]
  • Levine T., Donitsa-Schmidt S. Computer use, confidence, attitudes, and knowledge: a causal analysis. Comput. Hum. Behav. 1998; 14 (1):125–146. [ Google Scholar ]
  • Liu C.H., Zhang E., Wong G.T.F., Hyun S., Hahm H. Chris. Factors associated with depression, anxiety, and PTSD symptomatology during the COVID-19 pandemic: clinical implications for U.S. young adult mental health. Psychiatr. Res. 2020; 290 [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Marsh H.W., Wen Z., Hau K.T. Structural equation models of latent interactions: evaluation of alternative estimation strategies and indicator construction. Psychol. Methods. 2004; 9 (3):275–300. [ PubMed ] [ Google Scholar ]
  • Mcintosh C.N., Edwards J., Antonakis J. 2014. Reflections on Partial Least Squares Path Modeling. [ Google Scholar ]
  • Milani L., Grumi S., Di Blasio P. Positive effects of videogame use on visuospatial competencies: the impact of visualization style in preadolescents and adolescents. Front. Psychol. 2019; 10 (MAY):1–9. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Murillo-Zamorano L.R., López Sánchez J.Á., Godoy-Caballero A.L. How the flipped classroom affects knowledge, skills, and engagement in higher education: effects on students’ satisfaction. Comput. Educ. 2019; 141 October 2018. [ Google Scholar ]
  • Ng Y.-M., Peggy P.L. Coronavirus disease (COVID-19) prevention: virtual classroom education for hand hygiene. Nurse Educ. Pract. 2020:102782. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • OECD . 2015. OECD Digital Economy Outlook 2015. [ Google Scholar ]
  • OECD . 2019. Benchmarking Higher Education System Performance: Norway. [ Google Scholar ]
  • OECD . 2020. Learning Remotely when Schools Close : How Well Are Students and Schools Prepared ? Insights from PISA. [ Google Scholar ]
  • Partovi T., Razavi M.R. The effect of game-based learning on academic achievement motivation of elementary school students. Learn. Motiv. 2019; 68 (August):101592. [ Google Scholar ]
  • Poon K. The impact of socioeconomic status on parental factors in promoting academic achievement in Chinese children. Int. J. Educ. Dev. 2020; 75 (December 2019):102175. [ Google Scholar ]
  • Punter R.A., Meelissen M.R.M., Glas C.A.W. Gender differences in computer and information literacy: an exploration of the performances of girls and boys in ICILS 2013. Eur. Educ. Res. J. 2017; 16 (6):762–780. [ Google Scholar ]
  • Rajkumar R.P. COVID-19 and mental health: a review of the existing literature. Asian J. Psychiatr. 2020; 52 (March):102066. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Rashid T., Asghar H.M. Technology use, self-directed learning, student engagement and academic performance: examining the interrelations. Comput. Hum. Behav. 2016; 63 :604–612. [ Google Scholar ]
  • Rohatgi A., Scherer R., Hatlevik O.E. The role of ICT self-efficacy for students' ICT use and their achievement in a computer and information literacy test. Comput. Educ. 2016; 102 :103–116. [ Google Scholar ]
  • Roy D., Tripathy S., Kar S.K., Sharma N., Verma S.K., Kaushal V. Study of knowledge, attitude, anxiety & perceived mental healthcare need in Indian population during COVID-19 pandemic. Asian J. Psychiatr. 2020; 51 :102083. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Russel D. UCLA loneliness scale. J. Pers. Assess. 1996; 66 (1):20–40. [ PubMed ] [ Google Scholar ]
  • Sarstedt M., Ringle C.M., Hair J.F. 2017. Partial Least Squares Structural Equation Modeling. [ Google Scholar ]
  • Schmid R., Petko D. Does the use of educational technology in personalized learning environments correlate with self-reported digital skills and beliefs of secondary-school students? Comput. Educ. 2019; 136 :75–86. [ Google Scholar ]
  • Serge P., Veiga M., Turban D.B. Insight into job search self-regulation: effects of employment self-efficacy and perceived progress on job search intensity. J. Vocat. Behav. 2018; 108 :57–66. [ Google Scholar ]
  • Shen X., Wang J.L. Loneliness and excessive smartphone use among Chinese college students: moderated mediation effect of perceived stressed and motivation. Comput. Hum. Behav. 2019; 95 (January):31–36. [ Google Scholar ]
  • Stajkovic A.D., Bandura A., Locke E.A., Lee D., Sergent K. Test of three conceptual models of influence of the big five personality traits and self-efficacy on academic performance: a meta-analytic path-analysis. Pers. Indiv. Differ. 2018; 120 (August 2017):238–245. [ Google Scholar ]
  • Suárez-álvarez J., Fernández-alonso R., Muñiz J. Self-concept, motivation, expectations, and socioeconomic level as predictors of academic performance in mathematics. Learn. Indiv Differ. 2014; 30 :118–123. [ Google Scholar ]
  • Tang S., Patrick M.E. Technology and interactive social media use among 8th and 10th graders in the U.S. and associations with homework and school grades. Comput. Hum. Behav. 2018; 86 :34–44. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Tesfagiorgis M., Tsegai S., Mengesha T., Craft J., Tessema M. The correlation between parental socioeconomic status (SES) and children’s academic achievement: the case of Eritrea. Child. Youth Serv. Rev. 2020; 116 (July):105242. [ Google Scholar ]
  • Torres-Díaz J.C., Duart J.M., Gómez-Alvarado H.F., Marín-Gutiérrez I., Segarra-Faggioni V. Internet use and academic success in university students. Comunicar. 2016; 24 (48):61–70. [ Google Scholar ]
  • Valli Jayanthi S., Balakrishnan S., Lim Siok Ching A., Aaqilah Abdul Latiff N., Nasirudeen A.M.A. Factors contributing to academic performance of students in a tertiary institution in Singapore. Am. J. Educ. Res. 2014; 2 (9):752–758. [ Google Scholar ]
  • Viner R.M., Russell S.J., Croker H., Packer J., Ward J., Stansfield C., Booy R. School closure and management practices during coronavirus outbreaks including COVID-19: a rapid systematic review. Lancet Child Adolescent Health. 2020; 4 (5):397–404. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Voogt J., Erstad O., Dede C., Mishra P. Challenges to learning and schooling in the digital networked world of the 21st century. J. Comput. Assist. Learn. 2013; 29 (5):403–413. [ Google Scholar ]
  • Werneck A.O., Collings P.J., Barboza L.L., Stubbs B., Silva D.R. Associations of sedentary behaviors and physical activity with social isolation in 100,839 school students: the Brazilian Scholar Health Survey. Gen. Hosp. Psychiatr. 2019; 59 (April):7–13. [ PubMed ] [ Google Scholar ]
  • Xiao F., Sun L. Profiles of student ICT use and their relations to background, motivational factors, and academic achievement. J. Res. Technol. Educ. 2021 0(0),1–17. [ Google Scholar ]
  • Research article
  • Open access
  • Published: 19 June 2006

Computer literacy and attitudes towards e-learning among first year medical students

  • Thomas Michael Link 1 &
  • Richard Marz 1  

BMC Medical Education volume  6 , Article number:  34 ( 2006 ) Cite this article

62k Accesses

87 Citations

Metrics details

At the Medical University of Vienna, most information for students is available only online. In 2005, an e-learning project was initiated and there are plans to introduce a learning management system. In this study, we estimate the level of students' computer skills, the number of students having difficulty with e-learning, and the number of students opposed to e-learning.

The study was conducted in an introductory course on computer-based and web-based training (CBT/WBT). Students were asked to fill out a questionnaire online that covered a wide range of relevant attitudes and experiences.

While the great majority of students possess sufficient computer skills and acknowledge the advantages of interactive and multimedia-enhanced learning material, a small percentage lacks basic computer skills and/or is very skeptical about e-learning. There is also a consistently significant albeit weak gender difference in available computer infrastructure and Internet access. As for student attitudes toward e-learning, we found that age, computer use, and previous exposure to computers are more important than gender. A sizable number of students, 12% of the total, make little or no use of existing e-learning offerings.

Many students would benefit from a basic introduction to computers and to the relevant computer-based resources of the university. Given the wide range of computer skills among students, a single computer course for all students would be neither useful nor accepted. Special measures should be taken to prevent students who lack computer skills from being disadvantaged or from developing computer-hostile attitudes.

Peer Review reports

Computer literacy has been a subject of educational research ever since personal computers were introduced to the classroom, either as teaching aids or as tools for self-study. In the 1980s, research on computer literacy focused on the question of whether medical students were ready for the foreseeable omnipresence of computers in future doctors' professional environments [ 1 – 4 ], i.e., whether they possessed the necessary computer skills [ 2 , 5 – 9 ]. The vision of a knowledge-based society saw future economic wealth as dependent on people's ability to deal with the growing information load and to adapt to an ever-changing working environment [ 10 – 13 ]. It was assumed that computers would become ubiquitous tools for managing medical knowledge [ 14 ]. In some medical schools, a privately owned computer was made a requirement for medical students [ 15 , 16 ].

E-Learning, in particular the use of learning management systems (LMSs), introduced a new aspect. Researchers [ 17 ] suggested that some students may lack the necessary skills to use web-based learning platforms effectively and are therefore handicapped. This issue is often discussed in the context of gender differences. The main concern is that female students are at a disadvantage due to different patterns of computer usage, e.g. a less dominant style of discussion in web-based communication [ 18 , 19 ]. These gender differences can be observed in students' computer-related behaviors but also in their attitudes towards computer-based and web-based training (CBT/WBT). In a Danish study, Dørup [ 9 ] reported that among first-year students, 46% of the men were in favor of replacing "traditional teaching with use of computers if possible" while only 22% of the women agreed with this statement.

In 2004, 80% of Austria's 20–29 year olds had Internet access and 75% of university and high school students used a computer daily [ 20 ]. We can thus assume that, in general, students entering university have good basic computer skills. Studies nevertheless demonstrate that there is a considerable difference in computer use according to students' disciplines. Middendorff [ 21 ] reports that German medical students spend an average of 8 hours per week at the computer (including private activities). This is the lowest value of all disciplines, which makes it difficult to draw conclusions about medical students' computer use from general surveys. Often the degree of "informational fluency" remains at a basic level and students tend to over-estimate their computer skills [ 22 ].

This study examines the level of computer literacy and patterns of computer usage of first-year medical students at the Medical University of Vienna. It was conducted in an introductory course for first-year students on CBT/WBT. The goal of the study was to determine the need for such introductory courses and to provide information that could be used to improve them. A secondary aim was to identify difficulties that may be encountered in implementing a university-wide LMS due to students' lack of computer literacy or low acceptance of e-learning. While multimedia learning programs have been praised for their educational superiority, actual use of these programs has sometimes failed to meet our expectations.

Since autumn 2003, we have required students to take an introductory course on CBT/WBT as a single 90-minute class session. This course is held for first-year students (about 1500 students took it in 2004 and 2005) and second-year students (about 600 students from 2003 to 2005) [ 23 ]. The course serves two main purposes:

To ensure a certain level of computer and information literacy, including online communication skills.

To acquaint students with computer and web-based learning materials.

In 2003 and 2004, students had to review web-based learning programs (e.g. [ 24 ]) and post their statements in a dedicated online forum. In the course for first-year students we used a student-developed platform [ 25 ]. In the course for second-year students, we used Manila [ 26 ] in 2003 and TikiWiki [ 27 ] in 2004 as a collaboration tool. In 2005, we switched to tools that were partly self-developed and less demanding with respect to the server load.

This paper reports on data from an online survey for the 2004 course for first-year students. Participation in the survey was voluntary and anonymous (though students were asked to give their student ID if they wanted to). The tutors were not able to determine who had or had not filled out the questionnaire. Using class time for students to fill out the questionnaire nevertheless ensured a high response rate of 79%.

A total of 1232 questionnaires were completed, 1160 of which remained in the data set after applying some filtering rules in order to eliminate records of uncertain origin. The gender breakdown of respondents was 61% female and 39% male. This corresponds exactly to the gender breakdown of the 1560 students entering the study module (61% female and 39% male). We thus conclude that our sample was representative of the 2004 cohort. Missing values due to non-responses are not included in tables or figures. Differences between the reported counts and the sample size (n = 1160) are thus due to missing responses.

Questionnaire

The questionnaire [ 28 ] (see Additional file 1 ) was designed to collect the following information:

Overall evaluation of the course

Attitudes towards e-learning as well as previous experiences and expectations about the use of CBT/WBT

Computer and Internet usage

Extent of students' private computer infrastructure

Basic demographic data.

In the following, we will focus on students' computer usage and private computer infrastructure as well as their attitudes toward e-learning.

Attitudes towards e-learning (understood as an umbrella concept for learning methods supported by information- and communication technologies (ICT) in general) were determined by the students' agreement or disagreement with several statements about the importance of ICT in medical education. These statements contained items like "Web-based learning programs are able to replace lectures" or "In medical teaching, there is no need for the use of Web-based programs." The students rated their agreement or disagreement on a bi-polar eight-point Likert scale. For the purpose of comparability with Dørup [ 9 ], we recoded their answers into dichotomous variables. As computer use and attitudes towards e-learning were measured on an ordinal scale, we accordingly used Spearman rho to describe the statistical relationship of these variables with other items. For other metric variables Pearson r was used.
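The recoding and the ordinal correlation described here can be expressed compactly. The sketch below uses synthetic responses, an assumed 0/1 coding for gender and the 1–4 vs. 5–8 split used for comparability with Dørup, so the numbers it prints are illustrative only.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
# Illustrative responses on the 8-point bi-polar scale (1 = strong agreement).
replace_lectures = rng.integers(1, 9, size=1160)
gender = rng.integers(0, 2, size=1160)          # 0 = female, 1 = male (assumed coding)

# Recoding into a dichotomous variable: 1-4 = agreement (pro), 5-8 = disagreement.
pro_replacement = replace_lectures <= 4
print("share in favour:", pro_replacement.mean())

# Ordinal association between gender and the original 8-point rating.
rho, p = spearmanr(gender, replace_lectures)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```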

Computer infrastructure

Almost all students (94%) have access to a privately owned PC they can use for their studies, which is either owned by the students themselves (74%) or shared with family members or roommates (20%). Only 5% rely primarily on public computer facilities (Table 1 ).

Student-owned PCs are on average 2.3 years old; 92% are newer than 5 years, 87% newer than 4 years. This corresponds to the life span of computers in companies or public administration offices. Only 3.2% of the students have a computer older than 6 years. Male students' PCs (mean ± SD: 2 ± 1.42 years) are newer than those owned by women (2.5 ± 2.05 years). The 95% confidence interval for the difference is 0.33..0.79 years.

Internet access

The great majority of students also have access to the Internet, though the quality of connectivity varies widely; 60% have access via ADSL, cable TV, or LAN (which, however, usually signifies the use of public facilities at the university or elsewhere); 37% have access using a telephone connection (modem or ISDN) (Table 2 ). The type of Internet access differs according to gender (Cramer V = 0.28, p = 0.001). Male students tend to have faster Internet access while older technologies (e.g. modem) are more common among women. The proportion of modem users is twice as high among women (33%) as among men (15%).
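The association between gender and type of Internet access is summarised here with Cramer's V. A minimal way to compute it from a contingency table is sketched below, using made-up counts rather than the actual figures from Table 2.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative gender x access-type contingency table (counts are made up, not Table 2).
table = np.array([
    [150, 80, 230],   # female: modem/ISDN, ADSL/cable, LAN
    [ 60, 90, 200],   # male
])
chi2, p, dof, _ = chi2_contingency(table)
n = table.sum()
cramer_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(f"Cramer V = {cramer_v:.2f}, p = {p:.3f}")
```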

Computer use

Types of computer use.

Students are familiar with e-mail and the use of the Internet for information research; 94% of the students communicate via e-mail and 97% use the Internet for information research at least several times per month. While the use of word processors is very common (82% use such a program several times a month), students are less familiar with other program types (Table 3 ).

Very few medical students have experience in Web design or the creation of HTML documents (5% at least weekly) and thus make no use of the Internet for publishing or more sophisticated collaboration purposes. The frequencies of using communication technologies other than e-mail, e.g., chats (21%), forums and bulletin boards (13%), are also low.

One noteworthy detail is the proportion of students who use computers for organizing appointments, to-do lists, or making notes: 28% use such personal organizer software several times per week, which may point to the use of personal digital assistants (PDA) or smart cell phones.

Except for the categories "Word Processor" and "E-mail," male students use the computer significantly more often than women. The strength of this statistical relationship is weak. Spearman rho is highest for the categories "Web-design" (r s = 0.25, p = 0.001), "Games" (r s = 0.23, p = 0.001), "Forums" (r s = 0.21, p = 0.001), and "Spreadsheets" (r s = 0.20, p = 0.001).

Age when using a computer for the first time

Half of all students (50%) used a computer for the first time by the age of 11 (mean 11.2 ± 3.77 SD). By the time they entered university, i.e., before the age of 18, fully 96% of all students had begun to use computers. The average age when students began using computers for the first time is slightly lower for men (10.7 ± 3.40 years) than women (11.5 ± 3.96 years). The 95% confidence interval for this difference is 0.33..1.24 years.

Prior experiences and expectations

Half of the students (49%) report using a computer or Web-based learning program at least once per month. In order to determine how many students have little or no experience with e-learning, we consolidated answers to questions about four different kinds of e-learning programs (information retrieval, downloading scripts, LMS, and CBT/WBT) into one index. Because of the high response rates for "downloading learning material," we defined inexperienced users as those who answered "less often" or "never" to questions about at least three of these kinds of programs. Following this typology, 12% of the students are inexperienced, having used at most one kind of e-learning program at least once per term (Table 4 ).
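The consolidation into an experience index can be written as a simple rule over the four kinds of programs. The sketch below assumes a frequency coding in which the two lowest values correspond to "never" and "less often"; this coding and the column names are assumptions for illustration, not the questionnaire's documented scheme.

```python
import pandas as pd

# Hypothetical frequency codes: higher values = more frequent use of that kind of program.
freq = pd.DataFrame({
    "information_retrieval": [1, 4, 2],
    "downloading_scripts":   [2, 5, 1],
    "lms":                   [1, 3, 1],
    "cbt_wbt":               [1, 4, 2],
})

# "Never" or "less often" assumed to be coded 1 or 2 on the frequency scale.
rarely_used = freq <= 2
# Inexperienced: rare or no use of at least three of the four kinds of programs.
inexperienced = rarely_used.sum(axis=1) >= 3
print(inexperienced.mean())   # share of inexperienced users (12% in the study)
```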

The majority of students (66%) have already used a computer or Web-based dictionary like the Pschyrembel medical dictionary, which is one of the standard references used by Vienna medical students. Half of them (50%) have used an online image repository at least once and 42% have used some kind of online quiz to test their knowledge (Table 5 ). Other kinds of learning programs, such as those associated with a constructivist approach, are less well known among first-year Vienna students. The results given in Tables 4 and 5 relating to students' use of an LMS are inconsistent. This inconsistency most likely arises from the students' lack of understanding of what an LMS is, since very few lecturers use this kind of software to support their courses.

About 10% of the students have never used any of the above-mentioned kinds of e-learning programs and 4.4% do not regard any of them as helpful. Those who regard only two or fewer as helpful tend to prefer learning programs that have no "built-in" educational theory, such as encyclopedias (38%), image collections (23%), and quizzes (23%). The number of different kinds of programs that students have experience with correlates with the number they consider helpful (Pearson r = 0.32, p = 0.001) – the more kinds of programs they know, the more kinds they consider useful.

A majority of the students agree (median = 2, interquartile range = 3) that CBT/WBT should be offered as a supplement to lectures and seminars (Figure 1 ). On the other hand, most students disagree with the statement that e-learning should replace these traditional forms of teaching (median = 7, IQR = 4).

Figure 1. Students' agreement or disagreement with statements on the usefulness of e-learning. The x-axis represents the values of an 8-point bipolar rating scale: 1 = strong agreement, 8 = strong disagreement. The boxes show the quartiles (each covering 25% of the distribution) and the median.

Men (median = 6) tend to be slightly more in favor of replacing traditional lectures with CBT/WBT than women (median = 7), although the strength of this effect is negligible (rs = 0.06, p = 0.041). After recoding to a dichotomous scale (1–4 = pro, 5–8 = contra), 28% of male and 25% of female students can be considered in favor of replacing traditional teaching methods with e-learning. The gender difference is slightly bigger for the item "Computer or Web-based training should play a more important role," but still hardly noteworthy (rs = 0.16, p = 0.001). In general, the following variables have bigger effects on e-learning-related attitudes than gender per se:

  • Lack of experience with CBT/WBT
  • Productive computer and Internet use (e.g. spreadsheets, organizer, word processor, graphics, e-mail, Web design, and information research)

We consolidated statements 2 to 4 in Figure 1 into one index (Cronbach alpha = 0.65; including items 1 and 5 leads to a slight decrease in reliability). In a regression model (Table 6) that includes the variables above and gender (R²adj = 0.15, p = 0.001, SEE = 1.54), gender is not statistically significant (p = 0.41). When a stepwise regression method is used, gender is excluded from the final model.
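To make the index construction and regression step concrete, here is a minimal Python sketch; the toy data, variable names, and predictor coding are assumptions and do not reproduce the study's actual results:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative responses to statements 2-4 on the 8-point scale; real data differ.
rng = np.random.default_rng(1)
items = pd.DataFrame(rng.integers(1, 9, size=(120, 3)), columns=["s2", "s3", "s4"])

# Cronbach's alpha for the consolidated attitude index.
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Regression of the attitude index on hypothetical predictors; the original model
# used lack of CBT/WBT experience, productive computer/Internet use, and gender.
predictors = pd.DataFrame({
    "cbt_wbt_inexperience": rng.integers(0, 2, size=120),
    "productive_use": rng.normal(size=120),
    "gender": rng.integers(0, 2, size=120),
})
model = sm.OLS(items.sum(axis=1), sm.add_constant(predictors)).fit()

print(f"Cronbach alpha = {alpha:.2f}")
print(model.params)
```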

Computer infrastructure and internet access

A sizable number of students still have Internet access only via dial-up connections using a modem. This mode of Internet access is slow and impedes the use of synchronous communication tools that require one to stay online for a long period of time. Even though the majority of students do have broadband access to the Internet, mandatory e-learning solutions cannot rely on synchronous online communication tools such as chats, nor on extensive video material, e.g. recordings of lectures. Instead, preference should be given to asynchronous online communication tools and to predominantly text-based materials. Asynchronous communication tools also have the advantage that teachers and students do not have to be online at the same time.

Computer use patterns

Only a small number of students have experience with Internet publishing and asynchronous communication tools like BBS or forums. Thus, most of our students are rather passive Internet users and miss out on numerous possibilities of virtual communities and Web-based publishing. The lack of experience with synchronous and asynchronous online communication, with the exception of e-mail, may cause problems when using the collaboration tools included in an LMS [ 29 ].

Attitudes towards e-learning

Most students agree that e-learning could serve as a supplement to lectures and seminars. However, about as many students disagree with the statement that e-learning could replace traditional ways of teaching. In the Danish context, Dørup [ 9 ] reported a slightly greater proportion of first-year medical students in favor of replacing traditional lectures with e-learning (47% men, 22% women). These higher levels of agreement could be explained by the different response scales used, but also by the fact that Danish people in general are reported [ 30 ] to be more "digitally literate" than Austrians – although this difference cannot be claimed for persons under 24 years of age [ 30 ].

The intensity of computer use and previous experience with CBT/WBT have the greatest effect on students' attitudes towards e-learning. The explanation for this could be a general discomfort with the technology that makes students who lack experience with ICT express themselves cautiously about its use in education [ 31 ]. It could also be explained by the relative novelty of e-learning and students' difficulties in integrating CBT/WBT into their way of learning [ 32 ].

Most students seem to acknowledge the range of possibilities new media offer to enhance their learning experience, although they consider CBT/WBT a supplement to rather than a replacement of other learning materials. However, there is also a group of students who are strictly opposed to CBT/WBT (4.4% of the first-year students do not value any of the kinds of programs mentioned above). More disturbingly, 24% strongly agree (values 1 and 2 on an 8-point rating scale) with the statement that the Medical University of Vienna could do well without CBT/WBT. When introducing an online LMS or Web-based learning program, special care should be taken not to lose these students because of the choice of a particular learning technology.

In December 2005, we also held a few focus groups with teachers and students on a similar subject. In the course of these discussions it became clear how some characteristics of the new curriculum, especially the emphasis on the MCQ-based year-end examinations, impeded the use of CBT/WBT. In these discussions the students had doubts about the usability and efficiency of e-learning (with regard to costs, handling of ICT, but also learning efficiency) while they still acknowledged the possibilities of ICT support with respect to visualization, simulation, self-quizzing, and fast information retrieval from several sources such as encyclopedias or Web pages.

Gender differences

We were able to identify gender differences for all computer-related variables. In sum, men make more frequent use of computers and have access to better computer infrastructure and faster Internet connections. While this difference is quite consistent over several variables, the strength of the statistical relationship is weak and, with respect to students' attitudes towards e-learning, overshadowed by other variables (e.g. previous exposure to CBT/WBT) that are more important for predicting students' attitudes.

With respect to the implementation of an LMS, the most important difference between men and women is the relatively high number of women still using a slow dial-up connection to the Internet, which could impede the use of synchronous communication tools or multimedia-rich Web applications. Well planned use of e-learning and supportive measures should help to neutralize this difference. Although women have less experience with forums, Gunn [ 19 ] showed that these differences in online communication behavior do not necessarily result in worse examination outcomes.

E-Learning must be appropriate to students' level of computer expertise in order not to become a source of frustration. Courses to develop students' computer skills can improve this situation by influencing students' attitudes and capabilities. Our conclusions with respect to such introductory courses are twofold. Students certainly need some kind of formal introduction to the new ICT for learning purposes. But due to the wide range of previous experience and computer skills, there is no one-size-fits-all course design available. Such a course should either be split into several tracks according to students' different levels of computer literacy [ 33 ], or it should be held only for students with little or no computer experience.

There is, however, the danger that precisely those students who need this course the most will hesitate to attend it voluntarily. It is difficult to say how these students could be persuaded to take such a course despite their skepticism towards ICT and e-learning. One strategy would be to emphasize the practical value for solving everyday problems and obtaining useful information. Once they have learned how computers help them solve recurring problems, they will perhaps develop more computer-friendly attitudes. Another solution could be to make the course compulsory but to make the impact negligible for students with good ICT knowledge. This could be achieved with a Web-based entry test. Students who pass the test would be exempted from having to take the course.

When introducing a campus-wide LMS, one has to take into consideration that some students lack the necessary computer skills or infrastructure to participate effectively in online courses, and that others are strictly opposed to e-learning. Introducing a campus-wide e-learning solution thus poses not only technical and organizational challenges but also calls for a promotional strategy. In the future, we can expect more students to think of computers as standard tools for learning as schools make more use of ICT in their classrooms. For example, an "avant-garde" of Vienna medical students has already created online forums [ 34 – 36 ] for informally exchanging information about courses as well as student-authored learning materials.

References

Bresnitz EA, Stettin GD, Gabrielson IW: A survey of computer literacy among medical students. J Med Educ. 1986, 61: 410-2.

Osman LM, Muir AL: Computer skills and attitudes to computer-aided learning among medical students. Med Educ. 1994, 28: 381-5.

Koschmann T: Medical education and computer literacy: learning about, through, and with computers. Acad Med. 1995, 70: 818-21. 10.1097/00001888-199509000-00018.

McGowan JJ, Berner ES: Computer in medical education. International handbook of research in medical education. Edited by: Norman GR, van der Vleuten CPM, Newble DI. 2002, Dordrecht (Netherlands): Kluwer, 537-79.

SEUSISS Project. Survey of European universities skills in ICT of students and staff. Final report. 2003, Edinburgh (United Kingdom): University of Edinburgh, [ http://www.intermedia.uib.no/seusiss/seusissreport.pdf ]

Asgari-Jirhandeh N, Haywood J: Computer awareness among medical students: a survey. Med Educ. 1997, 31 (3): 225-9.

Seago BL, Schlesinger JB, Hampton CL: Using a decade of data on medical student computer literacy for strategic planning. J Med Libr Assoc. 2002, 90 (2): 202-209. [ http://www.pubmedcentral.gov/articlerender.fcgi?tool=pubmed&pubmedid=11999178 ]

Hollander S: Assessing and enhancing medical students' computer skills: a two-year experience. Bull Med Libr Assoc. 1999, 87 (1): 67-73. [ http://www.pubmedcentral.gov/articlerender.fcgi?tool=pubmed&pubmedid=9934531 ]

Dørup J: Experience and attitudes towards information technology among first-year medical students in Denmark: longitudinal questionnaire survey. J Med Internet Res. 2004, 6 (1): e10-10.2196/jmir.6.1.e10. [ http://www.jmir.org/2004/1/e10/ ]

Bell D: The coming of the post-industrial society. 1999, New York: Basic Books, [1973]

Lyotard JF: The postmodern condition: a report on knowledge. 1994, Manchester: Manchester University Press, [1979]

Virkus S: Information literacy in Europe: a literature review. Information Research. 2003, 8 (4): paper no. 159, [ http://informationr.net/ir/8-4/paper159.html ]

OECD: The knowledge-based economy. 1996, Paris (France): OECD, [ http://www.oecd.org/dataoecd/51/8/1913021.pdf ]

Faughnan JG, Elson R: Information technology and the clinical curriculum: some predictions and their implications for the class of 2003. Acad Med. 1998, 73 (7): 766-769. 10.1097/00001888-199807000-00013.

Mavis BE, Smith JM: Mandatory microcomputers: potential effects on medical school recruitment and admissions. Medical Education Online. 1997, 2: 5-[ http://www.med-ed-online.org/volume2.htm ]

Blue AV, Elam CL, Rubeck R, Nora LM: Implementing a requirement for computer ownership: One medical school's experience. Medical Education Online. 1997, 2-[ http://www.med-ed-online.org/t0000002.htm ]

Brüchner K: A longitudinal analysis of the development of learner-prerequisites for e-learning using a typology of computer use [in German]. Proceedings "med:u – e-learning in medical education", 7.-8.11.03 [in German]. 2003, Kiel (Germany): University of Kiel, 15-32.

Richter T, Naumann J, Horz H: Computer literacy, computer-related attitudes, and computer use among male and female students [in German]. People and computer. 1st interdisciplinary conference [in German]. Edited by: Oberquelle H, Oppermann R, Krause J Stuttgart. 2001, (Germany): B.G. Teubner, 71-80. [ http://mc.informatik.uni-hamburg.de/konferenzbaende/mc2001/V23.pdf ]

Gunn C: Dominant or different? Gender issues in computer supported learning. Journal of Asynchronous Learning Networks. 2003, 7 (1): 14-30. [ http://www.aln.org/publications/jaln/v7n1/pdf/v7n1_gunn.pdf ]

Integral. Austrian Internet Monitor – Internet development [in German]. Vienna (Austria): Integral. 2005

Middendorff E: Computer use and new media in higher education. Results from the 16th social survey of the German Students Association conducted by the HIS Higher Education-Information-System [in German]. 2002, Bonn (Germany): Bundesministerium für Bildung und Forschung

Gavriilidis K: No leap in quality ahead – empiric results concerning the integration and use of electronic information in higher education [in German]. Virtual campus: scenarios, strategies, studies [in German]. Edited by: Wagner E, Kindt M. 2001, Münster: Waxmann, 266-274.

Link TM, Marz R: Introduction to computer-based learning. Science and medicine. A textbook for the special studies module (SSM1) [in German]. Edited by: Trappl R. 2005, Vienna: Veritas, 147-162.

HemoSurf. [ http://www.aum.iawf.unibe.ch/vlz/BWL/HemoSurf/ ]

eMed. [ http://emed.mitsch.cc ]

Manila. [ http://manila.userland.com ]

TikiWiki. [ http://tikiwiki.org ]

First-year students 2004 questionnaire (original version in German): http://www.meduniwien.ac.at/user/thomas.link/b2/index.php?review=04w ; English translation: [ http://www.meduniwien.ac.at/user/thomas.link/b2_en/index.php?review=04w ]

Stokes CW, Cannavina C, Cannavina G: The state of readiness of student health professionals for web-based learning environments. Health Informatics Journal. 2004, 10 (3): 195-204. 10.1177/1460458204045434.

SIBIS – Statistical Indicators Benchmarking the Information Society: SIBIS pocket book 2002/03. Measuring the information society in the EU, the EU accession countries, Switzerland and the US. 2003, Bonn (Germany) empirica, [ http://www.sibis-eu.org/publications/pocketbook.htm ]

SPOT+: Students' perspective on technology. Survey report. 2004, [ http://www.spotplus.odl.org/downloads/Survey_report_final.pdf ]

Schanze S: The use of concept mapping in the med:u project – e-learning in medical education. A tool for structuring complex information and for testing learning performance [in German]. Evaluation of e-learning. Goals, methodological aspects, future [in German]. Edited by: Meister DM, Tergan S, Zentel P. 2004, Münster (Germany): Waxmann, 171-187.

Gibson KE, Silverberg M: A two-year experience teaching computer literacy to first-year medical students using skill-based cohorts. Bull Med Libr Assoc. 2000, 88 (2): 157-164.

MCW Portal. [ http://www.mcw-portal.com ]

Med-Forum. [ http://www.med-forum.at/ ]

Medizinstudium.at. [ http://www.medizinstudium.at ]

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6920/6/34/prepub

Acknowledgements

We thank Thomas Benesch for statistical advice. We would also like to thank Jens Dørup, William Fulton, and Sean Marz for critically reading the manuscript and their helpful suggestions.

Author information

Authors and Affiliations

Core Unit for Medical Education, Medical University of Vienna, Vienna, Austria

Thomas Michael Link & Richard Marz

Corresponding author

Correspondence to Thomas Michael Link.

Additional information

Competing interests

The author(s) declare that they have no competing interests.

Authors' contributions

RM and TML planned and organized courses [ 23 ] to promote computer literacy among medical students.

TML was responsible for designing the study, implementing the online questionnaire, analyzing the data, writing the first draft, and proofreading the final draft.

RM was responsible for designing the course content, recruiting and training the tutors and supervising all aspects of the course. He revised the article extensively.

Both authors read and approved the final version.

Thomas Michael Link and Richard Marz contributed equally to this work.

Electronic supplementary material

Additional File 1: This is the HTML output of the English translation of the questionnaire we used. The HTML was originally generated by a PHP script. As this is only the HTML representation of the questionnaire, clicking on the "Submit" button has no effect. The file can be viewed in any Internet browser. (HTM 53 KB)

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Link, T.M., Marz, R. Computer literacy and attitudes towards e-learning among first year medical students. BMC Med Educ 6 , 34 (2006). https://doi.org/10.1186/1472-6920-6-34

Received : 02 March 2006

Accepted : 19 June 2006

Published : 19 June 2006

DOI : https://doi.org/10.1186/1472-6920-6-34

Keywords

  • Internet Access
  • Computer Literacy
  • Computer Skill
  • Computer Infrastructure
  • Basic Computer Skill

BMC Medical Education

ISSN: 1472-6920

  • Open access
  • Published: 04 June 2024

Design and analysis of personalized serious games for information literacy: catering to introverted and extraverted individuals through game elements

  • Phimphakan Thongthip 1 ,
  • Kannikar Intawong 2 ,
  • Suepphong Chernbumroong 3 ,
  • Perasuk Worragin 3 ,
  • Yootthapong Tongpaeng 4 &
  • Kitti Puritat 1  

Humanities and Social Sciences Communications volume  11 , Article number:  725 ( 2024 ) Cite this article

33 Accesses

1 Altmetric

Metrics details

  • Science, technology and society

Personalized learning has become increasingly prevalent in computer-based education. Nevertheless, there remains a scarcity of studies addressing personalized serious games. This paper delves into a study on a personalized serious game that utilizes suitable game elements tailored to students’ personality traits, focusing on the dimensions of introversion and extraversion. To evaluate the impact of personalized serious games, a comparative experiment was conducted. The study involved both a Control group (Non-personalized game-based approach) and an Experimental group (Personalized game-based approach). Participants were assessed using pre-post tests measuring knowledge acquisition and retention in information literacy for source evaluation, as well as intrinsic motivation measured through the IMI questionnaire. The findings indicate that while personalized serious games can enhance intrinsic motivation, particularly in terms of perceived competence and effort, they did not significantly impact knowledge acquisition and retention. User behavior data statistics revealed a substantial 37% improvement in engagement, measured by both average and total playtime, especially noticeable among introverted participants. However, this personalized approach was found to be less effective for extraverted participants.

Introduction

In the contemporary digital age, characterized by the adoption and integration of digital technologies such as the internet and computers, people have an unprecedented ability to communicate with one another effortlessly, supported by abundant sources of information and knowledge. With these technological advancements, an abundance of social networking platforms has arisen, serving as favored sources of news consumption for a large number of people. According to a study by Agarwal and Dixit ( 2020 ), social media platforms have become increasingly prevalent as primary sources of news consumption: a significant proportion of individuals rely on them for news, with the study estimating that nearly 450 million of the surveyed individuals would consider social media platforms their primary source of daily news consumption by 2020 (Agarwal & Dixit, 2020 ). This expansion, which occurred at a rate of approximately one million users per month per country, resulted in a total of 4.1 billion active social media users globally (Kemp, 2019 ). Furthermore, Velichety and Shrivastava ( 2022 ) conducted a research study to better understand the dependency on social media platforms and search engines for news consumption and found that approximately 65% of those polled cited social media platforms and search engines as their major sources of daily news. These results underline the importance of social media platforms and search engines in altering people’s news-consumption habits in modern society, and this immense rise, together with the current user base, demonstrates the widespread dependence on social media platforms for obtaining news and information.

Despite their extensive use and acknowledgment, social media platforms confront a variety of challenges, including the prevalence of online misinformation, sometimes known as fake news. Hern ( 2017 ) and Allcott and Gentzkow ( 2017 ) highlight the concerning growth of online misinformation on social media platforms, attributing it to the lack of editorial oversight and the platforms’ inherent design, which allows false information to be transmitted easily to large audiences of readers. Hern’s findings underline the urgent necessity of dealing with misinformation on social media platforms in order to protect users from misleading information. While this issue has persisted since the inception of mass communication, the ascent of social media platforms has significantly exacerbated it. Recognizing the challenges posed by misinformation, the acquisition of information literacy skills, an important aspect of 21st-century competence that is extensively imparted to students, has become a significant educational endeavor aimed at reducing the risk of exposure to falsehoods and protecting people from falling victim to false information stories. Aydin ( 2021 ) emphasizes the need for information literacy abilities in countering misinformation and preventing people from becoming victims of misleading information stories. Learning these abilities provides individuals with the capacity to critically analyze information and distinguish between reliable sources and misinformation. Furthermore, educational institutions are increasingly using e-learning pedagogical techniques to teach students.

Nonetheless, effectively motivating and engaging students in the use of these digital learning platforms remains a challenge. Poondej and Lerdpornkulrat ( 2019 ) emphasize the problems that educational institutions encounter when engaging students in e-learning platforms, underscoring the significance of resolving these issues to improve the success of digital learning efforts. Serious games have emerged as potent tools for promoting lifelong learning and assisting individuals in staying abreast of evolving trends, and current interdisciplinary applications substantiate the significance of this emerging paradigm in the context of learning (Chen et al., 2019 ). Furthermore, in order to render a game adaptable and enjoyable for diverse player profiles, game personalization has been employed as a means to augment player engagement and attentiveness.

This research investigates the impact of a personalized serious game designed to enhance and teach information literacy skills among students, focusing specifically on the differences between introverted and extraverted individuals. The study aims to incorporate suitable game elements as tools to help combat the dissemination of fake news across various mass media platforms.

Literature review

Fake news and information literacy

Researchers and journalists have investigated false or deceptive information in news extensively, and numerous initiatives have been undertaken to discern and differentiate misleading information. For example, Wang ( 2020 ) defined fake news as “reports disseminating erroneous or inaccurate information while failing to acknowledge the inaccuracies within”. Park et al. ( 2020 ) established a comprehensive framework for categorizing fake news based on the intent to deceive and to inflict harm, comprising four distinct categories: non-information, disinformation, misinformation, and mal-information (Park et al., 2020 ). Non-information involves substantial intent to deceive with minimal intent to inflict harm, while disinformation denotes significant intent for both deception and harm. Misinformation indicates minimal intent for both deception and harm, while mal-information involves limited intent to deceive with a pronounced intent to cause harm (Park et al., 2020 ). Although there are four separate types of fake news, this study centers solely on misinformation. The definition of misinformation has been presented by several researchers. Pennycook and Rand ( 2019 ) define misinformation as “inaccurate or misleading facts that are disseminated with the intention to mislead.” Lewandowsky, Ecker, and Cook ( 2017 ) define misinformation as “information deemed inaccurate or misleading.” In their study, Vosoughi et al. ( 2018 ) define misinformation as “bogus data that is conveyed without considering a motive to mislead.” Tandoc ( 2021 ) argued that fake news represents a distinct category of deliberately crafted falsehoods mimicking legitimate news content. Consequently, individuals must possess essential skills for information retrieval and utilization due to the prevalent use of digital technology and the interconnectedness of the Internet in today’s information-centric society.

To combat fake news, information literacy can serve as an educational tool. Information literacy, defined differently by various authors, denotes an individual’s ability to determine their information requirements, efficiently search for information, critically assess procured information, and apply it to achieve their objectives (Guo & Huang, 2020 ). It involves defining information needs, managing information sources effectively, and proficiently using digital technology for sourcing, analyzing, and synthesizing information resources (Pinto et al., 2020 ). Information literacy encompasses critically evaluating the credibility of resources, utilizing appropriate citation methodologies, adhering to legal and ethical considerations, and formulating research inquiries precisely and efficiently (Reddy et al., 2020 ). However, the Internet poses risks, especially for those lacking information literacy skills. In the 21st century, education emphasizes lifelong learning skills, and information literacy forms the foundation for adapting to an ever-evolving information landscape (White, 2019 ). White’s findings support the view that individuals equipped with information literacy and lifelong learning skills are better prepared to face the multifaceted challenges of the 21st century (White, 2019 ). Therefore, information literacy has evolved into an indispensable skill set for contemporary survival and proficiency (Banik & Kumar, 2019 ).

Serious games for education

Serious video games have become increasingly popular in educational contexts because of their ability to improve educational experiences and performance. Such games are created with particular educational objectives in mind, seeking to involve students in engaging and immersive experiences that enhance knowledge acquisition, skills, and critical thinking. In contrast to traditional educational techniques, serious games provide an innovative and interactive atmosphere for learning that fosters active involvement and engagement (Chen & Michael, 2005 ). Landers ( 2014 ) differentiates serious games from those aimed purely at amusement by their focus on promoting learning in various forms. These games attempt to improve players’ understanding of various subjects while also helping to change their attitudes and views on those subjects. Serious games generate informative material during gameplay, usually in the form of raw user data that represents the player’s actions inside the game. This information reveals the learner’s techniques for finding, acquiring, and digesting knowledge. Vidakis et al. ( 2019 ) demonstrate how distinct serious games gather varying datasets, including queries, user interactions, and task completions, which are used to compute educational progression results. Connolly et al. ( 2012 ) underline several well-established benefits of using serious games in education, such as active learner involvement, facilitated and seamless learning experiences, timely feedback, and a significant increase in student motivation and engagement. Romero et al. ( 2015 ) further assert that well-designed serious games have proven to be very effective instructional tools, boosting students’ learning capacity by encouraging participation and generating critical thinking throughout the educational process. Furthermore, Ketamo et al. ( 2018 ) highlight how educational video games allow the customization of learning material and mode of delivery in order to accommodate personalized learning experiences. This personalizing component improves the flexibility of educational interventions, making them more successful at fulfilling learners’ different requirements and preferences.

Personalized educational games

Personalized educational games have developed as a promising strategy for meeting the diverse educational requirements and preferences of individual students. These games are intended to customize instructional material and learning experiences depending on each student’s unique characteristics, talents, and interests, thereby increasing engagement and improving learning outcomes. Personalized learning systems possess the capacity to adapt the presented educational content in real time, tailoring it to the specific preferences, abilities, and existing knowledge of individual learners. Furthermore, these systems have the capability to individualize instructional methods to optimize the learner’s performance. Consequently, they offer solutions to several challenges commonly encountered in traditional learning environments, including resource constraints, learner engagement, and the diversity of learners’ knowledge and preferred learning modalities (Tlili et al., 2019 ). Hence, to accommodate the distinctive needs of learners, numerous researchers and educators have contemplated the utilization of personalized learning systems as an alternative to the conventional one-size-fits-all approach (West, 2011 ). Hwang and Tsai ( 2011 ) presented the “Personalized Learning Environment” (PLE) framework, which is an example of customized games for learning. The PLE framework takes into account a variety of student characteristics, including their cognitive style, approach to learning, and previous knowledge, to personalize educational material and tasks according to learners’ requirements. PLEs provide specific educational experiences that increase the effectiveness of learning by utilizing unique learning methods. Additionally, an educational system known as LearnFit has been devised, which employs the Myers-Briggs Type Indicator (MBTI) to assess the learner’s personality and subsequently tailors a personalized learning approach for each individual based on their personality traits. Nevertheless, numerous personality models featuring diverse personality dimensions have been elucidated in the academic literature (Tlili et al., 2019 ). Among these dimensions, the introvert-extravert dimension is a prominent element, commonly observed in well-known personality models like the Big Five Factor and Myers-Briggs (Tlili et al., 2019 ). In addition to personality, learning styles, cognitive capacities, and past knowledge are important components in customized learning systems (Newman et al., 1995). Furthermore, technological advancements such as artificial intelligence and machine learning are improving the potential of personalized educational platforms, allowing for greater accuracy and effective adjustment to learners’ demands (VanLehn, 2006 ).

Aspects of introverted and extraverted personality types

Personality traits influence people’s habits of study and attitudes toward education. According to study findings, traits of personality, including extraversion, introversion, openness to experience, conscientiousness, agreeableness, and neuroticism, impact how people perform learning activities and engage in educational contexts (McCrae & Costa, 1987 ). Extraverts, for example, may favor collaboration environments for learning and prefer social engagement in the course of their studies, whereas introverts may favor isolated and independent study (McCrae & Costa, 1987 ). Mount et al. ( 2005 ) characterize personality traits as enduring psychological attributes that delineate individuals’ conduct and cognitive approach. Extensive research in the literature has demonstrated that personality exerts influence on learners across multiple dimensions, including their perception of educational systems (Cohen & Baruth, 2017 ), preferences for game genres and design elements in educational environments (Denden et al., 2018 ; Schimmenti et al., 2017 ), and their learning performance (Anderson et al., 2018 ). Moreover, extraversion and introversion preferences contribute to people’s perceptions of education, affecting their perspectives, drive, and participation in educational activities (Poropat, 2009 ). Extraverted people may see school as a chance for social engagement, connections, and personal development, whereas introverted people may perceive it as a method of intellectual study, discovering themselves, and contemplation (Poropat, 2009 ). Chen et al. ( 2016 ) discovered that variances in learners’ personalities can significantly influence their learning behaviors within the context of Massive Open Online Courses (MOOCs). As mentioned in the previous section, this study focuses on the introvert-extravert dimension. The designations “introversion” and “extraversion” trace their origins to the 1920s and are credited to the work of the psychologists Jung and Baynes ( 1923 ). The concept of introversion pertains to individuals who derive their vitality from internal sources such as thoughts, imagery, and contemplation. Conversely, extraversion refers to individuals who are action-oriented and derive their vitality from the external world (Tlili et al., 2019 ). Costa, McCrae ( 2008 ) have delineated six facets corresponding to each trait within the Big Five Factor model, including extraversion, neuroticism, agreeableness, conscientiousness, and openness to experience. In the context of extraversion, the six facets, as defined by Costa, McCrae ( 2008 ), are outlined as follows:

Warmth: These individuals are characterized by their amiable nature, demonstrating friendliness and affection towards others.

Excitement seeking: They exhibit a propensity for experiencing boredom quickly and actively seek excitement and stimulation.

Activity: They are marked by their high energy levels, liveliness, and preference for physical movement.

Assertiveness: These individuals exude self-assurance and often take on leadership roles within their social groups.

Gregariousness: They have an aversion to solitude and generally favor the company of others.

Positive emotions: They frequently experience a profusion of positive emotional states.

Introversion is characterized by being thoughtful, less inclined towards socializing, often reserved in speech, and feeling self-conscious when engaging in social interactions. Introverts typically enjoy smaller gatherings or one-on-one interactions, preferring to take their time getting acquainted with someone new. However, it is important to recognize that introverts simply express their sociality in unique ways, valuing personal connections, privacy, and tranquility (Paradilla et al., 2021 ). Introverts typically enjoy moments of solitude, favoring self-reflection over outward expression, prioritizing depth in their interactions, and often showing less emotional demonstration. They tend to confide personal information in a small circle of trusted individuals. Writing may be more appealing to introverts than verbal communication, and they may occasionally experience social fatigue, seeking consolation in solitude to recharge their energy (Dossey, 2016 ). The book by Grimes et al. ( 2011 ) “Four Meanings of Introversion: Social, Thinking, Anxious, and Inhibited Introversion” studies many aspects of introversion. Following is a description of each type:

“Social Introversion” is mainly concerned with interaction with others. People with social introversion may favor isolation or small groups over significant social gatherings. They prefer to recharge by enjoying themselves on their own or with a small group of companions, as opposed to huge groups.

“Thinking introversion” refers to how people organize knowledge for themselves. These people can devote a lot of time to meditating on their ideas and thoughts, valuing reflective thinking over external expression. They can succeed at tasks that involve profound conceptualization or creative thinking.

“Anxious Introversion” type is characterized by an increased response to social interactions as well as an inclination for sensations of unease in situations that involve others. People with such a type of introversion can become self-conscious or uneasy around other people, which causes them to refrain from some social gatherings or encounters.

“Inhibited Introversion” indicates a guarded or wary attitude toward interacting with other people. Those with this form of introversion can be cautious when interacting with other people or conveying themselves completely. They can choose to watch instead of taking part in social events, usually spending their time relaxing before utterly participating.

Nevertheless, according to Blevins et al. ( 2022 ), introversion can occasionally be interpreted incorrectly as being a complete contrast to extraversion. The misunderstanding associates introversion with unfavorable valenced features, such as awkwardness in social situations as well as low social self-esteem. This misperception comes from an inclination to perceive extraversion and introversion as opposite in characteristics, with introverts perceived as having no ability to interact with others or confidence when contrasted to extroverts. Nevertheless, this simplification dismisses the richness and variety that come with every character type. Table 1 provides a summary of the distinctions between extraversion and introversion traits, focusing on empirical data and theoretical frameworks.

Personality traits are notably linked to the learning styles of individuals. Extraversion and introversion are two essential personality types that have a major effect on people’s ways of learning and attitudes toward education. Extraversion is defined by socialization, assertiveness, and a desire for outward stimulation, whereas introversion is distinguished by contemplation, reserve, and an inclination for the inside thought (Costa, McCrae ( 2008 )). Introvert learners are characterized as reflective thinkers, gravitating towards more contemplative learning approaches; in contrast, extravert learners exhibit a preference for active and interactive learning methodologies (Felder & Silverman, 1988 ). Individuals with a high level of extraversion excel in interactive and social educational environments, enjoying collaborative tasks, and discussions in groups. According to research, extraverts tend to favor active learning techniques that involve group assignments and hands-on tasks, which encourage social connection and participation (Borg & Shapiro, 1996 ). Furthermore, extraverts can try out extracurricular events and leadership positions to enhance their overall school experience and personal growth (Borg & Shapiro, 1996 ). Introverts, on the other hand, may favor isolated and independent study, as well as serene and concentrated situations that enable deep thought and concentration (McCrae & Costa, 1987 ). Introverts may succeed in self-paced educational settings where they can investigate issues at their own speed and dive deeply into complicated topics (McCrae & Costa, 1987 ). Furthermore, introverts may favor written communications and written reflection projects that allow them to communicate their ideas and opinions. Moreover, several research investigations have underscored the impact of personality on the cognitive load experienced by individuals (Gray et al., 2005 ). Within this framework, Eysenck ( 1967 ) posited within his personality model that introverted and extraverted individuals may exhibit distinct cognitive loads contingent upon the level of arousal present in a given learning environment. As demonstrated by Nuckcheddy ( 2018 ) through a comprehensive literature review, personality traits, particularly introverted and extraverted characteristics, can exert a discernible influence on the motivation levels of individuals within a workplace. Introverted and extraverted tendencies exhibit distinct approaches to dealing with the two motivational elements delineated in Frederick Herzberg’s motivation theory, namely motivators and hygiene factors. For example, individuals with introverted tendencies tend to adeptly address hygiene factors like supervision and interpersonal relationships, while those with extraverted inclinations effectively navigate the existence of intrinsic motivators such as attainment and acknowledgement (Herzberg ( 2008 ).

Game elements for learner personalities

Learner personalities have been observed to exert influence over their inclinations towards specific game elements. For example, in the study by Jia et al. ( 2016 ), it was noted that extravert learners displayed significantly favorable attitudes toward utilizing game elements like leaderboards and progress bars, whereas their introvert counterparts did not share this preference. In other words, extraverted individuals exhibited a higher level of motivation when exposed to points, levels, and ranking elements. Moreover, researchers observed that individuals with extraverted personalities tended to derive greater enjoyment from rewards (Jia et al., 2016 ). Additionally, Codish, Ravid ( 2014a ) discovered that extravert learners exhibited a greater inclination for using badges compared to introvert learners. Besides, they conducted a study using preference questionnaires to investigate how individuals with extraverted and introverted characteristics perceived gamification. It was observed that extraverts exhibited a preference for badges in this context (Codish, Ravid ( 2014b )). Further, participants exhibiting introverted characteristics, within both the Experimental and Control groups, demonstrated a notably increased accumulation of logins in comparison to the extraverted participants (Smiderle et al., 2020 ). Besides, the inclusion of an avatar element can provide a degree of anonymity to introverted individuals, facilitating their self-expression in a more carefree manner. Introverted users found that avatars offered novel avenues for adjusting their appearances to facilitate communication in various contexts (Baker et al., 2021 ). Table 2 demonstrates the summary of the effects of game elements on introverts and extraverts.

Purpose of study and research questions

Regarding the creation of an effective serious game, numerous research studies have highlighted the potential for designing personalized games to enhance various aspects of learning, including knowledge acquisition (Pakinee & Puritat, 2021 ), motivation (Harteveld & Sutherland, 2017 ), and technology acceptance (Svendsen et al., 2013 ; Maican et al., 2019 ). However, to the best of our current knowledge, no study has reported the implementation of personalized serious games that consider learners’ personalities by implementing game elements to impact learner outcomes in knowledge acquisition, motivation, knowledge retention, and learner behavior. Therefore, this study contributes to the existing literature by developing a personalized serious game for learners focusing on information literacy and fake news. It is grounded in the Big Five theory, with specific emphasis on extraversion and introversion. The study’s aim is to explore the effects of personalization on motivation, knowledge acquisition, knowledge retention, and user behavior within the context of our online serious game. To this end, the study addresses the following research questions:

(RQ1) Is there a significant difference in enhancing knowledge acquisition between Personalized and Non-personalized serious games?

(RQ2) Is there a significant difference in enhancing knowledge retention between Personalized and Non-personalized serious games?

(RQ3) Is there a significant difference in enhancing learning motivation between Personalized and Non-personalized serious games?

(RQ4) What is the effect on engagement for extraversion and introversion personality traits in Personalized and Non-personalized serious games?

Addressing these research questions carries substantial advantages for the field of library and information science, as well as the broader educational and media literacy domains. Through an exploration of how personalization within serious games impacts learners’ motivation, knowledge acquisition, and knowledge retention, this study can offer invaluable insights that extend to the development of effective educational resources. These findings can be of immense practical value to libraries and educators, serving as a guideline for librarians and researchers aiming to design personality-centered serious games. This research can facilitate the creation of tailored educational tools that enhance information literacy, equipping librarians and educators with powerful resources to instruct and guide users effectively in navigating the complexities of the information landscape.

Game design and implementation

Libraries and Library and Information Science (LIS) professionals play a pivotal role in guiding user communities to access and critically evaluate accurate information (Agosto, 2018 ; De Paor & Heravi, 2020 ). They are also responsible for developing tools that enhance media and digital literacy competencies to combat the proliferation of fake news. In line with this mission, Thailand’s Safe and Creative Media Development Fund has collaborated with the Department of Library and Information Science at Chiang Mai University to develop an online serious game aimed at enhancing information literacy and addressing misinformation among students in higher education and the wider community.

In this study, we utilize an online serious game as an educational tool for teaching information literacy within the university curriculum. Information Literacy is a general subject open to all students, and it focuses on equipping them with the essential skills to critically assess information, particularly in today’s digital age characterized by the proliferation of false or misleading information. This curriculum includes teaching source evaluation, fact-checking, and using methods like the Currency, Relevance, Authority, Accuracy, and Purpose (CRAAP) test (Liu, 2021 ) to evaluate sources for their credibility and reliability.
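The CRAAP test is, in essence, a checklist applied to each source. The following minimal rubric sketch is illustrative only; the scoring scale and pass threshold are assumptions, not the rubric actually used in the course:

```python
# Minimal illustrative CRAAP rubric: each criterion is scored 0-2 by the evaluator.
# The scale and the pass threshold below are assumptions, not the course's actual rule.
CRAAP_CRITERIA = ["Currency", "Relevance", "Authority", "Accuracy", "Purpose"]

def looks_credible(ratings: dict[str, int], threshold: int = 7) -> bool:
    """Return True if a source is judged credible under this toy rubric."""
    return sum(ratings[criterion] for criterion in CRAAP_CRITERIA) >= threshold

example = {"Currency": 2, "Relevance": 2, "Authority": 1, "Accuracy": 2, "Purpose": 1}
print(looks_credible(example))  # True under the assumed threshold
```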

The game design concept centers on a 2D platformer side-scrolling Role-Playing Game (RPG). In this design, learners take on the role of librarians responsible for managing the news content to be disseminated by the National Library of Thailand to the broader community. The primary task for players is to assess and verify news sources gathered from various social media platforms. Their objective is to distinguish between accurate information and misinformation. Subsequently, players investigate the impact of the news through city observations and by listening to residents. The game’s activities encompass key steps: engagement with citizens, observation of the city, and evaluation of news sources. Each of these steps will be described in more detail below.

Engagement with citizens

The initial game activity is intended to facilitate learning about the impact of misinformation by engaging with the city’s residents. Players can control their character, walking and communicating with citizens from their homes, progressing through various areas including rural zones, hospitals, the city center, and business communities before reaching the workplace. The game’s mechanics are designed to help players understand people’s perspectives on news and its effects. Additionally, during this game activity, “men in black” non-player characters (NPCs) have been incorporated to impart information literacy concepts and the CRAAP test for source verification to players along the way through pop-up messages, as depicted in Fig. 1 .

Fig. 1. NPCs provide the knowledge of information literacy.

Observation of the city

This key game activity aims to simulate the effects of fake news on a broader community and society. Similar to the previous game activity, players can control their character to walk through the city and observe the impact of fake news. The game design includes five categories of news: education, economy, social matters, health, and religion and belief. This game design is a crucial concept for learners to understand and become aware of the impact of misinformation. It illustrates how a single piece of misinformation, when disseminated to the wider community, can rapidly have a serious impact, as shown in Fig. 2 .

Fig. 2. How players can learn about the impact of misinformation on the city.

Evaluation of news sources

This activity constitutes the most critical game design element for learners to enhance their news source evaluation skills. The game utilizes a quiz-based format where users are assigned the task of assessing six news items on their table daily, spanning various categories. Players must thoughtfully determine which of these news items are fake and which are authentic before delivering them to the city’s residents. The primary objective of this learning activity is to enhance skills in evaluating news using the CRAAP test. The game mechanics provide real-time feedback to players in the form of scoring points. The gameplay for this activity is depicted in Fig. 3 .

Fig. 3. The player’s task of evaluating the authenticity of news articles to distinguish between real and fake information.
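A minimal sketch of this evaluation-and-feedback mechanic is shown below; the real game was implemented in Unity, so this Python mock-up, its class names, and its scoring rule are illustrative assumptions rather than the actual implementation:

```python
from dataclasses import dataclass

@dataclass
class NewsItem:
    headline: str
    is_fake: bool  # ground truth held by the game, hidden from the player

def score_round(items: list[NewsItem], player_calls: list[bool]) -> int:
    """Score one in-game day: +1 per correct fake/real call, with immediate feedback."""
    score = 0
    for item, called_fake in zip(items, player_calls):
        correct = called_fake == item.is_fake
        score += int(correct)
        print(f"{item.headline!r}: {'correct' if correct else 'wrong'}")
    return score

daily_items = [NewsItem("Miracle cure found", True), NewsItem("Library opens new wing", False)]
print(score_round(daily_items, player_calls=[True, True]))  # feedback per item, then score 1
```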

In summary, the game is structured around a seven-day time limit for play and involves verifying approximately forty news items to determine their authenticity. Each round of play takes approximately 2–3 hours to complete. The game design is intentional, aiming to foster engagement and replayability (Adellin et al., 2019 ) for players by offering numerous alternative scenarios of fake news that can be influenced by the player’s choices. This design approach enhances the game’s value for continuous play. For instance, a player might be aware that COVID-19 cannot spread through the air, but they may choose to characterize it as real news to observe the impact on society and individuals. We used the Unity game engine to develop this game, and it is available on two platforms: PC-based and online web browsers. Additionally, for research purposes, two versions of the game were developed to cater to introverted and extraverted players by incorporating suitable game elements.

Game design elements for introverts

Introverted learners exhibit a preference for thoughtful and reflective approaches to education (Felder & Silverman, 1988 ), often finding more satisfaction in solitary tasks than in group settings. Consequently, in a gaming context, they might not favor making their activities visible to others (Jia et al., 2016 ). For instance, the act of logging in can significantly motivate introverted learners because it is a solitary task (Smiderle et al., 2020 ). They may choose to engage in daily login tasks individually, avoiding group activities or competitions.

Because introverted learners tend toward lower self-confidence, a gaming system can integrate game elements or rewards, particularly an individual ranking system, designed to boost their confidence within the gaming environment (Tlili et al., 2019 ). Incorporating an avatar element provides introverted individuals with a sense of anonymity, enabling them to express themselves more openly. Avatars offer a unique opportunity for introverted individuals to modify their visual appearance, facilitating communication in various situations (Baker et al., 2021 ). In our experiment, which specifically targeted the introvert group, we developed a game design tailored to introverted individuals. This design integrated three core game elements, namely Avatar, Daily Reward, and Ranking System, into the comprehensive framework of a serious game. Through this customized version, we assessed the effects of a personalized serious game explicitly designed for introverted players (Fig. 4 ).

Fig. 4. The design of game elements applied to the introvert-focused serious game version.

Game design elements for extraverts

Extraversion characterizes individuals who derive their energy from external stimuli and possess an action-oriented nature (Tlili et al., 2019 ). Extravert learners favor dynamic and participatory learning approaches, often showing a preference for socially competitive activities and a tendency to showcase their achievements (Jia et al., 2016 ). They are inclined to actively engage in social systems, particularly when a larger audience is involved (Nov & Arazy, 2013 ).

Enjoying the spotlight and social prominence, extraverts are drawn to elements such as the Progress system, Levels, and Leaderboards, perceiving Rewards as particularly enjoyable. To enhance prolonged engagement among extraverted users, the inclusion of Leaderboards appears to be an effective choice. Leaderboards offer a platform for users to interact within a dynamically evolving social group, catering to and adjusting to the actions of extraverted users (Jia et al., 2016 ). In our study focusing on the extravert group, we created an Extravert version with specifically tailored game elements—Badge, Progress, and Leaderboard—for extraverts. Notably, the Badge and Progress elements allow other players to view and compare achievements on the leaderboard, similarly to the application of introvert game elements, as depicted in Fig. 5 .

Fig. 5 The design of game elements applied to the extravert-focused serious game version.
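Taken together, the two personalized versions differ only in which game elements are enabled for each personality trait. The following minimal configuration sketch summarizes this trait-to-element mapping; the keys and element labels are illustrative, not identifiers taken from the game’s Unity source code.

```python
# Element sets for the two personalized versions, as described above (assumed labels).
GAME_ELEMENTS = {
    "introvert": ["avatar", "daily_reward", "individual_ranking"],
    "extravert": ["badge", "progress", "leaderboard"],
}

def elements_for(trait: str) -> list:
    """Return the game elements enabled for a given personality trait."""
    return GAME_ELEMENTS.get(trait, [])

print(elements_for("extravert"))   # ['badge', 'progress', 'leaderboard']
```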

Research methodology

This study employs a quasi-experimental design to investigate the effectiveness of “How to Spot Fake News,” a serious game offered in two versions: a non-personalized version and a personalized version tailored to the learning preferences of introverted and extraverted participants. The primary goal is to assess how these versions contribute to enhancing participants’ information literacy and their ability to discern fake news. Quasi-experimental designs are particularly well suited to educational research, as they allow interventions to be evaluated in real-world settings while controlling for various factors that might affect the outcomes (Maciejewski, 2020).

Participants

We recruited undergraduate students enrolled in the Information Literacy and Information Presentation courses at Chiang Mai University. These courses collectively enroll approximately 1000 students each year from various academic faculties. The rationale for selecting participants from these courses was to ensure a relatively consistent baseline of knowledge regarding information literacy and fake news. We then invited enrolled students to participate in our experiments. To encourage active engagement during the online experiments, we informed them of a financial compensation of 200 baht (approximately 5 USD). This compensation acknowledged the time and effort invested by participants, incentivized them to provide high-quality questionnaire data, and encouraged long-term participation, which was needed for collecting knowledge-retention data. Through these recruitment techniques, 86 students volunteered to participate. All volunteers completed the Big Five Inventory (BFI) questionnaire (Zhao & Seibert, 2006), a widely recognized instrument in psychology for assessing personality traits, and participants were then selected to achieve an even distribution across gender and personality traits. Weekly gameplay ranged from 1 to 5 hours. The final sample consisted of 58 participants aged between 20 and 22 (mean age 20.91), evenly divided into 29 males and 29 females, with a balanced representation of introverted and extraverted personalities. Participants were randomly assigned to the Control and Experimental groups while maintaining an equal distribution of personality types. The demographics of the participants are presented in Table 3.
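The balanced allocation described above can be illustrated with a short sketch: participants are first stratified by gender and BFI-derived trait, and each stratum is then split evenly between the Control and Experimental groups. The field names and the even-split rule are assumptions for illustration; the study’s actual assignment procedure may have differed in detail.

```python
import random
from collections import defaultdict

def assign_groups(participants, seed=42):
    """Split each (gender, trait) stratum evenly between the two study groups.

    participants: list of dicts with 'id', 'gender', and 'trait' keys (assumed fields).
    """
    random.seed(seed)
    strata = defaultdict(list)
    for p in participants:
        strata[(p["gender"], p["trait"])].append(p)

    assignment = {}
    for members in strata.values():
        random.shuffle(members)
        half = len(members) // 2
        for p in members[:half]:
            assignment[p["id"]] = "control"
        for p in members[half:]:
            assignment[p["id"]] = "experimental"
    return assignment

# Toy example with eight participants
sample = [
    {"id": f"p{i}", "gender": "F" if i % 2 else "M",
     "trait": "introvert" if i < 4 else "extravert"}
    for i in range(8)
]
print(assign_groups(sample))
```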

Instruments

Online serious game

The primary instrument in this study is our online serious game, “How to Spot Fake News,” as detailed in the preceding design sections. The game was specifically designed to provide instruction within university information literacy courses, with a particular focus on the evaluation of fake news and misinformation, and was crafted to enhance replay value by offering players multiple alternative scenarios influenced by their choices, ensuring a continuous and engaging gameplay experience. To facilitate our research, we customized the serious game into two versions. The personalized version was tailored to two personality types, introverts and extraverts, by adapting various game elements, as discussed in the design section. In contrast, the non-personalized version retained the original game design, without any customization or additional game elements. This approach allowed us to explore the impact of personalization on user experiences and outcomes.

Pre/post tests

The pre/post tests and questionnaire instruments were designed to collect quantitative data and consisted of two parts. The first part encompassed pre/post tests aimed at evaluating knowledge acquisition and retention related to information literacy and fake news, with a specific focus on assessing the comprehension of the CRAAP test for source verification in social media. These tests consisted of 30 questions and were developed by a lecturer from the Department of Library and Information Science.

To measure the impact of the serious game on motivation in both groups, we turned to Self-Determination Theory (SDT) (Ryan & Deci, 2000). This theory distinguishes two types of motivation: intrinsic motivation, characterized by an innate desire to engage in an activity because it is inherently enjoyable, fulfilling, or meaningful; and extrinsic motivation, rooted in external factors and incentives, whereby a person engages in an activity primarily to obtain rewards, avoid punishments, or meet societal expectations. Research suggests that educational games have the potential to satisfy the psychological needs for autonomy, competence, and relatedness (Przybylski et al., 2010). The satisfaction of these needs has been positively linked to the enjoyment of human computation games, which in turn affects intrinsic motivation (Pe-Than et al., 2014). This enhancement of intrinsic motivation is particularly beneficial for learning, as games stimulate effort, curiosity, and interest in learning activities within the context of serious games (Chen et al., 2017; Pakinee & Puritat, 2021). Thus, the second part of the questionnaire used the short Intrinsic Motivation Inventory (IMI) questionnaire (Vos et al., 2011; Leenaraj et al., 2023), adapted from the original version (Ryan & Deci, 2000). The IMI questionnaire comprises 14 items rated on a five-point Likert scale, ranging from “strongly disagree” to “strongly agree,” covering three dimensions of intrinsic motivation: Effort, Interest, and Perceived Competence. Both parts are included in the Appendix.
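As an illustration of how such subscale scores are typically derived, the sketch below averages Likert ratings within each IMI dimension. The item-to-subscale mapping shown is hypothetical; the actual questionnaire items are listed in the Appendix.

```python
# Hypothetical item-to-subscale mapping; the real items appear in the Appendix.
SUBSCALES = {
    "interest":             [1, 2, 3, 4, 5],
    "perceived_competence": [6, 7, 8, 9],
    "effort":               [10, 11, 12, 13, 14],
}

def score_imi(responses):
    """responses: dict mapping item number (1-14) to a Likert rating (1-5)."""
    return {
        name: sum(responses[i] for i in items) / len(items)
        for name, items in SUBSCALES.items()
    }

example = {i: 4 for i in range(1, 15)}   # a participant answering "agree" throughout
print(score_imi(example))                # each subscale mean is 4.0
```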

Login data statistics platform

Throughout the experiments, participants accessed the serious game via their online accounts on the webpage. The serious game, in turn, collected essential login data, game scores, login timestamps, and time durations. These records served as the foundation for the systematic recording and organization of login events, ensuring a structured approach to quantitative data collection. Comprehensive logs were maintained, documenting user activities including usernames, timestamps, time durations, and the outcomes of login attempts. This statistics platform provided valuable insights into user behaviors and motivations, further enriching our data collection, and it subsequently played a pivotal role in processing and analyzing the accumulated data, offering a range of statistical and visual tools for extracting the insights needed to address our research questions.
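A minimal sketch of the kind of record the platform stored per login, and of how total playtime per user could be aggregated from such records, is shown below. The field names and timestamp format are assumptions for illustration rather than the platform’s actual schema.

```python
from datetime import datetime

# Two assumed log records for one participant (field names are illustrative).
log = [
    {"user": "p01", "login": "2023-06-01 10:00", "logout": "2023-06-01 10:45", "score": 12},
    {"user": "p01", "login": "2023-06-02 09:30", "logout": "2023-06-02 10:10", "score": 18},
]

def total_minutes(records, user):
    """Sum session durations (in minutes) for a given user."""
    fmt = "%Y-%m-%d %H:%M"
    return sum(
        int((datetime.strptime(r["logout"], fmt)
             - datetime.strptime(r["login"], fmt)).total_seconds() // 60)
        for r in records
        if r["user"] == user
    )

print(total_minutes(log, "p01"))   # 85
```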

After recruiting the 58 eligible participants, we structured the experiment into four distinct steps. For all steps, participants completed the required questionnaires online in a designated room. The flowchart illustrating our experimental procedure is presented in Fig. 6. A comprehensive description of each of the four steps is provided below.

Fig. 6 Overview of research methodology.

Step 1: Consent form, pre-test, and pre-IMI questionnaire

In the first step, participants were introduced to the research study and informed about its purpose. They were then asked to sign the consent form, confirming their informed willingness to participate in the research experiments. Following this, participants completed a pre-test designed to assess their baseline knowledge of information literacy and fake news, which was relevant to the information literacy course. Participants were also administered the pre-IMI questionnaire to measure their intrinsic motivation. Subsequently, participants were instructed to attend their regular lecture class in the information literacy course, specifically focusing on evaluating fake news and misinformation. After the class, participants were provided with login credentials for the online experiment. It is important to note that our serious game was designed as supplementary material for students after they had enrolled in the lecture class. This step was estimated to take approximately one hour to complete.

Step 2: Experiment

In the second step, participants were divided into two groups: the Non-personalized and the Personalized game-based groups. Gender and personality traits, including introversion and extraversion, were evenly distributed between the groups, as shown in Fig. 6. Notably, participants were unaware of the group to which they had been assigned. Participants were then briefed on the duration of gameplay, which was limited to two weeks. During this period, participants were encouraged to engage with the game as much as they wished, with the requirement of completing the game at least once. Our game design was deliberately constructed to maximize replay value, recognizing that the skill of evaluating news sources benefits from practice and repetition; this approach allowed us to investigate participant motivation and behavior effectively. It is important to note that we customized the login system to direct participants to the appropriate game version based on their group assignment. This step lasted approximately two weeks, after which the login accounts expired.

Step 3: Post-test, post-IMI questionnaire, and statistical data collection

Following the completion of the serious game experiment, participants from both groups gathered in a designated room to take a post-test assessing their knowledge acquisition related to information literacy. Participants also filled out the post-IMI questionnaire to gauge their motivation levels linked to the serious game. We then compared the pre-test and post-test scores to evaluate the impact of the serious game on knowledge acquisition and motivation, distinguishing between the Control and Experimental groups. The data collected from the questionnaires and pre/post-tests were analyzed using quantitative methods in the SPSS program. Finally, we extracted the statistical data from the web server to conduct an in-depth analysis of user behavior. This step was expected to take participants approximately one hour to complete.

Step 4: Delayed post-test for knowledge retention

In the final step, we conducted a follow-up session with the same participants from both groups two months later, having them take the same post-test in the designated room to assess knowledge retention. Upon completing the examination, we provided all participants with the financial compensation of 200 baht. This compensation was allocated in the final step to ensure the participation of all individuals in the delayed post-test for knowledge retention, which was crucial for our data collection and for answering RQ2. Please note that even though participants took the questionnaire online, it still had to be completed in the designated room in order to prevent the use of any alternative tools, which could have biased our data collection. This step was estimated to take approximately one hour to complete.

Data analysis

We collected and analyzed the results of the pre/post tests and the delayed post-test, as well as the IMI questionnaire, to assess knowledge acquisition, retention, and intrinsic motivation in information literacy, specifically in evaluating media sources. The analysis compared the Control group (Non-personalized serious game) with the Experimental group (Personalized serious game). For the pre/post tests, we first used the Shapiro-Wilk test to confirm the normal distribution of the data and then employed the parametric paired samples t-test to evaluate knowledge acquisition, retention, and intrinsic motivation. A p-value of less than 0.05 was considered statistically significant. Additionally, we calculated the effect size using Cohen’s d: by convention, a value of about 0.2 indicates a small effect with limited practical meaning, whereas a value of 0.8 or higher indicates a large, practically meaningful difference between the two groups.
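For readers who prefer code to prose, the analysis pipeline described above (normality check, paired samples t-test, and Cohen’s d) can be sketched with SciPy as follows. The score arrays are placeholders, and the Cohen’s d shown uses the standard deviation of the paired differences, which is one common convention; the study’s SPSS computation may differ slightly.

```python
import numpy as np
from scipy import stats

pre = np.array([6, 5, 8, 7, 6, 4, 9, 6])            # placeholder pre-test scores
post = np.array([16, 15, 19, 17, 18, 14, 20, 16])   # placeholder post-test scores

# 1. Shapiro-Wilk test of normality on the paired differences
w_stat, p_normal = stats.shapiro(post - pre)

# 2. Paired samples t-test (parametric; justified when p_normal >= 0.05)
t_stat, p_value = stats.ttest_rel(post, pre)

# 3. Cohen's d for paired data: mean difference / SD of the differences
diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"Shapiro-Wilk p = {p_normal:.3f}, t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```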

The pre/post tests of knowledge acquisition and retention

The impact of the serious game on learning was assessed through the pre/post tests to address the first and second research questions. The results are presented in Tables 4 and 5.

Table 4 presents the results for knowledge acquisition and retention in the pre/post tests. The pre-test scores indicate participants’ initial knowledge levels before any intervention (Control: M = 6.24, SD = 2.42; Experimental: M = 6.48, SD = 2.14). After completing the two-week experiment, both groups showed significant improvement in knowledge acquisition, as indicated by their respective post-test scores (Control: M = 16.68, SD = 4.04; Experimental: M = 17.34, SD = 4.62). The delayed post-test results were M = 13.65, SD = 4.19 for the Control group and M = 14.37, SD = 4.22 for the Experimental group.

As shown in Table 5, the paired samples t-test comparing the pre/post differences between the two groups was not significant (p = 0.711), suggesting that the difference in effectiveness between the Non-personalized and Personalized serious games was not statistically significant; the small effect size (d = 0.296) likewise indicated no practical significance. The comparison of pre-test scores between the groups was also not significant (p = 0.619), with a small, practically negligible effect size (d = 0.272).

Regarding Tables 4 and 5 , the data suggest that the use of personalized serious games among students at Chiang Mai University significantly enhanced knowledge acquisition and retention. However, this improvement was also observed in the control group, indicating that the utilization of a non-personalized serious game may have similarly enhanced knowledge acquisition and retention. Therefore, while both interventions led to improved outcomes, the personalized game did not demonstrate superiority over the non-personalized game in terms of knowledge acquisition and retention.

The pre/post tests of intrinsic motivation

The data on intrinsic motivation are summarized in Table 6 , including the pre and post IMI questionnaires for both groups. For the Control group, the data showed a significant difference only in the dimension of Interest ( p  < 0.001). On the other hand, data for the Experimental group demonstrated a significant improvement in all dimensions, including Perceived competence ( p  = 0.003), Interest ( p  < 0.001), and Effort ( p  < 0.001).

Table 7 provides a comprehensive overview of the comparative analysis between the Control and Experimental groups using the IMI questionnaire. Accordingly, we first conducted the Shapiro-Wilk test to confirm the normal distribution of data. Subsequently, we utilized Welch’s t-test for comparison, revealing a significant difference only in the dimension of Perceived Competence ( p  = 0.035). In contrast, the differences in the dimensions of Interest ( p  = 0.714) and Effort ( p  = 0.175) were not statistically significant.

Login data statistics (user behavior)

The statistical analysis of login data for both groups aimed to discern the differences in user behavior between Personalized and Non-personalized serious games, providing insights into player engagement (Rapp, 2022 ). Fig. 7 presents the total playtime in minutes for both Control and Experimental groups over a 14-day period. Initially, both groups experienced a surge in playtime, peaking at slightly above 1000 min on the first day, followed by a sharp decline until the fourth day. Afterward, playtime stabilized around the 200 min mark for the remaining observation period in the Control group. In contrast, the Experimental group demonstrated a more gradual decline in playtime from the outset, reaching a minimum around day 7. Subsequently, there was a noticeable upward trend from day 8 to 14, indicating sustained player engagement over an extended period.

Fig. 7 Comparison of total playtimes between the Non-personalized and Personalized games.

A more detailed comparison of the total playtime for extraverts in both groups is presented in Fig. 8 (Left), revealing that the Experimental and Control groups exhibited consistently similar playtimes. Interestingly, for introverts (Fig. 8, Right), playtime commenced at an apex close to 500 min on day one but swiftly dropped in the Control group, leveling off near 100 min by day 4 and continuing in this range, with minor fluctuations, for the remainder of the observation period. Conversely, the Experimental group demonstrated a more varied pattern, with playtime hovering around the 100 min mark for most days but increasing notably from day 12 and culminating at nearly 250 min on day 14. Compared with the extraverted participants, it is evident that while both personality types experienced a sharp decline in engagement with the Non-personalized game, introverts displayed a more pronounced resurgence in playtime with the Personalized game in the latter part of the observation window.

Fig. 8 Comparison of total playtimes between the two versions: Extravert (Left) and Introvert (Right).

Table 8 provides a comprehensive summary of user behavior statistics for both groups, with participants categorized by personality trait as introverts and extraverts. In terms of total playtime, the Experimental group logged 3635 min, substantially surpassing the 2726 min recorded by the Control group. When examining the data by personality trait, a more nuanced pattern emerged. Extraverted participants in both groups exhibited similar total playtime, with the Control group accumulating 1413 min and the Experimental group totaling 1527 min. The substantial contrast appeared within the introvert category, where the Experimental group displayed a total playtime of 2108 min, considerably higher than the 1313 min logged by the Control group. This pattern is consistent when analyzing the average playtime per user and the number of logins. These comparisons indicate that the personalized game had a more pronounced impact on increasing playtime and engagement, particularly among introverted participants, whereas for extraverted participants the effect on total playtime was less pronounced.

In terms of measuring statistical significance, we compared the total playtime between introvert and extravert participants using Welch’s t-test in both the Control and Experimental Groups, as shown in Table 9 . The results align with the visual comparison of the means, indicating a significant difference in the personality trait of introversion (p < 0.001). On the other hand, the differences linked to extraversion (p = 0.053) were not found to be statistically significant.
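The between-group comparison of playtime reported in Table 9 corresponds to Welch’s (unequal-variance) t-test, sketched below with SciPy on placeholder playtime vectors rather than the study’s raw data.

```python
import numpy as np
from scipy import stats

# Placeholder per-participant total playtime in minutes (not the study's raw data)
introvert_control      = np.array([90, 110, 80, 95, 100, 85])
introvert_experimental = np.array([150, 170, 140, 160, 155, 165])

# equal_var=False selects Welch's t-test, which does not assume equal variances
t_stat, p_value = stats.ttest_ind(introvert_experimental, introvert_control, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")
```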

Discussion and findings

Effects of the personalized serious game

Concerning the effect of personalized serious games on knowledge acquisition (RQ1), the pre/post-test results on information literacy and source evaluation revealed that the use of personalized serious games among students at Chiang Mai University did not significantly enhance knowledge acquisition compared with the use of a non-personalized game. This outcome contradicts the findings of Tlili et al. (2019), who reported an improvement in knowledge acquisition through reduced cognitive load during learning. In contrast, a study by Smiderle et al. (2019) supports our results, indicating that both introverted and extraverted groups displayed enhanced learning performance but that the difference between the two groups was not statistically significant.

Regarding knowledge retention (RQ2), a post-test was conducted two months after the initial study to assess knowledge acquisition. The results revealed no significant difference between the two groups. Notably, there is a dearth of studies examining personalized games for introverts and extraverts concerning knowledge retention. Our findings indicate that the use of personalized serious games, incorporating suitable game elements, did not demonstrate a significant difference between introverted and extraverted individuals in terms of both knowledge acquisition and retention. This lack of distinction might be attributed to the limited impact of individual game elements on knowledge acquisition (Landers et al., 2017 ).

Concerning intrinsic motivation (RQ3), the group using the personalized game showed significant improvement across all dimensions, including Perceived Competence, Interest, and Effort, whereas the Non-personalized game group showed significance only in the dimension of Interest. Comparative analysis using Welch’s t-test revealed a significant between-group difference only in Perceived Competence (p = 0.035). This suggests that the use of personalized serious games among students at Chiang Mai University significantly enhanced learning motivation, particularly in the dimension of Perceived Competence. Nonetheless, the observed improvement in Interest, regardless of game personalization, is consistent with previous studies indicating that educational games generally boost intrinsic motivation by providing fun and interest (Arayaphan et al., 2022; Leenaraj et al., 2023; Nieuwhof-Leppink et al., 2019). The divergent results in intrinsic motivation between the groups, specifically in terms of Perceived Competence and Effort, may be ascribed to the differential impact of each game element on introverted and extraverted individuals. For instance, the Leaderboard element (Jia et al., 2016) might induce pressure or anxiety in introverts due to concerns about their ranking, leading to discomfort with attention and competitive stress. Extraverts, on the other hand, may experience positive effects from leaderboards, as these features align with their inclination toward social interaction, motivating them to compete and engage more actively with peers in the game.

The impact of player engagement in relation to extraversion and introversion (RQ4) was assessed through the analysis of login data statistics and usage. The results revealed no statistically significant difference in engagement for extraverts across the two groups, as measured by average and total playtime. However, a notable finding emerged for introverts, who showed an increase of approximately 37 percent in playtime with the Personalized version compared with the Non-personalized version. This aligns with the results of Welch’s t-test, which showed a highly significant difference among introverted individuals (p < 0.001). The greater engagement of introverts with the personalized serious game might be attributed to their tendency to devote extended periods to exploring game mechanics, storylines, and intricate details; introverts often engage deeply with games, appreciating their depth and complexity without external interruptions. In contrast, extraverts might experience more interruptions from real-world activities, potentially limiting their engagement even with a personalized version of the game.

Limitations and future research

The present study has limitations. Its focus on a specific demographic sample within a controlled educational setting may constrain the generalizability of the findings. Factors such as varying access to technology and differences in participants’ devices could also influence their experiences. Furthermore, the complex interactions between game elements and personality traits make it challenging to identify the most effective combinations, which could affect the outcomes and implications of the study. Addressing these aspects in future research will contribute to a more comprehensive understanding of the application and effectiveness of personalized serious games in diverse educational contexts.

Future research should aim to explore specific game elements or combinations that extend beyond introversion and extraversion. This exploration could enhance the parameters of personalization, optimizing the effectiveness of such games. Diversifying participant groups to encompass a broader demographic range would aid in understanding how various demographics respond to personalized serious games, considering factors such as age, cultural backgrounds, and educational diversity. Moreover, expanding the application of personalized serious games to different subjects or domains could provide valuable insights into the generalizability of this approach. Additionally, assessing the integration of emerging technologies like AI, virtual reality, or augmented reality into personalized serious games holds potential to enhance interactivity and improve learning outcomes.

Conclusions

This paper explores the development of a serious game for information literacy and fake news and assesses the impact of personalizing the game by incorporating game elements suited to students’ personality traits along the dimensions of introversion and extraversion. The study employs a quasi-experimental design to examine a serious game offered in two versions: a non-personalized version and a version specifically tailored to the learning preferences of introverted and extraverted participants. The findings indicate the following:

Personalizing serious games for educational purposes by means of game elements suited to introverts and extraverts did not significantly improve knowledge acquisition or retention compared with the non-personalized serious game.

Personalized serious games can enhance intrinsic motivation, specifically in terms of perceived competence, making them a promising tool for educational environments and settings.

Personalized serious games can significantly improve engagement, by almost 37 percent in terms of average and total playtime for introverted players, but may not affect extraverted players.

Designing personalized serious games by integrating suitable game elements for various personality traits might offer a cost-effective approach. However, careful consideration is essential regarding the selection of game elements, as their individual effectiveness may vary significantly.

Data availability

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy and ethical reasons.

Adellin R, Khuan CT, Gertrude LD (2019) Conceptual framework puzzle game with high replayability. J Phys Conf Ser 1228(1):012070. https://doi.org/10.1088/1742-6596/1228/1/012070


Agarwal A, & Dixit A (2020) Fake news detection: An ensemble learning approach. 2020 4th International Conference on Intelligent Computing and Control Systems (ICICCS)

Agosto DE (n.d.) Information literacy and libraries in the age of fake news . Abc-clio.com. Retrieved November 14, 2023, from https://publisher.abc-clio.com/9781440864193/

Allcott H, Gentzkow M (2017) Social media and fake news in the 2016 election. J Econ Perspect: A J Am Econ Assoc 31(2):211–236. https://doi.org/10.1257/jep.31.2.211

Anderson G, Keith MJ, Francisco J, Fox S (2018) The effect of software team personality composition on learning and performance: Making the “dream” team. Proceedings of the 51st Hawaii International Conference on System Sciences

Arayaphan W, Sirasakmol O, Nadee W, Puritat K (2022) Enhancing intrinsic motivation of librarian students using virtual reality for education in the context of culture heritage museums. TEM Journal:620–630. https://doi.org/10.18421/tem112-16

Asendorpf JB (2000) A person-centered approach to personality and social relationships: Findings from the Berlin Relationship Study. In Bergman LR, Cairns RB, Nilsson L-G, Nystedt L (eds) Developmental science and the holistic approach . Lawrence Erlbaum Associates Publishers, pp 281–298

Aydin M (2021) Does the digital divide matter? Factors and conditions that promote ICT literacy. Telemat Inform 58(101536):101536. https://doi.org/10.1016/j.tele.2020.101536

Baker S, Waycott J, Carrasco R, Kelly RM, Jones AJ, Lilley J, Dow B, Batchelor F, Hoang T, & Vetere F (2021) Avatar-mediated communication in social VR: An in-depth exploration of older adult interaction in an emerging communication platform. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems

Banik P, Kumar B (2019) Impact of information literacy skill on students’ academic performance in Bangladesh. Int J Eur Stud 3(1):27. https://doi.org/10.11648/j.ijes.20190301.15

Blevins HM, Xu Y, Biby S, Zhang S (2022) The NLRP3 inflammasome pathway: A review of mechanisms and inhibitors for the treatment of inflammatory diseases. Front Aging Neurosci 14. https://doi.org/10.3389/fnagi.2022.879021

Borg MO, Shapiro SL (1996) Personality type and student performance in principles of economics. J Econ Educ 27(1):3–25. https://doi.org/10.1080/00220485.1996.10844890

Carver CS, White TL (1994) Behavioral inhibition, behavioral activation, and affective responses to impending reward and punishment: The BIS/BAS Scales. J Pers Soc Psychol 67(2):319–333. https://doi.org/10.1037/0022-3514.67.2.319

Chen S, Michael D (2005) Serious games: Games that educate, train, and inform . Course Technology

Chen C-H, Huang C-Y, Chou Y-Y (2017) Integrating augmented reality into blended learning for elementary science course. Proceedings of the 5th International Conference on Information and Education Technology

Chen G, Davis D, Hauff C, & Houben G-J (2016) On the impact of personality in massive open online learning. Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization

Chen, Tsai, Chang (2019) Effects of game-based instruction on the results of primary school children taking a natural science course. Educ Sci 9(2):79. https://doi.org/10.3390/educsci9020079

Codish D, Ravid G (2014a) Academic course gamification: The art of perceived playfulness. Interdiscip J E-Skills Lifelong Learn 10:131–151. https://doi.org/10.28945/2066

Codish D, Ravid G (2014b) Personality based gamification-Educational gamification for extroverts and introverts. In Proceedings of the 9th CHAIS Conference for the Study of Innovation and Learning Technologies: Learning in the Technological Era (pp. 36–44)

Cohen A, Baruth O (2017) Personality, learning, and satisfaction in fully online academic courses. Computers Hum Behav 72:1–12. https://doi.org/10.1016/j.chb.2017.02.030

Connolly TM, Boyle EA, MacArthur E, Hainey T, Boyle JM (2012) A systematic literature review of empirical evidence on computer games and serious games. Computers Educ 59(2):661–686. https://doi.org/10.1016/j.compedu.2012.03.004

Costa PT, McCrae RR (2008) The Revised NEO Personality Inventory (NEO-PI-R). In The SAGE Handbook of Personality Theory and Assessment: Volume 2 — Personality Measurement and Testing (Vol. 2, pp. 179–198). SAGE Publications Ltd

Costa PT, McCrae RR (1980) Influence of extraversion and neuroticism on subjective well-being: Happy and unhappy people. J Personal Social Psychol 38(4):668–678. https://doi.org/10.1037/0022-3514.38.4.668

Denden, M, Tlili, A, Essalmi, F, & Jemni, M (2017) An investigation of the factors affecting the perception of gamification and game elements. 2017 6th International Conference on Information and Communication Technology and Accessibility (ICTA)

Denden M, Tlili A, Essalmi F, Jemni M (2018) Does personality affect students’ perceived preferences for game elements in gamified learning environments? 2018 IEEE 18th International Conference on Advanced Learning Technologies (ICALT)

De Paor S, Heravi B (2020) Information literacy and fake news: How the field of librarianship can help combat the epidemic of fake news. J Acad Libr 46(5):102218. https://doi.org/10.1016/j.acalib.2020.102218

DeYoung CG (2015) Cybernetic Big Five Theory. J Res Personal 56:33–58. https://doi.org/10.1016/j.jrp.2014.07.004

Dossey L (2016) Introverts: A defense. Explore (New York, NY), 12(3):151–160. https://doi.org/10.1016/j.explore.2016.02.007

Eysenck HJ (1967) The biological basis of personality. Charles C. Thomas, Springfield

Eysenck HJ (1977) The biological basis of personality . Charles C. Thomas Publisher

Felder RM, Silverman LK (1988) Learning and teaching styles in engineering education. Eng Educ 78:674–681

Fernandes FT, Junior PTA (2016) Gamification aspects in the context of electronic government and education: A case study. In HCI in Business, Government, and Organizations: Information Systems (pp. 140–150). Springer International Publishing

Ferro LS (2021) The Game Element and Mechanic (GEM) framework: A structural approach for implementing game elements and mechanics into game experiences. Entertain Comput 36(100375):100375. https://doi.org/10.1016/j.entcom.2020.100375

Gray JR, Burgess GC, Schaefer A, Yarkoni T, Larsen RJ, Braver TS (2005) Affective personality differences in neural processing efficiency confirmed using fMRI. Cogn, Affect Behav Neurosci 5(2):182–190. https://doi.org/10.3758/cabn.5.2.182


Grimes JO, Cheek JM, Norem JK (2011) Four meanings of introversion: Social, thinking, anxious, and inhibited introversion. Scottbarrykaufman.com . https://scottbarrykaufman.com/wpcontent/uploads/2014/06/Introversion_4_domains_SPSP_2011_Grimes__Cheek___Norem-libre.pdf

Guo J, Huang J (2020) Information literacy education in WeChat environment at academic libraries in China. J Acad Librariansh 46(1):102073. https://doi.org/10.1016/j.acalib.2019.102073

Harteveld C, Sutherland SC (2017) Personalized gaming for motivating social and behavioral science participation. Proceedings of the 2017 ACM Workshop on Theory-Informed User Modeling for Tailoring and Personalizing Interfaces

Hern A (2017, December 28) Facebook and Twitter threatened with sanctions in UK “fake news” inquiry. The Guardian . https://www.theguardian.com/media/2017/dec/28/facebook-and-twitter-threatened-with-sanctions-in-uk-fake-news-inquiry

Herzberg F (2008) One more time: How do you motivate employees? Harvard Business Review Press

Hwang G-J, Tsai C-C (2011) Research trends in mobile and ubiquitous learning: a review of publications in selected journals from 2001 to 2010. Br J Educ Technol 42(4). https://doi.org/10.1111/j.1467-8535.2011.01183.x

Jia Y, Xu B, Karanam Y, Voida S (2016) Personality-targeted gamification: A survey study on personality traits and motivational affordances. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems

Jung CG, Baynes GH (1923) Psychological Types. J Philos 20(23):636–640

Kemp S (2019, January 30) Digital trends 2019: Every single stat you need to know about the internet . The Next Web. https://thenextweb.com/news/digital-trends-2019-every-single-stat-you-need-to-know-about-the-internet

Ketamo H, Devlin K, Kiili K (2018) Gamifying Assessment: Extending Performance Measures With Gaming Data. Proceedings of American Educational Researcher Association’s Annual Meeting AERA2018

Landers RN (2014) Developing a theory of gamified learning: Linking serious games and gamification of learning. Simul Gaming 45(6):752–768. https://doi.org/10.1177/1046878114563660

Landers RN, Armstrong MB, Collmus AB (2017) How to use game elements to enhance learning: Applications of the theory of gamified learning. In Serious Games and Edutainment Applications (pp. 457–483). Springer International Publishing

Leenaraj B, Arayaphan W, Intawong K, Puritat K (2023) A gamified mobile application for first-year student orientation to promote library services. J Librariansh Inf Sci 55(1):137–150. https://doi.org/10.1177/09610006211067273

Lewandowsky S, Ecker UKH, Cook J (2017) Beyond misinformation: Understanding and coping with the “post-truth” era. J Appl Res Mem Cogn 6(4):353–369. https://doi.org/10.1016/j.jarmac.2017.07.008

Liu G (2021) Moving up the ladder of source assessment: Expanding the CRAAP test with critical thinking and metacognition. Coll Res Libr News 82(2):75. https://doi.org/10.5860/crln.82.2.75


Maciejewski ML (2020) Quasi-experimental design. Biostatistics Epidemiol 4(1):38–47. https://doi.org/10.1080/24709360.2018.1477468

McCrae RR, Costa PT (1987) Validation of the five-factor model of personality across instruments and observers. J Pers Soc Psychol 52(1):81–90. https://doi.org/10.1037/0022-3514.52.1.81


McCrae RR, Costa PT (1989) More reasons to adopt the five-factor model. Am Psychol 44(2):451–452. https://doi.org/10.1037/0003-066x.44.2.451

Maican CI, Cazan A-M, Lixandroiu RC, Dovleac L (2019) A study on academic staff personality and technology acceptance: The case of communication and collaboration applications. Computers Educ 128:113–131. https://doi.org/10.1016/j.compedu.2018.09.010

Mount MK, Barrick MR, Scullen SM, Rounds J (2005) Higher‐order dimensions of the Big five Personality Traits and the Big six vocational interest types. Pers Psychol 58(2):447–478. https://doi.org/10.1111/j.1744-6570.2005.00468.x

Nieuwhof-Leppink AJ, de Jong TPVM, van de Putte EM, Schappin R (2019) Does a serious game increase intrinsic motivation in children receiving urotherapy? J Pediatr Urol 15(1):36.e1–36.e7. https://doi.org/10.1016/j.jpurol.2018.09.003

Nov O, Arazy O (2013) Personality-targeted design: theory, experimental procedure, and preliminary results. Proceedings of the 2013 Conference on Computer-Supported Cooperative Work

Nuckcheddy A (2018) The effect of personality on motivation and organisational behaviour. Psychol Behav Sci Int J 9(2). https://doi.org/10.19080/pbsij.2018.09.555760

Pakinee A, Puritat K (2021) Designing a gamified e-learning environment for teaching undergraduate ERP course based on big five personality traits. Educ Inf Technol 26(4):4049–4067. https://doi.org/10.1007/s10639-021-10456-9

Paradilla N, Dj MZ, Hasanah U (2021) The students' extrovert and introvert personality toward speaking performance. Int J Engl Lang Teach Appl Linguist 1(1):39–50. https://doi.org/10.30863/ijretal.v1i1.1197

Park A, Montecchi M, Feng C‘mitsu, Plangger K, Pitt L (2020) Understanding ‘fake news’: A bibliographic perspective. Def Strategic Commun 8:141–172. https://doi.org/10.30966/2018.riga.8.4

Pe-Than EPP, Goh DH-L, Lee CS (2014) Making work fun: Investigating antecedents of perceived enjoyment in human computation games for information sharing. Computers Hum Behav 39:88–99. https://doi.org/10.1016/j.chb.2014.06.023

Pennycook G, Rand DG (2019) Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition 188:39–50

Pinto M, Caballero D, Sales D, Fernández-Pascual R (2020) MOBILE-APPS questionnaire: Developing and validating a scale to measure the attitudes and perceptions of undergraduate students on mobile information literacy. J Librariansh Inf Sci 52(4):1063–1072. https://doi.org/10.1177/0961000620902260

Poondej C, Lerdpornkulrat T (2019) Gamification in e-learning: A Moodle implementation and its effect on student engagement and performance. Interact Technol Smart Educ 17(1):56–66. https://doi.org/10.1108/itse-06-2019-0030

Poropat AE (2009) A meta-analysis of the five-factor model of personality and academic performance. Psychol Bull 135(2):322–338. https://doi.org/10.1037/a0014996

Przybylski AK, Rigby CS, Ryan RM (2010) A motivational model of video game engagement. Rev Gen Psychol: J Div 1, Am Psychol Assoc 14(2):154–166. https://doi.org/10.1037/a0019440

Rapp A (2022) Time, engagement and video games: How game design elements shape the temporalities of play in massively multiplayer online role‐playing games. Inf Syst J 32(1):5–32. https://doi.org/10.1111/isj.12328

Reddy P, Sharma B, Chaudhary K (2020) Digital literacy: A review of literature. Int J Technoethics 11(2):65–94. https://doi.org/10.4018/ijt.20200701.oa1

Romero M, Usart M, Ott M (2015) Can serious games contribute to developing and sustaining 21st century skills? Games Cult 10(2):148–177. https://doi.org/10.1177/1555412014548919

Ryan RM, Deci EL (2000) Intrinsic and extrinsic motivations: Classic definitions and new directions. Contemp Educ Psychol 25(1):54–67. https://doi.org/10.1006/ceps.1999.1020

Schimmenti A, Infanti A, Badoud D, Laloyaux J, Billieux J (2017) Schizotypal personality traits and problematic use of massively-multiplayer online role-playing games (MMORPGs). Computers Hum Behav 74:286–293. https://doi.org/10.1016/j.chb.2017.04.048

Smiderle R, Marques L, de M Coelho JAP, Rigo SJ, Jaques PA (2019) Studying the impact of gamification on learning and engagement of introverted and extroverted students. 2019 IEEE 19th International Conference on Advanced Learning Technologies (ICALT)

Smiderle R, Rigo SJ, Marques LB, Peçanha de Miranda Coelho JA, Jaques PA (2020) The impact of gamification on students’ learning, engagement and behavior based on their personality traits. Smart Learning Environ 7(1). https://doi.org/10.1186/s40561-019-0098-x

Svendsen GB, Johnsen J-AK, Almås-Sørensen L, Vittersø J (2013) Personality and technology acceptance: the influence of personality factors on the core constructs of the Technology Acceptance Model. Behav Inf Technol 32(4):323–334. https://doi.org/10.1080/0144929x.2011.553740

Tandoc EC Jr, Thomas RJ, Bishop L (2021) What is (fake) news? Analyzing news values (and more) in fake stories. Media and Commun 9(1):110–119. https://doi.org/10.17645/mac.v9i1.3331

Tlili A, Denden M, Essalmi F, Jemni M, Kinshuk, Chen N-S, Huang R (2019) Does providing a personalized educational game based on personality matter? A case study. IEEE Access: Practical Innov, Open Solut 7:119566–119575. https://doi.org/10.1109/access.2019.2936384

Velichety S, Shrivastava U (2022) Quantifying the impacts of online fake news on the equity value of social media platforms – Evidence from Twitter. Int J Inf Manag 64(102474):102474. https://doi.org/10.1016/j.ijinfomgt.2022.102474

VanLehn K (2006) The behavior of tutoring systems. Int J Artif Intell Educ 16(3):227–265


Vidakis N, Barianos A, Trampas A, Papadakis S, Kalogiannakis M, Vassilakis K (2019) Generating education in-game data: The case of an ancient theatre serious game. Proceedings of the 11th International Conference on Computer Supported Education

Vos N, van der Meijden H, Denessen E (2011) Effects of constructing versus playing an educational game on student motivation and deep learning strategy use. Computers Educ 56(1):127–137. https://doi.org/10.1016/j.compedu.2010.08.013

Vosoughi S, Roy D, Aral S (2018) The spread of true and false news online. Science 359(6380):1146–1151. https://doi.org/10.1126/science.aap9559

Wang C-C (2020) Fake news and related concepts: Definitions and recent research development. Contemp Manag Res 16(3):145–174. https://doi.org/10.7903/cmr.20677

Watson D, Clark LA (1997) Extraversion and its positive emotional core . In Handbook of Personality Psychology. Elsevier, pp 767–793

West DM (2011, October 6) Using technology to personalize learning and assess students in real-time . Brookings. https://www.brookings.edu/articles/using-technology-to-personalize-learning-and-assess-students-in-real-time/

White AMJ (2019) Information Literacy and critical thinking in higher education: Some considerations. In Handbook of Research on Critical Thinking and Teacher Education Pedagogy (pp. 367–381). IGI Global

Zhao H, Seibert SE (2006) The big five personality dimensions and entrepreneurial status: a meta-analytical review. J Appl Psychol 91(2):259–271. https://doi.org/10.1037/0021-9010.91.2.259


Acknowledgements

The authors express their gratitude to Thailand’s Safe and Creative Media Development Fund for funding this research. Additionally, this research was partially supported by Chiang Mai University.

Author information

Authors and Affiliations

Department of Library and Information Science, Faculty of Humanities, Chiang Mai University, Chiang Mai, Thailand

Phimphakan Thongthip & Kitti Puritat

Faculty of Public Health, Chiang Mai University, Chiang Mai, Thailand

Kannikar Intawong

College of Arts, Media and Technology, Chiang Mai University, Chiang Mai, Thailand

Suepphong Chernbumroong & Perasuk Worragin

School of Information Technology, Mae Fah Luang University, Chiang Rai, Thailand

Yootthapong Tongpaeng


Contributions

Conceptualization, KT and PT; Methodology, KT; Software, KT and PW; Validation, SP and YT; Writing – original draft, KT and PT; Writing – review & editing, KT, KI, and PT. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Kitti Puritat .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Ethical approval

This study was approved by the Chiang Mai University Research Ethics Committee (CMUREC No.64/092).

Informed consent

Informed consent was obtained from all participants.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Appendix A, Appendix B

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Thongthip, P., Intawong, K., Chernbumroong, S. et al. Design and analysis of personalized serious games for information literacy: catering to introverted and extraverted individuals through game elements. Humanit Soc Sci Commun 11 , 725 (2024). https://doi.org/10.1057/s41599-024-03172-5


Received : 14 November 2023

Accepted : 13 May 2024

Published : 04 June 2024

DOI : https://doi.org/10.1057/s41599-024-03172-5



Exploring Students’ Generative AI-Assisted Writing Processes: Perceptions and Experiences from Native and Nonnative English Speakers

  • Original research
  • Open access
  • Published: 30 May 2024

Cite this article


  • Chaoran Wang   ORCID: orcid.org/0000-0002-4140-2757 1  

505 Accesses


Generative artificial intelligence (AI) can create sophisticated textual and multimodal content readily available to students. Writing intensive courses and disciplines that use writing as a major form of assessment are significantly impacted by advancements in generative AI, as the technology has the potential to revolutionize how students write and how they perceive writing as a fundamental literacy skill. However, educators are still at the beginning stage of understanding students’ integration of generative AI in their actual writing process. This study addresses the urgent need to uncover how students engage with ChatGPT throughout different components of their writing processes and their perceptions of the opportunities and challenges of generative AI. Adopting a phenomenological research design, the study explored the writing practices of six students, including both native and nonnative English speakers, in a first-year writing class at a higher education institution in the US. Thematic analysis of students’ written products, self-reflections, and interviews suggests that students utilized ChatGPT for brainstorming and organizing ideas as well as assisting with both global (e.g., argument, structure, coherence) and local issues of writing (e.g., syntax, diction, grammar), while they also had various ethical and practical concerns about the use of ChatGPT. The study brought to front two dilemmas encountered by students in their generative AI-assisted writing: (1) the challenging balance between incorporating AI to enhance writing and maintaining their authentic voice, and (2) the dilemma of weighing the potential loss of learning experiences against the emergence of new learning opportunities accompanying AI integration. These dilemmas highlight the need to rethink learning in an increasingly AI-mediated educational context, emphasizing the importance of fostering students’ critical AI literacy to promote their authorial voice and learning in AI-human collaboration.



1 Introduction

The rapid development of large language models such as ChatGPT and AI-powered writing tools has led to a blend of apprehension, anxiety, curiosity, and optimism among educators (Warner, 2022 ). While some are optimistic about the opportunities that generative AI brings to classrooms, various concerns arise especially in terms of academic dishonesty and the biases inherent in these AI tools (Glaser, 2023 ). Writing classes and disciplines that use writing as a major form of assessment, in particular, are significantly impacted. Generative AI has the potential to transform how students approach writing tasks and demonstrate learning through writing, thus impacting how they view writing as an essential literacy skill. Educators are concerned that when used improperly, the increasingly AI-mediated literacy practices may AI-nize students’ writing and thinking.

Despite the heated discussion among educators, there remains a notable gap in empirical research on the application of generative AI in writing classrooms (Yan, 2023) and minimal research that systematically examines students’ integration of AI into their writing processes (Barrot, 2023a). Writing, an activity often undertaken outside the classroom walls, eludes comprehensive observation by educators, leaving a gap in instructors’ understanding of students’ AI-assisted writing practices. Furthermore, the widespread institutional skepticism and critical discourse surrounding the use of generative AI in academic writing may deter students from openly sharing their genuine opinions of and experiences with AI-assisted writing. These situations can cause a disconnect between students’ real-life practices and instructors’ understandings. Thus, there is a critical need for in-depth investigation into the decision-making processes involved in students’ generative AI-assisted writing.

To fill this research gap, the current study explores nuanced ways students utilize ChatGPT, a generative AI tool, to support their academic writing in a college-level composition class in the US. Specifically, the study adopts a phenomenological design to examine how college students use ChatGPT throughout the various components of their writing processes such as brainstorming, revising, and editing. Using sense-making theory as the theoretical lens, the study also analyzes students’ perceived benefits, challenges, and considerations regarding AI-assisted academic writing. As writing is also a linguistic activity, this study includes both native and non-native speaking writers, since they may have distinct needs and perspectives on the support and challenges AI provides for writing.

2 Literature Review

2.1 AI-Assisted Writing

Researchers have long been studying the utilization of AI technologies to support writing and language learning (Schulze, 2008 ). Three major technological innovations have revolutionized writing: (1) word processors, which represented the first major shift from manual to digital writing, replacing traditional typewriters and manual editing processes; (2) the Internet, which introduced web-based platforms, largely promoting the communication and interactivity of writing; and (3) natural language processing (NLP) and artificial intelligence, bringing about tools capable of real-time feedback and content and thinking assistance (Kruse et al., 2023 ). These technologies have changed writing from a traditionally manual and individual activity into a highly digital nature, radically transforming the writing processes, writers’ behaviors, and the teaching of writing. This evolution reflects a broader need towards a technologically sophisticated approach to writing instruction.

AI technologies have been used in writing instruction in various ways, ranging from assisting in the writing process to evaluating written work. One prominent application is automated writing evaluation (AWE), which comprises two main elements: a scoring engine producing automatic scores and a feedback engine delivering automated written corrective feedback (AWCF) (Koltovskaia, 2020). Adopting NLP to analyze language features, diagnose errors, and evaluate essays, AWE was first implemented in high-stakes testing and later adopted in writing classrooms (Link et al., 2022). Scholars have reported contrasting findings regarding the impact of AWE on student writing (Koltovskaia, 2020). Barrot (2023b) finds that tools offering AWCF, such as Grammarly, improve students’ overall writing accuracy and metalinguistic awareness, as AWCF allows students to engage in self-directed learning about writing via personalized feedback. Such systems can thus contribute to classroom instruction by reducing the burden on teachers and aiding students in writing, revision, and self-learning (Almusharraf & Alotaibi, 2023). However, scholars have also raised concerns regarding their accuracy and their potential misrepresentation of the social nature of writing (Shi & Aryadoust, 2023). Another AI application used to assist student writing is the intelligent tutoring system (ITS). Research shows that ITSs can enhance students’ vocabulary and grammar development, offer immediate sentence- and paragraph-level suggestions, and provide insights into students’ writing behaviors (Jeon, 2021; Pandarova et al., 2019). Scholars have also investigated chatbots as writing partners for scaffolding students’ argumentative writing (Guo et al., 2022; Lin & Chang, 2020) and the incorporation of Google’s neural machine translation system in second language (L2) writing (Cancino & Panes, 2021; Tsai, 2019).

Research suggests that adopting AI in literacy and language education has advantages such as supporting personalized learning experiences, providing differentiated and immediate feedback (Huang et al., 2022 ; Bahari, 2021 ), and reducing students’ cognitive barriers (Gayed et al., 2022 ). Researchers also note challenges such as the varied level of technological readiness among teachers and students as well as concerns regarding accuracy, biases, accountability, transparency, and ethics (e.g., Kohnke et al., 2023 ; Memarian & Doleck, 2023 ; Ranalli, 2021 ).

2.2 Integrating Generative AI into Writing

With sophisticated and multilingual language generation capabilities, the latest advancements in generative AI and large language models, such as ChatGPT, unlock new possibilities and challenges. Scholars have discussed how generative AI can be used in writing classrooms. Tseng and Warschauer (2023) point out that ChatGPT and AI writing tools may rob language learners of essential learning experiences; however, if these tools are banned, students will also lose essential opportunities to learn how to use AI to support their learning and their future work. They suggest that educators should not try to “beat” but rather “join” and “partner with” AI (p. 1). Barrot (2023a) and Su et al. (2023) both review ChatGPT’s benefits and challenges for writing, pointing out that ChatGPT can offer a wide range of context-specific writing assistance such as idea generation, outlining, content improvement, organization, editing, proofreading, and post-writing reflection. Similar to Tseng and Warschauer (2023), Barrot (2023a) is also concerned about students’ learning loss due to their use of generative AI in writing and their over-reliance on AI. Moreover, Su et al. (2023) specifically raise concerns about issues of authorship and plagiarism, as well as ChatGPT’s shortcomings in logical reasoning and information accuracy.

Among the existing empirical research, studies have explored the quality of generative AI’s feedback on student essays in comparison to human feedback. Steiss et al. (2024) analyzed 400 feedback instances—half generated by human raters and half by ChatGPT—on the same essays. The findings showed that human raters provided higher-quality feedback in terms of clarity, accuracy, supportive tone, and emphasis on critical aspects for improvement. In contrast, AI feedback excelled at delivering criteria-based evaluations. The study generated important implications for balancing the strengths and limitations of ChatGPT and human feedback for assessing student essays. Other research has examined the role of generative AI tools in L1 multimodal writing instruction (Tan et al., 2024), L1 student writers’ perceptions of ChatGPT as a writing partner and of AI ethics in college composition classes (Vetter et al., 2024), and the collaborative experience of writing instructors and students in integrating generative AI into writing (Bedington et al., 2024).

Specifically with regard to classroom-based research in L2 writing, Yan (2023) examined the use of ChatGPT through the design of a one-week L2 writing practicum at a Chinese university. Analyzing eight students’ classroom behaviors, learning logs, and interviews, the study showed that the use of generative AI helped L2 learners write with fewer grammatical errors and more lexical diversity. The study also found that the students’ biggest concerns were the threat to academic honesty and educational equity. This study is a pioneer in exploring students’ strategies and engagement with ChatGPT in writing; however, it was conducted only through a one-week practicum that did not involve authentic writing assignment tasks. Furthermore, students’ use of ChatGPT was limited to editing AI-generated texts instead of incorporating AI into a wider range of writing activities such as pre-writing and revising human-generated texts. In another study by Han et al. (2023), the authors designed a platform that integrated ChatGPT to support L2 writers in improving writing quality in South Korea. Analyzing 213 students’ interaction data with the platform, survey results, as well as a focus group interview with six students and one instructor, the study found that the students generally had positive experiences with ChatGPT in supporting their academic writing. Although the study undertook a more extensive investigation involving a larger pool of participants with authentic writing assignments, it only explored generative AI’s role as a revision tool without examining its use across various stages of writing. Furthermore, participants in this study were tasked with engaging with a ChatGPT-embedded platform built around predefined prompts designed by the researchers. Consequently, how students interact with ChatGPT in natural settings remains largely unknown to researchers and educators.

2.3 Writing Process

Since the early 1980s, scholars have proposed various writing process models (e.g., Abdel Latif, 2021; Flower & Hayes, 1981; Hayes, 2012; Kellogg, 1996), yet they are still trying to form a complete understanding of composing processes. Although different models highlight distinct aspects of the writing process, they all reject the view of writing as a linear, sequential act of mere text generation and instead emphasize its non-linear and recursive nature. Abdel Latif (2021) noted that various components of the writing process, such as ideational planning, searching for content, and revising, interact with each other, and that both novice and experienced writers employ all of the components but with varying degrees and strategies. For instance, skilled writers refine and revise their ideas during writing, whereas novice writers mostly engage in sentence-level changes such as fixing grammatical and lexical issues (e.g., Khuder & Harwood, 2015). For L2 students, writing can be very complex and cognitively daunting (Mohsen, 2021) for reasons including but not limited to linguistic barriers (Johnson, 2017). Furthermore, writing is more than a cognitive process; it is also a social, cultural, and situated activity. For instance, the concept of plagiarism may carry different meanings and consequences across different cultural contexts. Thus, writing should be investigated in consideration of its dynamic interplay with institutional, cultural, and technological factors (Atkinson, 2003).

Considering the intricate nature of writing as a cognitive and social activity, it is important to investigate how generative AI may impact the different components of students’ writing processes. However, there is still a substantial gap in knowledge about students’ real-world integration of AI into their writing workflows, their decision-making processes, and the rationale behind those decisions as they interact with generative AI and utilize the technology in their writing in formal educational settings. While previous studies shed light on the impacts of generative AI on English writing, empirical classroom-based research remains limited. To further understand how students, both L1 and L2 writers, engage with generative AI in real-life classroom contexts, with authentic writing tasks, and throughout their various processes of writing, the current study undertook a naturalistic, exploratory direction that focused on how college students utilized ChatGPT in a first-year writing class in the US. Understanding and unpacking students’ AI-assisted writing processes could help educators better adjust their pedagogy in the face of the growing influence of AI. The following research questions guided the present study:

How do students utilize ChatGPT in their writing processes?

How do student writers perceive the benefits of integrating ChatGPT into their writing?

What concerns and limitations do students experience when using ChatGPT to assist with their writing?

What considerations do students identify as important when engaging in generative AI-assisted writing?

3 Theoretical Framework

This study adopts sensemaking theory as its theoretical lens. Sensemaking has been conceptualized as the process through which individuals make meaning from ambiguous and puzzling situations that happen in their experience (Golob, 2018). Some scholars view sensemaking as a cognitive process of managing and processing information. This perspective focuses on the cognitive strategies employed in connecting and utilizing information to achieve the purpose of explaining the world (Klein et al., 2006). Alternatively, a socio-cultural orientation towards sensemaking regards it as the construction of collective identity through an individual’s ongoing interactions with the educational context (Weick, 2005). Poquet (2024) integrates these two theoretical orientations, proposing that sensemaking encompasses both the individual and the collective, drawing attention to how learners explain the cognitive aspects of their learning as well as how social and cultural factors shape their learning experiences.

According to Poquet (2024), there are three components of the sensemaking process: (1) An individual’s understanding of the activity, available tools, and the situation is the antecedent of sensemaking. (2) Noticing and perceiving constitute the process of sensemaking per se. Noticing involves the identification of salient features of the tool(s) for the activity, while perceiving goes beyond noticing through making sense of the observed, taking into account contextual factors such as learner characteristics and the type of activity undertaken. Perceiving leads to the formulation of meaning and potential implications of what is noticed, playing a critical role in decision-making and action. (3) Outcomes of sensemaking may range from perceived affordances of tools for the activity to causal explanations for the observed phenomena. As defined by Poquet (2024), sensemaking involves learners crafting explanations for unclear situations through dynamically connecting information within the context of a specific activity. Essentially, sensemaking is both an intentional and intuitive process shaped by how learners understand their environment and their role within it.

Because sensemaking theories aim to examine people’s meaning-making, acting, and experience in “unknown,” “less deliberate,” and “more intuitive” situations (Poquet, 2024, p. 5), the theory aligns well with the purpose of this study, which is to form an emergent understanding of a less known situation given the relatively new phenomenon of generative AI-assisted writing practices among college students. Adopting a sensemaking lens helps to understand how students make sense of generative AI, how they perceive its affordances, what strategies they develop to use it to assist with their writing, what puzzling experiences they may have, and how they make decisions in those puzzling situations. The dual focus on the cognitive and the social is critical when examining how students engage with and perceive the AI technology and how they negotiate these perceptions and experiences within the learning communities of higher education. Sensemaking theory can also capture the range of individual experiences and shared interpretations among students, elucidating how they deal with uncertainty and make judgments about generative AI usage.

4 Research Design

This qualitative study adopted a phenomenological research design, which focuses on understanding and interpreting a particular aspect of shared human experience (Moran, 2002 ; Smith, 1996 ). Phenomenology seeks to form a close and clear account of people’s perceptions and lived experiences as opposed to delivering a positivist conclusion of human encounters, as “pure experience is never accessible” (Smith et al., 2009 , p. 33). In the present study, as there is limited understanding of students’ engagement with ChatGPT in their writing process, a phenomenological lens could help capture participants’ own sense making of their AI-assisted writing experiences.

4.1 Context and Participants

The study took place in spring 2023 at a higher education institution in the US. I chose to focus on first-year writing as the study setting, as it is a required course in most colleges and universities, thus a typical writing and learning context for most college students. First-year writing serves as the foundation for cultivating academic writing skills, with the aim of developing students’ essential literacy and writing proficiency needed for their undergraduate learning experiences. The 14-week course focused on English academic reading, writing, and critical thinking and consisted of three major units.

This study focused on the last unit, which was about argumentative writing, a common type of academic essay writing (American Psychological Association, 2020 ). The final essay asked students to form an argumentative response to a research question of their own choice. The unit, lasting for three weeks, was structured as follows (see Fig.  1 ): During the first week, the instructor spent two classes, each for 75 min, introducing ChatGPT (GPT 3.5) as a large language model and inviting students to explore ChatGPT as a tool for writing. The instructor carefully chose and assigned five readings that allowed the students to grasp the ongoing academic and public debates and concerns regarding the use of ChatGPT in writing and educational settings. During the class sessions, students participated in various activities exploring the functionalities of ChatGPT, discussed ethics and academic integrity, and critiqued AI-generated writing. As part of the discussions on ethics, the instructor explicitly addressed academic integrity issues drawing upon both the writing program’s guidelines and the institution’s academic integrity policies to ensure that the students were aware of and committed to ethical use of generative AI in the writing class. During the second week, students learned various strategies for integrating sources in academic writing and practiced ways of using sources to build arguments. During the last week, students spent time peer reviewing each other’s work and met with the instructor individually to improve their drafts.

Figure 1. Unit design with key topics and learning tasks over the three weeks

The final essay allowed but did not mandate students to use ChatGPT. For those who used ChatGPT and AI writing tools, disclosure of and transparency about how AI was used were required as part of the submission of the assignment. The instructor left using AI in the final essay as an open option for the students themselves, ensuring that students could pursue the option that worked best for their individual needs. The unit thus provided various opportunities and flexibility for planning, researching, drafting, reviewing, and editing with ChatGPT throughout students’ writing process.

There were 11 students, all freshmen, enrolled in the class. All but one reported using ChatGPT in their writing. Six students were recruited based on their willingness to participate and the diversity of their first languages to ensure balanced coverage. Table 1 shows the demographic information of the students (with pseudonyms).

4.2 Data Collection

Aligned with an interpretive phenomenological design that focuses on exploring participants’ lived experiences and how they construct meaning of their own experiences (Smith & Shinebourne, 2012 ), I collected three major types of data in order to uncover the students’ writing processes involving ChatGPT and their perceptions. First, I collected students’ written products and artifacts such as in-class writing, screenshots of students’ conversations with ChatGPT, informal short writing assignments, and the formal writing assignments for the final argumentative essay. Second, I collected students’ written self-reflections about their use of ChatGPT in writing. Finally, the participants were interviewed for around 30–40 min, and all interviews were audio-recorded. These semi-structured interviews were developed around students’ former experiences with ChatGPT, their views of the tool, and the ways they utilized ChatGPT for their writing assignments in this class.

Students’ conversational screenshots with ChatGPT and their in-class and outside-class writing drafts could demonstrate their interactions with AI as well as the changes they made upon contemplating the responses from the chatbot. The interviews and students’ self-reflections could further shed light on their perceptions and decision-making. Multiple sources of data helped to understand students’ behaviors, perceptions, and engagement with AI during different stages of writing. Triangulation of the data also helped me to understand students’ rationales for and practices of integrating, discounting, and reflecting on the chatbot’s output in their writing.

It is important to note that a phenomenological qualitative research design like this one aims to provide in-depth understanding of and insights into participants’ experiences. The context of the study—a first-year writing class—and the specific type of assignment investigated are both common scenarios in college classrooms, thereby enhancing the study’s relevance despite its limited sample size and scale. Furthermore, the incorporation of data collected from multiple and diverse sources for triangulation deepens the insights into participants’ experiences, which helps strengthen the credibility of the study.

4.3 Data Analysis

Thematic analysis (Creswell, 2017) was used to analyze the written drafts and the transcriptions of the interview data, as it is commonly applied in qualitative studies to identify patterns across various types of data (Lapadat, 2012). While transcribing all the interview data verbatim into written scripts, I took notes with the research questions in mind. Then I organized and read through the various types of written data to become familiar with and form a holistic impression of participants’ perceptions and experiences of AI-assisted writing. The coding, conducted in NVivo, a qualitative data analysis software package, followed an inductive and iterative process. During the first cycle of coding, I reviewed the data line by line and applied in vivo coding to generate initial, descriptive codes using participants’ voices (Saldaña, 2016). For the second cycle, I identified patterns across the in vivo codes and synthesized them into 20 pattern codes (Saldaña, 2016). During the third cycle, I clustered and grouped the pattern codes into four emerging themes. To finalize and refine the themes, I double-checked the themes, codes, and supporting data guided by the research questions. Table 2 shows the themes and pattern codes. To ensure the trustworthiness of the qualitative analysis, I also conducted a peer debriefing (Lincoln & Guba, 1985) on the codebook with an experienced qualitative researcher. Furthermore, member checking was conducted with each participant via email to minimize possible misinterpretations of their perceptions and experiences.

5 Findings

5.1 How Do Students Utilize ChatGPT in Their Writing Processes?

The students reported using ChatGPT throughout different components of writing their argumentative essays including (1) brainstorming, (2) outlining, (3) revising, and (4) editing.

In terms of brainstorming, the students acknowledged the value of ChatGPT in helping them get initial ideas and inspirations prior to the research phase for their essays. For instance, Lydia was interested in writing about the cause of the low fertility rate in South Korea but she “had trouble thinking of any focus areas” (Lydia, Reflection). In order to narrow down the topic and find a good focus, she used ChatGPT for exploring possible directions she could pursue. As she noted:

It immediately gave me suggestions to approach the cause from demographic changes, economic factors, traditional gender roles, governmental policies, and cultural attitudes with detailed explanations beside each suggestion. So, I went on to pick economic reasons, which I think were the most accessible to write about. (Lydia, Reflection)

ChatGPT’s feedback facilitated a smoother decision-making process for Lydia regarding the specific topic to further investigate. Another student, Kevin, mentioned that running his initial research idea through ChatGPT was helpful because ChatGPT gave him “some relevant ideas that hadn’t crossed his mind when thinking about the topic” (Kevin, Written Assignment).

Considering ChatGPT’s suggestions did not mean that the students accepted them at face value or incorporated them unquestioningly. For instance, Nora was interested in writing about the impact of AI on human lives. Upon putting her initial research question into ChatGPT, she found the feedback helpful and decided to do more research on the aspects highlighted by ChatGPT (see Fig. 2).

Figure 2. Screenshot of Nora’s conversation with ChatGPT

Students also reported using ChatGPT for outlining. Emma used ChatGPT extensively to help organize her outline and shared her procedure as follows:

I wrote my own outline first consisting of my own ideas and then put it into ChatGPT. I asked ChatGPT to make the outline flow better. I was surprised with the results it gave me. It made the ideas more concise and connected better off of each other...I tried it a few times, and every time it gave me a different version of the outline that I could potentially use. I ultimately compared the information from my sources and chose an outline I thought best suited my essay and my essay question. (Emma, Reflection)

Emma’s approach revolved around utilizing ChatGPT to unearth linkages among the various initial yet disorganized ideas she already had. By experimenting with diverse ways to build coherence and connection among her thoughts with the aid of AI, she bypassed the mental task of structuring her ideas from scratch.

Using ChatGPT for refining the flow of ideas was also a strategy adopted by other students, but not always during the outlining stage. For instance, after completing her first draft, Lydia “copied and pasted her entire essay into the chatbox and asked for suggestions on how to improve the structure and argument” (Lydia, Reflection). Lydia underlined that her revision process with ChatGPT was iterative, as she put her revised version back into the chatbot and went through another round of feedback and subsequent revision. Additional applications reported by students also encompassed employing ChatGPT to reduce redundancy and enhance conciseness of content (Emma) as well as to refine topic sentences for accurately summarizing the main ideas of body paragraphs (Kevin).

Apart from utilizing ChatGPT to assist with global-level issues such as structure, argument, and coherence, the students also harnessed the AI tool for sentence-level issues. They unanimously agreed that ChatGPT was a valuable tool for language editing. Alex, an L1 student, commented that ChatGPT could edit essays “exceptionally well.” Alex not only used the AI tool to help improve the syntax of his writing, such as “run-on sentences,” but also consulted it as his dictionary for “providing academic diction” (Alex, Interview). The L2 participants particularly acknowledged ChatGPT as beneficial for enhancing the accuracy of their writing. Lydia shared that upon completing a paragraph of her essay, she would put it into ChatGPT and ask it to “revise the wording and grammar only” so she could refine her language and keep the content original (Lydia, Reflection). Another L2 student, Nora, noted that “when I struggle with expressing my thoughts accurately in English words, ChatGPT can help me express those ideas in a more powerful and accurate way. It removes communication barriers” (Nora, Written Assignment).

5.2 How Do Student Writers Perceive the Benefits of Integrating ChatGPT into Their Writing?

Utilizing ChatGPT in the various components of their writing processes, the students reported that ChatGPT had the following benefits: (1) accelerating their writing process, (2) easing their cognitive load, (3) fostering new learning opportunities, (4) providing immediate feedback, and (5) promoting positive feelings about writing.

Students stated that using ChatGPT could “speed up the process of writing” (Alex, Interview) as exemplified by the following quotes: “ChatGPT really helped me to explore the essay topics that I’m interested in within a very short amount of time and identify what can be written about” (Nora, Interview); “I discovered after using it for my final essay that ChatGPT can greatly improve the efficiency of my writing” (Alex, Reflection). For L2 writers, it significantly saved the time they typically spent on editing, as mentioned by Lydia:

As an international student who is not a native English speaker, writing college essays would take me double the amount of time compared to those who write essays in their first language. Oftentimes, the biggest time I spent was on editing the grammar and trying to make my language readable and understandable. (Lydia, Reflection)

The time and energy saved on language concerns, grammar, wording, and the organization of ideas and messy drafts, furthermore, reduced the cognitive burden on the student writers, both L1 and L2. For instance, knowing ChatGPT’s editing power, Alex felt that he was able to “focus more on the subject of the writing rather than the language itself” and “spew out thoughts freely” when drafting the essay (Alex, Interview). Likewise, the L2 students noted that ChatGPT allowed them to delay their concerns about the linguistic forms of ideas and alleviate the demanding cognitive load associated with L2 writing. As claimed by Lydia, “It freed my thoughts so that I could spend more time revising the content, but not worry about how to express my ideas for the essay” (Lydia, Interview).

The students conveyed that incorporating ChatGPT in different components of writing also fostered new learning opportunities for them to improve their writing. Nora shared that “ChatGPT not only made my language more fluent and comprehensible, but it also helped me to learn new ways of expression in English” (Nora, Interview). Su remarked that although ChatGPT’s feedback was generic, it prompted her to do further research about her topic and learn more writing strategies (Su, Written Assignment).

Students particularly highlighted the “instant and personalized feedback” (Kevin, Reflection) provided by ChatGPT as a strong impetus and benefit. For instance, as a frequent visitor of the school’s writing center, Lydia mentioned that she typically scheduled two to three appointments with a writing tutor for each major writing assignment she had worked on. With ChatGPT, she could obtain feedback anytime: “Now I don’t have to physically go to the writing center at 11 pm, waiting for the previous visitor to finish their session” (Lydia, Interview). She used “my walking AI tutor” to describe the role of AI in her writing.

Ultimately, the students mentioned that these cognitive and practical benefits of ChatGPT not only improved their writing efficiency but also promoted positive feelings about writing. They used words such as “more relieved” (Emma), “sense of accomplishment” (Lydia), and “less anxious” (Nora) to describe the AI-assisted writing process. Although the students expressed different needs for and uses of ChatGPT, they all conveyed that they would like to continue using it in the future.

5.3 What Concerns and Limitations Do Students Experience When Using ChatGPT to Assist with Their Writing?

Despite the benefits and usefulness of ChatGPT for assisting with their writing, the students also expressed many reservations about the AI tool and noted its limitations. The first concern was about the false information it produced and its potential to mislead people. The students commented that ChatGPT tended to “make up information” (Emma), “make assumptions and guesses” (Su), and generate “inaccurate information” (Nora), “wrong information” (Alex), and “nonsense” (Lydia). Furthermore, the students pointed out that ChatGPT was inadequate in addressing high-level questions requiring critical thinking, as Su explained: “When I was consulting with ChatGPT, I learned that it has a very limited understanding of the topic I was talking about” (Su, Reflection). Other students also pointed out that the responses they got from ChatGPT could be “very generalized” (Kevin) and lacked “depth and complexity” (Nora).

The next shortcoming of ChatGPT, as noted by the students, was its lack of creativity and originality. Su highlighted that relying on ChatGPT’s ideas would not yield intriguing essays, as even though ChatGPT’s responses may “appear to make sense,” they usually came across as “cliched and superficial.” Su understood that this was because ChatGPT and large language models “work based on the patterns and data they have been trained on and cannot think outside of this” (Su, Reflection). Therefore, it is “not effective in generating new ideas” for an essay (Alex, Interview).

This algorithmic nature unavoidably led to another limitation observed by the students: the lack of reliable evidence and support for the content generated by ChatGPT. Su acknowledged that ChatGPT was not a good source for writing, as it was impossible for a reader to trace the original information. Apart from the lack of clarity and transparency about the sources ChatGPT draws upon, Kevin pointed out an additional drawback that ChatGPT’s ideas were “not up to date,” thus not a good source for academic writing (Kevin, Written Assignment).

5.4 What Considerations Do Students Identify as Important When Engaging in Generative AI-Assisted Writing?

Presented with these limitations of ChatGPT, the students shared some important aspects they thought should be considered when incorporating AI into writing, summarized as follows: (1) balanced and moderate use of AI, (2) critical use of AI, (3) ethical considerations, (4) the need for human voice, (5) the importance of authenticity, (6) seizing AI as a learning opportunity, and (7) transparency from and conversation between teachers and students.

The students worried that over-reliance on ChatGPT would undermine their writing ability, so they felt it should be used in a balanced and moderate way. The students believed that ChatGPT should be used as “guidance,” “support,” “supplement,” and “assistant” (Alex, Reflection) rather than a “substitute” or “replacement” (Su, Reflection).

Furthermore, the students emphasized the importance of critical use of AI. Emma noted that AI platforms could “decline the need to think critically” as some students might want to “take the easy route and just get the answer” (Emma, Interview). They insisted on keeping a critical eye on the information AI generated, as it was not reliable. To do this, the students shared a similar strategy, which was to use ChatGPT as a point of departure rather than a destination for writing, thinking, and research. They underscored the importance of validation and critical thinking in this process.

Another facet to consider is the ethical use of AI. The students believed that one must be very careful when using ChatGPT, as its use can easily cross the line into plagiarism. They deemed acts such as using ChatGPT to generate new ideas and write entire essays unethical, as these are forms of taking credit for work based on someone else’s language and ideas (Kevin, In-Class Writing). Thus, the students emphasized the importance of “doing research on your own” (Emma), “making sure the ideas are my own” (Lydia), and “not using everything (i.e. sentence by sentence, word by word) provided by ChatGPT” (Su).

The students also regarded the issue of retaining human voice as a pivotal consideration for AI-assisted writing. They pointed out that writing should be a means to express the writer’s identity and thoughts, but AI was not able to personalize the text to their individual style and voice. Wary of the threat that extensive adoption of ChatGPT poses to individual expression, Lydia commented, “ChatGPT tended to use similar dictions and patterns of wording and sentence structures. If everyone uses ChatGPT, our style will become more and more alike” (Lydia, Interview). Similarly, Su pointed out that ChatGPT could make the text “sound generic and impersonal,” which is a problem “when you are trying to convey your own ideas, feelings, and perspectives” (Su, Written Assignment). To “truly present a unique perspective and make writing individualized,” one must “take full control” of their writing to deliver a powerful message (Kevin, Reflection). This process requires the discernment to dismiss advice from ChatGPT that would produce an impersonal, blunt style of writing lacking the writer’s distinct character.

Students also pointed out that the involvement of ChatGPT in writing may jeopardize human voice not only through the ideas ChatGPT generates but also through the language it produces, thus “ruining the authenticity of an essay” (Alex, Reflection). Alex questioned his own paradoxical use of ChatGPT. On the one hand, he utilized ChatGPT for editing and better academic diction; on the other, he was perplexed and apprehensive about the tipping point at which the essay would start to sound “more like ChatGPT rather than yourself.” As he explained:

ChatGPT suggested some words I never would have used, and I decided not to include them. While they may obviously sound better than my own authentic words, I just did not feel honest using them. For instance, when writing this paper, ChatGPT suggested I use “judiciously” rather than “in moderation.” I never would have used “judiciously,” and it felt unauthentic to use it. (Alex, Reflection)

The students suggested cautious, strategic, and purposeful use of ChatGPT’s editing features to ensure it amplifies rather than conflicts with their own writing style.

However, boundaries like these still appeared vague to the students. Hence, the students called for guidelines and instruction in the classroom and open conversation between teachers and students. The students expressed their confusion over the lack of clear guidelines across their classes. As Alex commented, “It’s hard to draw lines with different ways of using ChatGPT and which one would be considered cheating or not” (Alex, Interview). The students hoped that all their instructors, instead of only writing teachers, could engage in comprehensive discussions about what specific ways of using ChatGPT would be regarded as acceptable or problematic according to their disciplinary conventions and learning purposes.

Participants also expected that school policies and instructors would not shut down AI as a resource and learning opportunity for students. Emma said, “It’s tricky as there are a lot of different opinions, but technology is the world we live in. We should go with the grain as opposed to against it” (Emma, Interview). Wary that AI might lead to missed opportunities for thinking, Lydia commented, “I am afraid of becoming lazy…But I guess it also depends on how you use it. It gives a shortcut for people who do not want to make the effort to learn and think. But it could be useful for those who really want to learn” (Lydia, Interview). Alex noted, for instance, that to prevent the loss of learning opportunities, he decided that rather than adopting ChatGPT’s diction suggestions immediately, he “would use those words in the next essay,” demonstrating his effort to learn and internalize the knowledge. In general, the students were still exploring ways to use ChatGPT in critical, authentic, and ethical ways that would promote rather than harm their learning.

6 Discussion

Adopting sensemaking theory, the study investigated how students made sense of their AI-assisted writing practices, providing insights into students’ learning process and their shared practices emerging around the AI technology. Confirming previous research (e.g., Guo et al., 2022 ; Holmes et al., 2019 ; Su et al., 2023 ), this study found that the students overall had positive experiences with generative AI-assisted writing, for it could accelerate their writing process, reduce their cognitive load and anxiety, and provide prompt feedback. The students integrated ChatGPT into various components of their composing process, such as searching for content, ideational planning, language editing, and revising. Although the students acknowledged the cognitive and affective benefits (e.g., Ebadi & Amini, 2022 ; Fryer & Carpenter, 2006 ) of using ChatGPT in writing, they were very cautious about adopting its ideas and suggestions at different discourse levels (i.e., essay, paragraph, and sentence levels) due to critical, ethical, and authentic concerns. This finding extends previous research which identified that students’ primary concerns were academic dishonesty and educational inequity (Yan, 2023 ). Despite recognizing AI’s limitations such as the lack of in-depth insights (Gao et al., 2022 ), originality, creativity, and reliability—qualities essential for good academic writing—the students deemed it necessary to embrace rather than abandon the tool, with the purpose of fostering one’s critical thinking and writing skills. The results suggest that students’ sensemaking of AI-assisted writing is shaped by their prior knowledge and understanding of writing as a cognitive and sociocultural activity, their exploration of AI’s functionalities and strategies for leveraging them to achieve learning goals, and their interrogation of the appropriateness and limitations of AI in the specific context of academic writing.

The study highlights two emerging dilemmas students experienced in their generative AI-assisted writing processes. The first dilemma, as Alex put it, is the choice between sounding better or sounding like me when integrating AI into the decision-making process of writing, reflecting a larger issue about academic integrity, authenticity, and voice in human-AI collaboration. The participants believed that it is crucial to prevent their writing from being AI-nized, which could lead to either plagiarism or a writing style resembling AI that overshadows their own voice—the very essence of their “identity and presentation of the self in writing” (Prince & Archer, 2014, p. 40). The students’ beliefs align with a connectivism paradigm of AI in Education (AIEd) outlined by Ouyang and Jiao (2021), in which AI serves as a tool to augment human intelligence and capability (Yang et al., 2021) and learner agency is placed at the core. Reliance on AI could lead to superficial engagement with writing tasks, discouraging the deeper, reflective thought processes essential for original creative expression. Furthermore, when AI suggests similar vocabulary, structures, and styles to various learners, it risks imposing a uniformity in expression that undermines the educational value of cultivating each individual’s unique and creative voice. AI may hinder students from exploring how language variation and linguistic diversity can be rich resources for meaning-making, creativity, identity formation, and problem-solving (Wang et al., 2020). Such critical engagement with diverse language resources is crucial for developing students’ literacy skills in a digital age where multicultural awareness is an integral part of education (Sánchez-Martín et al., 2019). As Dixon-Román et al. (2020) noted, educators must be wary of AI’s “racializing forces,” which standardize learning processes in ways that can marginalize non-dominant forms of knowledge and communication, as well as students whose experiences and identities are either not represented or misrepresented in the system.

While the participants concurred that upholding human voice and agency entails possessing integrity and alignment not only at the ideational level but also in the linguistic expression of those ideas, the L2 writers in this study added another nuanced dimension to the impact of AI on human voice and authenticity in the context of AI-assisted writing. As the L2 students experienced, ChatGPT’s language suggestions might not pose a threat to their voice but instead serve as a catalyst for augmenting their voice, as AI helped overcome their language barriers and better express ideas true to themselves. In other words, generative AI afforded the L2 writers powerful language repertoires that enhanced the accuracy and efficiency of the “linguistic rehearsing” (Abdel Latif, 2021) or “translating” (Kellogg, 1996) component of their writing process, thus allowing L2 students to produce writing more authentic to themselves. The finding highlights how learner characteristics and individual differences play an important role in students’ sensemaking of AI-assisted writing, complicating the existing understanding of AI’s affordances for learners with diverse linguistic backgrounds and learning needs.

From earlier conceptualizations of authenticity as “ownedness” and “being one’s own” by Heidegger (1927/1962) to contemporary perceptions of it as the “self-congruency” of an individual, group, or symbolic identity (Ferrara, 1998, p. 70), the notion of authenticity has been evolving and becoming more pluralistic. As Rings (2017) acknowledged, authenticity extends beyond adherence to personally endorsed commitments; it requires a comprehensive consideration of one’s self-awareness and the changing social context. Scholars should further pursue what authenticity and academic integrity mean in an increasingly AI-mediated educational context, ways to promote students’ authorial voice and agency, as well as the complicated authorship issues (Jabotinsky & Sarel, 2022) involved in AI-human collaboratively generated texts. As Eaton (2023) claims, it is time to contemplate “postplagiarism” and academic integrity in a future where “hybrid human-AI writing will become normal”.

Apart from the sounding better or sounding like me dilemma experienced by students, another paradox is whether AI caused missed learning opportunities or created new learning opportunities. As noted in the previous literature, AI-writing tools may rob students of essential learning experiences (Barrot, 2023a; Tseng & Warschauer, 2023). Adding to this concern from educators and scholars, the present study shows that the students themselves are also cognizant of the possible learning loss due to AI adoption. Furthermore, the study shows that rather than passively indulging themselves in the convenience of AI tools, a common concern among educators (Chan & Hu, 2023; Graham, 2023), the student writers attempted to seize new learning opportunities that emerged from AI technologies to promote their critical thinking and writing. This finding suggests a nuanced addition to sensemaking theory: the process of making sense of uncertainties in AI-infused literacy practices can itself be uncertain, involving reconciling dilemmas and acknowledging perplexing experiences. While not always yielding the clear-cut outcomes or causal attributions for observed phenomena and personal experience suggested by Poquet (2024), noticing and perceiving the unpredictable impacts of generative AI on students’ own learning processes can, in itself, be empowering. The process fosters a sense of agency and critical engagement, suggesting that the outcomes of sensemaking in the context of AI-assisted writing can be open-ended yet profound.

This important finding leads scholars to reconsider the essence of learning in an era of generative AI. Hwang et al. (2020) and Southworth et al. (2023) argued that AI is likely to transform not only the learning environment, but also the learning process, and even what it means to learn. This perspective finds resonance in the experiences of the participants in this study. While AI may shortcut traditional ways of doing writing, it does not inherently imply a reduction in students’ cognitive, behavioral, and affective engagement with writing, learning, and thinking. AI does not necessarily make writing easier; on the contrary, a critical, ethical, and authentic approach to AI-assisted writing pushes students to think further and prioritize their own voice, originality, and creativity, leading to high-quality writing. In this sense, when used properly, AI has the potential to introduce a new avenue for humanizing writing and education. As generative AI technologies are advancing rapidly, an expanding array of AI-powered writing assistance, intelligent tutoring systems, and feedback tools holds promise for catering to the diverse needs and learning styles of language learners and writers. These tools are not limited to mere textual assistance; the multimodal functionalities of generative AI can also allow writers to explore creative expressions and multimodal writing, enriching students’ literacy practices by integrating more visual, auditory, and interactive elements into the composition process (Kang & Yi, 2023; Li et al., 2024). As noted by Cao and Dede (2023), our educational model has long been centered around the product, directing our focus towards the outcomes and grades students achieve, often overlooking the learning process itself. The judgment calls involved in students’ interactions with AI, as showcased in the nuances of participants’ AI-assisted writing processes in this study, represent emerging learning opportunities that require students to draw upon a range of knowledge, skills, awareness of ethics and the self, criticality, and self-reflection to make informed decisions about AI in learning. The present study shows that such a decision-making process can play a pivotal role in cultivating students’ “AI literacy” (Ng et al., 2021) and promoting their responsible use of AI. Therefore, it should also be recognized as a valuable teaching opportunity that educators should not overlook.

7 Conclusion

This study explored students’ generative AI-assisted writing processes in a first-year writing class at an American college. The study found that students utilized generative AI for assisting with both global issues (e.g., argument, structure, coherence) and local issues of writing (e.g., syntax, diction, grammar), while they also had various ethical and practical concerns about the use of AI. Findings showed that large language models offer unique benefits for L2 writers, who can leverage their linguistic capabilities. The study highlights the urgency of explicit teaching of critical AI literacy and the value of (post)process-oriented writing pedagogy (e.g., Graham, 2023) in college writing classrooms so that students not only understand AI writing tools’ functions and limitations but also know how to utilize and evaluate them for specific communication and learning purposes.

However, writing instruction is still at the beginning stage of addressing this pressing need. Thus, pedagogical innovations, policy adjustments, new forms of writing assessment, and teacher education (Zhai, 2022) are needed to adapt to the potential impact of AI on desired student learning outcomes within specific writing curricula. For instance, integrating critical digital pedagogy into writing instruction and inviting students to reflect on their relevant AI literacy practices allow writing instructors to more effectively guide students in critically engaging with AI technologies in their academic literacy development. Policy adjustments should aim to cultivate an inclusive rather than “policing” environment (Johnson, 2023) that encourages students to use AI responsibly and as a means of fostering self-learning. Furthermore, writing assessment methods should evolve to evaluate not just final learning outcomes, such as the written products, but also the learning journey itself, such as the decision-making involved in AI-assisted writing. This shift encourages students to appreciate learning processes and the productive struggles they encounter along the way, so that they move beyond seeing AI as a shortcut and instead treat it as assistance in their quest for learning and writing development. In this way, students can leverage the linguistic, multimodal, interactive, and adaptable affordances of generative AI tools for personalized learning. This facilitates greater student ownership of their learning, enhancing their learner competence through self-direction, self-assessment, and self-reflection when interacting with AI tools (Barrot, 2023c; Fariani et al., 2023).

Following a phenomenological research design, the present study aims to provide an in-depth understanding of college students’ use of ChatGPT in their academic writing, yet it is limited by its small sample size and short duration. Therefore, the findings may not generalize to other classroom contexts or to a wide range of student populations. Future research could benefit from adopting a large-scale, longitudinal design to examine generative AI’s impacts on student writing and students’ long-term engagement with generative AI tools, both in formal classroom settings and in informal learning contexts. It is also worth exploring students of diverse age groups and language proficiency levels, as well as writing courses of different languages, purposes, and genres, to examine other factors that may influence students’ generative AI-assisted writing. After all, the participants in this study had already developed some proficiency and skills in academic writing, but holding learner agency (Ouyang & Jiao, 2021) can be more complex and challenging for younger learners. Further research is needed to understand students with varied domain knowledge, expertise, and writing abilities (Yan, 2023) and to uncover individual differences in AI-assisted writing. Additionally, the participants in this study utilized GPT 3.5 for their AI-assisted writing practices. Given the rapid advancement of AI technologies, new AI models and applications are continuously emerging. Thus, future research should investigate how various AI models and functionalities might differently influence students, taking into account the ongoing developments and innovations in AI.

Data Availability

The data are available from the author upon reasonable request.

References

Abdel Latif, M. M. A. (2021). Remodeling writers’ composing processes: Implications for writing assessment. Assessing Writing, 50, 100547. https://doi.org/10.1016/j.asw.2021.100547


Almusharraf, N., & Alotaibi, H. (2023). An error-analysis study from an EFL writing context: Human and automated essay scoring approaches. Technology, Knowledge and Learning, 28 (3), 1015–1031. https://doi.org/10.1007/s10758-022-09592-z

American Psychological Association. (2020).  Publication manual of the American Psychological Association  (7th ed.).

Atkinson, D. (2003). L2 writing in the post-process era: Introduction. Journal of Second Language Writing, 12 (1), 3–15. https://doi.org/10.1016/S1060-3743(02)00123-6

Bahari, A. (2021). Computer-mediated feedback for L2 learners: Challenges versus affordances. Journal of Computer Assisted Learning, 37 (1), 24–38. https://doi.org/10.1111/jcal.12481

Barrot, J. S. (2023a). Using ChatGPT for second language writing: Pitfalls and potentials. Assessing Writing, 57 , 100745. https://doi.org/10.1016/j.asw.2023.100745

Barrot, J. S. (2023b). Using automated written corrective feedback in the writing classrooms: Effects on L2 writing accuracy. Computer Assisted Language Learning, 36 (4), 584–607.

Barrot, J. S. (2023c). ChatGPT as a language learning tool: An emerging technology report. Technology, Knowledge and Learning. https://doi.org/10.1007/s10758-023-09711-4

Bedington, A., Halcomb, E. F., McKee, H. A., Sargent, T., & Smith, A. (2024). Writing with generative AI and human-machine teaming: Insights and recommendations from faculty and students. Computers and Composition, 71 , 102833. https://doi.org/10.1016/j.compcom.2024.102833

Cancino, M., & Panes, J. (2021). The impact of Google Translate on L2 writing quality measures: Evidence from Chilean EFL high school learners. System, 98, 102464. https://doi.org/10.1016/j.system.2021.102464

Cao, L., & Dede, C. (2023). Navigating a world of generative AI: Suggestions for educators. The Next Level Lab at Harvard Graduate School of Education. Cambridge, MA: President and Fellows of Harvard College.

Chan, C., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education . https://doi.org/10.1186/s41239-023-00411-8

Creswell, J. W. (2017). Qualitative inquiry and research design: Choosing among the five traditions . Sage.


Dixon-Román, E., Nichols, T. P., & Nyame-Mensah, A. (2020). The racializing forces of/in AI educational technologies. Learning, Media and Technology, 45 (3), 236–250. https://doi.org/10.1080/17439884.2020.1667825

Eaton, S. (2023). Six tenets of postplagiarism: Writing in the age of artificial intelligence. University of Calgary. http://hdl.handle.net/1880/115882 .

Ebadi, S., & Amini, A. (2022). Examining the roles of social presence and human-likeness on Iranian EFL learners’ motivation using artificial intelligence technology: A case of CSIEC chatbot. Interactive Learning Environments, 32 (2), 1–19. https://doi.org/10.1080/10494820.2022.2096638

Fariani, R. I., Junus, K., & Santoso, H. B. (2023). A systematic literature review on personalised learning in the higher education context. Technology, Knowledge and Learning, 28 (2), 449–476. https://doi.org/10.1007/s10758-022-09628-4

Ferrara, A. (1998). Reflective authenticity . Routledge.

Flower, L., & Hayes, J. R. (1981). A cognitive process theory of writing. College Composition & Communication, 32 (4), 365–387.

Fryer, L. K., & Carpenter, R. (2006). Bots as language learning tools. Language Learning & Technology, 10 , 8–14.

Gayed, J. M., Carlon, M. K. J., Oriola, A. M., & Cross, J. S. (2022). Exploring an AI-based writing Assistant’s impact on English language learners. Computers and Education: Artificial Intelligence, 3 , 100055. https://doi.org/10.1016/j.caeai.2022.100055

Glaser, N. (2023). Exploring the potential of ChatGPT as an educational technology: An emerging technology report. Technology, Knowledge and Learning, 28 (4), 1945–1952. https://doi.org/10.1007/s10758-023-09684-4

Golob, U. (2018). Sense-making. In R. L. Heath, W. Johansen, J. Falkheimer, K. Hallahan, J. J. C. Raupp, & B. Steyn (Eds.), The international encyclopedia of strategic communication (pp. 1–9). Wiley.

Graham, S. S. (2023). Post-process but not post-writing: large language models and a future for composition pedagogy. Composition Studies, 51 (1), 162–218.

Guo, K., Wang, J., & Chu, S. K. W. (2022). Using chatbots to scaffold EFL students’ argumentative writing. Assessing Writing, 54 , 100666. https://doi.org/10.1016/j.asw.2022.100666

Han, J., Yoo, H., Kim, Y., Myung, J., Kim, M., Lim, H., Kim, J., Lee, T., Hong, H., Ahn, S., & Oh, A. (2023). RECIPE: How to Integrate ChatGPT into EFL writing education. arXiv:2305.11583 . https://doi.org/10.48550/arXiv.2305.11583

Hayes, J. R. (2012). Modeling and remodeling writing. Written Communication, 29 (3), 369–388. https://doi.org/10.1177/0741088312451260

Heidegger, M. (1962). Being and time (J. Macquarrie & E. Robinson, Trans.). New York: Harper & Row (Original work published 1927).

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning . Center for Curriculum Redesign.

Huang, W., Hew, K., & Fryer, L. (2022). Chatbots for language learning—Are they really useful? A systematic review of chatbot-supported language learning. Journal of Computer Assisted Learning, 38 (1), 237–257.

Hwang, G. J., Xie, H., Wah, B. W., & Gašević, D. (2020). Vision, challenges, roles and research issues of Artificial Intelligence in Education. Computers & Education: Artificial Intelligence, 1 , Article 100001. https://doi.org/10.1016/j.caeai.2020.100001

Jabotinsky, H. Y., & Sarel, R. (2022). Co-authoring with an AI? Ethical dilemmas and artificial intelligence. SSRN Scholarly Paper . https://doi.org/10.2139/ssrn.4303959

Jeon, J. (2021). Chatbot-assisted dynamic assessment (CA-DA) for L2 vocabulary learning and diagnosis. Computer Assisted Language Learning, 36 (7), 1–27. https://doi.org/10.1080/09588221.2021.1987272

Johnson, M. D. (2017). Cognitive task complexity and L2 written syntactic complexity, accuracy, lexical complexity, and fluency: A research synthesis and meta-analysis. Journal of Second Language Writing, 37 , 13–38. https://doi.org/10.1016/j.jslw.2017.06.001

Johnson, G. P. (2023). Don’t act like you forgot: Approaching another literacy “crisis” by (re)considering what we know about teaching writing with and through technologies. Composition Studies, 51 (1), 169–175.

Kang, J., & Yi, Y. (2023). Beyond ChatGPT: Multimodal generative AI for L2 writers. Journal of Second Language Writing, 62 , 101070. https://doi.org/10.1016/j.jslw.2023.101070

Kellogg, R. T. (1996). A model of working memory in writing. In C. M. Levy & S. Ransdell (Eds.), The science of writing: Theories, methods, individual differences and applications (pp. 57–71). Laurence Erlbaum Associates.

Khuder, B., & Harwood, N. (2015). Writing in test and non-test situations: Process and product. Journal of Writing Research, 6 (3), 233–278.

Klein, G., Moon, B., & Hoffman, R. R. (2006). Making sense of sensemaking: A macrocognitive model. IEEE Intelligent Systems, 21 (5), 88–92.

Kohnke, L., Moorhouse, B. L., & Zou, D. (2023). Exploring generative artificial intelligence preparedness among university language instructors: A case study. Computers and Education: Artificial Intelligence, 5 , 100156. https://doi.org/10.1016/j.caeai.2023.100156

Koltovskaia, S. (2020). Student engagement with automated written corrective feedback (AWCF) provided by Grammarly: A multiple case study. Assessing Writing, 44 , 100450.

Kruse, O., Rapp, C., Anson, C., Benetos, K., Cotos, E., Devitt, A., & Shibani, A. (Eds.). (2023). Digital writing technologies in higher education . Springer.

Lapadat, J. C. (2012). Thematic analysis. In A. J. Mills, G. Durepos, & E. Weibe (Eds.), The encyclopedia of case study research (pp. 926–927). Sage.

Li, B., Wang, C., Bonk, C., & Kou, X. (2024). Exploring inventions in self-directed language learning with generative AI: Implementations and perspectives of YouTube content creators. TechTrends . https://doi.org/10.1007/s11528-024-00960-3

Lin, M. P. C., & Chang, D. (2020). Enhancing post-secondary writers’ writing skills with a chatbot: A mixed-method classroom study. Journal of Educational Technology & Society, 23 (1), 78–92.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry . Sage.

Link, S., Mehrzad, M., & Rahimi, M. (2022). Impact of automated writing evaluation on teacher feedback, student revision, and writing improvement. Computer Assisted Language Learning, 35 (4), 605–634. https://doi.org/10.1080/09588221.2020.1743323

Memarian, B., & Doleck, T. (2023). Fairness, accountability, transparency, and ethics (FATE) in artificial intelligence (AI), and higher education: A systematic review. Computers and Education: Artificial Intelligence. https://doi.org/10.1016/j.caeai.2023.100152

Mohsen, M. A. (2021). L1 versus L2 writing processes: What insight can we obtain from a keystroke logging program? Language Teaching Research, 4 , 48–62. https://doi.org/10.1177/13621688211041292

Moran, D. (2002). Introduction to phenomenology . Routledge.

Ng, D., Leung, J., Chu, S., & Qiao, M. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2 , 100041. https://doi.org/10.1016/j.caeai.2021.100041

Ouyang, F., & Jiao, P. (2021). Artificial intelligence in education: The three paradigms. Computers and Education: Artificial Intelligence, 2 , 100020. https://doi.org/10.1016/j.caeai.2021.100020

Pandarova, I., Schmidt, T., Hartig, J., Boubekki, A., Jones, R. D., & Brefeld, U. (2019). Predicting the difficulty of exercise items for dynamic difficulty adaptation in adaptive language tutoring. International Journal of Artificial Intelligence in Education, 29 (3), 342–367. https://doi.org/10.1007/s40593-019-00180-4

Poquet, O. (2024). A shared lens around sensemaking in learning analytics: What activity theory, definition of a situation and affordances can offer. British Journal of Educational Technology . https://doi.org/10.1111/bjet.13435

Prince, R., & Archer, A. (2014). Exploring voice in multimodal quantitative texts. Literacy & Numeracy Studies, 22 (1), 39–57. https://doi.org/10.5130/lns.v22i1.4178

Ranalli, J. (2021). L2 student engagement with automated feedback on writing: Potential for learning and issues of trust. Journal of Second Language Writing, 52 , 100816. https://doi.org/10.1016/j.jslw.2021.100816

Rings, M. (2017). Authenticity, self-fulfillment, and self-acknowledgment. The Journal of Value Inquiry, 51 (3), 475–489.

Saldaña, J. (2016). The coding manual for qualitative researchers (3rd ed.). Sage.

Sanchez-Martin, C., Hirsu, L., Gonzales, L., & Alvarez, S. P. (2019). Pedagogies of digital composing through a translingual approach. Computers and Composition, 52 , 142–157. https://doi.org/10.1016/j.compcom.2019.02.007

Schulze, M. (2008). AI in CALL: Artificially inflated or almost imminent? CALICO Journal, 25 (3), 510–527. https://doi.org/10.1558/cj.v25i3.510-527

Shi, H., & Aryadoust, V. (2023). A systematic review of automated writing evaluation systems. Education and Information Technologies, 28 (1), 771–795. https://doi.org/10.1007/s10639-022-11200-7

Smith, J. A. (1996). Beyond the divide between cognition and discourse: Using interpretative phenomenological analysis in health psychology. Psychology and Health, 11 (2), 261–271. https://doi.org/10.1080/08870449608400256

Smith, J. A., Flower, P., & Larkin, M. (2009). Interpretative phenomenological analysis: Theory, method and research . Sage.

Smith, J. A., & Shinebourne, P. (2012). Interpretative phenomenological analysis. In H. Cooper, P. M. Camic, D. L. Long, A. T. Panter, D. Rindskopf, & K. J. Sher (Eds.), Research designs: Quantitative, qualitative, neuropsychological, and biological (pp. 73–82). American Psychological Association. https://doi.org/10.1037/13620-005

Southworth, J., Migliaccio, K., Glover, J., Reed, D., McCarty, C., Brendemuhl, J., & Thomas, A. (2023). Developing a model for AI Across the curriculum: Transforming the higher education landscape via innovation in AI literacy. Computers and Education: Artificial Intelligence, 4 , 100127. https://doi.org/10.1016/j.caeai.2023.100127

Steiss, J., Tate, T. P., Graham, S., Cruz, J., Hebert, M., Wang, J., Moon, Y., Tseng, W., Warschauer, M., & Olson, C. (2024). Comparing the quality of human and ChatGPT feedback on students’ writing. Learning and Instruction . https://doi.org/10.1016/j.learninstruc.2024.101894

Su, Y., Lin, Y., & Lai, C. (2023). Collaborating with ChatGPT in argumentative writing classrooms. Assessing Writing, 57 , 100752. https://doi.org/10.1016/j.asw.2023.100752

Tan, X., Xu, W., & Wang, C. (2024). Purposeful remixing with generative AI: Constructing designer voice in multimodal composing. arXiv preprint arXiv:2403.19095.

Tsai, S. C. (2019). Using google translate in EFL drafts: A preliminary investigation. Computer Assisted Language Learning, 32 (5–6), 510–526. https://doi.org/10.1080/09588221.2018.1527361

Tseng, W., & Warschauer, M. (2023). AI-writing tools in education: If you can’t beat them, join them. Journal of China Computer-Assisted Language Learning, 3 (2), 258–262. https://doi.org/10.1515/jccall-2023-0008

Vetter, M. A., Lucia, B., Jiang, J., & Othman, M. (2024). Towards a framework for local interrogation of AI ethics: A case study on text generators, academic integrity, and composing with ChatGPT. Computers and Composition, 71 , 102831. https://doi.org/10.1016/j.compcom.2024.102831

Wang, C., Samuelson, B., & Silvester, K. (2020). Zhai nan, mai meng and filial piety: The translingual creativity of Chinese university students in an academic writing course. Journal of Global Literacies, Technologies, and Emerging Pedagogies, 6 (2), 1120–1143.

Warner, B. (2022). AI for Language Learning: ChatGPT and the Future of ELT. TESOL. http://blog.tesol.org/ai-for-language-learning-chatgpt-and-the-future-of-elt/

Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (2005). Organizing and the process of sensemaking. Organization Science, 16 (4), 409–421.

Yan, D. (2023). Impact of ChatGPT on learners in a L2 writing practicum: An exploratory investigation. Education and Information Technologies, 28 , 1–25. https://doi.org/10.1007/s10639-023-11742-4

Yang, S. J., Ogata, H., Matsui, T., & Chen, N. S. (2021). Human-centered artificial intelligence in education: Seeing the invisible through the visible. Computers & Education: Artificial Intelligence, 2 , Article 100008. https://doi.org/10.1016/j.caeai.2021.100008

Zhai, X. (2022). ChatGPT user experience: Implications for education. SSRN Scholarly Paper . https://doi.org/10.2139/ssrn.4312418

The author acknowledges that the research did not receive any funding.

Author information

Authors and Affiliations

Colby College, Waterville, ME, USA

Chaoran Wang

Corresponding author

Correspondence to Chaoran Wang.

Ethics declarations

Conflict of interest

The author confirms that there are no conflicts of interest.

Ethical Approval

The study was conducted with permission from, and in accordance with the guidelines of, the university's Institutional Review Board.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Wang, C. Exploring Students’ Generative AI-Assisted Writing Processes: Perceptions and Experiences from Native and Nonnative English Speakers. Tech Know Learn (2024). https://doi.org/10.1007/s10758-024-09744-3

Accepted: 14 May 2024

Published: 30 May 2024

DOI: https://doi.org/10.1007/s10758-024-09744-3

