Education Resources for Teachers

of Deaf/Hard of Hearing Students

Case Studies

Video: Maren Hadley discusses the use of progress monitoring data.

The following is a series of examples involving two students, Crystal and Henry.

Part 1.  The first set of examples focuses on assessment and the ongoing use of progress monitoring data.

Crystal, a fourth grader who is deaf, is reading at the 3.5 grade level, but her teacher thinks she should be “doing better”. At the first quarterly review meeting, the teacher expresses concern that Crystal may not be able to “keep up” with her hearing peers.

Henry, a second grader who is hard of hearing, is having difficulty meeting criteria in the reading curriculum. His classroom teacher does not want him to move to the next set of reading materials until he meets the performance criteria at the phonemic awareness and sound-letter identification level. Henry has been working at this level for 16 weeks with minimal progress.

Should the teacher of the deaf provide instruction to Crystal or Henry? What progress monitoring strategies would you use with Crystal or Henry?

Crystal’s teacher gives Crystal 3rd and 4th grade CBM Maze probes. The teacher uses 3rd grade level probes to monitor Crystal’s weekly reading growth. The teacher uses the 4th grade level probe to screen Crystal’s performance and to compare Crystal to her 4th grade peers.

Henry’s teacher uses pre-reading measures. The measures are Letter Identification, Letter-Sound Identification, Nonsense-Word Production, and Initial Phoneme Identification.

How will the data be used to address the problem?

Answer: The data will be used to quantify the difference between the student’s current performance and the expected level of performance.

Using the data displayed on a graph, Crystal’s teacher shows that her current reading performance level is lower than that of her fourth grade peers. Crystal’s current reading performance is also lower than the district benchmarks for 4th graders. Crystal’s teacher sets a performance goal for Crystal to reach by the end of the school year.

Henry’s progress monitoring graphs and mastery of letter-sound identification indicate that he is significantly behind his classroom peers. Henry’s teacher sets an ambitious but realistic goal for Henry to meet at the end of 8 weeks.
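The quantify-and-set-a-goal step described above amounts to simple arithmetic on the monitoring data. As a minimal sketch (all scores, benchmarks, and the weekly growth rate below are hypothetical, not taken from the case studies):

```python
# Hypothetical sketch of the goal-setting arithmetic described above.
# All values (scores, benchmark, growth rate) are illustrative only.

def performance_gap(current_score, benchmark):
    """Quantify the difference between where the student is and where peers are."""
    return benchmark - current_score

def set_goal(current_score, weekly_growth, weeks):
    """Project an end-of-period goal from a current score and expected weekly growth."""
    return current_score + weekly_growth * weeks

crystal_maze = 12          # correct Maze responses per probe (hypothetical)
grade4_benchmark = 20      # district benchmark for 4th graders (hypothetical)

gap = performance_gap(crystal_maze, grade4_benchmark)
goal = set_goal(crystal_maze, weekly_growth=0.5, weeks=30)

print(f"Gap to benchmark: {gap}")   # Gap to benchmark: 8
print(f"End-of-year goal: {goal}")  # End-of-year goal: 27.0
```

The same two functions apply to Henry by swapping in his pre-reading scores and an 8-week horizon.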

How can the progress monitoring data be used to make instructional changes?

Answer: Identify alternative hypotheses (Maybe if we tried…?)

Crystal’s teacher meets with the child study team and discusses Crystal’s discrepant performance in reading and describes what interventions she has tried in the past with Crystal. The team agrees to try adding more instruction time by having Crystal work 1-on-1 with a special education assistant for 20 minutes a day. 

Henry’s teacher reviews various evidence-based interventions for beginning readers. She selects a supplemental curriculum that has less emphasis on auditory discrimination and decides to try it with Henry.

How can progress monitoring data be used to determine if instructional changes are effective?

Answer: Monitor fidelity of intervention and progress monitoring data collection (CBM).

Crystal’s teacher continues to monitor her progress using CBM Maze procedures. She records Crystal’s scores and the start date of the additional instruction time. The teacher and the SEA record the days and times of the sessions with Crystal, to establish treatment fidelity.

Henry’s teacher continues to monitor Henry’s progress using word identification and initial phoneme identification. She keeps a record of when she started her selected intervention and the day-to-day intensity and duration of the implemented intervention.

How do we know the intervention is implemented?

Answer: Re-quantify the differences.

Crystal’s D/HH teacher meets with the classroom teacher to review Crystal’s graphs and to determine if Crystal’s level of discrepant performance has changed relative to her classmates since the implementation of additional instruction. 

Henry’s teacher graphs his scores and visually analyzes the graph to determine if Henry is making adequate progress toward his 8-week goal.
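The visual analysis Henry's teacher performs can be approximated numerically: fit a least-squares trend line to the weekly scores and compare its slope against the aim line implied by the 8-week goal. A sketch with invented scores and goal values:

```python
# Minimal sketch: compare a student's observed growth trend to the aim line.
# Weekly scores, baseline, and goal are invented for illustration.

def slope(scores):
    """Ordinary least-squares slope of scores over weeks 0..n-1."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

henry_scores = [4, 5, 5, 7, 8, 8, 10, 11]   # hypothetical weekly scores
aim_slope = (14 - 4) / 8                     # goal of 14 from a baseline of 4 over 8 weeks

observed = slope(henry_scores)
print(f"Observed slope: {observed:.2f}, aim slope: {aim_slope:.2f}")
print("On track" if observed >= aim_slope else "Consider an instructional change")
```

Decision rules of this kind (e.g., compare trend line to aim line after a set number of data points) are one common way teams standardize the "adequate progress" judgment.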

How do we know the intervention is effective?

Answer: The instructional goal has been met.

Part 2.  The second set of examples uses an evaluation approach, applying a different set of questions to review the progress monitoring data.

Crystal’s teacher has data that suggests that Crystal is lagging behind her peers in reading and is not “catching up.” Crystal’s teacher and her IEP team are concerned about Crystal falling further behind if the problem is not addressed early.

Henry’s teacher says that Henry is not progressing in his phonemic awareness and letter-sound identification skills. Henry’s peers have mastered these areas and are using a different set of materials. Henry’s teacher is feeling frustrated about Henry’s low rate of progress.

Does the problem exist?

As a student with a hearing loss, Crystal is progressing in reading, but is at risk for falling further behind her peers as material and demands become more challenging over time.

Henry, who is hard of hearing, likely needs extra support in receiving auditory-based instruction and learning auditory-based information. His teacher sees him lagging behind his peers in acquiring essential reading skills and this gap will not change if his current instruction or programming is not effective.

Is the problem important?

Crystal’s teacher considers Crystal’s past and current instructional experience and discusses with the child study team a variety of options to adjust Crystal’s current programming. The teacher feels that Crystal has the potential to “catch up” if she receives more direct instruction time.

Henry’s teacher thinks that her instructional reading strategies are effective for most of her students, but she knows that a different strategy needs to be considered for Henry. She wants to use an intervention that has evidence supporting its use in the classroom. She will choose one and monitor its effectiveness in promoting Henry’s pre-reading growth.

What is the best means to address the problem? What are the best instructional strategies/ interventions to address the problem?

Crystal’s D/HH teacher looks at her progress monitoring graphs with the classroom teacher. They decide that Crystal’s rate of progress has improved since the additional instructional time was implemented.

Henry’s teacher reviews his progress monitoring graphs and sees that Henry is progressing in word identification at a faster rate since the new intervention was implemented.  His initial phoneme identification skills have not progressed.

Is the instructional intervention we are using increasing the student’s progress as planned?

Crystal’s rate of progress has improved and the gap between Crystal and her peers is closing. Crystal’s teacher is happy with the rate of growth, and will continue to have Crystal receive additional instructional time.

Henry is progressing with the new intervention, but his teacher hoped that his rate of progress would be higher. She will continue to monitor Henry before making a new decision.

Is the original problem being solved through the intervention?

Literacy assessment – a case study in diagnosing and building a struggling reader’s profile

Abha Gupta 

Old Dominion University, U.S.A.; https://orcid.org/0000-0002-1863-359X

DOI: https://doi.org/10.36534/erlj.2023.02.10

Bibliographic citation: Educational Role of Language Journal (ISSN 2657-9774), Volume 2023-2(10): The Emotional Dimension of Language and of Linguistic Education, pp. 115-131.


Abstract

Poor language and literacy abilities negatively impact students emotionally, causing low self-esteem, anxiety, and frustration. This affects their attitudes toward learning, reduces motivation, and limits opportunities. Thus, addressing early language and literacy challenges through intervention and accurate assessment is vital not only for positive emotional development but for all-round academic growth. In a single-case study, the reading skills of a struggling third-grade reader were assessed using tools such as the QRI (Qualitative Reading Inventory) and visual-discrimination assessments to create a diagnostic profile. The study aimed to identify the student’s reading level and the factors that affect language and literacy abilities. Results showed that the student’s instructional reading level was in the low average range for expository texts but at a much higher level (fourth grade) for narrative texts. Strong word recognition skills were observed, but difficulty in comprehending expository, informational texts was evident. Recommendations include using targeted strategies to improve comprehension skills at all levels (literal, inferential, evaluative) for expository texts, while also addressing the emotional and social development of learners.

Keywords : literacy assessment, struggling reader, language and literacy, case study, elementary education


Original research article: Toward a Differential and Situated View of Assessment Literacy: Studying Teachers' Responses to Classroom Assessment Scenarios


  • Faculty of Education, Queen's University, Kingston, ON, Canada

Research has consistently demonstrated that teachers' assessment actions have a significant influence on students' learning experience and achievement. While much of the assessment research to date has investigated teachers' understandings of assessment purposes, their developing assessment literacy, or specific classroom assessment practices, few studies have explored teachers' differential responses to specific and common classroom assessment scenarios. Drawing on a contemporary view of assessment literacy, and providing empirical evidence for assessment literacy as a differential and situated professional competency, the purpose of this study is to explore teachers' approaches to assessment more closely by examining their differential responses to common classroom assessment scenarios. By drawing on data from 453 beginning teachers who were asked to consider their teaching context and identify their likely actions in response to common assessment scenarios, this paper makes a case for a situated and contextualized view of assessment work, providing an empirically-informed basis for reconceptualizing assessment literacy as negotiated, situated, and differential across teachers, scenarios, and contexts. Data from a survey presenting teachers with assessment scenarios are analyzed through descriptive statistics and significance testing to observe similarities and differences by scenario and by participants' teaching division (i.e., elementary and secondary). The paper concludes by considering implications for assessment literacy theory and future related research.
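The analysis described above (descriptive statistics plus significance testing of scenario responses by teaching division) can be sketched with a hand-rolled Pearson chi-square over a contingency table. The counts below are invented; in practice a library routine such as scipy.stats.chi2_contingency would typically be used:

```python
# Sketch of the kind of comparison described above: response counts by
# teaching division for one hypothetical scenario, tested with a
# Pearson chi-square statistic computed from scratch (counts are invented).

def chi_square(observed):
    """Pearson chi-square statistic for a 2-D contingency table."""
    rows = [sum(r) for r in observed]
    cols = [sum(c) for c in zip(*observed)]
    total = sum(rows)
    stat = 0.0
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            expected = r * c / total
            stat += (observed[i][j] - expected) ** 2 / expected
    return stat

# Rows: elementary, secondary; columns: three response options (hypothetical)
table = [[120, 80, 53],
         [60, 90, 50]]

print(f"chi-square = {chi_square(table):.2f}")
```

The statistic would then be compared against a chi-square distribution with (rows − 1) × (columns − 1) degrees of freedom to judge significance.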

At the start of their initial teacher education program, we invited nearly 500 teacher candidates to identify and reflect on three of their most memorable moments from their schooling experience. For well over half of these students, at least one of their memories related to assessment—experiences of failing tests and their abilities being misjudged, experiences of powerful feedback that set them in new directions, or experiences of unfairness and bias in assessment results and reporting. Assessment is a powerful and enduring force within classroom learning. How teachers approach assessment in their classrooms has been shown in the research to either motivate or demotivate their students' learning ( Harlen, 2006 ; Hattie, 2008 ; Cauley and McMillan, 2010 ), engage or disengage students from school ( Brookhart, 2008 ; Gilboy et al., 2015 ), and promote or hinder student growth ( Black and Wiliam, 1998b ; Gardner, 2006 ). Importantly for this study, teachers' classroom assessment actions can also expose their fundamental beliefs about teaching and learning ( Xu and Brown, 2016 ; Looney et al., 2017 ; Herppich et al., 2018 ).

Across many parts of the world, the past 20 years have seen significant policy developments toward increased accountability mandates and standards-based curricula that have resulted in the proliferation of assessment practices and uses within schools ( Herman, 2008 ; Bennett and Gitomer, 2009 ; Brookhart, 2011 ). This proliferation has not only contributed to a greater complexity in the variety of assessments teachers are expected to use but has also demanded ongoing communication of evidence about student learning to various stakeholders—students, parents, teachers, administrators, and the public. Moreover, classroom assessment continues to occupy an ever-expanding role in classrooms, from providing initial diagnostic information to guide beginning instruction to dominant traditional summative purposes of assessment for grading. Increasingly, teachers are also called to leverage daily formative assessments (i.e., assessment for learning) to monitor and support learning, as well as use assessment as learning to enhance students' metacognitive and self-regulatory capacities. Unsurprisingly, many teachers report feeling underprepared for these many assessment demands, particularly as they enter the teaching profession ( MacLellan, 2004 ; Volante and Fazio, 2007 ; Herppich et al., 2018 ).

Alongside these practical and policy developments, researchers have worked to understand ways to define the skills and knowledge teachers need within the current assessment climate, a field known as “assessment literacy” ( Popham, 2013 ; Willis et al., 2013 ; Xu and Brown, 2016 ; DeLuca et al., 2018 ). Arguably, among the many topics that exist within assessment and measurement, “assessment literacy” has received comparatively little attention despite its importance to the realities of how assessment is taken up in schools. Our interest in this paper, and in this Special Issue, is to prioritize “assessment literacy” as a focal area of assessment theory, one that requires increased theoretical attention given the contemporary professional demands on teachers.

While assessment literacy was first understood as a technical process, a set of skills and knowledge that teachers needed to know to be “literate” in the area of assessment, current thinking in the field suggests that teachers instead use a diversity of assessment practices derived from the integration of various sources of knowledge shaped by their unique contexts and background experiences ( Herppich et al., 2018 ). Understanding classroom assessment then requires looking beyond teachers' knowledge in assessment, and rather investigating teachers' approaches to assessments in relation to their classroom teaching and learning contexts. To date, the majority of studies in the field have worked to (a) delineate characteristics for teacher assessment literacy, standards for assessment practice, and teachers' conceptualizations of assessment (e.g., Brown, 2004 ; Coombs et al., 2018 ; Herppich et al., 2018 ), (b) investigate teachers' specific assessment practices in various contexts (e.g., Cizek et al., 1995 ; Cauley and McMillan, 2010 ), (c) explore the reliability and validity of teacher judgments related to formative and summative assessments (e.g., Brookhart, 2015 ; Brookhart et al., 2016 ), and (d) examine the alignment between teachers' assessment activities and system priorities and practices (e.g., Guskey, 2000 ; Alm and Colnerud, 2015 ). As the field of classroom assessment research continues to mature ( McMillan, 2017 ), additional studies that explore the nuanced differences in teachers' conceptualizations and enactment of assessment based on context and background would serve to provide empirical credence for a more contemporary understanding of assessment literacy, one that views assessment as a negotiated set of integrated knowledges and that is enacted differentially across contexts and teachers ( Willis et al., 2013 ).

Our intention in this paper is to explore the notion of a differential view of assessment literacy. Specifically, we are interested in how elementary (grades K−8) and secondary (grades 9–12) teachers might respond differently to common assessment scenarios given differences in their teaching contexts yet similarities in the broader policy environment and their pre-service education background. By drawing on data from 453 teachers' responses to five common classroom assessment scenarios, we begin to observe patterns in teachers' responses to the assessment scenarios, which provide initial evidence for differential approaches to assessment based on teaching context. In analyzing teachers' responses to assessment scenarios, our intention is to advance a broader theoretical argument in the field that aims to contribute toward an evolving definition of assessment literacy as a differential and negotiated competency.

The Evolution of Assessment Literacy

“Assessment literacy” is a broad term that has evolved in definition over the past three to four decades. Assessment literacy was originally conceptualized as a practical professional skill and initially regarded as teachers' technical knowledge and skills in assessment, with a substantial emphasis on psychometric principles and test design ( Stiggins, 1991 ). The 1990 Standards for Teacher Competency in Educational Assessment of Students [ American Federation of Teachers (AFT) et al., 1990 ], which articulated a set of practices for teacher assessment, represented test-based and psychometric approaches to classroom assessment with implications for diagnostic and formative purposes. These Standards highlighted teachers' skills in (a) choosing and developing appropriate assessment methods for instructional purposes; (b) administering, scoring, and interpreting assessment results validly; (c) using assessment results to evaluate student learning, instructional effectiveness, and school and curriculum improvement; (d) communicating assessment results to students, parents, and relevant stakeholders; and (e) identifying illegal, inappropriate, and unethical assessment practices. The Standards also provided the initial foundation for investigating teachers' assessment literacy, with a major focus on determining teachers' knowledge and skills in assessment through quantitative measures (e.g., Plake et al., 1993 ).

The largely psychometric and knowledge-driven view of assessment literacy was later expanded by scholars drawing on contemporary shifts in classroom assessment and learning theories, which included more attention to formative assessment as well as to the social and theoretical aspects of assessment ( Black and Wiliam, 1998a ; Brookhart, 2011 ). Specifically, Brookhart (2011) reviewed the 1990 Standards and argued that these standards needed to respond to two current shifts: (a) a growing emphasis on formative assessment (i.e., assessment for learning), which had been shown to positively influence student learning ( Black and Wiliam, 1998a ; Assessment Reform Group, 2002 ; Earl, 2012 ); and (b) attention to the social, theoretical, and technical issues that teachers address in their assessment practices in relation to increasing student diversity. In addition to these recommendations, there were continued calls by other scholars to accommodate assessments to respond to cultural, linguistic, and ability-based diversity within classrooms ( Klenowski, 2009 ; Siegel, 2014 ; Cowie, 2015 ).

In 2015, the Joint Committee for Standards on Educational Evaluation released the Classroom Assessment Standards for PreK-12 Teachers ( Klinger et al., 2015 ). This updated set of standards proposed 16 guidelines that reflected a contemporary conception of assessment literacy, in which teachers exercise “the professional judgment required for fair and equitable classroom formative, benchmark, and summative assessments for all students” (p. 1). These standards guide teachers, students, and parents to leverage assessment results not only to support student learning but also to screen and grade student achievement in relation to learning objectives. Accordingly, these 16 guidelines were categorized into three key assessment processes: foundations, use, and quality. Foundations characterize guidelines related to assessment purposes, designs, and preparation. Use comprises guidelines on examining student work, providing instructional feedback, and reporting. Quality includes guidelines on fairness, diversity, bias, and reflection. Collectively, these Standards began to address critiques raised in relation to the 1990 Standards within a more contemporary conception of assessment literacy, which recognizes that teachers make assessment decisions based on an interplay of technical knowledge and skills as well as social and contextual elements.

Previous conceptions of assessment literacy focused on what teachers need to know and be able to do, as an individual characteristic, with respect to assessment knowledge and skill. Contemporary conceptions recognize the importance and role of context in the capacity to develop and enact assessment knowledge and skills, viewing assessment literacy as a negotiated professional aspect of teachers' identities in which teachers integrate their knowledge of assessment with their knowledge of pedagogy, content, and learning context ( Adie, 2013 ; Scarino, 2013 ; Cowie et al., 2014 ; Xu and Brown, 2016 ; Looney et al., 2017 ). Willis et al. (2013 , p. 242) effectively articulate this view:

Assessment literacy is a dynamic context-dependent social practice that involves teachers articulating and negotiating classroom and cultural knowledges with one another and with learners, in the initiation, development and practice of assessment to achieve the learning goals of students.

At the heart of this view of assessment is recognizing that the practice of assessment is shaped by multiple factors including teacher background, experience, professional learning, classroom context, student interactions and behaviors, curriculum, and class diversity ( Looney et al., 2017 ), and that such factors will lead to differential experiences of assessment despite consistency in educational policies and training ( Tierney, 2006 ). More precisely, these socio-cultural factors shape how teachers negotiate various domains of assessment practice. Following previous research ( DeLuca et al., 2016a , b ; Coombs et al., 2018 ), these assessment domains may include teachers' understandings of assessment purposes (i.e., assessment for learning, assessment as learning, and assessment of learning), assessment processes (i.e., assessment design, administration and scoring, and communication and use of results), conceptions of fairness (i.e., a standardized orientation, an equitable approach, and a fully individualized approach), and priorities with respect to assessment theory (i.e., validity or reliability). Thus, what is evident from current conceptions of assessment literacy is that the practice of assessment is not a simple one; rather, it appears that multiple socio-cultural factors influence teachers' negotiation of various assessment domains to create differential practices of assessment based on context and scenario.

Assessment Literacy Research

Drawing on a more contemporary view of assessment literacy, several scholars have taken up the challenge of researching teachers' priorities, knowledge, and approaches to assessment (e.g., Wolf et al., 1991 ; Delandshere and Jones, 1999 ; Brown, 2004 ; Remesal, 2011 ; Gunn and Gilmore, 2014 ; Xu and Brown, 2016 ; Coombs et al., 2018 ) or exploring teachers' enacted assessment practices (e.g., Siegel and Wissehr, 2011 ; Scarino, 2013 ; Willis and Adie, 2014 ; Cowie and Cooper, 2017 ). The majority of this research has involved understanding how teachers primarily use assessments—the purposes of their assessment practices—as related to assessment policies, theories, and dominant assessment cultures within school systems. For example, Wolf et al. (1991) distinguished between a culture of testing and a culture of assessment with regard to teachers' conceptions of assessment purposes. Within a testing culture, teachers are not just focused on instrument construction and application but also on the production and use of relative rankings of students. In contrast, within an assessment culture, teachers focus on the relationship between instruction and learning and place value on the long-term development of the student. Teacher identification with either a testing or an assessment culture has been shown to have a direct impact upon their perceptions of intelligence, the relationship between teacher and learner, and the purpose of assessment instruments ( Wolf et al., 1991 ).

Similarly, in a landmark article, Shepard (2000) mapped assessment orientations and practices to dominant historical paradigms within educational systems. Specifically, she argued that traditional paradigms of social efficiency curricula, behaviorist learning theory, and scientific measurement favor a summative testing approach to assessment, whereas a social constructivist paradigm makes provisions for a formative assessment orientation. Her argument acknowledges that previous paradigms continue to shape the actions of teachers and that contemporary conceptualizations of assessment are “likely to be at odds with prevailing beliefs” (p. 12) resulting in resistance to progressive approaches to classroom assessment.

More recently, Brown (2004) and his later work with colleagues (e.g., Harris and Brown, 2009 ; Hirschfeld and Brown, 2009 ; Brown et al., 2011 ) presented teachers' differential conceptions of assessment as defined by their agreement or disagreement with four purposes of assessment: (a) improvement of teaching and learning, (b) school accountability, (c) student accountability, and (d) treating assessment as irrelevant. Teachers who hold the conception that assessment improves teaching and learning would also be expected to believe that formative assessments produce valid and reliable information about student performance to support data-based instruction. Assessment as a means to hold schools accountable for student performance requires teachers to emphasize either the reporting of instructional quality within a school or changes in the quality of instruction over reporting periods. The school accountability purpose of assessment has become increasingly popular, particularly in the United States, over the past few decades with the shift in education toward a standards-based accountability framework ( Brown, 2004 ; Stobart, 2008 ; Popham, 2013 ). Similarly, the student accountability conception views the primary purpose of assessment as holding students accountable for their learning toward explicit learning targets. Brown's final conception of assessment recognizes orientations that devalue assessment as a legitimate classroom practice. A teacher who holds this conception would most likely see assessment as a force of external accountability, disconnected from the relationship between teacher and student within the classroom ( Brown, 2004 ).

In a later study, Brown and Remesal (2012) examined differences in the conceptions of assessments held by prospective and practicing teachers, constructing a three-conception model to explain teachers' orientations to assessment: (a) assessment improves, (b) assessment is negative, and (c) assessment shows the quality of schools and students. Interestingly, prospective teachers relied more heavily upon assessment instruments of unknown validity and reliability (i.e., observations) and did not associate improved learning with valid, dependable assessments.

Postareff et al. (2012) identified five purposes of assessment that were consolidated into two overall purposes of assessment held by classroom teachers: reproductive conceptions (i.e., measuring memorization of facts, how well students covered content, and the application of knowledge) and transformational conceptions (i.e., measuring deep understanding and measuring the process and development of student thinking). The study also identified a relationship between a reproductive conception of assessment and traditional assessment practices, as well as between a transformational conception of assessment and alternative assessment practices.

Within these various conceptions of assessment, teachers enact diverse assessment practices within their classrooms. In a recent study, Alm and Colnerud (2015) examined 411 teachers' grading practices, noting wide variability in how grades were constructed due to teachers' approaches to classroom assessment. For example, the way teachers developed assessments varied based on whether they used norm- or criterion-referenced grading, whether they added personal rules onto the grading policy, and whether they incorporated data from national examinations into final grades. These factors, along with teachers' beliefs about what constituted undependable data on student performance and how non-performance factors could be used to adjust grades, resulted in teachers enacting grading systems in fundamentally different ways.

In our own work, we have found that teachers hold significantly different approaches to assessment when considered across teaching division ( DeLuca et al., 2016a ) and career stage ( Coombs et al., 2018 ). Specifically, early-career teachers tend to value more summative and standardized assessment protocols, while later-career teachers endorse more formative and equitable assessment protocols. Much of the research into teachers' enacted assessment practices has used a qualitative methodology involving observations and interviews, without the opportunity to consider how teachers would respond to similar and common assessment scenarios.

In order to provide additional evidence on the differential and situated nature of assessment literacy, we invited 453 teachers to respond to a survey that presented five common classroom assessment scenarios. Through their survey responses, we aimed to better understand teachers' various approaches to classroom assessment, with specific consideration for differences between elementary and secondary teachers.

The Teachers

Teachers who had completed their initial teacher education program at three Ontario-based universities were recruited for this study via alumni lists (i.e., a convenience sample). All teachers were certified and at a similar stage of their teaching career (i.e., they had completed initial teacher education just prior to starting teaching positions). All recent graduates at these institutions were sent an email invitation with a link to the scenario-based survey and provided consent prior to completing it, following approved research ethics protocols. The response rate for the survey was 71% (453 completed surveys out of 637 survey links accessed by potential participants). The 184 surveys that were accessed but not completed did not contain enough complete responses (i.e., at least four of five scenarios) to determine if there were differences between respondents who completed the survey and those who did not. Of the respondents (i.e., 453 complete responses), the vast majority (87%) had secured work or were planning to work in the public-school system in Ontario. There was a near even split in gender at the secondary teaching division (grades 7–12), with a majority (81%) of females at the elementary teaching division (grades K−6). In total, 200 respondents represented the secondary teaching division and 253 represented the elementary division.
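The participant figures reported above are internally consistent, which a few lines of arithmetic confirm (all numbers come from the paragraph itself):

```python
# Check the reported survey figures against each other.
completed = 453      # completed surveys
accessed = 637       # survey links accessed
elementary = 253     # elementary-division respondents
secondary = 200      # secondary-division respondents

assert completed == elementary + secondary       # 253 + 200 = 453
assert accessed - completed == 184               # accessed-but-incomplete surveys

print(f"Response rate: {completed / accessed:.0%}")  # Response rate: 71%
```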

An adapted version of the Approaches to Classroom Assessment Inventory (ACAI) was used in this research. The ACAI was previously developed from an analysis of 15 contemporary assessment standards (i.e., 1990–present) from five geographic regions (see DeLuca et al., 2016b, for a complete analysis of standards). From this analysis, we developed a set of themes to demarcate the construct of assessment literacy, aligned with the most recently published Classroom Assessment Standards from the Joint Committee for Standards on Educational Evaluation (2015). The following assessment literacy domains were integrated into the ACAI: (a) Assessment Purposes, (b) Assessment Processes, (c) Assessment Fairness, and (d) Measurement Theory. Each domain was associated with a set of three priority areas. For example, the three priorities associated with the domain of assessment purposes were: assessment of learning, assessment for learning, and assessment as learning. See Table 1 for a complete list of assessment literacy domains with definitions of the associated priority areas.


Table 1. Assessment literacy domains.

Scenario-based items were created for the ACAI to address the four assessment literacy domains. An expert-panel method was used to ensure the construct validity of the instrument, followed by pilot testing (see DeLuca et al., 2016a for additional instrument-development information). In total, 20 North American educational assessment experts followed an alignment methodology (Webb, 1997, 1999, 2005; DeLuca and Bellara, 2013) to provide feedback on the scenario items. Each expert rated the items on a five-point scale based on their alignment to the table of specifications and the related assessment literacy domain/priority. Based on expert feedback, the scenarios were revised and amended until all items met the validation criterion (i.e., an average alignment rating of 4 or more). After the alignment process, the ACAI scenarios were pilot tested with practicing teachers. The ACAI version used in this study included 20 items equally distributed across five classroom assessment scenarios, with a second part collecting demographic data.

Teachers completed an online survey that included the five assessment scenarios and demographic questions. For each scenario, teachers were presented with 12 responses and asked to identify the likelihood of enacting each response on a six-point scale (1 = not at all likely; 6 = highly likely). Each domain maintained three approach options related to the recently published Joint Committee Classroom Assessment Standards (Klinger et al., 2015). In completing the survey, teachers were asked to consider their own teaching context when responding to each scenario (i.e., to position the scenario in relation to the students they primarily or most recently taught).
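The scoring described above can be sketched as follows. This is an illustrative reconstruction, not the authors' scoring code, and the ratings are invented: each of a scenario's response options (one per priority area in Table 1) receives a 1–6 likelihood rating from each teacher, and a reported M and SD for an approach are simply the mean and standard deviation of those ratings across teachers.

```python
# Illustrative sketch (hypothetical data): computing the reported M and SD
# for one response option of one scenario from individual 1-6 ratings.
from statistics import mean, stdev

# Invented six-point likelihood ratings from five teachers for a single
# response option (e.g., the "assessment for learning" response).
ratings = [5, 6, 4, 5, 5]

m, sd = mean(ratings), stdev(ratings)  # sample standard deviation
print(f"M = {m:.2f}, SD = {sd:.2f}")   # → M = 5.00, SD = 0.71
```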

Data Analysis

Only fully completed surveys were included in our analysis (n = 453). Descriptive statistics (mean, standard deviation) were calculated for responses to each action. Statistical comparisons by teaching division (elementary, secondary) were conducted using independent-samples t-tests (α = 0.05). A Bonferroni correction was employed, with an adjusted alpha value of 0.0008 (α = 0.05/60 statistical tests) used in this study (Peers, 1996). Cohen's d was calculated as a measure of effect size. All data analysis was completed using the Statistical Package for the Social Sciences version 22 (SPSS v. 22). As our interest in this paper was in how teachers respond consistently and differently to the same assessment scenario, results were analyzed by scenario and by participant demographic background to determine contextual and situational differences between teachers and their approaches to assessment.
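The analysis pipeline above can be sketched as follows. This is not the authors' SPSS analysis; it is a hedged Python reconstruction with synthetic ratings, showing an independent-samples t-test for one response item, the Bonferroni-adjusted alpha, and a pooled-variance Cohen's d.

```python
# Illustrative sketch (synthetic data, not the study's dataset): comparing
# elementary vs. secondary teachers' ratings on one response item.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
elementary = rng.integers(1, 7, size=253).astype(float)  # n = 253, ratings 1-6
secondary = rng.integers(1, 7, size=200).astype(float)   # n = 200

# Independent-samples t-test
t, p = stats.ttest_ind(elementary, secondary)

# Bonferroni correction: 60 tests at a family-wise alpha of 0.05
alpha_adj = 0.05 / 60  # ~0.0008, as used in the paper

# Cohen's d with a pooled standard deviation
n1, n2 = len(elementary), len(secondary)
pooled_var = ((n1 - 1) * elementary.var(ddof=1) +
              (n2 - 1) * secondary.var(ddof=1)) / (n1 + n2 - 2)
d = (elementary.mean() - secondary.mean()) / np.sqrt(pooled_var)

print(f"t({n1 + n2 - 2}) = {t:.2f}, p = {p:.4f}, "
      f"significant at adjusted alpha: {p < alpha_adj}, d = {d:.2f}")
```

With 60 comparisons, the correction keeps the family-wise error rate at 0.05, which is why only p-values below 0.0008 are reported as significant in the results that follow.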

Scenario Responses

In analyzing teachers' responses, we provide overall response patterns by scenario, identifying the most likely and least likely responses in relation to the various assessment approaches (see Table 1). Results are presented with consideration for both descriptive trends and significant results by teaching division (i.e., elementary and secondary). Complete results are presented in Appendices A–E, with the text highlighting priority areas and differences between groups by scenario.

You give your class a paper-pencil summative unit test with accommodations and modifications for identified learners. Sixteen of the 24 students fail.

In responding to this scenario, teachers prioritized an assessment for learning approach (M = 5.05, SD = 1.01), a design approach (M = 4.67, SD = 1.00), and an equitable approach to fairness (M = 4.57, SD = 1.10). In practice, these responses would involve teachers re-teaching parts of the unit and giving students opportunities to apply their learning prior to re-testing the material. It also involves teachers recognizing that the test design may be flawed and that they might need to design a revised unit test, in particular for students with exceptionalities. Importantly, significant differences between elementary and secondary teachers were noted for assessment for learning [t(452) = 3.68, p < 0.0008, d = 0.35], with elementary teachers favoring these approaches more than secondary teachers.

Among the least endorsed responses to this scenario were those reflecting summative and standardized approaches to assessment. Across all teachers, the lowest-scored response was to remove test questions that most students failed and re-calculate student scores without those questions (M = 2.99, SD = 1.38). Almost as low was to record the test grade as each student's summative assessment for the unit but reduce the weight of the test in the final grade (M = 3.13, SD = 1.32). Interestingly, however, this response option showed a significant difference between elementary and secondary teachers, with secondary teachers more likely [t(452) = 4.72, p < 0.0008, d = 0.45] to endorse it. Finally, the third-lowest response related to a standard approach to assessment fairness, which involved allowing all students to retake a similar test and averaging the two grades (M = 3.47, SD = 1.27).

You discover that one of your students has plagiarized some of his assignment (e.g., an essay, lab report).

The results from this scenario suggest some important differences in how elementary and secondary teachers view and respond to plagiarism. Both groups of teachers highly endorsed a communicative approach (elementary M = 5.22, SD = 0.87; secondary M = 5.19, SD = 0.83) and a design approach (elementary M = 4.81, SD = 1.04; secondary M = 4.72, SD = 1.04). In practice, these approaches involve talking with students about the severity of plagiarism and negotiating potential next steps to ensure that the student learns and demonstrates their learning appropriately. As in the first scenario, teachers would also focus on the design of the assessment task and reflect on designing tasks that support more authentic work. Significant differences were observed in the third-highest endorsed response. Secondary teachers (M = 4.97, SD = 0.94) were statistically more likely [t(363.42) = 4.67, p < 0.0008, d = 0.45] to respond using a standard approach, which involves explaining to the student the policy on plagiarism and how it must be consistently applied to all students. Adherence to plagiarism policies for all students was further endorsed by secondary teachers in several other significant responses; specifically, those related to a consistent approach [t(452) = 4.30, p < 0.0008, d = 0.41], which involves consulting school policy on plagiarism and implementing consequences consistent with that policy (elementary M = 4.04, SD = 1.18; secondary M = 4.52, SD = 1.19), and an assessment of learning approach [t(452) = 5.57, p < 0.0008, d = 0.52], which requires teachers to administer consequences in alignment with school policies (elementary M = 4.09, SD = 1.12; secondary M = 4.68, SD = 1.13).

Out of 28 students in your class, 4 students are classified/identified with an exceptionality and have an Individual Education Plan (IEP) (i.e., each student requires accommodations but not a modified curriculum), as well as several other unidentified students with differentiated learning needs. You must decide how to accurately measure learning in your class.

The primary response of both elementary and secondary teachers to this scenario was an equitable approach (M = 5.11, SD = 0.97). In practice, teachers would aim to ensure that students with identified learning exceptionalities were provided with accommodations on all assessment tasks, consistent with many school and jurisdictional policies on teaching learners with exceptionalities. Following this priority response, teachers endorsed a communicative approach (M = 5.06, SD = 0.93), in which teachers would explain to students and parents the purpose of accommodations and how they would be implemented and communicated on report cards.

Among the lowest-scored responses were a contextualized approach in which teachers would develop different scoring rubrics for identified students (M = 3.94, SD = 1.25) and a standard approach in which teachers would grade students (without accommodations) based on the same assessments (M = 3.52, SD = 1.35). Interestingly, while not widely endorsed, elementary teachers (M = 4.32, SD = 1.07) significantly prioritized a contextual approach (i.e., still within the “likely” response category) more than secondary teachers [M = 3.63, SD = 1.31; t(397.06) = 3.27, p < 0.0008, d = 0.58]. Furthermore, secondary teachers (M = 3.88, SD = 1.31) significantly prioritized a standard approach more than elementary teachers [M = 3.06, SD = 1.25; t(452) = 6.78, p < 0.0008, d = 0.64]. The lowest-ranked response reflected a consistent approach in which teachers would use the same scoring rubric for all students in their class, with secondary teachers statistically more likely to enact this response [elementary M = 2.98, SD = 1.25; secondary M = 3.90, SD = 1.35; t(439.67) = 7.54, p < 0.0008, d = 0.71].

You are planning a unit for your class.

The top three responses to this scenario suggest that teachers base their assessments on the taught curriculum content, enacted pedagogical activities, and learning goals co-constructed with students. Teachers were most likely to use a contextual response, developing assessments based on the context and activities of their enacted lessons (M = 5.12, SD = 0.79). Teachers also endorsed a balanced approach in which they would develop assessments from questions and activities that had worked well with other students but adjust them to the content and pedagogies of their enacted lessons (M = 4.90, SD = 0.90). Using assessments to guide unit planning was also highly endorsed. Again, standard and consistent approaches were the two lowest-ranked responses to this scenario; however, secondary teachers (M = 4.15, SD = 1.24) endorsed a standard response more than elementary teachers [M = 3.42, SD = 1.12; t(452) = 6.42, p < 0.0008, d = 0.62].

A parent of one of your classified/identified students is concerned about an upcoming standardized test.

This assessment scenario requires teachers to consider their orientation to large-scale testing, the assessment of students with exceptionalities, and their approach to communicating with parents. The priority response for both elementary and secondary teachers was an equitable approach (M = 5.26, SD = 0.91). Across both divisions, teachers would tell the parent that her child's IEP would be consulted prior to the test and that appropriate accommodations, congruent with the IEP, would be provided. The next set of highly endorsed responses included a differentiated approach (M = 4.89, SD = 1.14), a design approach (M = 4.80, SD = 1.06), and a communicative approach (M = 4.53, SD = 1.14). These responses suggest that teachers aim to articulate the purpose, role, and influence of the standardized test on students' learning and grades. They are sensitive to the limitations of standardized assessments and equally demonstrate the value of classroom-level data in providing more nuanced information about student learning.

Among the lowest-endorsed responses were an assessment for learning approach, in which teachers would tell the parent that the standardized test would provide feedback on her child's learning toward educational standards and help guide teaching and learning (M = 4.07, SD = 1.28), and a balanced approach (M = 3.89, SD = 1.39). For this scenario, a balanced approach involved telling the parent that standardized tests, in conjunction with report card grades, allow parents to draw more informed conclusions about their child's growth and achievement than either source can provide alone. A significant difference between elementary and secondary teachers lay in their endorsement of a standard response, in which teachers would tell the parent that all eligible students in the class must complete the standardized test [elementary M = 3.85, SD = 1.45; secondary M = 4.49, SD = 1.23; t(452) = 4.98, p < 0.0008, d = 0.48]. This significant difference might point to differing orientations toward the role of standardized testing between elementary and secondary teachers and may warrant further investigation.

Implications for Assessment Literacy

Much of the assessment research to date has investigated teachers' understandings of assessment purposes (e.g., Brown, 2004; Barnes et al., 2017), teachers' developing assessment literacy (Brown, 2004; DeLuca et al., 2016a; Coombs et al., 2018; Herppich et al., 2018), or specific classroom assessment practices (Cizek et al., 1995; Cauley and McMillan, 2010). Few studies have explored teachers' differential responses to specific, common classroom assessment scenarios to substantiate contemporary conceptions of assessment literacy as a situated and differential practice predicated on negotiated knowledges (Willis et al., 2013; Looney et al., 2017). Stemming from the assumption that teachers' assessment actions have significant influence on students' learning experience and achievement (Black and Wiliam, 1998b; Hattie, 2008; DeLuca et al., 2018), there is a need to understand how teachers approach assessment similarly or differently across grades, classrooms, and teaching contexts. In this study, we presented beginning teachers with five common classroom assessment scenarios and asked them to consider their own teaching context while identifying their likely responses using a multi-dimensional framework of classroom assessment actions.

While we recognize that this is a small-scale study reliant on one data source, we argue that it provides additional evidence to further conceptualizations of assessment literacy as both situated and differential across teachers. What we see from this study is that, in relation to classroom assessment scenarios, teachers show apparent consistency (a core value toward student learning that guides their assessment practice) but also significant instances of difference, which translate into differences in teacher actions in the classroom. For example, secondary teachers endorsed a standard approach to fairness significantly more than elementary teachers in scenarios 2, 3, 4, and 5. Differences in how teachers respond to common assessment scenarios are important, as they suggest that students potentially experience assessment quite differently across teachers despite consistent policies and similar professional learning backgrounds (Coombs et al., 2018). While these differences may not be problematic, and may in fact be desirable in certain instances (e.g., there might be good justification for changing a response to an assessment scenario between elementary and secondary school contexts), they do support the notion of a differential and situated view of teachers' classroom assessment practices.

We recognize that the differences observed between teachers might be due to a complexity of factors, including teaching division, class, and personal characteristics and dispositions, that interact as teachers negotiate assessment scenarios in context. This amounts to a recognition that multiple factors shape teachers' assessment actions in the classroom. These findings support an expanded view of assessment literacy that moves beyond a strictly psychological trait (i.e., the cognitive learning of assessment knowledge and skills) to an always situated and differential professional responsibility, resulting from teachers negotiating diverse factors at micro-levels (e.g., teachers' beliefs, knowledge, experience, conceptions, teacher diversity), meso-levels (i.e., classroom and school beliefs, policies, practices, student diversity), and macro-levels (i.e., system assessment policies, values, protocols) (Fulmer et al., 2015). Adopting a situated and differential view of assessment literacy, in which classroom assessment is shaped by a negotiation of personal and contextual factors, holds important implications for how teachers are supported in their assessment practices. First, like effective pedagogies, classroom assessment will not look the same in each classroom. While teachers may uphold strong assessment theory, the way that theory is negotiated amid the complex dynamics of classroom teaching, learning, and diversity, and in relation to school and system cultures of assessment, will yield differences in assessment practice. Second, as teachers' assessment practices are to some extent context-dependent (Fulmer et al., 2015), teachers may shift their practices as they work across different teaching contexts (i.e., grades, subjects, schools) or in relation to different students.
Finally, this expanded view of assessment literacy suggests that learning to assess is a complex process that involves negotiating evolving assessment knowledge alongside other evolving pedagogical knowledges, the socio-cultural contexts of classroom teaching and learning, and system priorities, policies, and processes (Willis et al., 2013).

In considering research stemming from this and other recent assessment literacy studies, we suggest additional empirical investigations to explore the role of the various influencing factors that shape teachers' decision-making within classroom assessment scenarios. In particular, future research should address the limitations of the present study; namely, (a) the sample was drawn from one educational jurisdiction, (b) the data involved a self-report scale of intended actions rather than observed actions, and (c) the teachers in this study were all new to the profession. Future studies should consider both reported and enacted practices across a wide range of teachers and contexts, with purposeful attention to the factors that shape their situated and differential approaches to classroom assessment.

Data Availability

The datasets generated for this study are available on request to the corresponding author.

Ethics Statement

This study was carried out in accordance with the recommendations of Queen's General Research Ethics Board with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the Queen's General Research Ethics Board.

Author Contributions

All authors contributed to the intellectual development of this paper. The author order reflects the contributions made by each author to this paper.

Funding was provided by the Social Sciences and Humanities Research Council of Canada, Grant #430-2013-0489.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2019.00094/full#supplementary-material

1. ^ Depending on the national context, the term “teaching division” should be considered synonymous with school levels and/or educational levels, all of which are used to denote grades a teacher instructs.

Adie, L. (2013). The development of teacher assessment identity through participation in online moderation. Assess. Educ. 20, 91–106. doi: 10.1080/0969594X.2011.650150

Alm, F., and Colnerud, G. (2015). Teachers' experiences of unfair grading. Educ. Assess. 20, 132–150. doi: 10.1080/10627197.2015.1028620

American Federation of Teachers (AFT) National Council on Measurement in Education (NCME) and National Education Association (NEA). (1990). Standards for teacher competence in educational assessment of students. Educ. Measure. 9, 30–32. doi: 10.1111/j.1745-3992.1990.tb00391.x

Assessment Reform Group (2002). Assessment for Learning: 10 Principles . London: University of Cambridge.

Barnes, N., Fives, H., and Dacey, C. M. (2017). US teachers' conceptions of the purposes of assessment. Teach. Teach. Educ. 65, 107–116. doi: 10.1016/j.tate.2017.02.017

Bennett, R. E., and Gitomer, D. H. (2009). “Transforming K-12 assessment: Integrating accountability, testing, formative assessment and professional support,” in Educational Assessment in the 21st Century: Connecting Theory and Practice , eds C. Wyatt-Smith and J. J. Cumming (Dordrecht: Springer), 43–61. doi: 10.1007/978-1-4020-9964-9_3

Black, P., and Wiliam, D. (1998a). Assessment and classroom learning. Assess. Educ. 5, 7–74. doi: 10.1080/0969595980050102

Black, P., and Wiliam, D. (1998b). Inside the Black Box: Raising Standards Through Classroom Assessment . London: GL Assessment.

Brookhart, S. M. (2008). How to Give Effective Feedback to Your Students. Alexandria, VA: Association of Supervision and Curriculum Development.

Brookhart, S. M. (2011). Educational assessment knowledge and skills for teachers. Educ. Measure. 30, 3–12. doi: 10.1111/j.1745-3992.2010.00195.x

Brookhart, S. M. (2015). Graded achievement, tested achievement, and validity. Educ. Assess. 20, 268–296. doi: 10.1080/10627197.2015.1093928

Brookhart, S. M., Guskey, T. R., Bowers, A. J., McMillan, J. H., Smith, J. K., Smith, L. F., et al. (2016). A century of grading research: meaning and value in the most common educational measure. Rev. Educ. Res. 86, 803–848. doi: 10.3102/0034654316672069

Brown, G. T., Hui, S. K., Flora, W. M., and Kennedy, K. J. (2011). Teachers' conceptions of assessment in Chinese contexts: a tripartite model of accountability, improvement, and irrelevance. Int. J. Educ. Res. 50, 307–320. doi: 10.1016/j.ijer.2011.10.003

Brown, G. T., and Remesal, A. (2012). Prospective teachers' conceptions of assessment: a cross-cultural comparison. Span. J. Psychol. 15, 75–89. doi: 10.5209/rev_SJOP.2012.v15.n1.37286

Brown, G. T. L. (2004). Teachers' conceptions of assessment: implications for policy and professional development. Assess. Educ. 11, 301–318. doi: 10.1080/0969594042000304609

Cauley, K. M., and McMillan, J. H. (2010). Formative assessment techniques to support student motivation and achievement. Clear. House 83, 1–6. doi: 10.1080/00098650903267784

Cizek, G. J., Fitzgerald, S. M., and Rachor, R. A. (1995). Teachers' assessment practices: preparation, isolation, and the kitchen sink. Educ. Assess. 3, 159–179. doi: 10.1207/s15326977ea0302_3

Coombs, A. J., DeLuca, C., LaPointe-McEwan, D., and Chalas, A. (2018). Changing approaches to classroom assessment: an empirical study across teacher career stages. Teach. Teach. Educ. 71, 134–144. doi: 10.1016/j.tate.2017.12.010

Cowie, B. (2015). “Equity, ethics and engagement: principles for quality formative assessment in primary science classrooms,” in Sociocultural Studies and Implications for Science Education , eds C. Milne, K. Tobin, and D. DeGennaro (Dordrecht: Springer), 117–133. doi: 10.1007/978-94-007-4240-6_6

Cowie, B., and Cooper, B. (2017). Exploring the challenge of developing student teacher data literacy. Assess. Educ. 24, 147–163. doi: 10.1080/0969594X.2016.1225668

Cowie, B., Cooper, B., and Ussher, B. (2014). Developing an identity as a teacher-assessor: three student teacher case studies. Assess. Matters 7, 64–89. Available online at: https://www.nzcer.org.nz/nzcerpress/assessment-matters/articles/developing-identity-teacher-assessor-three-student-teacher

Delandshere, G., and Jones, J. H. (1999). Elementary teachers' beliefs about assessment in mathematics: a case of assessment paralysis. J. Curric. Superv. 14:216.

DeLuca, C., and Bellara, A. (2013). The current state of assessment education: aligning policy, standards, and teacher education curriculum. J. Teach. Educ. 64, 356–372. doi: 10.1177/0022487113488144

DeLuca, C., LaPointe-McEwan, D., and Luhanga, U. (2016a). Approaches to classroom assessment inventory: A new instrument to support teacher assessment literacy. Educ. Assess. 21, 248–266. doi: 10.1080/10627197.2016.1236677

DeLuca, C., LaPointe-McEwan, D., and Luhanga, U. (2016b). Teacher assessment literacy: a review of international standards and measures. Educ. Assess. Eval. Account. 28, 251–272. doi: 10.1007/s11092-015-9233-6

DeLuca, C., Valiquette, A., Coombs, A. J., LaPointe-McEwan, D., and Luhanga, U. (2018). Teachers' approaches to classroom assessment: a large-scale survey. Assess. Educ. 25, 355–375. doi: 10.1080/0969594X.2016.1244514

Earl, L. M. (2012). Assessment as Learning: Using Classroom Assessment to Maximize Student Learning . Thousand Oaks, CA: Corwin Press.

Fulmer, G. W., Lee, I. C. H., and Tan, K. H. K. (2015). Teachers' assessment practices: an integrative review of research. Assess. Educ. 22, 475–494. doi: 10.1080/0969594X.2015.1017445

Gardner, J. (2006). “Assessment for learning: a compelling conceptualization,” in Assessment and Learning , ed. J. Gardner (London: Sage), 197–204.

Gilboy, M. B., Heinerichs, S., and Pazzaglia, G. (2015). Enhancing student engagement using the flipped classroom. J. Nutr. Educ. Behav. 47, 109–114. doi: 10.1016/j.jneb.2014.08.008

Gunn, A. C., and Gilmore, A. (2014). Early childhood initial teacher education students' learning about assessment. Assess. Matters 7, 24–38. Available online at: https://www.nzcer.org.nz/nzcerpress/assessment-matters/articles/early-childhood-initial-teacher-education-students-learning

Guskey, T. R. (2000). Grading policies that work against standards…and how to fix them. NASSP Bull. 84, 20–29. doi: 10.1177/019263650008462003

Harlen, W. (2006). “On the relationship between assessment for formative and summative purposes,” in Assessment and Learning , ed. J. Gardner (Los Angeles, CA: Sage), 87–103. doi: 10.4135/9781446250808.n6

Harris, L. R., and Brown, G. T. L. (2009). The complexity of teachers' conceptions of assessment: tensions between the needs of schools and students. Assess. Educ. 16, 365–381. doi: 10.1080/09695940903319745

Hattie, J. (2008). Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. London: Routledge. doi: 10.4324/9780203887332

Herman, J. L. (2008). “Accountability and assessment: Is public interest in K-12 education being served?,” in The Future of Test-Based Educational Accountability , eds K. E. Ryan and L. A. Shepard (New York, NY: Routledge), 211–231.

Herppich, S., Praetoriusm, A., Forster, N., Glogger-Frey, I., Karst, K., Leutner, D., et al. (2018). Teachers' assessment competence: integrating knowledge-, process-, and product-oriented approaches into a competence-oriented conceptual model. Teach. Teach. Educ. 76, 181–193. doi: 10.1016/j.tate.2017.12.001

Hirschfeld, G. H., and Brown, G. T. (2009). Students' conceptions of assessment: factorial and structural invariance of the SCoA across sex, age, and ethnicity. Eur. J. Psychol. Assess. 25, 30–38. doi: 10.1027/1015-5759.25.1.30

Klenowski, V. (2009). Australian Indigenous students: addressing equity issues in assessment. Teach. Educ. 20, 77–93. doi: 10.1080/10476210802681741

Klinger, D. A., McDivitt, P. R., Howard, B. B., Munoz, M. A., Rogers, W. T., and Wylie, E. C. (2015). The Classroom Assessment Standards for PreK-12 Teachers. Kindle Direct Press.

Looney, A., Cumming, J., van Der Kleij, F., and Harris, K. (2017). Reconceptualising the role of teachers as assessors: teacher assessment identity. Assess. Educ. 25, 442–467. doi: 10.1080/0969594X.2016.1268090

MacLellan, E. (2004). Initial knowledge states about assessment: novice teachers' conceptualizations. Teach. Teach. Educ. 20, 523–535. doi: 10.1016/j.tate.2004.04.008

McMillan, J. H. (2017). Classroom Assessment: Principles and Practice That Enhance Student Learning and Motivation . New York, NY: Pearson.

Peers, I. (1996). Statistical Analysis for Education and Psychology Researchers . London: Falmer.

Plake, B., Impara, J., and Fager, J. (1993). Assessment competencies of teachers: a national survey. Educ. Measure. 12, 10–39. doi: 10.1111/j.1745-3992.1993.tb00548.x

Popham, W. J. (2013). Classroom Assessment: What Teachers Need to Know, 7th Edn. Boston, MA: Pearson.

Postareff, L., Virtanen, V., Katajavuori, N., and Lindblom-Ylänne, S. (2012). Academics' conceptions of assessment and their assessment practices. Stud. Educ. Eval. 38, 84–92. doi: 10.1016/j.stueduc.2012.06.003

Remesal, A. (2011). Primary and secondary teachers' conceptions of assessment: a qualitative study. Teach. Teach. Educ. 27, 472–482. doi: 10.1016/j.tate.2010.09.017

Scarino, A. (2013). Language assessment literacy as self-awareness: understanding the role of interpretation in assessment and in teacher learning. Lang. Test. 30, 309–327. doi: 10.1177/0265532213480128

Shepard, L. A. (2000). The role of assessment in a learning culture. Educ. Res. 29, 4–14. doi: 10.3102/0013189X029007004

Siegel, M. A. (2014). Developing preservice teachers' expertise in equitable assessment for English learners. J. Sci. Teach. Educ. 25, 289–308. doi: 10.1007/s10972-013-9365-9

Siegel, M. A., and Wissehr, C. (2011). Preparing for the plunge: preservice teachers' assessment literacy. J. Sci. Teach. Educ. 22, 371–391. doi: 10.1007/s10972-011-9231-6

Stiggins, R. J. (1991). Assessment literacy. Phi Delta Kappan 72, 534–539.

Stobart, G. (2008). Testing Times: The Uses and Abuses of Assessment. New York, NY: Routledge. doi: 10.4324/9780203930502

Tierney, R. D. (2006). Changing practices: influences on classroom assessment. Assess. Educ. 13, 239–264. doi: 10.1080/09695940601035387

Volante, L., and Fazio, X. (2007). Exploring teacher candidates' assessment literacy: implications for teacher education reform and professional development. Can. J. Educ. 30, 749–770. doi: 10.2307/20466661

Webb, N. L. (1997). Criteria for Alignment of Expectations and Assessments in Mathematics and Science Education. Washington, DC: Council of Chief State School Officers.

Webb, N. L. (1999). Alignment of Science and Mathematics Standards and Assessments in Four States. Washington, DC: Council of Chief State School Officers.

Webb, N. L. (2005). Webb Alignment Tool: Training Manual. Madison, WI: Wisconsin Center for Education Research. Retrieved from: http://www.wcer.wisc.edu/WAT/index.aspx (Retrieved March 27, 2017).

Willis, J., and Adie, L. (2014). Teachers using annotations to engage students in assessment conversations: recontextualising knowledge. Curric. J. 25, 495–515. doi: 10.1080/09585176.2014.968599

Willis, J., Adie, L., and Klenowski, V. (2013). Conceptualising teachers' assessment literacies in an era of curriculum and assessment reform. Austr. Educ. Res. 40, 241–256. doi: 10.1007/s13384-013-0089-9

Wolf, D., Bixby, J., Glenn, J. III, and Gardner, H. (1991). Chapter 2: to use their minds well: investigating new forms of student assessment. Rev. Res. Educ. 17, 31–74. doi: 10.3102/0091732X017001031

Xu, Y., and Brown, G. T. (2016). Teacher assessment literacy in practice: a reconceptualization. Teach. Teach. Educ. 58, 149–162. doi: 10.1016/j.tate.2016.05.010

Keywords: assessment literacy, classroom assessment, approaches to assessment, educational assessment, teacher practice, assessment scenarios

Citation: DeLuca C, Coombs A, MacGregor S and Rasooli A (2019) Toward a Differential and Situated View of Assessment Literacy: Studying Teachers' Responses to Classroom Assessment Scenarios. Front. Educ. 4:94. doi: 10.3389/feduc.2019.00094

Received: 27 April 2019; Accepted: 19 August 2019; Published: 03 September 2019.

Copyright © 2019 DeLuca, Coombs, MacGregor and Rasooli. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Christopher DeLuca, cdeluca@queensu.ca

This article is part of the Research Topic

Advances in Classroom Assessment Theory and Practice

Language assessment literacy.

Published online by Cambridge University Press:  03 April 2024

Numerous references to ‘new’ literacies have been added to the discourse of various academic and public domains, resulting in a multiplication of literacies. Among them is the term ‘language assessment literacy’ (LAL), which has been used as a subset of Assessment Literacy (AL) (Gan & Lam, 2022) in the field of language testing and assessment, and which has not gone uncontested. LAL refers to the skills, knowledge, methods, techniques, and principles needed by various stakeholders in language assessment to design and carry out effective assessment tasks and to make informed decisions based on assessment data (e.g., Fulcher, 2012; Inbar-Lourie, 2008, 2013; Taylor, 2009, 2013).

  • Karin Vogt (a1), Henrik Bøhn (a2) and Dina Tsagari (a3)
  • DOI: https://doi.org/10.1017/S0261444824000090

Assessment Literacy: A Study of EFL Teachers’ Assessment Knowledge, Perspectives, and Classroom Behaviors

  • DOI: 10.1080/08878739909555204
  • Corpus ID: 145231761

Assessment Literacy for Teachers: Making a Case for the Study of Test Validity.

  • Shawn M. Quilter
  • Published 1 March 1999
  • The Teacher Educator

EFL TEACHER-STUDENTS' ASSESSMENT LITERACY: A CASE STUDY

  • August 2022
  • Conference: 1st International Conference on English Language Teaching
  • At: Kediri, Indonesia

Khairani Dian Anisa, Universitas Sebelas Maret

Working With Academic Literacies: Case Studies Towards Transformative Practice

Theresa Lillis, The Open University

Kathy Harrington, London Metropolitan University

Mary Lea, Open University

Sally Mitchell, Queen Mary University of London

Copyright Year: 2015

ISBN 13: 9781602357617

Publisher: WAC Clearinghouse

Language: English

Formats Available

Conditions of Use

Attribution-NonCommercial-NoDerivs

Reviewed by Margaret Haberman, Adjunct Instructor, University of Southern Maine, on 3/30/21

Comprehensiveness rating: 5

This book has a tremendous range and numerous contributions from a variety of fields within this subject matter.

Content Accuracy rating: 5

I did not find any inaccuracies or bias based on my reading. Since many of the contributions are not within my field of study, I cannot speak to specific accuracy. However, these essays are about practice and application of techniques and strategies within a variety of fields/content areas.

Relevance/Longevity rating: 5

These topics will be relevant, in my opinion, for future use.

Clarity rating: 4

I found the text to be accessible.

Consistency rating: 5

The text is consistent within the framework of a text with many different contributors.

Modularity rating: 5

Yes, easily adaptable to needs of an instructor.

Organization/Structure/Flow rating: 5

I found the organization of the text to be logical and easy to follow.

Interface rating: 5

Easy to navigate.

Grammatical Errors rating: 5

I did not notice any grammatical errors.

Cultural Relevance rating: 5

I found this text to be culturally sensitive.

Table of Contents

  • Front Matter
  • Introduction, Theresa Lillis, Kathy Harrington, Mary R. Lea and Sally Mitchell

Section 1. Transforming Pedagogies of Academic Writing and Reading

  • Introduction to Section 1
  • A Framework for Usable Pedagogy: Case Studies Towards Accessibility, Criticality and Visibility, Julio Gimenez and Peter Thomas
  • Working With Power: A Dialogue about Writing Support Using Insights from Psychotherapy, Lisa Clughen and Matt Connell
  • An Action Research Intervention Towards Overcoming "Theory Resistance" in Photojournalism Students, Jennifer Good
  • Student-Writing Tutors: Making Sense of "Academic Literacies", Joelle Adams
  • "Hidden Features" and "Overt Instruction" in Academic Literacy Practices: A Case Study in Engineering, Adriana Fischer
  • Making Sense of my Thesis: Master's Level Thesis Writing as Constellation of Joint Activities, Kathrin Kaufhold
  • Thinking Creatively About Research Writing, Cecile Badenhorst, Cecilia Moloney, Jennifer Dyer, Janna Rosales and Morgan Murray
  • Disciplined Voices, Disciplined Feelings: Exploring Constraints and Choices in a Thesis Writing Circle, Kate Chanock, Sylvia Whitmore and Makiko Nishitani
  • How Can the Text Be Everything? Reflecting on Academic Life and Literacies, Sally Mitchell talking with Mary Scott

Section 2. Transforming the Work of Teaching

  • Introduction to Section 2
  • Opening up The Curriculum: Moving from The Normative to The Transformative in Teachers' Understandings of Disciplinary Literacy Practices, Cecilia Jacobs
  • Writing Development, Co-Teaching and Academic Literacies: Exploring the Connections, Julian Ingle and Nadya Yakovchuk
  • Transformative and Normative? Implications for Academic Literacies Research in Quantitative Disciplines, Moragh Paxton and Vera Frith
  • Learning from Lecturers: What Disciplinary Practice Can Teach Us About "Good" Student Writing, Maria Leedham
  • Thinking Critically and Negotiating Practices in the Disciplines, David Russell in conversation with Sally Mitchell
  • Academic Writing in an ELF Environment: Standardization, Accommodation—or Transformation?, Laura McCambridge
  • "Doing Something that's Really Important": Meaningful Engagement as a Resource for Teachers' Transformative Work with Student Writers in the Disciplines, Jackie Tuck
  • The Transformative Potential of Laminating Trajectories: Three Teachers' Developing Pedagogical Practices and Identities, Kevin Roozen, Paul Prior, Rebecca Woodard and Sonia Kline
  • Marking the Boundaries: Knowledge and Identity in Professional Doctorates, Jane Creaton
  • What's at Stake in Different Traditions? Les Littéracies Universitaires and Academic Literacies, Isabelle Delcambre in conversation with Christiane Donahue

Section 3. Transforming Resources, Genres and Semiotic Practices

  • Introduction to Section 3
  • Genre as a Pedagogical Resource at University, Fiona English
  • How Drawing Is Used to Conceptualize and Communicate Design Ideas in Graphic Design: Exploring Scamping Through a Literacy Practice Lens, Lynn Coleman
  • "There is a Cage Inside My Head and I Cannot Let Things Out", Fay Stevens
  • Blogging to Create Multimodal Reading and Writing Experiences in Postmodern Human Geographies, Claire Penketh and Tasleem Shakur
  • Working with Grammar as a Tool for Making Meaning, Gillian Lazar and Beverley Barnaby
  • Digital Posters—Talking Cycles for Academic Literacy, Diane Rushton, Cathy Malone and Andrew Middleton
  • Telling Stories: Investigating the Challenges to International Students' Writing Through Personal Narrative, Helen Bowstead
  • Digital Writing as Transformative: Instantiating Academic Literacies in Theory and Practice, Colleen McKenna
  • Looking at Academic Literacies from a Composition Frame: From Spatial to Spatio-temporal Framing of Difference, Bruce Horner in conversation with Theresa Lillis

Section 4. Transforming Institutional Framings of Academic Writing

  • Introduction to Section 4
  • Transforming Dialogic Spaces in an "Elite" Institution: Academic Literacies, the Tutorial and High-Achieving Students, Corinne Boz
  • The Political Act of Developing Provision for Writing in the Irish Higher Education Context, Lawrence Cleary and Íde O'Sullivan
  • Building Research Capacity through an AcLits-Inspired Pedagogical Framework, Lia Blaj-Ward
  • Academic Literacies at the Institutional Interface: A Prickly Conversation Around Thorny Issues, Joan Turner
  • Revisiting the Question of Transformation in Academic Literacies: The Ethnographic Imperative, Brian Street in conversation with Mary R. Lea and Theresa Lillis
  • Resisting the Normative? Negotiating Multilingual Identities in a Course for First Year Humanities Students in Catalonia, Spain, Angels Oliva-Girbau and Marta Milian Gubern
  • Academic Literacies and the Employability Curriculum: Resisting Neoliberal Education?, Catalina Neculai
  • A Cautionary Tale about a Writing Course for Schools, Kelly Peake and Sally Mitchell
  • "With writing, you are not expected to come from your home": Dilemmas of Belonging, Lucia Thesen
  • AC Lits Say
  • List of contributors

Ancillary Material

About the Book

The editors and contributors to this collection explore what it means to adopt an "academic literacies" approach in policy and pedagogy. Transformative practice is illustrated through case studies and critical commentaries from teacher-researchers working in a range of higher education contexts—from undergraduate to postgraduate levels, across disciplines, and spanning geopolitical regions including Australia, Brazil, Canada, Cataluña, Finland, France, Ireland, Portugal, South Africa, the United Kingdom, and the United States. Key questions addressed include: How can a wider range of semiotic resources and technologies fruitfully serve academic meaning and knowledge making? What kinds of writing spaces do we need and how can these be facilitated? How can theory and practice from "Academic Literacies" be used to open up debate about writing pedagogy at institutional and policy levels?

About the Contributors

Edited by Theresa Lillis, Kathy Harrington, Mary R. Lea, and Sally Mitchell.

Theresa Lillis is Professor of English Language and Applied Linguistics at The Open University, UK. Her main research area is writing: student writing in higher education, scholarly writing for publication, professional social work writing, and writing in grassroots political activity. She has authored and co-authored a number of books, including The Sociolinguistics of Writing (2013), Academic Writing in a Global Context (with Mary Jane Curry, 2010), and Student Writing: Access, Regulation, Desire (2001).

Kathy Harrington is Principal Lecturer in Educational Development at London Metropolitan University and Visiting Lecturer at the Tavistock Centre, London. Previously she was Academic Lead - Students as Partners, Higher Education Academy, and from 2005-2010 Director of Write Now, a cross-institutional initiative developing writing and assessment practice within disciplines. She is co-author (with Mick Healey and Abbi Flint) of Engagement through Partnership: Students as Partners in Learning and Teaching in Higher Education (2014).

Mary Lea is an Honorary Associate Reader in Academic and Digital Literacies at the Open University, UK. She has researched and published widely in the field of academic literacies. Her more recent work is concerned with the relationship of the digital to knowledge making practices in the university across academic and professional domains. A recent co-edited volume, with Robin Goodfellow, Literacy in the Digital University: Critical Perspectives on Learning, Scholarship and Technology (2013) considers this emerging area of study.

Sally Mitchell is Head of Learning Development at Queen Mary University of London, where in the early 2000s she established "Thinking Writing," a strand of development activity to support academic staff in exploring the uses of writing in their disciplines and their teaching. She is particularly interested in the ways in which writing development is thought about and positioned institutionally and in questions of who is responsible for students' learning through language.

A Systematic Review of Early Writing Assessment Tools

  • Published: 08 June 2024

  • Katherine L. Buchanan 1 ,
  • Milena Keller-Margulis   ORCID: orcid.org/0000-0001-7539-5375 1 ,
  • Amanda Hut 1 ,
  • Weihua Fan 1 ,
  • Sarah S. Mire   ORCID: orcid.org/0000-0002-3763-3237 2 &
  • G. Thomas Schanding Jr   ORCID: orcid.org/0000-0003-0195-6664 3  

There is considerable research regarding measures of early reading but much less on early writing. Nevertheless, writing is a critical skill for success in school, and early difficulties in writing are likely to persist without intervention. A necessary step toward identifying students who need additional support is the use of screening tools. The purpose of this study was to identify tools used with emergent writers and summarize the current state of the empirical literature. A systematic review was conducted for publications between 1990 and 2022. A total of 59 studies focused on early writing for preschool or kindergarten students and met the criteria for inclusion in the review. Results indicated that the most frequently used early writing measure was Name Writing, followed by Letter Writing and Spelling tasks, with some studies using this specific combination of measures. Despite some consistency in the measures used, there was significant variation in scoring approaches. Review of technical adequacy indicated that 65% of studies included reliability data, while considerably fewer included validity evidence. Future studies using consistent approaches to scoring early writing tasks, along with additional examinations of validity, are needed to improve educators' ability to identify and intervene in this skill area.


Ritchey, K. D. (2006). Learning to write: Progress-monitoring tools for beginning and at-risk writers. Teaching Exceptional Children, 39 , 22–27.

Ritchey, K. D. (2008). The building blocks of writing: Learning to write letters and spell words. Reading and Writing, 21 , 27–47. https://doi.org/10.1007/s11145-007-9063-0

Rocha, R. S., Castro, S. L., & Limpo, T. (2022). The role of transcription and executive functions in writing: A longitudinal study in the transition from primary to intermediate Grades. Reading and Writing, 35 (8), 1911–1932. https://doi.org/10.1007/s11145-022-10256-8

*Rowe, D. W., & Wilson, S. J. (2015). The development of a descriptive measure of early childhood writing: Results from the Write Start! writing assessment. Journal of Literacy Research , 47 , 245-292. https://doi.org/10.1177/1086296X15619723

Rowe, D. W., & Neitzel, C. (2010). Interest and agency in 2-and 3-year-olds’ participation in emergent writing. Reading Research Quarterly, 45 (2), 169–195. https://doi.org/10.1598/RRQ.45.2.2

Schulz, M. M. (2009). Effective writing assessment and instruction for young English language learners. Early Childhood Education Journal, 37 , 57–62. https://doi.org/10.1007/s10643-009-0317-0

Shanahan, T., & Lonigan, C. J. (2010). The National Early Literacy Panel: A summary of the process and the report. Educational Researcher, 39 , 279–285. https://doi.org/10.3102/0013189X10369172

Sittner Bridges, M., & Catts, H. W. (2011). The use of a dynamic screening of phonological awareness to predict risk for reading disabilities in kindergarten children. Journal of Learning Disabilities, 44 , 330–338. https://doi.org/10.1177/0022219411407863

Snow, C. E., Tabors, P. O., Nicholson, P. A., & Kurland, B. F. (1995). SHELL: Oral language and early literacy skills in kindergarten and first-grade children. Journal of Research in Childhood Education, 10 (1), 37–48. https://doi.org/10.1080/02568549509594686

Snow, C., Burns, M. S., & Griffin, P. (1999). Language and literacy environments in preschools(ED426818) . ERIC. https://files.eric.ed.gov/fulltext/ED426818.pdf .

Teale, W., & Sulzby, E. (1986). Emergent literacy: Writing and reading. Norwood . Ablex.

Thomas, L. J., Gerde, H. K., Piasta, S. B., Logan, J. A., Bailet, L. L., & Zettler-Greeley, C. M. (2020). The early writing skills of children identified as at-risk for literacy difficulties. Early Childhood Research Quarterly, 51 , 392–402. https://doi.org/10.1016/j.ecresq.2020.01.003

Tindal, G. (2013). Curriculum-based measurement: A brief history of nearly everything from the 1970s to the present. International Scholarly Research Notices, 2013 , 1–29. https://doi.org/10.1155/2013/958530

Tolchinsky, L. (2003). The cradle of culture and what children know about writing and numbers before being . Psychology Press.

Book   Google Scholar  

Tortorelli, L. S., Gerde, H. K., Rohloff, R., & Bingham, G. E. (2022). Ready, set, write: Early learning standards for writing in the Common Core era. Reading Research Quarterly, 57 (2), 729–752. https://doi.org/10.1002/rrq.436

van Hartingsveldt, M. J., De Groot, I. J., Aarts, P. B., & Nijhuis-Van der Sanden, M. W. (2011). Standardized tests of handwriting readiness: A systematic review of the literature. Developmental Medicine & Child Neurology, 53 , 506–515. https://doi.org/10.1111/j.1469-8749.2010.03895.x

Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson III NU Complete . Riverside Publishing.


Author information

Authors and affiliations

Department of Psychological, Health, and Learning Sciences, University of Houston, Houston, TX, USA

Katherine L. Buchanan, Milena Keller-Margulis, Amanda Hut & Weihua Fan

Department of Educational Psychology, Baylor University, Waco, TX, USA

Sarah S. Mire

Educational & Counselling Psychology, and Special Education, The University of British Columbia, Vancouver, BC, USA

G. Thomas Schanding Jr


Corresponding author

Correspondence to Milena Keller-Margulis .


Figure 1: PRISMA flowchart


About this article

Buchanan, K.L., Keller-Margulis, M., Hut, A. et al. A Systematic Review of Early Writing Assessment Tools. Early Childhood Educ J (2024). https://doi.org/10.1007/s10643-024-01697-7


Accepted: 20 May 2024

Published: 08 June 2024

DOI: https://doi.org/10.1007/s10643-024-01697-7


  • Early literacy
  • Universal screening

Open access | Published: 17 June 2024

The effectiveness of problem-based learning and case-based learning teaching methods in clinical practical teaching in TACE treatment for hepatocellular carcinoma in China: a Bayesian network meta-analysis

  • Jingxin Yan 1,
  • Yonghao Wen 2, 3,
  • Xinlian Liu 4,
  • Manjun Deng 2, 3,
  • Ting Li 6,
  • Huanwei Wang 7,
  • Cui Jia 4,
  • Jinsong Liao 8 &
  • Lushun Zhang 4

BMC Medical Education, volume 24, Article number: 665 (2024)


Objective

To investigate the effectiveness of problem-based learning (PBL) and case-based learning (CBL) teaching methods in clinical practical teaching in transarterial chemoembolization (TACE) treatment in China.

Materials and methods

A comprehensive search of PubMed, the Chinese National Knowledge Infrastructure (CNKI) database, the Weipu database and the Wanfang database up to June 2023 was performed to collect studies that evaluate the effectiveness of problem-based learning and case-based learning teaching methods in clinical practical teaching in TACE treatment in China. Statistical analysis was performed by R software (4.2.1) calling JAGS software (4.3.1) in a Bayesian framework using the Markov chain-Monte Carlo method for direct and indirect comparisons. The R packages “gemtc”, “rjags”, “openxlsx”, and “ggplot2” were used for statistical analysis and data output.

Results

Finally, 7 studies (five RCTs and two observational studies) were included in the meta-analysis. The combination of PBL and CBL was more effective in clinical thinking capacity, clinical practice capacity, knowledge understanding degree, literature reading ability, method satisfaction degree, learning efficiency, learning interest, practical skills examination scores, and theoretical knowledge examination scores.

Conclusions

Network meta-analysis revealed that applying the combined PBL and CBL teaching mode in the teaching of interventional therapy for liver cancer significantly improves the teaching effect as well as theoretical knowledge and surgical operation skills, meeting the requirements of clinical education.

Peer Review reports

Introduction

Hepatocellular carcinoma (HCC) is one of the leading causes of cancer-related death worldwide, and newly diagnosed cases increase annually [ 1 ]. More than 50% of newly diagnosed patients are reported in China, with an age-standardized incidence rate of 8.6 per 100,000 individuals annually [ 2 ]. China is a country with a high burden of hepatitis, which indicates that HCC is one of the main focuses of medical investment in China. According to Western and Eastern experts’ consensus and guidelines [ 3 , 4 , 5 ], transarterial chemoembolization (TACE), an interventional method that embolizes the tumor-feeding vascular with embolization materials and chemotherapy drugs, is considered the first choice for most patients with advanced-stage HCC, providing opportunities for surgery. In addition, clinical evidence has also confirmed the effectiveness of TACE and its related protocol in different clinical settings [ 6 ].

With the rapid development of medical education, cultivating excellent medical professionals is particularly crucial. In clinical education, traditional lecture-based teaching has shortcomings; for example, teachers place too much emphasis on knowledge transfer and students learn passively [ 7 ], resulting in low learning efficiency, insufficient clinical thinking ability, and poor clinical practice ability. Moreover, the teaching of interventional radiology, including TACE and related disciplines, is highly specialized, with relatively few class hours and short internship times, making it difficult for students to master the required knowledge in a short period. Therefore, to improve teaching effectiveness, it is necessary to break the constraints of traditional teaching methods and find more effective ones.

The problem-based learning (PBL) teaching method emphasizes students’ active learning as the main focus, rather than the traditional lecture-based teaching method. It is based on a student-centered education approach, guided by teachers and based on questions, to introduce relevant basic knowledge [ 8 ]. Through group discussions, students independently collect data and discover and solve problems, and this teaching model can cultivate students’ active learning and innovation. The case-based learning (CBL) teaching method is based on typical cases, using real cases from clinical work in teaching. Before the teacher systematically explains, students are asked to contact the patient themselves in advance and carefully inquire about their medical history and clinical examinations [ 9 ]. Then, relevant information is collected based on the patient’s specific situation (such as similar patient onset factors, diagnosis and treatment plans, treatment clinical reactions, and posttreatment effects). Finally, a preliminary treatment plan will be formed by students, and teachers will continuously improve treatment plans and apply relevant theoretical knowledge for analysis.

Regarding the use of PBL and CBL for TACE teaching, only a few Chinese studies have found that PBL and CBL could benefit students and trainees, as TACE teaching requires mastery of various benign and malignant liver tumors, including atypical cases, interspersed with different teaching contents. In addition, TACE is a discipline that requires not only solid theoretical knowledge but also a high level of proficiency in practical operational skills. Therefore, the requirements for teaching methods should also be raised [ 10 ].

Although some published randomized controlled trials and observational studies have examined CBL and PBL in clinical education for TACE, there is currently no consensus on the advantages or disadvantages of these two methods. With our study, we hope to identify the optimal educational method for TACE. Therefore, we conducted a Bayesian network meta-analysis and systematic review to explore the effectiveness of the PBL and CBL methods in the clinical practical teaching of TACE in China, with the aim of providing a new perspective on the clinical education of TACE.

Study design

In this study, the Bayesian network meta-analysis was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [ 11 ]. We chose a Bayesian framework because it accounts for both direct and indirect evidence when estimating pooled effects and provides precise estimates for the related data.

Data sources and search

A comprehensive search of PubMed, Chinese National Knowledge Infrastructure database (CNKI), Weipu database and Wanfang database up to June 2023 was performed. Table S1 lists the search strategy, inclusion criteria, and exclusion criteria.

Data extraction and risk of bias assessment

Two independent reviewers carried out the search and data extraction, and any disagreements were resolved by a third author. Data on study details (first author, study design, year of publication, study population, and sample size) and primary outcomes were extracted into an Excel sheet. We also extracted data on the performance of the different teaching methods. We used the methods of the Cochrane Handbook for Systematic Reviews of Interventions to assess the risk of bias of the randomized controlled trials [ 12 ]. In addition, the Newcastle–Ottawa scale was adopted to evaluate the observational studies [ 13 ].

Data synthesis and statistical analysis

We conducted the network meta-analyses for theoretical knowledge examination scores, practical skills examination scores, and the questionnaire entry using a random-effect model in a Bayesian framework.

Statistical analysis was performed with R software (4.2.1) calling JAGS software (4.3.1) in a Bayesian framework, using the Markov chain Monte Carlo method for direct and indirect comparisons. The R packages “gemtc”, “rjags”, “openxlsx”, and “ggplot2” were used for statistical analysis and data output. The parameter settings were as follows: the number of chains was 6, the initial value was 2.5, the number of adaptation (tuning) iterations was 50,000, the number of simulation iterations was 200,000, and the thinning factor was 10.
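Under these settings, the number of retained posterior draws follows directly from the iteration count, thinning factor, and chain count. A minimal Python sketch of that bookkeeping (the authors ran the analysis in R/JAGS; this arithmetic is only illustrative):

```python
# Illustrative sketch: posterior sample count implied by the reported
# MCMC settings (not the authors' R/JAGS code).
n_chains = 6        # reported number of chains
n_adapt = 50_000    # adaptation (tuning) iterations, discarded before sampling
n_iter = 200_000    # simulation iterations per chain
thin = 10           # keep every 10th draw to reduce autocorrelation

kept_per_chain = n_iter // thin          # 20,000 draws retained per chain
total_kept = kept_per_chain * n_chains   # 120,000 posterior draws overall
print(kept_per_chain, total_kept)        # 20000 120000
```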

The network plot and funnel plot were drawn using Stata software (version 16).

Furthermore, statistical heterogeneity and inconsistency were evaluated using the Q test and the inconsistency index (I²). An I² value greater than 50% is generally considered to indicate a substantial level of heterogeneity, which prompts a sensitivity analysis to identify its source [ 14 ]. Dichotomous data in the Bayesian framework were analyzed with the risk ratio (RR) and its 95% confidence interval (CI), and the natural logarithm of the RR (LnRR) was used to estimate the outcomes. Continuous data were analyzed with the mean difference (MD) and its 95% CI. In addition, we performed pairwise meta-analyses of the comparisons under the frequentist approach to compare with the corresponding pooled results from the Bayesian framework. We used a line diagram to display the rank probabilities of the different interventions, in which the X axis represents probability and the Y axis represents ranking from first to last [ 15 , 16 ].
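As a concrete illustration, the I² statistic referenced above can be computed directly from the Q statistic and its degrees of freedom using the standard Higgins formula; the LnRR is simply the natural log of the risk ratio. A minimal Python sketch (the example numbers are hypothetical, not values from this study):

```python
import math

def i_squared(q: float, df: int) -> float:
    """Higgins I^2 (%): share of total variability due to heterogeneity."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100.0

def ln_rr(rr: float) -> float:
    """Natural logarithm of a risk ratio, the scale used to pool binary outcomes."""
    return math.log(rr)

# Hypothetical example: Q = 12.0 over df = 4 exceeds the 50% threshold.
print(round(i_squared(12.0, 4), 1))  # 66.7
print(ln_rr(1.0))                    # 0.0 (RR of 1 means no effect)
```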

Study selection and characteristics of included studies

A preliminary search yielded 248 articles, of which 107 were duplicates. After removing the duplicates with automated tools, we reviewed the abstracts of the remaining 141 studies, and 134 articles did not meet the inclusion criteria. Finally, 7 studies (five RCTs [ 10 , 17 , 18 , 19 , 20 ] and two observational studies [ 21 , 22 ]) were included in the meta-analysis. Figure  1 shows the study selection flowchart of the literature search process.

Figure 1: Flowchart of the literature search process
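The selection counts reported for the search are internally consistent, which a few lines of Python arithmetic can confirm (a sketch of the PRISMA flow, not the authors' code):

```python
# Illustrative check of the reported PRISMA flow counts.
identified = 248    # records from the initial search
duplicates = 107    # removed by automated tools
excluded = 134      # abstracts not meeting inclusion criteria

screened = identified - duplicates   # abstracts reviewed
included = screened - excluded       # studies entering the meta-analysis
assert included == 5 + 2             # five RCTs + two observational studies
print(screened, included)            # 141 7
```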

The characteristics of the selected studies (first author, year of publication, country, intervention, and the most important results) are presented in Table  1 . The quality of the included studies is shown in Tables  2 and 3 .

Findings of the Bayesian network meta-analysis

Bayesian network meta-analysis of theoretical knowledge examination scores

Theoretical knowledge examination scores were reported in all studies. Eligible comparisons of outcomes are presented in the network plot (Fig.  2 a). We used a table (Table S2 ) to describe the effect of 5 interventions on the theoretical knowledge examination scores in participants with a total of 6 comparisons with LnRR. No significant publication bias was found (Fig.  3 a). PBL in combination with TBL showed the best improvement in the theoretical knowledge examination scores, followed by PBL in combination with CBL (Figure S1 ).

figure 2

Network plot. ( A ) Theoretical knowledge examination scores; ( B ) practical skills examination scores; ( C ) learning interest; ( D ): learning efficiency; ( E ) method satisfaction degree; ( F ) literature reading ability; ( G ) knowledge understanding degree; ( H ) clinical practice capacity; ( I ) clinical thinking capacity

figure 3

Funnel plot of outcomes. ( A ) Theoretical knowledge examination scores; ( B ) practical skills examination scores; ( C ) learning interest; ( D ): learning efficiency; ( E ) method satisfaction degree; ( F ) literature reading ability; ( G ) knowledge understanding degree; ( H ) clinical practice capacity; ( I ) clinical thinking capacity

Bayesian network meta-analysis of practical skills examination scores

Practical skills examination scores were reported in 6 studies [ 10 , 18 , 19 , 20 , 21 , 22 ]. Eligible comparisons of outcomes are presented in the network plot (Fig.  2 b). We used a table (Table S3 ) to describe the effect of 5 interventions on the practical skills examination scores in participants with a total of 6 comparisons. No significant publication bias was found (Fig.  3 b). PBL in combination with TBL showed the best improvement in the practical skills examination scores, followed by PBL (Figure S2 ).

Bayesian network meta-analysis of learning interest

Learning interest was reported in 3 studies [ 18 , 20 , 22 ]. Eligible comparisons of outcomes are presented in the network plot (Fig.  2 c). We used a table (Table S4 ) to describe the effect of 5 interventions on learning interest in participants with a total of 5 comparisons. No significant publication bias was found (Fig.  3 c). PBL in combination with TBL showed the best improvement in learning interest, followed by PBL in combination with CBL (Figure S3 ).

Bayesian network meta-analysis of learning efficiency

Learning efficiency was reported in 2 studies [ 20 , 22 ]. Eligible comparisons of outcomes are presented in the network plot (Fig.  2 d). We used a table (Table S5 ) to describe the effect of 3 interventions on learning efficiency in participants with a total of 2 comparisons. No significant publication bias was found (Fig.  3 d). PBL in combination with TBL showed the best improvement in learning efficiency, followed by PBL in combination with CBL (Figure S4 ).

Bayesian network meta-analysis of method satisfaction degree

Method satisfaction degree was reported in 2 studies [ 10 , 18 ]. Eligible comparisons of outcomes are presented in the network plot (Fig.  2 e). We used a table (Table S6 ) to describe the effect of 4 interventions on the method satisfaction degree in participants with a total of 4 comparisons. No significant publication bias was found (Fig.  3 e). PBL in combination with CBL was rated most satisfactory by students, followed by PBL (Figure S5 ).

Bayesian network meta-analysis of literature reading ability

Literature reading ability was reported in 2 studies [ 18 , 20 ]. Eligible comparisons of outcomes are presented in the network plot (Fig.  2 f). We used a table (Table S7 ) to describe the effect of 4 interventions on the literature reading ability in participants with a total of 4 comparisons. No significant publication bias was found (Fig.  3 f). PBL in combination with CBL showed the best improvement in literature reading ability, followed by PBL (Figure S6 ).

Bayesian network meta-analysis of knowledge understanding degree

Knowledge understanding degree was reported in 2 studies [ 17 , 18 ]. Eligible comparisons of outcomes are presented in the network plot (Fig.  2 g). We used a league table (Table S8 ) to describe the effect of 3 interventions on the knowledge understanding degree in participants with a total of 3 comparisons. No significant publication bias was found (Fig.  3 g). PBL in combination with CBL showed the best improvement in the knowledge understanding degree, followed by PBL (Figure S7 ).

Bayesian network meta-analysis of clinical practice capacity

Clinical practice capacity was reported in 2 studies [ 17 , 18 ]. Eligible comparisons of outcomes are presented in the network plot (Fig.  2 h). We used a table (Table S9 ) to describe the effect of 3 interventions on the clinical practice capacity in participants with a total of 3 comparisons. No significant publication bias was found (Fig.  3 h). PBL in combination with CBL showed the best improvement in the clinical practice capacity, followed by PBL (Figure S8 ).

Bayesian network meta-analysis of clinical thinking capacity

Clinical thinking capacity was reported in 4 studies [ 17 , 18 , 20 , 22 ]. Eligible comparisons of outcomes are presented in the network plot (Fig.  2 i). We used a table (Table S10 ) to describe the effect of 5 interventions on the clinical thinking capacity in participants with a total of 5 comparisons. No significant publication bias was found (Fig.  3 i). PBL in combination with CBL showed the best improvement in the clinical thinking capacity, followed by CBL (Figure S9 ).

In this meta-analysis of randomized controlled trials and observational studies, we found that the combination of PBL and CBL is the most effective teaching method for TACE treatment in China. The combination of PBL and CBL was more effective in clinical thinking capacity, clinical practice capacity, knowledge understanding degree, literature reading ability, method satisfaction degree, learning efficiency, learning interest, practical skills examination scores, and theoretical knowledge examination scores. In China, interventional therapy has been widely carried out since the 1980s [ 23 ], but the education method is still at an early stage. With this systematic review and meta-analysis, we summarized the current educational practice for TACE in China.

To our knowledge, this is the first network evidence-based study investigating the effectiveness of different teaching methods of TACE in China. In addition, this is also the first systematic review and meta-analysis that has been carried out to investigate the interventional teaching method of TACE.

Since PBL was introduced in the 1960s in response to dissatisfaction with traditional medical education, scholars have found that PBL can contribute to knowledge retention, student satisfaction, motivation, and critical thinking from many perspectives on teaching [ 24 , 25 , 26 , 27 , 28 ]. PBL is currently widely used in North America and Asia and is considered a successful element of current medical education, although its utilization differs across regions [ 29 ]. Even though some studies have been published, the heterogeneity of methods, regions, individuals, and outcomes leaves difficulties for medical education researchers, and some studies have reported inconsistent outcomes when PBL was used [ 30 , 31 , 32 ]. It should be noted that the current definition of CBL is not completely settled; researchers from different countries have proposed definitions of CBL that differ in detail but share the same core [ 33 ]. CBL and PBL allow students to obtain and integrate clinical knowledge before their internships. However, none of the studies mentioned above investigated interventional treatment teaching methods, so our meta-analysis helps fill this gap.

With the rapid development of clinical medicine, traditional medical teaching methods cannot meet the needs of the medical education system. For instance, medical students who cannot master the content of anatomy classes solely through books and lecture teaching need to dissect cadavers to understand the structure of the human body. Similarly, they cannot master the methods and procedures of TACE solely through traditional education methods. There are some possible reasons that may explain why the combination of PBL and CBL showed a better effectiveness in TACE teaching in China. Unlike traditional teaching methods, the combination of PBL and CBL allows for more interaction between students and teachers, improving students’ perceptions of learning [ 34 , 35 ]. In addition, the combination of PBL and CBL may inspire students to engage in theoretical knowledge learning and practical skills, forging a preliminary mind of clinical logic and a stronger grasp of experimental processes [ 36 ].

Recent research highlights the significance of problem-based learning (PBL) and case-based learning (CBL) in education. Studies show that PBL and CBL enhance students’ motivation, engagement, and knowledge construction. Furthermore, longitudinal analyses indicate that social learning dynamics within PBL groups contribute to learning outcomes [ 37 ]. Additionally, utilizing case-based learning has been shown to improve clinical reasoning skills in medical education [ 38 ]. Realist methods are also increasingly utilized in medical education research to gain deeper insights into learning processes [ 39 ]. These findings underscore the importance of incorporating PBL, CBL, and realist methodologies in educational practices.

The regional nature of the study results warrants consideration due to China’s distinct educational environment, cultural context, and medical system. China’s evolving educational landscape, influenced by cultural factors and a shift towards student-centered learning approaches, may impact the applicability of findings on problem-based learning (PBL) and case-based learning (CBL) effectiveness [ 40 ]. Additionally, variations in healthcare systems and medical education practices highlight the need for caution in generalizing results beyond China. Future research should explore the transferability of PBL and CBL to diverse international contexts, considering cultural and educational differences [ 41 , 42 ].

We used the Bayesian method to perform this network meta-analysis because it takes possible bias into account and thereby provides more accurate estimates for small samples [ 43 ]. After analyzing the data with prior information, the resulting posterior can be reused as the prior in the next statistical calculation, which is especially efficient and reliable in clinical decision-making [ 44 , 45 ]. In addition, the parameter settings were chosen based on our previous studies, which reduces errors caused by insufficient iterations [ 46 ].
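The "posterior becomes the next prior" idea described above can be illustrated with a conjugate Beta-Binomial model, where updating reduces to adding counts. A minimal Python sketch with hypothetical counts (not data from this meta-analysis):

```python
# Illustration of sequential Bayesian updating with a conjugate
# Beta-Binomial model; the counts are hypothetical examples.
def update_beta(alpha: float, beta: float, successes: int, failures: int):
    """Beta(alpha, beta) prior + Binomial data -> Beta posterior."""
    return alpha + successes, beta + failures

# Start from a flat Beta(1, 1) prior.
a, b = 1.0, 1.0
a, b = update_beta(a, b, successes=8, failures=2)  # first batch of evidence
a, b = update_beta(a, b, successes=7, failures=3)  # posterior reused as prior
posterior_mean = a / (a + b)                       # (1 + 8 + 7) / (2 + 20)
print(round(posterior_mean, 3))                    # 0.727
```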

The inconsistencies of the findings across individual studies should be noted. For example, the included studies did not adopt uniform outcome measures, as there is no standard examination for theoretical scores and practical skills; such a standard examination should be established in the future. In this meta-analysis, we synthesized the results to assess the overall effectiveness; however, these differences may lead to significant heterogeneity.

Similar to any meta-analysis and evidence-based study, the limitations of this meta-analysis should be noted. First, we included both RCTs and observational studies, which will undoubtedly introduce bias into the results and conclusions. Second, some of the outcomes were evaluated subjectively, which may lead to inconsistent results among individuals. Third, as only seven studies were included, the sample size was exceedingly small, which undoubtedly introduces bias that affects the accuracy of the study. Fourth, all participants were from China, so researchers outside China should interpret our results with caution. Fifth, the inconsistencies within the included studies might arise from subjective evaluation metrics.

In conclusion, our study found that the combination of PBL and CBL in TACE teaching education was able to improve knowledge learning, practical skills and other important skills in teaching. However, due to the small sample size of the included individuals and the limitations within the study, further high-quality studies are needed to verify our results and conclusions.

Data availability

The datasets used and analyzed during this study are available from the corresponding author on reasonable request.

Abbreviations

CNKI: Chinese National Knowledge Infrastructure
TACE: Transarterial chemoembolization
PBL: Problem-based learning
CBL: Case-based learning
HCC: Hepatocellular carcinoma
CI: Confidence interval
LnRR: Natural logarithm of the risk ratio (RR)
MD: Mean difference

Yan J, Deng M, Kong S, Li T, Lei Z, Zhang L, Zhuang Y, He X, Wang H, Fan H. Transarterial chemoembolization in combination with programmed death-1/programmed cell death-ligand 1 immunotherapy for hepatocellular carcinoma: a mini review. iLIVER. 2022;1(4):225–34.


Singal AG, Lampertico P, Nahon P. Epidemiology and surveillance for hepatocellular carcinoma: new trends. J Hepatol. 2020;72(2):250–61.

Chen LT, Martinelli E, Cheng AL, Pentheroudakis G, Qin S, Bhattacharyya GS, Ikeda M, Lim HY, Ho GF, Choo SP, et al. Pan-asian adapted ESMO Clinical Practice guidelines for the management of patients with intermediate and advanced/relapsed hepatocellular carcinoma: a TOS-ESMO initiative endorsed by CSCO, ISMPO, JSMO, KSMO, MOS and SSO. Ann Oncol. 2020;31(3):334–51.

Vogel A, Cervantes A, Chau I, Daniele B, Llovet J, Meyer T, Nault JC, Neumann U, Ricke J, Sangro B. Hepatocellular carcinoma: ESMO Clinical Practice guidelines for diagnosis, treatment and follow-up. Ann Oncol. 2018;29(Supplement4):iv238–55.

Gordan JD, Kennedy EB, Abou-Alfa GK, Beg MS, Brower ST, Gade TP, Goff L, Gupta S, Guy J, Harris WP, et al. Systemic therapy for Advanced Hepatocellular Carcinoma: ASCO Guideline. J Clin Oncol. 2020;38(36):4317–45.

Yan J, Deng M, Li T, Dong C, Wang M, Kong S, Guo Y, Fan H. Efficacy and complications of transarterial chemoembolization alone or in combination with different protocols for hepatocellular carcinoma: a bayesian network meta-analysis of randomized controlled trials. iLIVER. 2023;2(2):130–41.

Zhou J, Zhou S, Huang C, Xu R, Zhang Z, Zeng S, Qian G. Effectiveness of problem-based learning in Chinese pharmacy education: a meta-analysis. BMC Med Educ. 2016;16:23.

Wood DF. Problem based learning. BMJ. 2003;326(7384):328–30.

Jamkar AV, Burdick W, Morahan P, Yemul VY, Sarmukadum, Singh G. Proposed model of case based learning for training undergraduate medical student in surgery. Indian J Surg. 2007;69(5):176–83.

Lifeng L. Study on the application of PBL model in the teaching of interventional therapy for liver cancer. China Health Nutr 2020.

Hutton B, Salanti G, Caldwell DM, Chaimani A, Schmid CH, Cameron C, Ioannidis JP, Straus S, Thorlund K, Jansen JP, et al. The PRISMA extension statement for reporting of systematic reviews incorporating network meta-analyses of health care interventions: checklist and explanations. Ann Intern Med. 2015;162(11):777–84.

Cumpston M, Li T, Page MJ, Chandler J, Welch VA, Higgins JP, Thomas J. Updated guidance for trusted systematic reviews: a new edition of the Cochrane Handbook for Systematic Reviews of Interventions. Cochrane Database Syst Rev. 2019;10(10):Ed000142.

Google Scholar  

Stang A. Critical evaluation of the Newcastle-Ottawa scale for the assessment of the quality of nonrandomized studies in meta-analyses. Eur J Epidemiol. 2010;25(9):603–5.

Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327(7414):557–60.

Torbahn G, Hofmann H, Rücker G, Bischoff K, Freitag MH, Dersch R, Fingerle V, Motschall E, Meerpohl JJ, Schmucker C. Efficacy and Safety of Antibiotic Therapy in Early Cutaneous Lyme Borreliosis: A Network Meta-analysis. JAMA Dermatol. 2018;154(11):1292–303.

Wang Y, Zhu J, Qin Z, Wang Y, Chen C, Wang Y, Zhou X, Zhang Q, Meng X, Song N. Optimal biopsy strategy for prostate cancer detection by performing a bayesian network meta-analysis of randomized controlled trials. J Cancer. 2018;9(13):2237–48.

Wang Zhichao WZ, Guan L. The application of combined teaching method in clinical teaching of interventional department. J Inner Mongolia Med Univ 2020.

Wang Yecao HZ, Chen W. Evaluation of the effect of combined teaching method in clinical teaching of interventional department. Higher medical education in China. 2020.

Wang Lizhou JT, Zhou Shi. Application of PBL model in the teaching of interventional therapy for liver cancer. Chin Practical Med. 2015.

Sun Yu ZH, Zheng J. The application of PBL combined with CBL in the training of physicians undergoing vascular interventional training for hepatocellular carcinoma. Continuing Med Educ China 2021.

Minjuan D. Application of PBL model in the teaching of interventional therapy for liver cancer. HEALTH Educ. 2018.

Sha Lei SL, Wu J. Application of PBL combined with TBL teaching model in the probation teaching of interventional therapy for liver cancer. Continuing Med Educ China. 2018.

Zuo M, Huang J. The history of interventional therapy for liver cancer in China. J Interv Med. 2018;1(2):70–6.

Kwan CY. Learning of medical pharmacology via innovation: a personal experience at McMaster and in Asia. Acta Pharmacol Sin. 2004;25(9):1186–94.

Abraham R, Ramnarayan K, George B, Adiga I, Haripin N. Effects of problem-based learning along with other active learning strategies on short-term learning outcomes of students in an Indian medical school. International Journal of Health & Allied Sciences,1,2(2012-09-27) 2012, 1(2):98.

Tayyeb R. Effectiveness of problem based learning as an instructional tool for acquisition of content knowledge and promotion of critical thinking among medical students. J Coll Physicians Surg Pak. 2013;23(1):42–6.

Yadav A, Lundeberg M, Subedi D, Bunting C. Problem-based learning in an undergraduate electrical engineering course. In: 2010 Annual Conference & Exposition: 2010; 2010.

Vernon DT, Blake RL. Does problem-based learning work? A meta-analysis of evaluative research. Acad Med. 1993;68(7):550–63.

Trullàs JC, Blay C, Sarri E, Pujol R. Effectiveness of problem-based learning methodology in undergraduate medical education: a scoping review. BMC Med Educ. 2022;22(1):104.

Wenk M, Waurick R, Schotes D, Wenk M, Gerdes C, Van Aken HK, Pöpping DM. Simulation-based medical education is no better than problem-based discussions and induces misjudgment in self-assessment. Adv Health Sci Educ Theory Pract. 2009;14(2):159–71.

Steadman RH, Coates WC, Huang YM, Matevosian R, Larmon BR, McCullough L, Ariel D. Simulation-based training is superior to problem-based learning for the acquisition of critical assessment and management skills. Crit Care Med. 2006;34(1):151–7.

Qin Y, Wang Y, Floden RE. The Effect of Problem-based learning on improvement of the Medical Educational Environment: a systematic review and Meta-analysis. Med Princ Pract. 2016;25(6):525–32.

Cen XY, Hua Y, Niu S, Yu T. Application of case-based learning in medical student education: a meta-analysis. Eur Rev Med Pharmacol Sci. 2021;25(8):3173–81.

Hamdy H, Agamy E. Is running a problem-based learning curriculum more expensive than a traditional subject-based curriculum? Med Teach. 2011;33(9):e509–514.

Donner RS, Bickley H. Problem-based learning: an assessment of its feasibility and cost. Hum Pathol. 1990;21(9):881–5.

Li T, Wang W, Li Z, Wang H, Liu X. Problem-based or lecture-based learning, old topic in the new field: a meta-analysis on the effects of PBL teaching method in Chinese standardized residency training. BMC Med Educ. 2022;22(1):221.

Rengifo J, et al. Social learning dynamics in problem-based learning groups: a longitudinal analysis. Educational Psychol. 2024;44(3):315–32.

Srinivasan S, Wilkes M. Using case-based learning to enhance clinical reasoning skills. Acad Med. 2023;98(3):321–6.

Wong G, Greenhalgh T, Westhorp G, Pawson R. Realist methods in medical education research: what are they and what can they contribute? Med Educ. 2012;46(1):89–96.

Gandomkar R, Mirzazadeh A, Jalili M, Yazdani K, Fata L. The effectiveness of problem-based learning in medical education: a systematic review and meta-analysis. Med Teach. 2023;45(3):301–15.

Johnson DW, Johnson RT. Social interdependence theory and its application in medical education: a systematic review. Med Educ. 2024;58(5):501–16.

Guba E, Lincoln Y. Applying realist evaluation methods in medical education: a practical guide. Acad Med. 2023;98(7):726–34.

Yan J, Deng M, Wang Y, Zhu M, Li T, Hu H, Lei Z, Guo Y, Zhang L. Transjugular Intrahepatic Portosystemic Shunt for Portal Vein Cavernous Transformation: a systematic review and single-arm Meta-analysis. Dig Dis. 2022;40(6):754–65.

Weir CJ, Taylor RS. Informed decision-making: statistical methodology for surrogacy evaluation and its role in licensing and reimbursement assessments. Pharm Stat. 2022;21(4):740–56.

Castelletti F, La Rocca L, Peluso S, Stingo FC, Consonni G. Bayesian learning of multiple directed networks from observational data. Stat Med. 2020;39(30):4745–66.

Deng M, Wen Y, Yan J, Fan Y, Wang Z, Zhang R, Ren L, Ba Y, Wang H, Lu Q, et al. Comparative effectiveness of multiple different treatment regimens for nonalcoholic fatty liver disease with type 2 diabetes mellitus: a systematic review and bayesian network meta-analysis of randomised controlled trials. BMC Med. 2023;21(1):447.

Download references

Acknowledgements

Not applicable.

This work was supported by Chengdu Medical College’s undergraduate education reform project in 2020: Practice of online and offline hybrid teaching mode in pathology experiment teaching (JG202038); Chengdu Medical College’s graduate education and teaching reform project in 2023 (YJG202304); and the Virtual Teaching and Research Project on Neurology and Diseases at Chengdu Medical College in 2023.

Author information

Jingxin Yan, Yonghao Wen and Xinlian Liu contributed equally to this work.

Authors and Affiliations

West China Hospital, Sichuan University, Chengdu, China

Jingxin Yan

Department of Hepatopancreatobiliary Surgery, Affiliated Hospital of Qinghai University, Xining, China

Yonghao Wen & Manjun Deng

Department of Postgraduate, Qinghai University, Xining, China

Department of Pathology and Pathophysiology, Chengdu Medical College, Chengdu, China

Xinlian Liu, Cui Jia & Lushun Zhang

Department of General Surgery, Rongxian People’s Hospital, Zigong, China

Department of Orthopedics, Sichuan Provincial People’s Hospital, Chengdu, China

Department of Ultrasonography, Hainan General Hospital/Hainan Affiliated Hospital of Hainan Medical University, Haikou, 570100, China

Huanwei Wang

Department of Anesthesiology, Affiliated Hospital of Chengdu University, Chengdu University, Chengdu, China

Jinsong Liao


Contributions

Conceptualization: Jingxin Yan and Lushun Zhang; validation: Jingxin Yan, Yonghao Wen, Xinlian Liu, Manjun Deng, Bin Ye, Ting Li, Huanwei Wang, Cui Jia, Jinsong Liao, Lushun Zhang; writing – original draft preparation: Jingxin Yan, Manjun Deng, Ting Li; writing – review and editing: Jingxin Yan, Yonghao Wen, Xinlian Liu, Manjun Deng, Bin Ye, Ting Li, Huanwei Wang, Cui Jia, Jinsong Liao, Lushun Zhang; software: Jingxin Yan, Yonghao Wen, Xinlian Liu, Manjun Deng, Bin Ye, Ting Li, Huanwei Wang, Cui Jia, Jinsong Liao, Lushun Zhang.

Corresponding authors

Correspondence to Jinsong Liao or Lushun Zhang.

Ethics declarations

Ethical approval and consent to participate

Ethical approval was not required for this meta-analysis.

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Yan, J., Wen, Y., Liu, X. et al. The effectiveness of problem-based learning and case-based learning teaching methods in clinical practical teaching in TACE treatment for hepatocellular carcinoma in China: a Bayesian network meta-analysis. BMC Med Educ 24, 665 (2024). https://doi.org/10.1186/s12909-024-05615-8


Received: 15 October 2023

Accepted: 29 May 2024

Published: 17 June 2024

DOI: https://doi.org/10.1186/s12909-024-05615-8


  • Problem-based learning
  • Case-based learning
  • Bayesian method
  • Network meta-analysis

BMC Medical Education

ISSN: 1472-6920


A retired federal judge says Judge Cannon appears to show 'favoritism' toward Trump


Scott Detrow

Tyler Bartlam

A man wears a mask of former President Donald Trump in front of a Fort Pierce, Fla., courthouse. Joe Raedle/Getty Images

Judge Aileen Cannon, who is overseeing the classified documents case against former President Donald Trump, continues to make decisions that puzzle many legal observers.

Last month, Cannon delayed the start of the trial indefinitely. She's taken months to make routine procedural decisions.

Trump is charged with taking classified and top-secret material with him to Mar-a-Lago after he left the White House and then taking part in a conspiracy to hide documents from federal investigators.

The Trump's Trials team wanted to know how someone who has served on the federal bench views Cannon’s decisions, so we called retired federal Judge Shira Scheindlin. Appointed by President Bill Clinton, she served as a federal judge for over two decades.

NPR reached out to Judge Cannon’s office for a response and received a statement from her court that their judges do not comment on pending cases.

This interview has been lightly edited for length and clarity.

Interview highlights

Scott Detrow: What is the main thing that has stood out to you about how Judge Cannon has handled this case, as you've observed?


Shira Scheindlin: The main thing that stood out to me is how she has constantly caused delay in the case instead of moving it forward. She's done that in, I would say, two ways. One is her inability to rule in an efficient manner. She holds onto motions. She keeps them pending. She can't seem to decide things. Most experienced judges, which of course I considered myself after 27 years, try to know which motions really require further consideration and argument and which, you know, instinctively you could say frankly, one word: denied. And you can rule from the bench.

The second thing that stands out to me is what appears to me to be her dislike of the government and her favoritism toward the defense. I'm not saying that that's going to, in the end, determine how she rules on everything, but she seems to have a visceral dislike of Jack Smith and his team. She's constantly criticizing them. She's constantly being sharp and sarcastic with them, and she almost never treats the defense that way.


Detrow: I want to ask about the decision-making — the first thing you talked about. Because a lot has been said in this trial that's about classified documents, about the fact that classified documents cases are going to take longer than other types of cases because there are big weighted questions that have to be sorted through in terms of the procedure and how things are going to be introduced in the eventual trial. You're saying it's beyond that scope — that it's just a lot of basic questions coming her way that she has not answered, that you think is striking?

Scheindlin: I am saying that when you have a case that involves highly classified documents, it is more complex to review those documents. They have to be done in a safe space. So it's complicated, I understand that. But that's not a full defense of how long it's taking her to move this case forward. While it is complicated, it's been done many times. She's just been inefficient.

Detrow: We all know that President Trump's legal strategy involves delaying this case with the hope that, if he's elected president again, he would be able to end the case against him. From everything you've seen, do you have a gut feeling as to whether or not this is an experience issue with Judge Cannon or whether or not this is a finger on the scale, to try to help those delays?

Scheindlin: I'm not so sure that those are two different choices. They may be combined in her mind. I think she is inexperienced and I think it makes her insecure in her rulings. She's tentative. But the motivation may be mixed in with intentionally delaying enough to make sure this doesn't go before the election. I'm not saying there's a bad motive for that. There have been some commentators who say, you know, if he's president, he'll elevate her to a higher court and all that. I don't see that that would look like a quid pro quo.

But maybe she just says, "This can wait until after the election. I don't want this to affect the election, so I'm going to take my time." It may be intentional — I don't have a sense of that — but I do have a sense that she's inexperienced and insecure.  

Supporters and non-supporters of former U.S. President Donald Trump stand outside the Alto Lee Adams Sr. U.S. Courthouse on March 14 in Fort Pierce, Fla. Trump visited the courthouse for his case in front of District Judge Aileen Cannon. Joe Raedle/Getty Images

Detrow: What do you think the appropriate way is to treat a criminal defendant in your courtroom who's a former president of the United States and is running for president again? Do you think a judge in this position should be thinking about the fact that there is an election? And do the American people deserve a verdict before that election, or is that just something that's not material in a criminal courtroom to you?

Scheindlin: I don't think it's material to me. I think you do your job. So when I think of Judge Merchan, I don't think he was trying to rush it to get it done before the election so the electorate would know whether this guy is a felon or not a felon. And I'm not sure he's intentionally going slow to avoid the public knowing. It's just that one knows how to run a criminal trial because he's experienced and a good, organized judge, and one seems to not. But that said, you never know what's in the back of somebody’s mind. I think it's more a matter of knowing how to run a complex trial.


Detrow: I'm wondering how worried you are at this moment about the rule of law in this country and the different ways it's being attacked from all directions. You have a former president saying it's a rigged system, it's a political system, that people are trying to charge him with crimes to prevent him from being president. You have a lot of liberal leaning voters in this country who are deeply cynical about the U.S. Supreme Court. You have a president's son just convicted in another federal courthouse and then all sorts of Republican criticism there. I feel like every direction you're coming from, there are serious, real critiques about the rule of law in this country, right or wrong. And I'm wondering how worried you are, in this country that's based on the rule of law, about all of these partisan criticisms at this moment?

Scheindlin: I think the partisan criticism has affected the public's perception of the validity of the court system. I think they've lost a lot of faith in this U.S. Supreme Court because of what has been disclosed about Justice Alito and Justice Thomas. But the system has actually worked quite well in the sense of the trial on the hush money case and in the Hunter Biden case. I might not have agreed with either outcome, but the jury system worked and made a decision and everybody is treated the same and that's a good thing, whether your last name is Biden or your last name is Trump.


Spatial Development and Coupling Coordination of Society–Physics–Informational Smart Cities: A Case Study on Thirty Capitals in China


1. Introduction
2. Materials and Review
2.1. Study Area
2.2. Literature Review
3. Methods and Data Sources
3.1. Modeling
3.2. Construction of the Evaluation Index System
3.2.1. Construction of Evaluation Index System for Informational Space
3.2.2. Construction of Evaluation Index System for Physical Space
3.2.3. Construction of Evaluation Index System for Social Space
3.3. Methods
3.3.1. Entropy Weight Method
3.3.2. Revised Coupling Coordination

  • U1, U2, and U3 represent the comprehensive evaluation indices of the information-space, physical-space, and social-space dimensions, respectively.
  • C represents the coupling degree of the tri-dimensional space in smart city governance.
  • D represents the fusion coordination index of the tri-dimensional space in smart city governance, with a value range of [0, 1].
  • T represents the comprehensive development index of the coupling system in smart city governance, reflecting the synergistic effects among the tri-dimensional space.
  • α, β, and γ refer to the contribution degrees of information space, physical space, and social space in the coupling system, respectively, with α + β + γ = 1; the closer a weight is to 1, the greater that space's contribution. This study treats the three spaces as equally important, hence α = β = γ = 1/3.
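The quantities listed above fit the standard three-system coupling coordination model. The paper calls its version "revised" and the revision itself is not preserved in this excerpt, so the following Python sketch implements the classical formulation under that caveat: C as the cube root of the geometric-to-arithmetic-mean ratio of the three indices, T as their weighted sum, and D = √(C·T); the input scores are hypothetical.

```python
import math

def coupling_coordination(u1: float, u2: float, u3: float,
                          alpha: float = 1/3, beta: float = 1/3, gamma: float = 1/3):
    """Classical three-subsystem coupling coordination model.

    u1, u2, u3 are the comprehensive evaluation indices of the
    informational, physical, and social spaces, each scaled to [0, 1].
    Returns (C, T, D).
    """
    mean = (u1 + u2 + u3) / 3
    # Coupling degree C: cube root of the geometric-to-arithmetic mean ratio.
    c = ((u1 * u2 * u3) / mean ** 3) ** (1 / 3) if mean > 0 else 0.0
    # Comprehensive development index T: weighted sum of the three indices.
    t = alpha * u1 + beta * u2 + gamma * u3
    # Coupling coordination degree D, in [0, 1].
    d = math.sqrt(c * t)
    return c, t, d

c, t, d = coupling_coordination(0.6, 0.5, 0.4)  # hypothetical subsystem scores
```

With equal subsystem scores C reaches its maximum of 1, so D reduces to √T; unbalanced scores pull C, and hence D, downward.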

3.3.3. Dagum Gini Coefficient Decomposition

  • n represents the number of cities;
  • k represents the number of subgroups — here the eastern, central, western, and northeastern regions;
  • n_j (n_h) represents the number of cities in the j-th (h-th) subgroup;
  • j and h index the subgroups, and i and r index the cities within a subgroup;
  • G represents the overall Gini coefficient;
  • y_ji (y_hr) represents the coordination level of any city in the j-th (h-th) subgroup;
  • Ȳ represents the average coordination level of the tri-dimensional space over all cities, calculated as Ȳ = (Σ_{j=1}^{k} Σ_{i=1}^{n_j} y_ji) / n;
  • G_jh represents the Gini coefficient between the j-th and h-th subgroups;
  • Ȳ_j represents the average coordination level of the j-th subgroup's tri-dimensional space;
  • D_jh represents the relative influence between region j and region h.
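With these symbols, the overall and between-group coefficients follow the standard Dagum formulas: G = Σ|y_ji − y_hr| / (2n²Ȳ) over all ordered city pairs, and G_jh = Σ|y_ji − y_hr| / (n_j · n_h · (Ȳ_j + Ȳ_h)). A minimal Python sketch; the region names and coordination values are illustrative, not taken from the study:

```python
def overall_gini(groups: dict) -> float:
    """Overall Dagum Gini coefficient G over all cities in all subgroups.

    groups maps a region name to the list of coordination levels y_ji
    of the cities in that subgroup.
    """
    ys = [y for g in groups.values() for y in g]
    n = len(ys)
    y_bar = sum(ys) / n  # average coordination level of all cities
    return sum(abs(a - b) for a in ys for b in ys) / (2 * n * n * y_bar)

def between_gini(group_j: list, group_h: list) -> float:
    """Between-group Gini coefficient G_jh for subgroups j and h."""
    yj_bar = sum(group_j) / len(group_j)
    yh_bar = sum(group_h) / len(group_h)
    total = sum(abs(a - b) for a in group_j for b in group_h)
    return total / (len(group_j) * len(group_h) * (yj_bar + yh_bar))

# Hypothetical coordination levels for two regions.
g = overall_gini({"eastern": [0.6, 0.7], "western": [0.3, 0.4]})
g_jh = between_gini([0.6, 0.7], [0.3, 0.4])
```

The full decomposition additionally splits G into within-group, net between-group, and transvariation components, which reuse these same building blocks.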

3.3.4. Kernel Density Estimation

  • N represents the number of study objects — here, the number of smart cities in the observed area;
  • X_i represents the observed value of each smart city's spatial coupling coordination in the observed area;
  • X̄ represents the mean of the observations;
  • K(·) is the kernel function;
  • h represents the bandwidth, which determines the precision of the kernel density estimate and the smoothness of the density curve; the rule of thumb h = 0.9SN^(−1/5) is usually adopted (N is the sample size, S is the sample standard deviation).
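As a concrete illustration, a Gaussian-kernel estimator with the rule-of-thumb bandwidth h = 0.9·S·N^(−1/5) can be written directly from these definitions; the five sample observations below are hypothetical, not data from the study:

```python
import math

def kde(xs, x):
    """Gaussian kernel density estimate at point x.

    Uses the rule-of-thumb bandwidth h = 0.9 * S * N**(-1/5), where S is
    the sample standard deviation of the observations xs.
    """
    n = len(xs)
    mean = sum(xs) / n
    s = math.sqrt(sum((v - mean) ** 2 for v in xs) / (n - 1))  # sample std dev S
    h = 0.9 * s * n ** (-1 / 5)  # bandwidth controls the curve's smoothness
    gauss = lambda u: math.exp(-u * u / 2) / math.sqrt(2 * math.pi)  # kernel K(.)
    return sum(gauss((x - xi) / h) for xi in xs) / (n * h)

# Hypothetical coupling coordination observations for five cities.
density_at_mean = kde([0.2, 0.4, 0.5, 0.6, 0.8], 0.5)
density_in_tail = kde([0.2, 0.4, 0.5, 0.6, 0.8], 1.5)
```

A smaller h makes the estimated curve spikier around individual observations; a larger h smooths it out, which is exactly the precision/smoothness trade-off noted above.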

3.3.5. BP Neural Network

  • m represents the number of input-layer nodes;
  • n represents the number of output-layer nodes;
  • α represents a constant between 0 and 10;
  • K represents the number of hidden-layer nodes.
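The sizing formula itself is not preserved in this excerpt, but the symbols listed match the common empirical rule K = √(m + n) + α for a three-layer BP network. A sketch under that assumption; the node counts in the example are hypothetical:

```python
import math

def hidden_nodes(m: int, n: int, alpha: float) -> int:
    """Empirical hidden-layer size K = sqrt(m + n) + alpha for a
    three-layer BP network, with alpha a constant between 0 and 10."""
    if not 0 <= alpha <= 10:
        raise ValueError("alpha must lie between 0 and 10")
    return round(math.sqrt(m + n) + alpha)

# Hypothetical sizes: 9 input indicators, 1 output score.
k = hidden_nodes(m=9, n=1, alpha=4)
```

In practice α is tuned by trial: candidate values in [0, 10] are tried and the hidden-layer size with the lowest validation error is kept.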

3.4. Architecture of Methods

4.1. Assessment of Smart City Spatial Development
4.1.1. Comprehensive Assessment of Smart City Spatial Development
4.1.2. Subsystem Assessment of Smart City Spatial Development
4.2. Descriptive Analysis of Smart City Spatial Coupling Coordination
4.2.1. Overall Characteristics
4.2.2. Regional Disparities
4.2.3. Dynamic Evolution
4.3. Inferential Analysis of Smart City Spatial Coupling Coordination
5. Discussion
5.1. Pathways of Development
5.2. Limitations
6. Conclusions
6.1. Overall Positive Development Trend but Still in Early Stages
6.2. Important Influence of Regional Environment and Development Characteristics
6.3. Significant Differences of Contribution in Evaluation Indicators
Author Contributions
Data Availability Statement
Conflicts of Interest

  • Shelton, T.; Zook, M.; Wiig, A. The ‘actually existing smart city’. Camb. J. Reg. Econ. Soc. 2015 , 8 , 13–25. [ Google Scholar ] [ CrossRef ]
  • Khatibi, H.; Wilkinson, S.; Sweya, L.N.; Baghersad, M.; Dianat, H. Navigating Climate Change Challenges through Smart Resilient Cities: A Comprehensive Assessment Framework. Land 2024 , 13 , 266. [ Google Scholar ] [ CrossRef ]
  • Qiu, Z. Establishment and Countermeasures of Smart City Security Risk Assessment Model. Adv. Comput. Commun. 2023 , 3 , 70–73. [ Google Scholar ] [ CrossRef ]
  • Laufs, J.; Borrion, H.; Bradford, B. Security and the smart city: A systematic review. Sustain. Cities Soc. 2020 , 55 , 102023. [ Google Scholar ] [ CrossRef ]
  • Niu, S.; Zhang, K.; Zhang, J.; Feng, Y. How Does Industrial Upgrading Affect Urban Ecological Efficiency? New Evidence from China. Emerg. Mark. Financ. Trade 2024 , 60 , 899–920. [ Google Scholar ] [ CrossRef ]
  • Zheng, L. Content, Path and Direction of Urban Digital Transformation. Explor. Free Views 2021 , 4 , 147–152+180. [ Google Scholar ]
  • Sun, X.; Liu, N. The Governance of Urban Community Public Space from the Perspective of Space Theory. Shanghai Urban Manag. 2022 , 31 , 61–67. [ Google Scholar ]
  • Paroutis, S.; Bennett, M.; Heracleous, L. A strategic view on smart city technology: The case of IBM Smarter Cities during a recession. Technol. Forecast. Soc. Change 2014 , 89 , 262–272. [ Google Scholar ] [ CrossRef ]
  • Ojo, A.; Curry, E.; Janowski, T. Designing next generation smart city initiatives-harnessing findings and lessons from a study of ten smart city programs. In Proceedings of the ECIS 2014 Proceedings-22nd European Conference on Information Systems, Tel Aviv, Israel, 9–11 June 2014. [ Google Scholar ]
  • Shamsuzzoha, A.; Nieminen, J.; Piya, S.; Rutledge, K. Smart city for sustainable environment: A comparison of participatory strategies from Helsinki, Singapore and London. Cities 2021 , 114 , 103194. [ Google Scholar ] [ CrossRef ]
  • Al Sharif, R.; Pokharel, S. Smart city dimensions and associated risks: Review of literature. Sustain. Cities Soc. 2022 , 77 , 103542. [ Google Scholar ] [ CrossRef ]
  • Patrão, C.; Moura, P.; Almeida, A.T. Review of smart city assessment tools. Smart Cities 2020 , 3 , 1117–1132. [ Google Scholar ] [ CrossRef ]
  • Sharifi, A. A typology of smart city assessment tools and indicator sets. Sustain. Cities Soc. 2020 , 53 , 101936. [ Google Scholar ] [ CrossRef ]
  • Shi, G.; Liang, B.; Ye, T.; Zhou, K.; Sun, Z. Exploring the Coordinated Development of Smart-City Clusters in China: A Case Study of Jiangsu Province. Land 2024 , 13 , 308. [ Google Scholar ] [ CrossRef ]
  • Deeb, Y.I.; Alqahtani, F.K.; Bin Mahmoud, A.A. Developing a Comprehensive Smart City Rating System: Case of Riyadh, Saudi Arabia. J. Urban Plan. Dev. 2024 , 150 , 04024012. [ Google Scholar ] [ CrossRef ]
  • Joyce, A.; Javidroozi, V. Smart city development: Data sharing vs. data protection legislations. Cities 2024 , 148 , 104859. [ Google Scholar ] [ CrossRef ]
  • Sorri, K.; Yrjönkoski, K.; Seppänen, M. Smart cities, smarter values: Unpacking the ecosystem of urban innovation. Technol. Soc. 2024 , 77 , 102499. [ Google Scholar ] [ CrossRef ]
  • José, R.; Rodrigues, H. A Review on Key Innovation Challenges for Smart City Initiatives. Smart Cities 2024 , 7 , 141–162. [ Google Scholar ] [ CrossRef ]
  • Kavitha, M.M.; Golden, E.J. Smarter and resilient smart contracts applications for smart cities environment using blockchain technology. Automatika 2024 , 65 , 572–583. [ Google Scholar ]
  • Skyworks Showcases Its Momentum for Smart Cities, Automotive and More at the Consumer Electronics Show Booth No. 9627. Businesswire 9 January 2024.
  • Deveci, M.; Pekaslan, D.; Canıtez, F. The assessment of smart city projects using zSlice type-2 fuzzy sets based Interval Agreement Method. Sustain. Cities Soc. 2020 , 53 , 101889. [ Google Scholar ] [ CrossRef ]
  • Xiang, Y.; Xie, X. Digital twin city governance: Changes, dilemmas and countermeasures. E-Government 2021 , 10 , 69–80. [ Google Scholar ]
  • Mei, J. Technology adapts to the city: Subject oppression and ethical dilemmas in digital transformation. Theory Reform 2021 , 3 , 90–101. [ Google Scholar ]
  • Mao, Z.; Huang, Y.; Xu, X. Information Security Risk Analysis and Countermeasures of Smart City from the Perspective of Information Ecology. Chin. Public Adm. 2019 , 9 , 123–129. [ Google Scholar ]
  • Zou, W.; Zhang, L. From Compartmentalization to Integration: The Compartmentalization Dilemma of Smart Cities and Strategies for Reconfiguring the Socio-Technical Imagination. J. Tianjin Adm. Inst. 2023 , 25 , 53–64. [ Google Scholar ]
  • Bhattacharya, T.R.; Bhattacharya, A.; Mclellan, B.; Tezuka, T. Sustainable smart city development framework for develo** countries. Urban Res. Pract. 2020 , 13 , 180–212. [ Google Scholar ] [ CrossRef ]
  • Richey, B.F. Risk Management Framework 2.0 ; Iowa State University: Ames, IA, USA, 2016. [ Google Scholar ]
  • Hiller, J.S.; Russell, R.S. Privacy in crises: The NIST privacy framework. J. Contingencies Crisis Manag. 2017 , 25 , 31–38. [ Google Scholar ] [ CrossRef ]
  • Abdullah, H.A.; Nizam, H.T.; Kutub, T. The Evolution of Information Security Strategies: A Comprehensive Investigation of INFOSEC Risk Assessment in the Contemporary Information Era. Comput. Inf. Sci. 2023 , 16 , 1. [ Google Scholar ]
  • Meijer, A.; Bolívar, M.P.R. Governing the smart city: A review of the literature on smart urban governance. Int. Rev. Adm. Sci. 2016 , 82 , 392–408. [ Google Scholar ] [ CrossRef ]
  • Neirotti, P.; De Marco, A.; Cagliano, A.C.; Mangano, G.; Scorrano, F. Current trends in Smart City initiatives: Some stylised facts. Cities 2014 , 38 , 25–36. [ Google Scholar ] [ CrossRef ]
  • Winkowska, J.; Szpilko, D.; Pejić, S. Smart city concept in the light of the literature review. Eng. Manag. Prod. Serv. 2019 , 11 , 70–86. [ Google Scholar ] [ CrossRef ]
  • Nam, T.; Pardo, T.A. Conceptualizing smart city with dimensions of technology, people, and institutions. In Proceedings of the 12th Annual International Digital Government Research Conference: Digital Government Innovation in Challenging Times, College Park, MD, USA, 12–15 June 2011; pp. 282–291. [ Google Scholar ]
  • Yang, L.; Gan, Q.; Ma, D. Public environmental concern and corporate environmental investment—From the perspective of the moderating effect of green image. Financ. Account. Mon. 2020 , 8 , 33–40. [ Google Scholar ]
  • Shan, Z.; Xu, Q.; Ma, C.; Tang, S.; Wang, W. Digital economy development evaluation system and prospects based on ternary space theory. Macroecon. Manag. 2020 , 2 , 42–49. [ Google Scholar ]
  • Hui, P. Construction of Information Security Risk Assessment Model in Smart City. In Proceedings of the 2020 IEEE Conference on Telecommunications, Optics and Computer Science (TOCS), Shenyang, China, 11–13 December 2020; pp. 393–396. [ Google Scholar ] [ CrossRef ]
  • Liu, X.; Li, X.; He, W.; Shen, C. The Coupling Coordination Degree of Human-Land and the Spatial Allocation of “Production-Living-Ecological”: A Case Study of Jiangsu Province. Mod. Urban Res. 2022 , 10 , 66–72. [ Google Scholar ]
  • Zhang, J.; Zhai, J. Measurement of coupling coordination degree in China’s “Three Living Spaces”. Urban Probl. 2019 , 11 , 38–44. [ Google Scholar ]
  • Xie, G.; Jiang, S.; Zhao, C. Analysis of the coupling coordination level of regional economy, urbanization and social governance. Stat. Decis. 2020 , 36 , 127–130. [ Google Scholar ]
  • Zou, X.; Xie, M.; Xiao, Z.; Wu, T.; Yin, Y. Evaluation of rural development and diagnosis of obstacle factors based on entropy weight topsis method. Chin. J. Agric. Resour. Reg. Plan. 2021 , 42 , 197–206. [ Google Scholar ]
  • Li, R.; Huang, X.; Liu, Y. Spatio-temporal differentiation and influencing factors of China’s urbanization from 2010 to 2020. Acta Geogr. Sin. 2023 , 78 , 777–791. [ Google Scholar ]
  • Wang, S.J.; Kong, W.; Ren, L.; Zhi, D.D. Research on misuses and modification of coupling coordination degree model in China. J. Nat. Resour. 2021 , 36 , 793–810. [ Google Scholar ] [ CrossRef ]
  • Wu, C.; Zhou, X.; Huang, C. Study on the coupling and coordination relationship between the optimization of industrial structure and the construction of ecological civilization in the Yangtze River Economic Belt. J. Cent. China Norm. Univ. (Nat. Sci.) 2020 , 54 , 555–566. [ Google Scholar ]
  • Ge, S.S.; Zeng, G.; Yang, Y.; Hu, H. The coupling relationship and spatial characteristics analysis between ecological civilization construction and urbanization in the Yellow River Economic Belt. J. Nat. Resour. 2021 , 36 , 87–102. [ Google Scholar ] [ CrossRef ]
  • Zhang, P.; Yang, X.; Chen, Z. Neural network gain scheduling design for large envelope curve flight control law. J. Beijing Univ. Aeronaut. Astronaut. 2005 , 31 , 604–608. [ Google Scholar ]
  • Yang, X.; Gong, G.; Tian, Y.; Yu, X. Generalized Optimal Game Theory in virtual decision-makings. In Proceedings of the 2008 Chinese Control and Decision Conference, Yantai, China, 2–4 July 2008; pp. 2–4. [ Google Scholar ]
  • Shen, H.; Wang, Z.; Gao, C.; Qin, J.; Yao, F.; Xu, W. Determining the number of BP neural network hidden layer units. J. Tianjin Univ. Technol. 2008 , 5 , 13–15. [ Google Scholar ]
  • Meng, F.; Wu, X. Revisiting “Smart City”: Three Basic Research Questions—Based on a Systematic Review of English Literature. Public Adm. Policy Rev. 2022 , 11 , 148–168. [ Google Scholar ]


| Target Level | Standardized Layer | Index Layer | NO. | Index Properties | Weight | Source |
|---|---|---|---|---|---|---|
| Informational Space Subsystem (IS) | Data (IS1) | Peking University Digital Inclusive Finance Index | IS1-1 | + | 0.167 | [ ] |
| | Algorithm (IS2) | R&D personnel ratio (%) | IS2-1 | + | 0.167 | [ , ] |
| | | The proportion of employees in the information transmission, computer services, and software industries (%) | IS2-2 | + | 0.163 | [ ] |
| | Computational Power (IS3) | Internet penetration (%) | IS3-1 | + | 0.170 | [ ] |
| | | Per capita total telecommunications services (yuan) | IS3-2 | + | 0.169 | [ ] |
| | | The proportion of mobile phone users at the end of the year (%) | IS3-3 | + | 0.164 | [ ] |
| Physical Space Subsystem (PS) | Production (PS1) | The proportion of production land (%) | PS1-1 | + | 0.063 | [ ] |
| | | Advanced industrial structure (%) | PS1-2 | + | 0.071 | [ ] |
| | | Upgrading of industrial structure (%) | PS1-3 | + | 0.072 | [ ] |
| | Living (PS2) | Population density (%) | PS2-1 | − | 0.073 | [ , ] |
| | | Public library holdings per capita (volume) | PS2-2 | + | 0.067 | [ , , ] |
| | | Per capita park green space area (square meters) | PS2-3 | + | 0.068 | [ , , ] |
| | | Per capita medical institutions | PS2-4 | + | 0.072 | [ , ] |
| | | Per capita educational resources (persons) | PS2-5 | + | 0.074 | [ , ] |
| | Ecology (PS3) | GDP energy intensity (yuan/billion kilowatt hours) | PS3-1 | − | 0.074 | Original |
| | | Industrial wastewater discharge intensity (%) | PS3-2 | − | 0.074 | [ , ] |
| | | Industrial sulfur dioxide emission intensity (%) | PS3-3 | − | 0.074 | [ , ] |
| | | Harmless treatment rate of household waste (%) | PS3-4 | + | 0.074 | [ , ] |
| | | Industrial smoke (powder) dust emission intensity (%) | PS3-5 | − | 0.074 | [ ] |
| | | Comprehensive utilization rate of general industrial solid waste (%) | PS3-6 | + | 0.073 | [ ] |
| Social Space Subsystem (SS) | Government (SS1) | Unemployment rate (%) | SS1-1 | − | 0.095 | [ , ] |
| | | Government financial support (%) | SS1-2 | + | 0.091 | [ , ] |
| | | The proportion of insured individuals in unemployment insurance (%) | SS1-3 | + | 0.087 | [ ] |
| | | The proportion of urban employees participating in basic pension insurance (%) | SS1-4 | + | 0.089 | [ ] |
| | | The proportion of urban employees participating in basic medical insurance (%) | SS1-5 | + | 0.089 | [ ] |
| | Society (SS2) | Network search index | SS2-1 | + | 0.092 | [ ] |
| | | The proportion of employees in public management and social organizations (%) | SS2-2 | + | 0.091 | Original |
| | | The proportion of employees in the health, social insurance, and social welfare industries (%) | SS2-3 | + | 0.093 | Original |
| | General Public (SS3) | Average salary of employees (yuan) | SS3-1 | + | 0.090 | Original |
| | | Per capita education level (year) | SS3-2 | + | 0.092 | [ ] |
| | | Per capita year-end RMB deposit balance of financial institutions (yuan) | SS3-3 | + | 0.090 | Original |
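The indicator weights appear to be normalized within each subsystem: they sum to one per subsystem up to rounding. A quick sanity check, using only the weight values copied from the table (an illustrative sketch, not code from the paper):

```python
# Weights copied from the indicator table, grouped by subsystem.
weights = {
    "IS": [0.167, 0.167, 0.163, 0.170, 0.169, 0.164],
    "PS": [0.063, 0.071, 0.072, 0.073, 0.067, 0.068, 0.072,
           0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.073],
    "SS": [0.095, 0.091, 0.087, 0.089, 0.089, 0.092, 0.091,
           0.093, 0.090, 0.092, 0.090],
}

for subsystem, ws in weights.items():
    # Each subsystem's weights should sum to 1 up to rounding error.
    assert abs(sum(ws) - 1.0) < 0.005, (subsystem, sum(ws))
```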
| Coordination Phase | Degree of Coupling Coordination | Coordination Index |
|---|---|---|
| Disordered type | Extremely disordered | (0, 0.1] |
| | Severely disordered | (0.1, 0.2] |
| | Mildly disordered | (0.2, 0.3] |
| | Endangered coordination | (0.3, 0.4] |
| Transition type | Fragile coordination | (0.4, 0.5] |
| | Barely coordinated | (0.5, 0.6] |
| | Basic coordination | (0.6, 0.7] |
| Coordinated development | Intermediate coordination | (0.7, 0.8] |
| | Well-coordinated | (0.8, 0.9] |
| | High-quality coordination | (0.9, 1] |
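The grade boundaries are plain left-open, right-closed intervals, so the classification can be sketched directly. `coordination_grade` and `BINS` are illustrative names, not code from the paper:

```python
# (upper bound, label) pairs for the intervals (a, b] from the grade table.
BINS = [
    (0.1, "Extremely disordered"),
    (0.2, "Severely disordered"),
    (0.3, "Mildly disordered"),
    (0.4, "Endangered coordination"),
    (0.5, "Fragile coordination"),
    (0.6, "Barely coordinated"),
    (0.7, "Basic coordination"),
    (0.8, "Intermediate coordination"),
    (0.9, "Well-coordinated"),
    (1.0, "High-quality coordination"),
]

def coordination_grade(d: float) -> str:
    """Return the grade for a coupling coordination index d in (0, 1]."""
    if not 0.0 < d <= 1.0:
        raise ValueError("coordination index must lie in (0, 1]")
    for upper, label in BINS:
        if d <= upper:
            return label
    raise AssertionError("unreachable")
```

For example, Beijing's 2021 value of 0.841 falls in (0.8, 0.9] and is therefore "Well-coordinated", while Xining's 2011 value of 0.301 falls in (0.3, 0.4], "Endangered coordination".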
| City (Ranked) | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 | 2019 | 2020 | 2021 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Beijing | 0.638 | 0.667 | 0.693 | 0.708 | 0.718 | 0.737 | 0.759 | 0.782 | 0.796 | 0.829 | 0.841 |
| Guangzhou | 0.639 | 0.677 | 0.662 | 0.722 | 0.721 | 0.727 | 0.736 | 0.730 | 0.759 | 0.759 | 0.762 |
| Shanghai | 0.610 | 0.626 | 0.667 | 0.661 | 0.679 | 0.700 | 0.713 | 0.723 | 0.728 | 0.730 | 0.723 |
| Hangzhou | 0.550 | 0.603 | 0.604 | 0.664 | 0.644 | 0.652 | 0.693 | 0.703 | 0.727 | 0.728 | 0.737 |
| Nanjing | 0.593 | 0.597 | 0.589 | 0.626 | 0.637 | 0.647 | 0.653 | 0.665 | 0.687 | 0.711 | 0.723 |
| Wuhan | 0.547 | 0.564 | 0.590 | 0.631 | 0.622 | 0.629 | 0.648 | 0.662 | 0.665 | 0.704 | 0.704 |
| Jinan | 0.550 | 0.549 | 0.589 | 0.611 | 0.637 | 0.638 | 0.654 | 0.659 | 0.651 | 0.667 | 0.681 |
| Shenyang | 0.584 | 0.590 | 0.586 | 0.611 | 0.611 | 0.619 | 0.635 | 0.646 | 0.644 | 0.664 | 0.663 |
| Changsha | 0.520 | 0.556 | 0.575 | 0.589 | 0.611 | 0.635 | 0.661 | 0.665 | 0.663 | 0.665 | 0.668 |
| Xi’an | 0.533 | 0.552 | 0.571 | 0.606 | 0.617 | 0.608 | 0.634 | 0.624 | 0.650 | 0.648 | 0.667 |
| Harbin | 0.529 | 0.540 | 0.553 | 0.601 | 0.608 | 0.610 | 0.620 | 0.611 | 0.637 | 0.649 | 0.650 |
| Zhengzhou | 0.514 | 0.517 | 0.549 | 0.554 | 0.590 | 0.604 | 0.639 | 0.629 | 0.655 | 0.665 | 0.688 |
| Lanzhou | 0.511 | 0.516 | 0.556 | 0.570 | 0.610 | 0.607 | 0.622 | 0.623 | 0.646 | 0.632 | 0.645 |
| Tianjin | 0.510 | 0.548 | 0.543 | 0.588 | 0.579 | 0.588 | 0.605 | 0.615 | 0.634 | 0.639 | 0.654 |
| Guiyang | 0.501 | 0.540 | 0.540 | 0.564 | 0.582 | 0.581 | 0.622 | 0.630 | 0.637 | 0.640 | 0.656 |
| Average | 0.505 | 0.523 | 0.539 | 0.562 | 0.568 | 0.582 | 0.607 | 0.612 | 0.625 | 0.636 | 0.646 |
| Chongqing | 0.530 | 0.503 | 0.516 | 0.571 | 0.552 | 0.603 | 0.620 | 0.624 | 0.626 | 0.627 | 0.625 |
| Shijiazhuang | 0.542 | 0.524 | 0.542 | 0.554 | 0.556 | 0.570 | 0.613 | 0.604 | 0.610 | 0.634 | 0.631 |
| Chengdu | 0.515 | 0.519 | 0.511 | 0.561 | 0.545 | 0.559 | 0.591 | 0.600 | 0.607 | 0.626 | 0.648 |
| Nanning | 0.487 | 0.520 | 0.540 | 0.546 | 0.555 | 0.569 | 0.585 | 0.589 | 0.604 | 0.619 | 0.619 |
| Fuzhou | 0.459 | 0.506 | 0.532 | 0.547 | 0.556 | 0.558 | 0.615 | 0.593 | 0.592 | 0.594 | 0.606 |
| Taiyuan | 0.492 | 0.492 | 0.525 | 0.525 | 0.543 | 0.548 | 0.566 | 0.569 | 0.616 | 0.625 | 0.622 |
| Haikou | 0.487 | 0.490 | 0.518 | 0.526 | 0.544 | 0.561 | 0.582 | 0.593 | 0.608 | 0.594 | 0.614 |
| Changchun | 0.481 | 0.486 | 0.490 | 0.522 | 0.509 | 0.521 | 0.548 | 0.573 | 0.593 | 0.609 | 0.631 |
| Hefei | 0.477 | 0.497 | 0.481 | 0.526 | 0.530 | 0.505 | 0.541 | 0.553 | 0.577 | 0.594 | 0.611 |
| Nanchang | 0.415 | 0.470 | 0.482 | 0.510 | 0.487 | 0.518 | 0.557 | 0.561 | 0.582 | 0.588 | 0.601 |
| Urumqi | 0.446 | 0.434 | 0.472 | 0.468 | 0.471 | 0.491 | 0.526 | 0.519 | 0.541 | 0.547 | 0.552 |
| Kunming | 0.369 | 0.388 | 0.440 | 0.468 | 0.456 | 0.461 | 0.511 | 0.530 | 0.545 | 0.555 | 0.632 |
| Yinchuan | 0.429 | 0.455 | 0.463 | 0.422 | 0.431 | 0.493 | 0.507 | 0.507 | 0.519 | 0.520 | 0.536 |
| Hohhot | 0.407 | 0.432 | 0.402 | 0.426 | 0.436 | 0.477 | 0.468 | 0.486 | 0.491 | 0.514 | 0.531 |
| Xining | 0.301 | 0.335 | 0.385 | 0.392 | 0.390 | 0.432 | 0.491 | 0.479 | 0.471 | 0.498 | 0.472 |
| Year | The Overall Gini Coefficient | The Intra-Group Gini Coefficient | The Inter-Group Gini Coefficient | The Contribution of Hyperdensity |
|---|---|---|---|---|
| 2011 | 0.079 | 0.019 | 0.048 | 0.012 |
| 2012 | 0.076 | 0.018 | 0.048 | 0.010 |
| 2013 | 0.072 | 0.017 | 0.046 | 0.009 |
| 2014 | 0.077 | 0.018 | 0.048 | 0.011 |
| 2015 | 0.079 | 0.018 | 0.042 | 0.018 |
| 2016 | 0.071 | 0.016 | 0.036 | 0.018 |
| 2017 | 0.064 | 0.014 | 0.034 | 0.015 |
| 2018 | 0.064 | 0.015 | 0.032 | 0.017 |
| 2019 | 0.063 | 0.015 | 0.039 | 0.008 |
| 2020 | 0.062 | 0.015 | 0.039 | 0.008 |
| 2021 | 0.059 | 0.015 | 0.035 | 0.010 |
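If the Gini decomposition reported here is additive in the usual Dagum sense, the overall Gini in each year should equal the intra-group plus inter-group plus hyperdensity contributions, up to rounding. A quick check against the values copied from the table (illustrative only):

```python
# Rows copied from the table: year -> (overall, intra, inter, hyperdensity).
rows = {
    2011: (0.079, 0.019, 0.048, 0.012),
    2012: (0.076, 0.018, 0.048, 0.010),
    2013: (0.072, 0.017, 0.046, 0.009),
    2014: (0.077, 0.018, 0.048, 0.011),
    2015: (0.079, 0.018, 0.042, 0.018),
    2016: (0.071, 0.016, 0.036, 0.018),
    2017: (0.064, 0.014, 0.034, 0.015),
    2018: (0.064, 0.015, 0.032, 0.017),
    2019: (0.063, 0.015, 0.039, 0.008),
    2020: (0.062, 0.015, 0.039, 0.008),
    2021: (0.059, 0.015, 0.035, 0.010),
}

for year, (overall, intra, inter, hyper) in rows.items():
    # Components should reconstruct the overall Gini up to rounding (±0.0015).
    assert abs(overall - (intra + inter + hyper)) <= 0.0015, year
```

Every year passes within one rounding unit, which is consistent with an additive decomposition.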
| Decomposition | Group | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 | 2019 | 2020 | 2021 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| The intra-group Gini coefficient | EC | 0.060 | 0.061 | 0.056 | 0.059 | 0.057 | 0.057 | 0.049 | 0.053 | 0.056 | 0.059 | 0.056 |
| | NE | 0.033 | 0.035 | 0.032 | 0.035 | 0.030 | 0.030 | 0.029 | 0.028 | 0.020 | 0.022 | 0.020 |
| | CI | 0.037 | 0.012 | 0.019 | 0.007 | 0.024 | 0.018 | 0.010 | 0.006 | 0.014 | 0.014 | 0.008 |
| | WE | 0.085 | 0.077 | 0.069 | 0.077 | 0.083 | 0.065 | 0.058 | 0.057 | 0.060 | 0.052 | 0.056 |
| The inter-group Gini coefficient | EC-WE | 0.108 | 0.108 | 0.100 | 0.108 | 0.109 | 0.096 | 0.087 | 0.089 | 0.089 | 0.089 | 0.084 |
| | EC-CI | 0.098 | 0.088 | 0.090 | 0.088 | 0.093 | 0.098 | 0.088 | 0.086 | 0.071 | 0.070 | 0.067 |
| | EC-NE | 0.055 | 0.057 | 0.053 | 0.056 | 0.056 | 0.055 | 0.048 | 0.049 | 0.049 | 0.050 | 0.047 |
| | NE-WE | 0.081 | 0.078 | 0.074 | 0.081 | 0.084 | 0.071 | 0.064 | 0.064 | 0.061 | 0.064 | 0.057 |
| | NE-CI | 0.070 | 0.056 | 0.062 | 0.059 | 0.070 | 0.073 | 0.062 | 0.059 | 0.043 | 0.047 | 0.044 |
| | CI-WE | 0.069 | 0.059 | 0.054 | 0.064 | 0.067 | 0.056 | 0.050 | 0.049 | 0.049 | 0.043 | 0.043 |

Share and Cite

Wang, C.; Zhu, C.; Du, M. Spatial Development and Coupling Coordination of Society–Physics–Informational Smart Cities: A Case Study on Thirty Capitals in China. Land 2024 , 13 , 872. https://doi.org/10.3390/land13060872



COMMENTS

  1. Raising Critical Readers in the 21st Century: A Case of Assessing

    Grounded in the sociocultural nature of literacies and informed of the inherent biases in widely used, English-dominant reading assessments in U.S. schools, this case study traces the planning, development, and pilot administration (n = 52) of a culturally inclusive (i.e., participant informed), online reading assessment. The Critical Reading Assessment (CRA) is designed to gauge elementary ...

  2. Case Studies

    The following are a series of examples including two students, Crystal and Henry. Part 1. The first set of examples focuses on assessment and the progressive use of progress monitoring data. Example 1. Crystal, a fourth grader who is deaf, is reading at the 3.5 grade level but her teacher thinks she should be "doing better". At the first ...

  3. PDF Insights From a Literacy Tutor: A Case Study of Critical Reading and

    reading and literacy background to be successful in life. The Reading Next report (Biancarosa & Snow, 2004) offers 15 recommendations for providing instruction of the essential elements. While most of them are pertinent to a teacher in a classroom, seven of them pertain to the tutoring sessions reviewed in this case study (i.e.,

  4. Full article: Teachers' conceptions of assessment literacy

    Teachers' assessment literacy (AL) affects the quality of assessments and is, therefore, an essential part of teachers' competence (Popham, 2009). ... 'Struggles as engagement' in teacher change: A longitudinal case study of a reading teacher's changing practices. Teachers and Teaching: Theory and Practice, 25, 453-468. https://doi ...

  5. Literacy assessment

    In a single-case study, the reading skills of a struggling third grade reader were assessed using tools such as QRI (Qualitative Reading Inventory) and visual-discrimination assessments to create a diagnostic profile. The study aimed to identify the student's reading level, and factors that affect language and literacy abilities.

  6. PDF Reading Assessment: A Case Study of Teachers' Beliefs and ...

    The study looked at teachers' beliefs and theoretical orientations regarding reading assessment and its effect on EFL learners' comprehension ability. It examines the correspondence between reading teachers' theoretical orientations and classroom reading assessment. This study included 20 teachers and 120 students.

  7. Teacher Classroom Questioning Practice and Assessment Literacy: Case

    This study examined classroom teacher questioning practice and explored how this process was affected by teacher assessment literacy and other mediating factors based on four case studies. Classroom observations were conducted to identify major patterns in teacher questioning practice, and semi-structured interviews were carried out to probe the participants' perception of classroom assessment ...

  8. Literacy assessment

    In a single-case study, the reading skills of a struggling third grade reader were assessed using tools such as QRI (Qualitative Reading Inventory) and visual-discrimination assessments to create ...

  9. PDF Exploring Secondary School EFL Teachers' Assessment Literacy in ...

    Exploring Secondary School EFL Teachers' Assessment Literacy in Practice: A Case Study in China. Yuanyuan Chen, School of English Education, ... Assessment literacy (AL) has become a significant research topic in the field of language assessment and testing in the recent decade (Inbar-Lourie, 2008; Taylor, 2013). ...

  10. Toward a Differential and Situated View of Assessment Literacy

    Research has consistently demonstrated that teachers' assessment actions have a significant influence on students' learning experience and achievement. While much of the assessment research to date has investigated teachers' understandings of assessment purposes, their developing assessment literacy, or specific classroom assessment practices, few studies have explored teachers' differential ...

  11. Assessment literacy and student learning: The case for explicitly

    In this paper, we report on a study to quantify the impact on student learning and on student assessment literacy of a brief assessment literacy intervention. We first define 'assessment literacy' then report on the development and validation of an assessment literacy measurement instrument. Using a pseudo-experimental design, we quantified the impact of an assessment literacy-building ...

  12. Teacher assessment literacy in culturally and ...

    This study investigated inclusive teacher assessment practices in culturally and linguistically diverse Norwegian lower secondary schools as well as tensions in and the potential of teacher assessment literacy. Case study data gathered through interviews with 21 teachers in five schools were analysed to examine the schools' assessment policies ...

  13. A Case Study of Formative Assessment to Support Teaching of Reading

    ELFA assessment forms. The ELFA system includes a set of nine reading assessment forms that teachers can use over the course of their instruction. These nine forms are divided into three difficulty categories, developing, intermediate, and experienced, based on the linguistic ...

  14. Teacher assessment literacy: Surveying knowledge, conceptions and

    In studies of assessment literacy, there is a body of work focusing on how professional training equips language teachers with knowledge and skills to conduct classroom-based assessment. ... Although the case study may not necessarily guarantee broader generalisation of findings, this study still has its unique theoretical contributions by ...

  15. Language assessment literacy

    Local placement test retrofit and building language assessment literacy with teacher stakeholders: A case study from Colombia. Language Testing, 39 (2), 377 - 400. doi: 10.1177/02655322221076153 CrossRef Google Scholar

  16. Assessment literacy and student learning: the case for explicitly

    In this paper, we report on a study to quantify the impact on student learning and on student assessment literacy of a brief assessment literacy intervention. We first define 'assessment literacy' then report on the development and validation of an assessment literacy measurement instrument.

  17. Assessment Literacy: A Study of EFL Teachers' Assessment Knowledge

    Its theoretical framework are Brookhart's (2011) contemporary conceptualizations of formative assessment, Eyal's (2012) discussion of digital assessment literacy as an important component in measuring teacher assessment literacy and Alkharusi's (2009; 2010) methodological approach to investigating assessment literacy. This study ...

  18. Assessment literacy across contexts and competencies

    Assessment literacy involves being able to interpret such data both critically and meaningfully. In the first article of this regular issue, Hopster-den Otter et al. ( Citation this issue ) present a study investigating teachers' (N = 337) preferences and decision-making when interpreting presentations of measurement errors in score reports.

  19. PDF Best Practices in Digital Literacy: A Case Study

    THE SKILLS THAT MATTER in Adult Education, Best Practices in Digital Literacy: A Case Study. ... and 220 on the CASAS reading assessment, with two distinct subgroups of proficiency within that range. The information from this case study came from a recent pilot on establishing blended learning in this new workplace ESL program.

  20. Assessment Literacy for Teachers: Making a Case for the Study of Test

    Abstract Assessment is an everyday activity for teachers throughout the United States. Unfortunately, many teachers enter the profession with little or no training in educational assessment. Thus, there is some concern among teacher educators that assessment illiteracy is a significant barrier to appropriate decision making in the classroom. A case is made for the inclusion of assessment ...

  21. Working With Academic Literacies: Case Studies Towards Transformative

    The editors and contributors to this collection explore what it means to adopt an "academic literacies" approach in policy and pedagogy. Transformative practice is illustrated through case studies and critical commentaries from teacher-researchers working in a range of higher education contexts—from undergraduate to postgraduate levels, across disciplines, and spanning geopolitical regions ...

  22. Case study of Amelia, a five-year-old reader who enjoys reading ...

    Taking it further. Amelia is an enthusiastic reader and enjoys reading at home. She reads to her mother and father on a daily basis and explained that her father reads to her and her sister every night before bed. It appeared that her home life fosters a positive attitude to reading and this was arguably beneficial to her reading progress.

  23. A Systematic Review of Early Writing Assessment Tools

    Early reading and writing skills are important to overall literacy development and long-term academic success (Lonigan et al., 2000; Puranik & Lonigan, 2012).Though there has historically been greater emphasis on reading, children's knowledge of the writing process and oral language develop concurrently (Diamond et al., 2008).In addition, children as young as two years old demonstrate ...

  24. How curriculum and assessment policies affect the role of reading in an

    Reading assessment is a critical feature of education systems serving a variety of roles, from informing classroom practice to large-scale policy evaluation. ... Hume, A., & Coll, R. K. (2009). Assessment of learning, for learning, and as learning: New Zealand case studies. Assessment in Education Principles, Policy & Practice, 16, 269-290 ...

  25. The effectiveness of problem-based learning and case-based learning

    To investigate the effectiveness of problem-based learning (PBL) and case-based learning (CBL) teaching methods in clinical practical teaching in transarterial chemoembolization (TACE) treatment in China. A comprehensive search of PubMed, the Chinese National Knowledge Infrastructure (CNKI) database, the Weipu database and the Wanfang database up to June 2023 was performed to collect studies ...

  26. A retired federal judge says Judge Cannon appears to show ...

    A man wears a mask of former President Donald Trump in front of a Fort Pierce, Fla., courthouse. Judge Aileen Cannon, who is overseeing the classified documents case against former President ...

  27. GCR 2024 Assessment 2 Case

    Author: Sara Tödt, RMIT Business and Human Rights Centre, March 2024. Global Corporate Responsibility - BUSM4687, Assessment 2 Case Study. The Case: Perfect Properties is an Australian real estate company that owns and manages a range of office buildings in Melbourne. The company started in 2001 with just one office building and has been growing steadily since then.

  28. Financial Literacy: A Case Study on Current Practices

    Purpose. The purpose of this case study is to expand on how social innovation can be used to establish two things: (1) determining how financial literacy can become more accessible to low-income and marginalized youth within high school and (2) developing a sustainable and effective model.

  29. Land

    This study leverages entropy weight methodology to construct a comprehensive assessment index for smart city spatial development in China between 2011 and 2021. From the overall growth perspective, we observe that the annual growth rate of smart cities from 2011 to 2014 remained relatively stable at a high level, followed by a period of more ...
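The snippet above mentions entropy weight methodology for building the assessment index. As an illustration only, here is a minimal sketch of the textbook entropy weight method; it is not necessarily the paper's exact formulation, and it assumes the decision matrix has already been normalized to [0, 1]:

```python
import math

def entropy_weights(matrix):
    """Textbook entropy weight method on a pre-normalized decision matrix:
    rows are samples (e.g. city-years), columns are indicators, values in [0, 1].
    Generic sketch, not the paper's exact implementation."""
    m = len(matrix)        # number of samples
    n = len(matrix[0])     # number of indicators
    k = 1.0 / math.log(m)  # normalizing constant so entropy lies in [0, 1]
    divergence = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col) or 1.0
        p = [x / total for x in col]
        # Entropy of indicator j; p*ln(p) is taken as 0 when p == 0.
        e = -k * sum(x * math.log(x) for x in p if x > 0)
        divergence.append(1.0 - e)  # higher dispersion -> more weight
    s = sum(divergence) or 1.0
    return [d / s for d in divergence]
```

A uniform indicator (no dispersion across samples) receives essentially zero weight, while a highly dispersed indicator absorbs the remainder, which is the defining behavior of the method.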