I. Essay is easy to read due to clear organization of main points.
II. Method of organization is well-suited to topic. (50 points possible) / Too much information distracts the focus of the paper; essay does not address the assignment prompt.
IV. Examples in essay are ample support for thesis. / Examples in essay need more support for thesis.
V. Examples are specific and concrete. / Examples are somewhat nonspecific and/or abstract.
VI. Interest level of essay is superior.
VII. Introduction and conclusion are interesting and balanced. (30 points possible)
IX. Essay exhibits correct spelling.
X. Essay is written in appropriate point of view and format. / Point of view and format are inappropriate for essay.
Articulating your assessment values.
Reading, commenting on, and then assigning a grade to a piece of student writing requires intense attention and difficult judgment calls. Some faculty dread “the stack.” Students may share the faculty’s dim view of writing assessment, perceiving it as highly subjective. They wonder why one faculty member values evidence and correctness before all else, while another seeks a vaguely defined originality.
Writing rubrics can help address the concerns of both faculty and students by making writing assessment more efficient, consistent, and public. Whether it is called a grading rubric, a grading sheet, or a scoring guide, a writing assignment rubric lists criteria by which the writing is graded.
Create a rubric at the same time you create the assignment. It will help you explain to the students what your goals are for the assignment.
Consider involving students in Steps 2 and 3. A class session devoted to developing a rubric can provoke many important discussions about the ways the features of the language serve the purpose of the writing. And when students themselves work to describe the writing they are expected to produce, they are more likely to achieve it.
At this point, you will need to decide if you want to create a holistic or an analytic rubric. There is much debate about these two approaches to assessment.
Holistic scoring.
Holistic scoring aims to rate overall proficiency in a given student writing sample. It is often used in large-scale writing program assessment and impromptu classroom writing for diagnostic purposes.
General tenets to holistic scoring:
Holistic rubrics emphasize what students do well and generally increase efficiency; they may also be more valid because scoring includes authentic, personal reaction of the reader. But holistic scores won’t tell a student how they’ve progressed relative to previous assignments and may be rater-dependent, reducing reliability. (For a summary of advantages and disadvantages of holistic scoring, see Becker, 2011, p. 116.)
Here is an example of a partial holistic rubric:
Summary meets all the criteria. The writer understands the article thoroughly. The main points in the article appear in the summary with all main points proportionately developed. The summary should be as comprehensive as possible and should read smoothly, with appropriate transitions between ideas. Sentences should be clear, without vagueness or ambiguity and without grammatical or mechanical errors.
A complete holistic rubric for a research paper (authored by Jonah Willihnganz) can be downloaded here.
Analytic scoring.
Analytic scoring makes explicit the contribution to the final grade of each element of writing. For example, an instructor may choose to give 30 points for an essay whose ideas are sufficiently complex, that marshals good reasons in support of a thesis, and whose argument is logical; and 20 points for well-constructed sentences and careful copy editing.
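To make the arithmetic concrete, here is a minimal sketch of how an analytic score can be assembled from per-criterion points; the criterion names and point allocations (echoing the 30- and 20-point examples above) are illustrative placeholders, not a prescribed rubric.

```python
# Minimal sketch of analytic scoring: each criterion carries its own point
# allocation, and the final grade is the sum of the points awarded.
# Criterion names and maxima below are illustrative placeholders only.

MAX_POINTS = {
    "ideas_and_argument": 30,      # complex ideas, good reasons, logical argument
    "organization": 25,
    "style_and_voice": 25,
    "sentences_and_editing": 20,   # well-constructed sentences, careful copy editing
}

def analytic_total(awarded):
    """Sum the awarded points, capping each criterion at its maximum."""
    return sum(min(awarded.get(c, 0), maximum) for c, maximum in MAX_POINTS.items())

# Example: strong on ideas and organization, weaker on editing.
print(analytic_total({
    "ideas_and_argument": 27,
    "organization": 21,
    "style_and_voice": 20,
    "sentences_and_editing": 12,
}))  # -> 80
```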
General tenets to analytic scoring:
Advantages of an analytic rubric include ease of training raters and improved reliability. Meanwhile, writers often can more easily diagnose the strengths and weaknesses of their work. But analytic rubrics can be time-consuming to produce, and raters may judge the writing holistically anyway. Moreover, many readers believe that writing traits cannot be separated. (For a summary of the advantages and disadvantages of analytic scoring, see Becker, 2011, p. 115.)
For example, a partial analytic rubric for a single trait, “addresses a significant issue”:
A complete analytic rubric for a research paper can be downloaded here. In WIM courses, this language should be revised to name specific disciplinary conventions.
Whichever type of rubric you write, your goal is to avoid pushing students into prescriptive formulas and limiting thinking (e.g., “each paragraph has five sentences”). By carefully describing the writing you want to read, you give students a clear target, and, as Ed White puts it, “describe the ongoing work of the class” (75).
Writing rubrics contribute meaningfully to the teaching of writing. Think of them as a coaching aide. In class and in conferences, you can use the language of the rubric to help you move past generic statements about what makes good writing good to statements about what constitutes success on the assignment and in the genre or discourse community. The rubric articulates what you are asking students to produce on the page; once that work is accomplished, you can turn your attention to explaining how students can achieve it.
Becker, Anthony. “Examining Rubrics Used to Measure Writing Performance in U.S. Intensive English Programs.” The CATESOL Journal 22.1 (2010/2011): 113-30. Web.
White, Edward M. Teaching and Assessing Writing. Proquest Info and Learning, 1985. Print.
CCCC Committee on Assessment. “Writing Assessment: A Position Statement.” November 2006 (Revised March 2009). Conference on College Composition and Communication. Web.
Gallagher, Chris W. “Assess Locally, Validate Globally: Heuristics for Validating Local Writing Assessments.” Writing Program Administration 34.1 (2010): 10-32. Web.
Huot, Brian. (Re)Articulating Writing Assessment for Teaching and Learning. Logan: Utah State UP, 2002. Print.
Kelly-Reilly, Diane, and Peggy O’Neil, eds. Journal of Writing Assessment. Web.
McKee, Heidi A., and Dànielle Nicole DeVoss, eds. Digital Writing Assessment & Evaluation. Logan, UT: Computers and Composition Digital Press/Utah State University Press, 2013. Web.
O’Neill, Peggy, Cindy Moore, and Brian Huot. A Guide to College Writing Assessment. Logan: Utah State UP, 2009. Print.
Sommers, Nancy. Responding to Student Writers. Macmillan Higher Education, 2013.
Straub, Richard. “Responding, Really Responding to Other Students’ Writing.” The Subject is Writing: Essays by Teachers and Students. Ed. Wendy Bishop. Boynton/Cook, 1999. Web.
White, Edward M., and Cassie A. Wright. Assigning, Responding, Evaluating: A Writing Teacher’s Guide. 5th ed. Bedford/St. Martin’s, 2015. Print.
Scoring rubric for the Praxis argumentative essay.
The rubric’s performance bands are described along several dimensions: knowledge of subject matter and use of topic-related concepts to support the argument; organization (purpose statement/thesis, conclusion, topic sentences, paragraphing, sequencing); vocabulary and register (formal vs. informal English); cohesion (key words, pronouns, references, transitions); formatting and layout (headings, spacing, alignment) and readability; and sentence-level grammar (fragments, run-on sentences, word deletions, possessives, prepositions, tense).
ACT Writing
What time is it? It's essay time! In this article, I'm going to get into the details of the newly transformed ACT Writing by discussing the ACT essay rubric and how the essay is graded based on that. You'll learn what each item on the rubric means for your essay writing and what you need to do to meet those requirements.
If you've chosen to take the ACT Plus Writing , you'll have 40 minutes to write an essay (after completing the English, Math, Reading, and Science sections of the ACT, of course). Your essay will be evaluated by two graders , who score your essay from 1-6 on each of 4 domains, leading to scores out of 12 for each domain. Your Writing score is calculated by averaging your four domain scores, leading to a total ACT Writing score from 2-12.
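As a rough sketch of that arithmetic (the domain names are the ACT's own; the exact rounding rule for the averaged score is an assumption here), the calculation looks something like this:

```python
# Sketch of assembling an ACT Writing score: two graders each rate four
# domains on a 1-6 scale; the two ratings are summed into a 2-12 domain
# score, and the four domain scores are averaged (rounding is assumed
# to be to the nearest whole number) for the overall 2-12 Writing score.

DOMAINS = ["Ideas and Analysis", "Development and Support",
           "Organization", "Language Use and Conventions"]

def act_writing_score(grader1, grader2):
    domain_scores = {d: grader1[d] + grader2[d] for d in DOMAINS}   # each 2-12
    overall = round(sum(domain_scores.values()) / len(DOMAINS))     # 2-12
    return domain_scores, overall

g1 = {"Ideas and Analysis": 5, "Development and Support": 4,
      "Organization": 5, "Language Use and Conventions": 4}
g2 = {"Ideas and Analysis": 5, "Development and Support": 5,
      "Organization": 4, "Language Use and Conventions": 4}
print(act_writing_score(g1, g2))
# -> ({'Ideas and Analysis': 10, 'Development and Support': 9,
#      'Organization': 9, 'Language Use and Conventions': 8}, 9)
```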
Based on ACT, Inc.'s stated grading criteria, I've gathered all the relevant essay-grading criteria into a chart. The information itself is available on the ACT's website, and there's more general information about each of the domains here. The columns in this rubric are titled as per the ACT's own domain areas, with the addition of another category that I named ("Mastery Level"). In the chart below, each row corresponds to one score point, from 1 at the top to 6 at the bottom; within each row, the cells give the descriptors for Mastery Level, Ideas and Analysis, Development and Support, Organization, and Language Use and Conventions, in that order.
demonstrate little or no skill in writing an argumentative essay. | The writer fails to generate an argument that responds intelligibly to the task. The writer's intentions are difficult to discern. Attempts at analysis are unclear or irrelevant. | Ideas lack development, and claims lack support. Reasoning and illustration are unclear, incoherent, or largely absent. | The response does not exhibit an organizational structure. There is little grouping of ideas. When present, transitional devices fail to connect ideas. | The use of language fails to demonstrate skill in responding to the task. Word choice is imprecise and often difficult to comprehend. Sentence structures are often unclear. Stylistic and register choices are difficult to identify. Errors in grammar, usage, and mechanics are pervasive and often impede understanding. | |
demonstrate weak or inconsistent skill in writing an argumentative essay | The writer generates an argument that weakly responds to multiple perspectives on the given issue. The argument's thesis, if evident, reflects little clarity in thought and purpose. Attempts at analysis are incomplete, largely irrelevant, or consist primarily of restatement of the issue and its perspectives. | Development of ideas and support for claims are weak, confused, or disjointed. Reasoning and illustration are inadequate, illogical, or circular, and fail to fully clarify the argument. | The response exhibits a rudimentary organizational structure. Grouping of ideas is inconsistent and often unclear. Transitions between and within paragraphs are misleading or poorly formed. | The use of language is inconsistent and often unclear. Word choice is rudimentary and frequently imprecise. Sentence structures are sometimes unclear. Stylistic and register choices, including voice and tone, are inconsistent and are not always appropriate for the rhetorical purpose. Distracting errors in grammar, usage, and mechanics are present, and they sometimes impede understanding. | |
demonstrate some developing skill in writing an argumentative essay | The writer generates an argument that responds to multiple perspectives on the given issue. The argument's thesis reflects some clarity in thought and purpose. The argument establishes a limited or tangential context for analysis of the issue and its perspectives. Analysis is simplistic or somewhat unclear. | Development of ideas and support for claims are mostly relevant but are overly general or simplistic. Reasoning and illustration largely clarify the argument but may be somewhat repetitious or imprecise. | The response exhibits a basic organizational structure. The response largely coheres, with most ideas logically grouped. Transitions between and within paragraphs sometimes clarify the relationships among ideas. | The use of language is basic and only somewhat clear. Word choice is general and occasionally imprecise. Sentence structures are usually clear but show little variety. Stylistic and register choices, including voice and tone, are not always appropriate for the rhetorical purpose. Distracting errors in grammar, usage, and mechanics may be present, but they generally do not impede understanding. | |
demonstrate adequate skill in writing an argumentative essay | The writer generates an argument that engages with multiple perspectives on the given issue. The argument's thesis reflects clarity in thought and purpose. The argument establishes and employs a relevant context for analysis of the issue and its perspectives. The analysis recognizes implications, complexities and tensions, and/or underlying values and assumptions. | Development of ideas and support for claims clarify meaning and purpose. Lines of clear reasoning and illustration adequately convey the significance of the argument. Qualifications and complications extend ideas and analysis. | The response exhibits a clear organizational strategy. The overall shape of the response reflects an emergent controlling idea or purpose. Ideas are logically grouped and sequenced. Transitions between and within paragraphs clarify the relationships among ideas. | The use of language conveys the argument with clarity. Word choice is adequate and sometimes precise. Sentence structures are clear and demonstrate some variety. Stylistic and register choices, including voice and tone, are appropriate for the rhetorical purpose. While errors in grammar, usage, and mechanics are present, they rarely impede understanding. | |
demonstrate well-developed skill in writing an argumentative essay | The writer generates an argument that productively engages with multiple perspectives on the given issue. The argument's thesis reflects precision in thought and purpose. The argument establishes and employs a thoughtful context for analysis of the issue and its perspectives. The analysis addresses implications, complexities and tensions, and/or underlying values and assumptions. | Development of ideas and support for claims deepen understanding. A mostly integrated line of purposeful reasoning and illustration capably conveys the significance of the argument. Qualifications and complications enrich ideas and analysis. | The response exhibits a productive organizational strategy. The response is mostly unified by a controlling idea or purpose, and a logical sequencing of ideas contributes to the effectiveness of the argument. Transitions between and within paragraphs consistently clarify the relationships among ideas. | The use of language works in service of the argument. Word choice is precise. Sentence structures are clear and varied often. Stylistic and register choices, including voice and tone, are purposeful and productive. While minor errors in grammar, usage, and mechanics may be present, they do not impede understanding. | |
demonstrate effective skill in writing an argumentative essay | The writer generates an argument that critically engages with multiple perspectives on the given issue. The argument's thesis reflects nuance and precision in thought and purpose. The argument establishes and employs an insightful context for analysis of the issue and its perspectives. The analysis examines implications, complexities and tensions, and/or underlying values and assumptions. | Development of ideas and support for claims deepen insight and broaden context. An integrated line of skillful reasoning and illustration effectively conveys the significance of the argument. Qualifications and complications enrich and bolster ideas and analysis. | The response exhibits a skillful organizational strategy. The response is unified by a controlling idea or purpose, and a logical progression of ideas increases the effectiveness of the writer's argument. Transitions between and within paragraphs strengthen the relationships among ideas. | The use of language enhances the argument. Word choice is skillful and precise. Sentence structures are consistently varied and clear. Stylistic and register choices, including voice and tone, are strategic and effective. While a few minor errors in grammar, usage, and mechanics may be present, they do not impede understanding. |
Whew. That rubric might be a little overwhelming—there's so much information to process! Below, I've broken down the essay rubric by domain, with examples of what a 3- and a 6-scoring essay might look like.
The Ideas and Analysis domain is the rubric area most intimately linked with the basic ACT essay task itself. Here's what the ACT website has to say about this domain:
Scores in this domain reflect the ability to generate productive ideas and engage critically with multiple perspectives on the given issue. Competent writers understand the issue they are invited to address, the purpose for writing, and the audience. They generate ideas that are relevant to the situation.
Based on this description, I've extracted the four key things you need to do in your essay to score well in the Ideas and Analysis domain.
#1: Choose a perspective on this issue and state it clearly. #2: Compare at least one other perspective to the perspective you have chosen. #3: Demonstrate understanding of the ways the perspectives relate to one another. #4: Analyze the implications of each perspective you choose to discuss.
There's no cool acronym, sorry. I guess a case could be made for "ACCE," but I wanted to list the points in the order of importance, so "CEAC" it is.
Fortunately, the ACT Writing Test provides you with the three perspectives to analyze and choose from, which will save you some of the time of "generating productive ideas." In addition, "analyzing each perspective" does not mean that you need to argue from each of the points of view. Instead, you need to choose one perspective to argue as your own and explain how your point of view relates to at least one other perspective by evaluating how correct the perspectives you discuss are and analyzing the implications of each perspective.
Note: While it is technically allowable for you to come up with a fourth perspective as your own and to then discuss that point of view in relation to another perspective, we do not recommend it. 40 minutes is already a pretty short time to discuss and compare multiple points of view in a thorough and coherent manner; coming up with new, clearly articulated perspectives takes time that could be better spent devising a thorough analysis of the relationship between multiple perspectives.
To get deeper into what things fall in the Ideas and Analysis domain, I'll use a sample ACT Writing prompt and the three perspectives provided:
Many of the goods and services we depend on daily are now supplied by intelligent, automated machines rather than human beings. Robots build cars and other goods on assembly lines, where once there were human workers. Many of our phone conversations are now conducted not with people but with sophisticated technologies. We can now buy goods at a variety of stores without the help of a human cashier. Automation is generally seen as a sign of progress, but what is lost when we replace humans with machines? Given the accelerating variety and prevalence of intelligent machines, it is worth examining the implications and meaning of their presence in our lives.
Perspective One : What we lose with the replacement of people by machines is some part of our own humanity. Even our mundane daily encounters no longer require from us basic courtesy, respect, and tolerance for other people.
Perspective Two : Machines are good at low-skill, repetitive jobs, and at high-speed, extremely precise jobs. In both cases they work better than humans. This efficiency leads to a more prosperous and progressive world for everyone.
Perspective Three : Intelligent machines challenge our long-standing ideas about what humans are or can be. This is good because it pushes both humans and machines toward new, unimagined possibilities.
First, in order to "clearly state your own perspective on the issue," you need to figure out what your point of view, or perspective, on this issue is going to be. For the sake of argument, let's say that you agree the most with the second perspective. An essay that scores a 3 in this domain might simply restate this perspective:
I agree that machines are good at low-skill, repetitive jobs, and at high-speed, extremely precise jobs. In both cases they work better than humans. This efficiency leads to a more prosperous and progressive world for everyone.
In contrast, an essay scoring a 6 in this domain would likely have a more complex point of view (with what the rubric calls "nuance and precision in thought and purpose"):
Machines will never be able to replace humans entirely, as creativity is not something that can be mechanized. Because machines can perform delicate and repetitive tasks with precision, however, they are able to take over for humans with regards to low-skill, repetitive jobs and high-skill, extremely precise jobs. This then frees up humans to do what we do best—think, create, and move the world forward.
Next, you must compare at least one other perspective to your perspective throughout your essay, including in your initial argument. Here's what a 3-scoring essay's argument would look like:
I agree that machines are good at low-skill, repetitive jobs, and at high-speed, extremely precise jobs. In both cases they work better than humans. This efficiency leads to a more prosperous and progressive world for everyone. Machines do not cause us to lose our humanity or challenge our long-standing ideas about what humans are or can be.
And here, in contrast, is what a 6-scoring essay's argument (that includes multiple perspectives) would look like:
Machines will never be able to replace humans entirely, as creativity is not something that can be mechanized, which means that our humanity is safe. Because machines can perform delicate and repetitive tasks with precision, however, they are able to take over for humans with regards to low-skill, repetitive jobs and high-skill, extremely precise jobs. Rather than forcing us to challenge our ideas about what humans are or could be, machines simply allow us to BE, without distractions. This then frees up humans to do what we do best—think, create, and move the world forward.
You also need to demonstrate a nuanced understanding of the way in which the two perspectives relate to each other. A 3-scoring essay in this domain would likely be absolute, stating that Perspective Two is completely correct, while the other two perspectives are absolutely incorrect. By contrast, a 6-scoring essay in this domain would provide a more insightful context within which to consider the issue:
In the future, machines might lead us to lose our humanity; alternatively, machines might lead us to unimaginable pinnacles of achievement. I would argue, however, that projecting possible futures does not make them true, and that the evidence we have at present supports the perspective that machines are, above all else, efficient and effective at completing repetitive and precise tasks.
Finally, to analyze the perspectives, you need to consider each aspect of each perspective. In the case of Perspective Two, this means you must discuss that machines are good at two types of jobs, that they're better than humans at both types of jobs, and that their efficiency creates a better world. The analysis in a 3-scoring essay is usually "simplistic or somewhat unclear." By contrast, the analysis of a 6-scoring essay "examines implications, complexities and tensions, and/or underlying values and assumptions."
To score well on the ACT essay overall, however, it's not enough to just state your opinions about each part of the perspective; you need to actually back up your claims with evidence to develop your own point of view. This leads straight into the next domain: Development and Support.
Another important component of your essay is that you explain your thinking. While it's obviously important to clearly state what your ideas are in the first place, the ACT essay requires you to demonstrate evidence-based reasoning. As per the description on ACT.org [bolding mine]:
Scores in this domain reflect the ability to discuss ideas, offer rationale, and bolster an argument. Competent writers explain and explore their ideas, discuss implications, and illustrate through examples . They help the reader understand their thinking about the issue.
"Machines are good at low-skill, repetitive jobs, and at high-speed, extremely precise jobs. In both cases they work better than humans. This efficiency leads to a more prosperous and progressive world for everyone."
In your essay, you might start out by copying the perspective directly into your essay as your point of view, which is fine for the Ideas and Analysis domain. To score well in the Development and Support domain and develop your point of view with logical reasoning and detailed examples, however, you're going to have to come up with reasons for why you agree with this perspective and examples that support your thinking.
Here's an example from an essay that would score a 3 in this domain:
Machines are good at low-skill, repetitive jobs and at high-speed, extremely precise jobs. In both cases, they work better than humans. For example, machines are better at printing things quickly and clearly than people are. Prior to the invention of the printing press by Gutenberg people had to write everything by hand. The printing press made it faster and easier to get things printed because things didn't have to be written by hand all the time. In the world today we have even better machines like laser printers that print things quickly.
Essays scoring a 3 in this domain tend to have relatively simple development and tend to be overly general, with imprecise or repetitive reasoning or illustration. Contrast this with an example from an essay that would score a 6:
Machines are good at low-skill, repetitive jobs and at high-speed, extremely precise jobs. In both cases, they work better than humans. Take, for instance, the example of printing. As a composer, I need to be able to create many copies of my sheet music to give to my musicians. If I were to copy out each part by hand, it would take days, and would most likely contain inaccuracies. On the other hand, my printer (a machine) is able to print out multiple copies of parts with extreme precision. If it turns out I made an error when I was entering in the sheet music onto the computer (another machine), I can easily correct this error and print out more copies quickly.
The above example of the importance of machines to composers uses "an integrated line of skillful reasoning and illustration" to support my claim ("Machines are good at low-skill, repetitive jobs and at high-speed, extremely precise jobs. In both cases, they work better than humans"). To develop this example further (and incorporate the "This efficiency leads to a more prosperous and progressive world for everyone" facet of the perspective), I would need to expand my example to explain why it's so important that multiple copies of precisely replicated documents be available, and how this affects the world.
Essay organization has always been integral to doing well on the ACT essay, so it makes sense that the ACT Writing rubric has an entire domain devoted to this. The organization of your essay refers not just to the order in which you present your ideas in the essay, but also to the order in which you present your ideas in each paragraph. Here's the formal description from the ACT website :
Scores in this domain reflect the ability to organize ideas with clarity and purpose. Organizational choices are integral to effective writing. Competent writers arrange their essay in a way that clearly shows the relationship between ideas, and they guide the reader through their discussion.
Making sure your essay is logically organized relates back to the "development" part of the previous domain. As the above description states, you can't just throw examples and information into your essay willy-nilly, without any regard for the order; part of constructing and developing a convincing argument is making sure it flows logically. A lot of this organization should happen while you are in the planning phase, before you even begin to write your essay.
Let's go back to the machine intelligence essay example again. I've decided to argue for Perspective Two, which is: "Machines are good at low-skill, repetitive jobs, and at high-speed, extremely precise jobs. In both cases they work better than humans. This efficiency leads to a more prosperous and progressive world for everyone."
An essay that scores a 3 in this domain would show a "basic organizational structure," which is to say that each perspective analyzed would be discussed in its own paragraph, "with most ideas logically grouped." A possible organization for a 3-scoring essay:
An essay that scores a 6 in this domain, on the other hand, has a lot more to accomplish. The "controlling idea or purpose" behind the essay should be clearly expressed in every paragraph, and ideas should be ordered in a logical fashion so that there is a clear progression from the beginning to the end. Here's a possible organization for a 6-scoring essay:
In this example, the unifying idea is that machines are helpful (and it's mentioned in each paragraph) and the progression of ideas makes more sense. This is certainly not the only way to organize an essay on this particular topic, or even using this particular perspective. Your essay does, however, have to be organized, rather than consist of a bunch of ideas thrown together.
Here are my Top 5 ACT Writing Organization Rules to follow:
#1: Be sure to include an introduction (with your thesis stating your point of view), paragraphs in which you make your case, and a conclusion that sums up your argument
#2: When planning your essay, make sure to present your ideas in an order that makes sense (and follows a logical progression that will be easy for the grader to follow).
#3: Make sure that you unify your essay with one main idea . Do not switch arguments partway through your essay.
#4: Don't write everything in one huge paragraph. If you're worried you're going to run out of space to write and can't make your handwriting any smaller and still legible, you can try using a paragraph symbol, ¶, at the beginning of each paragraph as a last resort to show the organization of your essay.
#5: Use transitions between paragraphs (usually the last line of the previous paragraph and the first line of the next paragraph) to "strengthen the relationships among ideas" (source). This means going above and beyond "First of all...Second...Lastly" at the beginning of each paragraph. Instead, use the transitions between paragraphs as an opportunity to describe how that paragraph relates to your main argument.
The final domain on the ACT Writing rubric is Language Use and Conventions. This is the item that includes grammar, punctuation, and general sentence structure issues. Here's what the ACT website has to say about Language Use:
Scores in this domain reflect the ability to use written language to convey arguments with clarity. Competent writers make use of the conventions of grammar, syntax, word usage, and mechanics. They are also aware of their audience and adjust the style and tone of their writing to communicate effectively.
I tend to think of this as the "be a good writer" category, since many of the standards covered in the above description are ones that good writers will automatically meet in their writing. On the other hand, this is probably the area non-native English speakers will struggle with the most, as you must have a fairly solid grasp of English to score above a 2 in this domain. The good news is that by reading this article, you're already one step closer to improving your "Language Use" on ACT Writing.
There are three main parts of this domain:
#1: Grammar, Usage, and Mechanics #2: Sentence Structure #3: Vocabulary and Word Choice
I've listed them (and will cover them) from lowest to highest level. If you're struggling with multiple areas, I highly recommend starting out with the lowest-level issue, as the components tend to build on each other. For instance, if you're struggling with grammar and usage, you need to focus on fixing that before you start to think about precision of vocabulary/word choice.
At the most basic level, you need to be able to "effectively communicate your ideas in standard written English" ( ACT.org ). First and foremost, this means that your grammar and punctuation need to be correct. On ACT Writing, it's all right to make a few minor errors if the meaning is clear, even on essays that score a 6 in the Language Use domain; however, the more errors you make, the more your score will drop.
Here's an example from an essay that scored a 3 in Language Use:
Machines are good at doing there jobs quickly and precisely. Also because machines aren't human or self-aware they don't get bored so they can do the same thing over & over again without getting worse.
While the meaning of the sentences is clear, there are several errors: the first sentence uses "there" instead of "their," the second sentence is a run-on sentence, and the second sentence also uses the abbreviation "&" in place of "and." Now take a look at an example from a 6-scoring essay:
Machines excel at performing their jobs both quickly and precisely. In addition, since machines are not self-aware they are unable to get "bored." This means that they can perform the same task over and over without a decrease in quality.
This example fixes the abbreviation and "there/their" issues. The second sentence is still missing a comma (after "self-aware"), but the run-on sentence problem is gone.
Our Complete Guide to ACT Grammar might be helpful if you just need a general refresh on grammar rules. In addition, we have several articles that focus in on specific grammar rules, as they are tested on ACT English; while the specific ways in which ACT English tests you on these rules isn't something you'll need to know for the essay, the explanations of the grammar rules themselves are quite helpful.
Once you've gotten down basic grammar, usage, and mechanics, you can turn your attention to sentence structure. Here's an example of what a 3-scoring essay in Language Use (based on sentence structure alone) might look like:
Machines are more efficient than humans at many tasks. Machines are not causing us to lose our humanity. Instead, machines help us to be human by making things more efficient so that we can, for example, feed the needy with technological advances.
The sentence structures in the above example are not particularly varied (two sentences in a row start with "Machines are"), and the last sentence has a very complicated/convoluted structure, which makes it hard to understand. For comparison, here's a 6-scoring essay:
Machines are more efficient than humans at many tasks, but that does not mean that machines are causing us to lose our humanity. In fact, machines may even assist us in maintaining our humanity by providing more effective and efficient ways to feed the needy.
For whatever reason, I find that when I'm under time pressure, my sentences maintain variety in their structures but end up getting really awkward and strange. A real life example: once I described a method of counteracting dementia as "supporting persons of the elderly persuasion" during a hastily written psychology paper. I've found the best ways to counteract this are as follows:
#1: Look over what you've written and change any weird wordings that you notice.
#2: If you're just writing a practice essay, get a friend/teacher/relative who is good at writing (in English) to look over what you've written and point out issues (this is how my own awkward wording was caught before I handed in the paper). This point obviously does not apply when you're actually taking the ACT, but it is very helpful to have someone else look over any practice essays you write and point out issues you may not notice yourself.
The icing on the "Language Use" domain cake is skilled use of vocabulary and correct word choice. Part of this means using more complicated vocabulary in your essay. Once more, look at this example from a 3-scoring essay (spelling corrected):
Machines are good at doing their jobs quickly and precisely.
Compare that to this sentence from a 6-scoring essay:
Machines excel at performing their jobs both quickly and precisely.
The 6-scoring essay uses "excel" and "performing" in place of "are good at" and "doing." This is an example of using language that is both more skillful ("excel" is more advanced than "are good at") and more precise ("performing" is a more precise word than "doing"). It's important to make sure that, when you do use more advanced words, you use them correctly. Consider the below sentence:
"Machines are often instrumental in ramifying safety features."
The sentence uses a couple of advanced vocabulary words, but since "ramifying" is used incorrectly, the language use in this sentence is neither skillful nor precise. Above all, your word choice and vocabulary should make your ideas clearer, not make them harder to understand.
Okay, we've taken a look at the ACTual ACT Writing grading rubric and gone over each domain in detail. To finish up, I'll go over a couple of ways the scoring rubric can be useful to you in your ACT essay prep.
Now that you know what the ACT is looking for in an essay, you can use that to guide what you write about in your essays...and how to develop and organize what you say!
Because I'm an Old™ (not actually trademarked), and because I'm from the East Coast, I didn't really know much about the ACT prior to starting my job at PrepScholar. People didn't really take it in my high school, so when I looked at the grading rubric for the first time, I was shocked to see how different the ACT essay was (as compared to the more familiar SAT essay ).
Basically, by reading this article, you're already doing better than high school me.
The ACT can't really give you an answer key to the essay the way it can give you an answer key to the other sections (Reading, Math, etc). There are some examples of essays at each score point on the ACT website , but these examples assume that students will be at an equal level in each of domains, which will not necessarily be true for you. Even if a sample essay is provided as part of a practice test answer key, it will probably use different context, have a different logical progression, or maybe even argue a different viewpoint.
The ACT Writing rubric is the next best thing to an essay answer key. Use it as a filter through which to view your essay . Naturally, you don't have the time to become an expert at applying the rubric criteria to your essay to make sure you're in line with the ACT's grading principles and standards. That is not your job. Your job is to write the best essay that you can. If you're not confident in your ability to spot grammar, usage, and mechanics issues, I highly recommend asking a friend, teacher, or family member who is really good at (English) writing to take a look over your practice essays and point out the mistakes.
If you really want custom feedback on your practice essays from experienced essay graders, may I also suggest the PrepScholar test prep platform ? As I manage all essay grading, I happen to know a bit about the essay part of this platform, which provides you with both an essay grade and custom feedback. Learn more about PrepScholar ACT Prep and our essay grading here!
Desirous of some more sweet sweet ACT essay articles? Why not start with our comprehensive guide to the ACT Writing test and how to write an ACT essay, step-by-step ? (Trick question: obviously you should do this.)
Round out your dive into the details of the ACT Writing test with tips and strategies to raise your essay score , information about the best ACT Writing template , and advice on how to get a perfect score on the ACT essay .
Want actual feedback on your essay? Then consider signing up for our PrepScholar test prep platform . Included in the platform are practice tests and practice essays graded by experts here at PrepScholar.
Laura graduated magna cum laude from Wellesley College with a BA in Music and Psychology, and earned a Master's degree in Composition from the Longy School of Music of Bard College. She scored in the 99th percentile on both the SAT and GRE and loves advising students on how to excel in high school.
Scoring essays written by English learners can at times be difficult due to the challenging task of writing larger structures in English. ESL/EFL teachers should expect errors in each area and make appropriate concessions in their scoring. Rubrics should be based on a keen understanding of English learner communicative levels. This essay writing rubric provides a scoring system which is more appropriate to English learners than standard rubrics. It also contains marks not only for organization and structure, but also for important sentence-level features such as the correct usage of linking language, spelling, and grammar.
Each criterion below is described in four bands, from strongest to weakest.
Audience: Demonstrates a keen understanding of the target audience, and uses appropriate vocabulary and language. Anticipates probable questions and addresses these concerns with evidence pertaining to potential readers. | Demonstrates a general understanding of audience and uses mostly appropriate vocabulary and language structures. | Demonstrates a limited understanding of audience, and generally uses appropriate, if simple, vocabulary and language. | Not clear which audience is intended for this writing.
Attention getter: Introductory paragraph begins with a statement that both grabs the attention of the reader and is appropriate to the audience. | Introductory paragraph begins with a statement that attempts to grab the attention of the reader, but is incomplete in some sense, or may not be appropriate to the audience. | Introductory paragraph begins with a statement that might be construed as an attention getter, but is not clear. | Introductory paragraph does not contain a hook or attention grabber.
Thesis statement: Introductory paragraph contains a clear thesis or main idea with clear suggestions as to how the body of the essay will support this thesis. | Introductory paragraph contains a clear thesis. However, the following support sentences are not necessarily, or only vaguely, connected to the body paragraphs. | Introductory paragraph contains a statement that may be construed as a thesis or main idea. However, there is little structural support in the following sentences. | Introductory paragraph contains no clear thesis statement or main idea.
Body paragraphs: Body paragraphs provide clear evidence and ample examples supporting the thesis statement. | Body paragraphs provide clear connections to the thesis statement, but may need more examples or concrete evidence. | Body paragraphs are vaguely on topic, but lack clear connections, evidence, and examples related to the thesis or main idea. | Body paragraphs are unrelated, or marginally connected, to the essay topic. Examples and evidence are weak or nonexistent.
Conclusion: Closing paragraph provides a clear conclusion successfully stating the author's position, as well as containing an effective restatement of the main idea or thesis of the essay. | Closing paragraph concludes the essay in a satisfactory manner. However, the author's position and/or an effective restatement of the main idea or thesis may be lacking. | Conclusion is weak and at times confusing in terms of the author's position, with little reference to the main idea or thesis. | Conclusion is nonexistent, with little or no reference to preceding paragraphs or the author's position.
Sentence structure: All sentences are well constructed with very few minor mistakes. Complex sentence structures are used effectively. | Most sentences are well constructed, with a number of mistakes. Some attempts at complex sentence structure are successful. | Some sentences are well constructed, while others contain serious errors. Use of complex sentence structure is limited. | Very few sentences are well constructed, or sentence structures are all very simple.
Linking language: Linking language is used correctly and often. | Linking language is used. However, mistakes in exact phrasing or usage of linking language are evident. | Linking language is seldom used. | Linking language is almost never or never used.
Grammar, spelling, and punctuation: Writing includes no or only very few minor errors in grammar and spelling. | Writing includes a relatively small number of errors in grammar, spelling, and punctuation. However, the reader's understanding is not impeded by these errors. | Writing includes a number of errors in grammar, spelling, and punctuation which, at times, hinder the reader's understanding. | Writing includes numerous errors in grammar, spelling, and punctuation which make the reader's understanding difficult.
Language Testing in Asia, volume 10, Article number 12 (2020)
The literature on using scoring rubrics in writing assessment denotes the significance of rubrics as practical and useful means to assess the quality of writing tasks. This study investigates the agreement among rubrics endorsed and used for assessing the essay writing tasks of internationally recognized tests of English language proficiency. To carry out this study, two hundred essays (Task 2) from the academic IELTS test, administered between 2015 and 2016, were randomly selected from about 800 essays held by an official IELTS center, a representative of IDP Australia. The test takers were 19 to 42 years of age; 120 were female and 80 were male. Three raters were provided with four sets of rubrics used for scoring the essay writing tasks of tests developed by Educational Testing Service (ETS) and Cambridge English Language Assessment (i.e., the independent TOEFL iBT task, GRE, CPE, and CAE) to score the essays, which had been previously scored officially by a certified IELTS examiner. The data analysis through correlation and factor analysis showed a general agreement among raters and scores; however, some deviant scorings were spotted by two of the raters. Follow-up interviews and a questionnaire survey revealed that the source of score deviations could be related to the raters’ interests and (un)familiarity with certain exams and their corresponding rubrics. Specifically, the results indicated that despite the significance which can be attached to rubrics in writing assessment, raters themselves can exceed them in terms of impact on scores.
Writing effectively is a very crucial part of advancement in academic contexts (Rosenfeld et al. 2004; Rosenfeld et al. 2001), and generally, it is a leading contributor to anyone’s progress in the professional environment (Tardy and Matsuda 2009). It is an essential skill enabling individuals to have a remarkable role in today’s communities (Cumming 2001; Dunsmuir and Clifford 2003). Capable and competent L2 writers demonstrate their ideas in written form, present and discuss their contentions, and defend their stances in different circumstances (Archibald 2004; Bridgeman and Carlson 1983; Brown and Abeywickrama 2010; Cumming 2001; Hinkel 2009; Hyland 2004). Writing correctly and impressively is vital as it ensures that ideas and beliefs are expressed and transferred effectively. Being capable of writing well in the academic environment leads to better scores (Faigley et al. 1981; Graham et al. 2005; Harman 2013). It also helps those who require admission to different organizations of higher education (Lanteigne 2017) and provides them with better opportunities to get better job positions. Business communications, proceedings, legal agreements, and military agreements all have to be well written to transmit information in the most influential way (Canseco and Byrd 1989; Grabe and Kaplan 1996; Hyland 2004; Kroll and Kruchten 2003; Matsuda 2002). Notably, until the mid-1980s, L2 writing in general, and academic L2 writing in particular, was hardly regarded as a major part of standard language tests worth assessing in its own right. Later, principally owing to the stated requirements of some universities, it was first recognized as an optional section in these tests and has more recently become an indispensable and integral part of them.
L2 writing is not merely the adequate use of grammar and vocabulary in composing a text; rather, it is about content, organization, and the accurate and proper use of the linguistic and textual resources of the language (Chenoweth and Hayes 2001; Cumming 2001; Holmes 2006; Hughes 2003; Sasaki 2000; Weissberg 2000; Wiseman 2012). The essay, as one of the formal practices of writing, has become a major part of formal education in different countries. It is used by universities and institutes in selecting qualified applicants, and applicants’ mastery and comprehension of L2 writing are evaluated by their performance in essay writing.
The essay, as one of the most formal types of writing, constitutes a setting in which clear explanations and arguments on a given topic are expected (Kane 2000; Muncie 2002; Richards and Schmidt 2002; Spurr 2005). The first steps in writing an essay are to gain a good grasp of the topic, understand the question raised, produce the response in an organized way, select the proper lexicon, and use the best structures (Brown and Abeywickrama 2010; Wyldeck 2008). To many, writing an essay is daunting, yet it is a key to success. It makes students think critically about a topic, gather information, organize and develop an idea, and finally produce a fulfilling written text (Levin 2009; Mackenzie 2007; McLaren 2006; Wyldeck 2008).
L2 writing has had a great impact on the field of teaching and learning and is now viewed not only as an independent skill in the classroom but also as an integral aspect of the process of instruction, learning, and, most recently, assessment (Archibald 2001; Grabe and Kaplan 1996; MacDonald 1994; Nystrand et al. 1993; Raimes 1991). Now, it is not possible to think of a dependable test of English language proficiency without a section on essay writing, especially when academic and educational purposes are of concern. Educational Testing Service (ETS) and Cambridge English Language Assessment offer a particular section on essay writing for their tests of English language proficiency. The independent TOEFL iBT writing section, the objective of which is to gauge and assess learners’ ability to logically and precisely express their opinions using their L2, requires learners to write well at the sentence, paragraph, and essay level. It is written on a computer using a word processing program with only rudimentary features, without a grammar or spelling checker. Generally, the essay should have an introduction, a body, and a conclusion. A standard essay usually has four paragraphs, five is possibly better, and six is too many (Biber et al. 2004; Cumming et al. 2000). TOEFL iBT writing is scored based on the candidates’ performance on two tasks in the writing section. Candidates should complete at least one of the writing tasks. Scoring can be done either by human raters or automatically (the e-rater). Using human judgment for assessing content and meaning along with automated scoring for evaluating linguistic features ensures the consistency and reliability of scores (Jamieson and Poonpon 2013; Kong et al. 2009; Weigle 2013).
The Graduate Record Examination (GRE) analytic writing section consists of two different essay tasks, an “issue task” and an “argument task”, the latter being the focus of the present study. Akin to the TOEFL iBT, the GRE essay is written on a computer employing very basic features of a word processing program. Each essay has an introduction including some contextual and background information about what is going to be analyzed, and a body in which complex ideas should be articulated clearly and effectively, using enough examples and relevant reasons to support the thesis statement. Finally, the claims and opinions have to be summed up coherently in the concluding part (Broer et al. 2005). The GRE essay is scored twice on a holistic scale, and usually the average score is reported if the two scores are within one point; otherwise, a third reader steps in and examines the essay (Staff 2017; Zahler 2011).
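The adjudication rule just described can be sketched as follows; how the third reading is combined with the original scores is simplified here and is an assumption for illustration, not the official ETS procedure.

```python
# Sketch of GRE-style holistic adjudication: report the average when the two
# readers' scores are within one point of each other; otherwise require a
# third reading. Combining the third score is simplified here (averaged with
# whichever original score it is closer to), which is an assumption.

def adjudicate(score1, score2, third_score=None):
    if abs(score1 - score2) <= 1:
        return (score1 + score2) / 2
    if third_score is None:
        raise ValueError("Scores differ by more than one point; a third reading is required.")
    closer = score1 if abs(third_score - score1) <= abs(third_score - score2) else score2
    return (third_score + closer) / 2

print(adjudicate(4, 5))     # -> 4.5
print(adjudicate(3, 5, 4))  # -> 3.5 (ties resolved toward the first reader in this sketch)
```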
IELTS essay writing (in both Academic and General Modules) involves developing a formal five-paragraph essay in 40 min. Similar to essays in other exams, it should include an introductory paragraph, two to three body paragraphs, and a concluding paragraph (Aish and Tomlinson 2012 ; Dixon 2015 ; Jakeman 2006 ; Loughead 2010 ; Stewart 2009 ). To score IELTS essay writing, the received scores for the (four) components of the rubric are averaged (Fleming et al. 2011 ).
The writing sections of the Cambridge Advanced Certificate in English (CAE) and the Cambridge English: Proficiency (CPE) exams have two parts. The first part is compulsory and candidates are asked to write in response to an input text including articles, leaflets, notices, and formal and/or informal letters. In the second part, the candidates must select one of the writing tasks that might be a letter, proposal, report, or a review (Brookhart and Haines 2009 ; Corry 1999 ; Duckworth et al. 2012 ; Evans 2005 ; Moore 2009 ). The essays should include an introduction, a body, and a conclusion (Spratt and Taylor 2000 ). Similar to IELTS essay writing, these exams are scored analytically. The scores are added up and then converted to a scale of 1 to 20 (Brookhart 1999 ; Harrison 2010 ).
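The "add up and rescale" step can be sketched as follows; the number of subscales and their 0-5 range are assumptions for illustration, not Cambridge's actual marking scheme.

```python
# Sketch of summing analytic subscores and converting the raw total onto a
# 1-20 reporting scale. Four assumed subscales, each scored 0-5.

RAW_MAX = 4 * 5   # four assumed subscales, each 0-5

def to_twenty_point_scale(subscores):
    raw = sum(subscores)                   # 0 .. RAW_MAX
    return 1 + round(19 * raw / RAW_MAX)   # map 0..RAW_MAX onto 1..20

print(to_twenty_point_scale([4, 3, 4, 5]))  # raw 16 -> 16 on the 1-20 scale
```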
Assessing L2 writing proficiency is a flourishing area, and the precise assessment of writing is a critical matter. In practice, learners are generally expected to produce a piece of text so that raters can evaluate the overall quality of their performance using a variety of scoring systems, of which holistic and analytic scoring are the most common and widely accepted (Anderson 2005; Brossell 1986; Brown and Abeywickrama 2010; Hamp-Lyons 1990, 1991; Kroll 1990). Today, the significance of L2 writing assessment is increasing not only in language-related fields of study but arguably in all disciplines, and it is a pressing concern in various educational and vocational settings.
L2 writing assessment is the focal point of an effective teaching process for this complicated skill (Jones 2001), and diligent assessment of writing complements the way it is taught (White 1985). The challenging and thorny nature of both assessment and writing impedes the reliable assessment of an essay (Muenz et al. 1999), and to date a plethora of research studies have been conducted to examine the validity and reliability of writing assessment. Huot (1990) argues that writing assessment encounters difficulty because there are usually more than two or three raters assessing essays, which may lead to uncertainty in writing assessment.
L2 writing assessment is generally prone to subjectivity and bias, and “the assessment of writing has always been threatened due to raters’ biasedness” (Fahim and Bijani 2011, p. 1). Ample studies document that raters’ assessments and judgments are biased (Kondo-Brown 2002; Schaefer 2008). These studies also suggest that, in order to reduce the bias and subjectivity in assessing L2 writing, standard and well-described rating scales, viz. rubrics, should be established (Brown and Jaquith 2007; Diederich et al. 1961; Hamp-Lyons 2007; Jonsson and Svingby 2007; Aryadoust and Riazi 2016). Furthermore, some studies point to the tendency of many raters toward subjectivity in writing assessment (Eckes 2005; Lumley 2005; O’Neil and Lunz 1996; Saeidi et al. 2013; Schaefer 2008). In light of these considerations, it becomes important to improve consistency among raters’ evaluations of writing proficiency and to increase the reliability and validity of their judgments, avoiding bias and subjectivity and producing greater agreement between raters and ratings. The most notable move toward attaining this objective is using rubrics (Cumming 2001; Hamp-Lyons 1990; Hyland 2004; Raimes 1991; Weigle 2002). In plain terms, rubrics ensure that all raters evaluate a writing task by the same standards (Biggs and Tang 2007; Dunsmuir and Clifford 2003; Spurr 2005). To curtail probable subjectivity and personal bias in assessing one’s writing, there should be determined, standard criteria for assessing different types of writing tasks (Condon 2013; Coombe et al. 2012; Shermis 2014; Weigle 2013).
Assessment rubrics (alternatively called instruments) should be reliable, valid, practical, fair, and constructive to learning and teaching (Anderson et al. 2011). Moskal and Leydens (2000) considered validity and reliability the two most significant factors when rubrics are used for assessing an individual’s work. Although researchers may define validity and reliability in various ways (for instance, Archibald 2001; Brookhart 1999; Bachman and Palmer 1996; Coombe et al. 2012; Cumming 2001; Messick 1994; Moskal and Leydens 2000; Moss 1994; Rezaei and Lovorn 2010; Weigle 2002; White 1994; Wiggan 1994), they generally agree that validity in this area of investigation is the degree to which the criteria support the interpretations of what is to be measured, and that reliability is the consistency of assessment scores regardless of time and place. Rubrics and rating scales should therefore be developed so as to satisfy these two requirements and equip raters and scorers with a dependable tool for assessing writing tasks fairly. Arguably, “the purpose of the essay task, whether for diagnosis, development, or promotion, is significant in deciding which scale is chosen” (Brossell 1986, p. 2). As rubrics should be conceived and designed around the purpose of assessing any given type of written task (Crusan 2015; Fulcher 2010; Knoch 2009; Malone and Montee 2014; Weigle 2002), the development and validation of rating scales are very challenging undertakings.
Writing rubrics can also help teachers gauge their own teaching (Coombe et al. 2012). Rubrics are generally perceived as significant resources available to teachers, enabling them to provide insightful feedback on L2 writing performance and assess learners’ writing ability (Brown and Abeywickrama 2010; Knoch 2011; Shaw and Weir 2007; Weigle 2002). Similarly, but from another perspective, rubrics help learners to follow a clear route of progress and to contribute to their own learning (Brown and Abeywickrama 2010; Eckes 2012). Well-defined rubrics are constructive criteria that help learners understand what the desired performance is (Bachman and Palmer 1996; Fulcher and Davidson 2007; Weigle 2002). Employing rubrics in writing assessment helps learners understand raters’ and teachers’ expectations better, judge and revise their own work more successfully, engage in self-assessment of their learning, and improve the quality of their writing. Rubrics can thus be an effective tool enabling learners to focus their efforts, produce work of higher quality, get better grades, find better jobs, and feel more engaged and confident about doing their assignments (Bachman and Palmer 2010; Cumming 2013; Kane 2006).
Rubrics help scorers evaluate writers’ performances by providing very clear descriptions of organization and coherence, structure and vocabulary, fluency of expression, and ideas and opinions, among other things. They are also practical for describing a writer’s competence in sequencing ideas logically within a paragraph and in using sufficient, appropriate grammar and vocabulary related to the topic (Kim 2011; Pollitt and Hutchinson 1987; Weigle 2002). Employing rubrics reduces the time required to assess a writing performance and, most importantly, well-defined rubrics spell out the criteria in concrete terms, enabling scorers and raters to judge a work against standard and unified yardsticks (Gustilo and Magno 2015; Kellogg et al. 2016; Klein and Boscolo 2016).
Selecting and designing an effective rating scale hinges on the purpose of the test (Alderson et al. 1995; Attali et al. 2012; Becker 2011; East 2009). Although rubrics are crucial in essay evaluation, choosing the appropriate rating scale and forming criteria based on the purpose of assessment are just as important (Bacha 2001; Coombe et al. 2012). A considerable proportion of scale developers appear to prefer adapting their scoring scales from a well-established existing one (Cumming 2001; Huot et al. 2009; Wiseman 2012), and the relevant literature supports adapting rating scales used in large-scale tests for academic purposes (Bacha 2001; Leki et al. 2008). Yet East (2009) warned against the adaptation of rating scales from similar tests, especially when they are to be used across languages.
Holistic and analytic scoring systems are now widely used to identify learners’ writing proficiency levels for different purposes (Brown and Abeywickrama 2010; Charney 1984; Cohen 1994; Coombe et al. 2012; Cumming 2001; Hamp-Lyons 1990; Reid 1993; Weir 1990). Unlike the analytic scoring system, the holistic one takes the written text as a whole into consideration; it generally emphasizes what is done well and what is deficient (Brown and Hudson 2002; White 1985). The analytic scoring system (multi-trait rubrics), by contrast, rates discrete components (Bacha 2001; Becker 2011; Brown and Abeywickrama 2010; Coombe et al. 2012; Hamp-Lyons 2007; Knoch 2009; Kuo 2007; Shaw and Weir 2007). To Weigle (2002), accuracy, cohesion, content, organization, register, and appropriacy of language conventions are the key components or traits of an analytic scoring system. One of the early analytic scoring rubrics for writing was the ESL Composition Profile by Jacobs et al. (1981), which included five components, namely content, organization, vocabulary, language use, and mechanics.
Each scoring system has its own merits and limitations. One advantage of analytic scoring is its distinctive reliability (Brown et al. 2004; Zhang et al. 2008). Some researchers (e.g., Johnson et al. 2000; McMillan 2001; Ward and McCotter 2004) contend that analytic scoring provides the maximum opportunity for reliability between raters and ratings, since raters can apply one set of scoring criteria to different writing tasks at a time. Yet Myford and Wolfe (2003) considered the halo effect one of the major disadvantages of analytic rubrics. The most commonly recognized merit of holistic scoring is its feasibility, as it requires less time. However, it does not assess separate criteria and it involves the personal reflection of raters, which can affect its validity in comparison with analytic scoring (Elder et al. 2007; Elder et al. 2005; Noonan and Sulsky 2001; Roch and O’Sullivan 2003). Cohen (1994) stated that the major demerit of the holistic scoring system is its relative weakness in providing enough diagnostic information about learners’ writing.
Many research studies have examined the effect of analytic and holistic scoring systems on writing assessment. For instance, more than half a century ago, Diederich et al. (1961) carried out a study of holistic scoring in a large-scale testing context: 300 essays were rated by 53 raters, and the results showed variation in ratings based on three criteria, namely ideas, organization, and language. Almost two decades later, Borman (1979) conducted a similar study on 800 written tasks and found that the variations could be attributed to ideas, organization, and supporting details. Charney (1984) compared analytic and holistic rubrics for assessing writing performance in terms of validity and found the holistic scoring system to be more valid. Bauer (1981) compared the cost-effectiveness of analytic and holistic rubrics in assessing essay tasks and found that the time needed to train raters to use analytic rubrics was about twice that required to train raters to use holistic ones; moreover, grading the essays with analytic rubrics took four times as long as grading them with holistic rubrics. Some studies reported findings corroborating that holistic scoring can be the preferred scoring system in large-scale testing contexts (Bell et al. 2009). Chi (2001) compared analytic and holistic rubrics in terms of their appropriacy, the agreement of learners’ scores, and rater consistency; the findings revealed that raters using the holistic scoring system outperformed those employing analytic scoring in terms of inter-rater and intra-rater reliability. At the same time, there is research suggesting the superiority of analytic rubrics in terms of reliability and accuracy in scoring (Birky 2012; Brown and Hudson 2002; Diab and Balaa 2011; Kondo-Brown 2002). It is, generally speaking, difficult to decide which one is best, and the research findings so far can best be described as inconclusive.
The rubrics of internationally recognized tests used in assessing essays share many components, including organization and coherence, task achievement, range of vocabulary used, grammatical accuracy, and types of errors. The wording, however, usually differs across rubrics: for instance, the “task achievement” used in the IELTS rubrics appears as “realization of tasks” in CPE and CAE, “content coverage” in the GRE, and “task accomplishment” in TOEFL iBT. Similarly, the point of focus of the rubrics may differ across tests. Punctuation, spelling, and target readers’ satisfaction, for example, are explicitly emphasized in CAE and CPE, while none of them is mentioned in the GRE and TOEFL iBT; instead, idiomaticity and exemplification are listed in the TOEFL iBT rubrics, and using enough supporting ideas to address the topic and task is the focus of the GRE rating scales (Brindley 1998; Hamp-Lyons and Kroll 1997; White 1984).
Broadly speaking, the rubrics employed in assessing L2 writing include the above-mentioned components, but as noted previously, they are commonly expressed in different wordings. For example, the criteria used in the IELTS Task 2 rating scale are task achievement, coherence and cohesion, lexical resources, and grammatical range and accuracy; candidates’ work is assessed and scored against these criteria. Each criterion has its own descriptors, which determine the performance expected to secure a certain score on that criterion, and the summative outcome, together with the standards established for the criteria, determines whether the candidate has attained the required qualification. The summative outcome of the IELTS Task 2 rating scale ranges between 0 and 9. Similar components are used in other standard exams such as CAE and CPE, whose summative outcomes range from 1 to 5; their criteria assess content (relevance and completeness), language (vocabulary, grammar, punctuation, and spelling), organization (logic, coherence, variety of expressions and sentences, and proper use of linking words and phrases), and communicative achievement (register, tone, clarity, and interest). CAE and CPE have their own descriptors that indicate the standard learners must achieve for each criterion (Betsis et al. 2012; Capel and Sharp 2013; Dass 2014; Obee 2005). The GRE scoring scale likewise has the main components of the other essay writing scales but in different wordings; its standards and summative outcomes are reported on a 0–6 scale, with scores of 1 to 6 denoting fundamentally deficient, seriously flawed, limited, adequate, strong, and outstanding, respectively. Like the GRE, the TOEFL iBT is scored from 0–5, and akin to the GRE, the Independent Writing Rubric for the TOEFL iBT delineates its descriptors clearly and precisely (Erdosy 2004; Gass et al. 2011).
Abundant research has shown that ideas and content, organization, cohesion and coherence, vocabulary and grammar, and language mechanics are the main components of essay rubrics (Jacobs et al. 1981; Schoonen 2005). What has been considered a missing element in analytic rating scales is raters’ knowledge of, and familiarity with, rubrics and their corresponding elements as one of the key yardsticks in measuring L2 writing ability (Arter et al. 1994; Sasaki and Hirose 1999; Weir 1990). Raters play a crucial role in assessing writing, and research points to the impact of raters’ judgments on L2 writing assessment (Connor-Linton 1995; Sasaki 2000; Schoonen 2005; Shi 2001).
The past few decades have witnessed growing research on different scoring systems and on raters’ critical role in assessment. Several recent studies discuss the importance of rubrics in L2 writing assessment (e.g., Deygers et al. 2018; Fleckenstein et al. 2018; Rupp et al. 2019; Trace et al. 2016; Wesolowski et al. 2017; Wind et al. 2018). They commonly regard rubrics as significant tools for measuring L2 learners’ performance and suggest that rubrics enhance the reliability and validity of writing assessment; more importantly, they argue that employing rubrics can increase consistency among raters.
Shi (2001) compared native with non-native and experienced with novice raters, and found that raters apply their own criteria when assessing an essay, virtually regardless of whether they are native or non-native, experienced or novice. Lumley (2002) and Schoonen (2005) compared two groups of raters: trained expert raters provided with no standard rubrics, and untrained novice raters who had standard rubrics. The trained raters without rubrics outperformed the other group in accuracy of essay assessment, underscoring the importance of raters. Rezaei and Lovorn (2010) compared the use of rubrics in summative and formative assessment; they argued that the summative use of rubrics is predominant and overshadows their formative potential, and their results showed that rubrics can be more beneficial when used for formative assessment purposes.
Izadpanah et al. (2014) conducted a study drawing on Jacobs et al. (1981) to see whether the rubrics of one exam can predict the scores of another; in practice, they wanted to examine whether the same score would be obtained if an IELTS rubric were used for assessing CPE or any other standard test. Their findings revealed that the rubrics were comparable with each other in terms of the components by which different standard essays are assessed. Bachman (2000) compared the TOEFL PBT and CPE and found a very meaningful relationship between the scores gained from their essay writing tests. He also concluded that scoring the CPE was usually more difficult than scoring the PBT, and that under similar conditions, exams from UCLES/Cambridge Assessment (like CPE) received lower scores than ones from ETS (like the PBT). In Fleckenstein et al. (2019), experts from different countries linked upper secondary students’ writing profiles elicited in a constructed-response test (integrated and independent essays from the TOEFL iBT) to CEFR levels. The Delphi technique was used to examine intra- and inter-panelist consistency while scoring students’ writing profiles, and the findings showed that panelists were able to provide ratings consistent with the empirical item difficulties, supporting the validity of the estimated cut scores.
Schoonen (2005) and Attali and Burstein (2005) examined the generalizability of writing scores across different essays scored with a single rubric. They analyzed three components of the writing rubric, namely content, language use, and organization, and found that the scores obtained from different essays were similar. Wind (2020) conducted a study to illustrate and explore methods for evaluating the degree to which raters apply a common rating scale consistently in analytic writing assessments; the results indicated a lack of invariance in rating scale category functioning across domains for several raters. Becker (2011) examined different rubrics used to measure writing performance, investigating three types, namely holistic, analytic, and primary-trait scoring systems, to find which is more appropriate for assessing L2 writing. He weighed the merits and demerits of the three rubrics and concluded that none had superiority over the others, making each legitimate for assessing a piece of writing depending on the purpose of writing, the time allocated for assessment, and the raters’ expertise.
In a recent study, Ghaffar et al. (2020) examined the impact of rubrics and co-constructed rubrics on middle school students’ writing performance; their findings indicated that co-constructed rubrics as assessment tools help students perform better in their writing because of their familiarity with these rubrics. In addition, some researchers contend that the evidence on rubrics is inconclusive and that their use can be controversial, especially when they are employed only for summative assessment purposes, and that rubrics are more advantageous when used for both summative and formative assessment (Andrade 2000; Broad 2003; Ene and Kosobucki 2016; Inoue 2004; Panadero and Jonsson 2013; Schirmer and Bailey 2000; Wilson 2006, 2017).
What all of these studies indicate is that employing well-developed rubrics increases equality and fairness in writing assessment. They also suggest that various factors can affect writing assessment, especially raters’ expertise and the time allocated to rating (Bacha 2001; Ghalib and Hattami 2015; Knoch 2009, 2011; Lu and Zhang 2013; Melendy 2008; Nunn 2000; Nunn and Adamson 2007). The purpose of the present study is twofold: first, to investigate the consistency among different standard rubrics in writing assessment; second, to examine whether any of these rubrics could be used as a predictor of the others and whether they all tap the same underlying construct.
To meet the objectives of the study, 200 samples of Academic IELTS Task 2 (i.e., essay writing) were used. The samples were randomly selected from more than 800 essays written as part of academic IELTS tests taken between 2015 and 2016 at an official IELTS test center, a representative of IDP Australia. The essays were written in response to different prompts. The instructions for IELTS writing Task 2 require test takers to write at least 250 words, a condition that 21 samples did not meet. Test takers were 19 to 42 years of age, 120 of them female and 80 male.
One of the raters in this study was an (anonymous) official IELTS examiner who had scored the essays officially; the other raters were four experienced IELTS instructors from the English department of a nationally prominent language institute, three males and one female, between 26 and 39 years of age, with 5 to 12 years of English language teaching experience. These four raters were selected based on their qualifications, teaching credentials and certifications, and years of teaching experience, particularly in IELTS classes. All four raters held M.A. degrees in TEFL, had taught various writing courses at universities and language institutes, and were familiar with different scoring systems and their components. Each rater was invited to an individual briefing session with one of the researchers to ensure familiarity with the rubrics of interest and to discuss practical considerations pertaining to the study. The raters were asked to read and score each essay four times, each time with one of the four rubrics (TOEFL iBT, GRE, CPE, and CAE). They completed the scoring over 12 weeks, during which time they were instructed not to share ideas about the task, and they were modestly compensated for the scoring.
Four sets of rubrics for different writing tests (i.e., Independent TOEFL iBT, GRE, CPE, and CAE) were taken from ETS and Cambridge English Language Assessment, and the official IELTS scores of the 200 essays were collected from the IELTS center. The rubrics used for assessing the writing tasks of these five standard exams were analytic rubrics with different scales: a nine-point scale for IELTS Task 2, five-point scales for the GRE and TOEFL iBT, and six-point scales for the CAE and CPE writing tasks. They assess the main components of the essay writing construct, including the range of vocabulary and grammar used in addressing the task, cohesion and organization, and the range of cohesive devices used, presented in different wordings across the rubrics.
Another instrument was a questionnaire designed by the researchers, which included both open-ended and closed-ended questions (see Appendix). Its aim was to determine the raters’ attitudes toward their rating experience and their familiarity with each exam and its corresponding rubrics. The themes of the questionnaire items were determined based on a review of the literature on the factors affecting raters’ performance and attitudes (Brown and Abeywickrama 2010; Coombe et al. 2012; Fulcher and Davidson 2007; Weigle 2002). In addition, an interview was carried out with the four raters to explore their interest in rating and to investigate their familiarity with the exams and their corresponding rating scales.
To carry out the study, the 200 essay samples were first scored by a certified IELTS examiner, whose scores and relevant comments were recorded next to each essay sample. Afterward, all essays were rated by the four other raters, who were kept uninformed of the official IELTS scores. They were provided with the rubrics of the four essay writing tests and were instructed to assess each essay with each of the four rubrics. In this way, in addition to the official IELTS score, each essay received four scores from each rater, that is, 16 scores plus the official IELTS score, for a total of 17 scores per essay. The researcher-made questionnaire was then administered, and an interview was conducted in which the four raters were asked about their interest in rating and about their awareness of, and concerns about, each exam and its rubrics.
The data were analyzed with SPSS, version 22. Initially, the descriptive statistics of the data were computed, and intercorrelations among the 17 scores were calculated to see whether any statistically significant associations could be found among the rubrics. To obtain a better picture of the associations among the scoring rubrics of the different exams, principal component analysis (PCA), a variant of factor analysis, was run to examine the extent to which the rubrics tap the same underlying construct.
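To make the analysis pipeline concrete, the sketch below (in Python rather than SPSS, which the study actually used) shows how the 17 scores per essay could be organized and intercorrelated; the column names and the randomly generated scores are hypothetical placeholders for the real data.

```python
# Hypothetical sketch of the score matrix and intercorrelations (not the authors' SPSS workflow).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

tests = ["CAE", "CPE", "iBT", "GRE"]
# 17 columns: the official IELTS score plus 4 raters x 4 rubrics (CAE1, CPE1, ..., GRE4).
columns = ["IELTS"] + [f"{t}{r}" for r in range(1, 5) for t in tests]

# Placeholder ratings for 200 essays; real scores would be loaded here instead.
scores = pd.DataFrame(rng.normal(6, 1, size=(200, len(columns))), columns=columns)

print(scores.describe().round(2))   # descriptive statistics
print(scores.corr().round(2))       # full 17 x 17 intercorrelation matrix
```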
To address the first research question, intercorrelations were computed among the IELTS, CAE, CPE, TOEFL iBT, and GRE scores. To answer the second research question, factor analysis was run to examine the extent to which the standard essay writing sections of these five tests of English language proficiency tap the same underlying construct. In this section, the results of the intercorrelation and factor analysis computations are reported in detail.
To estimate the intercorrelations among test ratings and raters, Cronbach’s alpha was first calculated for each rater separately over the five sets of scores (i.e., IELTS, CAE, CPE, TOEFL iBT, and GRE) to check the internal consistency of that rater’s ratings. Alpha was then computed for all the raters together to estimate inter-rater reliability. Finally, the intercorrelations between each exam score and the IELTS scores were computed to see which scores correlated most strongly with the IELTS.
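A minimal sketch of these alpha computations, again in Python with hypothetical data, is given below; the cronbach_alpha helper implements one common formulation of Cronbach’s alpha and is not part of any particular package.

```python
# Hypothetical sketch: alpha per rater (IELTS + that rater's four test ratings) and alpha overall.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the summed score)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(0)
tests = ["CAE", "CPE", "iBT", "GRE"]
columns = ["IELTS"] + [f"{t}{r}" for r in range(1, 5) for t in tests]
scores = pd.DataFrame(rng.normal(6, 1, size=(200, 17)), columns=columns)

# Alpha for each rater: that rater's four test ratings plus the official IELTS score.
for r in range(1, 5):
    cols = ["IELTS"] + [f"{t}{r}" for t in tests]
    print(f"Rater {r}: alpha = {cronbach_alpha(scores[cols]):.2f}")

# Alpha over all 17 score columns together (inter-rater reliability).
print(f"All 17 ratings together: alpha = {cronbach_alpha(scores):.2f}")
```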
Table 1 presents the alphas, computed as the average of the intercorrelations among the five sets of scores (the IELTS scores and the four scores given by each rater). Evidently, Rater 1 has an alpha of about .67, which is lower than the other alphas. Because only five sets of scores entered each alpha, this value could still be considered acceptable; nevertheless, the lower alpha could be meaningful, since this rater showed less internal consistency among his ratings.
To see which test ratings given by the four raters agreed least with the IELTS scores, the intercorrelations of each test rating with the IELTS scores were computed, as shown in Table 2. As the table demonstrates, Rater 1’s CPE rating and Rater 4’s TOEFL iBT rating show lower correlations with the IELTS ratings. Afterward, an alpha was computed for the aggregate of all raters’ ratings, including the IELTS scores.
Table 3 shows an alpha of around .86, which could be considered acceptable with regard to the small number of ratings.
To see which rating had a negative effect on the total alpha, the item-total correlation for each test rating was computed; it indicates the extent to which each test rating agrees with the total of the other test ratings, including the IELTS scores. As shown in Table 4, CPE1 and iBT4 had the lowest correlations with the total ratings, and the table also indicates that removing these scores would have increased the total alpha considerably.
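The item-total diagnostics described above could be computed as in the following sketch, again with hypothetical data; the corrected item-total correlation and “alpha if item deleted” are the two quantities reported in Table 4.

```python
# Hypothetical sketch of corrected item-total correlations and "alpha if item deleted".
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(0)
columns = ["IELTS"] + [f"{t}{r}" for r in range(1, 5) for t in ["CAE", "CPE", "iBT", "GRE"]]
scores = pd.DataFrame(rng.normal(6, 1, size=(200, 17)), columns=columns)

for col in scores.columns:
    rest = scores.drop(columns=col)
    item_total_r = scores[col].corr(rest.sum(axis=1))  # agreement with the sum of the other ratings
    alpha_if_deleted = cronbach_alpha(rest)            # total alpha recomputed without this rating
    print(f"{col:>6}: item-total r = {item_total_r:5.2f}, alpha if deleted = {alpha_if_deleted:.2f}")
```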
These results, as expected, confirmed the results found in each rater’s alpha and inter-test correlations computed in the previous section.
This study was carried out on the hypothesis that the construct of essay writing is similar across the different standardized tests (i.e., IELTS, CAE, CPE, TOEFL iBT, and GRE), so that a given essay would be scored similarly under the rubrics and scales of these exams. To test this, the ratings of these exams were examined. The correlation analyses reported above showed acceptable agreement among all test ratings except two, CPE and TOEFL iBT: Rater 1’s CPE ratings and Rater 4’s TOEFL iBT ratings showed the lowest correlations with the other test ratings (.15 and .13, respectively). To obtain a better picture of this issue, a PCA was run to examine the extent to which these exams tap the same underlying construct. Factor analysis provides factor loadings for each item (i.e., test rating); if two or more items load on the same factor, this indicates that these test ratings tap the same construct (i.e., the essay writing construct).
Table 5 presents the results of the Kaiser-Meyer-Olkin (KMO) measure and Bartlett’s test of sphericity on the sampling adequacy for the analysis. The reported KMO is .83, which is larger than the acceptable value (KMO > .5) according to Field (2009). Bartlett’s test of sphericity [χ2(136) = 1377.12, p < .001] was also significant, indicating correlations among the items large enough for PCA; the sample could therefore be considered adequate for running the PCA.
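The sampling-adequacy checks could be reproduced roughly as below, assuming the third-party factor_analyzer package; the random scores are placeholders, so the printed values will not match Table 5.

```python
# Hypothetical sketch of the KMO measure and Bartlett's test of sphericity.
import numpy as np
import pandas as pd
from factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

rng = np.random.default_rng(0)
scores = pd.DataFrame(rng.normal(6, 1, size=(200, 17)),
                      columns=[f"rating_{i}" for i in range(1, 18)])

chi_square, p_value = calculate_bartlett_sphericity(scores)
kmo_per_item, kmo_overall = calculate_kmo(scores)

print(f"Bartlett's test: chi-square = {chi_square:.2f}, p = {p_value:.4f}")
print(f"Overall KMO = {kmo_overall:.2f}")  # values above .5 are conventionally considered acceptable
```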
The next step was to determine the number of factors to retain in the PCA. To do so, the scree plot was examined (Fig. 1). The first point to identify in a scree plot is the point of inflexion, that is, where the slope of the line changes dramatically; only the factors falling to the left of the point of inflexion should be retained. Based on Fig. 1, the point of inflexion appears to be at the fourth factor; therefore, four factors were retained.
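A scree plot like Fig. 1 can be drawn from the eigenvalues of the correlation matrix of the ratings, as in the sketch below (hypothetical data); the point of inflexion is then identified visually.

```python
# Hypothetical sketch of a scree plot from the eigenvalues of the rating correlation matrix.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
scores = rng.normal(6, 1, size=(200, 17))               # 200 essays x 17 ratings (placeholder)

corr = np.corrcoef(scores, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]   # eigenvalues, largest first

plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, marker="o")
plt.xlabel("Component number")
plt.ylabel("Eigenvalue")
plt.title("Scree plot")
plt.show()
```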
According to Table 6, the first four retained factors explain around 60 percent of the total variance, which is quite considerable.
Table 7 presents the factor loadings after varimax rotation. The different test ratings loaded on four factors; in other words, the test ratings that clustered around the same factor appear to load on the same underlying factor or latent variable.
Following the above analysis, the factor loadings were examined further. The factor structure above was obtained by considering only loadings above .4, as suggested by Stevens (2002); a loading of this size explains around 16 percent of the variance in a variable. This cutoff was strict, however, and yielded a limited number of salient loadings. Therefore, a second factor analysis was run with a more lenient absolute cutoff of .3, as suggested by Field (2009), so that more factor loadings emerged and more information was obtained. The loadings above .3 are presented in Table 8; they revealed almost the same factor structure as the previous analysis with absolute values greater than .4. One important difference, however, was that the IELTS ratings now showed loadings on all the factors on which the other tests also loaded. It can therefore be construed that the other tests have significant potential to tap the same construct.
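The rotated solutions in Tables 7 and 8 could be approximated as in the sketch below, which extracts four factors with varimax rotation using the third-party factor_analyzer package (its principal-factor extraction stands in here for the PCA reported in the study) and then suppresses loadings below the .4 and .3 cutoffs; the data are hypothetical placeholders.

```python
# Hypothetical sketch: four varimax-rotated factors with small loadings suppressed.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
columns = ["IELTS"] + [f"{t}{r}" for r in range(1, 5) for t in ["CAE", "CPE", "iBT", "GRE"]]
scores = pd.DataFrame(rng.normal(6, 1, size=(200, 17)), columns=columns)

fa = FactorAnalyzer(n_factors=4, rotation="varimax", method="principal")
fa.fit(scores)

loadings = pd.DataFrame(fa.loadings_, index=scores.columns,
                        columns=[f"Factor {i}" for i in range(1, 5)])

for cutoff in (0.4, 0.3):
    print(f"\nLoadings with absolute values below {cutoff} suppressed:")
    print(loadings.where(loadings.abs() >= cutoff).round(2))
```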
After estimating reliability with Cronbach’s alpha and examining the factor structure, it was decided to omit Rater 1 from the analysis because of the unfamiliarity with the exam and its corresponding rubrics that he reported in the questionnaire.
Table 9 and Fig. 2 (scree plot) show the factor structure after removing Rater 1. The scree plot indicates that four factors should be retained in the analysis, and Table 9 shows that these four factors explain about 70 percent of the total variance, which is quite satisfactory.
Fig. 2. Scree plot (Rater 1 removed)
Finally, Table 10 shows that after removing Rater 1’s data, the ratings of Raters 3 and 4 loaded on the same factors as the IELTS. As in the previous factor analysis, the IELTS ratings again showed loadings on all the factors on which the other tests loaded, except iBT4. All in all, the factor analysis results confirm the earlier alpha computations, which showed that the iBT4 ratings had the lowest correlations with the total ratings.
The purpose of the present study was to examine the consistency of the rubrics endorsed for assessing the writing tasks of internationally recognized tests of English language proficiency. Standard rubrics can be considered a constructive tool that helps raters assess different types of essays (Busching 1998), and using rubrics enhances the reliability of essay assessment provided that the rubrics are well described and tap the same construct (Jonsson and Svingby 2007). The current study examined the agreement among different essay writing rubrics with regard to their major components, namely organization, coherence and cohesion, range of lexical and grammatical complexity, and accuracy.
The results of this study show that, all in all, there is a high correlation among the raters (i.e., the IELTS examiner and the four other raters) and among the rating scores (i.e., the official IELTS scores and the 16 test ratings received from the four raters). The intercorrelations among test ratings and raters, as well as the inter-item correlations between each test rating and the IELTS scores, revealed that CPE1 and iBT4 had the least agreement with the official IELTS ratings. These low correlations were therefore investigated in a follow-up by giving the four raters a questionnaire with both open-ended and closed-ended questions; the raters’ responses indicated the extent to which they were familiar with each exam and its corresponding rubrics.
The responses of two of the raters, Rater 1 for CPE and Rater 4 for TOEFL iBT, proved illuminating in explaining their performance. Rater 1’s responses to the questionnaire showed that he had no experience teaching CPE classes, although his responses to other questions indicated familiarity with this exam and its essay scoring rubrics. Rater 4’s responses revealed that she had no experience teaching TOEFL iBT and no familiarity with the exam and its corresponding rating scales. The outcome of the interview with Rater 4 suggests that using well-trained raters leads to fewer problems in rating. What Rater 4 stated in her questionnaire and interview responses is in line with the findings of Sasaki and Hirose (1999), who concluded that familiarity with different tests and their relevant rubrics leads to better scoring. Additionally, the results of the present study are consistent with what Schoonen (2005), Attali and Burstein (2005), Wind et al. (2018), Deygers et al. (2018), Wesolowski et al. (2017), Trace et al. (2016), Fleckenstein et al. (2018), and Rupp et al. (2019) found in their studies, that is, that employing rubrics enhances the reliability of writing assessment and the agreement among raters.
To this point, the results of this study provide an affirmative answer to the first research question, indicating very high agreement among the test ratings and the raters. To examine whether the construct of essay writing is similar across the different standardized tests and whether identical essays are scored similarly under the internationally recognized rubrics of these exams, an inter-item correlation analysis was also computed; it indicated that CPE1 and iBT4 had the lowest correlations with the total ratings. This could be due either to the raters’ inconsistencies or to the possibility that essay writing is conceptualized differently in the scoring rubrics of these exams. The follow-up survey likewise suggested that the disagreement between Raters 1 and 4 and the other raters stemmed either from rater discrepancies or from the way the writing task is conceptualized differently in the rubrics of each exam. This is supported by Weigle (2002), who concluded that raters should have a good grasp of scoring and its essential details, and who also argued that raters should have a sharp conceptualization of the construct of essay writing.
The results of the rotated component matrix revealed that all of Rater 3’s and Rater 4’s ratings loaded on the same factor, meaning that they tap the same construct. Examination of the other factor loadings revealed that CAE1, iBT1, CAE2, and iBT2 also loaded on the same factor as the IELTS, suggesting that these raters’ conceptualizations of the essay writing construct in CAE and TOEFL iBT were closer to that of the IELTS raters than to those of the CPE and GRE scorers. What remained questionable, however, was why CPE1 and GRE1 did not load on the same factor as CAE1 and iBT1, and why CPE1 and GRE1 loaded on the same factor as CPE2 and GRE2. Why CPE1 also loaded with GRE2 and CPE2 on the same factor likewise remained open to discussion.
The results above come from the PCA that considered only factor loadings above .4, following Stevens (2002). As this cutoff was strict and the number of salient loadings limited, a second analysis was run with a less rigorous loading cutoff of .3, following Field’s (2009) suggestion. The findings showed almost the same factor loadings as the previous factor analysis: Raters 3 and 4 again loaded on the same factor, but this time the IELTS scores loaded on the same factor as CAE2 and iBT2, while CAE1, GRE1, and iBT1 loaded on the same factor; what remained debatable was why CPE1 loaded with GRE2 and CPE2.
Up to this point, all the results obtained from the alpha computations and the factor analysis pointed to something different about Rater 1, and it was therefore decided to omit Rater 1 from the PCA. Interestingly, after interviewing all four raters and scrutinizing the questionnaire responses, it was found that Rater 1 had indicated that he had no experience teaching CPE classes, and yet he claimed to be familiar with this exam and its related rating scales, contrary to the other raters’ responses to the questionnaire.
After omitting Rater 1 from the PCA, the findings showed that Rater 3’s and Rater 4’s test ratings loaded on the same factor, and this time the IELTS loaded on the factors on which all the other tests had loaded except iBT4, meaning that Rater 4 had no agreement with the IELTS raters in rating the essays. Rater 4’s questionnaire responses indicated that she had no experience teaching this particular exam and no familiarity with the exam and its corresponding rubrics. This rater also believed that exams like TOEFL iBT and others developed by ETS were more difficult to score and generally received lower scores than the Cambridge English Language Assessment exams. Rater 4’s views were not in line with the findings of Bachman (2000), who compared the TOEFL PBT and CPE essay tasks and concluded that scoring the CPE is more difficult than scoring the TOEFL PBT; contrary to the findings of the present study, he also concluded that exams like the CPE received lower scores.
The results of the alpha computations and the factor analysis underscore the prominent role of raters in assessing writing. These results are in line with the findings of Lumley (2002) and Schoonen (2005), who argue that raters should be considered one of the most important concerns in the assessment process. Shi (2001) likewise argued for the significant role of raters, who assess essays using their own criteria in addition to the standard, predetermined rating scales. The factor analysis in the present study revealed this role as well: test ratings tended to load on the same factor, especially when the essays were rated by the same rater.
This study aimed to examine the consistency and reliability among different standard rubrics and rating scales used for assessing writing in internationally recognized tests of English language proficiency. The results of the alpha estimation provide evidence of a strong association among the raters and test ratings, and the PCA indicates that these test ratings tap the same underlying construct. The study encourages the use of practical rater training and rater training courses that provide raters with authentic opportunities to become familiar with different rubrics. This area requires more investigation into how raters themselves might affect the rating and how employing trained and certified raters can affect the rating process. Test administrators and developers are another group who can benefit from the findings: if all the test ratings tap the same underlying construct and different essay writing rating scales can be predictors of each other, it becomes practical to set standard essay writing rubrics that can be used for rating and assessing writing. As the findings of the present study also suggest, the developers of the writing rubrics for these tests may take stock of the implication that there are critical constructs within writing that weigh more heavily when assessed across standardized measures. Teachers and learners are other groups who can benefit from the results of this study: they might devote less time to parsing these rubrics, whose descriptors are stated in different words, and instead spend more time practicing writing and essay writing tasks.
The study examined the reliability of the analytic rubrics used in assessing the essay component of the following standardized examinations: IELTS, TOEFL iBT, CAE, CPE, and GRE. While the first four are indeed English language proficiency examinations designed to assess the language skills of English as a Second Language (ESL) learners, the last one (i.e., the GRE) is intended for those seeking admission to graduate programs in the U.S., regardless of first language background. GRE candidates are, at minimum, bachelor’s degree holders, most of whom are native speakers of English whose education was completed in English, while the minority are international applicants to U.S. master’s and Ph.D. programs from various language backgrounds. The GRE writing task, in other words, is not intended for L2 English learners. It therefore seems that juxtaposing the GRE requirements for the writing task, which zero in on argumentation and critical thinking, with the English language proficiency standards measured by the other four tests can dilute the generalizability of the results with reference to this particular exam, owing to its divergent assessment purpose and intended candidate profile. Future researchers are encouraged to take heed of this limitation of the present study.
The authors were provided with the data for research purposes. Sharing the data with a third party requires obtaining consent from the organization which provided the data. The materials are available in the article.
Aish, F., & Tomlinson, J. (2012). Get ready for IELTS writing . London: HarperCollins.
Alderson, J. C., Clapham, C., & Wall, D. (1995). Language test construction and evaluation . Cambridge: Cambridge University Press.
Anderson, B., Bollela, V., Burch, V., Costa, M. J., Duvivier, R., Galbraith, R., & Roberts, T. (2011). Criteria for assessment: consensus statement and recommendations from the Ottawa 2010 conference. Medical Teacher , 33 (3), 206–214.
Anderson, C. (2005). Assessing writers . Portsmouth: Heinemann.
Andrade, H. G. (2000). Using rubrics to promote thinking and learning. Educational Leadership , 57 (5), 13–18.
Archibald, A. (2001). Targeting L2 writing proficiencies: Instruction and areas of change in students’ writing over time. International Journal of English Studies , 1 (2), 153–174.
Archibald, A. (2004). Writing in a second language. In The higher education academy subject centre for languages, linguistics and area studies Retrieved from http://www.llas.ac.uk/resources/gpg/2175 .
Arter, J. A., Spandel, V., Culham, R., & Pollard, J. (1994). The impact of training students to be self-assessors of writing . Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans.
Aryadoust, V., & Riazi, A. M. (2016). Role of assessment in second language writing research and pedagogy. Educational Psychology , 37 (1), 1–7.
Attali, Y., & Burstein, J. (2005). Automated essay scoring with e-rater V.2.0 (RR-04-45) . Princeton: ETS.
Attali, Y., Lewis, W., & Steier, M. (2012). Scoring with the computer: alternative procedures for improving the reliability of holistic essay scoring. Language Testing , 30 (1), 125–141.
Bacha, N. (2001). Writing evaluation: what can analytic versus holistic essay scoring tell? System , 29 (3), 371–383.
Bachman, L., & Palmer, A. S. (2010). Language assessment in practice: developing language assessments and justifying their use in the real world . Oxford: Oxford University Press.
Bachman, L. F. (2000). Modern language testing at turn of the century: assuring that what we count counts. Language Testing , 17 (1), 1–42.
Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: designing and developing useful language tests . Oxford: Oxford University Press.
Bauer, B. A. (1981). A study of the reliabilities and the cost-efficiencies of three methods of assessment for writing ability . Champaign: University of Illinois.
Becker, A. (2011). Examining rubrics used to measure writing performance in U.S. intensive English programs. The CATESOL Journal , 22 (1), 113–117.
Bell, R. M., Comfort, K., Klein, S. P., McCaffrey, D., Ormseth, T., Othman, A. R., & Stecher, B. M. (2009). Analytic versus holistic scoring of science performance tasks. Applied Measurement in Education , 11 (2), 121–137.
Betsis, A., Haughton, L., & Mamas, L. (2012). Succeed in the new Cambridge proficiency (CPE)- student’s book with 8 practice tests . Brighton: GlobalELT.
Biber, D., Byrd, M., Clark, V., Conrad, S. M., Cortes, E., Helt, V., & Urzua, A. (2004). Representing language use in the university: analysis of the TOEFL 2000 spoken and written academic language corpus. In ETS research report series (RM-04-3, TOEFL Report MS-25) . Princeton: ETS.
Biggs, J., & Tang, C. (2007). Teaching for quality learning at university . Maidenhead: McGraw Hill.
Birky, B. (2012). A good solution for assessment strategies. A Journal for Physical and Sport Educators , 25 (7), 19–21.
Borman, W. C. (1979). Format and training effects on rating accuracy and rater errors. Journal of Applied Psychology , 64 (4), 410–421.
Bridgeman, B., & Carlson, S. (1983). Survey of academic writing tasks required of graduate and undergraduate foreign students. In ETS Research Report Series (RR-83-18, TOEFL-RR-15) . Princeton: ETS.
Brindley, G. (1998). Describing language development? Rating scales and SLA. In L. F. Bachman, & A. D. Cohen (Eds.), Interfaces between second language acquisition and language testing research , (pp. 112–140). Cambridge: Cambridge University Press.
Broad, B. (2003). What we really value: beyond rubrics in teaching and assessing writing . Logan: Utah State UP.
Broer, M., Lee, Y. W., Powers, D. E., & Rizavi, S. (2005). Ensuring the fairness of GRE writing prompts: Assessing differential difficulty. In ETS research report series (GREB Report No. 02-07R, RR-05-11) . Princeton: ETS.
Brookhart, G., & Haines, S. (2009). Complete CAE student’s book with answers . Cambridge: Cambridge University Press.
Brookhart, S. M. (1999). The art and science of classroom assessment: the missing part of pedagogy. ASHE-ERIC Higher Education Report , 27 (1), 1–128.
Brossell, G. (1986). Current research and unanswered questions in writing assessment. In K. Greenberg, H. Wiener, & R. Donovan (Eds.), Writing assessment: issues and strategies , (pp. 168–182). New York: Longman.
Brown, A., & Jaquith, P. (2007). Online rater training: perceptions and performance . Paper presented at the Current Trends in English Language Testing Conference (CTELT), Dubai.
Brown, G. T. L., Glasswell, K., & Harland, D. (2004). Accuracy in the scoring of writing: studies of reliability and validity using a New Zealand writing assessment system. Assessing Writing , 9 (2), 105–121.
Brown, H. D., & Abeywickrama, P. (2010). Language assessment: Principles and classroom practice . Lewiston: Pearson Longman.
Brown, J. (2002). Training needs assessment: a must for developing an effective training program. Sage Journal , 31 (4), 569–578 https://doi.org/10.1177/009102600203100412 .
Brown, J. D., & Hudson, T. (2002). Criterion-referenced language testing. Cambridge applied linguistics series . Cambridge: Cambridge University Press.
Busching, B. (1998). Grading inquiry projects. New Directions for Teaching and Learning , (74), 89–96.
Canseco, G., & Byrd, P. (1989). Writing required in graduate courses in business administration. TESOL Quarterly , 23 (2), 305–316.
Capel, A., & Sharp, W. (2013). Cambridge English objective proficiency (2nd ed.). Cambridge: Cambridge University Press.
Charney, D. (1984). The validity of using holistic scoring to evaluate writing. Research in the Teaching of English , 18 (1), 65–81.
Chenoweth, N. A., & Hayes, J. R. (2001). Fluency in writing: Generating text in L1 and L2. Written Communication , 18 (1), 80–98 https://doi.org/10.1177/0741088301018001004 .
Chi, E. (2001). Comparing holistic and analytic scoring for performance assessment with many facet models. Journal of Applied Measurement , 2 (4), 379–388.
Cohen, A. D. (1994). Assessing language ability in the classroom . Boston: Heinle & Heinle.
Condon, W. (2013). Large-scale assessment, locally-developed measures, and automated scoring of essays: Fishing for red herrings? Assessing Writing , 18 , 100–108.
Connor-Linton, J. (1995). Crosscultural comparison of writing standards: American ESL and Japanese EFL. World English , 14 (1), 99–115.
Coombe, C., Davidson, P., O’Sullivan, B., & Stoynoff, S. (2012). The Cambridge guide to second language assessment . New York: Cambridge University Press.
Corry, H. (1999). Advanced writing with English in use: CAE . Oxford: Oxford University Press.
Crusan, D. (2015). And then a miracle occurs: the use of computers to assess student writing. International Journal of TESOL and Learning , 4 (1), 20–33.
Cumming, A. (2001). Learning to write in a second language: two decades of research. International Journal of English Studies , 1 (2), 1–23.
Cumming, A. (2013). Assessing integrated writing tasks for academic purposes: promises and perils. Language Assessment Quarterly , 10 (1), 1–8.
Cumming, A. H., Kantor, R., Powers, D., Santos, T., & Taylor, C. (2000). TOEFL 2000 writing framework: A working paper , ETS Research Report Series (RM-00-5; TOEFL-MS-18) . Princeton: ETS.
Dass, B. (2014). Adult & continuing professional education practices: CPE among professional providers . Singapore: Partridge Singapore.
Deygers, B., Zeidler, B., Vilcu, D., & Carlsen, C. H. (2018). One framework to unite them all? Use of CEFR in European university entrance policies. Language Assessment Quarterly , 15 (1), 3–15 https://doi.org/10.1080/15434303.2016.1261350 .
Diab, R., & Balaa, L. (2011). Developing detailed rubrics for assessing critique writing: impact on EFL university students’ performance and attitudes. TESOL Journal , 2 (1), 52–72.
Diederich, P. B., French, J. W., & Carlton, S. T. (1961). Factors in judgments of writing ability (Research Bulletin No. RB-61-15) . Princeton: Educational Testing Service https://doi.org/10.1002/j.2333-8504.1961.tb00286.x .
Dixon, N. (2015). Band 9-IELTS writing task 2-real tests . Oxford: Oxford University Press.
Duckworth, M., Gude, K., & Rogers, L. (2012). Cambridge English: Proficiency (CPE) masterclass: Student’s book . Oxford: Oxford University Press.
Dunsmuir, S., & Clifford, V. (2003). Children’s writing and the use of ICT. Educational Psychology in Practice , 19 (3), 171–187.
East, M. (2009). Evaluating the reliability of a detailed analytic scoring rubric for foreign language writing. Assessing Writing , 14 (2), 88–115.
Eckes, T. (2005). Examining rater effects in TestDaF writing and speaking performance assessments: a many-facet Rasch analysis. Language Assessment Quarterly , 2 (3), 197–221.
Eckes, T. (2012). Operational rater types in writing assessment: linking rater cognition to rater behavior. Language Assessment Quarterly , 9 (3), 270–292.
Elder, C., Barkhuizen, G., Knoch, U., & von Randow, J. (2007). Evaluating rater responses to an online training program for L2 writing assessment. Language Testing , 24 (1), 37–64.
Elder, C., Knoch, U., Barkhuizen, G., & von Randow, J. (2005). Individual feedback to enhance rater training: does it work? Language Assessment Quarterly , 2 (3), 175–196.
Ene, E., & Kosobucki, V. (2016). Rubrics and corrective feedback in ESL writing: a longitudinal case study of an L2 writer. Assessing Writing , 30 , 3–20 https://doi.org/10.1016/j.asw.2016.06.003 .
Erdosy, M. U. (2004). Exploring variability in judging writing ability in a second language: a study of four experienced raters of ESL composition. In ETS research report series (RR-03-17) . Ontario: ETS.
Evans, V. (2005). Entry tests CPE 2 for the revised Cambridge proficiency examination: Student’s book . New York City: Pearson Education.
Fahim, M., & Bijani, H. (2011). The effect of rater training on raters’ severity and bias in second language writing assessment. Iranian Journal of Language Testing , 1 (1), 1–16.
Faigley, L., Daly, J. A., & Witte, S. P. (1981). The role of writing apprehension in writing performance and competence. Journal of Educational Research , 75 (1), 16–21.
Field, A. P. (2009). Discovering statistics using SPSS (and sex and drugs and rock ’n’ roll) (3rd ed.). London: Sage Publications.
Fleckenstein, J., Keller, S., Kruger, M., Tannenbaum, R. J., & Köller, O. (2019). Linking TOEFL iBT writing rubrics to CEFR levels: Cut scores and validity evidence from a standard setting study. Assessing Writing , 43 https://doi.org/10.1016/j.asw.2019.100420 .
Fleckenstein, J., Leucht, M., & Köller, O. (2018). Teachers’ judgement accuracy concerning CEFR levels of prospective university students. Language Assessment Quarterly , 15 (1), 90–101 https://doi.org/10.1080/15434303.2017.1421956 .
Fleming, S., Golder, K., & Reeder, K. (2011). Determination of appropriate IELTS writing and speaking band scores for admission into two programs at a Canadian post-secondary polytechnic institution. The Canadian Journal of Applied Linguistics , 14 (1), 222–250.
Fulcher, G. (2010). Practical language testing . London: Hodder Education.
Fulcher, G., & Davidson, F. (2007). Language testing and assessment: an advanced resource book . New York: Routledge.
Gass, S., Myford, C., & Winke, P. (2011). Raters’ L2 background as a potential source of bias in rating oral performance. Language Testing , 30 (2), 231–252.
Ghaffar, M. A., Khairallah, M., & Salloum, S. (2020). Co-constructed rubrics and assessment for learning: The impact on middle school students’ attitudes and writing skills. Assessing Writing , 45 https://doi.org/10.1016/j.asw.2020.100468 .
Ghalib, T. K., & Hattami, A. A. (2015). Holistic versus analytic evaluation of EFL writing: a case study. English Language Teaching , 8 (7), 225–236.
Grabe, W., & Kaplan, R. B. (1996). Theory and practice of writing: an applied linguistic perspective . London: Longman.
Graham, S., Harris, K. R., & Mason, L. (2005). Improving the writing performance, knowledge, and self-efficacy of struggling young writers: the effects of self-regulated strategy development. Contemporary Educational Psychology , 30 (2), 207–241 https://doi.org/10.1016/j.cedpsych.2004.08.001 .
Gustilo, L., & Magno, C. (2015). Explaining L2 writing performance through a chain of predictors: A SEM approach. 3L: The Southeast Asian Journal of English Language Studies , 21 (2), 115–130.
Hamp-Lyons, L. (1990). Second language writing assessment. In B. Kroll (Ed.), Second language writing: research insights for the classroom , (pp. 69–87). California: Cambridge University Press.
Hamp-Lyons, L. (1991). Holistic writing assessment of LEP students. Paper presented at the Symposium on Limited English Proficient Students, Washington, DC.
Hamp-Lyons, L. (2007). Editorial: worrying about rating. Assessing Writing , 12 , 1–9.
Hamp-Lyons, L., & Kroll, B. (1997). TOEFL 2000 – writing: composition, community and assessment (toefl monograph series no. 5) . Princeton: Educational Testing Service.
Harman, R. (2013). Literary intertextuality in genre-based pedagogies: building lexical cohesion in fifth-grade L2 writing. Journal of Second Language Writing , 22 (2), 125–140.
Harrison, J. (2010). Certificate of proficiency in English (CPE) test preparation course . Oxford: Oxford University Press.
Hinkel, E. (2009). The effects of essay topics on modal verb uses in L1 and L2 academic writing. Journal of Pragmatics , 41 (4), 667–683.
Holmes, P. (2006). Problematizing intercultural communication competence in the pluricultural classroom: Chinese students in a New Zealand university. Journal of Language and Intercultural Communication , 6 (1), 18–34.
Hughes, A. (2003). Testing for language teachers . Cambridge: Cambridge University Press.
Huot, B. (1990). The literature of direct writing assessment: major concerns and prevailing trends. Review of Educational Research , 60 (2), 237–239.
Huot, B., Moore, C., & O’Neill, P. (2009). Creating a culture of assessment in writing programs and beyond. College Composition and Communication , 61 (1), 107–132.
Hyland, K. (2004). Disciplinary discourses: social interactions in academic writing . Michigan: University of Michigan Press.
Inoue, A. (2004). Community-based assessment pedagogy. Assessing Writing , 9 (3), 208–238 https://doi.org/10.1016/j.asw.2004.12.001 .
Izadpanah, M. A., Rakhshandehroo, F., & Mahmoudikia, M. (2014). On the consensus between holistic rating system and analytical rating system: a comparison between TOEFL iBT and Jacobs et al.’s composition. International Journal of Language Learning and Applied Linguistics World , 6 (1), 170–187.
Jacobs, H. L., Zingraf, S. A., Wormuth, D. R., Hartfiel, V. F., & Hughey, J. B. (1981). Testing ESL composition: a practical approach . Rowley: Newbury House.
Jakeman, V. (2006). Cambridge action plan for IELTS: academic module . Cambridge: Cambridge University Press.
Jamieson, J., & Poonpon, K. (2013). Developing analytic rating guides for TOEFL iBT integrated speaking tasks. In ETS research series (RR-13-13, TOEFLiBT-20) . Princeton: ETS.
Johnson, R. L., Penny, J., & Gordon, B. (2000). The relation between score resolution methods and interrater reliability: An empirical study of an analytic scoring rubric. Applied Measurement in Education , 13 , 121–138 https://doi.org/10.1207/S15324818AME1302_1 .
Jones, C. (2001). The relationship between writing centers and improvement in writing ability: An assessment of the literature. Journal of Education , 122 (1), 3–20.
Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: reliability, validity and educational consequences. Educational Research Review , 2 , 130–144.
Kane, M. T. (2006). Validation. In R. L. Brennan (Ed.), Educational measurement , (4th ed., pp. 17–64). Westport: American Council on Education and Praeger Publishers.
Kane, T. S. (2000). Oxford essential guide to writing . New York: Berkley Publishing Group.
Kellogg, R. T., Turner, C. E., Whiteford, A. P., & Mertens, A. (2016). The role of working memory in planning and generating written sentences. Journal of Writing Research , 7 (3), 397–416.
Kim, Y. H. (2011). Diagnosing EAP writing ability using the reduced reparametrized unified model. Language Testing , 28 (4), 509–541.
Klein, P. D., & Boscolo, P. (2016). Trends in research on writing as a learning activity. Journal of Writing Research , 7 (3), 311–350 https://doi.org/10.17239/jowr-2016.07.3.01 .
Knoch, U. (2009). The assessment of academic style in EAP writing: the case of the rating scale. Melbourne Papers in Language Testing , 13 (1), 35.
Knoch, U. (2011). Rating scales for diagnostic assessment of writing: what should they look like and where should the criteria come from? Assessing Writing , 16 (2), 81–96.
Kondo-Brown, K. (2002). A facet analysis of rater bias in Japanese second language writing performance. Language Testing , 19 (1), 3–31.
Kong, N., Liu, O. L., Malloy, J., & Schedl, M. A. (2009). Does content knowledge affect TOEFL iBT reading performance? A confirmatory approach to differential item functioning. In ETS research report series (RR-09-29, TOEFLiBT-09) . Princeton: ETS.
Kroll, B. (1990). Second language writing (Cambridge Applied Linguistics): research insights for the classroom . Cambridge: Cambridge University Press.
Kroll, P., & Kruchten, P. (2003). The rational unified process made easy: a practitioner’s guide to the RUP . Boston: Pearson Education.
Kuo, S. (2007). Which rubric is more suitable for NSS liberal studies? Analytic or holistic? Educational Research Journal , 22 (2), 179–199.
Lanteigne, B. (2017). Unscrambling jumbled sentences: an authentic task for English language assessment? Studies in Second Language Learning and Teaching , 7 (2), 251–273 https://doi.org/10.14746/ssllt.2017.7.2.5 .
Leki, I., Cumming, A., & Silva, T. (2008). A synthesis of research on second language writing in English . New York: Routledge.
Levin, P. (2009). Write great essays . London: McGraw-Hill Education.
Loughead, L. (2010). IELTS practice exam: with audio CDs . Hauppauge: Barron’s Education Series.
Lu, J., & Zhang, Z. (2013). Assessing and supporting argumentation with online rubrics. International Education Studies , 6 (7), 66–77.
Lumley, T. (2002). Assessment criteria in a large-scale writing test: what do they really mean to the raters? Language Testing , 19 (3), 246–276.
Lumley, T. (2005). Assessing second language writing: the rater’s perspective . Frankfurt: Lang.
MacDonald, S. (1994). Professional academic writing in the humanities and social sciences . Carbondale: Southern Illinois University Press.
Mackenzie, J. (2007). Essay writing: teaching the basics from the ground up . Markham: Pembroke Publishers.
Malone, M. E., & Montee, M. (2014). Stakeholders’ beliefs about the TOEFL iBT test as a measure of academic language ability (TOEFL iBT Report No. 22, ETS Research Report No. RR-14-42) . Princeton: Educational Testing Service https://doi.org/10.1002/ets2.12039 .
Matsuda, P. K. (2002). Basic writing and second language writers: Toward an inclusive definition. Journal of Basic Writing , 22 (2), 67–89.
McLaren, S. (2006). Essay writing made easy . Sydney: Pascal Press.
McMillan, J. H. (2001). Classroom assessment: principles and practice for effective instruction , (2nd ed., ). Boston: Allyn & Bacon.
Melendy, G. A. (2008). Motivating writers: the power of choice. Asian EFL Journal , 20 (3), 187–198.
Messick, S. (1994). The interplay of evidence and consequences in the validation of performance assessment. Educational Researcher , 23 (2), 13–23.
Moore, J. (2009). Common mistakes at proficiency and how to avoid them . Cambridge: Cambridge University Press.
Moskal, B. M., & Leydens, J. (2000). Scoring rubric development: validity and reliability. Practical Assessment, Research & Evaluation , 7 (10).
Moss, P. A. (1994). Can there be validity without reliability? Educational Researcher , 23 (2), 5–12.
Muenz, T. A., Ouchi, B. Y., & Cole, J. C. (1999). Item analysis of written expression scoring systems from the PIAT-R and WIAT. Psychology in the Schools , 36 (1), 31–40.
Muncie, J. (2002). Using written teacher feedback in EFL composition classes. ELT Journal , 54 (1), 47–53 https://doi.org/10.1093/elt/54.1.47 .
Myford, C. M., & Wolfe, E. W. (2003). Detecting and measuring rater effects using many-facet Rasch measurement: Part I. Journal of Applied Measurement , 4 (4), 386–422.
Noonan, L. E., & Sulsky, L. M. (2001). Impact of frame-of-reference and behavioral observation training on alternative training effectiveness criteria in a Canadian military sample. Human Performance , 14 (1), 3–26.
Nunn, R. C. (2000). Designing rating scales for small-group interaction. ELT Journal , 54 (2), 169–178.
Nunn, R. C., & Adamson, J. (2007). Toward the development of interactional criteria for journal paper evaluation. Asian EFL Journal , 9 (4), 205–228.
Nystrand, M., Greene, S., & Wiemelt, J. (1993). Where did composition studies come from? An intellectual history. Written Communication , 10 (3), 267–333.
O’Neil, T. R., & Lunz, M. E. (1996). Examining the invariance of rater and project calibrations using a multi-facet Rasch model. Paper presented at the Annual Meeting of the American Educational Research Association, New York.
Obee, B. (2005). Practice tests for the revised CPE . Berkshire: Express Publishing.
Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purpose revisited. Educational Research Review , 9 , 129–144.
Pollitt, A., & Hutchinson, C. (1987). Calibrating graded assessments: Rasch partial credit analysis of performance in writing. Language Testing , 4 (1), 72–92.
Raimes, A. (1991). Out of the woods: Emerging traditions in the teaching of writing. TESOL Quarterly , 25 (3), 407–430.
Reid, J. (1993). Teaching ESL writing . Englewood Cliffs: Regents Prentice Hall.
Rezaei, A. R., & Lovorn, M. (2010). Reliability and validity of rubrics for assessment through writing. Assessing Writing , 15 (1), 18–39.
Richards, J. C., & Schmidt, R. (2002). Longman dictionary of language teaching and applied linguistics . New York: Pearson Education.
Roch, S. G., & O’Sullivan, B. J. (2003). Frame of reference rater training issues: recall, time and behavior observation training. International Journal of Training and Development , 7 (2), 93–107.
Rosenfeld, M., Courtney, R., & Fowles, M. (2004). Identifying the writing tasks important for academic success at the undergraduate and graduate levels. Research report 42 . Princeton: Educational Testing Service.
Rosenfeld, M., Leung, S., & Oltman, P. K. (2001). Identifying the reading, writing, speaking, and listening tasks important for academic success at the undergraduate and graduate levels (TOEFL Monograph Series MS-21) . Princeton: Educational Testing Service.
Rupp, A. A., Casabianca, J. M., Krüger, M., Keller, S., & Köller, O. (2019). Automated essay scoring at scale: a case study in Switzerland and Germany (RR-86. ETS RR-19-12) . ETS Research Report Series , 2019 https://doi.org/10.1002/ets2.12249 .
Saeidi, M., Yousefi, M., & Baghayi, P. (2013). Rater bias in assessing Iranian EFL learners’ writing performance. Iranian Journal of Applied Linguistics , 16 (1), 145–175.
Sasaki, M. (2000). Toward an empirical model of EFL writing processes: an explanatory study. Journal of Second Language Writing , 9 (3), 259–291.
Sasaki, M., & Hirose, K. (1999). Development of an analytic rating scale for Japanese L1 writing. Language Testing , 16 (4), 457–478.
Schaefer, E. (2008). Rater bias patterns in an EFL writing assessment. Language Testing , 25 (4), 465–493.
Schirmer, B. R., & Bailey, J. (2000). Writing assessment rubric: an instructional approach for struggling writers. Teaching Exceptional Children , 33 (1), 52–58.
Schoonen, R. (2005). Generalizability of writing scores: an application of structural equation modeling. Language Testing , 22 (1), 1–5.
Shaw, S. D., & Weir, C. J. (2007). Examining writing: research and practice in assessing second language writing . Cambridge: Cambridge University Press.
Shermis, M. (2014). State-of-the-art automated essay scoring: competition, results, and future directions from a United States demonstration. Assessing Writing , 20 , 53–76 https://doi.org/10.1016/j.asw.2013.04.001 .
Shi, L. (2001). Native- and nonnative- speaking EFL teachers’ evaluation of Chinese students’ English writing. Language Testing , 18 (3), 303–325.
Spratt, M., & Taylor, L. B. (2000). The Cambridge CAE course: self-study student’s book . Cambridge: Cambridge University Press.
Spurr, B. (2005). Successful essay writing for senior high school . NSW: New Frontier Publishing.
Staff, M. P. (2017). GRE guide to the use of scores. In Graduate record examination . Princeton: ETS.
Stevens, J. P. (2002). Applied multivariate statistics for the social sciences , (4th ed., ). Hillsdale: Erlbaum.
Stewart, A. (2009). IELTS preparation & practice: reading and writing—academic module . New York: Pearson Education.
Tardy, M. C., & Matsuda, P. K. (2009). The construction of author voice by editorial board members. Written Communication , 26 (1), 32–52.
Trace, J., Meier, V., & Janseen, G. (2016). “I can see that”: developing shared rubric category interpretations through score negotiation. Assessing Writing , 30 , 32–43 https://doi.org/10.1016/j.asw.2016.08.001 .
Ward, J. R., & McCotter, S. S. (2004). Reflection as a visible outcome for preservice teachers. Teaching and Teacher Education , 20 (3), 243–257.
Weigle, S. C. (2002). Assessing writing . Cambridge: Cambridge University Press.
Weigle, S. C. (2013). English language learners and automated scoring of essays: Critical considerations. Assessing Writing , 18 , 85–99.
Weir, C. J. (1990). Communicative language testing . New Jersey: Prentice Hall, Inc.
Weissberg, B. (2000). Developmental relationships in the acquisition of English syntax: Writing vs. speech. Learning and Instruction , 10 (1), 37–53 https://doi.org/10.1016/S0959-4752(99)00017-1 .
Wesolowski, B. W., Wind, S. A., & Engelhard, G. (2017). Evaluating differential rater functioning over time in the context of solo music performance assessment. Bulletin of the Council for Research in Music Education , (212), 75–98 https://doi.org/10.5406/bulcouresmusedu.212.0075 .
White, E. M. (1984). Teaching and assessing writing , (2nd ed., ). San Francisco: Jossey-Bass.
White, E. M. (1985). Teaching and assessing writing . San Francisco: Jossey-Bass.
White, E. M. (1994). Teaching and assessing writing , (2nd ed. ). San Francisco: Jossey-Bass.
Wiggins, G. (1994). The constant danger of sacrificing validity to reliability: making writing assessment serve writers. Assessing Writing , 1 , 129–139 https://doi.org/10.1016/1075-2935(94)90008-6 .
Wilson, M. (2006). Rethinking rubrics in writing assessment . Portsmouth: Heinemann.
Wilson, M. (2017). Reimagining writing assessment: from scales to stories . Portsmouth: Heinemann.
Wind, S. A. (2020). Do raters use rating scale categories consistently across analytic rubric domains in writing assessment? Assessing Writing , 43 https://doi.org/10.1016/j.asw.2019.100416 .
Wind, S. A., Tsai, C. L., Grajeda, S. B., & Bergin, C. (2018). Principals’ use of rating scale categories in classroom observation for teacher evaluation. School Effectiveness and School Improvement , 29 (3), 485–510 https://doi.org/10.1080/09243453.2018.1470989 .
Wiseman, C. S. (2012). A comparison of the performance of analytic vs. holistic scoring rubrics to assess L2 writing. Iranian Journal of Language Testing , 2 (1), 59–61.
Wyldeck, K. (2008). Everyday spelling and grammar . Sydney: Pascal Press.
Zahler, K. A. (2011). McGraw-Hill’s conquering the NEW GRE verbal and writing . New York: McGraw-Hill Education.
Zhang, B., Johnson, L., & Kilic, G. B. (2008). Assessing the reliability of self-and-peer rating in student group work. Assessment & Evaluation in Higher Education , 33 (3), 329–340 https://doi.org/10.1080/02602930701293181 .
Acknowledgements.
The authors would like to thank the reviewers for their helpful comments. We would also like to thank the raters who kindly agreed to contribute to this study.
Authors and affiliations.
Department of Foreign Languages, TUMS International College, Tehran University of Medical Sciences (TUMS), Keshavarz Blvd., Tehran, 1415913311, Iran
Enayat A. Shabani & Jaleh Panahi
Authors’ contributions.
The authors contributed almost equally to this manuscript, and both read and approved the final manuscript.
Enayat A. Shabani ( [email protected] ) holds a Ph.D. in TEFL and is currently the Chair of the Department of Foreign Languages at Tehran University of Medical Sciences (TUMS). His areas of research interest include language testing and assessment and the internationalization of higher education.
Jaleh Panahi ( [email protected] ) holds an M.A. in TEFL. She has been teaching English for 12 years, with a main focus on IELTS instruction. She is currently a part-time instructor in the Department of Foreign Languages, Tehran University of Medical Sciences. Her fields of research interest are language assessment, and language and cognition.
Correspondence to Enayat A. Shabani.
Competing interests.
The authors declare that they have no competing interests.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .
Cite this article.
Shabani, E. A., Panahi, J. Examining consistency among different rubrics for assessing writing. Lang Test Asia 10, 12 (2020). https://doi.org/10.1186/s40468-020-00111-4
Received: 16 June 2020
Accepted: 03 September 2020
Published: 26 September 2020
DOI: https://doi.org/10.1186/s40468-020-00111-4