Rubric Best Practices, Examples, and Templates

A rubric is a scoring tool that identifies the different criteria relevant to an assignment, assessment, or learning outcome and states the possible levels of achievement in a specific, clear, and objective way. Use rubrics to assess project-based student work including essays, group projects, creative endeavors, and oral presentations.

Rubrics can help instructors communicate expectations to students and assess student work fairly, consistently and efficiently. Rubrics can provide students with informative feedback on their strengths and weaknesses so that they can reflect on their performance and work on areas that need improvement.

How to Get Started


Step 1: Analyze the assignment

The first step in the rubric creation process is to analyze the assignment or assessment for which you are creating a rubric. To do this, consider the following questions:

  • What is the purpose of the assignment and your feedback? What do you want students to demonstrate through the completion of this assignment (i.e. what are the learning objectives measured by it)? Is it a summative assessment, or will students use the feedback to create an improved product?
  • Does the assignment break down into different or smaller tasks? Are these tasks equally important to the assignment as a whole?
  • What would an “excellent” assignment look like? An “acceptable” assignment? One that still needs major work?
  • How detailed do you want the feedback you give students to be? Do you want/need to give them a grade?

Step 2: Decide what kind of rubric you will use

Types of rubrics: holistic, analytic/descriptive, single-point

Holistic Rubric. A holistic rubric includes all the criteria (such as clarity, organization, mechanics, etc.) to be considered together and included in a single evaluation. With a holistic rubric, the rater or grader assigns a single score based on an overall judgment of the student’s work, using descriptions of each performance level to assign the score.

Advantages of holistic rubrics:

  • Can place an emphasis on what learners can demonstrate rather than what they cannot
  • Save grader time by minimizing the number of evaluations to be made for each student
  • Can be used consistently across raters, provided they have all been trained

Disadvantages of holistic rubrics:

  • Provide less specific feedback than analytic/descriptive rubrics
  • Can be difficult to choose a score when a student’s work is at varying levels across the criteria
  • Any weighting of criteria cannot be indicated in the rubric

Analytic/Descriptive Rubric. An analytic or descriptive rubric often takes the form of a table with the criteria listed in the left column and levels of performance listed across the top row. Each cell contains a description of what the specified criterion looks like at a given level of performance. Each criterion is scored individually.

Advantages of analytic rubrics:

  • Provide detailed feedback on areas of strength or weakness
  • Each criterion can be weighted to reflect its relative importance

Disadvantages of analytic rubrics:

  • More time-consuming to create and use than a holistic rubric
  • May not be used consistently across raters unless the cells are well defined
  • May result in giving less personalized feedback

Single-Point Rubric. A single-point rubric breaks down the components of an assignment into different criteria, but instead of describing different levels of performance, only the “proficient” level is described. Feedback space is provided for instructors to give individualized comments to help students improve and/or show where they excelled beyond the proficiency descriptors.

Advantages of single-point rubrics:

  • Easier to create than an analytic/descriptive rubric
  • Perhaps more likely that students will read the descriptors
  • Areas of concern and excellence are open-ended
  • May remove the focus on the grade/points
  • May increase student creativity in project-based assignments

Disadvantage of single-point rubrics: Requires more work for instructors writing feedback

Step 3 (Optional): Look for templates and examples

You might Google, “Rubric for persuasive essay at the college level” and see if there are any publicly available examples to start from. Ask your colleagues if they have used a rubric for a similar assignment. Some examples are also available at the end of this article. These rubrics can be a great starting point for you, but consider steps 4, 5, and 6 below to ensure that the rubric matches your assignment description, learning objectives, and expectations.

Step 4: Define the assignment criteria

Make a list of the knowledge and skills you are measuring with the assignment/assessment. Refer to your stated learning objectives, the assignment instructions, past examples of student work, etc. for help.

  Helpful strategies for defining grading criteria:

  • Collaborate with co-instructors, teaching assistants, and other colleagues
  • Brainstorm and discuss with students
  • Test each draft criterion: Can it be observed and measured? Is it important and essential? Is it distinct from the other criteria? Is it phrased in precise, unambiguous language?
  • Revise the criteria as needed
  • Consider whether some criteria are more important than others, and how you will weight them (see the sketch after this list)
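As a minimal sketch of what weighting can look like in practice (the criteria, weights, and 1-4 scale below are all illustrative, not prescribed):

```python
# Illustrative weighted rubric total: each criterion is scored on a 1-4
# scale; weights reflect relative importance and sum to 1.0.
weights = {"argument": 0.5, "evidence": 0.3, "mechanics": 0.2}
scores  = {"argument": 3,   "evidence": 4,   "mechanics": 2}

weighted = sum(weights[c] * scores[c] for c in weights)  # still on the 1-4 scale
print(f"weighted score: {weighted:.2f} / 4")             # -> 3.10 / 4
```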

Step 5: Design the rating scale

Most rating scales include between 3 and 5 levels. Consider the following questions when designing your rating scale:

  • Given what students are able to demonstrate in this assignment/assessment, what are the possible levels of achievement?
  • How many levels would you like to include? (More levels mean more detailed descriptions.)
  • Will you use numbers and/or descriptive labels for each level of performance? (for example 5, 4, 3, 2, 1 and/or Exceeds expectations, Accomplished, Proficient, Developing, Beginning, etc.)
  • Don’t use too many columns, and recognize that some criteria can have more columns than others. The rubric needs to be comprehensible and organized. Pick the right number of columns so that the criteria flow logically and naturally across levels.

Step 6: Write descriptions for each level of the rating scale

Artificial intelligence tools like ChatGPT can be useful for creating a rubric. Engineer the prompt you provide the AI assistant to ensure you get what you want: for example, include the assignment description, the criteria you feel are important, and the number of performance levels you want. Use the results as a starting point, and adjust the descriptions as needed.
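As a minimal sketch of such a prompt in code (this assumes the openai Python package with an OPENAI_API_KEY set in the environment; the model name, assignment, and criteria are illustrative):

```python
from openai import OpenAI  # assumes the openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = """You are helping a college instructor build a grading rubric.
Assignment: a five-page persuasive essay in a first-year writing course.
Criteria: thesis and argument, use of evidence, organization, mechanics.
Produce an analytic rubric with four performance levels (Exemplary,
Accomplished, Developing, Beginning) and a one-sentence descriptor for
each criterion at each level."""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # a draft rubric to edit by hand
```

You could just as easily paste the same prompt into the chat interface; the point is to supply the assignment description, the criteria, and the number of levels.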

Building a rubric from scratch

For a single-point rubric, describe what would be considered “proficient,” i.e., B-level work. You might also include suggestions for students, outside of the actual rubric, about how they might surpass proficient-level work.

For analytic and holistic rubrics, create statements of expected performance at each level of the rubric.

  • Consider what descriptor is appropriate for each criterion, e.g., presence vs. absence, complete vs. incomplete, many vs. none, major vs. minor, consistent vs. inconsistent, always vs. never. If you have an indicator described in one level, it will need to be described in each level.
  • You might start with the top/exemplary level. What does it look like when a student has achieved excellence for each/every criterion? Then, look at the “bottom” level. What does it look like when a student has not achieved the learning goals in any way? Then, complete the in-between levels.
  • For an analytic rubric, do this for each particular criterion of the rubric so that every cell in the table is filled. These descriptions help students understand your expectations and their performance in regard to those expectations.

Well-written descriptions:

  • Describe observable and measurable behavior
  • Use parallel language across the scale
  • Indicate the degree to which the standards are met

Step 7: Create your rubric

Create your rubric in a table or spreadsheet in Word, Google Docs, Sheets, etc., and then transfer it into Moodle by typing it in. You can also use online tools to create the rubric, but you will still have to type the criteria, indicators, levels, etc., into Moodle. Rubric creators: RubiStar, iRubric
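If you draft the grid programmatically, a small script can produce a clean table to copy from while you retype it into Moodle. A minimal sketch using Python's standard csv module (the criteria, levels, and descriptors are illustrative):

```python
import csv

# Illustrative analytic-rubric grid: criteria down the side, levels across the top.
levels = ["Exemplary", "Accomplished", "Developing", "Beginning"]
criteria = {
    "Thesis":   ["Original and specific", "Clear", "Vague", "Missing"],
    "Evidence": ["Diverse and compelling", "Credible but thin", "Sparse", "Absent"],
}

with open("rubric.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Criterion"] + levels)        # header row
    for name, descriptors in criteria.items():
        writer.writerow([name] + descriptors)      # one row per criterion
```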

Step 8: Pilot-test your rubric

Prior to implementing your rubric in a live course, obtain feedback from:

  • Teaching assistants

Try out your new rubric on a sample of student work. After you pilot-test your rubric, analyze the results to consider its effectiveness and revise accordingly.

  • Limit the rubric to a single page for reading and grading ease
  • Use parallel language. Use similar language and syntax/wording from column to column. Make sure that the rubric can be easily read from left to right or vice versa.
  • Use student-friendly language. Make sure the language is learning-level appropriate. If you use academic language or concepts, you will need to teach those concepts.
  • Share and discuss the rubric with your students. Students should understand that the rubric is there to help them learn, reflect, and self-assess. If students use a rubric, they will understand the expectations and their relevance to learning.
  • Consider scalability and reusability of rubrics. Create rubric templates that you can alter as needed for multiple assignments.
  • Maximize the descriptiveness of your language. Avoid words like “good” and “excellent.” For example, instead of saying, “uses excellent sources,” describe what makes a source excellent so that students will know. You might also consider reducing the reliance on quantity, such as a number of allowable misspelled words. Focus instead, for example, on how distracting any spelling errors are.

Example of an analytic rubric for a final paper

Example of a holistic rubric for a final paper

Example of a single-point rubric

More examples:

  • Single Point Rubric Template (variation)
  • Analytic Rubric Template (make a copy to edit)
  • A Rubric for Rubrics
  • Bank of Online Discussion Rubrics in different formats
  • Mathematical Presentations Descriptive Rubric
  • Math Proof Assessment Rubric
  • Kansas State Sample Rubrics
  • Design Single Point Rubric

Technology Tools: Rubrics in Moodle

  • Moodle Docs: Rubrics
  • Moodle Docs: Grading Guide (use for single-point rubrics)

Tools with rubrics (other than Moodle)

  • Google Assignments
  • Turnitin Assignments: Rubric or Grading Form

Other resources

  • DePaul University (n.d.). Rubrics.
  • Gonzalez, J. (2014). Know your terms: Holistic, analytic, and single-point rubrics. Cult of Pedagogy.
  • Goodrich, H. (1996). Understanding rubrics. Teaching for Authentic Student Performance, 54(4), 14-17.
  • Miller, A. (2012). Tame the beast: Tips for designing and using rubrics.
  • Ragupathi, K., & Lee, A. (2020). Beyond fairness and consistency in grading: The role of rubrics in higher education. In C. Sanger & N. Gleason (Eds.), Diversity and Inclusion in Global Higher Education. Palgrave Macmillan, Singapore.

A rubric is an explicit set of criteria used for assessing a particular type of work or performance (TLT Group, n.d.) and provides more details than a single grade or mark. Rubrics, therefore, will help you grade more objectively.

Have your students ever asked, “Why did you grade me that way?” or stated, “You never told us that we would be graded on grammar!”? As a grading tool, rubrics can address these and other issues related to assessment: they reduce grading time; they increase objectivity and reduce subjectivity; they convey timely feedback to students; and they improve students’ ability to include the required elements of an assignment (Stevens & Levi, 2005). Grading rubrics can be used to assess a range of activities in any subject area.

Elements of a Rubric

Typically designed as a grid-type structure, a grading rubric includes criteria, levels of performance, scores, and descriptors which become unique assessment tools for any given assignment. The table below illustrates a simple grading rubric with each of the four elements for a history research paper. 

Criteria identify the trait, feature or dimension which is to be measured and include a definition and example to clarify the meaning of each trait being assessed. Each assignment or performance will determine the number of criteria to be scored. Criteria are derived from assignments, checklists, grading sheets or colleagues.

Examples of Criteria for a term paper rubric

  • Introduction
  • Arguments/analysis
  • Grammar and punctuation
  • Internal citations

Levels of performance

Levels of performance are often labeled with adjectives that describe the quality of the performance. Levels of performance determine the degree to which the criteria have been met and provide for consistent and objective assessment and better feedback to students. These levels tell students what they are expected to do. Levels of performance can be used without descriptors, but descriptors help in achieving objectivity. The words used for levels of performance (such as superior, moderate, poor, or above/below average) can influence a student’s interpretation of their performance level.

Examples to describe levels of performance

  • Excellent, Good, Fair, Poor
  • Master, Apprentice, Beginner
  • Exemplary, Accomplished, Developing, Beginning, Undeveloped
  • Complete, Incomplete

Scores make up the system of numbers or values used to rate each criterion, and they are often combined with levels of performance. Begin by asking how many points are needed to adequately describe the range of performance you expect to see in students’ work. Consider the range of possible performance levels.

Example of scores for a rubric

1, 2, 3, 4, 5 or 2, 4, 6, 8
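To make the arithmetic concrete, here is a minimal sketch (the criteria reuse the term-paper examples above; the level-to-points mapping is illustrative) of how level labels become scores and a total:

```python
# Illustrative: map each criterion's level label to points, then total.
levels = {"Poor": 1, "Fair": 2, "Good": 3, "Excellent": 4}
ratings = {"Introduction": "Good", "Arguments/analysis": "Excellent",
           "Grammar and punctuation": "Fair", "Internal citations": "Good"}

points = {criterion: levels[label] for criterion, label in ratings.items()}
print(points)                                           # per-criterion scores
print(f"total: {sum(points.values())} / {len(ratings) * max(levels.values())}")
```

A 2, 4, 6, 8 scale works the same way; it simply makes each criterion worth more points relative to other graded work.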

Descriptors

Descriptors are explicit descriptions of performance; they show how the score is derived and what is expected of the students. Descriptors spell out each level (gradation) of performance for each criterion and describe what performance at a particular level looks like. They help you distinguish each student’s work from that of their peers. Descriptors should be detailed enough to differentiate between levels and to increase the objectivity of the rater.


Developing a Grading Rubric

First, consider using any of a number of existing rubrics available online. Many rubrics can be used “as is.” Or, you could modify a rubric by adding or deleting elements or combining others for one that will suit your needs. Finally, you could create a completely customized rubric using specifically designed rubric software or just by creating a table with the rubric elements. The following steps will help you develop a rubric no matter which option you choose.

  • Select a performance/assignment to be assessed. Begin with a performance or assignment which may be difficult to grade and where you want to reduce subjectivity. Is the performance/assignment an authentic task related to learning goals and/or objectives? Are students replicating meaningful tasks found in the real world? Are you encouraging students to problem solve and apply knowledge? Answer these questions as you begin to develop the criteria for your rubric.
  • List criteria. Begin by brainstorming a list of all criteria, traits, or dimensions associated with the task. Reduce the list by chunking similar criteria and eliminating others until you produce a range of appropriate criteria. A rubric designed for formative and diagnostic assessments might have more criteria than rubrics rating summative performances (Dodge, 2001). Keep the list of criteria manageable and reasonable.
  • Write criteria descriptions. Keep criteria descriptions brief, understandable, and in a logical order for students to follow as they work on the task.
  • Determine level-of-performance adjectives. Select words or phrases that will explain what performance looks like at each level, making sure they are discrete enough to show real differences. Levels of performance should match the related criterion.
  • Develop scores. The scores will determine the ranges of performance in numerical value. Make sure the values make sense in terms of the total points possible: What is the difference between getting 10 points versus 100 points versus 1,000 points? The best and worst performance scores are placed at the ends of the continuum, and the other scores are placed appropriately in between. It is suggested to start with fewer levels and to distinguish between work that does not meet the criteria. Also, it is difficult to make fine distinctions using qualitative levels such as never/sometimes/usually, limited/acceptable/proficient, or NA/poor/fair/good/very good/excellent. How will you make the distinctions?
  • Write the descriptors. As a student moves up the performance continuum, the descriptions of previous levels are considered achieved at subsequent levels. Therefore, it is not necessary to repeat “beginning level” descriptors in the same box where new skills are introduced.
  • Evaluate the rubric. As with any instructional tool, evaluate the rubric each time it is used to ensure it matches instructional goals and objectives. Be sure students understand each criterion and how they can use the rubric to their advantage. Consider providing more details about each of the rubric’s areas to further clarify these sections to students. Pilot test new rubrics if possible, review the rubric with a colleague, and solicit students’ feedback for further refinements.

Types of Rubrics

Determining which type of rubric to use depends on what and how you plan to evaluate. There are several types of rubrics, including holistic, analytical, generic, and task-specific. Each of these is described below.

Holistic

All criteria are assessed as a single score. Holistic rubrics are good for evaluating overall performance on a task. Because only one score is given, holistic rubrics tend to be easier to score. However, holistic rubrics do not provide detailed information on student performance for each criterion; the levels of performance are treated as a whole.

  • “Use for simple tasks and performances such as reading fluency or response to an essay question . . .
  • Getting a quick snapshot of overall quality or achievement
  • Judging the impact of a product or performance” (Arter & McTighe, 2001, p 21)

Analytical

Each criterion is assessed separately, using different descriptive ratings. Each criterion receives a separate score. Analytical rubrics take more time to score but provide more detailed feedback.

  • “Judging complex performances . . . involving several significant [criteria] . . .
  • Providing more specific information or feedback to students . . .” (Arter & McTighe, 2001, p 22)

Generic

A generic rubric contains criteria that are general across tasks and can be used for similar tasks or performances. Criteria are assessed separately, as in an analytical rubric.

  • “[Use] when students will not all be doing exactly the same task; when students have a choice as to what evidence will be chosen to show competence on a particular skill or product.
  • [Use] when instructors are trying to judge consistently in different course sections” (Arter & McTighe, 2001, p 30)

Task-specific

A task-specific rubric assesses a specific task, and its unique criteria are assessed separately. However, it may not be possible to account for every criterion involved in a particular task, which could mean overlooking a student’s unique solution (Arter & McTighe, 2001).

  • “It’s easier and faster to get consistent scoring
  • [Use] in large-scale and “high-stakes” contexts, such as state-level accountability assessments
  • [Use when] you want to know whether students know particular facts, equations, methods, or procedures” (Arter & McTighe, 2001, p 28) 

Grading rubrics are effective and efficient tools which allow for objective and consistent assessment of a range of performances, assignments, and activities. Rubrics can help clarify your expectations and will show students how to meet them, making students accountable for their performance in an easy-to-follow format. The feedback that students receive through a grading rubric can help them improve their performance on revised or subsequent work. Rubrics can help justify grades when students ask about your method of assessment. Rubrics also allow for consistency in grading for those who team teach the same course and for TAs assigned to the task of grading, and they serve as good documentation for accreditation purposes. Several online sources exist which can be used in the creation of customized grading rubrics; a few of these are listed below.

Arter, J., & McTighe, J. (2001). Scoring rubrics in the classroom: Using performance criteria for assessing and improving student performance. Thousand Oaks, CA: Corwin Press, Inc.

Stevens, D. D., & Levi, A. J. (2005). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning. Sterling, VA: Stylus.

The Teaching, Learning, and Technology Group (n.d.). Rubrics: Definition, tools, examples, references. http://www.tltgroup.org/resources/flashlight/rubrics.htm

Selected Resources

Dodge, B. (2001). Creating a rubric on a given task. http://webquest.sdsu.edu/rubrics/rubrics.html

Wilson, M. (2006). Rethinking rubrics in writing assessment. Portsmouth, NH: Heinemann.

Rubric Builders and Generators

eMints.org (2011). Rubric/scoring guide. http://www.emints.org/webquest/rubric.shtml

General Rubric Generator. http://www.teach-nology.com/web_tools/rubrics/general/

RubiStar (2008). Create rubrics for your project-based learning activities. http://rubistar.4teachers.org/index.php


Suggested citation

Northern Illinois University Center for Innovative Teaching and Learning. (2012). Rubrics for assessment. In Instructional guide for university faculty and teaching assistants. Retrieved from https://www.niu.edu/citl/resources/guides/instructional-guide



Eberly Center

Creating and Using Rubrics

A rubric is a scoring tool that explicitly describes the instructor’s performance expectations for an assignment or piece of work. A rubric identifies:

  • criteria: the aspects of performance (e.g., argument, evidence, clarity) that will be assessed
  • descriptors: the characteristics associated with each dimension (e.g., argument is demonstrable and original, evidence is diverse and compelling)
  • performance levels: a rating scale that identifies students’ level of mastery within each criterion  

Rubrics can be used to provide feedback to students on diverse types of assignments, from papers, projects, and oral presentations to artistic performances and group projects.

Benefitting from Rubrics

For instructors, rubrics can:

  • reduce the time spent grading by allowing instructors to refer to a substantive description without writing long comments
  • help instructors more clearly identify strengths and weaknesses across an entire class and adjust their instruction appropriately
  • help to ensure consistency across time and across graders
  • reduce the uncertainty which can accompany grading
  • discourage complaints about grades

For students, rubrics can help them:

  • understand instructors’ expectations and standards
  • use instructor feedback to improve their performance
  • monitor and assess their progress as they work towards clearly indicated goals
  • recognize their strengths and weaknesses and direct their efforts accordingly

Examples of Rubrics

Here we provide a sample set of rubrics designed by faculty at Carnegie Mellon and other institutions. Although your particular field of study or type of assessment may not be represented, viewing a rubric designed for a similar assessment may give you ideas for the kinds of criteria, descriptions, and performance levels to use in your own rubric.

Paper Assignments

  • Example 1: Philosophy Paper This rubric was designed for student papers in a range of courses in philosophy (Carnegie Mellon).
  • Example 2: Psychology Assignment Short, concept application homework assignment in cognitive psychology (Carnegie Mellon).
  • Example 3: Anthropology Writing Assignments This rubric was designed for a series of short writing assignments in anthropology (Carnegie Mellon).
  • Example 4: History Research Paper This rubric was designed for essays and research papers in history (Carnegie Mellon).

Projects

  • Example 1: Capstone Project in Design This rubric describes the components and standards of performance from the research phase to the final presentation for a senior capstone project in design (Carnegie Mellon).
  • Example 2: Engineering Design Project This rubric describes performance standards for three aspects of a team project: research and design, communication, and team work.

Oral Presentations

  • Example 1: Oral Exam This rubric describes a set of components and standards for assessing performance on an oral exam in an upper-division course in history (Carnegie Mellon).
  • Example 2: Oral Communication This rubric is adapted from Huba and Freed, 2000.
  • Example 3: Group Presentations This rubric describes a set of components and standards for assessing group presentations in history (Carnegie Mellon).

Class Participation/Contributions

  • Example 1: Discussion Class This rubric assesses the quality of student contributions to class discussions. This is appropriate for an undergraduate-level course (Carnegie Mellon).
  • Example 2: Advanced Seminar This rubric is designed for assessing discussion performance in an advanced undergraduate or graduate seminar.

See also " Examples and Tools " section of this site for more rubrics.


Grading rubric strategies

A rubric is an evaluation tool that identifies criteria relevant to an assignment and describes levels of performance expectations for the assignment or other student work. Grading rubrics communicate expectations and help assess the extent to which students are achieving developmental learning outcomes. There are many types of rubrics, and most are organized into a table for ease of communication and ease of building them into WebCampus.

Rubrics are beneficial to both the instructors and the students for the following reasons:

  • Ease of communication: In WebCampus, a rubric can be integrated with a variety of assessments, such as assignments, discussions, presentations, and group projects. Your students will be able to quickly access a detailed explanation as to why they received the grade they did and how to improve for the next assignment.
  • Rubrics convey the assessment standards to students and also reduce the grading workload: you’ll be doing far less typing/writing if you use a rubric, and rubrics remove the monotony of repeating yourself.
  • They help ensure that the assignments are evaluated fairly and consistently.
  • Rubrics also provide students with clear learning goals, specific requirements, and acceptable performance standards for each assignment. This means that they help students become more aware and better able to efficiently evaluate their own work. When the assignments are scored with the rubric, students can easily identify and understand what area needs further effort in order to meet the performance standard.

Components of a rubric

At its core, a successful rubric involves three important components:

  • Rubric description: The rubric description provides information to students about what aspects of an assignment or project will be assessed and how they will be assessed.
  • Criteria: Criteria, often called dimensions, identify the characteristics or components of the task to be rated. For example, a rubric for a presentation could include content, organization, style, communication, use of visual aids, and presentation skills. A rubric for a written assignment or research paper could include argument, evidence, counter-evidence, sources, and citations.
  • Levels of achievement: Levels of achievement are often called the scale. The scale uses labels to describe how well or poorly a task has been performed by students. For example, you can use excellent, good, needs improvement, and poor; or sophisticated, competent, and not yet competent to set up the assignment scale.
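Before building a rubric in WebCampus, it can help to draft all three components in one place. A minimal sketch of that structure (the description, criteria, levels, and descriptors are all illustrative):

```python
# Illustrative draft of a rubric: each criterion maps each level of
# achievement to a descriptor, mirroring the table you would build online.
rubric = {
    "description": "How the unit 3 research paper will be assessed.",
    "criteria": {
        "Argument": {
            "Excellent":         "Thesis is specific and sustained throughout.",
            "Good":              "Thesis is clear but argued unevenly.",
            "Needs improvement": "Thesis is vague or only partially supported.",
            "Poor":              "No identifiable thesis.",
        },
        "Evidence": {
            "Excellent":         "Diverse, credible sources integrated and cited.",
            "Good":              "Credible sources, but integration is uneven.",
            "Needs improvement": "Few sources, weakly tied to claims.",
            "Poor":              "Little or no supporting evidence.",
        },
    },
}
```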

Steps to designing rubrics

1. Define the purpose of the assignment/assessment for which you are creating a rubric.

To define the purpose of the assignment, consider the following questions:

  • What exactly is the assigned task? Does it break down into a variety of different tasks? Are these tasks equally important? What are the learning objectives for this assignment or task? What do you want students to demonstrate in their completed assignments or performances?
  • What might an exemplary student product/performance look like? How might you describe an acceptable student product or performance? How might you describe work that falls below expectations?
  • What kind of feedback do you want to give students on their work or performance? Do you want to give them detailed feedback on a variety of criteria? Do you want to give them specific feedback that will help them improve their future work?

2. Define the criteria.

In defining the criteria, start by asking what knowledge and skills are required for the assignment/assessment. Make a list of these, group and label them, and eliminate any that are not critical. The list should contain no more than 6–7 criteria, but it need not include that many. You may also want to review the learning objectives for the course, the assignment prompt, existing grading checklists, comments on previous work, past examples of student work, and/or discuss the criteria with colleagues.

After drafting some criteria, consider their effectiveness: Can they be observed and measured? Are they important and essential? Are they distinct from other criteria?

3. Design the rating scale.

Overall, most rating scales include 3–5 levels: Excellent, Good, Adequate, Inadequate, Fail, etc. Use the descriptive labels you are most comfortable with, or none at all; numbers or grade designations (A, B, C, D, F) may be preferred over labels or descriptions. Finally, the rubric may be ordered from lowest to highest or highest to lowest.

4. Write descriptions for each level of the rating scale.

Write descriptions for each level of the rating scale you came up with. You may do this for each particular criterion of the rubric or provide information in a more holistic fashion. These descriptions help students understand your expectations.

Well-written descriptions describe observable and measurable behavior, use parallel language across the scale, indicate the degree to which the standards are met, and allow the rubric to be used with all assignments for which it was created.

5. Create your rubric.

After working hard on designing your rubrics, you should build them within WebCampus. Once you create a rubric in WebCampus, it will be available for students to view on the same page as the assignment description. Students can use it as they prepare the assignment, making sure they address each of the areas you note. When it’s time for grading, the same rubric will appear in SpeedGrader, where you can click on the rating the student earned for each category and provide per-category comments if you’d like. When you are finished, the Rubrics tool will auto-calculate the total for you. See below for more information on how to use the Rubrics tool in WebCampus.

WebCampus tool: Rubrics

Learn how to create and manage rubrics in your WebCampus course.

Please see the Canvas guide “How do I use a rubric to grade submissions in SpeedGrader?” for more information.

Or watch the rubrics overview video.

Sample rubrics

Here is a sample of a rubric designed for online discussion:

A rubric showing columns for criteria, ratings, and a summation of points in the categories.

Here is a sample of a rubric designed for an essay, as provided in an assignment or syllabus:

Representation of a sample rubric with varying numbers of columns based on the criteria identified.

Feedback with rubrics

Creating rubrics in WebCampus will take some time at the beginning, but this is time well spent. For instructors, rubrics can be reused over time and can help you grade more efficiently and with greater consistency. For students, rubrics are beneficial because they can receive more timely and meaningful feedback from instructors, and they can develop their self-regulated learning skills as they apply the feedback to their work (Stevens & Levi, 2013).

As you create rubrics, consider the following suggestions:

  • Involve students in rubric construction. Stevens and Levi point out the benefits of this approach in that it prevents misunderstanding of expectations and increases students’ awareness of themselves as "stakeholders" in the learning process.
  • Use pre-existing rubrics if they exist, and modify them as needed. You can find respected sources of rubrics such as the AAC&U VALUE Rubrics (registration required). You may also ask your colleagues about rubrics they use or work with them to create rubrics you share across classes.

Rippé, C. (2009, August 31). Using rubrics to improve online teaching, learning, and retention. Faculty Focus.

Stevens, D. D., & Levi, A. J. (2013). Introduction to rubrics (2nd ed.). Sterling, VA: Stylus Publishing, LLC.




Practice Interview Rubric


This checklist can be used to understand what components are important to focus on during an interview.


Berkeley Graduate Division


Examples of Rubric Creation

Creating a rubric takes time and requires thought and experimentation. Here you can see the steps used to create two kinds of rubric: one for problems in a physics exam for a small, upper-division physics course, and another for an essay assignment in a large, lower-division sociology course.

Physics Problems

In STEM disciplines (science, technology, engineering, and mathematics), assignments tend to be analytical and problem-based. Holistic rubrics can be an efficient, consistent, and fair way to grade a problem set, while an analytic rubric often gives a clearer picture of where a student should direct future learning efforts. Because holistic rubrics try to label overall understanding, they can lead to more regrade requests than analytic rubrics with more explicit criteria. When starting to grade a problem, first think about the relevant conceptual ingredients in the solution. Then look at a sample of student work to get a feel for student mistakes. Decide what rubric you will use (e.g., holistic or analytic, and how many points). Apply a holistic rubric by marking comments and sorting the students’ assignments into stacks (e.g., five stacks if using a five-point scale). Finally, check the stacks for consistency and mark the scores. The following is a sample homework problem from a UC Berkeley Physics Department undergraduate course in mechanics.

Homework Problem

Learning Objective

Solve for position and speed along a projectile’s trajectory.

Desired Traits: Conceptual Elements Needed for the Solution

  • Decompose motion into vertical and horizontal axes.
  • Identify that the maximum height occurs when the vertical velocity is 0.
  • Apply kinematics equation with g as the acceleration to solve for the time and height.
  • Evaluate the numerical expression.

A note on analytic rubrics: If you feel more comfortable grading with an analytic rubric, you can assign a point value to each concept. The drawback to this method is that it can sometimes unfairly penalize a student who has a good understanding of the problem but makes a lot of minor errors. Because the analytic method tends to have many more parts, it can take quite a bit more time to apply. In the end, your analytic rubric should give results that agree with a common-sense assessment of how well the student understood the problem; this sense is well captured by the holistic method.

Holistic Rubric

A holistic rubric, closely based on a rubric by Bruce Birkett and Andrew Elby:

[a] This policy especially makes sense on exam problems, for which students are under time pressure and are more likely to make harmless algebraic mistakes. It would also be reasonable to have stricter standards for homework problems.

Analytic Rubric

The following is an analytic rubric that takes the desired traits of the solution and assigns point values to each of the components. Note that the relative point values should reflect the importance in the overall problem. For example, the steps of the problem solving should be worth more than the final numerical value of the solution. This rubric also provides clarity for where students are lacking in their current understanding of the problem.

Try to avoid penalizing multiple times for the same mistake by choosing your evaluation criteria to be related to distinct learning outcomes. In designing your rubric, you can decide how finely to evaluate each component. Having more possible point values on your rubric can give more detailed feedback on a student’s performance, though it typically takes more time for the grader to assess.
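As a minimal sketch of that kind of analytic rubric (the point values are illustrative; note the conceptual steps outweigh the final numerical answer):

```python
# Illustrative point values for the projectile problem's desired traits.
trait_points = {
    "decompose motion into vertical and horizontal axes": 3,
    "identify that max height occurs at zero vertical velocity": 3,
    "apply kinematics with g to solve for time and height": 3,
    "evaluate the numerical expression": 1,
}

# Points earned by one hypothetical student solution.
earned = {
    "decompose motion into vertical and horizontal axes": 3,
    "identify that max height occurs at zero vertical velocity": 3,
    "apply kinematics with g to solve for time and height": 2,  # sign error
    "evaluate the numerical expression": 1,
}
print(f"score: {sum(earned.values())} / {sum(trait_points.values())}")  # 9 / 10
```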

Of course, problems can, and often do, feature multiple learning outcomes in tandem. When a mistake could be assigned to multiple criteria, it is advisable to check that the overall problem grade is in line with the student’s mastery of the problem. Not having to decide how particular mistakes should be deducted from the analytic rubric is one advantage of the holistic rubric. When designing problems, it can be very beneficial to avoid several subparts that rely on prior answers, since these tend to disproportionately skew the grades of students who miss an ingredient early on. When possible, consider making independent problems to test different learning outcomes.

Sociology Research Paper

An introductory-level, large-lecture course is a difficult setting for managing a student research assignment. With the assistance of an instructional support team that included a GSI teaching consultant and a UC Berkeley librarian [b], sociology lecturer Mary Kelsey developed the following assignment:

This was a lengthy and complex assignment worth a substantial portion of the course grade. Since the class was very large, the instructor wanted to minimize the effort it would take her GSIs to grade the papers in a manner consistent with the assignment’s learning objectives. For these reasons Dr. Kelsey and the instructional team gave a lot of forethought to crafting a detailed grading rubric.

Desired Traits

  • Argument
  • Use and interpretation of data
  • Reflection on personal experiences
  • Application of course readings and materials
  • Organization, writing, and mechanics

For this assignment, the instructional team decided to grade each trait individually because there seemed to be too many independent variables to grade holistically. They could have used a five-point scale, a three-point scale, or a descriptive analytic scale. The choice depended on the complexity of the assignment and the kind of information they wanted to convey to students about their work.

Below are three of the analytic rubrics they considered for the Argument trait and a holistic rubric for all the traits together. Lastly you will find the entire analytic rubric, for all five desired traits, that was finally used for the assignment. Which would you choose, and why?

Five-Point Scale

Three-Point Scale

Simplified Three-Point Scale

Numbers Replaced with Descriptive Terms

For some assignments, you may choose to use a holistic rubric, or one scale for the whole assignment. This type of rubric is particularly useful when the variables you want to assess just cannot be usefully separated. We chose not to use a holistic rubric for this assignment because we wanted to be able to grade each trait separately, but we’ve completed a holistic version here for comparative purposes.

Final Analytic Rubric

This is the rubric the instructor finally decided to use. It rates five major traits, each on a five-point scale. This allowed for fine but clear distinctions in evaluating the students’ final papers.

[b] These materials were developed during UC Berkeley’s 2005–2006 Mellon Library/Faculty Fellowship for Undergraduate Research program. Members of the instructional team who worked with Lecturer Kelsey in developing the grading rubric included Susan Haskell-Khan, a GSI Center teaching consultant and doctoral candidate in history, and Sarah McDaniel, a teaching librarian with the Doe/Moffitt Libraries.

How to create an interview rubric that actually works



Interview rubrics are a great way to reduce bias. Here's how to build a rubric for any technical role.

Hiring software engineers is a time-intensive and challenging process. It can also feel subjective, with ratings for the same candidates often varying from interviewer to interviewer. In fact, one study found that 65% of technical recruiters see bias in their current hiring processes. Consistent processes are key to ensuring that your engineering team can interview efficiently while also providing fair and objective evaluations of candidates.

To improve efficiency and limit bias, a structured rubric with a concrete scoring guide can help you evaluate a candidate’s competencies for a technical role. However, developing a rubric that fits your hiring needs can be tricky – especially for mid- to senior-level engineering roles, where leadership and other non-technical skills matter greatly.


How do you actually build an interview rubric? The following four tips will help you create an effective rubric for on-site or remote interviews for any level of technical hire, from early career to senior-level.

1. Quantify all job-relevant skills

Interviewers often have to screen and debrief on a dizzying number of candidates to find someone who is the right fit for a role. It’s easy to make the mistake of allowing quick qualitative feedback for the sake of time. However, this often makes a fair comparison between candidates nearly impossible (is a “good” candidate better than a “solid” one?), and it leaves recruiters looking back on notes with the ambiguous responsibility of interpreting raters’ subjective comments.

With a numerical system, where level of performance is mapped to a specific number, you can create a single, quantified final score to compare skills between candidates. Better yet: ask interviewers to provide concrete observations of behavior – what the candidate did – that led them to their choice of score. Debrief sessions can be focused on hard numbers rather than opinions that may have little to do with the job-relevant skills you want to assess. Using numbers both streamlines evaluation and centers discussion on more objective measures.
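As a minimal sketch of that aggregation (the skills, weights, and scores below are illustrative): each interviewer maps performance to a 1–5 number per skill, and the numbers roll up into one weighted score per candidate:

```python
# Illustrative: average each skill across interviewers, then weight and sum.
weights = {"problem_solving": 0.4, "system_design": 0.3, "collaboration": 0.3}

# scores[interviewer][skill], each on a 1-5 scale
scores = {
    "interviewer_a": {"problem_solving": 4, "system_design": 3, "collaboration": 5},
    "interviewer_b": {"problem_solving": 5, "system_design": 3, "collaboration": 4},
}

final = sum(
    weights[skill] * sum(s[skill] for s in scores.values()) / len(scores)
    for skill in weights
)
print(f"final score: {final:.2f} / 5")  # -> 4.05 / 5, comparable across candidates
```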

2. Define what each score means to ensure consistency across interviewers

A rubric is useful only if each interviewer knows what each score means. Every interviewer approaching a rubric should be able to understand what observable behaviors or answers (not just subjective impressions) merit a particular score. For example, when evaluating a candidate’s collaboration skills, an observation like, “The candidate gave several specific examples of working with others to achieve a desired outcome,” is much stronger than, “The candidate seemed like a team player.” This is critical for consistency in ratings across candidates. When all candidates are being compared on the same criteria, they have a fairer and more equitable shot at success.

When designing the rubric, outline the range of scores possible for each skill and which observable behaviors the candidate should demonstrate in order to achieve each score. The more specific you can get, the better.

3. Evaluate both technical and soft skills

Technical competency is not the only skill an engineer needs to be successful, so it shouldn’t be the only area you assess your candidates on – particularly when it comes to mid- and senior-level hires. Engineers need to possess a wide range of both hard and soft skills to be successful in their roles, including effective communication, collaboration, and leadership. Studies show, for example, that teams who communicate effectively increase their productivity by as much as 25%.

Here’s an example of the level of detail you’ll want to achieve when defining skills to measure. Jamie Talbot, former Director of Engineering at Medium, developed a rubric that assessed the following areas (and many others) when interviewing software engineers for his team:

  • Problem-solving
  • System design
  • Resoluteness
  • Collaboration
  • Values alignment

Consider all of the relevant competencies that are important to the role you’re hiring for, and pick the most important ones to include in your rubric. Then, spell out what each score looks like (“strong no” through “strong yes,” for instance) for each of these competency areas. The interview process is a great opportunity to evaluate the candidate on any skill – whether hard or soft – that matters to your decision-making.

4. Use early interviews to calibrate your rubric

Once you’ve built your rubric, check that it’s working as intended. We recommend having every interviewer score the same interview independently using the new rubric. Look to see if there are score discrepancies and which items on the rubric were most subject to different interpretations by your interviewers. If you see places where you can improve the rubric or make it more specific, now is the time to do so.

This process will ensure that each interviewer is interpreting the scoring criteria in the same way, and that you’re ultimately selecting the best candidate for the role in a consistent, fair, and objective manner. If you have time, we recommend repeating this process with new interviews.
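One simple way to spot those discrepancies (the rubric items and scores below are illustrative) is to compute the score spread per item across interviewers; the items with the widest spread are the ones whose wording most needs tightening:

```python
# Illustrative calibration check: every interviewer scored the same interview.
# ratings[item] = one score per interviewer, each on a 1-5 scale.
ratings = {
    "problem_solving": [4, 4, 5],
    "communication":   [2, 4, 5],   # wide spread: ambiguous criteria?
    "system_design":   [3, 3, 3],
}

for item, item_scores in sorted(ratings.items(),
                                key=lambda kv: max(kv[1]) - min(kv[1]),
                                reverse=True):
    spread = max(item_scores) - min(item_scores)
    flag = "  <- revisit the wording" if spread >= 2 else ""
    print(f"{item}: spread {spread}{flag}")
```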

A great rubric can help interviewers approach recruiting in an efficient, standardized way and make more objective decisions. The result is a hiring process that gives all candidates a fair chance at success and ensures engineering teams land the hires that are the best and most qualified fit for the role.


Effective Hiring Rubrics: A Guide to Objective Assessments

A hiring rubric is like a roadmap for picking the right person for a job: a list of the skills and qualities everyone agrees matter for the role. Interviewers rate candidates against that list, often on a scale such as 1 to 5, to show how well each candidate measures up. Because everyone applies the same criteria, the process stays fair and consistent, which matters most when several people interview the same candidate. A good rubric leads to better, data-driven decisions, and like a good recipe it can be tweaked over time to work even better. It also helps companies make sure they're following the rules when they hire.

Creating an interview rubric that works effectively for assessing candidates in technical roles involves careful planning and attention to detail. Here's a more in-depth breakdown of the key steps:

Quantify Job-Relevant Skills: Use a numerical system to quantify job-relevant skills. This system should map the level of performance to a specific number, allowing for a quantified final score for each candidate. For example, you could use a scale of 1 to 5, where 1 represents the lowest proficiency and 5 indicates the highest. Encourage interviewers to provide concrete observations or examples of candidate behavior that support their chosen score. This approach streamlines the evaluation process and shifts the focus towards objective measures.
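
As a concrete illustration, the scale and a candidate's per-skill scores can be written down as data and combined into a final number. The sketch below is a minimal Python example; the scale labels and skill names are assumptions for illustration, not a prescribed standard.

```python
# A minimal sketch of a 1-5 performance scale; labels and skill names
# are illustrative assumptions, not a prescribed standard.
SCALE = {
    1: "No evidence of the skill",
    2: "Below the bar for the role",
    3: "Meets the bar",
    4: "Exceeds the bar",
    5: "Exceptional, rarely seen",
}

# One interviewer's scores for a single candidate, keyed by skill.
scores = {"algorithms": 4, "code quality": 3, "communication": 5}

# Average the per-skill scores into a single quantified final score.
final_score = sum(scores.values()) / len(scores)
print(f"Final score: {final_score:.2f} on a 1-5 scale")  # 4.00
```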

Define Score Meanings: To ensure consistency across interviewers, it's crucial that everyone understands what each score on the rubric means. For instance, if you're assessing a candidate's problem-solving skills, specify what behaviors or responses correspond to each score. This could include detailed examples like "The candidate proposed innovative solutions to complex problems," rather than vague descriptions like "The candidate seemed creative." The more specific you can be in defining these behaviors, the better.
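
One way to make score meanings unambiguous is to record the anchors as data that every interviewer sees. A small sketch, with hypothetical anchor wording for a problem-solving criterion:

```python
# Hypothetical score anchors for a "problem solving" criterion; each
# score maps to an observable behavior, not a vague impression.
PROBLEM_SOLVING_ANCHORS = {
    1: "Could not produce a working approach, even with hints",
    2: "Reached a partial solution with substantial interviewer help",
    3: "Solved the core problem with minor hints",
    4: "Solved the problem independently and handled edge cases",
    5: "Proposed multiple solutions and compared their trade-offs",
}

def anchor_for(score: int) -> str:
    """Return the observable behavior that justifies a given score."""
    return PROBLEM_SOLVING_ANCHORS[score]

print(anchor_for(4))  # "Solved the problem independently and ..."
```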

Evaluate Both Technical and Soft Skills: Technical competence is important, but it's not the only aspect to consider, especially for mid- to senior-level technical roles. Soft skills like effective communication, collaboration, and leadership are equally vital. Determine the relevant competencies for the role and include them in the rubric. Consider creating a list of skills that matter, both technical and soft, and define what each score level represents for each skill. This level of detail allows you to assess candidates comprehensively.
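
To sketch what such a competency list might look like in practice, here is one hypothetical mix of technical and soft skills with assumed weights (both the skills and the weights are illustrative only):

```python
# An assumed competency list mixing technical and soft skills; the
# weights are illustrative and should sum to 1.0.
COMPETENCIES = {
    "system design": 0.30,  # technical
    "coding":        0.30,  # technical
    "communication": 0.25,  # soft
    "collaboration": 0.15,  # soft
}

def weighted_score(scores: dict[str, int]) -> float:
    """Combine per-skill 1-5 scores into one weighted final score."""
    return sum(COMPETENCIES[skill] * s for skill, s in scores.items())

print(weighted_score({"system design": 4, "coding": 3,
                      "communication": 5, "collaboration": 4}))  # 3.95
```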

Calibrate Your Rubric: Once you've designed your rubric, it's important to test its effectiveness. Have each interviewer independently evaluate the same interview using the rubric. This process helps identify any score discrepancies and areas where interpretation differs among interviewers. By doing this, you can refine and improve the rubric to ensure consistent, fair, and objective assessments. If possible, repeat this calibration process with new interviews to further enhance the rubric's reliability.
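
A simple way to run this calibration check is to compare every interviewer's scores for the same recorded interview and flag criteria with a wide spread. A sketch with made-up raters and numbers:

```python
# Calibration sketch: all raters score the same interview; a wide
# spread on a criterion suggests its descriptions need tightening.
# Rater names and scores are made up.
ratings = {
    "problem solving": {"ana": 4, "ben": 4, "chris": 3},
    "communication":   {"ana": 5, "ben": 2, "chris": 3},
}

for criterion, by_rater in ratings.items():
    spread = max(by_rater.values()) - min(by_rater.values())
    note = "  <- revisit this criterion's wording" if spread >= 2 else ""
    print(f"{criterion}: spread {spread}{note}")
```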

To make a rubric that really hits the mark, you have to put in careful thought and a strong commitment to fairness and consistency. In return, your hiring team can size up candidates in a more level-headed way, resulting in smarter hiring choices and a more even-handed hiring process.

Hiring rubrics play a crucial role in keeping things fair and making sure everyone's on the same page, no matter what kind of interview you run. Let's look at the different interview styles and how rubrics support each of them:

Job Application Screening Rubric:

  • This rubric is used at the initial stage to evaluate job applications.
  • It assesses factors like qualifications, work experience, and personal characteristics.
  • It helps employers filter applicants based on minimum qualifications, streamlining the interview process.

Phone Interview Rubric:

  • A phone interview rubric is an essential tool for assessing candidates during initial phone interviews.
  • It ensures that the interviewer focuses on role-specific needs.
  • It helps evaluate candidates against the same set of criteria, maintaining consistency in the evaluation process.

Personal Interview Rubric:

  • Personal interviews, whether in-person or via video, are crucial for getting to know candidates on a deeper level.
  • A personal interview rubric provides a structured approach to assess candidates' performance in face-to-face or video interviews.
  • It aids interviewers in evaluating candidates based on predetermined criteria, ensuring consistency and fairness.

Panel Interview with Evaluation Board:

  • Panel interviews involve multiple interviewers or assessors.
  • An evaluation board, often used in panel interviews, adds an extra layer of objectivity.
  • It ensures that diverse viewpoints are considered, resulting in a more comprehensive and fair evaluation of candidates.
  • The ultimate goal is to select the most suitable candidate for the job role.

Blind Hiring Interviews:

  • In blind hiring, the interviewer does not have access to the candidate's resume or identifying information.
  • It's typically used to assess a candidate's skills rather than their qualifications.
  • Hiring rubrics in this context focus on assessing skills, competencies, and other job-related factors without being influenced by personal information.

Employers are free to pick the interview style that fits their needs and then use the matching rubric to keep candidate evaluation consistent and fair. Whether you're sifting through applications, running phone or in-person interviews, or convening a full panel, these rubrics are versatile tools that keep the whole hiring process effective and objective.

Rubrics come in various types, and the choice of which one to use depends on what and how you intend to evaluate. The primary types of rubrics include holistic, analytical, generic, and task-specific.

Holistic Rubric:

In a holistic rubric, all evaluation criteria are combined into a single score. These are useful when assessing overall performance on a task. Holistic rubrics are generally straightforward to score, as they provide a single overall rating. However, they do not offer detailed information on a student's performance for each specific criterion. They are typically employed for simpler tasks and performances, such as evaluating reading fluency or responses to essay questions. They are also effective for obtaining a quick snapshot of overall quality or achievement and for judging the impact of a product or performance.

Analytical Rubric:

Analytical rubrics, on the other hand, assess each criterion separately and assign distinct descriptive ratings to them. This approach provides a more detailed evaluation but tends to be more time-consuming to score. Analytical rubrics are particularly valuable for judging complex performances that involve multiple significant criteria. They are suitable for providing specific information and feedback to students.
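
The difference is easy to see in miniature: a holistic rubric yields one overall number, while an analytic rubric yields one number per criterion, which is what makes targeted feedback possible. The criteria and scores below are illustrative only.

```python
# Holistic: a single judgment of the whole performance.
holistic_score = 3

# Analytic: one score per criterion (illustrative criteria).
analytic_scores = {
    "thesis":       4,
    "organization": 3,
    "mechanics":    2,
}

# Only the analytic view can say *what* to improve.
weakest = min(analytic_scores, key=analytic_scores.get)
print(f"Overall (holistic): {holistic_score}")
print(f"Weakest criterion (analytic): {weakest}")  # mechanics
```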

Generic Rubric:

A generic rubric contains criteria that are applicable across various tasks and can be used for similar tasks or performances. Like analytical rubrics, criteria in a generic rubric are assessed separately, allowing for detailed evaluation. Generic rubrics are particularly useful when students are not all performing the same task, or when students have the flexibility to choose evidence that demonstrates their competence in a particular skill or product. Instructors may use generic rubrics when striving to achieve consistent grading across different course sections.

Task-Specific Rubric:

Task-specific rubrics are designed to assess a specific task, with unique criteria evaluated separately. These rubrics may not cover every single criterion involved in a particular task, which could potentially overlook a student's unique solution. They are typically employed when consistent scoring is required quickly and efficiently, such as in large-scale and "high-stakes" contexts like state-level accountability assessments. Task-specific rubrics are also valuable when the primary goal is to determine whether students possess specific knowledge, facts, equations, methods, or procedures related to a particular task.

Developing an effective rubric is a critical part of a company-wide hiring process. The company should establish a set of standardized measures that align with its mission and values. To kick off a new candidate search, it helps to introduce a template that encourages collaboration between hiring managers and recruiters: the template guides them in identifying the core skills essential for the next hire and establishes a consistent standard for candidate evaluation.

A company that uses hiring rubrics can empower its teams to create department-level rubrics with more specific, relevant criteria. For example, an engineering team might break interview performance down into five categories: technical excellence, communication, critical thinking, culture add, and potential. These departmental rubrics combine the company's core values with the role's specific technical requirements, ensuring uniform feedback across the team. The "culture add" and "potential" categories deserve particular emphasis in this context.

When creating rubrics, remember that not every interview needs to assess every skill; that is why companies gather multiple data points from multiple interviews. A good rubric-writing guide keeps these categories flexible and complementary: each section lets interviewers advocate for candidates by highlighting their core strengths, recognizing that different interviews shed light on different aspects of a candidate's qualifications. This approach supports well-informed, balanced hiring decisions.
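
To make the engineering example concrete, the five categories could be expressed as a weighted template like the sketch below. The weights, and the rule for re-weighting when an interview skips a category, are assumptions for illustration.

```python
# A sketch of a department-level rubric using the five categories
# named above; the weights are assumed for illustration.
ENGINEERING_RUBRIC = {
    "technical excellence": 0.30,
    "communication":        0.20,
    "critical thinking":    0.20,
    "culture add":          0.15,
    "potential":            0.15,
}

def summarize(interview_scores: dict[str, int]) -> float:
    """Weighted 1-5 summary. Not every interview assesses every
    category; missing ones are dropped and the rest re-weighted."""
    used = {c: w for c, w in ENGINEERING_RUBRIC.items()
            if c in interview_scores}
    total = sum(used.values())
    return sum(w * interview_scores[c] for c, w in used.items()) / total

# An interview that only covered two of the five categories:
print(round(summarize({"technical excellence": 4,
                       "communication": 3}), 2))  # 3.6
```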

4 principles for designing an exceptional rubric for an on-site interview

Technical recruiting is challenging and time-intensive. It can also feel subjective, with ratings for the same candidates often varying from interviewer to interviewer. You want to ensure that your engineering team can interview efficiently while also providing fair and objective evaluations of candidates. Consistency in candidate assessment is more important than ever, with 65% of technical recruiters finding bias in current hiring processes. 

To ensure efficiency and limit bias, a structured rubric with a concrete scoring guide can help you evaluate a candidate’s competencies for a technical role. However, developing a rubric that fits your hiring needs can be tricky.

We’ve put together four key principles to help you design an exceptional rubric for any technical role.

Principle 1: Use a numerical system to assess relevant skills.

Interviewers often have to screen and debrief on a dizzying number of candidates to find someone who is the right fit for a role. It's easy to fall back on quick qualitative feedback for the sake of time, but this often makes fair comparison between candidates nearly impossible and leaves recruiters, looking back at their notes, with the ambiguous job of interpreting what “pretty good” or “okay” actually meant.

With a numerical system, where level of performance is mapped to a specific number, you can create a single final score to compare skills between candidates. Debrief sessions can be focused on hard numbers rather than opinions that might have nothing to do with the skill area you want to assess. Using numbers both streamlines evaluation and centers discussion on more objective measures.
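
In practice, this means a debrief can start from a ranked list of final scores rather than a pile of adjectives. A tiny sketch, with hypothetical candidates and numbers:

```python
# Hypothetical final scores produced by the rubric for each candidate.
final_scores = {"Candidate A": 3.8, "Candidate B": 4.4, "Candidate C": 3.2}

# Rank candidates so the debrief starts from comparable numbers.
for name, score in sorted(final_scores.items(),
                          key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.1f}")
```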

Principle 2: Spell out what each score means to ensure consistency among raters.

A rubric is useless if the interviewer doesn’t know what each score means. Every interviewer approaching a rubric should be able to understand what observable behaviors or answers merit a particular score. This is critical for consistency in ratings across candidates—every candidate should have the same shot at success because they are being compared on the same criteria.

When designing the rubric, outline the range of scores possible for each skill and which observable behaviors the candidate should demonstrate in order to achieve each score. The more specific you can get, the better.

Principle 3: Include both technical and communication skills in your assessment.

Technical competency is not the only skill an engineer needs to be successful, so it shouldn't be the only area you assess candidates on. Engineers need a wide range of both hard and soft skills to succeed in their roles, including effective communication and collaboration. Studies show that teams that communicate effectively can increase their productivity by as much as 25%.

Consider all of the relevant competencies that are important to the role you’re hiring for, and pick the most important ones to include in your rubric. The interview process is a great opportunity to evaluate the candidate on any skill—whether hard or soft—that matters to your decision-making. 

Principle 4: Calibrate your rubric through initial interviews.

Once you’ve finished creating your rubric, check that it’s working as intended. We recommend having every interviewer score the same interview independently using the new rubric. Check if there are score discrepancies and which items on the rubric were subject to interpretation. If you see places where you can improve the rubric or make it more specific, now is the perfect time to do so.

This process will ensure that each interviewer is interpreting the scoring criteria in the same way, and that you’re ultimately selecting the best candidate for the role in a consistent and equitable manner. If you have time, feel free to repeat this process with new interviews.

A great rubric helps interviewers approach recruiting in an efficient, standardized way and make fairer, more objective decisions. The structured interview and validated technical evaluation tools at CodeSignal can help you design the rubrics you need to save engineering time and get the most qualified hires. Take the first step and schedule a discovery call here.
