Columbia University in the City of New York – Office of Teaching, Learning, and Innovation

Leveraging Annotation Activities and Tools to Promote Collaborative Learning

Collaborative annotation activities support learning by encouraging students to learn with and from their peers. Research has shown that a collaborative learning environment can help strengthen student confidence, as well as foster their critical thinking skills and active engagement in learning. The following resource offers an overview of some of the benefits of collaborative annotation, as well as specific tools and sample activities to help facilitate this collaboration. 

On this page: 

  • Getting Started with Collaborative Annotation Activities
  • Tools to Support Collaborative Annotation Activities
  • Resources and References

The CTL is here to help! 

Want to incorporate annotation activities in your course? Trying to determine what platform might best support your pedagogical needs? The CTL is here to help – email [email protected] to schedule a 1-1 consultation!

Cite this resource: Columbia Center for Teaching and Learning (2022). Leveraging Annotation Activities and Tools to Promote Collaborative Learning. Retrieved [today’s date] from https://ctl.columbia.edu/resources-and-technology/resources/activities-tools-collaborative-learning/

Getting Started with Collaborative Annotation Activities 

Whatever the modality, we must remember that learning is a social process. A student does not learn alone (Garg & Dougherty, 2022). 

Research shows that learning is a social process – students benefit when they have opportunities to collaborate and learn both from and with each other (Garg & Dougherty, 2022; Laal & Ghodsi, 2012). The benefits of a collaborative learning environment span the social, psychological, and academic spheres, and range from helping students build confidence and social support systems, to helping students foster critical thinking skills and encouraging active engagement in their learning (Barkley et al., 2014; Laal & Ghodsi, 2012). To learn more about the benefits of collaborative learning, watch the following video from the CTL's MOOC "Inclusive Teaching: Supporting All Students in the College Classroom."

One powerful way to promote and encourage social learning is through collaborative annotation activities (e.g., active reading assignments). Collaborative annotation assignments can “promote high pre-class reading compliance, engagement, and conceptual understanding,” leading to deeper student interaction and engagement with course materials, while also helping instructors better gauge students’ understanding, comprehension, and engagement (Miller et al., 2018, p. 3). The table below offers several sample annotation activities that foster collaboration and engage students in active learning. All of the activities included below can be adapted for any classroom setting, with or without the use of additional tools. The following section will highlight tools that instructors can use to meet their specific learning goals. 

Tools to Support Collaborative Annotation Activities 

As with any classroom assignment or activity, it’s important for collaborative annotation activities to align with your course objectives and goals. Likewise, the tool you use for a given activity should align with the goals of the activity itself. The following section introduces several collaborative learning tools. 

Perusall

Perusall is a social annotation tool that engages students in reading as a communal activity. In doing so, it makes class reading a more interactive activity for students, and offers instructors tools to gauge student understanding and engagement. For example, instructors can use Perusall's "confusion report" to identify the most popular questions, as well as the specific areas in the text where students are commenting and annotating most; the report also categorizes questions by theme or topic. At the same time, Perusall helps students feel better prepared for class participation, as it encourages reading for comprehension rather than for completion. Students can interact with the text, their peers, and their instructor throughout a reading using Perusall's commenting and chat features. In this way, Perusall promotes deeper student engagement with course content and with their peers.

Perusall supports a range of course materials and formats, including PDF documents, web pages, podcasts, videos, and more. Additionally, Perusall offers several accessibility features, including the OpenDyslexic font and built-in text-to-speech. Within Perusall, instructors and students alike have a great deal of flexibility and choice – instructors can assign group reading assignments, specify particular chapters or sections of a text, and pose questions throughout the text; students can upvote their peers' contributions, ask additional questions, or join a threaded discussion on a particular area of the text.

To see how Columbia faculty have found success using Perusall, see the CTL Voices submission Engaging Students Beyond the Classroom by Using Perusall by Dr. Weiping Wu (Professor of Architecture, Planning, and Preservation and a 2021 Provost's Senior Faculty Teaching Scholar).

Hypothesis

Hypothesis works on any webpage; a browser plugin lets users layer their own highlights and comments over the existing page. There is also a process for annotating PDFs provided by instructors.

Because it overlays existing web pages, there are limitations to the kinds of materials students can annotate with Hypothesis. For example, Hypothesis does not allow for the annotation of individual videos at specific timestamps. Like Perusall, Hypothesis allows for threaded discussions, assigned annotation groups, and deeper engagement with course reading materials. Hypothesis allows students across course sections to annotate one shared document, whereas other tools (e.g., Perusall) require each course section to have its own copy. 

Mediathread

Mediathread is a tool developed by the Columbia Center for Teaching and Learning to support the collection, curation, close analysis, and annotation of a range of media texts. Used across disciplines and Schools at Columbia, Mediathread offers different ways for students to engage with media objects. Given its focus on media engagement, Mediathread helps students curate video collections, annotate at specific timestamps, and embed video clips within their responses to instructor-generated prompts. For more information on Mediathread, including different assignment types and how to get started, see the CTL's resource on Mediathread.

Google Suite

The Google suite includes Google Docs, Google Slides, and more; each of these tools can be used for collaborative annotation activities. For example, Google Docs is a convenient way to set up collaborative note-taking, as well as collaborative annotations or comments on a specific text or image. Similarly, the Q&A feature in Google Slides allows students to annotate and interact with instructors in real time during a lecture. For more details on how to set up the Google Suite tool of your choice, see the CTL's resource on Collaborative Learning.

Zoom

The whiteboard and annotation features in Zoom are also helpful for setting up collaborative annotation spaces. These features allow for annotating images, documents, and more within a single shared screen. For more information about Zoom whiteboard and annotation features, see the CTL video on setting up and engaging students in Zoom.

Tools Comparison Table 

The following table highlights several key features in a few of the tools described above. Please note, however, that the features included in the table are not exhaustive; in many instances, a tool's features can be adapted to meet a specific need. For additional support in choosing the right tool for your collaborative annotation activities, email the CTL at [email protected]

As you read this table, consider questions such as:

  • What features do I need to run the assignment?
  • How might the tool suit my course and activity goals?

Resources & References

Barkley, E.F., Major, C.H., & Cross, K.P. (2014). Collaborative learning techniques: A handbook for college faculty (2nd ed.). Jossey-Bass.

Columbia Center for Teaching and Learning. (2021). Collaborative Learning.

Garg, N., & Dougherty, K.D. (2022). Education surges when students learn together. Inside Higher Ed.

Laal, M., & Ghodsi, S.M. (2012). Benefits of collaborative learning. Procedia – Social and Behavioral Sciences, 31, 486–490.

Miller, K., Lukoff, B., King, G., & Mazur, E. (2018). Use of a social annotation platform for pre-class reading assignments in a flipped introductory physics class. Frontiers in Education.


Center for Teaching Innovation

Ideas for group & collaborative assignments

Why collaborative learning?

Collaborative learning can help students:

  • develop higher-level thinking, communication, self-management, and leadership skills
  • explore a broad range of perspectives and find opportunities for voice and expression
  • build teamwork skills and ethics
  • prepare for real-life social and employment situations
  • increase retention, self-esteem, and responsibility

Collaborative activities & tools

Group brainstorming & investigation in shared documents

Have students work together to investigate or brainstorm a question in a shared document (e.g., a structured Google Doc, Slide, or Sheet) or an online whiteboard, and report their findings back to the class.

Benefits

  • Immediate view of contributions
  • Synchronous & asynchronous group work
  • Students can come back to the shared document to revise, re-use, or add information

Tools

  • Google Workspace (Google Docs, Sheets, Forms, & Slides)
  • Microsoft 365 (Word, Excel, PowerPoint, Teams)
  • Cornell Box (document storage)
  • Whiteboarding tools (Zoom, Jamboard, Miro, Mural, etc.)

Considerations

  • Sharing settings
  • Global access
  • Accessibility

Group discussions with video conferencing and chat

Ask students to post an answer to a question or share their thoughts about class content in the Zoom chat window (best for smaller classes). For large classes, ask students in Zoom breakout rooms to choose a group notetaker to post group discussion notes in the chat window after returning to the main class session.

You can also use a discussion board for asynchronous group work.

Benefits

  • Students can post their reflections in real time and read/share responses
  • If group work is organized asynchronously, students can come back to the discussion board on their own time

Tools for synchronous group work

  • Zoom breakout rooms
  • Microsoft Teams
  • Canvas Conferences

Tools for asynchronous group work

  • Canvas Group Discussions
  • Ed Discussion

Considerations

  • Stable access to WiFi with sufficient bandwidth
  • Clear expectations about participation and pace for asynchronous discussion boards
  • Monitoring discussion boards

Group projects: creation

Students retrieve and synthesize information by collaborating with their peers to create something new: a written piece, an infographic, a piece of code, or a collective response to sample test questions.

Benefits

  • Group projects may benefit from the features of a shared online space (the ability to chat, hold video conferences, share files and links, post announcements and discussion threads, and build content)

Tools

  • Canvas Groups with all available tools

Setting up groups and group projects for success may require the following steps:

  • Introduce group or peer work early in the semester
  • Establish ground rules for participation
  • Plan for each step of group work
  • Explain how groups will function and how group work will be graded

Peer learning, critiquing, giving feedback

Students submit their first draft of an essay, research proposal, or a design, and the submitted work is distributed for peer review. If students work on a project in teams, they can check in with each other through a group member evaluation activity. Students can also build on each other’s knowledge and understanding of the topic in Zoom breakout room discussions or by sharing and responding in an online discussion board.

When providing feedback and critiquing, students must apply their knowledge and problem-solving skills and develop feedback literacy. Students also engage more deeply with the assignment requirements and/or the rubric.

Tools

  • FeedbackFruits Peer Review and Group Member Evaluation
  • Canvas Peer Review
  • Turnitin PeerMark
  • Zoom breakout rooms
  • Canvas discussions and other discussion tools

Considerations

  • Peer review is a multistep activity and may require careful design to help students achieve the learning outcomes; the assignment requirements will inform which platform and which settings are best
  • Make the first peer review activity low-stakes so students can get used to the platform and the flow
  • A carefully written rubric guides students through the process of giving feedback and yields more constructive feedback
  • Allow generous timing, so students have enough time to first submit their work and then give feedback

Group reflection & social annotation activities

Students can annotate, highlight, discuss, and collaborate on text documents, images, video, audio, and websites. Instructors can post guiding questions for students to respond to, and allow students to post their own questions to be answered by peers. This is a great reading activity leading up to an in-person discussion.

Benefits

  • Instructors can pose discussion topics and/or questions for students to answer as they read a paper
  • Students can read and annotate collaboratively, synchronously or asynchronously
  • Collaborative annotation helps students notice parts of the reading they might otherwise have overlooked
  • Annotating can be done in small groups

Tools

  • Perusall
  • FeedbackFruits Interactive Media (annotations on documents, video, and audio)

Considerations

  • Provide students with thorough instructions
  • These are third-party tools, so settings should be selected thoughtfully
  • Accessibility (Perusall)

Group learning with polling and team competitions

Instructors can poll students while they are in breakout rooms using Poll Everywhere. This activity is great for checking understanding and peer learning activities, as students will be able to discuss solutions.

Benefits

  • Students can share their screen in a breakout room and/or answer questions together
  • The activity can be facilitated as a competition among teams

Tools

  • Poll Everywhere competitions, surveys, and polls facilitated in breakout rooms

Considerations

  • Careful construction of questions for students
  • Students may need to be taught how to answer online questions
  • Requires a reliable internet connection; response summaries can be delayed

More information

  • Group work & collaborative learning
  • Collaboration tools
  • Active learning
  • Active learning in online teaching

Ecampus Course Development and Training

Providing inspiration for your online class.


Collaborative Online Annotation Tools for Engaging Students in Readings

Do you ever get the sense that students posting in their online discussions haven't really engaged with the reading materials for that week? One way to encourage active engagement with course readings is to have students annotate directly in the article or textbook chapter that they are assigned. While it is common to see students annotating in their paper copies of their textbooks or readings, these aren't easily shared with their peers or instructor. Of course, students could snap a photo of their handwritten annotations and upload that as a reading assignment task, though that does require additional steps on the part of both the student and instructor, and there is no interaction with others in the course during that process.

However, it is possible to have students annotate their readings completely online, directly in any article on the web or in their ebook textbook. With this process, the annotations can also be seen by others in the course, if desired, so that students can discuss the reading all together or in small groups as they are reading an article or book chapter online. The benefits of this type of online annotation include components of active learning, increased student interaction, and accountability for students in engaging with the course materials.

Active Learning

The shift to active learning is a bit like going from watching a soccer game on TV to playing a soccer game. Likewise, reading passively and reading to learn are two different activities. One way to get students actively reading to learn is to ask them to make connections from the course materials to their own lives or society, for example, which they then make into annotations in their readings. Annotation tasks require students to take actions and articulate these connections, all without the pressure of a formal assessment. Furthermore, many students arrive at college not knowing how to annotate, so teaching basic annotation practices helps students become more active and effective learners (Wesley, 2012). 

Interaction

“Individuals are likely to learn more when they learn with others than when they learn alone” (Weimer, 2012). Discussion board activities are often where interaction with others in an online course takes place. However, rather than having students refer to a particular reading passage in their discussion board activity, they can simply highlight a passage and type their comments about it right there in the article, no discussion board assignment needed. Others in the course can also read participants’ annotations and reply. With some creative assignment design in Canvas, this can also be set up for small groups. Students may find this type of annotation discussion more authentic and efficient than using a discussion board tool to discuss a reading.

[Image: a news article embedded in the assignment, showing annotations made by specific students with a box to reply]

Accountability

A popular way to ensure that students have done the reading is to give them a quiz. However, this is a solitary activity and is higher-stakes than asking students to make targeted annotations throughout a reading. It may make more sense to guide them through a reading with specific annotation tasks. Being explicit about what pieces of the reading students should focus on can help them understand what they need to retain from the reading assignment.

Possible Activities

  • Student-student interaction: Replace a discussion board activity with a collaborative annotation activity where students can annotate the article as they read. Then they can go back later in the week and reply to each other. 
  • Activate prior knowledge: Ask students to include one annotation related to what they already know about this topic.
  • Evaluate sources: Find a pop-science article in your discipline that includes weak support for arguments or claims, for example. Ask students to identify the sources of support in the arguments and challenge the validity of the support. Perhaps they could even be tasked with adding links to reliable sources of support for your discipline in their annotation comments. 

Nuts and Bolts

Two popular annotation tools are Hypothesis and Perusall. I would encourage you to test these out or ask your instructional designer about your needs and whether an annotation tool would be a good fit for your course learning outcomes.

Wesley, C. (2012). Mark It Up. Retrieved from The Chronicle of Higher Education: https://www.chronicle.com/article/Mark-It-Up/135166

Weimer, M. (2012, March 27). Five Key Principles of Active Learning. Retrieved from Faculty Focus: https://www.facultyfocus.com/articles/teaching-and-learning/five-key-principles-of-active-learning/


Instructional Continuity


Tip Sheet: Collaborative Annotation in Canvas using Hypothes.is

Hypothes.is is a collaborative online annotation tool that is now available in Canvas. The tool allows students to collaboratively annotate websites and PDF documents. With the Canvas integration, students do not need to create accounts and their annotations can automatically be seen through SpeedGrader if you set it up as an Assignment. You can also create an ungraded activity through Modules in Canvas.

Note: If you are scanning documents, please make sure that they are accessible using OCR technology. Additionally, in order to be used in Hypothes.is, the PDF needs to contain selectable text rather than being a pure image.


This tool is particularly useful for active reading assignments, collaborative research, textual analysis, and other engagement activities. The Hypothes.is for Education page has many great examples for inspiration and ideas. For more assistance in integrating Hypothes.is into your course, please contact [email protected].

Setting Up a Hypothes.is Assignment

1. Create a new assignment in Canvas.

2. Under "Submission Type", select "External Tool".


3. Type "Hypothesis" in the "Enter or find an External Tool URL" box and click "Find".


4. Click on Hypothesis from the options presented.


5. Select the type of document the students will be annotating. Choose from the following options:

  • a URL for a web page or PDF, 
  • a PDF file you have already uploaded to Canvas, 
  • or a PDF in your GU Google Drive.


6. Set up the Assignment to reflect your intended due date, how many points it is worth, etc.

If this is not a graded assignment, enter 0 for points. Note: If you choose "Not Graded" for "Display Grade as", you will not be able to add Hypothes.is to the assignment. Find more information on setting up Assignment parameters in Canvas.

7. When you click “Save and Publish,” you will need to authorize Hypothes.is to use Canvas to create accounts.


8. After your students have made their annotations, you can access individual students' annotations through the Canvas SpeedGrader.
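For instructors managing many sections who prefer to script this setup, the same external-tool assignment can also be created through the Canvas REST API. The sketch below is a minimal, unofficial example: the Canvas base URL, API token, course ID, and the Hypothesis tool URL are all placeholders, and the launch URL your instance uses is the one the "Find" dialog in step 3 resolves to.

```python
# Minimal sketch: creating an external-tool assignment via the Canvas REST API.
# All values below are placeholders -- substitute your institution's Canvas URL,
# a valid API token, your course ID, and the Hypothesis LTI launch URL that is
# configured for your Canvas instance.
import requests

CANVAS_BASE = "https://canvas.example.edu"   # placeholder
API_TOKEN = "YOUR_CANVAS_API_TOKEN"          # placeholder
COURSE_ID = 12345                            # placeholder

payload = {
    "assignment[name]": "Week 3 collaborative annotation",
    "assignment[points_possible]": 10,
    "assignment[submission_types][]": "external_tool",
    # Placeholder launch URL -- use the external tool URL listed in your course.
    "assignment[external_tool_tag_attributes][url]": "https://lms.hypothes.is/lti_launches",
}

resp = requests.post(
    f"{CANVAS_BASE}/api/v1/courses/{COURSE_ID}/assignments",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    data=payload,
)
resp.raise_for_status()
print("Created assignment:", resp.json()["id"])
```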

Setting Up Hypothes.is in Modules

1. Create a module in Canvas.

2. Under "Add Item" to the module, select "External Tool".


3. Select Hypothesis from the list of external tools.


4. Follow steps 5-7 listed above.

5. When you have completed steps 5–7, give the page a name and click "Add Item".


Using Hypothes.is

Tip: Provide clear guidance in the Assignment description about the kinds of annotations you expect students to make. As an instructor, you can also use the Hypothes.is tool within the Assignment to model the kinds of annotations you would like to see, and respond to students' comments directly.

1. When you click on the published assignment, you will see a screen with directions.


2. To make an annotation, highlight any text in the document or on the web page with your cursor and select the “Annotate” button.


3. A text box will appear where the annotation can be typed. Annotations can include links to external websites, images, and styled text.


4. The name of the person doing the annotation will automatically appear.

5. Annotations can also be organized via "tags"; these may identify various parts of a text or other notational differences.

6. Once the annotation is complete, click the "Post to [class name]" button. You can also post the annotation privately by clicking the dropdown arrow next to the "Post to" button and selecting "Only Me".

7. Anyone in the course will be able to create an annotation, see others' annotations, and respond to others' annotations using the "Reply" button.


8. Users can also edit or delete their annotations.

9. Look to see if there is a red "Show new/updated annotations" icon in the upper right corner. If so, click on it to load recently added or edited annotations.


For more assistance in integrating Hypothes.is into your course, please contact [email protected]. For other issues, please visit the Hypothes.is help page.
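Separately from the Canvas integration, Hypothes.is also exposes a public REST API (documented at https://h.readthedocs.io/en/latest/api/) that can be used to pull a course group's annotations out for offline review or archiving. Below is a minimal sketch, assuming a developer token with access to a (placeholder) private course group:

```python
# Minimal sketch: exporting annotations with the Hypothes.is search API.
# The token and group ID are placeholders; private course groups are only
# visible to tokens whose account belongs to the group.
import requests

API_TOKEN = "YOUR_HYPOTHESIS_API_TOKEN"   # placeholder
GROUP_ID = "abc123xy"                     # placeholder course group ID

resp = requests.get(
    "https://api.hypothes.is/api/search",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    params={"group": GROUP_ID, "limit": 100, "sort": "created", "order": "asc"},
)
resp.raise_for_status()

for row in resp.json()["rows"]:
    # Each row records the annotator, the annotated document, and the note text.
    print(row["user"], "on", row["uri"])
    print("   ", (row.get("text") or "")[:80])
```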


Integrating Collaborative Annotation into Higher Education Courses for Social Engagement

  • Conference paper
  • First Online: 01 February 2024

  • Mark P. McCormack (ORCID: orcid.org/0009-0000-0281-3011)
  • John G. Keating

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 899)

Included in the following conference series:

  • International Conference on Interactive Collaborative Learning


Collaborative Annotation (CA) is a literacy strategy that engages students in critical reading, critical thinking, writing and collaboration all in one activity [1]. This collaboration amongst students promotes social engagement with course materials and has been shown to be beneficial to higher education by improving learning comprehension [2] and soft skills amongst students [3]. For our study, we will investigate the benefits that CA provides higher education courses by means of social engagement with boundary objects in assessment. We have designed several pedagogical pipelines which illustrate how to integrate Collaborative Annotation into several types of assignments. Our research is concerned with the impact CA has on students' quality of learning. This study aims to design pipelines to integrate Collaborative Annotation into several assessment contexts for social engagement.


References

1. Schwane, E.: Collaborative Annotation: For Any Text and Any Class. http://www.wcteonline.org/wp-content/uploads/2015/10/Collaborative-Annotation.pdf

2. Razon, S., Turner, J., Johnson, T.E., Arsal, G., Tenenbaum, G.: Effects of a collaborative annotation method on students' learning and learning-related motivation and affect. Comput. Hum. Behav. 28(2), 350–359 (2012)

3. England, T.K., Nagel, G.L., Salter, S.P.: Using collaborative learning to develop students' soft skills. J. Educ. Bus. 95(2), 106–114 (2020)

4. Dahal, N.: Understanding and uses of collaborative tools for online courses in higher education. Adv. Mobile Learn. Educ. Res. 2(2), 435–442 (2022)

5. Penny, L., Murphy, E.: Rubrics for designing and evaluating online asynchronous discussions. Br. J. Edu. Technol. 40(5), 804–820 (2009)

6. Wiggins, B.L., Eddy, S.L., Wener-Fligner, L., Freisem, K., Grunspan, D.Z., Theobald, E.J., Timbrook, J., Crowe, A.J.: ASPECT: a survey to assess student perspective of engagement in an active-learning classroom. CBE Life Sci. Educ. 16(2), ar32 (2017)


Author information

Mark P. McCormack & John G. Keating, Maynooth University, Co. Kildare, Ireland

Corresponding author: Mark P. McCormack


Cite this paper: McCormack, M.P., Keating, J.G. (2024). Integrating Collaborative Annotation into Higher Education Courses for Social Engagement. In: Auer, M.E., Cukierman, U.R., Vendrell Vidal, E., Tovar Caro, E. (eds) Towards a Hybrid, Flexible and Socially Engaged Higher Education. ICL 2023. Lecture Notes in Networks and Systems, vol 899. Springer, Cham. https://doi.org/10.1007/978-3-031-51979-6_9


Hypothesis for Collaborative Web Annotation: Home


Using Hypothesis for Collaborative Annotation in Canvas

Hypothesis is collaborative web annotation software that was integrated with Canvas in Fall 2018, making it easy to bring social annotation into Evergreen programs. In the past two years over a dozen Evergreen programs have used Hypothesis, and use has doubled as we moved, rather abruptly, into remote online learning in the Covid era. It is very simple to use, and upon request Paul McMillin ([email protected]) can provide a 15-minute demonstration of how Hypothesis works and how you might use it.

Getting Started

Hypothesis is now automatically included as an option in every program and course Canvas site.

Three options for getting started with Hypothesis:

I. Contact Paul McMillin ([email protected]) to set up a 15–30 minute demo/introduction for your faculty team. Includes examples of previous Evergreen annotation assignments as well as the basic 'how-to'.

II. And/or, go to the "For Faculty" or "For Everyone" pages on this guide.

III. And/or, use the following official Hypothes.is tutorials:

  • How to add Hypothesis as a module item
  • How to add Hypothesis as an assignment
  • How to grade Student Annotations in Canvas  (grading is optional!  But this is still useful for evaluating)
  • Student Guide on how to use Hypothesis

And for insight into how others use annotation:

  • Hypothes.is:  Back to School with Annotation: 10 Ways to Annotate with Students
  • New York Times Learning Network Blog:  Skills and Strategies | Annotating to Engage, Analyze, Connect and Create

And, if you have over an hour to spare and want to geek out on the larger context for Hypothesis along with some demos:  

  • Hypothesis in Canvas vendor demonstration:  Hypothesis in Canvas: Collaborative Annotation as Discussion Forum 2.0

So What Could Go Wrong?

Not much is likely to go wrong.  Hypothesis is easy to learn and simple to use.  

Here is one thing, though. Some texts are not automatically in a good form for annotation with Hypothesis. At a minimum, you need to be working with HTML or PDFs. Paul McMillin can help you figure out which texts will work off the shelf and which might require some additional work. Some scanning support is available when it is needed. Try to plan in advance to make sure that you will have time to prepare your texts for annotation, and always do a test annotation to confirm that all is well before publishing an annotation assignment for your students.

Why Use Collaborative Web Annotation?

  • Improves seminar discussions
  • Develops close reading skills – student comments are anchored to specific words and phrases in the text, encouraging greater specificity
  • Bridges asynchronous work with synchronous work, improving continuity as we transition from one to the other
  • Convenient evaluation using SpeedGrader in Canvas
  • Students can access their private and all shared comments any time throughout the quarter, always side by side with the text being commented upon
  • Faculty can add annotations to pose questions, provide background information, suggest interpretations, etc.

Here is one good argument for collaborative web annotation, from the creators of Hypothesis:

Let’s be honest, discussion forums are a great idea—we all want students to engage more with their assigned readings and with their classmates. But “discussion” forums fail at precisely what they claim to do: cultivate quality conversation. Collaborative annotation assignments are a better way to encourage students to engage more deeply with course content and with each other. For one, conversations that take place in the margins of readings are more organic, initiated by students themselves about what confuses or intrigues them most. In addition, these annotation discussions are directly connected to texts under study, helping to keep conversation grounded in textual evidence. Using Hypothesis, instructors can make PDFs and web pages hosted in Canvas annotatable. Students can then annotate course readings collaboratively, sharing comments, and replying to each other’s comments. Instructors can also create annotation assignments using Hypothesis so that students submit their annotation “sets” for feedback and grading in Canvas.

I would add to this that students are not only making comments "directly connected to texts"; annotations are directly tied to specific words, phrases, sentences, or paragraphs, encouraging close reading. And when done in a shared environment online, close reading becomes both a private activity (between reader and text) and a collaborative activity (as students read and reply to the annotations of others). As such, collaborative annotation is well suited to our era of remote learning -- it provides structured activity off-Zoom, while offering a persistent bridge between private reading and engagement with classmates and faculty.

  • Last Updated: Oct 10, 2023 11:20 AM
  • URL: https://libguides.evergreen.edu/webannotation

Original research article

Collaborative Online Annotation: Pedagogy, Assessment and Platform Comparisons


  • Harvard Medical School, Boston, MA, United States

Annotating a text while reading is commonplace and essentially as old as printed text itself. Collaborative online annotation platforms are enabling this process in new ways, turning reading from a solitary into a collective activity. The platforms provide a critical discussion forum for students and instructors that is directly content-linked, and can increase uptake of assigned reading. However, the student viewpoint regarding collaborative online annotation platforms remains largely unexplored, as do comparisons between annotation and traditional reading assessment methods, and comparisons between the two leading platforms (Hypothes.is vs. Perusall) for annotation by the same student population. The results in this study indicate that collaborative online annotation is largely preferred by students over a traditional reading assessment approach, that students regularly exceed annotation requirements indicated by an instructor, and that overall annotation quality increased as the students gained experience with the platforms. The data analysis in this study can serve as a practical exemplar for measurement of student annotation output, where baselines have yet to be established. These findings link the established research areas of peer learning, formative assessment, and asynchronous learning, with an emerging educational technology.

Introduction

Hovering over a text with a pencil, adding a sticky note to a page, or making digital document highlights and comments, are natural and familiar practices. Some readers feel that they are not giving a text their full attention without adding annotations ( O’Connell, 2012 ). Centuries-old text annotations feature drawings, critical explanation, corrections, and comments to other readers, at times exceeding the amount of primary text itself ( Wolfe and Neuwirth, 2001 ; Wolfe, 2002 ). An annotator might also leave memory prompts, questions, predictions, and connections to other work. For the consumer, the annotations of another student or scholar can be mined for insights that might go unappreciated if reading an unannotated text. Some students prefer second hand textbooks that have already been annotated by previous readers, for precisely this reason ( Van Dam, 1988 ; November, 2020 ). Wolfe and Neuwirth (2001) propose four main functions of annotation: to facilitate reading and later writing tasks by making self-directed annotations, to eavesdrop on the insights of other readers, to provide feedback to writers or promote communication with collaborators, and to call attention to topics and important passages. In Kalir and Garcia’s (2019) comprehensive work, annotation is defined broadly as a note added to a text, which can provide information, share commentary, express power, spark conversation, and aid learning. Accordingly, the online draft copy of their book includes annotations by other scholars that provide extended critical thoughts for any reader willing to consume them. Suggestions for annotations to be positioned as a third independent component of a text ( Bold and Wagstaff, 2017 ), prompt us to consider not only the medium and the message ( McLuhan, 1964 ), but also the marginalia ( Jackson, 2001 ) in all of our reading.

A lack of student attention to assigned reading can be problematic for teachers. In a study of multiple physics courses, only 37% of students regularly read the textbook, and less than 13% read often and before the relevant lecture was occurring ( Podolefsky and Finkelstein, 2006 ). This is in accord with a study of psychology courses, where a similarly low 28% of students did the assigned reading before it was covered in class ( Clump et al., 2004 ). The importance that professors attach to reading appears to be much higher than the importance attached by students. In a Business School study, only 4% of professors thought that a student could score an A or B grade without doing the assigned reading for a course, while 34% of the students thought they could do so ( Braguglia, 2006 ). Furthermore, only 20% of students identified “not having done the reading” as a reason to not participate in discussions ( Howard and Henney, 1998 ), so tutorial-based discussion may not be as strong of a motivator for reading as an instructor would like.

In addition to reading uptake problems, a student’s first experience reading primary literature (i.e., journal research articles, historical documents) can be challenging. They need to adjust to a format that is often less accommodating to the reader, and in the sciences, may have difficulty grasping technical details in experimental protocols and numerous data figures. It may also be greatly rewarding as students gain an appreciation for the structure of an inquiry and how it led to a particular finding, which is often absent when consuming information from a textbook ( Wenk and Tronsky, 2011 ). For article interpretation in the sciences, there is often a focus on the figures, on questions that elicit student confusion, and on questions that would be good follow-up experiments given the data in the article at hand. Approaches for humanities and social science primary source documents may have a distinct, but similarly critical focus. Reading guidance can be provided to students as fillable templates containing thought-prompts from an instructor, an approach that has been repeatedly covered as a beneficial learning scaffold [see Create framework ( Hoskins et al., 2011 ), Figure facts template ( Round and Campbell, 2013 ), and the templates of others ( Wenk and Tronsky, 2011 ; Yeong, 2015 )]. The templates also relate to the process of Just in Time Teaching ( Marrs and Novak, 2004 ), where student misconceptions about a particular reading can be obtained via a template or pre-class questions, so they can be adequately aired and addressed during a subsequent in-person session. Annotation can also be seen as an aid in primary literature comprehension with the Science in the Classroom approach ( Kararo and McCartney, 2019 ), which uses papers pre-annotated by scientific professionals that define key technical terms, highlight previous work, and include news and policy links, in “lenses” that can be toggled on and off.

Distinct from pre-annotated papers is a social or collaborative online annotation approach where the students and instructors input commentary themselves to illuminate and debate various aspects of a text. Two of the leading collaborative online annotation platforms in current use are Hypothes.is and Perusall. Hypothes.is aims "to enable a conversation over the world's knowledge," while Perusall positions itself as "the only truly social e-reader," with "every student prepared for every class." Since educational technology platforms can change their interfaces and features regularly, the platform websites contain the most up-to-date and complete information for functionality, usage, and implementation tips. Additional guidance for Perusall may be found in King (2016). Both platforms enable collaborative online annotation of a text. Any part of a text that is highlighted is then linked to a marginal comment box, which can include not only commentary, but also tags, diagrams, and hyperlinks. Both platforms also support LaTeX for annotating with mathematical notation. Collaborative annotation is possible with any type of material that can be found as a webpage for Hypothes.is, or uploaded as a PDF for Perusall. Textbook material can be annotated if available as an online eBook for Hypothes.is, or from Perusall's catalogue of available textbooks. The two platforms differ in how they are accessed by students, user interface, social functionality, potential audience participation size, and annotation machine learning measurement capabilities.
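As an illustration of the LaTeX support mentioned above, a student annotating a kinetics figure might type something like the following into a comment box (this is a hypothetical annotation body; exact delimiter syntax varies between the platforms):

```latex
% Hypothetical annotation body, rendered by the platform's LaTeX support.
The slope in Figure 2 gives $k_{cat}/K_M \approx 10^{7}\,\mathrm{M^{-1}\,s^{-1}}$,
i.e. the enzyme operates near the diffusion limit:
$$ v = \frac{k_{cat}[E]_0[S]}{K_M + [S]} $$
```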

In contrast to discussion forums that might appear on a learning management system, annotation platform discussions are grounded at a specific place within the document (highlighted word, sentence, paragraph, or figure region) rather than from a description of a figure or reference to a paragraph that is needed to establish context in a discussion forum post. Grounding in a primary document reduces the number of explicit references needed in order for comments to be understood ( Honeycutt, 2001 ). If the source text is absent and not connected to a discussion, participants have to reconstruct the context, which has been referred to as communication overhead ( Weng and Gennari, 2004 ).

Instructional goals in collaborative annotation may change according to the source material. One might expect collaborative annotation of textbook material to have a stronger focus on understanding the fundamental knowledge that the book provides. Annotating research articles may allow for additional goals that build on fundamental knowledge and consider the structure of an inquiry, along with its implications and possible pitfalls.

Prior work on collaborative online annotation ( Miller et al., 2018 ; Lee et al., 2019 ) positions the research alongside theoretical frameworks of: Peer Instruction ( Fagen et al., 2002 ), where students are collaboratively solving problems often focusing on common areas of misconception; Student-Centered Open Learning Environments ( Hannafin et al., 2014 ) where students negotiate complex, open-ended problems in a largely independent manner with web resources and technology tools to complement sense-making; and Social Constructivism ( Vygotsky, 1978 ; Adams, 2006 ), where cognitive functions originate in social interactions and learners integrate into a knowledge community. These three fields hold students’ prior experiences and co-construction of knowledge in high regard.

Theory developed in the formative assessment field also connects naturally to collaborative online annotation. For students to close in on a desired landmark skill for a course, they need to know what they are shooting for and what good practice of that skill looks like ( Sadler, 1989 ). In collaborative online annotation, student thoughts about a text are out in the open. Another student’s reasoning and sense-making on a difficult article can be compared, and the instructor’s reasoning is also there to serve as a model for what criticism in an academic discipline looks like. For this reason, collaborative annotation has been suggested as a signature pedagogy for literary criticism courses, as it embodies the routines and value commitments in that field ( Clapp et al., 2020 ); the sciences and social sciences can surely follow suit. The timing of feedback, another possible weakness in the assessment process, comes in a steady flow in collaborative annotation, as a text is read and analyzed by the instructor and students within a defined time window (less than 1 week in the current study), and the student can expect threads to build and their annotations to be commented upon within hours to days, and occasionally even in real-time if multiple students are active on the platform simultaneously. Peer to peer exchanges may also decrease some of the workload on instructors for feedback provision. Participation norms that used to focus on hands up in a lecture hall are now shifting to other forms of participation in a modern, technology-enhanced classroom ( Jackson et al., 2018 ), if they have not already done so. Asynchronous teaching tools have become increasingly important with transitions to blended and fully online learning environments coming abruptly during a viral pandemic, and can be a welcome remedy for time zone and other technical issues that affect synchronous teaching.

Annotation as an aid for learning has prior support in various settings, both with and without any technological scaffolding. In a pen and paper setting where students were trained in effective textbook annotation routines by their instructors, student annotation was found to be better than a control non-annotation condition on later test performance and self-reported studying efficiency ( Simpson and Nist, 1990 ). In a setting where instructors added marginal notes to course readings, the notes were overwhelmingly affirmed by students as a helpful study aid, and missed when they were absent from other non-annotated course readings ( Duchastel and Chen, 1980 ). In a collaborative synchronous annotation setting using Google docs in English literature classes, annotation was viewed as a technique that allowed instructors to effectively highlight what good performance in literary analysis looks like, and students also felt greatly aided by reading the annotations of others in understanding a given text ( Clapp et al., 2020 ). Collaborative annotation with Hypothes.is facilitated “close reading” with difficult texts ( Kennedy, 2016 ). Perusall provided a stimulus for reading uptake, where 90–95% of students completed all but a few of the reading assignments before class if using Perusall, and also performed better on an exam than those students who took the same class without using Perusall ( Miller et al., 2018 ). Novak et al. (2012) provide an excellent review of research on social annotation platforms; however, many of the platforms they analyzed have relatively small user bases, or are now defunct. Ghadirian et al. (2018) review social annotation tools and suggest that prior research has failed to capture students’ experiences while participating in social annotation activities, and that understanding of how to implement social annotation from disciplines outside of education and computer science is lacking. Wolfe and Neuwirth (2001) have pointed to the absence of studies focusing on participants to solicit their impressions of the technological environments during collaborative annotation. These gaps, coupled with the emergence of Hypothes.is and Perusall as key platforms, should drive new qualitative and quantitative investigation of collaborative online annotation.

There have been no published comparisons of student output and usage preferences between the two leading online annotation platforms, nor direct comparisons of annotation platforms to more traditional classroom assessment techniques such as reading templates, for the same type of content with the same population of students. Furthermore, the student viewpoint regarding collaborative online annotation remains relatively unexplored in prior publications, and pedagogical best practices are still emerging. Instructors and students familiar with more than one annotation platform are well-positioned to provide feedback on the annotation process as a whole. Establishing quantitative baselines for student output on an annotation platform will hold value for instructors to gauge activity in their own classes. To address the above gaps, and situate online annotation platforms for better use in the classroom, this study posed the following research questions:

1. Qualitatively, from the student viewpoint:

a. How do collaborative online annotation platforms compare to a more traditional templated assessment method for the same type of reading content?

b. How do the two leading collaborative online annotation platforms (Hypothes.is and Perusall) compare to each other?

2. Quantitatively, how do Hypothes.is and Perusall compare on student output for the following measures:

a. Number of annotations made per student per paper?

b. Character volume of a student’s annotations per paper?

c. Annotation content quality?

d. Percentage of isolated vs. collaborative threaded annotations?

Also captured are the changes over time in the quantitative measures as students proceed through successive paper analyses on a platform, and then move onto the next platform. The answers to the previous questions are further considered in order to shape more effective pedagogy for collaborative annotation. The educational technology, peer learning, and assessment fields stand to gain valuable insight from readers’ responses to dynamically annotated text.

Research Methods

Participant Details and Overall Workflow

The study took place with first year Master’s students, in a university in the northeastern United States, in a course focused on the analysis of scientific research papers. Two student cohorts participated: a 2019–2020 cohort of 18 students, and a 2020–2021 cohort of 21 students. Synchronous class sessions were held in-person for the 2019–2020 cohort during the months that the template completion and annotation activities were proceeding, and were held virtually for the 2020–2021 cohort. Annotation and template completion were done by all students on their own time, asynchronously, outside of any synchronous class sessions, with both cohorts. Figure 1 shows the paper analysis routine of students occurring under three different conditions: a traditional assessment template, the Hypothes.is annotation platform, and the Perusall annotation platform. There was 1 week of time allotted for each paper analysis. Both the 2019–2020 and 2020–2021 cohorts used the traditional template first, as it provided an established scaffold for beginners in paper analysis. After completing four assigned papers with the traditional template analysis, the students then used collaborative online annotation for another eight papers. With the 2019–2020 cohort, the Hypothes.is platform (four papers) was used first, followed by Perusall (four papers). The platform order was reversed with the 2020–2021 cohort – Perusall first, and Hypothes.is second. As such, the 2019–2020 cohort analyzed articles A, B, C, D via the traditional template, articles E, F, G, H with Hypothes.is, and articles I, J, K, L with Perusall; the 2020–2021 cohort analyzed articles M, N, O, P with the traditional template, articles Q, R, S, T with Perusall, and articles U, V, W, X with Hypothes.is. The bibliography for all articles A to X is available in Supplementary Table 1 . All papers were recently published (2017–2020) biomedical science research journal articles, deemed to be roughly equivalent in scope and difficulty by the instructor (GWP). The research proposal was reviewed by the Harvard Human Research Protection Program and received the lowest risk categorization. Aid in the informed consent process for students was provided by a program administrator. All student data in this study has been anonymized.


Figure 1. Study design and student reading assessment overview. Each student cohort analyzed four papers through a traditional reading assessment template, and four papers through each of the two collaborative online annotation platforms. This enabled two comparisons: the traditional template vs. annotation (as reading assessment methods), and Hypothes.is vs. Perusall (as annotation platforms). Full template prompts are available in section “Research Methods,” and the template is also available for download from Supplementary Material . Figure created with Biorender.com .

Research on technological and pedagogical innovations in student-centered open learning environments is thought to be best positioned within authentic classroom settings ( Hannafin et al., 2014 ). This study follows an action research approach in education, as it is instructor-initiated, focused on practical elements of classroom function, and their future improvement ( McMillan, 2015 ; Beck, 2017 ). With template vs. annotation, and Hypothes.is vs. Perusall comparisons, this study also invokes A/B testing. This has grown in popularity as a research method not only in massively open online classes ( Chen et al., 2016 ; Renz et al., 2016 ), but also when testing two instructional approaches with the same or similar student populations, such as comparing two versions of open eTextbooks for readability/user perceptions of quality ( Kimmons, 2021 ), comparing two learning management systems for accessing the same course content ( Porter, 2013 ), or comparing different question formats for impacts on learning ( Van Campenhout et al., 2021 ). Open source A/B testing platforms for education have recently been embraced by major philanthropic foundations ( Carnegie Learning, 2020 ), as a way to aid decision-making surrounding educational practices.
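To make the A/B framing concrete, the sketch below shows one generic way to compare per-student output between two platform conditions. This is not the study's analysis code, and the numbers are invented purely for illustration; a nonparametric test is used because annotation counts are small, bounded samples that may not be normally distributed.

```python
# Generic A/B comparison sketch for per-student annotation counts.
# Data values are invented for illustration only.
from statistics import mean
from scipy.stats import mannwhitneyu

platform_a = [5, 7, 6, 9, 5, 8, 6, 7]    # annotations per paper, condition A
platform_b = [6, 8, 7, 10, 6, 9, 8, 7]   # annotations per paper, condition B

stat, p = mannwhitneyu(platform_a, platform_b, alternative="two-sided")
print(f"mean A = {mean(platform_a):.2f}, mean B = {mean(platform_b):.2f}")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")
```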

Traditional Assessment Template

In the traditional reading assessment template, for each of the assigned papers, the students filled in the following information/answered the following questions:

• The dates that the paper was read.

• Can you think of an alternative/improved title for the paper?

• What were the first 5 terms that you had to Google? [give a 1–2 sentence description for each].

• What questions do you have related to understanding the paper?

• What questions do you have that could serve as future experiments?

• What other papers could have helped with the understanding of the current paper? ( a means to indicate reading breadth around a particular research topic; students give references to these papers and brief summarizing information on the template ).

• Analyze all figures regarding:

◦ What technique(s) is(are) being used?

◦ What is the main purpose of the figure/what did the researchers find?

This template was considered an established form of assessment in a course focused on improving the understanding of primary scientific literature, and it is similar to other reading templates referenced in the introduction in that it focuses on figure interpretation, airing student understanding/misunderstanding, and possible future lines of inquiry. It is also included in the Supplementary Material, for any instructor to use or adapt.

Annotation Platform Usage

Students were briefed on online annotation platform usage and made simple trial annotations when the platforms were first introduced, to ensure they could access the articles and then highlight and annotate a piece of text. None of the trial annotations were counted as student output. Examples of annotations from previous students were shared in the briefing so that current students could envision what a collective annotation process entailed. The examples included students posing and answering questions, commenting on the annotations of others to agree/disagree/add nuance, adding links to other articles to aid comprehension, defining key terms, adding diagrams, adding tags, pointing out shortcomings or missing controls in the article, and suggesting future lines of inquiry.

Students were given a guideline of making five annotations per paper, along with a rubric from Miller et al. (2016) that the instructor also followed when grading individual annotations. Each annotation was scored 0, 1, or 2 for quality (0 = no demonstration of thoughtful reading of the text; 1 = demonstrates reading of the text, but only superficial interpretation; 2 = demonstrates thorough and thoughtful reading and insightful interpretation of the text). There were no pre-existing annotations made by the instructor, so all annotations were initiated by the students. However, the instructor did occasionally participate in annotation threads to answer a question, clear up a misconception, etc., as would be normal outside of a research setting. When classifying threaded vs. isolated annotations, instructor comments in threads were excluded. For example, if a thread of six annotations had two instructor annotations and four student annotations, the length would be counted as four, and those four annotations would be considered part of a thread. An annotation with no other student additions is counted as isolated. An annotation with one instructor addition is still counted as isolated if there is no student follow-up after the instructor addition.
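This thread-counting rule is easy to operationalize. Below is a minimal sketch in Python (the study's own bookkeeping was done in spreadsheets; the Annotation structure and author labels here are hypothetical) of how a thread could be classified as isolated or threaded while excluding instructor contributions.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    author: str  # anonymized label, e.g., "student 4" or "instructor"
    text: str

def classify_thread(thread: list[Annotation]) -> tuple[str, int]:
    """Classify a thread per the rule described above.

    Instructor annotations are excluded from the count; the thread is
    'threaded' only if two or more *student* annotations remain,
    otherwise the initiating annotation is 'isolated'.
    """
    student_notes = [a for a in thread if a.author != "instructor"]
    n = len(student_notes)
    return ("threaded" if n >= 2 else "isolated", n)

# Example: a thread of six annotations, two by the instructor, counts
# as a thread of length four (the four student annotations).
thread = [
    Annotation("student 1", "Why was this control omitted?"),
    Annotation("instructor", "Good question; see the methods."),
    Annotation("student 7", "The 2017 companion paper includes it."),
    Annotation("instructor", "Correct."),
    Annotation("student 1", "Thanks, that clarifies it."),
    Annotation("student 12", "I had the same question."),
]
print(classify_thread(thread))  # ('threaded', 4)
```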

In prior studies using student annotation, some instructors gave a weighting of 6% per annotated article ( Lee et al., 2019 ), or 15% of an overall course grade in an undergraduate physics course ( Miller et al., 2016 ). In prior studies using templates, analysis of a paper via the Figure Facts template counted 10% for each paper analyzed ( Round and Campbell, 2013 ). Since the students were expending a considerable amount of effort in reading long and technically challenging papers, the assessment of each paper in this study, either via traditional template or by annotation, carried a 10% weighting in the final course grade.

To sum up the major attributes of the annotation process in this study according to an existing typology of social annotation exercises (Clapp et al., 2020): the annotations were asynchronous as opposed to synchronous (students annotated on their own time); unprompted as opposed to prompted (other than the number of annotations [five] and the shared grading rubric, no specific requests were placed on annotation content); authored as opposed to anonymous (students knew the identity of the classmates making the annotations, and could use that identity in @call outs); and finally, marked as opposed to unmarked (the annotations counted toward course credit). This typology can serve as a useful comparative tool for future collaborative annotation research.
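For researchers comparing studies along this typology, the four axes can be captured in a small data structure. The following is a minimal sketch in Python; the class and field names are our own illustration, and nothing beyond the four axis labels comes from Clapp et al. (2020).

```python
from dataclasses import dataclass
from enum import Enum

class Timing(Enum):
    ASYNCHRONOUS = "asynchronous"
    SYNCHRONOUS = "synchronous"

class Prompting(Enum):
    UNPROMPTED = "unprompted"
    PROMPTED = "prompted"

class Identity(Enum):
    AUTHORED = "authored"
    ANONYMOUS = "anonymous"

class Grading(Enum):
    MARKED = "marked"
    UNMARKED = "unmarked"

@dataclass(frozen=True)
class AnnotationExercise:
    """One social annotation exercise described along the four axes
    of the Clapp et al. (2020) typology, as summarized above."""
    timing: Timing
    prompting: Prompting
    identity: Identity
    grading: Grading

# The exercise in the current study:
this_study = AnnotationExercise(
    Timing.ASYNCHRONOUS, Prompting.UNPROMPTED,
    Identity.AUTHORED, Grading.MARKED,
)
print(this_study)
```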

Distinctions Between the Annotation Platforms

The students accessed the Hypothes.is platform as a web browser plug-in. URLs for all the articles to be annotated in Hypothes.is were given to students through a learning management system module. Although Hypothes.is annotations can potentially reach an internet-wide audience, the class grouping for Hypothes.is limited annotation visibility to the students of the class and the instructor. Public visitors to a particular article's URL could not access the student annotations because they were not part of the student group. The students accessed Perusall as a stand-alone online platform with a code given by the instructor. Again, the annotations were only visible to the students of the course and the instructor. Perusall annotations are generally limited to a course or a subgroup within a course.

When this study took place, one could annotate text or part of a figure with Perusall, but only text annotation was possible with Hypothes.is. Perusall also included social functions such as student avatars that indicated when someone else (student or instructor) was using the platform at the same time, the ability to "upvote" an annotation (express agreement or support), automatic labeling of annotations phrased as questions, emoji icons, @student call outs that alert someone that they have been mentioned in an annotation, and email notifications for annotation responses. Hypothes.is included tagging and email notification of annotation responses, but did not have the other social-type functionality. Perusall has an additional machine learning capability for grading annotation output in large-enrollment classes, as well as a "confusion report" to assess major areas of student confusion, but these were not used and thus not evaluated in the current study.

Survey Questions for Annotation Platform Comparison, and Annotation vs. Traditional Template Comparison

At the end of the academic year, students were given a voluntary, anonymous survey prompting comparison of the collaborative online annotation process to the traditional reading assessment template, and comparison of Hypothes.is to Perusall. For the 2019–2020 cohort, the survey completion rate was 15 out of 18 students. For the 2020–2021 cohort, the survey completion rate was 19 out of 21 students. The overall survey completion rate for all participants was 34/39, or 87%. Survey questions were as follows:

1. Compared to the MS Word templated approach, did you find the annotation platform a better or worse tool for your learning of the biology content and experimental procedures in each paper?

2. Which annotation platform, Hypothes.is or Perusall, did you prefer, and why?

3. What did you like about the Hypothes.is platform?

4. What did you dislike about the Hypothes.is platform?

5. What did you like about the Perusall platform?

6. What did you dislike about the Perusall platform?

7. Did you feel that the guideline of 5 annotations per week, with the supplied rubric, was enough guidance in the annotation process? Why or why not?

8. Identify a useful annotation that you came across. What was it about the annotation that made it useful?

9. Identify a useless annotation that you came across. What was it about the annotation that made it useless?

10. How could the annotation platforms and related teaching and learning processes be improved (i.e., features, workflow, teacher prompts, etc.)?

The survey data is available in full in Supplementary Table 2, as an unedited student-by-question matrix (Kuckartz, 2014). Categorization of responses for the first two survey items was straightforward, falling into only three categories (Question 1: annotation preferred, template preferred, or no clear preference; Question 2: Hypothes.is preferred, Perusall preferred, or no clear preference). More than three categories were needed to adequately summarize responses for items 3–10; owing to space constraints in this manuscript, those can be found in Supplementary Table 3. A few responses were uncategorizable, and occasionally a student left a question blank. Representative responses for each survey question are included in the body of the paper, with occasional light editing for clarity. The words "article" and "paper" are used interchangeably throughout.

Annotation Output Analysis and Figure Generation

Quantitative student annotation output measurements included:

i. The number of annotations made per student per paper (how many times does a student annotate?)

ii. The annotation character volume per student per paper (how much do students contribute in their body of annotations for a given paper?)

iii. The annotation character volume per student per annotation per paper (how much do students contribute in each individual annotation?)

iv. The individual annotation quality, as assessed by the course instructor according to the rubric of Miller et al. (2016)

v. Whether annotations were isolated (defined as one solitary annotation) or part of a collaborative thread (defined as two or more annotations)

Anonymized student annotations are available on a student-by-student basis and on a threaded basis for each cohort and platform as Excel files in the Harvard Dataverse. The anonymization replaced names with numbered student labels (student 1, student 2, student 19, etc.). Annotation of one paper was missed by two students (student 19, paper V; student 33, paper Q) in the 2020–2021 cohort due to excused medical absences, so means for those students are calculated with an accordingly adjusted denominator. Character counts were taken for annotations as they appeared, with the name substitutions. If a student typed a URL or DOI into their annotation, it is included in the character count. If a student included a hyperlink in their annotation, the URL was extracted and placed in a separate column in the Excel analysis file, but not counted toward the character length. This approach preserves the links to other resources made by students, while otherwise manipulating the annotation content as little as possible. Repeating the character count analyses with URL and DOI text excluded did not affect any conclusions regarding platform differences (or lack of differences) in annotation output character volumes. Emoji characters in annotations have also been preserved, but were used sparingly by students. Data analysis was performed using a combination of MS Excel and GraphPad Prism 9 software, and figures were generated using Biorender.com (Figure 1), MS Word (Table 1 and Supplementary Tables 1–3), or GraphPad Prism (Figures 2–5). The number of observations for (i–iv) depended on student cohort size: 2019–2020 cohort (n = 18), 2020–2021 cohort (n = 21), or the two cohorts combined (n = 39). t-tests for comparing means are paired (Hypothes.is mean vs. Perusall mean), and p values are indicated on the graphs or bar charts. For threaded annotation observations, a threaded vs. isolated percentage was measured for the annotation output on each paper, so there are only four observations (four papers) for a paired t-test comparing platforms within a cohort. Readers should consider whether the difference in means, or the data trends in the charts, could be pedagogically significant in their classrooms, along with any consideration of the mean comparison p value. No comparisons of annotation counts, character counts, or annotation quality were undertaken between specific papers; rather, the analysis focused on differences between the two annotation platforms. The effect of an individual paper is thereby diluted, as means across the four papers for each student within each platform were used to obtain the annotation number, character volume, and annotation quality scores that fed into the comparison of the platforms.
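To make the character-count handling and the paired platform comparison concrete, here is a minimal sketch in Python (the study itself used MS Excel and GraphPad Prism; the data below are invented). Because a plain-text export cannot distinguish an embedded hyperlink from a typed-out URL, the sketch strips all URL-like strings, which corresponds to the robustness check described above in which URL and DOI text was excluded.

```python
import re
from scipy import stats

URL_PATTERN = re.compile(r"https?://\S+")

def split_links(annotation: str) -> tuple[str, list[str]]:
    """Pull URL-like strings into a separate field and return the
    remaining text (mirroring the separate-column handling above)."""
    links = URL_PATTERN.findall(annotation)
    return URL_PATTERN.sub("", annotation), links

def char_volume(annotations: list[str]) -> int:
    """Total character volume of one student's annotations for one paper."""
    return sum(len(split_links(a)[0]) for a in annotations)

# Paired t-test on invented per-student means: one mean per platform,
# each taken over the four papers annotated on that platform.
hypothesis_means = [2500.0, 3100.0, 1900.0, 2700.0, 2300.0]
perusall_means = [2900.0, 3300.0, 2400.0, 2600.0, 2800.0]
t, p = stats.ttest_rel(hypothesis_means, perusall_means)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```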


Table 1. Student survey responses regarding assessment approach and platform preferences.


Figure 2. Most students exceed the instructor-stipulated annotation requirement. (A) Each data point represents the number of annotations made by a student for each paper. The X-axis is organized to group students together according to platform output means being higher on Hypothes.is, or Perusall, or equal. Lines next to data points indicate standard error of the mean (SEM) of the measurement for each student. Dashed lines indicate the global means for all students, or the minimum stipulated number (5). (B) 2019–2020 cohort (18 students), and (C) 2020–2021 cohort (21 students) mean number of annotations per student (from the four papers on each platform), along with pairing (gray lines) to indicate an individual student’s output on each platform. To the right of each bar chart is a timeline tracking the mean number of annotations per student from the first to the eighth paper annotated, error bars: SEM.

Student Survey Responses

Key Comparisons: Template vs. Annotation, Hypothes.is vs. Perusall

Students strongly favored the collaborative online annotation process over the traditional paper analysis template (Table 1, Q1): 25/34 students (∼74%) felt that the online annotation process was a better content-learning tool than the traditional template. Only 6/34 students (∼18%) preferred the template, and 3/34 students (∼9%) gave a mixed response with no clear preference for either process.

Those in favor of the online annotation approach indicated that looking through the annotations brought new insights based on the thinking of others, and enabled interaction that was not possible with the traditional reading template.

The annotation platforms were a better tool than the template approach. Having to read through it and analyze it myself, and then re-synthesize it with other people’s comments forced me to go back to the paper more than once and dive in.

Annotation was much better than the templates. Promotes critical thinking and importantly, discussion. With the templates I would never even think about some of the things my classmates bring up.

The power of the annotation platform lies in its capacity to serve as a collective real-time interface in which one can comment, review, and interact with other students. This enables a deeper conversation with respect to questions, concerns, or the analysis of a particular piece of discussion, figure, experimental methodology, and is as a result superior to conventional note-taking which is static by nature.

I thought that the annotation platforms were a lot more helpful because I could see what other students were saying and it wasn’t just my ideas. I felt like I did a lot more thinking when I read the threads of other students.

Those in favor of the traditional template approach felt that it prompted a more complete and thorough analysis of the paper, because each figure had to be analyzed.

I personally preferred the templated approach, although it was more difficult and took up significantly more time. It caused me to examine each figure in a lot more detail. With the annotation platforms, it was much easier to “slack off.”

I think the template was better. It gave me a framework for how I’m supposed to learn from and critique a paper. I still follow the template even when I have to use the annotation platform.

Preference between the platforms was relatively evenly split (Table 1, Q2): 14/34 students (∼41%) preferred Perusall, 12/34 students (∼35%) preferred Hypothes.is, and 8/34 students (∼24%) indicated no clear preference for either platform.

Remaining Qualitative Survey Data

Specifics for Platform Preference: Commentary on Hypothes.is

Students commented favorably on Hypothes.is regarding the simplicity of its annotation window, the overall reading experience with the article being annotated, and fewer log-in prompts.

I liked Hypothes.is because of its inherent simplicity. You annotate and/or highlight a particular section, can see any replies immediately underneath the annotation, and can in turn click within the text or within the annotation to go back and forth between the text and comment of interest.

Hypothes.is, because it’s much easier to locate the annotations. In Perusall we have to click on each highlight to navigate to the specific annotation.

Hypothes.is was easy to see all the threads and I didn’t have to login every time I wanted to access it.

I liked in Hypothes.is how hierarchy can be established within a thread of comments when necessary. You can reply to any comment in a particular thread and the system will make it clear which comment you are responding to by adding an indentation before your reply. That makes annotations very neat and organized.

Students disliked Hypothes.is for the inability to annotate figures and the lack of an upvote button. Some viewed the plug-in nature of Hypothes.is as a positive, because it brought them directly to the paper's URL, while others viewed it as a negative because they were accessing Hypothes.is through Google Chrome, which was not their preferred browser. Since this data was collected, Hypothes.is appears to have added support for all browsers except Internet Explorer.

Hypothes.is has fewer functions in the comment section than Perusall.

I wish Hypothes.is had an upvote/question button for my peers’ responses.

I definitely preferred the website nature of Perusall over the plugin of Hypothes.is.

It is less interactive than Perusall, lacking features like marking an annotation as helpful, or marking it when I have the same question.

There was not a graph annotation function like Perusall, which made it more difficult to annotate figures. The plugin format of Hypothes.is was a bit hard to figure out at times.

I don’t see an option to upvote or mark my favorite annotations. I once clicked on the flag button at the bottom of someone’s annotation and I then realized that it reported the annotation to the moderator, as if there is something inappropriate. That’s not what I meant to do and I couldn’t undo the “reporting.” That was pretty awful.

I do not like how I have to download it and use it as a Google Chrome (not my favorite browser) extension. I also dislike how I cannot label the figures directly—can only highlight text.

Specifics for Platform Preference: Commentary on Perusall

Students favored Perusall for its ability to annotate images, its annotation up-voting, its more social feel and appearance, seeing the presence of other students, and the ease of access via one online platform hosting all of the course article PDFs.

I preferred Perusall over Hypothes.is. It seems like a more user-friendly platform, it allows inserting images (and emojis!) and has good filter functions (e.g., for unread replies, comments by instructor etc.).

Perusall seems to be a more well-polished annotation tool. You can temporarily hide other people’s annotation and focus on the paper only, which gives you a cleaner environment.

Preferred Perusall because it is easier for me to access the papers. With Hypothes.is, I often have to switch from Safari to Chrome or vice versa before it lets me view the paper. Also, with Perusall, I get to draw a highlighted box for annotating figures, with Hypothes.is, I can only highlight text.

I personally preferred Perusall because we can annotate on a figure by highlighting the image and also upvote the threads we think are particularly good.

I like the organization of the Perusall platform, specifically the page by page conversation tracking as well as the up-voting feature.

Perusall’s interface reminds me of social media a bit more than Hypothes.is, which is at least refreshing when going through what could be difficult material.

There is no confusion about what link to follow to annotate the paper because it’s already uploaded. It’s easy to see each person’s annotations because they’re color coded and you also get to see who’s online. You can react to people’s annotations.

Student dislikes of Perusall included problems with highlighting text, frequent sign-ins when accessing the platform, an occasional inability to read both the paper and the annotations simultaneously on their screens, and not being able to use the tool themselves for independent study groups.

I always had issues highlighting discrete pieces of the text, whereby my highlighter function would inadvertently highlight an entire page but not allow me to highlight a particular sentence or words.

It’s quite difficult to see the content of annotation without clicking on the highlight. Also can’t download the paper directly from the Perusall platform.

It was hard to have the comments section open and be zoomed in on the paper enough to read it. Additionally, there were a few times where comments would get put on top of each other by different students and the comment that was placed first was not seen.

One suggestion I would make is that I hope I can upload papers myself and set up study groups. I tried to discuss one of the papers with just a small group of students but unfortunately I could not do that.

The final comment suggests the utility of collaborative annotation outside of an instructor-guided setting (i.e., student study groups, group project collaborations).

Instructor Guideline for Numbers of Annotations Made

The majority of students (70%) found the guideline regarding the number of annotations to be sufficient, although they could perceive its arbitrary nature: five or even fewer annotations could be lengthy and insight-rich, while a higher number of annotations could be terse and provide less insight. An annotation number stipulation is straightforward and easy for both instructors and students to keep track of, which is likely one of the reasons it is commonly used as either a guideline or an output measurement (Miller et al., 2018; Lee et al., 2019; Singh, 2019).

5 annotations are more than enough. I think the nature of the platforms naturally enforce conversations and interactions, that in my mind without thinking about it, usually go beyond the suggested 5 annotations.

Yes, I think five annotations are a good amount. But maybe some clarifications on how long those annotations should be. Since some students have five really long annotations, and some have 10 short annotations. Otherwise, the guidance is clear.

I think so; the real power of these guidelines came from the variety of annotations that were given by each of the students. Between background information on methods, critiques of the authors’ analysis, and questions about the scientific background of the papers, I felt like the five annotations per week were sufficient for a robust conversation.

I think the guideline was reasonable. As a suggestion, perhaps the assignment could include introducing a certain number of comments as well as responding to other comments. The goal of using an annotation platform vs. the template is to encourage discussion with other people.

Which Annotations Were Most Useful?

Annotations explaining some aspect of the source text were those most frequently mentioned as useful in the survey (40% of responses). Not surprisingly, the students also found value in annotations encouraging dialogue and raising additional concerns and questions. Corrections from either the instructor or other students, and links to other applications or course content, rounded out the categories of useful annotations.

We have seen several times circumstances where multiple people will enter into a conversation and we end up with a whole thread of annotations. I think these can be extremely helpful and also just make reading the paper more interesting. Especially when people argue about a certain point, as getting to see people’s arguments often helps to better understand the author’s motivation behind doing something a certain way.

Corrections from other students and the instructor if I misunderstand something. Connections between the paper we are annotating with lecture materials or other research papers.

Annotations that synthesized something from the paper and then asked a question about it. What I appreciated was that it was sometimes a comprehension question (why would they use X method, not Y?) but sometimes it linked to outside ideas or thoughts (would this translate to Z?).

Sometimes I come across annotations that describe a method, and those are helpful because they make it easier to understand the results. However, this only applies to annotations where someone took the time to make a clear and concise annotation rather than copy-pasting a description from a webpage or linking a paper.

Which Annotations Were Useless?

The most frequent response to this question in the survey was that there was no such thing as a useless annotation (31%). Students placed less value on the re-stating of anything obvious, terse agreement annotations, or information that was easily found through an internet search. They favored annotations that were dialogic. Opinions differed on definition-type annotations: for some students, they made the reading process easier, while others viewed definitions as a dialogic dead-end and something they could easily obtain on their own.

Some annotations were superfluous in nature and defined terms and/or processes that were canonical and did not need a one paragraph explanation.

Definitions—especially in the beginning were very frustrating. There is no response to them, they don’t make you think any more or differently about the paper.

I dislike annotations that only link to another paper, like ‘Excellent review article (link). What is it about the review article that makes it excellent? What did the student learn from that review article? What about the review article complements this specific paper? Just even a single sentence would be a big improvement.

Annotations that describe straightforward results. Like if there is a graph that shows that some parameter increased with a treatment, then an annotation stating just that is useless. If the annotation links it to the other results and explains the conclusion, that’s useful. However, it shouldn’t be too long and convoluted.

I can’t remember any useless annotation I have come across. I don’t think there is or can be any useless annotation—I think what may seem obvious to one may actually be something that is completely missed by another.

Further Pedagogy-Related Feedback From Students

In the survey responses, students thought of ways to operationalize dialogic feedback and achieve "revisits" to the platform after an annotation requirement had been fulfilled. Some students were daunted by the vast number of annotations on a given paper when a group of approximately 20 students and one instructor were annotating. Reading the full body of annotations is a fairly large time commitment for students, who also spend a great deal of time reading the content of the paper itself.

I feel like having your recaps in class helps, because I rarely read all of the annotations, or feel overwhelmed doing so.

I think at times there were just too many comments on a paper. It became a race for people to read and annotate the papers early so there was enough left to comment on, without being repetitive. If I was late to the game, sometimes it was easier to just read the comments and find an outside/related subject to Google and link to instead of reading the paper and really thinking about it. I think lowering the number of comments we need to make would help with that.

What happens a lot with me and some of my friends is that by the time we’re done reading and making our annotations, someone else on the platform has already commented what we wanted to say. Then it becomes stressful to think of new things just to stand out. I feel like commenting “I had the same question” on another person’s comment makes it seem like I was lazy and didn’t read the paper when in fact I really did have the same question.

I think it would be interesting to assign different figures to different groups of students, it might allow for more in depth critique of certain sections. Additionally, it would be an opportunity for students to work in groups and get to know each other.

Again, I like the “back and forth” discussions in the annotation. It is like a debate in the annotation form. I think I’ve seen too much “I agree” (though I used it a lot, too). We might be better off to give contrary opinions and then defend each other’s view using lecture or outside source knowledge. I’m sure we’ve all come across some annotations to which we hold completely different opinions. For these annotations, after we’ve given our feedback, I’d expect the other people to defend their ideas too.

I think that perhaps there could be an incentive provided for people to actively go back to the platform (after they made their annotations) to discuss with people who annotate after them—perhaps like extra marks? Because once one makes their annotations, there isn’t really a need for one to go back and “interact.” So perhaps this would encourage more interaction? but I also feel that this may lead to “flooding” of annotations.

The annotation platforms have adopted technical solutions to encourage returns to the platform, via email notifications when one has been tagged in an annotation or when one's annotation has been further commented upon. Additional return incentive could be built in pedagogically by the instructor, perhaps by encouraging responses in dialogic threads, or by suggesting that while a certain number of responses can be "initiative" (start a new thread), other responses should continue from an existing annotation to build a constructive dialogic thread. Assessment routines could perhaps be shifted away from individual annotations and toward the overall quality of collaborative threaded contributions.

Students suggested prompting such that all of an individual's annotations are not directed at one section of a paper, but are instead divided among the introduction, methods, results, and discussion sections. Teacher prompts taking the form of "seed" annotations could also guide students by superimposing a templated approach onto the annotation approach, if certain seed annotations are regularly included (i.e., Are the researchers missing any controls? Do the conclusions feel supported by the existing data?). In another study, anonymous seed annotations generated from a previous year's more intriguing threaded discussions prompted better annotation quality and more elaborative-generative threads in a subsequent class (Miller et al., 2016).

Teacher prompts could be helpful, though I also worry that then students may focus on answering just those prompts and not branching out to really critically analyze the rest of the paper.

I think it worked really well overall, however, it would help to have more guidance/requirements on the types of annotations students should be leaving. Annotation platforms make it really easy to "skim" the paper, rather than really read into it.

Simply writing five annotations would be very generic. It may be better to restrain, for example, one annotation at least to comment on a new term that wasn’t familiar to you before, three annotations at least to comment on the results/experiments, and perhaps one annotation at least to comment on the biggies (significance? results sound or not? future directions? etc.).

Since the body of annotations can grow large in a group of 20 students, turning to upvoted responses might be a way for students to consume the annotations more selectively. The upvote button on the Perusall platform should help limit sparse "I agree" type annotations, as the upvote accomplishes the same function. However, there was some concern that threads or comments that were really insightful did not get upvotes, whereas some threads viewed as not particularly helpful received multiple upvotes. This is an area where instructor curation and recaps are needed to prevent quality annotation work from falling out of student consideration.

On Hypothes.is, I can’t see which comments get the most “upvotes” or “likes.” Sometimes I don’t have the time to read through every comment, but it’d be helpful to look at comments that were most helpful to a lot of students.

I read some really thoughtful Perusall annotations from other people that didn’t get upvoted. I also read some less thoughtful Perusall annotations from other people that got relatively heavily upvoted.

Quantitative Data: Annotation Output for Hypothes.is vs. Perusall Platforms

Instructors getting started with a collaborative annotation platform may look toward quantitative metrics suggestive of student engagement. Perhaps the platforms themselves will develop more sophisticated indicators, but some basic usage indicators that are easy for an instructor to grasp include: the number of annotations made by a student (how often does it meet or exceed the instructor-stipulated minimum?); annotation character volume per student per paper (has a student contributed sparse or lengthy content in their annotations for a paper?); annotation content quality; and the degree to which annotations are isolated (received no further response) or threaded (received at least one response). Looking at these metrics over time, as students progress from one paper to the next, and then from one platform to the next, may also help gauge overall student progress.
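As a sketch of how an instructor might compute such indicators from a flat export of annotation records, consider the following (Python with pandas; the export format and column names are hypothetical, as neither platform's export schema is described in this study).

```python
import pandas as pd

# Hypothetical export: one row per annotation.
records = pd.DataFrame({
    "student":  ["s1", "s1", "s2", "s2", "s1", "s2"],
    "paper":    ["A",  "A",  "A",  "B",  "B",  "B"],
    "chars":    [310,  120,  450,  280,  390,  200],
    "threaded": [True, False, True, True, False, True],
})

# Annotations per student per paper, then the per-paper mean
# (useful for tracking output as students progress paper to paper).
per_student = records.groupby(["paper", "student"]).size()
print(per_student.groupby(level="paper").mean())

# Character volume per student per paper.
print(records.groupby(["paper", "student"])["chars"].sum())

# Percentage of annotations appearing in threads, per paper.
print(records.groupby("paper")["threaded"].mean() * 100)
```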

Number of Annotations per Student per Paper

Considering each data point in Figure 2A, representing the number of annotations made by a student on a given paper, the vast majority of students consistently exceeded the five-annotation instructor stipulation. Only 4/39 students consistently adhered to the minimum recommended number. This exceeding of the instructor-stipulated minimum is in line with a study by Lee et al. (2019), where a mean of 16.4 annotations per student per paper exceeded a 12-annotation stipulation. 24/39 students made more annotations per paper using Perusall, while 11/39 students made more annotations per paper using Hypothes.is. 4/39 students annotated to the same degree on both platforms, likely out of the habit of sticking to the minimum stipulated annotation number. Considering the output from both cohorts (n = 39), the mean number of annotations per paper per student using Perusall (7.49, SEM: 0.427) was higher than that using Hypothes.is (6.81, SEM: 0.375), although the difference was less than one annotation per paper, not statistically significant, and unlikely to be pedagogically significant for reasons mentioned previously.

In Figures 2B,C, means for the number of annotations made per paper were similar when comparing output on the two platforms for either cohort. The mean number of annotations made per student per paper stayed relatively stable over time as the students progressed within and between annotation platforms in the 2019–2020 cohort. There was initial high activity on the first paper annotated by the 2020–2021 cohort, which then stabilized between 6.5 and 7.5 annotations per paper, similar to the 2019–2020 cohort. An abnormal initial output makes sense if one considers that students are still adjusting to the platforms and may not yet have a good sense of output norms among their peers.

Character Volume in Annotations

In Figure 3A, 28/39 students had higher annotation character volumes per paper using Perusall, while 11/39 students had higher annotation character volumes with Hypothes.is. Combining data from the two cohorts (n = 39), the overall mean for Perusall was 3,205 characters per paper (SEM: 206), vs. 2,781 characters per paper (SEM: 210) for Hypothes.is (p = 0.0157).


Figure 3. Character volume in annotations. (A) Each data point represents the total character volume within all annotations made by a student for each paper. The X-axis is organized to group students together according to platform output means being higher on Hypothes.is or Perusall. Lines next to data points: SEM. Dashed lines indicate global means. (B) For the 2019–2020 cohort (18 students), and (C) the 2020–2021 cohort (21 students), the mean total character volume for all annotations per student (from four papers on each platform), and mean character volume per annotation are indicated, along with the pairing (gray lines) to indicate an individual student’s output on each platform. Below the bar chart is a timeline tracking the mean total character volume of annotations per student from the first to the eighth paper, error bars: SEM.

In Figure 3B, the 2019–2020 cohort had a higher mean total character volume output per student per paper for Perusall (3,205, SEM: 220) than for Hypothes.is (2,287, SEM: 215) (p < 0.0001). They also had a higher mean character volume per annotation for Perusall (503, SEM: 19.7) than for Hypothes.is (355, SEM: 17.7) (p < 0.0001). This cohort showed a steady increase in character volume output per student over time.

In Figure 3C, there was no significant difference in mean total character volume between the platforms for the 2020–2021 cohort, although Hypothes.is had a higher character volume output per student on a per-annotation basis (p = 0.012). Mean character volume output per student over time was steadier and did not show the consistently rising pattern of the 2019–2020 cohort. One potential explanation is that the user interface or social nature of the Perusall platform encourages higher output, and that this inertia remains when students transition to Hypothes.is.

Annotation Quality Scores Increase Over Time, Regardless of Platform Order

In keeping with prior studies (Miller et al., 2018), individual annotation quality was generally quite high, with the vast majority of annotations scoring full marks (two out of a possible two). Figures 4A,B indicate a decrease in low-scoring annotations (0 or 1) as the students went from the first to the second annotation platform. Figures 4C,D indicate an increase in mean annotation quality score as students progressed from one platform to the next, regardless of platform order. The mean annotation quality score for the 2019–2020 cohort went from 1.70 to 1.84 from the first to the second platform (Hypothes.is to Perusall) (p = 0.0176). For the 2020–2021 cohort, mean annotation quality went from 1.57 to 1.71 from the first to the second platform (Perusall to Hypothes.is) (p = 0.011). In considering progression over time in Figures 4E,F, there was some fluctuation on a per-paper basis, but the trends indicate an improvement in annotation quality from the beginning to the end of the annotation exercise. Taken together, this data is consistent with a growing fluency with annotation practices, and it de-emphasizes any platform influence on annotation quality. It is reasonable to conjecture that different attributes of a platform may change student behavior, as can be seen with annotation lengths. But since both platforms enable the essential basic annotation function, student insight shines through and does not necessarily depend on annotation length. It is thus reassuring that the mean quality score measured per student globally (n = 39) was almost identical (1.71 Hypothes.is, SEM: 0.04; 1.70 Perusall, SEM: 0.05).


Figure 4. Annotation quality scores increase from the first to second annotation platform for each cohort, regardless of the platform order. (A) 2019–2020 cohort, and (B) 2020–2021 cohort, mean number of annotations scoring a 0, 1, or 2 (among four papers on each platform), error bars: SEM. (C) 2019–2020 cohort, and (D) 2020–2021 cohort, mean annotation quality score per student (among four papers on each platform), with the pairing (gray lines) to indicate an individual student’s score on each platform. (E) 2019–2020 cohort, and (F) 2020–2021 cohort, timeline tracking the mean annotation quality score per student from the first to the eighth paper, error bars: SEM.

Isolated vs. Threaded Annotations

Threaded annotations can be viewed as preferable to isolated annotations because they provide evidence that the initial annotation has been read and digested by the responder, and has then spurred some dialogue for debate, additional nuance, or correction. In considering the percentage of total annotations that were isolated vs. those appearing in a thread, the only time isolated annotations outnumbered threaded annotations was in the initial use of the Hypothes.is platform with the first assigned paper for the 2019–2020 cohort (Figure 5A). For all other papers, annotations that were part of a thread outnumbered those that were isolated. The 2019–2020 cohort showed a clear trend of increasing threaded annotations over time, and a higher mean percentage of threaded annotations on the second platform (Perusall, 80% threaded) vs. the first platform (Hypothes.is, 53% threaded) (p = 0.0108). The 2020–2021 cohort (Figure 5B) showed a relatively steady trend, with a mean of ∼70% of annotations occurring in threads on each platform. The final paper annotated on each platform tended to have the highest percentage of collaborative annotations, again indicating an upward trend for dialogue.


Figure 5. Annotations in collaborative threads over time. (A) 2019–2020 cohort, (B) 2020–2021 cohort, percentage of annotations classified as isolated (no further student responses) [squares], or accompanied by one or more responses (thread length two or greater) [circles] within a given paper. Above the graphs are the mean percentages of threaded responses among the four papers annotated on each platform within a given cohort.

The trends toward an increasing percentage of threaded annotations and increasing mean annotation quality scores over time are reassuring, as they suggest that even in a relatively unprompted setting, students have some natural fluency and become better annotators. This should encourage both annotation platform designers and teachers considering a collaborative annotation approach in their courses. Prior studies have not followed the same student population from one platform to another, nor looked at output over time (threaded vs. isolated, annotation numbers, annotation character volume) within and between platforms. The quantitative analysis in this work provides a baseline against which future quantitative studies of student annotation output can be compared, or upon which they can build with further sophistication. The annotation character volume difference in the 2019–2020 cohort favored output on the Perusall platform, which could suggest that the social functionality of a platform drives some additional engagement; however, that conclusion should be tempered by the data from the 2020–2021 cohort, which was more even. The survey data shows a slight preference for Perusall vs. Hypothes.is (41% vs. 35%).

Caveats and Limitations of Current Study

Since the class sizes in this study were relatively small (<25 students), the body of annotations for a weekly reading was still fully consumable by the instructor with an investment of roughly 4–5 h for reading, processing, grading, and engaging with a subset of those annotations. This does not include a thorough reading of the source document and the planning of the accompanying lecture, which took additional time. The reading time commitment for an entire body of annotations is perhaps even more daunting for students, as was indicated in some survey responses. With larger classes, one instructor may have difficulty managing the body of annotations, and if engaging with students on the platform, would likely be participating with a smaller percentage of the overall student body. Both platforms have the ability to divide a class into smaller subgroups; Perusall's default setting is groups of 20 students. If the readings are annotated in assignment mode, Perusall also has a machine learning capability to analyze the large body of annotations that could accrue in a large class, but this was not evaluated in the current study. Annotations of poor quality can contribute noise to the reading experience, and contempt for the annotator (Wolfe and Neuwirth, 2001). In this study, low quality annotations were a relatively minor concern, but they could be a greater concern with larger class sizes, or for classes where some subset of students approaches the source material in a superficial way (i.e., a required class outside of a student's main interests, unreasonable difficulty for students in grasping the source material, or a desire to troll/abuse other students in the class). In sum, even though the annotation approach worked well with the current study population, problems could emerge with another population.

Since the Hypothes.is annotations were occurring on article PDFs hosted as webpages, annotations can be temporarily lost if the article URL changes. This occurred with one article for the 2019–2020 cohort, and one article for the 2020–2021 cohort. With some technical support from Hypothes.is, the annotations were recovered by using a locally saved PDF whose underlying fingerprint could still be recognized, in order to show the annotations. Individual annotations can also become "orphaned" if the text they were anchored to disappears from the source webpage. These are listed under another tab in the annotation interface, so they are not lost from consideration. If students are annotating web content that is more dynamic, with many source edits, this could be more problematic. In Perusall, the source documents were uploaded PDFs, so the underlying text never changed.

Ideally, the same articles would have been assigned to each cohort (2019–2020 and 2020–2021); however, that was not possible, as the articles needed to relate to a seminar speaker series in which the invited speakers change from year to year. Instructors should keep in mind that when students first use an annotation platform, they do not yet have an impression of group output norms, so one might expect higher or lower output on the first paper annotated. This can be seen in both cohorts in this study: the 2019–2020 cohort had a particularly low character volume on the first paper annotated, while the 2020–2021 cohort had a higher annotation number and character output on the first paper.

The mean number of annotations per paper is surely influenced by teacher guidelines. If one used the platforms with no minimum stipulation, or with a minimum count of 10 instead of 5, student behavior would likely change. Some portion of the motivation is driven by the instructor stipulation and the grading of the annotations; another portion comes from genuine engagement, whether with a thought-provoking point made by another student, a refutation of one's own annotation, or a conversation taken in an unexpected direction. One cannot be sure of the balance between these forces, but prior research indicates that even in ungraded settings, collaborative annotation still appears to engage students with class-associated reading (Singh, 2019).

In retrospect, being able to link the identity behind student survey comments to the same student's annotation output would have enabled additional research questions (i.e., do students who favor one platform in their survey response also make more annotations, or have a higher character volume, on that platform?). As the surveys in this study were answered anonymously, this was not possible.

Finally, the functionality of the platforms can change over time. This is an unavoidable problem for research on any type of educational technology. Some issues mentioned by students may already be in the process of being fixed by the platforms.

Emergence of Annotation Best Practices

The major areas for the shaping of annotation best practices appear to reside in:

1. Scaffolding for students in writing more effective annotations.

2. Affordances of asynchronous participation.

3. Measurement of annotation across texts vs. within texts.

4. Large data set mining/learning analytics approaches.

As the annotation platforms are relatively new to the educational technology scene, instructors are now starting to consider what scaffolding is needed for students to write high quality annotations. Work by Jackson (2021) parallels two of the qualitative survey prompts here, in that it asks students to elaborate on what makes for good quality and poor quality annotations, in the hope that they will apply that reflection to their own annotation output later on. It includes an excellent clarify-connect-extend annotation rubric (Jackson, 2021), which instructors might find useful in an initial briefing of the annotation process for their students, or for remedial tune-ups for those contributing less than ideal output.

Asynchronous discussion allows for preparation and analysis not only by students, but also by the instructor. For example, in synchronous situations, the instructor cannot typically ask a student to wait an hour for a reply to a comment/question so that the instructor can go read another article and make a more nuanced and accurate comment. With an asynchronous approach, this is possible. Although one often thinks of how to motivate students, these asynchronous approaches provide a buffer of time that can motivate further engagement from the instructor with the source text or other related materials. On the other hand, tardy feedback (>2 weeks after an assignment is completed) is detrimental to the feedback's value and impact (Brown, 2007). With the annotation platforms in the current study, follow-up on student annotations occurred on the order of hours to days, well within the period of feedback usefulness.

Annotation across various texts and annotation within a given text both yield valuable information (Marshall, 2000). A student's annotations across various texts during a semester, or during a degree program, could give some indication of intellectual growth over time. The body of annotations within a given text could provide an important indicator of engagement levels with an assigned text, with the assumption that a text with a high volume of threaded annotations is more conducive to debate and collective meaning-making by the students than a text with a low volume. This may signal which readings should be kept or omitted in future course syllabi, while allowing that higher or lower numbers may occur when an annotation platform is first introduced, as students become familiar with the annotation routines. Similar considerations of individual student activity vs. course resource usage have been harnessed for LMS dashboards (Wise and Jung, 2019), and for annotations across course documents over time (Singh, 2019). Although just-in-time teaching was mentioned previously in regard to the traditional template assessment, it may apply equally to annotation output, particularly if collating tags that indicate confusion. This could inform instructors on where students are having difficulties (Singh, 2019). Perusall also has the capability to generate a confusion report summarizing general areas of questions/confusion.

For learning analytics practitioners, a body of annotations holds not only the insight within it (i.e., what section of text is highlighted? what is expressed in the annotations added?), but also where it was applied (which document or URL?), when it was applied (time stamps), and how students and scholars might form an effective network (who participates in whose threads?). Collectively, this could yield a staggering amount of data. An estimated 2,900,000 time-stamped learning "traces" were postulated to arise from a 200-student course using the nStudy collaborative annotation tool (Winne et al., 2019). The Hypothes.is and Perusall platforms have vastly larger student user bases, so collaborative online annotation seems ripe for learning analytics and big data inquiries. Statistical properties of online web page tagging practices (Halpin et al., 2007; Glushko et al., 2008), or the view of collaborative tagging as distributed cognition (Steels, 2006), may also apply to annotation content when larger groups of annotators are involved.
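As one small illustration of the network angle, the sketch below (Python with networkx; the thread data are invented, and each reply is simplified as directed at the thread's opening annotation, though platforms also allow nested replies) builds a weighted reply graph from which standard measures such as weighted in-degree can suggest who anchors a class's discussions.

```python
import networkx as nx

# Hypothetical threads: each is an ordered list of (anonymized) authors,
# where each annotation after the first replies to the thread's opener.
threads = [
    ["student 3", "student 7", "student 1"],
    ["student 7", "student 3"],
    ["student 1", "student 7", "student 7"],
]

G = nx.DiGraph()
for thread in threads:
    opener = thread[0]
    for replier in thread[1:]:
        # Weight repeated interactions between the same pair.
        if G.has_edge(replier, opener):
            G[replier][opener]["weight"] += 1
        else:
            G.add_edge(replier, opener, weight=1)

# Who receives the most replies? A simple proxy for discussion anchors.
print(sorted(G.in_degree(weight="weight"), key=lambda x: -x[1]))
```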

Annotation Platforms for Peer Review and Post-publication Peer Review

Although the user base for online annotation by students is large, collaborative text-linked annotation could find additional users in a journal's peer review process or in post-publication peer review (Staines, 2018), whereby commentary is collected for the purposes of re-contextualizing or further assessing the quality of a previously published manuscript (Kriegeskorte, 2012). Some journals already include collaborative stages in peer review, but the discussion occurs in more of a forum-type situation, where the commentary is not directly text-linked or marginal. Authors, reviewers, and editors should consider whether commentary that is directly text-linked or figure-linked is more beneficial, or whether they would prefer to continue contextualizing comments with line numbers and other source-document referrals. Critical commentary on a published article may already occur within the introduction and discussion sections of other articles, or on web blogs, but assembling it can be difficult, as it is not anchored within the discussed document and must be reconstructed in a labor-intensive way from citation trails. One can contemplate whether post-publication peer review initiatives like PubPeer (Townsend, 2013) would be more streamlined if commentary were directly content-linked. This could perhaps be aided by a set of common tags among users.

Meta-Commentary, New Teaching Spaces

In reading a primary source paper alongside the accrued commentary in the annotations, which often includes praise, snipes, and suggestions for how the authors "should have done things differently," one can be fairly confident that the commentary drives additional interest in the paper. Although they are not typically marginal or text-linked, comments on newspaper articles are generally supported by authors and may drive more interest in the article itself (Nielsen, 2012). To consider an example outside of academics, some television programs (i.e., Terrace House) include a surrogate audience of commentators to help the home audience interpret and judge the actions of the characters on the show (Rugnetta, 2017). The audience tunes in not only to see what the main characters will do, but also to see how their behavior is discussed by this panel of observers. The panel's commentary functions as highly engaging meta-content that indicates how a viewer might receive and process the main events of the show (Urban, 2010). Some feel the show would be mundane without the panel commentary, which serves as a major engagement tool; the audience is treated to a meta-experience that filters their own experience, and it is this alternative reading that provides additional intrigue (Kyotosuki, 2018).

Consider reading your favorite movie script with annotations by the director, a draft of your favorite novel including exchanges between the editor and the author, or a landmark scientific paper with annotations by current scientists. These would all inform the reader about the process of getting to the final product, or, in the last example, provide a contemporary lens for older content, and thus add value. Some critics have imagined bodies of annotations from a favorite book that could be shared (transportable social marginalia) in a literary communion through a series of "annotation skins" (Anderson, 2011). The collation and screening of quality annotations could also be a value-adding enterprise for those willing to participate.

While one can lament the loss of physical teaching spaces imposed by a viral pandemic or other virtual learning circumstances, new spaces are opened by new technologies ( Pursell and Iiyoshi, 2021 ). The instructor and the student can “meet at the text” via collaborative online annotation, and engage in critical exchanges.

Future Research Questions

Collaborative annotation provides fertile ground for further study.

First: To what degree do students read the primary source text on the annotation platform itself? Do students mark up a paper copy or a separate digital copy (i.e., a personally downloaded PDF file) and then go to the annotation platform, or do they treat the platform as a packaged experience in which they do both the initial reading and the annotating? This question should be included in future annotation usage surveys and could inform platform designers, who would like to enable the smoothest experience for reading and annotating the source text on the same screen. One might expect dialogic interactions to decrease if users first annotated a paper or digital copy by themselves and then simply typed those isolated points into the annotation platform interface.

Second: What percentage of the total body of annotations on a given text are students consuming? To what extent do students revisit their own annotation threads to look for and address new responses, or revisit a growing body of annotations after they have fulfilled an instructor-stipulated amount? Survey data in this study indicated that students found value in the annotations of others, in accord with the value of “eavesdropping” on the insights of other readers (Wolfe and Neuwirth, 2001), but currently the only plausible indicator that an annotation has been read by another student is a comment extending its thread. Perhaps technical developments by the platforms will enable some measurement of student consumption of annotations in the future. The consumption of high-quality, but perhaps unseen, threads can be aided by an instructor’s curation of annotations for a subsequent class session.

Third: How much value does a body of annotations hold over time? Although this has been considered for an annotation’s value relative to the potential permanence of the work itself (Marshall, 2000), one could also ask how much value a body of annotations generated in one class could have for an instructor or group of students at another university embarking on an annotation exercise on the same source text. This would seem to provide fertile ground for cross-cultural and cross-institutional comparisons. An instructor could give a series of richly annotated documents to a group of students and have them evaluate that reading experience against a set of unannotated documents, testing the “dirty textbook” hypothesis (Van Dam, 1988) in the current online, portable annotation environment. There will be opportunities for instructor curation and comparison that also relate to pedagogy, as with the strategy of using a previous class’s annotations as “seed” annotations to promote productive student dialogue (Miller et al., 2016).

Fourth: Would the author of the source text ever wish to engage with the annotators? Some authors might discover new insights, research directions, and caveats for their published work by treating public annotation directly linked to the source text as a form of post-publication peer review. Textbook authors and editors might like to see which sections of the book generate many annotations indicative of confusion. Other authors are opposed, stating that commentary on their articles is fine in other locations (separate blogs, Twitter, etc.), but that they do not want any commentary superimposed upon their own website URLs (Watters, 2017).

Constructive commentary is likely favorable to most, but it would also need to be free of the noise of useless comments, personal attacks, and factually false statements. This useful “wheat” vs. useless “chaff” concern affects all publication systems.
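
As a rough illustration of the second question, consider the one indicator that is visible today: whether a top-level annotation’s thread was ever extended. The following minimal sketch, a hypothetical analysis and not a method from this study, uses the public Hypothes.is search API to estimate what fraction of top-level annotations on a document received at least one reply. The document URI is a placeholder, and annotations made in private course groups would additionally require an API token.

```python
# Hypothetical sketch: estimate what fraction of top-level annotations on a
# document ever had their thread extended -- currently the only visible
# proxy for whether classmates have read them.
import requests

API = "https://api.hypothes.is/api/search"
DOC_URI = "https://example.com/assigned-reading"  # placeholder URI

rows, offset = [], 0
while True:
    batch = requests.get(
        API, params={"uri": DOC_URI, "limit": 200, "offset": offset}
    ).json()["rows"]
    if not batch:
        break
    rows.extend(batch)
    offset += len(batch)

# Replies carry a "references" list of ancestor annotation IDs, with the
# thread root first; top-level annotations have no "references" field.
top_level = [a for a in rows if not a.get("references")]
extended = {a["references"][0] for a in rows if a.get("references")}

if top_level:
    frac = sum(a["id"] in extended for a in top_level) / len(top_level)
    print(f"{len(top_level)} top-level annotations; "
          f"{frac:.0%} had their thread extended by a reply")
```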

Collaborative online annotation can provide a means for creative discussion and better understanding of a text, including quite challenging primary research texts. As with any educational technology, pedagogical considerations will be of paramount importance. Students recognize and appreciate that an online annotation platform can make their thoughts, and those of their classmates, visible and actionable for an assigned text, giving each student a useful point of comparison for their own understanding. Also, some solace can be found when the struggle with tough material is collective rather than isolated. Repetition in grading is reduced when collaborative annotation takes the place of an assignment in which students generate a relatively uniform assessment product. Some of the feedback burden on instructors is removed when students beat the instructor to the punch in responding to an annotation with quality feedback.

The power of online annotation was recognized as early as the 1990s: early web browsers contemplated annotation as a feature but were hampered by an inability to host the potentially huge scale of annotations on a proper server (Andreessen, 2014). Now, because of their large and growing user bases, Perusall and Hypothes.is are opening up a new enterprise that classroom instructors, scholars, and learning analytics practitioners can all enter, and from which students can hopefully all benefit.

In an address entitled “The Revolution Will Be Annotated” (Personal Democracy Forum, 2013), Whaley argued that “reasoning tends to work better as a team sport.” The student feedback in the current study supports that argument. In “As We May Think” (Bush, 1945), which anticipated multiple aspects of the internet, Vannevar Bush predicted:

There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record. The inheritance from the master becomes, not only his additions to the world’s record, but for his disciples the entire scaffolding by which they were erected.

This idea may gain traction in the rapidly accruing mass of annotations and post-publication commentary. Since annotation platforms like Perusall are now serving students in the millions, and Hypothes.is annotators have made over 20 million annotations (approximately one year after they marked 10 million annotations),3 research into the usage and impact of these platforms seems particularly pressing. Hypothes.is, through iAnnotate,4 and Perusall, through the Perusall Exchange,5 are generating excitement in their own dedicated conferences. Learning Management Systems (LMSs) and Audience Response Systems (“clickers,” or ARSs) have become so ubiquitous in higher education as to gain a common label. With the number of students currently served, it seems fitting that collaborative online annotation platforms (COAPs) acquire a common label too. As for the scope of the current study, the students in two cohorts together made over 2,200 annotations totaling over 920,000 characters, an average of roughly 400 characters per annotation. Although most of the focus in this and other annotation papers is on how the collaborative annotation process helps students, one can also consider how this spotlight into student thoughts helps the teacher. The students repeatedly had insights into the scientific content of the assigned papers that expanded the thinking of the instructor.

These annotation platforms bring new value to the educational technology landscape: new ways of achieving prompt, valuable, and often dialogic feedback that may lessen instructor burden and increase instructor and student motivation. The task we now face as educators is to make the annotation trails as useful as possible as we engage in the team sport of reasoning in the sciences, social sciences, and humanities.

Data Availability Statement

Anonymized datasets for this study can be found in the Harvard Dataverse online repository: full survey responses (https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/0GG53Z) and annotation content Excel files (https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/G8SR2G).

Ethics Statement

The studies involving human participants were reviewed and approved by the Harvard Human Research Protection Program. Written informed consent for participation was not required for this study in accordance with the National Legislation and the Institutional Requirements.

Author Contributions

GWP conceived the study, analyzed the data, and wrote the manuscript.

Conflict of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

I would like to acknowledge the consistent group effort of the students in annotating challenging papers, and providing frank commentary on the annotation process. The author would also like to acknowledge the time spent with Harvard Medical School’s Curriculum Fellows for broadening his exposure to pedagogical approaches and tools. Valuable feedback on early drafts of this manuscript came from Dr. Taralyn Tan of Harvard’s Neuroscience Program, and Dr. Rachel Wright of Smith College. Valuable guidance on IRB navigation was provided by Dr. Madhvi Venkatesh, Vanderbilt University, formerly of Harvard.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2022.852849/full#supplementary-material

  • ^ https://web.Hypothes.is/about/
  • ^ https://perusall.com/about
  • ^ https://web.Hypothes.is/blog/our-view-from-20-million-annotations/
  • ^ https://iannotate.org/
  • ^ https://perusall.com/exchange

Adams, P. (2006). Exploring social constructivism: theories and practicalities. Education 34, 243–257. doi: 10.1080/03004270600898893

Anderson, S. (2011). What I Really Want is Someone Rolling Around in the Text. New York: New York Times.

Andreessen, M. (2014). Why Andreessen Horowitz Is Investing in Rap Genius . Available online at: http://genius.com/Marc-andreessen-why-andreessen-horowitz-is-investing-in-rap-genius-annotated [Accessed on July 19, 2021]

Beck, C. (2017). “Informal action research: the nature and contribution of everyday classroom inquiry,” in The Palgrave International Handbook of Action Research, eds L. Rowell, C. Bruce, J. Shosh, and M. Riel (New York: Palgrave Macmillan), 37–48.

Bold, M. R., and Wagstaff, K. L. (2017). Marginalia in the digital age: are digital reading devices meeting the needs of today’s readers? Libr. Inf. Sci. Res. 39, 16–22. doi: 10.1016/j.lisr.2017.01.004

Braguglia, K. H. (2006). Perceptions of reading assignments. J. Coll. Teach. Learn. 3, 51–56.

Brown, J. (2007). Feedback: the student perspective. Res. Post Compul. Educ. 12, 33–51.

Bush, V. (1945). As we may think. Atl. Mon. 176, 101–108.

Carnegie Learning (2020). UpGrade: An Open Source Platform for A/B Testing in Education. Available online at: https://www.carnegielearning.com/blog/upgrade-ab-testing/ [Accessed on Jun 28, 2021]

Chen, Z., Chudzicki, C., Palumbo, D., Alexandron, G., Choi, Y.-J., Zhou, Q., et al. (2016). Researching for better instructional methods using AB experiments in MOOCs: results and challenges. Res. Pract. Technol. Enhanc. Learn. 11, 1–20. doi: 10.1186/s41039-016-0034-4

Clapp, J., DeCoursey, M., Lee, S. W. S., and Li, K. (2020). “Something fruitful for all of us”: social annotation as a signature pedagogy for literature education. Arts Humanit. High. Educ. 20. doi: 10.1177/1474022220915128

Clump, M. A., Bauer, H., and Bradley, C. (2004). The extent to which psychology students read textbooks: a multiple class analysis of reading across the psychology curriculum. J. Instr. Psychol. 31, 227–232.

Duchastel, P., and Chen, Y. P. (1980). The use of marginal notes in text to assist learning. Educ. Technol. 20, 41–45.

Fagen, A. P., Crouch, C. H., and Mazur, E. (2002). Peer instruction: results from a range of classrooms. Phys. Teach. 40, 206–209. doi: 10.1119/1.1474140

Ghadirian, H., Salehi, K., and Ayub, A. F. M. (2018). Social annotation tools in higher education: a preliminary systematic review. Int. J. Learn. Technol. 13, 130–162. doi: 10.1504/ijlt.2018.092096

Glushko, R. J., Maglio, P. P., Matlock, T., and Barsalou, L. W. (2008). Categorization in the wild. Trends Cogn. Sci. 12, 129–135. doi: 10.1016/j.tics.2008.01.007

Halpin, H., Robu, V., and Shepherd, H. (2007). The Complex Dynamics of Collaborative Tagging. in Proceedings of the 16th International Conference on World Wide Web. New York: ACM 211–220.

Hannafin, M. J., Hill, J. R., Land, S. M., and Lee, E. (2014). “Student-centered, open learning environments: Research, theory, and practice,” in Handbook of Research on Educational Communications and Technology , eds M. Spector, M. D. Merrill, J. Merrienboer, and M. Driscoll (London, UK: Routledge), 641–651. doi: 10.1007/978-1-4614-3185-5_51

Honeycutt, L. (2001). Comparing e-mail and synchronous conferencing in online peer response. Writ. Commun. 18, 26–60. doi: 10.1177/0741088301018001002

Hoskins, S. G., Lopatto, D., and Stevens, L. M. (2011). The CREATE approach to primary literature shifts undergraduates’ self-assessed ability to read and analyze journal articles, attitudes about science, and epistemological beliefs. CBE Life Sci. Educ. 10, 368–378. doi: 10.1187/cbe.11-03-0027

Howard, J. R., and Henney, A. L. (1998). Student participation and instructor gender in the mixed-age college classroom. J. Higher Educ. 69, 384–405. doi: 10.2307/2649271

Jackson, H., Nayyar, A., Denny, P., Luxton-Reilly, A., and Tempero, E. (2018). HandsUp: An In-Class Question Posing Tool. in 2018 International Conference on Learning and Teaching in Computing and Engineering (LaTICE). New Jersey: IEEE, 24–31.

Jackson, H. J. (2001). Marginalia: Readers Writing in Books. London: Yale University Press.

Jackson, P. (2021). How to Write a High Quality Reading Annotation. Available online at: https://www.saltise.ca/wp-content/uploads/2021/03/How-to-write-high-quality-reading-annotations.pdf [Accessed on Jun 30, 2021]

Kalir, R., and Garcia, A. (2019). Annotation. The MIT Press Essential Knowledge Series. Available online at: https://mitpressonpubpub.mitpress.mit.edu/annotation [Accessed on May 17, 2021].

Kararo, M., and McCartney, M. (2019). Annotated primary scientific literature: a pedagogical tool for undergraduate courses. PLoS Biol. 17:e3000103. doi: 10.1371/journal.pbio.3000103

Kennedy, M. (2016). Open annotation and close reading the Victorian text: using Hypothes.is with students. J. Vic. Cult. 21, 550–558. doi: 10.1080/13555502.2016.1233905

Kimmons, R. (2021). A/B Testing on Open Textbooks. A Feasibility Study for Continuously Improving Open Educational Resources. J. Appl. Instr. Des. 10, 1–9. doi: 10.51869/102/rk

King, G. (2016). Introduction to Perusall. Available online at: https://perusall.com/downloads/gary-king-webinar-slides.pdf (accessed June 3, 2020).

Kriegeskorte, N. (2012). Open evaluation: a vision for entirely transparent post-publication peer review and rating for science. Front. Comput. Neurosci. 6:79. doi: 10.3389/fncom.2012.00079

Kuckartz, U. (2014). Qualitative Text Analysis: A Guide to Methods, Practice and Using Software. California: Sage.

Kyotosuki, N. (2018). Terrace House: Visualising ‘Asian Modernity. Available online at: https://atmafunomena.wordpress.com/2018/08/31/terrace-house-visualising-asian-modernity/ [Accessed on July 1, 2021].

Lee, S. C., Lee, Z.-W., and Yeong, F. M. (2019). “Using social annotations to support collaborative learning in a Life Sciences module,” in Personalised Learning. Diverse Goals. One Heart. ASCILITE, ed. Y. W. Chew (Singapore: ASCILITE), 487–492.

Marrs, K. A., and Novak, G. (2004). Just-in-time teaching in biology: creating an active learner classroom using the internet. Cell Biol. Educ. 3, 49–61. doi: 10.1187/cbe.03-11-0022

Marshall, C. (2000). “The future of annotation in a digital (paper) world,” in Successes & Failures of Digital Libraries: Papers Presented at the 1998 Clinic on Library Applications of Data Processing, March 22–24, 1998. Available online at: http://hdl.handle.net/2142/25539

McLuhan, M. (1964). Understanding Media: The Extensions of Man. Cambridge: MIT Press.

McMillan, J. H. (2015). Fundamentals of Educational Research. London: Pearson.

Miller, K., Lukoff, B., King, G., and Mazur, E. (2018). Use of a Social Annotation Platform for Pre-Class Reading Assignments in a Flipped Introductory Physics Class. Front. Educ. 3:1–12. doi: 10.3389/feduc.2018.00008

Miller, K., Zyto, S., Karger, D., Yoo, J., and Mazur, E. (2016). Analysis of student engagement in an online annotation system in the context of a flipped introductory physics class. Phys. Rev. Phys. Educ. Res. 12, 1–12. doi: 10.1103/PhysRevPhysEducRes.12.020143

Nielsen, C. (2012). Newspaper journalists support online comments. Newsp. Res. J. 33, 86–100.

Novak, E., Razzouk, R., and Johnson, T. E. (2012). The educational use of social annotation tools in higher education: a literature review. Internet High. Educ. 15, 39–49. doi: 10.1016/j.iheduc.2011.09.002

November, A. (2020). Interview with Eric Mazur: Socrates Meets the Web! Novemb. Learn. Webpage. Available online at: https://novemberlearning.com/2020/04/29/interview-with-eric-mazur-socrates-meets-the-web/ [Accessed on May 25, 2020].

O’Connell, M. (2012). The Marginal Obsession with Marginalia. The New Yorker, January 26.

Personal Democracy Forum (2013). Dan Whaley: The Revolution Will Be Annotated . Available online at: https://www.youtube.com/watch?v=2jTctBbX_kw

Podolefsky, N., and Finkelstein, N. (2006). The Perceived Value of College Physics Textbooks: students and Instructors May Not See Eye to Eye. Phys. Teach. 44, 338–342. doi: 10.1119/1.2336132

Porter, G. W. (2013). Free choice of learning management systems: do student habits override inherent system quality? Interact. Technol. Smart Educ. 10, 84–94. doi: 10.1108/ITSE-07-2012-0019

Pursell, C., and Iiyoshi, T. (2021). Policy Dialogue: online Education as Space and Place. Hist. Educ. Q. 61, 534–545. doi: 10.1017/heq.2021.47

Renz, J., Hoffmann, D., Staubitz, T., and Meinel, C. (2016). Using A/B testing in MOOC environments. in Proceedings of the Sixth International Conference on Learning Analytics & Knowledge , New York: ACM. 304–313.

Round, J. E., and Campbell, A. M. (2013). Figure facts: encouraging undergraduates to take a data-centered approach to reading primary literature. CBE Life Sci. Educ. 12, 39–46. doi: 10.1187/cbe.11-07-0057

Rugnetta, M. (2017). How is Terrace House like a Let’s Play. Available online at: https://www.youtube.com/watch?v=24MWwO_Gpg8 [Accessed on July 1, 2021]

Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instr. Sci. 18, 119–144. doi: 10.1007/bf00117714

Simpson, M. L., and Nist, S. L. (1990). Textbook annotation: an effective and efficient study strategy for college students. J. Read. 34, 122–129.

Singh, S. (2019). Exploring the potential of social annotations for predictive and descriptive analytics. Annual Conference on Innovation and Technology in Computer Science Education. New York: Association for Computing Machinery 247–248. doi: 10.1145/3304221.3325547

Staines, H. R. (2018). Digital Open Annotation with Hypothes.is: Supplying the Missing Capability of the Web. J. Sch. Publ. 49, 345–365. doi: 10.3138/jsp.49.3.04

Steels, L. (2006). Collaborative tagging as distributed cognition. Pragmat. Cogn. 14, 287–292.

Townsend, F. (2013). Post-publication peer review: PubPeer. Ed. Bull. 9, 45–46. doi: 10.1080/17521742.2013.865333

Urban, G. (2010). A method for measuring the motion of culture. Am. Anthropol. 112, 122–139. doi: 10.1111/j.1548-1433.2009.01201.x

Van Campenhout, R., Brown, N., Jerome, B., Dittel, J. S., and Johnson, B. G. (2021). Toward Effective Courseware at Scale: Investigating Automatically Generated Questions as Formative Practice. in Proceedings of the Eighth ACM Conference on Learning@Scale. New York: ACM 295–298.

Van Dam, A. (1988). Hypertext’87: keynote address. Commun. ACM 31, 887–895. doi: 10.1145/48511.48519

Vygotsky, L. (1978). Mind in Society: The Development of Higher Psychological Processes. Cambridge: Harvard University Press.

Watters, A. (2017). Un-Annotated. Available online at: http://hackeducation.com/2017/04/26/no-annotations-thanks-bye [Accessed on August 12, 2021]

Weng, C., and Gennari, J. H. (2004). Asynchronous Collaborative Writing through Annotations. in Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work. Washington: University of Washington 578–581.

Wenk, B. L., and Tronsky, L. (2011). First-Year Students Benefit From Reading Primary Research Articles. J. Coll. Sci. Teach. 40, 60–67.

Winne, P. H., Teng, K., Chang, D., Lin, M. P. C., Marzouk, Z., Nesbit, J. C., et al. (2019). nStudy: software for learning analytics about processes for self-regulated learning. J. Learn. Anal. 6, 95–106.

Wise, A. F., and Jung, Y. (2019). Teaching with analytics: towards a situated model of instructional decision-making. J. Learn. Anal. 6, 53–69.

Wolfe, J. (2002). Annotation technologies: a software and research review. Comput. Compos. 19, 471–497. doi: 10.1016/s8755-4615(02)00144-5

Wolfe, J. L., and Neuwirth, C. M. (2001). From the margins to the center: the future of annotation. J. Bus. Tech. Commun. 15, 333–371. doi: 10.1177/105065190101500304

Yeong, F. M. (2015). Using primary literature in an undergraduate assignment: demonstrating connections among cellular processes. J. Biol. Educ. 49, 73–90. doi: 10.1080/00219266.2014.882384

Keywords : annotation, assessment, asynchronous, feedback, Hypothes.is, peer learning, Perusall

Citation: Porter GW (2022) Collaborative Online Annotation: Pedagogy, Assessment and Platform Comparisons. Front. Educ. 7:852849. doi: 10.3389/feduc.2022.852849

Received: 11 January 2022; Accepted: 23 March 2022; Published: 10 May 2022.

Copyright © 2022 Porter. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Gavin W. Porter, [email protected]

Collaborative Annotation Tools

Collaborative annotation tools offer instructors the ability to make reading and textual analysis a communal activity. Most options are free of charge and easy to learn and use.

The following sections describe some online tools that help foster active and collaborative annotation of various documents, articles, eBooks, and PDF files.

Google Docs

Google Docs is a word processing platform from Google that lets you create a blank document or upload one from your personal computer, and allows multiple users to annotate the document.

Pros:

  •   Widely used
  •   Easily saves and converts documents from one file type to another
  •   Allows users to collaborate even if they do not have a Google account

Cons:

  •   Users without a Google account remain anonymous
  •   Only allows for textual commenting
  •   No student analytics option for tracking participation

Best for:

  •   A great entryway technology for any classroom requiring collaborative work on a single document

  •   Free
  •   Google Docs Website
  •   How to Do Collaborative Annotation in Google Docs for Online or Blended Classrooms

Perusall

Perusall is a free collaborative annotation service that claims to transform reading from a “solitary experience” to an “engaging and collective” one. Teachers are able to create online courses independently of, or integrated with, an LMS such as Canvas. From there, teachers can upload texts as files from their computers, or they can have students purchase e-copies directly from the publisher. Teachers can make assignments from specific chapters or sections and have students read, comment, and ask questions collaboratively.

Pros:

  •   Completely free to use (no upgraded version)
  •   Allows students to work collaboratively or individually
  •   Students can communicate with each other by responding to or upvoting their classmates’ questions
  •   Students can annotate images and text
  •   Comes equipped with a student analytics function that tracks what participants contribute to the assignment

Cons:

  •   App is not currently integrated into Emory’s Canvas account
  •   There are extra steps to gain “Instructor Access”
  •   Layout is fairly basic

Best for:

  •   Courses that stick close to a single textbook
  •   Courses where collaborative reading/editing is part of the learning objective
  •   Courses with no central textbook, where reading is done exclusively through PDFs

  •   Perusall Website
  •   Introduction to the Perusall platform
  •   Perusall Tutorial

Hypothesis

Hypothesis is a free online tool and web browser extension that allows you to annotate and save web pages, both individually and collaboratively. The creators describe the technology as creating a “conversation layer over the entire web” between you and whatever community of learners you are a part of. The company also prides itself on offering “free, open, and neutral” tools for online learning.

Pros:

  •   Sleek, almost invisible toolbar
  •   Allows you to tag and categorize annotations online
  •   Allows users to comment with text, images, or short videos (by embed link only)
  •   Gives users the ability to keep annotations private or make them public so as to engage with the greater scholarly community

Cons:

  •   Compared to Insert Learning, the layout is a little less intuitive
  •   No student analytics option for tracking participation

Best for:

  •   Classes where students primarily read online content

  •   Hypothesis Website
  •   Hypothesis Annotation in 5 Minutes!
  •   Hypothesis Demo

Kami

Kami allows teachers to share files with others to collaboratively annotate documents with text, images, voice memos, and short videos. The tool also comes with a web browser extension that makes opening the Kami editor seamless.

Pros:

  •   Fully integrated with Google Classroom
  •   Basic version has most of the same elements as Perusall
  •   Includes an OCR converter to make PDFs readable

Cons:

  •   Most advanced settings cost $99/year/teacher
  •   Marketed for K-12 students

Best for:

  •   Teachers who use Google Drive/Google Classroom
  •   Group writing assignments
  •   Peer review or workshop circles

  •   Basic Version: Free; Pro Version: $99+; Pricing Range Available Here
  •   Kami Website
  •   What is Kami?
  •   Kami and Google Classroom Integration

Insert Learning

Insert Learning allows teachers to collect individual websites and turn them into trackable online assignments for students in their courses. Using a free Insert Learning account (and by downloading its Chrome web browser extension), teachers can highlight, annotate, or insert questions into any website.

Pros:

  •   Completely free (no upgraded version)
  •   You can add textual information as well as short video recordings
  •   Toolbar is easy to understand and navigate
  •   Fully integrated into Google Classroom

Cons:

  •   Simplistic layout
  •   Caters to K-12 students

Best for:

  •   Teachers who often assign long-form journalism or texts that are primarily accessible online

  •   Insert Learning Website

Hypothes.is: Social and Collaborative Annotation

What Is Hypothes.is?

Hypothes.is is an online collaborative annotation tool. With Hypothesis, you can leverage the power of social learning in your online course. Benefits of collaborative annotation include increased student understanding (Miller et al., 2016), intrinsic motivation (Dean & Schulten, 2015), and collective efficacy (Bandura, 2000).

Hypothes.is’ affordances include:

  • Students and teacher co-construct knowledge in the authentic environment of an assigned reading
  • Students and teacher learn through multimodal expression facilitated by annotations that can combine text, GIFs, and video
  • Teacher can model metacognition and other reading strategies by seeding an assigned text with annotations
  • Teacher can ask questions about important passages by seeding an assigned text with annotations
  • Students and teacher use the tool within CarmenCanvas
  • Teacher can choose to provide feedback to students via the SpeedGrader integration

How Does Hypothesis Work?

Hypothesis is a tool integrated into CarmenCanvas that allows the instructor to set up course readings (PDFs, websites, and EPUB files) so that students can annotate together. Instructors can set up readings and webpages in Carmen so that students can annotate, comment, and discuss in a shared space. Hypothesis readings can be set up in Carmen in two ways:

  • Assignment in Carmen (graded or ungraded)
  • Standalone page within a Module in Carmen (ungraded)

Setting Up Hypothesis in Carmen

First, begin by creating a new assignment in Carmen. If using a PDF reading, you will also need to add that PDF to the files in your Carmen course. Once you have an assignment you would like to add Hypothesis to, use the following steps:

1. From the Submission Type options, select External Tool.

2. Click Find, scroll down, and select Hypothesis.

3. Select the option to either Enter URL (for a webpage) or Select PDF from Canvas.

4. Select your PDF or enter your URL location when prompted.

5. You will be returned to the same window illustrated in Step 2. The Hypothesis tool is now selected; click Select to confirm.

6. As confirmation, you will see the external tool window with the location of the tool and your selected reading pre-filled.

a. Select whether the tool (Hypothesis) should load in a new tab or directly within the Carmen page.

7. At this juncture you can set up the rest of your assignment: add instructions, points, grade display preferences, and due dates. Once ready, scroll to the bottom of the page and click Save. The page should reload, and you should now see the text students will annotate using Hypothesis (if you chose not to load the tool in a new tab). Don’t forget to publish the assignment!

How can the DELD team help you?

The EHE Distance Education and Learning Design (DELD) team is ready to partner with instructors to implement Hypothesis in their course.

  • Assistance setting up Hypothesis
  • Social annotation ideas for your course
  • Group readings and collaboration

Contact the DELD Team through the service portal to set up a consultation.

Additional Resources

Hypothes.is’ website for educators  offers extensive guidance on integrating Hypothes.is into a course, including:

  • YouTube tutorial videos
  • Examples of classroom use
  • Webinars  about online collaborative annotation
  • DELD Workshop Recording: Introduction to Hypothesis

Bandura, A. (2000). Exercise of human agency through collective efficacy. Current Directions in Psychological Science, 9, 75-78. doi: 10.1111/1467-8721.00064

Dean, J., & Schulten, K. (2015, November 12). Skills and strategies: Annotating to engage, analyze, connect and create. The New York Times. Retrieved from https://learning.blogs.nytimes.com/2015/11/12/skills-and-strategies-annotating-to-engage-analyze-connect-and-create/?_r=0

Miller, K., Zyto, S., Karger, D., Yoo, J., & Mazur, E. (2016). Analysis of student engagement in an online annotation system in the context of a flipped introductory physics class. Physical Review Physics Education Research, 12, 1-12. doi: 10.1103/PhysRevPhysEducRes.12.020143

College of Education and Human Ecology

Back to School With Annotation: 10 Ways to Annotate With Students

By jeremydean | 25 August, 2015

It’s back-to-school season and I find myself once again encouraging teachers to discuss course readings with their students using collaborative web annotation technologies like Hypothesis. Though relatively new to Hypothesis, I’ve been making this pitch for a few years now, but in conversations with educators of late I’ve come to realize that we often mean different things by the word “annotate.” Annotation connotes something distinct in specific subject areas, at different grade and skill levels, and within certain teaching philosophies. This will be the first semester during which Hypothesis has an active education department, and so in the spirit of these first days of the school year, I thought it might be worth exploring what we really mean when we say, “annotate.”

Annotation is typically perceived as a means to an end. As marginal note-taking it often is the basis for questions asked in class discussion or points made in a final paper. But annotation can also be a kind of end in itself, or at least more than a rest-stop on the way to intellectual discovery. This becomes especially true when annotation is brought into the relatively public and collaborative space of social reading online. Digital marginalia as such requires a redefinition or at least expanded understanding of what is traditionally meant by the act of “annotation.”

Billy Collins’ poem “Marginalia” outlines various ways that people have annotated throughout history, including in formal education contexts. But even within pre-digital student marginalia there can be a wide range of types of annotation from defining terms and explaining allusions to analytic commentary to more creative responses to the text at hand. As annotation becomes social and media-rich as it does using Hypothesis and other web annotation technologies, these species of marginalia only further proliferate.

For those curious about integrating annotation exercises into an assignment or a course, below I outline ten practical ways that one might annotate with a class. This list is by no means exhaustive–the larger point is that there are a lot of different ways for students and teachers to annotate. I’d love to hear about your experiences with annotation in the classroom in notes and comments here or even in your own blog posts on Hypothesis. My hope is that over the course of the coming semesters, Hypothesis will develop as a community of educators sharing their ideas, assignments, successes, and failures. As always, feel free to contact me at [email protected] to chat further about collaborative annotation! (For a more technical guide to using Hypothesis, see our tutorials on getting started here.)

1. Teacher Annotations

Pre-populate a text with questions for students to reply to in annotations or notes elucidating important points as they read.

One of the amazing aspects of social reading is that you can be inside the text with your students while they are reading, facilitating their comprehension, inspiring their analysis, and observing their confusion and insight. It’s about as close as you can get to the intimacy of in-class interaction online. You can guide students through the reading with your annotations, offering context and possible interpretations. This allows you to be the Norton editor of your course readings, but attentive to the particular themes of your course or local contexts of your classroom community. Or you can provoke student responses to the text through annotating with questions to be answered in replies to your annotations. Looking at responses to a question posed in the margin of a text is a great starting point for class discussion in a blended course. In the classroom, students can be prompted to expand on points begun as annotations to jumpstart the conversation. And when there is no physical classroom space, as in online courses, annotation can be a means for the instructor to have a similar guiding presence and to create an engaging and engaged community of readers.

The real pedagogical innovation of collaborative annotation, however, is that students are empowered as knowledge producers in their own right, so most of my suggested classroom annotation practices revolve around a variety of student-centered annotation activities in which they are the ones posing the questions and teachers are co-learners in the reading process. There are also other use cases, however, for teacher annotation, such as using web annotation to comment on student writing published online.

2. Annotation as Gloss

Have students look up difficult words or unknown allusions in a text and share their research as annotations.

Practical across a wide range of skill levels, this exercise can span from simply creating a list of vocabulary words from a text for study to presenting, as a class or individually, a text annotated like a scholarly volume. We’ve seen this kind of exercise completed on great works of literature as well as scientific research papers. Think of the activity as creating a kind of inline Wikipedia on top of your course reading. For difficult texts, sharing the burden of the research necessary for comprehension can help students better understand their reading. And there is something incredibly powerful about students beginning to imagine themselves as scholars, responsible for guiding a real audience through a text, whether their own peers or a broader intellectual community. Students can be encouraged to practice skills like rephrasing research material appropriately and citing sources using different formatting styles. And, of course, glossing can be combined with more insightful annotation as well.

Protip: If you plan to annotate across multiple texts with a class, have students use a course tag (like “Eng101DrDeanFall15”) in all of their annotations. Tagging in this way allows both teacher and students to follow the group’s work on a class stream of activity. Here’s an example of what such a class tag stream looks like from one of our most active educators, Greg McVerry.  More on the pedagogy of tags in this tutorial. Note: very soon (in a matter of weeks) we will be launching a private group feature that will streamline this workflow–annotations will be publishable to a specific group and that group will have a stream that can be followed.
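
For the technically inclined, that same tag stream can also be pulled programmatically. Here is a small illustrative sketch, assuming only the public Hypothes.is search API and the example course tag above, that prints a class’s most recent annotations:

```python
# Illustrative sketch: fetch recent public annotations carrying a course tag
# via the Hypothes.is search API, e.g., to skim or archive the class stream.
import requests

COURSE_TAG = "Eng101DrDeanFall15"  # the example course tag from above

resp = requests.get(
    "https://api.hypothes.is/api/search",
    params={"tag": COURSE_TAG, "limit": 50, "sort": "created", "order": "desc"},
)
resp.raise_for_status()

for note in resp.json()["rows"]:
    author = note["user"].replace("acct:", "")  # e.g., someuser@hypothes.is
    print(f"{note['created'][:10]}  {author}: {note['text'][:80]!r}")
```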

3. Annotation as Question

Have students highlight, tag, and annotate words or passages that are confusing to them in their readings.

An annotation need not be, and often is not, an answer. A simple question mark in the margin of a book can flag a word or passage for discussion. And such discussions can be generative of important explication and analysis. Directing students to annotate in this way creates a sort of heat map for the instructor that can be used to zero in on troubling sections and subjects or to spark class discussion. Tags categorizing the particular problem could be used as a simple way to prompt clarification (vocab, plot, research methods, etc.); see the tag-tally sketch at the end of this section.

While the teacher can respond to such student annotations, a possible follow up exercise could have students respond to each other’s questions instead.
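
To make the heat-map idea concrete, here is a hypothetical tag-tally sketch. It assumes the class’s annotations have been exported to a CSV file with a comma-separated “tags” column (the file name and column layout are assumptions, not any platform’s prescribed export format), and surfaces the most frequently flagged trouble spots first:

```python
# Hypothetical sketch: tally clarification tags (vocab, plot, methods, ...)
# from an assumed CSV export of class annotations to build a simple text
# "heat map" of where students are confused.
import csv
from collections import Counter

counts = Counter()
with open("class_annotations.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        for tag in row.get("tags", "").split(","):
            if tag.strip():
                counts[tag.strip().lower()] += 1

for tag, n in counts.most_common(10):
    print(f"{tag:<20} {'#' * n}")
```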

4. Annotation as Close Reading

Have students identify formal textual elements and broader social and historical contexts at work in specific passages.

Online annotation powerfully enacts the careful selection of text for in-depth analysis that is the hallmark of much high school and college English and language arts curriculum. Using web annotation, students are required to literally select small pieces of meaningful evidence from a document for specific analysis. Teachers can direct students to identify textual features (word choice, repetition, imagery, metaphor, etc.) or relevant broader contexts (historical, biographical, cultural, etc.) for passages of a text, and then prompt them to develop a short argument based on such evidence. Collaborative close reading can be especially effective in that multiple students can build off each other’s interpretations to demonstrate how deep textual analysis can go. Teachers implementing the Common Core State Standards for reading might pay special attention to this use case for annotation in the classroom.

Common Core State Standards for Reading, Anchor Standard 1.

Some teachers will use web annotation as a tool throughout the semester for this purpose. Students thus gain regular practice in close reading and build ideas towards more substantive, summative assignments. Such assignments can also begin as collaborative exercises done by the entire class and culminate with individual or small group annotation projects.

5. Annotation as Rhetorical Analysis

Have students mark and explain the use of rhetorical strategies in online articles or essays.

Whether analyzed as a class or individually, a clearly argumentative piece should be chosen for this assignment, perhaps from an op-ed page in a newspaper or magazine. Students might be asked first to simply identify rhetorical strategies (like ethos, pathos, and logos) using the tag feature in annotations created with Hypothesis. On a second pass, they can be asked to elaborate on how and why a certain strategy is being used by the author. Identification of rhetorical fallacies could be built into this or a related assignment as well. Note: in order to make such an exercise more streamlined, we plan in the near future to allow users to pre-populate a set of controlled terms with which a group can tag their annotations.

Combined with exercises six and nine, annotation as rhetorical analysis could be part of a composition course that also has students map arguments in a controversy using annotation and then begin their own advocacy through annotation of primary sources mapped and analyzed. (This is how the rhetoric department at UT-Austin, where I taught while getting my PhD, structures its freshman composition courses.) A twist on this assignment could ask students to analyze their own persuasive prose in this way–discussion of such self-reflexive annotation of one’s own writing is a whole other category of annotation, probably deserving of a blog post in itself.

6. Annotation as Opinion

Have students share their personal opinions on a controversial topic as discussed by an article.

A lot of how people are interacting with content online today–commenting on web articles, tweeting about them with brief notes–is a kind of annotation. At Hypothesis we might think of web annotation as a more rigorous form of such engagement with language and ideas on the Internet. Framing one’s opinions as annotations of specific statements or facts is a reminder that our arguments should be grounded in actual evidence. In any case, allowing students to express their opinions in the margins of the Web, and helping them become responsible and thoughtful in those expressions, is a huge part of what it means to be literate both on the Web and in democratic society more generally. Students could be asked simply to respond to the reading with their thoughts, as in a dialectical reading journal, or to employ specific cultural or persuasive strategies in their rhetorical intervention.

Again, this advocacy exercise could be a summative assignment within a unit that uses Hypothesis to complete annotation activities like those described in ways five and nine here.

7. Annotation as Multimedia Writing

Have students annotate with images and video or integrate images and video into other types of annotations.

A student annotation with evocative use of images on Genius.com.

One of the unique aspects of online annotation (and online writing in general) is the ability to include multimedia elements in the composition process. We’ve found many teachers and students excited to make use of animated GIFs in annotation. The use of images can simply be representative (this is a reference to Lincoln with a photo of the 16th president), but more advanced students can be taught to think about how images themselves make arguments and serve other rhetorical purposes.

It is advisable to spend a lesson introducing the idea of digital writing to students, with particular attention to the use of images, covering everything from search to use policies and attribution. More traditional teachers may be less accustomed to assessing such multimedia compositions and should spend some time developing a grading rubric and explaining it to their students.

8. Annotation as Independent Study

Have students explore the Internet on their own with some limited direction (find an article from a respectable source on a topic important to you personally), exercising traditional literacy skills (define difficult words, identify persuasive strategies, etc.).

Many of the above exercises presume that students are annotating for the most part together on a shared course text. But the nature of web annotation is that we can see the notes of others even if we are not reading the same text. In this way, we can attend to annotations as texts themselves. Like a friend’s Facebook page or Twitter feed, seeing someone else navigate the world can be interesting. And through web annotation students can be taught to navigate the digital world responsibly and thoughtfully. Protip: because each Hypothesis user’s annotations are streamed on their public “My Annotations” page, teachers can monitor and assess student work there rather than on individual texts if so desired. (You can click on a username attached to an annotation or search the Hypothesis stream for a username to locate this page. Here’s mine.)

Whether or not one goes so far as to let students roam free on the open Web applying their classroom learning, we have found it to be valuable to have a unit develop from collaborative work to independent or small group work. Students might start off annotating together on a few select texts, getting a sense of what annotation means and how a particular platform like Hypothesis works, but by the end of a term become responsible for glossing or analyzing a single text or set of texts themselves.

9. Annotation as Annotated Bibliography

Have students research a topic or theme and tag and annotate relevant texts across the Internet.

Tagging in page-level annotations as social bookmarking.

This is a different kind of annotation than largely discussed above. Here we are annotating on the level of the document rather than on a particular selection of text. (Users can create unanchored annotations for this purpose using the annotation icon on the sidebar without selecting any particular text within a document.) But this annotation exercise practices solid research skills and can be used as preparation for research assignments. In fact, using annotation as an annotated bibliography assignment is an excellent way for teachers to follow and guide student research during the process itself. The result of this assignment will be something useful for a paper, such as a summary of the major stakeholders of a particular issue and how they articulate their positions.

Of course this kind of annotation as page-level commentary can be combined with more fine-grained attention through annotation to the texts of these tagged documents. In addition to outlining sources needed for a paper, the student can begin to break down those sources for close reading within an essay.

10. Annotation as Creative Act

Have students respond creatively to their reading with their own poetry or prose or visual art as annotations.

Annotation need not be overtly analytical. Whether in writing or using other media, students can respond creatively to texts under study through annotation as well, inserting themselves within the intertextual web that is the history of literature and culture. One creative writing exercise might be to have students annotate in the voices of characters from a novel being read. Or to have them re-imagine passages as newspaper stories. Nathan Blom’s Annotated Lear Project at LaGuardia Arts High School is a great example of students creatively responding to a text through annotation.

Students can also use their imaginations to annotate texts with their own drawings, photographs, or videos inline with the relevant sources of textual inspiration. Whether completed individually or collaboratively this exercise can result in some wonderful, illustrated editions of course texts.

More Than Highlighting: Creative Annotations

Active strategies for annotation like collaborative work and illustration increase students’ comprehension and retention.

A page of a spiral notebook listing the elements of a tragedy with various drawings and doodles

Annotating texts is not the most exciting tactic for reading comprehension. In my classroom experience, even the mention of the word annotate was met with looks of confusion or boredom. Too often, traditional annotation has been students’ only interaction with the text. When students are asked to underline important parts of a text, they will usually pick the first line that seems appealing or attempt to highlight the whole page with pretty-colored highlighters. Simply underlining the text will not meet the needs of our 21st-century learners.

Annotations are a critical strategy teachers can use to encourage students to interact with a text. They promote a deeper understanding of passages and encourage students to read with a purpose. Teachers can use annotations to emphasize crucial literacy skills like visualization, asking questions, and making inferences.

Purposeful instruction with annotating texts is required for students to benefit from this strategy. Focused instructional activities associated with annotation make the process engaging. Teachers can encourage students to participate in annotation in new ways that use visual and collaborative strategies.

Illustrated Annotations

Illustrated annotations use images to increase comprehension and understanding. Students create illustrations to represent concepts and elements of literature. Prior to reading the text, the students create a visual representation or symbol for the concept or element of focus for the learning target. When the students annotate the text, they use the illustration they created.

I recently used this strategy to teach Hamlet . Specifically, we focused on the seven elements of Shakespearean tragedies. Before reading the texts, students drew visuals or symbols of each element. Students could choose any illustration that enhanced their learning. Those who were not adept at art could draw a “TF” for tragic flaw. After the students created their illustrations, I selected chunks of the texts for the students to annotate throughout our reading of the play.

The process of creating an illustration helps students synthesize information and increases student engagement and creativity. It makes annotating texts a more hands-on experience and makes their learning meaningful and personal. One challenge with this assignment occurs when students believe they cannot draw, do not have artistic talent, or are not creative. Allowing less artistic students to use symbols or simple drawings also emphasizes the importance of student choice. The purpose of the assignment is to capture the symbolism of concepts, so they can create any marking that represents their perception and understanding of a concept.

A printout of Hamlet's 'to be or not to be' soliloquy, marked with student notes and drawings

Collaborative Annotations

Another annotation strategy is collaborative annotation, or an annotation on a shared text by multiple students. Students annotate the same text and analyze each person’s annotations to find inspiration, discover similarities, or ask questions.

In my classroom, students were given guided analysis prompts while annotating the text and their peers’ responses. During this lesson, students were instructed to write two extended comments and pose one question per page of text. The next set of students had to do the same, but they could comment on the text or on a previous annotation from another student. Each class was able to view and analyze the annotations of their peers from previous classes. At the end, students had a collection of annotations that showed several different processes of reading a text.

This strategy encourages students to close-read a text. Students think critically and have a deeper and more meaningful understanding of the text. Students also collaborate and communicate about a text with their peers by commenting and questioning the marks of others.

Personalizing the Process

Annotation strategies can be differentiated for learners in a single classroom by adjusting the requirements for each reading. Learning targets for the annotation activities can be modified for different learning needs.

Digital applications may be used in several different ways. In order to facilitate collaborative annotations in a digital format, teachers can use Google Docs. Students analyze the same text and leave comments or highlight portions of the text. Students can easily share documents and comment on other students’ annotations. For visual annotations, teachers can use graphic tools like Adobe Spark. Students can pull parts of the texts and choose pictures to represent their interpretations.

Teachers in any content area can use these annotation strategies for any texts in the class to emphasize certain themes or to promote literacy in their classes. Creativity and collaboration are crucial to 21st-century learners. When creative annotating strategies facilitate student interaction with a text, the annotation process is a meaningful learning experience and not just a coloring page with meaningless highlights.

IT Connect | UW Information Technology

Hypothesis: Collaborative Annotation for Canvas LMS

Available for: Instructors, Students

Log in to Canvas to access Hypothesis

About Hypothesis

Hypothesis is a collaborative annotation tool integrated with Canvas that supports shared annotations within a course, discussion in response to annotations, and active reading of text. Instructors select Hypothesis as an external tool when setting up an assignment and can also choose to assign readings to groups. Students can then annotate course readings collaboratively, sharing comments, and replying to each other’s comments with text, links, images, and video. Hypothesis is also fully integrated with SpeedGrader for efficient review and grading of student annotations.

Resources for instructors

  • We recommend selecting the Load In A New Tab option when setting up a Hypothesis assignment. This will allow for a better reading experience for students, especially those who magnify the contents of their screen for accessibility purposes.
  • Set up Hypothesis readings through Canvas Modules
  • Grade Hypothesis annotations in Canvas
  • If you are using Canvas Files or Groups for any Hypothesis readings, you will need to take additional steps before the assignment will work in a new (copied) course.
  • Hypothesis FAQs

Resources for students 

Consider sharing the following links in your Canvas course, or point students to this page.

  • Learn the basics of navigating and using Hypothesis
  • Short screencasts show how to highlight, annotate, make page notes, and reply to others’ notes
  • Jazz up your annotations with this deep dive into the editing interface
  • Create stand-out annotations with these five best practices

Hypothesis helps you to

  • Provide a new way for students to discuss class readings
  • Help students consider multiple viewpoints when reading
  • Assist students in close and active reading of texts
  • Encourage students to engage critically with readings

Hypothesis Support

Workshops and Webinars

Hypothesis 101

If you’d like to learn more about Hypothesis and see a demo, register for an upcoming Hypothesis 101 webinar or watch a Hypothesis 101 recording .

Hypothesis Partner Workshops

Each quarter, Hypothesis offers a variety of (typically) 30-minute workshops led by their team. Are you looking for ways to help your students develop their close reading skills and increase their engagement with your course materials? Maybe you’re seeking a more collaborative approach to reading complex texts while building community? Get ideas you can bring back to your courses, students, and colleagues for how to use Hypothesis for social annotation.

Topics for this quarter:

  • Activating annotation in Canvas
  • Using multimedia & tags in annotations
  • Using Hypothesis with small groups
  • Creative ways to use social annotation in your course
  • Show-and-tell participatory workshop

Liquid Margins

Hypothesis hosts a recurring web “show” featuring instructors and staff talking about collaborative annotation, social learning, and other ways to make knowledge together.

Offered throughout the year

Previous workshop recordings

If you missed any of the Hypothesis partner workshops offered during autumn quarter, you can find recordings on the Hypothesis YouTube channel .

Vendor Help

  • The Hypothesis Knowledge Base includes FAQs, tutorials, how-tos, and troubleshooting tips.
  • Schedule a meeting with Hypothesis Customer Success Specialist Autumn Ottenad for instructional design advice or questions on how to best use Hypothesis in your course.
  • Watch Liquid Margins , the Hypothesis web series, to learn more about how other instructors use collaborative annotations in their course.

Collaborative Discussions

Updated on March 19, 2024

What is Perusall?

Perusall is a collaborative annotation tool that turns solitary reading assignments into collective learning activities. Perusall allows instructors to digitally assign readings to students, who then collaboratively engage with texts through annotation and commentary.

View Setting up Perusall for a Course to get started.

Migrating Perusall Courses from Before Spring 2023

Perusall was upgraded in January 2023 to streamline its Canvas integration and provide more flexible options for navigation and exporting grades to the Canvas Gradebook. If you will be using Perusall this Spring and migrating a course taught before Spring 2023, additional steps may be needed to ensure a smooth transition. Please contact [email protected] with any questions.

Perusall’s description of the new integration: “Students can use a generic Perusall link and individual assignment scores will still be sent to the LMS. You can still create individual assignment links, called ‘deep links’ in LTI 1.3, which point to particular Perusall assignments even when the link and assignment do not have matching names. (Note: deep links are automatically created in Canvas as you create Perusall assignments.) Second, your LMS course roster automatically syncs to your Perusall course; instructors will be able to view the full roster in Perusall even before students ever launch into Perusall. That way, instructors can set up groups in advance of the start of the semester.”

Benefits of Perusall

For Students

  • Fosters collaborative and thoughtful course engagement via annotation and commentary
  • Provides a space for students to share questions, reflections, ideas, and connections with each other and with the instructor
  • Offers an alternate mode of contributing to class discourse

For Instructors

  • Ensures that students come to class prepared, having thoroughly read the material
  • Allows instructors to structure lecture and discussion more efficiently around students’ prior engagement with the reading
  • Includes an auto-scoring feature for instructors who wish to incorporate annotation work into a participation grade


Harvard Office of the Vice Provost for Advances in Learning

Using Social Annotation Tools to Unlock Collective Wisdom

Image: Dr. Gavin Porter

The benefits

After realizing that all students could benefit from each other’s questions and ideas, Dr. Gavin Porter transitioned his individual reading responses into a collaborative assignment using Perusall, a social annotation platform created at Harvard. Social annotation engages students on a deeper level through peer interaction, and it helps make learning visible to all. These tools enable students to ask questions, share insights, and engage in discussions about the material asynchronously. Students’ understanding of the material is enhanced, and they also see that they are not alone in their struggles with reading challenging research papers. For Dr. Porter, these tools allow for greater insight into how students are processing course material so he can highlight exemplar points or address any misunderstandings. Instructors can choose to actively engage in the asynchronous discussion to build upon or steer comments, or simply summarize thoughts to bring back to a synchronous class discussion.

“The social annotation approach makes students’ thoughts visible and prompts the entire class to participate in discussion of course content.”

The challenges

Dr. Porter believes that for social annotation to be most effective, instructors should participate in the asynchronous discussions and bring them back to class for further reflection. Lectures can be amended based on common (… or intriguing and uncommon) questions. However, manual grading of annotations can be quite time-consuming if the class is large. The Perusall platform has the ability to synthesize and evaluate comments using machine learning, which is a welcome feature for large groups. Even with automated evaluation, instructors can still participate in interesting discussion threads, as their time permits, and bring selected annotations to future synchronous class sessions.
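Perusall’s scoring model is proprietary, so the following is only a toy illustration of the general idea behind automated engagement scoring: rewarding annotations that are substantive and spread across the reading rather than a burst of one-liners. This is a sketch of the concept, not Perusall’s algorithm.

```python
# Toy engagement score (NOT Perusall's actual algorithm): combine
# annotation count, total writing, and how evenly annotations are
# spread across the document, scaled to a 0-3 score.
def engagement_score(annotations, doc_length, max_score=3.0):
    """annotations: list of (char_offset, text) pairs for one student."""
    if not annotations:
        return 0.0
    count = min(len(annotations), 5) / 5          # credit up to 5 annotations
    depth = min(sum(len(t) for _, t in annotations) / 500, 1.0)
    thirds = {min(3 * off // doc_length, 2) for off, _ in annotations}
    coverage = len(thirds) / 3                    # annotated thirds of the text
    return round(max_score * (0.4 * count + 0.4 * depth + 0.2 * coverage), 2)

print(engagement_score(
    [(120, "Is this claim consistent with the earlier figure?"),
     (4500, "This echoes last week's reading on class bias."),
     (9100, "Lost here -- what does 'vectors of power' mean?")],
    doc_length=10_000))
```

Even a crude rubric like this makes clear why automated scoring scales: the instructor’s time shifts from grading every comment to sampling and responding to the interesting ones.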

Takeaways and best practices

  • Start small. Consider incorporating social annotation into a few assignments before deciding whether to fully integrate it into your teaching. Assess your student engagement and keep examples of high- and low-quality annotation content to guide the next cohort of students. 
  • Join the conversation. Actively participate in the asynchronous discussions. Instructor involvement can enrich the learning experience for both teacher and student. It also provides an opportunity to guide the conversation, clear up misconceptions, and highlight important points.
  • Spotlight student input. Dr. Porter typically adds screenshots of students’ insightful annotations to class presentations. This not only acknowledges and validates the student's work but also provides a jumping-off point for deeper classroom discussions.

Bottom line

Social annotation tools offer a new and effective way to enhance students’ reading experiences. Used effectively, they can help instructors improve students’ understanding of complex materials, foster an engaged asynchronous learning community, and provide a platform for meaningful discussions about course content.

Related research

  • Use of an online social annotation platform to enhance a flipped organic chemistry course
  • Collaborative online annotation: Pedagogy, assessment and platform comparisons
  • Use of a social annotation platform for pre-class reading assignments in a flipped introductory physics class

Interested in hearing from a faculty member using Perusall? Watch this video of Professor Eric Beerbohm from VPAL’s Debate as Pedagogy event explaining how he used the platform to launch class discussions. 

Introductory video from Perusall’s founders

Digital Writing and Research Lab

Discuss This!: Structuring Reading Discussions through Collaborative Annotations


A DWRL Practicum Online Module by Abby Burns

Reading discussions have always been my favorite genre of class sessions. When you ask one question and the conversation starts moving with students responding to each other, bringing in their own knowledge, pushing back on and complicating a reading’s claims, it’s some kind of magic. And, let’s be honest, these discussions can be infinitely more engaging and dynamic than another monovocal lecture on the rhetorical triangle. However, they have also always posed a bit of a problem for me.

As a hard-of-hearing instructor who struggles intermittently with listening fatigue, there are days when keeping up with the class eats away at all of my energy (or requires more energy than I have), diminishing my capacity to respond in real time. The situation becomes even more dire when there’s a student in the room who never speaks above a whisper, which somehow there always is. What’s more, with a million and one responsibilities beyond teaching, I cannot afford exhaustion after a single hour at the head of the class. Since I am also not exactly willing to cede discussions altogether—see my love of magic—something has to give.

One kind of tool I have come to stand by for negotiating this problem is the collaborative annotation platform, such as Perusall or Hypothes.is. These platforms allow students to connect on the (digital) margins of a shared PDF, taking collaborative notes directly on assigned readings.

A screenshot of an active Perusall screen, where a student annotation is displayed to the left of a PDF document

This collaboration can look like students responding to each other’s comments, “upvoting” comments they definitely want to discuss as a class, identifying the core components (e.g., target/problem, intervention, subclaims, limitations, etc.) of a dense text’s argument, and so on and so forth. In the process of this back-and-forth, I get a preview of what to expect in class and thus what kinds of questions or comments I should prepare to answer.

Of course, not everyone who teaches has a hearing disability; however, like many access tips and tricks, collaborative annotation is one that can stand to benefit any instructor’s pedagogical practice—and, who knows, you may have a HoH/deaf student one day who will appreciate the alternative avenue to participation. In this module, I go over the basics of collaborative annotation tools and their potential uses and benefits. Finally, at the end, you can find a video tour of Perusall meant to demystify what these platforms look like and how both you and your students might interact with them.

This module aims to:

  • Introduce instructors to collaborative annotations in the classroom;
  • Suggest some potential benefits and uses of collaborative annotations;
  • Provide a video tour of a potential platform on which to host annotation assignments

First, it’s worth reflecting on why we stage reading discussions at all. What are the learning outcomes associated with such discussions, and how do collaborative annotations facilitate those outcomes?

Practicing Critical Reading

When we prompt students to “read critically,” generally what we mean is that we want them to move beyond reading to consume and regurgitate information. We want them to think alongside the text, whether that means making connections, identifying assumptions, expanding upon a subclaim, noticing a site of tension or contradiction, or some combination of the above. At the same time, we want to discourage the overly harsh skeptic who disagrees with every point the writer makes because they’ve collapsed “critical” with “judgmental” and they seem to think the only intelligent form of engagement is that of the contrarian.

Oftentimes, students who come into the classroom have various capacities for critical reading, whether because we’re teaching a class on conspiracy theories and one student in the room has already listened to every relevant podcast out there (no small feat) or because one of the students comes from a highly technical major where the goal of class readings is always about extracting information and they can’t quite shake the habit. Figuring out how to teach across skill and comfort levels can be a task in and of itself.

Collaborative annotations can help clarify what critical reading can look like in practice, serving as an invention resource for students who can’t quite figure out what they want to say. If we go back to the Burkean Parlor, the shared PDF provides a visual for the idea of “reading (or research) as conversation” by inviting students to converse not only with the writer but with each other as they read. Students can see how their peers are responding to the reading, can see the different tactics they use to build out connections, can ask each other questions, all before they even step into the classroom for the in-person discussion.

On a similar note, when you assign particularly dense or difficult readings, you as the instructor can leave annotations noting specific rhetorical moves the writer makes. It’s a simple but effective technique to take note of the writer’s first articulation of their thesis and the subclaims through which they build their thesis out. These notes can serve as anchor points for students who might otherwise end up drowning in lines like, “For discourse to materialize a set of effects , ‘discourse’ itself must be understood as complex and convergent chains in which ‘effects’ are vectors of power” (Butler).       

Synthesizing personal knowledge with course content and following curiosity

As dynamic and engaging as discussions can be, we have all experienced that discussion where every question you ask feels like pulling teeth. Maybe it’s because you teach at eight in the morning. Maybe all of the students trusted their peers to do the reading and carry the day only to find themselves, like you, met with a discomfiting wall of silence. Maybe you’re asking the wrong questions.

Collaborative annotations give you an opportunity before entering the classroom to design discussion questions that meet students where they are. In the words of Casey Boyle, the annotations provide a kind of “heat map” of the text. You might assign Audre Lorde’s “Age, Race, Class, and Sex,” and design questions that think with the idea of a mythical norm, only for students to be weirdly obsessed with Lorde’s note about how the devaluation of poetry exposes class bias. (As an aside: I teach this essay every semester and I have never not had a student start a discussion on the devaluation of certain art forms as it relates to class or gender). Here, you can follow students’ interests while designing questions that push them to deepen their thinking (or prepare further scaffolding to lead them back to the reading’s core argument). In this way, collaborative annotations provide the means to give students more agency in determining the direction of class discussion while at the same time staying anchored to the text. That is, because annotations are attached to specific moments in the text, they help to prevent students from going too far afield as they generate discussion points. While other forms of reading responses and accountability checks (e.g., blog posts) can leave space for students to go on unproductive tangents, annotations are tailor-made for the pedagogical move of “Let’s circle back to Lorde.”
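Boyle’s “heat map” can be made literal with a few lines of scripting: most annotation platforms can export where in the text each annotation is anchored, and bucketing those positions shows which passages drew the most attention. A rough sketch, assuming you have exported a list of character offsets (the offsets below are made up):

```python
# Rough annotation "heat map" (illustrative): bucket exported annotation
# offsets into ten equal slices of the reading and print a bar per slice.
def heat_map(offsets, doc_length, buckets=10):
    counts = [0] * buckets
    for off in offsets:
        counts[min(off * buckets // doc_length, buckets - 1)] += 1
    for i, n in enumerate(counts):
        lo, hi = i * 100 // buckets, (i + 1) * 100 // buckets
        print(f"{lo:3d}-{hi:3d}% of text | {'#' * n}")

# Hypothetical offsets, clustered around one passage students fixated on.
heat_map([150, 900, 2750, 2800, 2840, 2900, 2980, 5600, 8700],
         doc_length=10_000)
```

Wherever the bars pile up is exactly the signal described above: that is where your discussion questions should start.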

There are undoubtedly more uses and benefits to collaborative annotations than what I’ve included here (and I’m excited to hear from you how you have integrated these tools into your own pedagogical practice in the comments). For now, I will leave this module open-ended.

For anyone unfamiliar with collaborative annotation platforms, I’ve included a short video below touring the platform I currently use in my own classes (Perusall) so you can get a better sense of how you and your students might navigate the interface and what the technology might yet make possible.   


Using the Hypothesis Collaborative Annotation Tool in Canvas

It is now possible to connect Hypothesis, a free and open-source collaborative annotation tool, with your Canvas course. You can use it for activities in which your students collaboratively comment on or annotate websites, documents, and other items.



Every student prepared for every class

Students interact with each other on Perusall's social learning platform: commenting and responding to each other on documents and videos.


Ensure every student is prepared for every class

  • On average, only 20-30% of students do assigned reading; with Perusall, >90% do the reading.
  • Students learn more than twice as much information from assignments, with no effort from the instructor.

Engage students at scale

  • Students help each other learn by collectively commenting on readings and videos, responding to each other, and participating in a social learning environment.
  • Students get instant feedback and motivation; instructors get continuous insights on student engagement.

Save time and improve your teaching

  • Perusall can automatically grade students' engagement with the content.
  • Start class using Perusall’s Student Confusion Report — a one-page summary of concepts that confused students, along with some of the best comments.

Simplify book orders and use the materials you need

  • Combine open educational resources (OER), your own materials, & books from our catalog of >1 million ebooks from 46,000 publishers.
  • Upload Google Docs, Google Slides, PowerPoint, Word documents, and PDFs, integrate videos, web pages, and podcasts, and more.

Keep your administration happy

  • Administration-friendly, FERPA compliant, secure, free of advertising, and we never sell your data.
  • Perusall works with bookstores and all LMSes.
  • Custom institutional licenses are available.

Stay at the forefront of educational innovation

  • Benefit from continually updated and rigorously evaluated educational innovations.
  • Instructors use Perusall in classes from 2 to 4,000+ students in all disciplines and professional development settings.


Join 40,000 instructors, at 3,000 educational institutions, teaching 1.5 million students, in 112 countries.

Set up the whole semester in 15 minutes.


Keep students intrinsically motivated

Example: in one Perusall conversation, a student asks whether gravitational and electric fields are mutually exclusive, and a peer explains that they are not, citing the interaction between electric and magnetic fields behind technologies like radio and electromagnetic motors.

Get complete insights

Try Perusall today

Increase student learning with Perusall

The social learning platform that prepares every student for every class.

