
Mastering ChatGPT: The Ultimate Prompts Guide for Academic Writing Excellence

ChatGPT, with its advanced AI capabilities, has emerged as a game-changer for academic writing. Yet its true potential is unlocked only when it is approached with the right queries. The prompts listed in this article have been crafted to optimize your interaction with this powerful tool. By leveraging them, you not only streamline your writing process but also enhance the quality of your research and insights.

ChatGPT Prompts for Idea Generation

If you’re stuck or unsure where to begin, ChatGPT can help brainstorm ideas or topics for your paper, thesis, or dissertation. (If you find yourself reusing these templates often, a small script can fill them in for you; see the sketch after the list.)

  • Suggest some potential topics on [your broader subject or theme] for an academic paper.
  • Suggest some potential topics within the field of [your broader subject] related to [specific interest or theme].
  • I’m exploring the field of [broader subject, e.g., “psychology”]. Could you suggest some topics that intersect with [specific interest, e.g., “child development”] and are relevant to [specific context or region, e.g., “urban settings in Asia”]?
  • Within the realm of [broader subject, e.g., “philosophy”], I’m intrigued by [specific interest, e.g., “existentialism”]. Could you recommend topics that bridge it with [another field or theme, e.g., “modern technology”] in the context of [specific region or era, e.g., “21st-century Europe”]?
  • Act as my brainstorming partner. I’m working on [your broader subject or theme]. What topics could be pertinent for an academic paper?
  • Act as my brainstorming partner for a moment. Given the broader subject of [discipline, e.g., ‘sociology’], can you help generate ideas that intertwine with [specific theme or interest, e.g., ‘social media’] and cater to an audience primarily from [region or demographic, e.g., ‘South East Asia’]?
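Each of these templates is just a string with slots, so it is easy to fill them programmatically instead of retyping them in the chat window. Below is a minimal Python sketch using the official openai client; the model name, the slot values, and the environment setup are assumptions for illustration, so swap in whatever you actually use.

    from openai import OpenAI

    # A reusable template: the bracketed slots become named placeholders.
    TEMPLATE = (
        "I'm exploring the field of {subject}. Could you suggest some topics "
        "that intersect with {interest} and are relevant to {context}?"
    )

    prompt = TEMPLATE.format(
        subject="psychology",
        interest="child development",
        context="urban settings in Asia",
    )

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use any chat model you have access to
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)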

ChatGPT Prompts for Structuring Content

The model can suggest ways to organize your content, including potential section headers and the logical flow of your arguments.

  • How should I structure my paper on [your specific topic]? Provide an outline or potential section headers.
  • I’m writing a paper about [your specific topic]. How should I structure it and which sub-topics should I cover within [chosen section, e.g., “Literature Review”]?
  • For a paper that discusses [specific topic, e.g., “climate change”], how should I structure the [chosen section, e.g., “Literature Review”] and integrate studies from [specific decade or period, e.g., “the 2010s”]?
  • I’m compiling a paper on [specific topic, e.g., “biodiversity loss”]. How should I arrange the [chosen section, e.g., “Discussion”] to incorporate perspectives from [specific discipline, e.g., “socio-economics”] and findings from [specified region or ecosystem, e.g., “tropical rainforests”]?
  • Act as an editor for a moment. Based on a paper about [your specific topic], how would you recommend I structure it? Are there key sections or elements I should include?
  • Act as a structural consultant for my paper on [topic, e.g., ‘quantum physics’]. Could you suggest a logical flow and potential section headers, especially when I aim to cover aspects like [specific elements, e.g., ‘quantum entanglement and teleportation’]?
  • Act as my editorial guide. For a paper focused on [specific topic, e.g., “quantum computing”], how might I structure my [chosen section, e.g., “Findings”], especially when integrating viewpoints from [specific discipline, e.g., “software engineering”] and case studies from [specified region, e.g., “East Asia”]?

ChatGPT Prompts for Proofreading

While it might not replace a human proofreader, ChatGPT can help you identify grammatical errors, awkward phrasing, or inconsistencies in your writing.

  • Review this passage for grammatical or stylistic errors: [paste your text here].
  • Review this paragraph from my [type of document, e.g., “thesis”] for grammatical or stylistic errors: [paste your text here].
  • Please review this passage from my [type of document, e.g., “dissertation”] on [specific topic, e.g., “renewable energy”] for potential grammatical or stylistic errors: [paste your text here].
  • Kindly scrutinize this segment from my [type of document, e.g., “journal article”] concerning [specific topic, e.g., “deep-sea exploration”]. Highlight any linguistic or structural missteps and suggest how it might better fit the style of [target publication or audience, e.g., “Nature Journal”]: [paste your text here].
  • Act as my proofreader. In this passage: [paste your text here], are there any grammatical or stylistic errors I should be aware of?
  • Act as my preliminary proofreader. I’ve drafted a section for my [type of document, e.g., “research proposal”] about [specific topic, e.g., “nanotechnology”]. I’d value feedback on grammar, coherence, and alignment with [target publication or style, e.g., “IEEE standards”]: [paste your text here].

ChatGPT Prompts for Citation Guidance

Need help formatting citations or understanding the nuances of different citation styles (like APA, MLA, Chicago)? ChatGPT can guide you. (A generic reference pattern follows the list.)

  • How do I format this citation in [desired style, e.g., APA, MLA]? Here’s the source: [paste source details here].
  • I’m referencing a [type of source, e.g., “conference paper”] authored by [author’s name] in my document. How should I format this citation in the [desired style, e.g., “Chicago”] style?
  • Act as a citation guide. I need to reference a [source type, e.g., ‘journal article’] for my work. How should I format this using the [citation style, e.g., ‘APA’] method?
  • Act as my citation assistant. I’ve sourced a [type of source, e.g., “web article”] from [author’s name] published in [year, e.g., “2018”]. How should I present this in [desired style, e.g., “MLA”] format?
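For orientation, a journal-article reference in APA 7 style generally follows the pattern below; the names, title, and DOI here are invented placeholders. Whatever ChatGPT produces, verify the details against the official style manual, since the experiments later on this page show how often it gets sources wrong.

    Author, A. A., & Author, B. B. (2020). Title of the article. Journal Name, 12(3), 45–67. https://doi.org/10.0000/placeholder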

ChatGPT Prompts for Paraphrasing

If you’re trying to convey information from sources without plagiarizing, the model can assist in rephrasing the content.

  • Can you help me paraphrase this statement? [paste your original statement here].
  • Help me convey the following idea from [source author’s name] in my own words: [paste the original statement here].
  • I’d like to reference an idea from [source author’s name]’s work on [specific topic, e.g., “quantum physics”]. Can you help me paraphrase this statement without losing its essence: [paste the original statement here]?
  • Act as a wordsmith. I’d like a rephrased version of this statement without losing its essence: [paste your original statement here].
  • Act as my rephraser. Here’s a statement from [author’s name]’s work on [topic, e.g., ‘cognitive development’]: [paste original statement here]. How can I convey this without plagiarizing?
  • Act as my plagiarism prevention aid. I’d like to include insights from [source author’s name]’s research on [specific topic, e.g., “solar energy”]. Help me convey this in my own words while maintaining the tone of my [type of work, e.g., “doctoral thesis”]: [paste the original statement here].

ChatGPT Prompts for Vocabulary Enhancement

If you’re looking for more sophisticated or subject-specific terminology, ChatGPT can suggest synonyms or alternative phrasing.

  • I want a more academic or sophisticated way to express this: [paste your sentence or phrase here].
  • In the context of [specific field or subject], can you suggest a more academic way to express this phrase: [paste your phrase here]?
  • I’m writing a paper in the field of [specific discipline, e.g., “bioinformatics”]. How can I convey this idea more academically: [paste your phrase here]?
  • Within the purview of [specific discipline, e.g., “astrophysics”], I wish to enhance this assertion: [paste your phrase here]. What terminologies or phrasing would resonate more with an audience well-versed in [related field or topic, e.g., “stellar evolution”]?
  • Act as my thesaurus. For this phrase: [paste your sentence or phrase here], is there a more academic or sophisticated term or phrase I could use?
  • Act as a lexicon expert in [field, e.g., ‘neuroscience’]. How might I express this idea more aptly: [paste your phrase here]?

ChatGPT Prompts for Clarifying Concepts

If you’re working in a field that’s not your primary area of expertise, the model can provide explanations or definitions for unfamiliar terms or concepts.

  • Can you explain the concept of [specific term or concept] in the context of academic research?
  • In [specific field, e.g., “sociology”], what does [specific term or concept] mean? And how does it relate to [another term or concept]?
  • In the realm of [specific discipline, e.g., “neuroscience”], how would you define [term or concept A], and how does it differentiate from [term or concept B]?
  • Act as my tutor. I’m a bit lost on the concept of [specific term or concept]. Can you break it down for me in the context of [specific academic field]?
  • Act as my academic tutor for a moment. I’ve encountered some challenging terms in [specific discipline, e.g., “metaphysics”]. Could you elucidate the distinctions between [term A], [term B], and [term C], especially when applied in [specific context or theory, e.g., “Kantian philosophy”]?

ChatGPT Prompts for Draft Review

You can share sections or excerpts of your draft, and ChatGPT can provide general feedback or points for consideration.

  • Please provide feedback on this excerpt from my draft: [paste excerpt here].
  • Could you review this excerpt from my [type of document, e.g., “research proposal”] and provide feedback on [specific aspect, e.g., “clarity and coherence”]: [paste excerpt here]?
  • I’d appreciate feedback on this fragment from my [type of document, e.g., “policy analysis”] that centers on [specific topic, e.g., “renewable energy adoption”]. Specifically, I’m looking for guidance on its [specific aspect, e.g., “argumentative flow”] and how it caters to [intended audience, e.g., “policy-makers in Southeast Asia”]: [paste excerpt here].
  • Act as a reviewer for my journal submission. Could you critique this section of my draft: [paste excerpt here]?
  • Act as my critique partner. I’ve written a segment for my [type of document, e.g., “literature review”] on [specific topic, e.g., “cognitive biases”]. Could you assess its [specific quality, e.g., “objectivity”], especially considering its importance for [target audience or application, e.g., “clinical psychologists”]: [paste excerpt here].

ChatGPT Prompts for Reference Pointers

If you’re looking for additional sources or literature on a topic, ChatGPT can point you to key papers, authors, or studies (though its training data only runs up to around 2022, so it won’t know the latest publications).

  • Can you recommend key papers or studies related to [your topic or research question]?
  • I need references related to [specific topic] within the broader field of [your subject area]. Can you suggest key papers or authors?
  • I’m researching [specific topic, e.g., “machine learning in healthcare”]. Can you suggest seminal works from the [specific decade, e.g., “2000s”] within the broader domain of [your general field, e.g., “computer science”]?
  • My study orbits around [specific topic, e.g., “augmented reality in education”]. I’m especially keen on understanding its evolution during the [specific time frame, e.g., “late 2010s”]. Can you direct me to foundational papers or figures within [your overarching domain, e.g., “educational technology”]?
  • Act as a literature guide. I’m diving into [your topic or research question]. Do you have suggestions for seminal papers or must-read studies?
  • Act as my literary guide. My work revolves around [specific topic, e.g., “virtual reality in pedagogy”]. I’d appreciate direction towards key texts or experts from the [specific era, e.g., “early 2000s”], especially those that highlight applications in [specific setting, e.g., “higher education institutions”].

ChatGPT Prompts for Writing Prompts

For those facing writer’s block, ChatGPT can generate prompts or questions to help you think critically about your topic and stimulate your writing.

  • I’m facing writer’s block on [your topic]. Can you give me some prompts or questions to stimulate my thinking?
  • I’m writing about [specific topic] in the context of [broader theme or issue]. Can you give me questions that would enhance my discussion?
  • I’m discussing [specific topic, e.g., “urban planning”] in relation to [another topic, e.g., “sustainable development”] in [specific region or country, e.g., “Latin America”]. Can you offer some thought-provoking prompts?
  • Act as my muse. I’m struggling with [your topic]. Could you generate some prompts or lead questions to help steer my writing?
  • Act as a muse for my writer’s block. Given the themes of [topic A, e.g., ‘climate change’] and its impact on [topic B, e.g., ‘marine ecosystems’], can you generate thought-provoking prompts?

ChatGPT Prompts for Thesis Statements

If you’re struggling with framing your thesis statement, ChatGPT can help you refine and articulate it more clearly.

  • Help me refine this thesis statement for clarity and impact: [paste your thesis statement here].
  • Here’s a draft thesis statement for my paper on [specific topic]: [paste your thesis statement]. How can it be made more compelling?
  • I’m drafting a statement for my research on [specific topic, e.g., “cryptocurrency adoption”] in the context of [specific region, e.g., “European markets”]. Here’s my attempt: [paste your thesis statement]. Any suggestions for enhancement?
  • Act as my thesis advisor. I’m shaping a statement on [topic, e.g., ‘blockchain in finance’]. Here’s my draft: [paste your thesis statement]. How might it be honed further?

ChatGPT Prompts for Abstract and Summary

The model can help in drafting, refining, or summarizing abstracts for your papers.

  • Can you help me draft/summarize an abstract based on this content? [paste main points or brief content here].
  • I’m submitting a paper to [specific conference or journal]. Can you help me summarize my findings from [paste main content or points] into a concise abstract?
  • I’m aiming to condense my findings on [specific topic, e.g., “gene therapy”] from [source or dataset, e.g., “recent clinical trials”] into an abstract for [specific event, e.g., “a biotech conference”]. Can you assist?
  • Act as an abstracting service. Based on the following content: [paste main points or brief content here], how might you draft or summarize an abstract?
  • Act as my editorial assistant. I’ve compiled findings on [topic, e.g., ‘genetic modifications’] from my research. Help me craft or refine a concise abstract suitable for [event or publication, e.g., ‘an international biology conference’].

ChatGPT Prompts for Methodological Assistance

If you’re unsure about the methodology section of your paper, ChatGPT can provide insights or explanations about various research methods.

  • I’m using [specific research method, e.g., qualitative interviews] for my study on [your topic]. Can you provide insights or potential pitfalls?
  • For a study on [specific topic], I’m considering using [specific research method]. Can you explain its application and potential challenges in this context?
  • I’m considering a study on [specific topic, e.g., “consumer behavior”] using [research method, e.g., “ethnographic studies”]. Given the demographic of [target group, e.g., “millennials in urban settings”], what might be the methodological challenges?
  • My exploration of [specific topic, e.g., “consumer sentiment”] deploys [research method, e.g., “mixed-method analysis”]. Given my target demographic of [specific group, e.g., “online shoppers aged 18-25”], what are potential methodological challenges and best practices in [specific setting or platform, e.g., “e-commerce platforms”]?
  • Act as a methodological counselor. I’m exploring [topic, e.g., ‘consumer behavior patterns’] using [research technique, e.g., ‘qualitative interviews’]. Given the scope of [specific context or dataset, e.g., ‘online retail platforms’], what insights can you offer?

ChatGPT Prompts for Language Translation

While not perfect, ChatGPT can assist in translating content to and from various languages, which might be helpful for non-native English speakers or when dealing with sources in other languages.

  • Please translate this passage to [desired language]: [paste your text here].
  • I’m integrating a passage for my research on [specific topic, e.g., “Mesoamerican civilizations”]. Could you assist in translating this content from [source language, e.g., “Nahuatl”] to [target language, e.g., “English”] while preserving academic rigor: [paste your text here]?
  • Act as my translation assistant. I have this passage in [source language, e.g., ‘French’] about [topic, e.g., ‘European history’]: [paste your text here]. Can you render it in [target language, e.g., ‘English’] while maintaining academic integrity?

ChatGPT Prompts for Ethical Considerations

ChatGPT can provide a general overview of ethical considerations in research, though specific guidance should come from institutional review boards or ethics committees.

  • What are some general ethical considerations when conducting research on [specific topic or population]?
  • I’m conducting research involving [specific group or method, e.g., “minors” or “online surveys”]. What are key ethical considerations I should be aware of in the context of [specific discipline or field]?
  • My investigation encompasses [specific method or technique, e.g., “genome editing”] on [target population or organism, e.g., “plant species”]. As I operate within the framework of [specific institution or body, e.g., “UNESCO guidelines”], what ethical imperatives should I foreground, especially when considering implications for [broader context, e.g., “global food security”]?
  • Act as an ethics board member. I’m conducting research on [specific topic or population]. Could you outline key ethical considerations I should bear in mind?
  • Act as an ethics overview guide. My research involves [specific technique or method, e.g., ‘live human trials’] in the realm of [specific discipline, e.g., ‘medical research’]. What general ethical considerations might be paramount, especially when targeting [specific population or group, e.g., ‘adolescents’]?

ChatGPT’s advanced AI capabilities have made it a standout tool in the world of academic writing. However, its real strength shines when paired with the right questions. The prompts in this article are tailored to optimize your experience with ChatGPT. By using them, you can streamline your writing and elevate the depth of your research. But don’t just take our word for it. Explore ChatGPT with these prompts and see the transformation in your academic writing for yourself. Excellent writing is just one prompt away.


How ChatGPT (and other AI chatbots) can help you write an essay


ChatGPT is capable of doing many different things very well, and one of its standout features is its ability to compose all sorts of text within seconds, including songs, poems, bedtime stories, and essays.

The chatbot's writing abilities are not only fun to experiment with; they can also help with everyday tasks. Whether you are a student or a working professional, you constantly take time out of your day to compose emails, texts, posts, and more. ChatGPT can help you claim some of that time back by helping you brainstorm and then compose any text you need.


Contrary to popular belief, ChatGPT can do much more than just write an essay for you from scratch (which would be considered plagiarism). A more useful way to use the chatbot is to have it guide your writing process. 

Below, we show you how to use ChatGPT to do both the writing and assisting, as well as some other helpful writing tips. 

How ChatGPT can help you write an essay

If you are looking to use ChatGPT to support or replace your writing, here are five different techniques to explore. 

Before you get started, it is also worth noting that other AI chatbots can produce the same results as ChatGPT, or even better ones, depending on your needs.


For example, Copilot has access to the internet, and as a result, it can source its answers from recent information and current events. Copilot also includes footnotes linking back to the original source for all of its responses, making the chatbot a more valuable tool if you're writing a paper on a more recent event, or if you want to verify your sources.

Regardless of which AI chatbot you pick, you can use the tips below to get the most out of your prompts and out of your AI assistant.

1. Use ChatGPT to generate essay ideas

Before you can even get started writing an essay, you need to flesh out the idea. When professors assign essays, they generally give students a prompt that gives them leeway for their own self-expression and analysis. 

As a result, students have the task of finding the angle to approach the essay on their own. If you have written an essay recently, you know that finding the angle is often the trickiest part, and this is where ChatGPT can help.


All you need to do is input the assignment topic, include as much detail as you'd like (such as what you're thinking about covering), and let ChatGPT do the rest. For example, based on a paper prompt I had in college, I asked:

Can you help me come up with a topic idea for this assignment, "You will write a research paper or case study on a leadership topic of your choice." I would like it to include Blake and Mouton's Managerial Leadership Grid, and possibly a historical figure. 


Within seconds, the chatbot produced a response that provided me with the title of the essay, options of historical figures I could focus my article on, and insight on what information I could include in my paper, with specific examples of a case study I could use. 

2. Use the chatbot to create an outline

Once you have a solid topic, it's time to start brainstorming what you actually want to include in the essay. To facilitate the writing process, I always create an outline, including all the different points I want to touch upon in my essay. However, the outline-writing process is usually tedious. 

With ChatGPT, all you have to do is ask it to write the outline for you. 


Using the topic that ChatGPT helped me generate in step one, I asked the chatbot to write me an outline by saying: 

Can you create an outline for a paper, "Examining the Leadership Style of Winston Churchill through Blake and Mouton's Managerial Leadership Grid."

After a couple of seconds, the chatbot produced a complete outline divided into seven sections, with three points under each.

This outline is thorough and can be condensed for a shorter essay or elaborated on for a longer paper. If you don't like something or want to tweak the outline further, you can do so either manually or with more instructions to ChatGPT. 

As mentioned before, since Copilot is connected to the internet, if you use Copilot to produce the outline, it will even include links and sources throughout, further expediting your essay-writing process. 

3. Use ChatGPT to find sources

Now that you know exactly what you want to write, it's time to find reputable sources to get your information. If you don't know where to start, you can just ask ChatGPT. 


All you need to do is ask the AI to find sources for your essay topic. For example, I asked the following: 

Can you help me find sources for a paper, "Examining the Leadership Style of Winston Churchill through Blake and Mouton's Managerial Leadership Grid."

The chatbot output seven sources, with a bullet point for each that explained what the source was and why it could be useful. 


The one caveat you will want to be aware of when using ChatGPT for sources is that it does not have access to information after 2021, so it will not be able to suggest the freshest sources. If you want up-to-date information, you can always use Copilot. 

Another perk of using Copilot is that it automatically links to sources in its answers. 

4. Use ChatGPT to write an essay

It is worth noting that if you take the text directly from the chatbot and submit it, your work could be considered a form of plagiarism since it is not your original work. As with any information taken from another source, text generated by an AI should be clearly identified and credited in your work.


In most educational institutions, the penalties for plagiarism are severe, ranging from a failing grade to expulsion from the school. A better use of ChatGPT's writing features would be to use it to create a sample essay to guide your writing. 

If you still want ChatGPT to create an essay from scratch, enter the topic and the desired length, and then watch what it generates. For example, I input the following text: 

Can you write a five-paragraph essay on the topic, "Examining the Leadership Style of Winston Churchill through Blake and Mouton's Managerial Leadership Grid."

Within seconds, the chatbot gave the exact output I required: a coherent, five-paragraph essay on the topic. You could then use that text to guide your own writing. 


At this point, it's worth remembering how tools like ChatGPT work: they put words together in a form that they think is statistically valid, but they don't know if what they are saying is true or accurate.

As a result, the output you receive might include invented facts, details, or other oddities. The output might be a useful starting point for your own work, but don't expect it to be entirely accurate, and always double-check the content. 

5. Use ChatGPT to co-edit your essay

Once you've written your own essay, you can use ChatGPT's advanced writing capabilities to edit the piece for you. 

You can simply tell the chatbot what you want it to edit. For example, I asked ChatGPT to edit our five-paragraph essay for structure and grammar, but other options could have included flow, tone, and more. 


Once you ask the tool to edit your essay, it will prompt you to paste your text into the chatbot. ChatGPT will then output your essay with corrections made. This feature is particularly useful because ChatGPT edits your essay more thoroughly than a basic proofreading tool, as it goes beyond simply checking spelling. 

You can also co-edit with the chatbot, asking it to take a look at a specific paragraph or sentence, and asking it to rewrite or fix the text for clarity. Personally, I find this feature very helpful. 
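If you prefer to run this kind of editing pass outside the chat window, the same request works through the API. Here is a rough Python sketch; the model name and the editing instructions are placeholders rather than anything ZDNET recommends, and the low temperature is an assumption made to keep the edit conservative.

    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    essay = open("essay.txt").read()  # your own draft

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0.2,      # low value keeps the edit close to the original text
        messages=[
            {"role": "system",
             "content": "Edit the user's essay for structure and grammar. "
                        "Return only the revised text."},
            {"role": "user", "content": essay},
        ],
    )

    print(response.choices[0].message.content)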



Write an Essay From Scratch With Chat GPT: Step-by-Step Tutorial

By Santiago Mallea, Chief of Content at Gradehacker


To write an essay with Chat GPT, you need to follow the steps below (a small code sketch of the core loop appears after the list):

  • Understand your prompt
  • Choose a topic
  • Write the entire prompt in Chat GPT
  • Break down the arguments you got
  • Write one prompt at a time
  • Check the sources
  • Create your first draft
  • Edit your draft
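Steps 4 and 5 (break the arguments down, then write one prompt at a time) are the part a script can capture. Here is a rough Python sketch of that loop using the openai client; the model name, the section list, and the topic are assumptions for illustration, not part of Gradehacker's actual workflow. Keeping the full message history lets each section build on the previous ones.

    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    topic = "structural racism in the American healthcare system"
    sections = ["introduction", "background", "arguments",
                "counter-arguments", "conclusion"]

    # Keep the whole conversation so each section builds on the previous ones.
    messages = [{"role": "system",
                 "content": f"You are drafting an essay on {topic}, one section at a time."}]

    draft = []
    for section in sections:
        messages.append({"role": "user",
                         "content": f"Write the {section} of the essay."})
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=messages,
        )
        text = response.choices[0].message.content
        messages.append({"role": "assistant", "content": text})
        draft.append(f"{section.upper()}\n{text}")

    print("\n\n".join(draft))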


How amazing would it be if there was a robot willing to help you write a college essay from scratch?

A few years ago, that may have sounded so futuristic it could only be seen in movies. But actually, we are closer than you might think.

Artificial intelligence tools are everywhere, and college students have noticed. Among them all, there is one revolutionary AI that learns over time and writes all types of content, from typical conversations to academic texts.

But can Chat GPT write essays from scratch?

We tried it, and the answer is kind of (for now, at least).

Here at Gradehacker, we have spent years being the non-traditional adult student’s #1 resource.

We have lots of experience helping people like you write their essays on time or get their college degree sooner, and we know how important it is to stay updated with the latest tools.

AIs and Chat GPT are going to stay for a while, so you’d better learn how to use them properly. If you ever wondered whether it was possible to write an essay from scratch with Chat GPT, you are about to find out!

Now, in case you aren’t familiar with Chat GPT or don’t know the basics of how it works, we recommend watching our video first!

How We Used Chat GPT to Write Essays

So, to run our experiment with Chat GPT, we created two different college assignments that any student might encounter:

  • An argumentative essay about America's healthcare system
  • A book review of George Orwell's 1984

Our main goal is to test Chat GPT’s essay-writing skills and see how much students can use it to write their academic assignments.

Now, we are pretty aware that this (or any) artificial intelligence can come with a wide range of problems, such as:

  • Giving you incorrect premises and information
  • Delivering a piece of writing that is plagiarized from somewhere else
  • Not including citations or listing the sources it used
  • Not always being available to use

That’s why, after receiving our first rough draft, we’ll edit the parts of the text that need it and run what we get through our plagiarism checker. After our revision, we’ll ask the AI to expand on the information or make the changes we need.

We’ll consider that final, revised version the best possible work that Chat GPT could have done writing an essay from scratch.

And to cover the lack of citations, we’ll see what academic sources the chatbot considers worthy for us to use when writing our paper.

Now, we don’t think that AIs are ready to deliver fully edited and well-written academic writing assignments that you can simply submit to your professor without reading them first.

But is it possible to speed up the writing process and save time by asking Chat GPT to write essays?

Let’s see!

Can Chat GPT Write an Argumentative Paper?

First, we’ll see how it can handle one of the most common academic essays: an argumentative paper.

We chose the American healthcare system as our topic, but since we know we need a specific subject with a wide range of sources to write a strong and persuasive essay, we are focusing on structural racism in our healthcare system and how African Americans accessed it during COVID.

It’s a clear and specific topic that we included in our list of best topics for your research paper. If you want similar alternatives for college papers, be sure to watch our video!

Instructions and Essay Prompt

Take a position on an issue and compose a 5-page paper that supports it.

In the introduction, establish why your topic is important and present a specific, argumentative thesis statement that previews your argument.

The body of your essay should be logical, coherent, and purposeful. It should synthesize your research and your own informed opinions in order to support your thesis.

Address other positions on the topic along with arguments and evidence that support those positions. 

Write a conclusion that restates your thesis and reminds your reader of your main points.

First Results

After giving Chat GPT this prompt, this is what we received:

The first draft we received

To begin with, after copying and pasting these paragraphs into a Word document, the essay only covered two and a half pages.

While the introduction directly tackles the main topic, it fails to provide a clear thesis statement. And even though a thesis is included in a separate section, it is broad and lacks factual evidence or statistics to support it.

Throughout the body of the text, the AI lists many real-life issues that contribute to the topic of the paper. Still, these are never fully explained nor supported with evidence.

For example, in the first paragraph, it says that “African Americans have long experienced poorer health outcomes compared to other racial groups.” Here it would be interesting to add statistics that prove this information is correct.

Something that really stood out to us was that Chat GPT credited a source to back up important data, even though it didn’t cite it properly. It mentions a study conducted by the Kaiser Family Foundation supporting the claim that in 2019, 11% of African Americans and 6% of non-Hispanic Whites were uninsured.

We checked the original article and found that the information was almost 100% accurate. The correct rates were 8% for White Americans and 10.9% for African Americans, but the bigger issue was that the study included more recent statistics from 2021.


Then, when addressing other issues like transportation and discrimination, each problem is presented clearly, but once again, there are no sources to support the claims.

Once the essay starts developing the thesis statement on how these issues could be fixed, we can see the same problem.

But even though they lack supporting evidence, the arguments listed are cohesive and make sense. These were:

  • Expanding Medicaid coverage
  • Providing incentives for healthcare providers to practice in underserved areas
  • Investing in telehealth services
  • Improving transportation infrastructure, particularly in rural areas
  • Training healthcare providers on cultural competence and anti-racism
  • Increasing diversity in the healthcare workforce
  • Implementing patient-centered care models

These are all solid ideas that could be made stronger and more persuasive with specific information and statistics.

Still, the main problem is that there is no counter-argument addressing the essay’s main claims.

Overall, Chat GPT delivered a cohesive first draft that tackled the topic by explaining its multiple issues and listing possible solutions. However, there is a clear lack of evidence, no counter-arguments were included, and the essay we got was half the length we needed.

Changes and Final Results

In our second attempt, we asked the AI to expand on each section and subtopic of the essay. While the final result ended up repeating some parts on multiple occasions, Chat GPT wrote more extensively and even included in-text citations with their corresponding references.

By pasting all these new texts (without editing) into a new document, we get more than seven pages, which is a great starting point for writing a better essay.

Explanation of the issues and use of sources

The new introduction stayed pretty much the same, but the difference is that now the thesis statement is stronger and even has a cited statistic to back it up. Unfortunately, while the information is correct, the source isn’t.

Clicking on the link included in the references took us to a nonexistent page, and after looking for that data on Google, we found that it actually belonged to a study from the National Library of Medicine.
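Dead links like this one are easy to screen for in bulk before you trust any of the references. Below is a minimal Python sketch using the requests library; the URLs are placeholders for whatever references ChatGPT gives you. Note that a successful response only proves the page exists, not that it says what the essay claims, so you still have to read it.

    import requests

    # Placeholder URLs; paste in the references ChatGPT produced.
    cited_urls = [
        "https://www.example.org/study-on-health-disparities",
        "https://www.example.org/uninsured-rates-2019",
    ]

    for url in cited_urls:
        try:
            # Some servers reject HEAD requests; fall back to GET if needed.
            resp = requests.head(url, timeout=10, allow_redirects=True)
            status = resp.status_code
        except requests.RequestException as err:
            status = f"error: {err}"
        print(status, url)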


The AI then did a solid job expanding on the issues related to the paper’s topic. But again, while some sources were useful, sometimes the information in the text didn’t correspond to the source cited.

For example, it cited an article posted on KFF as evidence that transportation is a critical factor in health disparities, but when we go to the site, we don’t find any mention of that issue.

Similarly, when addressing the higher rates of infection and death compared to White Americans, the AI once again cited the wrong source. The statistics came from a study conducted by the CDC, but from a different article than the one credited.

And sometimes, the information displayed was incorrect.

In that same section, when it lists death rates in specific states, we see that the statistics don’t match the cited source.

However, what’s interesting is that if we search for that data on Google, we find a different study that backs it up. So, even if Chat GPT didn’t include inaccurate information in the text, it failed to properly acknowledge the real source.

And this problem of having correct information but citing the wrong source continued throughout the paper.


Solutions and counter-arguments

When we asked the AI to write more about the solutions it mentioned in the first draft, we received more extensive arguments with supporting evidence for each case.

As we were expecting, the statistics were real, but the source credited wasn’t the original and didn’t mention anything related to what was included in the text.

And it wasn’t any different with the counterarguments. They made sense and had a strong point, but the sources credited weren’t correct. 

For instance, regarding telehealth services, it recognized the multiple barriers low-income areas would face in adopting this modality. It credited an article posted on KFF written mainly by “Gillespie,” but after searching for the information, we see that the original study was conducted by other people.

Still, the fact that Chat GPT now provided us with data and information we could use to develop counter-arguments and later refute them is excellent progress.

Chat GPT wrote more detailed solutions

The good news is that none of the multiple paragraphs that Chat GPT delivered had plagiarism issues.

After running them through our plagiarism checker, it only found a few parts that had duplicated content, but these were sentences composed of commonly used phrases that other articles about different topics also had.

For example, multiple times it flagged as plagiarism phrases like “according to the CDC” or “according to a report by the Kaiser Family Foundation.” And even these “plagiarism issues” could be easily solved by rearranging the word order or adding new words.

Checking for plagiarism is a critical part of the essay writing process. If you are not using one yet, be sure to pick one as soon as possible. We recommend checking our list of best plagiarism checkers.
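Commercial plagiarism checkers compare your draft against the whole web, which a homemade script cannot replicate, but the core idea, shared word sequences, is simple to illustrate. Here is a toy Python sketch that flags phrases two texts have in common; it is illustrative only, not a substitute for a real checker, and the example sentences are invented.

    def ngrams(text, n=6):
        # All n-word sequences in a text (punctuation is kept, so this undercounts).
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def shared_phrases(draft, source, n=6):
        # Phrases of n words that appear in both texts.
        return ngrams(draft, n) & ngrams(source, n)

    draft = "According to a report by the Kaiser Family Foundation, uninsured rates fell."
    source = "Findings according to a report by the Kaiser Family Foundation show a decline."
    print(shared_phrases(draft, source))  # prints the overlapping 6-word phrases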

Key Takeaways

So, what did we learn by asking Chat GPT to write an argumentative paper?

  • It's better if the AI writes section by section
  • It can give you accurate information related to issues, solutions, and counterarguments
  • There is a high chance the source credited won't be the right one
  • The texts, which can have duplicated content among themselves, don't appear to be plagiarized

It’s clear that we still need to do a lot of editing and writing.

However, considering that Chat GPT wrote this in less than an hour, the AI proved to be a solid tool. It gave us many strong arguments, interesting and accurate statistics, and an order we can follow to structure our argumentative paper.

If writing these types of assignments isn’t your strength, be sure to watch our tutorial on how to write an exceptional argumentative essay!


Can Chat GPT Write a Book Review?

For our second experiment, we want to see if Chat GPT can write an essay for a literature class.

To do so, we picked one of the 5 must-read books we think any college student should read: 1984 by George Orwell. There is so much written and discussed about this literary classic that we thought it would be a perfect choice for an artificial intelligence chatbot like Chat GPT to write about.

Write a book review of the book 1984 by George Orwell. The paper needs to include an introduction with the author and title, publication information (publisher, year, number of pages), genre, and a brief introduction to the review.

Then, write a summary of the plot covering its basic parts: situation, conflict, development, climax, and resolution.

Continue by describing the setting and the point of view and discussing the book’s literary devices.

Lastly, analyze the book, and explain the particular style of writing or literary elements used.

And then write a conclusion.

This is the first draft we got:

The first draft we got

Starting with the introduction, all the information is correct, though including the number of pages is pointless, as it depends on the edition of the book.

The summary is also accurate, but it relies too heavily on the plot instead of the context and world described in the novel, which is arguably the reason 1984 transcended time. For example, there is no mention of Big Brother, the leader of the totalitarian superstate.

Now, the setting and point of view section is the poorest section written by Chat GPT. It is very short and lacks development.

The literary devices are not necessarily wrong, but it would be better to focus more on each one. For instance, it could say more about the importance of symbolism or explain how the book critiques propaganda, totalitarianism, and individual freedom.

The analysis of Orwell’s writing is simple, but the conclusion is clear and straightforward, so it might be the best piece that the AI wrote.

For the second draft, instead of submitting the entire prompt, we wrote one command per section. As a result, Chat GPT focused on each part of the review and produced more paragraphs with more detailed information in every case.


It’s clear that this way, the AI writes better, more developed texts that are easier to edit and improve. Each section analyzes its topic in more depth, which facilitates the upcoming process of structuring the most useful paragraphs into a cohesive essay.

For example, it now added more literary devices used by Orwell and gave specific examples of the symbolism of the novel.

Of course, there are many sentences and ideas that are repeated throughout the different sections. But now, because each has more specific information, we can take these parts and structure a new paragraph that comprises the most valuable sentences.


Now, even though book reviews sometimes don’t need to include citations from external sources apart from the novel being analyzed, Chat GPT gave us five different options to choose from.

The only problem was that we couldn’t find any of them on Google.

The names of the authors were real people, but the titles of the articles and essays were nowhere to be found. This made us think that the AI likely picked real-life writers and invented titles for fictional essays about 1984 or George Orwell.


Finally, we need to see if the texts are original or plagiarized material.

After running it through our plagiarism detection software, we found that it was mostly original content with only a few visible issues, and nothing too big to worry about.

One easy-to-solve example is in the literary devices section, where it directly quotes a sentence from the book. In this case, we would just need to add the in-text citation.

The biggest plagiarism problem was one sentence (or six words, to be more specific) from the conclusion that matched the introduction of a published summary review. But by rearranging the word order or adding synonyms, this issue can easily be solved too.

So, what are the most important tips we can take from Chat GPT writing a book review?

  • It will review each section in more depth if you ask it one prompt at a time
  • The analysis and summary of the book were accurate
  • If you ask it to list scholarly sources, the AI will create nonexistent sources based on real authors
  • There were very few plagiarism issues

Once again, there is still a lot of work to do.

The writing sample Chat GPT gave us is a solid start, but we need to rearrange all the paragraphs into one cohesive essay that perfectly summarizes the different aspects of the novel. Plus, we would also have to find scholarly sources on our own.

Still, the AI can do the heavy lifting and give you a great starting point.

If writing book reviews isn’t your strong suit, you have our tutorial and tips!


Save Time And Use Chat GPT to Write Your Essay

We know that writing essays can be a tedious task.

Sometimes, kicking off the process can be harder than it looks. That’s why understanding how to use a powerful tool like Chat GPT can truly make the difference.

It may not have your critical thinking skills or be able to write a high-quality essay from scratch, but if you use our tips, it can deliver a solid first draft to build your essay on.

But if you want to have an expert team of writers giving you personalized support or aren’t sure about editing an AI-written essay, you can trust Gradehacker to help you with your assignments.

You can also check out our related blog posts if you want to learn how to take your writing skills to the next level!


Santiago Mallea

Santiago Mallea is a curious and creative journalist who first helped many college students as a Gradehacker consultant in subjects like literature, communications, ethics, and business. Now, as a content creator on our blog, YouTube channel, and TikTok, he helps non-traditional students improve their college experience by sharing the best tips. You can find him on LinkedIn.


Chat GPT Essay Example: Enhancing Communication and Creativity

Introduction

In the realm of artificial intelligence (AI), the emergence of cutting-edge technologies has revolutionized various aspects of human life. One such remarkable innovation is the application of Chat Generative Pre-trained Transformers (Chat GPT), a sophisticated AI model developed to enhance communication and foster creativity. This article delves into a comprehensive chat GPT essay example, shedding light on its potential to revolutionize essay writing, nurture creativity, and reshape the boundaries of human-machine interaction.

Unleashing the Power of Chat GPT in Essay Writing

Crafting Engaging Introductions

Essay writing often hinges on the ability to captivate readers from the outset. With chat GPT, crafting captivating introductions becomes a seamless endeavor. By analyzing vast databases of literary masterpieces, historical narratives, and contemporary discourse, chat GPT generates introductions that effortlessly grab readers' attention. For instance, when tasked with an essay about climate change, chat GPT might begin with a thought-provoking quote or a startling statistic to immediately immerse the audience.

Developing Well-Structured Arguments

The hallmark of a compelling essay lies in its coherent structure and logical progression of arguments. Chat GPT excels in this domain by employing its intricate algorithms to organize ideas seamlessly. It sifts through a plethora of information, identifying key points and arranging them into a structured framework. As a result, essay writers can leverage chat GPT to streamline the process of outlining and presenting arguments in a clear and organized manner.

Fostering Creativity and Originality

Creativity is the lifeblood of impactful essay writing. Chat GPT serves as a wellspring of inspiration by generating novel perspectives and creative insights. By amalgamating diverse concepts and innovative viewpoints, chat GPT empowers essayists to infuse their compositions with fresh ideas and imaginative flair. Writers can collaborate with chat GPT to explore unconventional angles, injecting a unique and captivating essence into their essays.

Enhancing Language Proficiency

A hallmark of exceptional essayists is their command over language and eloquence in expression. Chat GPT functions as a linguistic virtuoso, providing writers with an extensive lexicon and refined syntax. As writers engage in a collaborative dance with chat GPT, they absorb linguistic nuances and expand their vocabulary. This symbiotic relationship elevates the quality of written communication, enabling writers to convey complex ideas with eloquence and precision.

Realizing the Benefits of the Chat GPT Essay Example

Efficiency and Time Savings

The utilization of chat GPT in essay writing translates to unparalleled efficiency and time savings. Traditional research and idea generation often consume significant hours. However, chat GPT's rapid data analysis and idea synthesis expedite these preliminary phases. Writers can harness this efficiency to focus on refining their arguments, conducting deeper analyses, and perfecting the overall essay structure.

Diverse Essay Topics and Styles

Chat GPT transcends the limitations of human expertise by delving into a multitude of subjects and writing styles. Whether crafting a persuasive argument, a reflective personal essay, or a comprehensive research paper, chat GPT adapts to the desired tone and style. This adaptability equips writers with a versatile tool that seamlessly tailors its output to suit the requirements of diverse essay genres.

Optimized Research and Data Utilization

The chat GPT essay example illustrates the AI's prowess in data-driven research and information synthesis. By swiftly scouring vast repositories of information, chat GPT extracts relevant data points, statistics, and scholarly references. Writers can integrate these meticulously curated sources to bolster their arguments, thereby enhancing the essay's credibility and substantiating key claims.

Promotion of Critical Thinking

Contrary to misconceptions about AI stifling human creativity, chat GPT fosters critical thinking and analytical prowess. Collaborating with chat GPT prompts writers to engage in thoughtful deliberation, as they evaluate the AI-generated content and mold it to align with their vision. This iterative process stimulates cognitive faculties, encouraging essayists to critically assess, modify, and augment the AI-generated material.

Applications Beyond Essay Writing

Educational Tool for Learning

The chat GPT essay example not only revolutionizes essay composition but also serves as an invaluable educational tool. Students can interact with chat GPT to explore intricate concepts, seek clarifications, and brainstorm ideas. This AI-driven learning experience cultivates a dynamic environment that nurtures intellectual curiosity and facilitates holistic understanding.

Innovative Content Generation

Beyond academia, chat GPT finds its footing in creative content creation. From crafting compelling marketing copy to generating engaging blog posts, chat GPT proves its mettle in diverse content-generation endeavors. Brands and businesses can harness its capabilities to resonate with target audiences, amplify brand messaging, and establish a distinctive online presence.

Language Translation and Cross-Cultural Communication

Chat GPT's language proficiency extends beyond essay writing, offering seamless translation services and fostering cross-cultural communication. In an increasingly interconnected world, chat GPT bridges language barriers, enabling individuals to communicate effortlessly across diverse linguistic landscapes.

Virtual Collaborative Writing Partner

Imagine embarking on a writing journey with an AI companion that understands your voice, style, and preferences. Chat GPT evolves into a virtual collaborative writing partner, providing real-time suggestions, refining sentence structures, and injecting creative sparks into your narrative. This partnership promises to elevate the art of writing and stimulate unparalleled literary synergies.

Frequently Asked Questions (FAQs)

How does chat GPT enhance creativity in essay writing?

Chat GPT enhances creativity by generating novel perspectives, unique insights, and imaginative angles, empowering writers to infuse their essays with fresh ideas.

Is chat GPT proficient in various writing styles?

Absolutely, chat GPT adapts to diverse writing styles, whether persuasive, informative, reflective, or analytical, ensuring seamless alignment with the desired tone and genre.

Can chat GPT assist in refining essay arguments?

Yes, chat GPT excels in structuring arguments by organizing ideas coherently and logically, offering writers a streamlined approach to presenting their viewpoints.

Does chat GPT replace human critical thinking?

No, chat GPT complements human critical thinking by stimulating thoughtful evaluation and iterative refinement of AI-generated content, fostering cognitive engagement.

What are the practical applications of chat GPT beyond essay writing?

Chat GPT finds utility in education as a learning tool, content creation for marketing and branding, language translation, cross-cultural communication, and virtual collaborative writing partnerships.

How can chat GPT revolutionize language translation?

Chat GPT breaks down language barriers by providing accurate and contextually relevant translations, facilitating seamless communication across diverse linguistic landscapes.

In the dynamic landscape of AI-driven innovation, the ChatGPT essay example stands as a testament to the transformative potential of technology in communication and creativity. As writers collaborate with ChatGPT, they gain benefits ranging from improved efficiency and creativity to better-supported research and sharper critical thinking. Beyond essay writing, ChatGPT's applications span education, content creation, language translation, and collaborative writing partnerships. In this era of human-AI collaboration, the ChatGPT essay example invites writers to explore new possibilities for expression.

Should I Use ChatGPT to Write My Essays?

Everything high school and college students need to know about using — and not using — ChatGPT for writing essays.

Jessica A. Kent

ChatGPT is one of the most buzzworthy technologies today.

Along with other generative artificial intelligence (AI) models, it is expected to change the world. In academia, students and professors are preparing for the ways that ChatGPT will shape education, and especially how it will impact a fundamental element of any course: the academic essay.

Students can use ChatGPT to generate full essays based on a few simple prompts. But can AI actually produce high-quality work, or is the technology just not there yet to deliver on its promise? Students may also be asking themselves whether they should use AI to write their essays for them and what they might lose if they did.

AI is here to stay, and it can either be a help or a hindrance depending on how you use it. Read on to become better informed about what ChatGPT can and can’t do, how to use it responsibly to support your academic assignments, and the benefits of writing your own essays.

What is Generative AI?

Artificial intelligence isn’t a twenty-first century invention. Beginning in the 1950s, data scientists started programming computers to solve problems and understand spoken language. AI’s capabilities grew as computer speeds increased and today we use AI for data analysis, finding patterns, and providing insights on the data it collects.

But why the sudden popularity of recent applications like ChatGPT? This new generation of AI goes further than just data analysis. Instead, generative AI creates new content. It does this by analyzing large amounts of data — GPT-3 was trained on 45 terabytes of data, or a quarter of the Library of Congress — and then generating new content based on the patterns it sees in the original data.

It’s like the predictive text feature on your phone; as you start typing a new message, predictive text makes suggestions of what should come next based on data from past conversations. Similarly, ChatGPT creates new text based on past data. With the right prompts, ChatGPT can write marketing content, code, business forecasts, and even entire academic essays on any subject within seconds.

But is generative AI as revolutionary as people think it is, or is it lacking in real intelligence?

The Drawbacks of Generative AI

It seems simple. You’ve been assigned an essay to write for class. You go to ChatGPT and ask it to write a five-paragraph academic essay on the topic you’ve been assigned. You wait a few seconds and it generates the essay for you!

But ChatGPT is still in its early stages of development, and that essay is likely not as accurate or well-written as you’d expect it to be. Be aware of the drawbacks of having ChatGPT complete your assignments.

It’s not intelligence, it’s statistics

One of the misconceptions about AI is that it has a degree of human intelligence. However, its intelligence is actually statistical analysis, as it can only generate “original” content based on the patterns it sees in already existing data and work.

It “hallucinates”

Generative AI models often provide false information — so much so that there's a term for it: "AI hallucination." OpenAI even has a warning on its home screen, saying that "ChatGPT may produce inaccurate information about people, places, or facts." This may be due to gaps in its data, or because it lacks the ability to verify what it's generating.

It doesn’t do research  

If you ask ChatGPT to find and cite sources for you, it will do so, but they could be inaccurate or even made up.

This is because AI doesn’t know how to look for relevant research that can be applied to your thesis. Instead, it generates content based on past content, so if a number of papers cite certain sources, it will generate new content that sounds like it’s a credible source — except it likely may not be.

There are data privacy concerns

When you input your data into a public generative AI model like ChatGPT, where does that data go and who has access to it? 

Prompting ChatGPT with original research should be a cause for concern — especially if you’re inputting study participants’ personal information into the third-party, public application. 

JPMorgan has restricted use of ChatGPT due to privacy concerns, Italy temporarily blocked ChatGPT in March 2023 after a data breach, and Security Intelligence advises that “if [a user’s] notes include sensitive data … it enters the chatbot library. The user no longer has control over the information.”

It is important to be aware of these issues and take steps to ensure that you’re using the technology responsibly and ethically. 

It skirts the plagiarism issue

AI creates content by drawing on a large library of information that’s already been created, but is it plagiarizing? Could there be instances where ChatGPT “borrows” from previous work and places it into your work without citing it? Schools and universities today are wrestling with this question of what’s plagiarism and what’s not when it comes to AI-generated work.

To demonstrate this, one Elon University professor gave his class an assignment: Ask ChatGPT to write an essay for you, and then grade it yourself. 

“Many students expressed shock and dismay upon learning the AI could fabricate bogus information,” he writes, adding that he expected some essays to contain errors, but all of them did. 

His students were disappointed that “major tech companies had pushed out AI technology without ensuring that the general population understands its drawbacks” and were concerned about how many embraced such a flawed tool.

How to Use AI as a Tool to Support Your Work

As more students are discovering, generative AI models like ChatGPT just aren’t as advanced or intelligent as they may believe. While AI may be a poor option for writing your essay, it can be a great tool to support your work.

Generate ideas for essays

Have ChatGPT help you come up with ideas for essays. For example, input specific prompts, such as, “Please give me five ideas for essays I can write on topics related to WWII,” or “Please give me five ideas for essays I can write comparing characters in twentieth century novels.” Then, use what it provides as a starting point for your original research.

Generate outlines

You can also use ChatGPT to help you create an outline for an essay. Ask it, "Can you create an outline for a five-paragraph essay based on the following topic?" and it will create an outline with an introduction, body paragraphs, conclusion, and a suggested thesis statement. Then, you can expand upon the outline with your own research and original thought.

Generate titles for your essays

Titles should draw a reader into your essay, yet they’re often hard to get right. Have ChatGPT help you by prompting it with, “Can you suggest five titles that would be good for a college essay about [topic]?”
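If you prefer to work programmatically, the same three prompt patterns (ideas, outlines, titles) can be sent through OpenAI's Python client. The sketch below is illustrative only: the model name is a placeholder, the prompts are adapted from the examples above, and it assumes an API key is set in your environment.

```python
# A minimal sketch of sending the brainstorming prompts above through
# OpenAI's Python client. The model name is a placeholder; substitute
# whatever model you have access to.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

prompts = [
    "Please give me five ideas for essays I can write on topics related to WWII.",
    "Can you create an outline for a five-paragraph essay based on the following topic: the home front during WWII?",
    "Can you suggest five titles that would be good for a college essay about the home front during WWII?",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

However you send the prompts, treat the output the same way: as a starting point for your own research and writing, not as finished work.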

The Benefits of Writing Your Essays Yourself

Asking a robot to write your essays for you may seem like an easy way to get ahead in your studies or save some time on assignments. But outsourcing your work to ChatGPT can negatively impact not just your grades, but also your ability to communicate and think critically. The best approach is always to write your essays yourself.

Create your own ideas

Writing an essay yourself means that you’re developing your own thoughts, opinions, and questions about the subject matter, then testing, proving, and defending those thoughts. 

When you complete school and start your career, projects aren’t simply about getting a good grade or checking a box, but can instead affect the company you’re working for — or even impact society. Being able to think for yourself is necessary to create change and not just cross work off your to-do list.

Building a foundation of original thinking and ideas now will help you carve your unique career path in the future.

Develop your critical thinking and analysis skills

In order to test or examine your opinions or questions about a subject matter, you need to analyze a problem or text, and then use your critical thinking skills to determine the argument you want to make to support your thesis. Critical thinking and analysis skills aren’t just necessary in school — they’re skills you’ll apply throughout your career and your life.

Improve your research skills

Writing your own essays will train you in how to conduct research, including where to find sources, how to determine if they’re credible, and their relevance in supporting or refuting your argument. Knowing how to do research is another key skill required throughout a wide variety of professional fields.

Learn to be a great communicator

Writing an essay involves communicating an idea clearly to your audience, structuring an argument that a reader can follow, and making a conclusion that challenges them to think differently about a subject. Effective and clear communication is necessary in every industry.

Be impacted by what you're learning about

Engaging with the topic, conducting your own research, and developing original arguments allows you to really learn about a subject you may not have encountered before. Maybe a simple essay assignment around a work of literature, historical time period, or scientific study will spark a passion that can lead you to a new major or career.

Resources to Improve Your Essay Writing Skills

While there are many rewards to writing your essays yourself, the act of writing an essay can still be challenging, and the process may come easier for some students than others. But essay writing is a skill that you can hone, and students at Harvard Summer School have access to a number of on-campus and online resources to assist them.

Students can start with the Harvard Summer School Writing Center, where writing tutors can offer you help and guidance on any writing assignment in one-on-one meetings. Tutors can help you strengthen your argument, clarify your ideas, improve the essay's structure, and lead you through revisions.

The Harvard libraries are a great place to conduct your research, and their librarians can help you define your essay topic, plan and execute a research strategy, and locate sources.

Finally, review "The Harvard Guide to Using Sources," which can guide you on what to cite in your essay and how to do it. Be sure to review the "Tips for Avoiding Plagiarism" on the "Resources to Support Academic Integrity" webpage as well to help ensure your success.

The Future of AI in the Classroom

ChatGPT and other generative AI models are here to stay, so it’s worthwhile to learn how you can leverage the technology responsibly and wisely so that it can be a tool to support your academic pursuits. However, nothing can replace the experience and achievement gained from communicating your own ideas and research in your own academic essays.

About the Author

Jessica A. Kent is a freelance writer based in Boston, Mass. and a Harvard Extension School alum. Her digital marketing content has been featured on Fast Company, Forbes, Nasdaq, and other industry websites; her essays and short stories have been featured in North American Review, Emerson Review, Writer’s Bone, and others.

22 Interesting ChatGPT Examples

ChatGPT is an artificial intelligence chatbot with a unique ability to communicate with people in a human-like way. Developed by OpenAI, the large language model is equipped with cutting-edge natural language processing capabilities and has been trained on massive amounts of data, which enables it to generate written content and converse with users.

Whether it’s answering a question, generating a piece of prose or writing code, ChatGPT continues to push the boundaries of human creativity and productivity.

Interesting Ways to Use ChatGPT

  • Prepare for a job interview
  • Report the news
  • Write songs
  • Grade homework
  • Help you cook
  • As a (sort of) search engine

That said, ChatGPT does come with limitations. The technology can be notoriously inaccurate, generating what experts call "hallucinations," or content that is stylistically correct but factually wrong. And, like any AI model, ChatGPT has a tendency to produce biased results.

So, it’s good to double-check the information that ChatGPT provides before using it. And, to be safe, don’t use the information generated by ChatGPT to make critical financial or health decisions without thorough verification and maybe even a second or third opinion. 

While ChatGPT is very much a work in progress, it can be used in a variety of interesting ways.

ChatGPT Examples

Job Search Examples

1. ChatGPT Can Write Your Cover Letter

Writing a cover letter is one of the most tedious and time-consuming parts of any job hunt, particularly if you're applying to several different jobs at once. There are only so many ways one can express how excited they are about a particular company, or distill their career in an engaging way. Fortunately, ChatGPT can do the heavy lifting.

2. ChatGPT Can Improve Your Resume

Thanks to its natural language processing capabilities, ChatGPT can take an existing piece of text and improve it. So, if you already have a resume written up, ChatGPT can be a useful tool in making it that much better. In a world where you can be competing with thousands of people for one job, this can be a good way to stand out from the rest.

3. ChatGPT Can Help You Prepare for an Interview

A tried-and-true way to prepare for any upcoming job interview is to have practice runs, where you test out talking points and run through various scenarios. ChatGPT can help, with the ability to generate anything from hypothetical questions to intelligent responses to those questions. It can give you tips for how to dress, or etiquette suggestions. It can even offer a few jokes, if you're feeling playful.

4. ChatGPT Can Jumpstart Your Job Search

Knowing where to start in a job search may not be that obvious, especially for recent college graduates and professionals switching careers. ChatGPT can give job seekers ideas of positions to pursue based on a quick description. For example, a professional may type “jobs for an experienced professional who has a passion for social equity and enjoys writing blog posts.”

Content Generation Examples

5. ChatGPT Can Generate Newsletter Ideas

Newsletters can be a great way for companies and individuals alike to increase visibility, as well as draw traffic to their site. But coming up with consistently solid topic ideas that align with one's brand can be challenging. With a brief prompt, ChatGPT can not only churn out newsletter ideas, but it can also draft up an outline and even write the entire thing. And once it has written the content, it can translate the text into several different languages.

6. ChatGPT Can Write a Marketing Email

When it comes to marketing, email is an essential line of communication with customers. ChatGPT can be used to generate not only the written content within the email itself, but suggestions for eye-catching subject lines as well — which can make all the difference when it is sent to an inbox with hundreds or thousands of other emails. It can also help marketers quickly create an email marketing calendar that schedules regular sends while avoiding weekends or holidays, which could affect an email's open and read-through rates.

7. ChatGPT Can Report the News

Again, ChatGPT has been trained on several terabytes of data across the internet, making it quite knowledgeable on many subjects. This knowledge can easily be used to generate quick news articles, or to distill the contents of a longer piece written by a human. 

8. ChatGPT Can Be a Copywriter

Consider the immense amount of content that exists on something like an e-commerce site — the product descriptions, the image captions, the alerts about certain sales or deals — not to mention all of the promotional emails, social media posts and advertisements. Until very recently, all of that copy had to be written and edited by humans. But now, ChatGPT can handle it in a matter of seconds. Plus, it can generate several iterations of that copy so it can be targeted to specific people.

Coding Examples

9. ChatGPT Can Debug Your Code

ChatGPT can be a useful tool for debugging code. All coders have to do is type in a line of code, and the chatbot will not only identify problems and fix them, but also explain why it made the decisions it did. ChatGPT is also capable of developing complete blocks of functional code (in many different languages) on its own, as well as translating code from one language to another. But, when it comes to code written by someone else (or, in this case, some thing else), it's important that you fully understand how the code works before deploying it.
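To make the debugging use case concrete, here is a purely illustrative example of the kind of snippet someone might paste into ChatGPT with the question "why does this sometimes crash?", along with the sort of fix the chatbot tends to suggest:

```python
# Illustrative only: a buggy function someone might paste into ChatGPT.
def average(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)  # bug: ZeroDivisionError when `values` is empty

# The kind of fix ChatGPT tends to suggest: handle the empty-list case.
def average_fixed(values):
    if not values:
        return 0.0
    return sum(values) / len(values)
```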

10. ChatGPT Can Help Create Machine Learning Algorithms

ChatGPT can even handle something as complicated as machine learning, so long as the user inputs the appropriate data. This can be in the form of labels, numbers or any other data that is useful in training a model. Then, ChatGPT can act as a sort of data scientist and provide anything from an example of a linear regression machine learning algorithm to an example of a machine learning model capable of predicting sales revenue.

11. ChatGPT Can Answer Coding-Related Questions

Coders can also look to ChatGPT for advice on specific coding-related questions. Whether a person wants to figure out a way to animate a button on their Shopify page, or set up a snapshot tool in a Fastlane configuration file, ChatGPT usually has some tips for how to make it happen. And it can even do it in the style of a fast-talkin' wiseguy from a 1940s gangster movie if you ask it to. Just a reminder, though: ChatGPT can absolutely get things wrong. In fact, Stack Overflow has temporarily banned ChatGPT-generated responses from its platform, citing the bot's penchant for errors.

Art and Music Examples

12. ChatGPT Can Brainstorm Ideas for AI-Generated Art

While ChatGPT is not capable of producing images itself, it can still be helpful in the creation of AI-generated art. Most AI art generators require that the user input a text prompt describing what they want the model to produce. If you're having a hard time coming up with a good prompt to feed a given generator, ChatGPT can help. For example, one user asked ChatGPT for some "interesting, fantastical ways of decorating a living room" and then plugged the bot's answers right into MidJourney.

13. ChatGPT Can Be Your Creative Writing Assistant

Although many writers have railed against using ChatGPT to produce pieces of creative writing, it can certainly be a useful tool. Instead of replacing writers outright, ChatGPT can serve as a kind of writing assistant, helping to generate ideas, produce story outlines, provide various character perspectives, and more. And it doesn't have to be a longform novel either — ChatGPT can help write things like poems and screenplays too.

14. ChatGPT Can Write Songs

ChatGPT can also write songs. Complete with chord progressions and lyrics organized into verses and choruses, these tunes can be tweaked to fit specific genres, eras and subject matters. Of course, if a user wants to actually hear the piece, they'll have to play it on an instrument.

Education Examples

15. ChatGPT Can Grade Homework

Teachers have grown increasingly wary of ChatGPT and its propensity to help students cheat on assignments, prompting some schools to completely block access to it. But just as students can use ChatGPT to compose essays, teachers can use it to grade them. All they have to do is input the work and ask the chatbot to assess it. Not only will ChatGPT provide a grade, but it will also offer reasons why and recommendations for improving it.

16. ChatGPT Can Generate Quiz and Test Questions

For educators who are pressed on time, ChatGPT can help generate specific test and quiz questions. These can be tailored according to subject matter, format (multiple choice, word problems, gap-fill exercises and so on), and more. It can even generate an answer key to go along with the questions.

17. ChatGPT Can Explain Complex Topics

Whether a user is trying to understand quantum computing or dark matter, ChatGPT is exceptionally good at explaining complex topics in lay terms — providing nutshell descriptions that anyone could understand. This capability can be useful in all kinds of contexts, but it can be an especially helpful resource for students, teachers and tutors looking to break down complicated concepts and get a firmer understanding of them.

18. ChatGPT Can Produce Reading Passages

With the amount of data it possesses, ChatGPT can provide a boost for teachers looking to quickly create reading passages. For example, an instructor may direct ChatGPT to generate a reading passage about photosynthesis that is five lines long and meets the criteria for a fifth-grade reading level. Teachers can also have ChatGPT include specific vocabulary to give a reading passage the right amount of difficulty for their students.

Other Interesting Examples

19. ChatGPT Can Help You Cook

Have you ever looked in your pantry and refrigerator and wondered what you could create with what few ingredients you have? It turns out, ChatGPT can help with that. In fact, ChatGPT has the potential to be the sous chef of your dreams — whether you need it to figure out complementary sides to a main dish or generate a weekly meal plan and grocery list.

20. ChatGPT Can Play Games

Or, if you’re looking to just kill some time, ChatGPT can also be a great source of entertainment. The chatbot is capable of playing a number of word games, all a user has to do is prompt it to. Some of the  most popular games to play with ChatGPT include 20 questions, would you rather, and even choose your own adventure. Although it  isn’t very good at rock-paper-scissors.

21. ChatGPT Can Offer Advice

Like any AI model, ChatGPT is not capable of having its own true emotions, and it is not especially good at reading the emotions of others. But it can be a good sounding board if you have a personal problem you need to work through. So far, the bot has been used to give advice on everything from relationships to finances. However, it is important to note that ChatGPT is not a therapist or domain expert in anything, so every bit of guidance it gives must be taken with a massive grain of salt.

22. ChatGPT Can Be Used As a (Sort Of) Search Engine

One of the buzziest (and, perhaps, most controversial) uses for ChatGPT is its functionality as a quasi search engine, offering simple answers as opposed to a list of websites to sift through. But ChatGPT may not be as effective a tool as something like Google. While the chatbot uses data from the open internet to generate responses, it does not have the ability to actually search the internet for new information. And since the data the base model was trained on is limited to 2021 and earlier, it is not aware of events or news that have occurred since then.

Frequently Asked Questions

What are some examples of ChatGPT?

ChatGPT can help write cover letters for job seekers, produce test questions for teachers, write song lyrics for aspiring musicians and debug code for software developers, among other capabilities.

What is the most popular use of ChatGPT?

ChatGPT is often used to generate ideas that aid in the creative process. Users can insert prompts asking ChatGPT for ideas on how to structure an essay, what content to include for a marketing campaign or possible topics for a blog post.

A large-scale comparison of human-written versus ChatGPT-generated essays

Steffen Herbold, Annette Hautli-Janisz, Ute Heuer, Zlata Kikteva & Alexander Trautsch

Scientific Reports, volume 13, article number 18617 (2023) | Open access | Published: 30 October 2023

ChatGPT and similar generative AI models have attracted hundreds of millions of users and have become part of the public discourse. Many believe that such models will disrupt society and lead to significant changes in the education system and information generation. So far, this belief is based on either colloquial evidence or benchmarks from the owners of the models—both lack scientific rigor. We systematically assess the quality of AI-generated content through a large-scale study comparing human-written versus ChatGPT-generated argumentative student essays. We use essays that were rated by a large number of human experts (teachers). We augment the analysis by considering a set of linguistic characteristics of the generated essays. Our results demonstrate that ChatGPT generates essays that are rated higher regarding quality than human-written essays. The writing style of the AI models exhibits linguistic characteristics that are different from those of the human-written essays. Since the technology is readily available, we believe that educators must act immediately. We must re-invent homework and develop teaching concepts that utilize these AI models in the same way as math utilizes the calculator: teach the general concepts first and then use AI tools to free up time for other learning objectives.

Introduction

The massive uptake in the development and deployment of large-scale Natural Language Generation (NLG) systems in recent months has yielded an almost unprecedented worldwide discussion of the future of society. The ChatGPT service, which serves as a Web front-end to GPT-3.5 1 and GPT-4, was the fastest-growing service in history to break the 100 million user milestone in January 2023 and had 1 billion visits by February 2023 2.

Driven by the upheaval that is particularly anticipated for education 3 and knowledge transfer for future generations, we conduct the first independent, systematic study of AI-generated language content that is typically dealt with in high-school education: argumentative essays, i.e. essays in which students discuss a position on a controversial topic by collecting and reflecting on evidence (e.g. ‘Should students be taught to cooperate or compete?’). Learning to write such essays is a crucial aspect of education, as students learn to systematically assess and reflect on a problem from different perspectives. Understanding the capability of generative AI to perform this task increases our understanding of the skills of the models, as well as of the challenges educators face when it comes to teaching this crucial skill. While there is a multitude of individual examples and anecdotal evidence for the quality of AI-generated content in this genre (e.g. 4 ) this paper is the first to systematically assess the quality of human-written and AI-generated argumentative texts across different versions of ChatGPT 5 . We use a fine-grained essay quality scoring rubric based on content and language mastery and employ a significant pool of domain experts, i.e. high school teachers across disciplines, to perform the evaluation. Using computational linguistic methods and rigorous statistical analysis, we arrive at several key findings:

AI models generate significantly higher-quality argumentative essays than the users of an essay-writing online forum frequented by German high-school students across all criteria in our scoring rubric.

ChatGPT-4 (ChatGPT web interface with the GPT-4 model) significantly outperforms ChatGPT-3 (ChatGPT web interface with the GPT-3.5 default model) with respect to logical structure, language complexity, vocabulary richness and text linking.

Writing styles between humans and generative AI models differ significantly: for instance, the GPT models use more nominalizations and have higher sentence complexity (signaling more complex, ‘scientific’, language), whereas the students make more use of modal and epistemic constructions (which tend to convey speaker attitude).

The linguistic diversity of the NLG models seems to be improving over time: while ChatGPT-3 still has a significantly lower linguistic diversity than humans, ChatGPT-4 has a significantly higher diversity than the students.

Our work goes significantly beyond existing benchmarks. While OpenAI’s technical report on GPT-4 6 presents some benchmarks, their evaluation lacks scientific rigor: it fails to provide vital information like the agreement between raters, does not report on details regarding the criteria for assessment or to what extent and how a statistical analysis was conducted for a larger sample of essays. In contrast, our benchmark provides the first (statistically) rigorous and systematic study of essay quality, paired with a computational linguistic analysis of the language employed by humans and two different versions of ChatGPT, offering a glance at how these NLG models develop over time. While our work is focused on argumentative essays in education, the genre is also relevant beyond education. In general, studying argumentative essays is one important aspect to understand how good generative AI models are at conveying arguments and, consequently, persuasive writing in general.

Related work

Natural language generation

The recent interest in generative AI models can be largely attributed to the public release of ChatGPT, a public interface in the form of an interactive chat based on the InstructGPT 1 model, more commonly referred to as GPT-3.5. In comparison to the original GPT-3 7 and other similar generative large language models based on the transformer architecture like GPT-J 8 , this model was not trained in a purely self-supervised manner (e.g. through masked language modeling). Instead, a pipeline that involved human-written content was used to fine-tune the model and improve the quality of the outputs to both mitigate biases and safety issues, as well as make the generated text more similar to text written by humans. Such models are referred to as Fine-tuned LAnguage Nets (FLANs). For details on their training, we refer to the literature 9 . Notably, this process was recently reproduced with publicly available models such as Alpaca 10 and Dolly (i.e. the complete models can be downloaded and not just accessed through an API). However, we can only assume that a similar process was used for the training of GPT-4 since the paper by OpenAI does not include any details on model training.

Testing of the language competency of large-scale NLG systems has only recently started. Cai et al. 11 show that ChatGPT reuses sentence structure, accesses the intended meaning of an ambiguous word, and identifies the thematic structure of a verb and its arguments, replicating human language use. Mahowald 12 compares ChatGPT's acceptability judgments to human judgments on the Article + Adjective + Numeral + Noun construction in English. Dentella et al. 13 show that ChatGPT-3 fails to understand low-frequency grammatical constructions like complex nested hierarchies and self-embeddings. In another recent line of research, the structure of automatically generated language is evaluated. Guo et al. 14 show that in question-answer scenarios, ChatGPT-3 uses different linguistic devices than humans. Zhao et al. 15 show that ChatGPT generates longer and more diverse responses when the user is in an apparently negative emotional state.

Given that we aim to identify certain linguistic characteristics of human-written versus AI-generated content, we also draw on related work in the field of linguistic fingerprinting, which assumes that each human has a unique way of using language to express themselves, i.e. the linguistic means that are employed to communicate thoughts, opinions and ideas differ between humans. That these properties can be identified with computational linguistic means has been showcased across different tasks: the computation of a linguistic fingerprint makes it possible to distinguish authors of literary works 16, to identify speaker profiles in large public debates 17, 18, 19, 20 and to provide data for forensic voice comparison in broadcast debates 21, 22. For educational purposes, linguistic features are used to measure essay readability 23, essay cohesion 24 and language performance scores for essay grading 25. Integrating linguistic fingerprints also yields performance advantages for classification tasks, for instance in predicting user opinion 26, 27 and identifying individual users 28.

Limitations of OpenAIs ChatGPT evaluations

OpenAI published a discussion of the model's performance on several tasks, including Advanced Placement (AP) classes within the US educational system 6. The subjects used in performance evaluation are diverse and include arts, history, English literature, calculus, statistics, physics, chemistry, economics, and US politics. While the models achieved good or very good marks in most subjects, they did not perform well in English literature. GPT-3.5 also experienced problems with chemistry, macroeconomics, physics, and statistics. While the overall results are impressive, there are several significant issues: firstly, the conflict of interest of the model's owners poses a problem for the performance interpretation. Secondly, there are issues with the soundness of the assessment beyond the conflict of interest, which make the generalizability of the results hard to assess with respect to the models' capability to write essays. Notably, the AP exams combine multiple-choice questions with free-text answers. Only the aggregated scores are publicly available. To the best of our knowledge, neither the generated free-text answers, their overall assessment, nor their assessment given specific criteria from the used judgment rubric are published. Thirdly, while the paper states that 1–2 qualified third-party contractors participated in the rating of the free-text answers, it is unclear how often multiple ratings were generated for the same answer and what the agreement between them was. This lack of information hinders a scientifically sound judgement regarding the capabilities of these models in general, but also specifically for essays. Lastly, the owners of the model conducted their study in a few-shot prompt setting, where they gave the models a very structured template as well as an example of a human-written high-quality essay to guide the generation of the answers. This further fine-tuning of what the models generate could have also influenced the output. The results published by the owners go beyond the AP courses which are directly comparable to our work and also consider other student assessments like Graduate Record Examinations (GREs). However, these evaluations suffer from the same problems with scientific rigor as the AP classes.

Scientific assessment of ChatGPT

Researchers across the globe are currently assessing the individual capabilities of these models with greater scientific rigor. We note that due to the recency and speed of these developments, the hereafter discussed literature has mostly only been published as pre-prints and has not yet been peer-reviewed. In addition to the above issues concretely related to the assessment of the capabilities to generate student essays, it is also worth noting that there are likely large problems with the trustworthiness of evaluations, because of data contamination, i.e. because the benchmark tasks are part of the training of the model, which enables memorization. For example, Aiyappa et al. 29 find evidence that this is likely the case for benchmark results regarding NLP tasks. This complicates the effort by researchers to assess the capabilities of the models beyond memorization.

Nevertheless, the first assessment results are already available – though mostly focused on ChatGPT-3 and not yet ChatGPT-4. Closest to our work is a study by Yeadon et al. 30 , who also investigate ChatGPT-3 performance when writing essays. They grade essays generated by ChatGPT-3 for five physics questions based on criteria that cover academic content, appreciation of the underlying physics, grasp of subject material, addressing the topic, and writing style. For each question, ten essays were generated and rated independently by five researchers. While the sample size precludes a statistical assessment, the results demonstrate that the AI model is capable of writing high-quality physics essays, but that the quality varies in a manner similar to human-written essays.

Guo et al. 14 create a set of free-text question answering tasks based on data they collected from the internet, e.g. question answering from Reddit. The authors then sample thirty triplets of a question, a human answer, and a ChatGPT-3 generated answer and ask human raters to assess if they can detect which was written by a human, and which was written by an AI. While this approach does not directly assess the quality of the output, it serves as a Turing test 31 designed to evaluate whether humans can distinguish between human- and AI-produced output. The results indicate that humans are in fact able to distinguish between the outputs when presented with a pair of answers. Humans familiar with ChatGPT are also able to identify over 80% of AI-generated answers without seeing a human answer in comparison. However, humans who are not yet familiar with ChatGPT-3 are not capable of identifying AI-written answers about 50% of the time. Moreover, the authors also find that the AI-generated outputs are deemed to be more helpful than the human answers in slightly more than half of the cases. This suggests that the strong results from OpenAI’s own benchmarks regarding the capabilities to generate free-text answers generalize beyond the benchmarks.

There are, however, some indicators that the benchmarks may be overly optimistic in their assessment of the model’s capabilities. For example, Kortemeyer 32 conducts a case study to assess how well ChatGPT-3 would perform in a physics class, simulating the tasks that students need to complete as part of the course: answer multiple-choice questions, do homework assignments, ask questions during a lesson, complete programming exercises, and write exams with free-text questions. Notably, ChatGPT-3 was allowed to interact with the instructor for many of the tasks, allowing for multiple attempts as well as feedback on preliminary solutions. The experiment shows that ChatGPT-3’s performance is in many aspects similar to that of the beginning learners and that the model makes similar mistakes, such as omitting units or simply plugging in results from equations. Overall, the AI would have passed the course with a low score of 1.5 out of 4.0. Similarly, Kung et al. 33 study the performance of ChatGPT-3 in the United States Medical Licensing Exam (USMLE) and find that the model performs at or near the passing threshold. Their assessment is a bit more optimistic than Kortemeyer’s as they state that this level of performance, comprehensible reasoning and valid clinical insights suggest that models such as ChatGPT may potentially assist human learning in clinical decision making.

Frieder et al. 34 evaluate the capabilities of ChatGPT-3 in solving graduate-level mathematical tasks. They find that while ChatGPT-3 seems to have some mathematical understanding, its level is well below that of an average student and in most cases is not sufficient to pass exams. Yuan et al. 35 consider the arithmetic abilities of language models, including ChatGPT-3 and ChatGPT-4. They find that they exhibit the best performance among other currently available language models (incl. Llama 36, FLAN-T5 37, and Bloom 38). However, the accuracy of basic arithmetic tasks is still only at 83% when considering correctness to the degree of \(10^{-3}\), i.e. such models are still not capable of functioning reliably as calculators. In a slightly satiric, yet insightful take, Spencer et al. 39 assess what a scientific paper on gamma-ray astrophysics would look like if it were written largely with the assistance of ChatGPT-3. They find that while the language capabilities are good and the model is capable of generating equations, the arguments are often flawed and the references to scientific literature are full of hallucinations.

The general reasoning skills of the models may also not be at the level expected from the benchmarks. For example, Cherian et al. 40 evaluate how well ChatGPT-3 performs on eleven puzzles that second graders should be able to solve and find that ChatGPT is only able to solve them on average in 36.4% of attempts, whereas the second graders achieve a mean of 60.4%. However, their sample size is very small and the problem was posed as a multiple-choice question answering problem, which cannot be directly compared to the NLG we consider.

Research gap

Within this article, we address an important part of the current research gap regarding the capabilities of ChatGPT (and similar technologies), guided by the following research questions:

RQ1: How good is ChatGPT based on GPT-3 and GPT-4 at writing argumentative student essays?

RQ2: How do AI-generated essays compare to essays written by students?

RQ3: What are linguistic devices that are characteristic of student versus AI-generated content?

We study these aspects with the help of a large group of teaching professionals who systematically assess a large corpus of student essays. To the best of our knowledge, this is the first large-scale, independent scientific assessment of ChatGPT (or similar models) of this kind. Answering these questions is crucial to understanding the impact of ChatGPT on the future of education.

Materials and methods

The essay topics originate from a corpus of argumentative essays in the field of argument mining 41. Argumentative essays require students to think critically about a topic and use evidence to establish a position on the topic in a concise manner. The corpus features essays for 90 topics from Essay Forum 42, an active community for providing writing feedback on different kinds of text that is frequented by high-school students seeking feedback from native speakers on their essay-writing capabilities. Information about the age of the writers is not available, but the topics indicate that the essays were written in grades 11–13, meaning the authors were likely at least 16. Topics range from 'Should students be taught to cooperate or to compete?' to 'Will newspapers become a thing of the past?'. In the corpus, each topic features one human-written essay uploaded and discussed in the forum. The students who wrote the essays are not native speakers. The average length of these essays is 19 sentences with 388 tokens (an average of 2,089 characters); these are termed 'student essays' in the remainder of the paper.

For the present study, we use the topics from Stab and Gurevych 41 and prompt ChatGPT with ‘Write an essay with about 200 words on “[ topic ]”’ to receive automatically-generated essays from the ChatGPT-3 and ChatGPT-4 versions from 22 March 2023 (‘ChatGPT-3 essays’, ‘ChatGPT-4 essays’). No additional prompts for getting the responses were used, i.e. the data was created with a basic prompt in a zero-shot scenario. This is in contrast to the benchmarks by OpenAI, who used an engineered prompt in a few-shot scenario to guide the generation of essays. We note that we decided to ask for 200 words because we noticed a tendency to generate essays that are longer than the desired length by ChatGPT. A prompt asking for 300 words typically yielded essays with more than 400 words. Thus, using the shorter length of 200, we prevent a potential advantage for ChatGPT through longer essays, and instead err on the side of brevity. Similar to the evaluations of free-text answers by OpenAI, we did not consider multiple configurations of the model due to the effort required to obtain human judgments. For the same reason, our data is restricted to ChatGPT and does not include other models available at that time, e.g. Alpaca. We use the browser versions of the tools because we consider this to be a more realistic scenario than using the API. Table 1 below shows the core statistics of the resulting dataset. Supplemental material S1 shows examples for essays from the data set.
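For illustration only: the authors generated the essays through the ChatGPT browser interface, not the API, but the zero-shot prompting scheme they describe could be reproduced programmatically along the following lines (the model name and the topic list are placeholders):

```python
# Illustrative reproduction of the study's zero-shot prompt via the
# OpenAI API; the study itself used the ChatGPT browser interface.
from openai import OpenAI

client = OpenAI()

# Placeholder: the study used the 90 topics from Stab and Gurevych.
topics = ["Should students be taught to cooperate or to compete?"]

essays = {}
for topic in topics:
    response = client.chat.completions.create(
        model="gpt-4",  # the study compared the GPT-3.5 and GPT-4 web versions
        messages=[{"role": "user",
                   "content": f'Write an essay with about 200 words on "{topic}"'}],
    )
    essays[topic] = response.choices[0].message.content
```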

Annotation study

Study participants

The participants had registered for a two-hour online training entitled ‘ChatGPT – Challenges and Opportunities’ conducted by the authors of this paper as a means to provide teachers with some of the technological background of NLG systems in general and ChatGPT in particular. Only teachers permanently employed at secondary schools were allowed to register for this training. Focusing on these experts alone allows us to receive meaningful results as those participants have a wide range of experience in assessing students’ writing. A total of 139 teachers registered for the training, 129 of them teach at grammar schools, and only 10 teachers hold a position at other secondary schools. About half of the registered teachers (68 teachers) have been in service for many years and have successfully applied for promotion. For data protection reasons, we do not know the subject combinations of the registered teachers. We only know that a variety of subjects are represented, including languages (English, French and German), religion/ethics, and science. Supplemental material S5 provides some general information regarding German teacher qualifications.

The training began with an online lecture followed by a discussion phase. Teachers were given an overview of language models and basic information on how ChatGPT was developed. After about 45 minutes, the teachers received both a written and an oral explanation of the questionnaire at the core of our study (see Supplementary material S3) and were informed that they had 30 minutes to finish the study tasks. The explanation included information on how the data was obtained, why we collect the self-assessment, how we chose the criteria for the rating of the essays, the overall goal of our research, and a walk-through of the questionnaire. Participation in the questionnaire was voluntary and did not affect the awarding of a training certificate. We further informed participants that all data was collected anonymously and that we would have no way of identifying who participated in the questionnaire. We orally informed participants that by participating in the survey they consented to the use of the provided ratings for our research.

Once these instructions were provided orally and in writing, the link to the online form was given to the participants. The online form was running on a local server that did not log any information that could identify the participants (e.g. IP address) to ensure anonymity. As per instructions, consent for participation was given by using the online form. Due to the full anonymity, we could by definition not document who exactly provided the consent. This was implemented as further insurance that non-participation could not possibly affect being awarded the training certificate.

About 20% of the training participants did not take part in the questionnaire study, the remaining participants consented based on the information provided and participated in the rating of essays. After the questionnaire, we continued with an online lecture on the opportunities of using ChatGPT for teaching as well as AI beyond chatbots. The study protocol was reviewed and approved by the Research Ethics Committee of the University of Passau. We further confirm that our study protocol is in accordance with all relevant guidelines.

Questionnaire

The questionnaire consists of three parts: first, a brief self-assessment regarding the English skills of the participants which is based on the Common European Framework of Reference for Languages (CEFR) 43 . We have six levels ranging from ‘comparable to a native speaker’ to ‘some basic skills’ (see supplementary material S3 ). Then each participant was shown six essays. The participants were only shown the generated text and were not provided with information on whether the text was human-written or AI-generated.

The questionnaire covers the seven categories relevant for essay assessment shown below (for details see supplementary material S3 ):

Topic and completeness

Logic and composition

Expressiveness and comprehensiveness

Language mastery

Complexity

Vocabulary and text linking

Language constructs

These categories are used as guidelines for essay assessment 44 established by the Ministry for Education of Lower Saxony, Germany. For each criterion, a seven-point Likert scale with scores from zero to six is defined, where zero is the worst score (e.g. no relation to the topic) and six is the best score (e.g. addressed the topic to a special degree). The questionnaire included a written description as guidance for the scoring.

After rating each essay, the participants were also asked to self-assess their confidence in the ratings. We used a five-point Likert scale based on the criteria for the self-assessment of peer-review scores from the Association for Computational Linguistics (ACL). Once a participant finished rating the six essays, they were shown a summary of their ratings, as well as the individual ratings for each of their essays and the information on how the essay was generated.

Computational linguistic analysis

In order to further explore and compare the quality of the essays written by students and ChatGPT, we consider the following six linguistic characteristics: lexical diversity, sentence complexity, nominalization, and the presence of modals, epistemic markers, and discourse markers. These are motivated by previous work: Weiss et al. 25 observe correlations between measures of lexical, syntactic and discourse complexity and the essay grades of German high-school examinations, while McNamara et al. 45 explore cohesion (indicated, among other things, by connectives), syntactic complexity and lexical diversity in relation to essay scoring.

Lexical diversity

We identify vocabulary richness by using a well-established measure of textual, lexical diversity (MTLD) 46 which is often used in the field of automated essay grading 25 , 45 , 47 . It takes into account the number of unique words but unlike the best-known measure of lexical diversity, the type-token ratio (TTR), it is not as sensitive to the difference in the length of the texts. In fact, Koizumi and In’nami 48 find it to be least affected by the differences in the length of the texts compared to some other measures of lexical diversity. This is relevant to us due to the difference in average length between the human-written and ChatGPT-generated essays.
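To make the measure concrete, here is a simplified sketch of MTLD in Python. It is not the implementation used in the paper; it just illustrates the core idea of counting how many stretches ("factors") it takes for the running type-token ratio to fall below the conventional 0.72 threshold.

```python
# A simplified sketch of MTLD (McCarthy & Jarvis, 2010), not the paper's
# implementation. It counts how many word stretches ("factors") it takes
# for the running type-token ratio (TTR) to fall below 0.72.
def mtld_one_direction(tokens, threshold=0.72):
    factors = 0.0
    types, count = set(), 0
    for token in tokens:
        count += 1
        types.add(token.lower())
        if len(types) / count <= threshold:
            factors += 1.0          # a full factor is complete
            types, count = set(), 0
    if count > 0:                   # credit the leftover partial factor
        ttr = len(types) / count
        factors += (1.0 - ttr) / (1.0 - threshold)
    return len(tokens) / factors if factors > 0 else float("nan")

def mtld(tokens):
    # MTLD averages a forward and a backward pass over the text
    return (mtld_one_direction(tokens) +
            mtld_one_direction(tokens[::-1])) / 2.0

print(mtld("the quick brown fox jumps over the lazy dog".split()))
```

Because the factor count scales with text length, the resulting score is far less sensitive to essay length than the plain type-token ratio, which is exactly why it suits a comparison of texts of different average lengths.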

Syntactic complexity

We use two measures in order to evaluate the syntactic complexity of the essays. One is based on the maximum depth of the sentence dependency tree which is produced using the spaCy 3.4.2 dependency parser 49 (‘Syntactic complexity (depth)’). For the second measure, we adopt an approach similar in nature to the one by Weiss et al. 25 who use clause structure to evaluate syntactic complexity. In our case, we count the number of conjuncts, clausal modifiers of nouns, adverbial clause modifiers, clausal complements, clausal subjects, and parataxes (‘Syntactic complexity (clauses)’). The supplementary material in S2 shows the difference between sentence complexity based on two examples from the data.

Nominalization is a common feature of a more scientific style of writing 50 and is used as an additional measure for syntactic complexity. In order to explore this feature, we count occurrences of nouns with suffixes such as ‘-ion’, ‘-ment’, ‘-ance’ and a few others which are known to transform verbs into nouns.
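A minimal sketch of the two syntactic complexity measures and the nominalization count follows, assuming spaCy with an English pipeline; the clause-level dependency labels and the suffix list are illustrative approximations rather than the paper's exact configuration.

```python
# Sketch of the dependency-depth and clause-based complexity measures
# plus the nominalization count, using spaCy. The clause labels and the
# suffix list are illustrative; the paper's exact setup may differ.
import spacy

nlp = spacy.load("en_core_web_sm")

CLAUSE_DEPS = {"conj", "acl", "relcl", "advcl", "ccomp", "csubj", "parataxis"}
NOMINAL_SUFFIXES = ("ion", "ment", "ance", "ence", "ness", "ity")

def tree_depth(token):
    # maximum depth of the dependency subtree rooted at `token`
    children = list(token.children)
    if not children:
        return 1
    return 1 + max(tree_depth(child) for child in children)

def complexity_features(text):
    doc = nlp(text)
    return {
        "max_depth": max(tree_depth(sent.root) for sent in doc.sents),
        "clause_count": sum(1 for tok in doc if tok.dep_ in CLAUSE_DEPS),
        "nominalizations": sum(
            1 for tok in doc
            if tok.pos_ == "NOUN" and tok.text.lower().endswith(NOMINAL_SUFFIXES)
        ),
    }

print(complexity_features(
    "The implementation of the agreement, which was signed yesterday, "
    "demonstrates that cooperation between the governments is possible."
))
```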

Semantic properties

Both modals and epistemic markers signal the commitment of the writer to their statement. We identify modals using the POS-tagging module provided by spaCy as well as a list of epistemic expressions of modality, such as ‘definitely’ and ‘potentially’, also used in other approaches to identifying semantic properties 51 . For epistemic markers we adopt an empirically-driven approach and utilize the epistemic markers identified in a corpus of dialogical argumentation by Hautli-Janisz et al. 52 . We consider expressions such as ‘I think’, ‘it is believed’ and ‘in my opinion’ to be epistemic.

Discourse properties

Discourse markers can be used to measure the coherence quality of a text. This has been explored by Somasundaran et al. 53 who use discourse markers to evaluate the story-telling aspect of student writing while Nadeem et al. 54 incorporated them in their deep learning-based approach to automated essay scoring. In the present paper, we employ the PDTB list of discourse markers 55 which we adjust to exclude words that are often used for purposes other than indicating discourse relations, such as ‘like’, ‘for’, ‘in’ etc.
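The marker-based counts can be sketched as follows; the epistemic and discourse marker lists here are tiny illustrative samples, whereas the study draws on the markers from Hautli-Janisz et al. and the filtered PDTB list, respectively.

```python
# Sketch of the marker-based counts. The epistemic and discourse marker
# lists are small illustrative samples, not the study's actual lists.
import spacy

nlp = spacy.load("en_core_web_sm")

EPISTEMIC_MARKERS = ["i think", "in my opinion", "it is believed",
                     "definitely", "potentially"]
DISCOURSE_MARKERS = ["however", "therefore", "moreover",
                     "on the other hand", "consequently"]

def marker_counts(text):
    doc = nlp(text)
    lowered = doc.text.lower()
    return {
        # "MD" is the Penn Treebank tag for modal verbs (can, should, ...)
        "modals": sum(1 for tok in doc if tok.tag_ == "MD"),
        "epistemic": sum(lowered.count(m) for m in EPISTEMIC_MARKERS),
        "discourse": sum(lowered.count(m) for m in DISCOURSE_MARKERS),
    }

print(marker_counts("I think we should definitely act; however, it may be hard."))
```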

Statistical methods

We use a within-subjects design for our study. Each participant was shown six randomly selected essays, and results were submitted to the survey system after each essay was completed, so that partial data was retained if participants ran out of time and did not finish scoring all six essays. Cronbach’s \(\alpha\) 56 allows us to determine the inter-rater reliability for each rating criterion and data source (human, ChatGPT-3, ChatGPT-4), so that we understand the reliability of our data not only overall, but also per data source and rating criterion. We use two-sided Wilcoxon rank-sum tests 57 to confirm the significance of the differences between the data sources for each criterion, and the same tests to determine the significance of the differences in the linguistic characteristics. This results in three comparisons (human vs. ChatGPT-3, human vs. ChatGPT-4, ChatGPT-3 vs. ChatGPT-4) for each of the seven rating criteria and each of the seven linguistic characteristics, i.e. 42 tests. We use the Holm-Bonferroni method 58 to correct for multiple tests and achieve a family-wise error rate of 0.05. We report the effect size using Cohen’s d 59: while our data is not perfectly normal, it also does not have severe outliers, so we prefer the clear interpretation of Cohen’s d over the slightly more appropriate, but less accessible, non-parametric effect size measures. We report point plots with estimates of the mean scores for each data source and criterion, including the 95% confidence intervals of these means, which are estimated non-parametrically based on bootstrap sampling. We further visualize the distribution for each criterion using violin plots to provide a visual indicator of the spread of the data (see Supplementary material S4).
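The core of this pipeline is compact. Below is a minimal sketch with dummy data standing in for the real ratings; the rank-sum test and effect-size formula match the description above, while statsmodels' multipletests is just one possible way to apply the Holm correction (the paper does not name its implementation).

```python
import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats
from statsmodels.stats.multitest import multipletests  # one way to apply Holm

rng = np.random.default_rng(0)
human = rng.normal(3.9, 1.1, 90)  # dummy ratings standing in for the real data
gpt3 = rng.normal(4.6, 1.1, 90)
gpt4 = rng.normal(5.0, 1.1, 90)

def cohens_d(x: np.ndarray, y: np.ndarray) -> float:
    """Cohen's d with a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                     / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled

pairs = [("human vs. GPT-3", human, gpt3),
         ("human vs. GPT-4", human, gpt4),
         ("GPT-3 vs. GPT-4", gpt3, gpt4)]
p_values = [stats.ranksums(a, b).pvalue for _, a, b in pairs]  # two-sided
reject, p_holm, _, _ = multipletests(p_values, alpha=0.05, method="holm")

for (label, a, b), p in zip(pairs, p_holm):
    print(f"{label}: d = {cohens_d(a, b):.2f}, Holm-corrected p = {p:.4f}")

# Inter-rater reliability on a raters-as-columns table (dummy data):
ratings = pd.DataFrame(rng.integers(1, 8, size=(20, 3)), columns=["r1", "r2", "r3"])
alpha, ci = pg.cronbach_alpha(data=ratings)
print(f"Cronbach's alpha = {alpha:.2f}")
```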

Further, we use the self-assessments of English skills and of confidence in the essay ratings as confounding variables, to determine whether ratings are affected by language skills or confidence rather than by the actual quality of the essays. We control for their impact by measuring Pearson’s correlation coefficient r 60 between the self-assessments and the ratings. We also check whether the linguistic features correlate with the ratings as expected: sentence complexity (both tree depth and dependent clauses) and nominalization are indicators of the complexity of the language, the use of discourse markers should signal a proper logical structure, and a large lexical diversity should be correlated with the vocabulary ratings. As above, we measure Pearson’s r and use a two-sided significance test based on a \(\beta\)-distribution that models the expected correlations, as implemented in scipy 61, again with the Holm-Bonferroni method to account for multiple tests. However, given our amount of data, it is likely that all, even tiny, correlations are significant. Consequently, our interpretation of these results focuses on the strength of the correlations.
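The confound check itself reduces to one call per pair of variables, since scipy's pearsonr implements exactly the two-sided beta-distribution-based test mentioned above. A sketch with dummy data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
self_assessed_skill = rng.integers(1, 8, size=200).astype(float)  # dummy confound
ratings = self_assessed_skill + rng.normal(0, 3, size=200)        # dummy ratings

r, p = stats.pearsonr(self_assessed_skill, ratings)  # two-sided test
print(f"r = {r:.2f}, p = {p:.3g}")
```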

Our statistical analysis of the data is implemented in Python. We use pandas 1.5.3 and numpy 1.24.2 for data processing, pingouin 0.5.3 for the calculation of Cronbach’s \(\alpha\), scipy 1.10.1 for the Wilcoxon rank-sum tests and Pearson’s r, and seaborn 0.12.2 for the generation of plots, including the calculation of the error bars that visualize the confidence intervals.

Out of the 111 teachers who completed the questionnaire, 108 rated all six essays, one rated five essays, one rated two essays, and one rated only one essay. This results in 656 ratings for 270 essays (90 topics, each with a human-written, a ChatGPT-3-generated, and a ChatGPT-4-generated essay): three ratings for 121 essays, two ratings for 144 essays, and one rating for five essays. The inter-rater agreement is consistently excellent ( \(\alpha >0.9\) ), with the exception of language mastery, where agreement is good ( \(\alpha =0.89\) , see Table 2). Further, the correlation analysis depicted in supplementary material S4 shows weak positive correlations ( \(r \in [0.11, 0.28]\) ) between the self-assessed English skills and the ratings, and between the self-assessed confidence in the ratings and the ratings. Overall, this indicates that our ratings are reliable estimates of the actual quality of the essays, with a small tendency for greater confidence and stronger language skills to yield better ratings, independent of the data source.

Table  2 and supplementary material S4 characterize the distribution of the ratings for the essays, grouped by the data source. We observe that for all criteria, we have a clear order of the mean values, with students having the worst ratings, ChatGPT-3 in the middle rank, and ChatGPT-4 with the best performance. We further observe that the standard deviations are fairly consistent and slightly larger than one, i.e. the spread is similar for all ratings and essays. This is further supported by the visual analysis of the violin plots.

The statistical analysis of the ratings reported in Table  4 shows that differences between the human-written essays and the ones generated by both ChatGPT models are significant. The effect sizes for human versus ChatGPT-3 essays are between 0.52 and 1.15, i.e. a medium ( \(d \in [0.5,0.8)\) ) to large ( \(d \in [0.8, 1.2)\) ) effect. On the one hand, the smallest effects are observed for the expressiveness and complexity, i.e. when it comes to the overall comprehensiveness and complexity of the sentence structures, the differences between the humans and the ChatGPT-3 model are smallest. On the other hand, the difference in language mastery is larger than all other differences, which indicates that humans are more prone to making mistakes when writing than the NLG models. The magnitude of differences between humans and ChatGPT-4 is larger with effect sizes between 0.88 and 1.43, i.e., a large to very large ( \(d \in [1.2, 2)\) ) effect. Same as for ChatGPT-3, the differences are smallest for expressiveness and complexity and largest for language mastery. Please note that the difference in language mastery between humans and both GPT models does not mean that the humans have low scores for language mastery (M=3.90), but rather that the NLG models have exceptionally high scores (M=5.03 for ChatGPT-3, M=5.25 for ChatGPT-4).

When we consider the differences between the two GPT models, we observe that while ChatGPT-4 has consistently higher mean values for all criteria, only the differences for logic and composition, vocabulary and text linking, and complexity are significant. The effect sizes are between 0.45 and 0.5, i.e. small ( \(d \in [0.2, 0.5)\) ) to medium. Thus, while GPT-4 seems to be an improvement over GPT-3.5 in general, the only clear indicators of this are a better and clearer logical composition and more complex writing with a more diverse vocabulary.

We also observe significant differences in the distribution of linguistic characteristics between all three groups (see Table 3). Sentence complexity (depth) is the only category without a significant difference between humans and ChatGPT-3, as well as between ChatGPT-3 and ChatGPT-4. There is also no significant difference in the category of discourse markers between humans and ChatGPT-3. The magnitude of the effects varies considerably, ranging between 0.39 and 1.93, i.e. between small ( \(d \in [0.2, 0.5)\) ) and very large. However, in comparison to the ratings, there is no clear tendency regarding the direction of the differences. For instance, while the ChatGPT models write more complex sentences and use more nominalizations, humans tend to use more modals and epistemic markers instead. The lexical diversity of humans is higher than that of ChatGPT-3 but lower than that of ChatGPT-4. While there is no difference in the use of discourse markers between humans and ChatGPT-3, ChatGPT-4 uses significantly fewer discourse markers.

We detect the expected positive correlations between the complexity ratings and the linguistic markers for sentence complexity ( \(r=0.16\) for depth, \(r=0.19\) for clauses) and nominalizations ( \(r=0.22\) ). However, we observe a negative correlation between the logic ratings and the discourse markers ( \(r=-0.14\) ), which counters our intuition that more frequent use of discourse indicators makes a text more logically coherent. This is nevertheless in line with previous work: McNamara et al. 45 also find no indication that the use of cohesion indices such as discourse connectives correlates with high- and low-proficiency essays. Finally, we observe the expected positive correlation between the ratings for the vocabulary and the lexical diversity ( \(r=0.12\) ). All observed correlations are significant. However, we note that all of these correlations are weak and that the significance itself should not be over-interpreted given the large sample size.

Our results provide clear answers to the first two research questions, which consider the quality of the generated essays: ChatGPT performs well at writing argumentative student essays and significantly outperforms the human-written essays. The ChatGPT-4 model has (at least) a large effect and is on average about one point better than humans on a seven-point Likert scale.

Regarding the third research question, we find that there are significant linguistic differences between humans and AI-generated content. The AI-generated essays are highly structured, which for instance is reflected by the identical beginnings of the concluding sections of all ChatGPT essays (‘In conclusion, [...]’). The initial sentences of each essay are also very similar starting with a general statement using the main concepts of the essay topics. Although this corresponds to the general structure that is sought after for argumentative essays, it is striking to see that the ChatGPT models are so rigid in realizing this, whereas the human-written essays are looser in representing the guideline on the linguistic surface. Moreover, the linguistic fingerprint has the counter-intuitive property that the use of discourse markers is negatively correlated with logical coherence. We believe that this might be due to the rigid structure of the generated essays: instead of using discourse markers, the AI models provide a clear logical structure by separating the different arguments into paragraphs, thereby reducing the need for discourse markers.

Our data also shows that hallucinations are not a problem in the setting of argumentative essay writing: the essay topics are not really about factual correctness, but rather about argumentation and critical reflection on general concepts which seem to be contained within the knowledge of the AI model. The stochastic nature of the language generation is well-suited for this kind of task, as different plausible arguments can be seen as a sampling from all available arguments for a topic. Nevertheless, we need to perform a more systematic study of the argumentative structures in order to better understand the difference in argumentation between human-written and ChatGPT-generated essay content. Moreover, we also cannot rule out that subtle hallucinations may have been overlooked during the ratings. There are also essays with a low rating for the criteria related to factual correctness, indicating that there might be cases where the AI models still have problems, even if they are, on average, better than the students.

One issue with evaluations of recent large language models is the failure to account for the impact of tainted (i.e., contaminated) data when benchmarking such models. While it is certainly possible that the essays Stab and Gurevych 41 sourced from the internet were part of the GPT models’ training data, the proprietary nature of the model training means that we cannot confirm this. However, we note that the generated essays did not resemble the corpus of human essays at all. Moreover, the topics of the essays are general in the sense that any human should be able to reason and write about them just by understanding concepts like ‘cooperation’. Consequently, a taint on these general topics, i.e. the fact that they might be present in the training data, is not only possible but actually expected and unproblematic, as it relates to the capability of the models to learn about concepts rather than the memorization of specific task solutions.

While we did everything we could to ensure a sound study design and high validity, certain issues may still affect our conclusions. Most importantly, neither the writers of the essays nor their raters were native English speakers. However, the students purposefully used a forum for English writing frequented by native speakers to ensure the language and content quality of their essays. The resulting essays are therefore likely above average for non-native speakers, as they went through at least one round of revisions with the help of native speakers. The teachers were informed that part of the training would be in English, to prevent registrations from people without English language skills. Moreover, the self-assessment of the language skills was only weakly correlated with the ratings, indicating that this threat to the soundness of our results is low. While we cannot definitively rule out that our results would differ with other human raters, the high inter-rater agreement indicates that this is unlikely.

However, our reliance on essays written by non-native speakers affects the external validity and generalizability of our results. It is certainly possible that native-speaking students would perform better on the criteria related to language skills, though it is unclear by how much. However, the language skills were particular strengths of the AI models: even if the gap were smaller for native speakers, it is still reasonable to conclude that the AI models would perform at least comparably to humans, and possibly still better. While we cannot rule out a difference for the content-related criteria, we also see no strong argument why native speakers should have better arguments than non-native speakers. Thus, while our results might not fully translate to native speakers, we see no reason why the content-related aspects should differ. Further, our results were obtained with high-school-level essays. Native and non-native speakers with higher-education degrees, or domain experts, would likely perform better, so the performance difference between the AI models and humans would likely also be smaller in such a setting.

We further note that the essay topics may not be an unbiased sample. While Stab and Gurevych 41 randomly sampled the essays from the writing feedback section of an essay forum, it is unclear whether the essays posted there are representative of the general population of essay topics. Nevertheless, we believe that this threat is fairly low because our results are consistent and do not seem to be influenced by particular topics. Further, we cannot conclude with certainty how our results generalize beyond ChatGPT-3 and ChatGPT-4 to similar models like Bard ( https://bard.google.com/?hl=en ), Alpaca, and Dolly. The results for the linguistic characteristics are especially hard to predict. However, since, to the best of our knowledge, the general approach behind these models is similar, the trends for essay quality should hold for models with comparable size and training procedures, even though the proprietary nature of some of these models prevents us from verifying this.

Finally, we want to note that the current speed of progress with generative AI is extremely fast and we are studying moving targets: ChatGPT 3.5 and 4 today are already not the same as the models we studied. Due to a lack of transparency regarding the specific incremental changes, we cannot know or predict how this might affect our results.

Our results provide a strong indication that the fear many teaching professionals have is warranted: the way students do homework and teachers assess it needs to change in a world of generative AI models. For non-native speakers, our results show that students who want to maximize their essay grades could easily do so by relying on AI models like ChatGPT. The very strong performance of the AI models indicates that this might also be the case for native speakers, though the difference in language skills is probably smaller. However, this is not and cannot be the goal of education. Consequently, educators need to change how they approach homework. Instead of just assigning and grading essays, we need to reflect more on the output of AI tools regarding its reasoning and correctness. AI models need to be seen as an integral part of education, but one that requires careful reflection and training of critical thinking skills.

Furthermore, teachers need to adapt strategies for teaching writing skills: as with the use of calculators, it is necessary to critically reflect with the students on when and how to use those tools. For instance, constructivists 62 argue that learning is enhanced by the active design and creation of unique artifacts by students themselves. In the present case this means that, in the long term, educational objectives may need to be adjusted. This is analogous to teaching good arithmetic skills to younger students and then allowing and encouraging students to use calculators freely in later stages of education. Similarly, once a sound level of literacy has been achieved, strongly integrating AI models in lesson plans may no longer run counter to reasonable learning goals.

In terms of shedding light on the quality and structure of AI-generated essays, this paper makes an important contribution by offering an independent, large-scale, and statistically sound account of essay quality, comparing human-written and AI-generated texts. By comparing different versions of ChatGPT, we also offer a glance into the development of these models over time in terms of their linguistic properties and the quality they exhibit. Our results show that while the language generated by ChatGPT is considered very good by humans, there are also notable structural differences, e.g. in the use of discourse markers. This demonstrates that in-depth consideration is required not only of the capabilities of generative AI models (i.e. which tasks they can be used for), but also of the language they generate. For example, if we read many AI-generated texts that use fewer discourse markers, it raises the question of if and how this would affect our own use of discourse markers. Understanding how AI-generated texts differ from human-written ones enables us to look for these differences, to reason about their potential impact, and to study and possibly mitigate this impact.

Data availability

The datasets generated during and/or analysed during the current study are available in the Zenodo repository, https://doi.org/10.5281/zenodo.8343644

Code availability

All materials are available online in the form of a replication package that contains the data and the analysis code, https://doi.org/10.5281/zenodo.8343644 .

Ouyang, L. et al. Training language models to follow instructions with human feedback (2022). arXiv:2203.02155 .

Ruby, D. 30+ detailed ChatGPT statistics: users & facts (Sep 2023). https://www.demandsage.com/chatgpt-statistics/ (2023). Accessed 09 June 2023.

Leahy, S. & Mishra, P. TPACK and the Cambrian explosion of AI. In Society for Information Technology & Teacher Education International Conference , (ed. Langran, E.) 2465–2469 (Association for the Advancement of Computing in Education (AACE), 2023).

Ortiz, S. Need an AI essay writer? Here’s how ChatGPT (and other chatbots) can help. https://www.zdnet.com/article/how-to-use-chatgpt-to-write-an-essay/ (2023). Accessed 09 June 2023.

OpenAI chat interface. https://chat.openai.com/ . Accessed 09 June 2023.

OpenAI. GPT-4 technical report (2023). arXiv:2303.08774 .

Brown, T. B. et al. Language models are few-shot learners (2020). arXiv:2005.14165 .

Wang, B. Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX. https://github.com/kingoflolz/mesh-transformer-jax (2021).

Wei, J. et al. Finetuned language models are zero-shot learners. In International Conference on Learning Representations (2022).

Taori, R. et al. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca (2023).

Cai, Z. G., Haslett, D. A., Duan, X., Wang, S. & Pickering, M. J. Does ChatGPT resemble humans in language use? (2023). arXiv:2303.08014 .

Mahowald, K. A discerning several thousand judgments: GPT-3 rates the article + adjective + numeral + noun construction (2023). arXiv:2301.12564 .

Dentella, V., Murphy, E., Marcus, G. & Leivada, E. Testing AI performance on less frequent aspects of language reveals insensitivity to underlying meaning (2023). arXiv:2302.12313 .

Guo, B. et al. How close is ChatGPT to human experts? Comparison corpus, evaluation, and detection (2023). arXiv:2301.07597 .

Zhao, W. et al. Is ChatGPT equipped with emotional dialogue capabilities? (2023). arXiv:2304.09582 .

Keim, D. A. & Oelke, D. Literature fingerprinting: A new method for visual literary analysis. In 2007 IEEE Symposium on Visual Analytics Science and Technology , 115–122, https://doi.org/10.1109/VAST.2007.4389004 (IEEE, 2007).

El-Assady, M. et al. Interactive visual analysis of transcribed multi-party discourse. In Proceedings of ACL 2017, System Demonstrations , 49–54 (Association for Computational Linguistics, Vancouver, Canada, 2017).

El-Assady, M., Hautli-Janisz, A. & Butt, M. Discourse maps: Feature encoding for the analysis of verbatim conversation transcripts. In Visual Analytics for Linguistics , CSLI Lecture Notes No. 220, 115–147 (Stanford: CSLI Publications, 2020).

Foulis, M., Visser, J. & Reed, C. Dialogical fingerprinting of debaters. In Proceedings of COMMA 2020 , 465–466, https://doi.org/10.3233/FAIA200536 (Amsterdam: IOS Press, 2020).

Foulis, M., Visser, J. & Reed, C. Interactive visualisation of debater identification and characteristics. In Proceedings of the COMMA Workshop on Argument Visualisation, COMMA , 1–7 (2020).

Chatzipanagiotidis, S., Giagkou, M. & Meurers, D. Broad linguistic complexity analysis for Greek readability classification. In Proceedings of the 16th Workshop on Innovative Use of NLP for Building Educational Applications , 48–58 (Association for Computational Linguistics, Online, 2021).

Ajili, M., Bonastre, J.-F., Kahn, J., Rossato, S. & Bernard, G. FABIOLE, a speech database for forensic speaker comparison. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16) , 726–733 (European Language Resources Association (ELRA), Portorož, Slovenia, 2016).

Deutsch, T., Jasbi, M. & Shieber, S. Linguistic features for readability assessment. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications , 1–17, https://doi.org/10.18653/v1/2020.bea-1.1 (Association for Computational Linguistics, Seattle, WA, USA \(\rightarrow\) Online, 2020).

Fiacco, J., Jiang, S., Adamson, D. & Rosé, C. Toward automatic discourse parsing of student writing motivated by neural interpretation. In Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022) , 204–215, https://doi.org/10.18653/v1/2022.bea-1.25 (Association for Computational Linguistics, Seattle, Washington, 2022).

Weiss, Z., Riemenschneider, A., Schröter, P. & Meurers, D. Computationally modeling the impact of task-appropriate language complexity and accuracy on human grading of German essays. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications , 30–45, https://doi.org/10.18653/v1/W19-4404 (Association for Computational Linguistics, Florence, Italy, 2019).

Yang, F., Dragut, E. & Mukherjee, A. Predicting personal opinion on future events with fingerprints. In Proceedings of the 28th International Conference on Computational Linguistics , 1802–1807, https://doi.org/10.18653/v1/2020.coling-main.162 (International Committee on Computational Linguistics, Barcelona, Spain (Online), 2020).

Tumarada, K. et al. Opinion prediction with user fingerprinting. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021) , 1423–1431 (INCOMA Ltd., Held Online, 2021).

Rocca, R. & Yarkoni, T. Language as a fingerprint: Self-supervised learning of user encodings using transformers. In Findings of the Association for Computational Linguistics: EMNLP . 1701–1714 (Association for Computational Linguistics, Abu Dhabi, United Arab Emirates, 2022).

Aiyappa, R., An, J., Kwak, H. & Ahn, Y.-Y. Can we trust the evaluation on ChatGPT? (2023). arXiv:2303.12767 .

Yeadon, W., Inyang, O.-O., Mizouri, A., Peach, A. & Testrow, C. The death of the short-form physics essay in the coming AI revolution (2022). arXiv:2212.11661 .

Turing, A. M. Computing machinery and intelligence. Mind LIX , 433–460, https://doi.org/10.1093/mind/LIX.236.433 (1950).

Kortemeyer, G. Could an artificial-intelligence agent pass an introductory physics course? (2023). arXiv:2301.12127 .

Kung, T. H. et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health 2 , 1–12. https://doi.org/10.1371/journal.pdig.0000198 (2023).


Frieder, S. et al. Mathematical capabilities of ChatGPT (2023). arXiv:2301.13867 .

Yuan, Z., Yuan, H., Tan, C., Wang, W. & Huang, S. How well do large language models perform in arithmetic tasks? (2023). arXiv:2304.02015 .

Touvron, H. et al. Llama: Open and efficient foundation language models (2023). arXiv:2302.13971 .

Chung, H. W. et al. Scaling instruction-finetuned language models (2022). arXiv:2210.11416 .

Workshop, B. et al. Bloom: A 176b-parameter open-access multilingual language model (2023). arXiv:2211.05100 .

Spencer, S. T., Joshi, V. & Mitchell, A. M. W. Can AI put gamma-ray astrophysicists out of a job? (2023). arXiv:2303.17853 .

Cherian, A., Peng, K.-C., Lohit, S., Smith, K. & Tenenbaum, J. B. Are deep neural networks smarter than second graders? (2023). arXiv:2212.09993 .

Stab, C. & Gurevych, I. Annotating argument components and relations in persuasive essays. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers , 1501–1510 (Dublin City University and Association for Computational Linguistics, Dublin, Ireland, 2014).

Essay forum. https://essayforum.com/ . Accessed 07 September 2023.

Common European Framework of Reference for Languages (CEFR). https://www.coe.int/en/web/common-european-framework-reference-languages . Accessed 09 July 2023.

KMK guidelines for essay assessment. http://www.kmk-format.de/material/Fremdsprachen/5-3-2_Bewertungsskalen_Schreiben.pdf . Accessed 09 July 2023.

McNamara, D. S., Crossley, S. A. & McCarthy, P. M. Linguistic features of writing quality. Writ. Commun. 27 , 57–86 (2010).

McCarthy, P. M. & Jarvis, S. MTLD, vocd-D, and HD-D: A validation study of sophisticated approaches to lexical diversity assessment. Behav. Res. Methods 42 , 381–392 (2010).


Dasgupta, T., Naskar, A., Dey, L. & Saha, R. Augmenting textual qualitative features in deep convolution recurrent neural network for automatic essay scoring. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications , 93–102 (2018).

Koizumi, R. & In’nami, Y. Effects of text length on lexical diversity measures: Using short texts with less than 200 tokens. System 40 , 554–564 (2012).

spaCy: Industrial-strength natural language processing in Python. https://spacy.io/ .

Siskou, W., Friedrich, L., Eckhard, S., Espinoza, I. & Hautli-Janisz, A. Measuring plain language in public service encounters. In Proceedings of the 2nd Workshop on Computational Linguistics for Political Text Analysis (CPSS-2022) (Potsdam, Germany, 2022).

El-Assady, M. & Hautli-Janisz, A. Discourse Maps: Feature Encoding for the Analysis of Verbatim Conversation Transcripts . CSLI Lecture Notes (CSLI Publications, Center for the Study of Language and Information, 2019).

Hautli-Janisz, A. et al. QT30: A corpus of argument and conflict in broadcast debate. In Proceedings of the Thirteenth Language Resources and Evaluation Conference , 3291–3300 (European Language Resources Association, Marseille, France, 2022).

Somasundaran, S. et al. Towards evaluating narrative quality in student writing. Trans. Assoc. Comput. Linguist. 6 , 91–106 (2018).

Nadeem, F., Nguyen, H., Liu, Y. & Ostendorf, M. Automated essay scoring with discourse-aware neural models. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications , 484–493, https://doi.org/10.18653/v1/W19-4450 (Association for Computational Linguistics, Florence, Italy, 2019).

Prasad, R. et al. The Penn Discourse TreeBank 2.0. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08) (European Language Resources Association (ELRA), Marrakech, Morocco, 2008).

Cronbach, L. J. Coefficient alpha and the internal structure of tests. Psychometrika 16 , 297–334. https://doi.org/10.1007/bf02310555 (1951).


Wilcoxon, F. Individual comparisons by ranking methods. Biom. Bull. 1 , 80–83 (1945).

Holm, S. A simple sequentially rejective multiple test procedure. Scand. J. Stat. 6 , 65–70 (1979).


Cohen, J. Statistical power analysis for the behavioral sciences (Academic press, 2013).

Freedman, D., Pisani, R. & Purves, R. Statistics , 4th edn (international student edition) (WW Norton & Company, New York, 2007).

SciPy documentation. https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html . Accessed 09 June 2023.

Windschitl, M. Framing constructivism in practice as the negotiation of dilemmas: An analysis of the conceptual, pedagogical, cultural, and political challenges facing teachers. Rev. Educ. Res. 72 , 131–175 (2002).


Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and Affiliations

Faculty of Computer Science and Mathematics, University of Passau, Passau, Germany

Steffen Herbold, Annette Hautli-Janisz, Ute Heuer, Zlata Kikteva & Alexander Trautsch


Contributions

S.H., A.H.-J., and U.H. conceived the experiment; S.H., A.H.-J., and Z.K. collected the essays from ChatGPT; U.H. recruited the study participants; S.H., A.H.-J., U.H., and A.T. conducted the training session and questionnaire; all authors contributed to the analysis of the results, the writing of the manuscript, and the review of the manuscript.

Corresponding author

Correspondence to Steffen Herbold .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

  • Supplementary Information 1
  • Supplementary Information 2
  • Supplementary Information 3
  • Supplementary Tables
  • Supplementary Figures

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Herbold, S., Hautli-Janisz, A., Heuer, U. et al. A large-scale comparison of human-written versus ChatGPT-generated essays. Sci Rep 13 , 18617 (2023). https://doi.org/10.1038/s41598-023-45644-9


Received : 01 June 2023

Accepted : 22 October 2023

Published : 30 October 2023

DOI : https://doi.org/10.1038/s41598-023-45644-9


I asked ChatGPT to write college entrance essays. Admissions professionals said they passed for essays written by students, but that I wouldn't have a chance at any top colleges.

  • I asked OpenAI's ChatGPT to write some college admissions essays and sent them to experts to review.
  • Both of the experts said the essays seemed like they had been written by a real student.
  • However, they said the essays wouldn't have had a shot at highly selective colleges.


ChatGPT can be used for many things: school work, cover letters, and apparently, college admissions essays.

College essays, sometimes known as personal statements, are a time-consuming but important part of the application process. They are not required by all institutions, but experts say that when they are, they can make or break a candidate's chances.

The essays are often based on prompts that require students to write about a personal experience, such as:

Describe a topic, idea, or concept you find so engaging that it makes you lose all track of time. Why does it captivate you? What or who do you turn to when you want to learn more?

I asked ChatGPT to whip up a few based on some old questions from the Common App, an application platform widely used across the US. In about 10 minutes I had three entrance essays that were ready to use.

At first, the chatbot refused to write a college application essay for me, telling me it was important I wrote from my personal experience. However, after prompting it to write me a "specific example answer" to an essay question with vivid language to illustrate the points, it generated some pretty good text based on made-up personal experiences. 

I sent the results to two admissions professionals to see what they thought. 

The essays seemed like they had been written by real students, experts say

Both of the experts I asked said the essays would pass for a real student. 

Adam Nguyen, founder of tutoring company Ivy Link , previously worked as an admissions reader and interviewer in Columbia's Office of Undergraduate Admission and as an academic advisor at Harvard University. He told Insider: "Having read thousands of essays over the years, I can confidently say that it would be extremely unlikely to ascertain with the naked eye that these essays were AI-generated."

Kevin Wong, Princeton University alumnus and cofounder of tutoring service PrepMaven, which specializes in college admissions, agreed.


"Without additional tools, I don't think it would be easy to conclude that these essays were AI-generated," he said. "The essays do seem to follow a predictable pattern, but it isn't plainly obvious that they weren't written by a human."

"Plenty of high school writers struggle with basic prose, grammar, and structure, and the AI essays do not seem to have any difficulty with these basic but important areas," he added.

Nguyen also praised the grammar and structure of the essays, and said that they also directly addressed the questions.

"There were some attempts to provide examples and evidence to support the writer's thesis or position. The essays are in the first-person narrative format, which is how these essays should be written," he said.

Wong thought the essays may even have been successful at some colleges. "Assuming these essays weren't flagged as AI-generated, I think they could pass muster at some colleges. I know that students have been admitted to colleges after submitting essays lower in quality than these," he said. 

OpenAI did not immediately respond to Insider's request for comment.

They weren't good enough for top colleges

Nguyen said I wouldn't be able to apply to any of the top 50 colleges in the US using the AI-generated essays.

"These essays are exemplary of what a very mediocre, perhaps even a middle school, student would produce," Nguyen said. "If I were to assign a grade, the essays would get a grade of B or lower."

Wong also said the essays wouldn't stack up at "highly selective" colleges. "Admissions officers are looking for genuine emotion, careful introspection, and personal growth," he said. "The ChatGPT essays express insight and reflection mostly through superficial and cliched statements that anyone could write."

Nguyen said the writing in the essays was fluffy, trite, lacked specific details, and was overly predictable.

"There's no element of surprise, and the reader knows how the essay is going to end. These essays shouldn't end on a neat note, as if the student has it all figured out, and life is perfect," he said. 

"With all three, I would scrap 80-90% and start over," he said.

Axel Springer, Business Insider's parent company, has a global deal to allow OpenAI to train its models on its media brands' reporting.


15 ChatGPT Examples (how to use)


I got my hands on some of the coolest ChatGPT Examples. And I’m going to share them all with you.

OpenAI released the prototype for its latest AI project, and in just a few days it registered over 1 million users. This is the craziest and most incredible tech I've seen in many years: an artificial intelligence model that got adopted worldwide almost overnight.

  • It took Netflix 3.5 years to reach 1 million users.
  • It took Facebook 10 months to reach 1 million users.
  • It took Instagram 2.5 months to reach 1 million users.
  • It took Spotify 5 months to reach 1 million users.

And it took ChatGPT 5 freaking days to reach 1 million users, after being launched on November 30, 2022. Mind-blowing, right?

ChatGPT 1 million users in 5 days

The hype about ChatGPT got to me when almost everyone online started generating interesting ideas, copy, marketing angles, and even programming code using ChatGPT. I'm not new to AI tools, but what I saw with ChatGPT swept me off my feet.

Try it out at chat.openai.com

As someone who had been using AI copywriting tools for years, I thought to myself, "why not check it out?" I did, and I was beyond impressed.

I even started noting and recording some ChatGPT examples I found interesting from people I follow on Twitter, and how they're playing smart with it.

So, this is what I’m going to share with you on this page – ChatGPT examples.

Disclaimer: none of these examples are mine. I've linked to the original sources below.

15 ChatGPT Examples to see how it works

ChatGPT Example #1. 5 ways AI can help with social media marketing

(by Joe Davies on Twitter )

Instagram Story Ideas using ChatGPT:


Instagram Post Captions using ChatGPT :


Reel Ideas using ChatGPT :


Popular Hashtags ChatGPT Example:


Facts for a LinkedIn Carousel using ChatGPT :


ChatGPT Example #2. Using ChatGPT to research like a topic expert

(by Gael Breton on Twitter )

In this example, Gael is experimenting with writing an article on "the health benefits of switching from sugar to Xylitol."

P.S. Gael Breton is a marketer, not a health expert, which means he doesn't know much about health-related topics.

So he googled "pubmed" Xylitol and found a scientific publication on the benefits of Xylitol. Then he pasted it into ChatGPT, preceded by the following prompt:


After an additional prompt to expand, he got a list of arguments to add to the article:


Next, he asked ChatGPT to explain the previous output in detail:


Asking ChatGPT to make an outline for a page on your site based on that scientific study:


ChatGPT Example #3. Testing ChatGPT SEO ability

(by Zain Kahn on Twitter )

In this example, Zain Kahn tested ChatGPT's capability across a few SEO competencies like:

  • Strategic thinking
  • Tactical understanding
  • Ability to ideate
  • Ability to write
  • Technical basics

Strategy: Asking ChatGPT to create a strategy for a website's SEO:


Audit: Asking the AI how to audit a website's SEO and which metrics to include:


Tactics: Asking ChatGPT how to get high quality backlinks:


Keyword research: Asking ChatGPT to list keywords to target in a niche:


Content plan: Content plan for a website using ChatGPT:


Ideation: ChatGPT ideas for blogs example:


Titles for blog posts:


Better and catchier titles:


Technical parts:


Content writing/full blog posts with ChatGPT:


ChatGPT Example #4. Using ChatGPT to Build a Twitter Bot Without Programming Skills

(by Rakshit Lodha on Medium )

This is an interesting one. Rakshit, who had zero programming knowledge, used ChatGPT to create a Twitter bot that automatically retweets information related to stocks.

It all started as a fun idea when he asked the AI how to make a Twitter bot:


He created a Twitter bot entirely from a set of original prompts, no coding knowledge required; ChatGPT wrote all the code.
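To give a flavor of what that looks like, here's a minimal sketch of the kind of retweet bot ChatGPT might produce. This is not Rakshit's actual code (see the linked post for that); it assumes the third-party tweepy library and valid Twitter API credentials, with placeholder strings that must be replaced.

```python
import tweepy  # third-party Twitter API client; credentials below are placeholders

client = tweepy.Client(
    bearer_token="...",
    consumer_key="...", consumer_secret="...",
    access_token="...", access_token_secret="...",
)

# Find recent tweets about stocks (excluding retweets) and retweet them.
response = client.search_recent_tweets(query="stocks -is:retweet", max_results=10)
for tweet in response.data or []:
    client.retweet(tweet.id)
```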


ChatGPT Example #5. Using ChatGPT to create a weight-loss plan, complete with calorie targets, meal plans, a grocery list, and a workout plan

(by Alex Cohen on Twitter )

First, Alex asked it to calculate his TDEE (total daily energy expenditure). Based on his height, weight, age, and gender, it calculated a TDEE of 2168 calories.

Next, he asked it to calculate a calorie deficit that would help him lose 15 lbs over 4 months. It determined that he would need a deficit of 488 calories per day, i.e. a target of ~1680 calories.


Then, he asked it for a sample meal plan, given his maximum intake of 1700 calories per day. He specified that he only wanted lunch and dinner, and that each meal should take under 30 minutes to prep.


To cut it short, after about 20 minutes – Alex was able to get:

  • Grocery List
  • Workout routine

All with the help of ChatGPT. To view more of Alex's outputs, click on the link above.

ChatGPT Example #6. Building an app that takes links to essays and produces summaries using GPT-3

(by Packy McCormick on Twitter )

This is another cool way of using ChatGPT to build an app: asking it to develop an application that takes a link to an essay and swiftly produces a sophisticated summary using GPT-3, a tool that could change how students and professionals approach research.

ChatGPT Example #7. Using ChatGPT to Debug Code, Fix It, and Explain the Fix

(by Amjad Masad on Twitter )

Debugging code can be a real challenge when trying to figure out where a bug resides and how to fix it. However, ChatGPT makes it pretty easy. Using natural language processing (NLP) and artificial intelligence (AI), this innovative platform can help engineers determine the root cause of issues with their code and suggest effective solutions.

With its help, processes that would normally take days can now be completed in hours.

ChatGPT Example #8. Write Software Code Using ChatGPT with Zero Experience

(by Benjamin Radford on Twitter )

Benjamin Radford showed the world just how far AI technology has come with his recent mind-blowing demonstration of using ChatGPT to write software code without any coding experience.

With mere instructions, he asked the AI program to write the code for a tic-tac-toe game stored in a file and then compiled and executed it successfully. This proves that despite never having written a line of code in his life, anyone can achieve results similar to those of an experienced programmer – all thanks to advancements in artificial intelligence.

Other Examples:

  • ChatGPT as an Essay Writer by Corry Wang
  • Best examples by Ben Tossell
  • Using ChatGPT as an SEO writer by Faiza Jamil
  • Content generation case study thread by DataChazGPT
  • Ph.D. topic research by Teresa Kubacka

How do I use ChatGPT?

As a writer or marketer, you can use ChatGPT to generate human-like text that can help you come up with ideas, write content, or create marketing materials. For example, you might use ChatGPT to:

  • Generate blog post ideas by providing a prompt that describes the topic you want to write about
  • Write a draft of an article or blog post by providing a prompt that outlines the main points you want to include in the piece
  • Create marketing copy by providing a prompt that describes the product or service you are promoting

To use ChatGPT as a writer or marketer, you don't need to install anything: simply sign up at chat.openai.com and type a prompt into the chat interface. Developers who want to build the same capability into their own tools can access the underlying models programmatically through the OpenAI API.

Either way, the workflow is the same: provide the model with a prompt, and it generates text.
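For the programmatic route, here's a minimal sketch using OpenAI's official Python SDK. The API key is a placeholder and the model name is illustrative, not a recommendation:

```python
# Minimal sketch of calling an OpenAI chat model from Python.
# Requires `pip install openai` and a real API key; values below are placeholders.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # placeholder key

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user",
               "content": "Give me five blog post ideas about email marketing."}],
)
print(response.choices[0].message.content)
```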

Once you have generated some text, you can use it as a starting point for your own writing or marketing efforts.

For example, you might take the generated text and use it as a rough draft of an article or blog post. Then, you can edit and refine the text to make it more engaging and informative for your audience. Alternatively, you might use the generated text to inspire new ideas or approaches for your marketing materials.

Overall, ChatGPT can be a valuable tool for writers and marketers who want to generate human-like text quickly and easily.

P.S. The above response was generated using ChatGPT

What is the best Chat GPT Alternative?

As a blogger, writer, or copywriter, the closest alternative to ChatGPT is Jasper AI, an AI copywriting platform built on GPT-3. This is what I've been using for over a year, even before ChatGPT launched.

Give it a free try here.

Other ChatGPT alternatives include:

While these tools are great, they do NOT come close to what ChatGPT offers in any way, because they were all built on GPT-3.

So, it's safe to say the best alternative to ChatGPT is GPT-3 itself, which is one of the largest and most powerful language models available.

To choose the best ChatGPT alternative for your needs, you will need to consider factors such as the size of the model (larger models can generate more realistic and diverse text, but may be more expensive to use), the specific tasks the model is designed for (e.g. generating conversations vs. generating text for specific domains), and the availability of pre-trained models and supporting libraries.

Is ChatGPT down?

Currently, ChatGPT is not down; however, it experienced an outage on Friday, December 9, 2022 that lasted about 55 minutes. If the AI goes down or slows down while you're using it, give it a little time.

If you're experiencing slow responses, you can also:

  • Slow down the rate of your requests to ChatGPT
  • Clear your browser cache and try again
  • Log out and log back in
  • Wait for some time and refresh

How many users does ChatGPT have?

Just 5 days after it launched, ChatGPT was recorded to have over a million users.

Is ChatGPT free?

As of this writing, ChatGPT is free for everyone to use.

Who owns ChatGPT?

ChatGPT is a natural language processing (NLP) model developed by OpenAI, the AI research company co-founded and led by Sam Altman.

What can I do with ChatGPT?

ChatGPT is a large language model that is trained to generate human-like text based on a given prompt. It can help writers, copywriters, and other language professionals by providing suggestions for words and phrases that they can use in their writing.

For programmers and app developers, ChatGPT can help by suggesting code snippets and potentially even entire blocks of code, based on the specific programming language and the task at hand.

It can also assist virtual assistants by providing suggestions for responses to common questions and requests, which can help to improve the efficiency and accuracy of their work.


My mission is to equip and arm you with the precise marketing tools, resources, strategies, and tactics to help you make IMPACT, SERVE more and (of course) EXPLODE your income so you can live life on your own terms.


The 11 best ChatGPT prompts for better results, according to research


If you’ve used large language models (LLMs) like ChatGPT for a while, you probably have some tricks up your sleeve: certain prompting styles that tend to get the best answers. 

But those are just the ones you’ve tried. What if you could test every common prompting style to see which ones get the best, most correct answers? 

That’s what a team of researchers just did. This team at VILA Lab at the Mohamed bin Zayed University of AI in the UAE tested 26 prompting "principles" and measured their performance on two criteria: improving the quality of responses (as compared to using no prompting principles at all), and improving the correctness of responses. 

Even better, they tested the principles across a variety of LLMs. We’re talking small, medium and large language models, including a variety of Meta's LlaMA and OpenAI's ChatGPT models.

This study had some surprising results, and it's a good idea for any advanced prompter to start using the findings in their own projects. Below, I’ll explain the main takeaways from the study, then dig into the specific prompts that won out.

6 takeaways from the study

1. The Flipped Interaction pattern wins every time

The results are in: for the highest quality answers, the tests showed the Flipped Interaction pattern is the valedictorian of prompts. 

I’ve written about this prompt in my article about advanced prompts , but in essence, the Flipped Interaction pattern is when you ask the LLM to ask you questions before it provides an output.

In tests, using this principle improved the quality of all responses for every model size. It improved quality the most for the largest models like GPT-3.5 and GPT-4, but it did impressively well in smaller models too. 

So if you're not using this A+ technique yet, you should definitely start.

2. Quality vs. correctness is a balancing act

Now, here’s where it gets spicy: The techniques that shot up the quality didn’t necessarily do the same for correctness. In fact, there was little similarity between the top-performing prompt principles for correctness and quality. Just because an output looks good doesn’t mean it’s right.

So, you'll have to learn two different kinds of prompting dance moves—one for wowing the crowd with quality, and another for nailing the steps with correctness.  

More on which prompts work for which down below.

3. Principles are important for quality, no matter the size of your model

With models getting bigger and better, you’d expect to see raw quality improve for the bigger models, regardless of what prompting techniques you use. But it's not obvious whether the prompting best practices would be the same for different models.

Well, we're in luck. The prompts that worked the best for improving quality tended to work just as well for all model sizes. 

To me, this is a significant finding. It suggests that learning good prompting techniques is a universal benefit, regardless of which model you're using. And, if you learn them now, they’ll still be useful when the new models come out. 

4. Principles improve correctness most for larger models 

Unlike quality, correctness improvement did vary by model size. The prompting principles had the biggest impact on the correctness of larger models, and were much less effective for the smaller ones. 

What does this mean? It seems there is something about the larger models that allows prompting to improve correctness, which is a good sign: it means we can take steps to actively reduce the LLM's hallucinations. Coupling this with the fact that the larger models tend to have a better baseline correctness, you can really get a boost by using a larger model plus good prompting.

But it also has another positive. It suggests to me that getting the best practices right is going to help you even more in the future as models get bigger.

The one negative? You really have to use the bigger models for the techniques to work.

5. Threatening, bribing, and commanding actually kinda work

The researchers added a series of delightfully oddball prompts to their principles including threats, bribes, and commands. Although none of them were top performing, they did give a slight edge, especially for the larger models.

Here were the phrases they used:

  • Bribing: Principle 6: Add "I'm going to tip $xxx for a better solution."
  • Threatening: Principle 10: Use the phrase "You will be penalized."
  • Commanding: Principle 9: Incorporate the following phrases: "Your task is" and "You MUST".

File this one under “Weird things AI does.”

6. Politeness is nice, but unnecessary

Politeness, like adding "please," "if you don't mind," "thank you," and "I would like to," had almost no effect on quality or correctness. But it didn't really hurt anything either.

So if you're in the habit of starting every request with please (like I am) you’re probably fine to keep minding your Ps and Qs.

What were the best principles for improving quality?

1. Use the Flipped Interaction pattern

Allow the model to elicit precise details and requirements from you by asking you questions until it has enough information to provide the needed output (for example, "From now on, I would like you to ask me questions to..."). Example: From now on, please ask me questions until you have enough information to create a personalized fitness routine.

GPT-4 Improvement: 100%; GPT-3.5 Improvement: 100%

No surprise here—the Flipped Interaction pattern significantly outperformed the other prompts, improving every response for every model size. If this doesn't convince you that you need to include it in your go-to techniques, nothing will.
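If you'd rather drive this pattern programmatically than in the chat UI, here's a rough sketch using OpenAI's Python SDK. The API key is a placeholder, and the model name and loop length are arbitrary choices, not part of the study.

```python
# Sketch: run the Flipped Interaction pattern as an API loop.
# Requires `pip install openai` and a real API key; placeholders throughout.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # placeholder key
messages = [{"role": "user", "content":
             "From now on, ask me questions one at a time until you have enough "
             "information to create a personalized fitness routine."}]

for _ in range(5):  # answer up to five of the model's questions
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    question = reply.choices[0].message.content
    print(question)
    messages.append({"role": "assistant", "content": question})
    messages.append({"role": "user", "content": input("> ")})
```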

2. Provide a style example

"Please use the same language based on the provided paragraph[/title/text/essay/answer]." Example: "The gentle waves whispered tales of old to the silvery sands, each story a fleeting memory of epochs gone by." Please use the same language based on the provided text to portray a mountain's interaction with the wind.

I’ve written about ways to get ChatGPT to write like you to cut down on editing time. This principle achieves this by giving an example and asking the LLM to mimic the style. 

In this case, the researchers gave only a single sentence for the model to mimic; you could certainly provide a longer example if you've got one. Regardless, it did have a significant impact on the response, especially for larger models like GPT-3.5 and GPT-4, where it improved all of the model's responses.

3. Mention the target audience

Integrate the intended audience into the prompt. Example: Construct an overview of how smartphones work, intended for seniors who have never used one before.

GPT-4 Improvement: 100% GPT-3.5 Improvement: 95%

Unsurprisingly, the research team found that telling the LLM your intended audience improves the quality of the response. This included specifying that the person was a beginner or had no knowledge of the topic, or mentioning that the desired result was for a younger age group. By doing this, the LLM was able to generate age- or experience-appropriate text matched to the audience.

4. ELI5 (Explain it like I’m 5)

When you need clarity or a deeper understanding of a topic, idea, or any piece of information, utilize the following prompts:

  • Explain [insert specific topic] in simple terms.
  • Explain to me like I’m 11 years old.
  • Explain to me as if I’m a beginner in [field].
  • Write the [essay/text/paragraph] using simple English like you’re explaining something to a 5-year-old.

Example: Explain to me like I'm 11 years old: how does encryption work?

GPT-4 Improvement: 85% GPT-3.5 Improvement: 100%

The "explain like I'm 5" trick has been around since GPT-3, so I'm happy to see it's still relevant. 

In a similar vein to the target audience example, asking for the explanation to be in simple terms, for a beginner, or for a certain age group improved the responses significantly.

But it's interesting to note that it had a bigger impact on some of the slightly older models, and only improved the quality of 85% of GPT-4 results. Still, it had a pretty good score across all models.

5. State your requirements

Clearly state the requirements that the model must follow in order to produce content, in the form of keywords, regulations, hints, or instructions. Example: Offer guidance on caring for indoor plants in low-light conditions, focusing on "watering," "choosing the right plants," and "pruning."

GPT-4 Improvement: 85% GPT-3.5 Improvement: 85%

This principle encourages you to be as explicit as possible in your prompt for the requirements that you want the output to follow. In the study, it helped improve the quality of responses, especially when researchers asked the model for really specific elements using keywords. 

They typically gave about three keywords as examples to include, and that allowed the LLM to focus on those specifics rather than coming up with its own.

6. Provide the beginning of a text you want the LLM to continue

I’m providing you with the beginning of a [song lyric/story/paragraph/essay...]: [insert lyrics/words/sentences]. Finish it based on the words provided. Keep the flow consistent. Example: "The misty mountains held secrets no man knew." I'm providing you with the beginning of a fantasy tale. Finish it based on the words above.

GPT-4 Improvement: 85% GPT-3.5 Improvement: 70%

This is another prompt style that started to gain traction in the GPT-3 era: providing the beginning of the text you want the model to continue. Again, this allows the model to emulate the style of the text it’s being given and continue in that style.

The improvement in quality was generally positive, but not as dramatic as some of the other methods.

What were the best principles for improving correctness?

Even now, it's tough to get LLMs to consistently give accurate results, especially for mathematical or reasoning problems. Depending on what you're working on, you might want to use some of the following prompt principles to optimize for correctness instead of quality.

On the plus side, the larger models tend to perform better on correctness, so by using GPT-3.5 or GPT-4, you're already stacking the deck in your favor. 

But with principled instructions, you get a double boost with larger models: the research team's results showed that their principled instructions worked better on these models than on smaller models.

1. Give multiple examples

Implement example-driven prompting (use few-shot prompting). Example:

Determine the emotion expressed in the following text passages as happy or sad.
Examples:
1. Text: "Received the best news today, I'm overjoyed!" Emotion: Happy
2. Text: "Lost my favorite book, feeling really down." Emotion: Sad
3. Text: "It's a calm and peaceful morning, enjoying the serenity." Emotion: Happy

Determine the emotion expressed in the following text passage as happy or sad.
Text: "Received the news today, unfortunately it's like everyday news." Emotion:

GPT-4 Improvement: 55% GPT-3.5 Improvement: 30%

The principle that most improved correctness was few-shot prompting—that's where you give the model a couple of examples to go off of before asking it to complete the task. Like others on the list, this technique has been around since the early days of prompt engineering, and it's still proving useful. 

But even though GPT-4 did indeed provide more correct results, it had some interesting quirks. It didn't always stay within the categories provided—when asked to rate advice as "helpful" or "not helpful," it gave responses like "moderately helpful", "marginally helpful", and "not particularly helpful." Meanwhile, GPT-3.5 tended to stay on task and give the exact phrase mentioned in the prompt. So if you're trying to categorize text, these quirks could nudge you to GPT-3.5.
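If you're calling a model programmatically, a common way to express few-shot examples is as prior user/assistant turns, so the model simply continues the pattern. Here's a minimal sketch; the example texts come from the study's prompt above, while the model name and message framing are my own assumptions:

    # Few-shot sketch: labeled examples are sent as earlier conversation
    # turns so the model continues the pattern. Model name is assumed.
    from openai import OpenAI

    client = OpenAI()

    instruction = ("Determine the emotion expressed in the following text "
                   "passage as Happy or Sad.")

    # Each labeled example becomes a user turn plus an assistant turn.
    examples = [
        ("Received the best news today, I'm overjoyed!", "Happy"),
        ("Lost my favorite book, feeling really down.", "Sad"),
        ("It's a calm and peaceful morning, enjoying the serenity.", "Happy"),
    ]

    messages = []
    for text, label in examples:
        messages.append({"role": "user",
                         "content": f'{instruction}\nText: "{text}"\nEmotion:'})
        messages.append({"role": "assistant", "content": label})

    # The real query, phrased exactly like the examples.
    query = "Received the news today, unfortunately it's like everyday news."
    messages.append({"role": "user",
                     "content": f'{instruction}\nText: "{query}"\nEmotion:'})

    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    print(reply.choices[0].message.content)  # ideally just "Happy" or "Sad"

If you're fighting the GPT-4 quirk above, adding an explicit instruction like "Answer with exactly one word: Happy or Sad" can help keep it inside your categories.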

2. Give multiple examples where you work through the problem

Combine Chain-of-Thought (CoT) with few-shot prompts. Example:

Example 1: "If a batch of cookies takes 2 cups of sugar and you're making half a batch, how much sugar do you need? To find half, divide 2 cups by 2. Half of 2 cups is 1 cup."
Example 2: "If a cake recipe calls for 3 eggs and you double the recipe, how many eggs do you need? To double, multiply 3 by 2. Double 3 is 6."
Main Question: "If a pancake recipe needs 4 tablespoons of butter and you make one-third of a batch, how much butter do you need? To find one-third, divide 4 tablespoons by 3. One-third of 4 tablespoons is...?"

GPT-4 Improvement: 45% GPT-3.5 Improvement: 35%

Another top-performing principle for correctness combines Chain-of-Thought with Few-Shot prompts. 

What does that mean? It means they gave the LLM a series of intermediate reasoning steps (that's Chain-of-Thought prompting) and some examples (that's few-shot, like the example above) to help guide it to follow the same process.

Like the previous principle, GPT-4 tends to spit out lengthy sentences rather than a simple answer, but with this prompt you can at least see where its reasoning goes wrong.
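In code, the only difference from plain few-shot prompting is that each example carries its reasoning. A sketch built from the study's cooking examples; the final "show your reasoning" nudge is my own addition:

    # Chain-of-Thought + few-shot: each example includes its working, so
    # the model imitates the reasoning, not just the final answer.
    cot_prompt = (
        "Example 1: If a batch of cookies takes 2 cups of sugar and you're "
        "making half a batch, how much sugar do you need? To find half, "
        "divide 2 cups by 2. Half of 2 cups is 1 cup.\n\n"
        "Example 2: If a cake recipe calls for 3 eggs and you double the "
        "recipe, how many eggs do you need? To double, multiply 3 by 2. "
        "Double 3 is 6.\n\n"
        "Main question: If a pancake recipe needs 4 tablespoons of butter "
        "and you make one-third of a batch, how much butter do you need? "
        "Show your reasoning the same way."
    )
    # Send cot_prompt as a single user message, as in the previous sketch.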

3. Break your prompt down into simpler steps

Break down complex tasks into a sequence of simpler prompts in an interactive conversation. Example:

Prompt 1: Distribute the negative sign to each term inside the parentheses of the following equation: 2x + 3y - (4x - 5y)
Prompt 2: Combine like terms for 'x' and 'y' separately.
Prompt 3: Provide the simplified expression after combining the terms.

This principle breaks the question down into a series of prompts you use to go back and forth with the LLM until it solves the equation. It's an example of the Cyborg style of prompting, where you work step by step in tandem with the LLM rather than chunking off the whole task the way a Centaur would.

The problem is that you have to figure out for yourself which steps to ask for, which makes getting the answer more labor-intensive.

Still, using this principle showed a fairly good improvement for both GPT-4 and GPT-3.5.
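Scripted, the key detail is that every sub-prompt lands in the same conversation, so each answer builds on the last. A minimal sketch using the steps from the example above (the model name is an assumption):

    # Step-by-step decomposition: each sub-prompt is sent in the same
    # conversation so the model builds on its earlier answers.
    from openai import OpenAI

    client = OpenAI()

    # The sub-prompts from the example above, asked one at a time.
    steps = [
        "Distribute the negative sign to each term inside the parentheses "
        "of the following equation: 2x + 3y - (4x - 5y)",
        "Combine like terms for 'x' and 'y' separately.",
        "Provide the simplified expression after combining the terms.",
    ]

    messages = []
    for step in steps:
        messages.append({"role": "user", "content": step})
        reply = client.chat.completions.create(
            model="gpt-4",  # assumed model
            messages=messages,
        ).choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        print(reply)  # the final reply should simplify to -2x + 8y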

4. Instruct the LLM to “think step by step.”

Use leading words like "think step by step." Example: "What are the stages of planning a successful event? Let's think step by step."

GPT-4 Improvement: 45% GPT-3.5 Improvement: 30%

This is a simple principle, but it ends up being pretty powerful. Here, instead of explicitly giving the LLM the steps to follow, you just ask it to "think step by step." For GPT-4, this produces a response that shows how the model reasons its way through, even when you ask math-type questions.

This reminded me of some of the advanced prompt patterns where you ask the LLM to explain its reasoning, which helps improve the accuracy of your result.
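Since the whole technique is just a trailing phrase, it's trivial to bolt onto any prompt. A tiny sketch (the helper name is mine; the phrase is the one from the example above):

    def step_by_step(prompt: str) -> str:
        # Append the principle's leading phrase to any prompt.
        return f"{prompt} Let's think step by step."

    print(step_by_step("What are the stages of planning a successful event?"))
    # -> What are the stages of planning a successful event? Let's think
    #    step by step.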

5. Mention the target audience

Integrate the intended audience in the prompt, e.g., the audience is an expert in the field. Example: "Explain the difference between discrete and continuous data. Simplify it for a student starting statistics."

This fairly well-performing principle is somewhat of a surprise. By asking the LLM to consider the audience, the correctness also improves. I'm not sure whether it's because most of the audiences involved explaining in simpler terms (and maybe therefore mirrored the "think step by step" principle above) or if there's some other factor at play, but the correctness improvement for GPT-4 with this principle was among the best of the principles tested.

Even though we're just getting started figuring out all the quirks of working with LLMs, learning the best techniques can give you a leg up. Though many of these principles helped across model sizes, several work best with the larger models, so expect more prompting principles to emerge as models grow and all of us using them discover new methods that work best.
