
Using ChatGPT to Write Quiz Questions

The following are notes from OTL’s session for the AI in Action series.

Mark Kaysar and Agustin Ríos used AI to create quiz questions with different question types. They also prompted ChatGPT to properly format a quiz for uploading into Bruin Learn.

Can ChatGPT help faculty create and format low-stakes assessments for use in Canvas?

Demo Question Generation

  • Question Types
    • Multiple Choice
    • True/False
    • Fill in the blank
    • Short Answer
    • Short Essay
  • Adding feedback to answers

Quiz Formatting

When working in ChatGPT, include the format instructions shown below to receive questions that are ready for upload to Canvas. ChatGPT will usually respond in the format you specify; if it does not, resubmit your request and ask it to reformat.

Please create all quiz questions using the following format. Each choice starts with a lowercase letter 
(a, b, c, d, etc.) followed by a closing parenthesis. The correct choice is designated with an asterisk.

1. What is 2+3?
a) 6
b) 1
*c) 5
d) 10
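Because the format is strict, ChatGPT's output can be checked mechanically before upload. The following is an illustrative Python sketch (not part of the session materials; all names are invented) that parses quiz text in this format and identifies the starred answer:

```python
import re

# Patterns for the format above: "1. Question text" and "*c) choice text"
QUESTION = re.compile(r"^\d+\.\s+(.*)")
CHOICE = re.compile(r"^(\*?)([a-z])\)\s+(.*)")

def parse_quiz(text):
    """Parse numbered questions with a)-style choices; '*' marks the key."""
    questions, current = [], None
    for raw in text.strip().splitlines():
        line = raw.strip()
        if m := QUESTION.match(line):
            current = {"question": m.group(1), "choices": [], "answer": None}
            questions.append(current)
        elif (m := CHOICE.match(line)) and current is not None:
            starred, letter, body = m.groups()
            current["choices"].append((letter, body))
            if starred:
                current["answer"] = letter
    return questions

sample = """1. What is 2+3?
a) 6
b) 1
*c) 5
d) 10"""
parsed = parse_quiz(sample)
print(parsed[0]["answer"])   # -> c
```

A check like this catches questions where ChatGPT forgot the asterisk or used the wrong choice labels before you attempt the Canvas upload.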

Sample ChatGPT Prompts

Provide a topic

Can you write some multiple choice questions about [Insert topic]?

Provide a link to a reading

Please write multiple choice questions to evaluate this content.
[paste content]

1.2 The Weakness of Early Patent Systems - Introduction to Intellectual Property | OpenStax

Provide a link to a Canvas page

Please write multiple choice questions to evaluate this content.

Can you provide feedback on why the incorrect items are incorrect?
[paste content]

https://canvas.ucdavis.edu/courses/34528/pages/being-present-in-your-online-course

Provide a desired outcome

Please write multiple choice questions that evaluate this outcome.

[paste outcome]

Analyze how the costume designer’s interpretation of the screenplay is affected by the 
tone of the politics of the era and the pressures from the studio

Provide a video script or video captions

Please write multiple choice questions to evaluate this content.

[paste content]

Provide a set of answers

Please write multiple choice questions that have the following as answers.

[paste answers]

Provide similar questions to use as a model

Please write multiple choice questions that evaluate the same content as this question.

[paste question]

Uploading to Canvas

Before you can upload the quiz into Canvas, you need to save the questions in QTI format.

[The two upload steps are illustrated in the original session document.]

This will create a quiz with your new questions.
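For reference, QTI is an XML-based format. The sketch below is an illustration only, not the exact structure Canvas requires: it shows roughly what one multiple-choice item looks like in QTI 1.2 style. A real upload additionally needs the items wrapped in an assessment and zipped with a manifest, which converter tools handle automatically.

```python
import xml.etree.ElementTree as ET

def mc_item(question, choices, correct):
    """Build one multiple-choice item in rough QTI 1.2 style."""
    item = ET.Element("item", title=question)
    pres = ET.SubElement(item, "presentation")
    ET.SubElement(ET.SubElement(pres, "material"), "mattext").text = question
    lid = ET.SubElement(pres, "response_lid",
                        ident="response1", rcardinality="Single")
    render = ET.SubElement(lid, "render_choice")
    for letter, text in choices:
        label = ET.SubElement(render, "response_label", ident=letter)
        ET.SubElement(ET.SubElement(label, "material"), "mattext").text = text
    # Scoring rule: the response matching `correct` earns full credit.
    proc = ET.SubElement(item, "resprocessing")
    cond = ET.SubElement(proc, "respcondition")
    ET.SubElement(ET.SubElement(cond, "conditionvar"), "varequal",
                  respident="response1").text = correct
    ET.SubElement(cond, "setvar", action="Set", varname="SCORE").text = "100"
    return ET.tostring(item, encoding="unicode")

xml = mc_item("What is 2+3?",
              [("a", "6"), ("b", "1"), ("c", "5"), ("d", "10")], "c")
```

In practice you would not write this XML by hand; the point is only that the a)/b)/*c) text format carries everything a converter needs: the stem, the choices, and the keyed answer.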

See also the original session document.

Commonly Used AI Terms



ChatGPT Terms


Note: These terms are based on the version of ChatGPT freely available as of this writing (May 2023). ChatGPT Plus, a subscription service, uses GPT-4. The technology is changing rapidly, and these terms will change as well. For example, as of this writing, ChatGPT can only take a text prompt, but the ability to upload images is expected to become available in GPT-4.

  • ChatGPT: A system/service/tool that generates text from a “prompt” input.
  • Prompt: Text input, such as a sentence, paragraphs, or whole pages of text. The prompt is the request for which ChatGPT generates a response.
  • Session: A set of interactions with ChatGPT in a browser. A user’s interaction with ChatGPT within a session helps refine the prompts and the output. Once a user closes their browser or logs out, the session terminates. Unlike Siri, Alexa, and Google Assistant, at this point in its development ChatGPT does not “remember” a user and does not carry over conversations between sessions.
  • GPT: “Generative Pre-trained Transformer” – A type of artificial intelligence model.
    • Generative: The system creates output
    • Pre-Trained: The system comes already trained on a large body of data before it is put to use
    • Transformer: A type of neural network architecture designed to process sequences, such as text
  • GPT-3, GPT-3.5, GPT-4: The models that power services such as DALL-E and ChatGPT.
    • The initial public release of ChatGPT was based on GPT-3, then quickly updated to use GPT-3.5. GPT-3 and GPT-3.5 are examples of “large language models.”
    • In March 2023, OpenAI announced GPT-4 with improved functionality and expanded capabilities, including the ability to work with images. As such, GPT-4 is a “large multimodal model.”

Additional AI Terms


When one approaches artificial intelligence from a field other than computer science, the use of familiar terms from other contexts can be confusing.

  • Model: At its simplest, a calculating processor. Like a handheld calculator, a model takes an input and produces an output. There are many types of models; each takes a particular type of input, such as text or images, or a combination, such as text and images.
  • Training: Providing input to a model from a source. For GPT-x, the “training data” is a corpus of pre-2022 documents from the Internet.
  • Bias: In a statistical context, bias refers to a non-random data set used to train a model. For example, a data set intended for a world language application but drawn only from English novels, or a data set intended for general architectural design but consisting only of images of seaside houses, are said to be “biased training data.”
  • Learning: Found in computer science and statistical analysis terms such as “machine learning” and “deep learning.” In the context of artificial intelligence, “learning” is when a system ingests information and, through its processing model, develops output. Two common forms of machine learning are:
    • Supervised Learning: A model using labeled data, such as a collection of images labeled as “bicycles.”
    • Unsupervised Learning: A model using unlabeled data.

To understand the difference between these two forms, with supervised learning, think of a kindergarten teacher who shows a child objects made from Legos and tells the student, “Here are 5 examples of houses, and here are 5 examples of cars.” By instructing the student which items are houses and which are cars (i.e., labeling the data), the teacher is supervising the learning. The child then learns to distinguish which features of a house make it characteristic of a house (such as having a porch, windows, and doors), and which features of a car make it characteristic of a car (such as having an engine, wheels, and seats).

In contrast, with unsupervised learning, the data has no labels. The kindergarten teacher would just say to the child, “Here are 10 Lego objects,” but does not tell the student which 5 objects are cars and which 5 objects are houses (i.e., unlabeled data). The child then compares the objects to each other and identifies patterns. Without knowing what the objects are called, the student may realize that 5 objects share common characteristics and therefore groups them together. The other 5 objects look similar to each other, and so the student puts them into a separate group.
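The Lego analogy can be sketched in a few lines of code. In this illustrative example (the ten objects and their two numeric "features" are invented), the same data is used once with labels (supervised) and once without (unsupervised):

```python
# Ten toy "objects", each described by two numeric features.
objects = [(1, 1), (1, 2), (2, 1), (2, 2), (1.5, 1.5),    # house-like
           (8, 8), (8, 9), (9, 8), (9, 9), (8.5, 8.5)]    # car-like

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Supervised: labels are given, so a new object can be classified
# by its nearest labeled neighbor (a 1-nearest-neighbor classifier).
labels = ["house"] * 5 + ["car"] * 5
def classify(point):
    return min(zip(objects, labels), key=lambda ol: dist(point, ol[0]))[1]

print(classify((1.2, 1.8)))   # -> house

# Unsupervised: no labels. Group the objects around two candidate
# centers (one step of k-means); groups emerge from similarity alone.
centers = [objects[0], objects[5]]
groups = [min(range(2), key=lambda i: dist(o, centers[i])) for o in objects]
print(groups)                 # -> [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```

Note that the unsupervised grouping recovers the same two clusters as the labels, but it cannot name them "house" and "car"; that is exactly the teacher's (the labeler's) contribution in supervised learning.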

In a labeled text corpus, words are annotated with parts of speech, semantic roles, syntactic structures, language identifiers, and other characteristics. In the earlier days of artificial intelligence, computer time was very costly, so creating curated datasets was a computational necessity. While having experts hand-code supervised learning datasets is time-consuming and expensive, it results in very high-quality output. For example, when the Brown Corpus (1961), the first million-word electronic dataset, was developed, its 500 text samples were each manually tagged with a genre.

Unsupervised learning is computationally more challenging because the training data has no labels. But modern computing resources are inexpensive and readily available, so models can now process far greater amounts of raw data.


The term “deep learning” has different meanings in teaching and learning than in artificial intelligence. Kim DeBacco shares this definition from the realm of teaching and learning: “Marton and Säljö (the original authors of the theory), along with Entwistle, Ramsden, Prosser, and Trigwell, differentiate deep learning (e.g., the ability to apply new concepts to a different context) from surface learning (e.g., cramming for exams). Surface learning relies on short-term memory: in one ear, and out the other! The constructive alignment theory (sometimes called backward design) holds that if course-level learning outcomes, activities, and assessments are closely aligned, then students will engage in deep learning which lasts over time.”

In Deep Learning (MIT Press, 2016), Goodfellow et al. provide this definition: “A major source of difficulty in many real-world artificial intelligence applications is that many of the factors of variation influence every single piece of data we are able to observe. … Deep learning solves this central problem in representation learning by introducing representations that are expressed in terms of other, simpler representations. Deep learning enables the computer to build complex concepts out of simpler concepts.” (5)

  • Network: In artificial intelligence, often used in the phrase “neural network.” Inspired by the neurons of the brain, this type of network uses linked computers, wherein the output of one “node” (simply: a computer) is sent to multiple other nodes. These nodes, layered in a hierarchical fashion like the boxes of a family tree or an org chart, form the network. The output of one layer becomes the input for the next layer, and the last layer produces the result from the previous computations.
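As a minimal illustration of layered nodes (the weights here are toy values chosen for this sketch), the following shows a two-layer "network" in which each layer's output becomes the next layer's input:

```python
def layer(inputs, weights):
    # Each node sums its weighted inputs and applies a simple threshold.
    return [1.0 if sum(w * x for w, x in zip(ws, inputs)) > 0 else 0.0
            for ws in weights]

hidden_w = [[ 1.0, -1.0],    # hidden node 1: fires when input 1 > input 2
            [-1.0,  1.0]]    # hidden node 2: fires when input 2 > input 1
output_w = [[ 1.0,  1.0]]    # output node: fires if either hidden node fired

def forward(x):
    # The hidden layer's output is fed to the output layer as its input.
    return layer(layer(x, hidden_w), output_w)

print(forward([2.0, 1.0]))   # -> [1.0]
```

Real neural networks use many more nodes, continuous activation functions, and weights adjusted automatically during training, but the layer-feeds-layer structure is the same.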

ChatGPT and AI

Alongside the introduction of DALL-E and Stable Diffusion (images), TOME (slide deck presentations), SoundRaw (music), and EDGE (dance), ChatGPT (text) marks the emergence of AI in our everyday lives. Because of the ability of these tools to generate human quality output, these technologies have spurred a great deal of conversation about what it means to be teachers, students, writers, artists, programmers, and others. The power for anyone to generate text, images, and code leads to new questions and considerations.



Jess Gregg hosted a UCLA CEILS Ed Talk, “What’s all the buzz about ChatGPT?” (February 8, 2023)

For Teaching and Learning

Topics shared and discussed at the UC Centers for Teaching and Learning forum, January 20, 2023

“Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach” – The New York Times, January 16, 2023: “Across the country, university professors like Mr. Aumann, department chairs and administrators are starting to overhaul classrooms in response to ChatGPT, prompting a potentially huge shift in teaching and learning. Some professors are redesigning their courses entirely, making changes that include more oral exams, group work and handwritten assessments in lieu of typed ones.”

Practical Responses to ChatGPT – Montclair State University: “ChatGPT is not without precedent or competitors (such as Jasper, Sudowrite, QuillBot, Katteb, etc). Souped-up spell-checkers such as Grammarly, Hemingway, and Word and Google-doc word-processing tools precede ChatGPT and are often used by students to review and correct their writing. Like spellcheck, these tools are useful, addressing spelling, usage, and grammar problems, and some compositional stylistic issues (like overreliance on passive voice). However, they can also be misused when writers accept suggestions quickly and thus run the danger of accepting a poor suggestion.”

  • The Latest Technology: ChatGPT and other Generative AI bots
    • What is ChatGPT?
  • Practical Suggestions to Mitigate Non-Learning/Cheating
    • Course Design and Pedagogy
    • Assignment Design
    • Extend Flipped Learning: Class Time Can Be for Writing/Creating
    • When All Else Fails
  • Resources and References

Using Artificial Intelligence in the Classroom – U Wisconsin-Madison: “Like with any new technology, one approach instructors can take in the classroom is to transparently make use of it. This allows you to move beyond stress over whether or not students are using AI, and instead have a voice in when, why, and how you and your class use it.”

  • Using Artificial Intelligence in the Classroom
  • Steps to Consider
  • Exploring Capabilities and Limitations of AI in Your Classroom
  • Establishing Norms and Boundaries for Using AI in Your Classroom
  • Integrating AI into the Writing Process in Your Classroom
  • References & Further Reading

Update Your Course Syllabus for ChatGPT: “… some easy to implement suggestions that will help you prepare for the upcoming semester”

Schools Must Embrace the Looming Disruption of ChatGPT: “Educators face a choice: They can dig in their heels, attempting to lock down assignments and assessments, or use this opportunity to imagine what comes next.”

Screenshot of ChatGPT from CEILS Senior Associate Director for Educational Development Jess Gregg:

ChatGPT screenshot. Question: “Is ChatGPT racist?” Answer: “As a language model, I do not have personal beliefs or biases. I am trained on a large dataset of text, which may contain both racist and anti-racist content. My responses will reflect the information I have been trained on. However, OpenAI is committed to using AI in a way that is beneficial for all, and actively works to mitigate any potential biases in its models.”

Additional Topics for Discussion


Ethical Concerns


Microsoft shuts down AI chatbot after it turned into a Nazi – “Yesterday the company launched “Tay,” an artificial intelligence chatbot designed to develop conversational understanding by interacting with humans.… Today, Microsoft had to shut Tay down because the bot started spewing a series of lewd and racist tweets.”

Problems Identified in “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims” – “Recent progress in artificial intelligence (AI) has enabled a diverse array of applications across commercial, scientific, and creative domains. With this wave of applications has come a growing awareness of the large-scale impacts of AI systems, and recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development.”


Generative AI: Right and Wrong


ChatGPT and DALL-E work on technically similar underlying foundations (“generative” machine learning models). Briefly, this class of AI tools takes data as input (ChatGPT’s data is text; DALL-E’s data input is images) and produces an original output (ChatGPT produces text; DALL-E produces images). The source input data is the Internet.

It has often been pointed out that ChatGPT can produce incorrect text: it will output incorrect dates, assign people to incorrect occupations or disciplines, create fictitious academic citations, etc. Because of this, it may be easier to share DALL-E with people trying to understand these technologies. When DALL-E produces an image, we don’t generally focus on whether its output is “right or wrong.”

However, assessing ChatGPT with a “right vs. wrong” lens misses what it is doing. At this stage of its development, ChatGPT’s aim is to produce human-like text; getting things “right” is not necessarily its primary goal. The mind-boggling thing to know is that, as with DALL-E, ChatGPT is a machine that automatically creates unique, original work.

Imagine it’s like having your sewing machine suddenly start mopping your floor. You prompt the sewing machine with, “Sewing machine: mop the floors.” While it may not be very good at mopping the floor (yet), the fact that the machine does it at all is the thing to marvel about:

large-scale language models trained on large amounts of multi-lingual data and source code are capable of improving the state-of-the-art across a wide variety of natural language, translation, and coding tasks, despite never having been trained to specifically perform those tasks (From: “Google Research, 2022 & beyond: Language, vision and generative models” – Emphasis added.)

ChatGPT does not understand what it produces: it can make a joke but does not understand humor. Focusing on “right vs. wrong” thus looks past the achievement. Imagine the following verse never existed and it came out of ChatGPT:

Hey diddle-diddle
The cat and the fiddle,
The cow jumped over the moon.

Overlooking what ChatGPT is doing might lead someone to say, “Well, that verse is wrong because cats can’t play the violin and cows cannot jump over the moon.” At this stage of the technology, ChatGPT is not intended to be factually correct: we should instead marvel that given a prompt, a machine can create new, original, human-passable text.

U Wisconsin describes this distinction: “ChatGPT can write with correct grammar and confident flow, but cannot create accurate citations or write with much depth. This is because it creates word patterns, with some variability or randomness, but does not generate meaning (Warner, 2022).” (https://idc.ls.wisc.edu/guides/using-artificial-intelligence-in-the-classroom/)

With time, this will change. These technologies are evolving rapidly: Google and Microsoft are building generative AI into their search engines, and in the near future, as factual accuracy improves and search output becomes more conversational, holding the companies responsible for errors will make more sense.


Generative AI, Intellectual Property Rights, and Copyright Law


“The End of Art: An Argument Against Image AIs” – This 50-minute video essay focuses on the “flawed, unethical, and deceptive environment around AI systems” and their appropriation of artistic works without attribution, compensation, or permission. The video’s description panel offers links to additional resources.

Generative Artificial Intelligence and Copyright Law – (Congressional Research Service, February 23, 2023)

Recent innovations in artificial intelligence (AI) are raising new questions about how copyright law principles such as authorship, infringement, and fair use will apply to content created or used by AI. So-called “generative AI” computer programs—such as OpenAI’s DALL-E 2 and ChatGPT programs, Stability AI’s Stable Diffusion program, and Midjourney’s self-titled program—are able to generate new images, texts, and other content (or “outputs”) in response to a user’s textual prompts (or “inputs”). These generative AI programs are “trained” to generate such works partly by exposing them to large quantities of existing works such as writings, photos, paintings, and other artworks. This Legal Sidebar explores questions that courts and the U.S. Copyright Office have begun to confront regarding whether the outputs of generative AI programs are entitled to copyright protection as well as how training and using these programs might infringe copyrights in other works.


Additional Links


Resources for exploring ChatGPT and higher education – From Bryan Alexander: “What might ChatGPT mean for higher education and society?”

Generative AI Tools and Resources – From OTL Senior Instructional Designer Kim DeBacco

The Robots are Coming, the Robots Are Coming! – Nah, the robots are here (Podcast episode) – “In this episode, we’ll chat through what I think the arrival of ChatGPT means for us in higher ed. Key point: Yes, we can try to get around it, but really, we should invite it in.”

ChatGPT is enabling script kiddies to write functional malware – “Researchers at security firm Check Point Research reported Friday that within a few weeks of ChatGPT going live, participants in cybercrime forums—some with little or no coding experience—were using it to write software and emails that could be used for espionage, ransomware, malicious spam, and other malicious tasks.”

A Collection of ChatGPT and AI Links


ChatGPT and AI: Starting Points for Discussion


For Teaching and Learning

The UCLA Academic Senate has posted, “Teaching Guidance for ChatGPT and Related AI Developments” for faculty. What you need to know:

  • ChatGPT and related AI tools are rapidly transforming higher education
  • Instructors are encouraged to clarify and communicate expectations to students
  • Consider incorporating academic integrity policies into your syllabus

OTL hosted a session titled, “Course Design Opportunities with AI” as part of the campus AI in Action: Exploring AI’s Potential in Teaching and Learning series. (May 16, 2023)

UCLA’s Jess Gregg hosted a CEILS Ed Talk, “What’s all the buzz about ChatGPT?” (February 8, 2023)

UCLA Professors Safiya Noble, Ramesh Srinivasan, and John Villasenor discussed, “What is ChatGPT, and How Does It Relate to UCLA’s Academic Mission?” UCLA Virtual Town Hall. (March 3, 2023)





