ChatGPT and AI Resources
UC Resources
The working group’s report presents UC Ethical AI Principles to guide the development and application of AI in ways that are consistent with UC’s mission and values. The final report provides recommendations to President Drake regarding best practices and guidance to:
- Develop methods and mechanisms to operationalize the UC Ethical AI Principles in the use of existing AI systems and the development of new applications of AI within the UC system, especially in areas likely to impact individual rights, including Health, Human Resources, Policing, and Student Experience.
- Make further recommendations on appropriate data stewardship standards for UC data that may be used in the development and use of AI-enabled tools and systems.
- Create the foundation for a permanent council that will further the principles, standards, methods, and mechanisms developed by this working group to counter the potentially harmful effects of AI and strengthen positive outcomes within the UC system.
What you need to know:
- ChatGPT and related AI tools are rapidly transforming higher education
- Instructors are encouraged to clarify and communicate expectations to students
- Consider incorporating academic integrity policies into your syllabus
“This document is meant as a guideline for instructors on what to consider as these tools evolve. We will provide strategies for adopting AI technologies in a responsible, ethical manner, and innovating within each discipline, major, and course. Exploring and communicating about the opportunities and limitations to using these tools will allow instructors and students to critically think about how knowledge is created.”
UCLA’s Center for the Advancement of Teaching (CAT), the Center for Education, Innovation, and Learning in the Sciences (CEILS), the Excellence in Pedagogy and Innovative Classrooms program (EPIC), Online Teaching and Learning (OTL), the Bruin Learn Center of Excellence (COE), the Writing Programs, and Humanities Technology (HumTech) collaborated on this series.
OTL’s session was titled, “Course Design Opportunities with AI” and included breakout rooms focused on:
- Using ChatGPT to Lead In-depth Conversations with Instructional Designers
- Using ChatGPT to Write Quiz Questions
- Syllabus Refresh Using Prompt Engineering in ChatGPT
- Ethical Issues with ChatGPT
- What is ChatGPT? (with tour)
- Inflection Points: Academic Integrity & How to Utilize ChatGPT
- Activity and Discussion
The primary architect of ChatGPT and leading Berkeley AI faculty will present insights and viewpoints in a series of seven public lectures presented by The Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS), Berkeley Artificial Intelligence Research Lab (BAIR), Electrical Engineering and Computer Sciences (EECS), the Academic Senate and UC Berkeley.
Presentations and resources from Andrea Ross and Lisa Sperber from the University Writing Program.
Recording of a wide-ranging discussion with a panel of UCLA experts.
Provides instructors with some initial information on the tool and offers recommendations for elements they should address in their courses/syllabi.
“Our team … thought you would enjoy seeing what ChatGPT could do with our original write-up if we instructed the tool to rewrite our words.”
Implications for Teaching and Learning
“While foundational knowledge is required for higher-order thinking, we often focus primarily or almost exclusively on the foundational. In this new paradigm, we would point students toward the appropriate modules to develop that foundational knowledge, and we’d move students as soon as possible into problem/project/case-based learning, much of it personalized and experiential or field-based. We would be mostly working with, working alongside, facilitating and supporting, and letting AI do some of the heavy lifting.”
“In this special Future Trends Forum session we’ll collectively explore this new technology. How does the chatbot work? How might it reshape academic writing? Does it herald an age of AI transforming society, or is it really BS? Experts who joined us on stage included Brent A. Anders, Rob Fentress, Philip Lingard, John Warner, Jess Stahl, and Anne Fensie.”
“Last week we hosted a session on this topic. Demand was so great for it, and so many questions remained, that we followed up right away with a sequel. What do we know about how the chatbot works? Does ChatGPT pose an existential threat to higher education, or instead offer new ways of teaching, learning, and researching?”
“For all of these reasons, we should proceed with caution. But used wisely, ChatGPT may actually make our teaching more rather than less humane. By using AI to streamline our analytic tasks, we can devote more time to fostering deeper connections with our students – connections that not only benefit them, but also serve as a much-needed source of rejuvenation for educators who have been stretched thin by years of teaching during a pandemic. In this sense, ChatGPT can be seen as a gift – a tool that can help us reconnect with our students and reignite our passion for teaching.”
“In this post, I will suggest a form of examination that I consider essentially ideal, even if we had no worries about plagiarism or artificial intelligence, but one that the increasingly sophisticated technologies in this area now make virtually necessary. That is, I’m hopeful that the fact that the take-home assignment no longer constitutes a serious test of the student’s knowledge of a subject or ability to write about it will force us to adopt a form of testing that was always much more serious.”
“A few experiments with online AI software services suggest some ways to address AI essay cheating, and interventions will require refining and revisiting course prompts.”
“Edward Tian, a 22-year-old senior at Princeton University, has built an app to detect whether text is written by ChatGPT”
I hope these three observations are useful as you make sense of this new technology landscape. Here they are again for easy reference:
- We are going to have to start teaching our students how AI generation tools work.
- When used intentionally, AI tools can augment and enhance student learning, even towards traditional learning goals.
- We will need to update our learning goals for students in light of new AI tools, and that can be a good thing.
“I am skeptical of the tech inevitability standpoint that ChatGPT is here and we just have to live with it. The all out rejection of this tech is appealing to me as it seems tied to dark ideologies and does seem different, perhaps more dangerous, than stuff that has come before. I’m just not sure how to go about that all out rejection. I don’t think trying to hide ChatGPT from students is going to get us very far and I’ve already expressed my distaste for cop shit. In terms of practice, the rocks and the hard places are piling up on me.”
“Australia’s leading universities say redesign of how students are assessed is ‘critical’ in the face of a revolution in computer-generated text”
“Before alarm spreads about the impact on student learning, let us consider the historical value of technological advances in education. The calculator, once banned in classrooms, is now a common sight on school supply lists and in the college classroom. Instructors use calculators to explore deeper connections with mathematical concepts and instead of limiting their use, can be more intentional about how they are used to encourage critical thinking among students. Similarly, ChatGPT and other AI technologies are here to stay, and we hope that academics will actively participate in decisions around their use and integration in higher education. We can influence how ChatGPT and other AI tools might be brought into higher education to assist students in developing things like critical thinking and executive function skills.”
“These new AI-powered writing generation technologies are going to change college writing substantially. But they won’t end college writing. Instead, we’re going to need to create some new guard rails for the assumptions we make about writing assignments in higher education. What will that future look like?”
“To harness the potential and avert the risks of OpenAI’s new chat bot, academics should think a few years out, invite students into the conversation and—most of all—experiment, not panic.”
“Human- and machine-generated prose may one day be indistinguishable. But that does not quell academics’ search for an answer to the question ‘What makes prose human?'”
“Educators can dig in their heels, attempting to lock down assignments and assessments, or use this new technology to imagine what comes next.”
“This resource is created by Lance Eaton for the purposes of sharing and helping other instructors see the range of policies available by other educators to help in the development of their own for navigating AI-Generative Tools (such as ChatGPT, MidJourney, Dall-E, etc).”
“if you are afraid of an explosion of cheating in your classes because of ChatGPT or any other new technological advance, you are not alone, but honestly, technology isn’t the problem. Stay tuned for more . . .”
“My suggestion: In education, tools of so-called artificial intelligence can best be classified as a further development of search engines.” [from original using Google Translate]
“I can see the shape of a pedagogical process—and preferably a supporting end-to-end tool—that teaches many of the skills involved with good writing, including some hard ones like checking sources and editing—while including some elements of creativity. If it is scaffolded properly—again, with the right tool and process but also with a good, solid rubric—it could enable educators to spend more of their time honing in on specific aspects of the writing process with less drudgery.”
“This paper shares results from a pedagogical experiment that assigns undergraduates to ‘cheat’ on a final class essay by requiring their use of text-generating AI software.”
“We need to embrace these tools and integrate them into pedagogies and policies. Lockdown browsers, strict dismissal policies and forbidding the use of these platforms is not a sustainable way forward.”
“Assessment is also affected. To me, it seems anachronistic to prepare students for an academic world where online translation does not exist. If we are preparing them to write essays and reports that can be supported by online translation, we should allow them to develop these competencies as part of the assessment process.”
“there’s another fix—one that might have been worth implementing even before the arrival of ChatGPT: Make students write out essays by hand. Apart from outflanking the latest AI, a return to handwritten essays could benefit students in meaningful ways.”
“The proliferation of these easily accessible large language models raises an important question: How will we know whether what we read online is written by a human or a machine? I’ve just published a story looking into the tools we currently have to spot AI-generated text. Spoiler alert: Today’s detection tool kit is woefully inadequate against ChatGPT.”
“If you’re looking for historical analogues, this would be like the printing press, the steam drill, and the light bulb having a baby, and that baby having access to the entire corpus of human knowledge and understanding. My life—and the lives of thousands of other teachers and professors, tutors and administrators—is about to drastically change.”
“Mr. Aumann decided to transform essay writing for his courses this semester. He plans to require students to write first drafts in the classroom, using browsers that monitor and restrict computer activity. In later drafts, students have to explain each revision. Mr. Aumann, who may forgo essays in subsequent semesters, also plans to weave ChatGPT into lessons by asking students to evaluate the chatbot’s responses.”
“I have done research on microRNAs in the past, but it can be challenging to come up with an easy-to-understand elevator pitch of what a microRNA is and what it does in the body. So I typed into ChatGPT the following request: explain what microRNAs are and what they do at the reading level of a high school freshman. The answer started to appear a few seconds later, one word at a time, as if the complex computer program was typing it out. And it was a good answer!”
“I have tested my assignments against multiple AI programs as a faculty member and Writing Across the Curriculum director. I may incorporate this technology in future courses, but for now, here are my 10 strategies that prevent the use of AI by students.”
“Moving forward, we’ll need to think of ways AI can be used to support teaching and learning, rather than disrupt it. Here are three ways to do this.”
“Anyway, the point I’m trying to make here (and this is something that I think most people who teach writing regularly take as a given) is that there is a big difference between assigning students to write a “college essay” and teaching students how to write essays or any other genre.”
“In 2014, a department of the U.K. government published a study of history and English papers produced by online-essay writing services for senior high school students. Most of the papers received a grade of C or lower. Much like the work of ChatGPT, the papers were vague and error-filled. It’s hard to write a good essay when you lack detailed, course-specific knowledge of the content that led to the essay question.”
“The willingness to learn is related to the growth mind-set—the belief that your abilities are not fixed but can improve. But there is a key difference: This willingness is a belief not primarily about the self but about the world. It’s a belief that every class offers something worthwhile, even if you don’t know in advance what that something is. Unfortunately, big economic and cultural obstacles stand in opposition to that belief.”
“The essay, in particular the undergraduate essay, has been the center of humanistic pedagogy for generations. It is the way we teach children how to research, think, and write. That entire tradition is about to be disrupted from the ground up. Kevin Bryan, an associate professor at the University of Toronto, tweeted in astonishment about OpenAI’s new chatbot last week: ‘You can no longer give take-home exams/homework … Even on specific questions that involve combining knowledge across domains, the OpenAI chat is frankly better than the average MBA at this point. It is frankly amazing.’ Neither the engineers building the linguistic tech nor the educators who will encounter the resulting language are prepared for the fallout.”
“Scholars of teaching, writing, and digital literacy say there’s no doubt that tools like ChatGPT will, in some shape or form, become part of everyday writing, the way calculators and computers have become integral to math and science. It is critical, they say, to begin conversations with students and colleagues about how to shape and harness these AI tools as an aid, rather than a substitute, for learning.”
“If we want to dissuade students from using artificial intelligence to help produce their writing, we need to treat writing differently. If we want to teach writing in our classes, if we want students to use writing as a deliberative, reflective space to facilitate critical thinking, innovation, and self-awareness, we need to move away from framing writing assignments as primarily product-based endeavors.”
“The stakes are high. Many teachers agree that learning to write can take place only as students grapple with ideas and put them into sentences. Students start out not knowing what they want to say, and as they write, they figure it out. ‘The process of writing transforms our knowledge,’ said Joshua Wilson, an associate professor in the School of Education at the University of Delaware. ‘That will completely get lost if all you’re doing is jumping to the end product.'”
“AI just stormed into the classroom with the emergence of ChatGPT. How do we teach now that it exists? How can we use it? Here are some ideas.”
Compiled for the Writing Across the Curriculum Clearinghouse as part of a larger resource collection: “AI and Teaching Writing: Starting Points for Inquiry.” This is an open and evolving list put together by a writing teacher who is not an expert in the field, with suggestions from a few other more knowledgeable folks.
“As a historian I should be cautious and should beware of frenetic enthusiasm. We know all too well that highly touted technologies, like the blockchain, frequently fail to live up to the hype. So let me echo Lincoln Steffens’s words after visiting the Soviet Union in 1919, fully aware that the phrase is fraught with irony: ‘I have seen the Future and it works.’”
“Yes, we can control costs, reduce performance gaps and improve learning outcomes without sacrificing quality or rigor.”
“The threat now is to the very knowledge workers who many assumed were invulnerable to technological change. If we fail to instill within our students the advanced skills and expertise that they need in today’s rapidly shifting competitive landscape, they too will be losers in the unending contest between technological innovation and education.”
“Mis- and disinformation is newly comprising about a third of the material of this new course. For the last couple of semesters, I have been inching my way toward including that topic. Given the political landscape globally as well as in the United States, that topic could, and should, be its own course. Artificial intelligence will certainly play a role in that sphere as it emerges in all significant walks of life; take a look, for example, at the essay Bruce Schneier published just yesterday in The New York Times on the subject of lobbying and political influence. We must deal with it. Panic will not help.”
“Chatbots are able to produce high-quality, sophisticated text in natural language. The authors of this paper believe that AI can be used to overcome three barriers to learning in the classroom: improving transfer, breaking the illusion of explanatory depth, and training students to critically evaluate explanations. The paper provides background information and techniques on how AI can be used to overcome these barriers and includes prompts and assignments that teachers can incorporate into their teaching. The goal is to help teachers use the capabilities and drawbacks of AI to improve learning.”
“All of my classes have become AI classes. And I wanted to share with you the experiments I am running to integrate AI into class (I will update you later in the semester about how they are going).”
“A look at OpenAI’s ChatGPT and how teachers in medieval studies can prevent their students from using it.”
“ChatGPT is not without precedent or competitors (such as Jasper, Sudowrite, QuillBot, Katteb, etc). Souped-up spell-checkers such as Grammarly, Hemingway, and Word and Google-doc word-processing tools precede ChatGPT and are often used by students to review and correct their writing. Like spellcheck, these tools are useful, addressing spelling, usage, and grammar problems, and some compositional stylistic issues (like overreliance on passive voice). However, they can also be misused when writers accept suggestions quickly and thus run the danger of accepting a poor suggestion. Automation bias is in effect—we often trust an automated suggestion more than we trust ourselves. Further, over-reliance can mean students simply miss opportunities to grow and develop as writers.”
“This open crowdsourced collection by #creativeHE presents a rich tapestry of our collective thinking in the first months of 2023 stitching together potential alternative uses and applications of Artificial Intelligence (AI) that could make a difference and create new learning, development, teaching and assessment opportunities.”
“Even ChatGPT’s flaws—such as the fact that its answers to factual questions are often wrong—can become fodder for a critical thinking exercise. Several teachers told me that they had instructed students to try to trip up ChatGPT, or evaluate its responses the way a teacher would evaluate a student’s.”
“A high school teacher on how the new chatbot from OpenAI is transforming her classroom—for the better.”
“The current iteration of GPT-3 has its quirks and limitations, to be sure. Most notably, it will write absolutely anything. It will generate a full essay on “how George Washington invented the internet” or an eerily informed response to “10 steps a serial killer can take to get away with murder.” In addition, it stumbles over complex writing tasks. It cannot craft a novel or even a decent short story. Its attempts at scholarly writing—I asked it to generate an article on social-role theory and negotiation outcomes—are laughable. But how long before the capability is there? Six months ago, GPT-3 struggled with rudimentary queries, and today it can write a reasonable blog post discussing ‘ways an employee can get a promotion from a reluctant boss.'”
From two MIT professors: “The use of AI/LLM text generation is here to stay. Those of us involved in writing instruction will need to be thoughtful about how it impacts our pedagogy. … We also believe students are here at MIT to learn, and will be willing to follow thoughtfully advanced policies so they can learn to become better communicators. To that end, we hope that what we have offered here will help to open an ongoing conversation.”
“AI tools are available today that can write compelling university level essays. Taking an example of sample essay produced by the GPT-3 transformer, Mike Sharples discusses the implications of this technology for higher education and argues that they should be used to enhance pedagogy, rather than accelerating an ongoing arms race between increasingly sophisticated fraudsters and fraud detectors.”
“As a lawyer who represents students accused of cheating, ChatGPT worries me. If we want to maintain the credibility of our universities and the weight of a degree, we must get back to in-person assessments.”
“In the upcoming years, we’ll need to think about how we can help students develop these critical human skills. It might be inquiry-based learning or project-based learning. But it might also be game-based learning. It might be a lo-fi makerspace. It might be an epic, face-to-face science lab or a sketchnote video, or an interview with community members. Notice that none of these ideas are new. These are the things teachers are already doing when they empower their students with voice and choice. So am I nervous about AI? Absolutely. But am I hopeful? Most definitely. Because I know that teachers will always be at the heart of innovation.”
“Cynthia Alby discusses how artificial intelligence (like ChatGPT) is impacting higher education on episode 448 of the Teaching in Higher Ed podcast.”
“Uncanny, creepy and bland: Brian Strang reflects on his chat with the artificial intelligence language model ChatGPT and the threat it does (or doesn’t) pose to writing instruction.”
“It is an open question as to what jobs will be the first to be disrupted by AI; what became obvious to a bunch of folks this weekend, though, is that there is one universal activity that is under serious threat: homework.”
“USC President Carol Folt is announcing a new Center for Generative AI and Society to explore the transformative impact of artificial intelligence on culture, education, media and society.”
General, Social, Cultural Issues
“But I think the sheer volume and scale of what’s coming will be meaningfully different. And I think we’re unprepared. Or at least, I am.”
“The idea of an all-knowing computer program comes from science fiction and should stay there. Despite the seductive fluency of ChatGPT and other language models, they remain unsuitable as sources of knowledge. We must fight against the instinct to trust a human-sounding machine”
“People at the margins of society who are disproportionately impacted by these systems are experts at vetting them, due to their lived experience. Not coincidentally, crucial contributions that demonstrate the failure of these large language models and ways to mitigate the problems are often made by scholars of color—many of them Black women—and junior scholars who are underfunded and working in relatively precarious conditions.”
“Treat it like a toy, not a tool.”
On December 22, 2022, Bryan Alexander hosted an edition of his Future Trends Forum focused on ChatGPT and other AI (artificial intelligence) writing generators and their potential impact on education: “I wanted to share a few highlights and observations here on the blog.”
“Given the growing influence of this technology, it’s time to focus on how we can start reaping the benefits in a responsible way. Many A.I. experts and computer scientists agree that these tools can provide a major perk that does no harm: editing our writing.”
“That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments. The Borgesian revelation of understanding has not and will not — and, we submit, cannot — occur if machine learning programs like ChatGPT continue to dominate the field of A.I. However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.”
“Humanities, arts and higher education could use a little reminder that we do human. That’s our business, when we do it well. We are as safe from ChatGPT as the Temptations are from Pentatonix.”
“The machine’s fundamental weakness is a lack of substance beneath the surface. Still, for many requirements in our present state of surface-industrial economy, this might be perfectly good enough. The machine’s greatest strength is our sufficiency with surfaces.”
“Three sources briefed on OpenAI’s recent pitch to investors said the organization expects $200 million in revenue next year and $1 billion by 2024. The forecast, first reported by Reuters, represents how some in Silicon Valley are betting the underlying technology will go far beyond splashy and sometimes flawed public demos.”
“‘The new innovations here—first of all, just more size. It’s got more training data,’ said Emily Bender, a professor of linguistics at the University of Washington and director of the Computational Linguistics Laboratory. ‘And then it had a second training step where they had human raters give responses about how good its responses were, and then it adjusted its distributions to try to get better scores from the human raters.'”
“I will herein examine how people are using generative AI for uses that aren’t on the up and up. You can use generative AI such as ChatGPT for all manner of unsavory uses. It is like falling off a log, meaning that it is relatively easy to do bad things and you don’t need to be a rocket scientist to do so.”
“The education department blocked access to the program, citing ‘negative impacts on student learning, and concerns regarding the safety and accuracy of content,’ a spokesperson said. The move from the nation’s largest school system could have ripple effects as districts and schools across the country grapple with how to respond to the arrival of the dynamic new technology.”
“In Artificial Communication, Elena Esposito argues that drawing this sort of analogy between algorithms and human intelligence is misleading. If machines contribute to social intelligence, it will not be because they have learned how to think like us but because we have learned how to communicate with them. Esposito proposes that we think of ‘smart’ machines not in terms of artificial intelligence but in terms of artificial communication.”
“While some bias concerns might be addressed by improving the software, creativity concerns are only likely to be exacerbated as software like ChatGPT gets better. Creators and educators might say that ChatGPT should not exist at all, even if it could be freed from bias altogether.”
“A tutorial on how to use GPT-3 and DALL-E to generate original content for the funny pages”
“In this tutorial, we’ll generate a comic strip or a graphic novel. But that’s the easy part. The hard part will be to guide the AI to building something meaningful. And for that, we’ll adapt/adopt Jeremiah McCall’s ‘Historical Problem Space Framework.'”
“A new wave of chat bots like ChatGPT use artificial intelligence that could reinvent or even replace the traditional internet search engine.”
“We will see ChatGPT and tools like it used in adversarial ways that are intended to undermine trust in information environments, pushing people away from public discourse to increasingly homogenous communities.”
“Artists are caught in the middle of one of the biggest upheavals in a generation. Some will lose work; some will find new opportunities. A few are headed to the courts to fight legal battles over what they view as the misappropriation of images to train models that could replace them.”
“OpenAI, the research lab behind the viral ChatGPT chatbot, is in talks to sell existing shares in a tender offer that would value the company at around $29 billion, according to people familiar with the matter, making it one of the most valuable U.S. startups on paper despite generating little revenue.”
“Buzzy products like ChatGPT and DALL-E 2 will have to turn a profit eventually.”
“In a way, when you ask an AI to make you a movie, it’s just mimicking the formulaic process by which many Hollywood blockbusters get made: Look around, see what’s been successful, lift elements of it (actors, directors, plot structures) and mash them together into a shape that looks new but actually isn’t.”
“Gary Marcus is an emeritus professor of psychology and neural science at N.Y.U. who has become one of the leading voices of A.I. skepticism. He’s not “anti-A.I.”; in fact, he’s founded multiple A.I. companies himself. But Marcus is deeply worried about the direction current A.I. research is headed, and even calls the release of ChatGPT A.I.’s ‘Jurassic Park moment.'”
“… crucially, ChatGPT is not perfect. As genius as its answers seem, the technology can still be easily thwarted in many ways. Here, a short list of times when it might just fail you:
- IF YOU ASK ABOUT AN ESOTERIC TOPIC
- IF YOU GIVE IT TASKS THAT REQUIRE FACTUAL ACCURACY, SUCH AS REPORTING THE NEWS
- IF YOU WANT IT TO UNWIND BIAS
- IF YOU NEED THE VERY LATEST DATA
“While machines are not yet as intelligent as people, the tech that OpenAI has since released is taking many aback (including Musk), with some critics fearful that it could be our undoing, especially with more sophisticated tech reportedly coming soon.”
“… computer scientist Yejin Choi, a 2022 recipient of the prestigious MacArthur ‘genius’ grant … has been doing groundbreaking research on developing common sense and ethical reasoning in A.I. ‘There is a bit of hype around A.I. potential, as well as A.I. fear,’ admits Choi…”
“So the best way to think about this is you are chatting with an omniscient, eager-to-please intern who sometimes lies to you.”
Essay on using automated intelligence as support for a real-estate company.
“The motivations of these so-called ‘artificial intelligences’ are to fulfill their assigned task: to perform better than their previous iteration. Entirely artificial. The motivations of the people deploying these AIs on the world are to use them to make profit at any cost to society. Our motivation in using them is therefore the first and last bastion against being turned into advertising consuming bots.”
“…as I started to ask it more challenging academic and intellectual questions, including composing syllabi or writing student essays, I was both impressed by some of the output (it produced a lovely short essay on why Ibn Tufayl presents two creation stories in Hayy ibn yaqzan) and taken aback how often it simply makes up stuff out of whole cloth (including completely fake publications by me).”
“I wondered what this release might mean for the future of continuing higher education. Of course, nothing had yet been written about the potential of this just-released version, so I asked ChatGPT to write a short poem about it. In just three seconds, far faster than I could have typed the words, the poem was complete on my screen.”
“We are not without precedent confronting the dilemma of how to respond to a new technology that impacts learner responses in traditional quizzes and exams. In the mid-1960s when hand calculators were developed, and later programmable calculators, educators in math and science were confronted with a similar challenge.”
“With almost no hesitation, the A.I. spit out a bunch of references. It declared that my Wikipedia page was its source for the courses I teach (the Wikipedia page doesn’t have, and never has had, such a list.) My education at Berkeley, it said, came from my New York Times obituary, which it cited with a URL that looked like it had come from the Times website: https://www.nytimes.com/2021/04/12/books/charles-seife-dead.html. Of course, no such obituary ever existed. The A.I. was making up BS references to back up its BS facts.”
“The powerful new chatbot could make all sorts of trouble. But for now, it’s mostly a meme machine.”
“Unless you already knew the answer or were an expert in the field, you could be subjected to a high-quality intellectual snow job. You would face, as Plato predicted, ‘the show of wisdom without the reality.'”
“On the problems of propertarian and dignitarian approaches to data governance.”
“My conclusion, after reading up on all this, is that ChatGPT is multilingual but monocultural—but that by using it, we’re all helping to train it to align its values with our own.”
“If an algorithm is the death of high school English, maybe that’s an okay thing.”
“To my eye, too much of the ChatGPT discourse is about how to corral and control this technology so we can keep students doing the same stuff they’ve been doing. This presumes that the status quo is working in terms of student learning, but who seriously believes that?”
“Hopefully these suggestions will help you feel better prepared to teach in a classroom where chatGPT is widely available on your students’ phones and computers.”
Research and Technical
“For all of these reasons, we should proceed with caution. But used wisely, ChatGPT may actually make our teaching more rather than less humane. By using AI to streamline our analytic tasks, we can devote more time to fostering deeper connections with our students—connections that not only benefit them, but also serve as a much-needed source of rejuvenation for educators who have been stretched thin by years of teaching during a pandemic. In this sense, ChatGPT can be seen as a gift—a tool that can help us reconnect with our students and reignite our passion for teaching.”
“But what exactly are these large language models, and why are they suddenly so popular?”
“With the recent wave of progress in artificial intelligence (AI) has come a growing awareness of the large-scale impacts of AI systems, and recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development. In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, they will need to make verifiable claims to which they can be held accountable. Those outside of a given organization also need effective means of scrutinizing such claims. This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. We analyze ten mechanisms for this purpose–spanning institutions, software, and hardware–and make recommendations aimed at implementing, exploring, or improving those mechanisms.”
“Whereas ChatGPT is a general-purpose conversation engine, AlphaCode is more specialized: it was trained exclusively on how humans answered questions from software-writing contests. ‘AlphaCode was designed and trained specifically for competitive programming, not for software engineering,’ David Choi, a research engineer at DeepMind and a co-author of the Science paper, told Nature in an e-mail.”
“With many recent advances in artificial intelligence (AI), prompt engineering has become a sought-after and valuable skill for getting AI to do what you want. This course focuses on applied PE techniques, and we expect readers to have minimal knowledge of machine learning.”
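As a concrete illustration of one basic prompt-engineering technique (a generic pattern, not material taken from the course itself), a few-shot prompt pairs a task instruction with worked examples before the actual input. The reviews and labels below are invented for illustration:

```python
# Illustrative few-shot prompt assembly: pair an instruction with
# worked examples, then append the real input for the model to complete.
FEW_SHOT_EXAMPLES = [
    ("The movie was a delight.", "positive"),
    ("I want my two hours back.", "negative"),
]

def build_prompt(text: str) -> str:
    """Return a few-shot sentiment-classification prompt for `text`."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for review, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {review}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The model is asked to complete the final "Sentiment:" line.
    lines.append(f"Review: {text}")
    lines.append("Sentiment:")
    return "\n".join(lines)

print(build_prompt("A clumsy, overlong sequel."))
```

The resulting string would be sent as-is to whatever language model is in use; the examples steer the model toward the expected output format.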
“The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but could not produce real understanding. Hence the “Turing Test” is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. The broader conclusion of the argument is that the theory that human minds are computer-like computational or information processing systems is refuted.”
“‘It’s still too early to decide whether or not ChatGPT capabilities will become the new favorite tool for participants in the Dark Web,’ company researchers wrote. ‘However, the cybercriminal community has already shown significant interest and are jumping into this latest trend to generate malicious code.'”
“With this post, I am kicking off a series in which researchers across Google will highlight some exciting progress we’ve made in 2022 and present our vision for 2023 and beyond. I will begin with a discussion of language, computer vision, multi-modal models, and generative machine learning models.”
“GPTZero is an app that detects essays written by the impressive AI-powered language model known as ChatGPT. Edward Tian, a computer science major who is minoring in journalism, spent part of his winter break creating GPTZero, which he said can ‘quickly and efficiently’ decipher whether a human or ChatGPT authored an essay.”
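Press coverage describes GPTZero as scoring texts on perplexity and “burstiness” (how much complexity varies from sentence to sentence). The sketch below is only a toy illustration of the burstiness idea, not GPTZero’s actual method, and every number in it is invented:

```python
import statistics

def burstiness(per_sentence_loss):
    """Toy 'burstiness' score: the standard deviation of per-sentence
    language-model losses. The intuition is that human writing tends to
    vary more from sentence to sentence than model-generated text."""
    return statistics.pstdev(per_sentence_loss)

# Hypothetical per-sentence losses: flatter for machine-generated text,
# spikier for human-written text.
machine = [2.1, 2.0, 2.2, 2.1]
human = [1.2, 3.8, 2.0, 4.5]
assert burstiness(human) > burstiness(machine)
```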
“Although AIED [Artificial Intelligence in Education] has been identified as the primary research focus in the field of computers and education, the interdisciplinary nature of AIED presents a unique challenge for researchers with different disciplinary backgrounds. In this paper, we present the definition and roles of AIED studies from the perspective of educational needs. We propose a framework to show the considerations of implementing AIED in different learning and teaching settings.”
“We build a Generatively Pretrained Transformer (GPT), following the paper ‘Attention is All You Need’ and OpenAI’s GPT-2 / GPT-3. We talk about connections to ChatGPT, which has taken the world by storm. We watch GitHub Copilot, itself a GPT, help us write a GPT.”
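The core attention mechanism from “Attention is All You Need” can be sketched in a few lines. The NumPy version below is an illustrative single-head, causal (GPT-style) variant, not code taken from the video:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # (seq, seq) similarity scores
    # Causal mask: each position may attend only to itself and earlier
    # positions, as in GPT-style decoders.
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)
    # Numerically stable softmax over the last axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                  # 4 tokens, d_model = 8
out = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)  # (4, 8)
```

Because of the causal mask, the first token can attend only to itself, so its output row equals its own value vector.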
“This is a slightly revised version of my position paper for the “Always Already Computational: Collections as Data” Forum, UC Santa Barbara, March 1-3, 2017. (The original version is included among a collection of such position statements by participants in the conference.) A further revised version was later published as “Data Moves: Libraries and Data Science Workflows,” in Libraries and Archives in the Digital Age, ed. Susan L. Mizruchi (Cham: Palgrave Macmillan, 2020), 211-19, https://doi.org/10.1007/978-3-030-33373-7_15.”
Update to foundational model: “GPT-4, which learns its skills by analyzing huge amounts of data culled from the internet, improves on what powered the original ChatGPT in several ways. It is more precise. It can, for example, ace the Uniform Bar Exam, instantly calculate someone’s tax liability and provide detailed descriptions of images. . . . ‘I don’t want to make it sound like we have solved reasoning or intelligence, which we certainly have not,’ Sam Altman, OpenAI’s chief executive, said in an interview. ‘But this is a big step forward from what is already out there.'”
“We are immediately reminded of the fundamental limits of computation. Granted, Turing showed that there can be no general algorithm that solves the halting problem, not that the halting problem cannot be solved for a specific program like the one described above.”
From the company behind ChatGPT: “Commonly asked questions about ChatGPT”
From the company behind ChatGPT: “We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”
“Our classifier is not fully reliable. In our evaluations on a “challenge set” of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as “likely AI-written,” while incorrectly labeling human-written text as AI-written 9% of the time (false positives). Our classifier’s reliability typically improves as the length of the input text increases. Compared to our previously released classifier, this new classifier is significantly more reliable on text from more recent AI systems.”
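Those two rates alone do not tell you how trustworthy a flag is; that depends on what fraction of screened text is actually AI-written. A quick Bayes’-rule sketch using the reported figures (the 20% base rate below is a hypothetical assumption, not an OpenAI number):

```python
def implied_precision(tpr, fpr, base_rate):
    """Probability a 'likely AI-written' flag is correct, by Bayes' rule:
    P(AI | flagged) = TPR*p / (TPR*p + FPR*(1 - p))."""
    flagged_ai = tpr * base_rate            # AI text correctly flagged
    flagged_human = fpr * (1 - base_rate)   # human text wrongly flagged
    return flagged_ai / (flagged_ai + flagged_human)

# OpenAI's reported figures: 26% true positives, 9% false positives.
# Assuming (hypothetically) that 20% of screened text is AI-written:
print(round(implied_precision(0.26, 0.09, 0.20), 3))  # → 0.419
```

In other words, under that assumed base rate, fewer than half of the texts the classifier flags would actually be AI-written, which underscores the company’s own caution about relying on it.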
“We are introducing a new and improved content moderation tool. The Moderation endpoint improves upon our previous content filter, and is available for free today to OpenAI API developers.”
“Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent.”
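The ranking step described above trains a reward model on pairwise comparisons of model outputs. A minimal sketch of that pairwise objective, assuming a Bradley-Terry-style loss over scalar rewards (the reward values below are made-up numbers):

```python
import math

def preference_loss(r_chosen, r_rejected):
    """Pairwise ranking loss for a reward model:
    -log sigmoid(r_chosen - r_rejected).
    The loss is small when the reward model scores the human-preferred
    output higher than the rejected one."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

good = preference_loss(r_chosen=2.0, r_rejected=-1.0)  # agrees with labeler
bad = preference_loss(r_chosen=-1.0, r_rejected=2.0)   # disagrees
assert good < bad
```

The trained reward model then supplies the reinforcement-learning signal used to fine-tune the policy model.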
“The more adept LLMs become at mimicking human language, the more vulnerable we become to anthropomorphism, to seeing the systems in which they are embedded as more human-like than they really are. This trend is amplified by the natural tendency to use philosophically loaded terms, such as ‘knows’, ‘believes’, and ‘thinks’, when describing these systems. To mitigate this trend, this paper advocates the practice of repeatedly stepping back to remind ourselves of how LLMs, and the systems of which they form a part, actually work.”
Early version of technology behind ChatGPT explored: “Hopefully, this article gave you ideas on how to finetune and generate texts creatively. There’s still a lot of untapped potential, and there are still many cool applications that have been untouched, and many cool datasets that haven’t been used for AI text generation. GPT-2 will likely be used more for mass-producing crazy erotica than fake news. However, GPT-2 and the Transformer architecture aren’t the end-game of AI text generation. Not by a long shot.”
Additional Directories and Lists
This document serves as a landing page for links to other documents, webpages, shared folders, including:
- Lists of AI Tools
- Compilations of Readings and Videos
- Resources for Instructors
- Links to AI Institutional Policies & Info on Faculty Development Websites
Curated by Dr. Kim DeBacco, Senior Instructional Designer, Online Teaching and Learning, UCLA.
Curated by Dr. Margaret Merrill, Senior Instructional Design Consultant, University of California, Davis.