In an age where technology is advancing rapidly, artificial intelligence, or AI, is becoming an increasingly important tool in various industries. Among the AI models that have emerged, ChatGPT has quickly come to the forefront in the past few months. Its ability to generate human-like responses to questions and prompts has sparked both a popular trend and an increase in public interest in AI.
And that would have been a pretty impressive introduction, except for the part where I didn’t write it. I had ChatGPT do it for me.
For the record, I did edit it a bit, but it’s mostly the AI’s work.
But how did ChatGPT come to be, anyway?
ChatGPT is just the newest AI created by the tech company OpenAI, and while ChatGPT is the first of its AI language models to earn widespread public attention, OpenAI has actually been creating algorithms since 2015, when it was founded with initial funding pledges of over $1 billion from wealthy individuals like Sam Altman and Elon Musk (he's everywhere, isn't he?) as well as tech companies, including Amazon. The company was founded with the intent of creating safe and ethical AI.
ChatGPT is merely the latest development in the company's GPT AI algorithms, which started with GPT-1 back in 2018. While previous AI models were usually designed to provide accurate responses only within a specific subject or field, OpenAI's GPT programs are all designed to take in human input and generate responses on a wide variety of topics. The GPT algorithms are a type of AI known as a large language model (LLM), which in simple terms is a program that learns how different words, phrases, and concepts are used within a given topic. Programmers teach these LLMs by feeding them huge amounts of content, like books or Wikipedia pages, on the topics they want the AI to make connections about. Feed the LLM enough content on a topic, and it will have made enough connections between words and phrases to generate its own content on that topic, such as an answer when a human asks it a question.
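To picture what "making connections" means in practice, here is a deliberately tiny sketch in Python (my own illustration, not OpenAI's actual code): it counts which word tends to follow which in a small sample of text, then uses those counts to guess the next word. Real LLMs like GPT use neural networks trained on billions of words, but the underlying idea of learning patterns from example text is the same.

```python
from collections import Counter, defaultdict

# Toy illustration only -- not how GPT is actually implemented.
# "Training": count which word follows which in some example text.
training_text = "the cat sat on the mat and the cat chased the mouse"

words = training_text.split()
connections = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    connections[current_word][next_word] += 1  # strengthen this connection

def predict_next(word):
    """Guess the next word based on the connections learned above."""
    followers = connections.get(word)
    if not followers:
        return None  # the model never saw this word, so it has nothing to say
    return followers.most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (seen most often after "the")
print(predict_next("cat"))  # -> "sat" (tied with "chased" in this tiny example)
```

Notice that asking this toy model about a word it never saw returns nothing; on a vastly larger scale, that is the same reason ChatGPT struggles with topics it wasn't trained on enough, as discussed further below.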
“It’s a lot like how humans process information and think, except I can do it much faster and with greater accuracy,” said ChatGPT, when I asked about how it’s able to understand user input. “Instead of using intuition and personal experience, I rely on data and programming to generate my responses.”
OpenAI's GPT algorithms are LLMs that have been trained on a massive range of topics, with the goal of making GPT an LLM that can generate new content on practically any subject it comes across. The ChatGPT version currently available to the public is actually a variant of OpenAI's GPT-3.5 that has been tuned to respond as though the user and the AI are having a conversation. (Hence the "chat" in ChatGPT.)
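For readers curious about what "having a conversation" with the model looks like from a programmer's point of view, here is a minimal sketch using OpenAI's Python library as it existed in early 2023 (the model name and interface are OpenAI's and may change; the API key is a placeholder). Rather than taking a single block of text, the chat-tuned model takes a list of back-and-forth messages, which is what lets it respond in dialogue.

```python
import openai  # OpenAI's official Python library (pre-1.0 interface, early 2023)

openai.api_key = "YOUR_API_KEY"  # placeholder; a real key requires an OpenAI account

# The chat-style model accepts a running list of messages instead of one prompt,
# which is what allows it to answer as part of an ongoing conversation.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the conversational GPT-3.5 variant exposed through the API
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In one sentence, what is a large language model?"},
    ],
)

print(response["choices"][0]["message"]["content"])
```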
“In one sense you can understand ChatGPT as a version of an AI system that we’ve had for a while,” said Jan Leike, a machine learning researcher at OpenAI, when interviewed by the MIT Technology Review. “In another sense, we made it more aligned with what humans want to do with it. It talks to you in dialogue, it’s easily accessible in a chat interface, it tries to be helpful.”
The sheer number of users trying out ChatGPT is actually causing OpenAI quite a bit of trouble. The computing power needed to run the AI for millions of people is reportedly costing OpenAI at least $100,000 a day. Such a staggering figure makes the company's plans to turn ChatGPT into a paid service unsurprising; the company is simply losing too much money by keeping the AI free to use.
It's a common misconception that ChatGPT is a search engine that basically "googles" answers to the questions users ask; ChatGPT is not capable of scouring the internet for answers. Instead, the AI generates all of its answers using the connections formed from the content its programmers fed it, which is why it is not infallible. As an LLM, ChatGPT can only provide accurate answers on topics where it has been fed enough content to form strong connections. This is why the AI can give misleading answers on overly specific or complicated topics; it hasn't received enough training in those areas. While this may seem like a major oversight, the team behind ChatGPT knew the AI's weaknesses and released it anyway as a sort of calculated risk.
“Our biggest concern was around factuality, because the model likes to fabricate things,” said John Schulman, co-founder of OpenAI. “But [our GPT AIs] and other LLMs are already out there, so we thought that as long as ChatGPT is better than those in terms of factuality and other issues of safety, it should be good to go.”
ChatGPT's ability to respond to users in a humanlike way has turned the chatbot into an internet sensation. Unlike OpenAI's other AI models, ChatGPT was released to the general public as a prototype in late November of last year, when its ability to generate reasonably accurate responses to questions made the AI go viral. While those at OpenAI did expect some level of public interest, the craze around their new AI far exceeded their expectations. Just five days after ChatGPT's public release, over 1 million users had registered. As of January, over 100 million OpenAI accounts had been created by people wishing to use ChatGPT. And nobody at OpenAI is quite sure why their LLM has become so popular.
“It’s not a fundamentally more capable model than what we had previously,” said Leike. “I would love to understand better what’s driving all of this—what’s driving the virality. Like, honestly, we don’t understand.”
The positive response to and popularity of OpenAI's ChatGPT have driven the company to keep improving the chatbot. For example, some users have attempted to "jailbreak" the AI, essentially making it say things it probably shouldn't. Jailbreakers have managed to make ChatGPT condone discrimination against minorities or curse out the user, feats they accomplish by tricking the AI into ignoring the rules it is meant to follow. Fortunately, programmers at OpenAI quickly caught on to the methods used by the jailbreakers. These blind spots in ChatGPT's programming are quickly patched up, and an updated version of the AI, more resilient against jailbreaking, is pushed out to the public. OpenAI is also hoping to leverage the chatbot's massive user base to find bugs in its code. On April 11, OpenAI announced a "bug bounty" program, through which people can report bugs and other potential problems in ChatGPT and other OpenAI products in exchange for monetary compensation of up to $20,000.
While no other AI created by OpenAI has matched the craze around ChatGPT, its models have been achieving feats for years, such as when the company's OpenAI Five beat the world champions of the video game "Dota 2" in 2019. OpenAI's reputation for creating quality AI has not gone unnoticed by others in the tech world: in 2019, Microsoft provided the company with an additional $1 billion in funding. In January 2023, Microsoft provided OpenAI with yet more funding, this time a reported $10 billion investment, in the hopes that its advanced AI algorithms could help Microsoft's Bing search engine become more competitive against the ubiquitous Google.
As of now, OpenAI is valued at over $29 billion.
Artificial intelligence, also known as "AI," is an increasingly popular technological tool powered by enormous amounts of online data. It has given rise to programs that can enhance learning and create personalized experiences within the education system. These AI programs are beginning to be more widely implemented in school systems throughout the world, sparking a debate over whether or not they will be beneficial.
AI programs have raised fears of a surge in students cheating or plagiarizing on assignments, along with the possibility of students relying on the tool rather than learning independently.
“If this is the future, students should learn how to use such tools,” said Ava Paolone, a senior at RV. “However, this results in lack of effort and true responses from students in written assignments.”
As AI programs continue to make their way into educational systems, more students will gain access to them and will be more likely to use them. RV's technology team currently has ChatGPT blocked on the school-owned student MacBooks.
“I think a better choice is to help students adapt that technology for their growth and not to expect that they are necessarily going to cheat with it because it is the way of the world now,” said RV Librarian Mrs. Dee Venuto. “We’re not going to be able to get away from it, so we should use it as an opportunity to educate you how to use it ethically and for good purposes.”
AI technology can be used to assist students and teachers with different parts of the school day, such as checking for plagiarism, providing tutoring support, and assisting with management and scheduling.
“I also read that the originators of AI see it as a way to save the world from its problems. I think we’ll see how it unfolds,” said Venuto. “I don’t know that it’s necessarily going to be doom and gloom or a solution.”
As AI continues to develop and becomes more prevalent in school systems, we could see changes in the way high schools implement the tool into everyday learning, along with the effects that follow.