As a mathematics major who has taken numerous computer science classes at the College, I can imagine a world where artificial intelligence (AI) dramatically enhances education. But I can also envision a world where it severely damages education. So how can we set ourselves up in a way that maximizes the chance we reach that first world? Well, what would such a world even look like?
First, in addition to their regular teacher, every student would have a personalized AI tutor. It’s invaluable to have a teacher who truly understands you — your strengths and weaknesses, your interests and goals, and what helps you learn efficiently. However, there simply aren’t enough human teachers for everyone to have their own personal tutor.
What if AI could fill this role? Imagine a world where all students have a personalized AI tutor. This AI doesn’t give away answers; it engages with the student, encouraging them to build foundational skills in a way that adapts to their specific needs, interests, and learning styles. And it does so over many years, tracking how the student learns and develops over time. This isn’t even too far-fetched! Khan Academy has already augmented ChatGPT to create Khanmigo, something that feels like a precursor to the AI I’m describing.
Second, AI can also enable a higher order of thinking. Before the age of the internet, there was reason to memorize all of the presidents, as that baseline knowledge wasn’t easily accessible. The rise of the internet made this type of knowledge much easier to find, freeing up time for more complex questions, such as, “How did Thomas Paine’s Common Sense influence the American Revolution?” Perhaps AI can take us one layer of abstraction further. ChatGPT can write better code than I can in a fraction of the time. However, I don’t think this renders me useless; rather, I take on a different role, where I get to spend more time looking at the big picture and generating new ideas.
So, how can we move toward a world like this? The first step is awareness. AI is a buzzword surrounded by lots of misinformation, so it’s important to understand what AI is and what it isn’t. Here’s my crash course: almost all AI you hear about falls under machine learning (ML). Large language models (LLMs) like GPT take a sequence of text and predict the next word. Sometimes they hallucinate and completely make stuff up. LLMs require enormous amounts of data, so they are often trained on a large chunk of the internet. However, the internet is full of hateful content, which these models can internalize. That is obviously a serious problem, and some of the attempts to fix it have been troubling in their own right. For example, OpenAI paid Kenyan workers $2 per hour to sift through egregious material in order to make GPT less toxic. These issues of biased data, hallucinations, exploitation, and more should be kept at the forefront of everyone’s minds as we continue to build AI.
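To make “predict the next word” concrete, here is a toy sketch in Python. It is nothing like the neural networks behind GPT — real LLMs learn statistics over billions of tokens — but it shows the basic counting intuition behind next-word prediction, using a made-up ten-word corpus of my own:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus; real LLMs train on a large chunk of the internet.
corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each word, which words tend to follow it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# "cat" follows "the" twice, "mat" only once, so the model predicts "cat".
print(predict_next("the"))
```

Everything an LLM “knows” comes from patterns like these in its training data — which is exactly why hateful training data is such a problem.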
As we move into a world where AI is more and more prevalent, understanding these basic facts and potential issues becomes even more important. Not only should people become more aware of what AI really is, but they should also start learning how to use it effectively. AI should augment, rather than replace, human intelligence. Kids should spend time playing around with different AIs like ChatGPT or Midjourney at a young age to get used to how they work, as well as the strengths and weaknesses of each model.
This is absolutely not to say that all second graders should do all their work using AI. In fact, it’s very much the opposite. The most valid concern about integrating this level of AI into education is that children will never learn foundational skills in the first place, or will forget the ones they have. Of course, this raises the question of which “foundational skills” are truly necessary.
It’s necessary to learn how to multiply, even though I will almost always have access to a calculator. A recurring phenomenon is that new technology (e.g. AI) requires new skills (e.g. prompting, fact-checking hallucinations), but certain old skills are often the gateway to unlocking these new ones. Maybe this article inspires you to buy 14 AI textbooks that each cost $37. I never mentioned multiplication, but almost immediately your mind tells you that the total cost will be $37 times 14, which you can then plug into your calculator. In other words, the fundamental skill of multiplication enables you to use the calculator effectively; without that baseline knowledge, you likely could not interface with the technology at all. As we continue to develop and scale AI, we must discern which fundamental skills are, well, fundamental, and which we can begin to abstract away.
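Working out my hypothetical purchase makes the point: the calculator (here, a line of Python standing in for one) only performs the arithmetic — recognizing that the situation calls for multiplication is the foundational skill.

```python
# Hypothetical scenario from above: 14 AI textbooks at $37 each.
# The foundational skill is knowing the total is "price times count";
# the tool merely executes that operation.
textbooks = 14
price_per_book = 37
total = textbooks * price_per_book
print(total)  # 518
```

No calculator, and no AI, can supply the step of deciding which operation to ask for.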
This goes beyond just deciding what material to teach in class; we must spend time clarifying educational priorities and roles. What’s the role of a human teacher? Pre-internet, teachers were often the sole source of information; now, there’s infinite knowledge one click away. Thus, teachers’ focus shifted to navigating that infinite knowledge and presenting a carefully curated curriculum in a manner that is cohesive and intuitive. In other words, they mastered the art of creating a fire course packet. As AI continues to rise, this skill will remain important, but there’s an additional responsibility. There’s a trade-off between teaching students to do things on their own (e.g. multiplication tables) and to do things with technology (e.g. with an LLM). This trade-off will only get harder and more important to balance as our tools get stronger, and the responsibility for striking it will fall on the teacher.
AI is powerful. It has the potential to transform education in a positive way, but it’s up to us to try to make sure this happens. AI is disruptive, and the industry is moving quickly, which requires us to ask questions and be aware, informed, and skeptical. Please don’t just accept or reject what I am or anyone else is saying. Instead, take into account my perspective and approach the question of AI with an open mind. I’m not a teacher, and I’m certainly not a writer. I’m just a student who is lucky enough to go to a great school. I’m interested in AI and education, and I’m hopeful that the two will complement each other in the future.
Justin Cheigh ’24 is a mathematics major from Rye, N.Y.